TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

First Amendment Rights of the Litigation Industry

December 21st, 2014

When a Wall Street Journal opinion piece stated that “the plaintiffs bar is all but running the Senate[1],” Frederick Martin (“Fred”) Baron, former president of the litigation industry’s Association of Trial Lawyers of America (ATLA), reportedly quipped that “I really, strongly disagree with that. Particularly the ‘all but’.” Baron, affectionately known as “Robber Baron” for his aggressive advocacy for uninjured asbestos claimants and questionable deposition coaching tactics, was the ultimate Democratic party insider. He was the finance chair of John Edwards’ ill-fated presidential campaign, and the sugar daddy for Rielle Hunter, the mother of Edwards’ out-of-wedlock child. You cannot get more “inside” than that.

Robber Baron died in 2008, but his legacy is a reminder of the hypocrisy of those who decry the Citizens United[2] opinion, which held that corporations and unions have First Amendment rights to speak in ways that might influence the outcomes of elections. While many fuss over “corporate” speech, the litigation industry has operated largely without constraint. Last year, for example, plaintiffs’ counsel, Edward F. Blizzard, and representatives of the litigation industry’s ATLA, now operating under the self-serving name, American Association for Justice (AAJ), met with Food and Drug Administration officials to influence agency policy on generic medication warnings. This week, the Times featured front-page coverage of how the litigation industry has co-opted the policies and agendas of the States’ attorneys general, and directed their targeting of corporations. See Eric Lipton, “Lawyers Create Big Paydays by Coaxing Attorneys General to Sue,” New York Times (Dec. 18, 2014).

The litigation industry makes its presence felt in many ways, sometimes as an omnipresent threat that influences business and professional judgments. President Obama criticized Sony’s decision to pull down The Interview, as an undue concession to terrorists. See “Sony’s Decision to Pull Movie Is a ‘Mistake,’ Obama Says.” Obama went so far as to express his wish that “they’d spoken to me first.” But would Obama, or anyone, have been able to control the litigation industry’s second-guessing of Sony’s or any individual theater owner’s decision to show the movie?

Lipton’s article is a vivid reminder that the plaintiffs’ trial bar remains the largest rent-seeking lobby in the United States.


[1] John Fund, “Have You Registered to Sue?” Wall Street Journal (Nov. 6, 2002).

[2] Citizens United v. Federal Election Comm’n, 558 U.S. 310 (2010).

Showing Causation in the Absence of Controlled Studies

December 17th, 2014

The Federal Judicial Center’s Reference Manual on Scientific Evidence has avoided any clear, consistent guidance on the issue of case reports. The Second Edition waffled:

“Case reports lack controls and thus do not provide as much information as controlled epidemiological studies do. However, case reports are often all that is available on a particular subject because they usually do not require substantial, if any, funding to accomplish, and human exposure may be rare and difficult to study. Causal attribution based on case studies must be regarded with caution. However, such studies may be carefully considered in light of other information available, including toxicological data.”

F.J.C. Reference Manual on Scientific Evidence at 474–75 (2d ed. 2000). Note the complete lack of discussion of baseline risk, prevalence of exposure, and external validity of the “toxicological data.”

The second edition’s more analytically acute and rigorous chapter on statistics generally acknowledged the unreliability of anecdotal evidence of causation. See David Kaye & David Freedman, “Reference Guide on Statistics,” in F.J.C. Reference Manual on Scientific Evidence 91 – 92 (2d ed. 2000).

The Third Edition of the Reference Manual is even less coherent. Professor Berger’s introductory chapter[1] begrudgingly acknowledges, without approval, that:

“[s]ome courts have explicitly stated that certain types of evidence proffered to prove causation have no probative value and therefore cannot be reliable.”

The chapter on statistical evidence, which had been relatively clear in the second edition, now states that controlled studies may be better but case reports can be helpful:

“When causation is the issue, anecdotal evidence can be brought to bear. So can observational studies or controlled experiments. Anecdotal reports may be of value, but they are ordinarily more helpful in generating lines of inquiry than in proving causation.”

Reference Manual at 217 (3d ed. 2011). The “ordinarily” is given no context or contour for readers. These authors fail to provide any guidance on what will come from anecdotal evidence, or on when and why anecdotal reports may do more than merely generate “lines of inquiry.”

In Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011), the Supreme Court went out of its way, way out of its way, to suggest that statistical significance was not always necessary to support conclusions of causation in medicine. Id. at 1319. The Court cited three Circuit court decisions to support its suggestion, but two of the three involved specific causation inferences from so-called differential etiologies. General causation was assumed in those two cases, and not at issue[2]. The third case, the notorious Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986), was also cited in support of the suggestion that statistical significance was not necessary, but in Wells, the plaintiffs’ expert witnesses actually relied upon studies that claimed at least nominal statistical significance. Wells was and remains representative of what results when trial judges ignore the constraints of study validity. The Supreme Court, in any event, abjured any intent to specify “whether the expert testimony was properly admitted in those cases [Wells and others],” and the Court made no “attempt to define here what constitutes reliable evidence of causation.” 131 S. Ct. at 1319.

The causal claim in Siracusano involved anosmia, loss of the sense of smell, from the use of Zicam, zinc gluconate. The case arose from a motion to dismiss the complaint; no evidence was ever presented or admitted. No baseline risk of anosmia was pleaded; nor did plaintiffs allege that any controlled study demonstrated an increased risk of anosmia from nasal instillation of zinc gluconate. There were, however, clinical trials conducted in the 1930s, with zinc sulfate for poliomyelitis prophylaxis, which showed a substantial incidence of anosmia in the treated children[3]. Matrixx tried to argue that this evidence was unreliable, in part because it involved a different compound, but this argument (1) in turn demonstrated a factual issue that required discovery and perhaps a trial, and (2) traded on a clear error in asserting that zinc sulfate and zinc gluconate were relevantly different, when in fact both are ionic compounds that deliver the same active constituent, the zinc ion.

The position stridently staked out in Matrixx Initiatives is not uncommon among defense counsel in tort cases. Certainly, similar, unqualified statements, rejecting the use of case reports for supporting causal conclusions, can be found in the medical literature[4].

When the disease outcome has an expected value, a baseline rate, in the exposed population, then case reports simply confirm what we already know: cases of the disease happen in people regardless of their exposure status. For this reason, medical societies, such as the Teratology Society, have issued guidances that generally downplay or dismiss the role that case reports may have in the assessment and determination of causality for birth defects:

“5. A single case report by itself is not evidence of a causal relationship between an exposure and an outcome.  Combinations of both exposures and adverse developmental outcomes frequently occur by chance. Common exposures and developmental abnormalities often occur together when there is no causal link at all. Multiple case reports may be appropriate as evidence of causation if the exposures and outcomes are both well-defined and low in incidence in the general population. The use of multiple case reports as evidence of causation is analogous to the use of historical population controls: the co-occurrence of thalidomide ingestion in pregnancy and phocomelia in the offspring was evidence of causation because both thalidomide use and phocomelia were highly unusual in the population prior to the period of interest. Given how common exposures may be, and how common adverse pregnancy outcome is, reliance on multiple case reports as the sole evidence for causation is unsatisfactory.”

The Public Affairs Committee of the Teratology Society, “Teratology Society Public Affairs Committee Position Paper Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421, 423 (2005).
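The Teratology Society’s point about chance co-occurrence is simple arithmetic. As a rough sketch, with hypothetical prevalence figures rather than data from any actual study, the expected number of coincidental exposure-outcome pairings under complete independence is just the product of the rates:

```python
# Expected chance co-occurrences of an exposure and a birth defect under
# complete independence (no causal link at all).
# All figures below are hypothetical illustrations, not real data.
births = 100_000        # annual births in a hypothetical population
p_exposure = 0.10       # fraction of pregnancies involving the exposure
p_defect = 0.03         # background incidence of the birth defect

# Under independence, the joint probability is simply the product.
expected_coincidences = births * p_exposure * p_defect
print(round(expected_coincidences))  # about 300 chance pairings per year
```

With a common exposure and a common outcome, hundreds of purely coincidental case reports are expected every year, which is why case reports carry weight only when both exposure and outcome are rare, as with thalidomide and phocomelia.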

When the base rate for the outcome is near zero, and other circumstantial evidence is present, some commentators insist that causality may be inferred from well-documented case reports:

“However, we propose that some adverse drug reactions are so convincing, even without traditional chronological causal criteria such as challenge tests, that a well documented anecdotal report can provide convincing evidence of a causal association and further verification is not needed.”

Jeffrey K. Aronson & Manfred Hauben, “Drug safety: Anecdotes that provide definitive evidence,” 333 Brit. Med. J. 1267, 1267 (2006) (Dr. Hauben was medical director of risk management strategy for Pfizer, in New York, at the time of publication). But which ones are convincing, and why?

        *        *        *        *        *        *        *        *        *

Dr. David Schwartz, in a recent blog post, picked up on some of my discussion of the gadolinium case reports (see here and there), and posited the ultimate question: when are case reports sufficient to show causation? David Schwartz, “8 Examples of Causal Inference Without Data from Controlled Studies” (Dec. 14, 2014).

Dr. Schwartz discusses several causal claims, all of which gave rise to litigation at some point, in which case reports or case series played an important, if not dispositive, role:

  1.      Gadolinium-based contrast agents and NSF
  2.      Amphibole asbestos and malignant mesothelioma
  3.      Ionizing radiation and multiple cancers
  4.      Thalidomide and teratogenicity
  5.      Rezulin and acute liver failure
  6.      DES and clear cell vaginal adenocarcinoma
  7.      Vinyl chloride and angiosarcoma
  8.      Manganese exposure and manganism

Dr. Schwartz’s discussion is well worth reading in its entirety, but I wanted to emphasize some of Dr. Schwartz’s caveats. Most of the exposures are rare, as are the outcomes. In some cases, the outcomes occur almost exclusively with the identified exposures. All eight examples pose some danger of misinterpretation. Gadolinium-based contrast agents appear to create a risk of NSF only in the presence of chronic renal failure. Amphibole asbestos, and most importantly, crocidolite causes malignant mesothelioma after a very lengthy latency period. Ionizing radiation causes some cancers that are all-too common, but the presence of multiple cancers in the same person, after a suitable latency period, is distinctly uncommon, as is the level of radiation needed to overwhelm bodily defenses and induce cancers. Thalidomide was associated by case reports fairly quickly with phocomelia, which has an extremely low baseline risk. Other birth defects were not convincingly demonstrated by the case series. Rezulin, an oral antidiabetic medication, was undoubtedly causally responsible for rare cases of acute liver failure. Chronic liver disease, however, which is common among type 2 diabetic patients, required epidemiologic evidence, which never materialized[5].

Manganese, by definition, is the cause of manganism, but extremely high levels of manganese exposure, and specific speciation of the manganese, are essential to the causal connection. Manganism raises another issue often seen in so-called signature diseases: diagnostic accuracy. Unless the diagnostic criteria have perfect (100%) specificity, with no false-positive diagnoses, we should once again expect false-positive cases to appear when the criteria are applied to large numbers of people. In the welding fume litigation, where plaintiffs’ counsel and physicians engaged in widespread, if not wanton, medico-legal screenings, it was not surprising that they might find occasional cases that appeared to satisfy their criteria. Of course, the more the criteria are diluted to accommodate litigation goals, the more likely there will be false-positive cases.[6]
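The false-positive arithmetic is worth making explicit. A minimal sketch, using hypothetical screening figures rather than data from any actual litigation, shows how even modestly imperfect specificity plays out across a large screened population:

```python
# Expected false positives from mass medico-legal screenings when the
# diagnostic criteria lack perfect specificity.
# All numbers below are hypothetical illustrations.
screened = 10_000        # workers screened in a hypothetical program
prevalence = 0.001       # the disease is assumed rare: 10 true cases
sensitivity = 0.90       # criteria catch 90% of true cases
specificity = 0.99       # criteria wrongly flag 1% of unaffected workers

true_cases = screened * prevalence
true_positives = true_cases * sensitivity
false_positives = (screened - true_cases) * (1 - specificity)

# Positive predictive value: the fraction of flagged "cases" that are real.
ppv = true_positives / (true_positives + false_positives)
print(round(false_positives), round(ppv, 2))
```

Even at 99% specificity, a rare disease screened in ten thousand workers yields roughly one hundred false positives for every nine or ten true cases, so most screening-generated “cases” are spurious; diluting the criteria only worsens the ratio.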

Dr. Schwartz identifies some common themes and important factors in identifying the bases for inferring causality from uncontrolled evidence:

“(a) low or no background rate of the disease condition;

(b) low background rate of the exposure;

(c) a clear understanding of the mechanism of action.”

These factors, and perhaps others, should not be confused with strict criteria. The exemplar cases suggest a family resemblance of overlapping factors that help support the inference, even against the most robust skepticism.

In litigation, defense counsel typically argue that analytical epidemiology is always necessary, and plaintiffs’ counsel claim epidemiology is never needed. The truth is more nuanced and conditional, but the great majority of litigated cases do require epidemiology for health effects because the claimed harms are outcomes that have an expected incidence or prevalence in the exposed population irrespective of exposure.


[1] Reference Manual on Scientific Evidence at 23 (3d ed. 2011) (citing “Cloud v. Pfizer Inc., 198 F. Supp. 2d 1118, 1133 (D. Ariz. 2001) (stating that case reports were merely compilations of occurrences and have been rejected as reliable scientific evidence supporting an expert opinion that Daubert requires); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“scientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies”); Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441, 1454 (D.V.I. 1994), aff’d, 46 F.3d 1120 (3d Cir. 1994) (stating there is a need for consistent epidemiological studies showing statistically significant increased risks).”)

[2] Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999).

[3] There may have been a better argument for Matrixx in distinguishing the method and place of delivery of the zinc sulfate in the polio trials of the 1930s, but when Matrixx’s counsel was challenged at oral argument, he asserted simply, and wrongly, that the two compounds were different.

[4] Johnston & Hauser, “The value of a case report,” 62 Ann. Neurology A11 (2007) (“No matter how compelling a vignette may seem, one must always be concerned about the reliability of inference from an “n of one.” No statistics are possible in case reports. Inference is entirely dependent, then, on subjective judgment. For a case meant to suggest that agent A leads to event B, the association of these two occurrences in the case must be compared to the likelihood that the two conditions could co-occur by chance alone …. Such a subjective judgment is further complicated by the fact that case reports are selected from a vast universe of cases.”); David A. Grimes & Kenneth F. Schulz, “Descriptive studies: what they can and cannot do,” 359 Lancet 145, 145, 148 (2002) (“A frequent error in reports of descriptive studies is overstepping the data: studies without a comparison group allow no inferences to be drawn about associations, causal or otherwise.”) (“Common pitfalls of descriptive reports include an absence of a clear, specific, and reproducible case definition, and interpretations that overstep the data. Studies without a comparison group do not allow conclusions about cause and disease.”); Troyen A. Brennan, “Untangling Causation Issues in Law and Medicine: Hazardous Substance Litigation,” 107 Ann. Intern. Med. 741, 746 (1987) (recommending that testifying physicians “[a]void anecdotal evidence; clearly state the opposing side is relying on anecdotal evidence and why that is not good science.”).

[5] See In re Rezulin, 2004 WL 2884327, at *3 (S.D.N.Y. 2004).

[6] This gaming of diagnostic criteria has been a major invitation to diagnostic invalidity in litigation over asbestosis and silicosis in the United States.

Power at the FDA

December 11th, 2014

For six years, the Food and Drug Administration (FDA) has been pondering a proposed rule to abandon the current system of pregnancy warning categories for prescription drugs. Last week, the agency finally published its final rule for pregnancy and lactation labeling[1]. The rule, effective in June 2015, will require removal of the current category labeling, A, B, C, D, or X, in favor of risk statements and narrative summaries of the human, animal, and pharmacologic data for adverse maternal and embryo/fetal outcomes.

The labeling system, which will be phased out, discouraged or prohibited inclusion of actual epidemiologic data for teratogenicity. With sponsors now required to present actual data, the agency voiced a concern that prescribing physicians, who are the intended readers of the labeling, might interpret a statistically non-significant result as showing a lack of association:

“We note that it is difficult to be certain that a lack of findings equates to a lack of risk because the failure of a study to detect an association between a drug exposure and an adverse outcome may be related to many factors, including a true lack of an association between exposure and outcome, a study of the wrong population, failure to collect or analyze the right data endpoints, and/or inadequate power. The intent of this final rule is to require accurate descriptions of available data and facilitate the determination of whether the data demonstrate potential associations between drug exposure and an increased risk for developmental toxicities.”[2]

When human epidemiologic data are available, the agency had proposed the following for inclusion in drug labeling[3]:

“Narrative description of risk(s) based on human data. FDA proposed that when there are human data, the risk conclusion must be followed by a brief description of the risks of developmental abnormalities as well as other relevant risks associated with the drug. To the extent possible, this description must include the specific developmental abnormality (e.g., neural tube defects); the incidence, seriousness, reversibility, and correctability of the abnormality; and the effect on the risk of dose, duration of exposure, and gestational timing of exposure. When appropriate, the description must include the risk above the background risk attributed to drug exposure and confidence limits and power calculations to establish the statistical power of the study to identify or rule out a specified level of risk (proposed [21 C.F.R.] § 201.57(c)(9)(i)(C)(4)).”

The agency rebuffed comments that physicians would be unable to interpret confidence intervals, and confused by actual data and the need to interpret study results. The agency’s responses to comments to the proposed rule note that the final rule requires a description of the data, and its limitations, in approved labeling[4]:

‘‘Confidence intervals and power calculations are important for the review and interpretation of the data. As noted in the draft guidance on pregnancy and lactation labeling, which is being published concurrently with the final rule, the confidence intervals and power calculation, when available, should be part of that description of limitations.’’

The agency’s insistence upon power calculations is surprising. The proposed rule talked about requiring ‘‘confidence limits and power calculations to establish the statistical power of the study to identify or rule out a specified level of risk (proposed § 201.57(c)(9)(i)(C)(4)).” The agency’s failure to retain the qualification of power, at some specified level of risk, makes the requirement meaningless. A study with ample power to find a doubling of risk may have low power to find a 20% increase in risk. Power is dependent upon the specified alternative to the null hypothesis, as well as the level of alpha, or statistical significance.
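The dependence of power on the specified alternative can be illustrated with a textbook calculation. The sketch below uses the standard normal approximation for comparing two proportions, with hypothetical numbers (a 1% baseline risk and 5,000 subjects per arm); it is an illustration of the statistical point, not an analysis of any actual study:

```python
# Power of a two-group study to detect a given relative risk (RR), using
# the standard normal approximation for comparing two proportions.
# Illustrative, hypothetical numbers only.
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p1: float, rr: float, n: int, z_alpha: float = 1.96) -> float:
    """Approximate power to detect risk p2 = rr * p1 versus baseline p1,
    with n subjects per group and two-sided alpha = 0.05."""
    p2 = rr * p1
    p_bar = (p1 + p2) / 2.0
    diff = abs(p2 - p1) * math.sqrt(n)
    crit = z_alpha * math.sqrt(2.0 * p_bar * (1.0 - p_bar))
    se_alt = math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))
    return normal_cdf((diff - crit) / se_alt)

baseline, n = 0.01, 5000
print(round(power_two_proportions(baseline, 2.0, n), 2))  # ~0.98: ample power for a doubling
print(round(power_two_proportions(baseline, 1.2, n), 2))  # ~0.16: little power for a 20% increase
```

The same study is near-certain to detect a doubling of risk yet will miss a 20% increase more than four times out of five, which is why “the power of the study” is meaningless without stating the alternative hypothesis against which power is computed.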

The final rule omits all references to power and power calculations, with or without the qualifier of a specified level of risk, from the revised sections of part 201; indeed, the statistical concepts of power and confidence interval do not appear at all, other than in a vague requirement that the limitations of data from epidemiologic studies be described[5]:

‘‘(3) Description of human data. For human data, the labeling must describe adverse developmental outcomes, adverse reactions, and other adverse effects. To the extent applicable, the labeling must describe the types of studies or reports, number of subjects and the duration of each study, exposure information, and limitations of the data. Both positive and negative study findings must be included.”

Presumably, the proposed rule’s requirement of providing power calculations and confidence intervals is part of the future requirement to describe data limitations. The agency, however, omitted this level of detail from the revised regulation.

The same day that the FDA issued the final rule, it also issued a draft guidance on pregnancy and lactation labeling, for public comment[6].

The guidance recommends what the regulation, in its final form, does not require specifically. First, the guidance recommends omission of individual case reports from the human data section, because:

‘‘Individual case reports are rarely sufficient to characterize risk and therefore ordinarily should not be included in this section.’’[7]

And for actual controlled epidemiologic studies, the guidance suggests that:

‘‘If available, data from the comparator or control group, and data confidence intervals and power calculations should also be included.’’[8]

Statistically, this guidance is no guidance at all. Power calculations can never be presented without a specified alternative hypothesis to the null hypothesis of no increased risk of birth defects. Furthermore, virtually no study provides power calculations of data already acquired and analyzed for point estimates and confidence intervals. The guidance is unclear as to whether sponsors should attempt to calculate power from the data in a study, and try to anticipate what level of specified risk is of interest to the agency and to prescribing physicians. More disturbing yet is the agency’s failure to explain why it is recommending both confidence intervals and power calculations, in the face of many leading groups’ recommendations to abandon power calculations when confidence intervals are available for the analyzed data.[9]
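The STROBE and CONSORT position is easy to appreciate with a concrete computation. The sketch below applies the standard log-based (Katz) confidence interval for a relative risk to hypothetical 2×2 counts; the resulting interval directly displays the precision that a retrospective power calculation would only restate:

```python
# 95% confidence interval for a relative risk (Katz log method), computed
# from hypothetical 2x2 counts. The interval itself conveys the statistical
# precision actually obtained, making post hoc power calculations redundant.
import math

def rr_confint(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Point estimate and 95% CI for the relative risk."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    # Standard error of log(RR) under the Katz approximation.
    se_log = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical cohort: 30 of 1,000 exposed vs. 20 of 1,000 unexposed cases.
rr, lo, hi = rr_confint(30, 1000, 20, 1000)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # RR 1.5, with a CI spanning 1.0
```

A reader sees at a glance that the interval runs from well below 1.0 to above 2.5: the study was too imprecise to rule a meaningful risk in or out, without any need for a power calculation against some after-the-fact alternative hypothesis.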


[1] Dep’t of Health & Human Services, Food & Drug Admin., 21 CFR Part 201, Content and Format of Labeling for Human Prescription Drug and Biological Products; Requirements for Pregnancy and Lactation Labeling; Pregnancy, Lactation, and Reproductive Potential: Labeling for Human Prescription Drug and Biological Products—Content and Format; Draft Guidance for Industry; Availability; Final Rule and Notice, 79 Fed. Reg. 72064 (Dec. 4, 2014) [Docket No. FDA–2006–N–0515 (formerly Docket No. 2006N–0467)].

[2] Id. at 72082a.

[3] Id. at 72082c-083a.

[4] Id. at 72083c.

[5] Id. at 72102a (§ 201.57(c)(9)(i)(D)(3)).

[6] U.S. Department of Health and Human Services, Food and Drug Administration, Pregnancy, Lactation, and Reproductive Potential: Labeling for Human Prescription Drug and Biological Products — Content and Format DRAFT GUIDANCE (Dec. 2014).

[7] Id. at 12.

[8] Id.

[9] See, e.g., Vandenbroucke, et al., “Strengthening the reporting of observational studies in epidemiology (STROBE):  Explanation and elaboration,” 18 Epidemiology 805, 815 (2007) (Section 10, sample size) (“Do not bother readers with post hoc justifications for study size or retrospective power calculations. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained. It should be realized that confidence intervals reflect statistical uncertainty only, and not all uncertainty that may be present in a study (see item 20).”); Douglas Altman, et al., “The Revised CONSORT Statement for Reporting Randomized Trials:  Explanation and Elaboration,” 134 Ann. Intern. Med. 663, 670 (2001) (“There is little merit in calculating the statistical power once the results of the trial are known, the power is then appropriately indicated by confidence intervals.”).

More Case Report Mischief in the Gadolinium Litigation

November 28th, 2014

The Decker case yielded curious decisions from both the MDL trial court and the Sixth Circuit. Decker v. GE Healthcare Inc., ___ F.3d ___, 2014 FED App. 0258P, 2014 U.S. App. LEXIS 20049 (6th Cir. Oct. 20, 2014). First, the Circuit went out of its way to emphasize that the trial court had discretion, not only in evaluating the evidence on a Rule 702 challenge, but also in devising the criteria of validity[1]. Second, the courts ignored the role and the weight being assigned to Federal Rule of Evidence 703, in winnowing the materials upon which the defense expert witnesses could rely. Third, the Circuit approved what appeared to be extremely asymmetric gatekeeping of plaintiffs’ and defendant’s expert witnesses. The asymmetrical standards probably were the basis for emphasizing the breadth of the trial court’s discretion to devise the criteria for assessing scientific validity[2].

In barring GEHC’s expert witnesses from testifying about gadolinium-naïve nephrogenic systemic fibrosis (NSF) cases, Judge Dan Polster, the MDL judge, appeared to invoke a double standard. Plaintiffs could adduce any case report or adverse event report (AER) on the theory that the reports were relevant to “notice” of a “safety signal” between gadolinium-based contrast agents in MRI and NSF. Defendants’ expert witnesses, however, were held to the most exacting standards of clinical identity with the plaintiff’s particular presentation of NSF, biopsy-proven presence of Gd in affected tissue, and documentation of lack of GBCA exposure, before case reports would be permitted as reliance materials to support the existence of gadolinium-naïve NSF.

A fourth issue with the Decker opinion is the latitude it permitted the district court in allowing plaintiffs’ pharmacovigilance expert witness, Cheryl Blume, Ph.D., over objections, to testify about the “signal” created by the NSF AERs available to GEHC. Decker at *11. At the same trial, the MDL judge prohibited GEHC’s expert witness, Dr. Anthony Gaspari, from testifying that the AERs described by Blume did not support a clinical diagnosis of NSF.

On a motion for reconsideration, Judge Polster reaffirmed his ruling on grounds that

(1) the AERs were too incomplete to rule in or rule out a diagnosis of NSF, although they were sufficient to create a “signal”;

(2) whether the AERs were actual cases of NSF was not relevant to their being safety signals;

(3) Dr. Gaspari was not an expert in pharmacovigilance, which studied “signals” as opposed to causation; and

(4) Dr. Gaspari’s conclusion that the AERs were not NSF was made without reviewing all the information available to GEHC at the time of the AERs.

Decker at *12.

The fallacy of this stingy approach to Dr. Gaspari’s testimony lies in the courts’ stubborn refusal to recognize that if an AER was not, as a matter of medical science, a case of NSF, then it could not be a “signal” of a possible causal relationship between GBCA and NSF. Pharmacovigilance does not end with ascertaining signals; yet the courts privileged Blume’s opinions on signals even though she could not proceed to the next step and evaluate diagnostic accuracy and causality. This twisted logic makes a mockery of pharmacovigilance. It also led to the exclusion of Dr. Gaspari’s testimony on a key aspect of plaintiffs’ liability evidence.

The erroneous approach pioneered by Judge Polster was compounded by the district court’s refusal to give a jury instruction that AERs were only relevant to notice, and not to causation. Judge Polster offered his reasoning that “the instruction singles out one type of evidence, and adds, rather than minimizes, confusion.” Judge Polster cited the lack of any expert witness testimony that suggested that AERs showed causation and “besides, it doesn’t matter because those patients are not, are not the plaintiffs.” Decker at *17.

The lack of dispute about the meaning of AERs would have seemed all the more reason to control jury speculation about their import, and to give a binding instruction on AERs and their limited significance. As for the AER patients’ not being the plaintiffs, well, the case report patients were not the plaintiffs, either. This last reason is not even wrong[3]. The Circuit, in affirming, turned a blind eye to the district court’s exercise of discretion in a way that systematically increased the importance of Blume’s testimony on signals, while systematically hobbling the defendant’s expert witnesses.


[1]THE STANDARD OF APPELLATE REVIEW FOR RULE 702 DECISIONS” (Nov. 12, 2014).

[2]Gadolinium, Nephrogenic Systemic Fibrosis, and Case Reports” (Nov. 24, 2014).

[3] “Das ist nicht nur nicht richtig, es ist nicht einmal falsch!” The quote is attributed to Wolfgang Pauli in R. E. Peierls, “Wolfgang Ernst Pauli, 1900-1958,” 5 Biographical Memoirs Fellows Royal Soc’y 175, 186 (1960).

 

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.