TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Lipitor Diabetes MDL’s Inexact Analysis of Fisher’s Exact Test

March 23rd, 2019

Muriel Bristol was a biologist who studied algae at the Rothamsted Experimental Station in England, after World War I. In addition to her knowledge of plant biology, Bristol claimed the ability to tell whether milk had been poured into the cup before the tea, or the tea poured first and the milk added afterward. Bristol, as a scientist and a proper English woman, preferred the latter.

Ronald Fisher, who also worked at Rothamsted, expressed his skepticism over Dr. Bristol’s claim. Fisher set about to design a randomized experiment that would efficiently and effectively test her claim. Bristol was presented with eight cups of tea, four of which were prepared with milk added to tea, and four prepared with tea added to milk. Bristol, of course, was blinded to which was which, but was required to label each according to its manner of preparation. Fisher saw his randomized experiment as a 2 x 2 contingency table, from which he could calculate the probability of the observed outcome (and of any more extreme outcomes), using the assumption of fixed marginal rates and the hypergeometric probability distribution. Fisher’s Exact Test was born at tea time.[1]

Fisher described the origins of his Exact Test in one of his early texts, but he neglected to report whether his experiment vindicated Bristol’s claim. According to David Salsburg, H. Fairfield Smith, one of Fisher’s colleagues, acknowledged that Bristol nailed Fisher’s Exact Test, correctly identifying all eight cups. The test has gone on to become an important tool in the statistician’s armamentarium.
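
Fisher’s arithmetic for the tea-tasting design is simple enough to reproduce. The following minimal sketch (standard-library Python only) computes the exact probability, under the null hypothesis of pure guessing, that Bristol would label all eight cups correctly; the numbers follow from the design described above, not from any record of the original experiment.

```python
# Under the null hypothesis of guessing, with 8 cups, 4 prepared
# milk-first, and exactly 4 cups to be labeled milk-first, the number
# of correct milk-first identifications is hypergeometric.
from math import comb

def guessing_pmf(k, cups=8, milk_first=4, labeled=4):
    """P(exactly k correct milk-first labels) under random guessing."""
    return comb(milk_first, k) * comb(cups - milk_first, labeled - k) / comb(cups, labeled)

# One-sided exact p-value for a perfect performance: no outcome is
# more extreme than labeling all four milk-first cups correctly.
p_perfect = guessing_pmf(4)
print(f"P(all eight cups correct | guessing) = {p_perfect:.4f}")  # 1/70 = 0.0143
```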

Fisher’s Exact, like any statistical test, has model assumptions and preconditions. For one thing, the test is designed for categorical data, with binary outcomes. The test allows us to evaluate whether an observed difference between two proportions can plausibly be explained by chance alone, by calculating the probability of the observed outcome, as well as of more extreme outcomes.

The calculation of an exact attained significance probability, using Fisher’s approach, provides a one-sided p-value, with no unique solution for calculating a two-sided attained significance probability. Fisher’s Exact Test has thus played an important role in showing the judiciary that small sample size need not be an insuperable barrier to meaningful statistical analysis. In discrimination cases, the one-sided p-value may well be more appropriate for the issue at hand, and so the test’s one-sidedness poses no particular problem there.[2]

The difficulty of using Fisher’s Exact for small sample sizes is that the hypergeometric distribution, upon which the test is based, is highly asymmetric. The observed one-sided p-value does not measure the probability of a result equally extreme in the opposite direction. There are at least three ways to calculate a two-sided p-value (illustrated in the sketch after this list):

  • Double the one-sided p-value.
  • Add the point probabilities from the opposite tail that are more extreme than the observed point probability.
  • Use the mid-p value; that is, add all values more extreme (smaller) than the observed point probability from both sides of the distribution, PLUS ½ of the observed point probability.
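
For concreteness, here is a minimal sketch, using a hypothetical 2 x 2 table with no relation to the Lipitor data, of how the three conventions can diverge when the underlying hypergeometric distribution is asymmetric:

```python
# Hypothetical table:      events   no events
#   exposed                   7         3      (10)
#   unexposed                 3         9      (12)
#   column totals            10        12      (22)
from scipy.stats import hypergeom

M, K, n, x = 22, 10, 10, 7        # total N, total events, exposed total, observed
rv = hypergeom(M, K, n)
support = range(max(0, n + K - M), min(K, n) + 1)
p_obs = rv.pmf(x)

one_sided = sum(rv.pmf(k) for k in support if k >= x)

# 1. Double the one-sided p-value (can exceed 1.0, hence the cap).
doubled = min(1.0, 2 * one_sided)

# 2. Sum every point probability no greater than the observed one,
#    from both tails (the convention scipy's fisher_exact uses).
eps = 1e-9
small_pmf = sum(rv.pmf(k) for k in support if rv.pmf(k) <= p_obs + eps)

# 3. Mid-p: point probabilities strictly smaller than the observed one,
#    from both tails, plus half of the observed point probability.
mid_p = sum(rv.pmf(k) for k in support if rv.pmf(k) < p_obs - eps) + 0.5 * p_obs

print(f"one-sided={one_sided:.4f}  doubled={doubled:.4f}  "
      f"small-pmf={small_pmf:.4f}  mid-p={mid_p:.4f}")
# one-sided=0.0456  doubled=0.0912  small-pmf=0.0836  mid-p=0.0631
```

On this hypothetical table, the one-sided exact value is about 0.046, doubling gives about 0.091, summing the small point probabilities gives about 0.084, and the mid-p is about 0.063; which convention a software package applies can thus determine whether a result is labeled “significant” at 5%.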

Some software programs will proceed in one of these ways by default, but their doing so does not guarantee the most accurate measure of two-tailed significance probability.

In the Lipitor MDL for diabetes litigation, Judge Gergel generally used sharp analyses to cut through the rancid fat of litigation claims, to get to the heart of the matter. By and large, he appears to have done a splendid job. In the course of gatekeeping under Federal Rule of Evidence 702, however, Judge Gergel may have misunderstood the nature of Fisher’s Exact Test.

Nicholas Jewell is a well-credentialed statistician at the University of California. In the courtroom, Jewell is a well-known expert witness for the litigation industry. He is no novice at generating unreliable opinion testimony. See In re Zoloft Prods. Liab. Litig., No. 12–md–2342, 2015 WL 7776911 (E.D. Pa. Dec. 2, 2015) (excluding Jewell’s opinions as scientifically unwarranted and methodologically flawed); In re Zoloft Prod. Liab. Litig., MDL No. 2342, 12-MD-2342, 2016 WL 1320799 (E.D. Pa. April 5, 2016) (granting summary judgment after excluding Dr. Jewell). See “The Education of Judge Rufe – The Zoloft MDL” (April 9, 2016).

In the Lipitor cases, some of Jewell’s opinions seemed outlandish indeed, and Judge Gergel generally excluded them. See In re Lipitor Marketing, Sales Practices and Prods. Liab. Litig., 145 F.Supp. 3d 573 (D.S.C. 2015), reconsideration den’d, 2016 WL 827067 (D.S.C. Feb. 29, 2016). As Judge Gergel explained, Jewell calculated a relative risk for abnormal blood glucose in a Lipitor group to be 3.0 (95% C.I., 0.9 to 9.6), using STATA software. Also using STATA, Jewell obtained an attained significance probability of 0.0654, based upon Fisher’s Exact Test. Lipitor Jewell at *7.

Judge Gergel did not report whether Jewell’s reported p-value of 0.0654 was one- or two-sided, but he did state that the attained probability “indicates a lack of statistical significance.” Id. & n. 15. The rest of His Honor’s discussion of the challenged opinion, however, makes clear that the 0.0654 must have been a two-sided value. If it had been a one-sided p-value, then there would have been no way of invoking the mid-p to generate a two-sided p-value below 5%. The mid-p will always be larger than the one-tailed exact p-value generated by Fisher’s Exact Test.

The court noted that Dr. Jewell had testified that he believed that STATA generated this confidence interval by “flip[ping]” the Taylor series approximation. The STATA website notes that it calculates confidence intervals for odds ratios (which are different from the relative risk that Jewell testified he computed) by inverting the Fisher exact test.[3] Id. at *7 & n. 17. Of course, Jewell’s description, unlike STATA’s, suggests that the confidence interval is not based upon exact methods.

STATA does not provide a mid-p value calculation, and so Jewell used an on-line calculator to obtain a mid-p value of 0.04, which he declared statistically significant. The court took Jewell to task for using the mid-p value as though it were a different analysis or test. Id. at *8. Because the mid-p value will always be larger than the one-sided exact p-value from Fisher’s Exact Test, the court’s explanation does not really make sense:

“Instead, Dr. Jewell turned to the mid-p test, which would ‘[a]lmost surely’ produce a lower p-value than the Fisher exact test.”

Id. at *8. The mid-p, however, is not a test different from Fisher’s Exact; rather, it is simply a way of dealing with the asymmetrical distribution that underlies Fisher’s Exact, to arrive at a two-tailed p-value that more accurately captures the rate of Type I error.

The MDL court acknowledged that the mid-p approach was not inherently unreliable, but questioned Jewell’s inconsistent, selective use of the approach for only one test.[4] Jewell certainly did not help the plaintiffs’ cause, or his own standing, by discarding the analyses that were not incorporated into his report, thus leaving the MDL court to guess at how much selection went on in his process of generating his opinions. Id. at *9 & n. 19.

None of Jewell’s other calculated p-values involved the mid-p approach, but the court’s criticism leaves open the question whether the other p-values came from a Fisher’s Exact Test with small sample size, or some other highly asymmetrical distribution. Id. at *8. Although Jewell had shown himself willing to engage in other dubious, result-oriented analyses, his use of the mid-p for this one comparison may have been within acceptable bounds after all.

The court also noted that Jewell had obtained the “exact p-value,” and that this p-value was not significant. Id. The court’s notation here, however, omits the important detail whether that exact, unreported p-value was merely the double of the one-sided p-value given by Fisher’s Exact Test. As the STATA website, cited by the MDL court, explains:

“The test naturally gives a one-sided p-value, and there are at least four different ways to convert it to a two-sided p-value (Agresti 2002, 93). One way, not implemented in Stata, is to double the one-sided p-value; doubling is simple but can result in p-values larger than one.”

Wesley Eddings, “Fisher’s exact test two-sided idiosyncrasy” (Jan. 2009) (citing Alan Agresti, Categorical Data Analysis 93 (2d ed. 2002)).

On plaintiffs’ motion for reconsideration, the MDL court reaffirmed its findings with respect to Jewell’s use of the mid-p. Lipitor Jewell Reconsidered at *3. In doing so, the court insisted that the one instance in which Jewell used the mid-p stood in stark contrast to all the other instances in which he had used Fisher’s Exact Test. The court then cited to the record to identify 21 other instances in which Jewell used a p-value rather than a mid-p value. The court, however, did not provide the crucial detail whether these 21 other instances actually involved small-sample applications of Fisher’s Exact Test. As result-oriented as Jewell can be, it seems safe to assume that not all his statistical analyses involved Fisher’s Exact Test, with its attendant ambiguity about how to calculate a two-tailed p-value.


[1] Sir Ronald A. Fisher, The Design of Experiments at chapter 2 (1935); see also Stephen Senn, “Tea for three: Of infusions and inferences and milk in first,” Significance 30 (Dec. 2012); David Salsburg, The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century  (2002).

[2] See, e.g., Dendy v. Washington Hosp. Ctr., 431 F. Supp. 873 (D.D.C. 1977) (denying preliminary injunction), rev’d, 581 F.2d 99 (D.C. Cir. 1978) (reversing denial of relief, and remanding for reconsideration). See also National Academies of Science, Reference Manual on Scientific Evidence 255 n.108 (3d ed. 2011) (“Well-known small sample techniques [for testing significance and calculating p-values] include the sign test and Fisher’s exact test.”).

[3] See Wesley Eddings, “Fisher’s exact test two-sided idiosyncrasy” (Jan. 2009), available at <http://www.stata.com/support/faqs/statistics/fishers-exact-test/>, last visited April 19, 2016 (“Stata’s exact confidence interval for the odds ratio inverts Fisher’s exact test.”). This article by Eddings contains a nice discussion of why the Fisher’s Exact Test attained significance probability disagrees with the calculated confidence interval. Eddings points out the asymmetry of the hypergeometric distribution, which complicates arriving at an exact p-value for a two-sided test.

[4] See Barber v. United Airlines, Inc., 17 Fed. Appx. 433, 437 (7th Cir. 2001) (“Because in formulating his opinion Dr. Hynes cherry-picked the facts he considered to render an expert opinion, the district court correctly barred his testimony because such a selective use of facts fails to satisfy the scientific method and Daubert.”).

ASA Statement Goes to Court – Part 2

March 7th, 2019

It has been almost three years since the American Statistical Association (ASA) issued its statement on statistical significance. Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016) [ASA Statement]. Before the ASA’s Statement, courts and lawyers from all sides routinely misunderstood, misstated, and misrepresented the meaning of statistical significance.1 These errors were pandemic despite the efforts of the Federal Judicial Center and the National Academies of Science to educate judges and lawyers, through their Reference Manuals on Scientific Evidence and seminars. The interesting question is whether the ASA’s Statement has improved, or will improve, the unfortunate situation.2

The ASA Statement on Testosterone

“Ye blind guides, who strain out a gnat and swallow a camel!”
Matthew 23:24

To capture the state of the art, or the state of correct and flawed interpretations of the ASA Statement, reviewing a recent but now resolved, large so-called mass tort may be illustrative. Pharmaceutical products liability cases almost always turn on evidence from pharmaco-epidemiologic studies that compare the rate of an outcome of interest among patients taking a particular medication with the rate among similar, untreated patients. These studies compare the observed with the expected rates, and invariably assess the differences as either a “risk ratio” or a “risk difference,” both for the magnitude of the difference and for the “significance probability” of observing a rate at least as large as that seen in the exposed group, given the assumptions that the medication did not change the rate and that the data followed a given probability distribution. In these alleged “health effects” cases, claims and counterclaims of misuse of significance probability have been pervasive. After the ASA Statement was released, some lawyers began to modify their arguments to suggest that their adversaries’ arguments offend the ASA’s pronouncements.

One litigation that showcases the use and misuse of the ASA Statement arose from claims that AbbVie, Inc.’s transdermal testosterone medication (TRT) causes heart attacks, strokes, and venous thromboembolism. The FDA had reviewed the plaintiffs’ claims, made in a Public Citizen complaint, and resoundingly rejected the causal interpretation of two dubious observational studies, and an incomplete meta-analysis that used an off-beat composite end point.3 The Public Citizen petition probably did succeed in pushing the FDA to convene an Advisory Committee meeting, which again resulted in a rejection of the causal claims. The FDA did, however, modify the class labeling for TRT with respect to indication and a possible association with cardiovascular outcomes. And then the litigation came.

Notwithstanding the FDA’s determination that a causal association had not been shown, thousands of plaintiffs sued several companies, with most of the complaints falling on AbbVie, Inc., which had the largest presence in the market. The ASA Statement came up occasionally in pre-trial depositions, but became a major brouhaha, when AbbVie moved to exclude plaintiffs’ causation expert witnesses.4

The Defense’s Anticipatory Parry of the ASA Statement

As AbbVie described the situation:

“Plaintiffs’ experts uniformly seek to abrogate the established methods and standards for determining … causal factors in favor of precisely the kind of subjective judgments that Daubert was designed to avoid. Tests for statistical significance are characterized as ‘misleading’ and rejected [by plaintiffs’ expert witnesses] in favor of non-statistical ‘estimates’, ‘clinical judgment’, and ‘gestalt’ views of the evidence.”5

AbbVie’s brief in support of excluding plaintiffs’ expert witnesses barely mentioned the ASA Statement, but in a footnote, the defense anticipated that the Plaintiffs’ opposition would rest on rejecting the importance of statistical significance testing, and on the claim that this rejection was somehow supported by the ASA Statement:

“The statistical community is currently debating whether scientists who lack expertise in statistics misunderstand p-values and overvalue significance testing. [citing ASA Statement] The fact that there is a debate among professional statisticians on this narrow issue does not validate Dr. Gerstman’s [plaintiffs’ expert witness’s] rejection of the importance of statistical significance testing, or undermine Defendants’ reliance on accepted methods for determining association and causation.”6

In its brief in support of excluding causation opinions, the defense took pains to define statistical significance, and managed to do so, painfully, or at least in ways that the ASA conferees would have found objectionable:

“Any association found must be tested for its statistical significance. Statistical significance testing measures the likelihood that the observed association could be due to chance variation among samples. Scientists evaluate whether an observed effect is due to chance using p-values and confidence intervals. The prevailing scientific convention requires that there be 95% probability that the observed association is not due to chance (expressed as a p-value < 0.05) before reporting a result as ‘statistically significant.’ * * * This process guards against reporting false positive results by setting a ceiling for the probability that the observed positive association could be due to chance alone, assuming that no association was actually present.”7

AbbVie’s brief proceeded to characterize the confidence interval as a tool of significance testing, again in a way that misstates the mathematical meaning and importance of the interval:

“The determination of statistical significance can be described equivalently in terms of the confidence interval calculated in connection with the association. A confidence interval indicates the level of uncertainty that exists around the measured value of the association (i.e., the OR or RR). A confidence interval defines the range of possible values for the actual OR or RR that are compatible with the sample data, at a specified confidence level, typically 95% under the prevailing scientific convention. Reference Manual, at 580 (Ex. 14) (‘If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the same population.’). * * * If the confidence interval crosses 1.0, this means there may be no difference between the treatment group and the control group, therefore the result is not considered statistically significant.”8

Perhaps AbbVie’s counsel should be permitted a plea in mitigation by having cited to, and quoted from, the Reference Manual on Scientific Evidence’s chapter on epidemiology, which was also wide of the mark in its description of the confidence interval. Counsel would have been better served by the Manual’s more rigorous and accurate chapter on statistics. Even so, the above-quoted statements give an inappropriate interpretation of random error as a probability about the hypothesis being tested.9 Particularly dangerous, in terms of failing to advance AbbVie’s own objectives, was the characterization of the confidence interval as measuring the level of uncertainty, as though there were no other sources of uncertainty other than random error in the measurement of the risk ratio.
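
The trouble with the “95% probability that the observed association is not due to chance” formulation is easy to see by simulation. The sketch below (with invented event rates and sample sizes, not the TRT data) shows what the 5% in “p < 0.05” actually controls: among studies of a truly null association, about 5% will come out “significant” by chance. The p-value is computed assuming the null hypothesis; it is not a probability that a given observed association is real.

```python
# Simulate many studies of a truly null association: both arms share
# the same event rate, so every "significant" result is a false positive.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)
n_per_arm, true_rate, sims = 200, 0.10, 2000

false_positives = 0
for _ in range(sims):
    a = rng.binomial(n_per_arm, true_rate)   # events among the treated
    b = rng.binomial(n_per_arm, true_rate)   # events among the untreated
    table = [[a, n_per_arm - a], [b, n_per_arm - b]]
    p = fisher_exact(table)[1]               # two-sided exact p-value
    false_positives += p < 0.05

# Roughly 5% (a bit less, because the exact test is conservative).
print(f"fraction of null studies with p < 0.05: {false_positives / sims:.3f}")
```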

The Plaintiffs’ Attack on Significance Testing

The Plaintiffs, of course, filed an opposition brief that characterized the defense position as an attempt to:

“elevate statistical significance, as measured by confidence intervals and so-called p-values, to the status of an absolute requirement to the establishment of causation.”10

Tellingly, the plaintiffs’ brief fails to point to any modern-era example of a scientific determination of causation based upon epidemiologic evidence, in which the pertinent studies were not assessed for, and found to show, statistical significance.

After citing a few judicial opinions that underplayed the importance of statistical significance, the Plaintiffs’ opposition turned to the ASA Statement for what it perceived to be support for its loosey-goosey approach to causal inference.11 The Plaintiffs’ opposition brief quoted a series of propositions from the ASA Statement, without the ASA’s elaborations and elucidations, and without much in the way of explanation or commentary. At the very least, the Plaintiffs’ heavy reliance upon, despite their distortions of, the ASA Statement helped them to define key statistical concepts more carefully than had AbbVie in its opening brief.

The ASA Statement, however, was not immune from being misrepresented in the Plaintiffs’ opposition brief. Many of the quoted propositions were quite beside the points of the dispute over the validity and reliability of Plaintiffs’ expert witnesses’ conclusions of causation about testosterone and heart attacks, conclusions not reached or shared by the FDA, any consensus statement from medical organizations, or any serious published systematic review:

“P-values do not measure the probability that the studied hypothesis is true, … .”12

This proposition from the ASA Statement is true, but trivially true. (Of course, this ASA principle is relevant to the many judicial decisions that have managed to misstate what p-values measure.) The above-quoted proposition follows from the definition and meaning of the p-value; only someone who did not understand significance probability would confuse it with the probability of the truth of the studied hypothesis. P-values’ not measuring the probability of the null hypothesis, or any alternative hypothesis, is not a flaw in p-values, but arguably their strength.

“A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.”13

Again, true, true, and immaterial. The existence of other metrics of importance, such as the magnitude of an association or correlation, hardly detracts from the importance of assessing the random error in an observed statistic. Nor does the need to assess the clinical or practical significance of an association or correlation detract from the importance of assessing random error in a measured statistic.

“By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.”14

The Plaintiffs’ opposition’s attempt to spin the above ASA statement as a criticism of p-values involves an ignoratio elenchi. Once again, the p-value assumes a probability model and a null hypothesis, and so it cannot provide a “measure” of the model’s or hypothesis’s probability.

The Plaintiffs’ final harrumph on the ASA Statement was their claim that the ASA Statement’s conclusion was “especially significant” to the testosterone litigation:

“Good statistical practice, as an essential component of good scientific practice, emphasizes principles of good study design and conduct, a variety of numerical and graphical summaries of data, understanding of the phenomenon under study, interpretation of results in context, complete reporting and proper logical and quantitative understanding of what data summaries mean. No single index should substitute for scientific reasoning.”15

The existence of other important criteria in the evaluation and synthesis of a complex body of studies does not erase or supersede the importance of assessing stochastic error in the epidemiologic studies. Plaintiffs’ Opposition Brief asserted that the Defense’s attempt

“to substitute the single index, the p-value, for scientific reasoning in the reports of Plaintiffs’ experts should be rejected.”16

Some of the defense’s opening brief could indeed be read as reducing causal inference to the determination of statistical significance. A sympathetic reading of the entire AbbVie brief, however, shows that it had criticized the threats to validity in the observational epidemiologic studies, as well as some of the clinical trials, and other rampant flaws in the Plaintiffs’ expert witnesses’ reasoning. The Plaintiffs’ citations to the ASA Statement’s “negative” propositions about p-values (to emphasize what they are not) appeared to be the stuffing of a strawman, used to divert attention from other failings of their own claims and proffered analyses. In other words, the substance of the Rule 702 application had much more to do with data quality and study validity than statistical significance.

What did the trial court make of this back and forth about statistical significance and the ASA Statement? For the most part, the trial court denied both sides’ challenges to proffered expert witness testimony on causation and statistical issues. In sorting out the controversy over the ASA Statement, the trial court apparently misunderstood key statistical concepts and paid little attention to threats to validity other than random variability in study results.17 The trial court summarized the controversy as follows:

“In arguing that the scientific literature does not support a finding that TRT is associated with the alleged injuries, AbbVie emphasize [sic] the importance of considering the statistical significance of study results. Though experts for both AbbVie and plaintiffs agree that statistical significance is a widely accepted concept in the field of statistics and that there is a conventional method for determining the statistical significance of a study’s findings, the parties and their experts disagree about the conclusions one may permissibly draw from a study result that is deemed to possess or lack statistical significance according to conventional methods of making that determination.”18

Of course, there was never a controversy presented to the court about drawing a conclusion from “a study.” By the time the briefs were filed, both sides had multiple observational studies, clinical trials, and meta-analyses to synthesize into opinions for or against causal claims.

Ironically, AbbVie might claim to have prevailed in having the trial court adopt its misleading definitions of p-values and confidence intervals:

“Statisticians test for statistical significance to determine the likelihood that a study’s findings are due to chance. *** According to conventional statistical practice, such a result *** would be considered statistically significant if there is a 95% probability, also expressed as a ‘p-value’ of <0.05, that the observed association is not the product of chance. If, however, the p-value were greater than 0.05, the observed association would not be regarded as statistically significant, according to prevailing conventions, because there is a greater than 5% probability that the association observed was the result of chance.”19

The MDL court similarly appeared to accept AbbVie’s dubious description of the confidence interval:

“A confidence interval consists of a range of values. For a 95% confidence interval, one would expect future studies sampling the same population to produce values within the range 95% of the time. So if the confidence interval ranged from 1.2 to 3.0, the association would be considered statistically significant, because one would expect, with 95% confidence, that future studies would report a ratio above 1.0 – indeed, above 1.2.”20

The court’s opinion clearly evidences the danger in stating the importance of statistical significance without placing equal emphasis on the need to exclude bias and confounding. Having found an observational study and one meta-analysis of clinical trial safety outcomes that were statistically significant, the trial court held that any dispute over the probativeness of the studies was for the jury to assess.

Some but not all of AbbVie’s brief might have encouraged this lax attitude by failing to emphasize study validity at the same time as emphasizing the importance of statistical significance. In any event, the trial court continued with its précis of the plaintiffs’ argument that:

“a study reporting a confidence interval ranging from 0.9 to 3.5, for example, should certainly not be understood as evidence that there is no association and may actually be understood as evidence in favor of an association, when considered in light of other evidence. Thus, according to plaintiffs’ experts, even studies that do not show a statistically significant association between TRT and the alleged injuries may plausibly bolster their opinions that TRT is capable of causing such injuries.”21

Of course, a single study that reported a risk ratio greater than 1.0, with a confidence interval of 0.9 to 3.5, might reasonably be incorporated into a meta-analysis that in turn could support, or not support, a causal inference. In the TRT litigation, however, the well-conducted, most up-to-date meta-analyses did not report statistically significant elevated rates of cardiovascular events among users of TRT. The court’s insistence that a study with a confidence interval of 0.9 to 3.5 cannot be interpreted as evidence of no association is, of course, correct. Equally correct would be to say that the interval shows that the study failed to show an association. The trial court never grappled with the reality that the best conducted meta-analyses failed to show statistically significant increases in the rates of cardiovascular events.

The American Statistical Association and its members would likely have been deeply disappointed by how both parties used the ASA Statement for their litigation objectives. AbbVie’s suggestion that the ASA Statement reflects a debate about “whether scientists who lack expertise in statistics misunderstand p-values and overvalue significance testing” would appear to have no support in the Statement itself or any other commentary to come out of the meeting leading up to the Statement. The Plaintiffs’ argument that p-values properly understood are unimportant and misleading similarly finds no support in the ASA Statement. Conveniently, the Plaintiffs’ brief ignored the Statement’s insistence upon transparency in pre-specification of analyses and outcomes, and in handling of multiple comparisons:

“P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable. Cherrypicking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and ‘p-hacking’, leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided.”22

Most if not all of the plaintiffs’ expert witnesses’ reliance materials would have been eliminated under this principle set forth by the ASA Statement.


1 See, e.g., In re Ephedra Prods. Liab. Litig., 393 F.Supp. 2d 181, 191 (S.D.N.Y. 2005). See also “Confidence in Intervals and Diffidence in the Courts” (March 4, 2012); “Scientific illiteracy among the judiciary” (Feb. 29, 2012).

3 Letter of Janet Woodcock, Director of FDA’s Center for Drug Evaluation and Research, to Sidney Wolfe, Director of Public Citizen’s Health Research Group (July 16, 2014) (denying citizen petition for “black box” warning).

4 Defendants’ (AbbVie, Inc.’s) Motion to Exclude Plaintiffs Expert Testimony on the Issue of Causation, and for Summary Judgment, and Memorandum of Law in Support, Case No. 1:14-CV-01748, MDL 2545, Document #: 1753, 2017 WL 1104501 (N.D. Ill. Feb. 20, 2017) [AbbVie Brief].

5 AbbVie Brief at 3; see also id. at 7-8 (“Depending upon the expert, even the basic tests of statistical significance are simply ignored, dismissed as misleading… .”). AbbVie’s definitions of statistical significance occasionally wandered off track and into the transposition fallacy, but generally its point was understandable.

6 AbbVie Brief at 63 n.16 (emphasis in original).

7 AbbVie Brief at 13 (emphasis in original).

8 AbbVie Brief at 13-14 (emphasis in original).

9 The defense brief further emphasized statistical significance almost as though it were a sufficient basis for inferring causality from observational studies: “Regardless of this debate, courts have routinely found the traditional epidemiological method—including bedrock principles of significance testing—to be the most reliable and accepted way to establish general causation. See, e.g., In re Zoloft, 26 F. Supp. 3d 449, 455; see also Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319 (7th Cir. 1996) (‘The law lags science; it does not lead it.’).” AbbVie Brief at 63-64 & n.16. The defense’s language about “including bedrock principles of significance testing” absolves it of having totally ignored other necessary considerations, but still the defense might have advantageously pointed out the other needed considerations for causal inference at the same time.

10 Plaintiffs’ Steering Committee’s Memorandum of Law in Opposition to Motion of AbbVie Defendants to Exclude Plaintiffs’ Expert Testimony on the Issue of Causation, and for Summary Judgment at p.34, Case No. 1:14-CV-01748, MDL 2545, Document No. 1753 (N.D. Ill. Mar. 23, 2017) [Opp. Brief].

11 Id. at 35 (appending the ASA Statement and the commentary of more than two dozen interested commentators).

12 Id. at 38 (quoting from the ASA Statement at 131).

13 Id. at 38 (quoting from the ASA Statement at 132).

14 Id. at 38 (quoting from the ASA Statement at 132).

15 Id. at 38 (quoting from the ASA Statement at 132).

16 Id. at 38.

17 In re Testosterone Replacement Therapy Prods. Liab. Litig., MDL No. 2545, C.M.O. No. 46, 2017 WL 1833173 (N.D. Ill. May 8, 2017) [In re TRT].

18 In re TRT at *4.

19 In re TRT at *4.

20 Id.

21 Id. at *4.

22 ASA Statement at 131-32.

Daubert Retrospective – Statistical Significance

January 5th, 2019

The holiday break was an opportunity and an excuse to revisit the briefs filed in the Supreme Court by parties and amici, in the Daubert case. The 22 amicus briefs in particular provided a wonderful basis upon which to reflect on how far we have come, and also how far we have to go, to achieve real evidence-based fact finding in technical and scientific litigation. Twenty-five years ago, Rules 702 and 703 vied for control over errant and improvident expert witness testimony. With Daubert decided, Rule 702 emerged as the winner. Sadly, most courts seem to ignore or forget about Rule 703, perhaps because of its awkward wording. Rule 702, however, received the judicial imprimatur to support the policing and gatekeeping of dysepistemic claims in the federal courts.

As noted last week,1 the petitioners (plaintiffs) in Daubert advanced several lines of fallacious and specious argument, some of which were lost in the shuffle and page limitations of the Supreme Court briefings. The plaintiffs’ transposition fallacy received barely a mention, although it did bring forth at least a footnote in an important and overlooked amicus brief filed by the American Medical Association (AMA), the American College of Physicians, and over a dozen other medical specialty organizations,2 all of which emphasized both the importance of statistical significance in interpreting epidemiologic studies, and the fallacy of interpreting 95% confidence intervals as providing a measure of certainty about the estimated association as a parameter. The language of these associations’ amicus brief is noteworthy and still relevant to today’s controversies.

The AMA’s amicus brief, like the brief filed by the National Academies of Science and the American Association for the Advancement of Science, strongly endorsed a gatekeeping role for trial courts to exclude testimony not based upon rigorous scientific analysis:

“The touchstone of Rule 702 is scientific knowledge. Under this Rule, expert scientific testimony must adhere to the recognized standards of good scientific methodology including rigorous analysis, accurate and statistically significant measurement, and reproducibility.”3

Having incorporated the term “scientific knowledge,” Rule 702 could not permit anything less in expert witness testimony, lest it pollute federal courtrooms across the land.

Elsewhere, the AMA elaborated upon its reference to “statistically significant measurement”:

“Medical researchers acquire scientific knowledge through laboratory investigation, studies of animal models, human trials, and epidemiological studies. Such empirical investigations frequently demonstrate some correlation between the intervention studied and the hypothesized result. However, the demonstration of a correlation does not prove the hypothesized result and does not constitute scientific knowledge. In order to determine whether the observed correlation is indicative of a causal relationship, scientists necessarily rely on the concept of ‘statistical significance.’ The requirement of statistical reliability, which tends to prove that the relationship is not merely the product of chance, is a fundamental and indispensable component of valid scientific methodology.”4

And then again, the AMA spelled out its position, in case the Court missed its other references to the importance of statistical significance:

“Medical studies, whether clinical trials or epidemiologic studies, frequently demonstrate some correlation between the action studied … . To determine whether the observed correlation is not due to chance, medical scientists rely on the concept of ‘statistical significance’. A ‘statistically significant’ correlation is generally considered to be one in which statistical analysis suggests that the observed relationship is not the result of chance. A statistically significant correlation does not ‘prove’ causation, but in the absence of such a correlation, scientific causation clearly is not proven.”5

In its footnote 9, in the above quoted section of the brief, the AMA called out the plaintiffs’ transposition fallacy, without specifically citing to plaintiffs’ briefs:

“It is misleading to compare the 95% confidence level used in empirical research to the 51% level inherent in the preponderance of the evidence standard.”6

Actually the plaintiffs’ ruse was much worse than misleading. The plaintiffs did not compare the two probabilities; they equated them. Some might call this ruse an outright fraud on the court. In any event, the AMA amicus brief remains an available, citable source for opposing this fraud and the casual dismissal of the importance of statistical significance.

One other amicus brief touched on the plaintiffs’ statistical shenanigans. The Product Liability Advisory Council, National Association of Manufacturers, Business Roundtable, and Chemical Manufacturers Association jointly filed an amicus brief to challenge some of the excesses of the plaintiffs’ submissions.7 Plaintiffs’ expert witness, Shanna Swan, had calculated Type II error rates and post-hoc power for some selected epidemiologic studies relied upon by the defense. Swan’s complaint had been that some studies had only 20% probability (power) to detect a statistically significant doubling of limb reduction risk, with significance at p < 5%.8

The PLAC Brief pointed out that power calculations must assume an alternative hypothesis, and that the doubling-of-risk hypothesis had no basis in the evidentiary record. Although the PLAC complaint was correct, it missed the plaintiffs’ point that the defense had set exceeding a risk ratio of 2.0 as an important benchmark for specific causation attributability. Swan’s calculation of post-hoc power would have yielded an even lower probability for detecting risk ratios of 1.2 or so. More to the point, PLAC noted that other studies had much greater power, and that collectively, the available studies would have had a much greater probability of having at least one study achieve statistical significance without dodgy re-analyses.
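
Power claims of this sort are easy to make concrete by simulation. The sketch below (with invented baseline risks and sample sizes, not figures from any Bendectin study) estimates the probability that a study of a given size would declare a true doubling of risk “significant” at p < 0.05; note that any such power calculation presupposes the 5% significance threshold.

```python
# Estimate the power of Fisher's Exact Test to detect a true doubling
# of risk (RR = 2) at the 5% significance level, by brute simulation.
import numpy as np
from scipy.stats import fisher_exact

def power(n_exposed, n_unexposed, base_risk, risk_ratio, sims=1000, seed=2):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        a = rng.binomial(n_exposed, base_risk * risk_ratio)
        b = rng.binomial(n_unexposed, base_risk)
        table = [[a, n_exposed - a], [b, n_unexposed - b]]
        hits += fisher_exact(table)[1] < 0.05
    return hits / sims

# A small study of a rare outcome has little chance of "significance"
# even when the true risk ratio is 2.0; a larger study has far more.
print(power(300, 300, 0.01, 2.0))     # low power, on the order of 20% or less
print(power(2000, 2000, 0.01, 2.0))   # much higher power
```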


1 “The Advocates’ Errors in Daubert” (Dec. 28, 2018).

2 American Academy of Allergy and Immunology, American Academy of Dermatology, American Academy of Family Physicians, American Academy of Neurology, American Academy of Orthopaedic Surgeons, American Academy of Pain Medicine, American Association of Neurological Surgeons, American College of Obstetricians and Gynecologists, American College of Pain Medicine, American College of Physicians, American College of Radiology, American Society of Anesthesiologists, American Society of Plastic and Reconstructive Surgeons, American Urological Association, and College of American Pathologists.

3 Brief of the American Medical Association, et al., as Amici Curiae, in Support of Respondent, in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 13006285, at *27 (U.S., Jan. 19, 1993)[AMA Brief].

4 AMA Brief at *4-*5 (emphasis added).

5 AMA Brief at *14-*15 (emphasis added).

6 AMA Brief at *15 & n.9.

7 Brief of the Product Liability Advisory Council, Inc., National Association of Manufacturers, Business Roundtable, and Chemical Manufacturers Association, as Amici Curiae in Support of Respondent, in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 13006288 (U.S., Jan. 19, 1993) [PLAC Brief].

8 PLAC Brief at *21.

The Advocates’ Errors in Daubert

December 28th, 2018

Over 25 years ago, the United States Supreme Court answered a narrow legal question about whether the so-called Frye rule was incorporated into Rule 702 of the Federal Rules of Evidence. Plaintiffs in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), appealed a Ninth Circuit ruling that the Frye rule survived, and was incorporated into, the enactment of a statutory evidentiary rule, Rule 702. As most legal observers can now discern, plaintiffs won the battle and lost the war. The Court held that the plain language of Rule 702 does not memorialize Frye; rather the rule requires an epistemic warrant for the opinion testimony of expert witnesses.

Many of the sub-issues of the Daubert case are now so much water over the dam. The case involved claims of birth defects from maternal use of an anti-nausea medication, Bendectin. Litigation over Bendectin is long over, and the medication is now approved for use in pregnant women, on the basis of a full new drug application, supported by clinical trial evidence.

In revisiting Daubert, therefore, we might imagine that legal scholars and scientists would be interested in the anatomy of the errors that led Bendectin plaintiffs stridently to maintain their causal claims. The oral argument before the Supreme Court is telling with respect to some of the sources of error. Two law professors, Michael H. Gottesman, for plaintiffs, and Charles Fried, for the defense, squared off one Tuesday morning in March 1993. A review of Gottesman’s argument reveals several fallacious lines of argument, which are still relevant today:

A. Regulation is Based Upon Scientific Determinations of Causation

In his oral argument, Gottesman asserted that regulators (as opposed to the scientific community) are in charge of determining causation,1 and environmental regulations are based upon scientific causation determinations.2 By the time that the Supreme Court heard argument in the Daubert case, this conflation of scientific and regulatory standards for causal conclusions was fairly well debunked.3 Gottesman’s attempt to mislead the Court failed, but the effort continues in courtrooms around the United States.

B. Similar Chemical Structures Have the Same Toxicities

Gottesman asserted that human teratogenicity can be determined from similarity in chemical structure with other established teratogens.4 Close may count in horseshoes, but in structure-activity relationships, small differences in chemical structure can result in huge differences in toxicologic or pharmacologic properties. A silly little methyl group on a complicated hydrocarbon ring structure can make a world of difference, as in the difference between estrogen and testosterone.

C. All Animals React the Same to Any Given Substance

Gottesman, in his oral argument, maintained that human teratogenicity can be determined from teratogenicity in non-human, non-primate, murine species.5 The Court wasted little time on this claim, the credibility of which has continued to decline in the last 25 years.

D. The Transposition Fallacy

Perhaps of greatest interest to me was Gottesman’s claim that the probability of the claimed causal association can be determined from the p-value or from the coefficient of confidence taken from the observational epidemiologic studies of birth defects among children of women who ingested Bendectin in pregnancy; a.k.a. the transposition fallacy.6

All these errors are still in play in American courtrooms, despite efforts of scientists and scientific organizations to disabuse judges and lawyers. The transposition fallacy, which has been addressed in these pages and elsewhere at great length, seems especially resilient to educational efforts. Still, the fallacy was as well recognized at the time of the Daubert argument as it is today, and it is noteworthy that the law professor who argued the plaintiffs’ case, in the highest court of the land, advanced this fallacious argument, and that the scientific and statistical community did little to nothing to correct the error.7

Although Professor Gottesman’s meaning in the oral argument is not entirely clear, on multiple occasions, he appeared to have conflated the coefficient of confidence, from confidence intervals, with the posterior probability that attaches to the alternative hypothesis of some association:

“What the lower courts have said was yes, but prove to us to a degree of statistical certainty which would give us 95 percent confidence that the human epidemiological data is reflective, that these higher numbers for the mothers who used Bendectin were not the product of random chance but in fact are demonstrating the linkage between this drug and the symptoms observed.”8

* * * * *

“… what was demonstrated by Shanna Swan was that if you used a degree of confidence lower than 95 percent but still sufficient to prove the point as likelier than not, the epidemiological evidence is positive… .”9

* * * * *

“The question is, how confident can we be that that is in fact probative of causation, not at a 95 percent level, but what Drs. Swan and Glassman said was applying the Rothman technique, a published technique and doing the arithmetic, that you find that this does link causation likelier than not.”10

Professor Fried’s oral argument for the defense largely refused or failed to engage with plaintiffs’ argument on statistical inference. With respect to the “Rothman” approach, Fried pointed out that plaintiffs’ statistical expert witness, Shanna Swan, never actually employed “the Rothman principle.”11

With respect to plaintiffs’ claim that individual studies had low power to detect risk ratios of two, Professor Fried missed the opportunity to point out that such post-hoc power calculations, whatever validity they might possess, embrace the concept of statistical significance at the customary 5% level. Fried did note that a meta-analysis, based upon all the epidemiologic studies, rendered plaintiffs’ power complaint irrelevant.12
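
Fried’s meta-analysis point answers the power complaint directly: pooling replaces many underpowered estimates with one precise one. Here is a minimal sketch (with invented study results, not the Bendectin literature) of a fixed-effect, inverse-variance meta-analysis of several imprecise risk ratios:

```python
# Pool several imprecise risk ratios on the log scale, weighting each
# by the inverse of its variance (recovered from its 95% CI width).
import math

studies = [(1.1, 0.6, 2.0),   # (RR, 95% CI lower, 95% CI upper) -- invented
           (0.9, 0.5, 1.6),
           (1.2, 0.7, 2.1),
           (1.0, 0.6, 1.7)]

total_weight, weighted_sum = 0.0, 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE implied by the CI
    w = 1.0 / se ** 2                                # inverse-variance weight
    total_weight += w
    weighted_sum += w * math.log(rr)

log_pooled = weighted_sum / total_weight
se_pooled = math.sqrt(1.0 / total_weight)            # smaller than any single SE
lo, hi = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
print(f"pooled RR = {math.exp(log_pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The pooled interval is far narrower than any single study’s, which is why a meta-analysis of individually underpowered studies can still decisively fail to show an association.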

Some readers may believe that judging advocates speaking extemporaneously about statistical concepts might be overly harsh. How well then did the lawyers explain and represent statistical concepts in their written briefs in the Daubert case?

Petitioners’ Briefs

Petitioners’ Opening Brief

The petitioners’ briefs reveal that Gottesman’s statements at oral argument represent a consistent misunderstanding of statistical concepts. The plaintiffs consistently conflated significance probability or the coefficient of confidence with the civil burden of proof probability:

“The crux of the disagreement between Merrell’s experts and those whose testimony is put forward by plaintiffs is that the latter are prepared to find causation more probable than not when the epidemiological evidence is strongly positive (albeit not at a 95% confidence level) and when it is buttressed with animal and chemical evidence predictive of causation, while the former are unwilling to find causation in the absence of an epidemiological study that satisfies the 95% confidence level.”13

After giving a reasonable facsimile of a definition of statistical significance, the plaintiffs’ brief proceeds to confuse the complement of alpha, or the coefficient of confidence (typically 95%), with the probability that the observed risk ratio in a sample is the actual population parameter of risk:

“But in toxic tort lawsuits, the issue is not whether it is certain that a chemical caused a result, but rather whether it is likelier than not that it did. It is not self-evident that the latter conclusion would require eliminating the null hypothesis (i.e. non-causation) to a confidence level of 95%.”14

The plaintiffs’ brief cited heavily to Rothman’s textbook, Modern Epidemiology, with the specious claim that the textbook supported the plaintiffs’ use of the coefficient of confidence to derive a posterior probability (> 50%) of the correctness of an elevated risk ratio for birth defects in children born to mothers who had taken Bendectin in their first trimesters of pregnancy:

“An alternative mechanism has been developed by epidemiologists in recent years to give a somewhat more informative picture of what the statistics mean. At any given confidence level (e.g. 95%) a confidence interval can be constructed. The confidence interval identifies the range of relative risks that collectively comprise the 95% universe. Additional confidence levels are then constructed exhibiting the range at other confidence levels, e.g., at 90%, 80%, etc. From this set of nested confidence intervals the epidemiologist can make assessments of how likely it is that the statistics are showing a true association. Rothman, Tab 9, pp. 122-25. By calculating nested confidence intervals for the data in the Bendectin studies, Dr. Swan was able to determine that it is far more likely than not that a true association exists between Bendectin and human limb reduction birth defects. Swan, Tab 12, at 3618-28.”15

The heavy reliance upon Rothman’s textbook at first blush appears confusing. Modern Epidemiology makes one limited mention of nested confidence intervals, and certainly never suggests that such intervals can provide a posterior probability of the correctness of the hypothesis. Rothman’s complaints about reliance upon “statistical significance,” however, are well-known, and Rothman himself submitted an amicus brief16 in Daubert, a brief that has its own problems.17
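
The mechanics of the “nested confidence intervals” maneuver are worth making explicit, because they expose the trick. In the sketch below (with a hypothetical risk ratio and interval, not Swan’s actual figures), shrinking the confidence level until the interval just excludes 1.0 simply restates the two-sided p-value; it does not yield a posterior probability that the association is real:

```python
# Compute nested confidence intervals for a hypothetical risk ratio,
# assuming normality on the log scale, and note where they exclude 1.0.
import math
from scipy.stats import norm

rr, lo95 = 1.5, 0.9                       # hypothetical estimate, 95% lower bound
log_rr = math.log(rr)
se = (log_rr - math.log(lo95)) / 1.96     # standard error implied by the interval

for level in (0.95, 0.90, 0.80, 0.50):
    z = norm.ppf(0.5 + level / 2)
    lo, hi = math.exp(log_rr - z * se), math.exp(log_rr + z * se)
    tag = "excludes 1.0" if lo > 1.0 else "includes 1.0"
    print(f"{level:.0%} CI: {lo:.2f}-{hi:.2f}  ({tag})")

# The confidence level at which the interval just touches 1.0 is merely
# one minus the two-sided p-value for the null hypothesis RR = 1.0:
p = 2 * (1 - norm.cdf(log_rr / se))
print(f"two-sided p = {p:.3f}")           # about 0.12 for these numbers
```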

In direct response to the Rothman Brief,18 Professor Alvan Feinstein filed an amicus brief in Daubert, wherein he acknowledged that meta-analyses and re-analyses can be valid, but cautioned that these techniques are subject to many sources of invalidity, and that their employment by careful practitioners in some instances should not be a blank check to professional witnesses who are supported by plaintiffs’ counsel. Similarly, Feinstein acknowledged that standards of statistical significance:

“should be appropriately flexible, but they must exist if science is to preserve its tradition of intellectual discipline and high quality research.”19

Petitioners’ Reply Brief

The plaintiffs’ statistical misunderstandings are further exemplified in their Reply Brief, where they reassert the transposition fallacy and alternatively state that associations with p-values greater than 5%, or 95% confidence intervals that include the risk ratio of 1.0, do not show the absence of an association.20 The latter point was, of course, irrelevant in the Daubert case, in which plaintiffs had the burden of persuasion. As in their oral argument through Professor Gottesman, the plaintiffs’ appellate briefs misunderstand the crucial point that confidence intervals are conditioned upon the data observed from a particular sample, and do not provide posterior probabilities for the correctness of a claimed hypothesis.

Defense Brief

The defense brief spent little time on the statistical issue or plaintiffs’ misstatements, but dispatched the issue in a trenchant footnote:

“Petitioners stress the controversy some epidemiologists have raised about the standard use by epidemiologists of a 95% confidence level as a condition of statistical significance. Pet. Br. 8-10. See also Rothman Amicus Br. It is hard to see what point petitioners’ discussion establishes that could help their case. Petitioners’ experts have never developed and defended a detailed analysis of the epidemiological data using some alternative well-articulated methodology. Nor, indeed, do they show (or could they) that with some other plausible measure of confidence (say, 90%) the many published studies would collectively support an inference that Bendectin caused petitioners’ limb reduction defects. At the very most, all that petitioners’ theoretical speculations do is question whether these studies – as the medical profession and regulatory authorities in many countries have concluded – affirmatively prove that Bendectin is not a teratogen.”21

The defense never responded to the specious argument, stated or implied within the plaintiffs’ briefs, and in Gottesman’s oral argument, that a coefficient of confidence of 51% would have generated confidence intervals that routinely excluded the null hypothesis of a risk ratio of 1.0. The defense did, however, respond to plaintiffs’ power argument by adverting to a meta-analysis that failed to find a statistically significant association.22

The defense also advanced two important arguments to which the plaintiffs’ briefs never meaningfully responded. First, the defense detailed the “cherry picking” or selective reliance engaged in by plaintiffs’ expert witnesses.23 Second, the defense noted that plaintiffs had a specific causation problem in that their expert witnesses had been attempting to infer specific causation based upon relative risks well below 2.0.24

To some extent, the plaintiffs’ statistical misstatements were taken up by an amicus brief submitted by the United States government, speaking through the office of the Solicitor General.25 Drawing upon the Supreme Court’s decisions in race discrimination cases,26 the government asserted that epidemiologists “must determine” whether a finding of an elevated risk ratio “could have arisen due to chance alone.”27

Unfortunately, the government’s brief butchered the meaning of confidence intervals. Rather than describe the confidence interval as showing what point estimates of risk ratios are reasonably compatible with the sample result, the government stated that confidence intervals show “how close the real population percentage is likely to be to the figure observed in the sample”:

“since there is a 95 percent chance that the ‘true’ value lies within two standard deviations of the sample figure, that particular ‘confidence interval’ (i.e., two standard deviations) is therefore said to have a ‘confidence level’ of about 95 percent.”28

The Solicitor General’s office seemed to have had some awareness that it was giving offense with the above definition because it quickly added:

“While it is customary (and, in many cases, easier) to speak of ‘a 95 percent chance’ that the actual population percentage is within two standard deviations of the figure obtained from the sample, ‘the chances are in the sampling procedure, not in the parameter’.”29

Easier perhaps, but clearly erroneous to speak that way, and customary only among the unwashed. The government half apologized for misleading the Court when it followed up with a better definition from David Freedman’s textbook, but sadly the government lawyers were not content to let the matter sit there. The Solicitor General’s office’s brief obscured the textbook definition with a further inaccurate précis:

“if the sampling from the general population were repeated numerous times, the ‘real’ population figure would be within the confidence interval 95 percent of the time. The ‘real’ figure would be outside that interval the remaining five percent of the time.”30

The lawyers in the Solicitor General’s office thus made the rookie mistake of forgetting that in the long run, after numerous repeated samples, there would be numerous confidence intervals, not one. The 95% probability of containing the true population value belongs to the set of the numerous confidence intervals, not “the confidence interval” obtained in the first go around.
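
The distinction is easy to verify by simulation. In the sketch below (with an arbitrary true rate and sample size), about 95% of the intervals, across repeated samples, cover the one fixed true value; a single realized interval, by contrast, captures future point estimates far less than 95% of the time, contrary to the readings quoted above.

```python
# Coverage belongs to the procedure: simulate many samples, build a
# 95% interval from each, and count how often the intervals cover the
# true value, versus how often future estimates land in one fixed interval.
import numpy as np

rng = np.random.default_rng(3)
true_p, n, sims = 0.30, 200, 10_000

covered = 0
estimates, intervals = [], []
for _ in range(sims):
    p_hat = rng.binomial(n, true_p) / n
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)   # Wald 95% interval
    estimates.append(p_hat)
    intervals.append((p_hat - half, p_hat + half))
    covered += intervals[-1][0] <= true_p <= intervals[-1][1]

lo, hi = intervals[0]                                # one realized interval
capture = np.mean([lo <= e <= hi for e in estimates[1:]])
print(f"intervals covering the true value: {covered / sims:.3f}")  # about 0.95
print(f"future estimates inside one interval: {capture:.3f}")      # well below 0.95
```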

The Daubert case has been the subject of nearly endless scholarly comment, but few authors have chosen to revisit the parties’ briefs. Two authors have published a paper that reviewed the scientists’ amici briefs in Daubert.31 The Rothman brief was outlined in detail; the Feinstein rebuttal was not substantively discussed. The plaintiffs’ invocation of the transposition fallacy in Daubert has apparently gone unnoticed.


1 Oral Argument in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 754951, *5 (Tuesday, March 30, 1993) [Oral Arg.].

2 Oral Arg. at *6.

3 In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y.1984) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one.”), aff’d in relevant part, 818 F.2d 145 (2d Cir. 1987), cert. denied sub nom. Pinkney v. Dow Chemical Co., 484 U.S. 1004 (1988).

4 Oral Arg. at *19.

5 Oral Arg. at *18-19.

6 Oral Arg. at *19.

7 See, e.g., “Sander Greenland on ‘The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics’” (Feb. 8, 2015) (noting biostatistician Sander Greenland’s publications, which selectively criticize only defense expert witnesses and lawyers for statistical misstatements); see also “Some High-Value Targets for Sander Greenland in 2018” (Dec. 27, 2017).

8 Oral Arg. at *19.

9 Oral Arg. at *20.

10 Oral Arg. at *44. At the oral argument, this last statement was perhaps Gottesman’s clearest misstatement of statistical principles, in that he directly suggested that the coefficient of confidence translates into a posterior probability of the claimed association at the observed size.

11 Oral Arg. at *37.

12 Oral Arg. at *32.

13 Petitioner’s Brief in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102, 1992 WL 12006442, *8 (U.S. Dec. 2, 1992) [Petitioner’s Brief].

14 Petitioner’s Brief at *9.

15 Petitioner’s Brief at *n. 36.

16 Brief Amici Curiae of Professors Kenneth Rothman, Noel Weiss, James Robins, Raymond Neutra and Steven Stellman, in Support of Petitioners, 1992 WL 12006438, Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. S. Ct. No. 92-102 (Dec. 2, 1992).

18 Brief Amicus Curiae of Professor Alvan R. Feinstein in Support of Respondent, in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 13006284, at *2 (U.S., Jan. 19, 1993) [Feinstein Brief].

19 Feinstein Brief at *19.

20 Petitioner’s Reply Brief in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102, 1993 WL 13006390, at *4 (U.S., Feb. 22, 1993).

21 Respondent’s Brief in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102, 1993 WL 13006277, at n. 32 (U.S., Jan. 19, 1993) [Respondent Brief].

22 Respondent Brief at *4.

23 Respondent Brief at *42 n.32 and 47.

24 Respondent Brief at *40-41 (citing DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990)).

25 Brief for the United States as Amicus Curiae Supporting Respondent in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102, 1993 WL 13006291 (U.S., Jan. 19, 1993) [U.S. Brief].

26 See, e.g., Hazelwood School District v. United States, 433 U.S. 299, 308-312 (1977); Castaneda v. Partida, 430 U.S. 482, 495-499 & nn.16-18 (1977) (“As a general rule for such large samples, if the difference between the expected value and the observed number is greater than two or three standard deviations, then the hypothesis that the jury drawing was random would be suspect to a social scientist.”).

27 U.S. Brief at *3-4. Over two decades later, when politically convenient, the United States government submitted an amicus brief in a case involving alleged securities fraud for failing to disclose adverse events of an over-the-counter medication. In Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011), the securities fraud plaintiffs contended that they need not plead “statistically significant” evidence for adverse drug effects. The Solicitor General’s office, along with counsel for the Food and Drug Division of the Department of Health & Human Services, in their zeal to assist plaintiffs disclaimed the necessity, or even the importance, of statistical significance:

[w]hile statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant … does not refute an inference of causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *14 (Nov. 12, 2010).

28 U.S. Brief at *5.

29 U.S. Brief at *5-6 (citing David Freedman, Freedman, R. Pisani, R. Purves & A. Adhikari, Statistics 351, 397 (2d ed. 1991)).

30 U.S. Brief at *6 (citing Freedman’s text at 351) (emphasis added).

31 See Joan E. Bertin & Mary S. Henifin, “Science, Law, and the Search for Truth in the Courtroom: Lessons from Daubert v. Merrell Dow,” 22 J. Law, Medicine & Ethics 6 (1994); Joan E. Bertin & Mary Sue Henifin, “Scientists Talk to Judges: Reflections on Daubert v. Merrell Dow,” 4(3) New Solutions 3 (1994). The authors’ choice of the New Solutions journal is interesting and curious. New Solutions: A Journal of Environmental and Occupational Health Policy was published by the Oil, Chemical and Atomic Workers International Union, under the control of Anthony Mazzocchi (June 13, 1926 – Oct. 5, 2002), who was the union’s secretary-treasurer. Anthony Mazzocchi, “Finding Common Ground: Our Commitment to Confront the Issues,” 1 New Solutions 3 (1990); see also Steven Greenhouse, “Anthony Mazzocchi, 76, Dies; Union Officer and Party Father,” N.Y. Times (Oct. 9, 2002). Even a cursory review of this journal’s contents reveals how interested and invested the union was in the litigation industry and that industry’s expert witnesses.

 

The “Rothman” Amicus Brief in Daubert v. Merrell Dow Pharmaceuticals

November 17th, 2018

“Then time will tell just who fell
And who’s been left behind”

                  Dylan, “Most Likely You Go Your Way” (1966)

 

When the Daubert case headed to the Supreme Court, it had 22 amicus briefs in tow. Today that number is routine for an appeal to the high court, but in 1992, it was a signal of intense interest in the case among both the scientific and legal communities. To the litigation industry, the prospect of judicial gatekeeping of expert witness testimony was an anathema. To the manufacturing industry, the prospect was a precious tool for defending against specious claiming.

With the benefit of 25 years of hindsight, a look at some of those amicus briefs reveals a good deal about the scientific and legal acumen of the “friends of the court.” Not all amicus briefs in the case were equal; not all have held up well in the face of time. The amicus brief of the American Association for the Advancement of Science and the National Academy of Sciences was a good example of advocacy for the full implementation of gatekeeping on scientific principles of valid inference.1 Other amici urged an anything-goes approach to judicial oversight of expert witnesses.

One amicus brief often praised by Plaintiffs’ counsel was submitted by Professor Kenneth Rothman and colleagues.2 This amicus brief is still cited by parties who find support in the brief for their excuses for not having consistent, valid, strong, and statistically significant evidence to support their claims of causation. To be sure, Rothman did target statistical significance as a strict criterion of causal inference, but there is little support in the brief for the loosey-goosey style of causal claiming that is so prevalent among lawyers for the litigation industry. Unlike the brief filed by the AAAS and the National Academy of Sciences, Rothman’s brief abstained from the social policies implicated by judicial gatekeeping or its rejection. Instead, Rothman’s brief set out to make three narrow points:

(1) courts should not rely upon strict statistical significance testing for admissibility determinations;

(2) peer review is not an appropriate touchstone for the validity of an expert witness’s opinion; and

(3) unpublished, non-peer-reviewed “reanalysis” of studies is a routine part of the scientific process, and regularly practiced by epidemiologists and other scientists.

Rothman was encouraged to target these three issues by the lower courts’ opinions in the Daubert case, in which the courts made blanket statements about the role of absent statistical significance and peer review, and the illegitimacy of “re-analyses” of published studies.

Professor Rothman has made many admirable contributions to epidemiologic practice, but the amicus brief submitted by him and his colleagues falls into the trap of making the sort of blanket general statements that they condemned in the lower courts’ opinions. Of the brief’s three points, the first, about statistical significance is the most important for epidemiologic and legal practice. Despite reports of an odd journal here or there “abolishing” p-values, most medical journals continue to require the presentation of either p-values or confidence intervals. In the majority of medical journals, 95% confidence intervals that exclude a null hypothesis risk ratio of 1.0, or risk difference of 0, are labelled “statistically significant,” sometimes improvidently in the presence of multiple comparisons and lack of pre-specification of outcome.

For over three decades, Rothman has criticized the prevailing practice on statistical significance. Professor Rothman is also well known for his advocacy for the superiority of confidence intervals over p-values in conveying important information about what range of values are reasonably compatible with the observed data.3 His criticisms of p-values and his advocacy for estimation with intervals have pushed biomedical publishing to embrace confidence intervals as more informative than just p-values. Still, his views on statistical significance have never gained complete acceptance at most clinical journals. Biomedical scientists continue to interpret 95% confidence intervals, at least in part, according to whether they show “significance” by excluding the null hypothesis value of no risk difference or of risk ratios equal to 1.0.
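For readers who want that convention in concrete terms, here is a short sketch, with hypothetical 2 × 2 counts, of the usual large-sample calculation of a risk ratio and its 95% confidence interval, and of the “significance” label journals attach when the interval excludes 1.0:

```python
# Sketch of the prevailing convention: a risk ratio is labelled
# "statistically significant" when its 95% CI excludes 1.0.
# The 2x2 counts below are hypothetical.
import math

a, n1 = 30, 1000        # exposed group: events / total
b, n0 = 15, 1000        # unexposed group: events / total

rr = (a / n1) / (b / n0)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)       # SE of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
print("statistically significant" if lo > 1.0 or hi < 1.0
      else "not statistically significant")
```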

The first point in Rothman’s amicus brief is styled:

THE LOWER COURTS’ FOCUS ON SIGNIFICANCE TESTING IS BASED ON THE INACCURATE ASSUMPTION THAT ‘STATISTICAL SIGNIFICANCE’ IS REQUIRED IN ORDER TO DRAW INFERENCES FROM EPIDEMIOLOGICAL INFORMATION”

The challenge by Rothman and colleagues to the “assumption” that statistical significance is necessary is what, of course, has endeared this brief to the litigation industry. A close read of the brief, however, shows that Rothman’s critique of the assumption is equivocal. Rothman et amici characterized the lower courts as having given:

blind deference to inappropriate and arcane publication standards and ‘significance testing’.”4

The brief is silent about what might be knowing deference, or appropriate publication standards. To be sure, judges have often poorly expressed their reasoning for deciding scientific evidentiary issues, and perhaps poor communication or laziness by judges was responsible for Rothman’s interest in joining the Daubert fray. Putting aside the unclear, rhetorical, and somewhat hyperbolic use of “arcane” in the quote above, the suggestion of inappropriate blind deference is itself expressed in equivocal terms in the brief. At times the authors rail at the use of statistical significance as the “sole” criterion, and at times, they seem to criticize its use at all.

At least twice in their brief, Rothman and friends declare that the lower court:

misconstrues the validity and appropriateness of significance testing as a decision making tool, apparently deeming it the sole test of epidemiological hypotheses.”5

* * * * * *

this Court should reject significance testing as the sole acceptable criterion of scientific validity in epidemiology.”6

Characterizing “statistical significance” as not the sole test or criterion of scientific inference is hardly controversial, and it implies that statistical significance is one test, criterion, or factor among others. This position is consistent with the current ASA Statement on Significance Testing.7 There is, of course, much more to evaluate in a study or a body of studies, than simply whether they individually or collectively help us to exclude chance as an explanation for their findings.

Statistical Significance Is Not Necessary At All

Elsewhere, Rothman and friends take their challenge to statistical significance testing beyond merely suggesting that such testing is only one test or criterion among others. Indeed, their brief in other places states their opinion that significance testing is not necessary at all:

Testing for significance, however, is often mistaken for a sine qua non of scientific inference.”8

And at other times, Rothman and friends go further yet and claim not only that significance is not necessary, but that it is not even appropriate or useful:

Significance testing, however, is neither necessary nor appropriate as a requirement for drawing inferences from epidemiologic data.”9

Rothman compares statistical significance testing with “scientific inference,” which is not a mechanical, mathematical procedure, but rather a “thoughtful evaluation[] of possible explanations for what is being observed.”10 Significance testing, in contrast, is “merely a statistical tool,” used inappropriately “in the process of developing inferences.”11 Rothman suggests that the term “statistical significance” could be eliminated from scientific discussions without loss of meaning, as though this linguistic legerdemain showed that the phrase is unimportant in science and in law.12 Rothman’s suggestion, however, ignores that causal assessments have always required an evaluation of the play of chance, especially for putative causes, which are neither necessary nor sufficient, and which modify underlying stochastic processes by increasing or decreasing the probability of a specified outcome. Asserting that statistical significance is misleading because it never describes the size of an association, which the Rothman brief does, is like telling us that color terms tell us nothing about the mass of a body.

The Rothman brief does make the salutary point that labeling a study outcome as not “statistically significant” carries the danger of suggesting that the study’s data have no value, or that the study may be taken to reject the hypothesized association. In 1992, such an interpretation may have been more common, but today, in the face of the proliferation of meta-analyses, the risk of such interpretations of single study outcomes is remote.

Questionable History of Statistics

Rothman suggests that the development of statistical hypothesis testing occurred in the context of agricultural and quality-control experiments, which required yes-no answers for future action.13 This suggestion clearly points at Sir Ronald Fisher and Jerzy Neyman, and their foundational work on frequentist statistical theory and practice. In part, the amici correctly identified the experimental milieu in which Fisher worked, but the description of Fisher’s work is neither accurate nor fair. Fisher spent a lifetime thinking and writing about statistical tests, in much more nuanced ways than implied by the claim that such testing occurred in context of agricultural and quality-control experiments. Although Fisher worked on agricultural experiments, his writings acknowledged that when statistical tests and analyses were applied to observational studies, much more searching analyses of bias and confounding were required. Fisher’s and Berkson’s reactions to the observational studies of Hill and Doll on smoking and lung cancer are telling in this regard. These statisticians criticized the early smoking lung cancer studies, not for lack of statistical significance, but for failing to address confounding by a potential common genetic propensity to smoke and to develop lung cancer.

Questionable History of Drug Development

Twice in Rothman’s amicus brief, the authors suggest that “undue reliance” on statistical significance has resulted in overlooking “effective new treatments” because observed benefits were considered “not significant,” despite an “indication” of efficacy.14 The brief never provided any insight into what counts as due reliance, and what counts as undue reliance, upon statistical significance. Their criticism of “undue reliance” implies that there are modes or instances of “due reliance” upon statistical significance. The amicus brief also fails to inform readers exactly what “effective new treatments” have been overlooked because the outcomes were considered “not significant.” This omission is regrettable because it leaves the reader with only abstract recommendations, without concrete examples of what such effective treatments might be, and unfortunate because Rothman almost certainly could have marshalled examples. Recently, Rothman tweeted just such an example:15

“30% ↓ in cancer risk from Vit D/Ca supplements ignored by authors & editorial. Why? P = 0.06. http://bit.ly/2oanl6w http://bit.ly/2p0CRj7. The 95% confidence interval for the risk ratio was 0.42–1.02.”

Of course, this was a large, carefully reported randomized clinical trial, with a narrow confidence interval that just missed “statistical significance.” It is not an example that would have given succor to Bendectin plaintiffs, who were attempting to prove an association by identifying flaws in noisy observational studies that generally failed to show an association.
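The reported interval and the reported p-value are two views of the same calculation. As a rough check, assuming the usual normal approximation on the log risk-ratio scale, the trial’s published confidence limits of 0.42 and 1.02 can be worked backwards to the quoted p-value:

```python
# Back-of-the-envelope reconstruction: recover the point estimate and
# two-sided p-value from the published 95% CI of 0.42-1.02, assuming a
# standard normal approximation on the log risk-ratio scale.
import math
from scipy.stats import norm

lo, hi = 0.42, 1.02
log_rr = (math.log(lo) + math.log(hi)) / 2        # midpoint = point estimate
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from CI width
z = log_rr / se
p = 2 * norm.sf(abs(z))                           # two-sided p-value

# Prints roughly: RR ~ 0.65, z = -1.87, p = 0.061, matching the trial's p = 0.06
print(f"RR ~ {math.exp(log_rr):.2f}, z = {z:.2f}, p = {p:.3f}")
```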

Readers of the 1992 amicus brief can only guess at what might be “indications of efficacy”; no explanation or examples are provided.16 The reality of FDA approvals of new drugs is that a pre-specified 5% level of statistical significance is virtually always enforced.17 If a drug sponsor has an “indication of efficacy,” it is, of course, free to follow up with an additional, larger, better-designed clinical trial. Rothman’s recent tweet about the vitamin D clinical trial does provide some context and meaning to what the amici may have meant over 25 years ago by indication of efficacy. The tweet also illustrates Rothman’s acknowledgment of the need to address random variability in a data set, whether by p-value or confidence interval, or both. Clearly, Rothman was criticizing the authors of the vitamin D trial for stopping short of claiming that they had shown (or “demonstrated”) a cancer prevention benefit. There is, however, a rich literature on vitamin D and cancer outcomes, and such a claim could be made, perhaps, in the context of a meta-analysis or meta-regression of multiple clinical trials, with a synthesis of other experimental and observational data.18

Questionable History of Statistical Analyses in Epidemiology

Rothman’s amicus brief deserves credit for introducing a misinterpretation of Sir Austin Bradford Hill’s famous paper on inferring causal associations, which has become catechism in the briefs of plaintiffs in pharmaceutical and other products liability cases:

No formal tests of significance can answer those questions. Such tests can, and should, remind us of the effects that the play of chance can create, and they will instruct us in the likely magnitude of those effects. Beyond that they contribute nothing to the ‘proof’ of our hypothesis.”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 299 (1965) (quoted at Rothman Brief at *6).

As exegesis of Hill’s views, this quote is misleading. The language quoted above was used by Hill in the context of his nine causal viewpoints or criteria. The Rothman brief ignores Hill’s admonition to his readers, that before reaching the nine criteria, there is a serious, demanding predicate that must be shown:

Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”

Id. at 295 (emphasis added). Rothman and co-authors did not have to invoke the prestige and authority of Sir Austin, but once they did, they were obligated to quote him fully and with accurate context. Elsewhere, in his famous textbook, Hill expressed his view that common sense was insufficient to interpret data, and that the statistical method was necessary to interpret data in medical studies.19

Rothman complains that statistical significance focuses the reader on conjecture about the role of chance in the observed data rather than on the information conveyed by the data themselves.20 The “incompleteness” of statistical analysis for arriving at causal conclusions, however, is not an argument against its necessity.

The Rothman brief does make the helpful point that statistical significance cannot be sufficient to support a conclusion of causation because many statistically significant associations or correlations will be non-causal. They give a trivial example of wearing dresses and breast cancer, but the point is well-taken. Associations, even when statistically significant, are not necessarily causal. Who ever suggested otherwise, other than expert witnesses for the litigation industry?

Unnecessary Fears

The motivation for Rothman’s challenge to the assumption that statistical significance is necessary is revealed at the end of the argument on Point I. The authors plainly express their concern that false negatives will shut down important research:

To give weight to the failure of epidemiological studies to meet strict ‘statistical significant’ standards — to use such studies to close the door on further inquiry — is not good science.”21

The relevance of this concern to the proceedings is a mystery. The judicial decisions in the case are not referenda on funding initiatives. Scientists were as free in 1993, after Daubert was decided, as they were in 1992, when Rothman wrote, to pursue the hypothesis that Bendectin caused birth defects. The decision had the potential to shut down tort claims, but it left scientists to their tasks.

Reanalyses Are Appropriate Scientific Tools to Assess and Evaluate Data, and to Forge Causal Opinions

The Rothman brief took issue with the lower courts’ dismissal of plaintiffs’ expert witnesses’ re-analyses of data in published studies. The authors argued that reanalyses were part of the scientific method, and not “an arcane or specialized enterprise,” deserving of heightened or skeptical scrutiny.22

Remarkably, the Rothman brief, if accepted by the Supreme Court on the re-analysis point, would have led to the sort of unthinking blanket acceptance of a methodology, which the brief’s authors condemned in the context of blanket acceptance of significance testing. The brief covertly urges “blind deference” to its authors on the blanket approval of re-analyses.

Although amici have tight page limits, the brief’s authors made clear that they were offering no substantive opinions on the data involved in the published epidemiologic studies on Bendectin, or on the plaintiffs’ expert witnesses’ re-analyses. With the benefit of hindsight, we can see that the sweeping language used by the Ninth Circuit on re-analyses might have been taken to foreclose important and valid meta-analyses or similar approaches. The Rothman brief is not terribly explicit on what re-analysis techniques were part of the scientific method, but meta-analyses surely had been on the authors’ minds:

by focusing on inappropriate criteria applied to determine what conclusions, if any, can be reached from any one study, the trial court forecloses testimony about inferences that can be drawn from the combination of results reported by many such studies, even when those studies, standing alone, might not justify such inferences.”23

The plaintiffs’ statistical expert witness in Daubert had proffered a re-analysis of at least one study by substituting a different control sample, as well as a questionable meta-analysis. By failing to engage on the propriety of the specific analyses at issue in Daubert, the Rothman brief failed to offer meaningful guidance to the appellate court.

Reanalyses Are Not Invalid Just Because They Have Not Been Published

Rothman was certainly correct that the value of peer review was overstated by the defense in Bendectin litigation.24 The quality of pre-publication peer review is spotty, at best. Predatory journals deploy a pay-to-play scheme, which makes a mockery of scientific publishing. Even at respectable journals, peer review cannot effectively guard against fraud, or ensure that statistical analyses have been appropriately done.25 At best, peer review is a weak proxy for study validity, and an unreliable one at that.

The Rothman brief may have moderated the Supreme Court’s reaction to the defense’s argument that peer review is a requirement for studies, or “re-analyses,” relied upon by expert witnesses. The Court in Daubert opined, in dicta, that peer review is a non-dispositive consideration:

The fact of publication (or lack thereof) in a peer reviewed journal … will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”26

To the extent that Rothman and colleagues might have been disappointed in this outcome, they missed some important context of the Bendectin cases. Most of the cases had been resolved by a consolidated causation issues trial, but many opt-out cases had to be tried in state and federal courts around the country.27 The expert witnesses challenged in Daubert (Drs. Swan and Done) participated in many of these opt-out cases, and in each case, they opined that Bendectin was a public health hazard. The failure of these witnesses to publish their analyses and re-analyses spoke volumes about their bona fides. Courts (and juries, if the proffered testimony of Swan and Done were admissible) could certainly draw negative inferences from the plaintiffs’ expert witnesses’ failure to publish their opinions and re-analyses.

The Fate of the “Rothman Approach” in the Courts

The so-called “Rothman approach” was urged by Bendectin plaintiffs in opposing summary judgment in a case pending in federal court, in New Jersey, before the Supreme Court decided Daubert. Plaintiffs resisted exclusion of their expert witnesses, who had relied upon inconsistent and statistically non-significant studies on the supposed teratogenicity of Bendectin. The trial court excluded the plaintiffs’ witnesses, and granted summary judgment.28

On appeal, the Third Circuit reversed and remanded the DeLucas’s case for a hearing under Rule 702:

by directing such an overall evaluation, however, we do not mean to reject at this point Merrell Dow’s contention that a showing of a .05 level of statistical significance should be a threshold requirement for any statistical analysis concluding that Bendectin is a teratogen regardless of the presence of other indicia of reliability. That contention will need to be addressed on remand. The root issue it poses is what risk of what type of error the judicial system is willing to tolerate. This is not an easy issue to resolve and one possible resolution is a conclusion that the system should not tolerate any expert opinion rooted in statistical analysis where the results of the underlying studies are not significant at a .05 level.”29

After remand, the district court excluded the DeLuca plaintiffs’ expert witnesses, and granted summary judgment, based upon the dubious methods employed by plaintiffs’ expert witnesses in cherry picking data, recalculating risk ratios in published studies, and ignoring bias and confounding in studies. The Third Circuit affirmed the judgment for Merrell Dow.30

In the end, the decisions in the DeLuca case never endorsed the Rothman approach, although Professor Rothman can take credit perhaps for forcing the trial court, on remand, to come to grips with the informational content of the study data, and the many threats to validity, which severely undermined the relied-upon studies and the plaintiffs’ expert witnesses’ opinions.

More recently, in litigation over alleged causation of birth defects in offspring of mothers who used Zoloft during pregnancy, plaintiffs’ counsel attempted to resurrect, through their expert witnesses, the Rothman approach. The multidistrict court saw through counsel’s assertions that the Rothman approach had been adopted in DeLuca, or that it had become generally accepted.31 After protracted litigation in the Zoloft cases, the district court excluded plaintiffs’ expert witnesses and entered summary judgment for the defense. The Third Circuit found that the district court’s handling of the statistical significance issues was fully consistent with the Circuit’s previous pronouncements on the issue of statistical significance.32


1 The amicus brief of the American Association for the Advancement of Science and the National Academy of Sciences, filed in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102 (Jan. 19, 1993), was submitted by Richard A. Meserve and Lars Noah, of Covington & Burling, and by Bert Black; it is reprinted at 12 Biotechnology Law Report 198 (No. 2, March-April 1993); see “Daubert’s Silver Anniversary – Retrospective View of Its Friends and Enemies” (Oct. 21, 2018).

2 Brief Amici Curiae of Professors Kenneth Rothman, Noel Weiss, James Robins, Raymond Neutra and Steven Stellman, in Support of Petitioners, 1992 WL 12006438, Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. S. Ct. No. 92-102 (Dec. 2, 1992). [Rothman Brief].

3 Id. at *7.

4 Rothman Brief at *2.

5 Id. at *2-*3 (emphasis added).

6 Id. at *7 (emphasis added).

7 See Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

8 Id. at *3.

9 Id. at *2.

10 Id. at *3 – *4.

11 Id. at *3.

12 Id. at *3.

13 Id. at *4 -*5.

14 Id. at*5, *6.

15 See <https://twitter.com/ken_rothman/status/855784253984051201> (April 21, 2017). The tweet pointed to: Joan Lappe, Patrice Watson, Dianne Travers-Gustafson, Robert Recker, Cedric Garland, Edward Gorham, Keith Baggerly, and Sharon L. McDonnell, “Effect of Vitamin D and Calcium Supplementation on Cancer Incidence in Older Women: A Randomized Clinical Trial,” 317 J. Am. Med. Ass’n 1234 (2017).

16 In the case of United States v. Harkonen, Professors Ken Rothman and Tim Lash, and I made common cause in support of Dr. Harkonen’s petition to the United States Supreme Court. The circumstances of Dr. Harkonen’s indictment and conviction provide a concrete example of what Dr. Rothman probably was referring to as “indication of efficacy.” I supported Dr. Harkonen’s appeal because I agreed that there had been a suggestion of efficacy, even if Harkonen had overstated what his clinical trial, standing alone, had shown. (There had been a previous clinical trial, which demonstrated a robust survival benefit.) From my perspective, the facts of the case supported Dr. Harkonen’s exercise of speech in a press release, but it would hardly have justified FDA approval for the indication that Dr. Harkonen was discussing. If Harkonen had indeed committed “wire fraud,” as claimed by the federal prosecutors, then I had (and still have) a rather long list of expert witnesses who stand in need of criminal penalties and rehabilitation for their overreaching opinions in court cases.

17 Robert Temple, “How FDA Currently Makes Decisions on Clinical Studies,” 2 Clinical Trials 276, 281 (2005); Lee Kennedy-Shaffer, “When the Alpha is the Omega: P-Values, ‘Substantial Evidence’, and the 0.05 Standard at FDA,” 72 Food & Drug L.J. 595 (2017); see also “The 5% Solution at the FDA” (Feb. 24, 2018).

18 See, e.g., Stefan Pilz, Katharina Kienreich, Andreas Tomaschitz, Eberhard Ritz, Elisabeth Lerchbaum, Barbara Obermayer-Pietsch, Veronika Matzi, Joerg Lindenmann, Winfried Marz, Sara Gandini, and Jacqueline M. Dekker, “Vitamin D and cancer mortality: systematic review of prospective epidemiological studies,” 13 Anti-Cancer Agents in Medicinal Chem. 107 (2013).

19 Austin Bradford Hill, Principles of Medical Statistics at 2, 10 (4th ed. 1948) (“The statistical method is required in the interpretation of figures which are at the mercy of numerous influences, and its object is to determine whether individual influences can be isolated and their effects measured.”) (emphasis added).

20 Id. at *6 -*7.

21 Id. at *9.

22 Id.

23 Id. at *10.

24 Rothman Brief at *12.

25 See William Childs, “Peering Behind The Peer Review Curtain,” Law360 (Aug. 17, 2018).

26 Daubert v. Merrell Dow Pharms., 509 U.S. 579, 594 (1993).

27 See “Diclegis and Vacuous Philosophy of Science” (June 24, 2015).

28 DeLuca v. Merrell Dow Pharms., Inc., 131 F.R.D. 71 (D.N.J. 1990).

29 DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 955 (3d Cir. 1990).

30 DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042 (D.N.J. 1992), aff’d, 6 F.3d 778 (3d Cir. 1993).

31 In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2015 WL 314149 (E.D. Pa. Jan. 23, 2015) (Rufe, J.) (denying PSC’s motion for reconsideration), aff’d, 858 F.3d 787 (3d Cir. 2017) (affirming exclusion of plaintiffs’ expert witnesses’ dubious opinions, which involved multiple methodological flaws and failures to follow any methodology faithfully). See generally “Zoloft MDL Relieves Matrixx Depression” (Jan. 30, 2015); “WOE — Zoloft Escapes a MDL While Third Circuit Creates a Conceptual Muddle” (July 31, 2015).

32 See Pritchard v. Dow Agro Sciences, 430 F. App’x 102, 104 (3d Cir. 2011) (excluding Concussion hero, Dr. Bennet Omalu).

The American Statistical Association Statement on Significance Testing Goes to Court – Part I

November 13th, 2018

It has been two and one-half years since the American Statistical Association (ASA) issued its statement on statistical significance. Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016) [ASA Statement]. When the ASA Statement was published, I commended it as a needed counterweight to the exaggerated criticisms of significance testing.1 Lawyers and expert witnesses for the litigation industry had routinely poo-poohed the absence of statistical significance, but over-endorsed its presence in poorly designed and biased studies. Courts and lawyers from all sides routinely misunderstood, misstated, and misrepresented the meaning of statistical significance.2

The ASA Statement had potential to help resolve judicial confusion. It is written in non-technical language, which is easily understood by non-statisticians. Still, the Statement has to be read with care. The principle of charity led me to believe that lawyers and judges would read the Statement carefully, and that it would improve judicial gatekeeping of expert witnesses’ opinion testimony that involved statistical evidence. I am less sanguine now about the prospect of progress.

No sooner had the ASA issued its Statement than the spinning started. One scientist, an editor at PLoS Biology, blogged that “the ASA notes, the importance of the p-value has been greatly overstated and the scientific community has become over-reliant on this one – flawed – measure.”3 Lawyers for the litigation industry were even less restrained in promoting wild misrepresentations about the Statement, with claims that the ASA had condemned the use of p-values, significance testing, and significance probabilities, as “flawed.”4 And yet, nowhere in the ASA’s statement does the group suggest that the p-value is a “flawed” measure.

Criminal Use of the ASA Statement

Where are we now, two plus years out from the ASA Statement? Not surprisingly, the Statement has made its way into the legal arena. The Statement has been used in any number of depositions, relied upon in briefs, and cited in at least a couple of judicial decisions, in the last two years. The empirical evidence of how the ASA Statement has been used, or might be used in the future, is still sparse. Just last month, the ASA Statement was cited by the Washington State Supreme Court, in a ruling that held the death penalty unconstitutional. State of Washington v. Gregory, No. 88086-7, (Wash. S.Ct., Oct. 11, 2018) (en banc). Mr. Gregory was facing the death penalty after being duly convicted of rape, robbery, and murder. The prosecution’s case was supported by DNA matches, fingerprint identification, and other evidence. Mr. Gregory challenged the constitutionality of his punishment, not on per se grounds of unconstitutionality, but on grounds of race disparities in the imposition of the death penalty. On this claim, the Washington Supreme Court commented on the empirical evidence marshalled on Mr. Gregory’s behalf:

The most important consideration is whether the evidence shows that race has a meaningful impact on imposition of the death penalty. We make this determination by way of legal analysis, not pure science. At the very most, there is an 11 percent chance that the observed association between race and the death penalty in Beckett’s regression analysis is attributed to random chance rather than true association. Commissioner’s Report at 56-68 (the p-values range from 0.048-0.111, which measures the probability that the observed association is the result of random chance rather than a true association).[8] Just as we declined to require ‘precise uniformity’ under our proportionality review, we decline to require indisputably true social science to prove that our death penalty is impermissibly imposed based on race.

Id. (internal citations omitted).

Whatever you think of the death penalty, or how it is imposed in the United States, you will have to agree that the Court’s discussion of statistics is itself criminal. In the above quotation from the Court’s opinion, the Court badly misinterpreted the p-values generated in various regression analyses that were offered to support claims of race disparity. The Court’s equating statistically significant evidence of race disparity in these regression analyses with “indisputably true social science” also reflects a rhetorical strategy that imputes ridiculously high certainty (indisputably true) to social science conclusions, in order to dismiss the need for such certainty before accepting a causal race disparity claim on empirical evidence.5
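The error is the familiar transposition fallacy: a p-value is the probability of data at least as extreme as those observed, computed on the assumption that there is no true association; it is not the probability that the observed association is due to chance. A toy simulation sketch, with an assumed null regression (no true effect at all), makes the distinction plain:

```python
# Toy sketch of what a p-value is (and is not). Under a simulated null,
# with no true association at all, about 11% of regressions still yield
# p <= 0.11, and in every one of them chance is the ONLY explanation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 20_000, 100
pvals = np.empty(n_sims)

for i in range(n_sims):
    x = rng.integers(0, 2, n)       # a binary covariate (hypothetical)
    y = rng.normal(size=n)          # outcome with NO true effect of x
    pvals[i] = stats.linregress(x, y).pvalue

# P(p <= 0.11 | null) is about 0.11, by construction; it says nothing
# about P(null | p = 0.11), which is what the Court announced.
print(f"fraction of null regressions with p <= 0.11: {np.mean(pvals <= 0.11):.3f}")
```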

Gregory’s counsel had briefed the Washington Court on statistical significance, and raised the ASA Statement as excuse and justification for not presenting statistically significant empirical evidence of race disparity.6 Footnote 8, in the above quote from the Gregory decision shows that the Court was aware of the ASA Statement, which makes the Court’s errors even more unpardonable: 

[8] The most common p-value used for statistical significance is 0.05, but this is not a bright line rule. The American Statistical Association (ASA) explains that the ‘mechanical “bright-line” rules (such as “p < 0.05”) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making’.”7

Conveniently, Gregory’s counsel did not cite to other parts of the ASA Statement, which would have called for a more searching review of the statistical regression analyses:

“Good statistical practice, as an essential component of good scientific practice, emphasizes principles of good study design and conduct, a variety of numerical and graphical summaries of data, understanding the phenomenon under study, interpretation of results in context, complete reporting and proper logical and quantitative understanding of what data summaries mean. No single index should substitute for scientific reasoning.”8

The Supreme Court of Washington first erred in its assessment of what scientific evidence requires in terms of a burden of proof. It then accepted spurious arguments to excuse the absence of statistical significance in the statistical evidence before it, on the basis of a distorted representation of the ASA Statement. Finally, the Court erred in claiming support from social science evidence, by ignoring other methodological issues in Gregory’s empirical claims. Ironically, the Court had made significance testing the end-all and be-all of its analysis, and when it dispatched statistical significance as a consideration, the Court jumped to the conclusion it wanted to reach. Clearly, the intended message of the ASA Statement had been subverted by counsel and the Court.

2 See, e.g., In re Ephedra Prods. Liab. Litig., 393 F.Supp. 2d 181, 191 (S.D.N.Y. 2005). See also “Confidence in Intervals and Diffidence in the Courts” (March 4, 2012); “Scientific illiteracy among the judiciary” (Feb. 29, 2012).

5 Moultrie v. Martin, 690 F.2d 1078, 1082 (4th Cir. 1982) (internal citations omitted) (“When a litigant seeks to prove his point exclusively through the use of statistics, he is borrowing the principles of another discipline, mathematics, and applying these principles to the law. In borrowing from another discipline, a litigant cannot be selective in which principles are applied. He must employ a standard mathematical analysis. Any other requirement defies logic to the point of being unjust. Statisticians do not simply look at two statistics, such as the actual and expected percentage of blacks on a grand jury, and make a subjective conclusion that the statistics are significantly different. Rather, statisticians compare figures through an objective process known as hypothesis testing.”).

6 Supplemental Brief of Allen Eugene Gregory, at 15, filed in State of Washington v. Gregory, No. 88086-7, (Wash. S.Ct., Jan. 22, 2018).

7 State of Washington v. Gregory, No. 88086-7, (Wash. S.Ct., Oct. 11, 2018) (en banc) (internal citations omitted).

8 ASA Statement at 132.

The Hazard of Composite End Points – More Lumpenepidemiology in the Courts

October 20th, 2018

One of the challenges of epidemiologic research is selecting the right outcome of interest to study. What seems like a simple and obvious choice can often be the most complicated aspect of the design of clinical trials or studies.1 Lurking in this choice of end point is a particular threat to validity in the use of composite end points, when the real outcome of interest is one constituent among multiple end points aggregated into the composite. There may, for instance, be strong evidence in favor of one of the constituents of the composite, but using the composite end point results to support a causal claim for a different constituent begs the question that needs to be answered, whether in science or in law.

The dangers of extrapolating from one disease outcome to another are well recognized in the medical literature. Remarkably, however, the problem received no meaningful discussion in the Reference Manual on Scientific Evidence (3d ed. 2011). The handbook designed to help judges decide threshold issues of admissibility of expert witness opinion testimony discusses the extrapolation from sample to population, from in vitro to in vivo, from one species to another, from high to low dose, and from long to short duration of exposure. The Manual, however, has no discussion of “lumping,” or of the appropriate (and inappropriate) use of composite or combined end points.

Composite End Points

Composite end points are typically defined, perhaps circularly, as a single group of health outcomes, which group is made up of constituent or single end points. Curtis Meinert defined a composite outcome as “an event that is considered to have occurred if any of several different events or outcomes is observed.”2 Similarly, Montori defined composite end points as “outcomes that capture the number of patients experiencing one or more of several adverse events.”3 Composite end points are also sometimes referred to as combined or aggregate end points.

Many composite end points are clearly defined for a clinical trial, and the component end points are specified. In some instances, the composite nature of an outcome may be subtle or be glossed over by the study’s authors. In the realm of cardiovascular studies, for example, investigators may look at stroke as a single endpoint, without acknowledging that there are important clinical and pathophysiological differences between ischemic strokes and hemorrhagic strokes (intracerebral or subarachnoid). The Fletchers’ textbook4 on clinical epidemiology gives the example:

In a study of cardiovascular disease, for example, the primary outcomes might be the occurrence of either fatal coronary heart disease or non-fatal myocardial infarction. Composite outcomes are often used when the individual elements share a common cause and treatment. Because they comprise more outcome events than the component outcomes alone, they are more likely to show a statistical effect.”

Utility of Composite End Points

The quest for statistical “power” is often cited as a basis for using composite end points. Reduction in the number of “events,” such as myocardial infarction (MI), through improvements in medical care has led to decreased rates of MI in studies and clinical trials. These low event rates have caused power issues for clinical trialists, who have responded by turning to composite end points to capture more events. Composite end points permit smaller sample sizes and shorter follow-up times, without sacrificing power; that is, the ability to detect, at a prespecified Type I error rate, a statistically significant increase of a prespecified size. Increasing study power, while reducing sample size or observation time, is perhaps the most frequently cited rationale for using composite end points.
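The power arithmetic is straightforward. A sketch, with hypothetical event rates (2% in the control arm for a single end point such as MI, 6% for a composite) and an assumed relative risk of 1.5 in both cases, shows how the composite’s higher event rate buys power at a fixed sample size:

```python
# Sketch of the power rationale, using the usual normal approximation for
# comparing two proportions. All rates and the sample size are hypothetical:
# control risk 2% (single end point) vs 6% (composite), relative risk 1.5.
from scipy.stats import norm

def two_prop_power(p0, p1, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of proportions."""
    pbar = (p0 + p1) / 2
    se0 = (2 * pbar * (1 - pbar) / n) ** 0.5          # SE under the null
    se1 = (p0*(1 - p0)/n + p1*(1 - p1)/n) ** 0.5      # SE under the alternative
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p1 - p0) - z_crit * se0) / se1)

n = 2000  # patients per arm
print(f"single end point (2% vs 3%): power = {two_prop_power(0.02, 0.03, n):.2f}")
print(f"composite (6% vs 9%):        power = {two_prop_power(0.06, 0.09, n):.2f}")
# Same relative risk, same n: roughly 0.53 power versus 0.95 power.
```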

Competing Risks

Another reason sometimes offered in support of using composite end points is that composites provide a strategy to avoid the problem of competing risks.5 Death (any cause) is sometimes added to a distinct clinical morbidity because patients who are taken out of the trial by death are “unavailable” to experience the morbidity outcome.

Multiple Testing

By aggregating several individual end points into a single pre-specified outcome, trialists can avoid corrections for multiple testing. Trials that seek data on multiple outcomes, or on multiple subgroups, inevitably raise concerns about the appropriate choice of alpha, the level of the statistical test for deciding whether to reject the null hypothesis. According to some authors, “[c]omposite endpoints alleviate multiplicity concerns”:

If designated a priori as the primary outcome, the composite obviates the multiple comparisons associated with testing of the separate components. Moreover, composite outcomes usually lead to high event rates thereby increasing power or reducing sample size requirements. Not surprisingly, investigators frequently use composite endpoints.”6

Other authors have similarly acknowledged that the need to avoid false positive results from multiple testing is an important rationale for composite end points:

Because the likelihood of observing a statistically significant result by chance alone increases with the number of tests, it is important to restrict the number of tests undertaken and limit the type 1 error to preserve the overall error rate for the trial.”7
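The arithmetic behind the multiplicity concern is simple. A short sketch, assuming k independent end points each tested at the conventional 5% level, shows how quickly the chance of at least one false-positive “significant” result grows, and what a Bonferroni correction would demand instead:

```python
# Sketch of the multiplicity problem that a single pre-specified composite
# avoids: k independent tests at alpha = 0.05 inflate the family-wise
# error rate (FWER) well beyond 5%.
alpha = 0.05
for k in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** k     # P(at least one false positive)
    bonferroni = alpha / k          # per-test level that preserves 5% FWER
    print(f"k = {k:2d}: FWER = {fwer:.3f}; Bonferroni per-test alpha = {bonferroni:.4f}")
```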

Indecision about an Appropriate Single Outcome

The International Conference on Harmonization suggests that the inability to select a single outcome variable may lead to the adoption of a composite outcome:

If a single primary variable cannot be selected …, another useful strategy is to integrate or combine the multiple measurements into a single or composite variable.”8

The “indecision” rationale has also been criticized as “generally not a good reason to use a composite end point.”9

Validity of Composite End Points

The validity of composite end points depends upon methodological assumptions, which will have to be made at the time of the study design and protocol creation. After the data are collected and analyzed, the assumptions may or may not be supported. Among the supporting assumptions about the validity of using composites are:10

  • similarity in patient importance for included component end points,

  • similarity of association size of the components, and

  • similarity in the number of events across the components.

The use of composite end points can sometimes be appropriate in the “first look” at a class of diseases or disorders, with the understanding that further research will sort out and refine the associated end point. Research into the causes of human birth defects, for instance, often starts out with a look at “all major malformations,” before focusing in on specific organ and tissue systems. To some extent, the legal system, in its gatekeeping function, has recognized the dangers and invalidity of lumping in the epidemiology of birth defects.11 The Frischhertz decision, for instance, clearly acknowledged that given the clear evidence that different birth defects arise at different times, based upon interference with different embryological processes, “lumping” of end points was methodologically inappropriate. 2012 U.S. Dist. LEXIS 181507, at *8 (citing Chamber v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000), aff’d, 247 F.3d 240 (5th Cir. 2001) (unpublished)).

The Chamber decision involved a challenge to the causation opinion of frequent litigation industry witness, Peter Infante,12 who attempted to defend his opinion about benzene and chronic myelogenous leukemia, based upon epidemiology of benzene and acute myelogenous leukemia. Plaintiffs’ witnesses and counsel sought to evade the burden of producing evidence of an AML association by pointing to a study that reported “excess leukemias,” without specifying the relevant type. Chamber, 81 F. Supp. 2d at 664. The trial court, however, perspicaciously recognized the claimants’ failure to identify relevant evidence of the specific association needed to support the causal claim.

The Frischhertz and Chamber cases are hardly unique. Several state and federal courts have concurred in the context of cancer causation claims.13 In the context of birth defects litigation, the Public Affairs Committee of the Teratology Society has weighed in with strong guidance that counsels against extrapolation between different birth defects in litigation:

Determination of a causal relationship between a chemical and an outcome is specific to the outcome at issue. If an expert witness believes that a chemical causes malformation A, this belief is not evidence that the chemical causes malformation B, unless malformation B can be shown to result from malformation A. In the same sense, causation of one kind of reproductive adverse effect, such as infertility or miscarriage, is not proof of causation of a different kind of adverse effect, such as malformation.”14

The threat to validity in attributing a suggested risk for a composite end point to all included component end points is not, unfortunately, recognized by all courts. The trial court, in Ruff v. Ensign-Bickford Industries, Inc.,15 permitted plaintiffs’ expert witness to reanalyze a study by grouping together two previously distinct cancer outcomes to generate a statistically significant result. The result in Ruff is disappointing, but not uncommon. It is also surprising, considering the guidance provided by the American Law Institute’s Restatement:

Even when satisfactory evidence of general causation exists, such evidence generally supports proof of causation only for a specific disease. The vast majority of toxic agents cause a single disease or a series of biologically-related diseases. (Of course, many different toxic agents may be combined in a single product, such as cigarettes.) When biological-mechanism evidence is available, it may permit an inference that a toxic agent caused a related disease. Otherwise, proof that an agent causes one disease is generally not probative of its capacity to cause other unrelated diseases. Thus, while there is substantial scientific evidence that asbestos causes lung cancer and mesothelioma, whether asbestos causes other cancers would require independent proof. Courts refusing to permit use of scientific studies that support general causation for diseases other than the one from which the plaintiff suffers unless there is evidence showing a common biological mechanism include Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1115-1116 (5th Cir. 1991) (applying Texas law) (epidemiologic connection between heavy-metal agents and lung cancer cannot be used as evidence that same agents caused colon cancer); Cavallo v. Star Enters., 892 F. Supp. 756 (E.D. Va. 1995), aff’d in part and rev’d in part, 100 F.3d 1150 (4th Cir. 1996); Boyles v. Am. Cyanamid Co., 796 F. Supp. 704 (E.D.N.Y. 1992). In Austin v. Kerr-McGee Ref. Corp., 25 S.W.3d 280, 290 (Tex. Ct. App. 2000), the plaintiff sought to rely on studies showing that benzene caused one type of leukemia to prove that benzene caused a different type of leukemia in her decedent. Quite sensibly, the court insisted that before plaintiff could do so, she would have to submit evidence that both types of leukemia had a common biological mechanism of development.”

Restatement (Third) of Torts § 28 cmt. c, at 406 (2010). Notwithstanding some of the Restatement’s excesses on other issues, the guidance on composites seems sane and consonant with the scientific literature.

Role of Mechanism in Justifying Composite End Points

A composite end point may make sense when the individual end points are biologically related, and the investigators can reasonably expect that the individual end points would be affected in the same direction, and approximately to the same extent:16

Confidence in a composite end point rests partly on a belief that similar reductions in relative risk apply to all the components. Investigators should therefore construct composite endpoints in which the biology would lead us to expect similar effects across components.”

The important point, missed by some investigators and many courts, is that the assumption of similar “effects” must be tested by examining the individual component end points, and especially the end point that is the harm claimed by plaintiffs in a given case.
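The required check is easy to state in computational terms. Below is a sketch, with hypothetical trial counts, in which the composite end point reaches statistical significance while the component at issue (say, myocardial infarction) shows essentially nothing; examining only the composite would invite exactly the bait-and-switch inference criticized here:

```python
# Hypothetical counts illustrating the check: a "significant" composite
# driven entirely by one component. Events among 2,000 patients per arm.
import math

def rr_ci(a, b, n1=2000, n0=2000):
    """Risk ratio and 95% CI (log-normal approximation)."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

components = {
    "myocardial infarction": (22, 20),   # ~null for the end point at issue
    "revascularization":     (70, 40),   # elevated; drives the composite
    "cardiac death":         (10, 9),    # ~null
}
composite = (sum(a for a, _ in components.values()),
             sum(b for _, b in components.values()))

for name, (a, b) in {**components, "COMPOSITE": composite}.items():
    rr, lo, hi = rr_ci(a, b)
    flag = "significant" if (lo > 1.0 or hi < 1.0) else "not significant"
    print(f"{name:22s} RR {rr:4.2f} (95% CI {lo:4.2f}-{hi:4.2f})  {flag}")
```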

Methodological Issues

The acceptability of composite end points is often a delicate balance between the statistical power and efficiency gained and the reliability concerns raised by using the composite. As with any statistical or interpretative tool, the key questions turn on how the tool is used, and for what purpose. The reliability issues raised by the use of composites are likely to be highly contextual.

For instance, there is an important asymmetry between justifying the use of a composite for measuring efficacy and the use of the same composite for safety outcomes. A biological improvement in type 2 diabetes might be expected to lead to a reduction in all the macrovascular complications of that disease, but a medication for type 2 diabetes might have a very specific toxicity or drug interaction, which affects only one constituent end point among all macrovascular complications, such as myocardial infarction. The asymmetry between efficacy and safety outcomes is specifically addressed by cardiovascular epidemiologists in an important methodological paper:17

Varying definitions of composite end points, such as MACE, can lead to substantially different results and conclusions. Therefore, the term MACE, in particular, should not be used, and when composite study end points are desired, researchers should focus separately on safety and effectiveness outcomes, and construct separate composite end points to match these different clinical goals.”

There are many clear, published statements that caution consumers of medical studies against being misled by claims based upon composite end points. Several years ago, for example, the British Medical Journal published a paper with six methodological suggestions for consumers of studies, one of which deals explicitly with composite end points:18

“Guide to avoid being misled by biased presentation and interpretation of data

1. Read only the Methods and Results sections; bypass the Discussion section

2. Read the abstract reported in evidence based secondary publications

3. Beware faulty comparators

4. Beware composite endpoints

5. Beware small treatment effects

6. Beware subgroup analyses”

The paper elaborates on the problems that arise from the use of composite end points:19

Problems in the interpretation of these trials arise when composite end points include component outcomes to which patients attribute very different importance… .”

Problems may also arise when the most important end point occurs infrequently or when the apparent effect on component end points differs.”

When the more important outcomes occur infrequently, clinicians should focus on individual outcomes rather than on composite end points. Under these circumstances, inferences about the end points (which because they occur infrequently will have very wide confidence intervals) will be weak.”

Authors generally acknowledge that “[w]hen large variations exist between components the composite end point should be abandoned.”20

Methodological Issues Concerning Causal Inferences from Composite End Points to Individual End Points

Several authors have criticized pharmaceutical companies for using composite end points to “game” their trials. Composites allow smaller sample size, but they lend themselves to broader claims for outcomes included within the composite. The same criticism applies to attempts to infer that there is risk of an individual endpoint based upon a showing of harm in the composite endpoint.

“If a trial report specifies a composite endpoint, the components of the composite should be in the well-known pathophysiology of the disease. The researchers should interpret the composite endpoint in aggregate rather than as showing efficacy of the individual components. However, the components should be specified as secondary outcomes and reported beside the results of the primary analysis.”21

Virtually the entire field of epidemiology and clinical trials has urged caution in inferring risk for a component end point from suggested risk in a composite end point:

“In summary, evaluating trials that use composite outcome requires scrutiny in regard to the underlying reasons for combining endpoints and its implications and has impact on medical decision-making (see below in Sect. 47.8). Composite endpoints are credible only when the components are of similar importance and the relative effects of the intervention are similar across components (Guyatt et al. 2008a).”22

Not only do important methodologists urge caution in the interpretation of composite end points,23 they emphasize a basic point of scientific (and legal) relevancy:

“[A] positive result for a composite outcome applies only to the cluster of events included in the composite and not to the individual components.”24

Even regular testifying expert witnesses for the litigation industry insist upon the “principle of full disclosure”:

“The analysis of the effect of therapy on the combined end point should be accompanied by a tabulation of the effect of the therapy for each of the component end points.”25

Gatekeepers in our judicial system need to be more vigilant against bait-and-switch inferences based upon composite end points. The quest for statistical power hardly justifies larding up an end point with irrelevant data points.


1 See, e.g., Milton Packer, “Unbelievable! Electrophysiologists Embrace ‘Alternative Facts’,” MedPage (May 16, 2018) (describing clinical trialists’ abandoning pre-specified intention-to-treat analysis).

2 Curtis Meinert, Clinical Trials Dictionary (Johns Hopkins Center for Clinical Trials 1996).

3 Victor M. Montori, et al., “Validity of composite end points in clinical trials,” 300 Brit. Med. J. 594, 596 (2005).

4 R. Fletcher & S. Fletcher, Clinical Epidemiology: The Essentials at 109 (4th ed. 2005).

5 Neaton, et al., “Key issues in end point selection for heart failure trials: composite end points,” 11 J. Cardiac Failure 567, 569a (2005).

6 Schulz & Grimes, “Multiplicity in randomized trials I: endpoints and treatments,” 365 Lancet 1591, 1593a (2005).

7 Freemantle & Calvert, “Composite and surrogate outcomes in randomized controlled trials,” 334 Brit. Med. J. 756, 756a–b (2007).

8 International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, “ICH harmonized tripartite guideline: statistical principles for clinical trials,” 18 Stat. Med. 1905 (1999).

9 Neaton, et al., “Key issues in end point selection for heart failure trials: composite end points,” 11 J. Cardiac Failure 567, 569b (2005).

10 Montori, et al., “Validity of composite end points in clinical trials,” 300 Brit. Med. J. 594, 596, Summary Point No. 2 (2005).

11 See “Lumpenepidemiology” (Dec. 24, 2012), discussing Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507 (E.D. La. 2012). Frischhertz was decided in the same month that a New York City trial judge ruled Dr. Shira Kramer out of bounds in the commission of similarly invalid lumping, in Reeps v. BMW of North America, LLC, 2012 NY Slip Op 33030(U), N.Y.S.Ct., Index No. 100725/08 (New York Cty. Dec. 21, 2012) (York, J.), 2012 WL 6729899, aff’d on rearg., 2013 WL 2362566, aff’d, 115 A.D.3d 432, 981 N.Y.S.2d 514 (2013), aff’d sub nom. Sean R. v. BMW of North America, LLC, ___ N.E.3d ___, 2016 WL 527107 (2016). See also “New York Breathes Life Into Frye Standard – Reeps v. BMW” (Mar. 5, 2013).

12 “Infante-lizing the IARC” (May 13, 2018).

13 Knight v. Kirby Inland Marine, 363 F.Supp. 2d 859, 864 (N.D. Miss. 2005), aff’d, 482 F.3d 347 (5th Cir. 2007) (excluding opinion of B.S. Levy on Hodgkin’s disease based upon studies of other lymphomas and myelomas); Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 198 (5th Cir. 1996) (noting that evidence suggesting a causal connection between ethylene oxide and human lymphatic cancers is not probative of a connection with brain cancer); Current v. Atochem North America, Inc., 2001 WL 36101283, at *3 (W.D. Tex. Nov. 30, 2001) (excluding expert witness opinion of Michael Gochfeld, who asserted that arsenic causes rectal cancer on the basis of studies that show association with lung and bladder cancer; Hill’s consistency factor in causal inference does not apply to cancers generally); Exxon Corp. v. Makofski, 116 S.W.3d 176, 184-85 (Tex. App. Houston 2003) (“While lumping distinct diseases together as ‘leukemia’ may yield a statistical increase as to the whole category, it does so only by ignoring proof that some types of disease have a much greater association with benzene than others.”).

14 The Public Affairs Committee of the Teratology Society, “Teratology Society Public Affairs Committee Position Paper: Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421, 423 (2005).

15 168 F. Supp. 2d 1271, 1284–87 (D. Utah 2001).

16 Montori, et al., “Validity of composite end points in clinical trials,” 300 Brit. Med. J. 594, 595b (2005).

17 Kevin Kip, et al., “The problem with composite end points in cardiovascular studies,” 51 J. Am. Coll. Cardiol. 701, 701 (2008) (Abstract – Conclusions) (emphasis in original).

18 Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004) (emphasis added).

19 Id. at 1094b, 1095a.

20 Montori, et al., “Validity of composite end points in clinical trials,” 300 Brit. Med. J. 594, 596 (2005).

21 Schulz & Grimes, “Multiplicity in randomized trials I: endpoints and treatments,” 365 Lancet 1591, 1595a (2005) (emphasis added). These authors acknowledge that composite end points often lack clinical relevancy, and that the gain in statistical efficiency comes at the high cost of interpretational difficulties. Id. at 1593.

22 Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 1840 (2d ed. 2014) (47.5.8 Use of Composite Endpoints).

23 See, e.g., Stuart J. Pocock, John J.V. McMurray, and Tim J. Collier, “Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials,” 66 J. Am. Coll. Cardiol. 2648, 2650-51 (2015) (“Interpret composite endpoints carefully.”) (“COMPOSITE ENDPOINTS. These are commonly used in CV RCTs to combine evidence across 2 or more outcomes into a single primary endpoint. But, there is a danger of oversimplifying the evidence by putting too much emphasis on the composite, without adequate inspection of the contribution from each separate component.”); Eric Lim, Adam Brown, Adel Helmy, Shafi Mussa, and Douglas G. Altman, “Composite Outcomes in Cardiovascular Research: A Survey of Randomized Trials,” 149 Ann. Intern. Med. 612, 612, 615-16 (2008) (“Individual outcomes do not contribute equally to composite measures, so the overall estimate of effect for a composite measure cannot be assumed to apply equally to each of its individual outcomes.”) (“Therefore, readers are cautioned against assuming that the overall estimate of effect for the composite outcome can be interpreted to be the same for each individual outcome.”); Freemantle, et al., “Composite outcomes in randomized trials: Greater precision but with greater uncertainty,” 289 J. Am. Med. Ass’n 2554, 2559a (2003) (“To avoid the burying of important components of composite primary outcomes for which on their own no effect is concerned, . . . the components of a composite outcome should always be declared as secondary outcomes, and the results described alongside the result for the composite outcome.”).

24 Freemantle & Calvert, “Composite and surrogate outcomes in randomized controlled trials,” 334 Brit. Med. J. 756, 757a (2007).

25 Lem Moyé, “Statistical Methods for Cardiovascular Researchers,” 118 Circulation Research 439, 451 (2016).

Carl Cranor’s Conflicted Jeremiad Against Daubert

September 23rd, 2018

It seems that authors who have the most intense and refractory conflicts of interest (COI) often fail to see their own conflicts and are the most vociferous critics of others for failing to identify COIs. Consider the spectacle of having anti-tobacco activists and tobacco plaintiffs’ expert witnesses assert that the American Law Institute had an ethical problem because Institute members included some tobacco defense lawyers.1 Somehow these authors overlooked their own positional and financial conflicts, as well as the obvious fact that the Institute’s members included some tobacco plaintiffs’ lawyers as well. Still, the complaint was instructive because it typifies the asymmetrical application of ethical standards, as well as ethical blindspots.2

Recently, Raymond Richard Neutra, Carl F. Cranor, and David Gee published a paper on the litigation use of Sir Austin Bradford Hill’s considerations for evaluating whether an association is causal or not.3 See Raymond Richard Neutra, Carl F. Cranor, and David Gee, “The Use and Misuse of Bradford Hill in U.S. Tort Law,” 58 Jurimetrics 127 (2018) [cited here as Cranor]. Their paper provides a startling example of hypocritical and asymmetrical assertions of conflicts of interests.

Neutra is a self-styled public health advocate4 and the Chief of the Division of Environmental and Occupational Disease Control (DEODC) of the California Department of Health Services (CDHS). David Gee, not to be confused with the English artist or the Australian coin forger, is with the European Environment Agency, in Copenhagen, Denmark. He is perhaps best known for his precautionary principle advocacy and his work with trade unions.5

Carl Cranor is with the Center for Progressive Reform, and he teaches philosophy at one of the University of California campuses. Although he is neither a lawyer nor a scientist, he participates with some frequency as a consultant, and as an expert witness, in lawsuits, on behalf of claimants. Perhaps Cranor’s most notorious appearance as an expert witness resulted in the decision of Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012). Probably less generally known is that Cranor was one of the founders of an organization, the Council for Education and Research on Toxics (CERT), which recently was the complaining party in a California case in which CERT sought money damages for Starbucks’ failure to label each cup of coffee sold with a warning that coffee is known to the State of California to cause cancer.6 Having a so-called not-for-profit corporation can also be pretty handy, especially when it holds itself out as a scientific organization and files amicus briefs in support of reversing Daubert exclusions of the founding members of the corporation, as CERT did on behalf of its founding member in the Milward case.7 The conflict of interest, in such an amicus brief, however, is no longer potential or subtle, and violates the duty of candor to the court.

In this recent article on Hill’s considerations for judging causality, Cranor followed CERT’s lead from Milward. Cranor failed to disclose that he has been a party expert witness for plaintiffs, in cases in which he was advocating many of the same positions put forward in the Jurimetrics article, including the Milward case, in which he was excluded from testifying by the trial court. Cranor’s lack of candor with the readers of the Jurimetrics article is all the more remarkable in that Cranor and his co-authors give conflicts of interest outsize importance in substantive interpretations of scholarship:

“the desired reliability for evidence evaluation requires that biases that derive from the financial interests and ideological commitments of the investigators and editors that control the gateways to publication be considered in a way that Hill did not address.”

Cranor at 137 & n.59. Well, we could add that Cranor’s financial interests and ideological commitments might well be considered in evaluating the reliability of the opinions and positions advanced in this most recent work by Cranor and colleagues. If you believe that COIs disqualify a speaker from addressing important issues, then you have all the reason you need to avoid reading Cranor’s recent article.

Dubious Scholarship

The more serious problem with Cranor’s article is not his ethically strained pronouncements about financial interests, but the dubious scholarship he and his colleagues advance to thwart judicial gatekeeping of even more dubious expert witness opinion testimony. To begin with, the authors disparage the training and abilities of federal judges to assess the epistemic warrant and reliability of proffered causation opinions:

“With their enhanced duties to review scientific and technical testimony federal judges, typically not well prepared by legal education for these tasks, have struggled to assess the scientific support for—and the reliability and relevance of—expert testimony.”

Cranor at 147. Their assessment is fair, but it hides the authors’ cynical agenda: to remove gatekeeping and to leave the assessment to lay juries, who are even less well prepared for the task, and whose verdicts carry no institutional accountability, review, or public evaluation.

Similarly, the authors note the temporal context and limitations of Bradford Hill’s 1965 paper, advice now more than 50 years old, given in a discipline that has since changed dramatically with the advancement of biological, epidemiologic, and genetic science.8 Even at the time of its original publication in 1965, Bradford Hill’s paper, which was based upon an informal lecture, was not designed or intended to be a definitive treatment of causal inference. Cranor and his colleagues make no effort to review Bradford Hill’s many other publications, both before and after his 1965 dinner speech, for evidence of his views on the factors for causal inference, including the role of statistical testing and inference.

Nonetheless, Bradford Hill’s 1965 paper has become a landmark, even if dated, because of its author’s iconic status in the world of public health, earned for his showing that tobacco smoking causes lung cancer,9 and for advancing the role of double-blind randomized clinical trials.10 Cranor and his colleagues made no serious effort to engage with the large body of Bradford Hill’s writings, including his immensely important textbook, The Principles of Medical Statistics, which started as a series of articles in The Lancet, and went through 12 editions in print.11 Hill’s reputation will no doubt survive Cranor’s bowdlerized version of Sir Austin’s views.

Epidemiology is Dispensable When It Fails to Support Causal Claims

The egregious aspect of Cranor’s article is its bill of particulars against the federal judiciary for allegedly errant gatekeeping, which for these authors really translates into an objection to any gatekeeping at all. Cranor at 144-45. Indeed, the authors provide not a single example of a “proper” exclusion of an expert witness who was contending for some doubtful causal claim. Perhaps they have never seen a proper exclusion, but doesn’t that speak volumes about their agenda and their biases?

High on the authors’ list of claimed gatekeeping errors is the requirement that a causal claim be supported with epidemiologic evidence. Although some causal claims may be supported by strong evidence of a biological process with mechanistic evidence, such claims are not common in United States tort litigation.

In support of the claim that epidemiology is dispensable, Cranor suggests that:

“Some courts have recognized this, and distinguished scientific committees often do not require epidemiological studies to infer harm to humans. For example, the International Agency for Research on Cancer (IRAC) [sic], the National Toxicology Program, and California’s Proposition 65 Scientific Advisory Panel, among others, do not require epidemiological data to support findings that a substance is a probable or—in some cases—a known human carcinogen, but it is welcomed if available.”

Cranor at 149. California’s Proposition 65!??? Even IARC is hard to take seriously these days with its capture by consultants for the litigation industry, but if we were to accept IARC as an honest broker of causal inferences, what substance “known” to IARC to cause cancer in humans (Category I) was branded as a “known carcinogen” without the support of epidemiologic studies? Inquiring minds might want to know, but they will not learn the answer from Cranor and his co-authors.

When it comes to adverting to legal decisions that supposedly support the authors’ claim that epidemiology is unnecessary, their scholarship is equally wanting. The paper cites the notorious Wells case, which was so roundly condemned in scientific circles that it probably helped ensure that a decision such as Daubert would ultimately be handed down by the Supreme Court. The authors seemingly cannot read, understand, and interpret even the most straightforward legal decisions. Here is how they cite Wells as support for their views:

Wells v. Ortho Pharm. Corp., 788 F.2d 741, 745 (11th Cir. 1986) (reviewing a district court’s decision deciding not to require the use of epidemiological evidence and instead allowing expert testimony).”

Cranor at 149-50 n.122. The trial judge in Wells never made such a decision; indeed, the case was tried by the bench, before the Supreme Court decided Daubert. There was no gatekeeping involved at all. More important, however, and contrary to Cranor’s explanatory parenthetical, both sides presented epidemiologic evidence in support of their positions.12

Cranor and his co-authors similarly misread and misrepresent the trial court’s decision in the litigation over maternal sertraline use and infant birth defects. Twice they cite the Multi-District Litigation trial court’s decision that excluded plaintiffs’ expert witnesses:

“In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 449, 455 (E.D. Pa. 2014) (expert may not rely on nonstatistically significant studies to which to apply the [Bradford Hill] factors).”

Cranor at 144 n.85; 158 n.179. The MDL judge, Judge Rufe, decidedly never held that an expert witness may not rely upon a statistically non-significant study in a “Bradford Hill” analysis, and the Third Circuit, which affirmed the exclusions of the plaintiffs’ expert witnesses’ testimony, was equally careful to avoid any such pronouncement.13

Who Needs Statistical Significance

Part of Cranor’s post-science agenda is to intimidate judges into believing that statistical significance is unnecessary and a wrong-headed criterion for judging the validity of relied-upon research. In their article, Cranor and friends suggest that Hill agreed with their radical approach, but nothing could be further from the truth. Although these authors parse almost every word of Hill’s 1965 article, they conveniently omit Hill’s views about the necessary predicates for applying his nine considerations for causal inference:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). Cranor’s radicalism leaves no room for assessing whether a putative association is “beyond what we would care to attribute to the play of chance,” and his poor scholarship ignores Hill’s insistence that this statistical analysis be carried out.14

Hill’s work certainly acknowledged the limitations of statistical method, which could not compensate for poorly designed research:

“It is a serious mistake to rely upon the statistical method to eliminate disturbing factors at the completion of the work.  No statistical method can compensate for a badly planned experiment.”

Austin Bradford Hill, Principles of Medical Statistics at 4 (4th ed. 1948). Hill was equally clear, however, that the limits on statistical methods did not imply that statistical methods are not needed to interpret a properly planned experiment or study. In the summary section of his textbook’s first chapter, Hill removed any doubt about his view of the importance, and the necessity, of statistical methods:

“The statistical method is required in the interpretation of figures which are at the mercy of numerous influences, and its object is to determine whether individual influences can be isolated and their effects measured.”

Id. at 10 (emphasis added).

In his efforts to eliminate judicial gatekeeping of expert witness testimony, Cranor has struggled to understand statistical inference and testing.15 In an early writing, a 1993 book, Cranor suggested that we “can think of type I and II error rates” as “standards of proof,” which begs the question whether they are appropriately used to assess significance or posterior probabilities.16 Indeed, Cranor went further, confusing significance and posterior probabilities, when he described the usual level of alpha (5%) as the “95%” rule, and claimed that regulatory agencies require something akin to proof “beyond a reasonable doubt” when they require two “statistically significant” studies.17

Cranor has persisted in this fallacious analysis in his later writings. In a 2006 book, he erroneously equated the 95% coefficient of statistical confidence with 95% certainty of knowledge.18 Later in the same text, Cranor again asserted the nonsense that agency regulations are written only when supported by proof “beyond a reasonable doubt.”19 Given that Cranor has consistently confused significance and posterior probabilities, he really should not be giving advice to anyone about statistical or scientific inference. Cranor’s persistent misunderstandings of basic statistical concepts do, however, explain his motivation for advocating the elimination of statistical significance testing, even if these misunderstandings make his enterprise intellectually unacceptable.
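
The distinction Cranor elides can be shown in a few lines. The following sketch is my own illustration, with stipulated priors and power (nothing here comes from Cranor’s books); it computes, by Bayes’ theorem, the posterior probability that a “statistically significant” association is real, which equals 95% only under special assumptions:

    # P(effect is real | significant result), for a test with alpha = 0.05
    # and 80% power, across different (stipulated) prior probabilities
    alpha, power = 0.05, 0.80
    for prior in (0.5, 0.1, 0.01):
        ppv = (prior * power) / (prior * power + (1 - prior) * alpha)
        print(f"prior = {prior:.2f} -> posterior = {ppv:.2f}")
    # prior = 0.50 -> posterior = 0.94
    # prior = 0.10 -> posterior = 0.64
    # prior = 0.01 -> posterior = 0.14

A rejection at the 5% level can thus correspond to almost any level of certainty, which is why equating alpha with a burden of proof is a solecism.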

Cranor and company fall into a similar muddle when they offer advice on post-hoc power calculations, which advice ignores standard statistical learning for interpreting completed studies.20 Another measure of the authors’ failed scholarship is their omission of any discussion of recent efforts by many in the scientific community to lower the threshold for statistical significance, based upon the belief that the customary 5% p-value is an order of magnitude too high.21
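
The standard objection to post-hoc power, which the authors ignore, is easy to state: for a completed study, “observed power” is just a re-expression of the observed p-value, and so adds nothing to its interpretation. A minimal sketch, for a two-sided z-test (my own illustration, not anything from the cited writings):

    from scipy.stats import norm

    def observed_power(p, alpha=0.05):
        # power computed at the observed effect size, two-sided z-test
        z = norm.isf(p / 2)            # |z| implied by the two-sided p-value
        crit = norm.isf(alpha / 2)
        return norm.sf(crit - z) + norm.cdf(-crit - z)

    for p in (0.05, 0.10, 0.50):
        print(f"p = {p:.2f} -> observed power = {observed_power(p):.2f}")
    # p = 0.05 -> 0.50; p = 0.10 -> 0.38; p = 0.50 -> 0.10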


Relative Risks Greater Than Two

There are other tendentious arguments and treatments in Cranor’s brief against gatekeeping, but I will stop with one last example. The inference of specific causation from study risk ratios has provoked a torrent of verbiage from Sander Greenland (who is cited copiously by Cranor). Cranor, however, does not even scratch the surface of the issue and fails to cite the work of epidemiologists, such as Duncan C. Thomas, who have defended the use of probabilities of (specific) causation. More important, Cranor fails to speak out against the abuse of using any relative risk greater than 1.0 to support an inference of specific causation, when the nature of the causal relationship is neither necessary nor sufficient. In this context, Kenneth Rothman has reminded us that someone can be exposed to, or have, a risk, and then develop the related outcome, without there being any specific causation:

“An elementary but essential principle to keep in mind is that a person may be exposed to an agent and then develop disease without there being any causal connection between the exposure and the disease. For this reason, we cannot consider the incidence proportion or the incidence rate among exposed people to measure a causal effect.”

Kenneth J. Rothman, Epidemiology: An Introduction at 57 (2d ed. 2012).

The danger in Cranor’s article in Jurimetrics is that some readers will not realize the extreme partisanship in its ipse dixit and erroneous pronouncements. Caveat lector.


1 Elizabeth Laposata, Richard Barnes & Stanton Glantz, “Tobacco Industry Influence on the American Law Institute’s Restatements of Torts and Implications for Its Conflict of Interest Policies,” 98 Iowa L. Rev. 1 (2012).

2 The American Law Institute responded briefly. See Roberta Cooper Ramo & Lance Liebman, “The ALI’s Response to the Center for Tobacco Control Research & Education,” 98 Iowa L. Rev. Bull. 1 (2013), and the original authors’ self-serving last word. Elizabeth Laposata, Richard Barnes & Stanton Glantz, “The ALI Needs to Implement Modern Conflict of Interest Policies,” 98 Iowa L. Rev. Bull. 17 (2013).

3 Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965).

4 Raymond Richard Neutra, “Epidemiology Differs from Public Health Practice,” 7 Epidemiology 559 (1996).

7 “From Here to CERT-ainty” (June 28, 2018).

8 Kristen Fedak, Autumn Bernal, Zachary Capshaw, and Sherilyn A Gross, “Applying the Bradford Hill Criteria in the 21st Century: How Data Integration Has Changed Causal Inference in Molecular Epidemiology,” Emerging Themes in Epidemiol. 12:14 (2015); John P. A. Ioannidis, “Exposure-Wide Epidemiology: Revisiting Bradford Hill,” 35 Stat. Med. 1749 (2016).

9 Richard Doll & Austin Bradford Hill, “Smoking and Carcinoma of the Lung,” 2(4682) Brit. Med. J. 739 (1950).

10 Geoffrey Marshall (chairman), “Streptomycin Treatment of Pulmonary Tuberculosis: A Medical Research Council Investigation,” 2 Brit. Med. J. 769, 769–71 (1948).

11 Vern Farewell & Anthony Johnson, “The origins of Austin Bradford Hill’s classic textbook of medical statistics,” 105 J. Royal Soc’y Med. 483 (2012). See also Hilary E. Tillett, “Bradford Hill’s Principles of Medical Statistics,” 108 Epidemiol. Infect. 559 (1992).

13 In re Zoloft Prod. Liab. Litig., No. 16-2247, ___ F.3d ___, 2017 WL 2385279, 2017 U.S. App. LEXIS 9832 (3d Cir. June 2, 2017) (affirming exclusion of biostatistician Nicholas Jewell’s dodgy opinions, which involved multiple methodological flaws and failures to follow any methodology faithfully).

14 See “Bradford Hill on Statistical Methods” (Sept. 24, 2013).

16 Carl F. Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law at 33-34 (1993) (arguing incorrectly that one can think of α and β (the chances of type I and type II errors, respectively) and 1 − β as measures of the “risk of error” or “standards of proof”); see also id. at 44, 47, 55, 72-76. At least one astute reviewer called Cranor on his statistical solecisms. Michael D. Green, “Science Is to Law as the Burden of Proof is to Significance Testing: Book Review of Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law,” 37 Jurimetrics J. 205 (1997) (taking Cranor to task for confusing significance and posterior (burden of proof) probabilities).

17 Id. (squaring 0.05 to arrive at “the chances of two such rare events occurring” as 0.0025, which impermissibly assumes independence between the two studies).

18 Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 100 (2006) (incorrectly asserting that “[t]he practice of setting α =.05 I call the “95% rule,” for researchers want to be 95% certain that when knowledge is gained [a study shows new results] and the null hypothesis is rejected, it is correctly rejected.”).

19 Id. at 266.

21 See, e.g., John P. A. Ioannidis, “The Proposal to Lower P Value Thresholds to .005,” 319 J. Am. Med. Ass’n 1429 (2018); Daniel J. Benjamin, James O. Berger, Valen E. Johnson, et al., “Redefine statistical significance,” 2 Nature Human Behaviour 6 (2018).

The Appeal of the Learned Treatise

August 16th, 2018

In many states, the so-called “learned treatise” doctrine creates a pseudo-exception to the rule against hearsay. The contents of such a treatise can be read to the jury, not for their truth, but for the jury to consider in weighing the credibility of an expert witness who denies the truth of the treatise. Supposedly, some lawyers can understand the distinction between admitting the treatise’s contents for their truth and admitting them to impeach the credibility of an expert witness who denies their truth. Under the Federal Rules of Evidence, and in some states, the language of the treatise may be considered for its truth as well, but the physical treatise may not be entered into evidence. There are several serious problems with both the state and the federal versions of the doctrine.1

Legal on-line media recently reported about an appeal in the Pennsylvania Superior Court, which heard arguments in a case that apparently turned on allegations of trial court error in refusing to allow learned treatise cross-examination of a plaintiff’s expert witness in Pledger v. Janssen Pharms., Inc., Phila. Cty. Ct. C.P., April Term 2012, No. 1997. See Matt Fair, “J&J Urges Pa. Appeals Court To Undo $2.5M Risperdal Verdict,” Law360 (Aug. 8, 2018) (reporting on defendants’ appeal in Pledger, Pa. Super. Ct. nos. 2088 EDA 2016 and 2187 EDA 2016).

In Pledger, plaintiff claimed that he developed gynecomastia after taking the defendants’ antipsychotic medication Risperdal. Defendants warned about gynecomastia, but the plaintiff claimed that the defendants had not accurately quantified the rate of gynecomastia in their package insert.

From Mr. Fair’s reporting, readers can discern only one ground for appeal, namely whether the “trial judge improperly barred it from using a scientific article to challenge an expert’s opinion that the antipsychotic drug Risperdal caused an adolescent boy to grow breasts.” Without having heard the full oral argument, or having read the briefs, the reader cannot tell whether there were other grounds. According to Mr. Fair, defense counsel contended that the trial court’s refusal to allow the learned treatise “had allowed the [plaintiff’s] expert’s opinion to go uncountered during cross-examination.” The argument, according to Mr. Fair, continued:

“Instead of being able to confront the medical causation expert with an article that absolutely contradicted and undermined his opinion, the court instead admonished counsel in front of the jury and said, ‘In Pennsylvania, we don’t try cases by books, we try them by live witnesses’.”

The cross-examination at issue, on the other hand, related to whether gynecomastia could occur naturally in pre-pubertal boys. Plaintiffs’ expert witness, Dr. Mark Solomon, a plastic surgeon, opined that gynecomastia did not occur naturally, and the defense counsel attempted to confront him with a “learned treatise,” an article from the Journal of Endocrinology, which apparently stated the contrary. Solomon, following the usual expert witness playbook, testified that he had not read the article (and why would a surgeon have read this endocrinology journal?). Defense counsel pressed, and according to Mr. Fair, the trial judge disallowed further inquiry on cross-examination. On appeal, the defendants argued that the trial judge violated the learned treatise rule that allows “scholarly articles to be used as evidence.” The plaintiffs contended, in defense of their judgment below, that the “learned treatise rule” does not allow “scholarly articles to simply be read verbatim into the record,” and that the defense had the chance to raise the article in the direct examination of its own expert witnesses.

The Law360 reporting is curious on several fronts. The assigned error would have only been in support of a challenge to the denial of a new trial, and in a Risperdal case, the defense would likely have made a motion for judgment notwithstanding the verdict, as well as for new trial. Although the appellate briefs are not posted online, the defense’s post-trial motions in Pledger v. Janssen Pharms., Inc., Phila. Cty. Ct. C.P., April Term 2012, No. 1997, are available. See Defendants’ Motions for Post-Trial Relief Pursuant to Pa.R.C.P. 227.1 (Mar. 6, 2015).

At least at the post-trial motion stage, the defendants clearly made both motions for judgment and for a new trial, as expected.

As for the preservation of the “learned treatise” issue, the entire assignment of error is described in a single paragraph (out of 116 paragraphs) in the post-trial motion, as follows:

27. Moreover, appearing to rely on Aldridge v. Edmunds, 750 A.2d 292 (Pa. 2000), the Court prevented Janssen from cross-examining Dr. Solomon with scientific authority that would undermine his position. See, e.g., Tr. 60:9-63:2 (p.m.). Aldridge, however, addresses the use of learned treatises in the direct examination, and it cites with approval the case of Cummings v. Borough of Nazareth, 242 A.2d 460, 466 (Pa. 1968) (plurality op.), which stated that “[i]t is entirely proper in examination and cross-examination for counsel to call the witness’s attention to published works on the matter which is the subject of the witness’s testimony.” Janssen should not have been so limited in its cross examination of Dr. Solomon.

In Cummings, the issue revolved around using manuals that contained industry standards for swimming pool construction, not the appropriateness of a learned scientific treatise. Cummings v. Nazareth Borough, 430 Pa. 255, 266-67 (Pa. 1968). The defense motion did not contend that the defense counsel had laid the appropriate foundation for the learned treatise to be used. In any event, the trial judge wrote an opinion on the post-trial motions, in which he did not appear to address the learned treatise issue at all. Pledger v. Janssen Pharms., Inc., Phila. Ct. C.P., Op. sur post-trial motions (Aug. 10, 2017) (Djerassi, J.).

The Pennsylvania Supreme Court has addressed the learned treatise exception to the rule against hearsay on several occasions. Perhaps the leading case described the law as:

“well-settled that an expert witness may be cross-examined on the contents of a publication upon which he or she has relied in forming an opinion, and also with respect to any other publication which the expert acknowledges to be a standard work in the field. * * * In such cases, the publication or literature is not admitted for the truth of the matter asserted, but only to challenge the credibility of the witness’ opinion and the weight to be accorded thereto. * * * Learned writings which are offered to prove the truth of the matters therein are hearsay and may not properly be admitted into evidence for consideration by the jury.”

Majdic v. Cincinnati Mach. Co., 537 A.2d 334, 621-22 (Pa. 1988) (internal citations omitted).

The Law360 report is difficult to assess. Perhaps the reporting by Mr. Fair was non-eponymously unfair? There is no discussion of how the defense had laid its foundation. Perhaps the defense had promised “to connect up” by establishing the foundation of the treatise through a defense expert witness. If there had been a foundation established, or promised to be established, the post-trial motion would have, in the normal course of events, cited the transcript for the proffer of a foundation. And why did Mr. Fair report on the oral argument as though the learned treatise issue was the only issue before the court? Inquiring minds want to know.

Judge Djerassi’s opinion on post-trial motions was perhaps more notable for embracing some testimony on statistical significance from Dr. David Kessler, former Commissioner of the FDA, and now a frequent testifier for the lawsuit industry on regulatory matters. Judge Djerassi, in his opinion, stated:

“This statistically significant measure is shown in Table 21 and was within a chi-square rate of .02, meaning within a 98% chance of certainty. In Dr. Kessler’s opinion this is a statistically significant finding. (N.T. 1/29/15, afternoon, p. 27, lns. 10-11, p. 28, lns. 7-12).”

Post-trial opinion at p.11.2 Surely, the defense’s expert witnesses explained that the chi-square test did not yield a measure of certainty that the measured statistic was the correct value.

The trial court’s whopper was enough of a teaser to force me to track down Kessler’s testimony, which was posted to the internet by the plaintiffs’ law firm. Judge Djerassi’s erroneous interpretation of the p-value can indeed be traced to Kessler’s improvident testimony:

Q. And since 2003, what have you been doing at University of California San Francisco, sir?

A. Among other things, I am currently a professor of pediatrics, professor of epidemiology, professor of biostatistics.

Pledger Transcript, Thurs., Jan. 28, 2015, Vol. 3, Morning Session at 111:3-7.

A. What statistical significance means is it’s mathematical and scientific calculations, but when we say something is statistically significant, it’s unlikely to happen by chance. So that association is very likely to be real. If you redid this, general statistically significant says if I redid this and redid the analysis a hundred times, I would get the same result 95 of those times.

Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Morning Session at 80:18 – 81:2.

Q. So, sir, if we see on a study — and by the way, do the investigators of a study decided in their own criteria what is statistically significant? Do they assign what’s called a P value?

A. Exactly. So you can set it at 95, you can set it at 98, you can set it at 90. Generally, 95 significance level, for those of you who are mathematicians or scientifically inclined, it’s a P less than .05.

Q. As a general rule?

A. Yes.

Q. So if I see a number that is .0158, next to a dataset, that would mean that it occurs by chance less than two in 100. Correct?

A. Yes, that’s what the P value is saying.

Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Morning Session at 81:5-20

Q. … If someone — if something has a p-value of less than .02, the converse of it is that your 98 — .98, that would be 98 percent certain that the result is not by chance?

A. Yes. That’s a fair way of saying it.

Q. And if you have a p-value of .10, that means the converse of it is 90 percent, or 90 percent that it’s not by chance, correct?

A. Yes.

Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Afternoon Session at 7:14-22.

Q. Okay. And the last thing I’d like to ask about — sorry to keep going back and forth — is so if the jury saw a .0158, that’s of course less than .02, which means that it is 90 — almost 99 percent not by chance.

A. Yes. It’s statistically significant, as I would call it.

Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Afternoon Session at 8:7-13.
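
Kessler’s “95 of those times” gloss is not merely loose talk; it is demonstrably false. A simple simulation (my own, purely illustrative, with no connection to the Risperdal data) shows that even when a real effect underlies a result with p at about 0.05, an identical replication comes out statistically significant only about half the time:

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 100, 10_000
    true_mean = 0.196  # effect sized so a typical study lands near p = 0.05
    samples = rng.normal(true_mean, 1.0, size=(reps, n))
    t = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
    print((np.abs(t) > 1.96).mean())  # roughly 0.5, not 0.95

And of course the p-value is computed on the assumption that chance alone is operating; it is not a statement that the result is “98 percent certain” or “98 percent not by chance.”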


2 See also Djerassi opinion at p. 13 n.13 (“P<0.02 is the chi-square rate reflecting a data outcome within a 98% chance of certainty.”).

N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses

August 8th, 2018

The United States Supreme Court’s decision in Daubert is now over 25 years old. The idea of judicial gatekeeping of expert witness opinion testimony is even older in New Jersey state courts. The New Jersey Supreme Court articulated a reliability standard before the Daubert case was even argued in Washington, D.C. See Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991). Articulating a standard, however, is something very different from following a standard, and in many New Jersey trial courts, until very recently, the standard was pretty much anything goes.

One counter-example to the general rule of dog-eat-dog in New Jersey was Judge Nelson Johnson’s careful review and analysis of the proffered causation opinions in cases in which plaintiffs claimed that their use of the anti-acne medication isotretinoin (Accutane) caused Crohn’s disease. Judge Johnson, who sits in the Law Division of the New Jersey Superior Court for Atlantic County, held a lengthy hearing and reviewed the expert witnesses’ reliance materials.1 Judge Johnson found that the plaintiffs’ expert witnesses had employed undue selectivity in choosing what to rely upon. Perhaps even more concerning, Judge Johnson found that these witnesses had refused to rely upon reasonably well-conducted epidemiologic studies, while embracing unpublished, incomplete, and poorly conducted studies and anecdotal evidence. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J.Super. Law Div., Atlantic Cty. Feb. 20, 2015). In response, Judge Johnson politely but firmly closed the gate to conclusion-driven duplicitous expert witness causation opinions in over 2,000 personal injury cases. “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015).

Aside from resolving over 2,000 pending cases, Judge Johnson’s judgment was of intense interest to all who are involved in pharmaceutical and other products liability litigation. Judge Johnson had conducted a pretrial hearing, sometimes called a Kemp hearing in New Jersey, after the New Jersey Supreme Court’s opinion in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). At the hearing and in his opinion that excluded plaintiffs’ expert witnesses’ causation opinions, Judge Johnson demonstrated a remarkable aptitude for analyzing data and inferences in the gatekeeping process.

When the courtroom din quieted, the trial court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J.Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in closing the gate, Judge Johnson protected the judicial process from several bogus and misleading “lines of evidence,” which have become standard ploys to mislead juries in courthouses where the gatekeepers are asleep. Recognizing that not all evidence is on the same analytical plane, Judge Johnson gave case reports short shrift:

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16. Adverse event reports, largely driven by the very litigation in his courtroom, received little credit and were labeled as “not evidentiary in a court of law.” Id. at *14 (quoting FDA’s description of FAERS).

Judge Johnson recognized that there was a wide range of identified “risk factors” for inflammatory bowel disease, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that there were four medications generally acknowledged to be potential risk factors for inflammatory bowel disease: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics. Understandably, Judge Johnson was concerned that the plaintiffs’ expert witnesses preferred studies unadjusted for potential confounding co-variables and studies that had involved “cherry picking the subjects.” Id. at *18.

Judge Johnson had found that both sides in the isotretinoin cases conceded the relative unimportance of animal studies, but the plaintiffs’ expert witnesses nonetheless invoked the animal studies in the face of the artificial absence of epidemiologic studies that had been created by their cherry-picking strategies. Id.

Plaintiffs’ expert witnesses had reprised a common claimants’ strategy; namely, they claimed that all the epidemiology studies lacked statistical power. Their arguments often ignored that statistical power calculations depend upon a specified level of statistical significance, a concept to which many plaintiffs’ counsel have virulent antibodies, as well as upon an arbitrarily selected alternative hypothesis of association size. Furthermore, the plaintiffs’ arguments ignored the actual point estimates, most of which were favorable to the defense, and the observed confidence intervals, most of which were reasonably narrow.
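
The dependence is easy to exhibit. In the sketch below (hypothetical numbers of my own, not the isotretinoin data), the very same two-arm study is “underpowered” or “well powered” depending entirely upon the alternative relative risk the analyst chooses to posit:

    from math import sqrt
    from scipy.stats import norm

    def power_two_proportions(p0, rr, n_per_arm, alpha=0.05):
        # approximate power of a two-sided, two-sample test of proportions
        p1 = p0 * rr
        se = sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
        return norm.sf(norm.isf(alpha / 2) - (p1 - p0) / se)

    for rr in (1.5, 2.0, 3.0):  # arbitrarily posited alternatives
        print(f"RR = {rr}: power = {power_two_proportions(0.01, rr, 2000):.2f}")
    # RR = 1.5: power = 0.30; RR = 2.0: power = 0.74; RR = 3.0: power = 0.99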

The defense responded to the bogus statistical arguments by presenting an extremely capable clinical and statistical expert witness, Dr. Stephen Goodman, who explained meta-analysis generally, and who presented two meta-analyses that he had performed on isotretinoin and inflammatory bowel outcomes. Meta-analysis has become an important facet of pharmaceutical and other products liability litigation.

Dr. Goodman explained that the plaintiffs’ witnesses’ failure to perform a meta-analysis was telling when meta-analysis can obviate the plaintiffs’ hyperbolic statistical complaints:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

In re Accutane, 2015 WL 753674, at *8.
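
For readers unfamiliar with the technique, the core of a fixed-effect, inverse-variance meta-analysis fits in a dozen lines. The sketch below uses made-up study results, not Dr. Goodman’s data; it shows how each study contributes in proportion to its precision, so that no single study is determinant:

    import math

    # (RR, lower 95% CI, upper 95% CI) for three hypothetical studies
    studies = [(1.10, 0.70, 1.73), (0.92, 0.61, 1.39), (0.68, 0.28, 1.66)]

    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE backed out of the CI
        w = 1 / se**2                                    # inverse-variance weight
        num += w * math.log(rr)
        den += w

    pooled, se_pooled = math.exp(num / den), math.sqrt(1 / den)
    lo, hi = pooled * math.exp(-1.96 * se_pooled), pooled * math.exp(1.96 * se_pooled)
    print(f"pooled RR = {pooled:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

On these stipulated inputs, the pooled estimate sits near the null with a reasonably narrow interval, even though no single constituent study could say much on its own.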

Judge Johnson’s judicial handiwork received non-deferential appellate review from a three-judge panel of the Appellate Division, which reversed the exclusion of Kornbluth and Madigan. In re Accutane Litig., 451 N.J. Super. 153, 165 A.3d 832 (App. Div. 2017). The New Jersey Supreme Court granted the isotretinoin defendants’ petition for appellate review, and the issues were joined over the appropriate standard of appellate review for expert witness opinion exclusions, and the appropriateness of Judge Johnson’s exclusions of Kornbluth and Madigan. A bevy of amici curiae joined in the fray.2

Last week, the New Jersey Supreme Court issued a unanimous opinion, which reversed the Appellate Division’s holding that Judge Johnson had “mistakenly exercised” discretion. Applying its own precedents from Rubanick, Landrigan, and Kemp, and the established abuse-of-discretion standard, the Court concluded that the trial court’s ruling to exclude Kornbluth and Madigan was “unassailable.” In re Accutane Litig., ___ N.J. ___, 2018 WL 3636867 (2018), Slip op. at 79.3

The high court graciously acknowledged that defendants and amici had “good reason” to seek clarification of New Jersey law. Slip op. at 67. In abandoning abuse-of-discretion as its standard of review, the Appellate Division had relied upon a criminal case that involved the application of the Frye standard, which is applied as a matter of law. Id. at 70-71. The high court also appeared to welcome the opportunity to grant review, reverse the intermediate court, and reinforce “the rigor expected of the trial court” in its gatekeeping role. Id. at 67. The Supreme Court, however, did not articulate a new standard; rather it demonstrated at length that Judge Johnson had appropriately applied the legal standards previously announced in New Jersey Supreme Court cases.4

In attempting to defend the Appellate Division’s decision, plaintiffs sought to characterize New Jersey law as somehow different from, and more “liberal” than, the United States Supreme Court’s decision in Daubert. The New Jersey Supreme Court acknowledged that it had never formally adopted the dicta from Daubert about factors that could be considered in gatekeeping, slip op. at 10, but the Court went on to note what disinterested observers had long understood, that the so-called Daubert factors simply flowed from a requirement of sound methodology, and that there was “little distinction” and “not much light” between the Landrigan and Rubanick principles and the Daubert case or its progeny. Id. at 10, 80.

Curiously, the New Jersey Supreme Court announced that the Daubert factors should be incorporated into the New Jersey Rules 702 and 703 and their case law, but it stopped short of declaring New Jersey a “Daubert” jurisdiction. Slip op. at 82. In part, the Court’s hesitance followed from New Jersey’s bifurcation of expert witness standards for civil and criminal cases, with the Frye standard still controlling in the criminal docket. At another level, it makes no sense to describe any jurisdiction as a “Daubert” state because the relevant aspects of the Daubert decision were dicta, and the Daubert decision and its progeny were superseded by the revision of the controlling rule in 2000.5

There were other remarkable aspects of the Supreme Court’s Accutane decision. For instance, the Court put its weight behind the common-sense and accurate interpretation of Sir Austin Bradford Hill’s famous articulation of factors for causal judgment, which requires that sampling error, bias, and confounding be eliminated before assessing whether the observed association is strong, consistent, plausible, and the like. Slip op. at 20 (citing the Reference Manual at 597-99), 78.

The Supreme Court relied extensively on the National Academies’ Reference Manual on Scientific Evidence.6 That reliance is certainly preferable to judicial speculations and fabulations of scientific method. The reliance is also positive, considering that the Court did not look only at the problematic epidemiology chapter, but adverted also to the chapters on statistical evidence and on clinical medicine.

The Supreme Court recognized that the Appellate Division had essentially sanctioned an anything-goes abandonment of gatekeeping, an approach that has been all too common in some of New Jersey’s lower courts. Contrary to the previously prevailing New Jersey zeitgeist, the Court instructed that gatekeeping must be “rigorous” to “prevent[] the jury’s exposure to unsound science through the compelling voice of an expert.” Slip op. at 68-69.

Not all evidence is equal. “[C]ase reports are at the bottom of the evidence hierarchy.” Slip op. at 73. Extrapolation from non-human animal studies is fraught with external validity problems, and such studies are “far less probative in the face of a substantial body of epidemiologic evidence.” Id. at 74 (internal quotations omitted).

Perhaps most chilling for the lawsuit industry will be the Supreme Court’s strident denunciation of expert witnesses’ selectivity in choosing lesser evidence in the face of a large body of epidemiologic evidence, id. at 77, and their unprincipled cherry picking among the extant epidemiologic publications. Like the trial court, the Supreme Court found that the plaintiffs’ expert witnesses’ inconsistent use of methodological criteria and their selective reliance upon studies that favored their taskmasters (disregarding eight of the nine epidemiologic studies) were the antithesis of sound methodology. Id. at 73, citing with approval, In re Lipitor, ___ F.3d ___ (4th Cir. 2018) (slip op. at 16) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

An essential feature of the Supreme Court’s decision is that it refused to indulge the common reductionism that “all epidemiologic studies are flawed,” a move that privileges cherry picking. Not all disagreements between expert witnesses can be framed as differences in interpretation. In re Accutane will likely stand as a bulwark against flawed expert witness opinion testimony in the Garden State for a long time.


1 Judge Nelson Johnson is also the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a spell-binding historical novel about political and personal corruption.

2 In support of the defendants’ positions, amicus briefs were filed by the New Jersey Business & Industry Association, Commerce and Industry Association of New Jersey, and New Jersey Chamber of Commerce; by law professors Kenneth S. Broun, Daniel J. Capra, Joanne A. Epps, David L. Faigman, Laird Kirkpatrick, Michael M. Martin, Liesa Richter, and Stephen A. Saltzburg; by medical associations the American Medical Association, Medical Society of New Jersey, American Academy of Dermatology, Society for Investigative Dermatology, American Acne and Rosacea Society, and Dermatological Society of New Jersey, by the Defense Research Institute; by the Pharmaceutical Research and Manufacturers of America; and by New Jersey Civil Justice Institute. In support of the plaintiffs’ position and the intermediate appellate court’s determination, amicus briefs were filed by political action committee the New Jersey Association for Justice; by the Ironbound Community Corporation; and by plaintiffs’ lawyer Allan Kanner.

3 Nothing in the intervening scientific record has called Judge Johnson’s trial court judgment into question. See, e.g., I.A. Vallerand, R.T. Lewinson, M.S. Farris, C.D. Sibley, M.L. Ramien, A.G.M. Bulloch, and S.B. Patten, “Efficacy and adverse events of oral isotretinoin for acne: a systematic review,” 178 Brit. J. Dermatol. 76 (2018).

4 Slip op. at 9, 14-15, citing Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991) (“We initially took that step to allow the parties in toxic tort civil matters to present novel scientific evidence of causation if, after the trial court engages in rigorous gatekeeping when reviewing for reliability, the proponent persuades the court of the soundness of the expert’s reasoning.”).

5 The Court did acknowledge that Federal Rule of Evidence 702 had been amended in 2000, to reflect the Supreme Court’s decision in Daubert, Joiner, and Kumho Tire, but the Court did not deal with the inconsistencies between the present rule and the 1993 Daubert case. Slip op. at 64, citing Calhoun v. Yamaha Motor Corp., U.S.A., 350 F.3d 316, 320-21, 320 n.8 (3d Cir. 2003).

6 See Accutane slip op. at 12-18, 24, 73-74, 77-78. With respect to meta-analysis, the Reference Manual’s epidemiology chapter is still stuck in the 1980s, and in the then-prevalent resistance to poorly conducted, often meaningless meta-analyses. See “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (the Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).