TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

An Opinion to SAVOR

November 11th, 2022

The saxagliptin medications are valuable treatments for type 2 diabetes mellitus (T2DM). The SAVOR (Saxagliptin Assessment of Vascular Outcomes Recorded in Patients with Diabetes Mellitus) study was a randomized controlled trial, undertaken by the manufacturers at the request of the FDA.[1] As a large (over sixteen thousand patients randomized), double-blinded cardiovascular outcomes trial, SAVOR collected data on many different end points in patients with T2DM at high risk of cardiovascular disease, over a median of 2.1 years. The primary end point was a composite of cardiac death, non-fatal myocardial infarction, and non-fatal stroke. Secondary end points included each constituent of the composite, as well as hospitalizations for heart failure, coronary revascularization, or unstable angina, and other safety outcomes.

The SAVOR trial found no association between saxagliptin use and the primary end point, or any of its constituents. The trial did, however, find a modest association between saxagliptin and one of the several secondary end points, hospitalization for heart failure (hazard ratio, 1.27; 95% C.I., 1.07 to 1.51; p = 0.007). The SAVOR authors urged caution in interpreting their unexpected finding for heart failure hospitalizations, given the multiple end points considered.[2] Notwithstanding the multiplicity, in 2016, the FDA, which does not require a showing of causation before adding warnings to a drug’s labeling, added warnings about the “risk” of hospitalization for heart failure from the use of saxagliptin medications.

And the litigation came.

The evidentiary display in the litigation grew to include, in addition to SAVOR, observational studies, meta-analyses, and randomized controlled trials of other medications in the DPP-4 inhibitor class, to which saxagliptin belongs. The SAVOR finding for heart failure was not supported by any of the other relevant human study evidence. The lawsuit industry, however, armed with an FDA warning, pressed its cases. A multi-district litigation (MDL 2809) was established. Rule 702 motions were filed by both plaintiffs’ and defendants’ counsel.

When the dust settled in this saxagliptin litigation, the court found that the defendants’ expert witnesses satisfied the relevance and reliability requirements of Rule 702, whereas the proffered opinions of plaintiffs’ expert witness, Dr. Parag Goyal, a cardiologist at Weill Cornell in New York, did not satisfy Rule 702.[3] The court’s task was certainly made easier by the lack of any other expert witness or published opinion that saxagliptin actually causes heart failure serious enough to result in hospitalization.

The saxagliptin litigation presented an interesting array of facts for a Rule 702 showdown. First, there was an RCT that reported a nominally statistically significant association between medication use and a harm, hospitalization for heart failure. The SAVOR finding, however, was for a secondary end point, and its statistical significance was unimpressive when considered in light of the multiple testing that took place in the context of a cardiovascular outcomes trial.

Second, the heart failure increase was not seen in the original registration trials. Third, there was an effort to find corroboration in observational studies and meta-analyses, without success. Fourth, there was no apparent mechanism for the putative effect. Fifth, there was no support from trials or observational studies of other medications in the class of DPP-4 inhibitors.
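To see why a nominal p-value of 0.007 can be unimpressive amid so much multiple testing (the first point above), consider how quickly the family-wise error rate grows. The sketch below is purely illustrative; the end-point count is an assumption chosen for the arithmetic, not SAVOR’s actual tally, and it treats the comparisons as independent:

```python
# Family-wise error rate across k independent comparisons tested at level alpha,
# and the Bonferroni per-comparison threshold (illustrative numbers only;
# k = 10 is a hypothetical end-point count, not SAVOR's actual number).
alpha, k = 0.05, 10

family_wise_error = 1 - (1 - alpha) ** k
bonferroni_threshold = alpha / k

print(round(family_wise_error, 2))  # ~0.4  -- roughly a 40% chance of at least one spurious "hit"
print(bonferroni_threshold)         # 0.005 -- a nominal p of 0.007 would not clear this bar
```

The point of the sketch is not that Bonferroni is the uniquely correct adjustment, but that a nominally “significant” result among many end points carries far less evidential weight than the same result from a single pre-specified test.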

Dr. Goyal testified that the heart failure finding in SAVOR “should be interpreted as cause and effect unless there is compelling evidence to prove otherwise.” On this record, the MDL court excluded Dr. Goyal’s causation opinions. Dr. Goyal purported to conduct a Bradford Hill analysis, but the MDL court appeared troubled by his glib dismissal of the threat to validity in SAVOR from multiple testing, and by his ignoring the consistency prong of the Hill factors. SAVOR provided the only heart failure finding in humans; the remaining observational studies, meta-analyses, and other trials of DPP-4 inhibitors failed to provide supporting evidence.

The challenged defense expert witnesses defended the validity of their opinions, and ultimately the MDL court had little difficulty in permitting them through the judicial gate. The plaintiffs’ challenges to Suneil Koliwad, a physician with a doctorate in molecular physiology, Eric Adler, a cardiologist, and Todd Lee, a pharmaco-epidemiologist, were all denied. The plaintiffs challenged, among other things, whether Dr. Adler was qualified to apply a Bonferroni correction to the SAVOR results, and whether Dr. Lee was obligated to obtain and statistically analyze the data from the trials and studies ab initio. The MDL court quickly dispatched these frivolous challenges.

The saxagliptin MDL decision is an important reminder that litigants should remain vigilant about inaccurate assertions of “statistical significance,” even in premier, peer-reviewed journals. Not all journals are as careful as the New England Journal of Medicine in requiring qualification of claims of statistical significance in the face of multiple testing.

One legal hiccup in the court’s decision was its improvident citation to Daubert, for the proposition that the gatekeeping inquiry must focus “solely on principles and methodology, not on the conclusions they generate.”[4] That piece of obiter dictum did not survive past the Supreme Court’s 1997 decision in Joiner,[5] and it was clearly superseded by statute in 2000. Surely it is time to stop citing Daubert for this dictum.


[1] Benjamin M. Scirica, Deepak L. Bhatt, Eugene Braunwald, Gabriel Steg, Jaime Davidson, et al., for the SAVOR-TIMI 53 Steering Committee and Investigators, “Saxagliptin and Cardiovascular Outcomes in Patients with Type 2 Diabetes Mellitus,” 369 New Engl. J. Med. 1317 (2013).

[2] Id. at 1324.

[3] In re Onglyza & Kombiglyze XR Prods. Liab. Litig., MDL 2809, 2022 WL 43244 (E.D. Ky. Jan. 5, 2022).

[4] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[5] General Electric Co. v. Joiner, 522 U.S. 136 (1997).

Further Thoughts on Cheng’s Consensus Rule

October 3rd, 2022

In “Cheng’s Proposed Consensus Rule for Expert Witnesses,”[1] I discussed a recent law review article by Professor Edward K. Cheng,[2] who has proposed dispensing with expert witness testimony as we know it in favor of having witnesses tell juries what the scientific consensus is on any subject. Cheng’s project is fraught with difficulties and contradictions; and it has clearly anticipatable bad outcomes. Four Supreme Court cases (Daubert, Joiner, Kumho Tire, and Weisgram), and a major revision in Rule 702, ratified by Congress, all embraced the importance of judicial gatekeeping of expert witness opinion testimony to the fact-finding function of trials. Professor Cheng now wants to ditch the entire notion of gatekeeping, as well as the epistemic basis – sufficient facts and data – for expert witnesses’ opinions in favor of reportage of which way the herd is going. Cheng’s proposal is perhaps the most radical attack, in recent times, on the nature of legal factfinding, whether by judges or juries, in the common law world.

Still, there are two claims within his proposal, which although overstated, are worth further discussion and debate. The first is that the gatekeeping role does not sit well with many judges. We see judges ill at ease in their many avoidance tactics, by which they treat serious methodological challenges to expert witness testimony as “merely going to the weight of the conclusion.” The second is that many judges, and especially juries, are completely at sea in the technical knowledge needed to evaluate the scientific issues in many modern day trials.

With respect to the claimed epistemic incompetence, the simpler remedy is to get rid of incompetent judges. We have commercial courts, vaccine courts, and patent courts. Why are litigants disputing a contract or a commercial practice entitled to epistemically competent judges, but litigants in health claim cases are not? Surely, the time has come to have courts with judges that have background and training in the health and statistical sciences. The time for “blue ribbon” juries of properly trained fact finders seems overdue. Somehow we must reconcile the seventh amendment right to a jury with the requirement of “due process” of law. The commitment to jury trials for causes of action known to the common law in 1787, or 1791, is stretched beyond belief for the sorts of technical and complex claims now seen in federal courts and state courts of general jurisdiction.[3]

Several courts have challenged the belief that the seventh amendment right to a jury applies in the face of complex litigation. The United States Court of Appeals for the Third Circuit explained its understanding of the complexity that should remove a case from the province of the seventh amendment:

“A suit is too complex for a jury when circumstances render the jury unable to decide in a proper manner. The law presumes that a jury will find facts and reach a verdict by rational means. It does not contemplate scientific precision but does contemplate a resolution of each issue on the basis of a fair and reasonable assessment of the evidence and a fair and reasonable application of the relevant legal rules. See Schulz v. Pennsylvania RR, 350 U.S. 523, 526 (1956). A suit might be excessively complex as a result of any set of circumstances which singly or in combination render a jury unable to decide in the foregoing rational manner. Examples of such circumstances are an exceptionally long trial period and conceptually difficult factual issues.”[4]

The Circuit’s description of complexity certainly seems to apply to many contemporary claims of health effects.

We should recognize that Professor Cheng’s indictment, and conviction, of judicial gatekeeping and jury decision making as epistemically incompetent directly implies that the judicial process has no epistemic, truth finding function in technical cases of claimed health effects. Cheng’s proposed solution does not substantially ameliorate this implication, because consensus statements are frequently absent, and even when present, are plagued with their own epistemic weaknesses.

Consider for instance, the 1997 pronouncement of the International Agency for Research on Cancer that crystalline silica is a “known” human carcinogen.[5] One of the members of the working group responsible for the pronouncement explained:

“It is hardly surprising that the Working Group had considerable difficulty in reaching a decision, did not do so unanimously and would probably not have done so at all, had it not been explained that we should be concerned with hazard identification, not risk.”[6]

And yet, within months of the IARC pronouncement, state and federal regulatory agencies formed a chorus of assent to the lung cancer “risk” of crystalline silica. Nothing in the scientific record had changed except the permission of the IARC to stop thinking critically about the causation issue. Another consensus group came out, a few years after the IARC pronouncement, with a devastating critical assessment of the IARC review:

“The present authors believe that the results of these studies [cited by IARC] are inconsistent and, when positive, only weakly positive. Other, methodologically strong, negative studies have not been considered, and several studies viewed as providing evidence supporting the carcinogenicity of silica have significant methodological weaknesses. Silica is not directly genotoxic and is a pulmonary carcinogen only in the rat, a species that seems to be inappropriate for assessing particulate carcinogenesis in humans. Data on humans demonstrate a lack of association between lung cancer and exposure to crystalline silica. Exposure-response relationships have generally not been found. Studies in which silicotic patients were not identified from compensation registries and in which enumeration was complete did not support a causal association between silicosis and lung cancer, which further argues against the carcinogenicity of crystalline silica.”[7]

Cheng’s proposal would seem to suppress legitimate courtroom criticism of an apparent consensus statement, which was based upon a narrow majority of a working group, on a controversial dataset, with no examination of the facts and data upon which the putative consensus statement was itself based.

The Avandia litigation tells a cautionary tale of how fragile and ephemeral consensuses can be. A dubious meta-analysis by a well-known author received lead article billing in an issue of the New England Journal of Medicine, in 2007, and litigation claims started to roll in within hours.[8] In the face of this meta-analysis, an FDA advisory committee recommended heightened warnings, and a trial court declined to take a careful look at the methodological flaws in the inciting meta-analytic study.[9] Ultimately, a large clinical trial exculpated the medication, but by then the harm had been done, and there was no revisiting of the gatekeeping decision to allow the claims to proceed.[10] The point should be obvious. In 2007, there appeared to be a consensus, which led to an FDA label change, despite the absence of sufficient facts and data to support the litigation claims. Even if plaintiffs’ claims passed through the gate in 2008, they were highly vulnerable to courtroom challenges to the original meta-analysis. Cheng’s proposal, however, would truncate the litigation process into an exploration of whether or not there was a “consensus.”

Deviation from Experts’ Standards of Care

The crux of many Rule 702 challenges to an expert witness is that the witness has committed malpractice in his discipline. The challenger must identify a standard of care, and the challenged witness’s deviation(s) from that standard. The identification of the relevant standard of care will, indeed, sometimes involve a consensus, evidenced by texts, articles, professional society statements, or simply implicit in relevant works of scholarship or scientific studies. Consensuses about standards of care are, of course, about methodology. Consensuses about conclusions, however, may also be relevant because if a litigant’s expert witness proffers a conclusion at odds with consensus conclusions, the deviant conclusion implies deviant methodology.

Cheng’s treatment of statistical significance is instructive for how his proposal would create mischief in many different types of adjudications, but especially adjudications of claimed health effects. First, Cheng’s misrepresentation of the consensus among statisticians is telling for the validity of his project. After all, he holds an advanced degree in statistics, and yet he is willing to write that:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[11]

Statisticians, without qualification! And as was shown, Cheng is demonstrably wrong in his use of the cited source to support his representation of what certainly seems like a consensus paper. His précis is not even remotely close to the language of the paper, but the consensus paper is hearsay and can only be used by an expert witness in support of an opinion.  Presumably, another expert witness might contradict the quoted opinion about what “statisticians” have concluded, but it is unclear whether a court could review the underlying A.S.A. paper, take judicial notice of the incorrectness of the proffered opinion, and then exclude the expert witness opinion.

After the 2016 publication of the A.S.A.’s consensus statement, some statisticians did indeed publish editorials claiming it was time to move beyond statistical significance testing. At least one editorial, by an A.S.A. officer, was cited as representing an A.S.A. position, which led the A.S.A. President to appoint a task force to consider the call for an across-the-board rejection of significance testing. In 2021, that task force clearly endorsed significance testing as having a continued role in statistical practice.[12]

Where would this situation leave a gatekeeping court or a factfinding jury? Some obscure psychology journals have abandoned the use of significance testing, but the New England Journal of Medicine has retained the practice, while introducing stronger controls for claims of “significance” when the study at hand has engaged in multiple comparisons.

But Cheng, qua law professor and statistician (and would-be expert witness), claims “statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful,” and the trial must then chase not the validity of the inference of claimed causation but whether there is, or is not, a consensus about the use of a pre-specified threshold for p-values or confidence intervals. Cheng’s proposal about consensuses would turn trials into disputes about whether consensuses exist, and about the scope of the purported agreement, not about truth.

In some instances, there might be a clear consensus, fully supported, on a general causation issue. Consider for instance, the known causal relationship between industrial benzene exposure and acute myelogenous leukemia (AML). This consensus turns out to be rather unhelpful when considering whether minute contamination of carbonated water can cause cancer,[13] or even whether occupational exposure to gasoline, with its low-level benzene (~1%) content, can cause AML.[14]

Frequently, there is also a deep asymmetry in consensus statements. When the evidence for a causal conclusion is very clear, professional societies may weigh in to express their confident conclusions about the existence of causation. Such societies typically do not issue statements that explicitly reject causal claims. The absence of a consensus statement, however, often can be taken to represent a consensus that professional societies do not endorse causal claims, and consider the evidence, at best, equivocal. Those dogs that have not barked can be, and have been, important considerations in gatekeeping.

Contrary to Cheng’s complete dismissal of judges’ epistemic competence, judges can, in many instances, render reasonable gatekeeping decisions by closely considering the absence of consensus statements, or systematic reviews, favoring the litigation claims.[15] At least in this respect, Professor Cheng is right to emphasize the importance of consensus, but he fails to note the importance of its absence, and the ability of litigants and their expert witnesses to inform gatekeeping judges of the relevance of consensus statements, or their absence, to the epistemic assessment of proffered expert witness opinion testimony.


[1] “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022).

[2] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[3] There is an extensive discussion and debate of viability and the validity of asserting rights to trial by jury for many complex civil actions in the modern era. See, e.g., Stephan Landsman & James F. Holderman, “The Evolution of the Jury Trial in America,” 37 Litigation 32 (2010); Robert A. Clifford, “Deselecting the Jury in a Civil Case,” 30 Litigation 8 (Winter 2004); Hugh H. Bownes, “Should Trial by Jury Be Eliminated in Complex Cases,” 1 Risk 75 (1990); Douglas King, “Complex Civil Litigation and the Seventh Amendment Right to a Jury Trial,” 51 Univ. Chi. L. Rev. 581 (1984); Alvin B. Rubin, “Trial by Jury in Complex Civil Cases: Voice of Liberty or Verdict by Confusion?” 462 Ann. Am. Acad. Political & Social Sci. 87 (1982); William V. Luneburg & Mark A. Nordenberg, “Specially Qualified Juries and Expert Nonjury Tribunals: Alternatives for Coping with the Complexities of Modern Civil Litigation,” 67 Virginia L. Rev. 887 (1981); Richard O. Lempert, “Civil Juries and Complex Cases: Let’s Not Rush to Judgment,” 80 Mich. L. Rev. 68 (1981); Comment, “The Case for Special Juries in Complex Civil Litigation,” 89 Yale L. J. 1155 (1980); James S. Campbell & Nicholas Le Poidevin, “Complex Cases and Jury Trials: A Reply to Professor Arnold,” 128 Univ. Penn. L. Rev. 965 (1980); Barry E. Ungar & Theodore R. Mann, “The Jury and the Complex Civil Case,” 6 Litigation 3 (Spring 1980); Morris S. Arnold, “A Historical Inquiry into the Right to Trial by Jury in Complex Civil Litigation,”128 Univ. Penn. L. Rev. 829 (1980); Daniel H. Margolis & Evan M. Slavitt, “The Case Against Trial by Jury in Complex Civil Litigation,” 7 Litigation 19 (1980); Montgomery Kersten, “Preserving the Right to Jury Trial in Complex Civil Cases,” 32 Stanford L. Rev. 99 (1979); Maralynne Flehner, “Jury Trials in Complex Litigation,” 4 St. John’s Law Rev. 751 (1979); Comment, “The Right to a Jury Trial in Complex Civil Litigation,” 92 Harvard L. Rev. 898 (1979); Kathy E. Davidson, “The Right to Trial by Jury in Complex Litigation,” 20 Wm. & Mary L. Rev. 329 (1978); David L. Shapiro & Daniel R. Coquillette, “The Fetish of Jury Trial in Civil Cases: A Comment on Rachal v. Hill,” 85 Harvard L. Rev. 442 (1971); Comment, “English Judge May Not Order Jury Trial in Civil Case in Absence of Special Circumstances. Sims v. William Howard & Son Ltd. (C. A. 1964),” 78 Harv. L. Rev. 676 (1965); Fleming James, Jr., “Right to a Jury Trial in Civil Actions,” 72 Yale L. J. 655 (1963).

[4] In re Japanese Elec. Prods. Antitrust Litig., 631 F.2d 1069, 1079 (3d Cir. 1980). See In re Boise Cascade Sec. Litig., 420 F. Supp. 99, 103 (W.D. Wash. 1976) (“In sum, it appears to this Court that the scope of the problems presented by this case is immense. The factual issues, the complexity of the evidence that will be required to explore those issues, and the time required to do so leads to the conclusion that a jury would not be a rational and capable fact finder.”). See also Ross v. Bernhard, 396 U.S. 532, 538 & n.10, 90 S. Ct. 733 (1970) (discussing the “legal” versus equitable nature of an action that might give rise to a right to trial by jury). Of course, the statistical and scientific complexity of claims was absent from cases tried in common law courts in 1791, at the time of the adoption of the seventh amendment.

[5] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans of Silica, Some Silicates, Coal Dust and para-Aramid Fibrils, vol. 68 (1997).

[6] Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer: The Problem of Conflicting Evidence,” 8 Indoor Built Env’t 121, 121 (1999).

[7] Patrick A. Hessel, John F. Gamble, J. Bernard L. Gee, Graham Gibbs, Francis H.Y. Green, W. Keith C. Morgan, and Brooke T. Mossman, “Silica, Silicosis, and Lung Cancer: A Response to a Recent Working Group Report,” 42 J. Occup & Envt’l Med. 704, 704 (2000).

[8] Steven Nissen & K. Wolski, “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007); Erratum, 357 New Engl. J. Med. 100 (2007).

[9] In re Avandia Mktg., Sales Practices & Prods. Liab. Litig., 2011 WL 13576 (E.D. Pa. Jan. 4, 2011).

[10] Philip D. Home, Stuart J. Pocock, et al., “Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes (RECORD),” 373 Lancet 2125 (2009). The hazard ratio for cardiovascular death was 0.84 (95% C.I., 0.59–1.18), and for myocardial infarction, 1.14 (95% C.I., 0.80–1.63).

[11] Consensus Rule at 424 (emphasis added) (citing Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[12] Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

[13] Sutera v. Perrier Group of America, Inc., 986 F. Supp. 655, 664-65 (D. Mass. 1997).

[14] Burst v. Shell Oil Co., 2015 WL 3755953, at *9 (E.D. La. June 16, 2015), aff’d, 650 F. App’x 170 (5th Cir. 2016), cert. denied, 137 S. Ct. 312 (2016); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1156 (E.D. Wash. 2009).

[15] In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 341 F. Supp. 3d 213 (S.D.N.Y. 2018), aff’d, 982 F.3d 113 (2d Cir. 2020); In re Lipitor (Atorvastatin Calcium) Mktg., Sales Pracs. & Prods. Liab. Litig., 227 F. Supp. 3d 452 (D.S.C. 2017), aff’d, 892 F.3d 624 (4th Cir. 2018); In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., No. 12-MD-2342, 2015 WL 7776911, at *1 (E.D. Pa. Dec. 2, 2015), aff’d, 858 F.3d 787 (3d Cir. 2017); In re Incretin-Based Therapies Prods. Liab. Litig., 524 F. Supp. 3d 1007 (S.D. Cal. 2021); In re Viagra (Sildenafil Citrate) & Cialis (Tadalafil) Prods. Liab. Litig., 424 F. Supp. 3d 781, 798–99 (N.D. Cal. 2020).

Cheng’s Proposed Consensus Rule for Expert Witnesses

September 15th, 2022

Edward K. Cheng is the Hess Professor of Law, in absentia from Vanderbilt Law School, while serving this fall as a visiting professor at Harvard. Professor Cheng is one of the authors of the multi-volume treatise, Modern Scientific Evidence, and the author of many articles on scientific and statistical evidence. Cheng’s most recent article, “The Consensus Rule: A New Approach to Scientific Evidence,”[1] while thought-provoking, follows in the long-standing tradition of law school professors advocating evidence law reforms based upon theoretical considerations devoid of practical or real-world support.

Cheng’s argument for a radical restructuring of Rule 702 is based upon his judgment that jurors and judges are epistemically incompetent to evaluate expert witness opinion testimony. The current legal approach has trial judges acting as gatekeepers of expert witness testimony, and jurors acting as judges of factual scientific claims. Cheng would abolish these roles as beyond their ken.[2] Lay persons can, however, determine which party’s position is supported by the relevant expert community, which he presumes (without evidence) possesses the needed epistemic competence. Accordingly, Cheng would rewrite the legal system’s approach to important legal disputes, such as disputes over causal claims, from:

Whether a given substance causes a given disease

to

Whether the expert community believes that a given substance causes a given disease.

Cheng channels the philosophical understanding of the ancients who realized that one must have expertise to judge whether someone else has used that expertise correctly. And he channels the contemporary understanding that knowledge is a social endeavor, not the unique perspective of an individual in isolation. From these twin premisses, Cheng derives a radical and cynical proposal to reform the law of expert witness testimony. In his vision, experts would come to court not to give their own opinions, and certainly not to try to explain how they arrive at their opinions from the available evidence. For him, the current procedure is too much like playing chess with a monkey. The expert function would consist of telling the jury what the expert witness’s community believes.[3] Jurors would not decide the “actual substantive questions,” but simply decide what they believe the relevant expert witness community accepts as a consensus. This radical restructuring is what Cheng calls the “consensus rule.”

In this proposed “consensus rule,” there is no room for gatekeeping. Parties continue to call expert witnesses, but only as conduits for the “consensus” opinions of their fields. Indeed, Cheng’s proposal would radically limit expert witnesses to service as pollsters; their testimony would present only their views of what the consensus is in their fields. This polling information is the only evidence that the jury would hear from expert witnesses, because this is the only evidence that Cheng believes the jury is epistemically competent to assess.[4]

Under Cheng’s Consensus Rule, when there is no consensus in the realm, the expert witness regime defaults to “anything goes,” without gatekeeping.[5] Judges would continue to exercise some control over who is qualified to testify, but only insofar as the proposed experts must be in a position to know what the consensus is in their fields.

Cheng does not explain why, under his proposed “consensus rule,” subject matter experts are needed at all. The parties might call librarians, or sociologists of science, to talk about the relevant evidence of consensus. If a party cannot afford a librarian expert witness, then perhaps lawyers could directly present the results of their PubMed and other internet searches.

Cheng may be right that his “deferential approach” would eliminate having the inexpert pass judgment on the expert. The “consensus rule” would reduce science to polling, conducted informally, often without documentation or recording, by partisan expert witnesses. This proposal hardly better reflects, as he argues, the “true” nature of science. In Cheng’s vision, science in the courtroom is just a communal opinion, without evidence and without inference. To be sure, this alternative universe is tidier and less disputatious, but it is hardly science or knowledge. We are left with opinions about opinions, without internal or external validity, and without good and sufficient facts and data.

Cheng claims that his proposed Consensus Rule is epistemically superior to Rule 702 gatekeeping. For the intellectually curious and able, his proposal is a counsel of despair. Deference to the herd, he tells us, “is not merely optimal—it is the only practical strategy.”[6] In perhaps the most extreme overstatement of his thesis, Cheng tells us that

“deference is arguably not due to any individual at all! Individual experts can be incompetent, biased, error prone, or fickle—their personal judgments are not and have never been the source of reliability. Rather, proper deference is to the community of experts, all of the people who have spent their careers and considerable talents accumulating knowledge in their field.”[7]

Cheng’s hypothesized community of experts, however, is worthy of deference only by virtue of the soundness of its judgments. If a community has not severely tested its opinions, then its existence as a community is irrelevant. Cheng’s deference is the sort of phenomenon that helped create Lysenkoism and other intellectual fads that were beyond challenge with actual data.

There is, I fear, some partial truth to Cheng’s judgment of juries and judges as epistemically incompetent, or challenged, to judge science, but his judgment seems greatly overstated. Finding aberrant jury verdicts would be easy, but Cheng provides no meaningful examples of gatekeeping gone wrong. Professor Cheng may have over-generalized in stating that judges are epistemically incompetent to make substantive expert determinations. He surely cannot be suggesting that judges never have sufficient scientific acumen to determine the relevance and reliability of expert witness opinion. If judges can, in some cases, make a reasonable go at gatekeeping, why then is Cheng advocating a general rule that strips all judges of all gatekeeping responsibility with respect to expert witnesses?

Clearly judges lack the technical resources, time, and background training to delve deeply into the methodological issues with which they may be confronted. This situation could be ameliorated by budgeting for science advisors and independent expert witnesses, and by creating specialty courts staffed with judges who have scientific training. Cheng acknowledges this response, but he suggests that it conflicts with “norms about generalist judges.”[8] This retreat to norms is curious in the face of Cheng’s radical proposals, and the prevalence of using specialist judges to adjudicate commercial and patent disputes.

Although Cheng is correct that assessing validity and reliability of scientific inferences and conclusions often cannot be reduced to a cookbook or checklist approach, not all expertise is as opaque as Cheng suggests. In his view, lawyers are deluded into thinking that they can understand the relevant science, with law professors being even worse offenders.[9] Cross-examining a technical expert witness can be difficult and challenging, but lawyers on both sides of the aisle occasionally demolish the most skilled and knowledgeable expert witnesses, on substantive grounds. And these demolitions happen to expert witnesses who typically, self-servingly claim that they have robust consensuses agreeing with their opinions.

While scolding us that we must get “comfortable with relying on the expertise and authority of others,” Cheng reassures us that deferring to authority is “not laziness or an abdication of our intellectual responsibility.”[10] According to Cheng, the only reason to defer to experts’ opinions is that they are telling us what their community would say.[11] Good reasons, sound evidence, and valid inference need not worry us in Cheng’s world.

Finding Consensus

Cheng tells us that his Consensus Rule would look something like:

“Rule 702A. If the relevant scientific community believes a fact involving specialized knowledge, then that fact is established accordingly.”

Imagine the endless litigation over what the “relevant” community is. For a health effect claim about a drug and heart attacks, is it the community of cardiologists or of epidemiologists? Do we accept the pronouncements of the American Heart Association or those of the American College of Cardiology? If there is a clear consensus based upon a clinical trial, which appears to be based upon suspect data, is discovery of the underlying data beyond the reach of litigants because the correctness of the allegedly dispositive study is simply not in issue? Would courts have to take judicial notice of the clear consensus and shut down any attempt to get to the truth of the matter?

Cheng acknowledges that cases will involve issues that are controversial or undeveloped, without expert community consensus. Many litigations start after publication of a single study or meta-analysis, which is hardly the basis for any consensus. Cheng appears content, in this expansive area, to revert to “anything goes,” because if the expert community has not coalesced around a unified view, or if the community is divided, then the courts cannot do better than flipping a coin! Cheng’s proposal thus has a loophole the size of the Sun.

Cheng tells us, unhelpfully, that “[d]etermining consensus is difficult in some cases, and less so in others.”[12] Determining consensus may not be straightforward, but no matter. Consensus Rule questions are not epistemically challenging and are thus “far more manageable,” because they require no special expertise. (Again, why even call a subject matter expert witness, as opposed to a science journalist or librarian?) Cheng further advises that consensus is “a bit like the reasonable person standard in negligence,” but this simply conflates normative judgments with scientific judgments.[13]

Cheng’s Consensus Rule would allow the use of a systematic review or a meta-analysis, not for evidence of the correctness of its conclusions, but only as evidence of a consensus.[14] The thought experiment of how this suggestion plays out in the real world may cause some agita. The litigation over Avandia began within days of the publication of a meta-analysis in the New England Journal of Medicine.[15] So, some evidence of consensus, right? But then the letters to the editor within a few weeks of publication showed that the meta-analysis was fatally flawed. Inadmissible! Under the Consensus Rule the correctness or the methodological appropriateness of the meta-analysis is irrelevant. A few months later, another meta-analysis is published, which fails to find the risk that the original meta-analysis claimed. Is the trial now about which meta-analysis represents the community’s consensus, or are we thrown into the game of anything goes, where expert witnesses just say things, without judicial supervision? A few years go by, and now there is a large clinical trial that supersedes all the meta-analyses of small trials.[16] Is a single large clinical trial now admissible as evidence of a new consensus, or are only systematic reviews and meta-analyses relevant evidence?

Cheng’s Consensus Rule will be useless in most determinations of specific causation. It will be a very rare case indeed when a scientific organization issues a consensus statement about plaintiff John Doe. Very few tort cases involve putative causal agents that are thought to cause every instance of some disease in every person exposed to the agent. Even when a scientific community has addressed general causation, it will rarely have resolved all the uncertainty about the causal efficacy of all levels of exposure or the appropriate window of latency. So Cheng’s proposal effectively guarantees the removal of specific causation from the control of Rule 702 gatekeeping.

The potential for misrepresenting consensus is even greater than the potential for misrepresenting actual study results. At least the data are the data, but what will jurors do when they are regaled by testimony about the informal consensus reached in the hotel lobby of the latest scientific conference? Regulatory pronouncements that are based upon precautionary principles will be misrepresented as scientific consensus. Findings by the International Agency for Research on Cancer that a substance is a Group 2A “probable human carcinogen” will be hawked as a consensus, even though the classification specifically disclaims any quantitative meaning for “probable,” and it directly equates to “insufficient” evidence of carcinogenicity in humans.

In some cases, as Cheng notes, organizations such as the National Research Council or the National Academies of Sciences, Engineering, and Medicine (NASEM) will have weighed in on a controversy that has found its way into court.[17] Any help from such organizations will likely be illusory. Consider the 2006 publication by NASEM of a comprehensive review of the available studies on non-pulmonary cancers and asbestos exposure. The writing group presented its assessment of colorectal cancer as not causally associated with occupational asbestos exposure.[18] By 2007, the following year, expert witnesses for plaintiffs argued that the NASEM publication no longer represented a consensus because one or two (truly inconsequential) studies had been published after the report and thus had not been considered. Under Cheng’s proposal, this dodge would appear to be enough to oust the consensus rule, and default to the “anything goes” rule. The scientific record can change rapidly, and many true consensus statements quickly find their way into the dustbin of scientific history.

Cheng greatly underestimates the difficulty in ascertaining “consensus.” Sometimes, to be sure, professional societies issue consensus statements, but they are often tentative and inconclusive. In many areas of science, there will be overlapping realms of expertise, with different disciplines issuing inconsistent “consensus” statements. Even within a single expert community, there may be two schools of thought about a particular issue.

There are instances, perhaps more than a few, when a consensus is epistemically flawed. If, as is the case in many health effect claims, plaintiffs rely upon the so-called linear no-threshold dose-response (LNT) theory of carcinogenesis, plaintiffs will point to regulatory pronouncements that embrace LNT as “the consensus.” When scientists are being honest, they generally recognize LNT as part of a precautionary principle approach, which may make sense as the foundation of “risk assessment.” The widespread assumption of LNT in regulatory agencies, and among scientists who work in such agencies, is understandable, but LNT remains an assumption. Nonetheless, we already see LNT hawked as a consensus, which under Cheng’s Consensus Rule would become the key dispositive issue, while quashing the mountain of evidence that there are, in fact, defense mechanisms to carcinogenesis that result in practical thresholds.

Beyond regulatory pronouncements, some areas of scientific endeavor have themselves become politicized and extremist. Tobacco smoking surely causes lung cancer, but the studies of environmental tobacco smoking and lung cancer have been oversold. In areas of non-scientific dispute, such as the history of alleged corporate malfeasance, juries will be treated to “the consensus” of Marxist labor historians, without having to consider the actual underlying historical documents. Cheng tells us that his Consensus Rule is a “realistic way of treating nonscientific expertise,”[19] which would seem to cover historian expert witnesses. Yet here, lawyers and lay fact finders are fully capable of exploring the glib historical conclusions of historian witnesses with cross-examination on the underlying documentary facts of the proffered opinions.

The Alleged Warrant for the Consensus Rule

If Professor Cheng is correct that the current judicial system, with decisions by juries and judges, is epistemically incompetent, does his Consensus Rule necessarily follow?  Not really. If we are going to engage in radical reforms, then the institutionalization of blue-ribbon juries would make much greater sense. As for Cheng’s claim that knowledge is “social,” the law of evidence already permits the use of true consensus statements as learned treatises, both to impeach expert witnesses who disagree, and (in federal court) to urge the truth of the learned treatise.

The gatekeeping process of Rule 702, which Professor Cheng would throw overboard, has important advantages in that judges ideally will articulate reasons for finding expert witness opinion testimony admissible or not. These reasons can be evaluated, discussed, and debated, with judges, lawyers, and the public involved. This gatekeeping process is rational and socially open.

Some Other Missteps in Cheng’s Argument

Experts on Both Sides are Too Extreme

Cheng’s proposal is based, in part, upon his assessment that the adversarial system causes the parties to choose expert witnesses “at the extremes.” Here again, Cheng provides no empirical evidence for his assessment. There is a mechanical assumption, often made by people who do not bother to learn the details of a scientific dispute, that the truth must somehow lie in the “middle.” For instance, in MDL 926, the silicone gel breast implant litigation, presiding Judge Sam Pointer complained about the parties’ expert witnesses being too extreme. Judge Pointer believed that MDL judges should not entertain Rule 702 challenges, which were in his view properly heard by the transferor courts. As a result, Judge Robert Jones, and then Judge Jack Weinstein, conducted thorough Rule 702 hearings and found that the plaintiffs’ expert witnesses’ opinions were unreliable and insufficiently supported by the available evidence.[20] Judge Weinstein started the process of selecting court-appointed expert witnesses for the remaining New York cases, which goaded Judge Pointer into taking the process back to the MDL court level. After appointing four highly qualified expert witnesses, Judge Pointer continued to believe that the parties’ expert witnesses were “extremists,” and that the court’s own experts would come down somewhere between them. When the court-appointed experts filed their reports, Judge Pointer was shocked that all four of his experts sided with the defense in rejecting the tendentious claims of plaintiffs’ expert witnesses.

Statistical Significance

Along the way, in advocating his radical proposal, Professor Cheng made some other curious announcements. For instance, he tells us that “[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[21] Cheng’s purpose here is unclear, but the source he cited does not remotely support his statement, and certainly not his gross overgeneralization about “statisticians.” If this is the way he envisions experts will report “consensus,” then his program seems broken at its inception. The American Statistical Association’s (ASA) p-value “consensus” statement articulated six principles, the third of which noted that

“[s]cientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.”

This is a few light years away from statisticians’ concluding that statistical significance thresholds are more distortive than helpful. The ASA p-value statement further explains that

“[t]he widespread use of ‘statistical significance’ (generally interpreted as ‘p < 0.05’) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.”[22]

In the science of health effects, statistical significance remains extremely important, but it has never been a license for making causal claims. As Sir Austin Bradford Hill noted in his famous after-dinner speech, ruling out chance (and bias) as an explanation for an association was merely a predicate for evaluating the association for causality.[23]

Over-endorsing Animal Studies

Under Professor Cheng’s Consensus Rule, the appropriate consensus might well be one generated solely by animal studies. Cheng tells us that “perhaps” scientists do not consider toxicology when the pertinent epidemiology is “clear.” When the epidemiology, however, is unclear, scientists consider toxicology.[24] Well, of course, but the key question is whether a consensus about causation in humans will be based upon non-human animal studies. Cheng seems to answer this question in the affirmative by criticizing courts that have required epidemiologic studies “even though the entire field of toxicology uses tissue and animal studies to make inferences, often in combination with and especially in the absence of epidemiology.”[25] The vitality of the field of toxicology is hardly undermined by its not generally providing sufficient grounds for judgments of human causation.

Relative Risk Greater Than Two

In the midst of his argument for the Consensus Rule, Cheng points critically to what he calls “questionable proxies” for scientific certainty. One such proxy is the judicial requirement of risk ratios in excess of two. His short discussion appears to be focused upon the inference of specific causation in a given case, but it leads to a non sequitur:

“Some courts have required a relative risk of 2.0 in toxic tort cases, requiring a doubling of the population risk before considering causation.73 But the preponderance standard does not require that the substance more likely than not caused any case of the disease in the population, it requires that the substance more likely than not caused the plaintiff’s case.”[26]

Of course, it is exactly because we are interested in the probability of causation of the plaintiff’s case that we advert to the risk ratio, which gives us some sense of whether the exposure “more likely than not” caused the plaintiff’s case. Unless the plaintiff can show he is somehow unique, he is “any case.” In many instances, the plaintiff cannot show how he is different from the participants of the study that gave rise to a risk ratio of less than two.
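The arithmetic behind the risk-ratio-greater-than-two heuristic can be made explicit. A minimal sketch, assuming an unbiased, unconfounded risk ratio that applies to a plaintiff who is otherwise indistinguishable from the study participants (the standard simplifying assumptions):

```python
# Attributable fraction among the exposed, often used as a rough proxy for
# the probability that an exposed person's disease was caused by the exposure
# (valid only under the simplifying assumptions stated above).
def probability_of_causation(risk_ratio: float) -> float:
    if risk_ratio <= 1.0:
        return 0.0  # no excess risk to attribute to the exposure
    return (risk_ratio - 1.0) / risk_ratio

print(probability_of_causation(3.0))  # ~0.67 -- clears "more likely than not"
print(probability_of_causation(2.0))  # 0.5   -- the break-even point
print(probability_of_causation(1.5))  # ~0.33 -- falls short of the preponderance standard
```

On these assumptions, a risk ratio of two marks the point at which the exposure, rather than background causes, becomes the more probable explanation for the plaintiff’s disease.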


[1] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[2] Consensus Rule at 410 (“The judge and the jury, lacking in expertise, are not competent to handle the questions that the Daubert framework assigns to them.”)

[3] Consensus Rule at 467 (“Under the Consensus Rule, experts no longer offer their personal opinions on causation or teach the jury how to assess the underlying studies. Instead, their testimony focuses on what the expert community as a whole believes about causation.”)

[4] Consensus Rule at 467.

[5] Consensus Rule at 437.

[6] Consensus Rule at 434.

[7] Consensus Rule at 434.

[8] Consensus Rule at 422.

[9] Consensus Rule at 429.

[10] Consensus Rule at 432-33.

[11] Consensus Rule at 434.

[12] Consensus Rule at 456.

[13] Consensus Rule at 457.

[14] Consensus Rule at 459.

[15] Steven E. Nissen, M.D., and Kathy Wolski, M.P.H., “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007).

[16] P.D. Home, et al., “Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes (RECORD),” 373 Lancet 2125 (2009).

[17] Consensus Rule at 458.

[18] Jonathan M. Samet, et al., Asbestos: Selected Health Effects (2006).

[19] Consensus Rule at 445.

[20] Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996) (excluding plaintiffs’ expert witnesses’ causation opinions); In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996) (granting partial summary judgment on claims of systemic disease causation).

[21] Consensus Rule at 424 (citing Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[22] Id.

[23] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). See Schachtman, “Ruling Out Bias & Confounding is Necessary to Evaluate Expert Witness Causation Opinions” (Oct. 29, 2018); “Woodside & Davis on the Bradford Hill Considerations” (Aug. 23, 2013); Frank C. Woodside, III & Allison G. Davis, “The Bradford Hill Criteria: The Forgotten Predicate,” 35 Thomas Jefferson L. Rev. 103 (2013).

[24] Consensus Rule at 444.

[25] Consensus Rule at 424 & n. 74 (citing to one of multiple court advisory expert witnesses in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1449 (D. Or. 1996), who suggested that toxicology would be appropriate to consider when the epidemiology was not clear). Citing to one outlier advisor is a rather strange move for Cheng considering that the “consensus” was readily discernible to the trial judge in Hall, and to Judge Jack Weinstein, a few months later, in In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996).

[26] Consensus Rule at 424 & n. 73 (citing Lucinda M. Finley, “Guarding the Gate to the Courthouse: How Trial Judges Are Using Their Evidentiary Screening Role to Remake Tort Causation Rules,” 49 DePaul L. Rev. 335, 348–49 (2000)). See Schachtman, “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

Amicus Curious – Gelbach’s Foray into Lipitor Litigation

August 25th, 2022

Professor Schauer’s discussion of statistical significance, covered in my last post,[1] is curious for its disclaimer that “there is no claim here that measures of statistical significance map easily onto measures of the burden of proof.” Having made the disclaimer, Schauer proceeds to fall into the transposition fallacy, which contradicts his disclaimer and which, generally speaking, is not a good thing for a law professor eager to advance the understanding of “The Proof” to do.

Perhaps more curious than Schauer’s error is his citation support for his disclaimer.[2] The cited paper by Jonah B. Gelbach is one of several of Gelbach’s papers that advance the claim that the p-value does indeed map onto posterior probability and the burden of proof. Gelbach’s claim has also been the centerpiece of his role as an advocate in support of plaintiffs in the Lipitor (atorvastatin) multi-district litigation (MDL), over claims that ingestion of atorvastatin causes diabetes mellitus.

Gelbach’s intervention as plaintiffs’ amicus is peculiar on many fronts. At the time of the Lipitor litigation, Sonal Singh was an epidemiologist and Assistant Professor of Medicine at the Johns Hopkins University. The MDL trial court initially held that Singh’s proffered testimony was inadmissible because of his failure to consider daily dose.[3] In a second attempt, Singh offered an opinion for the 10 mg daily dose of atorvastatin, based largely upon the results of a clinical trial known as ASCOT-LLA.[4]

The ASCOT-LLA trial randomized 19,342 participants with hypertension and at least three other cardiovascular risk factors to two different anti-hypertensive medications. A subgroup with total cholesterol levels less than or equal to 6.5 mmol/L was randomized to either 10 mg of atorvastatin daily or placebo. The investigators planned to follow up for five years, but they stopped after 3.3 years because of a clear benefit on the primary composite end point of non-fatal myocardial infarction and fatal coronary heart disease. At the time of stopping, there were 100 events of the primary pre-specified outcome in the atorvastatin group, compared with 154 events in the placebo group (hazard ratio 0.64 [95% CI 0.50 – 0.83], p = 0.0005).

The atorvastatin component of ASCOT-LLA had, in addition to its primary pre-specified outcome, seven secondary end points and seven tertiary end points. The emergence of diabetes mellitus in this trial population, which clearly was at high risk of developing diabetes, was one of the tertiary end points. Primary, secondary, and tertiary end points were reported in ASCOT-LLA without adjustment for the obvious multiple comparisons. In the treatment group, 3.0% developed diabetes over the course of the trial, whereas 2.6% developed diabetes in the placebo group. The unadjusted hazard ratio was 1.15 (0.91 – 1.44), p = 0.2493.[5] Given the 15 trial end points, an adjusted p-value for this particular hazard ratio, for diabetes, might well exceed 0.5, and even approach 1.0.
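For a rough sense of scale, a simple Bonferroni correction multiplies the unadjusted p-value by the number of comparisons and caps the result at 1.0. The arithmetic below is offered only as an illustration of that point; the trialists themselves made no such adjustment:

```python
# Illustrative Bonferroni adjustment of the ASCOT-LLA diabetes p-value,
# taking the 15 reported end points (1 primary + 7 secondary + 7 tertiary)
# as the family of comparisons.
n_endpoints = 15
p_unadjusted = 0.2493  # reported, unadjusted two-sided p-value for new-onset diabetes

p_adjusted = min(1.0, n_endpoints * p_unadjusted)
print(p_adjusted)  # 1.0 -- the adjusted value hits the ceiling, far above 0.5
```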

On this record, Dr. Singh honestly acknowledged that statistical significance was important, and that the diabetes finding in ASCOT-LLA might have been the result of low statistical power or of no association at all. Based upon the trial data alone, he testified that “one can neither confirm nor deny that atorvastatin 10 mg is associated with significantly increased risk of type 2 diabetes.”[6] The trial court excluded Dr. Singh’s 10 mg/day causal opinion, but admitted his 80 mg/day opinion. On appeal, the Fourth Circuit affirmed the MDL district court’s rulings.[7]

Jonah Gelbach is a professor of law at the University of California at Berkeley. He attended Yale Law School, and received his doctorate in economics from MIT.

Professor Gelbach entered the Lipitor fray to present a single issue: whether statistical significance at conventionally demanding levels such as 5 percent is an appropriate basis for excluding expert testimony based on statistical evidence from a single study that did not achieve statistical significance.

Professor Gelbach is no stranger to antic proposals.[8] As amicus curious in the Lipitor litigation, Gelbach asserts that plaintiffs’ expert witness, Dr. Singh, was wrong in his testimony about not being able to confirm the ASCOT-LLA association because he, Gelbach, could confirm the association.[9] Ultimately, the Fourth Circuit did not discuss Gelbach’s contentions, which is not surprising considering that the asserted arguments and alleged factual considerations were not only dehors the record, but in contradiction of the record.

Gelbach’s curious claim is that any time a risk ratio, for an exposure and an outcome of interest, is greater than 1.0, with a p-value < 0.5,[10] the evidence should be not only admissible, but sufficient to support a conclusion of causation. Gelbach states his claim in the context of discussing a single randomized controlled trial (ASCOT-LLA), but his broad pronouncements are carelessly framed such that others may take them to apply to a single observational study, with its greater threats to internal validity.

Contra Kumho Tire

To get to his conclusion, Gelbach attempts to remove the constraints of traditional standards of significance probability. Kumho Tire teaches that expert witnesses must “employ[] in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”[11] For Gelbach, this “eminently reasonable admonition” does not impose any constraints on statistical inference in the courtroom. Statistical significance at traditional levels (p < 0.05) is for elitist scholarly work, not for the “practical” rent-seeking work of the tort bar. According to Gelbach, the inflation of the significance level ten-fold to p < 0.5 is merely a matter of “weight” and not admissibility of any challenged opinion testimony.

Likelihood Ratios and Posterior Probabilities

Gelbach maintains that any evidence that has a likelihood ratio (LR > 1) greater than one is relevant, and should be admissible under Federal Rule of Evidence 401.[12] This argument ignores the other operative Federal Rules of Evidence, namely 702 and 703, which impose additional criteria of admissibility for expert witness opinion testimony.

With respect to variance and random error, Gelbach tells us that any evidence that generates an LR > 1 should be admitted when “the statistical evidence is statistically significant below the 50 percent level, which will be true when the p-value is less than 0.5.”[13]

At times, Gelbach seems to be discussing the admissibility of the ASCOT-LLA study itself, and not the proffered opinion testimony of Dr. Singh. The study itself would not be admissible, although it is clearly the sort of hearsay an expert witness in the field may consider. If Dr. Singh had reframed and recalculated the statistical comparisons, then the Rule 703 requirement of “reasonable reliance” by scientists in the field of interest might not have been satisfied.

Gelbach also generates a posterior probability (0.77), which is based upon his calculations from data in the ASCOT-LLA trial, and not the posterior probability of Dr. Singh’s opinion. The posterior probability, as calculated, is problematic on many fronts.

Gelbach does not present his calculations – for the sake of brevity he says – but he tells us that the ASCOT-LLA data yield a likelihood ratio of roughly 1.9, and a p-value of 0.126.[14] What the clinical trialists reported was a hazard ratio of 1.15, which is a weak association on most researchers’ scales, with a two-sided p-value of 0.25, which is five times higher than the usual 5 percent. Gelbach does not explain how or why his calculated p-value for the likelihood ratio is roughly half the unadjusted, two-sided p-value for the tertiary outcome from ASCOT-LLA.
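One possible route to a number in that neighborhood, offered purely as conjecture because Gelbach does not show his work, is a one-sided test computed from the published summary statistics. The sketch below backs the standard error of the log hazard ratio out of the reported 95% confidence interval, under a normal approximation; it is not a claim about what Gelbach actually did:

    # Conjectural back-calculation from the published ASCOT-LLA statistics
    # (HR 1.15, 95% CI 0.91-1.44), assuming a normal approximation for the
    # log hazard ratio. Not a reconstruction of Gelbach's undisclosed work.
    import math
    from statistics import NormalDist

    hr, lo, hi = 1.15, 0.91, 1.44
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log(HR) from the CI
    z = math.log(hr) / se                             # roughly 1.19

    p_two_sided = 2 * (1 - NormalDist().cdf(z))       # ~0.23, near the reported 0.25
    p_one_sided = 1 - NormalDist().cdf(z)             # ~0.12, near Gelbach's 0.126

    print(round(p_two_sided, 3), round(p_one_sided, 3))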

As noted, the reported diabetes hazard ratio of 1.15 was a tertiary outcome for the ASCOT trial, one of 15 calculated by the trialists, with p-values unadjusted for multiple comparisons.  The failure to adjust is perhaps excusable in that some (but certainly not all) of the outcome variables are overlapping or correlated. A sophisticated reader would not be misled; only when someone like Gelbach attempts to manufacture an inflated posterior probability without accounting for the gross underestimate in variance is there an insult to statistical science. Gelbach’s recalculated p-value for his LR, if adjusted for the multiplicity of comparisons in this trial, would likely exceed 0.5, rendering all his arguments nugatory.

Using the statistics as presented by the published ASCOT-LLA trial to generate a posterior probability also ignores the potential biases (systematic errors) in data collection, the unadjusted hazard ratios, the potential for departures from random sampling, errors in administering the participant recruiting and inclusion process, and other errors in measurements, data collection, data cleaning, and reporting.

Gelbach correctly notes that there is nothing methodologically inappropriate in advocating likelihood ratios, but he is less than forthcoming in explaining that such ratios translate into a posterior probability only if he posits a prior probability of 0.5.[15] His pretense to having simply stated “mathematical facts” unravels when we consider his extreme, unrealistic, and unscientific assumptions.

The Problematic Prior

Gelbach glibly assumes that the starting point, the prior probability, for his analysis of Dr. Singh’s opinion is 50%. This is an old and common mistake,[16] long since debunked.[17] Gelbach’s assumption is part of an old controversy, which surfaced in early cases concerning disputed paternity. The assumption, however, is wrong legally and philosophically.

The law simply does not hand out 0.5 prior probability to both parties at the beginning of a trial. As Professor Jaffee noted almost 35 years ago:

“In the world of Anglo-American jurisprudence, every defendant, civil and criminal, is presumed not liable. So, every claim (civil or criminal) starts at ground zero (with no legal probability) and depends entirely upon proofs actually adduced.”[18]

Gelbach assumes that assigning “equal prior probability” to two adverse parties is fair, because the fact-finder would not start hearing evidence with any notion of which party’s contentions are correct. The 0.5/0.5 starting point, however, is neither fair nor is it the law.[19] The even odds prior is also not good science.

The defense is entitled to a presumption that it is not liable, and the plaintiff must start at zero.  Bayesians understand that this is the death knell of their beautiful model.  If the prior probability is zero, then Bayes’ Theorem tells us mathematically that no evidence, no matter how large a likelihood ratio, can move the prior probability of zero towards one. Bayes’ theorem may be a correct statement about inverse probabilities, but still be an inadequate or inaccurate model for how factfinders do, or should, reason in determining the ultimate facts of a case.
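The arithmetic behind both points is simple enough to set out in a few lines. In the odds form of Bayes’ Theorem, posterior odds equal prior odds multiplied by the likelihood ratio. The sketch below uses arbitrary likelihood ratios, chosen for illustration only, to show how much work the 0.5 prior does, and how a prior of zero cannot be budged by any likelihood ratio, however large:

    # Minimal sketch of the odds form of Bayes' Theorem:
    # posterior odds = prior odds x likelihood ratio.
    # The likelihood ratios below are arbitrary illustrative values.

    def posterior_probability(prior: float, lr: float) -> float:
        """Convert a prior probability and a likelihood ratio into a posterior."""
        if prior == 0.0:
            return 0.0                  # zero prior odds stay zero, whatever the LR
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * lr
        return posterior_odds / (1.0 + posterior_odds)

    for lr in (1.5, 2.0, 10.0, 1000.0):
        print(f"LR {lr:>6}: prior 0.5 -> {posterior_probability(0.5, lr):.3f}; "
              f"prior 0.0 -> {posterior_probability(0.0, lr):.3f}")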

We can see how unrealistic and unfair Gelbach’s implied prior probability is if we visualize the proof process as a football field. To win, plaintiffs do not need to score a touchdown; they need only cross the mid-field 50-yard line. Rather than making plaintiffs start at the zero-yard line, however, Gelbach would put them right on the 50-yard line. Since one toe over the mid-field line is victory, the plaintiff is spotted 99.99+% of its burden of presenting evidence to build up a 50% probability. Instead, plaintiffs are allowed to scoot from the zero-yard line right up to the mid-field line, claiming success, where even the slightest breeze might give them winning cases. Somehow, in Gelbach’s model, plaintiffs no longer have to present evidence to traverse the first half of the field.

The even odds starting point is completely unrealistic in terms of the events upon which the parties are wagering. The ASCOT-LLA study might have shown a protective association between atorvastatin and diabetes, or it might have shown no association at all, or it might have shown a larger hazard ratio than measured in this particular sample. Recall that the confidence interval for the diabetes hazard ratio ran from 0.91 to 1.44. In other words, parameters from 0.91 (protective association), to 1.0 (no association), to 1.44 (harmful association) were all reasonably compatible with the observed statistic, based upon this one study’s data. The potential outcomes are not binary, which makes the even odds starting point inappropriate.[20]


[1] “Schauer’s Long Footnote on Statistical Significance” (Aug. 21, 2022).

[2] Frederick Schauer, The Proof: Uses of Evidence in Law, Politics, and Everything Else 54-55 (2022) (citing Michelle M. Burtis, Jonah B. Gelbach, and Bruce H. Kobayashi, “Error Costs, Legal Standards of Proof, and Statistical Significance,” 25 Supreme Court Economic Rev. 1 (2017)).

[3] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., MDL No. 2:14–mn–02502–RMG, 2015 WL 6941132, at *1  (D.S.C. Oct. 22, 2015).

[4] Peter S. Sever, et al., “Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial,” 361 Lancet 1149 (2003). [cited here as ASCOT-LLA]

[5] ASCOT-LLA at 1153 & Table 3.

[6] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 174 F.Supp. 3d 911, 921 (D.S.C. 2016) (quoting Dr. Singh’s testimony).

[7] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 892 F.3d 624, 638-39 (4th Cir. 2018) (affirming MDL trial court’s exclusion in part of Dr. Singh).

[8] SeeExpert Witness Mining – Antic Proposals for Reform” (Nov. 4, 2014).

[9] Brief for Amicus Curiae Jonah B. Gelbach in Support of Plaintiffs-Appellants, In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 2017 WL 1628475 (April 28, 2017). [Cited as Gelbach]

[10] Gelbach at *2.

[11] Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

[12] Gelbach at *5.

[13] Gelbach at *2, *6.

[14] Gelbach at *15.

[15] Gelbach at *19-20.

[16] See Richard A. Posner, “An Economic Approach to the Law of Evidence,” 51 Stanford L. Rev. 1477, 1514 (1999) (asserting that the “unbiased fact-finder” should start hearing a case with even odds; “[I]deally we want the trier of fact to work from prior odds of 1 to 1 that the plaintiff or prosecutor has a meritorious case. A substantial departure from this position, in either direction, marks the trier of fact as biased.”).

[17] See, e.g., Richard D. Friedman, “A Presumption of Innocence, Not of Even Odds,” 52 Stan. L. Rev. 874 (2000). [Friedman]

[18] Leonard R. Jaffee, “Prior Probability – A Black Hole in the Mathematician’s View of the Sufficiency and Weight of Evidence,” 9 Cardozo L. Rev. 967, 986 (1988).

[19] Id. at p.994 & n.35.

[20] Friedman at 877.

The Rise of Agnotology as Conspiracy Theory

July 19th, 2022

A few egregious articles in the biomedical literature have begun to endorse explicitly asymmetrical standards for inferring causation in the context of environmental or occupational exposures. Very little if anything is needed for inferring causation, and nothing counts against causation.  If authors refuse to infer causation, then they are agents of “industry,” epidemiologic malfeasors, and doubt mongers.

For an example of this genre, take the recent article, entitled “Toolkit for detecting misused epidemiological methods.”[1] [Toolkit] Please.

The asymmetry begins with Trump-like projection of the authors’ own foibles. The principal hammer in the authors’ toolkit for detecting misused epidemiologic methods is personal, financial bias. And yet, somehow, in an article that calls out other scientists for having received money from “industry,” the authors overlooked the business of disclosing their receipt of monies from one of the biggest industries around – the lawsuit industry.

Under the heading “competing interests,” the authors state that “they have no competing interests.”[2]  Lead author, Colin L. Soskolne, was, however, an active, partisan expert witness for plaintiffs’ counsel in diacetyl litigation.[3] In an asbestos case before the Pennsylvania Supreme Court, Rost v. Ford Motor Co., Soskolne signed on to an amicus brief, supporting the plaintiff, using his science credentials, without disclosing his expert witness work for plaintiffs, or his long-standing anti-asbestos advocacy.[4]

Author Shira Kramer signed on to Toolkit, without disclosing any conflicts, but with an even more impressive résumé of pro-plaintiff litigation experience.[5] Kramer is the owner of Epidemiology International, in Cockeysville, Maryland, where she services the lawsuit industry. She too was an “amicus” in Rost, without disclosing her extensive plaintiff-side litigation consulting and testifying.

Carl Cranor, another author of Toolkit, takes first place for hypocrisy on conflicts of interest. As a founder of Council for Education and Research on Toxics (CERT), he has sterling credentials for monetizing the bounty hunt against “carcinogens,” most recently against coffee.[6] He has testified in denture cream and benzene litigation, for plaintiffs. When he was excluded under Rule 702 from the Milward case, CERT filed an amicus brief on his behalf, without disclosing that Cranor was a founder of that organization.[7], [8]

The title seems reasonably fair-minded but the virulent bias of the authors is soon revealed. The Toolkit is presented as a Table in the middle of the article, but the actual “tools” are for the most part not seriously discussed, other than advice to “follow the money” to identify financial conflicts of interest.

The authors acknowledge that epidemiology provides critical knowledge of risk factors and causation of disease, but they quickly transition to an effort to silence any industry commentator on any specific epidemiologic issue. As we will see, the lawsuit industry is given a complete pass. Not surprisingly, several of the authors (Kramer, Cranor, Soskolne) have worked closely in tandem with the lawsuit industry, and have derived financial rewards for their efforts.

Repeatedly, the authors tell us that epidemiologic methods and language are misused by “powerful interests,” which have financial stakes in the outcome of research. Agents of these interests foment uncertainty and doubt about causal relationships through “disinformation,” “malfeasance,” and “doubt mongering.” There is no correlative concern about false claiming or claim mongering.

Who are these agents who plot to sabotage “social justice” and “truth”? Clearly, they are scientists with whom the Toolkit authors disagree. The Toolkit gang cites several papers as exemplifying “malfeasance,”[9] but they never explain what was wrong with them, or how the malfeasors went astray.  The Toolkit tactics seem worthy of Twitter smear and run.

The Toolkit

The authors’ chart of “tools” used by industry might have been an interesting taxonomy of error, but mostly it is an ad hominem attack on scientists with whom they disagree. Channeling Putin on Ukraine, the authors deride scientists who would impose discipline and rigor on epidemiologic science as not “real epidemiologists,” and, to boot, as guilty of ethical lapses in failing to advance “social justice.”

Mostly the authors give us a toolkit for silencing those who would get in the way of the situational science deployed at the beck and call of the lawsuit industry.[10] Indeed, the Toolkit authors are not shy about identifying their litigation goals; they tell us that the toolkit can be deployed in depositions and in cross-examinations to pursue “social justice.” These authors also outline a social agenda that greatly resembles the goals of cancel culture: expose the perpetrators who stand in the way of the authors’ preferred policy choices, diminish their adversaries’ influence on journals, and galvanize peer reviewers to reject their adversaries’ scientific publications. The Toolkit authors tell us that “[t]he scientific community should engage by recognizing and professionally calling out common practices used to distort and misapply epidemiological and other health-related sciences.”[11] What this advice translates into are covert and open ad hominem campaigns as peer reviewers to block publications, to deny adversaries tenure and promotions, and to use social and other media outlets to attack adversaries’ motives, good faith, and competence.

None of this is really new. Twenty-five years ago, the late F. Douglas K. Liddell railed at the Mt. Sinai mob, and the phenomenon was hardly new then.[12] The Toolkit’s call to arms is, however, quite open, and raises the question whether its authors and adherents can be fair journal editors and peer reviewers of journal submissions.

Much of the Toolkit is the implementation of a strategy developed by lawsuit industry expert witnesses to demonize their adversaries by accusing them of manufacturing doubt or ignorance or uncertainty. This strategy has gained a label used to deride those who disagree with litigation overclaiming: agnotology or the creation of ignorance. According to Professor Robert Proctor, a regular testifying historian for tobacco plaintiffs, a linguist, Iain Boal, coined the term agnotology, in 1992, to describe the study of the production of ignorance.[13]

The Rise of “Agnotology” in Ngram

Agnotology has become a cottage sub-industry of the lawsuit industry, although lawsuits (or claim mongering if you like), of course, remain their main product. Naomi Oreskes[14] and David Michaels[15] gave the agnotology field greater visibility with their publications, using the less erudite but catchier phrase “manufacturing doubt.” Although the study of ignorance and uncertainty has a legitimate role in epistemology[16] and sociology,[17] much of the current literature is dominated by those who use agnotology as propaganda in support of their own litigation and regulatory agendas.[18] One lone author, however, appears to have taken agnotology study seriously enough to see that it is largely a conspiracy theory that reduces complex historical or scientific theory, evidence, opinion, and conclusions to a clash between truth and a demonic ideology.[19]

Is there any substance to the Toolkit?

The Toolkit is not entirely empty of substantive issues. The authors note that “statistical methods are a critical component of the epidemiologist’s toolkit,”[20] and they cite some articles about common statistical mistakes missed by peer reviewers. Curiously, the Toolkit omits any meaningful discussion of statistical mistakes that increase the risk of false positive results, such as multiple comparisons or dichotomizing continuous confounder variables. As for the Toolkit’s number one identified “inappropriate” technique used by its authors’ adversaries, we have:

“A1. Relying on statistical hypothesis testing; Using ‘statistical significance’ at the 0.05 level of probability as a strict decision criterion to determine the interpretation of statistical results and drawing conclusions.”

Peer into the hearings on any so-called Daubert motion in federal court, and you will see the lawsuit industry, and its hired expert witnesses, rail at statistical significance, unless, of course, there is some subgroup with nominal significance, in which case they are all in for endorsing the finding as “conclusive.”

Welcome to asymmetric, situational science.


[1] Colin L. Soskolne, Shira Kramer, Juan Pablo Ramos-Bonilla, Daniele Mandrioli, Jennifer Sass, Michael Gochfeld, Carl F. Cranor, Shailesh Advani & Lisa A. Bero, “Toolkit for detecting misused epidemiological methods,” 20(90) Envt’l Health (2021) [Toolkit].

[2] Toolkit at 12.

[3] Watson v. Dillon Co., 797 F.Supp. 2d 1138 (D. Colo. 2011).

[4] Rost v. Ford Motor Co., 151 A.3d 1032 (Pa. 2016). See “The Amicus Curious Brief” (Jan. 4, 2018).

[5] See, e.g., Sean v. BMW of North Am., LLC, 26 N.Y.3d 801, 48 N.E.3d 937, 28 N.Y.S.3d 656 (2016) (affirming exclusion of Kramer); The Little Hocking Water Ass’n v. E.I. Du Pont De Nemours & Co., 90 F.Supp.3d 746 (S.D. Ohio 2015) (excluding Kramer); Luther v. John W. Stone Oil Distributor, LLC, No. 14-30891 (5th Cir. April 30, 2015) (mentioning Kramer as litigation consultant); Clair v. Monsanto Co., 412 S.W.3d 295 (Mo. Ct. App. 2013) (mentioning Kramer as plaintiffs’ expert witness); In re Chantix (Varenicline) Prods. Liab. Litig., No. 2:09-CV-2039-IPJ, MDL No. 2092, 2012 WL 3871562 (N.D.Ala. 2012) (excluding Kramer’s opinions in part); Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507, Civ. No. 10-2125 (E.D. La. Dec. 21, 2012) (excluding Kramer); Donaldson v. Central Illinois Public Service Co., 199 Ill. 2d 63, 767 N.E.2d 314 (2002) (affirming admissibility of Kramer’s opinions in absence of Rule 702 standards).

[6]  “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). Among the fellow travelers who wittingly or unwittingly supported CERT’s scheme to pervert the course of justice were lawsuit industry stalwarts, Arthur L. Frank, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, Ronald L. Melnick, David Ozonoff, and David Rosner. See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018); Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).

[7] Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137, 148 (D. Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. den. sub nom. U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012), on remand, Milward v. Acuity Specialty Products Group, Inc., 969 F.Supp. 2d 101 (D. Mass. 2013) (excluding specific causation opinions as invalid; granting summary judgment), aff’d, 820 F.3d 469 (1st Cir. 2016).

[8] To put this effort into a sociology of science perspective, the Toolkit article is published in a journal, Environmental Health, an Editor in Chief of which is David Ozonoff, a long-time pro-plaintiff partisan in the asbestos litigation. The journal has an “ombudsman,” Anthony Robbins, who was one of the movers-and-shakers in forming SKAPP, The Project on Scientific Knowledge and Public Policy, a group that plotted to undermine the application of federal evidence law to expert witness opinion testimony. SKAPP itself is now defunct, but its spirit of subverting law lives on with efforts such as the Toolkit. “More Antic Proposals for Expert Witness Testimony – Including My Own Antic Proposals” (Dec. 30, 2014). Robbins is also affiliated with an effort, led by historian and plaintiffs’ expert witness David Rosner, to perpetuate misleading historical narratives of environmental and occupational health. “ToxicHistorians Sponsor ToxicDocs” (Feb. 1, 2018); “Creators of ToxicDocs Show Off Their Biases” (June 7, 2019); Anthony Robbins & Phyllis Freeman, “ToxicDocs (www.ToxicDocs.org) goes live: A giant step toward leveling the playing field for efforts to combat toxic exposures,” 39 J. Public Health Pol’y 1 (2018).

[9] The exemplars cited were Paolo Boffetta, Hans Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiologic studies of styrene and cancer: a review of the literature,” 51 J. Occup. & Envt’l Med. 1275 (2009); Carlo LaVecchia & Paolo Boffetta, “Role of stopping exposure and recent exposure to asbestos in the risk of mesothelioma,” 21 Eur. J. Cancer Prev. 227 (2012); John Acquavella, David Garabrant, Gary Marsh, Thomas Sorahan, and Douglas L. Weed, “Glyphosate epidemiology expert panel review: a weight of evidence systematic review of the relationship between glyphosate exposure and non-Hodgkin’s lymphoma or multiple myeloma,” 46 Crit. Rev. Toxicol. S28 (2016); Catalina Ciocan, Nicolò Franco, Enrico Pira, Ihab Mansour, Alessandro Godono, and Paolo Boffetta, “Methodological issues in descriptive environmental epidemiology. The example of study Sentieri,” 112 La Medicina del Lavoro 15 (2021).

[10] The Toolkit authors acknowledge that their identification of “tools” was drawn from previous publications of the same ilk, in the same journal. Rebecca F. Goldberg & Laura N. Vandenberg, “The science of spin: targeted strategies to manufacture doubt with detrimental effects on environmental and public health,” 20:33 Envt’l Health (2021).

[11] Toolkit at 11.

[12] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997). See “The Lobby – Cut on the Bias” (July 6, 2020).

[13] Robert N. Proctor & Londa Schiebinger, Agnotology: The Making and Unmaking of Ignorance (2008).

[14] Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (2010); Naomi Oreskes & Erik M. Conway, “Defeating the merchants of doubt,” 465 Nature 686 (2010).

[15] David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020); David Michaels, Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health (2008); David Michaels, “Science for Sale,” Boston Rev. 2020; David Michaels, “Corporate Campaigns Manufacture Scientific Doubt,” 174 Science News 32 (2008); David Michaels, “Manufactured Uncertainty: Protecting Public Health in the Age of Contested Science and Product Defense,” 1076 Ann. N.Y. Acad. Sci. 149 (2006); David Michaels, “Scientific Evidence and Public Policy,” 95 Am. J. Public Health s1 (2005); David Michaels & Celeste Monforton, “Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment,” 95 Am. J. Pub. Health S39 (2005); David Michaels & Celeste Monforton, “Scientific Evidence in the Regulatory System: Manufacturing Uncertainty and the Demise of the Formal Regulatory System,” 13 J. L. & Policy 17 (2005); David Michaels, “Doubt is Their Product,” Sci. Am. 96 (June 2005); David Michaels, “The Art of ‘Manufacturing Uncertainty’,” L.A. Times (June 24, 2005).

[16] See, e.g., Sibilla Cantarini, Werner Abraham, and Elisabeth Leiss, eds., Certainty-uncertainty – and the Attitudinal Space in Between (2014); Roger M. Cooke, Experts in Uncertainty: Opinion and Subjective Probability in Science (1991).

[17] See, e.g., Ralph Hertwig & Christoph Engel, eds., Deliberate Ignorance: Choosing Not to Know (2021); Linsey McGoey, The Unknowers: How Strategic Ignorance Rules the World (2019); Michael Smithson, “Toward a Social Theory of Ignorance,” 15 J. Theory Social Behavior 151 (1985).

[18] See Janet Kourany & Martin Carrier, eds., Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted (2020); John Launer, “The production of ignorance,” 96 Postgraduate Med. J. 179 (2020); David S. Egilman, “The Production of Corporate Research to Manufacture Doubt About the Health Hazards of Products: An Overview of the Exponent BakeliteVR Simulation Study,” 28 New Solutions 179 (2018); Larry Dossey, “Agnotology: on the varieties of ignorance, criminal negligence, and crimes against humanity,” 10 Explore 331 (2014); Gerald Markowitz & David Rosner, Deceit and Denial: The Deadly Politics of Industrial Revolution (2002).

[19] See Enea Bianchi, “Agnotology: a Conspiracy Theory of Ignorance?” Ágalma: Rivista di studi culturali e di estetica 41 (2021).

[20] Toolkit at 4.

Madigan’s Shenanigans & Wells Quelled in Incretin-Mimetic Cases

July 15th, 2022

The incretin-mimetic litigation involved claims that the use of Byetta, Januvia, Janumet, and Victoza medications causes pancreatic cancer. All four medications treat diabetes mellitus through incretin hormones, which stimulate or support insulin production, which in turn lowers blood sugar. On Planet Earth, the only scientists who contend that these medications cause pancreatic cancer are those hired by the lawsuit industry.

The cases against the manufacturers of the incretin-mimetic medications were consolidated for pre-trial proceedings in federal court, pursuant to the multi-district litigation (MDL) statute, 28 US Code § 1407. After years of MDL proceedings, the trial court dismissed the cases as barred by the doctrine of federal preemption, and for good measure, excluded plaintiffs’ medical causation expert witnesses from testifying.[1] If there were any doubt about the false claiming in this MDL, the district court’s dismissals were affirmed by the Ninth Circuit.[2]

The district court’s application of Federal Rule of Evidence 702 to the plaintiffs’ expert witnesses’ opinion is an important essay in patho-epistemology. The challenged expert witnesses provided many examples of invalid study design and interpretation. Of particular interest, two of the plaintiffs’ high-volume statistician testifiers, David Madigan and Martin Wells, proffered their own meta-analyses of clinical trial safety data. Although the current edition of the Reference Manual on Scientific Evidence[3] provides virtually no guidance to judges for assessing the validity of meta-analyses, judges and counsel do now have other readily available sources, such as the FDA’s Guidance on meta-analysis of safety outcomes of clinical trials.[4] Luckily for the Incretin-Mimetics pancreatic cancer MDL judge, the misuse of meta-analysis methodology by plaintiffs’ statistician expert witnesses, David Madigan and Martin Wells was intuitively obvious.

Madigan and Wells had a large set of clinical trials at their disposal, with adverse safety outcomes assiduously collected. As is the case with many clinical trial safety outcomes, the trialists will often have a procedure for blinded or unblinded adjudication of safety events, such as pancreatic cancer diagnosis.

At deposition, Madigan testified that he counted only adjudicated cases of pancreatic cancer in his meta-analyses, which seems reasonable enough. As discovery revealed, however, Madigan applied the restrictive inclusion criterion of adjudicated pancreatic cancer only to the placebo group, not to the experimental group. His use of the restrictive inclusion criterion for only the placebo group had the effect of excluding several non-adjudicated events, with the obvious spurious inflation of risk ratios. The MDL court thus found with relative ease that Madigan’s “unequal application of criteria among the two groups inevitably skews the data and critically undermines the reliability of his analysis.” The unexplained, unjustified change in methodology revealed Madigan’s unreliable “cherry-picking” and lack of scientific rigor as producing result-driven meta-analyses.[5]
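The mechanics of the distortion are easy to demonstrate. The counts below are invented for illustration and are not the actual trial data; the point is only that counting every event in the treatment arm while counting only adjudicated events in the placebo arm shrinks the placebo numerator and inflates the risk ratio:

    # Invented counts, for illustration only (not the actual trial data):
    # applying an "adjudicated cases only" filter to the placebo arm alone
    # inflates the risk ratio.

    def risk_ratio(cases_rx, n_rx, cases_placebo, n_placebo):
        return (cases_rx / n_rx) / (cases_placebo / n_placebo)

    n_rx = n_placebo = 5000
    adjudicated_rx, nonadjudicated_rx = 20, 5
    adjudicated_pl, nonadjudicated_pl = 20, 5

    # Even-handed counting: the same criterion in both arms yields RR = 1.0
    rr_symmetric = risk_ratio(adjudicated_rx + nonadjudicated_rx, n_rx,
                              adjudicated_pl + nonadjudicated_pl, n_placebo)

    # Asymmetric counting: all events in the treatment arm, but adjudicated
    # events only in the placebo arm, yields RR = 1.25 from the very same data
    rr_asymmetric = risk_ratio(adjudicated_rx + nonadjudicated_rx, n_rx,
                               adjudicated_pl, n_placebo)

    print(rr_symmetric, rr_asymmetric)   # 1.0  1.25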

The MDL court similarly found that Wells’ reports “were marred by a selective review of data and inconsistent application of inclusion criteria.”[6] Like Madigan, Wells cherry picked studies. For instance, he excluded one study, EXSCEL, on grounds that it reported “a high pancreatic cancer event rate in the comparison group as compared to background rate in the general population….”[7] Wells’ explanation blatantly failed, however, given that the entire patient population of the clinical trial had diabetes, a known risk factor for pancreatic cancer.[8]

As Professor Ioannidis and others have noted, we are awash in misleading meta-analyses:

“Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. Instead of promoting evidence-based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.  Suboptimal systematic reviews and meta-analyses can be harmful given the major prestige and influence these types of studies have acquired.  The publication of systematic reviews and meta-analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.”[9]

Whether created for litigation, like the Madigan-Wells meta-analyses, or published in the “peer-reviewed” literature, courts will have to up their game in assessing the validity of such studies. Published meta-analyses have grown exponentially from the 1990s to the present. To date, 248,886 meta-analyses have been published, according to the National Library of Medicine’s Pub-Med database. Last year saw over 35,000 meta-analyses published. So far, this year, 20,416 meta-analyses have been published, and we appear to be on track to have a bumper crop.

The data analytics from Pub-Med provide a helpful visual representation of the growth of meta-analyses in biomedical science.

 

Count of Publications with Keyword Meta-analysis in Pub-Med Database

In 1979, the year I started law school, one meta-analysis was published. Lawyers could still legitimately argue that meta-analyses involved novel methodology that had not been generally accepted. The novelty of meta-analysis wore off sometime between 1988, when Judge Robert Kelly excluded William Nicholson’s meta-analysis of health outcomes among PCB-exposed workers, on grounds that such analyses were “novel,” and 1990, when the Third Circuit reversed Judge Kelly, with instructions to assess study validity.[10] Fortunately, or not, depending upon your point of view, plaintiffs dropped Nicholson’s meta-analysis in subsequent proceedings. A close look at Nicholson’s non-peer reviewed calculations shows that he failed to standardize for age or sex, and that he merely added observed and expected cases, across studies, without weighting by individual study variance. The trial court never had the opportunity to assess the validity vel non of Nicholson’s ersatz meta-analysis.[11] Today, trial courts must pick up on the challenge of assessing study validity of meta-analyses relied upon by expert witnesses, regulatory agencies, and systematic reviews.
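For readers curious why simply adding observed and expected cases across studies is no substitute for a proper weighted summary, the toy example below (invented counts, not Nicholson’s data) contrasts the crude pooled ratio with a conventional fixed-effect, inverse-variance weighted estimate of the study-specific standardized mortality ratios, using the usual approximation that the variance of the log SMR is one over the observed count. With unbalanced strata, the two can diverge, which is the Simpson’s paradox problem flagged in the sources cited below:

    # Invented counts for two hypothetical studies, contrasting (a) crude
    # pooling -- adding observed and expected cases across studies -- with
    # (b) a fixed-effect, inverse-variance weighted summary of the
    # study-specific SMRs, using var(log SMR) ~ 1/observed.
    import math

    studies = [
        {"observed": 10, "expected": 5.0},    # SMR 2.0
        {"observed": 30, "expected": 60.0},   # SMR 0.5
    ]

    # (a) crude pooling: add numerators and denominators
    pooled = sum(s["observed"] for s in studies) / sum(s["expected"] for s in studies)

    # (b) inverse-variance weighting on the log scale
    weights = [s["observed"] for s in studies]                 # 1 / var(log SMR)
    log_smrs = [math.log(s["observed"] / s["expected"]) for s in studies]
    weighted = math.exp(sum(w * x for w, x in zip(weights, log_smrs)) / sum(weights))

    print(f"crude pooled SMR: {pooled:.2f}")        # about 0.62
    print(f"weighted SMR:     {weighted:.2f}")      # about 0.71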


[1] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007 (S.D. Cal. 2021).

[2] In re Incretin-Based Therapies Prods. Liab. Litig., No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022) (per curiam).

[3]  “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 15, 2011).

[4] Food and Drug Administration, Center for Drug Evaluation and Research, “Meta-Analyses of Randomized Controlled Clinical Trials to Evaluate the Safety of Human Drugs or Biological Products – (Draft) Guidance for Industry” (Nov. 2018); Jonathan J. Deeks, Julian P.T. Higgins, Douglas G. Altman, “Analysing data and undertaking meta-analyses,” Chapter 10, in Julian P.T. Higgins, James Thomas, Jacqueline Chandler, Miranda Cumpston, Tianjing Li, Matthew J. Page, and Vivian Welch, eds., Cochrane Handbook for Systematic Reviews of Interventions (version 6.3 updated February 2022); Donna F. Stroup, Jesse A. Berlin, Sally C. Morton, Ingram Olkin, G. David Williamson, Drummond Rennie, David Moher, Betsy J. Becker, Theresa Ann Sipe, Stephen B. Thacker, “Meta-Analysis of Observational Studies: A Proposal for Reporting,” 283 J. Am. Med. Ass’n 2008 (2000); David Moher, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G Altman, “Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement,” 6 PLoS Med e1000097 (2009).

[5] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007, 1037 (S.D. Cal. 2021). See In re Lipitor (Atorvastatin Calcium) Mktg., Sales Practices & Prods. Liab. Litig. (No. II) MDL2502, 892 F.3d 624, 634 (4th Cir. 2018) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

[6] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007, 1043 (S.D. Cal. 2021).

[7] Id. at 1038.

[8] See, e.g., Albert B. Lowenfels & Patrick Maisonneuve, “Risk factors for pancreatic cancer,” 95 J. Cellular Biochem. 649 (2005).

[9] John P. Ioannidis, “The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses,” 94 Milbank Quarterly 485 (2016).

[10] In re Paoli R.R. Yard PCB Litig., 706 F. Supp. 358, 373 (E.D. Pa. 1988), rev’d and remanded, 916 F.2d 829, 856-57 (3d Cir. 1990), cert. denied, 499 U.S. 961 (1991). See also Hines v. Consol. Rail Corp., 926 F.2d 262, 273 (3d Cir. 1991).

[11]The Shmeta-Analysis in Paoli” (July 11, 2019). See  James A. Hanley, Gilles Thériault, Ralf Reintjes and Annette de Boer, “Simpson’s Paradox in Meta-Analysis,” 11 Epidemiology 613 (2000); H. James Norton & George Divine, “Simpson’s paradox and how to avoid it,” Significance 40 (Aug. 2015); George Udny Yule, “Notes on the theory of association of attributes in statistics,” 2 Biometrika 121 (1903).

Reference Manual on Scientific Evidence – 3rd Edition is Past Its Expiry

October 17th, 2021

INTRODUCTION

The new, third edition of the Reference Manual on Scientific Evidence was released to the public in September 2011, as a joint production of the National Academies of Science, and the Federal Judicial Center. Within a year of its publication, I wrote that the Manual needed attention on several key issues. Now that there is a committee working on the fourth edition, I am reprising the critique, slightly modified, in the hope that it may make a difference for the fourth edition.

The Development Committee for the third edition included Co-Chairs, Professor Jerome Kassirer, of Tufts University School of Medicine, and the Hon. Gladys Kessler, who sits on the District Court for the District of Columbia.  The members of the Development Committee included:

  • Ming W. Chin, Associate Justice, The Supreme Court of California
  • Pauline Newman, Judge, Court of Appeals for the Federal Circuit
  • Kathleen O’Malley, Judge, Court of Appeals for the Federal Circuit (formerly a district judge on the Northern District of Ohio)
  • Jed S. Rakoff, Judge, Southern District of New York
  • Channing Robertson, Professor of Engineering, Stanford University
  • Joseph V. Rodricks, Principal, Environ
  • Allen Wilcox, Senior Investigator, Institute of Environmental Health Sciences
  • Sandy L. Zabell, Professor of Statistics and Mathematics, Weinberg College of Arts and Sciences, Northwestern University

Joe S. Cecil, Project Director, Program on Scientific and Technical Evidence, in the Federal Judicial Center’s Division of Research, who shepherded the first two editions, served as consultant to the Committee.

With over 1,000 pages, there was much to digest in the third edition of the Reference Manual on Scientific Evidence (RMSE 3d).  Much of what is covered was solid information on the individual scientific and technical disciplines covered.  Although the information is easily available from other sources, there is some value in collecting the material in a single volume for the convenience of judges and lawyers.  Of course, given that this information is provided to judges from an ostensibly neutral, credible source, lawyers will naturally focus on what is doubtful or controversial in the RMSE. To date, there have been only a few reviews and acknowledgments of the new edition.[1]

Like previous editions, the substantive scientific areas were covered in discrete chapters, written by subject matter specialists, often along with a lawyer who addresses the legal implications and judicial treatment of that subject matter.  From my perspective, the chapters on statistics, epidemiology, and toxicology were the most important in my practice and in teaching, and I have focused on issues raised by these chapters.

The strengths of the chapter on statistical evidence, updated from the second edition, remained, as did some of the strengths and flaws of the chapter on epidemiology.  In addition, there was a good deal of overlap among the chapters on statistics, epidemiology, and medical testimony.  This overlap was at first blush troubling because the RMSE has the potential to confuse and obscure issues by having multiple authors address them inconsistently.  This is an area where reviewers of the upcoming edition should pay close attention.

I. Reference Manual’s Disregard of Study Validity in Favor of the “Whole Tsumish”

There was a deep discordance among the chapters in the third Reference Manual as to how judges should approach scientific gatekeeping issues. The third edition vacillated between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.[2]

The Third Edition featured an updated version of the late Professor Margaret Berger’s chapter from the second edition, “The Admissibility of Expert Testimony.”[3] Berger’s chapter criticized “atomization,” a process she described pejoratively as a “slicing-and-dicing” approach.[4] Drawing on the publications of Daubert-critic Susan Haack, Berger rejected the notion that courts should examine the reliability of each study independently.[5] Berger contended that the “proper” scientific method, as evidenced by works of the International Agency for Research on Cancer, the Institute of Medicine, the National Institutes of Health, the National Research Council, and the National Institute for Environmental Health Sciences, “is to consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.”[6]

Berger’s contention, however, was profoundly misleading.  Of course, scientists undertaking a systematic review should identify all the relevant studies, but some of the “relevant” studies may well be insufficiently reliable (because of internal or external validity issues) to answer the research question at hand. All the cited agencies, and other research organizations and researchers, exclude studies that are fundamentally flawed, whether as a result of bias, confounding, erroneous data analyses, or related problems.  Berger cited no support for her remarkable suggestion that scientists do not make “reliability” judgments about available studies when assessing the “totality of the evidence.”

Professor Berger, who had a distinguished career as a law professor and evidence scholar, died in November 2010.  She was no friend of Daubert,[7] but remarkably her antipathy had outlived her.  Berger’s critical discussion of “atomization” cited the notorious decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), which was decided four months after her passing.[8]

Professor Berger’s contention about the need to avoid assessments of individual studies in favor of the whole “tsumish” must also be rejected because Federal Rule of Evidence 703 requires that each study considered by an expert witness “qualify” for reasonable reliance by virtue of the study’s containing facts or data that are “of a type reasonably relied upon by experts in the particular field forming opinions or inferences upon the subject.”  One of the deeply troubling aspects of the Milward decision is that it reversed the trial court’s sensible decision to exclude a toxicologist, Dr. Martyn Smith, who outran his headlights on issues having to do with a field in which he was clearly inexperienced – epidemiology.

Scientific studies, and especially epidemiologic studies, involve multiple levels of hearsay.  A typical epidemiologic study may contain hearsay leaps from patient to clinician, to laboratory technicians, to specialists interpreting test results, back to the clinician for a diagnosis, to a nosologist for disease coding, to a national or hospital database, to a researcher querying the database, to a statistician analyzing the data, to a manuscript that details data, analyses, and results, to editors and peer reviewers, back to study authors, and on to publication.  Those leaps do not mean that the final results are untrustworthy, only that the study itself is not likely admissible in evidence.

The inadmissibility of scientific studies is not problematic because Rule 703 permits testifying expert witnesses to formulate opinions based upon facts and data, which are not independently admissible in evidence. The distinction between relied upon and admissible studies is codified in the Federal Rules of Evidence, and in virtually every state’s evidence law.

Referring to studies, without qualification, as admissible in themselves is usually wrong as a matter of evidence law.  The error has the potential to encourage carelessness in gatekeeping expert witnesses’ opinions for their reliance upon inadmissible studies.  The error is doubly wrong if this approach to expert witness gatekeeping is taken as license to permit expert witnesses to rely upon any marginally relevant study of their choosing.  It is therefore disconcerting that the RMSE 3d failed to make the appropriate distinction between admissibility of studies and admissibility of expert witness opinion that has reasonably relied upon appropriate studies.

Consider the following statement from the chapter on epidemiology:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible, as it tends to make an issue in dispute more or less likely.”[9]

Curiously, the advice from the authors of the epidemiology chapter, by speaking to a single study’s validity, was at odds with Professor Berger’s caution against slicing and dicing. The authors of the epidemiology chapter seemed to be stressing that scientifically valid studies should be admissible.  Their footnote emphasized and confused the point:

See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990); cf. Kehm v. Procter & Gamble Co., 580 F. Supp. 890, 902 (N.D. Iowa 1982) (“These [epidemiologic] studies were highly probative on the issue of causation—they all concluded that an association between tampon use and menstrually related TSS [toxic shock syndrome] cases exists.”), aff’d, 724 F.2d 613 (8th Cir. 1984). Hearsay concerns may limit the independent admissibility of the study, but the study could be relied on by an expert in forming an opinion and may be admissible pursuant to Fed. R. Evid. 703 as part of the underlying facts or data relied on by the expert. In Ellis v. International Playtex, Inc., 745 F.2d 292, 303 (4th Cir. 1984), the court concluded that certain epidemiologic studies were admissible despite criticism of the methodology used in the studies. The court held that the claims of bias went to the studies’ weight rather than their admissibility. Cf. Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1109 (5th Cir. 1991) (“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility. . . .”).”[10]

This footnote’s suggestion, however, that studies relied upon by an expert in forming an opinion may be admissible pursuant to Rule 703, was unsupported by, and contrary to, Rule 703 and the overwhelming weight of case law interpreting and applying the rule.[11] The citation to a pre-Daubert decision, Christophersen, was doubtful as a legal argument, and managed to engender much confusion.

Furthermore, Kehm and Ellis, the cases cited in this footnote by the authors of the epidemiology chapter, both involved “factual findings” in public investigative or evaluative reports, which were independently admissible under Federal Rule of Evidence 803(8)(C). See Ellis, 745 F.2d at 299-303; Kehm, 724 F.2d at 617-18.  As such, the cases hardly support the chapter’s suggestion that Rule 703 is a rule of admissibility for epidemiologic studies.

Here the RMSE 3d, in one sentence, confused Rule 703 with an exception to the rule against hearsay, which would prevent the statistically based epidemiologic studies from being received in evidence.  The point is reasonably clear, however, that the studies “may be offered” in testimony to explain an expert witness’s opinion. Under Rule 705, that offer may also be refused. The offer, however, is to “explain,” not to have the studies admitted in evidence.  The RMSE 3d was certainly not alone in advancing this notion that studies are themselves admissible.  Other well-respected evidence scholars have lapsed into this error.[12]

Evidence scholars should not conflate admissibility of the epidemiologic (or other) studies with the ability of an expert witness to advert to a study to explain his or her opinion.  The testifying expert witness really should not be allowed to become a conduit for off-hand comments and opinions in the introduction or discussion section of relied upon articles, and the wholesale admission of such hearsay opinions undermines the trial court’s control over opinion evidence.  Rule 703 authorizes reasonable reliance upon “facts and data,” not every opinion that creeps into the published literature.

II. Toxicology for Judges

The toxicology chapter, “Reference Guide on Toxicology,” in RMSE 3d was written by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the Princeton, New Jersey office of Buchanan Ingersoll, P.C.

  1. Conflicts of Interest

At the question and answer session of the Reference Manual’s public release ceremony, in September 2011, one gentleman rose to note that some of the authors were lawyers with big firm affiliations, which he supposed must mean that they represent mostly defendants.  Based upon his premise, he asked what the review committee had done to ensure that conflicts of interest did not skew or distort the discussions in the affected chapters.  Dr. Kassirer and Judge Kessler responded by pointing out that the chapters were peer reviewed by outside reviewers, and reviewed by members of the supervising review committee.  The questioner seemed reassured, but now that I have looked at the toxicology chapter, I am not so sure.

The questioner’s premise that a member of a large firm will represent mostly defendants and thus have a pro-defense bias was probably a common perception among unsophisticated lay observers.  For instance, some large firms represent insurance companies intent upon denying coverage to product manufacturers.  These counsel for insurance companies often take the plaintiffs’ side of the underlying disputed issue in order to claim an exclusion to the contract of insurance, under a claim that the harm was “expected or intended.”  Similarly, the common perception ignores the reality of lawyers’ true conflict:  although gatekeeping helps the defense lawyers’ clients, it takes away legal work from firms that represent defendants in the litigations that are pretermitted by effective judicial gatekeeping.  Erosion of gatekeeping concepts, however, inures to the benefit of plaintiffs, their counsel, as well as the expert witnesses engaged on behalf of plaintiffs in litigation.

The questioner’s supposition in the case of the toxicology chapter, however, is doubly flawed.  If he had known more about the authors, he would probably not have asked his question.  First, the lawyer author, Ms. Henifin, despite her large firm affiliation, has taken some aggressive positions contrary to the interests of manufacturers.[13]  As for the scientist author of the toxicology chapter, Professor Goldstein, the casual reader of the chapter may want to know that he has testified in any number of toxic tort cases, almost invariably on the plaintiffs’ side.  Unlike the defense lawyer, who loses business revenue, when courts shut down unreliable claims, plaintiffs’ testifying or consulting expert witnesses stand to gain by minimalist expert witness opinion gatekeeping.  Given the economic asymmetries, the reader must thus want to know that Professor Goldstein was excluded as an expert witness in some high-profile toxic tort cases.[14]  There do not appear to be any disclosures of Professor Goldstein’s (or any other scientist author’s) conflicts of interests in RMSE 3d.  Having pointed out this conflict, I would note that financial conflicts of interest are nothing really compared with ideological conflicts of interest, which often propel scientists into service as expert witnesses to advance their political agenda.

  2. Hormesis

One way that ideological conflicts might be revealed is to look for imbalances in the presentation of toxicologic concepts.  Most lawyers who litigate cases that involve exposure-response issues are familiar with the “linear no threshold” (LNT) concept that is used frequently in regulatory risk assessments, and which has metastasized to toxic tort litigation, where LNT often has no proper place.

LNT is a dubious assumption because it claims to “know” the dose response at very low exposure levels in the absence of data.  There is a thin plausibility for LNT for genotoxic chemicals claimed to be carcinogens, but even that plausibility evaporates when one realizes that there are DNA defense and repair mechanisms to genotoxicity, which must first be saturated, overwhelmed, or inhibited, before there can be a carcinogenic response. The upshot is that low exposures that do not swamp DNA repair and tumor suppression proteins will not cause cancer.

Hormesis is today an accepted concept that describes a dose-response relationship that shows a benefit at low doses, but harm at high doses. The toxicology chapter in the Reference Manual has several references to LNT but none to hormesis. That font of all knowledge, Wikipedia, reports that hormesis is controversial, but so is LNT. This is the sort of imbalance that may well reflect an ideological bias.
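For those who have never seen the competing shapes side by side, the toy functions below are purely illustrative and fitted to no data; they simply contrast a linear no-threshold response, which rises in proportion to dose all the way down to zero, with a J-shaped hormetic response, which dips below background at low doses before harm emerges at higher doses:

    # Toy dose-response curves, purely illustrative and not fitted to any data:
    # a linear no-threshold (LNT) model versus a J-shaped (hormetic) model.

    def lnt_excess_risk(dose: float, slope: float = 0.02) -> float:
        """Excess risk proportional to dose, all the way down to zero dose."""
        return slope * dose

    def hormetic_response(dose: float, benefit: float = 0.05, slope: float = 0.02) -> float:
        """Net response relative to background: negative (beneficial) at low
        doses, positive (harmful) once the linear term dominates."""
        return slope * dose - benefit * dose / (1.0 + dose)

    for d in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
        print(f"dose {d:>4}: LNT {lnt_excess_risk(d):+.3f}  hormetic {hormetic_response(d):+.3f}")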

One of the leading textbooks on toxicology describes hormesis[15]:

“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”

Similarly, the Encyclopedia of Toxicology describes hormesis as an important phenomenon in toxicologic science[16]:

“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”

One might think that hormesis would also be of great interest to federal judges, but they will not learn about it from reading the Reference Manual.

Hormesis research has come into its own.  The International Dose-Response Society, which “focus[es] on the dose-response in the low-dose zone,” publishes a journal, Dose-Response, and a newsletter, BELLE:  Biological Effects of Low Level Exposure.  In 2009, two leading researchers in the area of hormesis published a collection of important papers:  Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (2009).

A check in PubMed shows that LNT has more “hits” than “hormesis” or “hormetic,” but still the latter phrases exceed 1,267 references, hardly insubstantial. In actuality, there are many more hormetic relationships identified in the scientific literature, which often fails to identify the relationship by the term hormesis or hormetic.[17]

The Reference Manual’s omission of hormesis was regrettable.  Its inclusion of references to LNT but not to hormesis suggests a biased treatment of the subject.

  3. Questionable Substantive Opinions

Readers and litigants would fondly hope that the toxicology chapter would not put forward partisan substantive positions on issues that are currently the subject of active litigation.  Fervently, we would hope that any substantive position advanced would at least be well documented.

For at least one issue, the toxicology chapter disappointed significantly.  Table 1 in the chapter presents a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.” No documentation or citations are provided for this table.  Most of the exposure agent/disease outcome relationships in the table are well accepted, but curiously at least one agent-disease pair, which is the subject of current litigation, is wildly off the mark:

“Parkinson’s disease and manganese”[18]

If the chapter’s authors had looked, they would have found that Parkinson’s disease is almost universally accepted to have no known cause, at least outside court rooms.  They would also have found that the issue has been addressed carefully and the claimed relationship or “concern” has been rejected by the leading researchers in the field (who have no litigation ties).[19]  Table 1 suggests a certain lack of objectivity, and its inclusion of a highly controversial relationship, manganese-Parkinson’s disease, suggests a good deal of partisanship.

  4. When All You Have Is a Hammer, Everything Looks Like a Nail

The substantive area author, Professor Goldstein, is not a physician; nor is he an epidemiologist.  His professional focus on animal and cell research appeared to color and bias the opinions offered in this chapter:[20]

“In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology.  If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans.”

Such extrapolations may make sense in regulatory contexts, where precautionary judgments are of interest, but they hardly can be said to be generally accepted in controversies in scientific communities, or in civil actions over actual causation. There are too many counterexamples to cite, but consider crystalline silica, silicon dioxide. Silica causes something resembling lung cancer in rats, but not in mice, guinea pigs, or hamsters. It hardly makes sense to ask juries to decide whether the plaintiff is more like a rat than a mouse.

For a sober second opinion to the toxicology chapter, one may consider the views of some well-known authors:

“Whereas the concordance was high between cancer-causing agents initially discovered in humans and positive results in animal studies (Tomatis et al., 1989; Wilbourn et al., 1984), the same could not be said for the reverse relationship: carcinogenic effects in animals frequently lacked concordance with overall patterns in human cancer incidence (Pastoor and Stevens, 2005).”[21]

III. New Reference Manual’s Uneven Treatment of Causation and of Conflicts of Interest

The third edition of the Reference Manual on Scientific Evidence (RMSE) appeared to get off to a good start in the Preface by Judge Kessler and Dr. Kassirer, when they noted that the Supreme Court mandated federal courts to:

“examine the scientific basis of expert testimony to ensure that it meets the same rigorous standard employed by scientific researchers and practitioners outside the courtroom.”

RMSE at xiii.  The preface faltered, however, on two key issues, causation and conflicts of interest, which are taken up as an introduction to the third edition.

  1. Causation

The authors reported in somewhat squishy terms that causal assessments are judgments:

“Fundamentally, the task is an inferential process of weighing evidence and using judgment to conclude whether or not an effect is the result of some stimulus. Judgment is required even when using sophisticated statistical methods. Such methods can provide powerful evidence of associations between variables, but they cannot prove that a causal relationship exists. Theories of causation (evolution, for example) lose their designation as theories only if the scientific community has rejected alternative theories and accepted the causal relationship as fact. Elements that are often considered in helping to establish a causal relationship include predisposing factors, proximity of a stimulus to its putative outcome, the strength of the stimulus, and the strength of the events in a causal chain.”[22]

The authors left the inferential process as a matter of “weighing evidence,” but without saying anything about how the scientific community does its “weighing.” Language about “proving” causation is also unclear because “proof” in scientific parlance connotes a demonstration, which we typically find in logic or in mathematics. Proving empirical propositions suggests a bar set so high that the courts must inevitably acquiesce in a very low threshold of evidence.  The question, of course, is how low can and will judges go to admit evidence.

The authors thus introduced hand waving and excuses for why evidence can be weighed differently in court proceedings from the world of science:

“Unfortunately, judges may be in a less favorable position than scientists to make causal assessments. Scientists may delay their decision while they or others gather more data. Judges, on the other hand, must rule on causation based on existing information. Concepts of causation familiar to scientists (no matter what stripe) may not resonate with judges who are asked to rule on general causation (i.e., is a particular stimulus known to produce a particular reaction) or specific causation (i.e., did a particular stimulus cause a particular consequence in a specific instance). In the final analysis, a judge does not have the option of suspending judgment until more information is available, but must decide after considering the best available science.”[23]

But the “best available science” may be pretty crummy, and the temptation to turn desperation into evidence (“well, it’s the best we have now”) is often severe.  The authors of the Preface thus remarkably signaled that “inconclusive” is not a judgment open to judges charged with expert witness gatekeeping.  If the authors truly meant to suggest that judges should go with whatever is dished out as “the best available science,” then they have overlooked the obvious:  Rule 702 opens the door to “scientific, technical, or other specialized knowledge,” not to hunches, suggestive but inconclusive evidence, and wishful thinking about how the science may turn out when further along.  Courts have a choice to exclude expert witness opinion testimony that is based upon incomplete or inconclusive evidence. The authors went fairly far afield to suggest, erroneously, that the incomplete and the inconclusive are good enough and should be admitted.

  2. Conflicts of Interest

Surprisingly, given the scope of the scientific areas covered in the RMSE, the authors discussed conflicts of interest (COI) at some length.  Conflicts of interest are a fact of life in all endeavors, and it is understandable to counsel judges and juries to try to identify, assess, and control them.  COIs, however, are weak proxies for unreliability.  The emphasis given here was undue, because federal judges were enticed into thinking that they can discern unreliability from COI, when they should be focused on the data, inferences, and analyses.

What becomes fairly clear is that the authors of the Preface set out to use COI as a basis for giving litigation plaintiffs a pass, and for holding back studies sponsored by corporate defendants.

“Conflict of interest manifests as bias, and given the high stakes and adversarial nature of many courtroom proceedings, bias can have a major influence on evidence, testimony, and decisionmaking. Conflicts of interest take many forms and can be based on religious, social, political, or other personal convictions. The biases that these convictions can induce may range from serious to extreme, but these intrinsic influences and the biases they can induce are difficult to identify. Even individuals with such prejudices may not appreciate that they have them, nor may they realize that their interpretations of scientific issues may be biased by them. Because of these limitations, we consider here only financial conflicts of interest; such conflicts are discoverable. Nonetheless, even though financial conflicts can be identified, having such a conflict, even one involving huge sums of money, does not necessarily mean that a given individual will be biased. Having a financial relationship with a commercial entity produces a conflict of interest, but it does not inevitably evoke bias. In science, financial conflict of interest is often accompanied by disclosure of the relationship, leaving to the public the decision whether the interpretation might be tainted. Needless to say, such an assessment may be difficult. The problem is compounded in scientific publications by obscure ways in which the conflicts are reported and by a lack of disclosure of dollar amounts.

Judges and juries, however, must consider financial conflicts of interest when assessing scientific testimony. The threshold for pursuing the possibility of bias must be low. In some instances, judges have been frustrated in identifying expert witnesses who are free of conflict of interest because entire fields of science seem to be co-opted by payments from industry. Judges must also be aware that the research methods of studies funded specifically for purposes of litigation could favor one of the parties. Though awareness of such financial conflicts in itself is not necessarily predictive of bias, such information should be sought and evaluated as part of the deliberations.”[24]

All in all, rather misleading advice.  Financial conflicts are not the only conflicts that can be “discovered.”  Often expert witnesses will have political and organizational affiliations, which reveal deep-seated ideological alignments with the party for which they are testifying.  For instance, in one silicosis case, an expert witness in the field of history of medicine testified, at an examination before trial, that his father suffered from a silica-related disease.  This witness’s alignment with Marxist historians and his identification with radical labor movements made his non-financial conflicts obvious, although these COIs would not necessarily have been apparent from his scholarly publications alone.

How low will the bar be set for discovering COI?  If testifying expert witnesses are relying upon textbooks, articles, and essays, will federal courts open the authors/hearsay declarants up to searching discovery of their finances? What really is at stake here is that the issues of accuracy, precision, and reliability are lost in the ad hominem project of discovering COIs.

Also misleading was the suggestion that “entire fields of science seem to be co-opted by payments from industry.”  Do the authors mean to exclude the plaintiffs’ lawyer lawsuit industry, which has become one of the largest rent-seeking organizations, and one of the most politically powerful groups in this country?  In litigations in which I have been involved, I have certainly seen plaintiffs’ counsel, or their proxies – labor unions, federal agencies, or “victim support groups” – provide substantial funding for studies.  The Preface authors themselves show an untoward bias by pointing out industry payments without giving balanced attention to other interested parties’ funding of scientific studies.

The attention to COI was also surprising given that one of the key chapters, for toxic tort practitioners, was written by Dr. Bernard D. Goldstein, who has testified in toxic tort cases, mostly (but not exclusively) for plaintiffs.[25]  In one such case, Makofsky, Dr. Goldstein’s participation was particularly revealing because he was forced to explain why he was willing to opine that benzene caused acute lymphocytic leukemia, despite the plethora of published studies finding no statistically significant relationship.  Dr. Goldstein resorted to the inaccurate notion that scientific “proof” of causation requires 95 percent certainty, whereas he imposed only a 51 percent certainty for his medico-legal testimonial adventures.[26] Dr. Goldstein also attempted to justify the discrepancy from the published literature by adverting to the lower standards used by federal regulatory agencies and treating physicians.  

These explanations were particularly concerning because they reflect basic errors in statistics and in causal reasoning.  The 95 percent derives from the use of the coefficient of confidence in confidence intervals, but the probability involved there is not the probability of the association’s being correct, and it has nothing to do with the degree of belief that an association is real or causal.  (Thankfully the RMSE chapter on statistics got this right, but my fear is that judges will skip over the more demanding chapter on statistics and place undue weight on the toxicology chapter.)  The reference to federal agencies (OSHA, EPA, etc.) and to treating physicians was meant, no doubt, to invoke precautionary principle concepts as a justification for some vague, ill-defined, lower standard of causal assessment.  These references were really covert invitations to shift the burden of proof.

The Preface authors might well have taken their own counsel and conducted a more searching assessment of COI among the authors of the Reference Manual.  Better yet, the authors might have focused the judiciary on the data and the analysis.

IV. Reference Manual on Scientific Evidence (3d edition) on Statistical Significance

How does the new Reference Manual on Scientific Evidence treat statistical significance?  Inconsistently and at times incoherently.

  1. Professor Berger’s Introduction

In her introductory chapter, the late Professor Margaret A. Berger raised the question of what role statistical significance should play in evaluating a study’s support for causal conclusions[27]:

“What role should statistical significance play in assessing the value of a study? Epidemiological studies that are not conclusive but show some increased risk do not prove a lack of causation. Some courts find that they therefore have some probative value,62 at least in proving general causation.63”

This seems rather backwards.  Berger’s suggestion that inconclusive studies do not prove lack of causation seems nothing more than a tautology. Certainly the failure to rule out causation is not probative of causation. How can that tautology support the claim that inconclusive studies “therefore” have some probative value? Berger’s argument seems obviously invalid, or perhaps text that badly needed a posthumous editor.  And what epidemiologic studies are conclusive?  Are the studies individually or collectively conclusive?  Berger introduced a tantalizing concept, which was not spelled out anywhere in the Manual.

Berger’s chapter raised other, serious problems. If the relied-upon studies are not statistically significant, how should we understand the testifying expert witness to have ruled out random variability as an explanation for the disparity observed in the study or studies?  Berger did not answer these important questions, but her rhetoric elsewhere suggested that trial courts should not look too hard at the statistical support (or its lack) for what expert witness testimony is proffered.

Berger’s citations in support were curiously inaccurate.  Footnote 62 cites the Cook case:

“62. See Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071 (D. Colo. 2006) (discussing why the court excluded expert’s testimony, even though his epidemiological study did not produce statistically significant results).”

Berger’s citation was disturbingly incomplete.[28] The expert witness, Dr. Clapp, in Cook did rely upon his own study, which did not obtain a statistically significant result, but the trial court admitted the expert witness’s testimony; the court denied the Rule 702 challenge to Clapp, and permitted him to testify about a statistically non-significant ecological study. The judgment of the district court, moreover, was later reversed on appeal.

Footnote 63 is no better:

“63. In re Viagra Prods., 572 F. Supp. 2d 1071 (D. Minn. 2008) (extensive review of all expert evidence proffered in multidistricted product liability case).”

With respect to the concept of statistical significance, the Viagra case centered around the motion to exclude plaintiffs’ expert witness, Gerald McGwin, who relied upon three studies, none of which obtained a statistically significant result in its primary analysis.  The Viagra court’s review was hardly extensive; the court did not report, discuss, or consider the appropriate point estimates in most of the studies, the confidence intervals around those point estimates, or any aspect of systematic error in the three studies.  At best, the court’s review was perfunctory.  When the defendant brought to light the lack of data integrity in McGwin’s own study, the Viagra MDL court reversed itself, and granted the motion to exclude McGwin’s testimony.[29]  Berger’s chapter omitted the cautionary tale of McGwin’s serious, pervasive errors, and how they led to his ultimate exclusion. Berger’s characterization of the review was incorrect, and her failure to cite the subsequent procedural history, misleading.

  2. Chapter on Statistics

The Third Edition’s chapter on statistics was relatively free of value judgments about significance probability, and, therefore, an improvement over Berger’s introduction.  The authors carefully described significance probability and p-values, and explained[30]:

“Small p-values argue against the null hypothesis. Statistical significance is determined by reference to the p-value; significance testing (also called hypothesis testing) is the technique for computing p-values and determining statistical significance.”

Although the chapter conflated the positions often taken to be Fisher’s interpretation of p-values and Neyman’s conceptualization of hypothesis testing as a dichotomous decision procedure, this treatment was unfortunately fairly standard in introductory textbooks.  The authors may have felt that presenting multiple interpretations of p-values was asking too much of judges and lawyers, but the oversimplification invited a false sense of certainty about the inferences that can be drawn from statistical significance.
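
To make the conflated distinction concrete, here is a minimal Python sketch, using entirely made-up counts and an ordinary two-proportion z-test, that contrasts reporting a continuous p-value (in the spirit of Fisher) with a dichotomous reject-or-do-not-reject decision at a preset alpha (in the spirit of Neyman and Pearson). Nothing in the sketch comes from the Manual or from any study discussed in this post.

```python
import math
from scipy.stats import norm

# Hypothetical counts: 40/1,000 events among the exposed; 25/1,000 among the unexposed.
x1, n1 = 40, 1000
x2, n2 = 25, 1000

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion under the null
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Fisherian report: the two-sided p-value as a continuous measure of evidence.
p_value = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")

# Neyman-Pearson report: a dichotomous decision at a preset alpha.
alpha = 0.05
print("reject the null" if p_value < alpha else "do not reject the null")
```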

Kaye and Freedman, however, did offer some important caveats about the untoward consequences of using significance testing as a dichotomous outcome[31]:

“Artifacts from multiple testing are commonplace. Because research that fails to uncover significance often is not published, reviews of the literature may produce an unduly large number of studies finding statistical significance.111 Even a single researcher may examine so many different relationships that a few will achieve statistical significance by mere happenstance. Almost any large dataset—even pages from a table of random digits—will contain some unusual pattern that can be uncovered by diligent search. Having detected the pattern, the analyst can perform a statistical test for it, blandly ignoring the search effort. Statistical significance is bound to follow.

There are statistical methods for dealing with multiple looks at the data, which permit the calculation of meaningful p-values in certain cases.112 However, no general solution is available, and the existing methods would be of little help in the typical case where analysts have tested and rejected a variety of models before arriving at the one considered the most satisfactory (see infra Section V on regression models). In these situations, courts should not be overly impressed with claims that estimates are significant. Instead, they should be asking how analysts developed their models.113”

This important qualification to statistical significance was omitted from the overlapping discussion in the chapter on epidemiology, where it was very much needed.
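
Kaye and Freedman’s multiple-testing point lends itself to a short simulation. The sketch below, with arbitrary settings of my own choosing, tests twenty independent end points when the null hypothesis is true for every one of them; by chance alone, roughly one nominally “significant” result at the 0.05 level is expected, and a Bonferroni adjustment (one simple corrective among several) largely removes the artifact.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=2011)
n_endpoints, n_per_group = 20, 200
alpha = 0.05

# Simulate twenty end points with NO true difference between "treated" and "control".
p_values = []
for _ in range(n_endpoints):
    treated = rng.normal(0.0, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    p_values.append(ttest_ind(treated, control).pvalue)

naive_hits = sum(p < alpha for p in p_values)
bonferroni_hits = sum(p < alpha / n_endpoints for p in p_values)

print(f"nominally significant end points: {naive_hits} of {n_endpoints}")
print(f"after Bonferroni correction:      {bonferroni_hits} of {n_endpoints}")
```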

  3. Chapter on Multiple Regression

The chapter on regression did not add much to the earlier and later discussions.  The author asked rhetorically what the appropriate level of statistical significance is, and answered:

“In most scientific work, the level of statistical significance required to reject the null hypothesis (i.e., to obtain a statistically significant result) is set conventionally at 0.05, or 5%.47”

Daniel Rubinfeld, “Reference Guide on Multiple Regression,” in RMSE3d 303, 320.

  4. Chapter on Epidemiology

The chapter on epidemiology[32] mostly muddled the discussion set out in Kaye and Freedman’s chapter on statistics.

“The two main techniques for assessing random error are statistical significance and confidence intervals. A study that is statistically significant has results that are unlikely to be the result of random error, although any criterion for ‘significance’ is somewhat arbitrary. A confidence interval provides both the relative risk (or other risk measure) found in the study and a range (interval) within which the risk likely would fall if the study were repeated numerous times.”

The suggestion that a statistically significant study has results unlikely to be due to chance, offered without reminding the reader that the finding is predicated on the assumption that there is no association and that the specified probability distribution is correct, came close to crossing the line into the transposition fallacy so nicely described and warned against in the statistics chapter. The problem was compounded because “results” is ambiguous as between the data as extreme as, or more extreme than, what was observed, and the point estimate of the mean or proportion in the sample; and the assumptions that lead to a p-value were never disclosed.

The suggestion that alpha is “arbitrary” was “somewhat” correct, but this truncated discussion was distinctly unhelpful to judges who are likely to take “arbitrary” to mean “I will get reversed.”  The selection of alpha is conventional to some extent, and arbitrary in the sense that the law’s setting an age of majority or a voting age is arbitrary.  Some young adults, age 17.8 years old, may be better educated, better engaged in politics, and better informed about current events than 35-year-olds, but the law must set a cutoff.  Two-year-olds are demonstrably unfit, and 82-year-olds are surely past the threshold of maturity requisite for political participation. A court might admit an opinion based upon a study of rare diseases, with tight control of bias and confounding, when p = 0.051, but that is hardly a justification for ignoring random error altogether, or for admitting an opinion based upon a study in which the disparity observed had a p = 0.15.

The epidemiology chapter correctly called out judicial decisions that confuse “effect size” with statistical significance[33]:

“Understandably, some courts have been confused about the relationship between statistical significance and the magnitude of the association. See Hyman & Armstrong, P.S.C. v. Gunderson, 279 S.W.3d 93, 102 (Ky. 2008) (describing a small increased risk as being considered statistically insignificant and a somewhat larger risk as being considered statistically significant.); In re Pfizer Inc. Sec. Litig., 584 F. Supp. 2d 621, 634–35 (S.D.N.Y. 2008) (confusing the magnitude of the effect with whether the effect was statistically significant); In re Joint E. & S. Dist. Asbestos Litig., 827 F. Supp. 1014, 1041 (S.D.N.Y. 1993) (concluding that any relative risk less than 1.50 is statistically insignificant), rev’d on other grounds, 52 F.3d 1124 (2d Cir. 1995).”

Actually, this confusion is not understandable at all.  The distinction has been the subject of teaching since the first edition of the Reference Manual, and two of the cited cases post-date the second edition.  The Southern District of New York asbestos case, of course, predated the first Manual.  To be sure, courts have on occasion badly misunderstood significance probability and significance testing.  The authors of the epidemiology chapter could well have added In re Viagra to the list of courts that confused effect size with statistical significance.[34]

The epidemiology chapter appropriately chastised courts for confusing significance probability with the probability that the null hypothesis, or its complement, is correct[35]:

“A common error made by lawyers, judges, and academics is to equate the level of alpha with the legal burden of proof. Thus, one will often see a statement that using an alpha of .05 for statistical significance imposes a burden of proof on the plaintiff far higher than the civil burden of a preponderance of the evidence (i.e., greater than 50%).  See, e.g., In re Ephedra Prods. Liab. Litig., 393 F. Supp. 2d 181, 193 (S.D.N.Y. 2005); Marmo v. IBP, Inc., 360 F. Supp. 2d 1019, 1021 n.2 (D. Neb. 2005) (an expert toxicologist who stated that science requires proof with 95% certainty while expressing his understanding that the legal standard merely required more probable than not). But see Giles v. Wyeth, Inc., 500 F. Supp. 2d 1048, 1056–57 (S.D. Ill. 2007) (quoting the second edition of this reference guide).

Comparing a selected p-value with the legal burden of proof is mistaken, although the reasons are a bit complex and a full explanation would require more space and detail than is feasible here. Nevertheless, we sketch out a brief explanation: First, alpha does not address the likelihood that a plaintiff’s disease was caused by exposure to the agent; the magnitude of the association bears on that question. See infra Section VII. Second, significance testing only bears on whether the observed magnitude of association arose  as a result of random chance, not on whether the null hypothesis is true. Third, using stringent significance testing to avoid false-positive error comes at a complementary cost of inducing false-negative error. Fourth, using an alpha of .5 would not be equivalent to saying that the probability the association found is real is 50%, and the probability that it is a result of random error is 50%.”

The footnotes went on to explain further the difference between alpha probability and burden of proof probability, but somewhat misleadingly asserted that “significance testing only bears on whether the observed magnitude of association arose as a result of random chance, not on whether the null hypothesis is true.”[36]  The significance probability does not address the probability that the observed statistic is the result of random chance; rather it describes the probability of observing at least as large a departure from the expected value if the null hypothesis is true.  Of course, if this cumulative probability is sufficiently low, then the null hypothesis is rejected, and this would seem to bear upon whether the null hypothesis is true.  Kaye and Freedman’s chapter on statistics did much better at describing p-values and avoiding the transposition fallacy.
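
A short arithmetic sketch, with entirely hypothetical inputs for the prior probability of a real association and for the study’s power, shows by Bayes’ rule why the two probabilities are different quantities: a “statistically significant” result at alpha = 0.05 can easily leave the null hypothesis with a posterior probability well above five percent.

```python
# Hypothetical inputs; not taken from any study or chapter discussed above.
prior_real = 0.10      # prior probability that a real association exists
alpha = 0.05           # probability of a "significant" result if the null is true
power = 0.80           # probability of a "significant" result if the association is real

# Overall probability of observing a "significant" result.
p_significant = power * prior_real + alpha * (1 - prior_real)

# Posterior probability that the null hypothesis is true, given significance.
p_null_given_significant = alpha * (1 - prior_real) / p_significant

print(f"P(significant result) = {p_significant:.3f}")
print(f"P(null true | significant) = {p_null_given_significant:.2f}")
# With these inputs, the null remains about 36% probable despite "p < 0.05".
```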

When they stayed on message, the authors of the epidemiology chapter were certainly correct that significance probability cannot be translated into an assessment of the probability that the null hypothesis, or the obtained sampling statistic, is correct.  What these authors omitted, however, was a clear statement that the many courts and counsel who have misstated this fact do not create any worthwhile precedent, persuasive or binding.

The epidemiology chapter ultimately failed to help judges in assessing statistical significance:

“There is some controversy among epidemiologists and biostatisticians about the appropriate role of significance testing.85 To the strictest significance testers, any study whose p-value is not less than the level chosen for statistical significance should be rejected as inadequate to disprove the null hypothesis. Others are critical of using strict significance testing, which rejects all studies with an observed p-value below that specified level. Epidemiologists have become increasingly sophisticated in addressing the issue of random error and examining the data from a study to ascertain what information they may provide about the relationship between an agent and a disease, without the necessity of rejecting all studies that are not statistically significant.86 Meta-analysis, as well, a method for pooling the results of multiple studies, sometimes can ameliorate concerns about random error.87  Calculation of a confidence interval permits a more refined assessment of appropriate inferences about the association found in an epidemiologic study.88

Id. at 578-79.  Mostly true, but again rather unhelpful to judges and lawyers.  Some of the controversy, to be sure, was instigated by statisticians and epidemiologists who would elevate Bayesian methods, and eliminate the use of significance probability and testing altogether. As for those scientists who still work within the dominant frequentist statistical paradigm, the chapter authors divided the world up into “strict” testers and those critical of “strict” testing.  Where, however, is the boundary? Does criticism of “strict” testing imply embrace of “non-strict” testing, or of no testing at all?  I can sympathize with a judge who permits reliance upon a series of studies that all go in the same direction, with each having a confidence interval that just misses excluding the null hypothesis.  Meta-analysis in such a situation might not just ameliorate concerns about random error, it might eliminate them.  But what of those scientists critical of strict testing?  This certainly does not suggest or imply that courts can or should ignore random error; yet that is exactly what happened in the early going in In re Viagra Products Liab. Litig.[37]  The epidemiology chapter’s reference to confidence intervals was correct in part; they permit a more refined assessment because they permit a more direct assessment of the extent of random error in terms of magnitude of association, as well as the point estimate of the association obtained from and conditioned on the sample.  Confidence intervals, however, do not eliminate the need to interpret the extent of random error; rather they provide a more direct assessment and measurement of the standard error.
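
For readers who want to see what the interval “measures,” the following sketch, with an invented risk ratio and 95 percent confidence interval, back-calculates the standard error of the log risk ratio from the reported limits and recovers an approximate two-sided p-value. The point is only that the interval, the standard error, and the p-value are algebraically linked; none of the numbers refers to any study discussed in this post.

```python
import math
from scipy.stats import norm

# Invented example: reported risk ratio 1.20, with a 95% CI of 0.95 to 1.52.
rr, lower, upper = 1.20, 0.95, 1.52

# On the log scale, a 95% CI spans roughly 2 * 1.96 standard errors.
se_log_rr = (math.log(upper) - math.log(lower)) / (2 * 1.96)

# Approximate two-sided p-value for the null hypothesis that RR = 1.
z = math.log(rr) / se_log_rr
p_value = 2 * norm.sf(abs(z))

print(f"SE(log RR) = {se_log_rr:.3f}")
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")
```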

V. Power in the Reference Manual on Scientific Evidence

The Third Edition treated statistical power in three of its chapters, those on statistics, epidemiology, and medical testimony.  Unfortunately, the treatments were not always consistent.

The chapter on statistics has been consistently among the most frequently ignored content of the three editions of the Reference Manual.  The third edition offered a good introduction to basic concepts of sampling, random variability, significance testing, and confidence intervals.[38]  Kaye and Freedman provided an acceptable non-technical definition of statistical power[39]:

“More precisely, power is the probability of rejecting the null hypothesis when the alternative hypothesis … is right. Typically, this probability will depend on the values of unknown parameters, as well as the preset significance level α. The power can be computed for any value of α and any choice of parameters satisfying the alternative hypothesis. Frequentist hypothesis testing keeps the risk of a false positive to a specified level (such as α = 5%) and then tries to maximize power. Statisticians usually denote power by the Greek letter beta (β). However, some authors use β to denote the probability of accepting the null hypothesis when the alternative hypothesis is true; this usage is fairly standard in epidemiology. Accepting the null hypothesis when the alternative holds true is a false negative (also called a Type II error, a missed signal, or a false acceptance of the null hypothesis).”

The definition was not, however, without problems.  First, it introduced a nomenclature issue likely to be confusing for judges and lawyers. Kaye and Freedman used β to denote statistical power, but they acknowledged that epidemiologists use β to denote the probability of a Type II error.  And indeed, both the chapters on epidemiology and medical testimony used β to refer to the Type II error rate, and thus denoted power as the complement of β, or (1 – β).[40]

Second, the reason for introducing the confusion about β was doubtful.  Kaye and Freedman suggested that statisticians usually denote power by β, but they offered no citations.  A quick review (not necessarily complete or even a random sample) suggests that many modern statistics texts denote power as (1 – β).[41]  At the end of the day, there really was no reason for the conflicting nomenclature and the likely confusion it would engender.  Indeed, the duplicative handling of statistical power, and other concepts, suggested that it is time to eliminate the repetitive discussions, in favor of one clear, thorough discussion in the statistics chapter.

Third, Kaye and Freedman problematically referred to β as the probability of accepting the null hypothesis, when elsewhere they more carefully instructed that a non-significant finding results in not rejecting the null hypothesis, as opposed to accepting the null.  Id. at 253.[42]

Fourth, Kaye and Freedman’s discussion of power, unlike most of their chapter, offered advice that was controversial and unclear:

“On the other hand, when studies have a good chance of detecting a meaningful association, failure to obtain significance can be persuasive evidence that there is nothing much to be found.”[43]

Note that the authors left open what a legal or clinically meaningful association is, and thus offered no real guidance to judges on how to evaluate power after data are collected and analyzed.  As Professor Sander Greenland has argued, in legal contexts, this reliance upon observed power (as opposed to power as a guide in determining appropriate sample size in the planning stages of a study) was arbitrary and “unsalvageable as an analytic tool.”[44]

The chapter on epidemiology offered similar controversial advice on the use of power[45]:

“When a study fails to find a statistically significant association, an important question is whether the result tends to exonerate the agent’s toxicity or is essentially inconclusive with regard to toxicity.93 The concept of power can be helpful in evaluating whether a study’s outcome is exonerative or inconclusive.94  The power of a study is the probability of finding a statistically significant association of a given magnitude (if it exists) in light of the sample sizes used in the study. The power of a study depends on several factors: the sample size; the level of alpha (or statistical significance) specified; the background incidence of disease; and the specified relative risk that the researcher would like to detect.95  Power curves can be constructed that show the likelihood of finding any given relative risk in light of these factors. Often, power curves are used in the design of a study to determine what size the study populations should be.96”

Although the authors correctly emphasized the need to specify an alternative hypothesis, their discussion and advice were empty of how that alternative should be selected in legal contexts.  The suggestion that power curves can be constructed was, of course, true, but irrelevant unless courts know where on the power curve they should be looking.  The authors were also correct that power is used to determine adequate sample size under specified conditions; but again, the use of power curves in this setting is today rather uncommon.  Investigators select a level of power corresponding to an acceptable Type II error rate, and an alternative hypothesis that would be clinically meaningful for their research, in order to determine their sample size. Translating clinical into legal meaningfulness is not always straightforward.
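
The planning use of power can be illustrated with a brief sketch. Using an ordinary normal-approximation formula for comparing two proportions, and wholly assumed inputs for the baseline risk, the relative risk to be detected, alpha, and the desired power, the code below computes a sample size per group. The alternative hypothesis must be specified before the calculation means anything, which is precisely the gap the chapter left unaddressed for legal contexts.

```python
import math
from scipy.stats import norm

def n_per_group(p0, relative_risk, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect p1 = p0 * RR against p0,
    two-sided test at the given alpha, using a normal approximation."""
    p1 = p0 * relative_risk
    p_bar = (p0 + p1) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(numerator / (p1 - p0) ** 2)

# Assumed inputs: 2% baseline risk, aiming to detect a relative risk of 1.5.
print(n_per_group(p0=0.02, relative_risk=1.5))   # roughly 3,800 per group with these inputs
```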

In a footnote, the authors of the epidemiology chapter noted that Professor Rothman has been “one of the leaders in advocating the use of confidence intervals and rejecting strict significance testing.”[46] What the chapter failed, however, to mention is that Rothman has also been outspoken in rejecting post-hoc power calculations that the epidemiology chapter seemed to invite:

“Standard statistical advice states that when the data indicate a lack of significance, it is important to consider the power of the study to detect as significant a specific alternative hypothesis. The power of a test, however, is only an indirect indicator of precision, and it requires an assumption about the magnitude of the effect. In planning a study, it is reasonable to make conjectures about the magnitude of an effect to compute study-size requirements or power. In analyzing data, however, it is always preferable to use the information in the data about the effect to estimate it directly, rather than to speculate about it with study-size or power calculations (Smith and Bates, 1992; Goodman and Berlin, 1994; Hoening and Heisey, 2001). Confidence limits and (even more so) P-value functions convey much more of the essential information by indicating the range of values that are reasonably compatible with the observations (albeit at a somewhat arbitrary alpha level), assuming the statistical model is correct. They can also show that the data do not contain the information necessary for reassurance about an absence of effect.”[47]

The selective, incomplete scholarship of the epidemiology chapter on the issue of statistical power was not only unfortunate, but it distorted the authors’ evaluation of the sparse case law on the issue of power.  For instance, they noted:

“Even when a study or body of studies tends to exonerate an agent, that does not establish that the agent is absolutely safe. See Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767 (N.D. Ohio 2010). Epidemiology is not able to provide such evidence.”[48]

Here the authors, Green, Freedman, and Gordis, shifted the burden to the defendant and then went to the even further extreme of making the burden of proof one of absolute certainty in the product’s safety.  This is not, and never has been, a legal standard. The cases they cited amplified the error. In Cooley, for instance, the defense expert would have opined that welding fume exposure did not cause parkinsonism or Parkinson’s disease.  Although the expert witness had not conducted a meta-analysis, he had reviewed the confidence intervals around the point estimates of the available studies.  Many of the point estimates were at or below 1.0, and in some cases, the upper bound of the confidence interval excluded 1.0.  The trial court expressed its concern that the expert witness had inferred “evidence of absence” from “absence of evidence.”  Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767, 773 (N.D. Ohio 2010).  This concern, however, was misguided given that many studies had tested the claimed association, and that virtually every case-control and cohort study had found risk ratios at or below 1.0, or very close to 1.0.  What the court in Cooley, and the authors of the epidemiology chapter in the third edition, lost sight of is that when the hypothesis is repeatedly tested, with failure to reject the null hypothesis, and with point estimates at or very close to 1.0, and with narrow confidence intervals, then the claimed association is probably incorrect.[49]

The Cooley court’s comments might have had some validity when applied to a single study, but not to the impressive body of exculpatory epidemiologic evidence that pertained to welding fume and Parkinson’s disease.  Shortly after the Cooley case was decided, a published meta-analysis of welding fume or manganese exposure demonstrated a reduced level of risk for Parkinson’s disease among persons occupationally exposed to welding fumes or manganese.[50]

VI. The Treatment of Meta-Analysis in the Third Edition

Meta-analysis is a statistical procedure for aggregating data and statistics from individual studies into a single summary statistical estimate of the population measurement of interest.  The first meta-analysis is typically attributed to Karl Pearson, circa 1904, who sought a method to overcome the limitations of small sample size and low statistical power.  Statistical methods for meta-analysis in epidemiology and the social sciences, however, did not mature until the 1970s.  Even then, the biomedical scientific community remained skeptical of, if not outright hostile to, meta-analysis until relatively recently.

The hostility to meta-analysis, especially in the context of observational epidemiologic studies, was colorfully expressed by two capable epidemiologists, Samuel Shapiro and Alvan Feinstein, as late as the 1990s:

“Meta-analysis begins with scientific studies….  [D]ata from these studies are then run through computer models of bewildering complexity which produce results of implausible precision.”

* * * *

“I propose that the meta-analysis of published non-experimental data should be abandoned.”[51]

The professional skepticism about meta-analysis was reflected in some of the early judicial assessments of meta-analysis in court cases.  In the 1980s and early 1990s, some trial judges erroneously dismissed meta-analysis as a flawed statistical procedure that claimed to make something out of nothing.[52]

In In re Paoli Railroad Yard PCB Litigation, Judge Robert Kelly excluded plaintiffs’ expert witness Dr. William Nicholson and his testimony based upon his unpublished meta-analysis of health outcomes among PCB-exposed workers.  Judge Kelly found that the meta-analysis was a novel technique, and that Nicholson’s meta-analysis was not peer reviewed.  Furthermore, the meta-analysis assessed health outcomes not experienced by any of the plaintiffs before the trial court.[53]

The Court of Appeals for the Third Circuit reversed the exclusion of Dr. Nicholson’s testimony, and remanded for reconsideration with instructions.[54]  The Circuit noted that meta-analysis was not novel, and that the lack of peer-review was not an automatic disqualification.  Acknowledging that a meta-analysis could be performed poorly using invalid methods, the appellate court directed the trial court to evaluate the validity of Dr. Nicholson’s work on his meta-analysis. On remand, however, it seems that plaintiffs chose – wisely – not to proceed with Nicholson’s meta-analysis.[55]

In one of many skirmishes over colorectal cancer claims in asbestos litigation, Judge Sweet in the Southern District of New York was unimpressed by efforts to aggregate data across studies.  Judge Sweet declared that:

“no matter how many studies yield a positive but statistically insignificant SMR for colorectal cancer, the results remain statistically insignificant. Just as adding a series of zeros together yields yet another zero as the product, adding a series of positive but statistically insignificant SMRs together does not produce a statistically significant pattern.”[56]

The plaintiffs’ expert witness who had offered the unreliable testimony, Dr. Steven Markowitz, like Nicholson, another foot soldier in Dr. Irving Selikoff’s litigation machine, did not offer a formal meta-analysis to justify his assessment that multiple non-significant studies, taken together, rule out chance as a likely explanation for an aggregate finding of an increased risk.

Judge Sweet was quite justified in rejecting this back-of-the-envelope, non-quantitative meta-analysis.  His suggestion, however, that multiple non-significant studies could never collectively serve to rule out chance as an explanation for an overall increased rate of disease in the exposed groups is completely wrong.  Judge Sweet would have done better to focus on the validity issues in key studies, the presence of bias and confounding, and the completeness of the proffered meta-analysis.  The Second Circuit reversed the entry of summary judgment, and remanded the colorectal cancer claim for trial.[57]  Over a decade later, with even more accumulated studies and data, the Institute of Medicine found the evidence for asbestos plaintiffs’ colorectal cancer claims to be scientifically insufficient.[58]

Courts continue to go astray with an erroneous belief that multiple studies, all without statistically significant results, cannot yield a statistically significant summary estimate of increased risk.  See, e.g., Baker v. Chevron USA, Inc., 2010 WL 99272, *14-15 (S.D.Ohio 2010) (addressing a meta-analysis by Dr. Infante on multiple myeloma outcomes in studies of benzene-exposed workers).  There were many sound objections to Infante’s meta-analysis, but the suggestion that multiple studies without statistical significance could not yield a summary estimate of risk with statistical significance was not one of them.
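
A simple numerical sketch shows why the “adding zeros” intuition fails. Below, three invented studies each have a 95 percent confidence interval that includes 1.0, so that none is statistically significant standing alone, yet an ordinary inverse-variance, fixed-effect pooling of the log risk ratios yields a summary estimate whose interval excludes 1.0. The studies and numbers are fabricated for illustration and correspond to no litigation or meta-analysis discussed here.

```python
import math
from scipy.stats import norm

# Three invented studies: (risk ratio, lower 95% limit, upper 95% limit).
studies = [(1.30, 0.90, 1.88), (1.25, 0.85, 1.84), (1.35, 0.92, 1.98)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of the log RR from the CI
    w = 1 / se ** 2                                    # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * math.log(rr))

log_summary = sum(weighted_logs) / sum(weights)
se_summary = math.sqrt(1 / sum(weights))

summary_rr = math.exp(log_summary)
ci_low = math.exp(log_summary - 1.96 * se_summary)
ci_high = math.exp(log_summary + 1.96 * se_summary)
p_value = 2 * norm.sf(abs(log_summary / se_summary))

print(f"summary RR = {summary_rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```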

In the last two decades, meta-analysis has emerged as an important technique for addressing random variation in studies, as well as some of the limitations of frequentist statistical methods.  In the 1980s, articles reporting meta-analyses were rare to non-existent.  In 2009, there were over 2,300 articles with “meta-analysis” in their title, or in their keywords, indexed in the PubMed database of the National Library of Medicine.[59]

The techniques for aggregating data have been studied, refined, and employed extensively in thousands of methods and application papers in the last decade. Consensus guideline papers have been published for meta-analyses of clinical trials as well as observational studies.[60]  Meta-analyses, of observational studies and of randomized clinical trials, routinely are relied upon by expert witnesses in pharmaceutical and so-called toxic tort litigation.[61]

The second edition of the Reference Manual on Scientific Evidence gave very little attention to meta-analysis; the third edition did not add very much to the discussion.  The time has come for the next edition to address meta-analyses, and criteria for their validity or invalidity.

  1. Statistics Chapter

The statistics chapter of the third edition gave scant attention to meta-analysis.  The chapter noted, in a footnote, that there are formal procedures for aggregating data across studies, and that the power of the aggregated data will exceed the power of the individual, included studies.  The footnote then cautioned that meta-analytic procedures “have their own weakness,”[62] without detailing what that weakness is. The time has come to spell out the weaknesses so that trial judges can evaluate opinion testimony based upon meta-analyses.

The glossary at the end of the statistics chapter offers a definition of meta-analysis:

“meta-analysis. Attempts to combine information from all studies on a certain topic. For example, in the epidemiological context, a meta-analysis may attempt to provide a summary odds ratio and confidence interval for the effect of a certain exposure on a certain disease.”[63]

This definition was inaccurate in ways that could yield serious mischief.  Virtually all meta-analyses are, or should be, built upon a systematic review that sets out to collect all available studies on a research issue of interest.  It is a rare meta-analysis, however, that includes “all” studies in its quantitative analysis.  The meta-analytic process involves a pre-specification of inclusionary and exclusionary criteria for the quantitative analysis of the summary estimate of risk.  Those criteria may limit the quantitative analysis to randomized trials, or to analytical epidemiologic studies.  Furthermore, meta-analyses frequently and appropriately have pre-specified exclusionary criteria that relate to study design or quality.

On a more technical note, the offered definition suggests that the summary estimate of risk will be an odds ratio, which may or may not be true.  Meta-analyses of risk ratios may yield summary estimates of risk in terms of relative risk or hazard ratios, or even of risk differences.  A meta-analysis may also combine data on means rather than proportions.

  2. Epidemiology Chapter

The chapter on epidemiology delved into meta-analysis in greater detail than the statistics chapter, and offered apparently inconsistent advice.  The overall gist of the chapter, however, can perhaps best be summarized by the definition offered in this chapter’s glossary:

“meta-analysis. A technique used to combine the results of several studies to enhance the precision of the estimate of the effect size and reduce the plausibility that the association found is due to random sampling error.  Meta-analysis is best suited to pooling results from randomly controlled experimental studies, but if carefully performed, it also may be useful for observational studies.”[64]

It is now time to tell trial judges what “careful” means in the context of conducting and reporting and relying upon meta-analyses.

The epidemiology chapter appropriately noted that meta-analysis can help address concerns over random error in small studies.[65]  Having told us that properly conducted meta-analyses of observational studies can be helpful, the chapter then proceeded to hedge considerably[66]:

“Meta-analysis is most appropriate when used in pooling randomized experimental trials, because the studies included in the meta-analysis share the most significant methodological characteristics, in particular, use of randomized assignment of subjects to different exposure groups. However, often one is confronted with nonrandomized observational studies of the effects of possible toxic substances or agents. A method for summarizing such studies is greatly needed, but when meta-analysis is applied to observational studies – either case-control or cohort – it becomes more controversial.174 The reason for this is that often methodological differences among studies are much more pronounced than they are in randomized trials. Hence, the justification for pooling the results and deriving a single estimate of risk, for example, is problematic.175”

The stated objection to pooling results for observational studies was certainly correct, but many research topics have sufficient studies available to allow for appropriate selectivity in framing inclusionary and exclusionary criteria to address the objection.  The chapter went on to credit the critics of meta-analyses of observational studies.  As they did in the second edition of the Reference Manual, the authors in the third edition repeated their cites to, and quotes from, early papers by John Bailar, who was then critical of such meta-analyses:

“Much has been written about meta-analysis recently and some experts consider the problems of meta-analysis to outweigh the benefits at the present time. For example, John Bailar has observed:

‘[P]roblems have been so frequent and so deep, and overstatements of the strength of conclusions so extreme, that one might well conclude there is something seriously and fundamentally wrong with the method. For the present . . . I still prefer the thoughtful, old-fashioned review of the literature by a knowledgeable expert who explains and defends the judgments that are presented. We have not yet reached a stage where these judgments can be passed on, even in part, to a formalized process such as meta-analysis.’

John Bailar, “Assessing Assessments,” 277 Science 528, 529 (1997).”[67]

Bailar’s subjective preference for “old-fashioned” reviews, which often cherry-picked the included studies, is, well, “old fashioned.”  More to the point, it is questionable science, and a distinctly minority viewpoint in the light of substantial improvements in the conduct and reporting of systematic reviews and meta-analyses of observational studies.  Bailar may be correct that some meta-analyses should have never left the protocol stage, but the third edition of the Reference Manual failed to provide the judiciary with the tools to appreciate the distinction between good and bad meta-analyses.

This categorical rejection, cited with apparent approval, was amplified by a recitation of some real or apparent problems with meta-analyses of observational studies.  What was missing was a discussion of how many of these problems can be, and are, dealt with in contemporary practice[68]:

“A number of problems and issues arise in meta-analysis. Should only published papers be included in the meta-analysis, or should any available studies be used, even if they have not been peer reviewed? Can the results of the meta-analysis itself be reproduced by other analysts? When there are several meta-analyses of a given relationship, why do the results of different meta-analyses often disagree? The appeal of a meta-analysis is that it generates a single estimate of risk (along with an associated confidence interval), but this strength can also be a weakness, and may lead to a false sense of security regarding the certainty of the estimate. A key issue is the matter of heterogeneity of results among the studies being summarized.  If there is more variance among study results than one would expect by chance, this creates further uncertainty about the summary measure from the meta-analysis. Such differences can arise from variations in study quality, or in study populations or in study designs. Such differences in results make it harder to trust a single estimate of effect; the reasons for such differences need at least to be acknowledged and, if possible, explained.176 People often tend to have an inordinate belief in the validity of the findings when a single number is attached to them, and many of the difficulties that may arise in conducting a meta-analysis, especially of observational studies such as epidemiologic ones, may consequently be overlooked.177”

The epidemiology chapter authors were entitled to their opinion, but their discussion left the judiciary uninformed about current practice, and best practices, in epidemiology.  A categorical rejection of meta-analyses of observational studies is at odds with the chapter’s own claim that such meta-analyses can be helpful if properly performed. What was needed, and is missing, is a meaningful discussion to help the judiciary determine whether a meta-analysis of observational studies was properly performed.
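
One concrete step toward the “careful” performance that the glossary demands is to quantify heterogeneity before trusting a single pooled number. The sketch below, reusing the invented study format from the earlier example, computes Cochran’s Q and the I² statistic, standard descriptive measures of between-study variability; it is offered only as an illustration of the kind of check a reviewing court might expect to see reported, not as a complete validity assessment.

```python
import math
from scipy.stats import chi2

# Invented studies: (risk ratio, lower 95% limit, upper 95% limit).
studies = [(1.30, 0.90, 1.88), (0.95, 0.70, 1.29), (1.60, 1.05, 2.44)]

logs, weights = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    logs.append(math.log(rr))
    weights.append(1 / se ** 2)

pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)

# Cochran's Q: weighted squared deviations of the study estimates from the pooled estimate.
Q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
p_het = chi2.sf(Q, df)

print(f"Q = {Q:.2f} on {df} df (p = {p_het:.2f}), I^2 = {I2:.0f}%")
```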

  3. Medical Testimony Chapter

The chapter on medical testimony is the third pass at meta-analysis in the third edition of the Reference Manual.  The second edition’s chapter on medical testimony ignored meta-analysis completely; the new edition addresses meta-analysis in the context of the hierarchy of study designs[69]:

“Other circumstances that set the stage for an intense focus on medical evidence included

(1) the development of medical research, including randomized controlled trials and other observational study designs;

(2) the growth of diagnostic and therapeutic interventions;141

(3) interest in understanding medical decision making and how physicians reason;142 and

(4) the acceptance of meta-analysis as a method to combine data from multiple randomized trials.143”

This language from the medical testimony chapter curiously omitted observational studies, but the footnote reference (note 143) then inconsistently discussed two meta-analyses of observational, rather than experimental, studies.[70]  The chapter then provided even further confusion by giving a more detailed listing of the hierarchy of medical evidence in the form of different study designs[71]:

3. Hierarchy of medical evidence

With the explosion of available medical evidence, increased emphasis has been placed on assembling, evaluating, and interpreting medical research evidence.  A fundamental principle of evidence-based medicine (see also Section IV.C.5, infra) is that the strength of medical evidence supporting a therapy or strategy is hierarchical.  When ordered from strongest to weakest, systematic review of randomized trials (meta-analysis) is at the top, followed by single randomized trials, systematic reviews of observational studies, single observational studies, physiological studies, and unsystematic clinical observations.150 An analysis of the frequency with which various study designs are cited by others provides empirical evidence supporting the influence of meta-analysis followed by randomized controlled trials in the medical evidence hierarchy.151 Although they are at the bottom of the evidence hierarchy, unsystematic clinical observations or case reports may be the first signals of adverse events or associations that are later confirmed with larger or controlled epidemiological studies (e.g., aplastic anemia caused by chloramphenicol,152 or lung cancer caused by asbestos153). Nonetheless, subsequent studies may not confirm initial reports (e.g., the putative association between coffee consumption and pancreatic cancer).154

This discussion further muddied the water by using a parenthetical to suggest that meta-analyses of randomized clinical trials are equivalent to systematic reviews of such studies — “systematic review of randomized trials (meta-analysis).” Of course, systematic reviews are not meta-analyses, although they are usually a necessary precondition for conducting a proper meta-analysis.  The relationship between the procedures for a systematic review and a meta-analysis is in need of clarification, but the judiciary will not find it in the third edition of the Reference Manual.

CONCLUSION

The idea of the Reference Manual was important to support trial judges’ efforts to engage in gatekeeping in unfamiliar subject matter areas. In its third incarnation, the Manual has become a standard starting place for discussion, but on several crucial issues, the third edition was unclear, imprecise, contradictory, or muddled. The organizational committee and authors for the fourth edition have a fair amount of work on their hands; there is clearly room for improvement in the Fourth Edition.


[1] Adam Dutkiewicz, “Book Review: Reference Manual on Scientific Evidence, Third Edition,” 28 Thomas M. Cooley L. Rev. 343 (2011); John A. Budny, “Book Review: Reference Manual on Scientific Evidence, Third Edition,” 31 Internat’l J. Toxicol. 95 (2012); James F. Rogers, Jim Shelson, and Jessalyn H. Zeigler, “Changes in the Reference Manual on Scientific Evidence (Third Edition),” Internat’l Ass’n Def. Csl. Drug, Device & Biotech. Comm. Newsltr. (June 2012).  See Schachtman “New Reference Manual’s Uneven Treatment of Conflicts of Interest.” (Oct. 12, 2011).

[2] The Manual did not do quite so well in addressing its own conflicts of interest.  See, e.g., infra at notes 7, 20.

[3] RMSE 3d 11 (2011).

[4] Id. at 19.

[5] Id. at 20 & n. 51 (citing Susan Haack, “An Epistemologist in the Bramble-Bush: At the Supreme Court with Mr. Joiner,” 26 J. Health Pol. Pol’y & L. 217–37 (1999)).

[6] Id. at 19-20 & n.52.

[7] Professor Berger filed an amicus brief on behalf of plaintiffs, in Rider v. Sandoz Pharms. Corp., 295 F.3d 1194 (11th Cir. 2002).

[8] Id. at 20 n.51. (The editors noted misleadingly that the published chapter was Berger’s last revision, with “a few edits to respond to suggestions by reviewers.”). I have written elsewhere of the ethical cloud hanging over this Milward decision. See “Carl Cranor’s Inference to the Best Explanation” (Feb. 12, 2021); “From here to CERT-ainty” (June 28, 2018); “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018).

[9] RMSE 3d at 610 (internal citations omitted).

[10] RMSE 3d at 610 n.184 (emphasis, in bold, added).

[11] Interestingly, the authors of this chapter seem to abandon their suggestion that studies relied upon “might qualify for the learned treatise exception to the hearsay rule, Fed. R. Evid. 803(18), or possibly the catchall exceptions, Fed. R. Evid. 803(24) & 804(5),” which was part of their argument in the Second Edition.  RMSE 2d at 335 (2000).  See also RMSE 3d at 214 (discussing statistical studies as generally “admissible,” but acknowledging that admissibility may be no more than permission to explain the basis for an expert’s opinion, which is hardly admissibility at all).

[12] David L. Faigman, et al., Modern Scientific Evidence: The Law and Science of Expert Testimony v.1, § 23:1, at 206 (2009) (“Well conducted studies are uniformly admitted.”).

[13] See Richard M. Lynch and Mary S. Henifin, “Causation in Occupational Disease: Balancing Epidemiology, Law and Manufacturer Conduct,” 9 Risk: Health, Safety & Environment 259, 269 (1998) (conflating distinct causal and liability concepts, and arguing that legal and scientific causal criteria should be abrogated when manufacturing defendant has breached a duty of care).

[14] See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005).  No; you will not find the Parker case cited in the Manual’s chapter on toxicology. (Parker is, however, cited in the chapter on exposure science even though it is a state court case.)

[15] Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (internal citations omitted).

[16] Philip Wexler, et al., eds., 2 Encyclopedia of Toxicology 96 (2005).

[17] See Edward J. Calabrese and Robyn B. Blain, “The hormesis database: The occurrence of hormetic dose responses in the toxicological literature,” 61 Regulatory Toxicology and Pharmacology 73 (2011) (reviewing about 9,000 dose-response relationships for hormesis, to create a database of various aspects of hormesis).  See also Edward J. Calabrese and Robyn B. Blain, “The occurrence of hormetic dose responses in the toxicological literature, the hormesis database: An overview,” 202 Toxicol. & Applied Pharmacol. 289 (2005) (earlier effort to establish hormesis database).

[18] Reference Manual at 653.

[19] See, e.g., Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ Health Perspect. 1071, 1078 (2010) (“The available evidence from human and nonhuman primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD [Parkinson’s disease].”).

[20] RMSE3ed at 646.

[21] Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sciences 223, 224 (2011).

[22] RMSE3d at xiv.

[23] RMSE3d at xiv.

[24] RMSE3d at xiv-xv.

[25] See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006); Exxon Corp. v. Makofski, 116 SW 3d 176 (Tex. Ct. App. 2003).

[26] Goldstein here and elsewhere has confused significance probability with the posterior probability required by courts and scientists.

[27] Margaret A. Berger, “The Admissibility of Expert Testimony,” in RMSE3d 11, 24 (2011).

[28] Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ (May 24, 2012).

[29] In re Viagra Products Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009). 

[31] Id. at 256-57.

[32] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 573.

[33] Id. at 573 n.68.

[34] See In re Viagra Products Liab. Litig., 572 F. Supp. 2d 1071, 1081 (D. Minn. 2008).

[35] RMSE3d at 577 n.81.

[36] Id.

[37] 572 F. Supp. 2d 1071, 1081 (D. Minn. 2008).

[38] David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” in RMSE3ed 209 (2011).

[39] Id. at 254 n.106.

[40] See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3ed 549, 582, 626; John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, Abogado, “Reference Guide on Medical Testimony,” in RMSE3ed 687, 724.  This confusion in nomenclature is regrettable, given the difficulty many lawyers and judges seem to have in following discussions of statistical concepts.

[41] See, e.g., Richard D. De Veaux, Paul F. Velleman, and David E. Bock, Intro Stats 545-48 (3d ed. 2012); Rand R. Wilcox, Fundamentals of Modern Statistical Methods 65 (2d ed. 2010).

[42] See also Daniel Rubinfeld, “Reference Guide on Multiple Regression,” in RMSE3d 303, 321 (describing a p-value > 5% as leading to failing to reject the null hypothesis).

[43] RMSE3d at 254.

[44] See Sander Greenland, “Nonsignificance Plus High Power Does Not Imply Support Over the Alternative,” 22 Ann. Epidemiol. 364, 364 (2012).

[45] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” RMSE3ed 549, 582.

[46] RMSE3d at 579 n.88.

[47] Kenneth Rothman, Sander Greenland, and Timothy Lash, Modern Epidemiology 160 (3d ed. 2008).  See also Kenneth J. Rothman, “Significance Questing,” 105 Ann. Intern. Med. 445, 446 (1986) (“[Simon] rightly dismisses calculations of power as a weak substitute for confidence intervals, because power calculations address only the qualitative issue of statistical significance and do not take account of the results already in hand.”).

[48] RMSE3d at 582 n.93; id. at 582 n.94 (“Thus, in Smith v. Wyeth-Ayerst Labs. Co., 278 F.Supp. 2d 684, 693 (W.D.N.C. 2003), and Cooley v. Lincoln Electric Co., 693 F. Supp. 2d 767, 773 (N.D. Ohio 2010), the courts recognized that the power of a study was critical to assessing whether the failure of the study to find a statistically significant association was exonerative of the agent or inconclusive.”).

[49] See, e.g., Anthony J. Swerdlow, Maria Feychting, Adele C. Green, Leeka Kheifets, David A. Savitz, International Commission for Non-Ionizing Radiation Protection Standing Committee on Epidemiology, “Mobile Phones, Brain Tumors, and the Interphone Study: Where Are We Now?” 119 Envt’l Health Persp. 1534, 1534 (2011) (“Although there remains some uncertainty, the trend in the accumulating evidence is increasingly against the hypothesis that mobile phone use can cause brain tumors in adults.”).

[50] James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).

[51] Samuel Shapiro, “Meta-analysis/Shmeta-analysis,” 140 Am. J. Epidem. 771, 777 (1994).  See also Alvan Feinstein, “Meta-Analysis: Statistical Alchemy for the 21st Century,” 48 J. Clin. Epidem. 71 (1995).

[52] Allen v. Int’l Bus. Mach. Corp., No. 94-264-LON, 1997 U.S. Dist. LEXIS 8016, at *71–*74 (suggesting that meta-analysis of observational studies was controversial among epidemiologists).

[53] 706 F. Supp. 358, 373 (E.D. Pa. 1988).

[54] In re Paoli R.R. Yard PCB Litig., 916 F.2d 829, 856-57 (3d Cir. 1990), cert. denied, 499 U.S. 961 (1991); Hines v. Consol. Rail Corp., 926 F.2d 262, 273 (3d Cir. 1991).

[55] See “The Shmeta-Analysis in Paoli” (July 11, 2019).

[56] In re Joint E. & S. Dist. Asbestos Litig., 827 F. Supp. 1014, 1042 (S.D.N.Y. 1993).

[57] 52 F.3d 1124 (2d Cir. 1995).

[58] Institute of Medicine, Asbestos: Selected Cancers (Wash. D.C. 2006).

[59] See Michael O. Finkelstein and Bruce Levin, “Meta-Analysis of ‘Sparse’ Data: Perspectives from the Avandia Cases,” Jurimetrics J. (2011).

[60] See Donna Stroup, et al., “Meta-analysis of Observational Studies in Epidemiology: A Proposal for Reporting,” 283 J. Am. Med. Ass’n 2008 (2000) (MOOSE statement); David Moher, Deborah Cook, Susan Eastwood, Ingram Olkin, Drummond Rennie, and Donna Stroup, “Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement,” 354 Lancet 1896 (1999).  See also Jesse Berlin & Carin Kim, “The Use of Meta-Analysis in Pharmacoepidemiology,” in Brian Strom, ed., Pharmacoepidemiology 681, 683–84 (4th ed. 2005); Zachary Gerbarg & Ralph Horwitz, “Resolving Conflicting Clinical Trials: Guidelines for Meta-Analysis,” 41 J. Clin. Epidemiol. 503 (1988).

[61] See Finkelstein & Levin, supra at note 59. See also In re Bextra and Celebrex Marketing Sales Practices and Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1174, 1184 (N.D. Cal. 2007) (holding that reliance upon “[a] meta-analysis of all available published and unpublished randomized clinical trials” was reasonable and appropriate, and criticizing the expert witnesses who urged the complete rejection of meta-analysis of observational studies).

[62] RMSE 3d at 254 n.107.

[63] Id. at 289.

[64] Reference Guide on Epidemiology, RMSE3d at 624.  See also id. at 581 n. 89 (“Meta-analysis is better suited to combining results from randomly controlled experimental studies, but if carefully performed it may also be helpful for observational studies, such as those in the epidemiologic field.”).

[65] Id. at 579; see also id. at 607 n. 171.

[66] Id. at 607.

[67] Id. at 607 n.177.

[68] Id. at 608.

[69] RMSE 3d at 722-23.

[70] Id. at 723 n.143 (“143. … Video Software Dealers Ass’n v. Schwarzenegger, 556 F.3d 950, 963 (9th Cir. 2009) (analyzing a meta-analysis of studies on video games and adolescent behavior); Kennecott Greens Creek Min. Co. v. Mine Safety & Health Admin., 476 F.3d 946, 953 (D.C. Cir. 2007) (reviewing the Mine Safety and Health Administration’s reliance on epidemiological studies and two meta-analyses).”).

[71] Id. at 723-24.

Rule 702 is Liberal, Not Libertine; Epistemic, Not Mechanical

October 4th, 2021

One common criticism of expert witness gatekeeping after the Supreme Court’s Daubert decision has been that the decision contravenes the claimed “liberal thrust” of the Federal Rules of Evidence. The criticism has been repeated so often as to become a cliché, but its frequent repetition by lawyers and law professors hardly makes it true. The criticism fails to do justice to the range of interpretations of “liberal” in the English language, the context of expert witness common law, and the language of Rule 702, both before and after the Supreme Court’s Daubert decision.

The first problem with the criticism is that the word “liberal,” or the phrase “liberal thrust,” does not appear in the Federal Rules of Evidence. The drafters of the Rules did, however, set out the underlying purpose of the federal codification of common law evidence in Rule 102, with some care:

“These rules should be construed so as to administer every proceeding fairly, eliminate unjustifiable expense and delay, and promote the development of evidence law, to the end of ascertaining the truth and securing a just determination.”

Nothing could promote ascertaining truth and achieving just determinations more than eliminating weak and invalid scientific inference in the form of expert witness opinion testimony. Barring speculative, unsubstantiated, and invalid opinion testimony before trial certainly has the tendency to eliminate full trials, with their expense and delay. And few people would claim unfairness in deleting invalid opinions from litigation. If there is any “liberal thrust” in the purpose of the Federal Rules of Evidence, it serves to advance the truth-finding function of trials.

And yet some legal commentators go so far as to claim that Daubert was wrongly decided because it offends the “liberal thrust” of the Federal Rules.[1] Of course, it is true that the Supreme Court spoke of the basic standard of relevance in the Federal Rules as being a “liberal” standard.[2] And in holding that Rule 702 did not incorporate the so-called Frye general acceptance rule,[3] the Daubert Court observed that the drafting history of Rule 702 failed to mention Frye, just before invoking the liberal-thrust cliché:

“rigid ‘general acceptance’ requirement would be at odds with the ‘liberal thrust’ of the Federal Rules and their ‘general approach of relaxing the traditional barriers to ‘opinion testimony’.”[4]

The Court went on to cite one district court judge famously hostile to expert witness gatekeeping,[5] and to invoke the “permissive backdrop” of the Rules, in holding that the Rules did not incorporate Frye,[6] which it characterized as an “austere” standard.[7]

While the Frye standard may have been “austere,” it was also widely criticized, and it was largely applied to scientific devices rather than to the scientific process of causal inference. The Frye case itself addressed the admissibility of a systolic blood pressure deception test, an early attempt by William Marston to design a lasso of truth. When courts distinguished the Frye cases on the ground that they involved devices, not causal inferences, they left no meaningful standard in place.

As a procedural matter, the Frye general acceptance standard made little sense in the context of causal opinions. If the opinion itself was generally accepted, then of course it would have to be admitted. Indeed, if the proponent sought judicial notice of the opinion, a trial court would likely have to admit the opinion, and then bar any contrary opinion as not generally accepted.

To be sure, before the Daubert decision, defense counsel attempted to invoke the Frye standard in challenges to the underlying methodology used by expert witnesses to draw causal inferences. There were, however, few such applications. Although not exactly how Frye operated, the Supreme Court might have imagined that the Frye standard required all expert witness opinion testimony to be based on “sufficiently established and accepted scientific methods.” The actual language of the 1923 Frye case provides some ambivalent support with its twilight zone standard:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”[8]

There was always an interpretative difficulty in how exactly a trial court was supposed to poll the world’s scientific community to ascertain “general acceptance.” Moreover, the rule actually before the Daubert Court, Rule 702, spoke of “knowledge.” At best, “general acceptance,” whether of methods or of conclusions, was merely a proxy, and often a very inaccurate one, for an epistemic basis for disputed claims or conclusions at issue in litigation.

In cases involving causal claims before Daubert, expert witness opinions received scant attention from trial judges as long as the proffered expert witness met the very minimal standard of expertise needed to qualify to give an opinion. Furthermore, Rule 705 relieved expert witnesses of having to provide any bases for their opinions on direct examination. The upshot was that the standard for admissibility was authoritarian, not epistemic. If the proffered witness had a reasonable pretense to expertise, then the proffering party could parade him or her as an “authority,” on whose opinion the jury could choose to rely in its fact finding. Given this context, any epistemic standard would be “liberal” in freeing the jury or fact finder from the yoke of authoritarian expert witness ipse dixit.

And what exactly is the “liberal” in all this thrusting over Rule 702? Most dictionaries report that the word “liberal” traces back to the Latin liber, meaning “free.” The Latin word is thus the root of both liberty and libertine. One of the major, early uses of the adjective liberal was in the phrase “liberal arts,” meant to denote courses of study freed from authority, dogmas, and religious doctrine. The primary definition provided by the Oxford English Dictionary emphasizes this specific meaning:

“1. Originally, the distinctive epithet of those ‘arts’ or ‘sciences’ (see art 7) that were considered ‘worthy of a free man’; opposed to servile or mechanical.  … . Freq. in liberal arts.”

The Frye general acceptance standard was servile in the sense of its deference to others who were the acceptors, and it was mechanical in its reducing a rule that called for “knowledge” into a formula for nose-counting among the entire field in which an expert witness was testifying. In this light, the Daubert Court’s decision is obvious.

To be sure, the OED provides other subordinate or secondary definitions for “liberal,” such as 3c:

Of construction or interpretation: Inclining to laxity or indulgence; not rigorous.”

Perhaps this definition would suggest that a liberal interpretation of Rule 702 would lead to rejecting the Frye standard because that standard rigorously determined admissibility by a rigid proxy not necessarily tied to the rule’s requirement of knowledge. Of course, knowledge or epistemic criteria in the Rule imply a different sort of rigor, one that is not servile or mechanical.

The epistemic criterion built into the original Rule 702, and carried forward in every amendment, accords with the secondary meanings given by the OED:

4. a. Free from narrow prejudice; open-minded, candid.

b. esp. Free from bigotry or unreasonable prejudice in favour of traditional opinions or established institutions; open to the reception of new ideas or proposals of reform.”

The Daubert case represented a step in the direction of the classically liberal goal of advancing the truth-finding function of trials. The counter-revolution of “let it all in,” under the guise of treating challenges to expert witness opinions as going to “weight, not admissibility,” or of inventing “presumptions of admissibility,” should be seen for what it is: a retrograde and illiberal movement in jurisprudential progress.


[1] See, e.g., Michael H. Graham, “The Expert Witness, Predicament: Determining ‘Reliable’ Under the Gatekeeping Test of Daubert, Kumho, and Proposed Amended Rule 702 of the Federal Rules of Evidence,” 54 U. Miami L. Rev. 317, 321 (2000) (“Daubert is a very incomplete case, if not a very bad decision. It did not, in any way, accomplish what it was meant to, i.e., encourage more liberal admissibility of expert witness evidence.”)

[2] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 587 (1993).

[3] Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).

[4] Id. at 588, citing Beech Aircraft Corp. v. Rainey, 488 U.S. 153, 169 (citing Rules 701 to 705); see also Edward J. Imwinkelried, “A Brief Defense of the Supreme Court’s Approach to the Interpretation of the Federal Rules of Evidence,” Indiana L. Rev. 267, 294 (1993) (writing of the “liberal structural design” of the Federal Rules).

[5] Jack B. Weinstein, “Rule 702 of the Federal Rules of Evidence is Sound; It Should Not Be Amended,” 138 F. R. D. 631 (1991) (“The Rules were designed to depend primarily upon lawyer-adversaries and sensible triers of fact to evaluate conflicts”).

[6] Daubert at 589.

[7] Id.

[8] Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (1923).

People Get Ready – There’s a Reference Manual a Comin’

July 16th, 2021

Science is the key …

Back in February, I wrote about a National Academies’ workshop that featured some outstanding members of the scientific and statistical world, and which gave participants the opportunity to identify new potential subjects for inclusion in a proposed fourth edition of the Reference Manual on Scientific Evidence.[1] Funding for that new edition is now secured, and the National Academies has published a précis of the February workshop. National Academies of Sciences, Engineering, and Medicine, Emerging Areas of Science, Engineering, and Medicine for the Courts: Proceedings of a Workshop – in Brief (Washington, DC 2021). The Rapporteurs for these proceedings provide a helpful overview of this meeting, which was not generally covered in the legal media.[2]

The goal of the workshop, which was supported by a planning committee, the Committee on Science, Technology, and Law, the National Academies, the Federal Judicial Center, and the National Science Foundation, was, of course, to identify chapters for a new, fourth edition of the Reference Manual on Scientific Evidence. The workshop was co-chaired by Dr. Thomas D. Albright, of the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, Judge on the U.S. Court of Appeals for the Federal Circuit.

The Rapporteurs duly noted Judge O’Malley’s Workshop comments that she hoped the reconsideration of the Reference Manual could help close the gap between science and the law. It is thus encouraging that the Rapporteurs focused a large part of their summary on the presentation of Professor Xiao-Li Meng[3] on selection bias, which “can come from cherry picking data, which alters the strength of the evidence.” Meng identified the

“7 S’(ins)” of selection bias:

(1) selection of target/hypothesis (e.g., subgroup analysis);

(2) selection of data (e.g., deleting ‘outliers’ or using only ‘complete cases’);

(3) selection of methodologies (e.g., choosing tests to pass the goodness-of-fit);

(4) selective due diligence and debugging (e.g., triple checking only when the outcome seems undesirable);

(5) selection of publications (e.g., only when p-value <0.05);

(6) selections in reporting/summary (e.g., suppressing caveats); and

(7) selections in understanding and interpretation (e.g., our preference for deterministic, ‘common sense’ interpretation).”

Meng also addressed the problem of analyzing subgroup findings after not finding an association in the full sample, dubious algorithms, selection bias in publishing “splashy” and nominally “statistically significant” results, and media bias and incompetence in disseminating study results. These selection effects compromise the accuracy of research findings, and thus the validity and reliability of the studies upon which expert witnesses rely in court cases.
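Meng’s fifth “S,” the selection of publications only when p < 0.05, is easy to demonstrate by simulation. The sketch below assumes a modest true effect and a run of small, underpowered studies, with every number invented for illustration; it compares the average estimate across all simulated studies with the average among only the nominally “significant” ones.

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2    # true mean difference, in standard-deviation units
N_PER_ARM = 30       # deliberately small, underpowered studies
N_STUDIES = 5000

all_estimates, published_estimates = [], []

for _ in range(N_STUDIES):
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / N_PER_ARM
                   + statistics.variance(control) / N_PER_ARM)
    if abs(diff / se) > 1.96:   # "publish" only nominally significant results
        published_estimates.append(diff)
    all_estimates.append(diff)

print(f"True effect:                  {TRUE_EFFECT:.2f}")
print(f"Mean of all estimates:        {statistics.mean(all_estimates):.2f}")
print(f"Mean of 'published' results:  {statistics.mean(published_estimates):.2f}")
```

The simulated literature that survives the significance filter overstates the true effect, on average, by a wide margin, which is precisely the distortion a gatekeeping court should appreciate before treating a handful of published positive studies as corroboration.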

The Rapporteurs’ emphasis on Professor Meng’s presentation was noteworthy because the current edition of the Reference Manual is generally lacking in a serious exploration of systematic bias and confounding. To be sure, the concepts are superficially addressed in the Manual’s chapter on epidemiology, but in a way that has allowed many district judges to shrug off serious questions of invalidity with the shibboleth that such questions “go to the weight, not the admissibility,” of challenged expert witness opinion testimony. Perhaps the pending revision to Rule 702 will help improve fidelity to the spirit and text of Rule 702.

Questions of bias and noise have come to receive more attention in the professional statistical and epidemiologic literature. In 2009, Professor Timothy Lash published an important book-length treatment of quantitative bias analysis.[4] Last year, statistician David Hand published a comprehensive, but readily understandable, book on “Dark Data,” and the ways statistical and scientific inference are derailed.[5] One of the presenters at the February workshop, Nobel laureate Daniel Kahneman, published a book on “noise,” just a few weeks ago.[6]

David Hand’s book, Dark Data (Chapter 10), sets out a useful taxonomy of the ways that data can be subverted by what the consumers of data do not know. The taxonomy would provide a useful organizational map for a new chapter of the Reference Manual:

A Taxonomy of Dark Data

Type 1: Data We Know Are Missing

Type 2: Data We Don’t Know Are Missing

Type 3: Choosing Just Some Cases

Type 4: Self- Selection

Type 5: Missing What Matters

Type 7: Changes with Time

Type 8: Definitions of Data

Type 9: Summaries of Data

Type 11: Feedback and Gaming

Type 12: Information Asymmetry

Type 13: Intentionally Darkened Data

Type 14: Fabricated and Synthetic Data

Type 15: Extrapolating beyond Your Data

Providing guidance not only on “how we know,” but also on how we go astray (patho-epistemology), would be helpful for judges and lawyers. Hand’s book is really just a beginning to helping gatekeepers appreciate how superficially plausible health-effects claims are invalidated by the data relied upon by proffered expert witnesses.
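A toy example of Hand’s Type 4, self-selection, shows how the unseen portion of the data does the damage. Suppose, purely hypothetically, that dissatisfied users of a product rarely answer a follow-up survey while satisfied users often do; the analyst who sees only the returned surveys will badly overestimate satisfaction. All of the rates below are invented.

```python
import random

random.seed(2)

N = 100_000
TRUE_SATISFACTION = 0.60      # assumed true rate in the whole population
RESPONSE_IF_SATISFIED = 0.50  # satisfied users often answer the survey
RESPONSE_IF_NOT = 0.10        # dissatisfied users rarely do (self-selection)

population = []
for _ in range(N):
    satisfied = random.random() < TRUE_SATISFACTION
    respond_prob = RESPONSE_IF_SATISFIED if satisfied else RESPONSE_IF_NOT
    responded = random.random() < respond_prob
    population.append((satisfied, responded))

true_rate = sum(s for s, _ in population) / N
responders = [s for s, r in population if r]
observed_rate = sum(responders) / len(responders)

print(f"True satisfaction rate:        {true_rate:.1%}")
print(f"Rate among survey responders:  {observed_rate:.1%}")
```

The rate among responders comes out far above the true rate, even though every returned survey is accurately recorded; the error lives entirely in the data that never arrived.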

* * * * * * * * * * * *

“There ain’t no room for the hopeless sinner
Who would hurt all mankind, just to save his own, believe me now
Have pity on those whose chances grow thinner”


[1]Reference Manual on Scientific Evidence v4.0” (Feb. 28, 2021).

[2] Steven Kendall, Joe S. Cecil, Jason A. Cantone, Meghan Dunn, and Aaron Wolf.

[3] Prof. Meng is the Whipple V. N. Jones Professor of Statistics at Harvard University. (“Seeking simplicity in statistics, complexity in wine, and everything else in fortune cookies.”)

[4] Timothy L. Lash, Matthew P. Fox, and Aliza K. Fink, Applying Quantitative Bias Analysis to Epidemiologic Data (2009).

[5] David J. Hand, Dark Data: Why What You Don’t Know Matters (2020).

[6] Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (2021).

A Proclamation from the Task Force on Statistical Significance

June 21st, 2021

The American Statistical Association (ASA) has finally spoken up about statistical significance testing.[1] Sort of.

Back in February of this year, I wrote about the simmering controversy over statistical significance at the ASA.[2] Back in 2016, the ASA issued its guidance paper on p-values and statistical significance, which sought to correct misinterpretations and misrepresentations of “statistical significance.”[3] Lawsuit industry lawyers seized upon the ASA statement to proclaim a new freedom from having to exclude random error.[4] To obtain their ends, however, the plaintiffs’ bar had to distort the ASA guidance in statistically significant ways.

To add to the confusion, in 2019, the ASA Executive Director published an editorial that called for an end to statistical significance testing.[5] Because the editorial lacked disclaimers about whether or not it represented official ASA positions, scientists, statisticians, and lawyers on all sides were fooled into thinking the ASA had gone whole hog.[6] Then ASA President Karen Kafadar stepped into the breach to explain that the Executive Director was not speaking for the ASA.[7]

In November 2019, members of the ASA board of directors (BOD) approved a motion to create a “Task Force on Statistical Significance and Replicability.”[8] Its charge was

“to develop thoughtful principles and practices that the ASA can endorse and share with scientists and journal editors. The task force will be appointed by the ASA President with advice and participation from the ASA BOD. The task force will report to the ASA BOD by November 2020.”

The members of the Task Force identified in the motion were:

Linda Young (Nat’l Agricultural Statistics Service & Univ. of Florida; Co-Chair)

Xuming He (Univ. Michigan; Co-Chair)

Yoav Benjamini (Tel Aviv Univ.)

Dick De Veaux (Williams College; ASA Vice President)

Bradley Efron (Stanford Univ.)

Scott Evans (George Washington Univ.; ASA Publications Representative)

Mark Glickman (Harvard Univ.; ASA Section Representative)

Barry Graubard (Nat’l Cancer Instit.)

Xiao-Li Meng (Harvard Univ.)

Vijay Nair (Wells Fargo & Univ. Michigan)

Nancy Reid (Univ. Toronto)

Stephen Stigler (Univ. Chicago)

Stephen Vardeman (Iowa State Univ.)

Chris Wikle (Univ. Missouri)

Tommy Wright (U.S. Census Bureau)

Despite the inclusion of highly accomplished and distinguished statisticians on the Task Force, there were isolated demurrers. Geoff Cumming, for one, clucked:

“Why won’t statistical significance simply whither and die, taking p <. 05 and maybe even p-values with it? The ASA needs a Task Force on Statistical Inference and Open Science, not one that has its eye firmly in the rear view mirror, gazing back at .05 and significance and other such relics.”[9]

Despite the clucking, the Task Force arrived at its recommendations, but curiously, its report did not find a home in an ASA publication. Instead, “The ASA President’s Task Force Statement on Statistical Significance and Replicability” has now appeared as an “in press” publication at The Annals of Applied Statistics, where Karen Kafadar is the editor in chief.[10] The report is accompanied by an editorial by Kafadar.[11]

The Task Force advanced five basic propositions, which may have been obscured by some of the recent glosses on the ASA 2016 p-value statement:

  1. “Capturing the uncertainty associated with statistical summaries is critical.”
  2. “Dealing with replicability and uncertainty lies at the heart of statistical science. Study results are replicable if they can be verified in further studies with new data.”
  3. “The theoretical basis of statistical science offers several general strategies for dealing with uncertainty.”
  4. “Thresholds are helpful when actions are required.”
  5. “P-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data.”

All of this seems obvious and anodyne, but I suspect it will not silence the clucking.
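For readers who want the propositions made concrete, the sketch below works through a single hypothetical two-arm trial, with all counts invented for illustration. It computes a risk ratio, an approximate 95% confidence interval that captures the uncertainty (proposition 1), and a two-sided p-value from the normal approximation (proposition 5); the 0.05 threshold matters only when a yes-or-no decision is actually required (proposition 4).

```python
import math

# Hypothetical two-arm trial counts, invented for illustration only.
events_treated, n_treated = 60, 1000
events_control, n_control = 40, 1000

risk_t = events_treated / n_treated
risk_c = events_control / n_control
rr = risk_t / risk_c

# Approximate standard error of log(RR), then a 95% confidence interval.
se_log_rr = math.sqrt((1 - risk_t) / events_treated
                      + (1 - risk_c) / events_control)
log_rr = math.log(rr)
ci_low = math.exp(log_rr - 1.96 * se_log_rr)
ci_high = math.exp(log_rr + 1.96 * se_log_rr)

# Two-sided p-value from the normal approximation to the z statistic.
z = log_rr / se_log_rr
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"RR = {rr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), p = {p_value:.3f}")
```

The interval, not a bare verdict of “significant,” conveys how precisely the data speak; the p-value and the threshold add rigor only when they are interpreted for what they are.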


[1] Deborah Mayo, “Alas! The ASA President’s Task Force Statement on Statistical Significance and Replicability,” Error Statistics (June 20, 2021).

[2]Falsehood Flies – The ASA 2016 Statement on Statistical Significance” (Feb. 26, 2021).

[3] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016); see “The American Statistical Association’s Statement on and of Significance” (March 17, 2016).

[4]The American Statistical Association Statement on Significance Testing Goes to Court – Part I” (Nov. 13, 2018); “The American Statistical Association Statement on Significance Testing Goes to Court – Part 2” (Mar. 7, 2019).

[5]Has the American Statistical Association Gone Post-Modern?” (Mar. 24, 2019); “American Statistical Association – Consensus versus Personal Opinion” (Dec. 13, 2019). See also Deborah G. Mayo, “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean,” Error Statistics Philosophy (June 17, 2019); B. Haig, “The ASA’s 2019 update on P-values and significance,” Error Statistics Philosophy  (July 12, 2019); Brian Tarran, “THE S WORD … and what to do about it,” Significance (Aug. 2019); Donald Macnaughton, “Who Said What,” Significance 47 (Oct. 2019).

[6] Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[7] Karen Kafadar, “The Year in Review … And More to Come,” AmStat News 3 (Dec. 2019) (emphasis added); see Kafadar, “Statistics & Unintended Consequences,” AmStat News 3,4 (June 2019).

[8] Karen Kafadar, “Task Force on Statistical Significance and Replicability,” ASA Amstat Blog (Feb. 1, 2020).

[9] See, e.g., Geoff Cumming, “The ASA and p Values: Here We Go Again,” The New Statistics (Mar. 13, 2020).

[10] Yoav Benjamini, Richard D. De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 2021, available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51526?confirm=79a17040.

[11] Karen Kafadar, “Editorial: Statistical Significance, P-Values, and Replicability,” 15 Annals of Applied Statistics 2021, available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51525?confirm=3079934e.