TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Doctor Moline – Why Can’t You Be True?

December 18th, 2022

Doctor Moline, why can’t you be true?

Oh, Doc Moline, why can’t you be true?

You done started doing the things you used to do.

Mass torts are the product of the lawsuit industry, and since the 1960s, this industry has produced tort claims on a truly industrial scale. The industry now has an economic ally and adjunct in the litigation finance industry, and it has been boosted by the desuetude of laws against champerty and maintenance. The way that mass torts are adjudicated in some places could easily be interpreted as legalized theft.

One governor on the rapaciousness of the lawsuit industry has been the requirement that claims actually be proven in court. Since the Supreme Court’s ruling in Daubert, the defense bar has been able, on notable occasions, to squelch some instances of false claiming. Just as equity often varies with the length of the Chancellor’s foot, gatekeeping of scientific opinion about causation often varies with the scientific acumen of the trial judge. From the decision in Daubert itself, gatekeeping has been under assault from the lawsuit industry and its allies. I have, in these pages, detailed the efforts of the now defunct Project on Scientific Knowledge and Public Policy (SKAPP) to undermine any gatekeeping of scientific opinion testimony for scientific or statistical validity. SKAPP, as well as other organizations, and some academics, in aid of the lawsuit industry, have lobbied for the abandonment of the requirement of proving causation, or for the dilution of the scientific standards for expert opinions of causation.[1] The counter to this advocacy has been, and continues to be, an insistence that the traditional elements of a case, including general and specific causation, be sufficiently proven, with opinion testimony that satisfies the legal knowledge requirement for such testimony.

Alas, expert witness testimony can go awry in other ways besides merely failing to satisfy the validity and relevance requirements of the law of evidence.[2] One response I had not previously contemplated is a suit against the expert witness for defamation or “product disparagement.”

We are now half a century since occupational exposures to various asbestos fibers came under general federal regulatory control, with regulatory requirements that employers warn their employees about the hazards involved with asbestos exposure. This federally enforced dissemination of information about asbestos hazards created a significant problem for the asbestos lawsuit industry.  Cases of mesothelioma have always occurred among persons non-occupationally exposed to asbestos, but as occupational exposure declined, the relative proportion of mesothelioma cases with no obvious occupational exposures increased. The lawsuit industry could not stand around and let these tragic cases go to waste.

Cosmetic talc variably contains some mineral particulate that comes under the category of “elongate mineral particles” (EMP), which the lawsuit industry could assert is “asbestos.” As a result, this industry has been able to recast asbestos litigation as a new morality tale against cosmetic talc producers and sellers. LTL Management LLC was formerly known as Johnson & Johnson Consumer Inc. [J&J], a manufacturer and seller of cosmetic talc. J&J became a major target of the lawsuit industry in mesothelioma (and ovarian cancer) cases, based upon claims that EMP/asbestos in cosmetic talc caused claimants’ cancers. The lawsuit industry recruited its usual retinue of expert witnesses to support its litigation efforts.

Standing out in this retinue was Dr. Jacqueline Moline. On December 16, 2022, J&J did something that rarely happens in the world of mass torts; it sued Dr. Moline for fraud, injurious falsehood and product disparagement, and violations of the Lanham Act (§ 43(a), 15 U.S.C. § 1125(a)).[3] The gravamen of the complaint is that Dr. Moline, in 2020, published a case series of 33 persons who supposedly used cosmetic talc products and later developed malignant mesothelioma. According to her article, the 33 patients had no other exposures to asbestos, which, she concluded, showed that cosmetic talc use can cause mesothelioma:

Objective: To describe 33 cases of malignant mesothelioma among individuals with no known asbestos exposure other than cosmetic talcum powder.

Methods: Cases were referred for medico-legal evaluation, and tissue digestions were performed in some cases. Tissue digestion for the six cases described was done according to standard methodology.

Results: Asbestos of the type found in talcum powder was found in all six cases evaluated. Talcum powder usage was the only source of asbestos for all 33 cases.

Conclusions: Exposure to asbestos-contaminated talcum powders can cause mesothelioma. Clinicians should elicit a history of talcum powder usage in all patients presenting with mesothelioma.[4]

Jacqueline Moline and Ronald Gordon both gave anemic conflicts disclosures: “Authors J.M. and R.G. have served as expert witnesses in asbestos litigation, including talc litigation for plaintiffs.”[5] Co-author Maya Alexandri was a lawyer at the time of publication; she is now a physician practicing emergency medicine, and also a fabulist. The article does not disclose the nature of Dr. Alexandri’s legal practice.

Dr. Moline is a professor and chair of occupational medicine at the Zucker School of Medicine at Hofstra/Northwell. She received her medical degree from the University of Chicago-Pritzker School of Medicine and a Master of Science degree in community medicine from the Mount Sinai School of Medicine. She completed a residency in internal medicine at Yale New Haven Hospital and an occupational and environmental medicine residency at Mount Sinai Medical Center. Dr. Moline is also a major-league testifier for the lawsuit industry.  Over the last quarter century, she has testified from sea to shining sea, for plaintiffs in asbestos, talc, and other litigations.[6]

According to J&J, Dr. Moline was listed as an expert witness for plaintiffs in over 200 talc mesothelioma cases against J&J. There are, of course, other target defendants in this litigation, and the actual case count is likely higher. Moline has testified in 46 talc cases against J&J, including at trial in 16 of those cases.[7] J&J estimates that she has made millions of dollars in service of the lawsuit industry.[8]

The authors’ own description of the manuscript makes clear the concern over the validity of personal and occupational histories of the 33 cases: “This manuscript is the first to describe mesothelioma among talcum powder consumers. Our case study suggest [sic] that cosmetic talcum powder use may help explain the high prevalence of idiopathic mesothelioma cases, particularly among women, and stresses the need for improved exposure history elicitation among physicians.”[9]

The Complaint alleges that Moline knew that her article, testimony, and public statements about the absence of occupational asbestos exposure in subjects of her case series, were false.  After having her testimony either excluded by trial courts, or held on appeal to be legally insufficient,[10] Moline set out to have a peer-reviewed publication that would support her claims. Because mesothelioma is sometimes considered, uncritically, as pathognomonic of amphibole asbestos exposure, Moline was obviously keen to establish the absence of occupational exposure in any of the 33 cases.

Alas, the truth appears to have caught up with Moline because some of the 33 cases were in litigation, in which the detailed histories of each case would be discovered. Defense counsel sought to connect the dots between the details of each of the 33 cases and the details of pending or past lawsuits. The federal district court decision in the case of Bell v. American International Industries blew open the doors of Moline’s alleged fraud.[11] Betty Bell claimed that her use of cosmetic talc had caused her to develop mesothelioma. What Dr. Moline and Bell’s counsel were bound to have known was that Bell had had occupational exposure to asbestos. Before filing a civil action against talc product suppliers, Bell filed workers’ compensation claims against two textile industry employers.[12] Judge Osteen’s opinion in Bell documents the anxious zeal that plaintiffs’ counsel brought to bear in trying to suppress the true nature of Ms. Bell’s exposure. After Judge Osteen excoriated Moline and plaintiffs’ counsel for their efforts to conceal information about Bell’s occupational asbestos exposures, and about her inclusion in the 33-case series, plaintiffs’ counsel dismissed her case.

Another of the 33 cases was the New Jersey case brought by Stephen Lanzo, for whom Moline testified as an expert witness.[13] In the course of the Lanzo case, the defense developed facts of Mr. Lanzo’s prior asbestos exposure.  Crocidolite fibers were found in his body, even though the amphibole crocidolite is not a fiber type found in talc. Crocidolite is orders of magnitude more potent in causing human mesotheliomas than other asbestos fiber types.[14] Despite these facts, Dr. Moline appears to have included Lanzo as one of the 33 cases in her article.

And then there were others, too.


[1] See “Skappology” (May 26, 2020); “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).

[2] See, e.g., “Legal Remedies for Suspect Medical Science in Products Cases – Part One” (June 2, 2020); “Part Two” (June 3, 2020); “Part Three” (June 5, 2020); “Part 4” (June 7, 2020); “Part 5” (June 8, 2020).

[3] LTL Management LLC v. Dr. Jacqueline Miriam Moline, Adv. Proc. No. 22- ____, in Chap. 11, Case No. 21-30589, Bankruptcy Ct., D.N.J. (Dec. 16, 2022) [Complaint].

[4] Jacqueline Moline, Kristin Bevilacqua, Maya Alexandri, and Ronald E. Gordon, “Mesothelioma Associated with the Use of Cosmetic Talc,” 62 J. Occup. & Envt’l Med. 11 (Jan. 2020) (emphasis added) [cited as Moline].

[5] Dr. Gordon has had other litigation activities of interest. See William C. Rempel, “Alleged Mob Case May Best Illustrate How Not to Play the Game : Crime: Scheme started in a Texas jail and ended with reputed mobsters charged in $30-million laundering scam,” L.A. Times (July 4, 1993).

[6] See, e.g., Fowler v. Akzo Nobel Chemicals, Inc., 251 N.J. 300, 276 A.3d 1146 (2022); Lanzo v. Cyprus Amax Minerals Co., 467 N.J. Super. 476, 254 A.3d 691 (App. Div. 2021); Fishbain v. Colgate-Palmolive Co., No. A-1786-15T2 (N.J. App. Div. 2019); Buttitta v. Allied Signal, Inc., N.J. App. Div. (2017); Kaenzig v. Charles B. Chrystal Co., N.J. App. Div. (2015); Anderson v. A.J. Friedman Supply Co., 416 N.J. Super. 46, 3 A.3d 545 (App. Div. 2010); Cioni v. Avon Prods., Inc., 2022 NY Slip Op 33197(U) (2022); Zicklin v. Bergdorf Goodman Inc., 2022 NY Slip Op 32119(U) (N.Y. Sup. Ct. N.Y. Cty. 2022); Nemeth v. Brenntag North America, 183 A.D.3d 211, 123 N.Y.S.3d 12 (2020), rev’d, 38 N.Y.3d 336, 345 (2022) (Moline’s testimony insufficient); Olson v. Brenntag North America, Inc., 2020 NY Slip Op 33741(U) (N.Y. Sup. Ct. N.Y. Cty. 2020), rev’d, 207 A.D.3d 415, 416 (N.Y. 1st Dep’t 2022) (holding Moline’s testimony on causation insufficient); Moldow v. A.I. Friedman, L.P., 2019 NY Slip Op 32060(U) (N.Y. Sup. Ct. N.Y. Cty. 2019); Zoas v. BASF Catalysts, LLC, 2018 NY Slip Op 33009(U) (N.Y. Sup. Ct. N.Y. Cty. 2018); Prokocimer v. Avon Prods., Inc., 2018 NY Slip Op 33170(U) (Dec. 11, 2018); Shulman v. Brenntag North America, Inc., 2018 NY Slip Op 32943(U) (N.Y. Sup. Ct. N.Y. Cty. 2018); Pistone v. American Biltrite, Inc., 2018 NY Slip Op 30851(U) (2018); Evans v. 3M Co., 2017 NY Slip Op 30756(U) (N.Y. Sup. Ct. N.Y. Cty. 2017); Juni v. A.O. Smith Water Prods., 48 Misc. 3d 460, 11 N.Y.S.3d 416 (2015), aff’d, 32 N.Y.3d 1116, 116 N.E.3d 75, 91 N.Y.S.3d 784 (2018); Konstantin v. 630 Third Ave. Associates, 121 A.D.3d 230, 990 N.Y.S.2d 174 (2014); Lopez v. Gem Gravure Co., 50 A.D.3d 1102, 858 N.Y.S.2d 226 (2008); Lopez v. Superflex, Ltd., 31 A.D.3d 914, 819 N.Y.S.2d 165 (2006); DeMeyer v. Advantage Auto, 9 Misc. 3d 306, 797 N.Y.S.2d 743 (2005); Amorgianos v. National RR Passenger Corp., 137 F. Supp. 2d 147 (E.D.N.Y. 2001), aff’d, 303 F.3d 256 (2d Cir. 2002); Chapp v. Colgate-Palmolive Co., 2019 Wisc. App. 54, 935 N.W.2d 553 (2019); McNeal v. Whittaker, Clark & Daniels, Inc., 80 Cal. App. 853 (2022); Burnett v. American Internat’l Indus., Case No. 3:20-CV-3046 (W.D. Ark. Jan. 27, 2022); McAllister v. McDermott, Inc., Civ. Action No. 18-361-SDD-RLB (M.D. La. Aug. 14, 2020); Hanson v. Colgate-Palmolive Co., 353 F. Supp. 3d 1273 (S.D. Ga. 2018); Norman-Bloodsaw v. Lawrence Berkeley Laboratory, 135 F.3d 1260 (9th Cir. 1998); Carroll v. Akebono Brake Corp., 514 P.3d 720 (Wash. App. 2022).

[7] Complaint ¶15.

[8] Complaint ¶19.

[9] Moline at 11.

[10] See, e.g., In re New York City Asbestos Litig. (Juni), 148 A.D.3d 233, 236-37, 239 (N.Y. App. Div. 1st Dep’t 2017), aff’d, 32 N.Y.3d 1116, 1122 (2018); Nemeth v. Brenntag North America, 183 A.D.3d 211, 123 N.Y.S.3d 12 (N.Y. App. Div. 2020), rev’d, 38 N.Y.3d 336, 345 (2022); Olson v. Brenntag North America, Inc., 2020 NY Slip Op 33741(U) (N.Y. Sup. Ct. N.Y. Cty. 2020), rev’d, 207 A.D.3d 415, 416 (N.Y. App. Div. 1st Dep’t 2022).

[11] Bell v. American Internat’l Indus. et al., No. 1:17-CV-00111, 2022 U.S. Dist. LEXIS 199180 (M.D.N.C. Sept. 13, 2022) (William Lindsay Osteen, Jr., J.). See Daniel Fisher, “Key talc/cancer study cited by plaintiffs hid evidence of other exposure, lawyers say” (Dec. 1, 2022).

[12] According to the Complaint against Moline, Bell had filed workers’ compensation claims with the North Carolina Industrial Commission, back in 2015, declaring under oath that she had been exposed to asbestos while working for two textile manufacturing employers, Hoechst Celanese Corporation and Pillowtex Corporation. Complaint at ¶102. As frequently happens in civil actions, the claimant dismisses the workers’ compensation claim without prejudice, to pursue the more lucrative payday in a civil action, without the burden of employers’ liens against the recovery. Complaint at ¶102.

[13] See “New Jersey Appellate Division Calls for Do-Over in Baby Powder Dust Up” (May 22, 2021).

[14] David H. Garabrant & Susan T. Pastula, “A comparison of asbestos fiber potency and elongate mineral particle (EMP) potency for mesothelioma in humans,” 361 Toxicology & Applied Pharmacol. 127 (2018) (“relative potency of chrysotile:amosite:crocidolite was 1:83:376”). See also D. Wayne Berman & Kenny S. Crump, “Update of Potency Factors for Asbestos-Related Lung Cancer and Mesothelioma,” 38(S1) Critical Reviews in Toxicology 1 (2008).

An Opinion to SAVOR

November 11th, 2022

The saxagliptin medications are valuable treatments for type 2 diabetes mellitus (T2DM). The SAVOR (Saxagliptin Assessment of Vascular Outcomes Recorded in Patients with Diabetes Mellitus) study was a randomized controlled trial, undertaken by manufacturers at the request of the FDA.[1] As a large (over sixteen thousand patients randomized) double-blinded cardiovascular outcomes trial, SAVOR collected data on many different end points in patients with T2DM, at high risk of cardiovascular disease, over a median of 2.1 years. The primary end point was a composite end point of cardiac death, non-fatal myocardial infarction, and non-fatal stroke. Secondary end points included each constituent of the composite, as well as hospitalizations for heart failure, coronary revascularization, or unstable angina, as well as other safety outcomes.

The SAVOR trial found no association between saxagliptin use and the primary end point, or any of the constituents of the primary end point.  The trial did, however, find a modest association between saxagliptin and one of the several secondary end points, hospitalization for heart failure (hazard ratio, 1.27; 95% C.I., 1.07 to 1.51; p = 0.007). The SAVOR authors urged caution in interpreting their unexpected finding for heart failure hospitalizations, given the multiple end points considered.[2] Notwithstanding the multiplicity, in 2016, the FDA, which does not require a showing of causation for adding warnings to a drug’s labeling, added warnings about the “risk” of hospitalization for heart failure from the use of saxagliptin medications.

And the litigation came.

The litigation evidentiary display grew to include, in addition to SAVOR, observational studies, meta-analyses, and randomized controlled trials of other DPP-4 inhibitor medications that are in the same class as saxagliptin. The SAVOR finding for heart failure was not supported by any of the other relevant human study evidence. The lawsuit industry, however, armed with an FDA warning, pressed its cases. A multi-district litigation (MDL 2809) was established. Rule 702 motions were filed by both plaintiffs’ and defendants’ counsel.

When the dust settled in this saxagliptin litigation, the court found that the defendants’ expert witnesses satisfied the relevance and reliability requirements of Rule 702, whereas the proffered opinions of plaintiffs’ expert witness, Dr. Parag Goyal, a cardiologist at Weill Cornell Medicine in New York, did not.[3] The court’s task was certainly made easier by the lack of any other expert witness or published opinion that saxagliptin actually causes heart failure serious enough to result in hospitalization.

The saxagliptin litigation presented an interesting array of facts for a Rule 702 showdown. First, there was an RCT that reported a nominally statistically significant association between medication use and a harm, hospitalization for heart failure. The SAVOR finding, however, was in a secondary end point, and its statistical significance was unimpressive when considered in the light of the multiple testing that took place in the context of a cardiovascular outcomes trial.

Second, the heart failure increase was not seen in the original registration trials. Third, there was an effort to find corroboration in observational studies and meta-analyses, without success. Fourth, there was no apparent mechanism for the putative effect. Fifth, there was no support from trials or observational studies of other medications in the class of DPP-4 inhibitors.

Dr. Goyal testified that the heart failure finding in SAVOR “should be interpreted as cause and effect unless there is compelling evidence to prove otherwise.” On this record, the MDL court excluded Dr. Goyal’s causation opinions. Dr. Goyal purported to conduct a Bradford Hill analysis, but the MDL court appeared troubled by his glib dismissal of the threat to validity in SAVOR from multiple testing, and his ignoring the consistency prong of the Hill factors. SAVOR was the only heart failure finding in humans, with the remaining observational studies, meta-analyses, and other trials of DPP-4 inhibitors failing to provide supporting evidence.

The challenged defense expert witnesses defended the validity of their opinions, and ultimately the MDL court had little difficulty in permitting them through the judicial gate. The plaintiffs’ challenges to Suneil Koliwad, a physician with a doctorate in molecular physiology, Eric Adler, a cardiologist, and Todd Lee, a pharmaco-epidemiologist, were all denied. The plaintiffs challenged, among other things, whether Dr. Adler was qualified to apply a Bonferroni correction to the SAVOR results, and whether Dr. Lee was obligated to obtain and statistically analyze the data from the trials and studies ab initio. The MDL court quickly dispatched these frivolous challenges.
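The arithmetic of the Bonferroni correction at issue is simple enough to sketch. In the illustration below, the reported p-value (0.007) and hazard ratio come from the SAVOR publication; the counts of end points tested, m, are hypothetical, chosen only to show how quickly nominal significance erodes once one accounts for the many outcomes analyzed in a cardiovascular outcomes trial:

```python
# Back-of-the-envelope Bonferroni adjustment for the SAVOR heart-failure
# finding (HR 1.27; 95% C.I. 1.07-1.51; reported p = 0.007).
# The values of m below are illustrative, not the trial's actual count.

def bonferroni_adjust(p: float, m: int) -> float:
    """Bonferroni-adjusted p-value: min(1, m * p)."""
    return min(1.0, m * p)

reported_p = 0.007  # secondary end point: hospitalization for heart failure

for m in (1, 5, 10, 15):
    adj = bonferroni_adjust(reported_p, m)
    verdict = "significant" if adj < 0.05 else "not significant"
    print(f"{m:2d} end points tested -> adjusted p = {adj:.3f} ({verdict})")
```

On this hypothetical, a nominal p of 0.007 survives five comparisons but not ten, which is one way to see why the trial authors themselves, and later the defense experts, urged caution about the heart-failure signal.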

The saxagliptin MDL decision is an important reminder that litigants should remain vigilant about inaccurate assertions of “statistical significance,” even in premier, peer-reviewed journals. Not all journals are as careful as the New England Journal of Medicine in requiring qualification of claims of statistical significance in the face of multiple testing.

One legal hiccup in the court’s decision was its improvident citation to Daubert, for the proposition that the gatekeeping inquiry must focus “solely on principles and methodology, not on the conclusions they generate.”[4] That piece of obiter dictum did not survive past the Supreme Court’s 1997 decision in Joiner,[5] and it was clearly superseded by statute in 2000. Surely it is time to stop citing Daubert for this dictum.


[1] Benjamin M. Scirica, Deepak L. Bhatt, Eugene Braunwald, Gabriel Steg, Jaime Davidson, et al., for the SAVOR-TIMI 53 Steering Committee and Investigators, “Saxagliptin and Cardiovascular Outcomes in Patients with Type 2 Diabetes Mellitus,” 369 New Engl. J. Med. 1317 (2013).

[2] Id. at 1324.

[3] In re Onglyza & Kombiglyze XR Prods. Liab. Litig., MDL 2809, 2022 WL 43244 (E.D. Ky. Jan. 5, 2022).

[4] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[5] General Electric Co. v. Joiner, 522 U.S. 136 (1997).

Further Thoughts on Cheng’s Consensus Rule

October 3rd, 2022

In “Cheng’s Proposed Consensus Rule for Expert Witnesses,”[1] I discussed a recent law review article by Professor Edward K. Cheng,[2] who has proposed dispensing with expert witness testimony as we know it in favor of having witnesses tell juries what the scientific consensus is on any subject. Cheng’s project is fraught with difficulties and contradictions; and it has clearly anticipatable bad outcomes. Four Supreme Court cases (Daubert, Joiner, Kumho Tire, and Weisgram), and a major revision in Rule 702, ratified by Congress, all embraced the importance of judicial gatekeeping of expert witness opinion testimony to the fact-finding function of trials. Professor Cheng now wants to ditch the entire notion of gatekeeping, as well as the epistemic basis – sufficient facts and data – for expert witnesses’ opinions in favor of reportage of which way the herd is going. Cheng’s proposal is perhaps the most radical attack, in recent times, on the nature of legal factfinding, whether by judges or juries, in the common law world.

Still, there are two claims within his proposal, which although overstated, are worth further discussion and debate. The first is that the gatekeeping role does not sit well with many judges. We see judges ill at ease in their many avoidance tactics, by which they treat serious methodological challenges to expert witness testimony as “merely going to the weight of the conclusion.” The second is that many judges, and especially juries, are completely at sea in the technical knowledge needed to evaluate the scientific issues in many modern day trials.

With respect to the claimed epistemic incompetence, the simpler remedy is to get rid of incompetent judges. We have commercial courts, vaccine courts, and patent courts. Why are litigants disputing a contract or a commercial practice entitled to epistemically competent judges, but litigants in health claim cases are not? Surely, the time has come to have courts with judges that have background and training in the health and statistical sciences. The time for “blue ribbon” juries of properly trained fact finders seems overdue. Somehow we must reconcile the seventh amendment right to a jury with the requirement of “due process” of law. The commitment to jury trials for causes of action known to the common law in 1787, or 1791, is stretched beyond belief for the sorts of technical and complex claims now seen in federal courts and state courts of general jurisdiction.[3]

Several courts have challenged the belief that the seventh amendment right to a jury applies in the face of complex litigation. The United States Court of Appeals for the Third Circuit explained its understanding of the complexity that should remove a case from the province of the seventh amendment:

“A suit is too complex for a jury when circumstances render the jury unable to decide in a proper manner. The law presumes that a jury will find facts and reach a verdict by rational means. It does not contemplate scientific precision but does contemplate a resolution of each issue on the basis of a fair and reasonable assessment of the evidence and a fair and reasonable application of the relevant legal rules. See Schulz v. Pennsylvania RR, 350 U.S. 523, 526 (1956). A suit might be excessively complex as a result of any set of circumstances which singly or in combination render a jury unable to decide in the foregoing rational manner. Examples of such circumstances are an exceptionally long trial period and conceptually difficult factual issues.”[4]

The Circuit’s description of complexity certainly seems to apply to many contemporary claims of health effects.

We should recognize that Professor Cheng’s indictment, and conviction, of judicial gatekeeping and jury decision making as epistemically incompetent directly implies that the judicial process has no epistemic, truth finding function in technical cases of claimed health effects. Cheng’s proposed solution does not substantially ameliorate this implication, because consensus statements are frequently absent, and even when present, are plagued with their own epistemic weaknesses.

Consider for instance, the 1997 pronouncement of the International Agency for Research on Cancer that crystalline silica is a “known” human carcinogen.[5] One of the members of the working group responsible for the pronouncement explained:

“It is hardly surprising that the Working Group had considerable difficulty in reaching a decision, did not do so unanimously and would probably not have done so at all, had it not been explained that we should be concerned with hazard identification, not risk.”[6]

And yet, within months of the IARC pronouncement, state and federal regulatory agencies formed a chorus of assent to the lung cancer “risk” of crystalline silica. Nothing in the scientific record had changed except the permission of the IARC to stop thinking critically about the causation issue. Another consensus group came out, a few years after the IARC pronouncement, with a devastating critical assessment of the IARC review:

“The present authors believe that the results of these studies [cited by IARC] are inconsistent and, when positive, only weakly positive. Other, methodologically strong, negative studies have not been considered, and several studies viewed as providing evidence supporting the carcinogenicity of silica have significant methodological weaknesses. Silica is not directly genotoxic and is a pulmonary carcinogen only in the rat, a species that seems to be inappropriate for assessing particulate carcinogenesis in humans. Data on humans demonstrate a lack of association between lung cancer and exposure to crystalline silica. Exposure-response relationships have generally not been found. Studies in which silicotic patients were not identified from compensation registries and in which enumeration was complete did not support a causal association between silicosis and lung cancer, which further argues against the carcinogenicity of crystalline silica.”[7]

Cheng’s proposal would seem to suppress legitimate courtroom criticism of an apparent consensus statement, which was based upon a narrow majority of a working group, on a controversial dataset, with no examination of the facts and data upon which the putative consensus statement was itself based.

The Avandia litigation tells a cautionary tale of how fragile and ephemeral consensuses can be. A dubious meta-analysis by a well-known author received lead-article billing in an issue of the New England Journal of Medicine, in 2007, and litigation claims started to roll in within hours.[8] In the face of this meta-analysis, an FDA advisory committee recommended heightened warnings, and a trial court declined to take a careful look at the methodological flaws in the inciting meta-analytic study.[9] Ultimately, a large clinical trial exculpated the medication, but by then the harm had been done, and there was no revisiting of the gatekeeping decision to allow the claims to proceed.[10] The point should be obvious. In 2007, there appeared to be a consensus, which led to an FDA label change, despite the absence of sufficient facts and data to support the litigation claims. Even if plaintiffs’ claims passed through the gate in 2008, they were highly vulnerable to courtroom challenges to the original meta-analysis. Cheng’s proposal, however, would truncate the litigation process into an exploration of whether there was a “consensus.”

Deviation from Experts’ Standards of Care

The crux of many Rule 702 challenges to an expert witness is that the witness has committed malpractice in his discipline. The challenger must identify a standard of care, and the challenged witness’s deviation(s) from that standard. The identification of the relevant standard of care will, indeed, sometimes involve a consensus, evidenced by texts, articles, professional society statements, or simply implicit in relevant works of scholarship or scientific studies. Consensuses about standards of care are, of course, about methodology. Consensuses about conclusions, however, may also be relevant because if a litigant’s expert witness proffers a conclusion at odds with consensus conclusions, the deviant conclusion implies deviant methodology.

Cheng’s treatment of statistical significance is instructive for how his proposal would create mischief in many different types of adjudications, but especially in those of claimed health effects. First, Cheng’s misrepresentation of consensus among statisticians is telling for the validity of his project. After all, he holds an advanced degree in statistics, and yet he is willing to write that:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[11]

Statisticians, without qualification! And as was shown, Cheng is demonstrably wrong in his use of the cited source to support his representation of what certainly seems like a consensus paper. His précis is not even remotely close to the language of the paper, but the consensus paper is hearsay and can only be used by an expert witness in support of an opinion.  Presumably, another expert witness might contradict the quoted opinion about what “statisticians” have concluded, but it is unclear whether a court could review the underlying A.S.A. paper, take judicial notice of the incorrectness of the proffered opinion, and then exclude the expert witness opinion.

After the 2016 publication of the A.S.A.’s consensus statement, some statisticians did indeed publish editorials claiming it was time to move beyond statistical significance testing. At least one editorial, by an A.S.A. officer, was cited as representing an A.S.A. position, which led the A.S.A. President to appoint a task force to consider the call for an across-the-board rejection of significance testing. In 2021, that task force clearly endorsed significance testing as having a continued role in statistical practice.[12]

Where would this situation leave a gatekeeping court or a factfinding jury? Some obscure psychology journals have abandoned the use of significance testing, but the New England Journal of Medicine has retained the practice, while introducing stronger controls for claims of “significance” when the study at hand has engaged in multiple comparisons.

But Cheng, qua law professor and statistician (and would-be expert witness), claims “statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful,” and the trial must chase not the validity of the inference of claimed causation but whether there is, or is not, a consensus about the use of a pre-specified threshold for p-values or confidence intervals. Cheng’s proposal about consensuses would turn trials into disputes about whether consensuses exist, and about the scope of the purported agreement, not about truth.

In some instances, there might be a clear consensus, fully supported, on a general causation issue. Consider for instance, the known causal relationship between industrial benzene exposure and acute myelogenous leukemia (AML). This consensus turns out to be rather unhelpful when considering whether minute contamination of carbonated water can cause cancer,[13] or even whether occupational exposure to gasoline, with its low-level benzene (~1%) content, can cause AML.[14]

Frequently, there is also a deep asymmetry in consensus statements. When the evidence for a causal conclusion is very clear, professional societies may weigh in to express their confident conclusions about the existence of causation. Such societies typically do not issue statements that explicitly reject causal claims. The absence of a consensus statement, however, often can be taken to represent a consensus that professional societies do not endorse causal claims, and consider the evidence, at best, equivocal. Those dogs that have not barked can be, and have been, important considerations in gatekeeping.

Contrary to Cheng’s complete dismissal of judges’ epistemic competence, judges can, in many instances, render reasonable gatekeeping decisions by closely considering the absence of consensus statements, or systematic reviews, favoring the litigation claims.[15] At least in this respect, Professor Cheng is right to emphasize the importance of consensus, but he fails to note the importance of its absence, and the ability of litigants and their expert witnesses to inform gatekeeping judges of the relevance of consensus statements, or their absence, to the epistemic assessment of proffered expert witness opinion testimony.


[1] “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022).

[2] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[3] There is an extensive discussion and debate of viability and the validity of asserting rights to trial by jury for many complex civil actions in the modern era. See, e.g., Stephan Landsman & James F. Holderman, “The Evolution of the Jury Trial in America,” 37 Litigation 32 (2010); Robert A. Clifford, “Deselecting the Jury in a Civil Case,” 30 Litigation 8 (Winter 2004); Hugh H. Bownes, “Should Trial by Jury Be Eliminated in Complex Cases,” 1 Risk 75 (1990); Douglas King, “Complex Civil Litigation and the Seventh Amendment Right to a Jury Trial,” 51 Univ. Chi. L. Rev. 581 (1984); Alvin B. Rubin, “Trial by Jury in Complex Civil Cases: Voice of Liberty or Verdict by Confusion?” 462 Ann. Am. Acad. Political & Social Sci. 87 (1982); William V. Luneburg & Mark A. Nordenberg, “Specially Qualified Juries and Expert Nonjury Tribunals: Alternatives for Coping with the Complexities of Modern Civil Litigation,” 67 Virginia L. Rev. 887 (1981); Richard O. Lempert, “Civil Juries and Complex Cases: Let’s Not Rush to Judgment,” 80 Mich. L. Rev. 68 (1981); Comment, “The Case for Special Juries in Complex Civil Litigation,” 89 Yale L. J. 1155 (1980); James S. Campbell & Nicholas Le Poidevin, “Complex Cases and Jury Trials: A Reply to Professor Arnold,” 128 Univ. Penn. L. Rev. 965 (1980); Barry E. Ungar & Theodore R. Mann, “The Jury and the Complex Civil Case,” 6 Litigation 3 (Spring 1980); Morris S. Arnold, “A Historical Inquiry into the Right to Trial by Jury in Complex Civil Litigation,”128 Univ. Penn. L. Rev. 829 (1980); Daniel H. Margolis & Evan M. Slavitt, “The Case Against Trial by Jury in Complex Civil Litigation,” 7 Litigation 19 (1980); Montgomery Kersten, “Preserving the Right to Jury Trial in Complex Civil Cases,” 32 Stanford L. Rev. 99 (1979); Maralynne Flehner, “Jury Trials in Complex Litigation,” 4 St. John’s Law Rev. 751 (1979); Comment, “The Right to a Jury Trial in Complex Civil Litigation,” 92 Harvard L. Rev. 898 (1979); Kathy E. 
Davidson, “The Right to Trial by Jury in Complex Litigation,” 20 Wm. & Mary L. Rev. 329 (1978); David L. Shapiro & Daniel R. Coquillette, “The Fetish of Jury Trial in Civil Cases: A Comment on Rachal v. Hill,” 85 Harvard L. Rev. 442 (1971); Comment, “English Judge May Not Order Jury Trial in Civil Case in Absence of Special Circumstances. Sims v. William Howard & Son Ltd. (C. A. 1964),” 78 Harv. L. Rev. 676 (1965); Fleming James, Jr., “Right to a Jury Trial in Civil Actions,” 72 Yale L. J. 655 (1963).

[4] In re Japanese Elec. Prods. Antitrust Litig., 631 F.2d 1069, 1079 (3d Cir. 1980). See In re Boise Cascade Sec. Litig., 420 F. Supp. 99, 103 (W.D. Wash. 1976) (“In sum, it appears to this Court that the scope of the problems presented by this case is immense. The factual issues, the complexity of the evidence that will be required to explore those issues, and the time required to do so leads to the conclusion that a jury would not be a rational and capable fact finder.”). See also Ross v. Bernhard, 396 U.S. 532, 538 & n.10, 90 S. Ct. 733 (1970) (discussing the “legal” versus equitable nature of an action that might give rise to a right to trial by jury). Of course, the statistical and scientific complexity of claims was absent from cases tried in common law courts in 1791, at the time of the adoption of the seventh amendment.

[5] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans of Silica, Some Silicates, Coal Dust and para-Aramid Fibrils, vol. 68 (1997).

[6] Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer: The Problem of Conflicting Evidence,” 8 Indoor Built Env’t 121, 121 (1999).

[7] Patrick A. Hessel, John F. Gamble, J. Bernard L. Gee, Graham Gibbs, Francis H.Y. Green, W. Keith C. Morgan, and Brooke T. Mossman, “Silica, Silicosis, and Lung Cancer: A Response to a Recent Working Group Report,” 42 J. Occup & Envt’l Med. 704, 704 (2000).

[8] Steven Nissen & K. Wolski, “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007); Erratum, 357 New Engl. J. Med. 100 (2007).

[9] In re Avandia Mktg., Sales Practices & Prods. Liab. Litig., 2011 WL 13576 (E.D. Pa. Jan. 4, 2011).

[10] Philip D. Home, Stuart J. Pocock, et al., “Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes (RECORD),” 373 Lancet 2125 (2009). The hazard ratio for cardiovascular death was 0.84 (95% C.I., 0.59–1.18), and for myocardial infarction, 1.14 (95% C.I., 0.80–1.63).

[11] Consensus Rule at 424 (emphasis added) (citing Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[12] Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

[13] Sutera v. Perrier Group of America, Inc., 986 F. Supp. 655, 664-65 (D. Mass. 1997).

[14] Burst v. Shell Oil Co., 2015 WL 3755953, at *9 (E.D. La. June 16, 2015), aff’d, 650 F. App’x 170 (5th Cir. 2016), cert. denied, 137 S. Ct. 312 (2016); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1156 (E.D. Wash. 2009).

[15] In re Mirena IUS Levonorgestrel-Related Prod. Liab. Litig. (No. II), 341 F. Supp. 3d 213 (S.D.N.Y. 2018), aff’d, 982 F.3d 113 (2d Cir. 2020); In re Lipitor (Atorvastatin Calcium) Mktg., Sales Pracs. & Prods. Liab. Litig., 227 F. Supp. 3d 452 (D.S.C. 2017), aff’d, 892 F.3d 624 (4th Cir. 2018); In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., No. 12-MD-2342, 2015 WL 7776911, at *1 (E.D. Pa. Dec. 2, 2015), aff’d, 858 F.3d 787 (3d Cir. 2017); In re Incretin-Based Therapies Prods. Liab. Litig., 524 F. Supp. 3d. 1007 (S.D. Cal. 2021); In re Viagra (Sildenafil Citrate) & Cialis (Tadalafil) Prod. Liab. Litig., 424 F. Supp. 3d 781, 798–99 (N.D. Cal. 2020).

Cheng’s Proposed Consensus Rule for Expert Witnesses

September 15th, 2022

Edward K. Cheng is the Hess Professor of Law in absentia from Vanderbilt Law School, while serving this fall as a visiting professor at Harvard. Professor Cheng is one of the authors of the multi-volume treatise, Modern Scientific Evidence, and the author of many articles on scientific and statistical evidence. Cheng’s most recent article, “The Consensus Rule: A New Approach to Scientific Evidence,”[1] while thought provoking, follows in the long-standing tradition of law professors advocating evidence law reforms based upon theoretical considerations, devoid of practical or real-world support.

Cheng’s argument for a radical restructuring of Rule 702 is based upon his judgment that jurors and judges are epistemically incompetent to evaluate expert witness opinion testimony. The current legal approach has trial judges acting as gatekeepers of expert witness testimony, and jurors acting as judges of factual scientific claims. Cheng would abolish these roles as beyond their ken.[2] Lay persons can, however, in Cheng’s view, determine which party’s position is supported by the relevant expert community, which he presumes (without evidence) possesses the needed epistemic competence. Accordingly, Cheng would rewrite the legal system’s approach to important legal disputes, such as disputes over causal claims, from:

Whether a given substance causes a given disease

to

Whether the expert community believes that a given substance causes a given disease.

Cheng channels the philosophical understanding of the ancients who realized that one must have expertise to judge whether someone else has used that expertise correctly. And he channels the contemporary understanding that knowledge is a social endeavor, not the unique perspective of an individual in isolation. From these twin premises, Cheng derives a radical and cynical proposal to reform the law of expert witness testimony. In his vision, experts would come to court not to give their own opinions, and certainly not to try to explain how they arrive at their opinions from the available evidence. For him, the current procedure is too much like playing chess with a monkey. The expert function would consist of telling the jury what the expert witness’s community believes.[3] Jurors would not decide the “actual substantive questions,” but simply decide what they believe the relevant expert witness community accepts as a consensus. This radical restructuring is what Cheng calls the “consensus rule.”

In this proposed “consensus rule,” there is no room for gatekeeping. Parties continue to call expert witnesses, but only as conduits for the “consensus” opinions of their fields. Indeed, Cheng’s proposal would radically limit expert witnesses to service as pollsters; their testimony would present only their views of what the consensus is in their fields. This polling information would be the only evidence that the jury hears from expert witnesses, because this is the only evidence that Cheng believes the jury is epistemically competent to assess.[4]

Under Cheng’s Consensus Rule, when there is no consensus in the realm, the expert witness regime defaults to “anything goes,” without gatekeeping.[5] Judges would continue to exercise some control over who is qualified to testify, but only to ensure that proposed expert witnesses are in a position to know what the consensus is in their fields.

Cheng does not explain why, under his proposed “consensus rule,” subject matter experts are needed at all. The parties might call librarians, or sociologists of science, to talk about the relevant evidence of consensus. If a party cannot afford a librarian expert witness, then perhaps lawyers could directly present the results of their PubMed and other internet searches.

Cheng may be right that his “deferential approach” would eliminate having the inexpert passing judgment on the expert. The “consensus rule” would reduce science to polling, conducted informally, often without documentation or recording, by partisan expert witnesses. This proposal hardly better reflects, as he argues, the “true” nature of science. In Cheng’s vision, science in the courtroom is just a communal opinion, without evidence and without inference. To be sure, this alternative universe is tidier and less disputatious, but it is hardly science or knowledge. We are left with opinions about opinions, without data, without internal or external validity, and without good and sufficient facts and data.

Cheng claims that his proposed Consensus Rule is epistemically superior to Rule 702 gatekeeping. For the intellectually curious and able, his proposal is a counsel of despair. Deference to the herd, he tells us, “is not merely optimal—it is the only practical strategy.”[6] In perhaps the most extreme overstatement of his thesis, Cheng tells us that

“deference is arguably not due to any individual at all! Individual experts can be incompetent, biased, error prone, or fickle—their personal judgments are not and have never been the source of reliability. Rather, proper deference is to the community of experts, all of the people who have spent their careers and considerable talents accumulating knowledge in their field.”[7]

Cheng’s hypothesized community of experts, however, is worthy of deference only by virtue of the soundness of its judgments. If a community has not severely tested its opinions, then its existence as a community is irrelevant. Cheng’s deference is the sort of phenomenon that helped create Lysenkoism and other intellectual fads that were beyond challenge with actual data.

There is, I fear, some partial truth to Cheng’s judgment of juries and judges as epistemically incompetent, or challenged, to judge science, but his judgment seems greatly overstated. Finding aberrant jury verdicts would be easy, but Cheng provides no meaningful examples of gatekeeping gone wrong. Professor Cheng may have over-generalized in stating that judges are epistemically incompetent to make substantive expert determinations. He surely cannot be suggesting that judges never have sufficient scientific acumen to determine the relevance and reliability of expert witness opinion. If judges can, in some cases, make a reasonable go at gatekeeping, why then is Cheng advocating a general rule that strips all judges of all gatekeeping responsibility with respect to expert witnesses?

Clearly judges lack the technical resources, time, and background training to delve deeply into the methodological issues with which they may be confronted. This situation could be ameliorated by budgeting science advisors and independent expert witnesses, and by creating specialty courts staffed with judges who have scientific training. Cheng acknowledges this response, but he suggests that it conflicts with “norms about generalist judges.”[8] This retreat to norms is curious in the face of Cheng’s radical proposals, and the prevalence of using specialist judges for adjudicating commercial and patent disputes.

Although Cheng is correct that assessing the validity and reliability of scientific inferences and conclusions often cannot be reduced to a cookbook or checklist approach, not all expertise is as opaque as Cheng suggests. In his view, lawyers are deluded into thinking that they can understand the relevant science, with law professors being even worse offenders.[9] Cross-examining a technical expert witness can be difficult and challenging, but lawyers on both sides of the aisle occasionally demolish the most skilled and knowledgeable expert witnesses, on substantive grounds. And these demolitions happen to expert witnesses who typically, and self-servingly, claim that robust consensuses agree with their opinions.

While scolding us that we must get “comfortable with relying on the expertise and authority of others,” Cheng reassures us that deferring to authority is “not laziness or an abdication of our intellectual responsibility.”[10] According to Cheng, the only reason to defer to the opinions of experts is that they are telling us what their community would say.[11] Good reasons, sound evidence, and valid inference need not worry us in Cheng’s world.

Finding Consensus

Cheng tells us that his Consensus Rule would look something like:

“Rule 702A. If the relevant scientific community believes a fact involving specialized knowledge, then that fact is established accordingly.”

Imagine the endless litigation over what the “relevant” community is. For a health effect claim about a drug and heart attacks, is it the community of cardiologists or epidemiologists? Do we accept the pronouncements of the American Heart Association or those of the American College of Cardiology? If there is a clear consensus based upon a clinical trial, which appears to be based upon suspect data, is discovery of underlying data beyond the reach of litigants because the correctness of the allegedly dispositive study is simply not in issue? Would courts have to take judicial notice of the clear consensus and shut down any attempt to get to the truth of the matter?

Cheng acknowledges that cases will involve issues that are controversial or undeveloped, without expert community consensus. Many litigations start after publication of a single study or meta-analysis, which is hardly the basis for any consensus. Cheng appears content, in this expansive area, to revert to anything goes: if the expert community has not coalesced around a unified view, or if the community is divided, then the courts supposedly cannot do better than flipping a coin! Cheng’s proposal thus has a loophole the size of the Sun.

Cheng tells us, unhelpfully, that “[d]etermining consensus is difficult in some cases, and less so in others.”[12] Determining consensus may not be straightforward, but no matter. Consensus Rule questions are not epistemically challenging and thus “far more manageable,” because they require no special expertise. (Again, why even call a subject matter expert witness, as opposed to a science journalist or librarian?) Cheng further advises that consensus is “a bit like the reasonable person standard in negligence,” but this simply conflates normative judgments with scientific judgments.[13]

Cheng’s Consensus Rule would allow the use of a systematic review or a meta-analysis, not for evidence of the correctness of its conclusions, but only as evidence of a consensus.[14] The thought experiment of how this suggestion plays out in the real world may cause some agita. The litigation over Avandia began within days of the publication of a meta-analysis in the New England Journal of Medicine.[15] So, some evidence of consensus, right? But then the letters to the editor within a few weeks of publication showed that the meta-analysis was fatally flawed. Inadmissible! Under the Consensus Rule, the correctness or methodological appropriateness of the meta-analysis is irrelevant. A few months later, another meta-analysis is published, which fails to find the risk that the original meta-analysis claimed. Is the trial now about which meta-analysis represents the community’s consensus, or are we thrown into the game of anything goes, where expert witnesses just say things, without judicial supervision? A few years go by, and now there is a large clinical trial that supersedes all the meta-analyses of small trials.[16] Is a single large clinical trial now admissible as evidence of a new consensus, or are only systematic reviews and meta-analyses relevant evidence?
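The statistical machinery behind these dueling meta-analyses is easy to sketch. The following is a minimal illustration, with wholly hypothetical numbers, of fixed-effect (inverse-variance) pooling of ratio estimates on the log scale, showing how a single large, precise trial can swamp several small ones and thereby shift the putative “consensus” estimate:

```python
import math

def pooled_ratio(trials):
    """Fixed-effect (inverse-variance) pooling of ratio estimates on the
    log scale, the basic machinery of the meta-analyses described above.
    Each trial is a (ratio, standard error of the log ratio) pair."""
    weights = [1.0 / se ** 2 for _, se in trials]
    log_pooled = sum(w * math.log(r) for (r, _), w in zip(trials, weights))
    return math.exp(log_pooled / sum(weights))

# Hypothetical numbers: three small, imprecise trials suggest an
# elevated ratio near 1.5 ...
small_trials = [(1.4, 0.45), (1.5, 0.50), (1.6, 0.55)]
# ... but adding one large, precise trial pulls the pooled estimate
# back toward the null value of 1.0.
with_large_trial = small_trials + [(1.05, 0.10)]
```

The design point is that pooling weights each study by the inverse of its variance, so precision, not publication order, determines which studies dominate; a later large trial can quietly overturn whatever “consensus” the earlier pooled estimate appeared to represent.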

Cheng’s Consensus Rule will be useless in most determinations of specific causation. It will be a very rare case indeed in which a scientific organization issues a consensus statement about plaintiff John Doe. Very few tort cases involve putative causal agents that are thought to cause every instance of some disease in every person exposed to the agent. Even when a scientific community has addressed general causation, it will rarely have resolved all the uncertainty about the causal efficacy of all levels of exposure or the appropriate window of latency. Cheng’s proposal thus guarantees the removal of specific causation from the control of Rule 702 gatekeeping.

The potential for misrepresenting consensus is even greater than the potential for misrepresenting actual study results. At least the data are the data, but what will jurors do when they are regaled by testimony about the informal consensus reached in the hotel lobby of the latest scientific conference? Regulatory pronouncements that are based upon precautionary principles will be misrepresented as scientific consensus. Findings by the International Agency for Research on Cancer that a substance is a Group 2A “probable human carcinogen” will be hawked as a consensus, even though the classification specifically disclaims any quantitative meaning for “probable,” and it directly equates to “insufficient” evidence of carcinogenicity in humans.

In some cases, as Cheng notes, organizations such as the National Research Council, or the National Academies of Sciences, Engineering, and Medicine (NASEM), will have weighed in on a controversy that has found its way into court.[17] Any help from such organizations will likely be illusory. Consider the 2006 publication of a comprehensive review of the available studies on non-pulmonary cancers and asbestos exposure by NASEM. The writing group presented its assessment of colorectal cancer as not causally associated with occupational asbestos exposure.[18] By 2007, the following year, expert witnesses for plaintiffs argued that the NASEM publication was no longer a consensus because one or two (truly inconsequential) studies had been published after the report and thus not considered. Under Cheng’s proposal, this dodge would appear to be enough to oust the consensus rule, and default to the “anything goes” rule. The scientific record can change rapidly, and many true consensus statements quickly find their way into the dustbin of scientific history.

Cheng greatly underestimates the difficulty in ascertaining “consensus.” Sometimes, to be sure, professional societies issue consensus statements, but they are often tentative and inconclusive. In many areas of science, there will be overlapping realms of expertise, with different disciplines issuing inconsistent “consensus” statements. Even within a single expert community, there may be two schools of thought about a particular issue.

There are instances, perhaps more than a few, when a consensus is epistemically flawed. If, as is the case in many health effect claims, plaintiffs rely upon the so-called linear no-threshold dose-response (LNT) theory of carcinogenesis, plaintiffs will point to regulatory pronouncements that embrace LNT as “the consensus.” When scientists are being honest, they generally recognize LNT as part of a precautionary principle approach, which may make sense as the foundation of “risk assessment.” The widespread assumption of LNT in regulatory agencies, and among scientists who work in such agencies, is understandable, but LNT remains an assumption. Nonetheless, we already see LNT hawked as a consensus, which under Cheng’s Consensus Rule would become the key dispositive issue, while quashing the mountain of evidence that there are, in fact, defense mechanisms against carcinogenesis that result in practical thresholds.

Beyond regulatory pronouncements, some areas of scientific endeavor have themselves become politicized and extremist. Tobacco smoking surely causes lung cancer, but the studies of environmental tobacco smoking and lung cancer have been oversold. In areas of non-scientific disputes, such as the history of alleged corporate malfeasance, juries will be treated to “the consensus” of Marxist labor historians, without having to consider the actual underlying historical documents. Cheng tells us that his Consensus Rule is a “realistic way of treating nonscientific expertise,”[19] which would seem to cover historian expert witnesses. Yet here, lawyers and lay fact finders are fully capable of exploring the glib historical conclusions of historian witnesses with cross-examination on the underlying documentary facts of the proffered opinions.

The Alleged Warrant for the Consensus Rule

If Professor Cheng is correct that the current judicial system, with decisions by juries and judges, is epistemically incompetent, does his Consensus Rule necessarily follow?  Not really. If we are going to engage in radical reforms, then the institutionalization of blue-ribbon juries would make much greater sense. As for Cheng’s claim that knowledge is “social,” the law of evidence already permits the use of true consensus statements as learned treatises, both to impeach expert witnesses who disagree, and (in federal court) to urge the truth of the learned treatise.

The gatekeeping process of Rule 702, which Professor Cheng would throw overboard, has important advantages in that judges ideally will articulate reasons for finding expert witness opinion testimony admissible or not. These reasons can be evaluated, discussed, and debated, with judges, lawyers, and the public involved. This gatekeeping process is rational and socially open.

Some Other Missteps in Cheng’s Argument

Experts on Both Sides are Too Extreme

Cheng’s proposal is based, in part, upon his assessment that the adversarial system causes the parties to choose expert witnesses “at the extremes.” Here again, Cheng provides no empirical evidence for his assessment. There is a mechanical assumption, often made by people who do not bother to learn the details of a scientific dispute, that the truth must somehow lie in the “middle.” For instance, in MDL 926, the silicone gel breast implant litigation, presiding Judge Sam Pointer complained about the parties’ expert witnesses being too extreme. Judge Pointer believed that MDL judges should not entertain Rule 702 challenges, which were in his view properly heard by the transferor courts. As a result, Judge Robert Jones, and then Judge Jack Weinstein, conducted thorough Rule 702 hearings and found that the plaintiffs’ expert witnesses’ opinions were unreliable and insufficiently supported by the available evidence.[20] Judge Weinstein started the process of selecting court-appointed expert witnesses for the remaining New York cases, which goaded Judge Pointer into taking the process back to the MDL court level. After appointing four highly qualified expert witnesses, Judge Pointer continued to believe that the parties’ expert witnesses were “extremists,” and that the courts’ own experts would come down somewhere between them. When the court-appointed experts filed their reports, Judge Pointer was shocked that all four of his experts sided with the defense in rejecting the tendentious claims of plaintiffs’ expert witnesses.

Statistical Significance

Along the way, in advocating his radical proposal, Professor Cheng made some other curious announcements. For instance, he tells us that “[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[21] Cheng’s purpose here is unclear, but the source he cited does not remotely support his statement, and certainly not his gross overgeneralization about “statisticians.” If this is the way he envisions experts will report “consensus,” then his program seems broken at its inception. The American Statistical Association’s (ASA) p-value “consensus” statement articulated six principles, the third of which noted that

“[s]cientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.”

This is a few light years away from statisticians’ concluding that statistical significance thresholds are more distortive than helpful. The ASA p-value statement further explains that

“[t]he widespread use of ‘statistical significance’ (generally interpreted as ‘p < 0.05’) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.”[22]

In the science of health effects, statistical significance remains extremely important, but it has never been a license for making causal claims. As Sir Austin Bradford Hill noted in his famous after-dinner speech, ruling out chance (and bias) as an explanation for an association was merely a predicate for evaluating the association for causality.[23]
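The relationship between a reported confidence interval and the 0.05 threshold can be made concrete. The sketch below is a standard back-calculation assuming approximate normality of the log hazard ratio (an illustration of the statistics under discussion, not anything drawn from Cheng’s article); for its numbers it borrows the RECORD myocardial-infarction figures quoted earlier in these pages (HR 1.14; 95% C.I., 0.80–1.63):

```python
import math

def p_value_from_ci(estimate, lower, upper):
    """Two-sided p-value implied by a ratio estimate and its 95% CI,
    back-calculated on the log scale under approximate normality."""
    # A 95% CI spans 2 * 1.96 standard errors on the log scale.
    se = (math.log(upper) - math.log(lower)) / (2 * 1.959964)
    z = math.log(estimate) / se
    # Standard normal tail probability, doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# RECORD myocardial-infarction hazard ratio: 1.14 (95% CI 0.80-1.63).
# The interval includes the null ratio of 1.0, so the implied p-value
# exceeds 0.05 (it comes out at roughly 0.47).
p = p_value_from_ci(1.14, 0.80, 1.63)
```

The point is the equivalence, not the particular number: a 95% interval that includes the null ratio of 1.0 corresponds to p > 0.05, and vice versa, which is why ruling out chance remains a predicate step whichever framing a study reports.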

Over-endorsing Animal Studies

Under Professor Cheng’s Consensus Rule, the appropriate consensus might well be one generated solely by animal studies. Cheng tells us that “perhaps” scientists do not consider toxicology when the pertinent epidemiology is “clear.” When the epidemiology, however, is unclear, scientists consider toxicology.[24] Well, of course, but the key question is whether a consensus about causation in humans will be based upon non-human animal studies. Cheng seems to answer this question in the affirmative by criticizing courts that have required epidemiologic studies “even though the entire field of toxicology uses tissue and animal studies to make inferences, often in combination with and especially in the absence of epidemiology.”[25] The vitality of the field of toxicology is hardly undermined by its not generally providing sufficient grounds for judgments of human causation.

Relative Risk Greater Than Two

In the midst of his argument for the Consensus Rule, Cheng points critically to what he calls “questionable proxies” for scientific certainty. One such proxy is the judicial requirement of risk ratios in excess of two. His short discussion appears to be focused upon the inference of specific causation in a given case, but it leads to a non sequitur:

“Some courts have required a relative risk of 2.0 in toxic tort cases, requiring a doubling of the population risk before considering causation. But the preponderance standard does not require that the substance more likely than not caused any case of the disease in the population, it requires that the substance more likely than not caused the plaintiff’s case.”[26]

Of course, it is exactly because we are interested in the probability of causation of the plaintiff’s case that we advert to the risk ratio, which gives us some sense of whether the exposure “more likely than not” caused plaintiff’s case. Unless plaintiff can show he is somehow unique, he is “any case.” In many instances, plaintiff cannot show how he differs from the participants of the study that gave rise to a risk ratio less than two.
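The arithmetic behind the doubling-of-risk heuristic is brief. This sketch assumes, as the heuristic itself does, that the plaintiff is representative of the studied population — the “any case” point just made:

```python
def probability_of_causation(relative_risk):
    """Attributable fraction among the exposed, (RR - 1) / RR.
    This fraction crosses 50% exactly when the relative risk
    exceeds 2.0, hence the doubling-of-risk heuristic for the
    'more likely than not' preponderance standard."""
    if relative_risk <= 1.0:
        return 0.0  # no excess risk attributable to the exposure
    return (relative_risk - 1.0) / relative_risk

# A relative risk of 2.0 sits exactly at the 50% preponderance
# threshold; a relative risk of 1.5 falls short of it.
```

With a relative risk of 1.5, for example, only one in three exposed cases is attributable to the exposure, which is why a risk ratio under two leaves a representative plaintiff unable to say his case was more likely than not caused by it.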


[1] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[2] Consensus Rule at 410 (“The judge and the jury, lacking in expertise, are not competent to handle the questions that the Daubert framework assigns to them.”)

[3] Consensus Rule at 467 (“Under the Consensus Rule, experts no longer offer their personal opinions on causation or teach the jury how to assess the underlying studies. Instead, their testimony focuses on what the expert community as a whole believes about causation.”)

[4] Consensus Rule at 467.

[5] Consensus Rule at 437.

[6] Consensus Rule at 434.

[7] Consensus Rule at 434.

[8] Consensus Rule at 422.

[9] Consensus Rule at 429.

[10] Consensus Rule at 432-33.

[11] Consensus Rule at 434.

[12] Consensus Rule at 456.

[13] Consensus Rule at 457.

[14] Consensus Rule at 459.

[15] Steven E. Nissen, M.D., and Kathy Wolski, M.P.H., “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007).

[16] P.D. Home, et al., “Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes (RECORD),” 373 Lancet 2125 (2009).

[17] Consensus Rule at 458.

[18] Jonathan M. Samet, et al., Asbestos: Selected Health Effects (2006).

[19] Consensus Rule at 445.

[20] Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996) (excluding plaintiffs’ expert witnesses’ causation opinions); In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996) (granting partial summary judgment on claims of systemic disease causation).

[21] Consensus Rule at 424 (citing Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[22] Id.

[23] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). See Schachtman, “Ruling Out Bias & Confounding is Necessary to Evaluate Expert Witness Causation Opinions” (Oct. 29, 2018); “Woodside & Davis on the Bradford Hill Considerations” (Aug. 23, 2013); Frank C. Woodside, III & Allison G. Davis, “The Bradford Hill Criteria: The Forgotten Predicate,” 35 Thomas Jefferson L. Rev. 103 (2013).

[24] Consensus Rule at 444.

[25] Consensus Rule at 424 & n. 74 (citing to one of multiple court advisory expert witnesses in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1449 (D. Or. 1996), who suggested that toxicology would be appropriate to consider when the epidemiology was not clear). Citing to one outlier advisor is a rather strange move for Cheng considering that the “consensus” was readily discernible to the trial judge in Hall, and to Judge Jack Weinstein, a few months later, in In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996).

[26] Consensus Rule at 424 & n. 73 (citing Lucinda M. Finley, “Guarding the Gate to the Courthouse: How Trial Judges Are Using Their Evidentiary Screening Role to Remake Tort Causation Rules,” 49 DePaul L. Rev. 335, 348–49 (2000)). See Schachtman, “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

Amicus Curious – Gelbach’s Foray into Lipitor Litigation

August 25th, 2022

Professor Schauer’s discussion of statistical significance, covered in my last post,[1] is curious for its disclaimer that “there is no claim here that measures of statistical significance map easily onto measures of the burden of proof.” Having made the disclaimer, Schauer proceeds to fall into the transposition fallacy, which contradicts his disclaimer and which, generally speaking, is not a good thing for a law professor eager to advance the understanding of “The Proof” to do.

Perhaps more curious than Schauer’s error is his citation support for his disclaimer.[2] The cited paper by Jonah B. Gelbach is one of several of Gelbach’s papers advancing the claim that the p-value does indeed map onto posterior probability and the burden of proof. Gelbach’s claim has also been the centerpiece of his role as an advocate in support of plaintiffs in the Lipitor (atorvastatin) multi-district litigation (MDL) over claims that ingestion of atorvastatin causes diabetes mellitus.

Gelbach’s intervention as plaintiffs’ amicus is peculiar on many fronts. At the time of the Lipitor litigation, Sonal Singh was an epidemiologist and Assistant Professor of Medicine at the Johns Hopkins University. The MDL trial court initially held that Singh’s proffered testimony was inadmissible because of his failure to consider daily dose.[3] In a second attempt, Singh offered an opinion for a 10 mg daily dose of atorvastatin, based largely upon the results of a clinical trial known as ASCOT-LLA.[4]

The ASCOT-LLA trial randomized 19,342 participants with hypertension and at least three other cardiovascular risk factors to two different anti-hypertensive medications. A subgroup with total cholesterol levels less than or equal to 6.5 mmol./l. were randomized to either daily 10 mg. atorvastatin or placebo.  The investigators planned to follow up for five years, but they stopped after 3.3 years because of clear benefit on the primary composite end point of non-fatal myocardial infarction and fatal coronary heart disease. At the time of stopping, there were 100 events of the primary pre-specified outcome in the atorvastatin group, compared with 154 events in the placebo group (hazard ratio 0.64 [95% CI 0.50 – 0.83], p = 0.0005).

The atorvastatin component of ASCOT-LLA had, in addition to its primary pre-specified outcome, seven secondary end points, and seven tertiary end points.  The emergence of diabetes mellitus in this trial population, which clearly was at high risk of developing diabetes, was one of the tertiary end points. Primary, secondary, and tertiary end points were reported in ASCOT-LLA without adjustment for the obvious multiple comparisons. In the treatment group, 3.0% developed diabetes over the course of the trial, whereas 2.6% developed diabetes in the placebo group. The unadjusted hazard ratio was 1.15 (0.91 – 1.44), p = 0.2493.[5] Given the 15 trial end points, an adjusted p-value for this particular hazard ratio, for diabetes, might well exceed 0.5, and even approach 1.0.
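The arithmetic of a simple Bonferroni correction, the most conservative standard adjustment, illustrates the point. Treating the 15 end points as independent comparisons is my simplifying assumption, and it overstates the penalty where outcomes overlap, but it shows how far the unadjusted p-value can travel:

```python
def bonferroni(p, m):
    """Bonferroni-adjusted p-value: unadjusted p times the number of comparisons, capped at 1."""
    return min(1.0, p * m)

p_diabetes = 0.2493   # unadjusted two-sided p for the diabetes hazard ratio of 1.15
endpoints = 15        # one primary + seven secondary + seven tertiary end points
print(bonferroni(p_diabetes, endpoints))  # 1.0 -- the adjusted p hits the ceiling
```

Even steep discounts for correlated end points would leave the adjusted p-value far above any conventional threshold.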

On this record, Dr. Singh honestly acknowledged that statistical significance was important, and that the diabetes finding in ASCOT-LLA might have been the result of low statistical power or of no association at all. Based upon the trial data alone, he testified that “one can neither confirm nor deny that atorvastatin 10 mg is associated with significantly increased risk of type 2 diabetes.”[6] The trial court excluded Dr. Singh’s 10mg/day causal opinion, but admitted his 80mg/day opinion. On appeal, the Fourth Circuit affirmed the MDL district court’s rulings.[7]

Jonah Gelbach is a professor of law at the University of California at Berkeley. He attended Yale Law School, and received his doctorate in economics from MIT.

Professor Gelbach entered the Lipitor fray to present a single issue: whether statistical significance at conventionally demanding levels such as 5 percent is an appropriate basis for excluding expert testimony based on statistical evidence from a single study that did not achieve statistical significance.

Professor Gelbach is no stranger to antic proposals.[8] As amicus curious in the Lipitor litigation, Gelbach asserted that plaintiffs’ expert witness, Dr. Singh, was wrong in his testimony about not being able to confirm the ASCOT-LLA association because he, Gelbach, could confirm the association.[9] Ultimately, the Fourth Circuit did not discuss Gelbach’s contentions, which is not surprising considering that the asserted arguments and alleged factual considerations were not only dehors the record, but in contradiction of the record.

Gelbach’s curious claim is that any time a risk ratio, for an exposure and an outcome of interest, is greater than 1.0, with a p-value < 0.5,[10] the evidence should be not only admissible, but sufficient to support a conclusion of causation. Gelbach states his claim in the context of discussing a single randomized controlled trial (ASCOT-LLA), but his broad pronouncements are carelessly framed such that others may take them to apply to a single observational study, with its greater threats to internal validity.

Contra Kumho Tire

To get to his conclusion, Gelbach attempts to remove the constraints of traditional standards of significance probability. Kumho Tire teaches that expert witnesses must “employ[] in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”[11] For Gelbach, this “eminently reasonable admonition” does not impose any constraints on statistical inference in the courtroom. Statistical significance at traditional levels (p < 0.05) is for elitist scholarly work, not for the “practical” rent-seeking work of the tort bar. According to Gelbach, the inflation of the significance level ten-fold to p < 0.5 is merely a matter of “weight” and not admissibility of any challenged opinion testimony.

Likelihood Ratios and Posterior Probabilities

Gelbach maintains that any evidence with a likelihood ratio greater than one (LR > 1) is relevant, and should be admissible under Federal Rule of Evidence 401.[12] This argument ignores the other operative Federal Rules of Evidence, namely 702 and 703, which impose additional criteria of admissibility for expert witness opinion testimony.

With respect to variance and random error, Gelbach tells us that any evidence that generates an LR > 1 should be admitted when “the statistical evidence is statistically significant below the 50 percent level, which will be true when the p-value is less than 0.5.”[13]

At times, Gelbach seems to be discussing the admissibility of the ASCOT-LLA study itself, and not the proffered opinion testimony of Dr. Singh. The study itself would not be admissible, although it is clearly the sort of hearsay an expert witness in the field may consider. If Dr. Singh were to have reframed and recalculated the statistical comparisons, then the Rule 703 requirement of “reasonable reliance” by scientists in the field of interest may not have been satisfied.

Gelbach also generates a posterior probability (0.77), which is based upon his calculations from data in the ASCOT-LLA trial, and not the posterior probability of Dr. Singh’s opinion. The posterior probability, as calculated, is problematic on many fronts.

Gelbach does not present his calculations – for the sake of brevity he says – but he tells us that the ASCOT-LLA data yield a likelihood ratio of roughly 1.9, and a p-value of 0.126.[14] What the clinical trialists reported was a hazard ratio of 1.15, which is a weak association on most researchers’ scales, with a two-sided p-value of 0.25, which is five times higher than the usual 5 percent. Gelbach does not explain how or why his calculated p-value for the likelihood ratio is roughly half the unadjusted, two-sided p-value for the tertiary outcome from ASCOT-LLA.
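One arithmetic candidate, offered strictly as my speculation and not as anything Gelbach states, is a one-sided test: under a symmetric normal approximation, the one-sided p-value is exactly half the two-sided value, which would take the reported 0.2493 into the neighborhood of his 0.126:

```python
from statistics import NormalDist

nd = NormalDist()
p_two_sided = 0.2493                     # as reported for the diabetes hazard ratio
z = nd.inv_cdf(1 - p_two_sided / 2)      # implied z-score under a normal approximation
p_one_sided = 1 - nd.cdf(z)              # upper-tail (one-sided) p-value

print(round(z, 2))            # the implied z-score, about 1.15
print(round(p_one_sided, 3))  # 0.125 -- close to, though not exactly, Gelbach's 0.126
```

Whether this is in fact how the figure was derived, the brief does not say.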

As noted, the reported diabetes hazard ratio of 1.15 was a tertiary outcome for the ASCOT trial, one of 15 calculated by the trialists, with p-values unadjusted for multiple comparisons.  The failure to adjust is perhaps excusable in that some (but certainly not all) of the outcome variables are overlapping or correlated. A sophisticated reader would not be misled; only when someone like Gelbach attempts to manufacture an inflated posterior probability without accounting for the gross underestimate in variance is there an insult to statistical science. Gelbach’s recalculated p-value for his LR, if adjusted for the multiplicity of comparisons in this trial, would likely exceed 0.5, rendering all his arguments nugatory.

Using the statistics as presented by the published ASCOT-LLA trial to generate a posterior probability also ignores the potential biases (systematic errors) in data collection, the unadjusted hazard ratios, the potential for departures from random sampling, errors in administering the participant recruiting and inclusion process, and other errors in measurements, data collection, data cleaning, and reporting.

Gelbach correctly notes that there is nothing methodologically inappropriate in advocating likelihood ratios, but he is less than forthcoming in explaining that such ratios translate into a posterior probability only if he posits a prior probability of 0.5.[15] His pretense to having simply stated “mathematical facts” unravels when we consider his extreme, unrealistic, and unscientific assumptions.
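The mechanics are straightforward Bayes’ theorem in odds form. In the sketch below, the LR of roughly 1.9 is Gelbach’s own figure; the skeptical prior of 0.1 is purely illustrative, my number and not anyone’s litigation position:

```python
def posterior_from_lr(lr, prior):
    """Bayes' theorem in odds form: posterior odds = likelihood ratio x prior odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = lr * prior_odds
    return posterior_odds / (1 + posterior_odds)

# With a flat 50/50 prior, the posterior odds simply equal the LR:
print(round(posterior_from_lr(1.9, 0.5), 2))   # 0.66
# Against a more skeptical prior, the same LR yields a much weaker posterior:
print(round(posterior_from_lr(1.9, 0.1), 2))   # 0.17
```

The entire persuasive force of the exercise thus lives in the choice of prior, which is exactly the assumption Gelbach leaves in the shadows.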

The Problematic Prior

Gelbach glibly assumes that the starting point, the prior probability, for his analysis of Dr. Singh’s opinion is 50%. This is an old and common mistake,[16] long since debunked.[17] Gelbach’s assumption is part of an old controversy, which surfaced in early cases concerning disputed paternity. The assumption, however, is wrong legally and philosophically.

The law simply does not hand out 0.5 prior probability to both parties at the beginning of a trial. As Professor Jaffee noted almost 35 years ago:

“In the world of Anglo-American jurisprudence, every defendant, civil and criminal, is presumed not liable. So, every claim (civil or criminal) starts at ground zero (with no legal probability) and depends entirely upon proofs actually adduced.”[18]

Gelbach assumes that assigning “equal prior probability” to two adverse parties is fair, because the fact-finder would not start hearing evidence with any notion of which party’s contentions are correct. The 0.5/0.5 starting point, however, is neither fair nor is it the law.[19] The even odds prior is also not good science.

The defense is entitled to a presumption that it is not liable, and the plaintiff must start at zero.  Bayesians understand that this is the death knell of their beautiful model.  If the prior probability is zero, then Bayes’ Theorem tells us mathematically that no evidence, no matter how large a likelihood ratio, can move the prior probability of zero towards one. Bayes’ theorem may be a correct statement about inverse probabilities, but still be an inadequate or inaccurate model for how factfinders do, or should, reason in determining the ultimate facts of a case.
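The point is a mathematical triviality, but worth making concrete:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds are the prior odds scaled by the LR."""
    return prior_odds * likelihood_ratio

# A prior probability of zero means prior odds of zero, and no likelihood
# ratio, however enormous, can move it:
print(posterior_odds(0.0, 1e9))   # 0.0
```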

We can see how unrealistic and unfair Gelbach’s implied prior probability is if we visualize the proof process as a football field. To win, plaintiffs do not need to score a touchdown; they need only cross the mid-field 50-yard line. Rather than making plaintiffs start at their own goal line, however, Gelbach would spot them the entire first half of the field and place them right at mid-field, where even the slightest evidentiary breeze would push them over the line to victory. Somehow, in this model, plaintiffs no longer have to present any evidence to traverse the first 50 yards of their burden.

The even odds starting point is completely unrealistic in terms of the events upon which the parties are wagering. The ASCOT-LLA study might have shown a protective association between atorvastatin and diabetes, or it might have shown no association at all, or it might have shown a larger hazard ratio than measured in this particular sample. Recall that the confidence interval for the diabetes hazard ratio ran from 0.91 to 1.44. In other words, parameters from 0.91 (protective association), to 1.0 (no association), to 1.44 (harmful association) were all reasonably compatible with the observed statistic, based upon this one study’s data. The potential outcomes are not binary, which makes the even odds starting point inappropriate.[20]


[1] “Schauer’s Long Footnote on Statistical Significance” (Aug. 21, 2022).

[2] Frederick Schauer, The Proof: Uses of Evidence in Law, Politics, and Everything Else 54-55 (2022) (citing Michelle M. Burtis, Jonah B. Gelbach, and Bruce H. Kobayashi, “Error Costs, Legal Standards of Proof, and Statistical Significance,” 25 Supreme Court Economic Rev. 1 (2017)).

[3] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., MDL No. 2:14–mn–02502–RMG, 2015 WL 6941132, at *1  (D.S.C. Oct. 22, 2015).

[4] Peter S. Sever, et al., “Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial,” 361 Lancet 1149 (2003). [cited here as ASCOT-LLA]

[5] ASCOT-LLA at 1153 & Table 3.

[6] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 174 F.Supp. 3d 911, 921 (D.S.C. 2016) (quoting Dr. Singh’s testimony).

[7] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 892 F.3d 624, 638-39 (4th Cir. 2018) (affirming MDL trial court’s exclusion in part of Dr. Singh).

[8] See “Expert Witness Mining – Antic Proposals for Reform” (Nov. 4, 2014).

[9] Brief for Amicus Curiae Jonah B. Gelbach in Support of Plaintiffs-Appellants, In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 2017 WL 1628475 (April 28, 2017). [Cited as Gelbach]

[10] Gelbach at *2.

[11] Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

[12] Gelbach at *5.

[13] Gelbach at *2, *6.

[14] Gelbach at *15.

[15] Gelbach at *19-20.

[16] See Richard A. Posner, “An Economic Approach to the Law of Evidence,” 51 Stanford L. Rev. 1477, 1514 (1999) (asserting that the “unbiased fact-finder” should start hearing a case with even odds; “[I]deally we want the trier of fact to work from prior odds of 1 to 1 that the plaintiff or prosecutor has a meritorious case. A substantial departure from this position, in either direction, marks the trier of fact as biased.”).

[17] See, e.g., Richard D. Friedman, “A Presumption of Innocence, Not of Even Odds,” 52 Stan. L. Rev. 874 (2000). [Friedman]

[18] Leonard R. Jaffee, “Prior Probability – A Black Hole in the Mathematician’s View of the Sufficiency and Weight of Evidence,” 9 Cardozo L. Rev. 967, 986 (1988).

[19] Id. at p.994 & n.35.

[20] Friedman at 877.

Schauer’s Long Footnote on Statistical Significance

August 21st, 2022

One of the reasons that, in 2016, the American Statistical Association (ASA) issued, for the first time in its history, a consensus statement on p-values, was the persistent and sometimes deliberate misstatements and misrepresentations about the meaning of the p-value. Indeed, of the six principles articulated by the ASA, several were little more than definitional, designed to clear away misunderstandings.  Notably, “Principle Two” addresses one persistent misunderstanding and states:

“P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself.”[1]

The ASA consensus statement followed on the heels of an important published article, written by seven important authors in the fields of statistics and epidemiology.[2] One statistician,[3] who frequently shows up as an expert witness for multi-district litigation plaintiffs, described the article’s authors as the “A-Team” of statistics. In any event, the seven prominent thought leaders identified common statistical misunderstandings, including the belief that:

“2. The P value for the null hypothesis is the probability that chance alone produced the observed association; for example, if the P value for the null hypothesis is 0.08, there is an 8% probability that chance alone produced the association. No!”[4]

This is all basic statistics.
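A quick simulation makes the distinction concrete. Even when the null hypothesis is true by construction, about five percent of studies will yield p < 0.05, because the p-value describes the data on the assumption of the null, not the probability of the null:

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

# Simulate 10,000 studies in which the null hypothesis is TRUE by construction:
p_values = []
for _ in range(10_000):
    z = nd.inv_cdf(random.random())      # test statistic drawn under the null
    p = 2 * (1 - nd.cdf(abs(z)))         # two-sided p-value
    p_values.append(p)

# The fraction of truly null studies with p < 0.05 comes out near 0.05:
print(sum(p < 0.05 for p in p_values) / len(p_values))
```

No one would say that each of those “significant” null studies carries a 95% probability that chance did not produce its result; chance produced all of them.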

Frederick Schauer is the David and Mary Harrison Distinguished Professor of Law at the University of Virginia. Schauer has contributed prolifically to legal scholarship, and his publications are often well written and thoughtful analyses. Schauer’s recent book, The Proof: Uses of Evidence in Law, Politics, and Everything Else, published by the Harvard University Press, is a contribution to the literature of “legal epistemology,” and to the foundations of evidence that lie beneath many of our everyday and courtroom approaches to resolving disputes.[5] Schauer’s book might be a useful addition to an undergraduate’s reading list for a course in practical epistemology, or for a law school course on evidence. The language of The Proof is clear and lively, but at times it wanders into objectionable and biased political correctness. For example, Schauer channels Naomi Oreskes and her critique of manufacturing industry in his own discussion of “manufactured evidence,”[6] but studiously avoids any number of examples of explicit manufacturing of fraudulent evidence in litigation by the lawsuit industry.[7] Perhaps the most serious omission in this book on evidence is its failure to discuss the relative quality and hierarchy of evidence in science, medicine, and policy. Readers will not find any mention of the methodology of systematic reviews or meta-analyses in Schauer’s work.

At the end of his chapter on burdens of proof, Schauer adds “A Long Footnote on Statistical Significance,” in which he expresses surprise that the subject of statistical significance is controversial. Schauer might well have brushed up on the statistical concepts he wanted to discuss.

Schauer’s treatment of statistical significance is both distinctly unbalanced and misstated. In an endnote,[8] Schauer cites some of the controversialists who have criticized significance tests, but none of the statisticians who have defended their use.[9]

As for conceptual accuracy, after giving a serviceable definition of the p-value, Schauer immediately goes astray:

“And this likelihood is conventionally described in terms of a p-value, where the p-value is the probability that positive results—rejection of the “null hypothesis” that there is no connection between the examined variables—were produced by chance.”[10]

And again, for emphasis, Schauer tells us:

“A p-value of greater than .05 – a greater than 5 percent probability that the same results would have been the result of chance – has been understood to mean that the results are not statistically significant.”[11]

And then once more for emphasis, in the context of an emotionally laden hypothetical about an experimental drug that “cures” a dread, incurable disease, with p = 0.20, Schauer tells us that he suspects most people would want to take the medication:

“recognizing that an 80 percent likelihood that the rejection of ineffectiveness was still good enough, at least if there were no other alternatives.”

Schauer wants to connect his discussion of statistical significance to degrees or varying strengths of evidence, but his discursion into statistical significance largely conflates precision with strength. Evidence can be statistically robust but not be very strong. If we imagine a very large randomized clinical trial that found that a medication lowered systolic blood pressure by 1mm of mercury, p < 0.05, we would not consider that finding to constitute strong evidence for therapeutic benefit. If the observation of lowering blood pressure by 1mm came from an observational study, p < 0.05, the finding might not even qualify as evidence in the views of sophisticated cardiovascular physicians and researchers.
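A back-of-the-envelope calculation, using my own illustrative standard deviation of about 15 mmHg for systolic pressure rather than any figure of Schauer’s, shows how sample size alone can turn a clinically trivial effect into a “statistically significant” one:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_p(mean_diff, sd, n_per_arm):
    """Two-sided p-value for a difference in means, normal approximation."""
    se = sd * sqrt(2 / n_per_arm)        # standard error of the difference
    z = mean_diff / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A 1 mmHg drop is nowhere near significant in a small trial...
print(round(two_sample_p(1, 15, 50), 2))      # 0.74
# ...but overwhelmingly "significant" in a very large one:
print(two_sample_p(1, 15, 50_000) < 0.001)    # True -- yet still only 1 mmHg
```

The p-value shrinks with the square root of the sample size; the clinical importance of 1 mmHg does not budge.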

Earlier in the chapter, Schauer points to instances in which substantial evidence for a conclusion is downplayed because it is not “conclusive,” or “definitive.” He is obviously keen to emphasize that evidence that is not “conclusive” may still be useful in some circumstances. In this context, Schauer yet again misstates the meaning of significance probability, when he tells us that:

“[j]ust as inconclusive or even weak evidence may still be evidence, and may still be useful evidence for some purposes, so too might conclusions – rejections of the null hypothesis – that are more than 5 percent likely to have been produced by chance still be valuable, depending on what follows from those conclusions.”[12]

And while Schauer is right that weak evidence may still be evidence, he seems loath to admit that weak evidence may be pretty crummy support for a conclusion. Take, for instance, a fair coin. We have an expected value on ten flips of five heads and five tails. We flip the coin ten times, but we observe six heads and four tails. Do we now have “evidence” that the expected value and the expected outcome are wrong? Not really. The probability of observing the expected outcome, on the binomial model that most people would endorse for this thought experiment, is 24.6%. The probability of not observing the expected value in ten flips is three times greater. If we look at an epidemiologic study with a sizable number of participants, the “expected value” of 1.0, embodied in the null hypothesis, is an outcome that we would rarely expect to see exactly, even if the null hypothesis is correct. Schauer seems to have missed this basic lesson of probability and statistics.
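The binomial arithmetic is easy to verify:

```python
from math import comb

def fair_coin_pmf(k, n):
    """Probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) / 2 ** n

p_expected = fair_coin_pmf(5, 10)
print(round(p_expected, 3))       # 0.246 -- the "expected" 5-heads outcome itself
print(round(1 - p_expected, 3))   # 0.754 -- some other outcome is three times as likely
```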

Perhaps even more disturbing is that Schauer fails to distinguish the other determinants of study validity and the warrants for inferring a conclusion at any level of certainty. There is a distinct danger that his comments about p-values will be taken to apply to various study designs, descriptive, observational, and experimental. And there is a further danger that incorrect definitions of the p-value and statistical significance probabilities will be used to misrepresent p-values as relating to posterior probabilities. Surely, a distinguished professor of law, at a top law school, in a book published by a prestigious publisher (Belknap Press), can do better. The message for legal practitioners is clear. If you need to define or discuss statistical concepts in a brief, seek out a good textbook on statistics. Do not rely upon other lawyers, even distinguished law professors, or judges, for accurate working definitions.


[1] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129, 131 (2016).

[2] Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 European J. Epidemiol. 337 (2016). [cited as “Seven Sachems”]

[3] Martin T. Wells.

[4] Seven Sachems at 340 (emphasis added).

[5] Frederick Schauer, The Proof: Uses of Evidence in Law, Politics, and Everything Else (2022). [Schauer] One nit: Schauer cites a paper by A. Philip Dawid, “Statistical Risk,” 194 Synthese 3445 (2017). The title of the paper is “On individual risk.”

[6] Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change (2010).

[7] See, e.g., In re Silica Prods. Liab. Litig., 398 F.Supp. 2d 563 (S.D.Tex. 2005); Transcript of Daubert Hearing at 23 (Feb. 17, 2005) (“great red flags of fraud”).

[8] See Schauer endnote 44 to Chapter 3, “The Burden of Proof,” citing Valentin Amrhein, Sander Greenland, and Blake McShane, “Scientists Rise Up against Statistical Significance,” www.nature.com (March 20, 2019), which in turn commented upon Blakeley B. McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett, “Abandon Statistical Significance,” 73 American Statistician 235 (2019).

[9] Yoav Benjamini, Richard D. De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

[10] Schauer at 55. To be sure, Schauer, in endnote 43 to Chapter 3, disclaims any identification of p-values or measures of statistical significance with posterior probabilities or probabilistic measures of the burden of proof. Nonetheless, in the text, he proceeds to do exactly what he disclaimed in the endnote.

[11] Schauer at 55.

[12] Schauer at 56.

The Rise of Agnotology as Conspiracy Theory

July 19th, 2022

A few egregious articles in the biomedical literature have begun to endorse explicitly asymmetrical standards for inferring causation in the context of environmental or occupational exposures. Very little if anything is needed for inferring causation, and nothing counts against causation.  If authors refuse to infer causation, then they are agents of “industry,” epidemiologic malfeasors, and doubt mongers.

For an example of this genre, take the recent article, entitled “Toolkit for detecting misused epidemiological methods.”[1] [Toolkit] Please.

The asymmetry begins with Trump-like projection of the authors’ own foibles. The principal hammer in the authors’ toolkit for detecting misused epidemiologic methods is personal, financial bias. And yet, somehow, in an article that calls out other scientists for having received money from “industry,” the authors overlooked the business of disclosing their receipt of monies from one of the biggest industries around – the lawsuit industry.

Under the heading “competing interests,” the authors state that “they have no competing interests.”[2]  Lead author, Colin L. Soskolne, was, however, an active, partisan expert witness for plaintiffs’ counsel in diacetyl litigation.[3] In an asbestos case before the Pennsylvania Supreme Court, Rost v. Ford Motor Co., Soskolne signed on to an amicus brief, supporting the plaintiff, using his science credentials, without disclosing his expert witness work for plaintiffs, or his long-standing anti-asbestos advocacy.[4]

Author Shira Kramer signed on to Toolkit, without disclosing any conflicts, but with an even more impressive résumé of pro-plaintiff litigation experience.[5] Kramer is the owner of Epidemiology International, in Cockeysville, Maryland, where she services the lawsuit industry. She too was an “amicus” in Rost, without disclosing her extensive plaintiff-side litigation consulting and testifying.

Carl Cranor, another author of Toolkit, takes first place for hypocrisy on conflicts of interest. As a founder of the Council for Education and Research on Toxics (CERT), he has sterling credentials for monetizing the bounty hunt against “carcinogens,” most recently against coffee.[6] He has testified for plaintiffs in denture cream and benzene litigation. When he was excluded under Rule 702 from the Milward case, CERT filed an amicus brief on his behalf, without disclosing that Cranor was a founder of that organization.[7], [8]

The title seems reasonably fair-minded, but the virulent bias of the authors is soon revealed. The Toolkit is presented as a Table in the middle of the article, but the actual “tools” are for the most part not seriously discussed, other than advice to “follow the money” to identify financial conflicts of interest.

The authors acknowledge that epidemiology provides critical knowledge of risk factors and causation of disease, but they quickly transition to an effort to silence any industry commentator on any specific epidemiologic issue. As we will see, the lawsuit industry is given a complete pass. Not surprisingly, several of the authors (Kramer, Cranor, Soskolne) have worked closely in tandem with the lawsuit industry, and have derived financial rewards for their efforts.

Repeatedly, the authors tell us that epidemiologic methods and language are misused by “powerful interests,” which have financial stakes in the outcome of research. Agents of these interests foment uncertainty and doubt about causal relationships through “disinformation,” “malfeasance,” and “doubt mongering.” There is no correlative concern about false claiming or claim mongering.

Who are these agents who plot to sabotage “social justice” and “truth”? Clearly, they are scientists with whom the Toolkit authors disagree. The Toolkit gang cites several papers as exemplifying “malfeasance,”[9] but they never explain what was wrong with them, or how the malfeasors went astray.  The Toolkit tactics seem worthy of Twitter smear and run.

The Toolkit

The authors’ chart of “tools” used by industry might have been an interesting taxonomy of error, but mostly they are ad hominem attack on scientists with whom they disagree. Channeling Putin on Ukraine, those scientists who would impose discipline and rigor on epidemiologic science are derided as not “real epidemiologists,” and, to boot, they are guilty of ethical lapses in failing to advance “social justice.”

Mostly the authors give us a toolkit for silencing those who would get in the way of the situational science deployed at the beck and call of the lawsuit industry.[10] Indeed, the Toolkit authors are not shy about identifying their litigation goals; they tell us that the toolkit can be deployed in depositions and in cross-examinations to pursue “social justice.” These authors also outline a social agenda that greatly resembles the goals of cancel culture: expose the perpetrators who stand in the way of the authors’ preferred policy choices, diminish their adversaries’ influence on journals, and galvanize peer reviewers to reject their adversaries’ scientific publications. The Toolkit authors tell us that “[t]he scientific community should engage by recognizing and professionally calling out common practices used to distort and misapply epidemiological and other health-related sciences.”[11] What this advice translates into are covert and open ad hominem campaigns as peer reviewers to block publications, to deny adversaries tenure and promotions, and to use social and other media outlets to attack adversaries’ motives, good faith, and competence.

None of this is really new. Twenty-five years ago, the late F. Douglas K. Liddell railed at the Mt. Sinai mob, and the phenomenon was hardly new then.[12] The Toolkit’s call to arms is, however, quite open, and raises the question whether its authors and adherents can be fair journal editors and peer reviewers of journal submissions.

Much of the Toolkit is the implementation of a strategy developed by lawsuit industry expert witnesses to demonize their adversaries by accusing them of manufacturing doubt or ignorance or uncertainty. This strategy has gained a label used to deride those who disagree with litigation overclaiming: agnotology or the creation of ignorance. According to Professor Robert Proctor, a regular testifying historian for tobacco plaintiffs, a linguist, Iain Boal, coined the term agnotology, in 1992, to describe the study of the production of ignorance.[13]

The Rise of “Agnotology” in Ngram

Agnotology has become a cottage sub-industry of the lawsuit industry, although lawsuits (or claim mongering if you like), of course, remain its main product. Naomi Oreskes[14] and David Michaels[15] gave the agnotology field greater visibility with their publications, using the less erudite but catchier phrase “manufacturing doubt.” Although the study of ignorance and uncertainty has a legitimate role in epistemology[16] and sociology,[17] much of the current literature is dominated by those who use agnotology as propaganda in support of their own litigation and regulatory agendas.[18] One lone author, however, appears to have taken agnotology study seriously enough to see that it is largely a conspiracy theory that reduces complex historical or scientific theory, evidence, opinion, and conclusions to a clash between truth and a demonic ideology.[19]

Is there any substance to the Toolkit?

The Toolkit is not entirely empty of substantive issues. The authors note that “statistical methods are a critical component of the epidemiologist’s toolkit,”[20] and they cite some articles about common statistical mistakes missed by peer reviewers. Curiously, the Toolkit omits any meaningful discussion of statistical mistakes that increase the risk of false positive results, such as multiple comparisons or dichotomizing continuous confounder variables. As for the Toolkit’s number one identified “inappropriate” technique used by its authors’ adversaries, we have:

“A1. Relying on statistical hypothesis testing; Using ‘statistical significance’ at the 0.05 level of probability as a strict decision criterion to determine the interpretation of statistical results and drawing conclusions.”

Peer into the hearings on any so-called Daubert motion in federal court, and you will see the lawsuit industry and its hired expert witnesses rail at statistical significance, unless of course some subgroup shows nominal significance, in which case they are all in for endorsing the finding as “conclusive.”

Welcome to asymmetric, situational science.


[1] Colin L. Soskolne, Shira Kramer, Juan Pablo Ramos-Bonilla, Daniele Mandrioli, Jennifer Sass, Michael Gochfeld, Carl F. Cranor, Shailesh Advani & Lisa A. Bero, “Toolkit for detecting misused epidemiological methods,” 20(90) Envt’l Health (2021) [Toolkit].

[2] Toolkit at 12.

[3] Watson v. Dillon Co., 797 F.Supp. 2d 1138 (D. Colo. 2011).

[4] Rost v. Ford Motor Co., 151 A.3d 1032 (Pa. 2016). See “The Amicus Curious Brief” (Jan. 4, 2018).

[5] See, e.g., Sean v. BMW of North Am., LLC, 26 N.Y.3d 801, 48 N.E.3d 937, 28 N.Y.S.3d 656 (2016) (affirming exclusion of Kramer); The Little Hocking Water Ass’n v. E.I. Du Pont De Nemours & Co., 90 F.Supp.3d 746 (S.D. Ohio 2015) (excluding Kramer); Luther v. John W. Stone Oil Distributor, LLC, No. 14-30891 (5th Cir. April 30, 2015) (mentioning Kramer as litigation consultant); Clair v. Monsanto Co., 412 S.W.3d 295 (Mo. Ct. App. 2013 (mentioning Kramer as plaintiffs’ expert witness); In re Chantix (Varenicline) Prods. Liab. Litig., No. 2:09-CV-2039-IPJ, MDL No. 2092, 2012 WL 3871562 (N.D.Ala. 2012) (excluding Kramer’s opinions in part); Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507, Civ. No. 10-2125 (E.D. La. Dec. 21, 2012) (excluding Kramer); Donaldson v. Central Illinois Public Service Co., 199 Ill. 2d 63, 767 N.E.2d 314 (2002) (affirming admissibility of Kramer’s opinions in absence of Rule 702 standards).

[6]  “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). Among the fellow travelers who wittingly or unwittingly supported CERT’s scheme to pervert the course of justice were lawsuit industry stalwarts, Arthur L. Frank, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, Ronald L. Melnick, David Ozonoff, and David Rosner. See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018); Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).

[7] Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137, 148 (D. Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. den. sub nom. U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012), on remand, Milward v. Acuity Specialty Products Group, Inc., 969 F.Supp. 2d 101 (D. Mass. 2013) (excluding specific causation opinions as invalid; granting summary judgment), aff’d, 820 F.3d 469 (1st Cir. 2016).

[8] To put this effort into a sociology of science perspective, the Toolkit article is published in a journal, Environmental Health, an Editor in Chief of which is David Ozonoff, a long-time pro-plaintiff partisan in the asbestos litigation. The journal has an “ombudsman,” Anthony Robbins, who was one of the movers-and-shakers in forming SKAPP, The Project on Scientific Knowledge and Public Policy, a group that plotted to undermine the application of federal evidence law to expert witness opinion testimony. SKAPP itself is now defunct, but its spirit of subverting law lives on with efforts such as the Toolkit. “More Antic Proposals for Expert Witness Testimony – Including My Own Antic Proposals” (Dec. 30, 2014). Robbins is also affiliated with an effort, led by historian and plaintiffs’ expert witness David Rosner, to perpetuate misleading historical narratives of environmental and occupational health. “ToxicHistorians Sponsor ToxicDocs” (Feb. 1, 2018); “Creators of ToxicDocs Show Off Their Biases” (June 7, 2019); Anthony Robbins & Phyllis Freeman, “ToxicDocs (www.ToxicDocs.org) goes live: A giant step toward leveling the playing field for efforts to combat toxic exposures,” 39 J. Public Health Pol’y 1 (2018).

[9] The exemplars cited were Paolo Boffetta, Hans Olov Adami, Philip Cole, Dimitrios Trichopoulos & Jack Mandel, “Epidemiologic studies of styrene and cancer: a review of the literature,” 51 J. Occup. & Envt’l Med. 1275 (2009); Carlo LaVecchia & Paolo Boffetta, “Role of stopping exposure and recent exposure to asbestos in the risk of mesothelioma,” 21 Eur. J. Cancer Prev. 227 (2012); John Acquavella, David Garabrant, Gary Marsh, Thomas Sorahan & Douglas L. Weed, “Glyphosate epidemiology expert panel review: a weight of evidence systematic review of the relationship between glyphosate exposure and non-Hodgkin’s lymphoma or multiple myeloma,” 46 Crit. Rev. Toxicol. S28 (2016); Catalina Ciocan, Nicolò Franco, Enrico Pira, Ihab Mansour, Alessandro Godono & Paolo Boffetta, “Methodological issues in descriptive environmental epidemiology. The example of study Sentieri,” 112 La Medicina del Lavoro 15 (2021).

[10] The Toolkit authors acknowledge that their identification of “tools” was drawn from previous publications of the same ilk, in the same journal. Rebecca F. Goldberg & Laura N. Vandenberg, “The science of spin: targeted strategies to manufacture doubt with detrimental effects on environmental and public health,” 20:33 Envt’l Health (2021).

[11] Toolkit at 11.

[12] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997). See “The Lobby – Cut on the Bias” (July 6, 2020).

[13] Robert N. Proctor & Londa Schiebinger, Agnotology: The Making and Unmaking of Ignorance (2008).

[14] Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (2010); Naomi Oreskes & Erik M. Conway, “Defeating the merchants of doubt,” 465 Nature 686 (2010).

[15] David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020); David Michaels, Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health (2008); David Michaels, “Science for Sale,” Boston Rev. 2020; David Michaels, “Corporate Campaigns Manufacture Scientific Doubt,” 174 Science News 32 (2008); David Michaels, “Manufactured Uncertainty: Protecting Public Health in the Age of Contested Science and Product Defense,” 1076 Ann. N.Y. Acad. Sci. 149 (2006); David Michaels, “Scientific Evidence and Public Policy,” 95 Am. J. Public Health s1 (2005); David Michaels & Celeste Monforton, “Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment,” 95 Am. J. Pub. Health S39 (2005); David Michaels & Celeste Monforton, “Scientific Evidence in the Regulatory System: Manufacturing Uncertainty and the Demise of the Formal Regulatory System,” 13 J. L. & Policy 17 (2005); David Michaels, “Doubt is Their Product,” Sci. Am. 96 (June 2005); David Michaels, “The Art of ‘Manufacturing Uncertainty’,” L.A. Times (June 24, 2005).

[16] See, e.g., Sibilla Cantarini, Werner Abraham, and Elisabeth Leiss, eds., Certainty-uncertainty – and the Attitudinal Space in Between (2014); Roger M. Cooke, Experts in Uncertainty: Opinion and Subjective Probability in Science (1991).

[17] See, e.g., Ralph Hertwig & Christoph Engel, eds., Deliberate Ignorance: Choosing Not to Know (2021); Linsey McGoey, The Unknowers: How Strategic Ignorance Rules the World (2019); Michael Smithson, “Toward a Social Theory of Ignorance,” 15 J. Theory Social Behavior 151 (1985).

[18] See Janet Kourany & Martin Carrier, eds., Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted (2020); John Launer, “The production of ignorance,” 96 Postgraduate Med. J. 179 (2020); David S. Egilman, “The Production of Corporate Research to Manufacture Doubt About the Health Hazards of Products: An Overview of the Exponent Bakelite® Simulation Study,” 28 New Solutions 179 (2018); Larry Dossey, “Agnotology: on the varieties of ignorance, criminal negligence, and crimes against humanity,” 10 Explore 331 (2014); Gerald Markowitz & David Rosner, Deceit and Denial: The Deadly Politics of Industrial Revolution (2002).

[19] See Enea Bianchi, “Agnotology: a Conspiracy Theory of Ignorance?” Ágalma: Rivista di studi culturali e di estetica 41 (2021).

[20] Toolkit at 4.

Madigan’s Shenanigans & Wells Quelled in Incretin-Mimetic Cases

July 15th, 2022

The incretin-mimetic litigation involved claims that the use of Byetta, Januvia, Janumet, and Victoza medications causes pancreatic cancer. All four medications treat diabetes mellitus through incretin hormones, which stimulate or support insulin production, which in turn lowers blood sugar. On Planet Earth, the only scientists who contend that these medications cause pancreatic cancer are those hired by the lawsuit industry.

The cases against the manufacturers of the incretin-mimetic medications were consolidated for pre-trial proceedings in federal court, pursuant to the multi-district litigation (MDL) statute, 28 US Code § 1407. After years of MDL proceedings, the trial court dismissed the cases as barred by the doctrine of federal preemption, and for good measure, excluded plaintiffs’ medical causation expert witnesses from testifying.[1] If there were any doubt about the false claiming in this MDL, the district court’s dismissals were affirmed by the Ninth Circuit.[2]

The district court’s application of Federal Rule of Evidence 702 to the plaintiffs’ expert witnesses’ opinions is an important essay in patho-epistemology. The challenged expert witnesses provided many examples of invalid study design and interpretation. Of particular interest, two of the plaintiffs’ high-volume statistician testifiers, David Madigan and Martin Wells, proffered their own meta-analyses of clinical trial safety data. Although the current edition of the Reference Manual on Scientific Evidence[3] provides virtually no guidance to judges for assessing the validity of meta-analyses, judges and counsel now have other readily available sources, such as the FDA’s Guidance on meta-analysis of safety outcomes of clinical trials.[4] Luckily for the Incretin-Mimetics pancreatic cancer MDL judge, the misuse of meta-analysis methodology by plaintiffs’ statistician expert witnesses, David Madigan and Martin Wells, was intuitively obvious.

Madigan and Wells had a large set of clinical trials at their disposal, with adverse safety outcomes assiduously collected. As is the case with many clinical trial safety outcomes, the trialists will often have a procedure for blinded or unblinded adjudication of safety events, such as pancreatic cancer diagnosis.

At deposition, Madigan testified that he counted only adjudicated cases of pancreatic cancer in his meta-analyses, which seems reasonable enough. As discovery revealed, however, Madigan employed the restrictive inclusion criteria of adjudicated pancreatic cancer only for the placebo group, not for the experimental group. His use of restrictive inclusion criteria for only the placebo group had the effect of excluding several non-adjudicated events, with the obvious spurious inflation of risk ratios. The MDL court thus found with relative ease that Madigan’s “unequal application of criteria among the two groups inevitably skews the data and critically undermines the reliability of his analysis.” The unexplained, unjustified change in methodology revealed Madigan’s unreliable “cherry-picking” and lack of scientific rigor as producing a result-driven meta-analysis.[5]
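The distorting effect of applying an inclusion criterion to one arm only can be seen in a toy calculation. This is a sketch with invented counts, not data from the actual litigation: it simply shows how filtering events out of the placebo arm alone manufactures an elevated risk ratio where none exists.

```python
# Hypothetical illustration (all counts invented): applying an
# "adjudicated cases only" filter to the placebo arm alone inflates
# the risk ratio, even when the two arms are identical.

def risk_ratio(treated_events, treated_n, placebo_events, placebo_n):
    """Risk ratio: incidence in the treated arm over incidence in the placebo arm."""
    return (treated_events / treated_n) / (placebo_events / placebo_n)

# Suppose each arm of 10,000 patients had 10 reported pancreatic cancers,
# of which 7 were adjudicated (confirmed) in each arm.
same_criteria = risk_ratio(7, 10_000, 7, 10_000)      # adjudicated counts in BOTH arms
unequal_criteria = risk_ratio(10, 10_000, 7, 10_000)  # all events in the treated arm,
                                                      # adjudicated only in placebo arm

print(same_criteria)     # 1.0 -- no elevated risk
print(unequal_criteria)  # ~1.43 -- spurious elevation from the asymmetric filter
```

The asymmetric filter turns a null result into a 43% apparent increase in risk, which is the kind of skew the MDL court found fatal to reliability.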

The MDL court similarly found that Wells’ reports “were marred by a selective review of data and inconsistent application of inclusion criteria.”[6] Like Madigan, Wells cherry picked studies. For instance, he excluded one study, EXSCEL, on grounds that it reported “a high pancreatic cancer event rate in the comparison group as compared to background rate in the general population….”[7] Wells’ explanation blatantly failed, however, given that the entire patient population of the clinical trial had diabetes, a known risk factor for pancreatic cancer.[8]

As Professor Ioannidis and others have noted, we are awash in misleading meta-analyses:

“Currently, there is massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses. Instead of promoting evidence-based medicine and health care, these instruments often serve mostly as easily produced publishable units or marketing tools.  Suboptimal systematic reviews and meta-analyses can be harmful given the major prestige and influence these types of studies have acquired.  The publication of systematic reviews and meta-analyses should be realigned to remove biases and vested interests and to integrate them better with the primary production of evidence.”[9]

Whether created for litigation, like the Madigan-Wells meta-analyses, or published in the “peer-reviewed” literature, courts will have to up their game in assessing the validity of such studies. Published meta-analyses have grown exponentially from the 1990s to the present. To date, 248,886 meta-analyses have been published, according to the National Library of Medicine’s Pub-Med database. Last year saw over 35,000 meta-analyses published. So far, this year, 20,416 meta-analyses have been published, and we appear to be on track to have a bumper crop.

The data analytics from Pub-Med provide a helpful visual representation of the growth of meta-analyses in biomedical science.

 

Count of Publications with Keyword Meta-analysis in Pub-Med Database

In 1979, the year I started law school, one meta-analysis was published. Lawyers could still legitimately argue that meta-analyses involved novel methodology that had not been generally accepted. The novelty of meta-analysis wore off sometime between 1988, when Judge Robert Kelly excluded William Nicholson’s meta-analysis of health outcomes among PCB-exposed workers, on grounds that such analyses were “novel,” and 1990, when the Third Circuit reversed Judge Kelly, with instructions to assess study validity.[10] Fortunately, or not, depending upon your point of view, plaintiffs dropped Nicholson’s meta-analysis in subsequent proceedings. A close look at Nicholson’s non-peer-reviewed calculations shows that he failed to standardize for age or sex, and that he merely added observed and expected cases, across studies, without weighting by individual study variance. The trial court never had the opportunity to assess the validity vel non of Nicholson’s ersatz meta-analysis.[11] Today, trial courts must take up the challenge of assessing the validity of meta-analyses relied upon by expert witnesses, regulatory agencies, and systematic reviews.
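The difference between merely adding observed and expected cases across studies and a conventional fixed-effect, inverse-variance meta-analysis can be shown with a small sketch. The study counts below are invented for illustration; the point is only that the two pooling methods can diverge, which is one reason naive addition across heterogeneous studies (the route to Simpson’s paradox) is not an accepted meta-analytic method.

```python
import math

# Hypothetical illustration (invented study data): pooling observed (O) and
# expected (E) deaths by simple addition versus a standard fixed-effect,
# inverse-variance meta-analysis on the log of each study's O/E ratio.

studies = [  # (observed, expected) per study -- made-up numbers
    (30, 20),
    (5, 10),
    (60, 50),
]

# Naive pooling: add observed and expected across studies.
naive = sum(o for o, e in studies) / sum(e for o, e in studies)

# Fixed-effect inverse-variance pooling on the log scale.
# For an O/E ratio, var(log(O/E)) is approximately 1/O, so weight = O.
weights = [o for o, e in studies]
log_ratios = [math.log(o / e) for o, e in studies]
pooled_log = sum(w * lr for w, lr in zip(weights, log_ratios)) / sum(weights)
weighted = math.exp(pooled_log)

print(round(naive, 3))     # simple O/E addition across studies
print(round(weighted, 3))  # variance-weighted pooled estimate
```

With these invented counts the naive pooled ratio is about 1.19 while the weighted estimate is about 1.23; with less balanced data the gap, and the direction of the error, can be much worse.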


[1] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007 (S.D. Cal. 2021).

[2] In re Incretin-Based Therapies Prods. Liab. Litig., No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022) (per curiam).

[3]  “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 15, 2011).

[4] Food and Drug Administration, Center for Drug Evaluation and Research, “Meta-Analyses of Randomized Controlled Clinical Trials to Evaluate the Safety of Human Drugs or Biological Products – (Draft) Guidance for Industry” (Nov. 2018); Jonathan J. Deeks, Julian P.T. Higgins, Douglas G. Altman, “Analysing data and undertaking meta-analyses,” Chapter 10, in Julian P.T. Higgins, James Thomas, Jacqueline Chandler, Miranda Cumpston, Tianjing Li, Matthew J. Page, and Vivian Welch, eds., Cochrane Handbook for Systematic Reviews of Interventions (version 6.3 updated February 2022); Donna F. Stroup, Jesse A. Berlin, Sally C. Morton, Ingram Olkin, G. David Williamson, Drummond Rennie, David Moher, Betsy J. Becker, Theresa Ann Sipe, Stephen B. Thacker, “Meta-Analysis of Observational Studies: A Proposal for Reporting,” 283 J. Am. Med. Ass’n 2008 (2000); David Moher, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G Altman, “Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement,” 6 PLoS Med e1000097 (2009).

[5] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007, 1037 (S.D. Cal. 2021). See In re Lipitor (Atorvastatin Calcium) Mktg., Sales Practices & Prods. Liab. Litig. (No. II) MDL2502, 892 F.3d 624, 634 (4th Cir. 2018) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

[6] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007, 1043 (S.D. Cal. 2021).

[7] Id. at 1038.

[8] See, e.g., Albert B. Lowenfels & Patrick Maisonneuve, “Risk factors for pancreatic cancer,” 95 J. Cellular Biochem. 649 (2005).

[9] John P. Ioannidis, “The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses,” 94 Milbank Quarterly 485 (2016).

[10] In re Paoli R.R. Yard PCB Litig., 706 F. Supp. 358, 373 (E.D. Pa. 1988), rev’d and remanded, 916 F.2d 829, 856-57 (3d Cir. 1990), cert. denied, 499 U.S. 961 (1991). See also Hines v. Consol. Rail Corp., 926 F.2d 262, 273 (3d Cir. 1991).

[11]The Shmeta-Analysis in Paoli” (July 11, 2019). See  James A. Hanley, Gilles Thériault, Ralf Reintjes and Annette de Boer, “Simpson’s Paradox in Meta-Analysis,” 11 Epidemiology 613 (2000); H. James Norton & George Divine, “Simpson’s paradox and how to avoid it,” Significance 40 (Aug. 2015); George Udny Yule, “Notes on the theory of association of attributes in statistics,” 2 Biometrika 121 (1903).

The Faux Bayesian Approach in Litigation

July 13th, 2022

In an interesting series of cases, an expert witness claimed to have arrived at the specific causation of plaintiff’s stomach cancer by using “Bayesian probabilities which consider the interdependence of individual probabilities.” Courtesy of counsel in the cases, I have been able to obtain a copy of the report of the expert witness, Dr. Robert P. Gale. The cases in which Dr. Gale served were all FELA cancer cases against the Union Pacific Railroad, brought for cancers diagnosed in the plaintiffs. Given his research and writings in hematopoietic cancers and molecular biology, Dr. Gale would seem to have been a credible expert witness for the plaintiffs in their cases.[1]

The three cases involving Dr. Gale were all decisions on Rule 702 motions to exclude his causation opinions. In all three cases, the court found Dr. Gale to be qualified to opine on causation, a finding decided under a very low standard in federal court. In two of the cases, the same judge, federal Magistrate Judge Cheryl R. Zwart, excluded Dr. Gale’s opinions.[2] In at least one of the two cases, the decision seemed rather straightforward, given that Dr. Gale claimed to have ruled out alternative causes of Mr. Hernandez’s stomach cancer. Somehow, despite his qualifications, however, Dr. Gale missed that Mr. Hernandez had had Helicobacter pylori infections before he was diagnosed with stomach cancer.

In the third case, the district judge denied the Rule 702 motion against Dr. Gale, in a cursory, non-searching review.[3]

The common thread in all three cases is that the courts dutifully noted that Dr. Gale had described his approach to specific causation as involving “Bayesian probabilities which consider the interdependence of individual probabilities.” The judicial decisions never described how Dr. Gale’s invocation of Bayesian probabilities contributed to his specific causation opinion, and a careful review of Dr. Gale’s report reveals no such analysis. To be explicit, there was no discussion of prior or posterior probabilities or odds, no discussion of likelihood ratios, or Bayes factors. There was absolutely nothing in Dr. Gale’s report that would warrant his claim that he had done a Bayesian analysis of specific causation or of the “interdependence of individual probabilities” of putative specific causes.
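For contrast, a genuine Bayesian analysis of specific causation would have to contain, at a minimum, the elements missing from the report: a prior probability, a likelihood ratio for the evidence, and a posterior computed from them. The numbers below are invented solely to show the shape of such an analysis; nothing like it appeared in Dr. Gale’s report.

```python
# A minimal sketch (with invented numbers) of what an actual Bayesian
# analysis of specific causation would contain: a prior probability,
# a likelihood ratio, and a posterior via Bayes' rule in odds form.

def posterior_probability(prior_prob, likelihood_ratio):
    """Update a prior probability of causation by a likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical: a 10% prior probability that the exposure caused this
# cancer, and case-specific evidence deemed three times more likely
# under the causal hypothesis than under its negation.
p = posterior_probability(0.10, 3.0)
print(round(p, 3))  # 0.25
```

Even on these generous invented inputs, the posterior probability is far below the preponderance threshold, which may suggest why no such calculation was actually performed.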

We might forgive the credulity of the judicial officers in these cases, but why would Dr. Gale state that he had done a Bayesian analysis? The only reason that suggests itself is that Dr. Gale was bloviating in order to give his specific causation opinions an aura of scientific and mathematical respectability. Falsus in duo, falsus in omnibus.[4]


[1] See, e.g., Robert Peter Gale, et al., Fetal Liver Transplantation (1987); Robert Peter Gale & Thomas Hauser, Chernobyl: The Final Warning (1988); Kenneth A. Foon, Robert Peter Gale, et al., Immunologic Approaches to the Classification and Management of Lymphomas and Leukemias (1988); Eric Lax & Robert Peter Gale, Radiation: What It Is, What You Need to Know (2013).

[2] Byrd v. Union Pacific RR, 453 F.Supp.3d 1260 (D. Neb. 2020) (Zwart, M.J.); Hernandez v. Union Pacific RR, No. 8: 18CV62 (D. Neb. Aug. 14, 2020).

[3] Langrell v. Union Pacific RR, No. 8:18CV57, 2020 WL 3037271 (D. Neb. June 5, 2020) (Bataillon, S.J.).

[4] Dr. Gale’s testimony has not fared well elsewhere. See, e.g., In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp.3d 1007 (S.D. Cal. 2021) (excluding Gale); Wilcox v. Homestake Mining Co., 619 F. 3d 1165 (10th Cir. 2010); June v. Union Carbide Corp., 577 F. 3d 1234 (10th Cir. 2009) (affirming exclusion of Dr. Gale and entry of summary judgment); Finestone v. Florida Power & Light Co., 272 F. App’x 761 (11th Cir. 2008); In re Rezulin Prods. Liab. Litig., 309 F.Supp.2d 531 (S.D.N.Y. 2004) (excluding Dr. Gale from offering ethical opinions).

Small Relative Risks and Causation (General & Specific)

June 28th, 2022

The Bradford Hill Predicate: Ruling Out Random and Systematic Error

In two recent posts, I spent some time discussing a recent law review article, which had some important things to say about specific causation.[1] One of several points from which I dissented was the article’s argument that Sir Austin Bradford Hill had not made explicit that ruling out random and systematic error was required before assessing his nine “viewpoints” on whether an association was causal. I take some comfort in the correctness of my interpretation of Sir Austin’s famous article from no less an authority than Sir Richard Doll’s own analysis of his friend and colleague’s views:

“In summary, we have to show, first, that the association cannot reasonably be explained by chance (bearing in mind that extreme chances do turn up from time to time or no one would buy a ticket in a national lottery), by methodological bias (which can have many sources), or by confounding (which needs to be explored but should not be postulated without some idea of what it might be). Second, we have to see whether the available evidence gives positive support to the concept of causality: that is to say, how it matches up to Hill’s (1965) guidelines (Table 1).”[2]

On the issue of whether small relative risks can establish general causation, the Differential Etiology paper urged caution in interpreting results when “strength of a relationship is modest.” The strength of an association is, of course, one of the nine Bradford Hill viewpoints, which come into play after we have a “clear-cut” association beyond what we would care to attribute to chance. Additionally, strength of association is primarily a quantitative assessment, and the advice given about caution in the face of “modest” associations is not terribly helpful. The scientific literature does better.

Sir Richard’s 2002 paper is in a sense a scientific autobiography about some successes in discerning causal associations from observational studies. Unlike the essays of expert witnesses for the lawsuit industry, Sir Richard’s essay is notable for its intellectual humility. In addition to its clear and explicit articulation of the need to rule out random and systematic error before proceeding to a consideration of Sir Austin’s nine guidelines, Sir Richard Doll’s 2002 essay is instructive for judges and lawyers, for other reasons. For example, he raises and explains the problem encountered for causal inference by small relative risks:

“Small relative risks of the order of 2:1 or even less are what are likely to be observed, like the risk now recorded for childhood leukemia and exposure to magnetic fields of 0.4 µT or more (Ahlbom et al. 2000) that are seldom encountered in the United Kingdom. And here the problems of eliminating bias and confounding are immense.”[3]

Sir Richard opines that relative risks under two can be shown to be causal associations, but often with massive data, randomization, and a good deal of support from experimental work.

Another Sir Richard, Sir Richard Peto, along with Sir Richard Doll, raised this concern in their classic essay on the causes of cancer, where they noted that relative risks between one and two create extremely difficult problems of interpretation because the role of the association cannot be confidently disentangled from the contribution of biases.[4] Small relative risks are thus seen as raising a concern about bias and confounding.[5]

In the legal world, courts have recognized that the larger the relative risk, or the strength of association, the more likely a general causation inference can be drawn, even when they blithely ignored the role of actual or residual confounding.[6]

The chapters on statistics and on epidemiology in the current (third) edition of the Reference Manual on Scientific Evidence directly tie the magnitude of the association to the elimination of confounding as an alternative explanation for causality of an association. A larger “effect size,” such as for smoking and lung cancer (greater than ten-fold, and often higher than 30-fold), eliminates the need to worry about confounding:

“Many confounders have been proposed to explain the association between smoking and lung cancer, but careful epidemiological studies have ruled them out, one after the other.”[7]

*  *  *  *  *  *

“A relative risk of 10, as seen with smoking and lung cancer, is so high that it is extremely difficult to imagine any bias or confounding factor that might account for it. The higher the relative risk, the stronger the association and the lower the chance that the effect is spurious. Although lower relative risks can reflect causality, the epidemiologist will scrutinize such associations more closely because there is a greater chance that they are the result of uncontrolled confounding or biases.”[8]

The Reference Manual omits the converse: the lower the relative risk, the weaker the association and the greater the chance that the apparent effect is spurious. The authors’ intent, however, is clear enough. In the Appendix, below, I have collected some pronouncements from the scientific literature that urge caution in drawing causal inferences in the face of weak associations, but with more quantitative guidance.

Small RRs and Specific Causation

Sir Richard Doll was among the first generation of epidemiologists in the academic world. He eschewed the use of epidemiology for discerning the cause of an individual’s disease:

“That asbestos is a cause of lung cancer in this practical sense is incontrovertible, but we can never say that asbestos was responsible for the production of the disease in a particular patient, as there are many other etiologically significant agents to which the individual may have been exposed, and we can speak only of the extent to which the risk of the disease was increased by the extent of his or her exposure.”[9]

On the individual attribution issue, Sir Richard’s views do not hold up as well as his analysis of general causation. Epidemiologic study results are used to predict future disease in individuals, to guide screening and prophylaxis decisions, to determine pharmacologic and surgical interventions in individuals, and to provide prognoses to individuals. Just as confounding falls by the wayside in the analysis of general causation with relative risks greater than 20, so too do the concerns about equating increased risk with specific causation.

The urn model of probability, however, gives us some insight into attributability. If we expected 100 cases of a disease in a sample of a certain size, but we observed 200 cases, then we would have 100 expected and 100 excess cases. Attribution would be no better than a flip of a coin. If, however, the relative risk were 20, we might have 100 expected cases and 1,900 excess cases out of 2,000 observed. The odds of a given case’s being an excess case are rather strong, and even the agnostics and dissenters from probabilistic reasoning in individual cases become weak-kneed about denying recovery when the claimant is similar to the cases seen in the study sample.

******************Appendix*************************

Norman E. Breslow & N. E. Day, “Statistical Methods in Cancer Research,” in The Analysis of Case-Control Studies 36 (IARC Pub. No. 32, 1980) (“[r]elative risks of less than 2.0 may readily reflect some unperceived bias or confounding factor”)

Richard Doll & Richard Peto, The Causes of Cancer 1219 (1981) (“when relative risk lies between 1 and 2 … problems of interpretation may become acute, and it may be extremely difficult to disentangle the various contributions of biased information, confounding of two or more factors, and cause and effect.”)

Iain K. Crombie, “The limitations of case-control studies in the detection of environmental carcinogens,” 35 J. Epidem. & Community Health 281, 281 (1981) (“The case-control study is unable to detect very small relative risks (< 1.5) even where exposure is widespread and large numbers of cases of cancer are occurring in the population.”)

Ernst L. Wynder & Geoffrey C. Kabat, “Environmental Tobacco Smoke and Lung Cancer: A Critical Assessment,” in H. Kasuga, ed., Indoor Air Quality 5, 6 (1990) (“An association is generally considered weak if the odds ratio is under 3.0 and particularly when it is under 2.0, as is the case in the relationship of ETS and lung cancer. If the observed relative risk is small, it is important to determine whether the effect could be due to biased selection of subjects, confounding, biased reporting, or anomalies of particular subgroups.”).

Ernst L. Wynder, “Epidemiological issues in weak associations,” 19 Internat’l J. Epidemiol. S5 (1990).

David Sackett, R. Haynes, Gordon Guyatt, and Peter Tugwell, Clinical Epidemiology: A Basic Science for Clinical Medicine (2d ed. 1991).

Muin J. Khoury, Levy M. James, W. Dana Flanders, and David J. Erickson, “Interpretation of recurring weak associations obtained from epidemiologic studies of suspected human teratogens,” 46 Teratology 69 (1992).

Lynn Rosenberg, “Induced Abortion and Breast Cancer: More Scientific Data Are Needed,” 86 J. Nat’l Cancer Instit. 1569, 1569 (1994) (“A typical difference in risk (50%) is small in epidemiologic terms and severely challenges our ability to distinguish if it reflects cause and effect or if it simply reflects bias.”) (commenting upon Janet R. Daling, K. E. Malone, L. F. Voigt, E. White, and Noel S. Weiss, “Risk of breast cancer among young women: relationship to induced abortion,” 86 J. Nat’l Cancer Inst. 1584 (1994)).

Linda Anderson, “Abortion and possible risk for breast cancer: analysis and inconsistencies,” (Wash. D.C., Nat’l Cancer Institute, Oct. 26, 1994) (“In epidemiologic research, relative risks of less than 2 are considered small and are usually difficult to interpret. Such increases may be due to chance, statistical bias, or effects of confounding factors that are sometimes not evident.”).

Washington Post (Oct. 27, 1994) (quoting Dr. Eugenia Calle, Director of Analytic Epidemiology for the American Cancer Society: “Epidemiological studies, in general are probably not able, realistically, to identify with any confidence any relative risks lower than 1.3 (that is a 30% increase in risk) in that context, the 1.5 [reported relative risk of developing breast cancer after abortion] is a modest elevation compared to some other risk factors that we know cause disease.”).

Gary Taubes, “Epidemiology Faces Its Limits,” 269 Science 164, 168 (July 14, 1995) (quoting Marcia Angell, former editor of the New England Journal of Medicine, as stating that “[a]s a general rule of thumb, we are looking for a relative risk of 3 or more [before accepting a paper for publication], particularly if it is biologically implausible or if it’s a brand new finding.”) (quoting John C. Bailar: “If you see a 10-fold relative risk and it’s replicated and it’s a good study with biological backup, like we have with cigarettes and lung cancer, you can draw a strong inference. * * * If it’s a 1.5 relative risk, and it’s only one study and even a very good one, you scratch your chin and say maybe.”).

Samuel Shapiro, “Bias in the evaluation of low-magnitude associations: an empirical perspective,” 151 Am. J. Epidemiol. 939 (2000).

David A. Freedman & Philip B. Stark, “The Swine Flu Vaccine and Guillain-Barré Syndrome: A Case Study in Relative Risk and Specific Causation,” 64 Law & Contemp. Probs. 49, 61 (2001) (“If the relative risk is near 2.0, problems of bias and confounding in the underlying epidemiologic studies may be serious, perhaps intractable.”).

S. Straus, W. Richardson, P. Glasziou, and R. Haynes, Evidence-Based Medicine: How to Teach and Practice EBM (3d ed. 2005).

David F. Goldsmith & Susan G. Rose, “Establishing Causation with Epidemiology,” in Tee L. Guidotti & Susan G. Rose, eds., Science on the Witness Stand: Evaluating Scientific Evidence in Law, Adjudication, and Policy 57, 60 (2001) (“There is no clear consensus in the epidemiology community regarding what constitutes a ‘strong’ relative risk, although, at a minimum, it is likely to be one where the RR is greater than two; i.e., one in which the risk among the exposed is at least twice as great as among the unexposed.”).

Samuel Shapiro, “Looking to the 21st century: have we learned from our mistakes, or are we doomed to compound them?” 13 Pharmacoepidemiol. & Drug Safety 257 (2004).

Mark Parascandola, Douglas L. Weed & Abhijit Dasgupta, “Two Surgeon General’s reports on smoking and cancer: a historical investigation of the practice of causal inference,” 3 Emerging Themes in Epidemiol. 1 (2006).

Heinemann, “Epidemiology of Selected Diseases in Women,” chap. 4, in M.A. Lewis, M. Dietel, P.C. Scriba, W.K. Raff, eds., Biology and Epidemiology of Hormone Replacement Therapy 47, 48 (2006) (discussing the “small relative risks in relation to bias/confounding and causal relation.”).

Roger D. Peng, Francesca Dominici, and Scott L. Zeger, “Reproducible Epidemiologic Research,” 163 Am. J. Epidem. 783, 784 (2006) (“The targets of current investigations tend to have smaller relative risks that are more easily confounded.”).

R. Bonita, R. Beaglehole & T. Kjellström, Basic Epidemiology 93 (W.H.O. 2d ed. 2006) (“A strong association between possible cause and effect, as measured by the size of the risk ratio (relative risk), is more likely to be causal than is a weak association, which could be influenced by confounding or bias. Relative risks greater than 2 can be considered strong.”).

David A. Grimes & Kenneth F. Schulz, “False alarms and pseudo-epidemics: the limitations of observational epidemiology,” 120 Obstet. & Gynecol. 920 (2012) (“Most reported associations in observational clinical research are false, and the minority of associations that are true are often exaggerated. This credibility problem has many causes, including the failure of authors, reviewers, and editors to recognize the inherent limitations of these studies. This issue is especially problematic for weak associations, variably defined as relative risks (RRs) or odds ratios (ORs) less than 4.”).

Kenneth F. Schulz & David A. Grimes, Essential Concepts in Clinical Research: Randomised Controlled Trials and Observational Epidemiology at 75 (2d ed. 2019) (“Even after attempts to minimise selection and information biases and after control for known potential confounding factors, bias often remains. These biases can easily account for small associations. As a result, weak associations (which dominate in published studies) must be viewed with circumspection and humility. Weak associations, defined as relative risks between 0.5 and 2.0, in a cohort study can readily be accounted for by residual bias (Fig. 7.2). Because case-control studies are more susceptible to bias than are cohort studies, the bar must be set higher. In case-control studies, weak associations can be viewed as odds ratios between 0.33 and 3.0 (Fig. 7.3). Results that fall within these zones may be due to bias. Results that fall outside these bounds in either direction may deserve attention.”).

Brian L. Strom, “Basic Principles of Clinical Epidemiology Relevant to Pharmacoepidemiologic Studies,” chap. 3, in Brian L. Strom, Stephen E. Kimmel & Sean Hennessy, eds., Pharmacoepidemiology 48 (6th ed. 2020) (“Conventionally, epidemiologists consider an association with a relative risk of less than 2.0 a weak association.”).


[1] Joseph Sanders, David L. Faigman, Peter B. Imrey, and Philip Dawid, “Differential Etiology: Inferring Specific Causation in the Law from Group Data in Science,” 63 Ariz. L. Rev. 851 (2021) [Differential Etiology].

[2] Richard Doll, “Proof of Causality: deduction from epidemiological observation,” 45 Persp. Biology & Med. 499, 501 (2002) (emphasis added).

[3] Id. at 512.

[4] Richard Doll & Richard Peto, The Causes of Cancer 1219 (1981) (“when relative risk lies between 1 and 2 … problems of interpretation may become acute, and it may be extremely difficult to disentangle the various contributions of biased information, confounding of two or more factors, and cause and effect.”).

[5] “Confounding in the Courts” (Nov. 2, 2018); “General Causation and Epidemiologic Measures of Risk Size” (Nov. 24, 2012).

[6] See King v. Burlington Northern Santa Fe Railway Co., 762 N.W.2d 24, 40 (Neb. 2009) (“the higher the relative risk, the greater the likelihood that the relationship is causal”); Landrigan v. Celotex Corp., 127 N.J. 404, 605 A.2d 1079, 1086 (1992) (“The relative risk of lung cancer in cigarette smokers as compared to nonsmokers is on the order of 10:1, whereas the relative risk of pancreatic cancer is about 2:1. The difference suggests that cigarette smoking is more likely to be a causal factor for lung cancer than for pancreatic cancer.”).

[7] RMSE3d at 219.

[8] RMSE3d at 602. 

[9] Richard Doll, “Proof of Causality: deduction from epidemiological observation,” 45 Persp. Biology & Med. 499, 500 (2002).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.