TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Acetaminophen & Autism – Prada Review Misleadingly Claims to Be NIH Funded

September 9th, 2025

A few weeks ago, four scientists published what they called a “navigation guide” systematic review on acetaminophen use and autism.[1] The last named author, Andrea A. Baccarelli, is an environmental epidemiologist who has been an expert witness for plaintiffs’ counsel in lawsuits against the manufacturers and sellers of acetaminophen. Another author, Beate Ritz, frequently testifies for the lawsuit industry in cases against various manufacturing industries. A third author, Ann Z. Bauer, was the lead author of a [faux] “consensus statement” that invoked the precautionary principle to call for limits on the use of acetaminophen (N-acetyl-p-aminophenol, or APAP) by pregnant women, on grounds that such use may increase the risks of neurodevelopmental (including autism), reproductive, and urogenital disorders.[2] The lead author was Diddier Prada, who works in Manhattan, at the Icahn School of Medicine at Mount Sinai, in the environmental and climate science department, within the Institute for Health Equity Research. The Mount Sinai website describes Dr. Prada as an environmental and molecular epidemiologist who focuses on the role of environmental toxicants in age-related conditions.

Curious readers might wonder how someone whose interest is in environmental issues and “health equity” became involved in a review of pharmaco-epidemiology and teratology. The flavor of systematic review deployed in the paper, the “navigation guide,” originated in the field of environmental health and has had only limited use even there. To my knowledge, so-called navigation guides have never previously been used in pharmaco-epidemiologic or teratologic controversies.[3]

The Prada paper and its deployment of a “navigation guide” systematic review deserve greater critical scrutiny.  In this post, however, I want to address some peripheral issues, such as “competing interests” and misleading claims about the paper’s having been NIH funded.

Only Dr. Baccarelli disclosed a potential conflict of interest, in a statement that many would judge to be anemic:

“Dr. Baccarelli served as an expert witness for the plaintiff’s legal team on matters of general causation involving acetaminophen use during pregnancy and its potential links to neurodevelopmental disorders. This involvement may be perceived as a conflict of interest regarding the information presented in this paper on acetaminophen and neurodevelopmental outcomes. Dr. Baccarelli has made every effort to ensure that this current work—like his past work as an expert witness on this matter—was conducted with the highest standards of scientific integrity and objectivity.”

The disclosure fails to mention whether Dr. Baccarelli was compensated for playing on the “plaintiff’s legal team,” and if so, how much. Using the passive voice, he suggests that this work merely might be perceived as a conflict of interest, when surely he knows that it is a serious one. Had industry scientists published on the relevant issue, they surely would have been accused of having a conflict.

Dr. Baccarelli self-servingly, falsely, and with epistemic arrogance, asserts that he made every effort in this paper, and in his past work as an expert witness, to conform to the “highest standards of scientific integrity and objectivity.” Despite his best efforts to be “scientific,” Baccarelli’s work failed critical scrutiny in the multi-district litigation that consolidated acetaminophen cases for pre-trial handling. In that litigation, the defense challenged Dr. Baccarelli’s opinions under Rule 702, for their lack of validity. In an extensive, closely reasoned opinion, federal district court judge Denise Cote ruled that Dr. Baccarelli’s proffered opinions failed to meet the relevance and reliability standards of federal law.[4]

The MDL court easily found that Dr. Baccarelli was qualified to provide an opinion on epidemiology, although the focus of his career has been on environmental issues. Baccarelli’s substantive problem was that he deviated from accepted and valid methods of causal inference by cherry picking different results and outcomes across multiple studies. Baccarelli’s sophistical trick was to advance a “transdiagnostic” analysis that lumps an already heterogeneous autism spectrum disorder (ASD) together with attention-deficit hyperactivity disorder (ADHD) and a grab bag of “other neurodevelopmental disorders.” If a study found a putative association with only one of the three end points, Baccarelli would claim success on all three. He avoided conducting separate ASD and ADHD analyses, and he cherry picked the end points that supported his pre-determined conclusions.

Judge Cote found that the transdiagnostic analyses advanced by plaintiffs’ expert witnesses, including Baccarelli, obscured and obfuscated more than they informed the causal inquiry.[5] The court’s analysis casts considerable shade upon Baccarelli’s self-serving claim to have used “the highest standards of scientific integrity and objectivity.” Judge Cote barred Baccarelli and the other members of the plaintiffs’ “expert team” from testifying.

Conspicuously absent from the conflict disclosure section of the Prada article was any mention of the litigation work of co-author Beate Ritz. In 2007, Ritz became a fellow of the Collegium Ramazzini, which functions in support of the lawsuit industry much as the scientists of the Tobacco Institute supported tobacco legal defense efforts in times past. Ritz’s fellowship in the Collegium makes her a full-fledged member of the Lobby and a supporter of the lawsuit industry.[6] Ritz has testified, for claimants, in cases involving claims of heavy metals in baby food, in cases involving claims that paraquat exposure caused Parkinson’s disease, and most notoriously for plaintiffs in glyphosate litigation, where her witnessing is often done for the Wisner Baum law firm that employs the son of Robert F. Kennedy, Jr.[7]

The conflict of interest disclosure statement is hardly the only misleading aspect of the Prada paper. At the end of the paper, the authors state, with respect to funding, that their “study was supported by NIH (R35ES031688; U54CA267776).” Some people may incorrectly believe that the Prada review was directly sponsored and funded by the National Institutes of Health. Nothing could be further from the truth.

The research grant referenced, R35ES031688, is a National Institute of Environmental Health Sciences (NIEHS) research grant. The curious reader might inquire whether, and why, the NIEHS would be concerned about a pharmacological issue. The short answer is that it is not, and that this grant has nothing to do with children’s neurological status in relation to their mothers’ ingestion of acetaminophen.

The NIEHS awarded this research grant to Andrea Baccarelli, while he was at Columbia University, for his project “Extracellular Vesicles in Environmental Epidemiology Studies of Aging.” The research focuses on extracellular vesicles (EVs) and their role in environmental health, particularly as it relates to aging. What Baccarelli promised to do with this NIEHS grant was to study the effects of air pollution on accelerated brain aging, and disease states such as dementia. Baccarelli noted that his focus would be on inter-cellular communication enabled by extracellular vesicles, in reaction to air pollution. The described research would understandably be viewed as potentially relevant to the NIEHS mission statement, but it has nothing to do with autism among children of women who ingested acetaminophen during pregnancy. The phrases “extracellular vesicles” and “air pollution” do not appear in the Prada review.

The second grant listed under funding for the Prada review was U54CA267776. The U54 designation marks this as a career award, not one specific to any particular topic or to this published work. Ironically, the grant is a diversity, equity, and inclusion grant to the Icahn School of Medicine at Mount Sinai, in Manhattan. The Icahn School has long had one of the most ethnically, racially, and culturally diverse faculties of any medical school, and hardly needs financial incentives to hire minority physicians and scientists.

The NIH awarded grant U54CA267776 for “Cohort Cluster Hiring Initiative at Icahn School of Medicine at Mount Sinai.” The NIH describes the grant as aiming to reduce “[t]he barriers to research and career success for underrepresented groups in academic medicine.” The text of the U54 grant is written largely in bureaucratic jargon, which may require a degree in DEI to understand fully. What is abundantly clear is that nothing in this U54 grant, or in its stated criteria for evaluation, has anything to do with studying the teratologic potential of acetaminophen.

What so far has escaped the media’s attention is that Prada and colleagues did not have NIH (or NIEHS) support for their acetaminophen review. They had career-level support for DEI purposes, or perhaps general “walking-around” money for research on environmental pollution and brain aging, which has nothing to do with the subject of their navigation guide review. The authors of the Prada review never prepared a study proposal related to acetaminophen for evaluation by a funding committee at NIH. The authors never submitted a protocol to the NIH, and the NIH provided no peer review or guidance for the authors’ acetaminophen review. In short, there is nothing that marks the Prada review as an NIH work product other than the over-claiming of the authors with respect to funding sources.

The Prada review has attracted a lot of attention in the media and from the worm-brained Secretary of Health and Human Services. An article in the Washington Post described the Prada review as NIH funded, which tracks the paper’s misleading disclosure.[8] The media no doubt jumped on the publication of the Prada review last month because Secretary Kennedy promised to reveal the cause of autism by September. We can imagine that Kennedy will be tempted to embrace the Prada review because he can falsely mischaracterize it as an NIH-funded review.

Not only is the funding claim dodgy, but so is the suggestion that the review supports a conclusion of causation between maternal ingestion of acetaminophen and autism in children. The lead author, Dr. Diddier Prada, noted the frequent confusion between correlation and causation, and explicitly stated that the authors of the review “cannot answer the question about causation.”[9]


[1] Diddier Prada, Beate Ritz, Ann Z. Bauer and Andrea A. Baccarelli, “Evaluation of the evidence on acetaminophen use and neurodevelopmental disorders using the Navigation Guide methodology,” 24 Envt’l Health 56 (2025).

[2] Ann Z. Bauer et al., “Paracetamol Use During Pregnancy — A Call for Precautionary Action,” 17 Nature Rev. Endocrinology 757 (2021).

[3] See Tracey J. Woodruff, Patrice Sutton, and The Navigation Guide Work Group, “An Evidence-Based Medicine Methodology To Bridge The Gap Between Clinical And Environmental Health Sciences,” 30 Health Affairs 931 (May 2011).

[4] In re Acetaminophen ASD-ADHD Prods. Liab. Litig., 707 F. Supp. 3d 309, 2023 WL 8711617 (S.D.N.Y. 2023) (Cote, J.).

[5] Id. at 334.

[6] See F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[7] See, e.g., In re Roundup Prods. Liab. Litig., 390 F. Supp. 3d 1102 (2018); Barrera v. Monsanto Co., Del. Super. Ct. (May 31, 2019); Pilliod v. Monsanto Co., 67 Cal. App. 5th 591, 282 Cal. Rptr. 3d 679 (2021). See also Dan Charles, “Taking the stand: For scientists, going to court as an expert witness brings risks and rewards,” 383 Science 942 (Feb. 29, 2024) (quoting Ritz as suggesting that she was reluctant to get involved as an expert witness).

[8] Ariana Eunjung Cha, Caitlin Gilbert and Lauren Weber, “MAHA activists have been pushing for more investigation into use of the common pain killer during pregnancy,” Wash. Post (Sept. 5, 2025). See also Liz Essley Whyte & Nidhi Subbaraman, “RFK Jr., HHS to Link Autism to Tylenol Use in Pregnancy and Folate Deficiencies,” Wall St. J. (Sept. 5, 2025).

[9] Jess Steier, “Saturday Morning Thoughts on the Tylenol-Autism News: The public health whiplash continues as we play another round of ‘autism cause’ roulette,” Unbiased Science (substack) (Sept. 06, 2025).

AAAS Conference on Scientific Evidence and the Courts

September 8th, 2025

Back in September 2023, the American Association for the Advancement of Science (AAAS), with its Center for Scientific Responsibility and Justice, sponsored a two-day meeting on Scientific Evidence and the Courts. If there were notices for this conference, I missed them. The meeting presentations are now available online. Judging from camera views of the audience, the conference did not appear to be well attended. Most of the material was forgettable, but some of the presentations are worth watching.

Jennifer L. Mnookin opened the conference with a keynote presentation on “Where Law and Science Meet.” Chancellor Mnookin presented a broad overview and some interesting insights on the development of the evidence law of expert witness testimony.

Following Mnookin, Professors Ronald Allen and Andrew Jurs presented on the “Unintended Impacts [sic] of the Daubert Standard.” The conference took place only a few months before the amendment to Rule 702 became effective, and the reference to a “Daubert” standard was untoward. Allen’s comments followed the path of his previous articles. Jurs presented some empirical legal research, which seemed flawed for its assumption that the Frye standard was universally applied in federal court before the advent of Daubert. Because both standards have been applied heterogeneously, with Frye often not applied at all and Daubert often flyblown by judges hostile to the gatekeeping enterprise, Jurs’ attempt to assess whether the two standards lead to different outcomes seemed both invalid and very much beside the point. Both presenters missed the key point of Daubert, in which plaintiffs’ counsel advocated for no standard at all, beyond basic subject-matter qualification, for giving expert opinions in court.

In a session on “An International Perspective,” Scott Carlson discussed the efforts of the American Bar Association (ABA), and its Center of Global Programs, in supporting judges in foreign countries. Prateek Sibal discussed the history and work of the UNESCO Global Judges Initiative. My sincere wish is that the ABA would support judges more in the United States.

Panelists Valerie P. Hans, Emily Murphy, and Dr. Michael J. Saks presented on various jury issues in a session, “In the Minds of the Jury.” The presentations on how foreign countries process expert witness testimony lacked any mention of how juries rarely, if ever, sit in civil cases that involve complex technical and scientific issues.

Two editors of scientific journals, Adriana Bankston and Valda Vinson, along with law professor Michael Saks, spoke about peer review and publication in a session, “As a Matter of Fact: ‘General Acceptance’ in Emerging vs. Established Science.” Their discussion of the publication process shed very little light on how courts and juries should assess the validity of specific papers, particularly in view of the lax practices at many journals. Towards the end of this session, a question from the audience proved very revealing of the prejudices of the law professor on the panel. The questioner rose to complain that after she began research on a topic with litigation relevance, her research was frequently questioned. She asked the panel how she might deal with the annoyance of being questioned. Some on the panel basically urged her to buck up, but the law professor invoked the spirit of agnotologist, and lawsuit industry expert witness, David Michaels, to suggest that “manufacturing doubt” was just a corporate tactic in the face of scientific evidence. The prejudice against corporate speech is remarkable when the lawsuit industry has a long history of playing the ad hominem game in advancing its pecuniary interests.

The session that followed addressed how trustworthy science might best be put before courts. The organizers described this session, “Utilizing Scientific and Technical Expertise,” as going to the heart of the issues targeted by the conference. Joe S. Cecil, Deanne M. Ottaviano, and Shari Seidman Diamond discussed how scientific expertise enters into the evidentiary record in American courtrooms. Their presentations were interesting, but curiously no one mentioned that the primary avenue for expert witness opinion is through oral testimony!

Joe Cecil discussed methods judges have to obtain scientific and technical evidence to advance justice. (By this I hope he meant the truth, and not just the outcome preferred by social justice warriors.) As noted, Cecil did not focus on the ordinary methods of direct and cross-examination of party expert witnesses; rather, he identified other methods of introducing expertise into the courtroom for the benefit of the judge or the jury. Only one suggestion really affects jury comprehension, namely the appointment of non-party expert witnesses by the court. The other methods really only provide expertise to the trial judge, who perhaps is challenged to make a ruling under Federal Rule of Evidence 702. The federal courts have the inherent supervisory power to appoint technical advisors to act as special law clerks on technical issues. Similarly, appointed special masters can address technical implementation issues, subject to the district judges’ control. The judges are always free to read outside the briefs and testimony, but there are ethical and notice issues for such conduct. The Reference Manual on Scientific Evidence (RMSE) sits on every federal judge’s bookshelf, even if in pristine, unused condition. Judges can at least read the RMSE on specific issues without having to disclose their extra-curricular research to the parties. Of course, parties are well advised to consider any materials in the RMSE that support or oppose their contentions.

In discussing the RMSE, Cecil noted that the fourth edition was in the works. He also mentioned that all the old chapter topics would be carried forward to the fourth edition, and that new topics would include eyewitness identification, computer science, artificial intelligence, and climate science. Sadly, there will be no chapter on genetic determination of disease, but perhaps the clinical medicine chapter will take on the subject in greater detail than previous editions. This conference took place two years ago, and yet the RMSE, fourth edition, is still not published. The National Academies website previously listed the project as completed, but the site now describes the work as “in progress.”

Joe Cecil’s analysis of the various extraordinary expert techniques was pretty much spot on, especially his assessment that “experiments” with court-appointed experts were often failures or at best modest successes. The discussion of Judge Pointer’s Rule 706 independent expert witnesses in the silicon [sic] breast implant litigation, MDL926, seemed to lack context. Cecil acknowledged that the court’s expert witnesses contributed some value to admissibility decisions, but Judge Pointer notoriously did not believe that he, as the MDL judge, had any responsibility for Rule 702 determinations, and he made none except in cases that he tried in the Northern District of Alabama. (And these decisions were before the Science Panel was appointed.) So the Rule 706 witnesses really could not have aided in admissibility decisions.

The real value – in my view – of the Science Panel was that it demonstrated that Judge Pointer was quite wrong in believing that both sides’ expert witnesses were simply “too extreme,” or too partisan, and that the truth was somehow in the middle. Indeed, Judge Pointer said so on many occasions, and he was judicially gobsmacked when all four of his experts roundly rejected the plaintiffs’ distortions of the science of immunology, epidemiology, toxicology, and rheumatology. The courts’ expert witnesses sat for discovery depositions, and then gave testimony de bene esse. To my knowledge, their testimony was never admitted in any of the subsequent trials.

Judge Jed Rakoff gave an interesting presentation, “Strengthening Cooperation Between the Scientific Enterprise and the Justice System,” on the intersection between scientific and legal expertise and the need for their better integration. Judge Rakoff focused on the astonishing lack of compliance of trial judges with the gatekeeping requirements of Rule 702 in addressing the admissibility of forensic evidence. Several subsequent panels also addressed forensic topics, including “A Texas Case Study in Accountability for Forensic Sciences,” “Innovations in Investigative Technologies: Improvements and Drawbacks,” “Artificial Intelligence and the Courts,” “Wrongful Convictions and Changed Science: Statutes,” and “Standing Up for Justice: When the Law and Science Work Hand-in-Hand.”

One of the more curious sessions was on “Statistical Modeling and Causation Science,” presented by the American Statistical Association along with the AAAS. Maria Cuellar, from the University of Pennsylvania, discussed the role of statistical thinking in causal assessment, with slides that referred to a nonparametric estimator for the probability of causation. Cuellar, however, never defined what an estimator was; nor did she differentiate nonparametric from parametric estimators. She displayed other equations, again without explaining their origin and meaning, or identifying their symbols. Similarly, Rochelle E. Tractenberg discussed the use of statistics as evidence and as part of causal inference in litigation, in a model of unclarity. At one point, Tractenberg appeared to suggest that general causation could be taken from regulatory pronouncements. Her discussion of glyphosate implied that general causation was established, which may have led me to disregard her presentation.

Finally, the conference sported a discussion, “Toxic Tort 2.0: Emerging Trends in Climate Change Related Litigation.” The two presenters were Dr. L. Delta Merner, the “Lead Scientist” for the Science Hub for Climate Litigation, Union of Concerned Scientists, and Dr. Paul A. Hanle, Visiting Scholar and Founder of the Climate Judiciary Project, Environmental Law Institute. The Science Hub actively promotes climate change litigation, which made me wonder whether its scientists are involved in that new chapter in the upcoming fourth edition of the Reference Manual.

Systematic Reviews versus Expert Witness Reports

July 2nd, 2025

Back in November 2024, I posted that the fourth edition of the Reference Manual on Scientific Evidence was completed, and that its publication was imminent. I based my prediction upon the National Academies’ website, which reported that the project had been completed. Alas, when no Manual was forthcoming, I checked back, and the project was, and is as of today, marked as “in progress.” The NASEM website provides no explanation for the retrograde movement. Could the Manual have been DOGE’d? Did Robert F. Kennedy Jr. insist that a chapter on miasma theory be added?

Ever since the third edition of the Manual arrived, I have tried to identify its strengths and weaknesses, and to highlight topics and coverage that should be improved in the next edition. In 2023, knowing that people were working on submissions for the fourth edition, I posted a series of desiderata for the new edition.[1] I might well have extended the desiderata, but I thought that work was close to completion.

One gaping omission in the third edition of the Manual, which I did not address, is the dearth of coverage of the synthesis of data and evidence across studies. To be sure, the chapter on medical testimony does discuss the “hierarchy of medical evidence,” and places the systematic review at the apex.[2] The chapter on epidemiology, however, fails to discuss systematic reviews in a meaningful way, and treats meta-analysis, which ideally pre-supposes a systematic review, with some hostility and neglect.[3]

Notwithstanding the glaring omission in the 2011 version of the Reference Manual, the legal academy had been otherwise well aware of the importance of properly conducted systematic reviews. Back in 2006, Professor Margaret Berger organized a symposium on law and science, at which John Ioannidis presented on the importance of systematic reviews.[4] Lisa Bero also presented on systematic reviews and meta-analyses, and identified a significant source of bias in such reviews that results when authors limit their citations to studies that support their pre-selected, preferred conclusion.[5] Bero’s contribution, however, missed the point that a well-conducted systematic review makes cherry picking much more difficult, as well as more obvious to the reader.

The high prevalence of biased citation and consideration of, and reliance upon, studies is a major source of methodological error in courtroom proceedings. Even when the studies relied upon are reasonably well done, expert witnesses can manipulate the evidentiary display through biased selection and exclusion of what to present in support of their opinions. Sometimes astute judges recognize and bar expert witnesses who would pass off their opinions as well considered when they are propped up only by biased citation. Unfortunately, courts have not always been vigilant and willing to exclude expert witnesses who proffer biased, invalid opinions based upon cherry-picked evidence.[6] Given that cherry picking, or “biased citation,” is recognized in the professional community as a rather serious methodological sin, judges may be astonished to learn that neither phrase appears in the third edition of the Reference Manual on Scientific Evidence. With the delay in publishing the fourth edition, there is still time to add citations to careful debunking of biased citation, such as the reverse-engineered systematic review and meta-analysis in last year’s decision in the paraquat parkinsonism litigation.[7]

When I began my courtroom career, systematic reviews of the evidence for a causal claim were virtually non-existent. Most reviews and textbook chapters were hipshots that identified a few studies that supported the author’s preferred opinion, with perhaps a few disparaging words about a study that contradicted the author’s preferred outcome. On a controversial issue, lawyers could generally find a textbook or review article on either side of an issue. Cross-examination on a so-called “learned treatise,” however, was limited. In state courts, the learned treatise was not admissible for its truth, but only to show that expert witnesses should not be believed when they disagreed with the statement. It was all too easy for an expert witness to declare, “yes, I disagree with that one sentence, on one page, out of 1,500 pages, in that one book.”

In federal courts, the applicable rule of evidence makes the learned treatise statement admissible for its truth:

“Rule 803. Exceptions to the Rule Against Hearsay

The following are not excluded by the rule against hearsay, regardless of whether the declarant is available as a witness:

(18) Statements in Learned Treatises, Periodicals, or Pamphlets. A statement contained in a treatise, periodical, or pamphlet if:

(A) the statement is called to the attention of an expert witness on cross-examination or relied on by the expert on direct examination; and

(B) the publication is established as a reliable authority by the expert’s admission or testimony, by another expert’s testimony, or by judicial notice.

If admitted, the statement may be read into evidence but not received as an exhibit.”

While this rule historically had some importance in showing the finder of fact that the opinion given in court was not shared with the relevant expert community, the rule was and is problematic. Exactly what counts as “learned” is undefined. Expert witnesses on either side can simply endorse a treatise, a periodical, or a pamphlet as learned to enable a lawyer to use it on direct or cross-examination, and make its contents admissible. The rule was drafted and enacted in 1975, when another rule, Rule 702, was generally interpreted to place no epistemic restraints upon expert witnesses. Allowing Rule 803(18) to be invoked without the epistemic constraints of Rules 702 and 703 raised few concerns in 1975, but in the aftermath of Daubert (1993), the tension within the Federal Rules of Evidence means that the admissibility of a statement in a learned treatise cannot save an expert witness opinion that is not otherwise sufficiently grounded and valid.[8]

Systematic reviews are a different kettle of fish from the sort of textbook opinions of the 1970s and 1980s, which often lacked comprehensive assessments and consistent application of criteria for validity. The intersection of the evolution of Rule 702 and systematic reviews is remarkable. When Rule 702 was drafted, systematic reviews were non-existent. When the Supreme Court decided the Daubert case in 1993, systematic reviews were just emerging as a different and superior form of evidence synthesis.[9] The lesson for judges, regulators, and lawyers is that the standards for valid synthesis of studies and lines of evidence have changed and become more demanding.

In 2009, several professional groups produced an important guidance for reporting systematic reviews, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses, or PRISMA.[10] Although the PRISMA guidance ostensibly addresses reporting, if authors have not done something that should be reported, their failure to do it and report about it can be identified as a significant omission from their publication. One of the PRISMA specifications called for the writing of a protocol for any systematic review, and for making this protocol available to the scientific community and the public. The protocol will identify the exact clinical issue under review, the kinds of evidence that bear on the issue, and criteria for including or excluding the studies that are candidates for the review. The requirement of pre-registration can damp down data dredging in observational studies and experiments, and help readers see when authors have reverse engineered systematic reviews by declaring their criteria for inclusion and exclusion only after reading candidate studies and their conclusions.

In 2011, the Centre for Reviews and Dissemination, at the University of York in England, developed an internet archive, PROSPERO, for prospectively registering systematic reviews. In addition to reducing duplication of systematic reviews, PROSPERO aimed to increase the transparency, validity, and integrity of systematic reviews. Around the same time, the Center for Open Science also set up a web-based archive for systematic review protocols.[11]

Reviews purporting to be systematic are now commonplace. By 2018, PROSPERO had registered over 30,000 records, but of course some scientists may have registered systematic reviews that they never completed.[12] Despite the publication of professional guidances, carefully performed systematic reviews can still be hard to find.[13]

In federal court, expert witnesses must proffer their opinions in a specified form. Back in the 1980s, federal court practice on expert witnesses was “loose” not only on admissibility issues, but also on the requirements for pre-trial disclosure of opinions. In some federal districts, such as those within Pennsylvania, federal judges took their cues not from the language of the Federal Rules of Civil Procedure, but from state court practice, which required only cursory disclosure of top-level opinions without identifying all facts and data relied upon by the proposed expert witness. In many state courts, and in some federal judicial districts, lawyers had difficulty obtaining judicial authorization to conduct examinations before trial to discover all the bases and reasoning (if any) behind an expert witness’s opinion. Under the current version of the Federal Rules of Civil Procedure, trial by ambush has generally given way to full discovery. The current version of Rule 26 provides:

Rule 26. Duty to Disclose; General Provisions Governing Discovery

(a) Required Disclosures.

* * *

(2) Disclosure of Expert Testimony.

(A) In General. In addition to the disclosures required by Rule 26(a)(1), a party must disclose to the other parties the identity of any witness it may use at trial to present evidence under Federal Rule of Evidence 702, 703, or 705.

(B) Witnesses Who Must Provide a Written Report. Unless otherwise stipulated or ordered by the court, this disclosure must be accompanied by a written report—prepared and signed by the witness—if the witness is one retained or specially employed to provide expert testimony in the case or one whose duties as the party’s employee regularly involve giving expert testimony. The report must contain:

(i) a complete statement of all opinions the witness will express and the basis and reasons for them;

(ii) the facts or data considered by the witness in forming them;

(iii) any exhibits that will be used to summarize or support them;

(iv) the witness’s qualifications, including a list of all publications authored in the previous 10 years;

(v) a list of all other cases in which, during the previous 4 years, the witness testified as an expert at trial or by deposition; and

(vi) a statement of the compensation to be paid for the study and testimony in the case.

An expert’s report or disclosure under Rule 26 remains a far cry from a systematic review, but the Rule goes a long way towards eliminating trial by ambush and surprise in requiring a complete statement of all opinions, all the bases and reasons for the opinions, and all the facts or data considered in reaching the opinions. The requirements of Rule 26, combined with a mandatory oral deposition, do much to help reveal cherry picking and motivated reasoning in an expert witness’s opinions.


[1] Schachtman, “Reference Manual – Desiderata for 4th Edition – Part I – Signature Diseases,” Tortini (Jan. 30, 2023); “Reference Manual – Desiderata for 4th Edition – Part II – Epidemiology & Specific Causation,” Tortini (Jan. 31, 2023); “Reference Manual – Desiderata for 4th Edition – Part III – Differential Etiology,” Tortini (Feb. 1, 2023); “Reference Manual – Desiderata for 4th Edition – Part IV – Confidence Intervals,” Tortini (Feb. 10, 2023); “Reference Manual – Desiderata for 4th Edition – Part V – Specific Tortogens,” Tortini (Feb. 14, 2023); “Reference Manual – Desiderata for 4th Edition – Part VI – Rule 703,” Tortini (Feb. 17, 2023).

[2] See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Reference Manual on Scientific Evidence 687, 723-24 (3d ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).

[3] Schachtman, “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence,” Tortini (Nov. 14, 2011).

[4] John P.A. Ioannidis & Joseph Lau, Systematic Review of Medical Evidence, 12 J.L. & Pol’y 509 (2004).

[5] Lisa Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J. L. & Policy 569, 576 (2006).

[6] See Schachtman, “Cherry Picking; Systematic Reviews; Weight of the Evidence,” Tortini (April 5, 2015); “The Fallacy of Cherry Picking As Seen in American Courtrooms,” Tortini (May 3, 2014);  “The Cherry-Picking Fallacy in Synthesizing Evidence,” Tortini (June 15, 2012).

[7] In re Paraquat Prods. Liab. Litig., 730 F. Supp. 3d 793 (S.D. Ill. 2024); see also Schachtman, “Paraquat Shape-Shifting Expert Witness Quashed,” Tortini (Apr. 24, 2024).

[8] See Schachtman, “Unlearning the Learned Treatise Exception,” Tortini (Aug. 21, 2010).

[9] Iain Chalmers, Larry V. Hedges, Harris Cooper, “A Brief History of Research Synthesis,” 25 Evaluation & the Health Professions 12 (2002); Mark Starr, Iain Chalmers, Mike Clarke, Andrew D. Oxman, “The origins, evolution, and future of The Cochrane Database of Systematic Reviews,” 25 Int J. Technol. Assess. Health Care s182 (2009); Mike Clarke, “History of evidence synthesis to assess treatment effects: personal reflections on something that is very much alive,” 109 J. Roy. Soc. Med. 154 (2016). See also Wen-Lin Lee, R. Barker Bausell & Brian M. Berman, “The growth of health-related meta-analyses published from 1980 to 2000,” 24 Eval. Health Prof. 327 (2001).

[10] Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann Intern Med. W-65 (2009); “The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration,” 6 PLoS Med. e1000100 (2009).

[11] Alison Booth, Mike Clarke, Gordon Dooley, Davina Ghersi, David Moher, Mark Petticrew & Lesley Stewart, “The nuts and bolts of PROSPERO: an international prospective register of systematic reviews,” 1 Systematic Reviews 1 (2012); Alison Booth, Mike Clarke, Davina Ghersi, David Moher, Mark Petticrew, Lesley Stewart, “An international registry of systematic review protocols,” 377 Lancet 108 (2011).

[12] Matthew J. Page, Larissa Shamseer, and Andrea C. Tricco, “Registration of systematic reviews in PROSPERO: 30,000 records and counting,” 7 Systematic Reviews 32 (2018).

[13] John P. Ioannidis, “The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses,” 94 Milbank Q. 485 (2016).

Judging Science Symposium

May 25th, 2025

While waiting for the much-delayed fourth edition of the Reference Manual on Scientific Evidence, you may want to take a look at a recent law review issue on expert witness issues. Back in November 2024, the Columbia Science & Technology Law Review held its symposium on “Judging Science.” The symposium explored current judicial practice for, and treatment of, scientific expert witness testimony in the United States. Because the symposium took place at Columbia, we can expect any number of antic proposals for reform, as well.

Among the commentators on the presentations were Hon. Jed S. Rakoff, District Judge for the Southern District of New York,[1] and the notorious Provost David Madigan, of Northeastern University.[2]

The current issue (vol. 26, no. 2) of the Columbia Science and Technology Law Review, released on May 23, 2025, contains papers originally presented at the symposium:

Edith Beerdsen, “Unsticking Litigation Science.”

Edward Cheng, “Expert Histories.”

Shari Seidman Diamond & Richard Lempert, “How Experts View the Legal System’s Use of Scientific Evidence.”

David Faigman, “Overcoming Judicial Innumeracy.”

Maura Grossman & Paul Grimm, “Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence.”

Valerie Hans, “Juries Judging Science.”

Enjoy the beach reading!


[1] See Schachtman, “Scientific illiteracy among the judiciary,” Tortini (Feb. 29, 2012).

[2] See, e.g., In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan); In re Incretin-Based Therapies Prods. Liab. Litig., 524 F. Supp. 3d 1007 (S.D. Cal. 2021), aff’d, No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022) (per curiam). Provost Madigan is stepping down from his position next month. Sonel Cutler, Zoe MacDiarmid & Kate Armanini, “Northeastern Provost David Madigan to step down in June,” The Huntington News (Jan. 16, 2025).

ABA Publishes Bad Advice on How to Defeat So-Called Daubert Motions

January 3rd, 2025

There are some science expert witnesses, such as Ronald Melnick and David Michaels, who testify for the lawsuit industry, who seem to believe that the so-called “Daubert” motion is an immoral attempt to exclude important scientific opinions at trial.[1] Melnick and Michaels and their ilk appear to have persuaded themselves that they should have the unfettered right to influence the fact-finding process with their opinions, regardless of validity concerns.

Most lawyers approach motions to exclude expert witness opinion testimony from an adversarial perspective. They are duty bound to probe their adversaries’ expert witnesses’ opinions for legally fatal invalidity. With respect to their own expert witnesses, the last thing that lawyers wish to happen on their watch is for the court to exclude an expert witness whom they selected and shepherded in the litigation process. Lawyers do their best, but most will admit that, at least in some cases, from the umpire’s perspective they should lose.

A recent article published by the American Bar Association (ABA) offers advice on how to defeat an adversary’s so-called Daubert motion.[2] The article does not admit that sometimes the motion might be well taken, and so it fits into the Swiftian view of lawyers as “a society of men among us, bred up from their youth in the art of proving, by words multiplied for the purpose, that white is black, and black is white, according as they are paid.”[3]

Perhaps we should not be too harsh in criticizing an article on how to defeat a Daubert motion that fails to ask whether the opposition is epistemically warranted. Still, this recent offering seriously misleads young lawyers who seek to defeat evidentiary challenges to their expert witnesses.

The first problem is that this how-to article perpetuates the mistake that there is even a thing called a Daubert challenge. The Daubert case was decided over 30 years ago, under a version of a congressionally approved rule that is no longer in effect.[4] The holding of the case was simply that Congress, in enacting the original Rule 702, did not incorporate the holding of Frye v. United States into the promulgated rule.[5]

There was, of course, some interesting and important dicta in the Daubert opinion, but the authors do a disservice to the bar to repeat the dicta as though they were good law. The issue of the meaning of the original Rule 702 was addressed again multiple times after Daubert, in ways that certainly affected the oft-quoted dicta, and which led to two substantive revisions of Rule 702. We are thus now so far removed from the Daubert case itself that it really is time to stop the mindless recitation of its dicta.

What follows the discussion of the Daubert case, in this how-to article, is no less discouraging. The authors offer five tips, each of which is problematic.

“1. The Best Defense Is a Good Offense—Vet Offered Experts Thoroughly”

The authors advise that “Knowing the potential weaknesses of an expert’s background can help your client guard against Daubert challenges early by picking the right expert to avoid impeachment issues, or by allowing you to minimize the offered expert’s weaknesses through the expert’s report and opinion, or other testimony.”

True, true, and immaterial. Impeachment of an otherwise qualified witness is indeed an important consideration for trial, but it has nothing to do with Rule 702. Indeed, evaluating how expert witnesses will hold up to cross-examination assumes that their testimony will be admitted. To the extent that Rule 702 requires a witness qualified by knowledge, skill, experience, training, or education, the bar for qualification is set very low. Very few Rule 702 motions challenge proffered expert witnesses on grounds that they are unqualified.

“2. Research Standards and Methodologies Commonly Used and Accepted by Courts in Similar Fact Patterns”

The authors somewhat more relevantly advise that “[n]ew lawyers can also assist with defending against Daubert challenges by thoroughly researching expert methodologies that have been previously accepted by courts in similar situations. If a court has previously accepted a methodology that your expert expects to use, this will demonstrate that the methodology is reliable and commonly accepted in the expert’s given field.”

There is, of course, a sense in which this advice is true, but the Sinatra article is still very misleading. There are some cases that turn on the use of crack-pot methodologies, and these methodologies should be avoided. Most expert witnesses, however, are smart enough to dress up their opinions to appear to have been reached by the use of a recognizable, generally accepted methodology. In litigation over alleged chronic health effects, plaintiffs’ witnesses will invoke Bradford Hill’s considerations for determining whether an association is causal. Finding cases that hold opinions based upon such considerations admissible, however, will not protect expert witnesses who have not faithfully applied the considerations to the facts of the case at hand. To channel Seinfeld, it is not good enough for a restaurant to accept reservations; it must also honor those reservations.

“3. Highlight Your Expert Witnesses’ Credentials.”

Here again, the authors offer advice that is largely irrelevant to prosecuting or defending a Rule 702 motion: “Once your team decides to work with a particular expert to support your client, new lawyers can further assist in fending off Daubert challenges by highlighting your expert’s relevant credentials wherever appropriate.” Many successful Rule 702 motions have excluded the proffered opinion testimony of world-renowned experts, which speaks volumes about how such experts think they can get away with sub-par work because it is only litigation.

“4. Point Out the Timing of the Daubert Challenge”

The authors advise that a Rule 702 motion might be defeated if made too late in the proceedings: “If a Daubert challenge is made at a late stage in the litigation, you may be able to overcome the challenge by arguing that your adversary has raised the issues too late in the proceedings.”

Tellingly, the authors cite no cases for this remarkable proposition, which implies that failing to make a pre-trial evidentiary challenge is a waiver of a trial objection. The proposition is wrong; there is nothing in Rule 702 that requires the motion to be brought in advance of trial. There are, of course, many practical reasons why a party would wish to lodge the motion before trial, the most important of which is that the outcome of the motion might result in the entry of summary judgment and dismissal of the lawsuit before trial. Judicial and party economies shout for the motion to be made before trial, but not until after the parties have lost the procedural ability to substitute new expert witness opinion. Additionally, a judge may set the timing of a Rule 702 motion, which will then become part of a pre-trial order. Rule 702 is, however, a rule of admissibility, and nothing in the rule or the case law prevents a motion from being brought in the middle of a trial. Moreover, if a motion were brought before trial, and denied, the loser would have to assert the objection again, or move to strike testimony, at trial in order to preserve the denial for appeal.

There is also a large difference between what may be done in a pre-trial Rule 702 motion and what can be accomplished at trial. The moving party may present testimony, as well as materials that are not themselves otherwise admissible at trial, in support of the motion. Rule 702 motions can sometimes take days of courtroom time, and the trial judge has the opportunity to appreciate the nuances of what may be a complex argument about validity or sufficiency of evidence. What is glaringly wrong in the authors’ argument, however, is the suggestion that pre-trial discovery must be over for a Rule 702 motion to be effective and timely.

“5. Highlight Why the Expert’s Testimony Is Relevant and Will Aid the Fact Finder”

The authors urge opponents to stress relevancy and helpfulness. Relevancy is a fairly trivial requirement and not part of Rule 702 (or “Daubert”) itself. Helpfulness, or aiding the fact finder, is a measure of admissibility, but it stands to reason that opinion testimony that lacks a valid and sufficient foundation can never really be helpful to or be relied upon by the finder of fact.

Perhaps most egregious in this ABA article is its complete failure to note that the relevant rule is a statute that has been only recently amended. The words of the statute should be the starting point for any lawyer, young or old, as well as for judges. Given that the statute was just amended, the authors of this ABA “young lawyers” advice might well have suggested to their readers that they actually read and comply with the rule, and that they spend a few minutes reading the Rules Advisory Committee notes on why the rule was amended.

Social media platforms enjoy substantial immunity, under Section 230 of the Communications Decency Act, for the crazy stuff published by users of their platforms. I don’t know whether the ABA has potential legal liability for what it publishes, but it certainly has an ethical responsibility not to disseminate bad advice.


[1] Ronald L. Melnick, “A Daubert Motion: A Legal Strategy to Exclude Essential Scientific Evidence in Toxic Tort Litigation,” 95 Am. J. Pub. Health S30 (2005) (“However, if a judge does not have adequate training or experience in dealing with scientific uncertainty, understand the full value or limit of currently used methodologies, or recognize hidden assumptions, misrepresentations of scientific data, or the strengths of scientific inferences, he or she may reach an incorrect decision on the reliability and relevance of evidence linking environmental factors to human disease.”).

[2] Maria Sinatra and Gianna E Cricco-Lizza, “5 Tips for New Lawyers to Defeat Daubert Challenges,” Am. Bar Ass’ n (Oct. 4, 2024).

[3] Jonathan Swift, Gulliver’s Travels: Travels into Several Remote Nations of the World, Part IV, Chapter 5 (1727).

[4] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[5] Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).

Manufacturing Consensus

December 9th, 2024

The lawsuit industry is fond of claiming that it is victimized by manufactured doubt;[1] its response has often been to manufacture consensus.[2] Doubt and assent are real psychological phenomena that are removed from the more important epistemic question whether the propositions doubted or agreed to are true, or worthy of belief.

Since at least the adoption of Federal Rule of Evidence 702, the law of evidence in federal courts, and in some state courts, has come to realize that expert witness opinion testimony must be judged by epistemic criteria. Implicit in such judging is that reviewing courts, and finders of fact, must assess the validity of facts and data relied upon, and inferences drawn, by expert witnesses.

Professor Edward Cheng has argued that judges and jurors are epistemically incompetent to engage in the tasks required of them by Rule 702.[3] Cheng would replace Rule 702 with what he calls a consensus rule that requires judges and jurors to assess only whether there is a scientific consensus on general scientific propositions such as claims of causality between a particular exposure and a specific disease outcome.

Cheng’s proposal is not the law; it never has been the law; and it will never be the law. Yet, law professors must make a living, and novelty is often the coin of the academic realm.[4] Cheng teaches at Vanderbilt Law School, and a few years ago, he started a podcast, Excited Utterances, which features some insightful and some antic proposals from the law school professoriate. The podcast is hosted by Cheng, or sometimes by his protégé, G. Alexander Nunn (“Alex”), who is now Associate Professor of Law at Texas A&M University School of Law.

Cheng’s consensus rule has not gained any traction in the law, but it has attracted support from a few like-minded academics. David Caudill, a Professor of Law at the Villanova University Charles Widger School of Law, has sponsored a symposium of supporters.[5] This year, Caudill has published another article that largely endorses Cheng’s consensus rule.[6]

Back in October 2024, Cheng hosted Caudill on Excited Utterances, to talk about his support for Cheng’s consensus rule. The podcast website blurb describes Caudill as having critiqued and improved upon Cheng’s “proposal to have courts defer to expert consensus rather than screening expert evidence through Daubert.” [This is, of course, incorrect. Daubert was one case that interpreted a statute that has since been substantively revised twice. The principle of charity suggests that Nunn meant Federal Rule of Evidence 702.] Alex Nunn conducted the interview of Caudill, which was followed by some comments from Cheng.

If you are averse to reading law review articles, you may find Nunn’s interview of Caudill a more digestible, and time-saving, way to hear a précis of the Cheng-Caudill assault on scientific fact finding in court. You will have to tolerate, however, Nunn’s irrational exuberance over how the consensus rule is “cutting edge,” and “wide ranging,” and Caudill’s endorsement of the consensus rule as “really cool,” and his dismissal of the Daubert case as “infamous.”

Like Cheng, Caudill believes that we can escape the pushing and shoving over data and validity by becoming nose counters. The task, however, will not be straightforward. Many litigations begin before there is any consensus on one side or the other. No one seems to agree on how to handle such situations. Some litigations begin with an apparent consensus, but then shift dramatically with the publication of a mega-trial or a definitive systematic review. Some scientific issues remain intractable to easy resolution, and the only consensuses exist within partisan enclaves.

Tellingly, Caudill moves from the need to discern “consensus” to mere “majority rule.” Having litigated health effects claims for 40 years or so, I have no idea how we would tally support for one view over another. Worse yet, Caudill acknowledges that judges and jurors will need expert assistance in identifying consensus. Perhaps litigants will indeed be reduced to calling librarians, historians, and sociologists of science, but such witnesses will not necessarily be able to access, interpret, and evaluate the underlying facts, data, and inferences in the controversy. Cheng and Caudill appear to view this willful blindness as a feature, not a bug, but their whole enterprise works in derogation of the goal of evidence law to determine the truth of the matter.[7]

Robust consensus that exists over an extended period of time – in the face of severe testing of the challenged claim – may have some claim to track the truth of the matter. Cheng and Caudill, however, fail to deal with the situation that results when the question is called among the real experts, and the tally is 51 to 49 percent. Or worse yet, 40% versus 38%, with 22% disclaiming having looked at the issue sufficiently. Cheng and Caudill are left with asking the fact finder to guess what the consensus will be when the scientific community sees the evidence that it has not yet studied or that does not yet exist.

Perhaps the most naïve feature of the Cheng-Caudill agenda is the notion that consensus bubbles up from the pool of real experts without partisan motivations. As though there is not already enough incentive to manufacture consensus, Cheng’s and Caudill’s approach will cause a proliferation of conferences that label themselves “consensus” forming meetings, which will result in self-serving declarations of – you guessed it – consensuses.[8]

Perhaps more important from a jurisprudential view is that the whole process of identifying a consensus has the normative goal of pushing listeners into believing that the consensus has the correct understanding so that they do not have to think very hard. We do not really care about the consensus; we care about the issue that underlies the alleged consensus. At best, when it exists, consensus is a proxy for truth. Without evidence, Caudill asserts that the proxy will be correct virtually all the time. In any event, a carefully reasoned and stated consensus view would virtually always make its way to the finder of fact in litigation in the form of a “learned treatise,” with which partisan expert witnesses would disagree at their peril.


[1] See, e.g., David Michaels, Doubt is Their Product: How Industry’s War on Science Threatens Your Health (2008); David Michaels, “Manufactured Uncertainty: Protecting Public Health in the Age of Contested Science and Product Defense,” 1076 Ann. N.Y. Acad. Sci. 149 (2006); David Michaels, “Mercenary Epidemiology – Data Reanalysis and Reinterpretation for Sponsors with Financial Interest in the Outcome,” 16 Ann. Epidemiol. 583 (2006); David Michaels & Celeste Monforton, “Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment,” 95 Amer. J. Public Health S39 (2005); David Michaels, “Doubt is their Product,” 292 Sci. Amer. 74 (June 2005).

[2] See generally Edward S. Herman & Noam Chomsky, Manufacturing Consent (1988); Schachtman, “The Rise of Agnothology as Conspiracy Theory,” Tortini (Aug. 21, 2022).

[3] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022); see Schachtman, “Cheng’s Proposed Consensus Rule for Expert Witnesses,” Tortini (Sept. 15, 2022); “Further Thoughts on Cheng’s Consensus Rule,” Tortini (Oct. 3, 2022); “Consensus Rule – Shadows of Validity,” Tortini (Apr. 26, 2023); “Consensus is Not Science,” Tortini (Nov. 8, 2023).

[4] Of possible interest, David Madigan, a statistician who has frequently been involved in litigation for the lawsuit industry, and who has proffered some particularly dodgy analyses, was Professor Cheng’s doctoral dissertation advisor. See Schachtman, “Madigan’s Shenanigans & Wells Quelled in Incretin-Mimetic Cases,” Tortini (July 19, 2022); “David Madigan’s Graywashed Meta-Analysis in Taxotere MDL,” Tortini (June 19, 2020); “Disproportionality Analyses Misused by Lawsuit Industry,” Tortini (April 20, 2020); “Johnson of Accutane – Keeping the Gate in the Garden State,” Tortini (March 28, 2015).

[5] David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2023).

[6] David S. Caudill, Harry Collins & Robert Evans, “Judges Should Be Discerning Consensus, Not Evaluating Scientific Expertise,” 92 Univ. Cinn. L. Rev. 1031 (2024).

[7] See, e.g., Jorge R Barrio, “Consensus Science and the Peer Review,” 11 Molecular Imaging & Biol. 293 (2009) (“scientific reviewers of journal articles or grant applications – typically in biomedical research – may use the term (e.g., ‘….it is the consensus in the field…’) often as a justification for shutting down ideas not associated with their beliefs.”); Yehoshua Socol, Yair Y Shaki & Moshe Yanovskiy, “Interests, Bias, and Consensus in Science and Regulation,” 17 Dose Response 1 (2019) (“While appealing to scientific consensus is a legitimate tool in public debate and regulatory decisions, such an appeal is illegitimate in scientific discussion itself.”); Neelay Trivedi, “Science is about Evidence, not Consensus,” The Stanford Rev. (Feb. 25, 2021).

[8] For a tendentious example of such a claim of manufactured consensus, see David Healy, “Manufacturing Consensus,” 34 The Hastings Center Report 52 (2004); David Healy, “Manufacturing Consensus,” 30 Culture, Medicine & Psychiatry 135 (2006).

Professor Lahav’s Radically Misguided Treatment of Chancy Tort Causation

September 27th, 2024

In the 19th and early 20th century, scientists and lay people usually conceptualized causation as “deterministic.” Their model of science was perhaps what was called Newtonian, in which observations were invariably described in terms of identifiable forces that acted upon antecedent phenomena. The universe was akin to a pool table, with the movement of the billiard balls fully explained by their previous positions, mass, and movements. There was little need for probability to describe events or outcomes in such a universe.

The 20th century ushered in probabilistic concepts and models in physics and biology. Because tort law is so focused on claims of bodily integrity and harms, I concentrate here on claimed health effects. Departing from the Koch-Henle postulates and our understanding of pathogen-based diseases, the latter half of the 20th century saw the rise of observational epidemiology and scientific conclusions about stochastic processes and effects that could best be understood in terms of probabilities, with statistical inferences from samples of populations. The language of deterministic physics failed to do justice to epidemiologic evidence or conclusions. Modern medicine and biology invoked notions of base rates for chronic diseases, which rates might be modified by environmental exposures.

In the wake of the emerging science of epidemiology, the law experienced a new horizon on which many claimed tortogens did not involve exposures uniquely tied to the harms alleged. Rather, the harms asserted were often diseases of ordinary life, alleged to be quantitatively more prevalent or incident among people exposed to the alleged tortogen. Of course, the backwaters of tort law saw reactionary world views on trial, as with claims of trauma-induced cancer, which are with us still. Nonetheless, slowly but not always steadily, the law came to grips with probability and statistical evidence.

In law, as in science, a key component of causal attribution is counterfactual analysis. If A causes B, then in the same world, ceteris paribus, without A we do not have B. Counterfactual analysis applies as much to stochastic processes that are causally influenced by rate changes as it does to the Newtonian world of billiard balls. Some writers in the legal academy, however, would opportunistically use the advent of probabilistic analyses of health effects to dispose of science altogether. No one has more explicitly exploited the opportunity than Professor Alexandra Lahav.

In an essay published in 2022, Professor Lahav advanced extraordinary claims about probabilistic causation, or what she called “chancy causation.”[1] The proffered definition of chancy causation is bumfuzzling. Lahav provides an example of an herbicide that is “associated” with the type of cancer that the heavily exposed plaintiff developed. She tells us that:

“[t]here is a chance that the exposure caused his cancer, and a chance that it did not. Probability follows certain rules, or tendencies, but these regular laws do not abolish chance. This is a common problem in modern life, where much of what we know about medicines, interventions, and the chemicals to which we are exposed is probabilistic. Following the philosophical literature, I call this phenomenon chancy causation.”[2]

So the rules of probability do not abolish chance? It is hard to know what Lahav is trying to say here. Probability quantifies chance, and gives us an understanding of phenomena and their predictability. When we can model an empirical process with a probability distribution, such as one whose observations are independent and identically distributed, we can often make and test quantitative inferences about the anticipated phenomena.

Lahav vaguely acknowledges that her term, “chancy causation,” is borrowed, but she does not give credit to the many authors who have used it before.[3] Lahav does note that the concept of probabilistic causation used in modern-day risk factor epidemiology is different from the deterministic causal claims that dominated tort law in the 19th and the first half of the 20th century. Lahav claims that chancy causation is inconsistent with counterfactual analysis, but she cites no support for her claim, which is demonstrably false. If we previously saw the counterfactual, if A then B, as key to causality, we can readily restate it probabilistically: A probably causes B. On a counterfactual analysis, if we do not have A as an antecedent, then we probably do not have B. For a classic tortogen such as tobacco smoking, we can say confidently that tobacco smoking probably causes lung cancer. And for a given instance of lung cancer, we can say based upon the entire evidentiary display, that if a person did not smoke tobacco, he would probably not have developed lung cancer. Of course, the correspondence is not 100 percent, which is only to say that it is probabilistic. There are highly penetrant genetic mutations that may be the cause of a given lung cancer case. We know, however, that such mutations do not cause or explain the large majority of lung cancer cases.
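The probabilistic counterfactual can be put in simple probability notation. The gloss below is my own, not anything found in Lahav’s essay; it merely restates the point that “A probably causes B” is a claim of probability raising:

```latex
% General causation as probability raising: exposure A makes outcome B more likely.
P(B \mid A) > P(B \mid \lnot A)
% The counterfactual, restated probabilistically: absent the antecedent A,
% the outcome B is less probable. This inequality is algebraically
% equivalent to the first, since P(\lnot B \mid \cdot) = 1 - P(B \mid \cdot).
P(\lnot B \mid \lnot A) > P(\lnot B \mid A)
```

For the tobacco example, A is smoking and B is lung cancer. That the inequality is strict while neither probability is 0 or 1 is all that “probabilistic,” as opposed to deterministic, causation means; nothing in the notation abolishes counterfactual analysis.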

Contrary to Lahav’s ipse dixits, tort law can incorporate, and has accommodated, both general and specific causation in terms of probabilistic counterfactuals. The modification requires us, of course, to address the baseline situation as a rate or frequency of events, and the post-exposure world as one with a modified rate or frequency. Without confusion or embarrassment, we can say that the exposure is the cause of the change in event rates. Modern physics similarly addresses whether we must be content with probability statements, rather than precise deterministic “billiard ball” physics, which is so useful in a game of snooker, but less so in describing the position of sub-atomic particles. In the first half of the 20th century, the biological sciences learned with some difficulty that they must embrace probabilistic models, in genetic science, as well as in epidemiology. Many biological causation models are completely stated in terms of probabilities that are modified by specified conditions.

Lahav intends for her rejection of counterfactual causality to do a lot of work in her post-modern program. By falsely claiming that chancy causation has no factual basis, Lahav jumps to the conclusion that what the law calls for is nothing but “policy,”[4] and “normative decision.”[5] Having fabricated the demise of but-for causation in the context of probabilistic relationships, Lahav suggests that tort law can pretend that the causation question is nothing more than a normative analysis of the defendant’s conduct. (Perhaps it is more than a tad revealing that she does not see that the plaintiff’s conduct is involved in the normative judgment.) Of course, tort law already has ample room for policy and normative considerations built into the concepts of duty and breach of duty.

As we saw with the lung cancer example above, the claim that tobacco smoking probably caused the smoker to develop lung cancer can be entirely factual, and supported by a probabilistic judgment. Lahav calls her erroneous move “pragmatic,” although it has no relationship to the philosophical pragmatism of Peirce or Quine. Lahav’s move is a misrepresentation of probability and of epidemiologic science in the name of compensation free-for-alls. Obtaining a heads in the flip of a fair coin has a probability of 50%; that is a fact, not a normative decision, even though it is, to use Lahav’s vocabulary, “chancy.”

Lahav’s argument is not always easy to follow. In one place, she uses “chancy” to refer to the posterior probability of the correctness of the causal claim:

“the counterfactual standard can be successfully defended against by the introduction of chance. The more conflicting studies, the “more chancy” the causation. By that I do not mean proving a lower probability (although this is a good result from a defense point of view) but rather that more, different study results create the impression of irreducible chanciness, which in turn dictates that the causal relation cannot be definitively proven.”[6]

This usage, which clearly refers to the posterior probability of a claim, is not necessarily limited to so-called non-deterministic phenomena. People could refer to any conclusion, based upon conflicting evidence of deterministic phenomena, as “chancy.”

Lurking in her essay is a further confusion between the posterior probability we might assign to a claim, or to an inference from probabilistic evidence, and the probability of random error. In an interview conducted by Felipe Jiménez,[7] Lahav was more transparent in her confusion, and she explicitly committed the transpositional fallacy with her suggestion that customary statistical standards (statistical significance) ensure that even small increased risks, say of 30%, are known to a high degree of certainty.
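The transpositional fallacy can be made concrete with a back-of-the-envelope Bayes calculation. This is a minimal sketch with hypothetical inputs (the prior, power, and significance level are illustrative assumptions, not numbers drawn from Lahav’s essay); it shows that a statistically significant result does not confer anything like 95% certainty on the underlying causal claim.

```python
# Statistical significance (p < 0.05) speaks to the probability of the
# data assuming no effect, not the probability of the causal claim.
# Hypothetical inputs for illustration:
prior = 0.10   # prior probability that the claimed 30% increased risk is real
power = 0.80   # P(significant result | real effect)
alpha = 0.05   # P(significant result | no effect)

# Bayes' theorem: posterior probability of a real effect, given a
# statistically significant finding.
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(round(posterior, 2))   # about 0.64 -- far short of 95% certainty
```

On these assumptions, a “significant” study leaves the causal claim at roughly even odds plus a bit, which is exactly why transposing the significance level into a posterior probability is a fallacy.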

Despite these confusions, it seems fairly clear that Lahav is concerned with stochastic causal processes, and most of her examples evidence that concern. Lahav poses a hypothetical in which epidemiologic studies show smokers have a 20% increased risk of developing lung cancer compared with non-smokers.[8] Given that typical smoking histories convey relative risks of 20 to 30, or increased risks of 2,000 to 3,000%, Lahav’s hypothetical may make readers think she is shilling for tobacco companies. In any event, in the face of a 20% increased risk (or relative risk of 1.2), Lahav acknowledges that the probability of a smoker’s developing lung cancer is higher than that of a non-smoker, but “in any particular case the question whether a patient’s lung cancer was caused by smoking is uncertain.” This assertion, however, is untrue; the question is not “uncertain.” She has provided a certain quantification of the increased risk. Furthermore, her hypothetical gives us a good deal of information on which we can say that smoking probably did not result in the patient’s lung cancer. Causation may be chancy because it is based upon a probabilistic inference, but the chances are actually known, and they are low.

Lahav posits a more interesting hypothetical when she considers a case in which there is an 80% chance that a person’s lung cancer is attributable to smoking.[9] We can understand this hypothetical better if we reframe it as a classic urn probability problem. In a given (large) population of non-smokers, we expect 100 lung cancers per year. In a population of smokers, otherwise just like the population of non-smokers, we observe 500 lung cancers. Of the observed number, 100 were “expected” because they happen without exposure to the putative causal agent, and 400 are “excess.” The relative risk would be 5, or a 400% increased risk, still well below the actual measure of risk from long-term smoking, but the attributable risk would be [(RR-1)/RR] or 0.8 (or 80%). If we imagine an urn with 100 white “expected” balls, and 400 red “excess” balls added, then any given draw from the urn, with replacement, yields an 80% probability of a red ball, or an excess case. Of course, if we can see the color, we may come to a consensus judgment that the ball is actually red. But on our analogy to discerning the cause of a given lung cancer, we have at present nothing by way of evidence with which to call the question, and so it remains “chancy” or probabilistic. The question is not, however, in any way normative. The answer is different quantitatively in the 20% and in the 400% hypotheticals.
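The urn hypothetical above can be sketched numerically. The figures are the hypothetical ones from the example (100 expected cases, 400 excess cases); the simulation simply confirms that the attributable fraction and the long-run share of “excess” draws coincide.

```python
import random

# Numbers from the urn hypothetical: 100 "expected" (background) lung
# cancers and 400 "excess" cases in the exposed population.
expected, excess = 100, 400
rr = (expected + excess) / expected      # relative risk = 5.0
af = (rr - 1) / rr                       # attributable fraction = 0.8

# Draw repeatedly from the urn, with replacement: the long-run share of
# red "excess" balls converges on the attributable fraction.
random.seed(1)
urn = ["expected"] * expected + ["excess"] * excess
draws = [random.choice(urn) for _ in range(100_000)]
share_excess = draws.count("excess") / len(draws)

print(rr, af, round(share_excess, 2))
```

The 80% figure is thus a factual consequence of the stipulated rates, not a normative judgment; changing the stipulated relative risk (to 1.2, say) changes the answer arithmetically.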

Lahav asserts that we are in a state of complete ignorance once a smoker has lung cancer.[10] This is not, however, true. We have the basis for a probabilistic judgment that will probably be true. It may well be true that the probability of attribution will be affected by the probability that the relative risk = 5 is correct. If the posterior probability for the claim that smoking causes lung cancer by increasing its risk 400% is only 30%, then of course, we could not make the attribution in a given case with an 80% probability of correctness. In actual litigation, the argument is often framed on an assumption arguendo that the increased risk is greater than two, so that only the probability of attribution is involved. If the posterior probability of the claim that exposure to the tortogen increased risk by 400% or 20,000% was only 0.49, then the plaintiff would lose. If the posterior probability of the increased risk was greater than 0.5, the finder of fact could find that the specific causation claim had been carried if the magnitude of the relative risk, and the attributable risk, were sufficiently large. This inference on specific causation would not be a normative judgment; it would be guided by factual evidence about the magnitude of the relevant increased risk.

Lahav advances a perverse skepticism that any inferences about individuals can be drawn from information about rates or frequencies in groups of similar individuals. Yes, there may always be some debate about what is “similar,” but successive studies may well draw the net tighter around what is the appropriate class. Lahav’s skepticism, and her outright denialism about inferences from general causation to specific causation, are common among some in the legal academy, but they ignore that group-to-individual inferences are drawn in epidemiology in multiple contexts. Regressions for disease prediction are based upon individual data within groups, and the regression equations are then applied to future individuals to help predict those individuals’ probability of future disease (such as heart attack or breast cancer), or their probability of cancer-free survival after a specific therapy. Group-to-individual inferences are, of course, also the basis for prescribing decisions in clinical medicine. These are not normative inferences; they are based upon evidence-based causal thinking about probabilistic inferences.

In the early tobacco litigation, defendants denied that tobacco smoking caused lung cancer, but they argued that even if it did, and the relative risk were 20, then the specific causation inference in this case was still insecure because the epidemiologic study tells us nothing about the particular case. Lahav seems to be channeling the tobacco-company argument, which has long since been rejected in the substantive law of causation. Indeed, as noted, epidemiologists do draw inferences about individual cases from population-based studies when they invoke clinical prediction models such as the Framingham cardiovascular risk event model, or the Gail breast cancer prediction model. Physicians base important clinical interventions, both pharmacologic and surgical, for individuals upon population studies. Lahav asserts, without evidence, that the only difference between an intervention based upon an 80% or a 30% probability is a “normative implication.”[11] The difference is starkly factual, not normative, and describes a long-term likelihood of success, as well as an individual probability of success.

Post-Modern Causation

What we have in Lahav’s essay is the ultimate post-modern program, which asserts, without evidence, that when causation is “chancy,” or indeterminate, courts leave the realm of science and step into the twilight zone of “normative decisions.” Lahav suggests that there is an extreme plasticity to the very concept of causation such that causation can be whatever judges want it to be. I for one sincerely doubt it. And if judges make up some Lahav-inspired concept of normative causation, the scientific community would rightfully scoff.

Establishing causation can be difficult, and many so-called mass tort litigations have failed for want of sufficient, valid evidence supporting causal claims. The late Professor Margaret Berger reacted to this difficulty in a more forthright way by arguing for the abandonment of general causation, or cause-in-fact, as an element of tort claims under the law.[12] Berger’s antipathy to requiring causation manifested in her hostility to judicial gatekeeping of the validity of expert witness opinions. Her animus against requiring causation and gatekeeping under Rule 702 was so strong that it exceeded her lifespan. Berger’s chapter in the third edition of the Reference Manual on Scientific Evidence, which came out almost one year after her death, embraced the First Circuit’s notorious anti-Daubert decision in Milward, which also post-dated her passing.[13]

Professor Lahav has previously expressed a disdain for the causation requirement in tort law. In an earlier paper, “The Knowledge Remedy,” Lahav argued for an extreme, radical precautionary principle approach to causation.[14] Lahav believes that the likes of David Michaels have “demonstrated” that manufactured uncertainty is a genuine problem, but not one that affects her main claims. Remarkably, Lahav sees no problem with manufactured certainty in the advocacy science of many authors for the lawsuit industry.[15] In “Chancy Causation,” Lahav thus credulously repeats Michaels’ arguments, and goes so far as to describe Rule 702 challenges to causal claims as having the “negative effect” of producing “incentives to sow doubt about epidemiologic studies using methodological battles, a strategy pioneered by the tobacco companies … .”[16] Lahav’s agenda is revealed by the absence of any corresponding concern about the negative effect of producing incentives to overstate the findings, or the validity of inferences, in order to obtain unwarranted and unsafe verdicts for claimants.


[1] Alexandra D. Lahav, “Chancy Causation in Tort,” 15 J. Tort L. 109 (2022) [hereafter Chancy Causation].

[2] Chancy Causation at 110.

[3] See, e.g., David K. Lewis, Philosophical Papers: Volume 2 175 (1986); Mark Parascandola, “Evidence and Association: Epistemic Confusion in Toxic Tort Law,” 63 Phil. Sci. S168 (1996).

[4] Chancy Causation at 109.

[5] Chancy Causation at 110-11.

[6] Chancy Causation at 129.

[7] Felipe Jiménez, “Alexandra Lahav on Chancy Causation in Tort,” The Private Law Podcast (Mar. 29, 2021).

[8] Chancy Causation at 115.

[9] Chancy Causation at 116-17.

[10] Chancy Causation at 117.

[11] Chancy Causation at 119.

[12] Margaret A. Berger, “Eliminating General Causation: Notes towards a New Theory of Justice and Toxic Torts,” 97 Colum. L. Rev. 2117 (1997).

[13] Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[14] Alexandra D. Lahav, “The Knowledge Remedy,” 98 Texas L. Rev. 1361 (2020). See “The Knowledge Remedy Proposal,” Tortini (Nov. 14, 2020).

[15] Chancy Causation at 118 (citing plaintiffs’ expert witness David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020), among others).

[16] Chancy Causation at 129.

800 Plaintiffs Fail to Show that Glyphosate Caused Their NHL

September 11th, 2024

Last week, Barbara Billauer, at the American Council on Science and Health[1] website, reported on the Australian court that found insufficient scientific evidence to support plaintiffs’ claims that they had developed non-Hodgkin’s lymphoma (NHL) from their exposure to Monsanto’s glyphosate product. The judgment had previously been reported by the Genetic Literacy Project,[2] which republished an Australian news report from July.[3] European news media seemed more astute in reporting the judgment, with The Guardian[4] and Reuters reporting the court decision in July.[5] The judgment was noteworthy because the mainstream and legal media in the United States generally ignored the development. The Old Gray Lady and the WaPo, both of which have covered previous glyphosate cases in the United States, sayeth naught. Crickets at Law360.

On July 24, 2024, Justice Michael Lee, for the Federal Court of Australia, ruled that there was insufficient evidence to support the claims of 800 plaintiffs that their NHL had been caused by glyphosate exposure.[6] Because plaintiffs’ claims were aggregated in a class, the judgment against the class of 800 or so claimants was the most significant judgment in glyphosate litigation to date.

Justice Lee’s opinion is over 300 pages long, and I have had a chance only to skim it. Regardless of how the Australian court handled various issues, one thing is indisputable: the court has given a written record of its decision processes for the world to assess, critique, validate, or refute. Jury trials provide no similar opportunity to evaluate the reasoning processes (vel non) of the decision maker. The absence of transparency, and of an opportunity to evaluate the soundness of verdicts in complex medical causation cases, raises the question whether jury trials really satisfy the legal due process requirements of civil adjudication.


[1] Barbara Pfeffer Billauer, “The RoundUp Judge Who Got It,” ACSH (Aug. 29, 2024).

[2] Kristian Silva, “Insufficient evidence that glyphosate causes cancer: Australian court tosses 800-person class action lawsuit,” ABC News (Australia) (July 26, 2024).

[3] Kristian Silva, “Major class action thrown out as Federal Court finds insufficient evidence to prove weedkiller Roundup causes cancer,” ABC Australian News (July 25, 2024).

[4] Australian Associated Press, “Australian judge dismisses class action claiming Roundup causes cancer,” The Guardian (July 25, 2024).

[5] Peter Hobson and Alasdair Pal, “Australian judge dismisses lawsuit claiming Bayer weedkiller causes blood cancer,” Reuters (July 25, 2024).

[6] McNickle v. Huntsman Chem. Co. Australia Pty Ltd (Initial Trial) [2024] FCA 807.

Zhang’s Glyphosate Meta-Analysis Succumbs to Judicial Scrutiny

August 5th, 2024

Back in March 2015, the International Agency for Research on Cancer (IARC) issued its working group’s monograph on glyphosate weed killer. The report classified glyphosate as a “probable carcinogen,” which is highly misleading. For IARC, the term “probable” does not mean more likely than not, or for that matter, probable does not have any quantitative meaning. The all-important statement of IARC methods, “The Preamble,” makes this clear.[1] 

In the case of glyphosate, the IARC working group concluded that the epidemiologic evidence for an association between glyphosate exposure and cancer (specifically non-Hodgkin’s lymphoma (NHL)) was limited, which is IARC’s euphemism for insufficient. Instead of epidemiology, IARC’s glyphosate conclusion was based largely upon rodent studies, but even the animal evidence relied upon by IARC was dubious. The IARC working group cherry-picked a few arguably “positive” rodent study results with increases in tumors, while ignoring exculpatory rodent studies with decreasing tumor yield.[2]

Although the IARC hazard classification was uncritically embraced by the lawsuit industry, most regulatory agencies, even indulging precautionary principle reasoning, rejected the claim of carcinogenicity. The United States Environmental Protection Agency (EPA), the European Food Safety Authority, the Food and Agriculture Organization (in conjunction with the World Health Organization), the European Chemicals Agency, Health Canada, and the German Federal Institute for Risk Assessment, among others, found that the scientific evidence did not support the claim that glyphosate causes NHL. The IARC monograph very quickly after publication became the proximate cause of a huge litigation effort by the lawsuit industry against Monsanto.

The personal injury cases against Monsanto, filed in federal court, were aggregated for pre-trial hearing, before Judge Vince Chhabria, of the Northern District of California, as MDL 2741. Judge Chhabria denied Monsanto’s early Rule 702 motions, and thus cases proceeded to trial, with mixed results.

In 2019, the Zhang study, a curious meta-analysis of some of the available glyphosate epidemiologic studies, appeared in Mutation Research / Reviews in Mutation Research, a toxicology journal that seemed an unlikely venue for a meta-analysis of epidemiologic studies. The authors combined selected results from one large cohort study, the Agricultural Health Study, along with five case-control studies, to reach a summary relative risk of 1.41 (95% confidence interval 1.13-1.75).[3] According to the authors, their “current meta-analysis of human epidemiological studies suggests a compelling link between exposures to GBHs [glyphosate] and increased risk for NHL.”

The Zhang meta-analysis was not well reviewed in regulatory and scientific circles. The EPA found that Zhang used inappropriate methods in her meta-analysis.[4] Academic authors also panned the Zhang meta-analysis in both scholarly,[5] and popular articles.[6] The senior author of the Zhang paper, Lianne Sheppard, a Professor in the University of Washington Departments of Environmental and Occupational Health Sciences, and Biostatistics, attempted to defend the study, in Forbes.[7] Professor Geoffrey Kabat very adeptly showed that this defense was futile.[8] Despite the very serious and real objections to the validity of the Zhang meta-analysis, plaintiffs’ expert witnesses, such as Beate Ritz, an epidemiologist at U.C.L.A., testified that they trusted and relied upon the analysis.[9]

For five years, the Zhang study was a debating point for lawyers and expert witnesses in the glyphosate litigation, without significant judicial gatekeeping. It took the entrance of Luoping Zhang herself as an expert witness in the glyphosate litigation, and the procedural oddity of her placing exclusive reliance upon her own meta-analysis, to bring the meta-analysis into the unforgiving light of judicial scrutiny.

Zhang is a biochemist and toxicologist at the University of California, Berkeley. Along with two other co-authors of her 2019 meta-analysis paper, she had been a board member of the EPA’s 2016 scientific advisory panel on glyphosate. After plaintiffs’ counsel disclosed Zhang as an expert witness, she disclosed her anticipated testimony, as is required by Federal Rule of Civil Procedure 26, by attaching and adopting by reference the contents of two of her published papers. The first paper was her 2019 meta-analysis; the other paper discussed putative mechanisms. Neither paper concluded that glyphosate causes NHL. Zhang’s disclosure did not add materially to her 2019 published analysis of six epidemiologic studies on glyphosate and NHL.

The defense challenged the validity of Dr. Zhang’s proffered opinions, and her exclusive reliance upon her own 2019 meta-analysis required the MDL court to pay attention to the failings of that paper, which had previously escaped critical judicial scrutiny. In June 2024, after an oral hearing in Bulone v. Monsanto, at which Dr. Zhang testified, Judge Chhabria ruled that Zhang’s proffered testimony, and her reliance upon her own meta-analysis was “junk science.”[10]

Judge Chhabria, perhaps encouraged by the recently fortifying amendment to Rule 702, issued a remarkable opinion that paid close attention to the indicia of validity of an expert witness’s opinion and the underlying meta-analysis. Judge Chhabria quickly spotted the disconnect between Zhang’s published papers and what is required for an admissible causation opinion. The mechanism paper did not address the extant epidemiology, and both sides in the MDL had emphasized that the epidemiology was critically important for determining whether there was, or was not, causation.

Zhang’s meta-analysis did evaluate some, but not all, of the available epidemiology, but the paper’s conclusion stopped considerably short of the needed opinion on causation. Zhang and colleagues had concluded that there was a “compelling link” between exposures to [glyphosate-based herbicides] and increased risk for NHL. In their paper’s key figure, showcasing the summary estimate of relative risk of 1.41 (95% C.I., 1.13-1.75), Zhang and her co-authors concluded only that exposure was “associated with an increased risk of NHL.” According to Judge Chhabria, in incorporating her 2019 paper into her Rule 26 report, Zhang failed to add a proper holistic causation analysis, as had other expert witnesses who had considered the Bradford Hill predicates and considerations.

Judge Chhabria picked up on another problem that has both legal and scientific implications. A meta-analysis is out of date as soon as a subsequent epidemiologic study becomes available, which would have satisfied the inclusion criteria for the meta-analysis. Since publishing her meta-analysis in 2019, additional studies had in fact been published. At the hearing, Dr. Zhang acknowledged that several of them would qualify for inclusion in the meta-analysis, per her own stated methods. Her failure to update the meta-analysis made her report incomplete and inadmissible for a court matter in 2024.

Judge Chhabria might have stopped there, but he took a closer look at the meta-analysis to explore whether it was a valid analysis, on its own terms. Much as Chief Judge Nancy Rosenstengel had done with the made-for-litigation meta-analysis concocted by Martin Wells in the paraquat litigation,[11] Judge Chhabria examined whether Zhang had been faithful to her own stated methods. Like Chief Judge Rosenstengel’s analysis, Judge Chhabria’s analysis stands as a strong rebuttal to the uncharitable opinion of Professor Edward Cheng, who has asserted that judges lack the expertise to evaluate the “expert opinions” before them.[12]

Judge Chhabria accepted the intellectual challenge that Rule 702 mandates. With the EPA memorandum lighting the way, Judge Chhabria readily discerned that “the challenged meta-analysis was not reliably performed.” He declared that the Zhang meta-analysis was “junk science,” with “deep methodological problems.”

Zhang claimed that she was basing the meta-analysis on the subgroups of six studies with the heaviest glyphosate exposure. This claim was undermined by the absence of any exposure-response gradient in the study deemed by Zhang to be of the highest quality. Furthermore, of the remaining five studies, three studies failed to provide any exposure-dependent analysis other than a comparison of NHL rates among “ever” versus “never” glyphosate exposure. As a result of this heterogeneity, Zhang used all the data from studies without exposure characterizations, but only limited data from the other studies that analyzed NHL by exposure levels. And because the highest quality study was among those that provided exposure level correlations, Zhang’s meta-analysis used only some of the data from it.

The analytical problems created by Zhang’s meta-analytical approach were compounded by the included studies’ having measured glyphosate exposures differently, with different cut-points for inclusion as heavily exposed. Some of the excluded study participants would have heavier exposure than those included in the summary analysis.

In the universe of included studies, some provided adjusted results from multi-variate analyses that included other pesticide exposures. Other studies reported only unadjusted results. Even though Zhang’s method stated a preference for adjusted analyses, she inexplicably failed to use adjusted data in the case of one study that provided both adjusted and unadjusted results.

As shown in Judge Chhabria’s review, Zhang’s methodological errors created an incoherent analysis, with methods that could not be justified. Even accepting its own stated methodology, the meta-analysis was an exercise in cherry picking. In the court’s terms, it was, without qualification, “junk science.”

After the filing of briefs, Judge Chhabria provided the parties an oral hearing, with an opportunity for viva voce testimony. Dr. Zhang thus had a full opportunity to defend her meta-analysis. The hearing, however, did not go well for her. Zhang could not talk intelligently about the studies included, or how they defined high exposure. Zhang’s lack of familiarity with her own opinion and published paper was yet a further reason for excluding her testimony.

As might be expected, plaintiffs’ counsel attempted to hide behind peer review. They tried to shut down Rule 702 scrutiny of the Zhang meta-analysis by suggesting that the trial court had no business digging into validity concerns, given that Zhang had published her meta-analysis in what was apparently a peer-reviewed journal. Judge Chhabria would have none of it. In his opinion, publication in a peer-reviewed journal cannot obscure the glaring methodological defects of the relied-upon meta-analysis. The court observed that “[p]re-publication editorial peer review, just by itself, is far from a guarantee of scientific reliability.”[13] The EPA memorandum was thus a more telling indicator of the validity issues than the publication in a nominally peer-reviewed journal.

Contrary to some law professors who are now seeking to dismantle expert witness gatekeeping as beyond a judge’s competence, Judge Chhabria dismissed the suggestion that he lacked the expertise to adjudicate the validity issues. Indeed, he displayed a better understanding of the meta-analytic process than did Dr. Zhang. As the court observed, one of the goals of MDL assignments was to permit a single trial judge to have time to engage with the scientific issues and to develop “fluency” in the relevant scientific studies. Indeed, when MDL judges have the fluency in the scientific concepts to address Rule 702 or 703 issues, it would be criminal for them to ignore it.

The Bulone opinion should encourage lawyers to get “into the weeds” of expert witness opinions. There is nothing that a little clear thinking – and glyphosate – cannot clear away. Indeed, now that the weeds of Zhang’s meta-analysis are cleared away, it is hard to fathom that any other expert witness can rely upon it without running afoul of both Federal Rules of Evidence 702 and 703.

There were a few issues not addressed in Bulone. As seen in her oral hearing testimony, Zhang probably lacked the qualifications to proffer the meta-analysis. The bar for qualification as an expert witness, however, is sadly very low. One other issue that might well have been addressed is Zhang’s use of a fixed effect model for her meta-analysis. Considering that she was pooling data from cohort and case-control studies, some with and some without adjustments for confounders, with different measures of exposure, and some with and some without exposure-dependent analyses, Zhang and her co-authors were not justified in using a fixed effect model for arriving at a summary estimate of relative risk. Admittedly, this error could easily have been lost in the flood of others.
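The fixed-effect versus random-effects point can be illustrated with a toy calculation. The per-study log relative risks and standard errors below are hypothetical, not Zhang’s actual data; the sketch uses the standard inverse-variance fixed-effect summary and the DerSimonian-Laird estimate of between-study variance to show how the two models diverge once the studies are heterogeneous.

```python
import math

# Hypothetical per-study relative risks and standard errors (on the log
# scale), chosen to be heterogeneous -- NOT the glyphosate studies.
log_rr = [math.log(x) for x in (0.9, 1.1, 1.4, 2.0, 2.4, 1.2)]
se     = [0.10, 0.20, 0.25, 0.30, 0.35, 0.15]

# Fixed-effect model: inverse-variance weights assume one true effect.
w_fixed = [1 / s**2 for s in se]
fe = sum(w * y for w, y in zip(w_fixed, log_rr)) / sum(w_fixed)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = sum(w * (y - fe) ** 2 for w, y in zip(w_fixed, log_rr))
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(se) - 1)) / c)

# Random-effects model: weights incorporate the between-study variance.
w_rand = [1 / (s**2 + tau2) for s in se]
re = sum(w * y for w, y in zip(w_rand, log_rr)) / sum(w_rand)

print(round(math.exp(fe), 2), round(math.exp(re), 2))
```

With heterogeneous inputs, tau-squared is positive and the two summary relative risks differ, because the fixed-effect model lets the most precise study dominate while the random-effects model spreads weight across studies. That is why pooling cohort and case-control studies, with differing exposure metrics and adjustments, under a fixed-effect model is hard to justify.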

Postscript

Glyphosate is not merely a scientific issue. Its manufacturer, Monsanto, is the frequent target of media outlets (such as Telesur) from autocratic countries, such as Communist China and its client state, Venezuela.[14]

Long live the heroes of Tiananmen Square.


[1] “The IARC-hy of Evidence – Incoherent & Inconsistent Classifications of Carcinogenicity,” Tortini (Sept. 19, 2023).

[2] Robert E Tarone, “On the International Agency for Research on Cancer classification of glyphosate as a probable human carcinogen,” 27 Eur. J. Cancer Prev. 82 (2018).

[3] Luoping Zhang, Iemaan Rana, Rachel M. Shaffer, Emanuela Taioli, Lianne Sheppard, “Exposure to glyphosate-based herbicides and risk for non-Hodgkin lymphoma: A meta-analysis and supporting evidence,” 781 Mutation Research/Reviews in Mutation Research 186 (2019).

[4] David J. Miller, Acting Chief Toxicology and Epidemiology Branch Health Effects Division, U.S. Environmental Protection Agency, Memorandum to Christine Olinger, Chief Risk Assessment Branch I, “Glyphosate: Epidemiology Review of Zhang et al. (2019) and Leon et al. (2019) publications for Response to Comments on the Proposed Interim Decision” (Jan. 6, 2020).

[5] Geoffrey C. Kabat, William J. Price, Robert E. Tarone, “On recent meta-analyses of exposure to glyphosate and risk of non-Hodgkin’s lymphoma in humans,” 32 Cancer Causes & Control 409 (2021).

[6] Geoffrey Kabat, “Paper Claims A Link Between Glyphosate And Cancer But Fails To Show Evidence,” Science 2.0 (Feb. 18, 2019).

[7] Lianne Sheppard, “Glyphosate Science is Nuanced. Arguments about it on the Internet? Not so much,” Forbes (Feb. 20, 2020).

[8] Geoffrey Kabat, “EPA Refuted A Meta-Analysis Claiming Glyphosate Can Cause Cancer And Senior Author Lianne Sheppard Doubled Down,” Science 2.0 (Feb. 26, 2020).

[9] Maria Dinzeo, “Jurors Hear of New Study Linking Roundup to Cancer,” Courthouse News Service (April 8, 2019).

[10] Bulone v. Monsanto Co., Case No. 16-md-02741-VC, MDL 2741 (N.D. Cal. June 20, 2024). See Hank Campbell, “Glyphosate legal update: Meta-study used by ambulance-chasing tort lawyers targeting Bayer’s Roundup as carcinogenic deemed ‘junk science nonsense’ by trial judge,” Genetic Literacy Project (June 24, 2024).

[11] In re Paraquat Prods. Liab. Litig., No. 3:21-MD-3004-NJR, 2024 WL 1659687 (S.D. Ill. Apr. 17, 2024) (opinion sur Rule 702 motion), appealed sub nom., Fuller v. Syngenta Crop Protection, LLC, No. 24-1868 (7th Cir. May 17, 2024). See “Paraquat Shape-Shifting Expert Witness Quashed,” Tortini (April 24, 2024).

[12] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022). SeeCheng’s Proposed Consensus Rule for Expert Witnesses,” Tortini (Sept. 15, 2022); “Further thoughts on Cheng’s Consensus Rule,” Tortini (Oct. 3, 2022).

[13] Bulone, citing Valentine v. Pioneer Chlor Alkali Co., 921 F. Supp. 666, 674-76 (D. Nev. 1996), for its distinction between “editorial peer review” and “true peer review,” with the latter’s inclusion of post-publication assessment of a paper as really important for Rule 702 purposes.

[14] Anne Applebaum, Autocracy, Inc.: The Dictators Who Want to Run the World 66 (2024).

Paraquat Shape-Shifting Expert Witness Quashed

April 24th, 2024

Another multi-district litigation (MDL) has hit a jarring speed bump. Claims for Parkinson’s disease (PD), allegedly caused by exposure to paraquat dichloride (paraquat), were consolidated, in June 2021, for pre-trial coordination in MDL No. 3004, in the Southern District of Illinois, before Chief Judge Nancy J. Rosenstengel. Like many health-effects litigation claims, the plaintiffs’ claims in these paraquat cases turn on epidemiologic evidence. To make out causation in the first MDL trial cases, plaintiffs’ counsel nominated a statistician, Martin T. Wells. Last week, Judge Rosenstengel found Wells’ opinion so infected by invalid methodologies and inferences as to be inadmissible under the most recent version of Rule 702.[1] Summary judgment in the trial cases followed.[2]

Back in the 1980s, paraquat gained some legal notoriety in one of the most retrograde Rule 702 decisions.[3] Both the herbicide and Rule 702 survived, however, and both remain in wide use. For the last two decades, there have been widespread challenges to the safety of paraquat, and in particular claims that paraquat can cause PD or parkinsonism under some circumstances. Despite this background, the plaintiffs’ counsel in MDL 3004 began with four problems.

First, paraquat is closely regulated for agricultural use in the United States. Under federal law, paraquat can be used to control the growth of weeds only “by or under the direct supervision of a certified applicator.”[4] The regulatory record created an uphill battle for plaintiffs.[5] Under the Federal Insecticide, Fungicide, and Rodenticide Act (“FIFRA”), the U.S. EPA has regulatory and enforcement authority over the use, sale, and labeling of paraquat.[6] As part of its regulatory responsibilities, in 2019, the EPA systematically reviewed available evidence to assess whether there was an association between paraquat and PD. The agency’s review concluded that “there is limited, but insufficient epidemiologic evidence at this time to conclude that there is a clear associative or causal relationship between occupational paraquat exposure and PD.”[7] In 2021, the EPA issued its Interim Registration Review Decision, and reapproved the registration of paraquat. In doing so, the EPA concluded that “the weight of evidence was insufficient to link paraquat exposure from pesticidal use of U.S. registered products to Parkinson’s disease in humans.”[8]

Second, beyond the EPA, there were no other published reviews, systematic or otherwise, that reached the conclusion that paraquat causes PD.[9]

Third, the plaintiffs’ claims faced another serious impediment. Their counsel placed their reliance upon Professor Martin Wells, a statistician on the faculty of Cornell University. Unfortunately for plaintiffs, Wells has been known to operate as a “cherry picker,” and his methodology has previously been reviewed in an unfavorable light. Another MDL court, which examined a review and meta-analysis propounded by Wells, found that his reports “were marred by a selective review of data and inconsistent application of inclusion criteria.”[10]

Fourth, the plaintiffs’ claims were before Chief Judge Nancy J. Rosenstengel, who was willing to do the hard work required under Rule 702, especially as recently amended to clarify and emphasize the gatekeeper’s responsibility to evaluate validity issues in the proffered opinions of expert witnesses. As her 97-page decision evinces, Judge Rosenstengel conducted four days of hearings, which included viva voce testimony from Martin Wells, and she obviously read the underlying papers and reviews, as well as the briefs and the Reference Manual on Scientific Evidence, with great care. What followed did not go well for Wells or the plaintiffs’ claims.[11] Judge Rosenstengel has written an opinion that may be the first careful judicial consideration of the basic requirements of systematic review.

The court noted that systematic reviewers carefully define a research question and what kinds of empirical evidence will be reviewed, and then collect, summarize, and, if feasible, synthesize the available evidence into a conclusion.[12] The court emphasized that systematic reviewers should “develop a protocol for the review before commencement and adhere to the protocol regardless of the results of the review.”[13]

Wells proffered a meta-analysis and a “weight of the evidence” (WOE) review, from which he concluded that paraquat causes PD and nearly triples the risk of the disease among workers exposed to the herbicide.[14] In his reports, Wells identified a universe of at least 36 studies, but included only seven in his meta-analysis. The defense identified another two studies that were germane.[15]

Chief Judge Rosenstengel’s opinion is noteworthy for its fine attention to detail, detail that matters to the validity of the expert witness’s enterprise. Martin Wells set out to do a meta-analysis, which was well and good. But with a universe of 36 studies, each with sub-findings, alternative analyses, and shifting definitions of relevant exposure, the devil lay in the details.

The MDL court was careful to point out that it was not gainsaying Wells’ decision to limit his meta-analysis to case-control studies, or his grading of any particular study as being of low quality. Systematic reviews and meta-analyses are generally accepted techniques that are part of a scientific approach to causal inference, but each has standards, predicates, and requirements for valid use. Expert witnesses must not only use a reliable methodology; Rule 702(d) requires that they reliably apply their chosen methodology to the facts at hand in reaching their conclusions.[16]

The MDL court concluded that Wells’ meta-analysis was not sufficiently reliable under Rule 702 because he failed faithfully and reliably to apply his own articulated methodology. The court followed Wells’ lead in identifying the source and content of his chosen methodology, and simply examined his proffered opinion for compliance with that methodology.[17] The basic principles of validity for conducting meta-analyses were not, in any event, really contested. These principles and requirements were clearly designed to ensure and enhance the reliability of meta-analyses by pre-empting results-driven, reverse-engineered summary estimates of association.

The court found that Wells failed clearly to pre-specify his eligibility criteria. He then redefined his exposure, study inclusion, and study quality criteria after looking at the evidence, and he applied his stated criteria inconsistently, in an apparent effort to exclude less favorable study outcomes. These ad hoc steps were among Wells’ deviations from the standards to which he paid lip service.

The court did not exclude Wells because it disagreed with his substantive decisions to include or exclude any particular study, or with his quality grading of any study. Rather, Wells’ meta-analysis did not pass muster under Rule 702 because its methodology was unclear, inconsistently applied, not replicable, and at times transparently reverse-engineered.[18]

The court’s evaluation of Wells was unflinchingly critical. Wells’ proffered opinions “required several methodological contortions and outright violations of the scientific standards he professed to apply.”[19] From his first involvement in this litigation, Wells had violated the basic rules of conducting systematic reviews and meta-analyses.[20] His definition of “occupational” exposure meandered to suit his desire to include one study (with low variance) that might otherwise have been excluded.[21] Rather than pre-specifying his review process, his study inclusion criteria, and his quality scores, Wells engaged in an unwritten “holistic” review process, which he conceded was not objectively replicable. Wells’ approach left him free to include studies he wanted in his meta-analysis, and then provide post hoc justifications.[22] His failure to identify his inclusion/exclusion criteria was a “methodological red flag” in Dr. Wells’ meta-analysis, which suggested his reverse engineering of the whole analysis, the “very antithesis of a systematic review.”[23]

In what the court described as “methodological shapeshifting,” Wells blatantly and inconsistently graded the studies that he wanted, and had already decided, to include in his meta-analysis as being of higher quality.[24] The paraquat MDL court found, unequivocally, that Wells had “failed to apply the same level of intellectual rigor to his work in the four trial selection cases that would be required of him and his peers in a non-litigation setting.”[25]

It was also not lost upon the MDL court that Wells had shifted from a fixed effect to a random effects meta-analysis, between his principal and rebuttal reports.[26] Basic to the meta-analytical enterprise is a predicate systematic review, properly done, with pre-specification of inclusion and exclusion criteria for what studies would go into any meta-analysis. The MDL court noted that both sides had cited Borenstein’s textbook on meta-analysis,[27] and that Wells had himself cited the Cochrane Handbook[28] for the basic proposition that objective and scientifically valid study selection criteria should be clearly stated in advance to ensure the objectivity of the analysis.
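For readers curious about the fixed-effect versus random-effects distinction, the difference can be sketched in a few lines of Python. The odds ratios and standard errors below are invented for illustration; they are not the studies or numbers from Wells’ reports, and the random-effects estimator shown (DerSimonian-Laird) is simply the most common textbook choice.

```python
import math

# Hypothetical log odds ratios and standard errors (invented data).
log_or = [math.log(1.2), math.log(2.5), math.log(1.1)]
se = [0.30, 0.15, 0.40]

# Fixed-effect model: each study weighted by the inverse of its variance.
w = [1 / s**2 for s in se]
fe_pooled = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Random-effects model (DerSimonian-Laird): estimate the between-study
# variance tau^2 from Cochran's Q statistic, then re-weight each study.
q = sum(wi * (y - fe_pooled) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (s**2 + tau2) for s in se]
re_pooled = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)

print(f"fixed-effect pooled OR:   {math.exp(fe_pooled):.2f}")
print(f"random-effects pooled OR: {math.exp(re_pooled):.2f}")
```

With these invented inputs, the fixed-effect summary is dominated by the single low-variance study, while random-effects weighting dilutes that study’s influence and pulls the pooled estimate down. The choice of model can thus move the bottom-line number considerably, which is precisely why the model should be pre-specified rather than switched between reports.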

There was, of course, legal authority for this basic proposition about pre-specification. Given that the selection of studies that go into a systematic review and meta-analysis can be dispositive of its conclusion, undue subjectivity or ad hoc inclusion can easily arrange a desired outcome.[29] Furthermore, meta-analysis carries with it the opportunity to mislead a lay jury with a single (and inflated) risk ratio,[30] obtained by the operator’s manipulation of inclusion and exclusion criteria. This opportunity required the MDL court to examine the methodological rigor of the proffered meta-analysis carefully, to evaluate whether it reflected a valid pooling of data or was concocted to win a case.[31]

Martin Wells had previously acknowledged the dangers of manipulation and subjective selectivity inherent in systematic reviews and meta-analyses. The MDL court quoted from Wells’ testimony in Martin v. Actavis:

QUESTION: You would certainly agree that the inclusion-exclusion criteria should be based upon objective criteria and not simply because you were trying to get to a particular result?

WELLS: No, you shouldn’t load the – sort of cook the books.

QUESTION: You should have prespecified objective criteria in advance, correct?

WELLS: Yes.[32]

The MDL court also picked up on a subtle but important methodological point about which odds ratio to use in a meta-analysis when a study provides multiple analyses of the same association. In his first paraquat deposition, Wells cited the Cochrane Handbook for the proposition that if a crude risk ratio and a risk ratio from a multivariate analysis are both presented in a given study, then the adjusted risk ratio (and its corresponding measure of standard error, seen in its confidence interval) is generally preferable, to reduce the play of confounding.[33] Wells violated this basic principle by ignoring the multivariate analysis in the study that dominated his meta-analysis (Liou) in favor of the unadjusted bivariate analysis. Given that Wells accepted this basic principle, the MDL court found that Wells likely selected the minimally adjusted odds ratio over the multivariate-adjusted odds ratio for inclusion in his meta-analysis in order to have the smaller variance (and thus greater weight) from the former. This maneuver was disqualifying under Rule 702.[34]
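The mechanics behind this maneuver are simple inverse-variance arithmetic. A back-of-the-envelope sketch in Python, using invented confidence intervals rather than the actual numbers from the Liou study, shows why an analyst bent on a particular result might prefer a minimally adjusted odds ratio with a narrower confidence interval:

```python
import math

def weight_from_ci(lower, upper):
    """Inverse-variance weight implied by a 95% CI around an odds ratio.

    The standard error of the log odds ratio is recovered from the CI
    width: se = (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
    return 1 / se**2

# Invented numbers: a crude (minimally adjusted) OR with a narrow CI,
# and a multivariate-adjusted OR from the same study with a wider CI.
crude_weight = weight_from_ci(1.5, 6.0)     # narrower CI, smaller se
adjusted_weight = weight_from_ci(1.1, 9.0)  # wider CI, larger se

# Roughly 2.3 with these inputs: the crude estimate carries more than
# twice the weight of the adjusted one in an inverse-variance pooling.
print(crude_weight / adjusted_weight)
```

Because each study enters the pooled estimate in proportion to the inverse of its variance, substituting the lower-variance crude estimate lets a single favorable study pull the summary risk ratio toward itself, which is the payoff the court inferred from Wells’ choice.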

All in all, the paraquat MDL court’s Rule 702 ruling was a convincing demonstration that non-expert generalist judges, with assistance from subject-matter experts, treatises, and legal counsel, can evaluate and identify deviations from methodological standards of care.


[1] In re Paraquat Prods. Liab. Litig., Case No. 3:21-md-3004-NJR, MDL No. 3004, Slip op., ___ F. Supp. 3d ___ (S.D. Ill. Apr. 17, 2024) [Slip op.].

[2] In re Paraquat Prods. Liab. Litig., Op. sur motion for judgment, Case No. 3:21-md-3004-NJR, MDL No. 3004 (S.D. Ill. Apr. 17, 2024). See also Brendan Pierson, “Judge rejects key expert in paraquat lawsuits, tosses first cases set for trial,” Reuters (Apr. 17, 2024); Hailey Konnath, “Trial-Ready Paraquat MDL Cases Tossed After Testimony Axed,” Law360 (Apr. 18, 2024).

[3] Ferebee v. Chevron Chem. Co., 552 F. Supp. 1297 (D.D.C. 1982), aff’d, 736 F.2d 1529 (D.C. Cir.), cert. denied, 469 U.S. 1062 (1984). See “Ferebee Revisited,” Tortini (Dec. 28, 2017).

[4] See 40 C.F.R. § 152.175.

[5] Slip op. at 31.

[6] 7 U.S.C. § 136w; 7 U.S.C. § 136a(a); 40 C.F.R. § 152.175. The agency must periodically review the registration of the herbicide. 7 U.S.C. § 136a(g)(1)(A). See Ruckelshaus v. Monsanto Co., 467 U.S. 986, 991-92 (1984).

[7] See Austin Wray & Aaron Niman, Memorandum, Paraquat Dichloride: Systematic review of the literature to evaluate the relationship between paraquat dichloride exposure and Parkinson’s disease at 35 (June 26, 2019).

[8] See also Jeffrey Brent and Tammi Schaeffer, “Systematic Review of Parkinsonian Syndromes in Short- and Long-Term Survivors of Paraquat Poisoning,” 53 J. Occup. & Envt’l Med. 1332 (2011) (“An analysis [of] the world’s entire published experience found no connection between high-dose paraquat exposure in humans and the development of parkinsonism.”).

[9] Douglas L. Weed, “Does paraquat cause Parkinson’s disease? A review of reviews,” 86 Neurotoxicology 180, 180 (2021).

[10] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F.Supp. 3d 1007, 1038, 1043 (S.D. Cal. 2021), aff’d, No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022) (per curiam). See “Madigan’s Shenanigans and Wells Quelled in Incretin-Mimetic Cases,” Tortini (July 15, 2022).

[11] The MDL court obviously worked hard to learn the basic principles of epidemiology. The court relied extensively upon the epidemiology chapter in the Reference Manual on Scientific Evidence. Much of that material is very helpful, but its exposition on statistical concepts is at times confused and erroneous. It is unfortunate that courts do not pay more attention to the more precise and accurate exposition in the chapter on statistics. Citing the epidemiology chapter, the MDL court gave an incorrect interpretation of the p-value: “A statistically significant result is one that is unlikely the product of chance.” Slip op. at 17 n.11. And then again, citing the Reference Manual, the court declared that “[a] p-value of .1 means that there is a 10% chance that values at least as large as the observed result could have been the product of random error.” Id. Similarly, the MDL court gave an incorrect interpretation of the confidence interval. In a footnote, the court tells us that “[r]esearchers ordinarily assert a 95% confidence interval, meaning that ‘there is a 95% chance that the “true” odds ratio value falls within the confidence interval range’. In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., MDL No. 2342, 2015 WL 7776911, at *2 (E.D. Pa. Dec. 2, 2015).” Slip op. at 17 n.12. Citing another court for the definition of a statistical concept is a risky business.

[12] Slip op. at 20, citing Lisa A. Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J.L. & Pol’y 569, 570 (2006).

[13] Slip op. at 21, quoting Bero, at 575.

[14] Slip op. at 3.

[15] The nine studies at issue were as follows: (1) H.H. Liou, et al., “Environmental risk factors and Parkinson’s disease; A case-control study in Taiwan,” 48 Neurology 1583 (1997); (2) Caroline M. Tanner, et al., “Rotenone, Paraquat and Parkinson’s Disease,” 119 Envt’l Health Persps. 866 (2011) (a nested case-control study within the Agricultural Health Study (“AHS”)); (3) Clyde Hertzman, et al., “A Case-Control Study of Parkinson’s Disease in a Horticultural Region of British Columbia,” 9 Movement Disorders 69 (1994); (4) Anne-Maria Kuopio, et al., “Environmental Risk Factors in Parkinson’s Disease,” 14 Movement Disorders 928 (1999); (5) Katherine Rugbjerg, et al., “Pesticide exposure and risk of Parkinson’s disease – a population-based case-control study evaluating the potential for recall bias,” 37 Scandinavian J. of Work, Env’t & Health 427 (2011); (6) Jordan A. Firestone, et al., “Occupational Factors and Risk of Parkinson’s Disease: A Population-Based Case-Control Study,” 53 Am. J. of Indus. Med. 217 (2010); (7) Amanpreet S. Dhillon, “Pesticide/Environmental Exposures and Parkinson’s Disease in East Texas,” 13 J. of Agromedicine 37 (2008); (8) Marianne van der Mark, et al., “Occupational exposure to pesticides and endotoxin and Parkinson’s disease in the Netherlands,” 71 J. Occup. & Envt’l Med. 757 (2014); (9) Srishti Shrestha, et al., “Pesticide use and incident Parkinson’s disease in a cohort of farmers and their spouses,” 191 Envt’l Research (2020).

[16] Slip op. at 75.

[17] Slip op. at 73.

[18] Slip op. at 75, citing In re Mirena IUS Levonorgestrel-Related Prod. Liab. Litig. (No. II), 341 F. Supp. 3d 213, 241 (S.D.N.Y. 2018) (“Opinions that assume a conclusion and reverse-engineer a theory to fit that conclusion are . . . inadmissible.”) (internal citation omitted), aff’d, 982 F.3d 113 (2d Cir. 2020); In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., No. 12-md-2342, 2015 WL 7776911, at *16 (E.D. Pa. Dec. 2, 2015) (excluding expert’s opinion where he “failed to consistently apply the scientific methods he articulat[ed], . . . deviated from or downplayed certain well established principles of his field, and . . . inconsistently applied methods and standards to the data so as to support his a priori opinion.”), aff’d, 858 F.3d 787 (3d Cir. 2017).

[19] Slip op. at 35.

[20] Slip op. at 58.

[21] Slip op. at 55.

[22] Slip op. at 41, 64.

[23] Slip op. at 59-60, citing In re Lipitor (Atorvastatin Calcium) Mktg., Sales Pracs. & Prod. Liab. Litig., 892 F.3d 624, 634 (4th Cir. 2018) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

[24] Slip op. at 67, 69-70, citing In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., 858 F.3d 787, 795-97 (3d Cir. 2017) (“[I]f an expert applies certain techniques to a subset of the body of evidence and other techniques to another subset without explanation, this raises an inference of unreliable application of methodology.”); In re Bextra and Celebrex Mktg. Sales Pracs. & Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1179 (N.D. Cal. 2007) (excluding an expert witness’s causation opinion because of his result-oriented, inconsistent evaluation of data sources).

[25] Slip op. at 40.

[26] Slip op. at 61 n.44.

[27] Michael Borenstein, Larry V. Hedges, Julian P. T. Higgins, and Hannah R. Rothstein, Introduction to Meta-Analysis (2d ed. 2021).

[28] Jacqueline Chandler, James Thomas, Julian P. T. Higgins, Matthew J. Page, Miranda Cumpston, Tianjing Li, Vivian A. Welch, eds., Cochrane Handbook for Systematic Reviews of Interventions (2d ed. 2023).

[29] Slip op. at 56, citing In re Zimmer Nexgen Knee Implant Prod. Liab. Litig., No. 11 C 5468, 2015 WL 5050214, at *10 (N.D. Ill. Aug. 25, 2015).

[30] Slip op. at 22. The court noted that the Reference Manual on Scientific Evidence cautions that “[p]eople often tend to have an inordinate belief in the validity of the findings when a single number is attached to them, and many of the difficulties that may arise in conducting a meta-analysis, especially of observational studies such as epidemiological ones, may consequently be overlooked.” Id., quoting from Manual, at 608.

[31] Slip op. at 57, citing Deutsch v. Novartis Pharms. Corp., 768 F. Supp. 2d 420, 457-58 (E.D.N.Y. 2011) (“[T]here is a strong risk of prejudice if a Court permits testimony based on an unreliable meta-analysis because of the propensity for juries to latch on to the single number.”).

[32] Slip op. at 64, quoting from Notes of Testimony of Martin Wells, in In re Testosterone Replacement Therapy Prod. Liab. Litig., Nos. 1:14-cv-1748, 15-cv-4292, 15-cv-426, 2018 WL 7350886 (N.D. Ill. Apr. 2, 2018).

[33] Slip op. at 70.

[34] Slip op. at 71-72, citing People Who Care v. Rockford Bd. of Educ., 111 F.3d 528, 537-38 (7th Cir. 1997) (“[A] statistical study that fails to correct for salient explanatory variables . . . has no value as causal explanation and is therefore inadmissible in federal court.”); In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1140 (N.D. Cal. 2018).