Back in November 2024, I posted that the fourth edition of the Reference Manual on Scientific Evidence was completed, and that its publication was imminent. I based my prediction upon the National Academies’ website, which reported that the project had been completed. Alas, when no Manual was forthcoming, I checked back, and the project was, and is as of today, marked as “in progress.” The NASEM website provides no explanation for the retrograde movement. Could the Manual have been DOGE’d? Did Robert F. Kennedy Jr. insist that a chapter on miasma theory be added?
Ever since the third edition of the Manual arrived, I have tried to identify its strengths and weaknesses, and to highlight topics and coverage that should be improved in the next edition. In 2023, knowing that people were working on submissions for the fourth edition, I posted a series of desiderata for the new edition.[1] I might well have extended the desiderata, but I thought that work was close to completion.
One gaping omission in the third edition of the Manual, which I did not address, is the dearth of coverage of the synthesis of data and evidence across studies. To be sure, the chapter on medical testimony does discuss the “hierarchy of medical evidence,” and places the systematic review at the apex.[2] The chapter on epidemiology, however, fails to discuss systematic reviews in a meaningful way, and treats meta-analysis, which ideally presupposes a systematic review, with some hostility and neglect.[3]
Notwithstanding the glaring omission in the 2011 version of the Reference Manual, the legal academy was otherwise well aware of the importance of properly conducted systematic reviews. Back in 2006, Professor Margaret Berger organized a symposium on law and science, at which John Ioannidis presented on the importance of systematic reviews.[4] Lisa Bero also presented on systematic reviews and meta-analyses, and identified a significant source of bias in such reviews that results when authors limit their citations to studies that support their pre-selected, preferred conclusion.[5] Bero’s contribution, however, missed the point that a well-conducted systematic review makes cherry picking much more difficult, as well as obvious to the reader.
The high prevalence of biased citation of, and selective reliance upon, studies is a major source of methodological error in courtroom proceedings. Even when the studies relied upon are reasonably well done, expert witnesses can manipulate the evidentiary display through biased selection and exclusion of what to present in support of their opinions. Sometimes astute judges recognize and bar expert witnesses who would pass off their opinions as well considered when they are propped up only by biased citation. Unfortunately, courts have not always been vigilant and willing to exclude expert witnesses who proffer biased, invalid opinions based upon cherry-picked evidence.[6] Given that cherry picking, or “biased citation,” is recognized in the professional community as a rather serious methodological sin, judges may be astonished to learn that neither phrase, “cherry picking” nor “biased citation,” appears in the third edition of the Reference Manual on Scientific Evidence. With the delay in publishing the fourth edition, there is still time to add citations to careful debunkings of biased citation, such as the reverse-engineered systematic review and meta-analysis in last year’s decision in the paraquat parkinsonism litigation.[7]
When I began my courtroom career, systematic reviews of the evidence for a causal claim were virtually non-existent. Most reviews and textbook chapters were hipshots that identified a few studies that supported the author’s preferred opinion, with perhaps a few disparaging words about a study that contradicted the author’s preferred outcome. On a controversial issue, lawyers could generally find a textbook or review article on either side of an issue. Cross-examination on a so-called “learned treatise,” however, was limited. In state courts, the learned treatise was not admissible for its truth, but only to show that expert witnesses should not be believed when they disagreed with the statement. It was all too easy for an expert witness to declare, “yes, I disagree with that one sentence, on one page, out of 1,500 pages, in that one book.”
In federal courts, the applicable rule of evidence makes the learned treatise statement admissible for its truth:
“Rule 803. Exceptions to the Rule Against Hearsay
The following are not excluded by the rule against hearsay, regardless of whether the declarant is available as a witness:
(18) Statements in Learned Treatises, Periodicals, or Pamphlets. A statement contained in a treatise, periodical, or pamphlet if:
(A) the statement is called to the attention of an expert witness on cross-examination or relied on by the expert on direct examination; and
(B) the publication is established as a reliable authority by the expert’s admission or testimony, by another expert’s testimony, or by judicial notice.
If admitted, the statement may be read into evidence but not received as an exhibit.”
While this rule historically had some importance in showing the finder of fact that the opinion given in court was not shared by the relevant expert community, the rule was and is problematic. Exactly what counts as “learned” is undefined. Expert witnesses on either side can simply endorse a treatise, a periodical, or a pamphlet as learned to enable a lawyer to use it on direct or cross-examination, and make its contents admissible. The rule was drafted and enacted in 1975, when another rule, Rule 702, was generally interpreted to place no epistemic restraints upon expert witnesses. Allowing Rule 803(18) to be invoked without the epistemic constraints of Rules 702 and 703 raised few concerns in 1975, but in the aftermath of Daubert (1993), the tension within the Federal Rules of Evidence means that the admissibility of a statement in a learned treatise cannot save an expert witness opinion that is not otherwise sufficiently grounded and valid.[8]
Systematic reviews are a different kettle of fish from the sort of textbook opinions of the 1970s and 1980s, which often lacked comprehensive assessments and consistent application of criteria for validity. The intersection of the evolution of Rule 702 and systematic reviews is remarkable. When Rule 702 was drafted, systematic reviews were non-existent. When the Supreme Court decided the Daubert case in 1993, systematic reviews were just emerging as a different and superior form of evidence synthesis.[9] The lesson for judges, regulators, and lawyers is that the standards for valid synthesis of studies and lines of evidence have changed and become more demanding.
In 2009, several professional groups produced important guidance for reporting systematic reviews, the “Preferred Reporting Items for Systematic reviews and Meta-Analyses,” or PRISMA.[10] Although the PRISMA guidance ostensibly addresses reporting, if authors have not done something that should be reported, their failure to do it and report it can be identified as a significant omission from their publication. One of the PRISMA specifications calls for writing a protocol for any systematic review, and for making that protocol available to the scientific community and the public. The protocol identifies the exact clinical issue under review, the kinds of evidence that bear on the issue, and the criteria for including or excluding studies from the review. The requirement of pre-registration can damp down data dredging in observational studies and experiments, and can help readers see when authors have reverse-engineered systematic reviews by declaring their criteria for inclusion and exclusion only after reading candidate studies and their conclusions.
In 2011, the Centre for Reviews and Dissemination, at the University of York in England, developed an internet archive, PROSPERO, for prospectively registering systematic reviews. In addition to reducing duplication of systematic reviews, PROSPERO aimed to increase the transparency, validity, and integrity of systematic reviews. Around the same time, the Center for Open Science also set up a web-based archive for systematic review protocols.[11]
Reviews purporting to be systematic are now commonplace. By 2018, PROSPERO had registered over 30,000 records, although, of course, some scientists may have registered systematic reviews that they never completed.[12] Despite the publication of professional guidance, carefully performed systematic reviews can still be hard to find.[13]
In federal court, expert witnesses must proffer their opinions in a specified form. Back in the 1980s, federal court practice on expert witnesses was “loose” not only on admissibility issues, but also on the requirements for pre-trial disclosure of opinions. In some federal districts, such as those within Pennsylvania, federal judges took their cues not from the language of the Federal Rules of Civil Procedure, but from state court practice, which required only cursory disclosure of top-level opinions without identifying all facts and data relied upon by the proposed expert witness. In many state courts, and in some federal judicial districts, lawyers had difficulty obtaining judicial authorization to conduct examinations before trial to discover all the bases and reasoning (if any) behind an expert witness’s opinion. Under the current version of the Federal Rules of Civil Procedure, trial by ambush has generally given way to full discovery. The current version of Rule 26 provides:
Rule 26. Duty to Disclose; General Provisions Governing Discovery
(a) Required Disclosures.
* * *
(2) Disclosure of Expert Testimony.
(A) In General. In addition to the disclosures required by Rule 26(a)(1), a party must disclose to the other parties the identity of any witness it may use at trial to present evidence under Federal Rule of Evidence 702, 703, or 705.
(B) Witnesses Who Must Provide a Written Report. Unless otherwise stipulated or ordered by the court, this disclosure must be accompanied by a written report—prepared and signed by the witness—if the witness is one retained or specially employed to provide expert testimony in the case or one whose duties as the party’s employee regularly involve giving expert testimony. The report must contain:
(i) a complete statement of all opinions the witness will express and the basis and reasons for them;
(ii) the facts or data considered by the witness in forming them;
(iii) any exhibits that will be used to summarize or support them;
(iv) the witness’s qualifications, including a list of all publications authored in the previous 10 years;
(v) a list of all other cases in which, during the previous 4 years, the witness testified as an expert at trial or by deposition; and
(vi) a statement of the compensation to be paid for the study and testimony in the case.
An expert’s report or disclosure under Rule 26 remains a far cry from a systematic review, but the Rule goes a long way towards eliminating trial by ambush and surprise in requiring a complete statement of all opinions, all the bases and reasons for the opinions, and all the facts or data considered in reaching the opinions. The requirements of Rule 26, combined with a mandatory oral deposition, do much to help reveal cherry picking and motivated reasoning in an expert witness’s opinions.
[1] Schachtman, “Reference Manual – Desiderata for 4th Edition – Part I – Signature Diseases,” Tortini (Jan. 30, 2023); “Reference Manual – Desiderata for 4th Edition – Part II – Epidemiology & Specific Causation,” Tortini (Jan. 31, 2023); “Reference Manual – Desiderata for 4th Edition – Part III – Differential Etiology,” Tortini (Feb. 1, 2023); “Reference Manual – Desiderata for 4th Edition – Part IV – Confidence Intervals,” Tortini (Feb. 10, 2023); “Reference Manual – Desiderata for 4th Edition – Part V – Specific Tortogens,” Tortini (Feb. 14, 2023); “Reference Manual – Desiderata for 4th Edition – Part VI – Rule 703,” Tortini (Feb. 17, 2023).
[2] See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Reference Manual on Scientific Evidence 687, 723-24 (3d ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).
[3] Schachtman, “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence,” Tortini (Nov. 14, 2011).
[4] John P.A. Ioannidis & Joseph Lau, Systematic Review of Medical Evidence, 12 J.L. & Pol’y 509 (2004).
[5] Lisa Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J. L. & Policy 569, 576 (2006).
[6] See Schachtman, “Cherry Picking; Systematic Reviews; Weight of the Evidence,” Tortini (April 5, 2015); “The Fallacy of Cherry Picking As Seen in American Courtrooms,” Tortini (May 3, 2014); “The Cherry-Picking Fallacy in Synthesizing Evidence,” Tortini (June 15, 2012).
[7] In re Paraquat Prods. Liab. Litig., 730 F. Supp. 3d 793 (S.D. Ill. 2024); see also Schachtman, “Paraquat Shape-Shifting Expert Witness Quashed,” Tortini (Apr. 24, 2024).
[8] See Schachtman, “Unlearning the Learned Treatise Exception,” Tortini (Aug. 21, 2010).
[9] Iain Chalmers, Larry V. Hedges, Harris Cooper, “A Brief History of Research Synthesis,” 25 Evaluation & the Health Professions 12 (2002); Mark Starr, Iain Chalmers, Mike Clarke, Andrew D. Oxman, “The origins, evolution, and future of The Cochrane Database of Systematic Reviews,” 25 Int J. Technol. Assess. Health Care s182 (2009); Mike Clarke, “History of evidence synthesis to assess treatment effects: personal reflections on something that is very much alive,” 109 J. Roy. Soc. Med. 154 (2016). See also Wen-Lin Lee, R. Barker Bausell & Brian M. Berman, “The growth of health-related meta-analyses published from 1980 to 2000,” 24 Eval. Health Prof. 327 (2001).
[10] Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, P.J. Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann Intern Med. W-65 (2009); “The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration,” 6 PLoS Med. e1000100 (2009).
[11] Alison Booth, Mike Clarke, Gordon Dooley, Davina Ghersi, David Moher, Mark Petticrew & Lesley Stewart, “The nuts and bolts of PROSPERO: an international prospective register of systematic reviews,” 1 Systematic Reviews 1 (2012); Alison Booth, Mike Clarke, Davina Ghersi, David Moher, Mark Petticrew, Lesley Stewart, “An international registry of systematic review protocols,” 377 Lancet 108 (2011).
[12] Matthew J. Page, Larissa Shamseer, and Andrea C. Tricco, “Registration of systematic reviews in PROSPERO: 30,000 records and counting,” 7 Systematic Reviews 32 (2018).
[13] John P. Ioannidis, “The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses,” 94 Milbank Q. 485 (2016).