TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Reference Manual 4th Edition Corrects Some, Not All, Mistakes on Confidence Intervals

January 9th, 2026

So now that the new, fourth edition of the Reference Manual on Scientific Evidence[1] has been released, inquiring minds may want to know whether it has corrected errors in the previous, third edition.[2] The authors of the new edition have had 14 years to ponder and reflect upon errors and to correct them.

Judges and lawyers look to the Manual for guidance and understanding of basic concepts, and the first three editions contained significant errors in addressing statistical concepts. There is probably no better place to jump in to see whether the new edition has corrected the prevalent mistakes in defining the statistical concept of a confidence interval, which was botched in several chapters in the third edition.[3] The concept of a confidence interval is important in many statistical applications, but it is especially important in the interpretation of epidemiologic studies.

Contrition is good for the soul. The new edition, in places, evinces an awareness that earlier editions had misled readers, and that the fourth edition needed to do better.  And in several key places, including in particular the statistics chapter, the fourth edition has improved its discussion of confidence intervals.

Professor David Kaye has two chapters in the new edition, one on DNA evidence, and another chapter, with Professor Hal Stern, on statistical evidence.[4] Kaye is a careful writer with substantial statistical expertise. His contributions to the third edition were anodyne treatments of statistical concepts, and his chapters in the new edition seem excellent as well upon first reading. In his chapter on DNA evidence, Kaye alludes to the misunderstandings and misrepresentations of the confidence interval,[5] and in his chapter on statistical evidence, Kaye, along with Stern, gives careful definitions and explications of confidence intervals.

Kaye and Stern call out several cases, frequently cited, for having given clearly incorrect definitions of confidence intervals. This sort of candor to the court is necessary if judges and lawyers are going to correct bad practices.[6] The statistics chapter in the fourth edition also does not shy away from calling out the authors of another chapter [epidemiology] in the Reference Manual’s third edition for having given erroneous definitions:

“Language from another reference guide in the previous edition of this Reference Manual that is often quoted may inadvertently convey the incorrect impression that a confidence coefficient such as 95% refers to the percentage of results in (hypothetically) repeated studies that would be expected to lie within the interval reported in the study before the court.”[7]

A very gentle criticism indeed; the epidemiology chapter was manifestly incorrect, and we can all agree that its error was negligent, not intentional. The epidemiology chapter from the third edition did not merely convey the incorrect impression; that chapter contained erroneous definitions of confidence intervals.

Kaye and Stern correctly note that a given confidence interval “does not give the probability that the unknown parameter lies within the confidence interval.”[8] And they helpfully point out that the true value has no tendency to lie closer to the point estimate at the center of a confidence interval than to any other value within the interval.[9]

The authors of the new edition’s chapter on epidemiology obviously got the message from Professors Kaye and Stern.[10] Fourth time is a charm. The epidemiology chapter in the third edition had been a mess on statistical issues.[11] Without any acknowledgment or confession of error committed in the first three editions, the authors of the epidemiology chapter in the fourth edition now note:

“Just as the p-value does not provide the probability that the risk estimate found in a study is correct, the confidence interval does not provide the range within which the true risk is likely to lie. In other words, it is a misconception to interpret a 95% confidence interval as representing an interval within which the true value has a 95% probability of being found.”[12]

Unfortunately, in the glossary at the end of the new edition’s epidemiology chapter, the erroneous definition of confidence interval was carried forward from the third edition, without change or correction:

“confidence interval. A range of values that reflects random error. Thus, if a confidence level of 0.95 is selected for a study, 95% of similar studies would result in the true relative risk falling within the confidence interval.”[13]

What the authors no doubt meant to write was that:

“95% of similar studies would result in the true relative risk falling within the confidence intervals.”

By putting “interval” in the singular, the authors fell into the trap described by Professors Kaye and Stern, and repeated the error committed by the previous editions’ epidemiology chapters.
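The corrected, plural reading is easy to demonstrate. Here is a short, purely illustrative Python simulation (the true mean, sample size, and number of studies are invented for the example): each hypothetical study produces its own interval, and roughly 95% of those intervals cover the true value, while any single interval either contains the true value or does not.

```python
import random
import statistics

# Simulate repeated studies estimating a known true mean.
# Each study yields its own 95% confidence interval; roughly 95%
# of those intervals (plural) cover the true value. No single
# interval has a 95% probability of containing it.
random.seed(1)

TRUE_MEAN = 10.0   # the unknown parameter (known here only because we simulate)
SIGMA = 2.0        # population standard deviation
N = 50             # observations per study
STUDIES = 2000     # number of hypothetically repeated studies

covered = 0
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Coverage across {STUDIES} studies: {covered / STUDIES:.1%}")
```

The coverage statement is a property of the procedure across the ensemble of studies, not of any one interval before the court.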

The new edition of the Reference Manual appears to suffer, at least on this statistical issue, from the lack of high-level editing across chapters.  The interaction between authors of the statistics and the epidemiology chapters sorted out a serious error, but the error pops up in new chapters. Michael Weisberg and Anastasia Thanukos have an introductory chapter on How Science Works, which crudely and incorrectly describes confidence intervals:

“Uncertainty and error are generally expressed as a range, within which we are confident that, if the study were repeated, the new result would fall. Scientists often use a 95% confidence interval for this purpose.”[14]

Confidence intervals model only random error, and the “range” around one point estimate does not give us “confidence” that the next point estimate would fall into that range.

The chapter on regression analyses in the third edition of the Reference Manual incorrectly defined confidence intervals.[15] Alas, the fourth edition did not auto-correct:

“Loosely speaking, a confidence interval represents an interval of values in which the true value of a regression coefficient falls within some pre-specified probability (where the true value is the estimate that would be obtained from the same model with a very large sample).”[16]

Why the authors of a highly technical chapter chose to speak loosely, rather than accurately, is a mystery. All the authors of the regression chapter had to do was refer to the accurate, helpful definitions in the statistics chapter.

Why should we care about the Reference Manual’s misleading, incorrect definitions of confidence intervals (or p-values for that matter)? The erroneous definitions and misuses typically place a Bayesian interpretation upon the confidence interval by claiming that the coefficient of confidence (typically 95% when alpha is set at 0.05) states the probability that the parameter, the true population measure, falls within the interval around the point estimate. This misinterpretation might suffice for a Bayesian 95% credible interval, but almost invariably the calculation under discussion is the point estimate ± 1.96 standard errors. Good statistics, like good grammar, costs nothing.
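For the curious, the arithmetic at issue is simple enough to work through. The following Python sketch, using hypothetical counts from an imaginary cohort study, computes the usual frequentist 95% confidence interval for a relative risk on the log scale (point estimate ± 1.96 standard errors, then exponentiated):

```python
import math

# Hypothetical 2x2 study counts (purely illustrative):
cases_exposed, n_exposed = 30, 1000
cases_unexposed, n_unexposed = 20, 1000

# Relative risk: ratio of the two incidence proportions
rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Standard error of the log relative risk (the usual large-sample formula)
se_log_rr = math.sqrt(
    1 / cases_exposed - 1 / n_exposed + 1 / cases_unexposed - 1 / n_unexposed
)

# 95% CI: log point estimate +/- 1.96 standard errors, exponentiated
log_rr = math.log(rr)
ci_low = math.exp(log_rr - 1.96 * se_log_rr)
ci_high = math.exp(log_rr + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Note that no prior probability and no posterior distribution appear anywhere in the computation, which is why reading the result as “a 95% probability that the true relative risk lies in this interval” — the Bayesian credible-interval reading — is wrong.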

Whether the conflation of confidence intervals with credible intervals results from ignorance or willful efforts to mislead, it is wrong.  And the conflation is part of a long-running rhetorical campaign to mislead about the meaning of the burden of proof and statistical significance in order to abandon statistical tests, and to green-light precautionary principle judgments as “scientific.”[17]

In past posts, I have cited and quoted any number of scientists and lawyers who have engaged in the effort, either intentional or negligent, to mislead readers about the nature of science, by idealizing and falsely elevating the burden of proof in science, and declaring it to be different from the legal and regulatory burden of proof.[18]

To pick one particularly notorious author, consider junk science writer Naomi Oreskes.[19] In her 2010 book, Oreskes declares:

“The 95 percent confidence standard means that there is only 1 chance in 20 that you believe something that isn’t true.

* * * * *

That is a very high bar. It reflects a scientific worldview in which skepticism is a virtue, credulity is not.”[20]

In fact, in statistics, science, and law, the confidence interval has nothing to do with the burden of proof; rather, it reflects the precision of a single point estimate. Truth is a virtue that may be lost on the likes of Naomi Oreskes, but it is essential to litigating scientific issues. Given that many lawyers in the past had cited the Reference Manual’s chapter on epidemiology for its incorrect definitions of the statistical confidence interval, we should rejoice that this one error has been corrected.


[1] National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE (4th ed. 2025) (cited as RMSE 4th ed.)

[2] National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE (3rd ed. 2011) (cited as RMSE 3rd ed.)

[3] See Nathan Schachtman, Reference Manual – Desiderata for 4th Edition – Part IV – Confidence Intervals, TORTINI (Feb. 10, 2023).

[4] In RMSE 3rd ed., Professor Kaye, along with David Freedman, wrote the chapter on statistical evidence; the two gave careful definitions and explications of confidence intervals.  Professor Freedman sadly died before the third edition was released, and he is replaced by Hal Stern in the chapter on statistics in the fourth edition.

[5] David H. Kaye, Reference Guide on Human DNA Identification Evidence, in RMSE 4th ed. at 261 (noting that “the meaning of a confidence interval is subtle, and the estimate commonly is misconstrued”).

[6] See Kaye & Stern, RMSE 4th ed. at 511 n.125 (citing Turpin v. Merrell Dow Pharm., Inc., 959 F.2d 1349, 1353 (6th Cir. 1992) (“If a confidence interval of ‘95 percent between 0.8 and 3.10’ is cited, this means that random repetition of the study should produce, 95 percent of the time, a relative risk somewhere between 0.8 and 3.10.”); Garcia v. Tyson Foods, Inc., 890 F. Supp. 2d 1273, 1285 (D. Kan. 2012) (“Dr. Radwin testified that his study was conducted within a confidence interval of 95 — that is ‘if I did this study over and over again, 95 out of a hundred times I would expect to get an average between that interval.’”); In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F. Supp. 2d 879, 897 (C.D. Cal. 2004) (“a margin of error between 0.5 and 8.0 at the 95% confidence level . . . means that 95 times out of 100 a study of that type would yield a relative risk value somewhere between 0.5 and 8.0”)).

[7] See Kaye & Stern, RMSE 4th ed. at 511 n.125 (citing Rhyne v. U.S. Steel Corp., 474 F. Supp. 3d 733, 744 (W.D.N.C. 2020) (“‘If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the population.’ Reference Guide on Epidemiology, at 580.”)).

[8] Kaye & Stern, RMSE 4th ed. at 512 & n.126 (citing additional errant judicial decisions, and Geoff Cumming & Robert Maillardet, Confidence Intervals and Replication: Where Will the Next Mean Fall?, 11 PSYCH. METHODS 217 (2006)).

[9] Id. at 512.

[10] Steve C. Gold, Michael D. Green, Jonathan Chevrier & Brenda Eskenazi, Reference Guide on Epidemiology, in RMSE 4th ed. at 897.

[11] Michael D. Green, D. Michal Freedman & Leon Gordis, Reference Guide on Epidemiology, in RMSE 3rd ed. at 549, 573, 580.

[12] Steve C. Gold, Michael D. Green, Jonathan Chevrier & Brenda Eskenazi, Reference Guide on Epidemiology, in RMSE 4th ed. at 897, 939.

[13] Id. at 1011.

[14] Michael Weisberg & Anastasia Thanukos, How Science Works, in RMSE 4th ed. at 47, 90.

[15] Daniel Rubinfeld, Reference Guide on Multiple Regression, in RMSE 3rd ed. at 303, 342, 352.

[16] Daniel Rubinfeld & David Card, Reference Guide on Multiple Regression and Advanced Statistical Models, in RMSE 4th ed. at 577, 613.

[17] Schachtman, Rhetorical Strategy in Characterizing Scientific Burdens of Proof, TORTINI (Nov. 11, 2014).

[18] See, e.g., Kevin C. Elliott & David B. Resnik, Science, Policy, and the Transparency of Values, 122 ENVT’L HEALTH PERSP. 647 (2014) (exemplifying the rhetorical strategy that idealizes and elevates a burden of proof in science, and then declaring it to be different from legal and regulatory burdens of proof).

[19] Schachtman, Playing Dumb on Statistical Significance, TORTINI (Jan. 4, 2015); The Rhetoric of Playing Dumb on Statistical Significance – Further Comments on Oreskes, TORTINI (Jan. 17, 2015).

[20] Naomi Oreskes & Erik M. Conway, MERCHANTS OF DOUBT: HOW A HANDFUL OF SCIENTISTS OBSCURED THE TRUTH ON ISSUES FROM TOBACCO SMOKE TO GLOBAL WARMING at 156-57 (2010).

A New Year, A New Reference Manual

January 5th, 2026

The fourth edition of the Reference Manual on Scientific Evidence was quietly released in the waning hours of 2025, in the twilight of American democracy.[1] The Manual had been slated to be published in 2023, but that date slid to 2024, and then to 2025.  Perhaps the change in directorship of the Federal Judicial Center slowed things down. (Judge Robin Rosenberg, of Zantac fame, is now the Director.)

The new volume is available for download at:

https://www.nationalacademies.org/publications/26919

Although I was a reviewer of one chapter of the Manual, I am just seeing this new edition for the first time today. The basic structure of the volume has not changed, although it has now grown to over 1,600 pages. Many of the key chapters on statistics, epidemiology, toxicology, and medical testimony are carried over from previous editions, with some new authors added and some previous authors no longer participating. In addition, there are some new chapters on exposure science, artificial intelligence, climate science, mental health, neuroscience, and eyewitness identification.

The individual chapters and authors in the new edition of the Manual are:

Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, at 1.

Michael Weisberg & Anastasia Thanukos, How Science Works, at 47

Valena E. Beety, Jane Campbell Moriarty, & Andrea L. Roth, Reference Guide on Forensic Feature Comparison Evidence, at 113

David H. Kaye, Reference Guide on Human DNA Identification Evidence, at 207

Thomas D. Albright & Brandon L. Garrett, Reference Guide on Eyewitness Identification, at 361

David H. Kaye & Hal S. Stern, Reference Guide on Statistics and Research Methods, at 463

Daniel L. Rubinfeld & David Card, Reference Guide on Multiple Regression and Advanced Statistical Models, at 577

Shari Seidman Diamond, Matthew Kugler, & James N. Druckman, Reference Guide on Survey Research, at 681

Mark A. Allen, Carlos Brain, & Filipe Lacerda, Reference Guide on Estimation of Economic Damages, at 749

Prologue to the Reference Guide on Exposure Science and Exposure Assessment, the Reference Guide on Epidemiology, and the Reference Guide on Toxicology, at 829

Elizabeth Marder & Joseph V. Rodricks, Reference Guide on Exposure Science and Exposure Assessment, at 831

Steve C. Gold, Michael D. Green, Jonathan Chevrier, & Brenda Eskenazi, Reference Guide on Epidemiology, at 897

David L. Eaton, Bernard D. Goldstein, & Mary Sue Henifin, Reference Guide on Toxicology, at 1027

John B. Wong, Lawrence O. Gostin, & Oscar A. Cabrera, Reference Guide on Medical Testimony, at 1105

Henry T. Greely & Nita A. Farahany, Reference Guide on Neuroscience, at 1185

Kirk Heilbrun, David DeMatteo, & Paul S. Appelbaum, Reference Guide on Mental Health Evidence, at 1269

Chaouki T. Abdallah, Bert Black, & Edl Schamiloglu, Reference Guide on Engineering, at 1353

Brian N. Levine, Joanne Pasquarelli, & Clay Shields, Reference Guide on Computer Science, at 1409

James E. Baker & Laurie N. Hobart, Reference Guide on Artificial Intelligence, at 1481

Jessica Wentz & Radley Horton, Reference Guide on Climate Science, at 1561

Some quick comments on changes in authorship in some of the chapters. Bernard Goldstein, a member of the dodgy Collegium Ramazzini, remains an author of the toxicology chapter in the new edition. David Eaton, however, has been added. Professor Eaton was the president of the Society of Toxicology for many years, and perhaps he has brought some balance to the new edition’s work on toxicology.

An author of the statistics chapter, David Kaye, is also the sole author of the chapter on DNA evidence. Professor Kaye is a distinguished scholar of DNA evidence with serious statistical expertise. David Freedman had been a co-author of the statistics chapter in the third edition, but sadly Professor Freedman died before the third edition was published. Freedman is replaced by Hal Stern, an accomplished statistician from the University of California.

The chapter on epidemiology lost Leon Gordis, who died in 2015. The chapter in the fourth edition sees the return of law professors Steve C. Gold and Michael D. Green, whose pro-plaintiff biases are well known, along with two new authors, epidemiology professors Jonathan Chevrier and Brenda Eskenazi. Like Goldstein, Eskenazi is a fellow of the Collegium Ramazzini.

The Reference Manual, for better or worse, has had substantial influence on the litigation of scientific and technical issues in federal court, and in some state courts as well. I hope to write more substantively about the new edition in 2026.


[1] National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, Reference Manual on Scientific Evidence (4th ed. 2025).

IARC’s Industry Sniffing Bots Are Coming for You

October 8th, 2025

“Hey, hey, you, you, get off of my cloud.”   …. Jagger & Richards

For the last 50 years, critics, cranks, and anti-industry zealots have argued that industry-sponsored science is vitiated by conflicts of interest. What started as the whining of scientists who were regulatory “political scientists” and adjuncts to plaintiffs’ law firms has become a major movement. The rise of post-modernism in philosophy has supported the rejection of robust debate over scientific assessments of causation, on the ground that all such judgments are politically and socially determined.  Evidence is just casuistry, at least when done by those with whom we disagree.

The anti-industry bias has had demonstrably bad consequences in distorting scientific judgment. Over 30 years ago, a science journalist published a story in the Journal of the National Cancer Institute about how dire predictions of asbestos mortality never came to pass.[1] In investigating the failure of these predictions, the journalist concluded that they had been the product of exaggerations by government scientists who suffered from a form of “white-hat” bias:

“the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds.  ‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement’….  ‘At the time it looked like you were wearing a white hat if you made these wild estimates. But I wasn’t sure whoever did that was doing all that much good.’”[2]

The existence of “white-hat” bias is perhaps the most benign explanation for the propagation of badly done science. The deployment of political correctness applied to issues that really depend upon scientific method, data, and inference for their resolution should not, however, be seen as particularly benign.

In 2010, over a decade after the description of white-hat bias in the JNCI, two public health researchers, Mark B. Cope and David B. Allisosn, described white-hat bias as a prevalent cognitive error in how research is reported and interpreted.[3]  They described white-hat bias as a “bias leading to the distortion of information in the service of what may be perceived to be righteous ends.” Perhaps the temptation to overstate the evidence against a toxic substance is unavoidable, but it diminishes the authority and credibility of regulators entrusted with promulgating and enforcing protective measures.  And error is still error, regardless of its origins and motivations. 

Allison and Cope gave examples of white-hat bias in how papers are cited, with “exonerative” studies cited less often than those that claim harmful outcomes.  And when positive papers were cited, they were often interpreted misleadingly to overstate the harms previously reported.

The principle of charity suggests white-hat bias should be considered for much of the anti-industry prejudices exhibited by public health scientists. The persistence, virulence, and irrationality of many instances of prejudiced judgments, however, make the charitable explanation implausible.

Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), provided a more insightful explanation of the anti-industry bias, which he called the “new McCarthyism in science.”[4] Rothman identified an anti-manufacturing-industry bias as manifesting in intolerance toward industry-sponsored studies, and in strict scrutiny of “conflict-of-interest” (COI) disclosures. The McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying the non-disclosure of COIs by scientists aligned with advocacy groups or the lawsuit industry, or of positional COIs.

The quaint notion that “an opinion should be evaluated on the basis of its contents, not on the interests or credentials of the individuals who hold it,” has been generally banished.[5] The offense to honest scientific inquiry receives little attention,[6] but the sanctimonious deployment of COI claims allows scientists to over-indulge in poor quality research by claiming that they have extirpated industry influence.

In 1995, anti-tobacco historian and expert witness, Robert Proctor, coined the term agnotology from the Greek ágnosis (“not knowing”) and -logia (study of).[7] Agnotology is now a specialty of scientist-advocates and expert witnesses for the lawsuit industry; it has been the subject of numerous and repetitive books,[8] too many articles to cite, and even doctoral dissertations.[9]

The anti-manufacturing industry jihad is little more than defamation against every scientist or citizen who has called for evidence-based regulation and law in dealing with scientific issues. The movement would deprive legislators, regulators, and juries of important, relevant scientific evidence based upon a smear.

What is truly fascinating, however, is the hypocrisy built into the anti-industry COI movement. There is another industry that is protected from criticism – the lawsuit industry. That industry has grown up parasitically around the tort system, and it now includes not only the law firms that service claimants, but also their retinue of expert witnesses, their litigation funders, and even investment firms that collude with hit-piece journalists working “distort and short” schemes of trading in the securities of their targets.

The critics of research done or funded by manufacturing industry argue that industry studies disproportionately report outcomes favorable to their sponsors. The implied potential conflicts posed by industry-sponsored research studies are fairly obvious. Industries that make or sell products, raw materials, or chemicals have an interest in having toxicological and epidemiologic studies support claims of safety.  Research that suggests an industry’s product causes harm may hurt the industry’s financial interests directly by inhibiting sales, or indirectly by undermining the industry’s position in litigation, or by leading to greater regulatory scrutiny and control. Indirect harms may result from heightened warnings or instructions, which may limit sales or encourage sales of competing, less hazardous products. If the harm evidenced by the research is sufficiently severe, the research may lead to product recall or bans, again with serious economic consequences for the industry. 

The lawsuit industry has conflicts of interest that mirror those of manufacturing industry.[10] Manufacturing evidence and conclusions of harm is good for the lawsuit industry, and provides rich sources of revenue for its go-to expert witnesses. There are also ideological interests that motivate many players in the lawsuit industry. Lawsuit industry COIs are frequently ignored or down-played, even though the research funded, sponsored, or written by its members has a strange propensity to support claims made in court and in agencies.

The International Agency for Research on Cancer (IARC) has become ground zero for hypocritical exorcisms of COIs. In 2018, several authors wrote a commentary in which they declared that IARC and its cancer hazard evaluations were under attack from those with “economic interests” (manufacturing firms or their consultants).[11] Several of the commentary authors, Peter F. Infante, Ronald Melnick, and James Huff, were full-fledged members of the lawsuit industry, with consulting firms that work to help claimants in tort litigation. The authors’ own COIs, however, did not inhibit them from declaring that only “scientific experts who do not have conflicts of interest” should be allowed to criticize IARC pronouncements. Three of the four authors (Infante, Melnick, and Huff) of this hit piece identified themselves as having consulting firms, but only James Huff gave a disclosure that he had “been retained as expert consultant on long-term animal bioassays of glyphosate in litigation for plaintiffs.” Infante and Melnick gave no disclosure, although they have been far more than consultants; they have appeared in testimonial roles for tort claimants. To top off the hypocrisy, the journal editor, Steven B. Markowitz, felt compelled to declare that he had “no conflict of interest in the review and publication decision regarding this article.” Markowitz is a not infrequent testifying expert witness for the lawsuit industry.[12] It is a safe bet that the great majority of the studies authored by Infante, Melnick, Huff, and Markowitz claim or suggest harms from chemical exposures.

It seems rudimentary that scientific research should be evaluated on the merits of studies, methods, data, and inference, and not the source of the funding. We are, however, deep into the post-modern world that regards science as a way of exercising political power and social control, and not a search for the truth. Given our Zeitgeist, no one should be surprised that an IARC official has just come out with a paper that attempted to deploy a large-language model (LLM) to identify possible industry influence, down to parts per trillion or whatever the level of detection may be.

Last month, Mary K. Schubauer-Berigan, the head of the Evidence Synthesis and Classification Branch of IARC, along with several other scientists, published a paper that proposed the use of an LLM to identify industry influence.[13] Schubauer-Berigan is an occupational epidemiologist, but she is also an amateur agnotologist. The first sentence of her article really tells all you need to know: “Industry-funded research poses a threat to the validity of scientific inference on carcinogenic hazards.” The authors claim that their LLM can help assess bias from industry studies in evidence synthesis and identify “industry influence” on scientific inference. These authors reflect the IARC dogma that only manufacturing industry has COIs of concern. Lawsuit industry influence is never mentioned.

The authors applied their LLM to identify industry relationships among authors of review articles on issues related to three specific IARC hazard classifications (benzene, cobalt, and aspartame). The search apparently included direct funding for studies of the agent under consideration, as well as whether studies or reviews had an industrial sponsor or a trade association, whether they used data provided by an industry source, or whether authors were paid consulting fees or provided expert testimony. The authors’ algorithm did not include whether spouses, children, parents, good friends, professional colleagues, or mentors ever had some dalliance with manufacturing industry.

IARC’s LLM was never let loose in search of lawsuit industry connections. Are you surprised?


[1] Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Instit. 560 (1992).

[2] Id. at 562. 

[3] Mark B. Cope and David B. Allison, “White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting,” 34 Internat’l J. Obesity 84 (2010).

[4] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. Am. Med. Ass’n 2782 (1993). See Schachtman, “The Rhetoric and Challenge of Conflicts of Interest,” Tortini (July 30, 2013).

[5] Brian MacMahon, “Epidemiology:  another perspective,” 37 Internat’l J. Epidem. 1192, 1192 (2008).

[6] See Thomas P. Stossel, “Has the hunt for conflicts of interest gone too far?” 336 Brit. Med. J. 476 (2008); Kenneth J. Rothman & S. Evans, “Extra scrutiny for industry funded trials: JAMA’s demand for an additional hurdle is unfair – and absurd, 331 Brit. Med. J. 1350 (2005) & 332 Brit. Med. J. 151 (2006) (erratum).

[7] Robert Proctor, The Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer at 8 (1995).

[8] See, e.g., David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020); David Michaels, Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health (2008); Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (2010); Robert N. Proctor & Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (2008); Janet Kourany & Martin Carrier, eds., Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted (2020); Blake D. Scott, “Agnotology and Argumentation: A Rhetorical Taxonomy of Not-Knowing,” OSSA Conference Archive 133 (2016).

[9] Craig Alex Biegel, Manufactured Science, the Attorneys’ Handmaiden: The Influence of Lawyers in Toxc [sic] Substance Disease Research, Dissertation for Florida State University (2016).

[10] See Laurence J. Hirsch, “Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publications: The Tort Bar and Editorial Oversight of Medical Journals,” 84 Mayo Clin. Proc. 811 (2009).

[11] Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am J. Indus. Med. 277 (2018).

[12] See In re Joint Eastern & Southern District Asbestos Litig., 758 F.Supp. 199 (S.D.N.Y. 1991); Juni v. A.O. Smith Water Prods. Co., 32 N.Y.3d 1116, 116 N.E.3d 75 (2018); Konstantin v. 630 Third Avenue Assocs., N.Y.S.Ct. (N.Y. Cty.) No. 190134/2010 (jury verdict returned Aug. 17, 2011); Koeberle v. John Crane, Inc., Phila. Cty. Ct. C.P. No. 000887 (jury verdict returned Feb. 2010).

[13] Nathan L. DeBono, Vanessa Amar, Hardy Hardy, Mary K. Schubauer-Berigan, Derek Ruths & Nicholas B. King, “A large language model-based tool for identifying relationships to industry in research on the carcinogenicity of benzene, cobalt, and aspartame,” 24 Envt’l Health 64 (2025).

Prada – Fashionable, But Unreliable Review on Acetaminophen and Autism

September 30th, 2025

Back in the first week of this month, I posted about a paper (Prada 2025),[1] which featured a so-called navigation-guide systematic review of the scientific evidence on the issue whether pregnant women’s ingestion of acetaminophen causes their children to develop autism.[2] The focus of my post was on some dodgy aspects of the Prada review, such as its anemic disclosures of interest, and its squirrelly claim to have been “NIH funded.”

Since posting, the Prada review has been very much in the news. Last week, President Trump held a news conference, where we learned that he cannot pronounce acetaminophen and that he has a strongly held opinion that acetaminophen causes autism.[3] Trump was surrounded by officials in his administration, including plaintiffs’ lawyer Robert Kennedy, Jr., and three physicians, Drs. Oz, Makary, and Bhattacharya, who looked on in apparent approval. Once upon a time, a risk communication such as this one about acetaminophen would have come from a non-political FDA official, such as Janet Woodcock, who was head of drug safety and, for many years, the Director of the Center for Drug Evaluation and Research. Over her tenure, Dr. Woodcock weighed in on many pharmaceutical safety issues. Those of us who have been involved in litigation of those safety issues remember that Dr. Woodcock chose her language very carefully. She did not just give opinions; she marshalled facts.

Admittedly, Trump’s autism press conference was not as deranged as his 2020 press conference at which he suggested that injecting sodium hypochlorite (bleach) into patients would cure Covid-19 infections. Still, most of the world was left with the impression that Trump was gutting (DOGE-ing) scientific research and replacing it with irrational speculation. Trump’s press conference on acetaminophen and vaccines was widely met with skepticism and disbelief. Medical ethicist Dr. Arthur Caplan, who is not given to hyperbole, called the conference “the saddest display of a lack of evidence, rumors, recycling old myths, lousy advice, outright lies, and dangerous advice I have ever witnessed by anyone in authority.”[4]

When the administration physicians communicated with the public, they said something very different from Trump’s presentation. In her press release, Press Secretary Karoline Leavitt used the meaningless locution, “suggested link,” and cited the Prada review, which eschewed causal conclusions:[5]

“Andrea Baccarelli, M.D., Ph.D., Dean of the Faculty, Harvard T.H. Chan School of Public Health: “Colleagues and I recently conducted a rigorous review, funded by a grant from the National Institutes of Health (NIH), of the potential risks of acetaminophen use during pregnancy… We found evidence of an association between exposure to acetaminophen during pregnancy and increased incidence of neurodevelopmental disorders in children.”

Harvard University: Using acetaminophen during pregnancy may increase children’s autism and ADHD risk.”

Of course, saying that something “may increase risk” is not even close to saying that something causes the outcome in question. And Baccarelli’s description of his paper, the Prada review, as funded by the National Institutes of Health is misleading at best.[6]

Leavitt went on to declare that “[t]he Trump Administration does not believe popping more pills is always the answer for better health.” Unless of course, it is Propecia for Mr. Trump, testosterone for Mr. Kennedy, or ketamine for Mr. Musk.

FDA Commissioner Martin A. Makary issued a Notice, the same day, in which he declared:

“In recent years, evidence has accumulated suggesting that the use of acetaminophen by pregnant women may be associated with an increased risk of neurological conditions such as autism and ADHD in children.

* * *

To be clear, while an association between acetaminophen and autism has been described in many studies, a causal relationship has not been established and there are contrary studies in the scientific literature.”[7]

So the FDA is clearly not declaring that acetaminophen causes autism.

Dr. Mehmet Oz, former surgeon and television talking head, who stood mute by Trump’s side at the infamous press conference, found his voice later in the week, when he acknowledged that pregnant women of course should take acetaminophen when physicians direct them to do so.

In Europe, where pharmaceutical regulation is typically more precautionary than in the United States, both the European Medicines Agency and the U.K.’s Medicines and Healthcare Products Regulatory Agency announced that using acetaminophen during pregnancy was safe with no showing that it causes autism in offspring.[8] Steffen Thirstrup, the EMA’s Chief Medical Officer, announced a day after the Trump bungle, that:

“Paracetamol [acetaminophen] remains an important option to treat pain or fever in pregnant women. Our advice is based on a rigorous assessment of the available scientific data and we have found no evidence that taking paracetamol during pregnancy causes autism in children.”

Most medical organizations were appalled at the administration’s sloppy messaging. The day after the press conference, the American College of Medical Toxicology (ACMT) issued a statement in response, to affirm the safety of acetaminophen in pregnancy.[9] The ACMT noted that its position was in agreement with the American College of Obstetrics and Gynecologists, the Society for Maternal-Fetal Medicine, the American Academy of Pediatrics, and the Society for Developmental and Behavioral Pediatrics.

The acetaminophen kerfuffle seems always to come back to the Prada “navigation guide” systematic review and its authors, including the Harvard Dean, Andrea Baccarelli, who was a well-paid member of the plaintiffs’ expert witness team in acetaminophen litigation.[10] Why did Dr. Andrea Baccarelli in the Prada review use this curious, arcane, and infrequently used method of review? Why did Baccarelli and his co-authors publish this review in Environmental Health, which is dedicated to publishing “manuscripts on important aspects of environmental and occupational medicine,” a scope that places maternal ingestion of a licensed pharmaceutical outside its stated competence? Why did Baccarelli offer a litigation opinion that acetaminophen causes autism, but retreat to “association” when writing for the scientific community? And why did Baccarelli and his co-authors not disclose that Baccarelli had submitted essentially the same navigation guide systematic review as his proffered expert witness testimony, and that a federal court had rejected his opinion as not “the product of reliable principles and methods,” and not “a reliable application of the principles and methods to the facts of the case”[11]? Perhaps the answers are obvious to most observers, but candid disclosures certainly would have provided important context, and saved some people the embarrassment of relying upon the Prada review.

In digging deeper into the history of the navigation guide method itself, the earliest citation I could find to such systematic reviews was in 2009, in a conference paper that discussed this approach as a proposal.[12] The authors who made up the Navigating the Scientific Evidence to Improve Prevention Workshop Organizing Committee were not particularly well known or distinguished in the field of research synthesis. Still, if the Organizing Committee had truly been on to something important, one would expect “navigation guide” reviews to be more prevalent than they are.

The Committee never identified a rationale for a new systematic review approach. When the Organizing Committee outlined its approach in 2009, there were well over three decades of experience with systematic reviews,[13] with well-regarded full-length textbook treatment by experts in the field.[14]

In addition to the lack of experience among its authors and the preemption of the subject by comprehensive treatments elsewhere, there were three additional curious takeaways from a cursory reading of the Organizing Committee’s 2009 manuscript. First, the Committee emphasized the alleged need for a review methodology for environmental exposures. This emphasis was never accompanied by a showing that well-described methodologies long in use were somehow inadequate or inappropriate for environmental exposures.

Second, the authors urged the need for precautionary assessments, which might make their method suitable where syntheses for precautionary pronouncements are called for. In the United States, regulatory assessments vary depending upon the governing statutes that create the regulatory mandate. In personal injury litigation, however, the precautionary principle is nothing less than an end run around the burden of proof borne by the party claiming harm and suing in tort. The designation of environmental exposures as the subject matter for the proposed systematic review technique offers an insight into why these authors believed they had to propose a newfangled systematic review methodology. Previously described methods interfered with authors’ ability to elevate “iffy” associations into conclusions of causality in the name of the precautionary principle.

The third curiosity in the 2009 manuscript is that the authors never described the need for a pre-specified protocol. Later articles on this proposed methodology similarly failed to describe the need for such a protocol,[15] although by 2014, authors from the original Organizing Committee reversed course to add a pre-specified protocol to the requirements for a navigation guide systematic review.[16]

A recent article defines a systematic review essentially in terms of a protocol:

“Systematic review (SR) is a rigorous, protocol-driven approach designed to minimise error and bias when summarising the body of research evidence relevant to a specific scientific question.”[17]

The purpose of a protocol may be obvious to anyone who has been paying attention to the replication crisis in biomedical literature, but the same article offers a helpful description of its rationale:

“The purposes of the protocol are to discourage ad-hoc changes to methodology during the review process which may introduce bias, to allow any justifiable methodological changes to be tracked, and also to allow peer-review of the work that it is proposed, to help ensure the utility and validity of its objectives and methods.”[18]

Systematic reviews vary widely in quality, methodological rigor, and validity, and one of the key determinants of their validity is whether they were preceded by a pre-specified protocol. Although systematic reviews are often described as the “gold standard” for evidence synthesis, reviews that lack a pre-specified protocol are decidedly less rigorous than those that employ one.[19] The absence of a protocol is thus an important tell that a systematic review may be untrustworthy.

The Prada paper put together by Baccarelli’s team has no protocol. It may satisfy the Trump administration’s Fool’s Gold Standard for Science, but that is far short of the requirements of Federal Rule of Evidence 702. Given Baccarelli’s abridgement of scientific method, we should not be overly surprised by Judge Cote’s judgment of the failures of Baccarelli’s and the other plaintiffs’ expert witnesses’ proffered opinions in the acetaminophen litigation:

“their analyses have not served to enlighten but to obfuscate the weakness of the evidence on which they purport to rely and the contradictions in the research. As performed by the plaintiffs’ experts, their transdiagnostic analysis has obscured instead of informing the inquiry on causation.”[20]

Judge Cote carefully reviewed Baccarelli’s proffered testimony and found it replete with cursory analyses, cherry-picked data, and result-driven assessments of studies.[21] Her Honor’s findings would seem to apply with equal measure to the Prada review.


[1] Diddier Prada, Beate Ritz, Ann Z. Bauer and Andrea A. Baccarelli, “Evaluation of the evidence on acetaminophen use and neurodevelopmental disorders using the Navigation Guide methodology,” 24 Envt’l Health 56 (2025).

[2] See Schachtman, “Acetaminophen & Autism – Prada Review Misleadingly Claims to Be NIH Funded,” Tortini (Sept. 9, 2025).

[3] Jeff Mason, Ahmed Aboulenein, and Julie Steenhuysen, “Trump Links Autism to Tylenol and Vaccines, Claims Not Backed by Science,” Reuters (Sept. 22, 2025); Brianna Abbott & Andrea Petersen, “The Trump administration said acetaminophen could cause autism. Doctors maintain it is safe during pregnancy,” Wall St. J. (Sept. 22, 2025) (“Studies looking at a link [sic] between acetaminophen and autism are inconclusive.”); Will Weissert, “Dr. Trump? The president reprises his COVID era, this time sharing unproven medical advice on autism,” Wash. Post (Sept. 23, 2025).

[4] Ali Swenson & Lauran Neergaard, “Trump makes unfounded claims about Tylenol and repeats discredited link between vaccines and autism,” Assoc. Press (Sept. 23, 2025).

[5] Leavitt, “FACT: Evidence Suggests Link Between Acetaminophen, Autism,” The White House (Sept. 22, 2025).

[6] See Schachtman, “Acetaminophen & Autism – Prada Review Misleadingly Claims to Be NIH Funded,” Tortini (Sept. 9, 2025). The referenced grants had nothing to do with acetaminophen and autism, or even autism generally. The NIEHS granted Dr. Baccarelli money to study air pollution and brain aging. The exposure of interest was not acetaminophen, and the outcome of interest was not autism. By claiming that his research was “NIH funded,” Baccarelli was attempting to boost the prestige of the research even though his acetaminophen review was done for litigation, not for the federal government. Apparently the NIEHS acquiesces in this charade because it suggests to the uninitiated that its research grants result in more published papers, even though the topics of those papers are unrelated to the funded research proposal and never received committee peer review.

[7] Martin A. Makary, “Notice to Physicians on the Use of Acetaminophen During Pregnancy,” (Sept. 22, 2025).

[8] E.M.A., “Use of paracetamol during pregnancy unchanged in the EU,” (Sept. 23, 2025).

[9] ACMT Supports the Safe Use of Acetaminophen in Pregnancy (Sept. 23, 2025).

[10] Rebecca Robbins & Azeen Ghorayshi, “Harvard Dean Was Paid $150,000 as an Expert Witness in Tylenol Lawsuits,” N.Y. Times (Sept. 23, 2025).

[11] Fed. R. Evid. 702.

[12] Patrice Sutton, Heather Sarantis, Julia Quint, Mark Miller, Michele Ondeck, Rivka Gordon, and Tracey Woodruff, “Navigating the Scientific Evidence to Improve Prevention: A Proposal to Develop A Transparent and Systematic Methodology to Sort the Scientific Evidence Linking Environmental Exposures to Reproductive Health Outcomes,”  (July 29, 2009).

[13] See Quan Nha Hong & Pierre Pluye, “Systematic reviews: A brief historical overview,” 34 Education for Information 261, 261 (2018) (describing the evolution of systematic reviews as made up of a “foundation period 1970-1989,” an “institutionalization period 1990-2000,” and a “diversification period” from 2001 forward).

[14] Matthias Egger, Julian P. T. Higgins, and George Davey Smith, Systematic Reviews in Health Research: Meta-Analysis in Context (3rd ed. 2022). The first edition of this text was published in 1995.

[15] Tracey J. Woodruff, Patrice Sutton, and The Navigation Guide Work Group, “An Evidence-Based Medicine Methodology To Bridge The Gap Between Clinical And Environmental Health Sciences,” 30 Health Affairs 931 (2011); Julia R. Barrett, “The Navigation Guide Systematic Review for the Environmental Health Sciences,” 122 Envt’l Health Persp. A283 (2014).

[16] Tracey J. Woodruff & Patrice Sutton, “The Navigation Guide Systematic Review Methodology: A Rigorous and Transparent Method for Translating Environmental Health Science into Better Health Outcomes,” 122 Environ Health Perspect. 1007 (2014).

[17] Paul Whaley, Crispin Halsall, Marlene Ågerstrand, Elisa Aiassa, Diane Benford, Gary Bilotta, David Coggon, Chris Collins, Ciara Dempsey, Raquel Duarte-Davidson, Rex Fitzgerald, Malyka Galay-Burgos, David Gee, Sebastian Hoffmann, Juleen Lam, Toby Lasserson, Len Levy, Steven Lipworth, Sarah Mackenzie Ross, Olwenn Martin, Catherine Meads, Monika Meyer-Baron, James Miller, Camilla Pease, Andrew Rooney, Alison Sapiets, Gavin Stewart, and David Taylor, “Implementing systematic review techniques in chemical risk assessment: Challenges, opportunities and recommendations,” 92-93 Env’t Internat’l 556 (2016).

[18] Id. at 560.

[19] Julia Menon, Fréderique Struijs & Paul Whaley, “The methodological rigour of systematic reviews in environmental health,” 52 Critical Rev Toxicol. 167 (2022).

[20] In re Acetaminophen ASD-ADHD Prods. Liab. Litig., 707 F. Supp. 3d 309, 334, 2023 WL 8711617 (S.D.N.Y. 2023) (Cote, J.).

[21] Id. at 354-56.

Specific Causation – The Process of Elimination

September 24th, 2025

Specific causation causes some courts to become costive, and sometimes, courts overuse so-called differential etiology as a laxative. The phrase “differential etiology” is an analogy to differential diagnosis, the reasoning process by which clinicians assess the identity of a disease or disorder. Differential etiology, like laxatives, can be overused and misused.

Last month, the Ninth Circuit affirmed a district court’s summary judgment in a glyphosate case. Engilis v. Monsanto Co., No. 23-4201, D.C. No. 3:19-cv-07859-VC (9th Cir. August 12, 2025). The trial court found that plaintiff’s expert witness’s differential etiology was unreliable because the putative expert witness acknowledged that obesity could be a cause of plaintiff’s disease, but then failed reliably to rule out obesity as a differential etiology. Instead, the excluded expert witness glibly inferred that glyphosate was a cause of plaintiff’s cancer. The trial and appellate courts were faced with a great example of invalid, motivated reasoning, or the lack of reasoning.

The Ninth Circuit’s affirmance was significant because it clearly acknowledged that there was no presumption of admissibility, and that the district court was well within its discretion to find that the proffered expert witness opinion had failed to meet the requirements of Rule 702.[1]

The decision in Engilis was simple and straightforward; it was based upon specific or individual causation or its absence. In cases involving diseases with multiple potential causes, none of which is necessary for the outcome, an exposure or lifestyle factor may be capable of causing a particular disease, but that factor may not have played a causal role in everyone who experienced the exposure or lifestyle factor and who developed the disease. (Not everyone who smoked cigarettes develops lung cancer, and not all lung cancer patients smoked.) Courts and litigants are thus left with the puzzle of individual causation.

In a case such as Engilis, courts can basically assume, arguendo, that glyphosate can cause the claimed outcome (Non-Hodgkin’s Lymphoma or NHL), but then insist that there is competent and sufficient evidence to show that the claimant’s specific case of NHL was caused by the claimed exposure.

Some courts and commentators have suggested that a process of “differential etiology,” by analogy to differential diagnosis, can get a claimant to the finish line. This attempted solution assumes arguendo that glyphosate can cause NHL, but it still must resolve whether this specific case of NHL (or whatever disease is claimed) was caused by the claimed exposure.

As suggested above, differential etiology is something like constipation, which is resolved by the process of elimination. Formally, the reasoning process is an “iterative disjunctive syllogism.” We start with an exhaustive listing of the possible established general causes of the claimed disease:

A or B or C (exhausting the possible general causes of the claimed disease).

Because the disease may be multifactorial, the set of disjuncts may be more complex:

A or B or C or A*B or B*C or A*C or A*B*C.

But if the claimant had never been exposed to A, we can deduce:

B or C or B*C.

And if the claimant had never been exposed to B, we can infer that:

C.

And if C is the tortogen under investigation, for which general causation was established, the claimant would have an unequivocal, submissible case for the jury.

Of course many diseases have unknown causes, so-called idiopathic or sporadic cases.  In such instances, any proper differential etiology must include a disjunct D, for idiopathic cause. We can see that the iterative disjunctive syllogism in such cases leaves us with uneliminated D in some of the remaining disjuncts, and the claimant cannot reach an unequivocal conclusion in support of his claim.
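For readers who like to see the logic mechanized, the iterative disjunctive syllogism can be sketched in a few lines of Python. This is a toy illustration of the elimination reasoning described above, not anything drawn from the case law; the cause labels and function names are my own.

```python
from itertools import combinations

def candidate_disjuncts(causes):
    """All non-empty combinations of the established general causes,
    allowing multifactorial disjuncts (A*B, B*C, A*B*C, and so on)."""
    causes = sorted(causes)
    return [frozenset(c) for r in range(1, len(causes) + 1)
            for c in combinations(causes, r)]

def eliminate(disjuncts, never_exposed):
    """The elimination step: drop every disjunct that involves a cause
    to which the claimant was never exposed."""
    return [d for d in disjuncts if not (d & never_exposed)]

# With only the known causes A, B, and C, eliminating A and B
# isolates C, the tortogen.
known = eliminate(candidate_disjuncts({"A", "B", "C"}), {"A", "B"})
print(known)  # [frozenset({'C'})]

# Add a disjunct "D" for idiopathic (unknown) cause, and the same
# elimination no longer isolates C: disjuncts containing D survive.
with_idiopathic = eliminate(candidate_disjuncts({"A", "B", "C", "D"}),
                            {"A", "B"})
print(sorted("".join(sorted(d)) for d in with_idiopathic))  # ['C', 'CD', 'D']
```

The second run makes the textual point concrete: once an idiopathic disjunct is included, elimination of the other known causes still leaves D and C*D standing, so the claimant cannot deduce C unequivocally.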

There may perhaps be a solution to this problem that turns on the effect size, and the probability of attribution associated with each uneliminated disjunct, but that is a story for another day.


[1] See Paul Driessen, “Nation’s most liberal court rejects plaintiff expert’s claims that glyphosate caused couple’s cancer,” Eurasia Review (Sept. 23, 2025).

Acetaminophen & Autism – Prada Review Misleadingly Claims to Be NIH Funded

September 9th, 2025

A few weeks ago, four scientists published what they called a “navigation guide” systematic review on acetaminophen use and autism.[1] The last named author, Andrea A. Baccarelli, is an environmental epidemiologist, who has been an expert witness for plaintiffs’ counsel in lawsuits against the manufacturers and sellers of acetaminophen. Another author, Beate Ritz, frequently testifies for the lawsuit industry in cases against various manufacturing industries. A third author, Ann Z. Bauer, was the lead author of a [faux] “consensus statement” that invoked the precautionary principle to call for limits on the use of acetaminophen (N-acetyl-p-aminophenol or APAP) by pregnant women, on grounds that such use may increase the risks of neurodevelopmental (including autism), reproductive, and urogenital disorders.[2] The lead author was Diddier Prada, who works in Manhattan, at the Icahn School of Medicine at Mount Sinai, in the environmental and climate science department, within the Institute for Health Equity Research. The Mount Sinai website describes Dr. Diddier Prada as an environmental and molecular epidemiologist who focuses on the role of environmental toxicants in age-related conditions.

Curious readers might wonder how someone whose interest is in environmental issues and “health equity” became involved in a review of pharmaco-epidemiology and teratology. The flavor of systematic review deployed in the paper, the “navigation guide,” originated in, and has had limited use outside, the field of environmental health. To my knowledge, so-called navigation guides have never been used previously in pharmaco-epidemiologic or teratologic controversies.[3]

The Prada paper and its deployment of a “navigation guide” systematic review deserve greater critical scrutiny.  In this post, however, I want to address some peripheral issues, such as “competing interests” and misleading claims about the paper’s having been NIH funded.

Only Dr. Baccarelli disclosed a potential conflict of interest, in a statement that many would judge to be anemic:

“Dr. Baccarelli served as an expert witness for the plaintiff’s legal team on matters of general causation involving acetaminophen use during pregnancy and its potential links to neurodevelopmental disorders. This involvement may be perceived as a conflict of interest regarding the information presented in this paper on acetaminophen and neurodevelopmental outcomes. Dr. Baccarelli has made every effort to ensure that this current work—like his past work as an expert witness on this matter—was conducted with the highest standards of scientific integrity and objectivity.”

The disclosure fails to mention whether Dr. Baccarelli was compensated for playing on the “plaintiff’s legal team,” and if so, then how much. Using the passive voice, he suggests that this work might merely be perceived as a conflict of interest, when surely he knows that it is a serious issue. If industry scientists working on the relevant issue had published, they would surely have been accused of having a conflict.

Dr. Baccarelli self-servingly, falsely, and with epistemic arrogance, asserts that he made every effort in this paper, and in his past work as an expert witness, to conform to the “highest standards of scientific integrity and objectivity.” Despite his best efforts to be “scientific,” Baccarelli’s work failed critical scrutiny in the multi-district litigation that consolidated acetaminophen cases for pre-trial handling. In that litigation, the defense challenged Dr. Baccarelli’s opinions under Rule 702, for their lack of validity. In an extensive, closely reasoned opinion, federal district court judge Denise Cote ruled that Dr. Baccarelli’s proffered opinions failed to meet the relevance and reliability standards of federal law.[4]

The MDL court easily found that Dr. Baccarelli was qualified to provide an opinion on epidemiology, although the focus of his career has been on environmental issues. Baccarelli’s substantive problem was that he deviated from accepted and valid methods of causal inference by cherry-picking different results and outcomes across multiple studies. Baccarelli’s sophistical trick was to advance a “transdiagnostic” analysis that lumps an already heterogeneous autism spectrum disorder (ASD) with attention-deficit hyperactivity disorder (ADHD) and a grab bag of “other neurodevelopmental disorders.” If a study found a putative association with only one of the three end points, Baccarelli would claim success on all three. Baccarelli avoided conducting separate ASD and ADHD analyses, and he cherry-picked the end points that supported his pre-determined conclusions.

Judge Cote found that the transdiagnostic analyses advanced by plaintiffs’ expert witnesses, including Baccarelli, obscured and obfuscated more than they informed the causal inquiry.[5] The court’s analysis casts considerable shade upon Baccarelli’s self-serving claim to have used “the highest standards of scientific integrity and objectivity.” Judge Cote barred Baccarelli and the other members of the plaintiffs’ “expert team” from testifying.

Conspicuously absent from the conflict disclosure section of the Prada article was any mention of the litigation work of co-author Beate Ritz. In 2007, Ritz became a fellow of the Collegium Ramazzini, which functions in support of the lawsuit industry much as the scientists of the Tobacco Institute supported tobacco legal defense efforts in times past. Ritz’s fellowship in the Collegium makes her a full-fledged member of the Lobby and a supporter of the lawsuit industry.[6] Ritz has testified for claimants in cases involving claims of heavy metals in baby food, in cases involving claims that paraquat exposure caused Parkinson’s disease, and most notoriously for plaintiffs in glyphosate litigation, where her witnessing is often done for the Wisner Baum law firm that employs the son of Robert F. Kennedy, Jr.[7]

The conflict of interest disclosure statement is hardly the only misleading aspect of the Prada paper. At the end of the paper, the authors state, with respect to funding, that their “study was supported by NIH (R35ES031688; U54CA267776).” Some people may incorrectly believe that the Prada review was directly sponsored and funded by the National Institutes of Health. Nothing could be further from the truth.

The research grant referenced, R35ES031688, is a National Institute of Environmental Health Sciences (NIEHS) research grant. The curious reader might inquire whether and why the NIEHS would be concerned about a pharmacological issue. The short answer is that the NIEHS is not, and that this grant has nothing to do with children’s neurological status in relation to their mother’s ingestion of acetaminophen.

The NIEHS awarded this research grant to Andrea Baccarelli, while he was at Columbia University, for his project “Extracellular Vesicles in Environmental Epidemiology Studies of Aging.” The research focuses on extracellular vesicles (EVs) and their role in environmental health, particularly as it relates to aging. What Baccarelli promised to do with this NIEHS grant was to study the effects of air pollution on accelerated brain aging, and disease states such as dementia. Baccarelli noted that his focus would be on inter-cellular communication enabled by extracellular vesicles, in reaction to air pollution. The described research would understandably be viewed as potentially relevant to the NIEHS mission statement, but it has nothing to do with autism among children of women who ingested acetaminophen during pregnancy. The phrases “extracellular vesicles” and “air pollution” do not appear in the Prada review.

The second grant listed under funding for the Prada review was U54CA267776. The U54 designation marks this as a career award, not tied to any specific topic or to this published work. Ironically, the grant is a diversity, equity, and inclusion grant to the Mount Sinai Icahn School of Medicine, in Manhattan. The Icahn School has long had one of the most ethnically, racially, and culturally diverse faculties of any medical school, and hardly needs financial incentives to hire minority physicians and scientists.

The NIH awarded grant U54CA267776 for “Cohort Cluster Hiring Initiative at Icahn School of Medicine at Mount Sinai.” The NIH describes the grant as aiming to reduce “[t]he barriers to research and career success for underrepresented groups in academic medicine.” The text of the U54 grant is written largely in bureaucratic jargon, which may require a degree in DEI to understand fully. What is abundantly clear is that nothing in this U54 grant, or in its stated criteria for evaluation, has anything to do with studying the teratologic potential of acetaminophen.

What so far has escaped the media’s attention is that Prada and colleagues did not have NIH (or NIEHS) support for their acetaminophen review. They had career-level support for DEI purposes, or perhaps general “walking-around” money for research on environmental pollution and brain aging, which has nothing to do with the subject of their navigation guide review. The authors of the Prada review never prepared a study proposal related to acetaminophen for evaluation by a funding committee at NIH. The authors never submitted a protocol to the NIH, and the NIH provided no peer review or guidance for the authors’ acetaminophen review. In short, there is nothing that marks the Prada review as an NIH work product other than the over-claiming of the authors with respect to funding sources.

The Prada review has attracted a lot of attention in the media and from the worm-brained Secretary of Health and Human Services. An article in the Washington Post described the Prada review as NIH funded, which tracks the paper’s misleading disclosure.[8] The media no doubt jumped on the publication of the Prada review last month because Secretary Kennedy promised to reveal the cause of autism by September. We can imagine that Kennedy will be tempted to embrace the Prada review because he can falsely mischaracterize it as an NIH-funded review.

Not only is the funding claim dodgy, but so is the suggestion that the review supports a conclusion of causation between maternal ingestion of acetaminophen and autism in children. The lead author, Dr. Diddier Prada, noted the frequent confusion between correlation and causation and explicitly stated that the authors of the review “cannot answer the question about causation.”[9]


[1] Diddier Prada, Beate Ritz, Ann Z. Bauer and Andrea A. Baccarelli, “Evaluation of the evidence on acetaminophen use and neurodevelopmental disorders using the Navigation Guide methodology,” 24 Envt’l Health 56 (2025).

[2] Ann Z. Bauer et al., “Paracetamol Use During Pregnancy — A Call for Precautionary Action,” 17 Nature Rev. Endocrinology 757 (2021).

[3] See Tracey J. Woodruff, Patrice Sutton, and The Navigation Guide Work Group, “An Evidence-Based Medicine Methodology To Bridge The Gap Between Clinical And Environmental Health Sciences,” 30 Health Affairs 931 (May 2011).

[4] In re Acetaminophen ASD-ADHD Prods. Liab. Litig., 707 F. Supp. 3d 309, 2023 WL 8711617 (S.D.N.Y. 2023) (Cote, J.).

[5] Id. at 334.

[6] See F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[7] See, e.g., In re Roundup Prods. Liab. Litig., 390 F. Supp. 3d 1102 (N.D. Cal. 2018); Barrera v. Monsanto Co., Del. Super. Ct. (May 31, 2019); Pilliod v. Monsanto Co., 67 Cal. App. 5th 591, 282 Cal. Rptr. 3d 679 (2021). See also Dan Charles, “Taking the stand: For scientists, going to court as an expert witness brings risks and rewards,” 383 Science 942 (Feb. 29, 2024) (quoting Ritz as suggesting that she was reluctant to get involved as an expert witness).

[8] Ariana Eunjung Cha, Caitlin Gilbert and Lauren Weber, “MAHA activists have been pushing for more investigation into use of the common pain killer during pregnancy,” Wash. Post (Sept. 5, 2025). See also Liz Essley Whyte & Nidhi Subbaraman, “RFK Jr., HHS to Link Autism to Tylenol Use in Pregnancy and Folate Deficiencies,” Wall St. J. (Sept. 5, 2025).

[9] Jess Steier, “Saturday Morning Thoughts on the Tylenol-Autism News: The public health whiplash continues as we play another round of ‘autism cause’ roulette,” Unbiased Science (substack) (Sept. 06, 2025).

AAAS Conference on Scientific Evidence and the Courts

September 8th, 2025

Back in September 2023, the American Association for the Advancement of Science (AAAS), with its Center for Scientific Responsibility and Justice, sponsored a two-day meeting on Scientific Evidence and the Courts. If there were notices for this conference, I missed them. The meeting presentations are now available online. Judging from camera views of the audience, the conference did not appear to be well attended. Most of the material was forgettable, but some of the presentations are worth watching.

Jennifer L. Mnookin opened the conference with a keynote presentation on “Where Law and Science Meet.” Chancellor Mnookin presented a broad overview and some interesting insights on the development of the evidence law of expert witness testimony.

Following Mnookin, Professors Ronald Allen and Andrew Jurs presented on the “Unintended Impacts [sic] of the Daubert Standard.” The conference took place only a few months before the amendment to Rule 702 became effective, and the reference to a “Daubert” standard was untoward. Allen’s comments followed the path of his previous articles. Jurs presented some empirical legal research, which seemed flawed for its assumption that the Frye standard was universally applied in federal court before the advent of Daubert. Both standards have been applied heterogeneously; Frye is often not applied at all, and Daubert is often flyblown by judges hostile to the gatekeeping enterprise. Jurs’s attempt to assess whether the two standards lead to different outcomes thus seemed both invalid and very much beside the point. Both presenters missed the key point of Daubert, a case in which plaintiffs’ counsel advocated for no standard at all, beyond basic subject-matter qualification, for giving expert opinions in court.

In a session on “An International Perspective,” Scott Carlson discussed the efforts of the American Bar Association (ABA), and its Center for Global Programs, in supporting judges in foreign countries. Prateek Sibal discussed the history and work of the UNESCO Global Judges Initiative. My sincere wish is that the ABA would support judges more in the United States.

Panelists Valerie P. Hans, Emily Murphy, and Dr. Michael J. Saks presented on various jury issues, in a session “In the Minds of the Jury.” The presentations on how foreign countries process expert witness testimony lacked any mention that juries rarely, if ever, sit in civil cases involving complex technical and scientific issues.

Two editors of scientific journals, Adriana Bankston and Valda Vinson, along with law professor Michael Saks, spoke about peer review and publication, in a session “As a Matter of Fact: ‘General Acceptance’ in Emerging vs. Established Science.” Their discussion of the publication process shed very little light on how courts and juries should assess the validity of specific papers, particularly in view of the lax practices at many journals. Towards the end of this session, a question from the audience proved very revealing of the prejudices of the law professor on the panel. The questioner rose to complain that, after she began research on a topic with litigation relevance, her research was frequently questioned. She asked the panel how she might deal with the annoyance of being questioned. Some on the panel basically urged her to buck up, but the law professor invoked the spirit of agnotologist, and lawsuit industry expert witness, David Michaels, to suggest that “manufacturing doubt” was just a corporate tactic in the face of scientific evidence. The prejudice against corporate speech is remarkable when the lawsuit industry has a long history of playing the ad hominem game in advancing its pecuniary interests.

The session that followed addressed how trustworthy science might best be put before courts. The organizers described this session, “Utilizing Scientific and Technical Expertise,” as going to the heart of the issues targeted by the conference. Joe S. Cecil, Deanne M. Ottaviano, and Shari Seidman Diamond discussed how scientific expertise enters the evidentiary record in American courtrooms. Their presentations were interesting, but curiously no one mentioned that the primary avenue for expert witness opinion is oral testimony!

Joe Cecil discussed methods judges have to obtain scientific and technical evidence to advance justice. (By this I hope he meant the truth, and not just the outcome preferred by social justice warriors.) As noted, Cecil did not focus on the ordinary methods of direct and cross-examination of party expert witnesses; rather, he identified other methods of introducing expertise into the courtroom for the benefit of the judge or the jury. Only one suggestion, the appointment of non-party expert witnesses by the court, really affects jury comprehension. The other methods provide expertise only to the trial judge, who perhaps is challenged to make a ruling under Federal Rule of Evidence 702. The federal courts have inherent supervisory power to appoint technical advisors to act as special law clerks on discrete issues. Similarly, appointed special masters can address technical implementation issues, subject to the district judge’s control. Judges are always free to read outside the briefs and testimony, but such conduct raises ethical and notice issues. The Reference Manual on Scientific Evidence (RMSE) sits on every federal judge’s bookshelf, even if in pristine, unused condition. Judges can at least read the RMSE on specific issues without having to disclose their extra-curricular research to the parties. Of course, parties are well advised to consider any materials in the RMSE that support or oppose their contentions.

In discussing the RMSE, Cecil noted that the fourth edition was in the works. He also mentioned that all the old chapter topics would be carried forward to the fourth edition, and that new topics would include eyewitness identification, computer science, artificial intelligence, and climate science. Sadly, there will be no chapter on genetic determination of disease, but perhaps the clinical medicine chapter will take on the subject in greater detail than previous editions. This conference took place two years ago, and yet the RMSE, fourth edition, is still not published. The National Academies website previously listed the project as completed, but the site now describes the work as “in progress.”

Joe Cecil’s analysis of the various extraordinary expert techniques was pretty much spot on, especially his assessment that “experiments” with court-appointed experts were often failures or at best modest successes. The discussion of Judge Pointer’s Rule 706 independent expert witnesses in the silicone breast implant litigation, MDL 926, seemed to lack context. Cecil acknowledged that the court’s expert witnesses contributed some value to admissibility decisions, but Judge Pointer notoriously did not believe that he, as the MDL judge, had any responsibility for Rule 702 determinations, and he made none except in cases that he tried in the Northern District of Alabama. (And those decisions came before the Science Panel was appointed.) So the Rule 706 witnesses really could not have aided in admissibility decisions.

The real value – in my view – of the Science Panel was that it demonstrated that Judge Pointer was quite wrong in believing that both sides’ expert witnesses were simply “too extreme,” or too partisan, and that the truth was somehow in the middle. Indeed, Judge Pointer said so on many occasions, and he was judicially gobsmacked when all four of his experts roundly rejected the plaintiffs’ distortions of the science of immunology, epidemiology, toxicology, and rheumatology. The courts’ expert witnesses sat for discovery depositions, and then gave testimony de bene esse. To my knowledge, their testimony was never admitted in any of the subsequent trials.

Judge Jed Rakoff gave an interesting presentation, “Strengthening Cooperation Between the Scientific Enterprise and the Justice System,” on the intersection between scientific and legal expertise and the need for their better integration. Judge Rakoff focused on the astonishing lack of compliance by trial judges with the gatekeeping requirements of Rule 702 in addressing the admissibility of forensic evidence. Several subsequent panels also addressed forensic topics, including “A Texas Case Study in Accountability for Forensic Sciences,” “Innovations in Investigative Technologies Improvements and Drawbacks,” “Artificial Intelligence and the Courts,” “Wrongful Convictions and Changed Science: Statutes,” and “Standing Up for Justice: When the Law and Science Work Hand-in-Hand.”

One of the more curious sessions was on “Statistical Modeling and Causation Science,” presented by the American Statistical Association along with the AAAS. Maria Cuellar, from the University of Pennsylvania, discussed the role of statistical thinking in causal assessment, with slides that referred to a nonparametric estimator for the probability of causation. Cuellar, however, never defined what an estimator was; nor did she differentiate nonparametric from parametric estimators. She displayed other equations, again without explaining their origin or identifying the meaning of their symbols. Similarly, Rochelle E. Tractenberg discussed the use of statistics as evidence and as part of drawing causal inferences in litigation, in a model of unclarity. At one point, Tractenberg appeared to suggest that general causation could be taken from regulatory pronouncements. Her discussion of glyphosate implied that general causation was established, which may have led me to disregard her presentation.
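For readers left as puzzled as the audience may have been, the “probability of causation” invoked on Cuellar’s slides has a conventional epidemiologic definition (my gloss here, not a reconstruction of her presentation):

```latex
% Probability of causation (the excess fraction of risk among the exposed),
% where RR denotes the relative risk of disease given exposure:
\[
  \mathrm{PC} \;=\; \frac{\mathrm{RR} - 1}{\mathrm{RR}}, \qquad \mathrm{RR} > 1.
\]
% A nonparametric estimator of PC substitutes the observed incidence
% proportions in exposed and unexposed groups for RR, without assuming
% any parametric model for the underlying risk function.
```

On this conventional definition, a relative risk of two corresponds to a probability of causation of one half, which is why so much forensic attention has fixated on the “relative risk greater than two” threshold.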

Finally, the conference sported a discussion, “Toxic Tort 2.0: Emerging Trends in Climate Change Related Litigation.” The two presenters were Dr. L. Delta Merner, the “Lead Scientist” for the Science Hub for Climate Litigation, Union of Concerned Scientists, and Dr. Paul A. Hanle, Visiting Scholar and Founder of the Climate Judiciary Project, Environmental Law Institute. The Science Hub actively promotes climate change litigation, which made me wonder whether its scientists are involved in that new chapter in the upcoming fourth edition of the Reference Manual.

Lack of Trust in Science – The Situation Our Situation Is In

August 29th, 2025

The United States is in political crisis as its citizens are frogmarched into an authoritarian, illiberal, and unlawful dystopia. The seriousness of the political situation makes it difficult to focus on scientific issues, but as with past fascist regimes in history, the crisis is not limited to any one sphere of life in the United States.

Scholars of fascism have pointed out that not all fascist regimes are the same, but there are some key features that give them all a family resemblance. In the political realm, fascist leaders point to an idyllic history, however mythical or false, in which the country was once great. The greatness has been eroded and squandered by the country’s enemies, internal and external. Confronting enemies within and without is an emergency, which cannot be addressed within the rule of law. Only an authoritarian leader can fix it by suspending the rule of law.

Fascism does not operate solely in the political sphere, but insists upon ideological purity in art, culture, education, business, finance, military, law, and science.[1]

Yes, even science. Nazi Germany had its bogus science of racial purity. The Soviet Union had its Lysenkoism. Theocratic fascist regimes, such as Iran or the United States, have their “god talk” and blasphemy squads, which suppress scientific curiosity, experimentation, and development, except for the creation of weapons (where replicability, validity, and predictive accuracy really matter).

There are various reasons for Felonious Trump’s election, but the epistemic sin of credulousness of the American people is certainly one of them. We are living in Orwell’s 1984 world where many people have been tethered to TV screens to receive their daily influx of state-approved propaganda. Character for truth has ceased to be a virtue. “And even truth can become a lie in the mouth of a born liar.”[2]

The credulity of the American people has manifested as distrust in scientific expertise and a willingness to believe charlatans such as Robert Kennedy, Jr. The phenomenon of transferring trust from legitimate scientists to charlatans is probably one of the clearest and strongest symptoms of our current malaise.

Professor Arthur L. Caplan[3] is a scientist and medical ethicist who has never been shy about asking discomforting questions. Not surprisingly, Caplan has spoken out against some of the bone-headed anti-science actions of the present regime in Washington.[4]

In an essay entitled “How Stupid has Science Been?” Caplan asks:

“So how can U.S. President Trump, Secretary of Health Robert F. Kennedy, Jr., or Director of the Centers for Medicare and Medicaid Mehmet Oz and their enthusiastic followers be succeeding in defunding research and installing ideological oversight and censorship that is crushing science, technology and engineering and will for many years to come?”[5]

Caplan blames the scientific community itself, in part, for the current crisis: it disparaged public engagement and discouraged scientists from engaging with the public. Obviously, Caplan is not thinking of the cadre of scientists who seek phony validation by becoming highly paid expert witnesses for the lawsuit industry. Nor is he thinking of the dodgy TV doctors such as Dr. Oz. Caplan’s focus is on the harm done to the careers of accomplished scientists, such as the late Carl Sagan, who was denied tenure at Harvard University and membership in the National Academy of Sciences because his popularizing efforts eclipsed his substantial scientific accomplishments. Caplan thus blames the American scientific establishment itself for having “disparaged its public communication as unnecessary and looked down on those few who tried to educate broader audiences about the wonders, benefits, methods and advancements of science.”

Professor Caplan argues that in popularizing scientific ideas, theories, and methods, scientists – such as the late Carl Sagan – undermined their own careers. The result is that high-achieving scientists ignored the public square and retreated into the scientific community’s ivory tower. Caplan’s critique of the detachment of the scientific community could well be extended to its frequent failures to speak out against charlatans in its own midst, and against politicians who distort and misrepresent scientific research in the public arena.

Caplan is, however, very clear that the scientific community’s insularity, and its “resulting failure to communicate about science to the public is a major factor in explaining why so few have rallied to science’s defense today against government policies promoting ignorance, illiteracy and quackery.” At this point, it is also clear that frank communications about the government’s promotion of scientific quackery will be punished by the Regime’s cancellation of grants, firings from advisory councils, and retaliation against scientists’ universities.

I take Caplan’s critique to be an invitation to engage in counter-factual thinking about what our current situation might look like if scientists had robustly “occupied the field” of communication and education of the public. Citing a recent article in a Nature journal,[6] Caplan observes that populists and right-wing thinkers have been losing faith in science for years. This diagnosis, however, is not quite accurate. Populists, left and right, have succumbed to motivated reasoning in learning to ignore scientific conclusions, regardless of validity concerns, on emotive or political grounds. This mode of (non)-thinking allows populists, left and right, to subscribe to putative scientific claims without any appreciation of the nuances of scientific inference and threats to validity.

Caplan is right to call out the right-wing attack on science, but some of the attack on science is coming from left-wing populists, such as the worm-brained Robert F. Kennedy, Jr. And historically, there have been many instances in which environmental and occupational health advocates have outrun their headlights to press claims based upon hypothetical models and unvalidated assumptions.

All people, whether they lean politically left or right, are vulnerable to the emperor of all cognitive biases – apophenia, the psychological tendency to discern causal patterns in random noise. Although apophenia was originally thought of as an abnormal psychological process,[7] the phenomenon is common to “normal” as well as mentally ill persons.[8]

Many people, left and right, are willing to endorse, or subscribe to, pseudo-scientific claims based upon their motivations to accept claims, without regard to the methods used to support those claims. Professor Caplan is correct that serious scientists have been too shy to step into the public square, and the scientific community should encourage, not punish, engagement with the public. (Caplan passes over the problem of how university publicists often misrepresent and exaggerate the findings and research of university scientists.)

The problem of lack of trust in science, however, is a much bigger problem. On average, American education and acumen in math and science lags that of many countries in the world,[9] even as post-secondary education in the United States excels and attracts many of the best and the brightest domestically and internationally. Immigrants have helped American universities keep their leadership role in the world, despite shortfalls in domestic funding of primary and secondary science education. Of course, this international leadership in science and math university education, gained with the help of immigrants, is now under attack from the MAGAT regime.[10]

No one is eager to blame those who evidence their lack of trust in science, and to be sure, there is plenty of blame to go around. There are multiple systemic causes of poor quality science and improvident claims to scientific knowledge.[11] In assessing the causes of the prevalent distrust in science, we should not lose sight of the responsibility of those who claim that scientists cannot be trusted. There is at bottom a widespread moral failure in the land.  “It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”[12]

доверяй, но проверяй! (Trust, but verify!)


[1] Zachary Basu, “Trump knee-caps America’s institutions,” Axios (Aug. 27, 2025); Elisabeth Zerofsky, “Robert Paxton, a Leading Historian of Fascism, Long Resisted Applying the Label to Trumpism. Then He Changed His Mind,” N.Y. Times Mag. 45 (Oct. 27, 2024).

[2] Thomas Mann, “The Problem of Freedom: An Address to the Undergraduates and Faculty of Rutgers University at Convocation,” (April 28, 1939).

[3] Arthur L. Caplan, PhD., is the Drs. William F. and Virginia Connolly Mitty Professor of Bioethics, Department of Population Health, and the founder of  the Division of Medical Ethics at NYU Grossman School of Medicine’s Department of Population Health in New York City. I had the pleasure to meet Professor Caplan, and present to one of his classes, back when he taught at the University of Pennsylvania.

[4] See, e.g., Arthur L. Caplan, “Fed Action Toward Medical Journals Is ‘Dangerous’, Ethicist Says,” Medscape (Aug. 26, 2025).

[5] Arthur L. Caplan, “How Stupid has Science Been?” EMBO Reports (Aug. 2025).

[6] Vukašin Gligorić, Gerben A. van Kleef, and Bastiaan T. Rutjens, “Political ideology and trust in scientists in the USA,” 9 Nature Human Behaviour 1501 (2025) (“Since the 1980s, trust of science among conservatives in America has been plummeting”).

[7] See Aaron L Mishara, “Klaus Conrad (1905–1961): Delusional Mood, Psychosis, and Beginning Schizophrenia,” 36 Schizophr Bull. 9 (2009); Scott D. Blain, Julia M. Longenecker, Rachael G. Grazioplene, Bonnie Klimes-Dougan, and Colin G. DeYoung, “Apophenia as the disposition to false positives: A unifying framework for openness and psychoticism,” 129 J. Abnormal Psych. 279 (2020).

[8] Donna L Roberts, “Apophenia: The Human Tendency to Find Patterns in Randomness,” Medium (Jan. 9, 2024); Ahmed S. Sultan & Maryam Jessri, “Pathology is Always Around Us: Apophenia in Pathology, a Remarkable Unreported Phenomenon,” 7 Diseases 54 (2019).

[9] Drew DeSilver, “U.S. students’ academic achievement still lags that of their peers in many other countries,” Pew Research Center (Feb. 15, 2017).

[10] Is it not high time that we call the movement by its essential motivation: make American great again for the Trumps?

[11] See, e.g., Lex Bouter, Mai Har Sham & Sabine Kleinert, “The Lancet–World Conferences on Research Integrity Foundation Commission on Research Integrity,” 406 The Lancet 896 (2025).

[12] William K. Clifford, “The Ethics of Belief,” 29 Contemporary Rev. 289, 295 (1877).

Junk Journalism

August 19th, 2025

There is plenty of room for a healthy science-based environmentalism, but finding the room in the American political house has always been difficult. The current administration brings together the horseshoe wacko excesses of the worm-brained Robert Kennedy, Jr., and the crony capitalism of Felonious Trump. In this toxic, post-truth milieu, environmental groups such as Sierra Club and Greenpeace are both complaining about their setbacks[1] and stepping up their own propaganda.

In the face of advocacy group propaganda, journalists should apply a strong science filter before misinformation and emotive appeals are passed off as scientific truth. Sadly, even well-motivated manufacturing industry can rarely count on the mainstream media for sympathy or accuracy in reporting environmental issues. Readers of major newspapers, however, deserve careful reporting and the separation of fact from hyperbole.

A recent article in the Washington Post makes the point. Activist journalist Amudalat Ajasa published a story this week under the headline “Her dogs kept dying, and she got cancer. Then they tested her water.”[2] Oh my goodness; that must be a scandal; right? Cue the outrage.

Widespread journalistic practice means that Ms. Ajasa may not have written the headline; it was likely an editor who concocted the click-bait headline suggesting that something in the water killed a woman’s dogs and caused her cancer. Upon reading the story, however, readers would be justified in concluding that the author was clearly in on the ploy to misinform. So shame on both the would-be journalist and her editor.

Ms. Ajasa tells us that the residents of Elkton, Maryland, worry about “forever chemicals” in their water, a worry instigated in large measure by mass and social media, advocacy NGOs, state and federal agencies, and the lawsuit industry. Focusing on her anecdotal datum, Ajasa reports that Ms. Debbie Blankenship, a resident of the Elkton area, had “chalked up her health problems, including losing her right leg to an infection, to bad luck.” Bad luck? Ajasa must have gotten a HIPAA release and waiver to discuss Ms. Blankenship’s medical condition in a very public forum because the WaPo story discusses health details and features photographs of Ms. Blankenship, who is clearly obese, has had one leg amputated, and is confined to a wheelchair. Apparently, neither Ms. Blankenship nor Ms. Ajasa ever considered that lifestyle factors combined to cause Ms. Blankenship to develop diabetes mellitus and cancer (of some unspecified type).

The obvious, however, is ignored or pushed aside by Ajasa’s reporting that in 2023, W.L. Gore & Associates, a manufacturer of Gore-Tex, telephoned with a request to test the Blankenship water well for perfluorooctanoic acid (PFOA), which had been used in its manufacture of Teflon (polytetrafluoroethylene or PTFE). PFOA is one of the family of PFAS chemicals that has been the subject of a regulatory furor in recent years, including the issuance of action levels below the limits of detection for many laboratories.

The request to test the Blankenship water well was triggered by a lawsuit, filed in 2022, by a former W.L. Gore employee, Stephen Sutton. The lawsuit industry jumped on Sutton’s lawsuit with a class action environmental complaint in 2024. In any event, according to Ms. Ajasa, the company’s request to test the Blankenship well led to the eureka moment of scientific insight. Ms. Blankenship and her dogs drank well water, but her husband and children always drank bottled water. She was poisoned by the well water. Quod erat demonstrandum!

Ajasa’s reporting forces the reader to wade through a lot of activist propaganda and scientific hooey, such as claims, passed off as scientific fact, that there is no safe level of PFOA. Agency assumptions and precautionary principle statements are not facts. Ignorance about a no-observable-effect level is not knowledge that there is no safe level.

The WaPo readers are similarly regaled with a claim, masquerading as a statement of fact, that PFAS chemicals have “been linked to serious health problems including high cholesterol, cardiovascular disease, infertility, low birth weight and certain cancers.” “Link” is a meaningless term in science, and thus a favorite of sloppy journalists. Whether a link is an association, a cause, a suggestion from an anecdote, a lawyer’s allegation, or a claim by an environmental group is anyone’s guess, and is left to the reader’s imagination. Whether Ms. Blankenship’s cancer is one of the “certain cancers” is not reported. Sloppy journalism of this sort, whether intentional, reckless, or negligent, undermines evidence-based legislation, regulation, and adjudication. “The credulous man is father to the liar and the cheat.”[3]

Ms. Ajasa eventually gets around to telling her readers that the water samples from Ms. Blankenship’s well contained PFOA concentrations of 3.4 parts per trillion (ppt), below the Environmental Protection Agency’s precautionary and unsupported maximum action level of 4 ppt. Rather than looking for other potential causes of Ms. Blankenship’s health problems, Ms. Ajasa glibly channels the EPA’s unsupported assertion “that small amounts of the chemical can cause serious health impacts [sic], including cancer.” The reader is left to believe that this is a fact and that the undefined “small amounts” must include the 3.4 ppt detected in Blankenship’s well. Ajasa uses innuendo to substitute for the absence of evidence.

Journalists have an important role in informing and educating the public about scientific issues and controversies. Innuendo, unquestioned assumptions, and sloppy thinking – this is how the junk journalism sausage is made. Junk journalism is much like junk science. If we understand that junk journalism is a form of information pollution, then a well-considered, evidence-based environmentalism calls for remediation. 


[1] David Gelles, Claire Brown and Karen Zraick, “Environmental Groups Face ‘Generational’ Setbacks Under Trump,” N.Y. Times (Aug. 16, 2025). The list of aggrieved seems endless: Sierra Club, Greenpeace, Climate and Communities Institute, Natural Resources Defense Council, Earthjustice, the Southern Environmental Law Center, etc.

[2] Amudalat Ajasa, “Her dogs kept dying, and she got cancer. Then they tested her water,” Wash. Post (Aug. 14, 2025).

[3] William Kingdon Clifford, “The Ethics of Belief” (1877), in Leslie Stephen & Sir Frederick Pollock, eds., The Ethics of Belief and Other Essays 70, 77 (1947).

Victor Schwartz – An Intellectual Leader of the American Defense Bar

August 8th, 2025

Victor Elliot Schwartz died late last month. His passing was marked by several obituaries from his colleagues, friends, and family, recounting his many achievements.[1] Within the defense bar, Victor was truly a thought leader and tort scholar, as well as an advocate for sensible reform. Victor’s work had tremendous influence, although sadly, because of a rapacious, rent-seeking lawsuit industry, not as much influence as it should have had.

His work and insights inspired my own efforts on several fronts. Just as I was coming out of my clerkship, Victor published a law review article, in the University of Cincinnati Law Review, on the often otiose warnings required for products and raw materials sold for use in the workplaces of large manufacturing concerns.[2] Victor’s scholarship became essential in fashioning a defense to dozens of silicosis cases filed in western Pennsylvania in the early 1980s. The pursuer was a law firm that hoped to exploit the usual David-Goliath narrative from its asbestos cases, coming out of the large U.S. Steel and Bethlehem Steel factories and foundries. Victor’s work emphasized the importance of the epistemic context of occupational exposure cases that arose from employment in factories owned by sophisticated users and purchasers of potentially hazardous materials. Along with my co-defense counsel, we implemented Victor’s insights in the Cambria County, Pennsylvania, silicosis litigation. When the dust settled, the pursuer and his clients went away empty-handed.[3]

Victor’s insights into the law and communication theory were equally valuable in asbestos litigation. Because most cases in Philadelphia were tried through the cockamamie reverse-bifurcation procedure, the defense rarely got a chance to put on a state-of-the-art or sophisticated intermediary defense. There was one blessed judge, the Hon. Levan Gordon, who disdained reverse bifurcation, and gave me the opening to present both defenses in response to plaintiffs’ counsel’s insistence upon trying negligence and punitive damage claims in an all-issue case. Although I had the weaker side of the medical dispute, my adversary turned the case into a passion play on failure to warn. The jury returned a no-cause verdict for the defense, without reaching the medical claim.[4]

Some years later, I was invited by the National Industrial Sand Association to talk about the recrudescence of silicosis litigation. The sand mining companies were very concerned about the bogus radiographic screenings and liability claims. Victor was also invited, but as things turned out, I spoke first. As a brash young lawyer, I thought I should include some concrete recommendations on what the companies could do to avoid liability. I suggested that they ask for indemnifications for any third-party suits by the buyers’ employees. I acknowledged that this was a tough ask, so I had a fall-back suggestion that the firms put recitations in the sales documents that the buyer warrants and represents that it is conversant with all pertinent regulations and industrial hygiene procedures to handle silica sand safely in its business. The audience, made up of owners and executives, was clearly uncomfortable over the suggestion that they request such concessions from their buyers in a highly competitive market. The comments were hostile, but Victor jumped in and said that he had planned to offer the same suggestions, and that the sand companies should take them very seriously. Victor had the gray hair and the gravitas that I lacked; the company executives piped down, and I got on with my talk.

Victor was a natural, and as a young lawyer, I looked to him as one of my leading role models. Years later, he encouraged me to seek membership in the American Law Institute, and offered helpful guidance about the application process. More recently, when the directors of the Center for Truth in Science wanted to create a legal advisory council, Victor Schwartz was our number one recruit. He will be missed.


[1] “Shook Mourns the Passing of Beloved Public Policy Chair Victor Schwartz,” SHB (Jul. 29, 2025); PR Newswire (Jul. 29, 2025); Legacy.com (Jul. 29, 2025).

[2] Victor E. Schwartz & Russell W. Driver, “Warnings in the Workplace: The Need for a Synthesis of Law and Communication Theory,” 52 U. Cin. L. Rev. 38 (1983). See also Victor E. Schwartz, Mark A. Behrens & Andrew W. Crouse, “Getting the Sand Out of the Eyes of the Law: The Need for a Clear Rule for Sand Suppliers in Texas After Humble Sand & Gravel, Inc. v. Gomez,” 37 St. Mary’s Law J. 283 (2006).

[3] Phillips v. A.P. Green Co., 428 Pa. Super. 167, 630 A.2d 874 (1993) (citing Schwartz & Driver), aff’d on other grounds sub nom. Phillips v. A-Best Products Co., 542 Pa. 124, 665 A.2d 1167 (1995) (citing lack of proximate cause between failure to warn and harm); Smith v. Walter C. Best, Inc., 927 F.2d 736 (3rd Cir. 1990) (Ohio law); Goodbar v. Whitehead Bros., 591 F. Supp. 552 (W.D.Va. 1984) (citing Schwartz & Driver), aff’d sub nom. Beale v. Hardy, 769 F.2d 213 (4th Cir. 1985). See Schachtman, “History of Silicosis Litigation,” Tortini (Jan. 31, 2019).

[4] O’Donnell v. The Celotex Corp., Phila. Cty. Ct. C.P., July 1982 Term, Case No. 1619 (trial before Hon. Levan Gordon, and a jury; May 1989) (defense verdict in case in which plaintiffs presented negligence claims and defendants presented extensive evidence of federal government’s superior knowledge of hazard and control of workplace). See Schachtman, “Asbestos and Asbestos Litigation Are Forever,” Tortini (Sep. 16, 2014); “Divine Intervention in Litigation,” Tortini (Jan. 27, 2018).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.