TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

How Science Works in the New Reference Manual on Scientific Evidence

March 12th, 2026

The Second and Third Editions of the Reference Manual on Scientific Evidence contained a chapter, “How Science Works,” by Professor David Goodstein. This chapter ambitiously set out to cover philosophy and sociology of science to help orient judges as strangers in a strange land. Goodstein’s chapter had been a useful introduction to scientific methodology, and it countered some of the antic ideas seen in some judicial opinions, as well as in some other chapters of the Manual. Goodstein brought a good deal of experience and expertise to the task. He was a distinguished professor of physics and Vice Provost at the California Institute of Technology, and he had written engagingly about scientific discovery and the pathology of science.[1] Sadly, Goodstein died in April 2024. His death may have had some role in the delayed publication of the Fourth Edition of the Manual,[2] and the improvident replacement of his chapter with a new chapter written by authors less articulate about how science works.

The substitute chapter on “How Science Works” was written by two authors considerably less accomplished than the late Professor Goodstein.[3] Michael Weisberg is a professor of philosophy at the University of Pennsylvania, where he is the deputy director of Perry World House, which “analyzes global policy challenges through the realms of climate, democracy, global justice and human rights, and security.” The connection with Perry World House may explain the new chapter’s heavy reliance upon the development of the chlorofluorocarbon (CFC) connection to ozone layer depletion as an exemplar of scientific discovery and knowledge. The University of Pennsylvania webpage describes Weisberg as “educat[ing] the next generation of environmental leaders in the classroom, at the negotiating table, and in the field, ensuring that their voices have maximal impact on addressing the climate crisis.”[4] So we have a philosopher of advocacy science, as it were. Some readers might think those credentials are not optimal for preparing a nuts-and-bolts description of how science works. Reading sections of the new chapter will not diminish their concerns.

Joining Weisberg on this new version of “How Science Works” is Anastasia Thanukos, who works at the University of California Museum of Paleontology. Thanukos has her master’s degree in integrative biology, and her doctorate in science education.[5]

The new “method” chapter has some virtues. As did Goodstein’s chapter, the new authors put peer review into a realistic perspective that should keep judges from being snookered into admitting weak or bogus evidence because it had been published in a peer-reviewed journal.[6] The authors should have gone much further in pointing out that the rise of predatory and pay-to-play journals, as well as journals controlled by advocacy groups, has undermined much of the publishing model of modern science.

Weisberg and Thanukos discuss “expertise” in a way that is interesting but irrelevant to legal cases. They seem blithely unaware that the standard for qualifying an expert witness is extremely low. Who will disabuse them when they argue that “[i]t is worth evaluating the closeness of a scientist’s disciplinary expertise to a scientific topic on which expert testimony is delivered”?[7] In what emerges as a consistent pattern of giving anti-manufacturing industry examples, the authors point to Richard Scorer as an accomplished scientist, who had no specific expertise in CFC ozone depletion. Notwithstanding the lack of specific expertise, an industry-backed group promoted Scorer’s views that criticized the CFC-ozone depletion hypothesis.[8] Citing Naomi Oreskes, the new Manual chapter states that “[t]he problem of scientists with legitimate expertise in one field weighing in on a scientific question outside their area of expertise is a pernicious one that has affected public acceptance of science and policy on issues such as climate change and tobacco exposure.”[9] Later, when Weisberg and Thanukos discuss the Milward case, they miss the pernicious influence that flowed from allowing Martyn Smith, a toxicologist, to give methodologically muddled opinion testimony on epidemiology. Pernicious is where you find it, and the authors of the new chapter find virtually all untoward instances of poor scientific method and conduct to originate from manufacturing industry.

Weisberg and Thanukos introduce a discussion of the “replication crisis,” a phrase and concept absent from the third edition of the Reference Manual.[10] The authors express some skepticism that there is an actual crisis over replication,[11] but their focus on climate science may mean that they are simply blinded by groupthink in that discipline. Their discussion of retractions omits the steep rise in retraction rates in most scientific disciplines,[12] and the authors ignore the proliferation of poor-quality journals. Positively, the authors introduce a discussion of study preregistration, a notion absent from the third edition of the Manual, and they explain that such preregistration may serve as a bulwark against data dredging and post hoc analyses.[13] Negatively, the authors ignore how frequently preregistered protocols are not used, or are used and then violated.

Weisberg and Thanukos appropriately ignore “weight of the evidence” (WOE) and “inference to the best explanation” (IBE). Readers might (mistakenly) think that the new chapter implicitly rejects WOE, as put forth by Carl Cranor and credulously accepted by the First Circuit in Milward, when the chapter authors insist that 

“the judge’s task requires a deeper examination of the available evidence and methods by which it was arrived at, as well as an assessment of how the community of experts in this area has evaluated or would evaluate the evidence and reasoning in question.”[14]

Contrary to the Milward decision from 2011, the new authors are not shy about stating the obvious: there is good science, and there is bad science. Not all “judgment” about causality is acceptable and fit for submission to juries.[15] Given the judicial resistance to Rule 702, the obvious here requires stating. Weisberg and Thanukos acknowledge that some scientific judgment is unreliable or invalid because it was based upon work that was not carried out in accordance with current standards for scientific investigation and inference.[16] It should not surprise anyone that most of their examples of bad science are the product of manufacturing industry; the authors are oblivious to bad science sponsored by the lawsuit industry or by non-governmental advocacy organizations (NGOs).

Weisberg and Thanukos frame scientific disagreements and debates as governed by both data and ethical norms. Science is not infinitely contestable. There are identifiable norms, including a norm that scientists should “seek relevant information,” and “scrutinize ideas and evidence.”[17] Contrary to Milward’s standard of judicial abstention and credulity in the face of dodgy causal claims, these authors state what should be obvious, that scientific scrutiny involves, among other things, “an evaluation of methods, considering potential biases and oversights.”[18]

The chapter’s authors, non-lawyers, get closer to the heart of the error in Milward’s abstention doctrine with their recognition of what should have been obvious to the authors of the law chapter (Richter & Capra):

“When research relevant to a trial has not yet been scrutinized by a community with the appropriate technical expertise, a judge may be placed in the position of providing or requesting this scrutiny.”[19]  

Rather than some vague, subjective, and content-free WOE standard, Weisberg and Thanukos urge scientists, and by implication judges as well, to engage in serious efforts to “identify and avoid bias” and abide by ethical guidelines.[20] In other (my) words, the new authors agree that there is a standard of care reflected in the norms of science, and consequently there can be deviations from that standard. For Weisberg and Thanukos, compliance with the normative structure of scientific investigations is at the heart of building up accurate and predictive conclusions from data.[21] As part of their communitarian and normative conception of the scientific process, the authors appear to accept the reality and necessity for judges to act as gatekeepers.[22]

And while this recognition of standards and the need to police against deviations from standards is commendable, Weisberg and Thanukos proceed to give an abridgment of scientific method and process that is distorted and erroneous. They steadfastly ignore the concept of hierarchy of evidence, and thus provide illegitimate cover for levelers of evidence. In discussing randomized controlled trials, for instance, they note that such trials are often taken as “the gold standard,” but then they counter, without citation, support, or argument, that such trials “are just one line of evidence among many.”[23] The authors elide discussion and reconciliation of when that “just one line of evidence” conflicts with observational studies.

Notwithstanding their helpful comments about the need to evaluate studies for bias and other errors, these authors enter into the Milward controversy with an observation that assessing many lines of evidence is required, can be difficult for courts, and has led to “controversy.” Citing to papers, including one by the late Margaret Berger at the notorious, lawsuit-industry SKAPP-funded Coronado Conference, Weisberg and Thanukos float the observation that:

“In science, the available evidence (some of which may come from other research programs not designed to test the hypothesis under consideration) is evaluated as a body, along with the strengths, weaknesses, and caveats relating to each type of data, an approach which, some scholars have argued, the judiciary has not always followed.98”[24]

This claim that the available evidence is evaluated as “a body” is presented as a fact about how science works, without any citation or argument. Several comments are in order. First, the claim is at odds with the authors’ own statements that scientific norms require evaluating each study for biases and other disqualifying flaws. Second, the claim is at odds with the authors’ own reference to systematic reviews and meta-analyses,[25] which are governed by protocols with inclusionary and exclusionary criteria for individual studies, and which require consideration of individual study validity before a study enters the “body” of evidence that is quantitatively or qualitatively evaluated. In the authors’ words, “authors delineate both the criteria that studies must meet for inclusion in the review and the methods that will be used to assess the studies.”[26] The Milward case involved an expert witness who had proffered the very opposite of a systematic review in the form of post hoc rejiggering of studies and their data to fit a pre-conceived litigation goal. In the context of addressing the replication crisis, Weisberg and Thanukos correctly observe that “peer review alone cannot ensure that the conclusions of published studies are actually correct, highlighting the responsibility judges bear in evaluating the validity of the methodologies that contributed to a particular piece of research.”[27] Of course, the Milward case involved a hired expert witness whose unprincipled re-analysis of studies was never peer reviewed or published.

Third, the authors could easily have found additional support for the contrary proposition that individual studies must be evaluated before being considered as part of the entire evidentiary display. The IARC Preamble, which roughly describes how that agency arrives at its so-called hazard classifications of human carcinogenicity, specifies that individual studies within each of three streams of evidence are evaluated for validity and soundness before contributing to a sub-conclusion with respect to (1) epidemiology, (2) toxicology, and (3) mechanistic lines of evidence.[28] Each of those three lines of evidence is adjudged “sufficient,” “limited,” or “inadequate,” by specialists in the three respective areas, before an overall evaluation is reached. There is much that is objectionable in the IARC working group procedures, but this division of labor, and the need to consider disparate lines of evidence and studies within each line separately before attempting a synthesis, is present in all systematic review methodology. The suggestion from Weisberg and Thanukos that “the available evidence” in science is “evaluated as a body” is not only unsupported, but it is demonstrably false and misleading.

This claim about holistic evaluation is a fairly transparent but failed attempt to support a claim made in the chapter on the admissibility of expert witness evidence by Liesa Richter and Daniel Capra, who present an exposition of the notorious Milward case, without criticism, in a way that suggests that the case represents appropriate judicial gatekeeping under Rule 702, and that the case is consistent with scientific norms.[29] The chapter on how science works, after having stated a false claim about scientific methodology for synthesizing and integrating disparate lines of evidence, attempts to provide a gloss on the similar and equally benighted claim of Richter and Capra, in footnote 98:

“98. Some scholars have raised concerns that the courts have on occasion unfairly dismissed numerous individual lines of evidence as being flawed or insufficiently conclusive and concluded that evidence is lacking, when in fact the body of evidence, taken as a whole, points to a clear conclusion. For more, see discussion of Milward v. Acuity Specialty Products Group, Inc.; see also Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, in this manual; Berger 2005, supra note 97; and Steve C. Gold, A Fitting Vision of Science for the Courtroom, 3 Wake Forest J.L. & Pol’y 1 (2013).”

Some “scholars” have indeed said such things in their more unscholarly moments; some scholars have criticized Milward, but they are not cited in this new methods chapter. The footnote is accurate, but highly misleading by omission. The First Circuit in Milward also said as much, also without support or justification, and Richter and Capra, in their chapter of the Manual, fourth edition, parrot the Milward case. Weisberg and Thanukos cite to two articles, by Margaret Berger and by Steven Gold, both law professors, not scientists, and both ideologically hostile to Rule 702 gatekeeping. The Berger article was from a lawsuit-industry SKAPP-funded symposium known as the Coronado Conference, and the Gold paper comes out of a symposium sponsored by the lawsuit industry itself and the Center for Progressive Reform, an advocacy NGO to which one of Mr. Milward’s expert witnesses, Carl Cranor, belongs. So the authors of the new science methodology chapter failed to cite any scientific source, but cited to papers by lawyers in the capture of the lawsuit industry, and a single (infamous) decision that ignored Rules 702 and 703, as well as the extensive literature on systematic reviews. Weisberg and Thanukos could have cited many sources that contradicted their claim, and the claim of the lawsuit-industry-sponsored lawyers, but they did not. This is what biased and subversive scholarship looks like.

FUNDING BIAS – THE NEW McCARTHYISM

The selective citation to articles sponsored by the lawsuit industry is ironic in the context of what Weisberg and Thanukos have to say elsewhere about the “funding effect.” Some of what the authors say about personal bias is almost reasonable. For instance, they suggest that funding source is a “valid consideration” in evaluating methodologies and conclusions of expert testimony, and presumably of published studies as well, but not a sufficient reason to exclude such testimony or reliance.[30] Interestingly, these authors ignored the funding and the ideological interests of the symposia they cited in support of the repudiated Milward abstention doctrine.

Over three decades ago, Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), wrote his protest against the obsession with funding in an article that should have been cited in the new chapter, for balance. Rothman described the fixation on funding as the “new McCarthyism in science,” which manifested as intolerance toward industry-sponsored studies, and strict scrutiny of “conflict-of-interest” (COI) disclosures.[31] The new McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying non-disclosure of COIs from scientists who have positional conflicts, or who are aligned with advocacy groups or with the lawsuit industry.

This asymmetrical standard for adjudging conflicts is on full display in the Weisberg and Thanukos chapter, when they claim that “in pharmaceuticals, there is a strong tendency for industry-sponsored trials to favor the industry’s product.”[32] The chapter authors, and their cited source, ignore the context in which pharmaceutical industry scientists publish clinical trial results. A successful clinical trial that showed efficacy with minimal adverse events is the result of years of prior research, including phase I and II trials, and preclinical testing. If the research fails to show efficacy, or shows unreasonable harm, in any of this prior research, the phase III trial is never done and so never published. If the medication is never licensed, the phase III trial will generally not be published. The selection effects are obvious and overwhelming in determining that the published results of phase III trials will be work that favors the sponsor. The “failed” phase III trial may result in a securities class action against the pharmaceutical company. In the realm of observational studies, some work commissioned by manufacturing industry has its origins in the poorly conducted, flawed work of environmental zealots and NGOs. Manufacturing industry has an obvious interest in correcting the scientific record, and again, any carefully done study would rebut that of the zealots and favor the industry sponsor.

Elsewhere, the authors offer a more balanced assessment when they observe that “[a]ll research is potentially influenced by bias, and every funder of research has the potential to introduce a source of bias.”[33] Similarly, the fourth edition chapter notes that “[a]ll scientists have some sort of motivation for their work, and this does not preclude scientific knowledge building, so long as biased methodologies and interpretations are avoided.”[34] Their recognition that motivated reasoning is everywhere suggests that all research should receive scrutiny regardless of apparent or disclosed funding source.[35]

When it comes to providing examples of funding-effect distortions of science, Weisberg and Thanukos seem to blank on instances created by the lawsuit industry or by environmental NGOs. The reader should contrast how readily and stridently the authors point to bias in industry-sponsored research with how the authors tie themselves up with double negatives when making the same point about NGOs:

“That is not to suggest that government- or nongovernmental organization (NGO)-sponsored research is necessarily free from bias.”[36]

The cognitive dissonance is palpable. The only conclusion that could be drawn from such a locution is that Weisberg and Thanukos have not worked very hard to identify and disclose their own biases.

STATISTICS DONE POORLY

When it comes to explaining and discussing the role of statistical methods in the scientific process, Weisberg and Thanukos go off the rails. The new chapter is an unmitigated disaster, which should have been corrected in the peer review and oversight process. The first sign of trouble became apparent upon checking the definition of “p-value” in the chapter’s glossary:

“p-value. A statistic that gives the calculated probability that the null hypothesis could be true even given the observed differences between conditions.”[37]

This definition is the transposition fallacy on steroids. Obviously, a p-value cannot be the probability that the null hypothesis “could be true” when the procedure for calculating a p-value must assume that the null hypothesis is true, along with a specified probability model. Equally important, the p-value does not state a probability about the null hypothesis; it states the probability of observing data that diverge from the null expectation at least as much as the data seen in this particular sample, assuming the null is true. The statistics chapter in the Manual by Hall and Kaye states the meaning correctly. The coverage of statistical concepts by Weisberg and Thanukos should be studiously ignored.
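The point admits of a concrete illustration. The sketch below (a hypothetical trial with invented numbers) computes an exact binomial p-value; note that every probability in the tally is computed on the assumption that the null hypothesis is true, so the result cannot possibly be the probability that the null “could be true”:

```python
# Illustrative sketch (invented numbers): an exact two-sided binomial p-value.
# The null hypothesis (cure rate p_null) is ASSUMED TRUE throughout the
# calculation, so the result cannot be the probability that the null is true.
from math import comb

def binomial_p_value(successes: int, n: int, p_null: float = 0.5) -> float:
    """Sum, under the null model, the probabilities of all outcomes at
    least as far from the null expectation as the observed count."""
    observed_dev = abs(successes - n * p_null)
    total = 0.0
    for k in range(n + 1):
        if abs(k - n * p_null) >= observed_dev - 1e-12:
            # binomial probability of k successes UNDER THE NULL
            total += comb(n, k) * p_null**k * (1 - p_null)**(n - k)
    return total

# hypothetical trial: 60 "cures" out of 100, null cure rate 0.5
p = binomial_p_value(60, 100)
print(round(p, 4))
```

To move from a p-value to a probability that the null hypothesis is true would require a prior probability and Bayes’ theorem; nothing of the sort appears in the calculation.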

The outrageously incorrect definition of p-value in the glossary is not an isolated error. The authors are clearly statistically challenged. In the text of their chapter, they incorrectly describe the p-value, consistently with their aberrant glossary entry:

“[In] the commonly used p-value approach, scientists compare a test hypothesis (e.g., that drug X is effective) to a null (e.g., that there is no difference in cure rates between those who took drug X and those who took a placebo). Scientists then calculate the probability that the null hypothesis could be true even with the observed difference between conditions (e.g., the cure rate of patients taking drug X compared to that of those taking a placebo).”[38]

Weisberg and Thanukos thus conflate frequentist and Bayesian statistics. They also obliterate the meaning of the confidence interval, an important concept for judges and lawyers to understand. Here is how the authors describe the confidence interval in their chapter:

“Evaluating estimates: In science (and in contrast to their lay meanings), the terms uncertainty and error refer to the variability of a set of data that is intended to estimate a single number. Uncertainty and error are generally expressed as a range, within which we are confident that, if the study were repeated, the new result would fall. Scientists often use a 95% confidence interval for this purpose.”[39]

Describing the confidence interval in the same sentence as “uncertainty and error” is bound to induce uncertainty and error. The confidence interval provides a range of estimates based upon random error, and uncertainty only in the form of imprecision in the point estimate. There are of course myriad other kinds of uncertainty and error not captured by the confidence interval. The most important of the authors’ errors is that they assert incorrectly that the confidence interval provides a range within which new results from the study repeated would fall.  This is, again, a variant on the transposition fallacy that the authors commit in their definition of the p-value. The confidence interval provides a range of results that would not be rejected as alternative null hypotheses by the data in the obtained sample. Because of random error, future samples would give different results, with different confidence intervals, which would not be co-extensive with the first obtained confidence interval. To be sure, the statistics chapter states the matter correctly, and the epidemiology chapter finally gets it correct in its text (after having mangled the concept in the second and third editions), but the epidemiology chapter perpetuates its previous errors in defining confidence intervals in its glossary. This sort of issue, and it is a serious one, could have been eliminated had there been meaningful peer review and editorial oversight for consistency and accuracy of the Manual as a whole.
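A short simulation makes the distinction vivid. With invented parameters (normal data, standard deviation treated as known for simplicity), the interval covers the true mean in about 95% of repeated samples, but the point estimate from a repeated study lands inside the original interval only about 83% of the time:

```python
# Simulation sketch (invented parameters; sigma treated as known):
# a 95% confidence interval is calibrated to cover the TRUE mean in ~95%
# of repeated samples; it is NOT a range within which the estimate from a
# repeated study would fall.
import random
from statistics import mean

random.seed(1)
TRUE_MEAN, SIGMA, N, TRIALS = 0.0, 1.0, 30, 20_000
half_width = 1.96 * SIGMA / N**0.5           # 95% interval half-width

covers_truth = replication_inside = 0
for _ in range(TRIALS):
    x1 = mean(random.gauss(TRUE_MEAN, SIGMA) for _ in range(N))  # original study
    x2 = mean(random.gauss(TRUE_MEAN, SIGMA) for _ in range(N))  # replication
    if abs(x1 - TRUE_MEAN) <= half_width:
        covers_truth += 1                    # interval around x1 covers the truth
    if abs(x2 - x1) <= half_width:
        replication_inside += 1              # replication lands inside x1's interval

print(round(covers_truth / TRIALS, 3))       # close to 0.95
print(round(replication_inside / TRIALS, 3)) # close to 0.83, not 0.95
```

The roughly 83% replication-capture rate for equal-sized normal samples follows from the fact that the difference between two independent estimates has a larger variance than either estimate alone; it is emphatically not the 95% that the chapter’s description would suggest.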

Weisberg and Thanukos address statistical power in a way that may also mislead readers. They tell us that “[p]ower refers to a test’s ability to reject a hypothesis that is indeed false.” W&T at 88. If only it were so. The authors omit that power is the probability that, at a specified level of significance (say p < 0.05), and under a specified alternative hypothesis, sample size, and probability model, the sample result will reject the null hypothesis in favor of the alternative hypothesis. The authors then suggest, confusingly, that “[w]ell-designed studies have sufficient power to detect the differences of interest, but it may not be apparent when a test lacks power.”[40]

If the study at issue presents a confidence interval around a point estimate of interest, then it will be clear what alternative null hypotheses are statistically compatible with the sample result at the pre-specified level of alpha (significance). Any point outside the interval would be rejected by such a test of significance, and so the casual reader will have a rather good idea of what could and could not be rejected by the sample data. And of course, virtually every study will have low power to detect extremely small increased risks, say a relative risk of 1.00001. And most studies will have high power to detect risk ratios of over 1,000.
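What power actually is can be stated in a few lines of code. The sketch below (a hypothetical two-sided z-test with known sigma; all numbers invented) shows that power is a probability jointly determined by the significance level, the assumed alternative, the sample size, and the probability model, and that against a tiny alternative it collapses toward the significance level itself:

```python
# Hypothetical sketch: power is not an intrinsic "ability" of a test; it is
# a probability fixed jointly by alpha, the assumed alternative (effect
# size), the sample size, and the probability model.
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sided_z(effect: float, sigma: float, n: int, alpha_z: float = 1.96) -> float:
    """Power of a two-sided z-test of H0: mu = 0 against a specific
    alternative mu = effect, with known sigma and sample size n."""
    se = sigma / sqrt(n)
    # probability the sample mean lands outside +/- alpha_z * se,
    # computed under the alternative mu = effect
    return (1 - phi(alpha_z - effect / se)) + phi(-alpha_z - effect / se)

# Same test, same alpha: power swings with the assumed effect size
print(round(power_two_sided_z(effect=0.5, sigma=1.0, n=30), 3))   # moderate effect
print(round(power_two_sided_z(effect=0.05, sigma=1.0, n=30), 3))  # tiny effect: near alpha
```

Note that when the assumed effect is zero, the formula returns alpha itself (0.05): power is meaningless without a specified alternative.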

This new chapter on “How Science Works” also propagates some well-known fallacies about statistical significance testing. Implicit in the authors’ committing the transposition fallacy is a conceptual and mathematical confusion between the coefficient of confidence (1-α) and the posterior probability of an hypothesis.

The authors’ mistake comes in their insistence upon labeling precision in a test result as “certainty.” In the quote below, the authors’ confusion is clear and obvious:

“Note that the 95% and 5% cutoffs are somewhat arbitrary, and a higher degree of confidence might be required if more certainty were desired—for example if an impactful policy decision depended on the conclusion.”[41]

An impactful [sic] policy decision might well call for more certainty, or a higher posterior probability, but a higher coefficient of confidence will not necessarily map to hypothesis probability at all. The authors’ confusion and conflation of the significance probability (alpha) with the Bayesian posterior probability arises elsewhere within the chapter:

“(1) A p-value lower than 0.05 does not prove that a null hypothesis is false. It is strong evidence, but there is a small chance that the difference observed could be the result of chance alone.

(2) Using a low p-value (e.g., 0.05) as a criterion for significance sets a high bar for rejecting the null hypothesis, minimizing the chance of getting a false positive… .”[42]

Again, a p-value less than five percent is hardly strong evidence in the context of large database studies, especially when there are multiple comparisons and the outcome is not the pre-specified outcome of the analysis. The authors’ confusion is on full display when they discuss the Zoloft birth defects litigation, where the Third Circuit affirmed the exclusion of plaintiffs’ expert witnesses’ causation opinions and the grant of summary judgment to the defendants. According to the authors’ narrative:

“plaintiffs’ expert’s testimony would have argued that multiple, nonsignificant associations between Zoloft use and birth defects indicated a causal relationship. The testimony was excluded because these results were consistent with a weak causal relationship (a small effect size), one that is ‘so weak that one cannot conclude that the risk is greater than that seen in the general population’.”[43]

Of course, in the Zoloft litigation, the excluded plaintiffs’ expert witnesses were caught red-handed – at cherry picking – and attempting to circumvent the lack of significance with a methodologically incorrect meta-analysis.[44]

If the risk of birth defects among children born to mothers who used Zoloft in pregnancy was no greater than seen in the general population, then there would be no risk, not risk “so weak” it cannot be seen. Locutions such as the “results were consistent with a weak causal relationship,” when the results were equally consistent with no causal relationship, suggest that the writers cannot bring themselves to say that the causal hypothesis was simply not supported at all. Of course, no study may exclude an increased risk of 0.01 percent, or a relative risk of 1.01, but at some point, when multiple attempts fail to reveal an increased risk, we may conclude that the proponents of the causal claim have failed to make their case.
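The weakness of a bare p < 0.05 in the presence of multiple comparisons can be quantified in one line. If twenty independent comparisons are run and every null hypothesis is true, chance alone delivers at least one nominally significant result almost two times out of three:

```python
# Arithmetic sketch: with 20 independent comparisons and every null true,
# the chance of at least one nominally "significant" p < 0.05 result.
comparisons = 20
alpha = 0.05
p_at_least_one = 1 - (1 - alpha) ** comparisons
print(round(p_at_least_one, 3))   # 0.642
```

Cherry picking the one “significant” comparison out of twenty, and presenting it without disclosing the other nineteen, is exactly the practice that makes an isolated p < 0.05 weak evidence.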

META-SHMETA-ANALYSIS

Weisberg and Thanukos address meta-analysis incompletely in the context of systematic reviews. The authors do not provide any insights into how meta-analyses are done, and more glaringly, they fail to mention that not all systematic reviews can or should result in quantitative syntheses of estimates of association. On the positive side, they state that meta-analyses are important in litigation, and that the application of rigorous methodologies should be required.[45] With clearly unintended irony, Weisberg and Thanukos offer, as support for their statement, the Paoli Railroad Yard case, “in which the exclusion of a contested meta-analysis was overturned.”[46]

Weisberg and Thanukos have stepped into the wet corner of a pigsty. The issue in the Paoli case arose from a meta-analysis of mortality rates associated with polychlorinated biphenyl (PCB) exposures. The district court excluded the proffered meta-analysis, not because it was unreliable, but because it was novel. Holding the case up in conjunction with a statement about the application of rigorous or reliable methodologies was way off the relevant legal point.

The expert witness who proffered the meta-analysis in Paoli was William Nicholson, who was a physicist with no professional training in epidemiology. For his opinion that PCBs were causally associated with human liver cancer, Nicholson relied upon a non-peer-reviewed, unpublished report he wrote for the Ontario Ministry of Labor.[47] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[48]

The defense challenged Nicholson’s opinion, not on Rule 702, but on case law that pre-dated the Daubert decision.[49] The challenge included pointing out the unreliability of the Nicholson’s meta-analysis, but also asserted (incorrectly) the novelty of meta-analysis generally. The district court sustained the defense objection on the grounds of “novelty,” without reaching the reliability analysis.[50] The Third Circuit appropriately reversed and remanded for consideration of the reliability of Nicholson’s meta-analysis.[51]

The consideration of Nicholson’s “meta-analysis” never occurred on remand; plaintiffs’ counsel and their expert witnesses withdrew their reliance upon Nicholson’s analysis. Their about-face was highly prudent. Nicholson’s report presented SMRs (standardized mortality ratios); for the all-cancers statistic, he reported an SMR of 95. What Nicholson did, in this analysis, and in all other instances, was simply divide the observed number of deaths by the expected, and multiply by 100. This crude, simplistic calculation fails to present a standardized mortality ratio, which requires taking into account the age distribution of the exposed and the unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of a fraud.
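For readers unfamiliar with indirect standardization, a minimal sketch (all person-years, rates, and counts invented for illustration) shows the difference between a genuine SMR, which builds the expected count from age-specific reference rates applied to the cohort’s own age structure, and the crude ratio obtained by applying a single overall rate:

```python
# Hypothetical illustration: a proper SMR weights expected deaths by the
# cohort's own age structure (indirect standardization); applying a single
# crude reference rate to the whole cohort can give a different answer.
# All numbers below are invented for illustration.

# (age stratum, cohort person-years, reference death rate per person-year)
strata = [
    ("40-49", 5_000, 0.002),
    ("50-59", 3_000, 0.006),
    ("60-69", 1_000, 0.015),
]
observed_deaths = 40

# Expected deaths, stratum by stratum (indirect standardization)
expected = sum(py * rate for _, py, rate in strata)
smr = 100 * observed_deaths / expected

# Crude alternative: one overall reference rate applied to everyone
overall_rate = 0.006        # invented population-wide rate
crude_expected = overall_rate * sum(py for _, py, _ in strata)
crude_ratio = 100 * observed_deaths / crude_expected

print(round(smr, 1), round(crude_ratio, 1))
```

With these invented numbers the age-standardized figure and the crude figure diverge by almost twenty points, which is the whole reason standardization exists.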

Nicholson’s report was replete with other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and still his analysis of the composite outcome was based upon only seven cases. Of those seven cases, some were not confirmed as primary liver cancer, and at least one was confirmed as not being a primary liver cancer.[52]

As noted, Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.
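A short illustration, using a hypothetical test statistic (the z value and the count of ten comparisons are invented, not taken from Nicholson’s report), of why one-tailed p-values flatter the evidence: the conventional two-tailed value is twice as large, and any adjustment for multiple comparisons inflates it further.

```python
# Invented illustration: a one-tailed p-value can look "significant" where the
# conventional two-tailed value does not, and neither accounts for the many
# comparisons run on the same data set.
import math

def one_tailed_p(z):
    # upper-tail probability of a standard normal statistic
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 1.7  # hypothetical test statistic
p1 = one_tailed_p(z)
p2 = 2 * p1  # conventional two-tailed value
print(f"one-tailed p = {p1:.4f}")  # ~0.045: nominally below 0.05
print(f"two-tailed p = {p2:.4f}")  # ~0.089: not below 0.05
# crude Bonferroni adjustment for, say, 10 comparisons from the same data
print(f"adjusted p   = {min(1.0, p2 * 10):.4f}")
```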

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies, without weighting by the inverse of study variance, is not a valid meta-analysis; indeed, it is a well-known example of how to generate the error known as Simpson’s Paradox, which can change the direction or magnitude of any association.[53]
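The Simpson’s Paradox point can be demonstrated with two invented studies (the counts below are fabricated for illustration, analogous to, not taken from, Nicholson’s data): each study shows an elevated risk ratio, yet simply adding the counts across studies yields a ratio below 1.0, while a conventional fixed-effect inverse-variance pooling of the log risk ratios preserves the direction of the association.

```python
# Invented two-study example of Simpson's Paradox in naive pooling.
import math

# (exposed deaths, exposed N, unexposed deaths, unexposed N)
studies = [(90, 100, 320, 400), (40, 400, 8, 100)]

def risk_ratio(a, n1, c, n0):
    return (a / n1) / (c / n0)

for a, n1, c, n0 in studies:
    print(f"study RR = {risk_ratio(a, n1, c, n0):.3f}")  # 1.125 and 1.250

# Naive pooling: add the counts across studies, then take one ratio
A = sum(s[0] for s in studies); N1 = sum(s[1] for s in studies)
C = sum(s[2] for s in studies); N0 = sum(s[3] for s in studies)
print(f"naive pooled RR = {risk_ratio(A, N1, C, N0):.3f}")  # 0.396 -- reversed!

# Fixed-effect inverse-variance pooling on the log scale
num = den = 0.0
for a, n1, c, n0 in studies:
    log_rr = math.log(risk_ratio(a, n1, c, n0))
    var = 1/a - 1/n1 + 1/c - 1/n0  # approximate variance of the log risk ratio
    num += log_rr / var            # weight each study by 1/variance
    den += 1 / var
print(f"weighted pooled RR = {math.exp(num / den):.3f}")  # > 1, no reversal
```

The reversal occurs because the two invented studies have very different baseline risks and very different proportions of exposed subjects, so collapsing the counts lets the study composition, rather than the exposure, drive the pooled ratio.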

In citing to the Paoli case as a reversal of exclusion of a contested meta-analysis, Weisberg and Thanukos give a truncated analysis that misleads readers, judges, and lawyers. There never was a proper consideration of the reliability vel non of Nicholson’s meta-analysis in the Paoli litigation, and in the final analysis, the Paoli plaintiffs abandoned reliance upon Nicholson’s ill-conceived meta-analysis.

VIRTUE SIGNALING

Although there are no land acknowledgments for the property on which the Federal Judicial Center building is located, Weisberg and Thanukos miss few opportunities to let us know that they are woke scholars. There is the gratuitous and triggering “pregnant people,”[54] which begs any number of biological questions. Then there is the authors’ statement that they are limiting their focus to the “Western conception of science,” which begs another question: why we would call any epistemically valid approach, from any corner of the globe, anything other than “science.”[55]

Equally gratuitous are the authors’ endorsements of DEI and “diversity,” with overbroad generalizations that diversity per se advances science,[56] and a claim that “women, people of color, other historically oppressed groups, and non-Western people” are not taken seriously as scientists.[57] In over 40 years of litigating technical and scientific issues, I have never seen a judge or a lawyer disrespect an expert witness based upon sex, race, ethnicity, or national origin. Of course, I have seen expert witnesses treated roughly for propounding bad science, and that seems perfectly appropriate.


[1] See David Goodstein, ON FACT AND FRAUD: CAUTIONARY TALES FROM THE FRONT LINES OF SCIENCE (2010).

[2] Weisberg and Thanukos frequently refer to other chapters in the Manual, which suggests that their chapter was written late in the development of the Fourth Edition, and perhaps contributed to the delayed publication.

[3] Michael Weisberg & Anastasia Thanukos, How Science Works, in National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 47 (4th ed. 2025) [cited as W&T].

[4] See Michael Weisberg, University of Pennsylvania Philosophy, at https://philosophy.sas.upenn.edu/people/michael-weisberg.

[5] Anna Thanukos, Staff, available at https://ucmp.berkeley.edu/people/anna-thanukos/

[6] W&T at 72-75.

[7] W&T at 81.

[8] W&T at 81.

[9] W&T at 81 & n.85 (emphasis added), citing Naomi Oreskes & Erik M. Conway, MERCHANTS OF DOUBT: HOW A HANDFUL OF SCIENTISTS OBSCURED THE TRUTH ON ISSUES FROM TOBACCO SMOKE TO GLOBAL WARMING (2010).

[10] W&T at 94-96.

[11] W&T at 95 n.120.

[12] Richard Van Noorden, More than 10,000 research papers were retracted in 2023 — a new record, 624 NATURE 479 (2023).

[13] W&T at 95.

[14] W&T at 55.

[15] W&T at 63, 68.

[16] W&T at 68.

[17] W&T at 65.

[18] W&T at 70.

[19] W&T at 71.

[20] W&T at 66.

[21] W&T at 75.

[22] W&T at 49.

[23] W&T at 83.

[24] W&T at 86 (citing Richter and Capra’s discussion of Milward in chapter one of the Manual, and Professor Gold’s article from the lawsuit industry celebratory conference on the Milward case).

[25] W&T at 99-100.

[26] W&T at 99.

[27] W&T 96 (emphasis added).

[28] IARC MONOGRAPHS ON THE IDENTIFICATION OF CARCINOGENIC HAZARDS TO HUMANS – PREAMBLE (2019), available at https://monographs.iarc.who.int/wp-content/uploads/2019/07/Preamble-2019.pdf

[29] Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 1, 32-33 (4th ed. 2025).

[30] W&T at 76.

[31] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. AM. MED. ASS’N 2782 (1993). See Schachtman, The Rhetoric and Challenge of Conflicts of Interest, TORTINI (July 30, 2013).

[32] W&T at 76 & n.67, citing Sergio Sismondo, Pharmaceutical Company Funding and Its Consequences: A Qualitative Systematic Review, 29 CONTEMP. CLINICAL TRIALS 109 (2008).

[33] W&T at 77.

[34] W&T at 59-60.

[35] W&T at 59-60.

[36] W&T at 76.

[37] W&T at 111.

[38] W&T at 87.

[39] W&T at 90.

[40] W&T at 88.

[41] W&T at 90 (emphasis added).

[42] W&T at 88.

[43] W&T at 90 (internal citations omitted).

[44] In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 449 (E.D. Pa. 2014); No. 12-md-2342, 2015 WL 314149, at *3 (E.D. Pa. Jan. 23, 2015) (rejecting proffered expert witness opinion based upon “cherry-picking of studies and data within studies”), aff’d, 858 F.3d 787 (3rd Cir. 2017).

[45] W&T at 99.

[46] W&T at 99 & n.134, citing In re Paoli R.R. Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990).

[47] William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (ODP); IDSP Report No. 2 (Toronto Dec. 1987) [Report].

[48] Id. at 373.

[49] See United States v. Downing, 753 F.2d 1224 (3d Cir. 1985).

[50] In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[51] In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 499 U.S. 961 (1991).

[52] Report, Table 22.

[53] See James A. Hanley, et al., Simpson’s Paradox in Meta-Analysis, 11 EPIDEMIOLOGY 613 (2000); H. James Norton & George Divine, Simpson’s paradox and how to avoid it, SIGNIFICANCE 40 (Aug. 2015); George Udny Yule, Notes on the theory of association of attributes in statistics, 2 BIOMETRIKA 121 (1903).

[54] W&T at 84.

[55] W&T at 50.

[56] W&T at 71 nn. 52-54.

[57] W&T at 102.

The Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part One

February 23rd, 2026

With the retraction of the climate science chapter, The Reference Manual on Scientific Evidence is now one chapter shorter, at least in the Federal Judicial Center’s version. At the time of this writing, for curious souls, the National Academies version is still sporting the climate advocacy chapter. Even without the climate chapter, the Manual is over 1,000 pages, and more than a casual weekend read. Many judges, finding this tome on their desks, will read individual subject-matter chapters pro re nata. The first chapter in the Manual, however, is about the law, not science, and might be the starting place for the ordinary workaday judge. As in past editions of the Manual, the new edition has a chapter on The Admissibility of Expert Testimony. In the first, second, and third editions, this chapter was written by Professor Margaret Berger. In the fourth edition, the chapter on the law was written by law professors Liesa Richter and Daniel Capra. To understand and evaluate the most recent iteration, the reader should have some sense of what has gone before.

Previous Chapters on Admissibility of Expert Witness Testimony

Professor Berger’s past chapters had been idiosyncratic productions.[1] Berger was an evidence law scholar, who wrote often about expert witness admissibility issues.[2] She was also known for her antic proposals, such as calling for abandoning the element of causation in products liability cases.[3] As an outspoken ideological opponent of expert witness gatekeeping, Berger was a strange choice to write the law chapter of the Manual.[4] Berger’s chapters in the first through the third editions made her opposition to gatekeeping obvious, and this hostility may have been responsible for some of the judicial resistance to applying the clear language of Rule 702, even after its 2000 revision.

Berger was not only a law professor; she was at the center of ideological and financially conflicted groups that worked to undermine the application of Rule 702 in health effects cases. One of the key players in this concerted action was David Michaels. Currently, Michaels teaches epidemiology at the George Washington University Milken Institute School of Public Health. He is a card-carrying member of the Collegium Ramazzini, an organization that has participated in efforts to corrupt state and federal judges by funding ex parte conferences with lawsuit industry expert witnesses.[5] Michaels is the author of two books, both highly anti-manufacturing industry, and biased in favor of the lawsuit industry.[6] Both books are provocatively titled anti-industry diatribes, which have little scholarly value, but are used regularly by plaintiffs counsel solely to smear corporate defendants and defense expert witnesses. Most clear-eyed trial judges have quashed these efforts on various grounds, including Rule 703, because the books are not the sort of material upon which scientists would reasonably rely.[7]

In 2002, David Michaels created an anti-Daubert advocacy organization, the Project on Scientific Knowledge and Public Policy (SKAPP), with money siphoned from the plaintiffs’ common-benefit fund in MDL 926 (the silicone gel breast implant litigation).[8] Michaels spent some of the misdirected money to prepare and publish an anti-Daubert pamphlet for SKAPP in 2003.[9] In this anti-Daubert publication, and many others sponsored by SKAPP, Michaels and the SKAPP grantees typically acknowledged the source of SKAPP funding obliquely, to hide that it was nothing more than plaintiffs’ counsel’s walking-around money:

“I am also grateful for the support SKAPP has received from the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Liability litigation.”[10]

Many credulous lawyers, judges, and legal scholars were duped into believing that SKAPP, SKAPP publications, and SKAPP-sponsored publications were supported by the Federal Judicial Center.

Michaels directed a good amount of SKAPP’s anti-Daubert funding to support Professor Berger’s efforts in organizing a series of symposia on science and the law. Several of Berger’s SKAPP conferences were held in Coronado, California, and featured a predominance of scientists who work for the lawsuit industry and are affiliated with advocacy organizations, such as the Collegium Ramazzini. The papers from one of the Coronado Conferences were published in a special issue of the American Journal of Public Health, the official journal of the American Public Health Association,[11] which has issued position papers highly critical of Rule 702 gatekeeping.[12]

The spider web of connections between SKAPP, the Collegium Ramazzini, the American Public Health Association, the Tellus Institute, the lawsuit industry, Professor Berger, and others hostile to Rule 702 is a testament to the concerted action to undermine the Supreme Court’s decisions in the area, and the codification of those decisions in Rule 702. That Professor Berger was within this web of connections, and was writing the chapter on the admissibility of expert witness opinion testimony, in the first three editions of the Reference Manual, explains but does not justify many of the opinions contained within those chapters.

Professor David Bernstein, who has written extensively on expert witness issues, restated the situation thus:

“In 2003, the toxic tort plaintiffs’ bar used money from a fund established as part of the silicone breast implant litigation settlement to sponsor four conferences in Coronado, California, that resulted in a slew of policy papers excoriating the Daubert gatekeeping requirement.”[13]

The active measures of these groups and Professor Berger explain the straight line between Berger’s symposia and the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc.[14] Carl Cranor was one of the speakers at the Coronado Conferences, and along with Martyn Smith, another member of the Collegium Ramazzini, founded a Proposition 65 bounty-hunting organization, the Council for Education and Research on Toxics (CERT). Cranor has long advocated a loosey-goosey “weight of the evidence” approach of the sort rejected by the Supreme Court in Joiner.[15] Cranor and Smith unsurprisingly turned up as expert witnesses for the plaintiff in Milward, in which they reprised their weight-of-the-evidence opinions. When Milward appealed the exclusion of Cranor and Smith, CERT filed an amicus brief without disclosing that Cranor and Smith were founders of the organization, and that CERT funded Smith’s research, through donations to his university, from CERT’s shake-down operations under Prop 65. The First Circuit’s 2011 decision in Milward resulted from a fraud on the court.

Professor Berger died in November 2010, but when the third edition of the Manual was released in 2011, it contained Berger’s chapter on the law of expert witnesses, with a citation to the Milward case, decided after her death.[16] An editorial note from an unnamed editor to her posthumous chapter suggested that

“[w]hile revising this chapter Professor Berger became ill and, tragically, passed away. We have published her last revision, with a few edits to respond to suggestions by reviewers.”

Given that Berger was an ideological opponent of expert witness gatekeeping, there can be little doubt that she would have endorsed the favorable references to Milward made after her passing, but adding them can hardly be considered non-substantive edits. Curious readers might wonder who the editor was who took such liberties in adding the citations to Milward. Curious readers do not have to wonder, however, what would have happened if the incestuous relationships among Berger, SKAPP, the plaintiffs’ bar, and others had been replicated by similar efforts of manufacturing industry to influence the interpretation and application of the law. In 2008, the Supreme Court decided an important case involving constitutional aspects of punitive damages. The Court went out of its way to decline to rely upon empirical research that showed the unpredictability of punitive damage awards because it was funded in part by Exxon:

“The Court is aware of a body of literature running parallel to anecdotal reports, examining the predictability of punitive awards by conducting numerous ‘mock juries’, where different ‘jurors’ are confronted with the same hypothetical case. See, e.g., C. Sunstein, R. Hastie, J. Payne, D. Schkade, & W. Viscusi, Punitive Damages: How Juries Decide (2002); Schkade, Sunstein, & Kahneman, Deliberating About Dollars: The Severity Shift, 100 Colum. L.Rev. 1139 (2000); Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445 (1999); Sunstein, Kahneman, & Schkade, Assessing Punitive Damages (with Notes on Cognition and Valuation in Law), 107 Yale L.J. 2071 (1998). Because this research was funded in part by Exxon, we decline to rely on it.”[17]

Unlike the situation with SKAPP, David Michaels, the plaintiffs’ bar, and Professor Berger, the studies sponsored in part by Exxon had disclosed their funding clearly. Those studies involved outstanding scientists whose integrity was unquestionable, and for its trouble, Exxon was rewarded with gratuitous shaming from Justice Souter. The anti-Daubert papers sponsored by the plaintiffs’ bar through SKAPP, and Professor Berger’s ideological conflicts of interest, have received a free pass. This disparate treatment of conflicts of interest within manufacturing industry and those within the lawsuit industry and its advocacy-group allies is a serious social, political, and legal problem. It was a problem on full display in the now-retracted climate science chapter in the Manual. In evaluating the new fourth edition’s chapter on the law of expert witness admissibility (and other chapters), we should be asking whether there are signs of undue political influence.


[1] See Schachtman, The Late Professor Berger’s Introduction to the Reference Manual on Scientific Evidence, TORTINI (Oct. 23, 2011).

[2] See generally Edward K. Cheng, Introduction: Festschrift in Honor of Margaret A. Berger, 75 BROOKLYN L. REV. 1057 (2010). 

[3] Margaret A. Berger, Eliminating General Causation: Notes towards a New Theory of Justice and Toxic Torts, 97 COLUM. L. REV. 2117 (1997).

[4] See, e.g., Margaret A. Berger & Aaron D. Twerski, “Uncertainty and Informed Choice:  Unmasking Daubert,” 104 MICH. L.  REV. 257 (2005). 

[5] In re School Asbestos Litig., 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992), 38 VILL. L. REV. 1219 (1993); Bruce A. Green, May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy, 29 FORDHAM URB. L. J. 941, 996-98 (2002).

[6] David Michaels, DOUBT IS THEIR PRODUCT: HOW INDUSTRY’S WAR ON SCIENCE THREATENS YOUR HEALTH (2008); David Michaels, THE TRIUMPH OF DOUBT (2020).

[7] See In re DePuy Orthopaedics, Inc. Pinnacle Hip Implant Prods. Liab. Litig., 888 F.3d 753, 787 n.71 (5th Cir. 2018) (advising the district court to weigh carefully whether Doubt is Their Product has any legal relevance); King v. DePuy Orthopaedics, Inc., 2024 WL 6953089, at *2 (D. Ariz. July 9, 2024) (finding Michaels’ books to be legally irrelevant); Sarjeant v. Foster Wheeler LLC, 2024 WL 4658407, at *1 (N.D. Cal. Oct. 24, 2024) (ruling that Doubt Is Their Product is legally irrelevant hearsay, and not the type of material upon which an expert witness would rely to form scientific opinion). See also Evans v. Biomet, Inc., 2022 WL 3648250, at *4 (D. Alaska Feb. 1, 2022) (quashing plaintiff’s subpoena to defendant’s expert for material in connection with Doubt Is Their Product).

[8] See Ralph Klier v. Elf Atochem North America Inc., 2011 U.S. App. LEXIS 19650 (5th Cir. 2011) (holding that district court abused its discretion in distributing residual funds from class action over arsenic exposure to charities; directing that residual funds be distributed to class members with manifest personal injuries). A “common benefit” fund is commonplace in multi-district litigation of mass torts.  In such cases, federal courts may require the defendant to “hold back” a certain percentage of settlement proceeds, to pay into a fund, which is available to those plaintiffs’ counsel who did “common benefit work,” work for the benefit of all claimants.  Plaintiffs’ counsel who worked for the common benefit of all claimants may petition the MDL court for compensation or reimbursement for their work or expenses.  See, e.g., William Rubenstein, On What a ‘Common Benefit Fee’ Is, Is Not, and Should Be, CLASS ACTION ATT’Y FEE DIG. 87, 89 (Mar. 2009).  In the silicone gel breast implant litigation (MDL 926), plaintiffs’ counsel on the MDL Steering Committee undertook common benefit work in the form of developing expert witnesses for trial, and funding scientific studies.  By MDL Orders 13, and 13A, the Court set hold-back amounts of 5 or 6%, and later reduced the amount to 4%.  Id. at 94.

[9] Eula Bingham, Leslie Boden, Richard Clapp, Polly Hoppin, Sheldon Krimsky, David Michaels, David Ozonoff & Anthony Robbins, Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of (June 2003). The authors described the publication as a publication of SKAPP, coordinated by the Tellus Institute, and funded by The Bauman Foundation, a private foundation that supports “progressive social change advocacy.” Boden, Hoppin, Michaels, and Ozonoff are fellows of the Collegium Ramazzini.

[10] David Michaels, DOUBT IS THEIR PRODUCT: HOW INDUSTRY’S WAR ON SCIENCE THREATENS YOUR HEALTH 267 (2008). See Nathan Schachtman, “SKAPP A LOT,” TORTINI (April 30, 2010); “Manufacturing Certainty” TORTINI (Oct. 25, 2011); “David Michaels’ Public Relations Problem” TORTINI (Dec. 2, 2011); “Conflicted Public Interest Groups” TORTINI (Nov. 3, 2013).

[11] 95 AM. J. PUB. HEALTH S1 (2005).

[12] See, e.g., Am. Pub. Health Assn, Threats to Public Health Science, Policy Statement 2004-11 (Nov. 9, 2004), available at https://www.apha.org/policy-and-advocacy/public-health-policy-briefs/policy-database/2014/07/02/08/52/threats-to-public-health-science

[13] David E. Bernstein & Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 WM. & MARY L. REV. 1, 39 (2015), available at https://scholarship.law.wm.edu/wmlr/vol57/iss1/2. See David Michaels & Neil Vidmar, Foreword, 72 LAW & CONTEMP. PROBS. i, ii (2009) (“SKAPP has convened four Coronado Conferences.”).

[14] Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[15] General Electric Co. v. Joiner, 522 U.S. 136, 136-37 (1997).

[16] Margaret A. Berger, The Admissibility of Expert Testimony, in National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 11, 20 n.51, 23-24 n.61 (3d ed. 2011).

[17] Exxon Shipping Co. v. Baker, 554 U.S. 471, 128 S. Ct. 2605, 2626 n.17 (2008).

The FJC Retracts Climate Science Chapter in New Reference Manual

February 10th, 2026

When the new, fourth, edition of the Reference Manual on Scientific Evidence was released late last year,[1] I remarked that there were some new chapters,[2] including one on climate change. I found the addition of a chapter on climate change curious largely because I was unfamiliar with the science or the need to address the area for federal judges, and because I thought there were other more pressing topics, such as genetic causation, from which judges could benefit, but which were not included.

I confess that I did not read the new chapter on climate change,[3] which is not a subject that comes up in my practice or in my writing. Writers at the National Review, however, did read the chapter on climate, and found it objectionable. Writing on January 17th of this year, Michael Fragoso observed that the chapter on climate science was an advocacy piece that would resolve climate change litigation in favor of plaintiffs.[4]

If Fragoso’s charge is correct, the implications are extremely serious. Judges have an ethical obligation not to go beyond the adversary process to educate themselves about the factual issues before them in pending litigation. In the past, judges who have done so have found themselves on the wrong end of a petition for a writ of mandamus, and have been disqualified and removed from cases.[5] The Federal Judicial Center (FJC), which is the research and educational division of the federal courts, has tried to create a safe space for teaching judges about technical subjects that arise in litigation in a way that is balanced and removed from partisan advocacy. The last edition, the third, and the current edition, the fourth, of the Manual have been the joint product of both the FJC and the National Academies of Sciences, Engineering, and Medicine (NASEM), in the hope of producing disinterested tutorials on key areas of science that are important to judges in their adjudication of civil and criminal cases, as well as their performance of judicial review of regulation and agency action.

Following up on the National Review article, on January 29, 2026, the Attorneys General of 24 states[6] wrote a letter to Judge Robin Rosenberg, the director of the Federal Judicial Center. The letter identified the advocacy perspective of the climate chapter and its authors, who wrote what the Attorneys General described as an amicus brief that placed a thumb on the scales of justice, with respect to issues currently pending at all levels of the federal courts. The Attorneys General requested the immediate withdrawal of the offending chapter.

Judge Rosenberg is a savvy judge of scientific horseflesh. She presided over the Zantac multi-district litigation (MDL No. 2924), in which she excluded plaintiffs’ expert witnesses in a detailed, analytically careful opinion of over 300 pages.[7] On February 6, a week after the request to withdraw the climate chapter was made, Judge Rosenberg wrote to West Virginia Attorney General John McCuskey to report that the chapter had been omitted.[8] Given the prompt response from Judge Rosenberg, the decision was likely not a difficult one. A decision not to include this chapter, as written, in the first place would have been an even easier one.

Retractions of publications of the NASEM, which includes what was formerly the Institute of Medicine, are rarer than hens’ teeth. This one received coverage and some intense harrumphing.[9] The retraction of the climate science chapter comes on the heels of another high-profile retraction, in December 2025, of an article in the prestigious journal Nature,[10] which argued that the costs of climate change would reach $38 trillion a year by 2049.[11]

The climate science chapter appears to be the outcome of what the late Daniel Kahneman called poor decision hygiene. The chapter in question had two authors, both from the same institution, who published together and shared the same advocacy perspectives on climate change. Hardly a team of rivals. The editors of the Manual certainly could have done better in selecting these authors and in editing the work product.

Jessica Wentz is a Non-Resident Senior Fellow at the Sabin Center for Climate Change Law, at the Columbia Law School. The Sabin Center website describes itself as “develop[ing] legal techniques to combat the climate crisis and advance climate justice, and train the next generation of leaders in the field.” The language of “combat” and “crisis” certainly suggests a hardened, adversarial stance. Wentz’s writings reveal her advocacy and adversarial positions.[12]

Radley Horton is a Professor at Columbia University’s Climate School. He describes his research as focusing on climate extremes and related topics. Horton’s curriculum vitae, social media, testimony,[13] and professional work certainly mark him as an advocate for “attribution science” in litigation to address climate crises. Horton and Wentz previously published a law review article that seems to be a brief for plaintiffs’ positions in climate litigation.[14] One of the key issues in climate litigation is whether litigation is an appropriate avenue for addressing climate issues, and Horton and Wentz have both clearly committed to endorsing litigation strategies, and the plaintiffs’ positions to boot.

This kerfuffle at the FJC and NASEM has a larger meaning. There is a glib assumption afoot that the only conflicts of interest that matter are ones attributed to industrial stakeholders and their scientific supporters. This naïve view was attacked and debunked back in 1980 by Sir Richard Peto, writing in the pages of Nature. Sir Richard noted that whereas industry may downplay risks, “environmentalists usually exaggerate the likely hazards and are largely indifferent to the costs of control.” Positional conflicts can be, and often are, more powerful than the ones created by profit.[15]


[1] National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE (4th ed. 2025) (cited as RMSE 4th ed.).

[2] Nathan Schachtman, A New Year, A New Reference Manual, in TORTINI (Jan. 5, 2026).

[3] Jessica Wentz & Radley Horton, Reference Guide on Climate Science, RMSE 4th ed.

[4] Michael A. Fragoso, Bias and the Federal Judicial Center’s ‘Climate Science’, NAT’L REV. (Jan. 17, 2026). Fragoso also took umbrage to the use of the silly phrase “pregnant people” elsewhere in the Manual. RMSE 4th ed. at 84.

[5] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992), 38 VILL. L. REV. 1219 (1993);

[6] Alabama, Alaska, Arkansas, Florida, Georgia, Idaho, Indiana, Iowa, Missouri, Montana, Nebraska, New Hampshire, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Utah, West Virginia, and Wyoming.

[7] In re Zantac (Ranitidine) Prods. Liab. Litig., 644 F. Supp. 3d 1075 (S.D. Fla. 2022).

[8] Hon. Robin Rosenberg, Letter in Response to Attorneys General (Feb. 6, 2026).

[9] Editorial Board, A Failed Climate Coup in the Courts,  WALL ST. J. (Feb. 9, 2026); Charles Creitz, Judicial research center cuts climate section from judges’ manual, FOX NEWS (Feb. 9, 2026); Suzanne Monyak, Judiciary Cuts Climate Part of Science Manual after Backlash, BLOOMBERG LAW (Feb. 9, 2026).

[10] Maximilian Kotz, Anders Levermann & Leonie Wenz, The economic commitment of climate change, 628 NATURE 551 (2024) (retracted on Dec. 3, 2025).

[11] Authors retract Nature paper projecting high costs of climate change, RETRACTION WATCH (Dec. 3, 2025).

[12] Michael Burger, Jessica Wentz & Daniel Metzger, Climate science in rights-based advocacy contexts (June 28, 2020).

[13] Written Testimony of Radley Horton, Lamont Associate Research Professor, Columbia University, before the Committee on Science, Space, and Technology Subcommittee on Environment Sea Change: Impacts of Climate Change on Our Oceans and Coasts (Feb. 27, 2019).

[14] Michael Burger, Radley M. Horton & Jessica Wentz, The Law and Science of Climate Change Attribution, 45 COLUM. J. ENVTL. L. 57 (2020).

[15] Richard Peto, Distorting the epidemiology of cancer: the need for a more balanced overview, 284 NATURE 297, 297 (1980).

IARC’s Industry Sniffing Bots Are Coming for You

October 8th, 2025

“Hey, hey, you, you, get off of my cloud.”   …. Jagger & Richards

For the last 50 years, critics, cranks, and anti-industry zealots have argued that industry-sponsored science is vitiated by conflicts of interest. What started as the whining of scientists who were regulatory “political scientists” and adjuncts to plaintiffs’ law firms has become a major movement. The rise of post-modernism in philosophy has supported the rejection of robust debate over scientific assessments of causation, on the ground that all such judgments are politically and socially determined. Evidence is just casuistry, at least when done by those with whom we disagree.

The anti-industry bias has had demonstrably bad consequences in distorting scientific judgment. Over 30 years ago, a science journalist published a story in the Journal of the National Cancer Institute about how dire predictions of asbestos mortality never came to pass.[1] In investigating the failure of these predictions, the journalist concluded that they had been the product of exaggerations by government scientists who suffered from a form of “white-hat” bias:

“the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds. ‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement’…. ‘At the time it looked like you were wearing a white hat if you made these wild estimates. But I wasn’t sure whoever did that was doing all that much good.’”[2]

The existence of “white-hat” bias is perhaps the most benign explanation for the propagation of badly done science. The deployment of political correctness applied to issues that really depend upon scientific method, data, and inference for their resolution should not, however, be seen as particularly benign.

In 2010, over a decade after the description of white-hat bias in the JNCI, two public health researchers, Mark B. Cope and David B. Allison, described white-hat bias as a prevalent cognitive error in how research is reported and interpreted.[3] They described white-hat bias as a “bias leading to the distortion of information in the service of what may be perceived to be righteous ends.” Perhaps the temptation to overstate the evidence against a toxic substance is unavoidable, but it diminishes the authority and credibility of regulators entrusted with promulgating and enforcing protective measures. And error is still error, regardless of its origins and motivations.

Allison and Cope gave examples of white-hat bias in how papers are cited, with “exonerative” studies cited less often than those that claim harmful outcomes. And when positive papers were cited, they were often interpreted misleadingly to overstate the harms previously reported.

The principle of charity suggests that white-hat bias should be considered as an explanation for much of the anti-industry prejudice exhibited by public health scientists. The persistence, virulence, and irrationality of many instances of prejudiced judgment, however, make the charitable explanation implausible.

Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), provided a more insightful explanation of the anti-industry bias as the “new McCarthyism in science.”[4] Rothman identified the anti-manufacturing-industry bias as manifesting in intolerance toward industry-sponsored studies, and in strict scrutiny of “conflict-of-interest” (COI) disclosures. The McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying non-disclosure of COIs from scientists aligned with advocacy groups or the lawsuit industry, and by ignoring positional COIs.

The quaint notion that “an opinion should be evaluated on the basis of its contents, not on the interests or credentials of the individuals who hold it,” has been generally banished.[5] The offense to honest scientific inquiry receives little attention,[6] but the sanctimonious deployment of COI claims allows scientists to over-indulge in poor quality research by claiming that they have extirpated industry influence.

In 1995, anti-tobacco historian and expert witness, Robert Proctor, coined the term agnotology from the Greek ágnosis (“not knowing”) and -logia (study of).[7] Agnotology is now a specialty of scientist-advocates and expert witnesses for the lawsuit industry; it has been the subject of numerous and repetitive books,[8] too many articles to cite, and even doctoral dissertations.[9]

The anti-manufacturing industry jihad is little more than defamation against every scientist or citizen who has called for evidence-based regulation and law in dealing with scientific issues. The movement would deprive legislators, regulators, and juries of important, relevant scientific evidence based upon a smear.

What is truly fascinating, however, is the hypocrisy built into the anti-industry COI movement. There is another industry that is protected from criticism – the lawsuit industry. That industry has grown up parasitically around a system of tort law, and it now includes not only law firms that service claimants, but also their retinue of expert witnesses, their litigation funders, and even investment firms that collude with hit-piece journalists in “distort and short” schemes of trading in the securities of their targets.

The critics of research done or funded by manufacturing industry argue that industry studies disproportionately report outcomes favorable to their sponsors. The implied potential conflicts posed by industry-sponsored research studies are fairly obvious. Industries that make or sell products, raw materials, or chemicals have an interest in having toxicological and epidemiologic studies support claims of safety.  Research that suggests an industry’s product causes harm may hurt the industry’s financial interests directly by inhibiting sales, or indirectly by undermining the industry’s position in litigation, or by leading to greater regulatory scrutiny and control. Indirect harms may result from heightened warnings or instructions, which may limit sales or encourage sales of competing, less hazardous products. If the harm evidenced by the research is sufficiently severe, the research may lead to product recall or bans, again with serious economic consequences for the industry. 

The lawsuit industry has conflicts of interest that mirror those of manufacturing industry.[10] Manufacturing evidence and conclusions of harm is good for the lawsuit industry, and provides rich sources of revenue for its go-to expert witnesses. There are also ideological interests that motivate many players in the lawsuit industry. Lawsuit industry COIs are frequently ignored or down-played, even though the research funded, sponsored, or written by its members has a strange propensity to support claims made in court and in agencies.

The International Agency for Research on Cancer (IARC) has become ground zero for hypocritical exorcisms of COIs. In 2018, several authors wrote a commentary in which they declared that IARC and its cancer hazard evaluations were under attack from those with “economic interests” (manufacturing firms or their consultants).[11] Several of the commentary authors, Peter F. Infante, Ronald Melnick, and James Huff, were full-fledged members of the lawsuit industry, with consulting firms that work to help claimants in tort litigation. The authors’ own COIs, however, did not inhibit them from declaring that only scientific experts who do not have conflicts of interest should be allowed to criticize IARC pronouncements. Three of the four authors of this hit piece (Infante, Melnick, and Huff) identified themselves as having consulting firms, but only James Huff disclosed that he had “been retained as expert consultant on long-term animal bioassays of glyphosate in litigation for plaintiffs.” Infante and Melnick gave no disclosure, although they have been far more than consultants; they have appeared in testimonial roles for tort claimants. To top off the hypocrisy, the journal editor, Steven B. Markowitz, felt compelled to declare that he had “no conflict of interest in the review and publication decision regarding this article.” Markowitz is a not infrequent testifying expert witness for the lawsuit industry.[12] It is a safe bet that the great majority of the studies authored by Infante, Melnick, Huff, and Markowitz claim or suggest harms from chemical exposures.

It seems rudimentary that scientific research should be evaluated on the merits of studies, methods, data, and inference, and not the source of the funding. We are, however, deep into the post-modern world that regards science as a way of exercising political power and social control, and not a search for the truth. Given our Zeitgeist, no one should be surprised that an IARC official has just come out with a paper that attempted to deploy a large-language model (LLM) to identify possible industry influence, down to parts per trillion or whatever the level of detection may be.

Last month, Mary K. Schubauer-Berigan, the head of the Evidence Synthesis and Classification Branch of IARC, along with several other scientists, published a paper that proposed the use of an LLM to identify industry influence.[13] Schubauer-Berigan is an occupational epidemiologist, but she is also an amateur agnotologist. The first sentence of her article really tells all you need to know: “Industry-funded research poses a threat to the validity of scientific inference on carcinogenic hazards.” The authors claim that their LLM can help assess bias from industry studies in evidence synthesis and identify “industry influence” on scientific inference. These authors reflect the IARC dogma that only manufacturing industry has COIs of concern. Lawsuit industry influence is never mentioned.

The authors applied their LLM to identify industry relationships among authors of review articles on issues related to three specific IARC hazard classifications (benzene, cobalt, and aspartame). The search apparently included direct funding for studies of the agent under consideration, as well as whether studies or reviews had an industrial sponsor or a trade association, whether they used data provided by an industry source, or whether authors were paid consulting fees or provided expert testimony. The authors’ algorithm did not include whether spouses, children, parents, good friends, professional colleagues, or mentors ever had some dalliance with manufacturing industry.

IARC’s LLM was never let loose in search of lawsuit industry connections. Are you surprised?


[1] Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Instit. 560 (1992).

[2] Id. at 562. 

[3] Mark B. Cope and David B. Allison, “White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting,” 34 Internat’l J. Obesity 84 (2010).

[4] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. Am. Med. Ass’n 2782 (1993). See Schachtman, “The Rhetoric and Challenge of Conflicts of Interest,” Tortini (July 30, 2013).

[5] Brian MacMahon, “Epidemiology:  another perspective,” 37 Internat’l J. Epidem. 1192, 1192 (2008).

[6] See Thomas P. Stossel, “Has the hunt for conflicts of interest gone too far?” 336 Brit. Med. J. 476 (2008); Kenneth J. Rothman & S. Evans, “Extra scrutiny for industry funded trials: JAMA’s demand for an additional hurdle is unfair – and absurd,” 331 Brit. Med. J. 1350 (2005) & 332 Brit. Med. J. 151 (2006) (erratum).

[7] Robert Proctor, The Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer 8 & not (1995).

[8] See, e.g., David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020); David Michaels, Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health (2008); Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (2010); Robert N. Proctor & Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (2008); Janet Kourany & Martin Carrier, eds., Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted (2020); Blake D. Scott, “Agnotology and Argumentation: A Rhetorical Taxonomy of Not-Knowing,” OSSA Conference Archive 133 (2016).

[9] Craig Alex Biegel, Manufactured Science, the Attorneys’ Handmaiden: The Influence of Lawyers in Toxc [sic] Substance Disease Research, Dissertation for Florida State University (2016).

[10] See Laurence J. Hirsch, “Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publications: The Tort Bar and Editorial Oversight of Medical Journals,” 84 Mayo Clin. Proc. 811 (2009).

[11] Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am J. Indus. Med. 277 (2018).

[12] See In re Joint Eastern & Southern District Asbestos Litig., 758 F.Supp. 199 (S.D.N.Y. 1991); Juni v. A.O. Smith Water Prods. Co., 32 N.Y.3d 1116, 116 N.E.3d 75 (2018); Konstantin v. 630 Third Avenue Assocs., N.Y.S.Ct. (N.Y. Cty.) No. 190134/2010 (jury verdict returned Aug. 17, 2011); Koeberle v. John Crane, Inc., Phila. Cty. Ct. C.P. No. 000887 (jury verdict returned Feb. 2010).

[13] Nathan L. DeBono, Vanessa Amar, Hardy Hardy, Mary K. Schubauer-Berigan, Derek Ruths & Nicholas B. King, “A large language model-based tool for identifying relationships to industry in research on the carcinogenicity of benzene, cobalt, and aspartame,” 24 Envt’l Health 64 (2025).

Acetaminophen & Autism – Prada Review Misleadingly Claims to Be NIH Funded

September 9th, 2025

A few weeks ago, four scientists published what they called a “navigation guide” systematic review on acetaminophen use and autism.[1] The last-named author, Andrea A. Baccarelli, is an environmental epidemiologist who has been an expert witness for plaintiffs’ counsel in lawsuits against the manufacturers and sellers of acetaminophen. Another author, Beate Ritz, frequently testifies for the lawsuit industry in cases against various manufacturing industries. A third author, Ann Z. Bauer, was the lead author of a [faux] “consensus statement” that invoked the precautionary principle to call for limits on the use of acetaminophen (N-acetyl-p-aminophenol or APAP) by pregnant women, on grounds that such use may increase the risks of neurodevelopmental (including autism), reproductive, and urogenital disorders.[2] The lead author was Diddier Prada, who works in Manhattan, at the Icahn School of Medicine at Mount Sinai, in the environmental and climate science department, within the Institute for Health Equity Research. The Mount Sinai website describes Dr. Prada as an environmental and molecular epidemiologist who focuses on the role of environmental toxicants in age-related conditions.

Curious readers might wonder how someone whose interests lie in environmental issues and “health equity” became involved in a review of pharmaco-epidemiology and teratology. The flavor of systematic review deployed in the paper, the “navigation guide,” originated in the field of environmental health, and has had limited use elsewhere. To my knowledge, so-called navigation guides have never previously been used in pharmaco-epidemiologic or teratologic controversies.[3]

The Prada paper and its deployment of a “navigation guide” systematic review deserve greater critical scrutiny.  In this post, however, I want to address some peripheral issues, such as “competing interests” and misleading claims about the paper’s having been NIH funded.

Only Dr. Baccarelli disclosed a potential conflict of interest, in a statement that many would judge to be anemic:

“Dr. Baccarelli served as an expert witness for the plaintiff’s legal team on matters of general causation involving acetaminophen use during pregnancy and its potential links to neurodevelopmental disorders. This involvement may be perceived as a conflict of interest regarding the information presented in this paper on acetaminophen and neurodevelopmental outcomes. Dr. Baccarelli has made every effort to ensure that this current work—like his past work as an expert witness on this matter—was conducted with the highest standards of scientific integrity and objectivity.”

The disclosure fails to mention whether Dr. Baccarelli was compensated for playing on the “plaintiff’s legal team,” and if so, how much. Using the passive voice, he suggests that this work merely might be perceived as a conflict of interest, when surely he knows that it is a serious one. If industry scientists working on the relevant issue had published, they surely would be accused of having had a conflict.

Dr. Baccarelli self-servingly, falsely, and with epistemic arrogance, asserts that he made every effort in this paper, and in his past work as an expert witness, to conform to the “highest standards of scientific integrity and objectivity.” Despite his best efforts to be “scientific,” Baccarelli’s work failed critical scrutiny in the multi-district litigation that consolidated acetaminophen cases for pre-trial handling. In that litigation, the defense challenged Dr. Baccarelli’s opinions under Rule 702, for their lack of validity. In an extensive, closely reasoned opinion, federal district court judge Denise Cote ruled that Dr. Baccarelli’s proffered opinions failed to meet the relevance and reliability standards of federal law.[4]

The MDL court easily found that Dr. Baccarelli was qualified to provide an opinion on epidemiology, although the focus of his career has been on environmental issues. Baccarelli’s substantive problem was that he deviated from accepted and valid methods of causal inference by cherry picking different results and outcomes across multiple studies. Baccarelli’s sophistical trick was to advance a “transdiagnostic” analysis that lumps an already heterogeneous autism spectrum disorder (ASD) together with attention-deficit hyperactivity disorder (ADHD) and a grab bag of “other neurodevelopmental disorders.” If a study found a putative association with only one of the three end points, Baccarelli would claim success on all three. Baccarelli avoided conducting separate ASD and ADHD analyses, and he cherry picked the end points that supported his pre-determined conclusions.

Judge Cote found that the transdiagnostic analyses advanced by plaintiffs’ expert witnesses, including Baccarelli, obscured and obfuscated more than they informed the causal inquiry.[5] The court’s analysis casts considerable shade upon Baccarelli’s self-serving claim to have used “the highest standards of scientific integrity and objectivity.” Judge Cote barred Baccarelli and the other members of the plaintiffs’ “expert team” from testifying.

Conspicuously absent from the conflict disclosure section of the Prada article was any mention of the litigation work of co-author Beate Ritz. In 2007, Ritz became a fellow of the Collegium Ramazzini, which functions in support of the lawsuit industry much as the scientists of the Tobacco Institute supported tobacco legal defense efforts in times past. Ritz’s fellowship in the Collegium makes her a full-fledged member of the Lobby and a supporter of the lawsuit industry.[6] Ritz has testified for claimants in cases involving claims of heavy metals in baby food, in cases claiming that paraquat exposure caused Parkinson’s disease, and most notoriously for plaintiffs in glyphosate litigation, where her witnessing is often done for the Wisner Baum law firm, which employs the son of Robert F. Kennedy, Jr.[7]

The conflict of interest disclosure statement is hardly the only misleading aspect of the Prada paper. At the end of the paper, the authors state, with respect to funding, that their “study was supported by NIH (R35ES031688; U54CA267776).” Some people may incorrectly believe that the Prada review was directly sponsored and funded by the National Institutes of Health. Nothing could be further from the truth.

The research grant referenced, R35ES031688, is a National Institute of Environmental Health Sciences (NIEHS) research grant. The curious reader might inquire whether and why the NIEHS would be concerned with a pharmacological issue. The short answer is that the NIEHS is not, and that this grant has nothing to do with children’s neurological status in relation to their mothers’ ingestion of acetaminophen.

The NIEHS awarded this research grant to Andrea Baccarelli, while he was at Columbia University, for his project “Extracellular Vesicles in Environmental Epidemiology Studies of Aging.” The research focuses on extracellular vesicles (EVs) and their role in environmental health, particularly as it relates to aging. What Baccarelli promised to do with this NIEHS grant was to study the effects of air pollution on accelerated brain aging, and disease states such as dementia. Baccarelli noted that his focus would be on intercellular communication enabled by extracellular vesicles, in reaction to air pollution. The described research would understandably be viewed as potentially relevant to the NIEHS mission statement, but it has nothing to do with autism among children of women who ingested acetaminophen during pregnancy. The phrases “extracellular vesicles” and “air pollution” do not appear in the Prada review.

The second grant listed under funding for the Prada review was U54CA267776. The U54 designation marks this as a career award, not one tied to a particular topic or to this published work. Ironically, the grant is a diversity, equity, and inclusion grant to the Icahn School of Medicine at Mount Sinai, in Manhattan. The Icahn School has long had one of the most ethnically, racially, and culturally diverse faculties of any medical school, and hardly needs financial incentives to hire minority physicians and scientists.

The NIH awarded grant U54CA267776 for “Cohort Cluster Hiring Initiative at Icahn School of Medicine at Mount Sinai.” The NIH describes the grant as aiming to reduce “[t]he barriers to research and career success for underrepresented groups in academic medicine.” The text of the U54 grant is written largely in bureaucratic jargon, which may require a degree in DEI to understand fully. What is abundantly clear is that nothing in this U54 grant, or in its stated criteria for evaluation, has anything to do with studying the teratologic potential of acetaminophen.

What so far has escaped the media’s attention is that Prada and colleagues did not have NIH (or NIEHS) support for their acetaminophen review. They had career-level support for DEI purposes, or perhaps general “walking-around” money for research on environmental pollution and brain aging, which has nothing to do with the subject of their navigation guide review. The authors of the Prada review never prepared a study proposal related to acetaminophen for evaluation by a funding committee at NIH. The authors never submitted a protocol to the NIH, and the NIH provided no peer review or guidance for the authors’ acetaminophen review. In short, there is nothing that marks the Prada review as an NIH work product other than the over-claiming of the authors with respect to funding sources.

The Prada review has attracted a lot of attention in the media and from the worm-brained Secretary of Health and Human Services. An article in the Washington Post described the Prada review as NIH funded, which tracks the paper’s misleading disclosure.[8] The media no doubt jumped on the publication of the Prada review last month because Secretary Kennedy promised to reveal the cause of autism by September. We can imagine that Kennedy will be tempted to embrace the Prada review because he can falsely mischaracterize it as an NIH-funded review.

Not only is the funding claim dodgy, but so is the suggestion that the review supports a conclusion of causation between maternal ingestion of acetaminophen and autism in children. The lead author, Dr. Diddier Prada, noted the frequent confusion between correlation and causation, and explicitly stated that the authors of the review “cannot answer the question about causation.”[9]


[1] Diddier Prada, Beate Ritz, Ann Z. Bauer and Andrea A. Baccarelli, “Evaluation of the evidence on acetaminophen use and neurodevelopmental disorders using the Navigation Guide methodology,” 24 Envt’l Health 56 (2025).

[2] Ann Z. Bauer et al., “Paracetamol Use During Pregnancy — A Call for Precautionary Action,” 17 Nature Rev. Endocrinology 757 (2021).

[3] See Tracey J. Woodruff, Patrice Sutton, and The Navigation Guide Work Group, “An Evidence-Based Medicine Methodology To Bridge The Gap Between Clinical And Environmental Health Sciences,” 30 Health Affairs 931 (May 2011).

[4] In re Acetaminophen ASD-ADHD Prods. Liab. Litig., 707 F. Supp. 3d 309, 2023 WL 8711617 (S.D.N.Y. 2023) (Cote, J.).

[5] Id. at 334.

[6] See F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[7] See, e.g., In re Roundup Prods. Liab. Litig., 390 F. Supp. 3d 1102 (2018); Barrera v. Monsanto Co., Del. Super. Ct. (May 31, 2019); Pilliod v. Monsanto Co., 67 Cal. App. 5th 591, 282 Cal. Rptr. 3d 679 (2021). See also Dan Charles, “Taking the stand: For scientists, going to court as an expert witness brings risks and rewards,” 383 Science 942 (Feb. 29, 2024) (quoting Ritz as suggesting that she was reluctant to get involved as an expert witness).

[8] Ariana Eunjung Cha, Caitlin Gilbert and Lauren Weber, “MAHA activists have been pushing for more investigation into use of the common pain killer during pregnancy,” Wash. Post (Sept. 5, 2025). See also Liz Essley Whyte & Nidhi Subbaraman, “RFK Jr., HHS to Link Autism to Tylenol Use in Pregnancy and Folate Deficiencies,” Wall St. J. (Sept. 5, 2025).

[9] Jess Steier, “Saturday Morning Thoughts on the Tylenol-Autism News: The public health whiplash continues as we play another round of ‘autism cause’ roulette,” Unbiased Science (substack) (Sept. 06, 2025).

The FDA Expert Panel on Talc – More Malarky     

June 18th, 2025

On May 20, 2025, as announced, FDA Commissioner Martin Makary held his panel discussion on talc in food and medications.[1] The discussion lasted just under two hours, and is available on YouTube for your viewing and perhaps your amusement. Makary opened and closed the event with what could have been the plaintiffs’ opening and closing statements from one of the many talc trials that have clouded courtrooms across the land. He asked rhetorically: “Why don’t we talk about at our oncology meetings the 1993 National Toxicology Program results that found clear evidence of carcinogenic activity of talc in animal studies?” Perhaps because the talc findings were questionable at best, and the asbestos findings with respect to gastrointestinal cancers were exculpatory for talc.

Makary’s introductory remarks were followed by the panelists’ introducing themselves by their training and involvement with talc issues. Other than Makary, the participants were FDA Deputy Commissioner Sara Brenner, George Tidmarsh, John Joseph Godleski, Sandra McDonald, Daniel Cramer, Joellen Schildkraut, Malcolm Sim, Steven Pfeiffer, Nicolas Wentzensen, and Nicole C. Kleinstreuer. Godleski and Cramer have served as plaintiffs’ expert witnesses in ovarian cancer litigation, which was not particularly germane to the panel discussion. In their initial discussions of qualifications and background, neither Godleski nor Cramer disclosed his potential conflicts of interest, or the amount of fees earned. Sandra McDonald described her experience in assisting Godleski, but she did not declare whether she earned any money for consulting services to the lawsuit industry. Later in the panel discussion, when George Tidmarsh stated that no one should be vilified for past practices in using talc, Daniel Cramer jumped in to vilify Johnson & Johnson with the suggestion that somehow that company had surreptitiously arranged for the National Cancer Institute to remove a statement that ovarian cancer “may be associated with talc use” from its website just before he was about to testify in his first talc trial for plaintiffs.

None of the panelists had served as a defense expert witness. Steven Pfeiffer works for a pharmaceutical company, but not one that had any experience with the safety or efficacy of talc as an ingredient in medications.

None of the panelists had participated in any toxicologic or epidemiologic study of talc and cancers or diseases of the digestive organs. None of the panelists had made it his or her business to become familiar with the extensive studies of asbestos and talc in relation to gastrointestinal cancers. The lack of experience, or of specific citations to any study, did not stop Daniel Cramer from suggesting that talc was responsible for inflammatory bowel disease, autoimmune diseases, and gastrointestinal cancers. Like Cramer, epidemiologist Joellen Schildkraut focused on ovarian cancer, and made the false assertion that the relationship between talc and gastrointestinal cancers is understudied. Schildkraut held back from asserting that talc causes ovarian cancer, but she heartily endorsed banning talc on the precautionary principle. All the panelists concurred with the suggestion that talc be eliminated from food and drugs, without waiting for “the epidemiologists to catch up.”

Two issues were grossly misrepresented by the panelists. None of them, however, was well informed enough for the misrepresentations to have been overt lies. The first whopper was that National Toxicology Program (NTP) testing had shown carcinogenicity of talc in its inhalational studies for the lung and other organs. The second whopper was that talc-coated rice was prevalently used in the United States, and that it was responsible for digestive organ cancers. Nicole C. Kleinstreuer, who has worked at the NTP, accurately described its activities and gave a description of its animal talc studies, perhaps a bit slanted, but not too inaccurate. When George Tidmarsh later misrepresented the NTP talc findings, however, Kleinstreuer was silent.

NTP Ingestion Studies

Makary did not identify the NTP studies to which he referred, but Kleinstreuer described a talc inhalation study that can refer to only one publication. The NTP conducted long-term rodent inhalation and ingestion assays for both talc and different kinds of asbestos, in the 1980s and 1990s. For talc, the NTP published, in 1993, only one long-term inhalational study in rats and mice.[2] In mice exposed to talc by inhalation for up to two years, there was no evidence of any “neoplastic” effects. The results in rats were more difficult to interpret. In male rats, exposed for over two years, there was weak evidence of neoplastic effects based upon an increased incidence of benign or malignant adrenal gland pheochromocytomas. In female rats, the NTP reported “clear evidence” of excess alveolar/bronchiolar (lung) adenomas and carcinomas, and of benign or malignant pheochromocytomas of the adrenal gland. The meaning of these rodent studies obviously varies depending upon whether you are a rat or a mouse of a certain breed; the meaning for humans is even murkier, even for humans who are rodent-like. The multiple comparisons across exposure levels for dozens if not hundreds of outcomes, and the lumping of benign and malignant effects together, certainly make the NTP statistical analyses suspect. The report was marked by significant controversy, and some scientists refused to endorse its findings because the adrenal gland pheochromocytomas were not treatment-related; because the maximum-tolerated dose was exceeded for female rats at the higher exposure level, in violation of the study’s protocol; and because talc is not expected to cause tumors in rats (and mice) exposed at levels that do not cause “marked chronic lung toxicity.”[3]

One of the lawsuit industry’s, and Makary’s, theories about the harmfulness of ingested talc is based upon the supposition that talc has asbestos contaminants. This theory is as vague as the term asbestos itself, which has no mineralogical meaning; instead, the term was historically used to refer to six different minerals: actinolite, anthophyllite, amosite (cummingtonite-grunerite), chrysotile, crocidolite, and tremolite. All of these minerals, except for chrysotile, are amphibole minerals. Some of the amphibole minerals occur in both fibrous and non-fibrous forms, and the ill health effects of the amphibole fibers are generally attributed to their resistance to biological degradation and their high aspect ratio. Things get a bit crazy because the federal government, for purposes of standardizing aerosol measurements, set the aspect ratio for counting “fibers” at 3:1. The pathogenicity of “federal fibers,” which are not really fibers, is highly disputed.

The NTP never conducted long-term talc ingestion studies; it did something much better. The NTP tested dietary high-dose, long-term ingestion of various asbestos types in multiple species. The NTP did not leave the exposure issue vague, with “asbestos” as the dietary source. Instead, the NTP was more precise when testing whether ingesting “asbestos” was harmful to rodents. The NTP ran separate ingestion experiments on chrysotile, amosite, and crocidolite, with each form of asbestos making up one percent of the animals’ lifetime diet. Overall, these experiments were “null”; that is, they provided no support for the carcinogenicity of ingested asbestos of the types tested.

The NTP conducted lifetime ingestion studies in male and female rats with a diet of one percent crocidolite asbestos, the most toxic and carcinogenic form of asbestos in human beings. The NTP experiments showed that under these conditions, long-term ingestion of crocidolite asbestos was neither overtly toxic nor carcinogenic in male or in female rats.[4] After crocidolite, amosite asbestos (fibrous cummingtonite-grunerite, named for the “Asbestos Mines of South Africa”) is the most toxic and carcinogenic of the asbestos fibers. The NTP showed that feeding male and female rats amosite asbestos as one percent of their diet, for their lifetimes, was not overtly toxic, did not affect their survival, and was not carcinogenic.[5] The NTP repeated its lifetime one percent amosite diet in Syrian Golden hamsters, again without toxic or carcinogenic response in either the male or female hamsters.[6]

Looking at the least toxic and carcinogenic asbestos mineral, chrysotile, the NTP conducted long-term one percent feed studies of both “short range” and “long range” chrysotile (classified by fiber length) in Syrian Golden hamsters. Again, the results were “null”; that is, there was no treatment-related toxicity or carcinogenicity.[7] There were no increases in adrenal cortical adenomas (benign growths) when compared with concurrent controls, although there was an increase in these benign tumors when compared with pooled control groups from other experiments. Ultimately, the NTP concluded that the biological importance of these benign adrenal growths, in the absence of cancers or tumors of the gastrointestinal tract (the target organ), was questionable, at best.

Because of prior research suggesting that carcinogenicity was a function of fiber rigidity and length, the NTP tested ingested chrysotile in rats, at two different fiber lengths. For its experiments, the NTP defined “short-range” (SR) chrysotile as short fibers with a median length of 0.66 microns, and a range of 0.088 to 51.1 microns. “Intermediate-range” (IR) chrysotile fibers had a median length of 0.82 microns, with a range from 0.104 to 783.4 microns. The NTP did not use long-range chrysotile fibers, which are generally greater than 5 microns in length. Male and female F344/N rats ingested an NTP one percent diet of chrysotile, in the two lengths, SR and IR, for a lifetime. There were no neoplastic or non-neoplastic diseases, no overt toxicity, and no decrease in survival associated with SR chrysotile ingestion, in either the male or the female rats.[8] In the female rats, there was no effect on fertility or litters, no overt toxicity, and no carcinogenicity from IR chrysotile ingestion. The male rats also did not show any adverse clinical signs, but they experienced a statistically insignificant increase in benign colonic polyps, which the NTP stretched to characterize as “some” (but not clear) evidence of carcinogenicity.

Rice is Nice, With or Without Talc

The FDA panelists’ inaccurate claims about talc on rice also cry out for rebuttal, which no panelist seemed able or willing to give. Given that the panel was convened with only four days’ notice, and without public comment, it operated in a fact-free zone, mostly as a propaganda exercise. The history of the ingested asbestos and talc controversy goes back over half a century. Some background is needed to understand exactly how outlandish the talc-on-rice claim is.

The causal association between asbestosis and lung cancer was well established by the early 1960s,[9] as was the causal association between crocidolite asbestos exposure and mesothelioma.[10] Some sources carelessly credit Irving Selikoff with these discoveries, but he was not so much a discoverer as a zealous spokesman for the safety of asbestos-exposed workers. Selikoff worked hand-in-hand with various labor unions to publicize and politicize asbestos risks that had been shown by other researchers. Credit for the lung cancer connection properly goes to earlier work done by Sir Richard Doll and others, and the crocidolite-mesothelioma connection was shown by J. Christopher Wagner, in 1960. Where Selikoff deserves credit is in his tireless efforts to expand the scope of asbestos-related diseases beyond lung cancer and mesothelioma, with or without sufficient evidence, and thus to expand the compensability of other diseases of ordinary life in asbestos workers.

In his efforts to extend the scope of compensation, Selikoff did not limit himself to risks that had been scientifically established; he sought to expand the list of asbestos-related diseases. He advanced the unsubstantiated notions that all six kinds of asbestos minerals carried the same risks, that asbestos caused virtually every kind of cancer in humans, that any asbestos in the environment required extreme remedial action, and that asbestos was responsible for a very high percentage of all human cancers.

No doubt Selikoff wanted credit for scientific discoveries, but he also wanted science that would support compensation. Selikoff understood that if the asbestos workers stopped smoking, their risks of lung cancer would fall, and their cancer morbidity and mortality would be more influenced by gastrointestinal cancers, given that colorectal cancer was the leading cause of cancer-related death in non-smoking men, in the 1960s.

By 1950, Selikoff had already become an advocate who testified and wrote reports as a claimants’ expert witness in many asbestos cases. In the early 1950s, New Jersey lawyer Carl Gelman retained Selikoff to examine 17 workers from the Paterson plant of Union Asbestos and Rubber Company (UNARCO). Gelman filed workers’ compensation claims on behalf of these UNARCO workers, and Selikoff supported Gelman’s claims with reports and testimony. One of the UNARCO claimants, Anton Szczesniak, with Selikoff’s support as an expert witness, sought compensation for “intestinal cancer.” In 1965, Selikoff testified to support an asbestos insulator’s claim that asbestos exposure had caused his colorectal cancer.[11] In 1974, Selikoff wrote a review article on asbestos exposure and gastrointestinal cancers, without any disclosure of his pro-plaintiff testimonial adventures.[12] Serious epidemiologists such as Sir Richard Doll and Sir Richard Peto pushed back on Selikoff’s exaggerated projections of asbestos-related mortality,[13] and on his assertion that asbestos caused digestive system cancers.[14] Forty years after Selikoff testified for the claimant in an asbestos colorectal cancer case, the Institute of Medicine published a systematic review of the evidence available to Selikoff, along with later evidence, and concluded that the evidence was insufficient “to infer a causal relationship between asbestos exposure and pharyngeal, stomach, and colorectal cancers.”[15]

Selikoff’s rent-seeking and fear-mongering spawned many asbestos scares. Some scientists accepted Selikoff’s dogma that a single asbestos fiber, of any variety, could cause any human cancer. The Mt. Sinai jihad against “asbestos” extended to any exposures involving asbestos, or even other minerals containing “elongated mineral particles” that nominally met the crude definition of asbestos. This jihad led to prolonged litigation against the Reserve Mining Company, which had held permits since the late 1940s to dump taconite tailings in Lake Superior. Using Selikoff’s claim that “asbestiform” mineral particles had entered the water supply, the U.S. Environmental Protection Agency was able to obtain an injunction against the mining company.[16]

Regulatory overreach, Selikoff’s exaggerated testimony, and the trial judge’s partiality and bias marred the litigation.[17] After decades of research on asbestos in drinking water, there remains no substantial evidence that supports a conclusion that ingested asbestos in drinking water causes gastrointestinal or any other cancer.[18]

Selikoff was the head of an anti-asbestos lobby that promoted the fiction that asbestos was responsible for all manner of human ailments, regardless of dose or route of administration.[19] One of the panics he helped initiate involved the claim that talc-dusted rice was responsible for the high rate of stomach cancer among Japanese in Japan.

Reuben Merliss published an article in Science, in 1971, in which he attempted to attribute the high rate of stomach cancer in Japan to the Japanese custom of dusting rice with talc. Merliss relied upon overall population rates and trends to draw an ecologic inference that Japanese rice (with its talc and any asbestos contaminants) was responsible for Japan’s higher incidence of stomach cancer.[20]

The Merliss hypothesis, inspired by Selikoff, was sunk by a much more careful analysis (which got less media coverage). Two epidemiologists analyzed data about use of talc-coated rice in Japan and Hawaii, and found no support for the claim that talc-coated rice increased the risk of developing stomach cancer.[21]

Their more careful dietary assessment found high rates of stomach cancer among Japanese in Japan who did not consume talc-coated rice, while Japanese in Hawaii, who consumed considerable quantities of talc-coated rice, had intermediate rates of stomach cancer (lower than in Japan). Filipinos in Hawaii had very low rates of gastric cancer, even though they consumed the greatest amounts of talc-coated rice of any of the observed groups. The secular incidence trend of stomach cancer decreased more substantially among the talc-exposed Japanese living in Hawaii than among the non-exposed Japanese living in Japan.

Although the asbestos perpetual motion litigation machine continues to churn, the lawsuit industry has been hampered by the bankruptcy of virtually every company that made an asbestos-containing product, and the reduction of asbestos use and exposures over the last 50 years. The lawsuit industry’s shift to demonize and monetize talc as the next mineral target was predictable. What was not predictable was that we would have a Secretary of Health & Human Services whose sole experience in medicine has been in suing pharmaceutical and other manufacturing industries, perpetuating medieval beliefs in the miasma theory of disease causation,[22] and spreading conspiracies, misinformation, and disinformation. FDA Commissioner Makary has shown himself to be a willing accomplice in advancing the Secretary’s agenda. In his closing remarks, Makary made unsupported assertions, then retreated to the dodge that he was just asking questions. Makary strongly suggested that the recent increase in colorectal cancer among young people has been caused by the use of talc in food and medications. He failed to reference any evidence for his suggestion, which is, in any event, hard to square with the history of use of talc in medications for centuries, and the steady overall decline in the incidence of colorectal cancer in men and women.[23]

The Center for Truth in Science has sponsored rigorous systematic reviews of the evidence on cosmetic talc use and female reproductive cancers,[24] and respiratory cancers.[25] The systematic review of talc and reproductive organ cancers integrated evidence across toxicologic and epidemiologic studies, and found suggestive evidence of no association between the use of perineal talc and ovarian and endometrial cancers. The systematic review of talc use and respiratory cancers similarly integrated the available toxicologic and epidemiologic evidence, and rejected a causal association. The review reached a conclusion of suggestive evidence in the opposite direction – of no association between inhaled talc and mesothelioma or lung cancer.

The FDA talc panel was fool’s gold, and not the promised “gold standard” science. Rather than engaging with the systematic reviews sponsored by the Center, or for that matter with any systematic reviews, Commissioner Makary and his panel wallowed in anecdotes, stories, and isolated study results, without trying to identify and synthesize all the available evidence.


[1] FDA Expert Panel on Talc, “Independent Expert Panel to Evaluate Safety and Necessity of Talc in Food, Drug, and Cosmetic Products,” FDA (May 20, 2025).

[2] NTP Technical Report on the Toxicology and Carcinogenesis Studies of Talc (CAS No. 14807-96-6) in F344/N Rats and B6C3F1 Mice (Sept. 1993).

[3] Jay I. Goodman, “An Analysis of the National Toxicology Program’s (NTP) Technical Report (NTP TR 421) on the Toxicology and Carcinogenesis Studies of Talc,” 21 Regulatory Toxicol. & Pharmacology 244 (1995). See also Robyn L. Prueitt, Nicholas L. Drury, Ross A. Shore, Denali N. Boon & Julie E. Goodman, “Talc and human cancer: a systematic review of the experimental animal and mechanistic evidence,” 54 Critical Reviews in Toxicology 359 (2024).

[4] NTP TR-280 Toxicology and Carcinogenesis Studies of Crocidolite Asbestos (CASRN 12001-28-4) In F344/N Rats (Feed Studies) (1988).

[5] NTP TR-279 Toxicology and Carcinogenesis Studies of Amosite Asbestos (CASRN 12172-73-5) in F344/N Rats (Feed Studies) (1990).

[6] NTP TR-249 Lifetime Carcinogenesis Studies of Amosite Asbestos (CASRN 12172-73-5) in Syrian Golden Hamsters (Feed Studies) (1983).

[7] NTP TR-246 Lifetime Carcinogenesis Studies of Chrysotile Asbestos (CASRN 12001-29-5) in Syrian Golden Hamsters (Feed Studies) (1990).

[8] NTP – TR-295 Toxicology and Carcinogenesis Studies of Chrysotile Asbestos (CASRN 12001-29-5) in F344/N Rats (Feed Studies) (1985).

[9] See Richard Doll, “Mortality from Lung Cancer in Asbestos Workers,” 12 Br. J. Indus. Med. 81 (1955).

[10] See J. Christopher Wagner, C.A. Sleggs, and Paul Marchand, “Diffuse pleural mesothelioma and asbestos exposure in the North Western Cape Province,” 17 Br. J. Indus. Med. 260 (1960); J. Christopher Wagner, “The discovery of the association between blue asbestos and mesotheliomas and the aftermath,” 48 Br. J. Indus. Med. 399 (1991).

[11] See “Health Hazard Progress Notes,”16 The Asbestos Worker 13 (May 1966) (“A recent decision has widened the range of compensable diseases for insulation workers even further. A member of Local No. 12. Unfortunately died of a cancer of the colon. Dr. Selikoff reported to the compensation court that his research showed that these cancers of the intestine were at least three times as common among the insulation workers as in men of the same age in the general population. Based upon Dr. Selikoff’s testimony, the Referee gave the family a compensation award, holding that the exposure to many dusts during employment was responsible for the cancer. The insurance company appealed this decision. A special panel of the Workman’s Compensation Board reviewed the matter and agreed with the Referee’s judgment and affirmed the compensation award. This was the first case in which a cancer of the colon was established as compensable and it is likely that this case will become an historical precedent.”).

[12] Irving J. Selikoff, “Epidemiology of Gastrointestinal Cancer,” 9 Envt’l Health Persp. 299 (1974).

[13] Richard Doll & Richard Peto, “The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today,” 66 J. Nat’l Cancer Instit. 1191 (1981).

[14] Richard Doll and Julian Peto, Asbestos: Effects on Health of Exposure to Asbestos 8 (1985).

[15] Jonathan M. Samet, et al., Institute of Medicine, Asbestos: Selected Cancers (2006).

[16] See Wendy Wriston Adamson, Saving Lake Superior: A Story of Environmental Action (1974); Frank D. Schaumburg, Judgment Reserved: A Landmark Environmental Case (1976); Robert V. Bartlett, The Reserve Mining Controversy: Science, Technology, and Environmental Quality (1980); Thomas F. Bastow, This Vast Pollution: United States of America v. Reserve Mining Company (1986); Michael E. Berndt & William C. Brice, “The origins of public concern with taconite and human health: Reserve Mining and the asbestos case,” 52 Regulatory Toxicol. & Pharmacol. S31 (2008).

[17] Reserve Mining Co. v. Lord, 529 F.2d 181 (8th Cir. 1976) (removing Judge Lord from case).

[18] See World Health Organization, Asbestos in Drinking Water (4th ed. 2021) (“no causal association between asbestos exposure via drinking-water and cancer development has been reported for any asbestos fibre type”); Jennifer Go, Nawal Farhat, Karen Leingartner, Elvin Iscan Insel, Franco Momoli, Richard Carrier & Daniel Krewski, “Review of epidemiological and toxicological studies on health effects from ingestion of asbestos in drinking water,” 54 Critical Reviews in Toxicology 856 (2024) (“Based on high-quality animal studies, an increased risk for cancer or non-cancer endpoints was not supported, aligning with findings from human studies. Overall, the currently available body of evidence is insufficient to establish a clear link between asbestos contamination in drinking water and adverse health effects.”); Kenneth D. MacRae, “Asbestos in drinking water and cancer,” 22 J. Royal Coll. Physicians 7 (1988).

[19] Francis Douglas Kelly Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997) (“[A]n anti-asbestos lobby, based in the Mount Sinai School of Medicine of the City University of New York, promoted the fiction that asbestos was an all-pervading menace, and trumped up a number of asbestos myths for widespread dissemination, through media eager for bad news.”).

[20] Reuben R. Merliss, “Talc-Treated Rice and Japanese Stomach Cancer,” 173 Science 1141 (1971). The claim persists in the underworld of medical speculation. See E. Whitin Kiritani, “Asbestos and Stomach Cancer in Japan – A Connection?” 33 Medical Hypotheses 159 (1990).

[21] Grant N. Stemmermann & Lawrence N. Kolonel, “Talc-coated rice as a risk factor for stomach cancer,” 31 Am. J. Clin. Nutrition 2017 (1978).

[22] Paul Offit, “Understanding RFK Jr.,” Beyond the Noise (Feb. 11, 2025).

[23] American Cancer Society, “Key Statistics for Colorectal Cancer” (last revised April 28, 2025).

[24] Heather N. Lynch, Daniel J. Lauer, Olivia Messina Leleck, Rachel D. Freid, Justin Collins, Kathleen Chen, William J. Thompson, A. Michael Ierardi, Ania Urban, Paolo Boffetta & Kenneth A. Mundt, “Systematic review of the association between talc and female reproductive tract cancers,” 5 Front. Toxicol. 1157761 (2023).

[25] Heather N. Lynch, Daniel J. Lauer, William J. Thompson, Olivia Leleck, Rachel D. Freid, Justin Collins, Kathleen Chen, A. Michael Ierardi, Ania M. Urban, Michael A. Cappello, Paolo Boffetta & Kenneth A. Mundt, “Systematic review of the scientific evidence of the pulmonary carcinogenicity of talc,” 10 Front. Public Health 989111 (2022).

David Egilman, Rest in Peace, Part 3

April 30th, 2024

Egilman was sufficiently clever to discern that if his “method” led to a conclusion that silicone gel breast implants cause autoimmune disease, but the Institute of Medicine, along with court-appointed experts, found no basis for a causal conclusion, then by modus tollens Egilman’s “method” was suspect and must be rejected.[1] This awareness likely explains the extent to which he went to cover up his involvement in the plaintiffs’ causation case in the silicone litigation.

Egilman’s selective leaking of Eli Lilly documents was also a sore point. Egilman’s participation in an unlawful conspiracy was carefully detailed in an opinion by the presiding judge, Hon. Jack Weinstein.[2] His shenanigans were also widely covered in the media,[3] and in the scholarly law journals.[4] When Egilman was caught with his hand in the cookie jar, conspiring to distribute confidential Zyprexa documents to the press, he pleaded the Fifth Amendment. The proceedings did not go well, and Egilman ultimately stipulated to his responsibility for violating a court order, and agreed to pay a monetary penalty of $100,000. Egilman’s settlement was prudent. The Court of Appeals affirmed sanctions against Egilman’s co-conspirator, for what the court described as “brazen” conduct.[5]


Despite being a confessed contemnor, Egilman managed to attract a fair amount of hagiographic commentary.[6] An article in Science described Egilman as “the scourge of companies he accuses of harming public health and corrupting science,”[7] and quoted fawning praise from his lawsuit industry employers: “[h]e’s a bloodhound who can sniff out corporate misconduct better than security dogs at an airport.”[8] In 2009, a screenwriter, Patrick Coppola, announced that he was developing a script for a “Doctor David Egilman Project.” A webpage (still available on the Wayback Machine)[9] described the proposed movie as Erin Brockovich meets The Verdict. Perhaps it would have been more like King Kong meets Lenin in October.

After I started my blog, Tortini, in 2010, I occasionally commented upon David Egilman. As a result, I received occasional emails from various correspondents about him. Most were from lawyers aggrieved by his behavior at deposition or in trial, or from physicians libeled by him. I generally discounted those partisan and emotive accounts, although I tried to help by sharing transcripts from Egilman’s many testimonial adventures.

One email correspondent was Dennis Nichols, a well-respected journalist from Cincinnati, Ohio. Nichols had known Egilman in the early 1980s, when Egilman was at NIOSH, in Cincinnati. Nichols had some interests in common with Egilman, and had socialized with him 40 years ago. Dennis wondered what had become of Egilman, and one day googled him, and found my post “David Egilman’s Methodology for Divining Causation.” Nichols found my description of Egilman’s m.o. consistent with what he remembered from the early 1980s. In the course of our correspondence, Dennis Nichols shared his recollections of his interactions with the very young David Egilman. Dennis Nichols died in February 2022,[10] and I am taking the liberty of sharing his first-hand account with a broader audience.

“I met David Egilman only two or three times, and that was more than 30 years ago, when he was an epidemiologist at NIOSH. When I remarked on the content of conversation with him in about 1990, he and a lawyer representing him threatened to sue me for libel, to which I picked up the gauntlet. I had a ‘blood from the turnip’ defense to accompany my primary defense of truth, and besides, Egilman was widely known as a Communist.

I had lunch with Egilman in a Cincinnati restaurant in 1982 after someone suggested that he might be interested in supporting an arts and entertainment publishing venture that I was involved with, called The Outlook; notwithstanding that I was a conservative, The Outlook leaned left, and its key staff were Catholic pacifists and socialists. Over lunch, Egilman explained to me that he considered himself a Marxist-Leninist, his term, and that the day would come when people like him would have to kill people like me, again his language.

He subsequently invited me and the editor of The Outlook to a reception he had at his house on Mt. Adams, a Cincinnati upscale and Bohemian neighborhood, or at least as close as Cincinnati gets to Bohemian, where he served caviar that he had brought back from his most recent trip to Moscow and displayed poster-size photographs of Lenin, Marx, Stalin, Luxemburg, Gorky and other heroes of the Soviet Union and Scientific Socialism. I do not recall that Egilman admired Mao; the USSR had considerable tension in those years with China, and Egilman was clearly in the USSR camp in those days of Brezhnev, and he said so. Egilman said he traveled often to the Soviet Union, I think in the course of his work, which probably was not common in 1982.

The Outlook editor had met Egilman in the course of his advocacy journalism in reporting on the Fernald Feed Materials Production Center, now closed, which processed fuel cores for nuclear weapons.

Probably none of this matters a generation later, but is just nostalgia about an old communist and his predations before he got into exploiting medical mal. May he rot.”[11]

The account from Mr. Nichols certainly rings true. From years of combing over Egilman’s website (before he added password protection), anyone could see that he viewed litigation as class warfare that would advance his political goals. Litigation has the advantage of being lucrative, and bloodless, too – perfect for fair-weather Marxists.

Did Egilman remain a Marxist into the 1990s and the 21st century? Does it matter?

If Egilman was as committed to Marxist doctrine as Mr. Nichols suggests, he would have recognized that, as an expert witness, he needed to tone down his public rhetoric. Around the time I corresponded with Mr. Nichols, I saw that Egilman was presenting to the Socialist Caucus of the American Public Health Association (2012-13). Egilman always struck me as a bit too pudgy and comfortable really to yearn for a Spartan workers’ paradise. In any event, Egilman was probably not committed to the violent overthrow of the United States government, because he had found a better way to destabilize our society by allying himself with the lawsuit industry. The larger point, however, is that political commitments and ideological biases are just as likely as financial interests to lead to motivated reasoning, if not more so.

Although Egilman’s voice needed no amplification, he managed to turn up the wattage of his propaganda by taking over the reins, as editor in chief, of a biomedical journal. The International Journal of Occupational and Environmental Health (IJOEH) was founded and paid for by Joseph LaDou, in 1995. By 2007, Egilman had taken over as chief editor. He ran the journal out of his office, and the journal’s domain was registered in his name. Egilman published frequently in the journal, which became a vanity press for his anti-manufacturer, pro-lawsuit industry views. His editorial board included such testifying luminaries as Arthur Frank, Barry S. Levy, and David Madigan.

Douglas Starr, in an article in Science, described IJOEH as having had a reputation for opposing “mercenary science,” which is interesting given that Egilman, many on his editorial board, and many of the authors who published in IJOEH were retained, paid expert witnesses in litigation. The journal itself could not have been a better exemplar[12] of mercenary science, in support of the lawsuit industry.

In 2015, IJOEH was acquired by the Taylor & Francis publishing group, which, in short order, declined to renew Egilman’s contract to serve as editor. The new publisher also withdrew one of Egilman’s peer-reviewed papers that had been slated for publication. Taylor & Francis reported to the blog Retraction Watch that Egilman’s article had been “published inadvertently, before the review process was completed,” and was later deemed “unsuitable for publication.”[13] Egilman and his minions revolted, but Taylor & Francis held the line and retired the journal.[14]

Egilman recovered from the indignity foisted upon him by Taylor & Francis, by finding yet another journal, the Journal of Scientific Practice and Integrity (JOSPI).[15] Egilman probably said all that was needed to describe the goals of this new journal by announcing that the journal’s “partner” was the Collegium Ramazzini. Egilman of course was the editor in chief, with an editorial board made up of many well-known, high-volume testifiers for the lawsuit industry: Adriane Fugh-Berman, Barry Castleman, Michael R. Harbut, Peter Infante, William E. Longo, David Madigan, Gerald Markowitz, and David Rosner.

Some say that David Egilman was a force of nature, but so are hurricanes, earthquakes, volcanoes, and pestilences. You might think I have nothing good to say about David Egilman, but that is not true. The Lawsuit Industry has often organized and funded mass radiographic and other medical screenings to cull plaintiffs from the population of workers.[16] Some of these screenings led to the massive filing of fraudulent claims.[17] Although he was blind to many of the excesses of the lawsuit industry, Egilman spoke out against attorney-sponsored and funded medico-legal screenings. He published his criticisms in medical journals,[18] and he commented freely in lay media. He told one reporter that “all too often these medical screenings are little more than rackets perpetrated by money-hungry lawyers. Most workers usually don’t know what they’re getting involved in.”[19] Among the Collegium Ramazzini crowd, Egilman was pretty much a lone voice of criticism.


[1] See “David Egilman’s Methodology for Divining Causation,” Tortini (Sept. 6, 2012).

[2] In re Zyprexa Injunction, 474 F. Supp. 2d 385 (E.D.N.Y. 2007). The Zyprexa case was not the first instance of Egilman’s involvement in a controversy over a protective order. Ballinger v. Brush Wellman, Inc., 2001 WL 36034524 (Colo. Dist. June 22, 2001), aff’d in part and rev’d in part, 2002 WL 2027530 (Colo. App. Sept. 5, 2002) (unpublished).

[3] “Doctor Who Leaked Documents Will Pay $100,000 to Lilly,” N.Y. Times (Sept. 8, 2007).

[4] William G. Childs, “When the Bell Can’t Be Unrung: Document Leaks and Protective Orders in Mass Tort Litigation,” 27 Rev. Litig. 565 (2008).

[5] Eli Lilly & Co. v. Gottstein, 617 F.3d 186, 188 (2d Cir. 2010).

[6] Michelle Dally, “The Hero Who Wound Up On the Wrong Side of the Law,” Rhode Island Monthly 37 (Nov. 2001).

[7] Douglas Starr, “Bearing Witness,” 363 Science 334 (2019).

[8] Id. at 335 (quoting Mark Lanier, who fired Egilman for his malfeasance in the Zyprexa litigation).

[9] Doctor David Egilman Project, at <https://web.archive.org/web/20130902035225/http://coppolaentertainment.com/ddep.htm>.

[10] Bill Steigerwald, “The death of a great Ohio newspaperman” (Feb. 08, 2022) (“Dennis Nichols of Cincinnati’s eastern suburbs was a dogged, brilliant and principled journalist who ran his family’s two community papers and gave the local authorities all the trouble they deserved.”); John Thebout, Village of Batavia Mayor, “Batavia Mayor remembers Dennis Nichols,” Clermont Sun (Feb. 9, 2022).

[11] Dennis Nichols email to Nathan Schachtman, re David Egilman (Mar. 9, 2013).

[12] Douglas Starr, “Bearing Witness,” 363 Science 334, 337 (2019).

[13] See “Public health journal’s editorial board tells publisher they have ‘grave concerns’ over new editor,” Retraction Watch (April 27, 2017).

[14] “David Egilman and Friends Circle the Wagon at the IJOEH,” Tortini (May 4, 2017).

[15] See “A New Egilman Bully Pulpit,” Tortini (Feb. 19, 2020).

[16] Schachtman, “State Regulators Impose Sanction Unlawful Screenings 05-25-07,” Washington Legal Foundation Legal Opinion Letter, vol. 17, no. 13 (May 2007); Schachtman, “Silica Litigation – Screening, Scheming, and Suing,” Washington Legal Foundation Critical Legal Issues Working Paper (December 2005); Schachtman & Rhodes, “Medico-Legal Issues in Occupational Lung Disease Litigation,” 27 Seminars in Roentgenology 140 (1992).

[17] In re Silica Prods. Liab. Litig., 398 F. Supp. 2d 563 (S.D. Tex. 2005) (Jack, J.).

[18] See David Egilman and Susanna Rankin Bohme, “Attorney-directed screenings can be hazardous,” 45 Am. J. Indus. Med. 305 (2004); David Egilman, “Asbestos screenings,” 42 Am. J. Indus. Med. 163 (2002).

[19] Andrew Schneider, “Asbestos Lawsuits Anger Critics,” St. Louis Post-Dispatch (Feb. 11, 2003).

David Egilman RIP – Part Two

April 28th, 2024

There was a good bit of irony in Egilman’s reaching out to me to help him prepare for my deposition of him in a silicone gel breast implant case. First, the materials he apparently wanted were all in a document repository maintained for the benefit of plaintiffs’ lawyers. He needed only to have asked the Wilentz firm lawyers for the relevant documents. In rather typical fashion, Egilman wanted to create a faux issue about defense counsel’s hiding the ball.

Second, Egilman had already completed his report, and his request showed that his opinions had been asserted without looking at material documents.

Third, and perhaps most important, in New Jersey, attorneys are not generally allowed to communicate directly with a represented party.[1] Expert witnesses are usually considered agents of the parties that retained them, which means that such witnesses are also not free to communicate directly with an adverse party or its counsel. There was no exact precedent for Egilman’s misconduct, but it was obviously disturbing to plaintiffs’ counsel, who promptly withdrew Egilman as a witness in the case. Alas, I did not get my chance to conduct this examination before trial.

Much of the irony in the New Jersey situation derived from Egilman’s fancying himself something of an ethicist. He certainly was quick to pronounce ethical judgments upon others, especially anyone in manufacturing industry, or any scientist who served as an expert witness opposite him. As he made clear at his CSPI lecture, Egilman had an ideological bias, and it deeply affected his judgment of science and history. He swam in the hogwash of critical theory, cultural hegemony, and Marxist cant.

To Egilman, it was obvious that the material forces of capitalism meant that manufacturing industry was incapable of honestly defending its products. The motives, biases, and depredations of the lawsuit industry and its agents rarely concerned him. As a committed socialist, Egilman was incurious about how and why occupational and environmental diseases were so prevalent in socialist and communist countries, where profits are outlawed and the people own the means of production.[2]

Like the radical labor historians David Rosner and Gerald Markowitz, Egilman tried to cram the history of silicosis (and even silicosis litigation) into a Marxist narrative of class conflict, economic reductionism, and capitalist greed. Egilman’s ideological bias marred his attempts to relate the history of dust diseases. His bias made him a careless historian. Several of his attempts to relate the history of dust diseases were little more than recycled litigation reports, previously filed in various cases, with footnotes added. Egilman was occasionally listed as an expert witness in silicosis cases, but he glibly and ignorantly lumped the history of silica with that of asbestos diseases. In one article, for example, he wrote:

“Knowledge that asbestos and silica were hazardous to health became public several decades after the industry knew of the health concerns. This delay was largely influenced by the interests of Metropolitan Life Insurance Company (MetLife) and other asbestos mining and product manufacturing companies.”[3]

Egilman’s claims about silica, however, were never supported in this article or elsewhere. A brief review of two published monographs by Frederick L. Hoffman, published before 1923, should be sufficient to consign the authors’ carelessness to the dustbin of occupational history.[4] The bibliographies in both these monographs document the widespread interest in, and awareness of, the occupational hazards of silica dusts, going back into the 19th century, among the media, the labor movement, and the non-industrial scientific community. The conversation about silicosis was on full display in the national silicosis conference of 1938, sponsored by Secretary of Labor Frances Perkins.

On at least one occasion, Egilman publicly acknowledged his own entrepreneurial and profit motives. In a consumer diacetyl exposure case (claiming bronchiolitis obliterans), a federal district court excluded Egilman’s causation opinions as unreliable. The court found that Egilman had manipulated data to reach misleading conclusions, devoid of scientific validity.[5]

Egilman was so distraught by being excluded that he sought to file a personal appeal to the United States Court of Appeals.[6] When the defendant-appellee opposed Egilman’s motion to intervene in the plaintiff’s appeal, Egilman stridently asserted his right to participate,[7] and filed his own declaration.[8] The declaration is required reading for anyone who wants to understand Egilman’s psychopathology.

In what was nothing short of a scurrilous pleading, Egilman attacked the district judge for having excluded him from testifying. He went so far as to claim that the judge had defamed him with derogatory comments about his “methodology.” If Egilman’s challenge to the trial judge was not bizarre enough, Egilman also claimed a right to intervene in the appeal by advancing the claim that the Rule 702 exclusion hurt his livelihood.  The following language is from paragraph 11 of Dr. Egilman’s declaration in support of his motion:

“The Daubert ruling eliminates my ability to testify in this case and in others. I will lose the opportunity to bill for services in this case and in others (although I generally donate most fees related to courtroom testimony to charitable organizations, the lack of opportunity to do so is an injury to me). Based on my experience, it is virtually certain that some lawyers will choose not to attempt to retain me as a result of this ruling. Some lawyers will be dissuaded from retaining my services because the ruling is replete with unsubstantiated pejorative attacks on my qualifications as a scientist and expert. The judge’s rejection of my opinion is primarily an ad hominem attack and not based on an actual analysis of what I said – in an effort to deflect the ad hominem nature of the attack the judge creates ‘strawman’ arguments and then knocks the strawmen down, without ever addressing the substance of my positions.”

Egilman was a bit coy about how much of his fees went to him, and how much went to charity. To give the reader some idea of the artificial flavor of Egilman’s pomposity, paragraph 8 of his remarkable declaration avers:

“My views on the scientific standards for the determination of cause-effect relationships (medical epistemology) have been cited by the Massachusetts Supreme Court (Vassallo v. Baxter Healthcare Corporation, 428 Mass. 1 (1998)):

Although there was conflicting testimony at the Oregon hearing as to the necessity of epidemiological data to establish causation of a disease, the judge appears to have accepted the testimony of an expert epidemiologist that, in the absence of epidemiology, it is ‘sound science…. to rely on case reports, clinical studies, in vivo tests and animal tests.’ The judge may also have relied on the affidavit of the plaintiff’s epidemiological expert, Dr. David S. Egilman, who identified several examples in which disease causation has been established based on animal and clinical case studies alone to demonstrate that doctors utilize epidemiological data as ‘one tool among many’.”

Egilman’s quote from the Vassallo decision is accurate as far as it goes,[9] but the underlying assertion is either a lie or a grand self-delusion. There was epidemiologic evidence on silicone and connective tissue disease before the Oregon federal district court and its technical advisors, and the court resoundingly rejected the plaintiffs’ causal claims as unsupported by valid evidence, with or without epidemiologic evidence. The argument that epidemiology was unnecessary came from Dr. Egilman’s affidavit, and the plaintiffs’ counsel’s briefs, which were considered and rejected by Judge Jones.[10]

Egilman’s affidavit in connection with the so-called Oregon hearings, which took place during the summer of 1996, was not a particularly important piece of evidence. Most of the “regulars” had put in reports or affidavits in the Hall case. Egilman failed to appear at the proceedings before the court and its technical advisors; and he was not mentioned by name in the Hall decision. Nonetheless, Judge Jones, in his published decision, clearly rejected all the plaintiffs’ witnesses and affiants, including Egilman, in their efforts to make a case for silicone as a cause of autoimmune disease.

A few months after the Oregon hearings, Judge Weinstein, in the fall of 1996, along with other federal and state judges, held a “Daubert” hearing on the admissibility of expert witness opinion testimony in breast implant cases, pending in New York state and federal courts.  Egilman’s affidavit on causation was once again in play. Plaintiffs’ counsel suggested that Egilman might testify, but he was once again a no show. Egilman’s affidavit was in the record, and the multi-judge panel considered and rejected the claimed causal connection between silicone and autoimmune or connective tissue diseases.[11]

There is more, however, to the disingenuousness of Dr. Egilman’s citation to the Vassallo case. The Newkirk court, in receiving his curious declaration, would not likely have known that Vassallo was a silicone gel breast implant case, and one may suspect that Dr. Egilman wanted to keep the Ninth Circuit uninformed of his role in the silicone litigation. After all, by 1999, the Institute of Medicine (now the National Academies of Sciences, Engineering, and Medicine) had delivered its assessment of the safety of silicone breast implants. Egilman’s distorted and exaggerated claims had been rejected.[12]

Alas, the jingle of coin doth not always soothe the hurt that conscience must feel. In his declaration, Egilman sought to temper the unfavorable judgment in the Newkirk diacetyl case by noting that only judges who had not previously encountered him would be unduly persuaded by Judge Peterson’s decision. Other judges who have heard him hold forth in court would no doubt see him for the brilliant crusading avenger that he is. The feared prejudice:

“will generally not occur in cases heard before Judges where I have already appeared as a witness. For example a New York state trial judge has praised plaintiffs’ molecular-biology and public-health expert Dr. David Egilman as follows: ‘Dr. Egilman is a brilliant fellow and I always enjoy seeing him and I enjoy listening to his testimony . . . . He is brilliant, he really is.’ [Lopez v. Ford Motor Co., et al. (120954/2000; In re New York City Asbestos Litigation, Index No. 40000/88).]”[13]

The United States Court of Appeals did not find Egilman the intervenor as brilliant as he thought himself. The court was not moved by either the bullying or the braggadocio.[14] The curious appeal was denied.

Egilman obviously could not sue the trial or appellate judges in the Newkirk case, but he did on other occasions try to deflect or diminish criticism by threats of litigation. In 2009, Laurence Hirsch, a physician formerly with Merck, wrote a commentary for the Mayo Clinic Proceedings on conflicts of interest. His commentary was a sustained critique of the hypocrisy and anti-industry bias of journals’ requirements for disclosure of conflict of interest.[15] Hirsch pointed out that some of the authors, including David Egilman, who had written articles critical of Merck, had given anemic disclosures of their own biases and conflicts of interest. Hirsch noted that Egilman had testified in many different litigations (too many diverse litigations to be credible for any one witness), including “silicone breast implants and connective tissue disease (characterized as the epitome of junk science)….”[16] With respect to compensation, Hirsch reported that:

“Egilman has testified for Mr Lanier and other attorneys in more than 100 tort cases (nearly always for plaintiffs) for approximately 2 decades and, by his own estimate, has earned $20 to $25 million for such testimony. Besides dollars, Egilman’s objectivity is questionable on other grounds. In 2007, he signed an admission that ‘there was another side to the story’ and was fined $100,000 by an outraged federal judge for actively facilitating the leak (through a third party) to a New York Times reporter (exclusively) of court-sealed documents in litigation involving Eli Lilly (Indianapolis, IN) and olanzapine (Zyprexa).”[17]

Hirsch’s commentary was a burr under the saddle of this lawsuit industry workhorse. Egilman wrote to Hirsch to demand that he correct and retract his comments. Egilman threatened to sue Dr. Hirsch for false and defamatory statements. Alas, Hirsch was intimidated by the threats. The correction was shaped by Egilman’s assertions, and what resulted was false and misleading:

“1. Dr Egilman’s income from serving as a medical expert in tort litigation, etc, was incorrectly reported as $20-$25 million during a 20-year period. Dr Egilman actually testified in court that it was $2-$2.5 million during that time. The source for the original statement in the Commentary was an online newspaper article dated July 31, 2005. The newspaper revised its report of the court testimony by Dr Egilman in a correction that was published only in the local, printed edition on August 2, 2005 (Michael Morris, oral communication, September 11, 2009).

2. Dr Egilman was not fined by a judge for leaking court sealed documents concerning the Lilly-Zyprexa litigation. Rather, Dr Egilman and Lilly entered into an (Stipulated) agreement by US District Judge Jack Weinstein, filed September 9, 2007, in which Dr Egilman agreed to pay Lilly $100,000, and to dismiss his appeal of the Court’s Final Judgment, Order and Injunction from February and March, 2007 (http://lawprofessors.typepad.com/tortsprof/files/EgilmanSettlement.pdf).

3. Dr Egilman has not testified in court in breast implant and connective tissue disease, or in antidepressant or antipsychotic drug cases. Dr Egilman did provide a sworn affidavit in one case involving local effects of leakage of silicone from breast implants (Vassallo vs Baxter Healthcare Corporation. Decisions of the Supreme Judicial Court of Massachusetts. May 5-July 16, 1998, p. 7).

I regret these inaccuracies in my Commentary.”[18]

Egilman’s estimate of his income, without access to his tax returns, was essentially worthless. The difference between a fine and a stipulated penalty was meaningless. The claim that Egilman did not testify in the Vassallo trial, in which the plaintiff claimed that she had developed atypical autoimmune disease as a result of her silicone gel breast implants, was simply a lie that Egilman foisted upon Dr. Hirsch.

Falsus in uno, falsus in omnibus.


[1] See Formal Opinion 503 of the ABA’s Standing Committee on Ethics and Professional Responsibility; ABA Model Rule of Professional Conduct 4.2.

[2] See, e.g., Jie Li, Peng Yin, Haidong Wang, Lijun Wang, Jinling You, Jiangmei Liu, Yunning Liu, Wei Wang, Xiao Zhang, Piye Niu, and Maigeng Zhou, “The burden of pneumoconiosis in China – an analysis from the Global Burden of Disease Study,” 22 BMC Pub. Health 1114 (2022); Na Wu, Chang Jiang Xue, Shiwen Yu, and Qiao Ye, “Artificial stone-associated silicosis in China: A prospective comparison with natural stone-associated silicosis,” 25 Respirology 518 (2019); Christa Schröder, Friedrich Klaus, Martin Butz, Dorothea Koppisch, and Otten Heinz, “Uranium mining in Germany: incidence of occupational diseases 1946-1999,” 75 Internat’l Arch. Occup. & Envt’l Health 235 (2002); A.G. Chebotarev, “Incidence of silicosis and the effectiveness of preventive measures at the Balei mines (1947 to 1967),” 13 Gigiena truda i professional’nye zabolevaniia 14 (1969) (in Russian); C. Hadjioloff, “The Development of Silicosis and Its Expert Evaluation as a Basis for the Rehabilitation of Silicosis Patients in Bulgaria,” 58 Medizinische Klinik 2023 (1963).

[3] David Egilman, Tess Bird, and Caroline Lee, “Dust diseases and the legacy of corporate manipulation of science and law,” 20 Internat’l J. Occup. & Envt’l Health 115, 115 (2014) (emphasis added).

[4] Frederick L. Hoffman, Mortality from Respiratory Diseases in the Dusty Trades; Dep’t of Labor, Bureau of Labor Statistics (1918); The Problem of Dust Phthisis in the Granite Stone Industry, Dep’t of Labor, Bureau of Labor Statistics (1922). See also U.S. Department of Labor Bulletin No. 21, part I, National Silicosis Conference, Report on Medical Control (1938).

[5] Newkirk v. Conagra Foods, Inc., 727 F.Supp. 2d 1006 (E.D. Wash. 2010).

[6] Schachtman, “Exclusion of Dr. David Egilman in Diacetyl Case,” Tortini (June 20, 2011); “David Egilman’s Methodology for Divining Causation,” Tortini (Sept. 6, 2012).

[7] Opposition of David Egilman to Motion for Order to Show Cause re Dismissal of Appeal for Lack of Standing, in case no. 10-35667, document 7547640 (9th Cir. Nov. 16, 2010).

[8] Declaration of David Egilman, in Support of Opposition to Motion for Order to Show Cause Why Appeal Should Not Be Dismissed for Lack of Standing, in case no. 10-35667, document 7547640 (9th Cir. Nov. 16, 2010) [hereinafter Declaration].

[9] Vassallo v. Baxter Healthcare Corporation, 428 Mass. 1, 12 (1998).

[10] See Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996). Judge Jones made his views very clear:  contrary to Egilman’s affidavit, epidemiology was needed, but lacking, in the plaintiffs’ case.

[11] Transcript at p.159:7-18, from Nyitray v. Baxter Healthcare Corp., CV 93-159 (E.D.N.Y. Oct. 9, 1996) (pre-trial hearing before Judge Jack Weinstein, Justice Lobis, and Magistrate Cheryl Pollak). See In re Breast Implant Cases, 942 F. Supp. 958 (E.& S.D.N.Y. 1996) (rejecting sufficiency of plaintiffs’ causation expert witness evidence, which included affidavit of Dr. Egilman). Years later, Judge Jack B. Weinstein elaborated upon his published breast-implant decision, with a bit more detail about how he viewed the plaintiffs’ expert witnesses. Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans”; “[t]he breast implant litigation was largely based on a litigation fraud. … Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) Egilman, who had filed an affidavit in support of the plaintiffs’ claims in the Hall case, and in the cases before Judge Weinstein, was within the scope of that litigation fraud.

[12] Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (1999).

[13] Declaration at p. 9 n. 2.

[14] Newkirk v. Conagra Foods, Inc. 727 F.Supp. 2d 1006 (E.D. Wash. 2010), aff’d, 438 Fed.Appx. 607 (9th Cir.2011); Egilman v. Conagra Foods, Inc., 2012 WL 3836100 (9th Cir. 2012), cert. denied, 568 U.S. 1229 (2013).

[15] Laurence J. Hirsch, “Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publications: The Tort Bar and Editorial Oversight of Medical Journals,” 84 Mayo Clin. Proc. 811 (2009).

[16] Id. at 815.

[17] Id. at 814 (internal citations omitted).

[18] Laurence J. Hirsch, “Corrections,” 85 Mayo Clin. Proc. 99 (2010).

How Access to a Protocol and Underlying Data Gave Yale Researchers a Big Black Eye

April 13th, 2024

Prelude to Litigation

Phenylpropanolamine (PPA) was a direct α-adrenergic agonist widely used as a medication to control cold symptoms and to suppress appetite for weight loss.[1] In 1972, an over-the-counter (OTC) Advisory Review Panel considered the safety and efficacy of PPA-containing nasal decongestant medications, leading, in 1976, to a recommendation that the FDA label these medications as “generally recognized as safe and effective.” Several years later, another Panel recommended that PPA-containing weight control products also be recognized as safe and effective.

Six years later, in 1982, another FDA panel recommended that PPA be considered safe and effective for appetite suppression in dieting. Two epidemiologic studies of PPA and hemorrhagic stroke (HS) were conducted in the 1980s. One study, by Hershel Jick and colleagues, presented as a letter to the editor, reported a relative risk of 0.58, with a 95% exact confidence interval of 0.03 – 2.9.[2] A year later, two researchers, reporting a study based upon Medicaid databases, found no significant association between HS and PPA.[3]

The FDA, however, did not approve a final monograph recognizing PPA’s “safe and effective” status, because of occasional reports of hemorrhagic stroke in patients who used PPA-containing medications, mostly young women who had used PPA appetite suppressants for dieting. In 1982, the FDA requested information on the effects of PPA on blood pressure, particularly with respect to weight-loss medications. The agency deferred a proposed 1985 final monograph because of the blood pressure issue.

The FDA deemed the data inadequate to answer its safety concerns. Congressional and agency hearings in the early 1990s amplified some public concern, but in 1990, the Director of Cardio-Renal Drug Products, at the Center for Drug Evaluation and Research, found several well-supported facts, based upon robust evidence. Blood pressure studies in humans showed a biphasic response. PPA initially causes blood pressure to rise above baseline (a pressor effect), and then to fall below baseline (a depressor effect). These blood pressure responses are dose-related, and diminish with repeated use. Patients develop tolerance to the pressor effects within a few hours. The Center concluded that at doses of 50 mg of PPA and below, the pressor effects of the medication are small, indeed smaller than normal daily variations in basal blood pressure. Humans develop tolerance to the pressor effects quickly, within the time frame of a single dose. The only time period in which even a theoretical risk might exist is within a few hours, or less, of a patient’s taking the first dose of PPA medication. Doses of 25 mg of immediate-release PPA could not realistically be considered to pose any absolute safety risk, and they afforded a reasonable safety margin.[4]

In 1991, Dr. Heidi Jolson, an FDA scientist, wrote that the agency’s spontaneous adverse event reporting system “suggested” that PPA appetite suppressants increased the risk of cerebrovascular accidents. A review of stroke data, including the adverse event reports, by epidemiology consultants failed to support a causal association between PPA and hemorrhagic stroke. The reviewers, however, acknowledged that the available data did not permit them to rule out a risk of HS. The FDA adopted the reviewers’ recommendation for a prospective, large case-control study designed to take into account the known physiological effects of PPA on blood pressure.[5]

What emerged from this regulatory indecision was a decision to conduct another epidemiologic study. In November 1992, a manufacturers’ group, now known as the Consumer Healthcare Products Association (CHPA), proposed a case-control study that would become known as the Hemorrhagic Stroke Project (HSP). In March 1993, the group submitted a proposed protocol, and a suggestion that the study be conducted by several researchers at Yale University. After feedback from the public and the Yale researchers, the group submitted a final protocol in April 1994. Both the researchers and the sponsors agreed to a scientific advisory group that would operate independently and oversee the study. The study began in September 1994. The FDA deferred action on a final monograph for PPA, and product marketing continued.

The Yale HSP authors delivered their final report on their case-control study to the FDA in May 2000.[6] The HSP was a case-control study with 702 HS cases and 1,376 controls, men and women, ages 18 to 49. The report authors concluded that “the results of the HSP suggest that PPA increases the risk for hemorrhagic stroke.”[7] The study had taken over five years to design, conduct, and analyze. In September 2000, the FDA’s Office of Post-Marketing Drug Risk Assessment released the results, with its own interpretation and a conclusion that dramatically exceeded the HSP authors’ own.[8] The FDA’s Non-Prescription Drug Advisory Committee then voted, on October 19, 2000, to recommend that PPA be reclassified as “unsafe.” The Committee’s meeting, however, was attended by several leading epidemiologists who pointed to important methodological problems and limitations in the design and execution of the HSP.[9]

In November 2000, the FDA’s Nonprescription Drugs Advisory Committee determined that there was a significant association between PPA and HS, and recommended that PPA not be considered safe for OTC use. The FDA never addressed causality; nor did it have to do so under governing law. The FDA’s actions led the drug companies voluntarily to withdraw PPA-containing products.

The December 21, 2000, issue of The New England Journal of Medicine featured a revised version of the HSP report as its lead article.[10] Under the journal’s guidelines for statistical reporting, the authors were required to present two-tailed p-values or confidence intervals. Results from the HSP Final Report looked considerably less impressive after the obtained significance probabilities were doubled. Only the finding for appetite suppressant use was branded an independent risk factor:

“The results suggest that phenylpropanolamine in appetite suppressants, and possibly in cough and cold remedies, is an independent risk factor for hemorrhagic stroke in women.”[11]

The HSP had multiple pre-specified aims, and several other statistical comparisons and analyses were added along the way. No statistical adjustment was made for these multiple comparisons, but their presence in the study must be considered. Perhaps that is why the authors merely suggest that PPA in appetite suppressants was an independent risk factor for HS in women. Under current statistical guidelines for the New England Journal of Medicine, this suggestion might require even further qualification and weakening.[12]

The HSP study faced difficult methodological issues. The detailed and robust identification of PPA’s blood pressure effects in humans focused attention on the crucial timing of an HS in relation to ingestion of a PPA medication. Any use, or any use within the last seven or 30 days, would be fairly irrelevant to the pathophysiology of a cerebral hemorrhage. The HSP authors settled on a definition of “first use” as any use of a PPA product within 24 hours, and no other uses in the previous two weeks.[13] Given the rapid onset of pressor and depressor effects, and the adaptation response, this definition of first use was generous and likely included many irrelevant exposed cases, but at least the definition attempted to incorporate the phenomena of short-lived effect and adaptation. The appetite suppressant association did not involve any “first use,” which makes the one “suggested” increased risk much less certain and relevant.

Under the alternative definition of exposure, in addition to “first use,” ingestion of the PPA-containing medication could have taken place on “the index day before the focal time and the preceding three calendar days.” Again, given the known pharmacokinetics and physiological effects of PPA, this three-day (plus) window seems doubtfully relevant.

All instances of “first use” occurred among men and women who used a cough or cold remedy, with an adjusted OR of 3.14, and a 95% confidence interval (CI) of 0.96–10.28, p = 0.06. The very wide confidence interval, in excess of an order of magnitude, reveals the fragility of the statistical inference. There were but 8 first use exposed stroke cases (out of 702), and 5 exposed controls (out of 1,376).

When this first use analysis is broken down between men and women, the result becomes even more fragile. Among men, there was only one first use exposure in 319 male HS patients, and one first use exposure in 626 controls, for an adjusted OR of 2.95, CI 0.15 – 59.59, and p = 0.48. Among women, there were 7 first use exposures among 383 female HS patients, and 4 first use exposures among 750 controls, with an adjusted OR of 3.13, CI 0.86 – 11.46, p = 0.08.

The small numbers of actual first exposure events speak loudly for the inconclusiveness and fragility of the study results, and the sensitivity of the results to any methodological deviations or irregularities. Of course, for the one “suggested” association for appetite suppressant use among women, the results were even more fragile. None of the appetite suppressant cases were “first use,” which raises serious questions whether anything meaningful was measured. There were six (non-first use) exposed among 383 female HS patients, with only a single exposed female control among 750. The authors presented an adjusted OR of 15.58, with a p-value of 0.02. The CI, however, spanned more than two orders of magnitude, 1.51 – 182.21, which makes the result well-nigh uninterpretable. One of six appetite suppressant cases was also a user of cough-cold remedies, and she was double counted in the study’s analyses. This double-counted case had a body-mass index of 19, which is certainly not overweight, and at the low end of normal.[14] The one appetite suppressant control was obese.

For the more expansive any-exposure analysis for use of PPA cough-cold medication, the results were significantly unimpressive. There were six exposed cases among 319 male HS cases, and 13 exposed controls, for an adjusted odds ratio of 0.62, CI 0.20 – 1.92, p = 0.41. Although not an inverse association, the sample results for men were incompatible with a hypothetical doubling of risk. For women, on the expansive exposure definition, there were 16 exposed cases among 383 female cases, with 19 exposed controls out of 750 female controls. The odds ratio for female PPA cough-cold medication use was 1.54, CI 0.76 – 3.14, p = 0.23.
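The fragility described above can be checked with back-of-the-envelope arithmetic. The sketch below is my own illustration, not the HSP’s analysis: it computes crude odds ratios with Woolf-style 95% confidence intervals from the cell counts reported in the text. The published figures were adjusted by logistic regression, so they differ somewhat, but the sparse cells and extravagantly wide intervals are visible either way.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% Woolf (log-normal) CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # std. error of log odds ratio
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))

# "First use" of any PPA product: 8 exposed of 702 cases, 5 of 1,376 controls
first_use = odds_ratio_ci(8, 5, 702 - 8, 1376 - 5)
# Appetite suppressants, women: 6 exposed of 383 cases, 1 of 750 controls
appetite = odds_ratio_ci(6, 1, 383 - 6, 750 - 1)
print("first use: OR %.2f (%.2f-%.2f)" % first_use)   # ~3.16 (1.03-9.70)
print("appetite:  OR %.2f (%.2f-%.2f)" % appetite)    # ~11.92 (1.43-99.4)
```

The crude appetite-suppressant odds ratio (about 12) sits below the reported adjusted figure of 15.58 because of the covariate adjustment, but with one exposed control the interval spans two orders of magnitude under any method of calculation.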

Aside from doubts whether the HSP measured meaningful exposures, the small number of exposed cases and controls present insuperable interpretative difficulties for the study. First, working with a case-control design and odds ratios, there should be some acknowledgment that odds ratios always exaggerate the observed association size compared with a relative risk.[15] Second, the authors knew that confounding would be an important consideration in evaluating any observed association. Known and suspected risk factors were consistently more prevalent among cases than controls.[16]

The HSP authors valiantly attempted to control for confounding in two ways. They selected controls by a technique known as random digit dialing, to find two controls for each case, matched on telephone exchange, sex, age, and race. The HSP authors, however, used imperfectly matched controls rather than lose the corresponding case from their study.[17] For other co-variates, the authors used multivariate logistic regression to provide odds ratios that were adjusted for potential confounding from the measured covariates. At least two of the co-variates, alcohol and cocaine use, involved potential legal or moral judgment in a study sample under age 50, which almost certainly would have skewed interview results.

An even more important threat to methodological validity was that key co-variates, such as smoking, alcohol use, hypertension, and cocaine use, were incorporated into the adjustment regression as dichotomous variables; body mass index was entered as a polychotomous variable. Monte Carlo simulation shows that categorizing a continuous variable in logistic regression inflates the rate of finding false positive associations.[18] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. Numerous authors have warned of the cost and danger of dichotomizing continuous variables, in losing information, statistical power, and reliability.[19] In the field of pharmaco-epidemiology, the bias created by dichotomization of a continuous variable is harmful from both the perspective of statistical estimation and hypothesis testing.[20] Readers will be misled into believing that a study has adjusted for important co-variates, by the false allure of a fully adjusted model.
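The mechanism behind that false-positive inflation is residual confounding: within each coarse category, the continuous confounder still varies, and exposure still tracks it. A minimal simulation can show the effect. The sketch below is my own illustration, not the simulation from the cited literature; it substitutes a Mantel-Haenszel stratified analysis for logistic regression, with an exposure that truly has no effect. Dichotomizing the confounder leaves a spuriously elevated odds ratio, while finer categorization pulls the estimate back toward the true null value of 1.0.

```python
import random

def mantel_haenszel_or(n=500_000, strata=2, seed=7):
    """Simulate a null exposure-outcome relation confounded by a
    continuous covariate z, then compute the Mantel-Haenszel OR after
    cutting z into `strata` equal-width categories. The true exposure
    OR is 1.0; any departure is residual confounding left over from
    coarse categorization. (All rates are hypothetical.)"""
    rng = random.Random(seed)
    # per-stratum 2x2 cells: [exposed cases, exposed controls,
    #                         unexposed cases, unexposed controls]
    cells = [[0, 0, 0, 0] for _ in range(strata)]
    for _ in range(n):
        z = rng.random()                         # continuous confounder
        exposed = rng.random() < 0.05 + 0.9 * z  # exposure driven by z
        case = rng.random() < 0.01 + 0.6 * z     # outcome driven by z only
        s = min(int(z * strata), strata - 1)
        if exposed and case:   cells[s][0] += 1
        elif exposed:          cells[s][1] += 1
        elif case:             cells[s][2] += 1
        else:                  cells[s][3] += 1
    num = sum(a * d / (a + b + c + d) for a, b, c, d in cells)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in cells)
    return num / den

or_dichotomized = mantel_haenszel_or(strata=2)   # split at the median
or_fine = mantel_haenszel_or(strata=10)          # finer categorization
print(f"MH OR, 2 strata:  {or_dichotomized:.3f}")  # noticeably above 1
print(f"MH OR, 10 strata: {or_fine:.3f}")          # close to 1
```

The same residual-confounding logic applies to a logistic regression with a dichotomized covariate; the stratified analysis is used here only because it keeps the sketch dependency-free.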

Finally, with respect to the use of logistic regression to control confounding and provide adjusted odds ratios, there is the problem of the small number of events. Although the overall sample size is adequate for logistic regression, cell sizes of one, or two, or three, raise serious questions about the use of large-sample statistical methods for analysis of the HSP results.[21]

A Surfeit of Sub-Groups

The study protocol identified three (really four or five) specific goals, to estimate the associations: (1) between PPA use and HS; (2) between HS and type of PPA use (cough-cold remedy or appetite suppression); and (3) in women, between PPA appetite suppressant use and HS, and between PPA first use and HS.[22]

With two different definitions of “exposure,” and some modifications added along the way, with two sexes, two different indications (cold remedy and appetite suppression), and with non-pre-specified analyses such as men’s cough-cold PPA use, there was ample opportunity to inflate the Type I error rate. As the authors of the HSP final report acknowledged, they were able to identify only 60 “exposed” cases and controls.[23] In the context of a large case-control study, the authors were able to identify some nominally statistically significant outcomes (PPA appetite suppressant use and HS), but these were based upon very small numbers (six exposed cases and one exposed control), which made the results very uncertain in view of the potential biases and confounding.

Design and Implementation Problems

Case-control studies always present some difficulty in obtaining controls that are similar to cases except for not having experienced the outcome of interest. As noted, controls were selected using “random digit dialing” in the same area code as the cases. The investigators were troubled by poor response rates from potential controls. They deviated from standard methodology for enrolling controls through random digit dialing by enrolling the first eligible control who agreed to participate, while failing to call back candidates who had asked to speak at another time.[24]

The exposure prevalence among controls was considerably lower than PPA-product marketing research would have suggested, which raises questions about under-reporting; low reported exposure rates among controls would inflate any observed odds ratios. And it seems eminently reasonable to predict that persons suffering from head colds or the flu might not answer their phones, or might request a call back. People who are obese might be reluctant to tell a stranger on the telephone that they are using a medication to suppress their appetite.

In the face of this obvious opportunity for selection bias, there was also ample room for recall bias. Cases were asked about medication use just before an unforgettable catastrophic event in their lives. Controls were asked about medication use before a day within the previous week. More controls than cases were interviewed by phone. Given the small number of exposed cases and controls, recall bias created by these differential circumstances, interview settings, and procedures was never excluded.

Lumpen Epidemiology: ICH vs. SAH

Every epidemiologic study or clinical trial has an exposure and outcome of interest, in a population of interest. The point is to compare exposed and unexposed persons, of relevant age, gender, and background, with comparable risk factors other than the exposure of interest, to determine if the exposure makes any difference in the rate of events of the outcome of interest.

Composite end points represent “lumping” together different individual end points for consideration as a single outcome. The validity of a composite end point depends upon assumptions that must be made when investigators design their study and write their protocol. After the data are collected and analyzed, those assumptions may or may not be supported.

Lumping may offer some methodological benefits, such as increasing statistical power or reducing sample size requirements. Standard epidemiologic practice, however, as reflected in numerous textbooks and methodology articles, requires the reporting of the individual constitutive end points, along with the composite result. Even when a composite end point is employed based upon a view that the component end points are sufficiently related, that view must itself ultimately be tested by showing that the individual end points are, in fact, concordant, with risk ratios in the same direction.

The medical literature contains many clear statements cautioning consumers of medical studies against misleading claims based upon composite end points. In 2004, the British Medical Journal published a useful paper, “Users’ guide to detecting misleading claims in clinical research reports.” One of the authors’ suggestions to readers was:

“Beware of composite endpoints.”[25]

The one methodological point on which virtually all writers agree is that authors should report the results for the component end points separately, to permit readers to evaluate the individual results.[26] A leading biostatistical methodologist, the late Douglas Altman, cautioned readers against assuming that the overall estimate of association can be interpreted for each individual end point, and advised authors to provide “[a] clear listing of the individual endpoints and the number of participants experiencing them” to permit a more meaningful interpretation of composite outcomes.[27]

The HSP authors used a composite of hemorrhagic strokes, which was composed of both intracerebral hemorrhages (ICH) and subarachnoid hemorrhages (SAH). In their New England Journal of Medicine article, the authors presented the composite end point, but not the risk ratios for the two individual end points. Before they published the article, one of the authors wrote his fellow authors to advise them that because ICH and SAH are very different medical phenomena, they should present the individual end points in their analysis.[28]

The HSP researchers eventually did publish an analysis of SAH and PPA use.[29] The authors identified 425 SAH cases, of which 312 met the criteria for aneurysmal SAH. They looked at many potential risk factors such as smoking (OR = 5.07), family history (OR = 3.1), marijuana (OR = 2.38), cocaine (OR = 24.97), hypertension (OR = 2.39), aspirin (OR = 1.24), alcohol (OR = 2.95), education, as well as PPA.

Only a bivariate analysis was presented for PPA, with an odds ratio of 1.15, p = 0.87. No confidence intervals were presented. The authors were a bit more forthcoming about the potential role of bias and confounding in this publication than they were in their earlier 2000 HSP paper. “Biases that might have affected this analysis of the HSP include selection and recall bias.”[30]

Judge Rothstein’s Rule 702 opinion reports that the “Defendants assert that this article demonstrates the lack of an association between PPA and SAHs resulting from the rupture of an aneurysm.”[31] If the defendants actually claimed a “demonstration” of “the lack of association,” then shame, and more shame, on them! First, the cited study provided only a bivariate analysis for PPA and SAH. The odds ratio of 1.15 pales in comparison with the risk ratios reported for many other common exposures. We can only speculate what would happen to the 1.15 if the PPA exposure were placed in a fully adjusted model for all important co-variates. Second, the p-value of 0.87 does not tell us that the 1.15 is unreal or due to chance. The HSP reported a 15% increase in the odds ratio, which is very compatible with no risk at all. Perhaps if the defendants had been more modest in their characterization, they would not have given the court the basis to find that “defendants distort and misinterpret the Stroke Article.”[32]

Rejecting the defendants’ characterization, the court drew upon an affidavit from plaintiffs’ expert witness, Kenneth Rothman, who explained that a p-value cannot provide evidence of lack of an effect.[33] A high p-value, with its corresponding 95% confidence interval that includes 1.0, can, however, show that the sample data are compatible with the null hypothesis. What Judge Rothstein missed, and the defendants may not have said effectively, is that the statistical analysis was a test of a hypothesis, and the test failed to allow us to reject the null hypothesis. The plaintiffs were left with an indeterminate analysis, from which they really could not honestly claim an association between PPA use and aneurysmal SAH.
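Rothman’s point can be made concrete by back-calculating the interval that the published numbers imply. Assuming a two-sided Wald test on the log odds ratio (an assumption; the paper reported only the odds ratio and p-value, with no confidence interval), the implied 95% confidence interval can be recovered from OR = 1.15 and p = 0.87:

```python
from math import exp, log
from statistics import NormalDist

# Back-of-envelope reconstruction (ASSUMES a two-sided Wald test on the log
# odds ratio; the paper reported only OR = 1.15, p = 0.87, with no CI).
or_hat, p = 1.15, 0.87
z975 = NormalDist().inv_cdf(0.975)       # ~1.96
z_obs = NormalDist().inv_cdf(1 - p / 2)  # z statistic implied by the p-value
se = log(or_hat) / z_obs                 # implied standard error of log OR
lo = exp(log(or_hat) - z975 * se)
hi = exp(log(or_hat) + z975 * se)
print(f"implied 95% CI: {lo:.2f} to {hi:.2f}")  # roughly 0.2 to 6
```

An interval running from roughly 0.22 to 6.1 is compatible with substantial protection, no effect, or a six-fold risk: an indeterminate result, not a demonstration of either an association or its absence.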

I Once Was Blind, But Now I See

The HSP protocol called for interviewers to be blinded to the study hypothesis, but this guard against bias was abandoned.[34]  The HSP report acknowledged that “[b]linding would have provided extra protection against unequal ascertainment of PPA exposure in case subjects compared with control subjects.”[35]

The study was conducted out of four sites, and at least one of the sites violated protocol by informing cases that they were participating in a study designed to evaluate PPA and HS.[36] The published article in the New England Journal of Medicine misleadingly claimed that study participants were blinded to its research hypothesis.[37] Although the plaintiffs’ expert witnesses tried to slough off this criticism, the lack of blinding among interviewers and study subjects amplifies recall biases, especially when study subjects and interviewers may have been reluctant to discuss fully several of the co-variate exposures, such as cocaine, marijuana, and alcohol use.[38]

No Causation At All

Scientists and the general population alike have been conditioned to view the controversy over tobacco smoking and lung cancer as a contrivance of the tobacco industry. What is lost in this conditioning is the context of Sir Austin Bradford Hill’s triumphant 1965 Royal Society of Medicine presidential address. Hill and his colleague Sir Richard Doll were not overly concerned with the tobacco industry, but rather with the important methodological criticisms posited by three leading statistical scientists, Joseph Berkson, Jerzy Neyman, and Sir Ronald Fisher. Hill and Doll’s success in showing that tobacco smoking causes lung cancer required a sufficient rebuttal to these critics. The 1965 speech is often cited for its articulation of nine factors to consider in evaluating an association, but its necessary condition is often overlooked. In his speech, Hill identified the situation that must exist before the nine factors come into play:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”[39]

The starting point, before the Bradford Hill nine factors come into play, requires a “clear-cut” association, which is “beyond what we would care to attribute to the play of chance.” What is a “clear-cut” association? The most reasonable interpretation of Bradford Hill is that the starting point is an association that is not the result of chance, bias, or confounding.

Looking at the state of the science after the HSP was published, there were two studies that failed to find any association between PPA and HS. The HSP authors “suggested” an association between PPA appetite suppressant and HS, but with six cases and one control, this was hardly beyond the play of chance. And none of the putative associations were “clear cut” in removing bias and confounding as an explanation for the observations.

And Then Litigation Cometh

A tsunami of state and federal cases followed the publication of the HSP study.[40] The Judicial Panel on Multi-district Litigation gave Judge Barbara Rothstein, in the Western District of Washington, responsibility for the pre-trial management of the federal PPA cases. Given the problems with the HSP, the defense unsurprisingly lodged Rule 702 challenges to plaintiffs’ expert witnesses’ opinions, and Rule 703 challenges to reliance upon the HSP.[41]

In June 2003, Judge Rothstein issued her decision on the defense motions. After reviewing a selective regulatory history of PPA, the court turned to epidemiology, and its statistical analysis.  Although misunderstanding of p-values and confidence intervals is endemic among the judiciary, the descriptions provided by Judge Rothstein portended a poor outcome:

“P-values measure the probability that the reported association was due to chance, while confidence intervals indicate the range of values within which the true odds ratio is likely to fall.”[42]

Both descriptions are seriously incorrect,[43] which is especially concerning given that Judge Rothstein would go on, in 2003, to become the director of the Federal Judicial Center, where she would oversee work on the Reference Manual on Scientific Evidence.

The MDL court also managed to make a mash out of the one-tailed test used in the HSP report. That report was designed to inform regulatory action, where actual conclusions of causation are not necessary. When the HSP authors submitted their paper to the New England Journal of Medicine, they of course had to comply with the standards of that journal, and they doubled their reported p-values to comply with the journal’s requirement of using a two-tailed test. Some key results of the HSP no longer had p-values below 5 percent, as the defense was keen to point out in its briefings.
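The arithmetic of the doubling is straightforward for any symmetric test statistic; the z value below is illustrative, not taken from the HSP report:

```python
from statistics import NormalDist

# For a symmetric test statistic, the two-sided p-value is double the
# one-sided p-value -- which is why results below p < 0.05 on a one-tailed
# test can cross the 5% line once a journal requires two-tailed tests.
# The z statistic here is ILLUSTRATIVE, not a value from the HSP report.
z = 1.75
p_one = 1 - NormalDist().cdf(z)  # one-sided: only an increased risk counts
p_two = 2 * p_one                # two-sided: an effect in either direction
print(f"one-sided p = {p_one:.3f}, two-sided p = {p_two:.3f}")
# prints: one-sided p = 0.040, two-sided p = 0.080
```

A result that clears the conventional 5% line one-sided (p = 0.040) fails it two-sided (p = 0.080), which is exactly what happened to some key HSP results on submission to the journal.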

From the sources it cited, the court clearly did not understand the issue, which was the need to control for random error. The court declared that it had found:

“that the HSP’s one-tailed statistical analysis complies with proper scientific methodology, and concludes that the difference in the expression of the HSP’s findings [and in the published article] falls far short of impugning the study’s reliability.”[44]

This finding ignores the very different contexts between regulatory action and causation in civil litigation. The court’s citation to an early version of the Reference Manual on Scientific Evidence further illustrates its confusion:

“Since most investigators of toxic substances are only interested in whether the agent increases the incidence of disease (as distinguished from providing protection from the disease), a one-tailed test is often viewed as appropriate.”

*****

“a rigid rule [requiring a two-tailed test] is not required if p-values and significance levels are used as clues rather than as mechanical rules for statistical proof.”[45]

In a sense, given the prevalence of advocacy epidemiology, many researchers are interested in only showing an increased risk. Nonetheless, the point of evaluating p-values is to assess random error involved in sampling of a population, and that sampling generates a rate of error even when the null hypothesis is assumed to be absolutely correct. Random error can go in either direction, resulting in risk ratios above or below 1.0. Indeed, the probability of observing a risk ratio of exactly 1.0, in a large study, is incredibly small even if the null hypothesis is correct. The risk ratio for men who had used a PPA product was below 1.0, which also recommends a two-tailed test. Trading on the confusion of regulatory and litigation findings, the court proceeded to mischaracterize the parties’ interests in designing the HSP as being limited to whether PPA increased the risk of stroke. In the MDL, the parties did not want “clues,” or help on what FDA policy should be; they wanted a test of the causal hypothesis.

In a footnote, the court pointed to testimony of Dr. Ralph Horwitz, one of the HSP investigators, who stated that “[a]ll parties involved in designing the HSP were interested solely in testing whether PPA increased the risk of stroke.” The parties, of course, were not designing the HSP to support litigation claims.[46] The court also cited, in this footnote, a then-recent case that found a one-tailed p-value inappropriate “where that analysis assumed the very fact in dispute.” The plaintiffs’ reliance upon the one-sided p-values in the unpublished HSP report did exactly that.[47] The court tried to excuse the failure to rule out random error by pointing to language in the published HSP article, where the authors stated that inconclusive findings raised “concern regarding safety.”[48]

In analyzing the defense challenge to the opinions based upon the HSP, Judge Rothstein committed both legal and logical fallacies. First, citing Professor David Faigman’s treatise for the proposition that epidemiology is widely accepted because the “general techniques are valid,” the court found that the HSP, and reliance upon it, was valid, despite the identified problems. The issue was not whether epidemiological techniques are valid, but whether the techniques used in the HSP were valid. The devilish details of the HSP in particular largely went ignored.[49] From a legal perspective, Judge Rothstein’s opinion can be seen to place a burden upon the defense to show invalidity, by invoking a presumption of validity. This shifting of the burden was then, and is now, contrary to the law.

Perhaps the most obvious dodge of the court’s gatekeeping responsibility came with the conclusory assertion that the “Defendants’ ex post facto dissection of the HSP fails to undermine its reliability. Scientific studies almost invariably contain flaws.”[50] It is sobering to consider that all human beings have flaws, and yet somehow we distinguish between sinners and saints, and between criminals and heroes. The court shirked its responsibility to look at the identified flaws to determine whether they threatened the HSP’s internal validity, as well as its external validity for the plaintiffs’ claims of hemorrhagic stroke in each of the many subgroups considered in the HSP, and for outcomes not considered at all, such as myocardial infarction and ischemic stroke. Given that there was but one key epidemiologic study relied upon to support the plaintiffs’ extravagant causal claims, the identified flaws might have been expected to lead to some epistemic humility.

The PPA MDL court exhibited a willingness to cherry-pick HSP results to support its low-grade gatekeeping. For instance, the court recited that “[b]ecause no men reported use of appetite suppressants and only two reported first use of a PPA-containing product, the investigators could not determine whether PPA posed an increased risk for hemorrhagic stroke in men.”[51] There was, of course, another definition of PPA exposure that yielded a total of 19 exposed men, about one-third of all exposed cases and controls. All exposed men had used OTC PPA cough-cold remedies: six cases with HS and 13 controls, with a reported odds ratio of 0.62 (95% C.I., 0.20–1.92; p = 0.41). Although the result for men was not statistically significant, the point estimate for the sample was a risk ratio below one, with a confidence interval that excluded a doubling of the risk. The number of exposed male HS cases was the same as the number of female HS appetite-suppressant cases, which somehow did not disturb the court.

Superficially, the PPA MDL court appeared to place great weight on the fact of peer-reviewed publication in a prestigious journal, by well-credentialed scientists and clinicians: “[t]he prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.”[52] Although Professor Susan Haack’s writings on law and science are often errant, her analysis of this kind of blind reliance on peer review is noteworthy:

“though peer-reviewed publication is now standard practice at scientific and medical journals, I doubt that many working scientists imagine that the fact that a work has been accepted for publication after peer review is any guarantee that it is good stuff, or that it’s not having been published necessarily undermines its value. The legal system, however, has come to invest considerable epistemic confidence in peer-reviewed publication  — perhaps for no better reason than that the law reviews are not peer-reviewed!”[53]

Ultimately, the PPA MDL court revealed that it was quite inattentive to the validity concerns of the HSP. Among the cases filed in the federal court were heart attack and ischemic stroke claims.  The HSP did not address those claims, and the MDL court was perfectly willing to green light the claims on the basis of case reports and expert witness hand waving about “plausibility.”  Not only was this reliance upon case reports plus biological plausibility against the weight of legal authority, it was against the weight of scientific opinion, as expressed by the HSP authors themselves:

“Although the case reports called attention to a possible association between the use of phenylpropanolamine and the risk of hemorrhagic stroke, the absence of control subjects meant that these studies could not produce evidence that meets the usual criteria for valid scientific inference”[54]

Since no epidemiology at all was necessary for the ischemic stroke and myocardial infarction claims, a deeply flawed epidemiologic study was even better than nothing. And peer review and prestige were merely window dressing.

The HSP study was subjected to much greater analysis in actual trial litigation. Before the MDL court concluded its abridged gatekeeping, the defense successfully sought the underlying data of the HSP. Plaintiffs’ counsel and the Yale investigators resisted and filed motions to quash the defense subpoenas. The MDL court denied the motions and required the parties to collaborate on redaction of medical records to be produced.[55]

In a law review article published a few years after the PPA Rule 702 decision, Judge Rothstein immodestly described the PPA MDL as a “model mass tort,” and without irony characterized herself as having taken “an aggressive role in determining the admissibility of scientific evidence [].”[56]

The MDL court’s PPA decision stands as a landmark of judicial incuriousness and credulity. The court conducted hearings and entertained extensive briefings on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon one epidemiologic study, the Yale Hemorrhagic Stroke Project (HSP). In the end, publication in a prestigious peer-reviewed journal proved to be a proxy for independent review and an excuse not to exercise critical judgment: “The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.” Id. at 1239 (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science). The admissibility challenges were denied.

Exuberant Praise for Judge Rothstein

In 2009, at an American Law Institute – American Bar Association continuing legal education seminar on expert witnesses and environmental litigation, Anthony Roisman presented on “Daubert & Its Progeny – Finding & Selecting Experts – Direct & Cross-Examination.” Roisman has been active in various plaintiff advocacy organizations, including serving as the head of the American Trial Lawyers’ Association Section on Toxic, Environmental & Pharmaceutical Torts (STEP). In his 2009 lecture, Roisman praised Judge Rothstein’s PPA Rule 702 decision as “the way Daubert should be interpreted.” More concerning was Roisman’s revelation that Judge Rothstein wrote the PPA decision “fresh from a seminar conducted by the Tellus Institute, which is an organization set up of scientists to try to bring some common sense to the courts’ interpretation of science, which is what is going on in a Daubert case.”[57]

Roisman’s endorsement of the PPA decision may have been purely result-oriented jurisprudence, but what of his enthusiasm for the “learning” that Judge Rothstein received fresh from the Tellus Institute? What exactly is, or was, the Tellus Institute?

In June 2003, the same month as Judge Rothstein’s PPA decision, the Tellus Institute supported a group known as Scientific Knowledge and Public Policy (SKAPP), in publishing an attack on the Daubert decision. The Tellus-SKAPP paper, “Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of,” appeared online in 2003.[58]

David Michaels, a plaintiffs’ expert in chemical exposure cases, and a founder of SKAPP, has typically described his organization as having been funded by the Common Benefit Trust, “a fund established pursuant to a court order in the Silicone Gel Breast Implant Liability litigation.”[59] What Michaels hides is that this “Trust” is nothing other than the common benefits fund set up in MDL 926, as in most MDLs, to permit plaintiffs’ counsel to retain and present expert witnesses in the common proceedings. In other words, it was the plaintiffs’ lawyers’ walking-around money. SKAPP’s sister organization, the Tellus Institute, is clearly aligned with SKAPP. Alas, Richard Clapp, who was a testifying expert witness for PPA plaintiffs, was an active member of the Tellus Institute at the time of the judicial educational seminar attended by Judge Rothstein.[60] Clapp is listed as a member of the planning committee responsible for preparing the anti-Daubert pamphlet. In 2005, as director of the Federal Judicial Center, Judge Rothstein attended another conference, the “Coronado Conference,” which was sponsored by SKAPP.[61]

Roisman’s revelation in 2009, after the dust had settled on the PPA litigation, may well put Judge Rothstein in the same category as Judge James Kelly, against whom the U.S. Court of Appeals for the Third Circuit issued a writ of mandamus for recusal. Judge Kelly was invited to attend a conference on asbestos medical issues, set up by Dr. Irving Selikoff with scientists who testified for plaintiffs’ counsel. The conference was funded by plaintiffs’ counsel. The co-conspirators, Selikoff and plaintiffs’ counsel, paid for Judge Kelly’s transportation and lodgings, without revealing the source of the funding.[62]

In the case of Selikoff and Motley’s effort to subvert the neutrality of Judge James M. Kelly in the school district asbestos litigation, and pervert the course of justice, the conspiracy was detected in time for a successful recusal effort. In the PPA litigation, there was no disclosure of the efforts by the anti-Daubert advocacy group, the Tellus Institute, to undermine the neutrality of a federal judge. 

Aftermath of Failed MDL Gatekeeping

Ultimately, the HSP study received much more careful analysis before juries. Although the cases that went to trial involved plaintiffs with catastrophic injuries, and a high-profile article in the New England Journal of Medicine, the jury verdicts were overwhelmingly in favor of the defense.[63]

In the first case that went to trial (but the second to verdict), the defense presented a thorough scientific critique of the HSP. The underlying data and medical records produced in response to a Rule 45 subpoena in the MDL allowed juries to see that the study investigators had deviated from the protocol in ways that increased the number of exposed cases, with the obvious result of increasing the reported odds ratios. Juries were ultimately much more curious about evidence and testimony on the reclassifications of exposure that drove up the odds ratios for PPA use than they were about the performance of linear logistic regressions.

The HSP investigators were well aware of the potential for medication use to occur after the onset of stroke symptoms (headache), which may have sent a person to the medicine chest for an OTC cold remedy. Case 71-0039 was just such a case, as shown by the medical records and the HSP investigators’ initial classification of the case. On dubious grounds, however, the study reclassified the time of stroke onset to after the PPA-medication use, in a way that the investigators knew increased their chances of finding an association.

The reclassification of Case 20-0092 was even more egregious. The patient was originally diagnosed as having experienced a transient ischemic attack (TIA), after a CT of the head showed no bleed. Case 20-0092 was not a case. For the TIA, the patient was given heparin, an appropriate therapy but one that is known to cause bleeding. The following day, MRI of the head revealed a HS. The HSP classified Case 20-0092 as a case.

In Case 18-0025, the patient experienced a headache in the morning, and took a PPA-medication (Contac) for relief. The stroke was already underway when the Contac was taken, but the HSP reversed the order of events.

Case 62-0094 presented an interesting medical history that included an event no one in the HSP considered including in the interview protocol. In addition to a history of heavy smoking, alcohol, cocaine, heroin, and marijuana use, and a history of seizure disorder, Case 62-0094 suffered a traumatic head injury immediately before developing a SAH. Treating physicians ascribed the SAH to traumatic injury, but understandably there were no controls that were identified with similar head injury within the exposure period.

Both sides of the PPA litigation accused the other of “hacking at the A cell,” but juries seemed to understand that the hacking had started before the paper was published.

In a case involving two plaintiffs, in Los Angeles, where the jury heard the details of how the HSP cases were analyzed, the jury returned two defense verdicts. In post-trial motions, plaintiffs’ counsel challenged the defendants’ reliance upon underlying data in the HSP, which went behind the peer-reviewed publication, and which showed that the peer review had failed to prevent serious errors. In essence, plaintiffs’ counsel claimed that the defense’s scrutiny of the underlying data and investigator misclassifications was itself not a “generally accepted” method, and thus inadmissible. The trial court rejected the plaintiffs’ claim and their request for a new trial, and spoke to the importance of challenging the superficial imprimatur of peer review of the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers.

********

Yale gets — Yale gets a big black eye on this.”[64]

Epidemiologist Charles Hennekens, who had been a consultant to PPA-medication manufacturers, published a critique of the HSP study in 2006. The critique included many of the criticisms that Hennekens, along with epidemiologists Lewis Kuller, Noel Weiss, and Brian Strom, had lodged at an October 2000 FDA meeting, before the HSP was published. Richard Clapp, Tellus Institute activist and expert witness for PPA plaintiffs, and Michael Williams, lawyer for PPA claimants, wrote a letter criticizing Hennekens.[65] David Michaels, an expert witness for plaintiffs in other chemical exposure cases, and a founder of SKAPP, which collaborated with the Tellus Institute on its anti-Daubert campaign, wrote a letter accusing Hennekens of “mercenary epidemiology,” for engaging in re-analysis of a published study. Michaels never complained about the litigation-inspired re-analyses put forward by plaintiffs’ witnesses in the Bendectin litigation. Plaintiffs’ lawyers and their expert witnesses had much to gain by starting the litigation and trying to expand its reach. Defense lawyers and their expert witnesses effectively put themselves out of business by shutting it down.[66]


[1] Rachel Gorodetsky, “Phenylpropanolamine,” in Philip Wexler, ed., 7 Encyclopedia of Toxicology 559 (4th ed. 2024).

[2] Hershel Jick, Pamela Aselton, and Judith R. Hunter,  “Phenylpropanolamine and Cerebral Hemorrhage,” 323 Lancet 1017 (1984).

[3] Robert R. O’Neill & Stephen W. Van de Carr, “A Case-Control Study of Adrenergic  Decongestants and Hemorrhagic CVA Using a Medicaid Data Base” m.s. (1985).

[4] Raymond Lipicky, Center for Drug Evaluation and Research, PPA, Safety Summary at 29 (Aug. 9, 1990).

[5] Center for Drug Evaluation and Research, US Food and Drug Administration, “Epidemiologic Review of Phenylpropanolamine Safety Issues” (April 30, 1991).

[6] Ralph I. Horwitz, Lawrence M. Brass, Walter N. Kernan, Catherine M. Viscoli, “Phenylpropanolamine & Risk of Hemorrhagic Stroke – Final Report of the Hemorrhagic Stroke Project” (May 10, 2000).

[7] Id. at 3, 26.

[8] Lois La Grenade & Parivash Nourjah, “Review of study protocol, final study report and raw data regarding the incidence of hemorrhagic stroke associated with the use of phenylpropanolamine,” Division of Drug Risk Assessment, Office of Post-Marketing Drug Risk Assessment (OPDRA) (Sept. 27, 2000). These authors concluded that the HSP report provided “compelling evidence of increased risk of hemorrhagic stroke in young people who use PPA-containing appetite suppressants. This finding, taken in association with evidence provided by spontaneous reports and case reports published in the medical literature leads us to recommend that these products should no longer be available for over the counter use.”

[9] Among those who voiced criticisms of the design, methods, and interpretation of the HSP study were Noel Weiss, Lewis Kuller, Brian Strom, and Janet Daling. Many of the criticisms would prove to be understated in the light of post-publication review.

[10] Walter N. Kernan, Catherine M. Viscoli, Lawrence M. Brass, Joseph P. Broderick, Thomas Brott, and Edward Feldmann, “Phenylpropanolamine and the risk of hemorrhagic stroke,” 343 New Engl. J. Med. 1826 (2000) [cited as Kernan].

[11] Kernan, supra note 10, at 1826 (emphasis added).

[12] David Harrington, Ralph B. D’Agostino, Sr., Constantine Gatsonis, Joseph W. Hogan, David J. Hunter, Sharon-Lise T. Normand, Jeffrey M. Drazen, and Mary Beth Hamel, “New Guidelines for Statistical Reporting in the Journal,” 381 New Engl. J. Med. 285 (2019).

[13] Kernan, supra note 10, at 1827.

[14] Transcript of Meeting on Safety Issues of Phenylpropanolamine (PPA) in Over-the-Counter Drug Products 117 (Oct. 19, 2000).

[15] See, e.g., Huw Talfryn Oakley Davies, Iain Kinloch Crombie, and Manouche Tavakoli, “When can odds ratios mislead?” 316 Brit. Med. J. 989 (1998); Thomas F. Monaghan, Syed N. Rahman, Christina W. Agudelo, Alan J. Wein, Jason M. Lazar, Karel Everaert, and Roger R. Dmochowski, “Foundational Statistical Principles in Medical Research: A Tutorial on Odds Ratios, Relative Risk, Absolute Risk, and Number Needed to Treat,” 18 Internat’l J. Envt’l Research & Public Health 5669 (2021).

[16] Kernan, supra note 10, at 1829, Table 2.

[17] Kernan, supra note 10, at 1827.

[18] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[19] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M Cumberland, Gabriela Czanner, Catey Bunce, Caroline J Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014).

[20] Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009).

[21] Peter Peduzzi, John Concato, Elizabeth Kemper, Theodore R. Holford, and Alvan R. Feinstein, “A simulation study of the number of events per variable in logistic regression analysis,” 49 J. Clin. Epidem. 1373 (1996).

[22] HSP Final Report at 5.

[23] HSP Final Report at 26.

[24] Byron G. Stier & Charles H. Hennekens, “Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project: A Reappraisal in the Context of Science, the Food and Drug Administration, and the Law,” 16 Ann. Epidem. 49, 50 (2006) [cited as Stier & Hennekens].

[25] Victor M. Montori, Roman Jaeschke, Holger J. Schünemann, Mohit Bhandari, Jan L Brozek, P. J. Devereaux, and Gordon H. Guyatt, “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004). 

[26] Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 1840 (2d ed. 2014) (47.5.8 Use of Composite Endpoints); Stuart J. Pocock, John J. V. McMurray, and Tim J. Collier, “Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials,” 66 J. Am. Coll. Cardiol. 2648, 2650-51 (2015) (“Interpret composite endpoints carefully.”); Kenneth F. Schulz & David A. Grimes, “Multiplicity in randomized trials I: endpoints and treatments,” 365 Lancet 1591, 1595 (2005).

[27] Eric Lim, Adam Brown, Adel Helmy, Shafi Mussa & Douglas Altman, “Composite Outcomes in Cardiovascular Research: A Survey of Randomized Trials,” 149 Ann. Intern. Med. 612 (2008).

[28] See, e.g., Thomas Brott email to Walter Kernan (Sept. 10, 2000).

[29] Joseph P. Broderick, Catherine M. Viscoli, Thomas Brott, Walter N. Kernan, Lawrence M. Brass, Edward Feldmann, Lewis B. Morgenstern, Janet Lee Wilterdink, and Ralph I. Horwitz, “Major Risk Factors for Aneurysmal Subarachnoid Hemorrhage in the Young Are Modifiable,” 34 Stroke 1375 (2003).

[30] Id. at 1379.

[31] Id. at 1243.

[32] Id. at 1243.

[33] Id., citing Rothman Affidavit ¶ 7; Kenneth J. Rothman, Epidemiology: An Introduction at 117 (2002).

[34] HSP Final Report at 26 (“HSP interviewers were not blinded to the case-control status of study subjects and some were aware of the study purpose.”); Walter Kernan Dep. at 473-74, In re PPA Prods. Liab. Litig., MDL 1407 (W.D. Wash.) (Sept. 19, 2002).

[35] HSP Final Report at 26.

[36] Stier & Hennekens, note 24 supra, at 51.

[37] Kernan, supra note 10, at 1831.

[38] See Christopher T. Robertson & Aaron S. Kesselheim, Blinding as a Solution to Bias – Strengthening Biomedical Science, Forensic Science, and the Law 53 (2016); Sandy Zabell, “The Virtues of Being Blind,” 29 Chance 32 (2016).

[39] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965).

[40] See Barbara J. Rothstein, Francis E. McGovern, and Sarah Jael Dion, “A Model Mass Tort: The PPA Experience,” 54 Drake L. Rev. 621 (2006); Linda A. Ash, Mary Ross Terry, and Daniel E. Clark, Matthew Bender Drug Product Liability § 15.86 PPA (2003).

[41] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230 (W.D. Wash. 2003).

[42] Id. at 1236 n.1.

[43] Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers 171, 173-74 (3rd ed. 2015). See also Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

[44] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1241 (W.D. Wash. 2003).

[45] Id. (citing Reference Manual at 126-27, 358 n. 69). The court did not identify which edition of the Manual it was citing.

[46] Id. at n.9, citing deposition of Ralph Horowitz [sic].

[47] Id., citing Good v. Fluor Daniel Corp., 222 F.Supp. 2d 1236, 1242-43 (E.D. Wash. 2002).

[48] Id. at 1241, citing Kernan at 183.

[49] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1239 (W.D. Wash. 2003) (citing 2 Modern Scientific Evidence: The Law and Science of Expert Testimony § 28-1.1, at 302-03 (David L. Faigman et al., eds., 1997) (“Epidemiologic studies have been well received by courts trying mass tort suits. Well-conducted studies are uniformly admitted. The widespread acceptance of epidemiology is based in large part on the belief that the general techniques are valid.”)).

[50] Id. at 1240. The court cited the Reference Manual on Scientific Evidence 337 (2d ed. 2000), for this universal attribution of flaws to epidemiology studies (“It is important to recognize that most studies have flaws. Some flaws are inevitable given the limits of technology and resources.”) Of course, when technology and resources are limited, expert witnesses are permitted to say “I cannot say.” The PPA MDL court also cited another MDL court, which declared that “there is no such thing as a perfect epidemiological study.” In re Orthopedic Bone Screw Prods. Liab. Litig., MDL No. 1014, 1997 WL 230818, at *8-9 (E.D.Pa. May 5, 1997).

[51] Id. at 1236.

[52] Id. at 1239.

[53] Susan Haack, “Irreconcilable Differences? The Troubled Marriage of Science and Law,” 72 Law & Contemp. Problems 1, 19 (2009) (internal citations omitted). It may be telling that Haack has come to publish much of her analysis in law reviews. See Nathan Schachtman, “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense,” Tortini (Aug. 14, 2011).

[54] Kernan, supra note 10, at 1831.

[55] In re Phenylpropanolamine Prods. Liab. Litig., MDL 1407, Order re Motion to Quash Subpoenas re Yale Study’s Hospital Records (W.D. Wash. Aug. 16, 2002). Two of the HSP investigators wrote an article, over a decade later, to complain about litigation efforts to obtain data from ongoing studies. They did not mention the PPA case. Walter N. Kernan, Catherine M. Viscoli, and Mathew C. Varughese, “Litigation Seeking Access to Data From Ongoing Clinical Trials: A Threat to Clinical Research,” 174 J. Am. Med. Ass’n Intern. Med. 1502 (2014).

[56] Barbara J. Rothstein, Francis E. McGovern, and Sarah Jael Dion, “A Model Mass Tort: The PPA Experience,” 54 Drake L. Rev. 621, 638 (2006).

[57] Anthony Roisman, “Daubert & Its Progeny – Finding & Selecting Experts – Direct & Cross-Examination,” ALI-ABA (2009). Roisman’s remarks about the role of the Tellus Institute begin just after minute 8 of the recording, available from the American Law Institute and the author.

[58] See “Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of,” a publication of the Project on Scientific Knowledge and Public Policy, coordinated by the Tellus Institute (2003).

[59] See, e.g., David Michaels, Doubt is Their Product: How Industry’s War on Science Threatens Your Health 267 (2008).

[60] See Richard W. Clapp & David Ozonoff, “Environment and Health: Vital Intersection or Contested Territory?” 30 Am. J. L. & Med. 189, 189 (2004) (“This Article also benefited from discussions with colleagues in the project on Scientific Knowledge and Public Policy at Tellus Institute, in Boston, Massachusetts.”).

[61] See Barbara Rothstein, “Bringing Science to Law,” 95 Am. J. Pub. Health S1 (2005) (“The Coronado Conference brought scientists and judges together to consider these and other tensions that arise when science is introduced in courts.”).

[62] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992),” 38 Villanova L. Rev. 1219 (1993); Bruce A. Green, “May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy,” 29 Fordham Urb. L.J. 941, 996-98 (2002).

[63] Alison Frankel, “A Line in the Sand,” The Am. Lawyer – Litigation (2005); Alison Frankel, “The Mass Tort Bonanza That Wasn’t,” The Am. Lawyer (Jan. 6, 2006).

[64] O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46-47 (March 18, 2004) (Hon. Anthony J. Mohr), aff’d sub nom. O’Neill v. Novartis Consumer Health, Inc., 147 Cal. App. 4th 1388, 55 Cal. Rptr. 3d 551, 558-61 (2007).

[65] Richard Clapp & Michael L. Williams, “Regarding ‘Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project,’” 16 Ann. Epidem. 580 (2006).

[66] David Michaels, “Regarding ‘Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project’: Mercenary Epidemiology – Data Reanalysis and Reinterpretation for Sponsors with Financial Interest in the Outcome,” 16 Ann. Epidem. 583 (2006). Hennekens responded to these letters. Stier & Hennekens, note 24, supra.

Collegium Ramazzini & Its Fellows – The Lobby

November 19th, 2023

Back in 1997, Francis Douglas Kelly Liddell, a real scientist in the area of asbestos and disease, had had enough of the insinuations, slanders, and bad science from the minions of Irving John Selikoff.[1] Liddell broke with the norms of science and called out his detractors for what they were doing:

 “[A]n anti-asbestos lobby, based in the Mount Sinai School of Medicine of the City University of New York, promoted the fiction that asbestos was an all-pervading menace, and trumped up a number of asbestos myths for widespread dissemination, through media eager for bad news.”[2]

What Liddell did not realize is that the Lobby had become institutionalized in the form of an organization, the Collegium Ramazzini, started by Selikoff under false pretenses.[3] Although the Collegium operates with some degree of secrecy, the open and sketchy conduct of its members suggests that we could use the terms “the Lobby” and “the Collegium Ramazzini” interchangeably.

Ramazzini founder Irving Selikoff had an unfortunate track record of perverting the course of justice. Selikoff conspired with Ron Motley and others to bend judges with active asbestos litigation dockets by inviting them to a one-sided conference on asbestos science, with their travel and lodging paid. Presenters included key expert witnesses for plaintiffs; defense expert witnesses were conspicuously not invited to the conference. In his invitation to this ex parte soirée, Selikoff failed to mention that the funding came from plaintiffs’ counsel. Selikoff’s shenanigans led to the humiliation and disqualification of James M. Kelly,[4] the federal judge in charge of the asbestos school property damage litigation.

Neither Selikoff nor the co-conspirator counsel for plaintiffs ever apologized for their ruse. The disqualification did lead to a belated disclosure and mea culpa from the late Judge Jack Weinstein. Because of a trial in progress, Judge Weinstein did not attend the plaintiffs’ dog-and-pony show, Selikoff’s so-called “Third Wave” conference, but Judge Weinstein and a New York state trial judge, Justice Helen Freedman, attended an ex parte private luncheon meeting with Dr. Selikoff. Here is how Judge Weinstein described the event:

“But what I did may have been even worse [than Judge Kelly’s conduct that led to his disqualification]. A state judge and I were attempting to settle large numbers of asbestos cases. We had a private meeting with Dr. Irwin [sic] J. Selikoff at his hospital office to discuss the nature of his research. He had never testified and would never testify. Nevertheless, I now think that it was a mistake not to have informed all counsel in advance and, perhaps, to have had a court reporter present and to have put that meeting on the record.”[5]

Judge Weinstein’s false statement that Selikoff “had never testified”[6] not only reflects an incredible and uncharacteristic naiveté in a distinguished evidence law scholar; it also appeared in Judicature, a journal that was, and is, widely circulated to state and federal judges. The source of the lie appears to have been Selikoff himself, in the ethically dodgy ex parte meeting with judges actively presiding over asbestos personal injury cases.

The point apparently weighed on Judge Weinstein’s conscience. He repeated his mea culpa almost verbatim, along with the false statement about Selikoff’s having never testified, in a law review article in 1994, and then incorporated the misrepresentation into a full-length book.[7] I have no doubt that Judge Weinstein did not intend to mislead anyone; like many others, he had been duped by Selikoff’s deception.

There is no evidence that Selikoff was acting as an authorized agent of the Collegium Ramazzini in conspiring to influence trial judges, or in lying to Judge Weinstein and Justice Freedman, but Selikoff was the founder of the Collegium, and his conduct seems to have set a norm for the organization. Furthermore, the Third-Wave conference was sponsored by the Collegium. In 1993, two years after the Third Wave misconduct, the Collegium created an award in Selikoff’s name.[8] Perhaps the award was the Collegium’s ratification of his misdeeds. Two of the recipients, Stephen M. Levin and Yasunosuke Suzuki, were “regulars” as expert witnesses for plaintiffs in asbestos litigation. The Selikoff Award is funded by the Irving J. Selikoff Endowment of the Collegium Ramazzini. The Collegium can fairly be said to be the continuation of Selikoff’s work in the form of an advocacy organization.

Selikoff’s Third-Wave conference and his lies to two key judges would not be the last of the efforts to pervert the course of justice. With the Selikoff imprimatur and template in hand, Fellows of the Collegium have carried on, by carrying on. Collegium Fellows Carl F. Cranor and Martyn T. Smith served as partisan paid expert witnesses in the notorious Milward case.[9]

After the trial court excluded the proffered opinions of Cranor and Smith, plaintiff appealed, with the help of an amicus brief filed by The Council for Education and Research on Toxics (CERT). The plaintiffs’ counsel, Cranor and Smith, CERT, and counsel for CERT all failed to disclose that CERT was founded by the two witnesses, Cranor and Smith, whose exclusion was at the heart of the appeal.[10] Among the 27 signatories to the CERT amicus brief, a majority (15) were fellows of the Collegium Ramazzini. Many of the signatories, whether or not members or fellows of the Collegium, were frequent testifiers for plaintiffs’ counsel.

None raised any ethical qualms about the obvious conflict of interest arising from the threat that scrupulous gatekeeping posed to their testimonial income, or about their (witting or unwitting) participation in CERT’s conspiracy to pervert the course of justice.[11]

The CERT amici signatories are listed below. The bold names are identified as Collegium fellows at its current website. Others may have been members but not fellows. The asterisks indicate those who have testified in tort litigation; please accept my apologies if I missed anyone.

Nicholas A. Ashford,
Nachman Brautbar,*
David C. Christiani,*
Richard W. Clapp,*
James Dahlgren,*
Devra Lee Davis,
Malin Roy Dollinger,*
Brian G. Durie,
David A. Eastmond,
Arthur L. Frank,*
Frank H. Gardner,
Peter L. Greenberg,
Robert J. Harrison,
Peter F. Infante,*
Philip J. Landrigan,
Barry S. Levy,*
Melissa A. McDiarmid,
Myron Mehlman,
Ronald L. Melnick,*
Mark Nicas,*
David Ozonoff,*
Stephen M. Rappaport,
David Rosner,*
Allan H. Smith,*
Daniel Thau Teitelbaum,*
Janet Weiss,* and
Luoping Zhang

This D & C (deception and charade) was repeated on other occasions when Collegium fellows and members signed amicus briefs without any disclosure of conflicts of interest. In Rost v. Ford Motor Co.,[12] for instance, an amicus brief was filed by “58 physicians and scientists,” many of whom were Collegium fellows.[13]

Ramazzini Fellows David Michaels and Celeste Monforton were both involved in the notorious Project on Scientific Knowledge and Public Policy (SKAPP) organization, which consistently misrepresented its funding from plaintiffs’ lawyers as having come from a “court fund.”[14]

Despite Selikoff’s palaver about how the Collegium would seek consensus and open discussion, it has become an echo chamber for the rent-seeking mass-tort lawsuit industry, for the hyperbolic critics of any industry position, and for the credulous shills for any pro-labor position. In its statement about membership, the Collegium warns that

“Persons who have any type of links which may compromise the authenticity of their commitment to the mission of the Collegium Ramazzini do not qualify for Fellowship. Likewise, persons who have any conflict of interest that may negatively affect his or her impartiality as a researcher should not be nominated for Fellowship.”

This exclusionary criterion ensures a lack of viewpoint diversity, and makes the Collegium an effective proxy for the lawsuit industry in the United States.

Among the Collegium’s current and past fellows, we can find many familiar names from the annals of tort litigation, all expert witnesses for plaintiffs, and virtually always only for plaintiffs. After over 40 years at the bar, I do not recognize a single name of anyone who has ever testified on behalf of a defendant in a tort case.

Henry A. Anderson

Barry I. Castleman      

Martin Cherniack

David Christiani 

Arthur Frank

Lennart Hardell 

David G. Hoel

Stephen M. Levin

Ronald L. Melnick

David Michaels

Celeste Monforton

Albert Miller

Nachman Brautbar

Christopher Portier

Steven B. Markowitz

Christine Oliver                 

Colin L. Soskolne

Yasunosuke Suzuki

Daniel Thau Teitelbaum

Laura Welch


[1] “The Lobby – Cut on the Bias” (July 6, 2020).

[2] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[3] See “The Dodgy Origins of the Collegium Ramazzini” (Nov. 15, 2023).

[4] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992),” 38 Villanova L. Rev. 1219 (1993); Bruce A. Green, “May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy,” 29 Fordham Urb. L.J. 941, 996-98 (2002).

[5] Jack B. Weinstein, “Learning, Speaking, and Acting: What Are the Limits for Judges?” 77 Judicature 322, 326 (May-June 1994) (emphasis added).

[6] “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010).

[7] See Jack B. Weinstein, “Limits on Judges’ Learning, Speaking and Acting – Part I – Tentative First Thoughts: How May Judges Learn?” 36 Ariz. L. Rev. 539, 560 (1994) (“He [Selikoff] had never testified and would never testify.”); Jack B. Weinstein, Individual Justice in Mass Tort Litigation: The Effect of Class Actions, Consolidations, and other Multi-Party Devices 117 (1995) (“A court should not coerce independent eminent scientists, such as the late Dr. Irving Selikoff, to testify if, like he, they prefer to publish their results only in scientific journals.”).

[8] See also “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011).

[9] Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp. 2d 137, 140 (D. Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).

[10] See “The Council for Education and Research on Toxics” (July 9, 2013).

[11] “Carl Cranor’s Inference to the Best Explanation” (Dec. 12, 2021).

[12] Rost v. Ford Motor Co., 151 A.3d 1032, 1052 (Pa. 2016).

[13] “The Amicus Curious Brief” (Jan. 4, 2018).

[14] See, e.g., “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).