TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

How Science Works in the New Reference Manual on Scientific Evidence

March 12th, 2026

The Second and Third Editions of the Reference Manual on Scientific Evidence contained a chapter, “How Science Works,” by Professor David Goodstein. This chapter ambitiously set out to cover philosophy and sociology of science to help orient judges as strangers in a strange land. Goodstein’s chapter had been a useful introduction to scientific methodology, and it countered some of the antic ideas seen in some judicial opinions, as well as in some other chapters of the Manual. Goodstein brought a good deal of experience and expertise to the task. He was a distinguished professor of physics and Vice Provost at the California Institute of Technology, and he had written engagingly about scientific discovery and the pathology of science.[1] Sadly, Goodstein died in April 2024. His death may have had some role in the delayed publication of the Fourth Edition of the Manual,[2] and the improvident replacement of his chapter with a new chapter written by authors less articulate about how science works.

The substitute chapter on “How Science Works” was written by two authors considerably less accomplished than the late Professor Goodstein.[3] Michael Weisberg is a professor of philosophy at the University of Pennsylvania, where he is the deputy director of Perry World House, which “analyzes global policy challenges through the realms of climate, democracy, global justice and human rights, and security.” The connection with Perry World House may explain the new chapter’s heavy reliance upon the development of the chlorofluorocarbon (CFC) connection to ozone layer depletion as an exemplar of scientific discovery and knowledge. The University of Pennsylvania webpage describes Weisberg as “educat[ing] the next generation of environmental leaders in the classroom, at the negotiating table, and in the field, ensuring that their voices have maximal impact on addressing the climate crisis.”[4] So we have a philosopher of advocacy science, as it were. Some readers might think those credentials are not optimal for preparing a nuts-and-bolts description of how science works. Reading sections of the new chapter will not diminish their concerns.

Joining Weisberg on this new version of “How Science Works” is Anastasia Thanukos, who works at the University of California Museum of Paleontology. Thanukos has a master’s degree in integrative biology, and a doctorate in science education.[5]

The new “method” chapter has some virtues. As Goodstein’s chapter did, the new chapter puts peer review into a realistic perspective that should keep judges from being snookered into admitting weak or bogus evidence merely because it was published in a peer-reviewed journal.[6] The authors should have gone much farther in pointing out that the rise of predatory and pay-to-play journals, as well as journals controlled by advocacy groups, has undermined much of the publishing model of modern science.

Weisberg and Thanukos discuss “expertise” in a way that is interesting but irrelevant to legal cases. They seem blithely unaware that the standard for qualifying an expert witness is extremely low. Who will disabuse them when they argue that “[i]t is worth evaluating the closeness of a scientist’s disciplinary expertise to a scientific topic on which expert testimony is delivered”?[7] In what emerges as a consistent pattern of giving anti-manufacturing industry examples, the authors point to Richard Scorer as an accomplished scientist who had no specific expertise in CFC ozone depletion. Notwithstanding the lack of specific expertise, an industry-backed group promoted Scorer’s views that criticized the CFC-ozone depletion hypothesis.[8] Citing Naomi Oreskes, the new Manual chapter states that “[t]he problem of scientists with legitimate expertise in one field weighing in on a scientific question outside their area of expertise is a pernicious one that has affected public acceptance of science and policy on issues such as climate change and tobacco exposure.”[9] Later, when Weisberg and Thanukos discuss the Milward case, they miss the pernicious influence that flowed from allowing Martyn Smith, a toxicologist, to give methodologically muddled opinion testimony on epidemiology. Pernicious is where you find it, and the authors of the new chapter find virtually all untoward instances of poor scientific method and conduct to originate from manufacturing industry.

Weisberg and Thanukos introduce a discussion of the “replication crisis,” a phrase and concept absent from the third edition of the Reference Manual.[10] The authors express some skepticism that there is an actual crisis over replication,[11] but their focus on climate science may mean that they are simply blinded by groupthink in that discipline. Their discussion of retractions omits the steep rise in retraction rates in most scientific disciplines,[12] and the authors ignore the proliferation of poor quality journals. Positively, the authors introduce a discussion of study preregistration, a notion absent from the third edition of the Manual, and they explain that such preregistration may serve as a bulwark against data dredging and post hoc analyses.[13] Negatively, the authors ignore how frequently preregistered protocols are not used, or are used and then violated.

Weisberg and Thanukos appropriately ignore “weight of the evidence” (WOE) and “inference to the best explanation” (IBE). Readers might (mistakenly) think that the new chapter implicitly rejects WOE, as put forth by Carl Cranor and credulously accepted by the First Circuit in Milward, when the chapter authors insist that 

“the judge’s task requires a deeper examination of the available evidence and methods by which it was arrived at, as well as an assessment of how the community of experts in this area has evaluated or would evaluate the evidence and reasoning in question.”[14]

Contrary to the Milward decision from 2011, the new authors are not shy about stating the obvious: there is good science, and there is bad science. Not all “judgment” about causality is acceptable and fit for submission to juries.[15] Given the judicial resistance to Rule 702, the obvious here requires stating. Weisberg and Thanukos acknowledge that some scientific judgment is unreliable or invalid because it was based upon work that was not carried out in accordance with current standards for scientific investigation and inference.[16] It should not surprise anyone that most of their examples of bad science are the product of manufacturing industry; the authors are oblivious to bad science sponsored by the lawsuit industry or by non-governmental advocacy organizations (NGOs).

Weisberg and Thanukos frame scientific disagreements and debates as governed by both data and ethical norms. Science is not infinitely contestable. There are identifiable norms, including a norm that scientists should “seek relevant information,” and “scrutinize ideas and evidence.”[17] Contrary to Milward’s standard of judicial abstention and credulity in the face of dodgy causal claims, these authors state what should be obvious, that scientific scrutiny involves, among other things, “an evaluation of methods, considering potential biases and oversights.”[18]

The chapter’s authors, non-lawyers, get closer to the heart of the error in Milward’s abstention doctrine with their recognition of what should have been obvious to the authors of the law chapter (Richter & Capra):

“When research relevant to a trial has not yet been scrutinized by a community with the appropriate technical expertise, a judge may be placed in the position of providing or requesting this scrutiny.”[19]  

Rather than some vague, subjective, and content-free WOE standard, Weisberg and Thanukos urge scientists, and by implication judges as well, to engage in serious efforts to “identify and avoid bias” and abide by ethical guidelines.[20] In other (my) words, the new authors agree that there is a standard of care reflected in the norms of science, and consequently there can be deviations from that standard. For Weisberg and Thanukos, compliance with the normative structure of scientific investigations is at the heart of building up accurate and predictive conclusions from data.[21] As part of their communitarian and normative conception of the scientific process, the authors appear to accept the reality and necessity for judges to act as gatekeepers.[22]

And while this recognition of standards and the need to police against deviations from standards is commendable, Weisberg and Thanukos proceed to give an abridgment of scientific method and process that is distorted and erroneous. They steadfastly ignore the concept of hierarchy of evidence, and thus provide illegitimate cover for levelers of evidence. In discussing randomized controlled trials, for instance, they note that such trials are often taken as “the gold standard,” but then they counter, without citation, support, or argument, that such trials “are just one line of evidence among many.”[23] The authors elide discussion and reconciliation of when that “just one line of evidence” conflicts with observational studies.

Notwithstanding their helpful comments about the need to evaluate studies for bias and other errors, these authors enter into the Milward controversy with an observation that assessing many lines of evidence is required, can be difficult for courts, and has led to “controversy.” Citing papers, including one by the late Margaret Berger from the notorious, lawsuit industry SKAPP-funded Coronado Conference, Weisberg and Thanukos float the observation that:

“In science, the available evidence (some of which may come from other research programs not designed to test the hypothesis under consideration) is evaluated as a body, along with the strengths, weaknesses, and caveats relating to each type of data, an approach which, some scholars have argued, the judiciary has not always followed.”98[24]

This claim that the available evidence is evaluated as “a body” is presented as a fact about how science works, without any citation or argument. Several comments are in order. First, the claim is at odds with the authors’ own statements that scientific norms require evaluating each study for biases and other disqualifying flaws. Second, the claim is at odds with the authors’ own reference to systematic reviews and meta-analyses,[25] which are governed by protocols with inclusionary and exclusionary criteria for individual studies, and which require consideration of individual study validity before it enters the “body” of evidence that is quantitatively or qualitatively evaluated. In the authors’ words, “authors delineate both the criteria that studies must meet for inclusion in the review and the methods that will be used to assess the studies.”[26] The Milward case involved an expert witness who had proffered the very opposite of a systematic review in the form of post hoc rejiggering of studies and their data to fit a pre-conceived litigation goal. In the context of addressing the replication crisis, Weisberg and Thanukos correctly observe “peer review alone cannot ensure that the conclusions of published studies are actually correct, highlighting the responsibility judges bear in evaluating the validity of the methodologies that contributed to a particular piece of research.”[27] Of course, the Milward case involved a hired expert witness whose unprincipled re-analysis of studies was never peer reviewed or published.

Third, the authors could easily have found additional support for the contrary proposition that individual studies must be evaluated before being considered as part of the entire evidentiary display. The IARC Preamble, which roughly describes how that agency arrives at its so-called hazard classifications of human carcinogenicity, specifies that individual studies within each of three streams of evidence are evaluated for validity and soundness before contributing to a sub-conclusion with respect to (1) epidemiology, (2) toxicology, and (3) mechanistic lines of evidence.[28] Each of those three lines of evidence is adjudged “sufficient,” “limited,” or “inadequate,” by specialists in the three respective areas, before an overall evaluation is reached. There is much that is objectionable in the IARC working group procedures, but this division of labor and the need to consider disparate lines of evidence and studies within each line separately before attempting a synthesis, is present in all systematic review methodology. The suggestion from Weisberg and Thanukos that “the available evidence” in science is “evaluated as a body” is not only unsupported, but it is demonstrably false and misleading.

This claim about holistic evaluation is a fairly transparent but failed attempt to support a claim made in the chapter on the admissibility of expert witness evidence by Liesa Richter and Daniel Capra, who present an exposition of the notorious Milward case, without criticism, in a way that suggests that the case represents appropriate judicial gatekeeping under Rule 702, and that the case is consistent with scientific norms.[29] The chapter on how science works, after having stated a false claim about scientific methodology for synthesizing and integrating disparate lines of evidence, attempts to provide a gloss on the similar and equally benighted claim of Richter and Capra, in footnote 98:

“98. Some scholars have raised concerns that the courts have on occasion unfairly dismissed numerous individual lines of evidence as being flawed or insufficiently conclusive and concluded that evidence is lacking, when in fact the body of evidence, taken as a whole, points to a clear conclusion. For more, see discussion of Milward v. Acuity Specialty Products Group, Inc.; see also Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, in this manual; Berger 2005, supra note 97; and Steve C. Gold, A Fitting Vision of Science for the Courtroom, 3 Wake Forest J.L. & Pol’y 1 (2013).”

Some “scholars” have indeed said such things in their more unscholarly moments; other scholars have criticized Milward, but they are not cited in this new methods chapter. The footnote is accurate, but highly misleading by omission. The First Circuit in Milward also said as much, also without support or justification, and Richter and Capra, in their chapter of the Manual, fourth edition, parrot the Milward case. Weisberg and Thanukos cite two articles, by Margaret Berger and by Steven Gold, both law professors, not scientists, and both ideologically hostile to Rule 702 gatekeeping. The Berger article was from a lawsuit industry SKAPP-funded symposium known as the Coronado Conference, and the Gold paper comes out of a symposium sponsored by the lawsuit industry itself and the Center for Progressive Reform, an advocacy NGO to which one of Mr. Milward’s expert witnesses, Carl Cranor, belongs. So the authors of the new science methodology chapter failed to cite any scientific source, but cited papers by lawyers captured by the lawsuit industry, and a single (infamous) decision that ignored Rules 702 and 703, as well as the extensive literature on systematic reviews. Weisberg and Thanukos could have cited many sources that contradicted their claim, and the claim of the lawsuit industry sponsored lawyers, but they did not. This is what biased and subversive scholarship looks like.

Funding Bias – The New McCarthyism

The selective citation to articles sponsored by the lawsuit industry is ironic in the context of what Weisberg and Thanukos have to say elsewhere about the “funding effect.” Some of what the authors say about personal bias is almost reasonable. For instance, they suggest that funding source is a “valid consideration” in evaluating methodologies and conclusions of expert testimony, and presumably of published studies as well, but not a sufficient reason to exclude such testimony or reliance.[30] Interestingly, these authors ignored the funding and the ideological interests of the symposia they cited in support of the repudiated Milward abstention doctrine.

Over three decades ago, Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), wrote his protest against the obsession with funding in an article that should have been cited in the new chapter, for balance. Rothman described the fixation on funding as the “new McCarthyism in science,” which manifested as intolerance toward industry-sponsored studies, and strict scrutiny of “conflict-of-interest” (COI) disclosures.[31] The new McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying non-disclosure of COIs from scientists who have positional conflicts, or who are aligned with advocacy groups or with the lawsuit industry.

This asymmetrical standard for adjudging conflicts is on full display in the Weisberg and Thanukos chapter, when they claim that “in pharmaceuticals, there is a strong tendency for industry-sponsored trials to favor the industry’s product.”[32] The chapter authors, and their cited source, ignore the context in which pharmaceutical industry scientists publish clinical trial results. A successful clinical trial that shows efficacy with minimal adverse events is the result of years of prior research, including phase I and II trials, and preclinical testing. If efficacy is not shown, or unreasonable harm appears, at any of those earlier stages, the phase III trial is never done and so never published. If the medication is never licensed, the phase III trial will generally not be published. The selection effects are obvious and overwhelming in determining that the published results of phase III trials will favor the sponsor. The “failed” phase III trial may result in a securities class action against the pharmaceutical company. In the realm of observational studies, some work commissioned by manufacturing industry has its origins in the poorly conducted, flawed work of environmental zealots and NGOs. Manufacturing industry has an obvious interest in correcting the scientific record, and again, any carefully done study would rebut that of the zealots and favor the industry sponsor.

Elsewhere, the authors offer a more balanced assessment when they observe that “[a]ll research is potentially influenced by bias, and every funder of research has the potential to introduce a source of bias.”[33] Similarly, the fourth edition chapter notes that “[a]ll scientists have some sort of motivation for their work, and this does not preclude scientific knowledge building, so long as biased methodologies and interpretations are avoided.”[34] Their recognition that motivated reasoning is everywhere suggests that all research should receive scrutiny regardless of apparent or disclosed funding source.[35]

When it comes to providing examples of funding-effect distortions of science, Weisberg and Thanukos seem to blank on instances created by the lawsuit industry or by environmental NGOs. The reader should contrast how readily and stridently the authors point to bias in industry-sponsored research with how the authors tie themselves up with double negatives when making the same point about NGOs:

“That is not to suggest that government- or nongovernmental organization (NGO)-sponsored research is necessarily free from bias.”[36]

The cognitive dissonance is palpable. The only conclusion that could be drawn from such a locution is that Weisberg and Thanukos have not worked very hard to identify and disclose their own biases.

STATISTICS DONE POORLY

When it comes to explaining and discussing the role of statistical methods in the scientific process, Weisberg and Thanukos go off the rails. The new chapter is an unmitigated disaster, which should have been corrected in the peer review and oversight process. The first sign of trouble became apparent upon checking the definition of “p-value” in the chapter’s glossary:

“p-value. A statistic that gives the calculated probability that the null hypothesis could be true even given the observed differences between conditions.”[37]

This definition is the transposition fallacy on steroids. Obviously, a p-value cannot be the probability that the null hypothesis “could be true” when the procedure for calculating a p-value must assume that the null hypothesis is true, along with a specified probability model. Equally important, the p-value does not describe a probability in connection with the null hypothesis; it describes a probability about the data, namely the probability of observing data at least as divergent from the null expectation as the data actually observed, given that the null is true. The statistics chapter in the Manual by Hall and Kaye states the meaning correctly. The coverage of statistical concepts by Weisberg and Thanukos should be studiously ignored.
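The distinction is easy to demonstrate. In the sketch below (all numbers invented for illustration), the p-value is computed by assuming the null hypothesis is true and asking how often chance alone would produce a difference at least as large as the one observed; nothing in the calculation yields a probability that the null hypothesis is true:

```python
import random

random.seed(1)

# Hypothetical trial: 60/100 cured on the drug, 50/100 on placebo.
obs_diff = 0.60 - 0.50

# The p-value ASSUMES the null is true: both groups share one cure rate
# (the pooled rate, 110/200 = 0.55). We then ask how often random
# sampling alone produces a difference at least as large as observed.
pooled = 110 / 200
n_sims, extreme = 20_000, 0
for _ in range(n_sims):
    drug = sum(random.random() < pooled for _ in range(100)) / 100
    plac = sum(random.random() < pooled for _ in range(100)) / 100
    if abs(drug - plac) >= obs_diff:
        extreme += 1

p_value = extreme / n_sims
# p_value estimates P(data at least this extreme | null is true) --
# NOT P(null is true | data), which is what the glossary describes.
print(f"two-sided p-value ~ {p_value:.3f}")
```

A probability that the null hypothesis is true would be a Bayesian posterior, which requires a prior probability that the p-value machinery never touches.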

The outrageously incorrect definition of p-value in the glossary is not an isolated error. The authors are clearly statistically challenged. In the text of their chapter, they describe the p-value incorrectly, consistent with their aberrant glossary entry:

“[In] the commonly used p-value approach, scientists compare a test hypothesis (e.g., that drug X is effective) to a null (e.g., that there is no difference in cure rates between those who took drug X and those who took a placebo). Scientists then calculate the probability that the null hypothesis could be true even with the observed difference between conditions (e.g., the cure rate of patients taking drug X compared to that of those taking a placebo).”[38]

Weisberg and Thanukos thus conflate frequentist and Bayesian statistics. They also obliterate the meaning of the confidence interval, an important concept for judges and lawyers to understand. Here is how the authors describe the confidence interval in their chapter:

“Evaluating estimates: In science (and in contrast to their lay meanings), the terms uncertainty and error refer to the variability of a set of data that is intended to estimate a single number. Uncertainty and error are generally expressed as a range, within which we are confident that, if the study were repeated, the new result would fall. Scientists often use a 95% confidence interval for this purpose.”[39]

Describing the confidence interval in the same sentence as “uncertainty and error” is bound to induce uncertainty and error. The confidence interval provides a range of estimates based upon random error, and uncertainty only in the form of imprecision in the point estimate. There are of course myriad other kinds of uncertainty and error not captured by the confidence interval. The most important of the authors’ errors is their incorrect assertion that the confidence interval provides a range within which the results of a repeated study would fall. This is, again, a variant on the transposition fallacy that the authors commit in their definition of the p-value. The confidence interval provides a range of results that would not be rejected as alternative null hypotheses by the data in the obtained sample. Because of random error, future samples would give different results, with different confidence intervals, which would not be co-extensive with the first obtained confidence interval. To be sure, the statistics chapter states the matter correctly, and the epidemiology chapter finally gets it correct in its text (after having mangled the concept in the second and third editions), but the epidemiology chapter perpetuates its previous errors in defining confidence intervals in its glossary. This sort of issue, and it is a serious one, could have been eliminated had there been meaningful peer review and editorial oversight for consistency and accuracy of the Manual as a whole.
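A short simulation (hypothetical normal data, with the standard deviation treated as known for simplicity) shows both what a 95% confidence interval does promise and what it does not:

```python
import random
import statistics

random.seed(42)
TRUE_MEAN, SD, N = 10.0, 2.0, 50
se = SD / N ** 0.5  # standard error of the mean

def sample_mean():
    """One hypothetical study: the mean of N draws."""
    return statistics.fmean(random.gauss(TRUE_MEAN, SD) for _ in range(N))

# What a 95% CI actually promises: across repeated studies, about 95% of
# the intervals [mean +/- 1.96*se] contain the TRUE (unknown) mean.
means = [sample_mean() for _ in range(10_000)]
covers = sum(abs(m - TRUE_MEAN) <= 1.96 * se for m in means) / len(means)

# What the chapter asserts: a repeated study's new result falls within the
# first study's interval. Pairing studies up shows the second estimate
# lands inside the first study's CI only about 83% of the time, not 95%.
pairs = [(sample_mean(), sample_mean()) for _ in range(5_000)]
inside = sum(a - 1.96 * se <= b <= a + 1.96 * se for a, b in pairs) / len(pairs)

print(f"intervals covering the true mean: {covers:.1%}")
print(f"new estimate inside first study's CI: {inside:.1%}")
```

The gap between the two percentages is exactly the error in the quoted passage: the interval is a statement about the estimation procedure, not a forecast of where the next study’s result will land.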

Weisberg and Thanukos address statistical power in a way that may also mislead readers. They tell us that “[p]ower refers to a test’s ability to reject a hypothesis that is indeed false.” W&T at 88. If only it were so. The authors omit that power is the probability that, at a specified level of significance (say p < 0.05), and given a specified alternative hypothesis, sample size, and probability model, the sample result will lead to rejection of the null hypothesis in favor of the alternative hypothesis. Then the authors suggest, confusingly, that “[w]ell-designed studies have sufficient power to detect the differences of interest, but it may not be apparent when a test lacks power.”[40]

If the study at issue presents a confidence interval around a point estimate of interest, then it will be clear what alternative null hypotheses are statistically compatible with the sample result at the pre-specified level of alpha (significance). Any point outside the interval would be rejected by such a test of significance, and so the casual reader will have a rather good idea of what could and could not be rejected by the sample data. And of course, virtually every study will have low power to detect extremely small increased risks, say relative risk of 1.00001. And most studies will have high power to detect risk ratios of over 1,000.
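The omitted dependencies are easy to make concrete. The sketch below uses the standard normal approximation for comparing two proportions (a textbook formula; all inputs are hypothetical) to show that power is a function of alpha, the assumed alternative, and the sample size, and that it collapses toward alpha for tiny effects and toward one for huge effects:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_prop(p0, p1, n, z_alpha=1.96):
    """Approximate power of a two-sided z-test comparing two proportions,
    equal group sizes n, using the normal approximation."""
    pbar = (p0 + p1) / 2
    se_null = sqrt(2 * pbar * (1 - pbar) / n)          # SE assuming the null
    se_alt = sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)  # SE under the alternative
    z = (abs(p1 - p0) - z_alpha * se_null) / se_alt
    return phi(z)

# Power depends on the ASSUMED alternative, not on the test alone:
print(power_two_prop(0.10, 0.15, n=500))   # moderate effect, moderate power
print(power_two_prop(0.10, 0.101, n=500))  # trivial effect: power near alpha
print(power_two_prop(0.10, 0.50, n=500))   # huge effect: power near 1
```

The point is the one the chapter omits: “a test’s power” is meaningless without specifying the alternative hypothesis, the significance level, and the sample size against which it is computed.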

This new chapter on “How Science Works” also propagates some well-known fallacies about statistical significance testing. Implicit in the authors’ commission of the transposition fallacy is a conceptual and mathematical confusion between the coefficient of confidence (1 − α) and the posterior probability of a hypothesis.

The authors’ mistake comes in their insistence upon labeling precision in a test result as “certainty.” In the quote below, the authors’ confusion is clear and obvious:

“Note that the 95% and 5% cutoffs are somewhat arbitrary, and a higher degree of confidence might be required if more certainty were desired—for example if an impactful policy decision depended on the conclusion.”[41]

An impactful [sic] policy decision might well call for more certainty, or a higher posterior probability, but a higher coefficient of confidence will not necessarily map onto the probability of the hypothesis at all. The authors’ confusion and conflation of alpha with the Bayesian posterior probability arises elsewhere within the chapter:

“(1) A p-value lower than 0.05 does not prove that a null hypothesis is false. It is strong evidence, but there is a small chance that the difference observed could be the result of chance alone.

(2) Using a low p-value (e.g., 0.05) as a criterion for significance sets a high bar for rejecting the null hypothesis, minimizing the chance of getting a false positive… .”[42]

Again, a p-value less than five percent is hardly strong evidence in the context of large database studies, especially when there are multiple comparisons and the outcome is not the pre-specified outcome of the analysis. The authors’ confusion is on full display when they discuss the Zoloft birth defects litigation, where the Third Circuit affirmed the exclusion of plaintiffs’ expert witnesses’ causation opinions and the grant of summary judgment to the defendants. According to the authors’ narrative:

“plaintiffs’ expert’s testimony would have argued that multiple, nonsignificant associations between Zoloft use and birth defects indicated a causal relationship. The testimony was excluded because these results were consistent with a weak causal relationship (a small effect size), one that is ‘so weak that one cannot conclude that the risk is greater than that seen in the general population’.”[43]

Of course, in the Zoloft litigation, the excluded plaintiffs’ expert witnesses were caught red-handed at cherry picking, and at attempting to circumvent the lack of significance with a methodologically incorrect meta-analysis.[44]

If the risk of birth defects among children born to mothers who used Zoloft in pregnancy was no greater than seen in the general population, then there would be no risk, not risk “so weak” it cannot be seen. Locutions such as “the results were consistent with a weak causal relationship,” when the results were equally consistent with no causal relationship, suggest that the writers cannot bring themselves to say that the causal hypothesis was simply not supported at all. Of course, no study may exclude an increased risk of 0.01 percent, or a relative risk of 1.01, but at some point, when multiple attempts fail to reveal an increased risk, we may conclude that the proponents of the causal claim have failed to make their case.

META-SHMETA-ANALYSIS

Weisberg and Thanukos address meta-analysis incompletely in the context of systematic reviews. The authors do not provide any insights into how meta-analyses are done, and more glaringly, they fail to mention that not all systematic reviews can or should result in quantitative syntheses of estimates of association. On the positive side, they state that meta-analyses are important in litigation, and that the application of rigorous methodologies should be required.[45] With clearly unintended irony, Weisberg and Thanukos offer, as support for their statement, the Paoli Railroad Yard case, “in which the exclusion of a contested meta-analysis was overturned.”[46]

Weisberg and Thanukos have stepped into the wet corner of a pigsty. The issue in the Paoli case arose from a meta-analysis of mortality rates associated with polychlorinated biphenyl (PCB) exposures. The district court excluded the proffered meta-analysis, not because it was unreliable, but because it was novel. Holding the case up in conjunction with a statement about the application of rigorous or reliable methodologies misses the relevant legal point.

The expert witness who proffered the meta-analysis in Paoli was William Nicholson, a physicist with no professional training in epidemiology. For his opinion that PCBs were causally associated with human liver cancer, Nicholson relied upon a non-peer-reviewed, unpublished report he wrote for the Ontario Ministry of Labor.[47] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[48]

The defense challenged Nicholson’s opinion, not under Rule 702, but on case law that pre-dated the Daubert decision.[49] The challenge included pointing out the unreliability of Nicholson’s meta-analysis, but also asserted (incorrectly) the novelty of meta-analysis generally. The district court sustained the defense objection on the grounds of “novelty,” without reaching the reliability analysis.[50] The Third Circuit appropriately reversed and remanded for consideration of the reliability of Nicholson’s meta-analysis.[51]

The consideration of Nicholson’s “meta-analysis” never occurred on remand; plaintiffs’ counsel and their expert witnesses withdrew their reliance upon Nicholson’s analysis. Their about-face was highly prudent. Nicholson’s report presented SMRs (standardized mortality ratios); for the all-cancers statistic, he reported an SMR of 95. What Nicholson did, in this analysis and in all other instances, was simply to divide the observed number of deaths by the expected, and multiply by 100. This crude, simplistic calculation fails to yield a standardized mortality ratio, which requires taking into account the age distribution of the exposed and the unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of a fraud.
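A toy example (all numbers invented) shows why the age standardization matters. When the cohort’s age distribution differs from the reference population’s, a crude observed-over-expected ratio computed from one overall rate can grossly misstate the properly standardized ratio:

```python
# Hypothetical reference population, two age strata.
# Age-specific death rates per person-year in the reference population:
ref_rate = {"young": 0.001, "old": 0.010}
# The reference population is mostly old, so its overall crude rate is high:
ref_py = {"young": 20_000, "old": 80_000}
crude_ref_rate = sum(ref_rate[a] * ref_py[a] for a in ref_rate) / sum(ref_py.values())

# The exposed cohort, by contrast, is mostly YOUNG:
cohort_py = {"young": 9_000, "old": 1_000}
observed = 20  # deaths actually observed in the cohort

# Indirect standardization: expected deaths computed stratum by stratum,
# applying the reference AGE-SPECIFIC rates to the cohort's age structure.
expected = sum(ref_rate[a] * cohort_py[a] for a in cohort_py)  # 9 + 10 = 19
smr = 100 * observed / expected

# Naive "crude" expectation ignores the cohort's age structure entirely:
naive_expected = crude_ref_rate * sum(cohort_py.values())
naive_ratio = 100 * observed / naive_expected

print(f"SMR (age-standardized expected): {smr:.0f}")   # slight excess, ~105
print(f"crude ratio (unstandardized):    {naive_ratio:.0f}")  # spurious deficit, ~24
```

With these invented numbers, the unstandardized calculation manufactures a large apparent mortality deficit, while proper standardization shows a slight excess: the age structure, not the exposure, drives the crude result.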

Nicholson’s report was replete with many other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and still his analysis of the composite outcome was based upon only seven cases. Of those seven cases, some were not confirmed as primary liver cancer, and at least one was confirmed as not being a primary liver cancer.[52]

As noted, Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.
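The cost of uncorrected multiple comparisons is easy to quantify. Under the simplifying assumption of independent tests (real comparisons from one dataset are correlated, so this is only a sketch), the chance of at least one spurious “significant” result grows rapidly with the number of comparisons:

```python
# Familywise error rate for k independent tests at alpha = 0.05
# (a simplified model, assuming independence of the comparisons).
alpha, k = 0.05, 20
familywise = 1 - (1 - alpha) ** k   # ~0.64: a 64% chance of at least one false positive
bonferroni = alpha / k              # 0.0025: per-test threshold restoring ~0.05 overall
```

With twenty uncorrected comparisons, a “significant” finding somewhere is more likely than not, even when there is no true association at all.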

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies, without weighting by the inverse of study variance, is not a valid meta-analysis; indeed, it is a well-known example of how to generate the error known as Simpson’s Paradox, which can change the direction or magnitude of any association.[53]
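Simpson’s Paradox is easy to demonstrate with invented numbers: two studies can each show a risk ratio above one, while naively summing events and totals across them produces a pooled ratio below one.

```python
# Hypothetical two-study example (all counts invented): pooling raw counts
# across studies, without weighting, reverses the direction of the association.
def risk_ratio(exp_events, exp_n, unexp_events, unexp_n):
    """Ratio of incidence in the exposed group to incidence in the unexposed."""
    return (exp_events / exp_n) / (unexp_events / unexp_n)

# Each tuple: (exposed events, exposed N, unexposed events, unexposed N)
study1 = (90, 100, 800, 1000)   # RR = 0.90 / 0.80 = 1.125 (elevated)
study2 = (200, 1000, 10, 100)   # RR = 0.20 / 0.10 = 2.0   (elevated)

# Naive pooling: add counts cell-by-cell across the two studies.
pooled = tuple(a + b for a, b in zip(study1, study2))
# pooled RR = (290/1100) / (810/1100), roughly 0.36 -- direction reversed
```

Both studies show an elevated risk, yet the pooled counts show an apparently protective effect, because the two studies have very different baseline risks and group sizes. Inverse-variance weighting of each study’s own effect estimate avoids this trap.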

In citing to the Paoli case as a reversal of exclusion of a contested meta-analysis, Weisberg and Thanukos give a truncated analysis that misleads readers, judges, and lawyers. There never was a proper consideration of the reliability vel non of Nicholson’s meta-analysis in the Paoli litigation, and in the final analysis, the Paoli plaintiffs abandoned reliance upon Nicholson’s ill-conceived meta-analysis.

VIRTUE SIGNALING

Although there are no land acknowledgments for the property on which the Federal Judicial Center building sits, Weisberg and Thanukos miss few opportunities to let us know that they are woke scholars. There is the gratuitous and triggering “pregnant people,”[54] which begs any number of biological questions. Then there is the authors’ statement that they are limiting their focus to the “Western conception of science,” which begs another question: why would we call any epistemically valid approach, from any corner of the globe, anything other than “science”?[55]

Equally gratuitous are the authors’ endorsements of DEI and “diversity,” with overbroad generalizations that diversity per se advances science,[56] and a claim that “women, people of color, other historically oppressed groups, and non-Western people” are not taken seriously as scientists.[57] In over 40 years of litigating technical and scientific issues, I have never seen a judge or a lawyer disrespect an expert witness based upon sex, race, ethnicity, or national origin. Of course, I have seen expert witnesses treated roughly for propounding bad science, and that seems perfectly appropriate.


[1] See David Goodstein, ON FACT AND FRAUD: CAUTIONARY TALES FROM THE FRONT LINES OF SCIENCE (2010).

[2] Weisberg and Thanukos frequently refer to other chapters in the Manual, which suggests that their chapter was written late in the development of the Fourth Edition, and perhaps contributed to the delayed publication.

[3] Michael Weisberg & Anastasia Thanukos, How Science Works, in National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 47 (4th ed. 2025) [cited as W&T].

[4] See Michael Weisberg, University of Pennsylvania Philosophy, at https://philosophy.sas.upenn.edu/people/michael-weisberg.

[5] Anna Thanukos, Staff, available at https://ucmp.berkeley.edu/people/anna-thanukos/.

[6] W&T at 72-75.

[7] W&T at 81.

[8] W&T at 81.

[9] W&T at 81 & n.85 (emphasis added), citing Naomi Oreskes & Erik M. Conway, MERCHANTS OF DOUBT: HOW A HANDFUL OF SCIENTISTS OBSCURED THE TRUTH ON ISSUES FROM TOBACCO SMOKE TO GLOBAL WARMING (2010).

[10] W&T at 94-96.

[11] W&T at 95 n.120.

[12] Richard Van Noorden, More than 10,000 research papers were retracted in 2023 — a new record, 624 NATURE 479 (2023).

[13] W&T at 95.

[14] W&T at 55.

[15] W&T at 63, 68.

[16] W&T at 68.

[17] W&T at 65.

[18] W&T at 70.

[19] W&T at 71.

[20] W&T at 66.

[21] W&T at 75.

[22] W&T at 49.

[23] W&T at 83.

[24] W&T at 86 (citing Richter and Capra’s discussion of Milward in chapter one of the Manual, and Professor Gold’s article from the lawsuit industry celebratory conference on the Milward case).

[25] W&T at 99-100.

[26] W&T at 99.

[27] W&T 96 (emphasis added).

[28] IARC MONOGRAPHS ON THE IDENTIFICATION OF CARCINOGENIC HAZARDS TO HUMANS – PREAMBLE (2019), available at https://monographs.iarc.who.int/wp-content/uploads/2019/07/Preamble-2019.pdf.

[29] Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 1, 32-33 (4th ed. 2025).

[30] W&T at 76.

[31] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. AM. MED. ASS’N 2782 (1993). See Schachtman, The Rhetoric and Challenge of Conflicts of Interest, TORTINI (July 30, 2013).

[32] W&T at 76 & n.67, citing Sergio Sismondo, Pharmaceutical Company Funding and Its Consequences: A Qualitative Systematic Review, 29 CONTEMP. CLINICAL TRIALS 109 (2008).

[33] W&T at 77.

[34] W&T at 59-60.

[35] W&T at 59-60.

[36] W&T at 76.

[37] W&T at 111.

[38] W&T at 87.

[39] W&T at 90.

[40] W&T at 88.

[41] W&T at 90 (emphasis added).

[42] W&T at 88.

[43] W&T at 90 (internal citations omitted).

[44] In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 449 (E.D. Pa. 2014); No. 12-md-2342, 2015 WL 314149, at *3 (E.D. Pa. Jan. 23, 2015) (rejecting proffered expert witness opinion based upon “cherry-picking of studies and data within studies”), aff’d, 858 F.3d 787 (3rd Cir. 2017).

[45] W&T at 99.

[46] W&T at 99 & n.134, citing In re Paoli R.R. Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990).

[47] William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (ODP); IDSP Report No. 2 (Toronto Dec. 1987) [Report].

[48] Id. at 373.

[49] See United States v. Downing, 753 F.2d 1224 (3d Cir.1985).

[50] In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[51] In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 499 U.S. 961 (1991).

[52] Report, Table 22.

[53] See James A. Hanley, et al., Simpson’s Paradox in Meta-Analysis, 11 EPIDEMIOLOGY 613 (2000); H. James Norton & George Divine, Simpson’s paradox and how to avoid it, SIGNIFICANCE 40 (Aug. 2015); George Udny Yule, Notes on the theory of association of attributes in statistics, 2 BIOMETRIKA 121 (1903).

[54] W&T at 84.

[55] W&T at 50.

[56] W&T at 71 n. 52-54.

[57] W&T at 102.

Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part 5

March 7th, 2026

By ignoring Milward’s expert witnesses’ omissions from, and abridgements of, WOE and IBE, the appellate court blinded itself to these witnesses’ distortions of scientific method. The need for judgment, which the Milward court was keen to honor, does not mean that there are no aberrant or deviant judgments, or no deviations from the standard of scientific care, that are disqualifying. The need for judgment must also allow for equipoise and uncertainty that stand in the way of an inculpatory or exonerative verdict. And then there is the business of questionable research practices that subvert causal judgment. The district court had followed and acknowledged the showing of questionable research practices that pervaded Martyn Smith’s for-litigation opinions. The cheerleaders for Milward seem eager to obscure these practices by their insistence that causation is, after all, only a judgment.

The Milward decision, in its embrace of some truly aberrant methodology and judgment, and some absence of methodology, made some whoppers of its own. Martyn Smith’s incompetent analyses of the epidemiologic evidence had been thoroughly debunked in the district court, but the circuit court glibly adopted Smith’s characterizations. The appellate court failed to understand and come to grips with Smith’s rejiggering of data, and his inconsistent redefinition of exposures and outcomes in epidemiologic studies to make up new, fanciful results that favored his WOE-ful opinion. The appellate court also failed to understand that scientific judgment is not some vague, amorphous, unstructured decision that turns on whatever looks to be “explanatory.” Even the International Agency for Research on Cancer, which issues hazard classifications distorted by non-scientific precautionary-principle reasoning, insists that three streams of evidence (epidemiologic, toxicologic, mechanistic) be considered separately, in accordance with criteria, with attention to the validity of each study, and synthesized into a judgment of causality following a carefully structured analysis.[1]

The appellate court in Milward took the demonstration of Smith’s failure to calculate odds ratios correctly to be something that merely went to the weight, not the admissibility, on the theory that a jury, which does not have access to the Reference Manual or to the actual studies as published, could sort it all out. And yet, when the court improvidently set out a definition of what an odds ratio is, it bungled the definition beyond understanding:

“An odds ratio represents the difference in the incidence of a disease between a population that has been exposed to benzene and one that has not.”[2]

The court’s definition is not even wrong. The difference between the incidence of a disease in an exposed group and in a non-exposed group is the risk difference. It is not an odds ratio. Perhaps the court might have realized what most third graders know: there is a difference between a ratio (division) and a difference (subtraction). And of course, the odds of exposure are not the same as the incidence of a disease. The relevant odds ratio represents the odds of exposure among cases with APML diagnoses divided by the odds of exposure among study subjects without APML. The odds ratio does not involve measurements of incidence, although in some cases it will approximate a risk ratio, which does involve a ratio of incidences. This is not some hyper-technicality; it is a vivid display that Chief Judge Lynch, writing for a panel of three judges of the First Circuit, had no idea what she was reviewing or writing.
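A small, hypothetical case-control table (all counts invented for illustration) shows the quantities the court conflated:

```python
# Hypothetical case-control counts (invented for illustration).
cases_exposed, cases_unexposed = 20, 80
controls_exposed, controls_unexposed = 10, 90

# Odds ratio: odds of exposure among cases / odds of exposure among controls.
odds_cases = cases_exposed / cases_unexposed              # 0.25
odds_controls = controls_exposed / controls_unexposed     # about 0.111
odds_ratio = odds_cases / odds_controls                   # 2.25 (the "ad/bc" cross-product)

# A risk difference, by contrast, subtracts one incidence from another,
# a cohort-design measure: e.g. 0.03 - 0.01 = 0.02 -- a different quantity
# entirely, which a case-control study cannot directly supply.
```

The cross-product form (20 × 90) / (80 × 10) gives the same 2.25, which is why the odds ratio can be computed from a case-control design even though incidences cannot.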

Richter and Capra devote two pages to a discussion of the Milward case and its embrace of WOE and IBE. There is not, in this discussion, a single adjective of approval or of disapproval. The attention to this one intermediate appellate court opinion far exceeds any other case decided at a level below the Supreme Court, and an engaged reader must ask why the authors of the first chapter of the new Reference Manual wrote about this case at all, especially given the 2023 amendments to Rule 702, which would suggest that Milward was bad law when decided in 2011, and clearly and emphatically bad law in December 2025, when the new Manual was published.

The chapter provides one not-so-subtle clue of the authors’ intent. At the conclusion of their extended, uncritical, and incomplete exposition of Milward,[3] Richter and Capra refer the reader to a law review symposium,[4] “[f]or a detailed analysis of the Milward decision and the weight of the evidence approach to scientific reasoning.” Like Richter and Capra’s coverage of Milward, the cited symposium was hardly an objective analysis; rather, it was more like a drunken celebration at a family reunion.

There have been many law review articles discussing the Milward case, but Richter and Capra chose to cite one particular symposium, which was sponsored by two corporations, the Center for Progressive Reform (CPR) and the Robert A. Habush Foundation. The CPR is a not-for-profit corporation. Its website describes the CPR as a “research and advocacy organization that works in the service of responsive government; climate justice, mitigation, and adaptation; and protecting against environmental harm.”[5] CPR describes one of its key activities as defending science from corporate interference. Presumably its own corporate activities and those of the lawsuit industry are acceptable, but those of corporate manufacturing industry are not. From reviewing CPR’s website, it is not clear that the CPR believes manufacturing corporations should even be allowed to defend against lawsuits. Milward’s retained expert witness Carl Cranor is a “member scholar” at CPR, which makes CPR’s sponsorship of the symposium rather incestuous.[6]

CPR is also apparently comfortable with one highly politicized “corporation,” namely the American Association for Justice (AAJ), which is the trade group for the American lawsuit industry.[7] The AAJ describes itself as a corporation, or a “collective,” that supports plaintiff trial lawyers as their “collective voice … on Capitol Hill and in courthouses across the nation … .” The Robert A. Habush Foundation is endowed by the AAJ, and serves its “educational” mission. Through the Habush Foundation, the AAJ funds educational programs, “think tanks,” and writing projects designed to influence judges, law professors, lawyers, and the public, on issues of importance to the AAJ: “the civil justice system and individual rights” for bigger, better, and more profitable litigation outcomes. The AAJ may be a “not-for-profit” corporation, but it represents the interests of one of the most powerful and wealthiest interest groups in American society — the plaintiffs’ bar.

The Milward symposium agenda and papers from its participants were published on the website of the Wake Forest Journal of Law and Public Policy, but are now marked as “currently private. If you would like to request access, we’ll send your username to the site owner for approval.”

The symposium cited by Richter and Capra for “analysis” was very much a family affair. The choice of venue, at the Wake Forest Law School, was connected to the web of interests involved. CPR board member Sid Shapiro is a law professor at Wake Forest. Shapiro presented at the symposium, along with Wake Forest professor Michael Green. Cranor, Shapiro’s CPR colleague and the plaintiff’s party expert witness, presented as well.[8] Only one practicing lawyer presented at the symposium: Steven Baughman Jensen, a past chair of the AAJ’s Section on Toxic, Environmental, and Pharmaceutical Torts. Jensen represented Milward, and hired Cranor as one of the plaintiff’s expert witnesses. Jensen’s contribution to the symposium was published, along with Cranor’s, in volume 3, no. 1 of the Wake Forest Journal of Law and Public Policy,[9] which is now also marked private. Jensen also published an abbreviated paean to Milward in the AAJ’s trade journal.[10] No defense counsel or defense expert witness participated in the symposium referenced by Richter and Capra.

Consistent with the financial, advocacy, and political interests of the symposium sponsors, the articles are almost all partisan high-fives for the Milward decision. Writing for the Federal Judicial Center and the National Academies, the authors of a chapter on the law of expert witnesses, a legal issue, for the Reference Manual, should have been aware of the partisan nature of the CPR-AAJ sponsored symposium. They should have flagged the advocacy nature of the symposium, and identified the funding sources and the conflicts created. Furthermore, Richter and Capra should have cited papers that criticized the Milward case, from various perspectives, including its failure to adhere to the law of Rule 702.[11] Their failure to do so is a significant failure of this chapter.


[1] IARC MONOGRAPHS ON THE IDENTIFICATION OF CARCINOGENIC HAZARDS TO HUMANS – PREAMBLE (2019).

[2] Milward, 639 F.3d at 23.

[3] Richter & Capra at 33 n.96 (“For a detailed analysis of the Milward decision and the weight of the evidence approach to scientific reasoning…”).

[4] Symposium: Toxic Tort Litigation: After Milward v. Acuity Products, 3 WAKE FOREST JOURNAL OF LAW & POLICY 1 (2013).

[5] The Center for Progressive Reform, at https://progressivereform.org/, last visited on Feb. 24, 2026.

[6] Carl Cranor Biography, Center for Progressive Reform, Member Scholars, at https://progressivereform.org/member-scholars/.

[7] The AAJ was previously known by the more revealing name, Association of Trial Lawyers of America (ATLA®). 

[8] Carl F. Cranor, Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation, 3 WAKE FOREST JOURNAL OF LAW & POLICY 105 (2013).

[9] Steve Baughman Jensen, Sometimes Doubt Doesn’t Sell: A Plaintiffs’ Lawyer’s Perspective on Milward v. Acuity Products, 3 WAKE FOREST JOURNAL OF LAW & POLICY 177 (2013).

[10] Steve Baughman Jensen, Reframing the Daubert Issue in Toxic Tort Cases, 49 TRIAL 46 (Feb. 2013).

[11] See Eric Lasker, Manning the Daubert Gate: A Defense Primer in Response to Milward v. Acuity Specialty Products, 79 DEF. COUNS. J. 128, 128 (2012); David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 29, 53-58 (2013); David E. Bernstein & Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 WM. & MARY L. REV. 1, 33 (2015); Richard Collin Mangrum, Comment on the Proposed Revision of Federal Rule 702: “Clarifying” the Court’s Gatekeeping Responsibility over Expert Testimony, 56 CREIGHTON LAW REVIEW 97, 106 & n.45 (2022); Thomas D. Schroeder, Toward a More Apparent Approach to Considering the Admission of Expert Testimony, 95 NOTRE DAME L. REV. 2039, 2045 (2020); Lawrence A. Kogan, Weight of the Evidence: A Lower Expert Evidence Standard Metastasizes in Federal Court, Washington Legal Foundation Critical Legal Issues WORKING PAPER Series no. 215 (Mar. 2020); Note, Judicial Conference Amends Rule 702. — Federal Rule of Evidence 702, 138 HARV. L. REV. 899, 903 (2025); Nathan A. Schachtman, Desultory Thoughts on Milward v. Acuity Specialty Products, DOI: 10.13140/RG.2.1.5011.5285 (Oct. 2015), available at https://www.researchgate.net/publication/282816421_Desultory_Thoughts_on_Milward_v_Acuity_Specialty_Products.

Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part 3

March 2nd, 2026

Richter and Capra treat WOE in Justice Stevens’ lone dissenting opinion in Joiner as if it were the law. Of course, it was not; nor was it a particularly insightful analysis of scientific method, Rule 702, or the law of expert witnesses. The Manual authors elevate WOE by their complete failure to offer any criticisms of it, or to cite the scientific and legal scholars who have criticized WOE.

Richter and Capra do cite to a couple of cases that are skeptical of expert witnesses who had offered WOE opinions, but they fail to cite to any cases that disparage WOE itself.[1] In aggravation of their misplaced focus on the Joiner dissent, Richter and Capra proceed to spend two full pages on the Milward case, which had posthumously appeared in Professor Berger’s version of the law chapter in the 2011, third edition of the Reference Manual. The attention given to Milward in the fourth edition is greater than to any other non-Supreme Court case, including Frye. Richter and Capra offer no commentary or analysis critical of the case, although many legal commentators have criticized the Milward opinion on WOE.[2]

Richter and Capra’s chapter fails to note that a dark cloud hangs over the Milward case due to the unethical non-disclosure of CERT’s amicus brief filed in support of reversing the exclusion of CERT’s founders, Carl Cranor and Martyn Smith,[3] or CERT’s funding Smith’s research, or CERT’s involvement in shaking down corporations in California for Prop 65 bounties.

In their extensive coverage of the 2011 Milward decision, Richter and Capra failed to report that after the First Circuit reversed and remanded, the trial court again excluded plaintiffs’ expert witnesses for failing to give a valid opinion on specific causation. On the second appeal, the First Circuit affirmed the exclusion of specific causation expert witness testimony and the entry of final judgment for defendants.[4] Given that the first appellate decision was no longer necessary to the final disposition of the case, it is questionable whether there is any holding with respect to general causation in the case.

The most salient aspect of Richter and Capra’s uncritical coverage of the Milward case is their complete failure to identify the legal errors made by the First Circuit in its decision on Rule 702 and general causation. As the Reporter to the Rules Advisory Committee, Professor Capra was intimately involved in many meetings and memoranda that addressed the failings of courts to engage properly in gatekeeping. These failings were the gravamen of the basis for the 2023 amendments to Rule 702. The Milward decision in 2011 managed to check almost every box for bad decision making: the appellate panel ignored the text of Rule 702, disregarded Supreme Court precedent in the Joiner case, relied upon over-ruled, obsolete, pre-Daubert decisions, ignored the policy considerations urged by the Supreme Court, bungled basic scientific concepts, and egregiously and credulously endorsed WOE as a scientific methodology. Professor David E. Bernstein has pointed to the 2011 Milward decision as “the most notorious,” and “[t]he most prominent example of such judicial truculence” in resisting following the requirements of Rule 702, as it existed in 2011.[5]

Milward is an important case, much as the Berenstain Bears stories are important and helpful in teaching children what not to do. Unfortunately, Richter and Capra discuss Milward in a way that might lead readers to believe that the case represents a reasonable or proper treatment of the science involved in the case. To correct this biased coverage of Milward, readers will have to roll up their sleeves and actually look at what the court did and did not do, and what scientific methodology issues were involved.

Perhaps the best place to begin is the beginning. Brian Milward filed a lawsuit in which he claimed that he was exposed to benzene as a refrigerator technician.[6] He developed acute promyelocytic leukemia (APML), and claimed that he had been exposed to benzene from having used products made or sold by roughly two dozen companies. APML is a rare disease, type M3 of acute myeloid leukemia (AML), defined by specific chromosomal abnormalities that are necessary but not sufficient to result in APML. APML has an incidence of fewer than five cases per million per year. APML occurs with equal frequency in both sexes; there are no known environmental or occupational causes of APML.[7] APML occurs in the general population without benzene exposure, and its occurrence in all populations is sparse. There are no biomarkers suggesting that some putative benzene-related mechanism is involved in any APML cases, which might otherwise identify the rare case in which benzene played a causal role.

Milward’s General Causation Expert Witness, Martyn T. Smith

Milward did not serve a report from an epidemiologist, or anyone with significant expertise in epidemiology. His only general causation expert witness was Martyn Smith, a toxicologist, who testified that the “weight of the evidence” supported his opinion that benzene exposure causes APML.[8] As noted above, Smith is a member of the advocacy group, the Collegium Ramazzini; and for over 30 years, he has been a frequent testifier for plaintiffs in chemical exposure cases.[9]

Despite the low but widespread prevalence of APML in the general population, with no sex specificity, and the absence of any identifying biomarker of supposed benzene-related etiology in individual cases, Smith maintained that epidemiology was not necessary to reach a causal opinion about benzene and APML. The principal thrust of Smith’s proffered testimony was that APML is a plausible outcome of benzene exposure, because benzene can cause other varieties of AML through structural (clastogenic) alteration of chromosomes, breaking them and causing rearrangements.[10]

The trial court found that Smith’s extrapolations were problematic and lacking in supporting evidence. The clear differences among AML subtypes made the extrapolation to APML, a unique clinical entity, inappropriate. The characteristic translocation in APML is absent from other varieties of AML, and APML, unlike other AML varieties, is treatable with all-trans retinoic acid.[11]

Smith advanced speculation that benzene targeted cells in the pathway of leukemic transformation to APML, but the state of the science was clearly devoid of sufficient evidence to show that benzene was involved in the APML translocations. Although the parties agreed that mechanistic evidence showed that benzene can effectuate chromosomal damage characteristic of some AML subtypes other than APML, the trial court found that:

“[n]o evidence has been published making a similar connection between benzene exposure and the t(15;17) translocation, characteristic of APL [APML].”[12]

The trial court assessed Smith’s extrapolation from benzene’s clastogenic effect in breaking and rearranging chromosomes to induce some types of AML to its causing the specific APML t(15;17) translocation, as a

“bull in the china shop generalization: since the bull smashes the teacups, it must also smash the crystal. Whether that is so, of course, would depend on the bull having equal access to both teacups and crystal. If the teacups were easily knocked over, but the crystal securely stored away, a reason would exist to question, if not to reject, the proposition that the crystal was in as much danger as the teacups.”[13]

The trial judge clearly saw that Smith’s plausibility proved too much, and would support attributing virtually any disease to benzene through a putative mechanism of breaking chromosomes.

Lacking the courage of his convictions, Smith, a non-epidemiologist, proceeded to offer opinions about the epidemiology of benzene and APML, some of them quite fanciful. No published or unpublished study showed a statistically significant increase in APML among benzene-exposed workers. The most Smith could draw from the published epidemiologic studies on benzene was one Chinese study that found a small risk ratio, without even nominal statistical significance: a crude odds ratio of 1.42 for benzene exposure and APML. Despite Smith’s hand-waving about lack of power,[14] this Chinese study suggested that chloramphenicol was a risk factor for APML (M3), and it was able to identify a nominally statistically significant association between benzene and another sub-type of AML (M2a), with an odds ratio of 1.54.[15]

Smith offered no meta-analysis to show that the available studies collectively established a summary estimate of increased risk for APML among benzene workers. Undaunted, Smith set about to re-jigger the numbers in published studies to make something out of nothing. Neither physician nor epidemiologist, Smith altered diagnoses and exposure status as reported in published papers so that his reclassified cases and controls would yield associations where none existed. These re-analyses were done speculatively, inconsistently, and incompetently, driven by the motivation to reach a preferred result. His approach was unsupported, unprincipled, and lacking in any reasonable methodology. The proffered re-analyses were never published, never presented at a professional society meeting, and could never have satisfied the standards epidemiologists use in their non-litigation work. As a toxicologist, Smith had no non-litigation epidemiologic activities of note.

Smith’s representation of the relevant epidemiologic methods and studies was misleading and contained numerous errors that cumulatively led to erroneous conclusions; his own re-jiggering was carried out to reach a preferred conclusion to support plaintiff’s litigation case.[16]

One of the epidemiologic studies relied upon by Smith was Golumb (1982).[17] This study did not explore associations with benzene; it was a study of insecticides, chemicals and solvents, and petroleum. Crude oil contains very little benzene, typically about 0.1 percent.[18] Smith, without any evidentiary support, assumed that petroleum exposure equated to benzene exposure.

There were eight cases of leukemia with petroleum exposure; one of those cases was APML. The authors of Golumb (1982) reported that this particular case with APML was actually a crane operator.[19]

In analyzing published epidemiologic studies, Smith insisted that he could reclassify control subjects’ APML cases as non-APML when the karyotype was normal. Karyotype analysis identifies the defining chromosomal translocation of APML, which is found in virtually all such cases. The obvious result of Smith’s ad hoc reclassifications was to increase risk ratios for APML among benzene-exposed subjects. His arbitrary reclassifications of data allowed him to create the result he desired. In reviewing other published studies, Smith insisted that a normal karyotype did not require reclassifying cases out of the APML category, when that approach would yield a risk ratio above one.

Taking data from the Golumb 1982 paper, Smith attempted to inflate his calculation of an odds ratio to support his causation opinion. He arbitrarily discarded two APML cases from the non-exposed group, and he discarded eight non-APML cases from the exposed subjects. He did not report p-values or confidence intervals for his reanalyses. At the hearing, the defense epidemiologist showed that Smith’s rejiggered odds ratio (1.51) had a p-value of 0.72, and a 95 percent confidence interval of 0.15 to 14.91. Not only was the result not statistically significant, but the confidence interval spanned nearly two orders of magnitude of alternative hypotheses, none of which was rejected by the sample data at an alpha of 0.05. Without the rejiggering of exposed and unexposed cases, the odds ratio would have been 0.71, p = 0.76. All results, both as reported in the published article and as rejiggered by Smith, were highly compatible with no association whatsoever.
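The kind of calculation the defense epidemiologist performed can be sketched with the standard Woolf (log) method on a hypothetical sparse 2×2 table (counts invented for illustration, not Smith’s actual data): with so few cases, the confidence interval inevitably spans a huge range.

```python
import math

# Hypothetical sparse 2x2 table (all counts invented for illustration):
# a = exposed cases, b = unexposed cases,
# c = exposed controls, d = unexposed controls
a, b, c, d = 2, 5, 3, 11

odds_ratio = (a * d) / (b * c)                          # about 1.47
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)  # about 0.18
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)  # about 11.7
# The interval spans nearly two orders of magnitude and easily includes 1.0;
# that is, the data are compatible with no association at all.
```

The dominant terms in the standard error are the reciprocals of the smallest cells, which is why a handful of cases can never pin down an odds ratio with any precision.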

In discussing other studies, Smith repeated his re-labeling of leukemia cases as APML, in the absence of karyotyping, to support his claims that more APML cases were observed than expected based on general population rates.[20] Smith also cited studies improvidently in supposed support of his opinion (Rinsky 1981; updated in 1994), where there was no association at all. Even workers heavily exposed to benzene in these studies did not develop APML.[21] Similarly, in support of his opinion, Smith cited another Chinese study, which actually declared that:

“Acute promyelocytic leukemia has been reported infrequently in benzene-exposed groups as well as in t-ANLL. Although ANLL-M3 occurred in at least 4 patients in this series, its general representation among the subtypes of ANLL was similar in its distribution in de novo ANLL in China.”[22]

Smith’s methodological improprieties were the subject of a four-day pre-trial hearing before Judge O’Toole. In the course of the hearings, Smith attempted to defend his methods, but like Donny Kerabatsos in The Big Lebowski, Smith was out of his depth. The trial court found that Smith’s arbitrary creation and selection of data to support his beliefs was unreliable and not in accordance with generally accepted scientific methodology in the fields of medicine or epidemiology. Smith was simply fabricating data to fit his made-for-litigation beliefs.

Carl Cranor’s Attempt to Bolster Smith

Milward also submitted a report from Carl Forest Cranor, Smith’s business partner in founding the Prop 65 bounty-hunting CERT, and a fellow member of the advocacy group Collegium Ramazzini. Cranor has no expertise in toxicology or epidemiology, and he has never published on the causes of APML. As a professor of philosophy, Cranor has written about scientific methodology, including WOE and “inference to the best explanation” (IBE). Cranor’s publications are riddled with basic misunderstandings of statistical concepts.[23] Essentially, Cranor testified at the Rule 702 hearing as a cheerleader for Smith, advocating the admissibility of dodgy scientific conclusions under a methodology he described as WOE or IBE. Cranor stretched to resurrect Justice Stevens’ use of WOE, and attempted to pass it off as a generally accepted scientific mode of reasoning.

The trial court carefully reviewed the proffered opinion testimony in a four-day pre-trial hearing. The trial court found that Smith had shown that his hypothesis was plausible and possible, but not that it was “scientific knowledge,” as required by Rule 702. Lacking sufficient scientific methodological validity and support, Smith’s opinions failed to satisfy the requirements of Rule 702, and were thus inadmissible. As a result of excluding plaintiff’s sole general causation expert witness, the trial court granted summary judgment to the defendants.[24]

(to be continued)


[1] See, e.g., Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 197-98 (5th Cir. 1996) (“We are also unpersuaded that the ‘weight of the evidence’ methodology these experts use is scientifically acceptable for demonstrating a medical link between Allen’s EtO [ethylene oxide] exposure and brain cancer.”); Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 601-02 (D.N.J. 2002) (excluding David Ozonoff, whose WOE analysis of whether perchloroethylene causes acute myelomonocytic leukemia was criticized by court-appointed technical advisor), aff’d, 68 F. App’x 356 (3d Cir. 2003).

[2] See Eric Lasker, Manning the Daubert Gate: A Defense Primer in Response to Milward v. Acuity Specialty Products, 79 DEF. COUNS. J. 128, 128 (2012); David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 29, 53-58 (2013); David E. Bernstein & Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 WM. & MARY L. REV. 1, 33 (2015); Richard Collin Mangrum, Comment on the Proposed Revision of Federal Rule 702: “Clarifying” the Court’s Gatekeeping Responsibility over Expert Testimony, 56 CREIGHTON LAW REVIEW 97, 106 & n.45 (2022); Thomas D. Schroeder, Toward a More Apparent Approach to Considering the Admission of Expert Testimony, 95 NOTRE DAME L. REV. 2039, 2045 (2020); Lawrence A. Kogan, Weight of the Evidence: A Lower Expert Evidence Standard Metastasizes in Federal Court, Washington Legal Foundation Critical Legal Issues WORKING PAPER Series no. 215 (Mar. 2020); Note, Judicial Conference Amends Rule 702. — Federal Rule of Evidence 702, 138 HARV. L. REV. 899, 903 (2025); Nathan A. Schachtman, Desultory Thoughts on Milward v. Acuity Specialty Products, DOI: 10.13140/RG.2.1.5011.5285 (Oct. 2015), available at https://www.researchgate.net/publication/282816421_Desultory_Thoughts_on_Milward_v_Acuity_Specialty_Products .

[3] See David DeMatteo & Kellie Wiltsie, When Amicus Curiae Briefs are Inimicus Curiae Briefs: Amicus Curiae Briefs and the Bypassing of Admissibility Standards, 72 AM. UNIV. L. REV. 1871 (2022) (noting that amicus briefs often include “unvetted and potentially inaccurate, misleading, or mischaracterized expert information,” without the procedural safeguards in place for vetting expert witnesses at trial).

[4] Milward v. Acuity Specialty Prods. Group, Inc., 969 F. Supp. 2d 101, 109 (D. Mass. 2013), aff’d sub nom. Milward v. Rust-Oleum Corp., 820 F.3d 469, 471, 477 (1st Cir. 2016).

[5] David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 53, 29 (2013).

[6] Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137 (D. Mass. 2009) (O’Toole, J.), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. denied, U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012).

[7] Andrew Y. Li, et al., Clustered incidence of adult acute promyelocytic leukemia in the vicinity of Baltimore, 61 LEUKEMIA & LYMPHOMA 2743 (2021); Hassan Ali, et al., Epidemiology and Survival Outcomes of Acute Promyelocytic Leukemia in Adults: A SEER Database Analysis, 144 BLOOD 5942 S1 (2024).

[8] Milward, 664 F. Supp. 2d at 142.

[9] See, e.g., PPG Industries, Inc. v. Wells, No. 21-0232 (Feb. 10, 2023 W.Va.S.Ct.); Hall v. ConocoPhillips, 248 F. Supp. 3d 1177 (W.D. Okla. 2017); In re Levaquin Prods. Liab. Litig., 739 F.3d 401 (8th Cir. 2014); Jacoby v. Rite Aid Corp., No. 1508 EDA 2012 (Dec. 9, 2013 Pa. Super.); Harris v. CSX Transp., Inc., 232 W.Va. 617, 753 S.E.2d 275 (2013); In re Baycol Prods. Litig., 495 F. Supp. 2d 977 (D. Minn. 2007); In re Rezulin Prods. Liab. Litig., MDL 1348, 441 F.Supp.2d 567 (S.D.N.Y. 2006) (advocating mythological “silent injury”); Perry v. Novartis, 564 F.Supp.2d 452 (E.D. Pa. 2008); Dodge v. Cotter Corp., 328 F.3d 1212 (10th Cir. 2003); Sutera v. The Perrier Group of America Inc., 986 F. Supp. 655 (D. Mass. 1997); Redland Soccer Club, Inc. v. Dep’t of Army, 835 F.Supp. 803 (M.D. Pa. 1993).

[10] Milward, 664 F.Supp. 2d at 143-44.

[11] Milward, 664 F.Supp. 2d at 144.

[12] Id. at 146.

[13] Id.

[14] The claim that a study lacks power is meaningless without a specification of the alternative hypothesis (the risk ratio the researcher posits as the population parameter), a specified level of alpha (typically 0.05), and a specified probability model. While virtually all studies would have substantial statistical power (say 80 percent) to detect a risk ratio of 10,000, no study would have meaningful power to detect a risk ratio of 1.0001.
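To make the footnote’s arithmetic concrete, here is a minimal sketch of how power depends on the hypothesized alternative. It uses a simplified two-sided Wald test on the log risk ratio; the standard error of 0.5 is an assumed, purely illustrative value, not drawn from any study in the litigation.

```python
from math import log
from statistics import NormalDist

def power_for_risk_ratio(rr_alt, se_log_rr, alpha=0.05):
    """Approximate power of a two-sided Wald test of RR = 1 against a
    true risk ratio of rr_alt, given the standard error of the log risk
    ratio (a simplified normal approximation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, e.g. 1.96
    z_effect = abs(log(rr_alt)) / se_log_rr        # standardized effect size
    return NormalDist().cdf(z_effect - z_crit)

# With an assumed SE of 0.5 on the log scale, an enormous alternative
# (RR = 10,000) is detected essentially always, while a trivial one
# (RR = 1.0001) is essentially undetectable.
print(power_for_risk_ratio(10_000, 0.5))
print(power_for_risk_ratio(1.0001, 0.5))
```

The point of the sketch is only that “power” is always power against a specific alternative: the same study is overwhelmingly powerful against one hypothesized risk ratio and powerless against another.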

[15] Yi Zhongguo, et al. (National Investigative Group for the Survey of Leukemia & Aplastic Anemia), Countrywide Analysis of Risk Factors for Leukemia and Aplastic Anemia, 14 ACTA ACADEMIAE MEDICINAE SINICAE 185 (1992).

[16] Milward, 664 F. Supp. 2d at 148-49.

[17] Harvey M. Golomb, et al., Correlation of Occupation and Karyotype in Adults With Acute Nonlymphocytic Leukemia, 60 BLOOD 404 (1982).

[18] Bo Holmberg, Per Lundberg, Benzene: standards, occurrence, and exposure, 7 AM. J. INDUS. MED. 375 (1985).

[19] Golomb, supra note 17, at 407.

[20] See, e.g., Song-Nian Yin, et al., A cohort study of cancer among benzene-exposed workers in China: overall results, 29 AM. J. INDUS. MED. 227 (1996).

[21] Robert A. Rinsky, et al., Leukemia in Benzene Workers, 2 AM. J. INDUS. MED. 217 (1981); Mary B. Paxton, et al., Leukemia Risk Associated with Benzene Exposure in the Pliofilm Cohort: I. Mortality Update and Exposure Distribution, 14 RISK ANALYSIS 147 (1994); Mary B. Paxton, et al., Leukemia Risk Associated with Benzene Exposure in the Pliofilm Cohort II. Risk Estimates, 14 RISK ANALYSIS 155 (1994).

[22] Lois B. Travis, et al., Hematopoietic Malignancies and Related Disorders Among Benzene-Exposed Workers in China, 14 LEUKEMIA & LYMPHOMA 91, 99 (1994).

[23] See, e.g., Carl F. Cranor, REGULATING TOXIC SUBSTANCES: A PHILOSOPHY OF SCIENCE AND THE LAW at 33-34 (1993) (conflating random error with posterior probabilities: “One can think of α, β (the chances of type I and type II errors, respectively) and 1 − β as measures of the ‘risk of error’ or ‘standards of proof.’”); id. at 44, 47, 55, 72-76.

[24] 664 F. Supp. 2d at 140, 149.

The Fourth Edition’s Chapter on Admissibility of Expert Witness Testimony – Part 2

February 24th, 2026

The Manual’s new law chapter on the admissibility (vel non) of expert witness testimony was written by two law professors who teach evidence, and who often write articles together.[1] Liesa Richter teaches at the University of Oklahoma College of Law. Daniel Capra teaches at Fordham School of Law, in Manhattan. For the last three decades, Capra has been the Reporter for the Judicial Conference Advisory Committee on the Federal Rules of Evidence. There probably is no evidence law scholar more involved with the Federal Rules, including the key expert witness rules, Rule 702 and Rule 703, than Capra.

The new chapter’s strengths follow from Professor Capra’s involvement in the evolution of Rule 702. The chapter plainly acknowledges that the Supreme Court decisions in the 1990s follow from an epistemic standard, and the use of the terms “scientific” and “knowledge” in Rule 702. Counting heads, as suggested by the Frye case, was at times a weak and ambiguous proxy for knowledge.[2] The new chapter has the important advantage of not having authors entwined in the advocacy of dodgy groups such as SKAPP, and the Collegium Ramazzini. Gone from the new chapter are Berger’s gratuitous and unwarranted endorsements and mischaracterizations of carcinogenicity evaluations by the International Agency for Research on Cancer (IARC).

Like Berger’s previous versions of this chapter, the new chapter carefully explains the Supreme Court decisions on expert witness admissibility and the changes in Rule 702 over time, including the 2023 amendment to Rule 702. One glaring omission from the new chapter is the absence of any mention of the fourth Supreme Court case in the 1993-2000 quartet: Weisgram v. Marley.[3] This important opinion by Justice Ginsburg was a clear expression of the seriousness with which the Court took the gatekeeping enterprise:

“Since Daubert, moreover, parties relying on expert testimony have had notice of the exacting standards of reliability such evidence must meet… . It is implausible to suggest, post-Daubert, that parties will initially present less than their best expert evidence in the expectation of a second chance should their first trial fail.”[4]

Professor Berger discussed this case in her last chapter, but the new authors fail to mention it at all.[5]

On the plus side, Richter and Capra discuss, although far too briefly, the role that Federal Rule of Evidence 703 plays in governing expert witness testimony.[6] Rule 703 does not address the admissibility of expert witnesses’ opinions, but it does give trial courts control over the hearsay facts and data (such as published studies), otherwise inadmissible, upon which expert witnesses may rely.[7] Richter and Capra do not, however, come to grips with how Rule 703 will often require trial courts to engage with the validity and flaws of specific studies in order to evaluate the reasonableness of expert witnesses’ reliance upon them. Berger, in the third edition of the Manual, completely failed to address Rule 703 and its important role in gatekeeping.

Richter and Capra helpfully advise judges to be cautious in relying upon pre-2023 amendment cases because that most recent amendment was designed to correct clearly erroneous applications of Rule 702 in both federal trial and appellate courts.[8] The new Manual authors also deserve credit for being willing to call out judges for ignoring the Rule 702 sufficiency prong and for invoking the evasive dodge of many courts in characterizing expert witness challenges as going to “weight not admissibility.”[9]

Richter and Capra improve upon past chapters by reporting that the 2023 amendment to Rule 702 addressed important concerns that courts were failing to keep expert witnesses “within the bounds of what can be concluded from a reliable application of the expert’s basis and methodology,” and that Rule 702(d) was amended to emphasize the courts’ obligation to do so.[10] Berger could have discussed this phenomenon as far back as 2010-11, but failed to do so.

The new authors report that the Rules Advisory Committee had been concerned that expert witnesses regularly overstate or overclaim the appropriate level of certainty for their opinions, especially in the context of forensic science.[11] Although the recognition of problematic overclaiming in forensic science is a welcome development, Richter and Capra fail to recognize that overclaiming was at the heart of the Milward case involving benzene exposure and acute promyelocytic leukemia (APML). And they seem unaware that overclaiming is baked into the precautionary principle that drives IARC pronouncements, the advocacy positions of groups such as the Collegium Ramazzini, and much of regulatory rule-making.

In several respects, Richter and Capra have improved upon the past three editions in presenting the law of expert witness testimony. The new chapter gives a brief exposition of the Joiner case,[12] in which the Court concluded that there was an “analytical gap” between the plaintiffs’ expert witnesses’ conclusion on causation and the animal and human studies upon which they relied. The authors’ summary explains that the Supreme Court majority concluded that the trial court was well within its discretion to find a cavernous analytical gap between the evidence relied upon by the plaintiffs’ expert witnesses and their conclusion that polychlorinated biphenyls (PCBs) caused Mr. Joiner’s lung cancer.

Richter and Capra go sideways in addressing the dissent by Justice Stevens and in giving it uncritical, disproportionate attention. As a dissent that never gained acceptance from any other member of the Court, Justice Stevens’ opinion in Joiner hardly deserved any mention at all. Richter and Capra note, however, early in the chapter, that Justice Stevens criticized the majority in Joiner for having “examined each study relied upon by the plaintiff’s experts in a piecemeal fashion and concluded that the experts’ opinions on causation were unreliable because no one study supported causation.”[13] Stevens’ criticism was wide of the mark in that the Court specifically addressed the “mosaic” theory, which was a reprise of the plaintiffs’ unsuccessful strategy in the Bendectin litigation.[14]

Justice Stevens’ dissent wantonly embraced Joiner’s expert witnesses’ use of a “weight of the evidence” (WOE) methodology. Stevens asserted that WOE is accepted in regulatory circles, which is true but irrelevant, and that it is accepted in scientific circles, which is a gross exaggeration and misrepresentation. Richter and Capra somehow manage to discuss Stevens’ WOE argument twice,[15] thereby giving it undue, uncritical emphasis and appearing to endorse it over the majority opinion, which after all contained the holding of the Joiner case. The authors give credence to the WOE argument in Joiner by suggesting that the majority had not adequately addressed it, and by failing to provide or cite any critical commentary on WOE.

Careful readers will be left wondering why their time is being wasted with the emphasis on a dissent that was never the law, that mischaracterized the majority opinion, that endorsed a method, WOE, that has been widely criticized, and that never persuaded any other justice to join.

The scientific community has never been seriously impressed by the so-called WOE approach to determining causality.  The phrase is vague and ambiguous; its use, inconsistent.[16] Although the phrase, WOE, is thrown around a lot, especially in regulatory contexts, it has no clear, consistent meaning or mode of application.[17]

Many lawyers, like Justice Stevens, Richter, and Capra, may feel comfortable with WOE because the phrase is used often in the law, where the subjectivity, vagueness, lack of structure and hierarchy to the metaphor “weighing” evidence is seen as a virtue that avoids having to worry too much about the evidential soundness of verdicts.[18] The process of science, however, is not like that of a jury’s determination of a fact such as who had the right of way in a car collision case. Not all evidence is the same in science, and a scientific judgment is not acceptable when it hangs on weak evidence and invalid inferences.

The lawsuit industry and its expert witnesses have adopted WOE, much as they have adopted the equally vague term “link,” because of WOE’s permissiveness toward causal inference. WOE frees them from any meaningful methodological constraint, which means that any conclusion is possible, including their preferred conclusion. Under WOE, any conclusion can survive gatekeeping as an opinion. WOE frees the putative expert witness from the need to consider the quality of research. WOE-ful enthusiasts such as Carl Cranor invoke WOE, or seek to inflict WOE, without mentioning the crucial “nuts and bolts” of scientific inference, such as concepts of

  • Internal and external validity
  • A hierarchy of evidence
  • Assessment of random error
  • Assessment of known and residual confounding
  • Known and potential threats to validity
  • Pre-specification of end points and statistical analyses
  • Pre-specification of weights to be assigned, and inclusionary and exclusionary criteria for studies
  • Appropriate synthesis across studies, such as systematic review and meta-analysis

These important concepts are lost in the miasma of WOE.

If Richter and Capra had wished to take a deeper dive into the Joiner case, rather than elevate the rank speculation of the lone dissenter, Justice Stevens, they might have asked whether Joiner’s expert witnesses relied upon all, or the most carefully conducted, epidemiologic studies.

As the record was fashioned, the Supreme Court’s discussion of the plaintiffs’ expert witnesses’ methodological excesses and failures did not include a discussion of why the excluded witnesses had failed to rely upon all the available epidemiology. The challenged witnesses relied upon an unpublished Monsanto study, but apparently ignored an unpublished investigation by NIOSH government researchers, who found that there were “no excess deaths from cancers of the … the lung,” among PCB-exposed workers at a Westinghouse Electric manufacturing facility. Actually, the NIOSH report indicated a statistically non-significant decrease in the lung cancer rate among PCB-exposed workers, with a fairly narrow confidence interval: SMR = 0.7 (95% CI, 0.4 – 1.2).[19] By the time the Joiner case was litigated, this unpublished NIOSH report had been published, and was unjustifiably ignored by Joiner’s expert witnesses.[20] Twenty years after Joiner was decided in the Supreme Court, NIOSH scientists published updated data from this cohort, which showed that the long-term lung cancer mortality for PCB-exposed workers remained reduced, with a standardized mortality ratio of 0.88 (95% C.I., 0.7–1.1) for the cohort, and even lower for the workers with the highest levels of exposure, 0.82 (95% C.I., 0.5–1.3).[21]
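For readers unfamiliar with SMR arithmetic, the computation behind figures like “SMR = 0.7 (95% CI, 0.4 – 1.2)” can be sketched with Byar’s approximation to the exact Poisson confidence interval. The observed and expected counts below are hypothetical, chosen only to illustrate the calculation; they are not the actual NIOSH figures. An interval that spans 1.0, as here, means the mortality deficit is not statistically significant.

```python
from math import sqrt
from statistics import NormalDist

def smr_with_byar_ci(observed, expected, alpha=0.05):
    """Standardized mortality ratio (observed / expected deaths) with
    Byar's approximation to the exact Poisson confidence interval."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    smr = observed / expected
    # Byar's approximation: lower limit uses O, upper limit uses O + 1
    lo = observed * (1 - 1/(9*observed) - z/(3*sqrt(observed)))**3 / expected
    o1 = observed + 1
    hi = o1 * (1 - 1/(9*o1) + z/(3*sqrt(o1)))**3 / expected
    return smr, lo, hi

# Hypothetical counts: 10 observed lung cancer deaths vs. 14.3 expected
smr, lo, hi = smr_with_byar_ci(10, 14.3)
print(f"SMR = {smr:.2f} (95% CI, {lo:.2f} - {hi:.2f})")
```

The design choice worth noting is that the interval width is driven almost entirely by the observed count: small cohorts yield wide intervals, which is why the text remarks on the NIOSH interval being “fairly narrow” for a study of this kind.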

At the time the Joiner case was on its way up to the Supreme Court, two Swedish studies were available, but they were perhaps too small to add much to the mix of evidence.[22] Another study, published in 1987 and not cited by Joiner’s expert witnesses, was conducted in a cohort of North American PCB-exposed capacitor workers, and showed less than expected mortality from lung cancer.[23] Joiner thus represents not only an analytical-gap case, but also a cherry-picking case. The Supreme Court was eminently correct to affirm the exclusion of the shoddy evidence proffered in the Joiner case.

Thirty years after the Supreme Court decided Joiner, the claim that PCBs cause lung cancer in humans remains unsubstantiated. Subsequent studies bore out the point that Joiner’s expert witnesses were using an improper, unsafe methodology and invalid inferences to advance a specious claim.[24] In 2015, researchers published a large, updated cohort study, funded by General Electric, on the mortality experience of workers in a plant that manufactured capacitors with PCBs. The study design was much stronger than anything relied upon by Joiner’s expert witnesses, and its results are consistent with the NIOSH study available to, but ignored by, them. The results are not uniformly good for General Electric, but on the end point of lung cancer for men, the standardized mortality ratio was 81 (95% C.I., 68 – 96), nominally statistically significantly below the expected SMR of 100.[25]

There is also the legal aftermath of Joiner, in which the Supreme Court reversed and remanded the case to the 11th Circuit, which in turn remanded the case back to the district court to address claims that Mr. Joiner had also been exposed to furans and dioxins, and that these other chemicals had caused, or contributed to, his lung cancer, as well.[26] 

Thus the dioxins were left in the case even after the Supreme Court ruled on admissibility of expert witnesses’ opinions on PCBs and lung cancer. Anthony Roisman, a lawyer with the plaintiff-side National Legal Scholars Law Firm, P.C., argued that the Court had addressed an artificial question when asked about PCBs alone because the case was really about an alleged mixture of exposures, and he held out hope that the Joiners would do better on remand.[27]

Alas, the Joiner case evaporated in the district court. In February 1998, Judge Orinda Evans, who had been the original trial judge, and who had sustained defendants’ Rule 702 challenges and granted their motions for summary judgments, received and reopened the case upon remand from the 11th Circuit. Judge Evans set a deadline for a pre-trial order, and then extended the deadline at plaintiff’s request. After Joiner’s lawyers withdrew, and then their replacements withdrew, the parties ultimately stipulated to the dismissal of the case with prejudice, in February 1999. The case had run its course, and so had the claim that dioxins were responsible for plaintiff’s lung cancer.

In 2006, the National Research Council published a monograph on dioxin, which took the controversial approach of focusing on all cancer mortality rather than specific cancers that had been suggested as likely outcomes of interest.[28] The validity of this approach, and the committee’s conclusions, were challenged vigorously in subsequent publications.[29] In 2013, the Industrial Injuries Advisory Council (IIAC), an independent scientific advisory body in the United Kingdom, published a review of lung cancer and dioxin. The Council found the epidemiologic studies mixed, and declined to endorse the compensability of lung cancer for dioxin-exposed industrial workers.[30]

When Justice Stevens dissented in Joiner in 1997, and over the course of the three decades since, his assessment of science, scientific methodology, and law has been wrong. His viewpoints never gained acceptance from any other justice of the Supreme Court. Richter and Capra, in writing the first chapter of the new Reference Manual, lead judges and lawyers astray by improvidently elevating the dissent, as though it were law, and by failing to provide sufficient context, analysis, and criticism.

(To be continued.)


[1] Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 1 (4th ed. 2025).

[2] Id. at 6.

[3] 528 U.S. 440 (2000).

[4] 528 U.S. at 445 (internal citations omitted).

[5] Margaret A. Berger, The Admissibility of Expert Testimony, in National Academies of Sciences, Engineering and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 11, 18-19 (3rd 2011).

[6] Richter & Capra at 17.

[7] See Nathan A. Schachtman, Rule of Evidence 703—Problem Child of Article VII, PROOF 3 (Spring 2009).

[8] Id. at 13.

[9] Id. at 16.

[10] Id. at 22-23.

[11] Id. at 23, 39.

[12] General Electric Co. v. Joiner, 522 U.S. 136 (1997).

[13] Richter & Capra at 10 (citing General Electric Co. v. Joiner, 522 U.S. 136, 150-155 (1997) (Stevens, J.)).

[14] Joiner, 522 U.S. at 147-48.

[15] Richter & Capra at 10, 31.

[16] See, e.g., V. H. Dale, G.R. Biddinger, M.C. Newman, J.T. Oris, G.W. Suter II, T. Thompson, et al., Enhancing the ecological risk assessment process, 4 INTEGRATED ENVT’L ASSESS. MANAGEMENT 306 (2008) (“An approach to interpreting lines of evidence and weight of evidence is critically needed for complex assessments, and it would be useful to develop case studies and/or standards of practice for interpreting lines of evidence.”); Igor Linkov, Drew Loney, Susan M. Cormier, F. Kyle Satterstrom & Todd Bridges, Weight-of-evidence evaluation in environmental assessment: review of qualitative and quantitative approaches, 407 SCI. TOTAL ENV’T 5199–205 (2009); Douglas L. Weed, Weight of Evidence: A Review of Concept and Methods, 25 RISK ANALYSIS 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of WOE review); R.G. Stahl Jr., Issues addressed and unaddressed in EPA’s ecological risk guidelines, 17 RISK POLICY REPORT 35 (1998) (noting that U.S. EPA’s guidelines for ecological WOE approaches to risk assessment fail to provide meaningful guidance); Glenn W. Suter & Susan M. Cormier, Why and how to combine evidence in environmental assessments: Weighing evidence and building cases, 409 SCI. TOTAL ENV’T 1406, 1406 (2011) (noting arbitrariness and subjectivity of WOE “methodology”).

[17] See Charles Menzie, et al., A weight-of-evidence approach for evaluating ecological risks; report of the Massachusetts Weight-of-Evidence Work Group, 2 HUMAN ECOL. RISK ASSESS. 277, 279 (1996) (“although the term ‘weight of evidence’ is used frequently in ecological risk assessment, there is no consensus on its definition or how it should be applied”); Sheldon Krimsky, The weight of scientific evidence in policy and law, 95 AM. J. PUB. HEALTH S129 (2005) (“However, the term [WOE] is applied quite liberally in the regulatory literature, the methodology behind it is rarely explicated.”).

[18] See, e.g., People v. Collier, 146 A.D.3d 1146, 1147-48, 2017 NY Slip Op 00342 (N.Y. App. Div. 3d Dep’t, Jan. 19, 2017) (rejecting appeal based upon defendant’s claim that conviction was against “weight of the evidence”); Venson v. Altamirano, 749 F.3d 641, 656 (7th Cir. 2014) (noting “new trial is appropriate if the jury’s verdict is against the manifest weight of the evidence”).

[19] Thomas Sinks, et al., Health Hazard Evaluation Report, HETA 89-116-209 (Jan. 1991).

[20] Thomas Sinks, et al., Mortality among workers exposed to polychlorinated biphenyls, 136 AM. J. EPIDEMIOL. 389 (1992).

[21] Avima M. Ruder, et al., Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update, 114 ENVT’L HEALTH PERSP. 18, 21 (2006).

[22] P. Gustavsson, et al., Short-term mortality and cancer incidence in capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs), 10 AM. J. INDUS. MED. 341 (1986); P. Gustavsson & C. Hogstedt, A cohort study of Swedish capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs), 32 AM. J. INDUS. MED. 234 (1997) (cancer incidence for entire cohort, SIR = 86; 95% CI, 51-137).

[23] David P. Brown, Mortality of workers exposed to polychlorinated biphenyls–an update, 42 ARCH. ENVT’L HEALTH 333, 336 (1987).

[24] See Mary M. Prince, et al., Mortality and exposure response among 14,458 electrical capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs), 114 ENVT’L HEALTH PERSP. 1508, 1511 (2006) (reporting a nominally statistically significant decreased mortality ratio of 0.78, 95% C.I. 0.65–0.93, for men exposed to PCBs); Avima M. Ruder, Mortality among 24,865 workers exposed to polychlorinated biphenyls (PCBs) in three electrical capacitor manufacturing plants: a ten-year update, 217 INT’L J. HYG. & ENVT’L HEALTH 176, 181 (2014) (reporting no increase in the lung cancer standardized mortality ratio for long-term workers, 0.99, 95% C.I., 0.91–1.07).

[25] Renate D. Kimbrough, et al., Mortality among capacitor workers exposed to polychlorinated biphenyls (PCBs), a long-term update, 88 INT’L ARCH. OCCUP. & ENVT’L HEALTH 85 (2015).

[26] Joiner v. General Electric Co., 134 F.3d 1457 (11th Cir. 1998) (per curiam).

[27] Anthony Z. Roisman, The Implications of G.E. v. Joiner for Admissibility of Expert Testimony, 65 VT. J. ENVT’L L. 1 (1998).

[28] See David L. Eaton (Chairperson), HEALTH RISKS FROM DIOXIN AND RELATED COMPOUNDS – EVALUATION OF THE EPA REASSESSMENT (2006).

[29] Paolo Boffetta, et al., TCDD and cancer: A critical review of epidemiologic studies, 41 CRIT. REV. TOXICOL. 622 (2011) (“In conclusion, recent epidemiological evidence falls far short of conclusively demonstrating a causal link between TCDD exposure and cancer risk in humans.”).

[30] Industrial Injuries Advisory Council – Information Note on Lung cancer and Dioxin (December 2013). See also Mann v. CSX Transp., Inc., 2009 WL 3766056, 2009 U.S. Dist. LEXIS 106433 (N.D. Ohio 2009) (Polster, J.) (dioxin exposure case) (“Plaintiffs’ medical expert, Dr. James Kornberg, has opined that numerous organizations have classified dioxins as a known human carcinogen. However, it is not appropriate for one set of experts to bring the conclusions of another set of experts into the courtroom and then testify merely that they ‘agree’ with that conclusion.”), citing Thorndike v. DaimlerChrysler Corp., 266 F. Supp. 2d 172 (D. Me. 2003) (court excluded expert who was “parroting” other experts’ conclusions).

IARC’s Industry Sniffing Bots Are Coming for You

October 8th, 2025

“Hey, hey, you, you, get off of my cloud.”   …. Jagger & Richards

For the last 50 years, critics, cranks, and anti-industry zealots have argued that industry-sponsored science is vitiated by conflicts of interest. What started as the whining of scientists who were regulatory “political scientists,” and adjuncts to plaintiffs’ law firms, has become a major movement. The rise of post-modernism in philosophy has supported the rejection of robust debate over scientific assessments of causation, on the ground that all such judgments are politically and socially determined. Evidence is dismissed as mere casuistry, at least when offered by those with whom we disagree.

The anti-industry bias has had demonstrably bad consequences in distorting scientific judgment. Over 30 years ago, a science journalist published a story in the Journal of the National Cancer Institute about how dire predictions of asbestos mortality never came to pass.[1] In investigating the failure of these predictions, the journalist concluded that they had been the product of exaggerations by government scientists who suffered from a form of “white-hat” bias:

“the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds. ‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement’…. ‘At the time it looked like you were wearing a white hat if you made these wild estimates. But I wasn’t sure whoever did that was doing all that much good.’”[2]

The existence of “white-hat” bias is perhaps the most benign explanation for the propagation of badly done science. The deployment of political correctness on issues that really depend upon scientific method, data, and inference for their resolution should not, however, be seen as particularly benign.

In 2010, over a decade after the description of white-hat bias in the JNCI, two public health researchers, Mark B. Cope and David B. Allison, described white-hat bias as a prevalent cognitive error in how research is reported and interpreted.[3] They described it as a “bias leading to the distortion of information in the service of what may be perceived to be righteous ends.” Perhaps the temptation to overstate the evidence against a toxic substance is unavoidable, but it diminishes the authority and credibility of regulators entrusted with promulgating and enforcing protective measures. And error is still error, regardless of its origins and motivations.

Allison and Cope gave examples of white-hat bias in how papers are cited, with “exonerative” studies cited less often than those that claim harmful outcomes. And when positive papers were cited, they were often interpreted misleadingly to overstate the harms previously reported.

The principle of charity suggests white-hat bias should be considered for much of the anti-industry prejudices exhibited by public health scientists. The persistence, virulence, and irrationality of many instances of prejudiced judgments, however, make the charitable explanation implausible.

Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), provided a more insightful explanation of the anti-industry bias, describing it as the “new McCarthyism in science.”[4] Rothman identified an anti-manufacturing-industry bias manifesting as intolerance toward industry-sponsored studies, and strict scrutiny of “conflict-of-interest” (COI) disclosures. The McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying the non-disclosure of COIs by scientists aligned with advocacy groups or the lawsuit industry, or by those with positional COIs.

The quaint notion that “an opinion should be evaluated on the basis of its contents, not on the interests or credentials of the individuals who hold it,” has been generally banished.[5] The offense to honest scientific inquiry receives little attention,[6] but the sanctimonious deployment of COI claims allows scientists to over-indulge in poor quality research by claiming that they have extirpated industry influence.

In 1995, anti-tobacco historian and expert witness, Robert Proctor, coined the term agnotology from the Greek ágnosis (“not knowing”) and -logia (study of).[7] Agnotology is now a specialty of scientist-advocates and expert witnesses for the lawsuit industry; it has been the subject of numerous and repetitive books,[8] too many articles to cite, and even doctoral dissertations.[9]

The anti-manufacturing industry jihad is little more than defamation against every scientist or citizen who has called for evidence-based regulation and law in dealing with scientific issues. The movement would deprive legislators, regulators, and juries of important, relevant scientific evidence based upon a smear.

What is truly fascinating, however, is the hypocrisy built into the anti-industry COI movement. There is another industry that is protected from criticism – the lawsuit industry. The lawsuit industry has grown up parasitically around the tort system; it now includes not only the law firms that service claimants, but also their retinue of expert witnesses, their litigation funders, and even investment firms that collude with hit-piece journalists in “distort and short” schemes of trading in the securities of their targets.

The critics of research done or funded by manufacturing industry argue that industry studies disproportionately report outcomes favorable to their sponsors. The implied potential conflicts posed by industry-sponsored research studies are fairly obvious. Industries that make or sell products, raw materials, or chemicals have an interest in having toxicological and epidemiologic studies support claims of safety.  Research that suggests an industry’s product causes harm may hurt the industry’s financial interests directly by inhibiting sales, or indirectly by undermining the industry’s position in litigation, or by leading to greater regulatory scrutiny and control. Indirect harms may result from heightened warnings or instructions, which may limit sales or encourage sales of competing, less hazardous products. If the harm evidenced by the research is sufficiently severe, the research may lead to product recall or bans, again with serious economic consequences for the industry. 

The lawsuit industry has conflicts of interest that mirror those of manufacturing industry.[10] Manufacturing evidence and conclusions of harm is good for the lawsuit industry, and provides rich sources of revenue for its go-to expert witnesses. There are also ideological interests that motivate many players in the lawsuit industry. Lawsuit industry COIs are frequently ignored or down-played, even though the research funded, sponsored, or written by its members has a strange propensity to support claims made in court and in agencies.

The International Agency for Research on Cancer (IARC) has become ground zero for hypocritical exorcisms of COIs. In 2018, several authors wrote a commentary in which they declared that IARC and its cancer hazard evaluations were under attack from those with “economic interests” (manufacturing firms or their consultants).[11] Several of the commentary’s authors, Peter F. Infante, Ronald Melnick, and James Huff, were full-fledged members of the lawsuit industry, with consulting firms that work to help claimants in tort litigation. The authors’ own COIs, however, did not inhibit them from declaring that only “scientific experts who do not have conflicts of interest” should be allowed to criticize IARC pronouncements. Three of the four authors of this hit piece (Infante, Melnick, and Huff) identified themselves as having consulting firms, but only James Huff disclosed that he had “been retained as expert consultant on long-term animal bioassays of glyphosate in litigation for plaintiffs.” Infante and Melnick gave no disclosure, although they have been far more than consultants; they have appeared in testimonial roles for tort claimants. To top off the hypocrisy, the journal’s editor, Steven B. Markowitz, felt compelled to declare that he had “no conflict of interest in the review and publication decision regarding this article.” Markowitz is a not infrequent testifying expert witness for the lawsuit industry.[12] It is a safe bet that the great majority of the studies authored by Infante, Melnick, Huff, and Markowitz claim or suggest harms from chemical exposures.

It seems rudimentary that scientific research should be evaluated on the merits of studies, methods, data, and inference, and not the source of the funding. We are, however, deep into the post-modern world that regards science as a way of exercising political power and social control, and not a search for the truth. Given our Zeitgeist, no one should be surprised that an IARC official has just come out with a paper that attempted to deploy a large-language model (LLM) to identify possible industry influence, down to parts per trillion or whatever the level of detection may be.

Last month, Mary K. Schubauer-Berigan, the head of the Evidence Synthesis and Classification Branch of IARC, along with several other scientists, published a paper that proposed the use of an LLM to identify industry influence.[13] Schubauer-Berigan is an occupational epidemiologist, but she is also an amateur agnotologist. The first sentence of her article really tells all you need to know: “Industry-funded research poses a threat to the validity of scientific inference on carcinogenic hazards.” The authors claim that their LLM can help assess bias from industry studies in evidence synthesis and identify “industry influence” on scientific inference. These authors reflect the IARC dogma that only manufacturing industry has COIs of concern. Lawsuit industry influence is never mentioned.

The authors applied their LLM to identify industry relationships among authors of review articles on issues related to three specific IARC hazard classifications (benzene, cobalt, and aspartame). The search apparently included direct funding for studies of the agent under consideration, as well as whether studies or reviews had an industrial sponsor or a trade association, whether they used data provided by an industry source, or whether authors were paid consulting fees or provided expert testimony. The authors’ algorithm did not include whether spouses, children, parents, good friends, professional colleagues, or mentors ever had some dalliance with manufacturing industry.

IARC’s LLM was never let loose in search of lawsuit industry connections. Are you surprised?


[1] Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Instit. 560 (1992).

[2] Id. at 562. 

[3] Mark B. Cope and David B. Allison, “White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting,” 34 Internat’l J. Obesity 84 (2010).

[4] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. Am. Med. Ass’n 2782 (1993). See Schachtman, “The Rhetoric and Challenge of Conflicts of Interest,” Tortini (July 30, 2013).

[5] Brian MacMahon, “Epidemiology:  another perspective,” 37 Internat’l J. Epidem. 1192, 1192 (2008).

[6] See Thomas P. Stossel, “Has the hunt for conflicts of interest gone too far?” 336 Brit. Med. J. 476 (2008); Kenneth J. Rothman & S. Evans, “Extra scrutiny for industry funded trials: JAMA’s demand for an additional hurdle is unfair – and absurd,” 331 Brit. Med. J. 1350 (2005) & 332 Brit. Med. J. 151 (2006) (erratum).

[7] Robert Proctor, The Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer 8 (1995).

[8] See, e.g., David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020); David Michaels, Doubt Is Their Product: How Industry’s Assault on Science Threatens Your Health (2008); Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (2010); Robert N. Proctor & Londa Schiebinger, eds., Agnotology: The Making and Unmaking of Ignorance (2008); Janet Kourany & Martin Carrier, eds., Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted (2020); Blake D. Scott, “Agnotology and Argumentation: A Rhetorical Taxonomy of Not-Knowing,” OSSA Conference Archive 133 (2016).

[9] Craig Alex Biegel, Manufactured Science, the Attorneys’ Handmaiden: The Influence of Lawyers in Toxc [sic] Substance Disease Research, Dissertation for Florida State University (2016).

[10] See Laurence J. Hirsch, “Conflicts of Interest, Authorship, and Disclosures in Industry-Related Scientific Publications: The Tort Bar and Editorial Oversight of Medical Journals,” 84 Mayo Clin. Proc. 811 (2009).

[11] Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am J. Indus. Med. 277 (2018).

[12] See In re Joint Eastern & Southern District Asbestos Litig., 758 F.Supp. 199 (S.D.N.Y. 1991); Juni v. A.O. Smith Water Prods. Co., 32 N.Y.3d 1116, 116 N.E.3d 75 (2018); Konstantin v. 630 Third Avenue Assocs., N.Y.S.Ct. (N.Y. Cty.) No. 190134/2010 (jury verdict returned Aug. 17, 2011); Koeberle v. John Crane, Inc., Phila. Cty. Ct. C.P. No. 000887 (jury verdict returned Feb. 2010).

[13] Nathan L. DeBono, Vanessa Amar, Hardy Hardy, Mary K. Schubauer-Berigan, Derek Ruths & Nicholas B. King, “A large language model-based tool for identifying relationships to industry in research on the carcinogenicity of benzene, cobalt, and aspartame,” 24 Envt’l Health 64 (2025).

Lack of Trust in Science – The Situation Our Situation Is In

August 29th, 2025

The United States is in political crisis as its citizens are frogmarched into an authoritarian, illiberal, and unlawful dystopia. The seriousness of the political situation makes it difficult to focus on scientific issues, but as with past fascist regimes in history, the crisis is not limited to any one sphere of life in the United States.

Scholars of fascism have pointed out that not all fascist regimes are the same, but there are some key features that give them all a family resemblance. In the political realm, fascist leaders point to an idyllic history, however mythical or false, in which the country was once great. The greatness has been eroded and squandered by the country’s enemies, internal and external. Confronting enemies within and without is an emergency, which cannot be addressed within the rule of law. Only an authoritarian leader can fix it by suspending the rule of law.

Fascism does not operate solely in the political sphere, but insists upon ideological purity in art, culture, education, business, finance, military, law, and science.[1]

Yes, even science. Nazi Germany had its bogus science of racial purity. The Soviet Union had its Lysenkoism. Theocratic fascist regimes, such as Iran or the United States, have their “god talk” and blasphemy squads, which suppress scientific curiosity, experimentation, and development, except for the creation of weapons (where replicability, validity, and predictive accuracy really matter).

There are various reasons for Felonious Trump’s election, but the epistemic sin of credulousness of the American people is certainly one of them. We are living in Orwell’s 1984 world where many people have been tethered to TV screens to receive their daily influx of state-approved propaganda. Character for truth has ceased to be a virtue. “And even truth can become a lie in the mouth of a born liar.”[2]

The credulity of the American people has manifested as distrust in scientific expertise and a willingness to believe charlatans such as Robert Kennedy, Jr. The phenomenon of transferring trust from legitimate scientists to charlatans is probably one of the clearest and strongest symptoms of our current malaise.

Professor Arthur L. Caplan[3] is a scientist and medical ethicist who has never been shy about asking discomforting questions. Not surprisingly, Caplan has spoken out against some of the bone-headed anti-science actions of the present regime in Washington.[4]

In an essay entitled “How Stupid has Science Been?” Caplan asks:

“So how can U.S. President Trump, Secretary of Health Robert F. Kennedy, Jr., or Director of the Centers for Medicare and Medicaid Mehmet Oz and their enthusiastic followers be succeeding in defunding research and installing ideological oversight and censorship that is crushing science, technology and engineering and will for many years to come?”[5]

Caplan blames the scientific community itself, in part, for the current crisis, for disparaging and discouraging scientists from engaging with the public. Obviously, Caplan is not thinking of the cadre of scientists who seek phony validation by becoming highly paid expert witnesses for the lawsuit industry. Nor is he thinking of the dodgy TV doctors such as Dr. Oz. Caplan’s focus is on the harm done to the careers of accomplished scientists, such as the late Carl Sagan, who was denied tenure at Harvard University and membership in the National Academy of Sciences because his popularizing efforts eclipsed his substantial scientific accomplishments. Caplan thus blames the American scientific establishment itself for having “disparaged its public communication as unnecessary and looked down on those few who tried to educate broader audiences about the wonders, benefits, methods and advancements of science.”

Professor Caplan argues that in popularizing scientific ideas, theories, and methods, scientists – such as the late Carl Sagan – undermined their own careers. The result is that high-achieving scientists ignored the public square and retreated into their own scientific community’s ivory tower. Caplan’s critique of the detachment of the scientific community could well be extended to its frequent failures to speak out against charlatans in its own midst, and against politicians who distort and misrepresent scientific research in the public arena.

Caplan is, however, very clear that the scientific community’s insularity, and its “resulting failure to communicate about science to the public is a major factor in explaining why so few have rallied to science’s defense today against government policies promoting ignorance, illiteracy and quackery.” Indeed, at this point it is also clear that frank communication about the government’s promotion of scientific quackery will be punished by the Regime’s cancellation of grants, firings from advisory councils, and retaliation against scientists’ universities.

I take Caplan’s critique to be an invitation to engage in counter-factual thinking about what our current situation might look like if scientists had robustly “occupied the field” of communication and education of the public. Citing a recent article in a Nature journal,[6] Caplan observes that populists and right-wing thinkers have been losing faith in science for years. This diagnosis, however, is not quite accurate. Populists, left and right, have succumbed to motivated reasoning in learning to ignore scientific conclusions, regardless of validity concerns, on emotive or political grounds. This mode of (non)-thinking allows populists, left and right, to subscribe to putative scientific claims without any appreciation of the nuances of scientific inference and threats to validity.

Caplan is right to call out the right-wing attack on science, but some of the attack on science is coming from left-wing populists, such as the worm-brained Robert F. Kennedy, Jr. And historically, there have been many instances in which environmental and occupational health advocates have outrun their headlights to press claims based upon hypothetical models and unvalidated assumptions.

All people, whether they hang politically left or right, are vulnerable to the emperor of all cognitive biases – apophenia, the psychological tendency to discern causal patterns in random noise. Although apophenia was originally thought of as an abnormal psychological process,[7] the phenomenon is common to “normal” as well as mentally ill persons.[8]

Many people, left and right, are willing to endorse, or subscribe to, pseudo-scientific claims based upon their motivation to accept them, without regard to the methods used to support them. Professor Caplan is correct that serious scientists have been too shy to step into the public square, and the scientific community should encourage, not punish, engagement with the public. (Caplan passes over the problem of how university publicists often misrepresent and exaggerate the findings and research of university scientists.)

The problem of lack of trust in science, however, is a much bigger problem. On average, American education and acumen in math and science lags that of many countries in the world,[9] even as post-secondary education in the United States excels and attracts many of the best and the brightest domestically and internationally. Immigrants have helped American universities keep their leadership role in the world, despite shortfalls in domestic funding of primary and secondary science education. Of course, this international leadership in science and math university education, gained with the help of immigrants, is now under attack from the MAGAT regime.[10]

No one is eager to blame those who evidence their lack of trust in science, and to be sure, there is plenty of blame to go around. There are multiple systemic causes of poor quality science and improvident claims to scientific knowledge.[11] In assessing the causes of the prevalent distrust in science, we should not lose sight of the responsibility of those who claim that scientists cannot be trusted. There is at bottom a widespread moral failure in the land.  “It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”[12]

доверяй, но проверяй! (Trust, but verify!)


[1] Zachary Basu, “Trump knee-caps America’s institutions,” Axios (Aug. 27, 2025); Elisabeth Zerofsky, “Robert Paxton, A Leading Historian Of Fascism, Long Resisted Applying The Label To Trumpism. Then He Changed His Mind,” N.Y. Times Mag. 45 (Oct. 27, 2024).

[2] Thomas Mann, “The Problem of Freedom: An Address to the Undergraduates and Faculty of Rutgers University at Convocation,” (April 28, 1939).

[3] Arthur L. Caplan, Ph.D., is the Drs. William F. and Virginia Connolly Mitty Professor of Bioethics and the founder of the Division of Medical Ethics in the Department of Population Health at NYU Grossman School of Medicine, in New York City. I had the pleasure to meet Professor Caplan, and present to one of his classes, back when he taught at the University of Pennsylvania.

[4] See, e.g., Arthur L. Caplan, “Fed Action Toward Medical Journals Is ‘Dangerous’, Ethicist Says,” Medscape (Aug. 26, 2025).

[5] Arthur L. Caplan, “How Stupid has Science Been?” EMBO Reports (Aug. 2025).

[6] Vukašin Gligorić, Gerben A. van Kleef & Bastiaan T. Rutjens, “Political ideology and trust in scientists in the USA,” 9 Nature Human Behaviour 1501 (2025) (“Since the 1980s, trust of science among conservatives in America has been plummeting”).

[7] See Aaron L. Mishara, “Klaus Conrad (1905–1961): Delusional Mood, Psychosis, and Beginning Schizophrenia,” 36 Schizophr. Bull. 9 (2009); Scott D. Blain, Julia M. Longenecker, Rachael G. Grazioplene, Bonnie Klimes-Dougan & Colin G. DeYoung, “Apophenia as the disposition to false positives: A unifying framework for openness and psychoticism,” 129 J. Abnormal Psych. 279 (2020).

[8] Donna L. Roberts, “Apophenia: The Human Tendency to Find Patterns in Randomness,” Medium (Jan. 9, 2024); Ahmed S. Sultan & Maryam Jessri, “Pathology is Always Around Us: Apophenia in Pathology, a Remarkable Unreported Phenomenon,” 7 Diseases 54 (2019).

[9] Drew DeSilver, “U.S. students’ academic achievement still lags that of their peers in many other countries,” Pew Research Center (Feb. 15, 2017).

[10] Is it not high time that we call the movement by its essential motivation: make American great again for the Trumps?

[11] See, e.g., Lex Bouter, Mai Har Sham & Sabine Kleinert, “The Lancet–World Conferences on Research Integrity Foundation Commission on Research Integrity,” 406 The Lancet 896 (2025).

[12] William K. Clifford, “The Ethics of Belief,” 29 Contemporary Rev. 289, 295 (1877).

Can Lawyers Sink Lower Than Plagiarizing a Bot?

May 16th, 2025

Several years ago, I submitted a brief, which I had written, in a New York case. When a co-defendant’s counsel filed the same brief, without acknowledging that it was plagiarized, I was annoyed. It seemed to me that such plagiarism clearly has professional and general ethical implications, especially if the plagiarists charged clients for writing something that they stole from another person.[1]

Legal culture does to some extent encourage plagiarism. Law students work as research assistants for professors and often write segments of legal treatises and hornbooks that are published under their professors’ names. Presumably the law students are satisfied that their work was used and that they received strong recommendations in future job searches. When bright young law school graduates accept judicial clerkships, they understand that their writing for draft opinions or memoranda will be “cannibalized” by their judges at will. Similarly, young law firm associates know that much of their writing may be used in briefs without attribution or signature lines on the brief. This practice goes too far, in my view, when partners require associates to draft articles for publication without making them authors, and with at most an anemic note of gratitude for “assistance.”

When courts use lawyers’ arguments and their actual language advanced in briefs, lawyers rarely complain. At least there are no complaints from the plagiarized lawyers who prevailed.

Sometimes the plagiarized argument reveals the source of a court’s error. In an opinion issued over Justice Sotomayor’s name, the Supreme Court adopted wholesale an argument advanced by the Solicitor General. The origin of the argument was unmistakable because the claim was so egregiously wrong. In its amicus brief, the Solicitor General argued that statistical significance was unnecessary for reaching a conclusion of causation between the use of Zicam and anosmia. In support of its argument, the Solicitor General’s amicus brief cited three cases: Best, Westberry, and Ferebee.[2] The three cited cases all involved disputes over specific causation, whereas the case sub judice purported to involve an issue of general causation. (The Court correctly decided that the corporate disclosure issue under the securities laws did not actually require general causation.) Nevertheless, the Solicitor General’s sloppy legal scholarship was incorporated into the Supreme Court’s opinion. Justice Sotomayor repeated the citation to the first two cases, dropped the reference to Ferebee, but curiously added an even more bizarre third case to the argument by citing the infamous Wells case.[3] Remarkably, as notorious and poorly reasoned as the Wells case was,[4] it involved plaintiffs’ expert witnesses’ reliance upon at least one poorly conducted study that reported nominal statistical significance. And thus the Supreme Court produced the following text, with three inapposite cases:

“We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir. 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”[5]

I suspect that a law clerk acted as an intermediary, or a vector, in the plagiarism by incorporating the Solicitor General’s argument into a draft or a memorandum, which became part of an opinion that none of the justices considered carefully. At least the three cases cited by the Solicitor General and the cases cited by Justice Sotomayor actually existed.

The introduction of large language models in artificial intelligence (A.I.) has taken plagiarism to a new level.[6] Lawyers can now prompt a machine to review the case law on a specified issue and to create an output of analysis in favor of an identified litigation position. And then the lawyers can submit the output to a court as their own work, and charge their clients for the brief writing. The submission of a brief or memorandum to a court, however, implies that the facts are supported, and that the legal precedents cited exist and are pertinent authorities for the court to consider.

The problem with the use of AI, however, is that AI “hallucinates” non-existent cases, all with citations, case specifics, and sometimes fanciful quotes that seem pertinent. It sucks to be a bot, but it is even worse for lawyers who take AI output, and present it as their own, without further research, in hopes of persuading a court.

Last week, Special Master Magistrate Judge Michael R. Wilner (retired) imposed a monetary sanction of $31,100 against the law firm K&L Gates for failing to fact-check a brief prepared in substantial part by AI, as well as for failing to disclose its use of AI, or to correct errors after being notified of the Special Master’s concerns.[7] The Special Master noted that “no reasonably competent attorney should out-source research and writing to this technology – particularly without any attempt to verify the accuracy of that material.”[8] Ouch. K&L Gates is not the first firm, and it sadly will not be the last, to be sanctioned for the improvident use of AI and the submission of fraudulent authorities to a court.[9]

The Special Master’s outrage was generated by his discovery that one third of the legal citations were incorrect, and that two of them were to non-existent cases. Several quotations from cases presented in support of the plaintiffs’ argument were entirely bogus.[10] Although the Special Master freely described the lawyers’ actions as “misconduct,” he concluded that disciplinary sanctions against the lawyers involved were unwarranted.[11]

The Special Master’s opinion elided the ethical significance of the plaintiffs’ lawyers’ conduct. The opinion never mentions the Rules of Professional Responsibility; nor does it suggest that a referral to the California State Bar’s Office of Chief Trial Counsel was in order. Plagiarism is, after all, research misconduct in most other professional domains. Under the regulations of the Office of Research Integrity (ORI), plagiarism is “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.”[12] An AI model may not be eligible for copyright, or the moral rights of authors, but the spirit of the prohibition against plagiarism would seem to require disclosure and credit to AI, however bogus AI’s contribution may have been.

Although I tend to think of plagiarism in briefs and other court submissions as a violation of a lawyer’s professional responsibility, the issue actually is not clear cut. Plagiarism would seem to violate core professional responsibilities of honesty and integrity. In disparaging plagiarism, courts have occasionally invoked Model Rule 8.4(c), which prohibits engaging in “dishonesty, fraud, deceit or misrepresentation.”[13] Legal commentators have divided over the propriety of recycling legal arguments and verbatim language without acknowledgment.[14] Context also matters. The legal analysis in a routine motion in a mass tort litigation, say for dismissal for lack of diversity, or for change of venue, probably should not be re-invented. Copying another lawyer’s appellate brief and its consideration of a legal issue, without acknowledgment, seems ethically dodgy.

Of course, copying, whether permissible or not, does not mean that lawyers are freed from their ethical responsibility of ensuring that citations and interpretations of authorities are correct. Without having taken steps to ensure the relevance and correctness of cited authorities, lawyers cannot represent to the court that their argument has a good-faith basis in law and fact.

Hallucinations versus Delusions

Philosophy professor Hilarius Bookbinder (not Søren Kierkegaard) points out that describing bogus AI output as “hallucinations” is euphemistic and erroneous. When AI makes stuff up, the output is not really a hallucination, but a delusion.[15]

Bookbinder channels the insight of William James, who gave extensive consideration to the phenomenon of hallucinations, and ultimately characterized them as correct reports of altered consciousness.[16] In a footnote to The Principles of Psychology, James offered the following helpful distinction:

“Illusions and hallucinations must both be distinguished from delusions. A delusion is a false opinion about a matter of fact, which need not necessarily involve, though it often does involve, false perceptions of sensible things. We may, for example, have religious delusions, medical delusions, delusions about our own importance, about other people’s characters, etc., ad libitum.”[17]

Bookbinder has a point about how we talk about large language models of AI. In James’ parlance, AI does not really hallucinate, but it clearly suffers delusions; or perhaps it simply fabricates unwittingly. This feature, or flaw, of AI makes lawyers’ uncritical reliance upon AI for legal research and writing unethical, not merely as plagiarism, but as a violation of their professional duties of competence and candor to the tribunal. Outsourcing thinking to a machine seems unbecoming for a profession that is built upon careful, independent analysis.

Hallucinations and delusions are both distinguishable from a third phenomenon, bullshit – willful indifference to the truth.[18] For bullshitters, the assertion is more important than the truth value of the statement. When Felonious Trump claimed that a Civil War battle took place on one of his golf courses, and even went so far as to mark the site with a plaque, several historians pushed back and pointed out his mistake, to which Trump responded: “How would they know that? Were they there?”[19] Trumpian lies and bullshit have now begun to infiltrate the judicial system, so we must sort delusions, bullshit, and lies within the pathology of lawyering. We can probably say fairly that AI lacks the intentionality to deceive, or the psychopathology that confuses assertion with fact.

There may well be conduct worse than plagiarizing a bot, but that is hardly a recommendation.


[1] Schachtman, “Copycat – Further Thoughts on Plagiarism in the Law,” Tortini (Oct. 24, 2010); “Plagiarism in the Law,” Tortini (Oct. 16, 2010).

[2] Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *14-16 (Nov. 12, 2010) (“Best v. Lowe’s Home Centers, Inc., 563 F.3d 171, 178 (6th Cir. 2009) (“an ‘overwhelming majority of the courts of appeals’ agree” that differential diagnosis, a process for medical diagnosis that does not entail statistical significance tests, informs causation) (quoting Westberry v. Gislaved Gummi AB, 178 F.3d 257, 263 (4th Cir. 1999)); Ferebee v. Chevron Chem. Co., 736 F.2d 1529, 1536 (D.C. Cir.) (‘[P]roducts liability law does not preclude recovery until a “statistically significant” number of people have been injured.’), cert. denied, 469 U.S. 1062 (1984)”).

[3] Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d in relevant part, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).

[4] See, e.g., James L. Mills and Duane Alexander, “Teratogens and ‘Litogens’,” 15 New Engl. J. Med. 1234 (1986); Samuel R. Gross, “Expert Evidence,” 1991 Wis. L. Rev. 1113, 1121-24 (1991) (“Unfortunately, Judge Shoob’s decision is absolutely wrong. There is no scientifically credible evidence that Ortho-Gynol Contraceptive Jelly ever causes birth defects.”). See also Editorial, “Federal Judges v. Science,” N.Y. Times, December 27, 1986, at A22 (unsigned editorial) (“That Judge Shoob and the appellate judges ignored the best scientific evidence is an intellectual embarrassment.”); David E. Bernstein, “Junk Science in the Courtroom,” Wall St. J. at A15 (Mar. 24, 1993) (pointing to Wells as a prominent example of how the federal judiciary had embarrassed the American judicial system with its careless, non-evidence-based approach to scientific evidence); Bert Black, Francisco J. Ayala & Carol Saffran-Brinks, “Science and the Law in the Wake of Daubert: A New Search for Scientific Knowledge,” 72 Texas L. Rev. 715, 733-34 (1994) (lawyers and a leading scientist noting that the district judge “found that the scientific studies relied upon by the plaintiffs’ expert were inconclusive, but nonetheless held his testimony sufficient to support a plaintiffs’ verdict. *** [T]he court explicitly based its decision on the demeanor, tone, motives, biases, and interests that might have influenced each expert’s opinion. Scientific validity apparently did not matter at all.”) (internal citations omitted); Troyen A. Brennan, “Untangling Causation Issues in Law and Medicine: Hazardous Substance Litigation,” 107 Ann. Intern. Med. 741, 744-45 (1987) (describing the result in Wells as arising from the difficulties created by the Ferebee case; “[t]he Wells case can be characterized as the court embracing the hypothesis when the epidemiologic study fails to show any effect”); Kenneth R. Foster, David E. Bernstein, and Peter W. Huber, eds., Phantom Risk: Scientific Inference and the Law 28-29, 138-39 (MIT Press 1993) (criticizing the Wells decision); Hans Zeisel & David Kaye, Prove It With Figures: Empirical Methods in Law and Litigation § 6.5, at 93 (1997) (noting the multiple comparisons in studies of birth defects among women who used spermicides, based upon the many reported categories of birth malformations, and the large potential for even more unreported categories); id. § 6.5 n.3, at 271 (characterizing Wells as “notorious,” and noting that the case became a “lightning rod for the legal system’s ability to handle expert evidence.”).

[5] Matrixx Initiatives, Inc. v. Siracusano, 131 S.Ct. 1309, 1319 (2011).

[6] Schachtman, “Artificial Intelligence May Be Worse Than None At All,” Tortini (Feb. 2, 2025); “Hallucinations in Law and in Government,” Tortini (Feb. 19, 2025).

[7] Lacey v. State Farm General Insurance Co., Case 2:24-cv-05205-FMO-MAA, doc. 119 (C.D. Calif. May 6, 2025).

[8] Id. at 7. The Special Master ducked the more interesting counter-factual question: what if AI had generated a perfect brief, with exactly the right citations, properly cited, with all inferences and conclusions proper and correct? Would the outsourcing of human intelligence be acceptable to us, with all the requisite disclosures? See Hilarius Bookbinder, “Why AI is Destroying Academic Integrity: It’s because students prefer The Experience Machine,” Scriptorium Philosophia (Jan. 02, 2025).

[9] See note 1, supra. See also Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023) (ordering sanctions against lawyers who submitted briefs with six fabricated judicial opinions and fake quotes); Gauthier v. Goodyear Tire & Rubber Co., civil action no. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024) (imposing sanctions for citing non-existent cases with fabricated quotations). Canadian lawyers have also been seduced by the prospect of outsourcing their thinking, researching, analyzing, and writing to a bot. Ko v. Li, CV-25-00736891-00ES, 2025 Ontario Super. Ct. Justice 2766 (May 1, 2025) (ordering lawyer to show cause why she should not be held in contempt). See Bernise Carolino, “Ko v. Li, Ontario Superior Court, 2 nonexistent case citations, attorney referred for potential contempt proceedings,” Canadian Lawyering (May 13, 2025).

[10] Kat Black, “‘A Collective Debacle’: Ellis George and K&L Gates Ordered to Pay $31,000 after Using AI to Write Brief in Insurance Case,” Legal Intelligencer (May 13, 2025).

[11] Lacey, supra, at 7; Eugene Volokh, “AI Hallucination in Filings Involving 14th-Largest U.S. Law Firm Lead to $31K in Sanctions,” The Volokh Conspiracy, Reason (May 13, 2025).

[12] 42 C.F.R. § 93.103(c) (Department of Health and Human Services, Office of Research Integrity). See, e.g., Sena Chang, “Secretary of Defense Pete Hegseth ’03 ‘plagiarized’ small portions of his senior thesis, experts say. But how serious is it?” Daily Princetonian (May 10, 2025).

[13] See In re Mundie, 453 Fed.Appx. 9 (2d Cir. 2011) (publicly reprimanding lawyer for various acts, including copying extensively from another lawyer’s brief in a different case). See also In re Summit Financial, Inc., 2021 WL 5173331 (Bankr. C.D. Cal. Nov. 5, 2021); Lohan v. Perez, 924 F.Supp. 2d 447 (E.D.N.Y. 2013). Compare New York City Bar Formal Opinion 2018-3 (disapproving “extensive” copying, while noting that copying source material without attribution is not “per se” an ethical violation) with North Carolina State Bar Formal Ethics Opinion 2008-14 (acknowledging that lawyers may copy language used in other briefs without attribution).

[14] Dennis A. Rendleman, “Copy That!: What is plagiarism in the practice of law?,” YourABA (Mar. 2020) (arguing that unacknowledged copying in legal filings is different from such conduct in scholarly publications); Andrew M. Carter, “The Case for Plagiarism,” 9 U. Calif. Irvine L. Rev. 531 (2019); Carol M. Bast & Linda B. Samuels, “Plagiarism and legal scholarship in the age of information sharing: the need for intellectual honesty,” 57 Catholic Univ. L. Rev. 777 (2008); Peter A. Joy & Kevin C. McMunigal, “The Problems of Plagiarism as an Ethics Offense,” ABA Criminal Justice 56 (Summer 2011).

[15] Hilarius Bookbinder, “Hallucination, bullshit, confabulation: AI and the outsourcing of thinking,” Scriptorium Philosophia (May 15, 2025).

[16] William James, The Principles of Psychology, vol. 2 (ch. 18-19) (1918).

[17] Id. at ch. 19, n. 41.

[18] Harry Frankfurt, On Bullshit 63 (2005) (“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.  Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”).

[19] Joyce Chen, “Donald Trump’s Golf Course Plaque Honors Fake Civil War Battle,” Rolling Stone (Aug. 17, 2017); Nicholas Fandos, “In Renovation of Golf Club, Donald Trump Also Dressed Up History,” N.Y. Times (Nov. 24, 2015).

Man Oh Mann, Has the Climate Changed

March 15th, 2025

Michael Mann, formerly a climate scientist at Penn State University, is no stranger to controversy.[1] As an outspoken advocate on climate change, he has attracted close scrutiny and harsh criticism. Several right-of-center commentators criticized Mann’s work in potentially defamatory terms of “misconduct,” “manipulation,” or data torturing. One blogger likened Mann’s conduct to that of Penn State’s Jerry Sandusky, of sexual abuse scandal infamy.[2]

Mann sought vindication, not by a duel, but by lawsuits for defamation. His cases bounced up and down the court system for over a decade,[3] but last week, they crash-landed. In the course of yo-yo’ing through the courts, the case resulted in the Supreme Court’s denial of a petition for a writ of certiorari, accompanied by a dissent from Associate Justice Alito. The published dissent is interesting for the light it sheds on recent speculation about the fate of New York Times v. Sullivan,[4] but also for providing a reasonably accurate statement of the facts of the case:

“Penn State professor Michael Mann is internationally known for his academic work and advocacy on the contentious subject of climate change. As part of this work, Mann and two colleagues produced what has been dubbed the ‘hockey stick’ graph, which depicts a slight dip in temperatures between the years 1050 and 1900, followed by a sharp rise in temperature over the last century. Because thermometer readings for most of this period are not available, Mann attempted to ascertain temperatures for the earlier years based on other data such as growth rings of ancient trees and corals, ice cores from glaciers, and cave sediment cores. The hockey stick graph has been prominently cited as proof that human activity has led to global warming. Particularly after emails from the University of East Anglia’s Climate Research Unit were made public, the quality of Mann’s work was called into question in some quarters.

Columnists Rand Simberg and Mark Steyn criticized Mann, the hockey stick graph, and an investigation conducted by Penn State into allegations of wrongdoing by Mann. Simberg’s and Steyn’s comments, which appeared in blogs hosted by the Competitive Enterprise Institute and National Review Online, employed pungent language, accusing Mann of, among other things, ‘misconduct’, ‘wrongdoing’, and the ‘manipulation’ and ‘tortur[e]’ of data. App. to Pet. for Cert. in No. 18-1451, pp. 94a, 98a (App.).

Mann responded by filing a defamation suit in the District of Columbia’s Superior Court. Petitioners moved for dismissal, relying in part on the District’s anti-SLAPP statute, D. C. Code § 16-5502(b) (2012), which requires dismissal of a defamation claim if it is based on speech made ‘in furtherance of the right of advocacy on issues of public interest’ and the plaintiff cannot show that the claim is likely to succeed on the merits. The Superior Court denied the motion, and the D. C. Court of Appeals affirmed. 150 A.3d 1213, 1247, 1249 (2016). The petition now before us presents two questions: (1) whether a court or jury must determine if a factual connotation is ‘provably false’ and (2) whether the First Amendment permits defamation liability for expressing a subjective opinion about a matter of scientific or political controversy. Both questions merit our review.”[5]

Subsequent events in the Mann case have made a return trip to the Supreme Court for a substantive decision on the First Amendment issue very unlikely. Mann’s case against the National Review was dismissed before trial. A District of Columbia jury returned verdicts in favor of Mann, and against Steyn and Simberg, on Mann’s claims of libel. The jury awarded Mann two dollars in compensatory damages, one dollar against each defendant, but assessed punitive damages of one million dollars against Steyn and one thousand dollars against Simberg. Post-trial motions remained pending until earlier this month.[6]

On January 7, 2025, the trial court ordered Dr. Mann to pay court costs and attorney fees in the amount of $530,820.21 to The National Review, which had been dismissed from the case before trial.[7] Mann plans to appeal this cost award.

Punitive Damages

Judge Irving upheld the libel verdict for Dr. Mann, but found that the punitive damage awards were grossly excessive given the nominal compensatory awards.[8] As such, the punitive damage awards offended the due process clause of the Constitution, and had to be reduced.[9] The one million dollar award was reduced to $5,000.

Sanctions against Michael Mann for Misconduct

In the course of the trial, Dr. Mann and his counsel introduced an exhibit with items of alleged damages in the form of lost grants.[10] In the pre-trial discovery phase of the case, Mann had not been able to adduce any evidence that he actually lost any funding, that is, that grants were withheld or withdrawn because of the comments of the two blogging defendants. Mann had acknowledged, at least at one point, that the details of grants not received were not relevant to any claim or defense in the case. Understandably, Judge Alfred S. Irving, Jr., presiding, was rather upset about the Mann testimony and exhibit. The defendants filed a “Motion for Sanctions for Bad-Faith Trial Misconduct” during the trial.[11]

The factual issues raised by the motion were further litigated in post-trial proceedings, with the result that Judge Irving found, by clear and convincing evidence, that Dr. Mann and his counsel had acted in bad faith in pressing claims for several lost grants. In last week’s 46-page Order, Judge Irving documented in painful detail the dishonesty and mendacity exhibited by Mann and his lawyers, and the violation of multiple rules of professional responsibility. The court found that Dr. Mann, through his lawyers, had:

“acted in bad faith when they presented erroneous evidence and made false representations to the jury and the Court regarding damages stemming from loss of grant funding… . The Court does not reach this decision lightly.”

Judge Irving characterized the misconduct of Dr. Mann and his counsel as “extraordinary in its scope, extent, and intent.” The court has not yet assessed the dollar amount of the sanction for Mann’s egregious conduct. In all likelihood, the sanction award for his trial misconduct will exceed the $6,002 he has in the plus column for his litigation efforts. With over half a million dollars assessed against Dr. Mann in favor of the National Review, Mann’s litigation efforts to date might seem like being hit over the head repeatedly with a hockey stick.

Over a year ago, the New York Times reported on the initial jury verdict in favor of Dr. Mann.[12] Since then, however, the paper has been remarkably silent on the developments in the case, including the court’s findings concerning Dr. Mann’s misconduct in presenting evidence.

No one will miss the irony in Mann’s prevailing at trial in showing that he had been defamed by the trial defendants, and then attempting in plain sight to deceive the jury on damages, in what fair comment might call “misconduct,” or “manipulation,” or “data torturing.” Of course, none of the litigation events described by Judge Irving bear on the correctness vel non of predictions of climate change. These litigation events do, however, single out Dr. Michael Mann as lacking the ethos for serving as a spokesman for any scientific claim. Being called out for manipulating evidence is not a good thing for anyone in the evidence business.


[1] Mann left Penn State in 2022, to become a Presidential Distinguished Professor in the Department of Earth and Environmental Science at the University of Pennsylvania.

[2] See Competitive Enterprise Institute v. Mann (D.C. Ct. Apps. 2016).

[3] See “Oreskes Excluded as Historian Expert Witness in Mann Case,” Tortini (Feb. 28, 2023); “Climategate on Appeal,” Tortini (Aug. 17, 2014).

[4] 376 U.S. 254 (1964).

[5] National Review, Inc. v. Mann, 140 S.Ct. 344, 344-45 (2019).

[6] Eugene Volokh, “Punitive Damages Award in Mann v. Steyn Reduced from $1M to $5K, largely because the compensatory damages were just $1,” Reason (Mar. 4, 2025); Roger Pielke, “In Bad Faith,” AEI (Mar. 12, 2025).

[7] Mann v. National Review, Inc., 2012 CA 008263B, Amended Order Granting in Part National Review Inc.’s Motion for Attorneys’ Fees and Supplemental Motion for “Fees on Fees” (D.C. Super. Ct. Jan. 7, 2025); see Danielle Shockey, “Pennsylvania Climate Scientist Must ‘Pay Up’ $530K After 8 Year Legal Battle Over 2 Blog Posts,” Tampa Free Press (Jan. 12, 2025); Marc Morano, “Prof. Michael Mann ‘intends to appeal’ court order to pay ‘National Review Inc. $530,820.21 in attorneys’ fees & costs’,” Climate Depot (Jan. 10, 2025).

[8] Mann v. National Review, Inc., 2012 CA 008263B, Omnibus Order on Defendants’ Post-Trial Motions for Judgment as a Matter of Law (D.C. Super. Ct. Mar. 4, 2025).

[9] Id. at 20-30. See BMW of North America v. Gore, 517 U.S. 559, 575 (1996); Cooper Indus., Inc. v. Leatherman Tool Group, Inc., 532 U.S. 424, 433 (2001); State Farm Mut. Auto. Ins. Co. v. Campbell, 538 U.S. 408, 427 (2003).

[10] Mann v. National Review, Inc., 2012 CA 008263B, Order Granting in Part Defendants’ Motions for Sanctions (D.C. Super. Ct. Mar. 12, 2025).

[11] Id. at 1-2.

[12] Delger Erdenesanaa, “Michael Mann, a Leading Climate Scientist, Wins His Defamation Suit,” N.Y. Times (Feb. 8, 2024).

Blame It On Delaney – Rats to You

January 16th, 2025

Yesterday, the FDA gave notice that it was banning Red Dye number 3 from foods and pharmaceuticals. Technically, it revoked the authorization for the use of the dye.[1]

Formally, the FDA granted a petition by a “white hat” and “empty head” consortium of individuals and NGOs that included the Center for Science in the Public Interest, Breast Cancer Prevention Partners, Center for Environmental Health, Center for Food Safety, Chef Ann Foundation, Children’s Advocacy Institute, Consumer Federation of America, Consumer Reports, Defend Our Health, Environmental Defense Fund, Environmental Working Group, Feingold Association of the United States, Food & Water Watch, Healthy Babies Bright Futures, Life Time Foundation, MomsRising, Prevention Institute, Public Citizen, Public Health Institute, Public Interest Research Group, Real Food for Kids, Lisa Y. Lefferts, Linda S. Birnbaum, and Philip J. Landrigan.[2]

As the FDA notice explained, a federal statute known as the “Delaney Clause,” added by Congress in 1960 to the Color Additive Amendments to the Food, Drug, and Cosmetic Act, prohibits FDA authorization of a food color additive that has been found to induce cancer in humans or animals. The FDA agreed with petitioners that two studies found higher rates of cancer in laboratory male rats exposed to high levels of red dye #3. The agency pointed out that:

“The way that FD&C Red No. 3 causes cancer in male rats does not occur in humans. Relevant exposure levels to FD&C Red No. 3 for humans are typically much lower than those that cause the effects shown in male rats. Studies in other animals and in humans did not show these effects; claims that the use of FD&C Red No. 3 in food and in ingested drugs puts people at risk are not supported by the available scientific information.”

The FDA rule making extrapolates across species, across dose levels, and across sex, without evidence and indeed against evidence. Nonetheless, the 65-year-old Delaney Clause, based upon outdated and invalidated scientific methods, required the FDA action. Even the IARC does not consider red dye #3 a human carcinogen. While we can all agree that inbred laboratory male rats should not be fed food colored with red dye number 3, we have to ask ourselves whether we are more like this subset of the rat world, or more like mice and hamsters.

The formal FDA decision is dated today, January 16, 2025, and can be found in the Federal Register.[3] Richard Williams, a former FDA officer, called the rule making “another failed attempt” at educating and protecting consumers.[4]


[1] FDA, “FDA to Revoke Authorization for the Use of Red No. 3 in Food and Ingested Drugs,” (Jan. 15, 2025).

[2] Color additive petition pursuant to 21 U.S.C. §§ 379e, 721(b)(1) to remove FD&C Red No. 3 from the permanent list of color additives approved for use in food and dietary supplements, 21 C.F.R. § 74.303, and for use in ingested drugs, 21 C.F.R. § 74.1303, because the FDA has found that the additive induces cancer and is unsafe (Oct. 24, 2022).

[3] “Color Additive Petition From Center for Science in the Public Interest, et al.; Request To Revoke Color Additive Listing for Use of FD&C Red No. 3 in Food and Ingested Drugs – A Rule by the Food and Drug Administration,” Fed. Reg. (Jan. 16, 2025).

[4] Richard Williams, “Red Dye 3, New Nutrition Labels, and More,” (Jan. 16, 2025).

Access to a Study Protocol & Underlying Data Reveals a Nuclear Non-Proliferation Test

April 8th, 2024

The limits of peer review ultimately make it a poor proxy for the validity tests posed by Rules 702 and 703. Published peer-reviewed articles simply do not permit a very searching evaluation of the facts and data of a study. In the wake of the Daubert decision, expert witnesses quickly saw that they could obscure the search for validity by relying upon published studies, and thus frustrate the goals of judicial gatekeeping. As a practical matter, the burden shifts to the party that wishes to challenge the relied-upon facts and data to learn more about the cited studies, in order to show that the facts and data are not sufficient under Rule 702(b), and that the testimony is not the product of reliable methods under Rule 702(c). Obtaining study protocols, and in some instances underlying data, is necessary for due process in the gatekeeping process. A couple of case studies may illustrate the power of looking under the hood of published studies, even ones that were peer reviewed.

When the Supreme Court decided the Daubert case in June 1993, two recent verdicts in silicone-gel breast implant cases were fresh in memory.[1] The verdicts were large by the standards of the time, and the evidence presented for the claims that silicone caused autoimmune disease was extremely weak. The verdicts set off a feeding frenzy, not only in the lawsuit industry, but also in the shady entrepreneurial world of supposed medical tests for “silicone sensitivity.”

The plaintiffs’ litigation theory lacked any meaningful epidemiologic support, and so there were fulsome presentations of putative, hypothetical mechanisms. One such mechanism involved the supposed in vivo degradation of silicone to silica (silicon dioxide), with silica then inducing an immunogenic reaction, which then, somehow, induced autoimmunity and the induction of autoimmune connective tissue disease. The degradation claim would ultimately prove baseless,[2] and the nuclear magnetic resonance evidence put forward to support degradation would turn out to be instrumental artifact and deception. The immunogenic mechanism had a few lines of potential support, with the most prominent at the time coming from the laboratories of Douglas Radford Shanklin, and his colleague, David L. Smalley, both of whom were testifying expert witnesses for claimants.

The Daubert decision held out some opportunity to challenge the admissibility of testimony that silicone implants led to either the production of a silicone-specific antibody, or the induction of t-cell mediated immunogenicity from silicone (or resulting silica) exposure. The initial tests of the newly articulated standard for admissibility of opinion testimony in silicone litigation did not go well.[3] Peer review, which was absent in the re-analyses relied upon in the Bendectin litigation, was superficially present in the studies relied upon in the silicone litigation. The absence of supportive epidemiology was excused with hand waving that there was a “credible” mechanism, and that epidemiology took too long and was too expensive. Initially, post-Daubert, federal courts were quick to excuse the absence of epidemiology for a novel claim.

The initial Rule 702 challenges to plaintiffs’ expert witnesses thus focused on immunogenicity as the putative mechanism, which, if true, might lend some plausibility to their causal claim. Ultimately, plaintiffs’ expert witnesses would have to show that the mechanism was real by showing, through epidemiologic studies, that silicone exposure causes autoimmune disease.

One of the more persistent purveyors of a “test” for detecting alleged silicone sensitivity came from Smalley and Shanklin, then at the University of Tennessee. These authors exploited the fears of implant recipients and the greed of lawyers by marketing a “silicone sensitivity test (SILS).” For a price, Smalley and Shanklin would test mailed-in blood specimens sent directly by lawyers or by physicians, and provide ready-for-litigation reports that claimants had suffered an immune system response to silicone exposure. Starting in 1995, Smalley and Shanklin also cranked out a series of articles at supposedly peer reviewed journals, which purported to identify a specific immune response to crystalline silica in women who had silicone gel breast implants.[4] These studies had two obvious goals. First, the studies promoted their product to the “silicone sisters,” various support groups of claimants, as well as their lawyers, and a network of supporting rheumatologists and plastic surgeons. Second, by identifying a putative causal mechanism, Shanklin could add a meretricious patina of scientific validity to the claim that silicone breast implants cause autoimmune disease, which Shanklin, as a testifying expert witness, needed to survive Rule 702 challenges.

The plaintiffs’ strategy had been to paper over the huge analytical gaps in their causal theory with complicated, speculative research, which had been peer reviewed and published. Although the quality of the journals was often suspect, and the nature of the peer review obscure, the strategy had been initially successful in deflecting any meaningful scrutiny.

Many of the silicone cases were pending in a multi-district litigation, MDL 926, before Judge Sam Pointer, in the Northern District of Alabama. Judge Pointer, however, did not believe that ruling on expert witness admissibility was a function of an MDL court, and by 1995, he started to remand cases to the transferor courts, for those courts to do what they thought appropriate under Rules 702 and 703. Some of the first remanded cases went to the District of Oregon, where they landed in front of Judge Robert E. Jones. In early 1996, Judge Jones invited briefing on expert witness challenges, and in face of the complex immunology and toxicology issues, and the emerging epidemiologic studies, he decided to appoint four technical advisors to assist him in deciding the challenges.

The addition of scientific advisors to the gatekeeper’s bench made a huge difference in the sophistication and detail of the challenges that could be lodged to the relied-upon studies. In June 1996, Judge Jones entertained extensive hearings with viva voce testimony from both challenged witnesses and subject-matter experts on topics, such as immunology and nuclear magnetic resonance spectroscopy. Judge Jones invited final argument in the form of videotaped presentations from counsel so that the videotapes could be distributed to his technical advisors later in the summer. The contrived complexity of plaintiffs’ case dissipated, and the huge analytical gaps became visible. In December 1996, Judge Jones issued his decision that excluded the plaintiffs’ expert witnesses’ proposed testimony on grounds that it failed to satisfy the requirements of Rule 702.[5]

In October 1996, while Judge Jones was studying the record, and writing his opinion in the Hall case, Judge Weinstein, with a judge from the Southern District of New York, and another from New York state trial court, conducted a two-week Rule 702 hearing, in Brooklyn. Judge Weinstein announced at the outset that he had studied the record from the Hall case, and that he would incorporate it into his record for the cases remanded to the Southern and Eastern Districts of New York.

Curious gaps in the articles claiming silicone immunogenicity, and the lack of success in earlier Rule 702 challenges, motivated the defense to obtain the study protocols and underlying data from studies such as those published by Shanklin and Smalley. Shanklin and Smalley were frequently listed as expert witnesses in individual cases, but when requests or subpoenas for their protocols and raw data were filed, plaintiffs’ counsel stonewalled or withdrew them as witnesses. Eventually, the defense was able to enforce a subpoena and obtain the protocol and some data. The respondents claimed that the control data no longer existed, and inexplicably a good part of the experimental data had been destroyed. Enough was revealed, however, to see that the published articles were not what they claimed to be.[6]

In addition to litigation discovery, in March 1996, a surgeon published the results of his test of the Shanklin-Smalley silicone sensitivity test (“SILS”).[7] Dr. Leroy Young sent the Shanklin laboratory several blood samples from women with and without silicone implants. For six women who never had implants, Dr. Young submitted a fabricated medical history that included silicone implants and symptoms of “silicone-associated disease.” All six samples were reported back as “positive”; indeed, these results were more positive than the blood samples from the women who actually had silicone implants. Dr. Young suggested that perhaps the SILS test was akin to cold fusion.

By the time counsel assembled in Judge Weinstein’s courtroom, in October 1996, some epidemiologic studies had become available and much more information was available on the supposedly supportive mechanistic studies upon which plaintiffs’ expert witnesses had previously relied. Not too surprisingly, plaintiffs’ counsel chose not to call the entrepreneurial Dr. Shanklin, but instead called Donard S. Dwyer, a young, earnest immunologist who had done some contract work on an unrelated matter for Bristol-Myers Squibb, a defendant in the litigation.  Dr. Dwyer had filed an affidavit previously in the Oregon federal litigation, in which he gave blanket approval to the methods and conclusions of the Smalley-Shanklin research:

“Based on a thorough review of these extensive materials which are more than adequate to evaluate Dr. Smalley’s test methodology, I formed the following conclusions. First, the experimental protocols that were used are standard and acceptable methods for measuring T Cell proliferation. The results have been reproducible and consistent in this laboratory. Second, the conclusion that there are differences between patients with breast implants and normal controls with respect to the proliferative response to silicon dioxide appears to be justified from the data.”[8]

Dwyer maintained this position even after the defense obtained the study protocol and underlying data, and various immunologists on the defense side filed scathing evaluations of the Smalley-Shanklin work. On direct examination at the hearings in Brooklyn, Dwyer vouched for the challenged t-cell studies, and opined that the work was peer reviewed and sufficiently reliable.[9]

The charade fell apart on cross-examination. Dwyer refused to endorse the studies that claimed to have found an anti-silicone antibody. Researchers at leading universities had attempted to reproduce the findings of such antibodies, without success.[10] The real controversy was over the claimed finding of silicone antigenicity, as shown in t-cell or cell-mediated specific immune responses. On direct examination, plaintiffs’ counsel elicited Dwyer’s support for the soundness of the scientific studies that purported to establish such antigenicity, with little attention to the critiques that had been filed before the hearing.[11] Dwyer stuck to the unqualified support he had expressed previously in his affidavit for the Oregon cases.[12]

The problematic aspect of Dwyer’s direct examination testimony was that he had seen the protocol and the partial data produced by Smalley and Shanklin.[13] Dwyer, therefore, could not contest some basic facts about their work. First, the Shanklin data failed to support a dose-response relationship.[14] Second, the blood samples from women with silicone implants had been mailed to Smalley’s laboratory, whereas the control samples were collected locally. The disparity ensured that the silicone blood samples would be older than the controls, a departure from treating exposed and control samples in the same way.[15] Third, the experiment was done unblinded; the laboratory technical personnel and the investigators knew which blood samples were silicone exposed and which were controls (except for the samples sent by Dr. Leroy Young).[16] Fourth, Shanklin’s laboratory procedures deviated from the standardized procedure set out in the National Institutes of Health’s Current Protocols in Immunology.[17]

The SILS study protocol and the data produced by Shanklin and Smalley made clear that each sample was to be tested in triplicate for T-cell proliferation in response to silica, to a positive control mitogen (Con A), and to a negative control blank. The published papers all claimed that each sample had been tested in triplicate for each of these three conditions (silica, mitogen, and nothing).[18] These statements were, however, untrue and never corrected.[19]

The study protocol called for the tests to be run in triplicate, but it instructed the laboratory that two counts could be used if one count did not match the others, a determination left to a technical specialist on a “case-by-case” basis. Of the data that were supposed to be reported in triplicate, fully one third had only two data points, and 10 percent had but one data point.[20] No criteria were provided to the technical specialist for deciding which data to discard.[21] Not only had Shanklin excluded data, he had discarded and destroyed the excluded data, such that no one could go back and assess whether the exclusions were proper.[22]

Dwyer agreed that this exclusion and discarding of data was not at all a good method.[23] Dwyer proclaimed that he had not come to Brooklyn to defend this aspect of the Shanklin work, and that it was not defensible at all. He conceded that “the interpretation of the data and collection of the data are flawed.”[24] Dwyer then tried to stake out an incoherent position, asserting that there was “nothing inherently wrong with the method,” while conceding that discarding data was problematic.[25] The judges presiding over the hearing could readily see that the Shanklin research was bent.

At this point, the lead plaintiffs’ counsel, Michael Williams, sought an off-ramp. He jumped to his feet and exclaimed “I’m informed that no witness in this case will rely on Dr. Smalley’s [and Shanklin’s] work in any respect.”[26] Judge Weinstein’s eyes lit up with the prospect that the Smalley-Shanklin work, by agreement, would never be mentioned again in New York state or federal cases. Given how central the claim of silicone antigenicity was to plaintiffs’ cases, the defense resisted stipulating about research that it would continue to face in other state and federal courts. The defense was saved, however, by the obstinacy of a lawyer from the Weitz & Luxenberg firm, who rose to report that her firm intended to call Drs. Shanklin and Smalley as witnesses, and that they would not stipulate to the exclusion of their work. Judge Weinstein rolled his eyes, and waved me to continue.[27] The proliferation of the T-cell test was over. The hearing before Judges Weinstein and Baer, and Justice Lobis, continued for several more days, with several other dramatic moments.[28]

In short order, on October 23, 1996, Judge Weinstein issued a short, published opinion, in which he granted partial summary judgment on the claims of systemic disease for all cases pending in federal court in New York.[29] Curiously, the defendants had not moved for summary judgment. There were, of course, pending motions to exclude plaintiffs’ expert witnesses, but Judge Weinstein effectively ducked those motions, and let it be known that he was never a fan of Rule 702. It would be many years before Judge Weinstein allowed his judicial assessment to see the light of day. More than two decades later, in a law review article, Judge Weinstein gave his judgment that

“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”[30]

Judge Weinstein’s opinion was truly a judgment from which there can be no appeal. Shanklin and Smalley continued to publish papers for another decade. None of the published articles by Shanklin and his colleagues has been retracted.


[1] Reuters, “Record $25 Million Awarded In Silicone-Gel Implants Case,” N.Y. Times at A13 (Dec. 24, 1992) (describing the verdict returned in Harris County, Texas, in Johnson v. Medical Engineering Corp.); Associated Press, “Woman Wins Implant Suit,” N.Y. Times at A16 (Dec. 17, 1991) (reporting a verdict in Hopkins v. Dow Corning, for $840,000 in compensatory and $6.5 million in punitive damages); see Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir. 1994) (affirming judgment with minimal attention to Rule 702 issues).

[2] William E. Hull, “A Critical Review of MR Studies Concerning Silicone Breast Implants,” 42 Magnetic Resonance in Medicine 984, 984 (1999) (“From my viewpoint as an analytical spectroscopist, the result of this exercise was disturbing and disappointing. In my judgement as a referee, none of the Garrido group’s papers (1–6) should have been published in their current form.”). See also N.A. Schachtman, “Silicone Data – Slippery & Hard to Find, Part 2,” Tortini (July 5, 2015). Many of the material science claims in the breast implant litigation were as fraudulent as the health effects claims. See, e.g., John Donley, “Examining the Expert,” 49 Litigation 26 (Spring 2023) (discussing his encounters with frequent testifier Pierre Blais, in silicone litigation).

[3] See, e.g., Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir. 1994) (affirming judgment for plaintiff over Rule 702 challenges), cert. denied, 115 S.Ct. 734 (1995). See Donald A. Lawson, “Note, Hopkins v. Dow Corning Corporation: Silicone and Science,” 37 Jurimetrics J. 53 (1996) (concluding that Hopkins was wrongly decided).

[4] See David L. Smalley, Douglas R. Shanklin, Mary F. Hall, and Michael V. Stevens, “Detection of Lymphocyte Stimulation by Silicon Dioxide,” 4 Internat’l J. Occup. Med. & Toxicol. 63 (1995); David L. Smalley, Douglas R. Shanklin, Mary F. Hall, Michael V. Stevens, and Aram Hanissian, “Immunologic stimulation of T lymphocytes by silica after use of silicone mammary implants,” 9 FASEB J. 424 (1995); David L. Smalley, J. J. Levine, Douglas R. Shanklin, Mary F. Hall, Michael V. Stevens, “Lymphocyte response to silica among offspring of silicone breast implant recipients,” 196 Immunobiology 567 (1996); David L. Smalley, Douglas R. Shanklin, “T-cell-specific response to silicone gel,” 98 Plastic Reconstr. Surg. 915 (1996); and Douglas R. Shanklin, David L. Smalley, Mary F. Hall, Michael V. Stevens, “T cell-mediated immune response to silica in silicone breast implant patients,” 210 Curr. Topics Microbiol. Immunol. 227 (1996). Shanklin was also no stranger to making his case in the popular media. See, e.g., Douglas Shanklin, “More Research Needed on Breast Implants,” Kitsap Sun at 2 (Aug. 29, 1995) (“Widespread silicone sickness is very real in women with past and continuing exposure to silicone breast implants.”) (writing for Scripps Howard News Service). Even after the Shanklin studies were discredited in court, Shanklin and his colleagues continued to publish their claims that silicone implants led to silica antigenicity. David L. Smalley, Douglas R. Shanklin, and Mary F. Hall, “Monocyte-dependent stimulation of human T cells by silicon dioxide,” 66 Pathobiology 302 (1998); Douglas R. Shanklin and David L. Smalley, “The immunopathology of siliconosis. History, clinical presentation, and relation to silicosis and the chemistry of silicon and silicone,” 18 Immunol. Res. 125 (1998); Douglas Radford Shanklin, David L. Smalley, “Pathogenetic and diagnostic aspects of siliconosis,” 17 Rev. Environ Health 85 (2002), and “Erratum,” 17 Rev Environ Health. 
248 (2002); Douglas Radford Shanklin & David L Smalley, “Kinetics of T lymphocyte responses to persistent antigens,” 80 Exp. Mol. Pathol. 26 (2006). Douglas Shanklin died in 2013. Susan J. Ainsworth, “Douglas R. Shanklin,” 92 Chem. & Eng’g News (April 7, 2014). Dr. Smalley appears to be still alive. In 2022, he sued the federal government to challenge his disqualification from serving as a laboratory director of any clinical laboratory in the United States, under 42 U.S.C. § 263a(k). He lost. Smalley v. Becerra, Case No. 4:22CV399 HEA (E.D. Mo. July 6, 2022).

[5] Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Ore. 1996); see Joseph Sanders & David H. Kaye, “Expert Advice on Silicone Implants: Hall v. Baxter Healthcare Corp.,” 37 Jurimetrics J. 113 (1997); Laurens Walker & John Monahan, “Scientific Authority: The Breast Implant Litigation and Beyond,” 86 Virginia L. Rev. 801 (2000); Jane F. Thorpe, Alvina M. Oelhafen, and Michael B. Arnold, “Court-Appointed Experts and Technical Advisors,” 26 Litigation 31 (Summer 2000); Laural L. Hooper, Joe S. Cecil & Thomas E. Willging, “Assessing Causation in Breast Implant Litigation: The Role of Science Panels,” 64 Law & Contemp. Problems 139 (2001); Debra L. Worthington, Merrie Jo Stallard, Joseph M. Price & Peter J. Goss, “Hindsight Bias, Daubert, and the Silicone Breast Implant Litigation: Making the Case for Court-Appointed Experts in Complex Medical and Scientific Litigation,” 8 Psychology, Public Policy & Law 154 (2002).

[6] Judge Jones’ technical advisor on immunology reported that the studies offered in support of the alleged connection between silicone implantation and silicone-specific T cell responses, including the published papers by Shanklin and Smalley, “have a number of methodological shortcomings and thus should not form the basis of such an opinion.” Mary Stenzel-Poore, “Silicone Breast Implant Cases–Analysis of Scientific Reasoning and Methodology Regarding Immunological Studies” (Sept. 9, 1996). This judgment was seconded, over three years later, in the proceedings before MDL 926 and its Rule 706 court-appointed immunology expert witness. See Report of Dr. Betty A. Diamond, in MDL 926, at 14-15 (Nov. 30, 1998). Other expert witnesses who published studies on the supposed immunogenicity of silicone came up with some creative excuses to avoid producing their underlying data. Eric Gershwin consistently testified that his data were with a co-author in Israel, and that he could not produce them. N.A. Schachtman, “Silicone Data – Slippery and Hard to Find, Part I,” Tortini (July 4, 2015). Nonetheless, the court-appointed technical advisors were highly critical of Dr. Gershwin’s results. Dr. Stenzel-Poore, the immunologist on Judge Jones’ panel of advisors, found Gershwin’s claims “not well substantiated.” Hall v. Baxter Healthcare Corp., 947 F.Supp. 1387 (D. Ore. 1996). Similarly, Judge Pointer’s appointed expert immunologist, Dr. Betty A. Diamond, was unshakeable in her criticisms of Gershwin’s work and his conclusions. Testimony of Dr. Betty A. Diamond, in MDL 926 (April 23, 1999). And the Institute of Medicine committee, charged with reviewing the silicone claims, found Gershwin’s work inadequate and insufficient to justify the extravagant claims that plaintiffs were making for immunogenicity and for causation of autoimmune disease. Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants 256 (1999).
Another testifying expert witness who relied upon his own data, Nir Kossovsky, resorted to a seismic excuse; he claimed that the Northridge Quake destroyed his data. N.A. Schachtman, “Earthquake Induced Data Loss – We’re All Shook Up,” Tortini (June 26, 2015). Kossovsky, along with his wife, Beth Brandegee, and his father, Ram Kossowsky, sought to commercialize an ELISA-based silicone “antibody” biomarker diagnostic test, Detecsil. Although the early Rule 702 decisions declined to take a hard look at Kossovsky’s study, the U.S. Food and Drug Administration eventually shut down the Kossovsky Detecsil test. Lillian J. Gill, FDA Acting Director, Office of Compliance, Letter to Beth S. Brandegee, President, Structured Biologicals (SBI) Laboratories: Detecsil Silicone Sensitivity Test (July 15, 1994); see Gary Taubes, “Silicone in the System: Has Nir Kossovsky really shown anything about the dangers of breast implants?” Discover Magazine (Dec. 1995).

[7] Leroy Young, “Testing the Test: An Analysis of the Reliability of the Silicone Sensitivity Test (SILS) in Detecting Immune-Mediated Responses to Silicone Breast Implants,” 97 Plastic & Reconstr. Surg. 681 (1996).

[8] Affid. of Donard S. Dwyer, at para. 6 (Dec. 1, 1995), filed in In re Breast Implant Litig. Pending in U.S. D. Ct, D. Oregon (Groups 1,2, and 3).

[9] Notes of Testimony of Dr. Donard Dwyer, Nyitray v. Baxter Healthcare Corp., CV 93-159 (E. & S.D.N.Y and N.Y. Sup. Ct., N.Y. Cty. Oct. 8, 9, 1996) (Weinstein, J., Baer, J., Lobis, J., Pollak, M.J.).

[10] Id. at N.T. 238-239 (Oct. 8, 1996).

[11] Id. at N.T. 240.

[12] Id. at N.T. 241-42.

[13] Id. at N.T. 243-44; 255:22-256:3.

[14] Id. at 244-45.

[15] Id. at N.T. 259.

[16] Id. at N.T. 258:20-22.

[17] Id. at N.T. 254.

[18] Id. at N.T. 252:16-254.

[19] Id. at N.T. 254:19-255:2.

[20] Id. at N.T. 269:18-269:14.

[21] Id. at N.T. 261:23-262:1.

[22] Id. at N.T. 269:18-270.

[23] Id. at N.T. 256:3-16.

[24] Id. at N.T. 262:15-17.

[25] Id. at N.T. 247:3-5.

[26] Id. at N.T. 260:2-3.

[27] Id. at N.T. 261:5-8.

[28] One of the more interesting and colorful moments came when the late James Conlon cross-examined plaintiffs’ pathology expert witness, Saul Puszkin, about questionable aspects of his curriculum vitae. The examination revealed such questionable conduct that Judge Weinstein stopped the examination and directed Dr. Puszkin not to continue without legal counsel of his own.

[29] In re Breast Implant Cases, 942 F. Supp. 958 (E.& S.D.N.Y. 1996). The opinion did not specifically address the Rule 702 and 703 issues that were the subject of pending motions before the court.

[30] Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” 2009 Cardozo L. Rev. de novo 1, 14 (emphasis added).