TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

How Science Works in the New Reference Manual on Scientific Evidence

March 12th, 2026

The Second and Third Editions of the Reference Manual on Scientific Evidence contained a chapter, “How Science Works,” by Professor David Goodstein. This chapter ambitiously set out to cover philosophy and sociology of science to help orient judges as strangers in a strange land. Goodstein’s chapter had been a useful introduction to scientific methodology, and it countered some of the antic ideas seen in some judicial opinions, as well as in some other chapters of the Manual. Goodstein brought a good deal of experience and expertise to the task. He was a distinguished professor of physics and Vice Provost at the California Institute of Technology, and he had written engagingly about scientific discovery and the pathology of science.[1] Sadly, Goodstein died in April 2024. His death may have had some role in the delayed publication of the Fourth Edition of the Manual,[2] and the improvident replacement of his chapter with a new chapter written by authors less articulate about how science works.

The substitute chapter on “How Science Works” was written by two authors considerably less accomplished than the late Professor Goodstein.[3] Michael Weisberg is a professor of philosophy at the University of Pennsylvania, where he is the deputy director of Perry World House, which “analyzes global policy challenges through the realms of climate, democracy, global justice and human rights, and security.” The connection with Perry World House may explain the new chapter’s heavy reliance upon the development of the chlorofluorocarbon (CFC) connection to ozone layer depletion as an exemplar of scientific discovery and knowledge. The University of Pennsylvania webpage describes Weisberg as “educat[ing] the next generation of environmental leaders in the classroom, at the negotiating table, and in the field, ensuring that their voices have maximal impact on addressing the climate crisis.”[4] So we have a philosopher of advocacy science, as it were. Some readers might think those credentials are not optimal for preparing a nuts-and-bolts description of how science works. Reading sections of the new chapter will not diminish their concerns.

Joining Weisberg on this new version of “How Science Works” is Anastasia Thanukos, who works at the University of California Museum of Paleontology. Thanukos has a master’s degree in integrative biology, and a doctorate in science education.[5]

The new “method” chapter has some virtues. As did Goodstein’s chapter, the new authors put peer review into a realistic perspective that should keep judges from being snookered into admitting weak or bogus evidence merely because it had been published in a peer-reviewed journal.[6] The authors should have gone much further in pointing out that the rise of predatory and pay-to-play journals, as well as of journals controlled by advocacy groups, has undermined much of the publishing model of modern science.

Weisberg and Thanukos discuss “expertise” in a way that is interesting but irrelevant to legal cases. They seem blithely unaware that the standard for qualifying an expert witness is extremely low. Who will disabuse them when they argue that “[i]t is worth evaluating the closeness of a scientist’s disciplinary expertise to a scientific topic on which expert testimony is delivered”?[7] In what emerges as a consistent pattern of giving anti-manufacturing-industry examples, the authors point to Richard Scorer as an accomplished scientist who had no specific expertise in CFC ozone depletion. Notwithstanding the lack of specific expertise, an industry-backed group promoted Scorer’s views criticizing the CFC-ozone depletion hypothesis.[8] Citing Naomi Oreskes, the new Manual chapter states that “[t]he problem of scientists with legitimate expertise in one field weighing in on a scientific question outside their area of expertise is a pernicious one that has affected public acceptance of science and policy on issues such as climate change and tobacco exposure.”[9] Later, when Weisberg and Thanukos discuss the Milward case, they miss the pernicious influence that flowed from allowing Martyn Smith, a toxicologist, to give methodologically muddled opinion testimony on epidemiology. Pernicious is where you find it, and the authors of the new chapter find virtually all untoward instances of poor scientific method and conduct to originate from manufacturing industry.

Weisberg and Thanukos introduce a discussion of the “replication crisis,” a phrase and concept absent from the third edition of the Reference Manual.[10] The authors express some skepticism that there is an actual crisis over replication,[11] but their focus on climate science may mean that they are simply blinded by groupthink in that discipline. Their discussion of retractions omits the steep rise in retraction rates in most scientific disciplines,[12] and the authors ignore the proliferation of poor-quality journals. Positively, the authors introduce a discussion of study preregistration, a notion absent from the third edition of the Manual, and they explain that preregistration may serve as a bulwark against data dredging and post hoc analyses.[13] Negatively, the authors ignore how frequently preregistered protocols are not used, or are used and then violated.

Weisberg and Thanukos appropriately ignore “weight of the evidence” (WOE) and “inference to the best explanation” (IBE). Readers might (mistakenly) think that the new chapter implicitly rejects WOE, as put forth by Carl Cranor and credulously accepted by the First Circuit in Milward, when the chapter authors insist that 

“the judge’s task requires a deeper examination of the available evidence and methods by which it was arrived at, as well as an assessment of how the community of experts in this area has evaluated or would evaluate the evidence and reasoning in question.”[14]

Contrary to the Milward decision from 2011, the new authors are not shy about stating the obvious: there is good science, and there is bad science. Not all “judgment” about causality is acceptable and fit for submission to juries.[15] Given the judicial resistance to Rule 702, the obvious here requires stating. Weisberg and Thanukos acknowledge that some scientific judgment is unreliable or invalid because it was based upon work that was not carried out in accordance with current standards for scientific investigation and inference.[16] It should not surprise anyone that most of their examples of bad science are the product of manufacturing industry; the authors are oblivious to bad science sponsored by the lawsuit industry or by non-governmental advocacy organizations (NGOs).

Weisberg and Thanukos frame scientific disagreements and debates as governed by both data and ethical norms. Science is not infinitely contestable. There are identifiable norms, including a norm that scientists should “seek relevant information,” and “scrutinize ideas and evidence.”[17] Contrary to Milward’s standard of judicial abstention and credulity in the face of dodgy causal claims, these authors state what should be obvious, that scientific scrutiny involves, among other things, “an evaluation of methods, considering potential biases and oversights.”[18]

The chapter’s authors, non-lawyers, get closer to the heart of the error in Milward’s abstention doctrine with their recognition of what should have been obvious to the authors of the law chapter (Richter & Capra):

“When research relevant to a trial has not yet been scrutinized by a community with the appropriate technical expertise, a judge may be placed in the position of providing or requesting this scrutiny.”[19]  

Rather than some vague, subjective, and content-free WOE standard, Weisberg and Thanukos urge scientists, and by implication judges as well, to engage in serious efforts to “identify and avoid bias” and abide by ethical guidelines.[20] In other (my) words, the new authors agree that there is a standard of care reflected in the norms of science, and consequently there can be deviations from that standard. For Weisberg and Thanukos, compliance with the normative structure of scientific investigations is at the heart of building up accurate and predictive conclusions from data.[21] As part of their communitarian and normative conception of the scientific process, the authors appear to accept the reality and necessity for judges to act as gatekeepers.[22]

And while this recognition of standards and the need to police against deviations from standards is commendable, Weisberg and Thanukos proceed to give an abridgment of scientific method and process that is distorted and erroneous. They steadfastly ignore the concept of a hierarchy of evidence, and thus provide illegitimate cover for levelers of evidence. In discussing randomized controlled trials, for instance, they note that such trials are often taken as “the gold standard,” but then they counter, without citation, support, or argument, that such trials “are just one line of evidence among many.”[23] The authors elide any discussion of how to reconcile matters when that “just one line of evidence” conflicts with observational studies.

Notwithstanding their helpful comments about the need to evaluate studies for bias and other errors, these authors enter the Milward controversy with the observation that assessing many lines of evidence is required, can be difficult for courts, and has led to “controversy.” Citing papers, including one the late Margaret Berger presented at the notorious, lawsuit-industry-funded (SKAPP) Coronado Conference, Weisberg and Thanukos float the observation that:

“In science, the available evidence (some of which may come from other research programs not designed to test the hypothesis under consideration) is evaluated as a body, along with the strengths, weaknesses, and caveats relating to each type of data, an approach which, some scholars have argued, the judiciary has not always followed.”98[24]

This claim that the available evidence is evaluated as “a body” is presented as a fact about how science works, without any citation or argument. Several comments are in order. First, the claim is at odds with the authors’ own statements that scientific norms require evaluating each study for biases and other disqualifying flaws. Second, the claim is at odds with the authors’ own reference to systematic reviews and meta-analyses,[25] which are governed by protocols with inclusionary and exclusionary criteria for individual studies, and which require consideration of individual study validity before it enters the “body” of evidence that is quantitatively or qualitatively evaluated. In the authors’ words, “authors delineate both the criteria that studies must meet for inclusion in the review and the methods that will be used to assess the studies.”[26] The Milward case involved an expert witness who had proffered the very opposite of a systematic review in the form of post hoc rejiggering of studies and their data to fit a pre-conceived litigation goal. In the context of addressing the replication crisis, Weisberg and Thanukos correctly observe “peer review alone cannot ensure that the conclusions of published studies are actually correct, highlighting the responsibility judges bear in evaluating the validity of the methodologies that contributed to a particular piece of research.”[27] Of course, the Milward case involved a hired expert witness whose unprincipled re-analysis of studies was never peer reviewed or published.

Third, the authors could easily have found additional support for the contrary proposition that individual studies must be evaluated before being considered as part of the entire evidentiary display. The IARC Preamble, which roughly describes how that agency arrives at its so-called hazard classifications of human carcinogenicity, specifies that individual studies within each of three streams of evidence are evaluated for validity and soundness before contributing to a sub-conclusion with respect to (1) epidemiology, (2) toxicology, and (3) mechanistic lines of evidence.[28] Each of those three lines of evidence is adjudged “sufficient,” “limited,” or “inadequate,” by specialists in the three respective areas, before an overall evaluation is reached. There is much that is objectionable in the IARC working group procedures, but this division of labor, and the need to consider disparate lines of evidence and the studies within each line separately before attempting a synthesis, are present in all systematic review methodology. The suggestion from Weisberg and Thanukos that “the available evidence” in science is “evaluated as a body” is not only unsupported, but demonstrably false and misleading.

This claim about holistic evaluation is a fairly transparent but failed attempt to support a claim made in the chapter on the admissibility of expert witness evidence by Liesa Richter and Daniel Capra, who present an exposition of the notorious Milward case, without criticism, in a way that suggests the case represents appropriate judicial gatekeeping under Rule 702 and is consistent with scientific norms.[29] The chapter on how science works, after having stated a false claim about scientific methodology for synthesizing and integrating disparate lines of evidence, attempts to provide a gloss on the similar and equally benighted claim of Richter and Capra, in footnote 98:

“98. Some scholars have raised concerns that the courts have on occasion unfairly dismissed numerous individual lines of evidence as being flawed or insufficiently conclusive and concluded that evidence is lacking, when in fact the body of evidence, taken as a whole, points to a clear conclusion. For more, see discussion of Milward v. Acuity Specialty Products Group, Inc.; see also Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, in this manual; Berger 2005, supra note 97; and Steve C. Gold, A Fitting Vision of Science for the Courtroom, 3 Wake Forest J.L. & Pol’y 1 (2013).”

Some “scholars” have indeed said such things in their more unscholarly moments; some scholars have criticized Milward, but they are not cited in this new methods chapter. The footnote is accurate, but highly misleading by omission. The First Circuit in Milward also said as much, also without support or justification, and Richter and Capra, in their chapter of the Manual, fourth edition, parrot the Milward case. Weisberg and Thanukos cite two articles, by Margaret Berger and by Steve Gold, both law professors, not scientists, and both ideologically hostile to Rule 702 gatekeeping. The Berger article was from the lawsuit-industry, SKAPP-funded symposium known as the Coronado Conference, and the Gold paper comes out of a symposium sponsored by the lawsuit industry itself and the Center for Progressive Reform, an advocacy NGO to which one of Mr. Milward’s expert witnesses, Carl Cranor, belongs. So the authors of the new science methodology chapter failed to cite any scientific source, but cited papers by lawyers in the thrall of the lawsuit industry, and a single (infamous) decision that ignored Rules 702 and 703, as well as the extensive literature on systematic reviews. Weisberg and Thanukos could have cited many sources that contradicted their claim, and the claim of the lawsuit-industry-sponsored lawyers, but they did not. This is what biased and subversive scholarship looks like.

Funding Bias – The New McCarthyism

The selective citation to articles sponsored by the lawsuit industry is ironic in the context of what Weisberg and Thanukos have to say elsewhere about the “funding effect.” Some of what the authors say about personal bias is almost reasonable. For instance, they suggest that funding source is a “valid consideration” in evaluating methodologies and conclusions of expert testimony, and presumably of published studies as well, but not a sufficient reason to exclude such testimony or reliance.[30] Interestingly, these authors ignored the funding and the ideological interests of the symposia they cited in support of the repudiated Milward abstention doctrine.

Over three decades ago, Kenneth Rothman, the founder of Epidemiology, the official journal of the International Society for Environmental Epidemiology (ISEE), wrote his protest against the obsession with funding in an article that should have been cited in the new chapter, for balance. Rothman described the fixation on funding as the “new McCarthyism in science,” which manifested as intolerance toward industry-sponsored studies and strict scrutiny of “conflict-of-interest” (COI) disclosures.[31] The new McCarthyites amplify the gamesmanship over COI disclosures by excusing or justifying non-disclosure of COIs by scientists who have positional conflicts, or who are aligned with advocacy groups or with the lawsuit industry.

This asymmetrical standard for adjudging conflicts is on full display in the Weisberg and Thanukos chapter, when they claim that “in pharmaceuticals, there is a strong tendency for industry-sponsored trials to favor the industry’s product.”[32] The chapter authors, and their cited source, ignore the context in which pharmaceutical industry scientists publish clinical trial results. A successful clinical trial that shows efficacy with minimal adverse events is the result of years of prior research, including phase I and II trials and preclinical testing. If efficacy is not shown, or unreasonable harm appears, at any point in that prior research, the phase III trial is never done, and so never published. If the medication is never licensed, the phase III trial will generally not be published. The selection effects are obvious and overwhelming in ensuring that the published results of phase III trials will favor the sponsor. The “failed” phase III trial may instead result in a securities class action against the pharmaceutical company. In the realm of observational studies, some work commissioned by manufacturing industry has its origins in the poorly conducted, flawed work of environmental zealots and NGOs. Manufacturing industry has an obvious interest in correcting the scientific record, and again, any carefully done study would rebut that of the zealots and favor the industry sponsor.

Elsewhere, the authors offer a more balanced assessment when they observe that “[a]ll research is potentially influenced by bias, and every funder of research has the potential to introduce a source of bias.”[33] Similarly, the fourth edition chapter notes that “[a]ll scientists have some sort of motivation for their work, and this does not preclude scientific knowledge building, so long as biased methodologies and interpretations are avoided.”[34] Their recognition that motivated reasoning is everywhere suggests that all research should receive scrutiny regardless of apparent or disclosed funding source.[35]

When it comes to providing examples of funding-effect distortions of science, Weisberg and Thanukos seem to blank on instances created by the lawsuit industry or by environmental NGOs. The reader should contrast how readily and stridently the authors point to bias in industry-sponsored research with how the authors tie themselves up with double negatives when making the same point about NGOs:

“That is not to suggest that government- or nongovernmental organization (NGO)-sponsored research is necessarily free from bias.”[36]

The cognitive dissonance is palpable. The only conclusion that could be drawn from such a locution is that Weisberg and Thanukos have not worked very hard to identify and disclose their own biases.

STATISTICS DONE POORLY

When it comes to explaining and discussing the role of statistical methods in the scientific process, Weisberg and Thanukos go off the rails. Here, the new chapter is an unmitigated disaster, one that should have been caught in the peer review and oversight process. The first sign of trouble became apparent upon checking the definition of “p-value” in the chapter’s glossary:

“p-value. A statistic that gives the calculated probability that the null hypothesis could be true even given the observed differences between conditions.”[37]

This definition is the transposition fallacy on steroids. Obviously, a p-value cannot be the probability that the null hypothesis “could be true,” when the procedure for calculating a p-value must assume that the null hypothesis is true, along with a specified probability model. Equally important, the p-value does not attach a probability to the null hypothesis at all; it gives the probability, assuming the null hypothesis, of observing data that diverge from the null at least as much as the data in the sample at hand. The statistics chapter in the Manual by Hall and Kaye states the meaning correctly. The coverage of statistical concepts by Weisberg and Thanukos should be studiously ignored.
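
To make the distinction concrete, here is a minimal sketch in Python, with purely hypothetical cure rates not drawn from any real trial. The calculation assumes the null hypothesis when computing the standard error, and the resulting p-value is the probability of data at least this divergent from the null, given the null; nothing in the calculation yields the probability that the null hypothesis is true.

    import math

    # Hypothetical trial results, for illustration only
    cured_drug, n_drug = 60, 100         # 60% cure rate on drug X
    cured_placebo, n_placebo = 50, 100   # 50% cure rate on placebo

    p1, p2 = cured_drug / n_drug, cured_placebo / n_placebo
    p_pool = (cured_drug + cured_placebo) / (n_drug + n_placebo)

    # Standard error computed ASSUMING the null hypothesis (no difference) is true
    se_null = math.sqrt(p_pool * (1 - p_pool) * (1 / n_drug + 1 / n_placebo))
    z = (p1 - p2) / se_null

    # Two-sided p-value: P(difference at least this large | null hypothesis is true)
    p_value = math.erfc(abs(z) / math.sqrt(2))
    print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
    # The p-value is not, and cannot without more be converted into, P(null is true | data).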

The outrageously incorrect definition of the p-value in the glossary is not an isolated error. The authors are clearly statistically challenged. In the text of their chapter, they describe the p-value incorrectly, consistent with their aberrant glossary entry:

“[In] the commonly used p-value approach, scientists compare a test hypothesis (e.g., that drug X is effective) to a null (e.g., that there is no difference in cure rates between those who took drug X and those who took a placebo). Scientists then calculate the probability that the null hypothesis could be true even with the observed difference between conditions (e.g., the cure rate of patients taking drug X compared to that of those taking a placebo).”[38]

Weisberg and Thanukos thus conflate frequentist and Bayesian statistics. They also obliterate the meaning of the confidence interval, an important concept for judges and lawyers to understand. Here is how the authors describe the confidence interval in their chapter:

“Evaluating estimates: In science (and in contrast to their lay meanings), the terms uncertainty and error refer to the variability of a set of data that is intended to estimate a single number. Uncertainty and error are generally expressed as a range, within which we are confident that, if the study were repeated, the new result would fall. Scientists often use a 95% confidence interval for this purpose.”[39]

Describing the confidence interval in the same sentence as “uncertainty and error” is bound to induce uncertainty and error. The confidence interval provides a range of estimates based upon random error, and addresses uncertainty only in the form of imprecision in the point estimate. There are, of course, myriad other kinds of uncertainty and error not captured by the confidence interval. The most important of the authors’ errors is their incorrect assertion that the confidence interval provides a range within which the result of a repetition of the study would fall. This is, again, a variant on the transposition fallacy that the authors commit in their definition of the p-value. The confidence interval provides a range of parameter values that would not be rejected, as alternative null hypotheses, by the data in the obtained sample. Because of random error, future samples would give different results, with different confidence intervals, which would not be co-extensive with the first obtained confidence interval. To be sure, the statistics chapter states the matter correctly, and the epidemiology chapter finally gets it correct in its text (after having mangled the concept in the second and third editions), but the epidemiology chapter perpetuates its previous errors in defining confidence intervals in its glossary. This sort of issue, and it is a serious one, could have been eliminated had there been meaningful peer review and editorial oversight for the consistency and accuracy of the Manual as a whole.
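
The point can be demonstrated with a short simulation sketch, again with assumed numbers. Over repeated sampling, roughly 95 percent of the intervals cover the true parameter; but a given 95 percent interval is not a range within which the point estimate from a repetition of the study will fall 95 percent of the time.

    import math
    import random

    random.seed(1)
    TRUE_MEAN, SD, N, REPS = 10.0, 2.0, 50, 10_000
    half_width = 1.96 * SD / math.sqrt(N)   # known-SD interval, to keep the sketch simple

    covers_truth = 0
    contains_next_estimate = 0
    for _ in range(REPS):
        mean = sum(random.gauss(TRUE_MEAN, SD) for _ in range(N)) / N
        lo, hi = mean - half_width, mean + half_width
        covers_truth += (lo <= TRUE_MEAN <= hi)

        # point estimate from an independent repetition of the same study
        next_mean = sum(random.gauss(TRUE_MEAN, SD) for _ in range(N)) / N
        contains_next_estimate += (lo <= next_mean <= hi)

    print(f"intervals covering the true mean:       {covers_truth / REPS:.2f}")            # about 0.95
    print(f"intervals containing the next estimate: {contains_next_estimate / REPS:.2f}")  # about 0.83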

Weisberg and Thanukos address statistical power in a way that may also mislead readers. They tell us that “[p]ower refers to a test’s ability to reject a hypothesis that is indeed false.” W&T at 88. If only it were so. The authors omit that power is the probability that, at a specified level of significance (say p < 0.05), and under a specified alternative hypothesis, sample size, and probability model, the sample result will lead to rejection of the null hypothesis in favor of the alternative. The authors then suggest, confusingly, that “[w]ell-designed studies have sufficient power to detect the differences of interest, but it may not be apparent when a test lacks power.”[40]

If the study at issue presents a confidence interval around a point estimate of interest, then it will be clear which alternative hypotheses are statistically compatible with the sample result at the pre-specified level of alpha (significance). Any point outside the interval would be rejected by such a test of significance, and so even the casual reader will have a rather good idea of what could and could not be rejected by the sample data. And of course, virtually every study will have low power to detect extremely small increased risks, say a relative risk of 1.00001, and most studies will have high power to detect risk ratios of over 1,000.
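
A back-of-the-envelope power calculation, sketched below with assumed numbers and the usual normal approximation for comparing two proportions, shows how power depends on the chosen significance level, the assumed alternative (effect size), and the sample size, and why power against a relative risk of 1.00001 is essentially nil while power against an enormous risk ratio approaches one.

    from statistics import NormalDist

    def power_two_proportions(p0, p1, n_per_arm, alpha=0.05):
        """Approximate power of a two-sided z-test comparing proportions p1 vs. p0,
        with n_per_arm subjects in each group."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_arm) ** 0.5
        z_effect = abs(p1 - p0) / se
        # probability the test statistic lands in the rejection region when p1 is the truth
        return NormalDist().cdf(z_effect - z_alpha) + NormalDist().cdf(-z_effect - z_alpha)

    baseline = 0.02   # assumed 2% baseline risk, illustration only
    for rr in (1.00001, 1.5, 3.0, 1000.0):
        p1 = min(baseline * rr, 0.99)
        print(f"relative risk {rr:>8}: power with 1,000 per arm = "
              f"{power_two_proportions(baseline, p1, 1000):.3f}")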

This new chapter on “How Science Works” also propagates some well-known fallacies about statistical significance testing. Implicit in the authors’ commission of the transposition fallacy is a conceptual and mathematical confusion between the coefficient of confidence (1-α) and the posterior probability of a hypothesis.

The authors’ mistake comes in their insistence upon labeling precision in a test result as “certainty.” In the quote below, the authors’ confusion is clear and obvious:

“Note that the 95% and 5% cutoffs are somewhat arbitrary, and a higher degree of confidence might be required if more certainty were desired—for example if an impactful policy decision depended on the conclusion.”[41]

An impactful [sic] policy decision might well call for more certainty, or a higher posterior probability, but a higher coefficient of confidence does not map onto the probability of the hypothesis at all. The authors’ confusion and conflation of the significance probability (alpha) with the Bayesian posterior probability arises elsewhere in the chapter:

“(1) A p-value lower than 0.05 does not prove that a null hypothesis is false. It is strong evidence, but there is a small chance that the difference observed could be the result of chance alone.

(2) Using a low p-value (e.g., 0.05) as a criterion for significance sets a high bar for rejecting the null hypothesis, minimizing the chance of getting a false positive… .”[42]

Again, a p-value less than five percent is hardly strong evidence in the context of large database studies, especially when there are multiple comparisons and the outcome is not the pre-specified outcome of the analysis (see the sketch at the end of this discussion). The authors’ confusion is on full display when they discuss the Zoloft birth defects litigation, where the Third Circuit affirmed the exclusion of plaintiffs’ expert witnesses’ causation opinions and the grant of summary judgment to the defendants. According to the authors’ narrative:

“plaintiffs’ expert’s testimony would have argued that multiple, nonsignificant associations between Zoloft use and birth defects indicated a causal relationship. The testimony was excluded because these results were consistent with a weak causal relationship (a small effect size), one that is ‘so weak that one cannot conclude that the risk is greater than that seen in the general population’.”[43]

Of course, in the Zoloft litigation, the excluded plaintiffs’ expert witnesses were caught red-handed at cherry picking, and at attempting to circumvent the lack of statistical significance with methodologically incorrect meta-analyses.[44]

If the risk of birth defects among children born to mothers who used Zoloft in pregnancy was no greater than that seen in the general population, then there would be no risk, not a risk “so weak” that it cannot be seen. Locutions such as “results were consistent with a weak causal relationship,” when the results were equally consistent with no causal relationship, suggest that the writers cannot bring themselves to say that the causal hypothesis was simply not supported at all. Of course, no study can exclude an increased risk of 0.01 percent, or a relative risk of 1.01, but at some point, when multiple attempts fail to reveal an increased risk, we may conclude that the proponents of the causal claim have failed to make their case.
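
To see why a p-value below 0.05 is not automatically “strong evidence,” particularly when many associations are examined, consider this back-of-the-envelope sketch; every number is assumed purely for illustration, and none comes from the Zoloft litigation or any real study.

    # Illustrative assumptions only: 1,000 associations tested in a large database,
    # 5% of them real, 80% power against the real ones, alpha of 0.05.
    tested, prior_true, power, alpha = 1000, 0.05, 0.80, 0.05

    true_effects = tested * prior_true
    null_effects = tested - true_effects

    true_positives = true_effects * power    # real effects reaching p < 0.05
    false_positives = null_effects * alpha   # null associations reaching p < 0.05 by chance

    ppv = true_positives / (true_positives + false_positives)
    print(f"share of 'significant' results that reflect a real effect: {ppv:.0%}")  # about 46%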

META-SHMETA-ANALYSIS

Weisberg and Thanukos address meta-analysis incompletely in the context of systematic reviews. The authors do not provide any insights into how meta-analyses are done, and more glaringly, they fail to mention that not all systematic reviews can or should result in quantitative syntheses of estimates of association. On the positive side, they state that meta-analyses are important in litigation, and that the application of rigorous methodologies should be required.[45] With clearly unintended irony, Weisberg and Thanukos offer, as support for their statement, the Paoli Railroad Yard case, “in which the exclusion of a contested meta-analysis was overturned.”[46]

Weisberg and Thanukos have stepped into the wet corner of a pigsty. The issue in the Paoli case arose from a meta-analysis of mortality rates associated with polychlorinated biphenyl (PCB) exposures. The district court excluded the proffered meta-analysis, not because it was unreliable, but because it was novel. Holding the case up in conjunction with a statement about the application of rigorous or reliable methodologies misses the relevant legal point.

The expert witness who proffered the meta-analysis in Paoli was William Nicholson, a physicist with no professional training in epidemiology. For his opinion that PCBs were causally associated with human liver cancer, Nicholson relied upon a non-peer-reviewed, unpublished report he wrote for the Ontario Ministry of Labor.[47] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[48]

The defense challenged Nicholson’s opinion, not under Rule 702, but under case law that pre-dated the Daubert decision.[49] The challenge included pointing out the unreliability of Nicholson’s meta-analysis, but also asserted (incorrectly) the novelty of meta-analysis generally. The district court sustained the defense objection on the ground of “novelty,” without reaching the reliability analysis.[50] The Third Circuit appropriately reversed and remanded for consideration of the reliability of Nicholson’s meta-analysis.[51]

The consideration of Nicholson’s “meta-analysis” never occurred on remand; plaintiffs’ counsel and their expert witnesses withdrew their reliance upon Nicholson’s analysis. Their about-face was highly prudent. Nicholson’s report presented SMRs (standardized mortality ratios); for the all-cancers statistic, he reported an SMR of 95. What Nicholson did, in this analysis and in all other instances, was simply to divide the observed number of deaths by the expected, and multiply by 100. This crude, simplistic calculation fails to present a standardized mortality ratio, which requires taking into account the age distributions of the exposed and unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of a fraud.
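
For readers unfamiliar with indirect standardization, the following sketch, with wholly hypothetical numbers unrelated to Nicholson’s data, shows what the calculation generally looks like: expected deaths are built up stratum by stratum from age-specific reference rates applied to the cohort’s person-years, so that each age group contributes with its proper weight, and only then is the observed total divided by the expected total.

    # Hypothetical cohort and reference-population figures, for illustration only
    strata = [
        # (age band, cohort person-years, observed deaths, reference rate per person-year)
        ("40-49", 12_000,  3, 0.0002),
        ("50-59",  8_000,  9, 0.0008),
        ("60-69",  3_000, 11, 0.0030),
    ]

    observed = sum(obs for _, _, obs, _ in strata)
    # Expected deaths accumulate age stratum by age stratum from the reference rates
    expected = sum(pyears * rate for _, pyears, _, rate in strata)

    smr = 100 * observed / expected
    print(f"observed = {observed}, expected = {expected:.1f}, SMR = {smr:.0f}")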

Nicholson’s Report was replete with many other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and still his analysis of the composite outcome was based upon only seven cases. Of those seven cases, some were not confirmed as primary liver cancer, and at least one was confirmed as not being a primary liver cancer.[52]

As noted, Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies, without weighting by the inverse of study variance, is not a valid meta-analysis; indeed, it is a well-known example of how to generate the error known as Simpson’s Paradox, which can change the direction or magnitude of any association.[53]
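
A toy example, using made-up numbers, shows how the simple addition of events across studies can reverse the direction of the individual study results, whereas a valid synthesis combines the study-specific estimates (for example, with inverse-variance weights).

    # Two hypothetical studies, illustrative numbers only. In each study the exposed
    # group has the higher risk, yet naively pooling the raw counts reverses the result.
    studies = [
        # (exposed cases, exposed N, unexposed cases, unexposed N)
        (20, 100, 160, 1000),   # Study A: risk ratio = 0.20 / 0.16 = 1.25
        (60, 1000, 5, 100),     # Study B: risk ratio = 0.06 / 0.05 = 1.20
    ]

    for i, (a, n1, c, n0) in enumerate(studies, start=1):
        print(f"Study {i}: risk ratio = {(a / n1) / (c / n0):.2f}")

    # "Meta-analysis" by simple addition of events and denominators across studies
    a = sum(s[0] for s in studies)
    n1 = sum(s[1] for s in studies)
    c = sum(s[2] for s in studies)
    n0 = sum(s[3] for s in studies)
    print(f"pooled raw counts: risk ratio = {(a / n1) / (c / n0):.2f}")  # about 0.48, direction reversed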

In citing to the Paoli case as a reversal of exclusion of a contested meta-analysis, Weisberg and Thanukos give a truncated analysis that misleads readers, judges, and lawyers. There never was a proper consideration of the reliability vel non of Nicholson’s meta-analysis in the Paoli litigation, and in the final analysis, the Paoli plaintiffs abandoned reliance upon Nicholson’s ill-conceived meta-analysis.

VIRTUE SIGNALING

Although there are no land acknowledgments for the property on which the Federal Judicial Center building sits, Weisberg and Thanukos miss few opportunities to let us know that they are woke scholars. There is the gratuitous and triggering “pregnant people,”[54] which begs any number of biological questions. Then there is the authors’ statement that they are limiting their focus to the “Western conception of science,” which raises another question: why would we call any other epistemically valid approach, from any corner of the globe, something other than “science”?[55]

Equally gratuitous are the authors’ endorsements of DEI and “diversity,” with overbroad generalizations that diversity per se advances science,[56] and a claim that “women, people of color, other historically oppressed groups, and non-Western people” are not taken seriously as scientists.[57] In over 40 years of litigating technical and scientific issues, I have never seen a judge or a lawyer disrespect an expert witness based upon sex, race, ethnicity, or national origin. Of course, I have seen expert witnesses treated roughly for propounding bad science, and that seems perfectly appropriate.


[1] See David Goodstein, ON FACT AND FRAUD: CAUTIONARY TALES FROM THE FRONT LINES OF SCIENCE (2010).

[2] Weisberg and Thanukos frequently refer to other chapters in the Manual, which suggests that their chapter was written late in the development of the Fourth Edition, and perhaps contributed to the delayed publication.

[3] Michael Weisberg & Anastasia Thanukos, How Science Works, in National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 47 (4th ed. 2025) [cited as W&T].

[4] See Michael Weisberg, University of Pennsylvania Philosophy, at https://philosophy.sas.upenn.edu/people/michael-weisberg.

[5] Anna Thanukos, Staff, available at https://ucmp.berkeley.edu/people/anna-thanukos/.

[6] W&T at 72-75.

[7] W&T at 81.

[8] W&T at 81.

[9] W&T at 81 & n.85 (emphasis added), citing Naomi Oreskes & Erik M. Conway, MERCHANTS OF DOUBT: HOW A HANDFUL OF SCIENTISTS OBSCURED THE TRUTH ON ISSUES FROM TOBACCO SMOKE TO GLOBAL WARMING (2010).

[10] W&T at 94-96.

[11] W&T at 95 n.120.

[12] Richard Van Noorden, More than 10,000 research papers were retracted in 2023 — a new record, 624 NATURE 479 (2023).

[13] W&T at 95.

[14] W&T at 55.

[15] W&T at 63, 68.

[16] W&T at 68.

[17] W&T at 65.

[18] W&T at 70.

[19] W&T at 71.

[20] W&T at 66.

[21] W&T at 75.

[22] W&T at 49.

[23] W&T at 83.

[24] W&T at 86 (citing Richter and Capra’s discussion of Milward in chapter one of the Manual, and Professor Gold’s article from the lawsuit industry celebratory conference on the Milward case).

[25] W&T at 99-100.

[26] W&T at 99.

[27] W&T 96 (emphasis added).

[28] IARC MONOGRAPHS ON THE IDENTIFICATION OF CARCINOGENIC HAZARDS TO HUMANS – PREAMBLE (2019), available at https://monographs.iarc.who.int/wp-content/uploads/2019/07/Preamble-2019.pdf

[29] Liesa L. Richter & Daniel J. Capra, The Admissibility of Expert Testimony, National Academies of Sciences, Engineering, and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 1, 32-33 (4th ed. 2025).

[30] W&T at 76.

[31] Kenneth J. Rothman, “Conflict of interest: the new McCarthyism in science,” 269 J. AM. MED. ASS’N 2782 (1993). See Schachtman, The Rhetoric and Challenge of Conflicts of Interest, TORTINI (July 30, 2013).

[32] W&T at 76 & n.67, citing Sergio Sismondo, Pharmaceutical Company Funding and Its Consequences: A Qualitative Systematic Review, 29 CONTEMP. CLINICAL TRIALS 109 (2008).

[33] W&T at 77.

[34] W&T at 59-60.

[35] W&T at 59-60.

[36] W&T at 76.

[37] W&T at 111.

[38] W&T at 87.

[39] W&T at 90.

[40] W&T at 88.

[41] W&T at 90 (emphasis added).

[42] W&T at 88.

[43] W&T at 90 (internal citations omitted).

[44] In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 449 (E.D. Pa. 2014); No. 12-md-2342, 2015 WL 314149, at *3 (E.D. Pa. Jan. 23, 2015) (rejecting proffered expert witness opinion based upon “cherry-picking of studies and data within studies”), aff’d, 858 F.3d 787 (3rd Cir. 2017).

[45] W&T at 99.

[46] W&T at 99 & n.134, citing In re Paoli R.R. Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990).

[47] William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (IDSP); IDSP Report No. 2 (Toronto Dec. 1987) [Report].

[48] Id. at 373.

[49] See United States v. Downing, 753 F.2d 1224 (3d Cir.1985).

[50] In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[51] In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 499 U.S. 961 (1991).

[52] Report, Table 22.

[53] See James A. Hanley, et al., Simpson’s Paradox in Meta-Analysis, 11 EPIDEMIOLOGY 613 (2000); H. James Norton & George Divine, Simpson’s paradox and how to avoid it, SIGNIFICANCE 40 (Aug. 2015); George Udny Yule, Notes on the theory of association of attributes in statistics, 2 BIOMETRIKA 121 (1903).

[54] W&T at 84.

[55] W&T at 50.

[56] W&T at 71 n. 52-54.

[57] W&T at 102.

Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part 5

March 7th, 2026

By ignoring Milward’s expert witnesses’ omissions from, and abridgements of, WOE and IBE, the appellate court blinded itself to these witnesses’ distortions of scientific method. The need for judgment, which the Milward court was keen to honor, does not mean that there are no aberrant or deviant judgments, or no disqualifying deviations from the standard of scientific care. The need for judgment must also allow for equipoise and uncertainty that stand in the way of an inculpatory or exonerative verdict. And then there is the business of questionable research practices that subvert causal judgment. The district court had recognized and acknowledged the showing of questionable research practices that pervaded Martyn Smith’s for-litigation opinions. The cheerleaders for Milward seem eager to obscure these practices by their insistence that causation is, after all, only a judgment.

The Milward decision, in its embrace of some truly aberrant methodology and judgment, and some absence of methodology, made some whoppers of its own. Martyn Smith’s incompetent analyses of the epidemiologic evidence had been thoroughly debunked in the district court, but the circuit court glibly adopted Smith’s characterizations. The appellate court failed to understand and come to grips with Smith’s rejiggering of data, and his inconsistent redefinition of exposures and outcomes in epidemiologic studies to make up new, fanciful results that favored his WOE-ful opinion. The appellate court also failed to understand that scientific judgment is not some vague, amorphous, unstructured decision that turns on whatever looks to be “explanatory.” Even the International Agency for Research on Cancer, which issues hazard classifications that are distorted by non-scientific precautionary-principle reasoning, insists that three streams of evidence (epidemiologic, toxicologic, mechanistic) be considered separately, in accordance with criteria, with attention to the validity of each study, and synthesized into a judgment of causality following a carefully structured analysis.[1]

The appellate court in Milward took the demonstration of Smith’s failure to calculate odds ratios correctly to be something that merely went to the weight, not the admissibility, on the theory that a jury, which does not have access to the Reference Manual or to the actual studies as published, could sort it all out. And yet, when the court improvidently set out a definition of what an odds ratio is, it bungled the definition beyond understanding:

“An odds ratio represents the difference in the incidence of a disease between a population that has been exposed to benzene and one that has not.”[2]

The court’s definition is not even wrong. The difference between the incidence of a disease in an exposed group and in a non-exposed group is the risk difference. It is not an odds ratio. Perhaps the court might have realized what most third graders know, that there is a difference between a ratio (division) and a difference (subtraction). And of course, the odds of exposure are not the same as the incidence of a disease. The relevant odds ratio represents the odds of exposure among cases with APML diagnoses divided by the odds of exposure among study subjects without APML. The odds ratio does not itself involve measurements of incidence, although in some cases the odds ratio will approximate a risk ratio, which does involve a ratio of incidences. This is not some hyper-technicality; it is a vivid display that Chief Judge Lynch, writing for a panel of three judges of the First Circuit, had no idea of what she was reviewing or writing.
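
A worked example, using a hypothetical cohort-style 2x2 table, makes the distinctions plain; the numbers are invented solely to show the arithmetic, and in a case-control study the odds ratio would instead be computed from the odds of exposure among cases versus controls.

    # Hypothetical 2x2 table (illustration only):
    #                disease    no disease    total
    #   exposed         30         970        1,000
    #   unexposed       10         990        1,000
    risk_exposed   = 30 / 1000    # 0.030
    risk_unexposed = 10 / 1000    # 0.010

    risk_difference = risk_exposed - risk_unexposed    # subtraction: 0.020
    risk_ratio      = risk_exposed / risk_unexposed    # division: 3.00
    odds_ratio = (risk_exposed / (1 - risk_exposed)) / (risk_unexposed / (1 - risk_unexposed))

    print(f"risk difference = {risk_difference:.3f}")
    print(f"risk ratio      = {risk_ratio:.2f}")
    print(f"odds ratio      = {odds_ratio:.2f}")   # 3.06; approximates the risk ratio when disease is rare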

Richter and Capra devote two pages to a discussion of the Milward case and its embrace of WOE and IBE. There is not, in this discussion, a single adjective of approval or disapproval. The attention to this one intermediate appellate court opinion far exceeds that given to any other case decided below the Supreme Court, and an engaged reader must ask why the authors of the first chapter of the new Reference Manual wrote about this case at all, especially given the 2023 amendments to Rule 702, which suggest that Milward was bad law when decided in 2011, and clearly and emphatically bad law in December 2025, when the new Manual was published.

The chapter provides one not-so-subtle clue of the authors’ intent. At the conclusion of their extended, uncritical, and incomplete exposition of Milward,[3] Richter and Capra refer the reader to a law review symposium,[4] “[f]or a detailed analysis of the Milward decision and the weight of the evidence approach to scientific reasoning.” Like Richter and Capra’s coverage of Milward, the cited symposium was hardly an objective analysis; rather, it was more like a drunken celebration at a family reunion.

There have been many law review articles that have discussed the Milward case, but Richter and Capra chose to cite one particular symposium, which was sponsored by two corporations, the Center for Progressive Reform (CPR) and the Robert A. Habush Foundation. CPR is a not-for-profit corporation. Its website describes CPR as a “research and advocacy organization that works in the service of responsive government; climate justice, mitigation, and adaptation; and protecting against environmental harm.”[5] CPR describes one of its key activities as defending science from corporate interference. Presumably its own corporate activities and those of the lawsuit industry are acceptable, but those of corporate manufacturing industry are not. From reviewing CPR’s website, it is not clear that CPR believes manufacturing corporations should even be allowed to defend against lawsuits. Milward’s retained expert witness Carl Cranor is a “member scholar” at CPR, which makes CPR’s sponsorship of the symposium rather incestuous.[6]

CPR is also apparently comfortable with one highly politicized “corporation,” namely the American Association for Justice (AAJ), the trade group for the American lawsuit industry.[7] The AAJ describes itself as a corporation, or a “collective,” that supports plaintiff trial lawyers as their “collective voice … on Capitol Hill and in courthouses across the nation … .” The Robert A. Habush Foundation is endowed by the AAJ, and serves its “educational” mission. Through the Habush Foundation, the AAJ funds educational programs, “think tanks,” and writing projects designed to influence judges, law professors, lawyers, and the public, on issues of importance to the AAJ: “the civil justice system and individual rights” for bigger, better, and more profitable litigation outcomes. The AAJ may be a “not-for-profit” corporation, but it represents the interests of one of the most powerful and wealthiest interest groups in American society, the plaintiffs’ bar.

The Milward symposium agenda and papers from its participants were published at the website of the Wake Forest Journal of Law & Policy, but are now marked as “currently private. If you would like to request access, we’ll send your username to the site owner for approval.”

The symposium cited by Richter and Capra for “analysis” was very much a family affair. The choice of venue, at the Wake Forest Law School, was connected to the web of interests involved. CPR board member Sid Shapiro is a law professor at Wake Forest. Shapiro presented at the symposium, along with the Wake Forest professor Michael Green. Cranor, Shapiro’s CPR colleague and an expert witness for the plaintiff, presented as well.[8] Only one practicing lawyer presented at the symposium, Steven Baughman Jensen, a past chair of the AAJ’s Section on Toxic, Environmental, and Pharmaceutical Torts. Jensen represented Milward, and hired Cranor as one of the plaintiff’s expert witnesses. Jensen’s contribution to the symposium was published, along with Cranor’s, in the symposium proceedings in volume 3, no. 1 of the Wake Forest Journal of Law & Policy,[9] which is now also marked private. Jensen also published an abbreviated paean to Milward in the AAJ’s trade journal.[10] No defense counsel or defense expert witness participated in the symposium referenced by Richter and Capra.

Consistent with the financial, advocacy, and political interests of the symposium sponsors, the articles are almost all partisan high-fives for the Milward decision. Writing for the Federal Judicial Center and the National Academies, the authors of a chapter on the law of expert witnesses, a legal issue, for the Reference Manual, should have been aware of the partisan nature of the CPR- and AAJ-sponsored symposium. They should have flagged the advocacy nature of the symposium, and identified the funding sources and the conflicts created. Furthermore, Richter and Capra should have cited papers that criticized the Milward case, from various perspectives, including for its failure to adhere to the law of Rule 702.[11] Their failure to do so is a significant shortcoming of the chapter.


[1] IARC MONOGRAPHS ON THE IDENTIFICATION OF CARCINOGENIC HAZARDS TO HUMANS – PREAMBLE (2019).

[2] Milward, 639 F.3d at 23.

[3] Richter & Capra at 33 n.96 (“For a detailed analysis of the Milward decision and the weight of the evidence approach to scientific reasoning…”).

[4] Symposium: Toxic Tort Litigation: After Milward v. Acuity Products, 3 WAKE FOREST JOURNAL OF LAW & POLICY 1 (2013).

[5] The Center for Progressive Reform, at https://progressivereform.org/, last visited on Feb. 24, 2026

[6] Carl Cranor Biography, Center for Progressive Reform, Member Scholars, at https://progressivereform.org/member-scholars/

[7] The AAJ was previously known by the more revealing name, Association of Trial Lawyers of America (ATLA®). 

[8] Carl F. Cranor, Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation, 3 WAKE FOREST JOURNAL OF LAW & POLICY 105 (2013).

[9] Steve Baughman Jensen, Sometimes Doubt Doesn’t Sell: A Plaintiffs’ Lawyer’s Perspective on Milward v. Acuity Products, 3 WAKE FOREST JOURNAL OF LAW & POLICY 177 (2013).

[10] Steve Baughman Jensen, Reframing the Daubert Issue in Toxic Tort Cases, 49 TRIAL 46 (Feb. 2013).

[11] See Eric Lasker, Manning the Daubert Gate: A Defense Primer in Response to Milward v. Acuity Specialty Products, 79 DEF. COUNS. J. 128, 128 (2012); David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 29, 53-58 (2013); David E. Bernstein & Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 WM. & MARY L. REV. 1, 33 (2015); Richard Collin Mangrum, Comment on the Proposed Revision of Federal Rule 702: “Clarifying” the Court’s Gatekeeping Responsibility over Expert Testimony, 56 CREIGHTON LAW REVIEW 97, 106 & n.45 (2022); Thomas D. Schroeder, Toward a More Apparent Approach to Considering the Admission of Expert Testimony, 95 NOTRE DAME L. REV. 2039, 2045 (2020); Lawrence A. Kogan, Weight of the Evidence: A Lower Expert Evidence Standard Metastasizes in Federal Court, Washington Legal Foundation Critical Legal Issues WORKING PAPER Series no. 215 (Mar. 2020); Note, Judicial Conference Amends Rule 702. — Federal Rule of Evidence 702, 138 HARV. L. REV. 899, 903 (2025); Nathan A. Schachtman, Desultory Thoughts on Milward v. Acuity Specialty Products, DOI: 10.13140/RG.2.1.5011.5285 (Oct. 2015), available at https://www.researchgate.net/publication/282816421_Desultory_Thoughts_on_Milward_v_Acuity_Specialty_Products.

Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part 4

March 5th, 2026

In the district court, Judge George O’Toole conducted a pre-trial hearing over four days, and heard testimony from Smith and Cranor, as well as from defense expert witnesses. Judge O’Toole’s published opinion carefully and accurately stated the facts and the applicable law, and presented a well-reasoned judgment as to why Smith’s opinion was not admissible under Rule 702. Without admissible opinions on general causation to support Milward’s case, Judge O’Toole granted summary judgment to the defendants.

Milward appealed the judgment. A panel of the First Circuit heard argument, and reversed in an opinion that is riddled with serious errors.[1] In reviewing the district court’s application of Rule 702, the panel, in an opinion written by Chief Judge Lynch, credulously accepted most of Smith’s and Cranor’s arguments that an ill-defined WOE approach is an acceptable method of guiding scientific judgment. Cranor equated WOE, as used by Smith, with the approach that Sir Austin Bradford Hill described, in 1965, for identifying causal associations from epidemiologic data.[2] Chief Judge Lynch’s opinion accurately tracked Cranor’s and Milward’s lawyers’ misrepresentations about Sir Austin’s paper:

“Dr. Smith’s opinion was based on a ‘weight of the evidence’ methodology in which he followed the guidelines articulated by world-renowned epidemiologist Sir Arthur [sic] Bradford Hill in his seminal methodological article on inferences of causality. See Arthur [sic] Bradford Hill, The Environment and Disease: Association or Causation?, 58 Proc. Royal Soc’y Med. 295 (1965).

Hill’s article explains that one should not conclude that an observed association between a disease and a feature of the environment (e.g., a chemical) is causal without first considering a variety of ‘viewpoints’ on the issue.”[3]

The quoted language from the First Circuit opinion, which twice refers to “Arthur Bradford Hill” rather than Austin Bradford Hill, may suggest that neither Chief Judge Lynch nor her judicial colleagues and their law clerks read the classic paper. An even stronger indication that the appellate court did not actually read the paper is its equating of WOE with the Bradford Hill viewpoints, without consideration of the necessary predicate for those nine viewpoints. In his short paper, Sir Austin clearly spelled out the foundation needed before parsing the nine viewpoints:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”[4]

Whatever Sir Arthur had to say about the matter, Sir Austin defined the starting point of causal analysis as an association free of invalidating bias and random error. The Milward decision ignored this all-important predicate for assessing the various considerations that might allow a valid association to be considered a causal association.[5] The resulting abridgement was a failure of scientific due process that distorted the Bradford Hill paper.

The First Circuit amplified its error when it asserted that, among the nine considerations, “no one type of evidence must be present before causality may be inferred.”[6] Although Sir Austin said something similar, one of the considerations he noted was “temporality,” the requirement that the putative cause come before the effect. Most scientists would consider this consideration essential, unless they were observing events moving faster than the speed of light. The other eight considerations are more dependent upon the context of the exposures and outcomes of interest, but surely the strength and the consistency of the clear-cut association across multiple studies are extremely important considerations.

The First Circuit proceeded from misreading Sir Austin’s paper to misunderstanding another paper invoked by Cranor and by Milward’s lawyers. Carelessly tracking Cranor, the appellate court suggested that there was no “hierarchy of evidence”:

“For example, when a group from the National Cancer Institute was asked to rank the different types of evidence, it concluded that ‘[t]here should be no such hierarchy.’ Michele Carbon [sic] et al., Modern Criteria to Establish Human Cancer Etiology, 64 Cancer Res. 5518, 5522 (2004); see also Sheldon Krimsky, The Weight of Scientific Evidence in Policy and Law, 95 Am. J. Pub. Health S129, S130 (2005).”[7]

This quoted language from the Milward opinion shows how slavishly and credulously the court adopted and regurgitated plaintiff’s argument. Sheldon Krimsky was actively involved with SKAPP, and his article was presented at the SKAPP-funded Coronado Conference, discussed earlier in this series. Krimsky actually acknowledged that although “the term [WOE] is applied quite liberally in the regulatory literature, the methodology behind it is rarely explicated.”

As for the article by Carbon [sic], this publication never rejected a hierarchy of evidence. The court’s language, quoted above, follows immediately after the court’s discussion of Sir Austin’s nine types of corroborating evidence that would support the causal interpretation of an association. As such, the court seems to imply, incorrectly, that there was no hierarchy of these considerations.[8]

The court’s language also suggests that the quoted statement came from the National Cancer Institute (NCI), but its provenance is quite different. The cited article’s lead author, Michele Carbone (not Carbon), was reporting on a workshop held at an NCI building; it was not an official NCI event or publication. The NCI did not sponsor or conduct the meeting, and Carbone’s paper was not an official statement of the NCI. Carbone’s paper was styled a “Meeting Report,” and was published as a paid advertisement in Cancer Research, not as a scholarly article in the Journal of the National Cancer Institute.

The discipline of epidemiology was not strongly represented at the meeting; most of the chairpersons and scientists in attendance were pathologists, cell biologists, virologists, and toxicologists. The meeting report reflects the interests and focus of the scientists in attendance. The lead author, Michele Carbone, a pathologist at the University of Hawaii, was an enthusiastic proponent of Simian Virus 40 as a cause of mesothelioma, a hypothesis that has not fared terribly well in the crucible of epidemiologic science.

The cited article did report some suggestions for modifying Bradford Hill’s criteria in light of modern molecular biology, as well as the group’s sense that there was no “hierarchy” in which epidemiology sat at the top of the disciplines. The group definitely did not address the established concept that some types of epidemiologic studies are analytically more powerful than others in supporting causal inferences, that is, the hierarchy of epidemiologic evidence. The group also did not address or reject a ranking of the importance of Bradford Hill’s nine viewpoints. There was nothing remarkable about the tumor biologists’ statement that in some cases causality can be determined by careful identification of genetic inheritance or molecular biological pathways. There was no evidence of this sort in the Milward case, and the citation by Cranor and Milward’s lawyers was nothing more than hand waving.

Carbone’s meeting report summarizes informal discussion sessions at the 2003 meeting. Those in attendance broke out into two groups, one chaired by Brooke Mossman, a pathologist, and the other chaired by Dr. Harald zur Hausen, a virologist. The meeting report included a narrative of how the two groups responded to twelve questions. Drawing from plaintiff’s (and Cranor’s) argument, the court based its citation to this meeting report upon one sentence in Carbone’s report, answering one of the twelve questions:

“6. What is the hierarchy of state-of-the-art approaches needed for confirmation criteria, and which bioassays are critical for decisions: epidemiology, animal testing, cell culture, genomics, and so forth?

There should be no such hierarchy. Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.”[9]

Considering the fuller context of the meeting, there is nothing particularly surprising about this statement. The full question and answer in the meeting report do not even remotely support the weight given to them by the court. There was quite a bit of disagreement among meeting participants over criteria for different kinds of carcinogens, as seen in the report on another question:

“2. Should the criteria be the same for different agents (viruses, chemicals, physical agents, promoting agents versus initiating DNA-damaging agents)?

There were different opinions. Group 1 debated this issue and concluded that the current listing of criteria should remain the same because we lack sufficient evidence to develop a separate classification. Group 2 strongly supported the view that it is useful to separate the biological or infectious agents from chemical and physical carcinogens due to their frequently entirely different mode of action.”[10]

Carbone and the other authors of the meeting report noted the importance of epidemiology for general causation, while acknowledging its limitations for determining specific causation:

“Concerning the respective roles of epidemiology and molecular pathology, it was noted that epidemiology allows the determination of the overall effect of a given carcinogen in the human population (e.g., hepatitis B virus and hepatocellular carcinoma) but cannot prove causality in the individual tumor patient.”[11]

Clearly, the report was not disavowing the necessity for epidemiology to confirm carcinogenicity in humans. Specific causation of Mr. Milward’s APML was irrelevant to his first appeal to the First Circuit. Carbone’s report emphasized the need to integrate epidemiologic findings with molecular biology; it did not suggest that epidemiology was not necessary or urge that epidemiology be ignored or disregarded:

“A general consensus was often reached on several topics such as the need to integrate molecular pathology and epidemiology for a more accurate and rapid identification of human carcinogens.”[12]

                 * * * * *

“Ideally, before labeling an agent as a human carcinogen, it is important to have epidemiological, experimental animals, and mechanistic evidence (molecular pathology).”[13]

The court’s implication that there was “no hierarchy of evidence” is unsupported by the meeting report. The suggestion that WOE allows some loosey-goosey, ad hoc, unstructured assessment of diverse lines of evidence is rejected in the meeting report with a careful admonition about the lack of validity of some animal models and mechanistic research:

“Moreover, carcinogens and anticarcinogens can have different effects in different situations. As shown by the example of addition of β-carotene in the diet, β-carotene has chemopreventive effects in many experimental systems, yet it appears to have increased the incidence of lung cancer in heavy smokers. Animal experiments can be very useful in predicting the carcinogenicity of a given chemical. However, there are significant differences in susceptibility among species and within organs in the same species, and differences in the metabolic pathway of a given chemical among human and animals could lead to error.”[14]

Inference to the Best Explanation

The First Circuit asserted that “no serious argument can be made that the weight of the evidence approach is inherently unreliable.”[15] As discussed above, this assertion is demonstrably false. In his testimony at the Rule 702 pre-trial hearing, Cranor classified WOE as based upon “inference to the best explanation,” and the First Circuit obsequiously accepted this claim. In articulating and accepting Cranor’s reduction of scientific method to IBE, the appellate court seemed unaware that IBE as an epistemic theory has been roundly criticized. In a very general sense, IBE draws on Charles Peirce’s description of abduction as a mode of reasoning, although many writers have been eager to distinguish abduction from IBE. Bas van Fraassen criticized IBE as lacking merit as a mode of argument, in a way germane to Cranor’s presentation of the notion and to the First Circuit’s uncritical acceptance of it:

“As long as the pattern of Inference to the Best Explanation—henceforth, IBE—is left vague, it seems to fit much rational activity. But when we scrutinize its credentials, we find it seriously wanting.”[16]

The IBE approach raises thorny problems of knowing how to discern the best explanation, or how to tell whether an explanation is simply the best of a bad lot. Other philosophers of science have questioned why explanatoriness should matter as opposed to predictive ability and resistance to falsification upon severe or robust testing.

In the hands of Smith and Cranor, these philosophical quandaries become largely beside the point. For Smith and Cranor, IBE becomes the telling of just-so stories, which transform “but for” causation into “could be” causation. Drawing directly from Cranor, the Circuit Court explained that an inference to the best explanation involves six general steps for scientists:

“(1) identify an association between an exposure and a disease,

(2) consider a range of plausible explanations for the association,

(3) rank the rival explanations according to their plausibility,

(4) seek additional evidence to separate the more plausible from the less plausible explanations,

(5) consider all of the relevant available evidence, and

(6) integrate the evidence  using professional judgment to come to a conclusion about the best explanation.”[17]

Of course assessing causation requires judgment, but Cranor and Smith radically abridge the process of judging by eliminating:

  • the robust testing of, and attempts to falsify, hypotheses,
  • the weighting of study designs,
  • the pre-specification of the kinds of studies to be included or excluded,
  • the assignment of weights to different kinds and qualities of studies, and
  • the pre-specification of criteria of study validity, experimental design, consistency, and exposure-response.

The vague, contentless IBE and WOE, in the hands of Smith, operate just as van Fraassen anticipated. With Cranor’s “philosophizing,” IBE creates a permission structure for reaching any desired conclusion. Indeed, Cranor’s approach makes no allowance for careful scientists’ withholding judgment when the evidence is inadequate to the task. Furthermore, Cranor’s approach and the Milward decision would cheerily approve the cherry picking of studies and of data within studies, post hoc weighing of evidence, and even the fabrication and rejiggering of evidence, all of which were on display in Smith’s for-litigation opinion.

The First Circuit uttered its mantra of approval of Smith’s scientific delicts in language that became the target of the revision of Rule 702 in 2023:

“the alleged flaws identified by the [district] court go to the weight of Dr. Smith’s opinion, not its admissibility. There is an important difference between what is unreliable support and what a trier of fact may conclude is insufficient support for an expert’s conclusion.”[18]

Earlier in its opinion, the appellate court quoted from the version of Rule 702 in effect when it heard the appeal:

“if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.”[19]

Sufficiency, reliability, and validity were all preliminary questions to be decided by the court as part of its gatekeeping responsibility. The appellate court simply ignored the law in its decision to green-light Smith’s testimony.

                    (to be continued)


[1] Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012).

[2] Austin Bradford Hill, The Environment and Disease: Association or Causation?, 58 PROC. ROYAL SOC’Y MED. 295 (1965).

[3] Milward, 639 F.3d at 17.

[4] Id. at 295.

[5] See Frank C. Woodside, III & Allison G. Davis, The Bradford Hill Criteria: The Forgotten Predicate, 35 THOMAS JEFFERSON L. REV. 103 (2013).

[6] Milward, 639 F.3d at 17.

[7] Id. (internal citations omitted).

[8] The Reference Manual chapter on medical testimony carefully discusses the hierarchy of evidence as it factors into the assessment of medical causation. John B. Wong, Lawrence O. Gostin & Oscar A. Cabrera, Reference Guide on Medical Testimony, in National Academies of Sciences, Engineering and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 687, 723-24 (3rd ed. 2011); John B. Wong, Lawrence O. Gostin & Oscar A. Cabrera, Reference Guide on Medical Testimony, in National Academies of Sciences, Engineering and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 1105, 1150-52 (4th ed. 2025). Interestingly, the chapter on epidemiology in the third edition of the Reference Manual cited to the Carbone workshop with apparent approval, but the same chapter in the fourth edition has dropped the reference. Compare Michael D. Green, D. Michal Freedman & Leon Gordis, Reference Guide on Epidemiology, in National Academies of Sciences, Engineering and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 549, 564 n.48 (3rd ed. 2011) with Steve C. Gold, Michael D. Green, Jonathan Chevrier & Brenda Eskenazi, Reference Guide on Epidemiology, in National Academies of Sciences, Engineering and Medicine & Federal Judicial Center, REFERENCE MANUAL ON SCIENTIFIC EVIDENCE 897 (4th ed. 2025).

[9] Carbone at 5522.

[10] Carbone at 5521.

[11] Carbone at 5518 (emphasis added).

[12] Carbone at 5518.

[13] Carbone at 5519.

[14] Carbone at 5521.

[15] Milward, 639 F.3d at 18-19.

[16] Bas van Fraassen, LAWS AND SYMMETRY 131 (1989).

[17] Milward, 639 F.3d at 18.

[18] Milward, 639 F.3d at 22.

[19] Milward, 639 F.3d at 14.

Reference Manual’s Chapter on Expert Witness Testimony Admissibility – Part 3

March 2nd, 2026

Richter and Capra treat WOE in Justice Stevens’ lone dissenting opinion in Joiner as if it were the law. Of course, it was not; nor was it a particularly insightful analysis of scientific method, Rule 702, or the law of expert witnesses. The Manual authors elevate WOE by their complete failure to offer any criticisms of it, or to cite the scientific and legal scholars who have criticized WOE.

Richter and Capra do cite to a couple of cases that are skeptical of expert witnesses who had offered WOE opinions, but they fail to cite to any cases that disparage WOE itself.[1] In aggravation of their misplaced focus on the Joiner dissent, Richter and Capra proceed to spend two full pages on the Milward case, which had appeared posthumously in Professor Berger’s version of the law chapter in the third edition (2011) of the Reference Manual. The attention given to Milward in the fourth edition is greater than that given to any other non-Supreme Court case, including Frye. Richter and Capra offer no commentary or analysis critical of the case, although many legal commentators have criticized the Milward opinion on WOE.[2]

Richter and Capra’s chapter fails to note that a dark cloud hangs over the Milward case because of the unethical non-disclosures in CERT’s amicus brief, filed in support of reversing the exclusion of CERT’s founders, Carl Cranor and Martyn Smith:[3] the brief did not disclose that relationship, CERT’s funding of Smith’s research, or CERT’s involvement in shaking down corporations in California for Prop 65 bounties.

In their extensive coverage of the 2011 Milward decision, Richter and Capra failed to report that after the First Circuit reversed and remanded, the trial court again excluded plaintiffs’ expert witnesses for failing to give a valid opinion on specific causation. On the second appeal, the First Circuit affirmed the exclusion of specific causation expert witness testimony and the entry of final judgment for defendants.[4] Given that the first appellate decision was no longer necessary to the final disposition of the case, it is questionable whether there is any holding with respect to general causation in the case.

The most salient aspect of Richter and Capra’s uncritical coverage of the Milward case is their complete failure to identify the legal errors made by the First Circuit in its decision on Rule 702 and general causation. As the Reporter to the Rules Advisory Committee, Professor Capra was intimately involved in many meetings and memoranda that addressed the failures of courts to engage properly in gatekeeping. These failings were the basis for the 2023 amendments to Rule 702. The Milward decision in 2011 managed to check almost every box for bad decision making: the appellate panel ignored the text of Rule 702, disregarded Supreme Court precedent in the Joiner case, relied upon overruled, obsolete, pre-Daubert decisions, ignored the policy considerations urged by the Supreme Court, bungled basic scientific concepts, and egregiously and credulously endorsed WOE as a scientific methodology. Professor David E. Bernstein has pointed to the 2011 Milward decision as “the most notorious,” and “[t]he most prominent example of such judicial truculence” in resisting the requirements of Rule 702, as it existed in 2011.[5]

Milward is an important case, much as the Berenstain Bears stories are important and helpful in teaching children what not to do. Unfortunately, Richter and Capra discuss Milward in a way that might lead readers to believe that the case represents a reasonable or proper treatment of the science involved in the case. To correct this biased coverage of Milward, readers will have to roll up their sleeves and actually look at what the court did and did not do, and what scientific methodology issues were involved.

Perhaps the best place to begin is the beginning. Brian Milward filed a lawsuit in which he claimed that he was exposed to benzene as a refrigerator technician.[6] He developed acute promyelocytic leukemia (APML), and claimed that he had been exposed to benzene from having used products made or sold by roughly two dozen companies. APML is a rare disease, type M3 of acute myeloid leukemia (AML), defined by specific chromosomal abnormalities that are necessary but not sufficient to result in APML. APML has an incidence of fewer than five cases per million per year. APML occurs with equal frequency in both sexes; there are no known environmental or occupational causes of APML.[7] APML occurs in the general population without benzene exposure, and its occurrence in all populations is sparse. There are no biomarkers suggesting that some putative benzene-related mechanism is involved in any subset of APML cases, biomarkers that might otherwise have identified how rarely benzene is involved in causation.

Milward’s General Causation Expert Witness, Martyn T. Smith

Milward did not serve a report from an epidemiologist, or anyone with significant expertise in epidemiology. His only general causation expert witness was Martyn Smith, a toxicologist, who testified that the “weight of the evidence” supported his opinion that benzene exposure causes APML.[8] As noted above, Smith is a member of the advocacy group, the Collegium Ramazzini; and for over 30 years, he has been a frequent testifier for plaintiffs in chemical exposure cases.[9]

Despite the low but widespread prevalence of APML in the general population, with no sex specificity, and the absence of any identifying biomarker of supposed benzene-related etiology in individual cases, Smith maintained that epidemiology was not necessary to reach a causal opinion about benzene and APML. The principal thrust of Smith’s proffered testimony was that APML is a plausible outcome of benzene exposure because benzene can cause other varieties of AML by structurally altering chromosomes (a clastogenic effect), breaking them and causing rearrangements.[10]

The trial court found that Smith’s extrapolations were problematic and lacking in supporting evidence. The clear differences among AML subtypes made the extrapolation to APML, a unique clinical entity, inappropriate. The characteristic translocation in APML is absent from other varieties of AML, and APML, unlike other AML varieties, is treatable with all-trans retinoic acid.[11]

Smith advanced the speculation that benzene targeted cells in the pathway of leukemic transformation to APML, but the state of the science was clearly devoid of sufficient evidence to show that benzene was involved in the APML translocations. Although the parties agreed that mechanistic evidence showed that benzene can effectuate chromosomal damage characteristic of some AML subtypes other than APML, the trial court found that:

“[n]o evidence has been published making a similar connection between benzene exposure and the t(15;17) translocation, characteristic of APL [APML].”[12]

The trial court assessed Smith’s extrapolation, from benzene’s clastogenic effect in breaking and rearranging chromosomes to induce some types of AML, to its causing the specific APML t(15;17) translocation, as a

“bull in the china shop generalization: since the bull smashes the teacups, it must also smash the crystal. Whether that is so, of course, would depend on the bull having equal access to both teacups and crystal. If the teacups were easily knocked over, but the crystal securely stored away, a reason would exist to question, if not to reject, the proposition that the crystal was in as much danger as the teacups.”[13]

The trial judge clearly saw that Smith’s plausibility argument proved too much, and would support attributing virtually any disease to benzene through a putative mechanism of breaking chromosomes.

Lacking the courage of his convictions, Smith, a non-epidemiologist, proceeded to offer opinions about the epidemiology of benzene and APML, some of them quite fanciful. No published or unpublished study showed a statistically significant increase in APML among benzene-exposed workers. The most Smith could draw from the published epidemiologic studies on benzene was one Chinese study that found a small risk ratio, without even nominal statistical significance: a crude odds ratio of 1.42 for benzene exposure and APML. Despite Smith’s hand waving about lack of power,[14] this Chinese study suggested that chloramphenicol was a risk factor for APML (M3), and it was able to identify a nominally statistically significant association between benzene and another sub-type of AML (M2a), with an odds ratio of 1.54.[15]

Smith offered no meta-analysis to show that the available studies collectively established a summary estimate of increased risk for APML among benzene workers. Undaunted, Smith set about to re-jigger the numbers in published studies to make something out of nothing. Neither physician nor epidemiologist, Smith altered diagnoses and exposure status as reported in published papers so that his reclassified cases and controls would yield increased risk ratios where none existed. These re-analyses were done speculatively, inconsistently, and incompetently, and they were driven by the motivation to reach a preferred result. His approach was unsupported, unprincipled, and lacking in any reasonable methodology. The proffered re-analyses were never published, never presented at a professional society meeting, and could never comply with the standards epidemiologists use in their non-litigation activities. As a toxicologist, Smith did not have any non-litigation epidemiologic activities of note.

Smith’s representation of the relevant epidemiologic methods and studies was misleading and contained numerous errors that cumulatively led to erroneous conclusions; his own re-jiggering was carried out to reach a preferred conclusion to support plaintiff’s litigation case.[16]

One of the epidemiologic studies relied upon by Smith was Golomb (1982).[17] This study did not explore associations with benzene; it was a study of insecticides, chemicals and solvents, and petroleum. Crude oil contains very little benzene, typically about 0.1 percent.[18] Smith, without any evidentiary support, assumed that petroleum exposure equated to benzene exposure.

There were eight cases of leukemia with petroleum exposure; one of those cases was APML. The authors of Golomb (1982) reported that this particular APML case was actually a crane operator.[19]

In analyzing published epidemiologic studies, Smith insisted that he could re-classify APML cases in control subjects as non-APML when the karyotype was normal. Karyotype analysis identifies the defining chromosomal translocation in APML, which is found in virtually all such cases. The obvious result of Smith’s ad hoc reclassifications was to increase risk ratios for APML among benzene-exposed subjects. His arbitrary reclassifications of data allowed him to create the result he desired. In reviewing other published studies, Smith insisted that a normal karyotype did not require reclassifying cases out of the APML category, when that approach would yield a risk ratio above one.

Taking data from the Golomb 1982 paper, Smith attempted to inflate his calculation of an odds ratio that would support his causation opinion. He arbitrarily discarded two APML cases from the non-exposed group, and he discarded eight non-APML cases from the exposed subjects. He did not report p-values or confidence intervals for his reanalyses. At the hearing, the defense epidemiologist showed that Smith’s rejiggered odds ratio (1.51) had a p-value of 0.72, and a 95 percent confidence interval of 0.15 to 14.91. Not only was the result not statistically significant, the confidence interval showed that a wide range of alternative hypotheses, spanning well over an order of magnitude, could not be rejected on the sample data at an alpha of 0.05. Without the rejiggering of exposed and unexposed cases, the odds ratio would have been 0.71, with p = 0.76. All results, both as reported in the published article and as rejiggered by Smith, were highly compatible with no association whatsoever.
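
For readers who want to see the mechanics behind such figures, the following is a minimal sketch, in Python, of how a crude odds ratio, a Woolf-type 95 percent confidence interval, and a two-sided Fisher exact p-value are computed from a 2×2 table. The cell counts below are hypothetical placeholders chosen only for illustration; they are not the Golomb data and not Smith’s reworked figures.

```python
# Minimal sketch: crude odds ratio, Woolf-type 95% confidence interval, and a
# two-sided Fisher exact p-value from a 2x2 exposure-by-disease table.
# The counts are hypothetical placeholders, NOT the Golomb (1982) data and
# NOT Smith's reworked figures.
import math

from scipy.stats import fisher_exact

# 2x2 table layout:        APML    non-APML
#   exposed              [   a   ,    b    ]
#   unexposed            [   c   ,    d    ]
a, b, c, d = 1, 7, 2, 28  # hypothetical cell counts

odds_ratio = (a * d) / (b * c)

# Woolf (log) method for an approximate 95% confidence interval
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Exact two-sided p-value for the same table
_, p_value = fisher_exact([[a, b], [c, d]], alternative="two-sided")

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), p = {p_value:.2f}")
# With cells this sparse, the interval is very wide and straddles 1.0: the data
# cannot distinguish a protective effect from a several-fold increase in risk.
```

A confidence interval running from 0.15 to 14.91, as the defense epidemiologist calculated for Smith’s rejiggered table, tells the same story: the data were compatible with anything from a strong protective effect to a roughly fifteen-fold increase in risk.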

In discussing other studies, Smith repeated his re-labeling of leukemia cases as APML, in the absence of karyotyping, to support his claims that more APML cases were observed than would be expected based on general population rates.[20] Smith also improvidently cited studies in supposed support of his opinion (Rinsky 1981; updated in 1994), where there was no association at all. Even workers heavily exposed to benzene in these studies did not develop APML.[21] Similarly, in support of his opinion, Smith cited another Chinese study, which actually declared that:

“Acute promyelocytic leukemia has been reported infrequently in benzene-exposed groups as well as in t-ANLL. Although ANLL-M3 occurred in at least 4 patients in this series, its general representation among the subtypes of ANLL was similar in its distribution in de novo ANLL in China.”[22]

Smith’s methodological improprieties were the subject of a four-day pre-trial hearing before Judge O’Toole. In the course of the hearings, Smith attempted to defend his methods, but like Donny Kerabatsos in The Big Lebowski, Smith was out of his depth. The trial court found that Dr. Smith’s arbitrary creation and selection of data to support his beliefs was unreliable and not in accordance with generally accepted scientific methodology in the fields of medicine or epidemiology. Smith was simply fabricating data to fit his made-for-litigation beliefs.

Carl Cranor’s Attempt to Bolster Smith

Milward also submitted a report from Carl Forest Cranor, Smith’s business partner in founding the Prop 65 bounty-hunting CERT, and a fellow member of the advocacy group Collegium Ramazzini. Cranor has no expertise in toxicology or epidemiology, and he has never published on the cause of APML. As a professor of philosophy, Cranor has written about scientific methodology, including WOE and “inference to the best explanation” (IBE). Cranor’s publications are riddled with basic misunderstandings of statistical concepts.[23] Essentially, Cranor testified at the Rule 702 hearing as a cheerleader for Smith, and to advocate that dodgy scientific conclusions be accepted as the product of a methodology he described as WOE or IBE. Cranor stretched to resurrect Justice Stevens’ use of WOE, and attempted to pass it off as a generally accepted scientific mode of reasoning.

The trial court carefully reviewed the proffered opinion testimony in a four-day pre-trial hearing. The trial court found that Smith had shown that his hypothesis was plausible and possible, but not that it was “scientific knowledge,” as required by Rule 702. Lacking sufficient scientific methodological validity and support, Smith’s opinions failed to satisfy the requirements of Rule 702, and were thus inadmissible. As a result of excluding plaintiff’s sole general causation expert witness, the trial court granted summary judgment to the defendants.[24]

(to be continued)


[1] See, e.g., Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 197-98 (5th Cir. 1996) (“We are also unpersuaded that the ‘weight of the evidence’ methodology these experts use is scientifically acceptable for demonstrating a medical link between Allen’s EtO [ethylene oxide] exposure and brain cancer.”); Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 601-02 (D.N.J. 2002) (excluding David Ozonoff, whose WOE analysis of whether perchloroethylene causes acute myelomonocytic leukemia was criticized by court-appointed technical advisor), aff’d, 68 F. App’x 356 (3d Cir. 2003).

[2] See Eric Lasker, Manning the Daubert Gate: A Defense Primer in Response to Milward v. Acuity Specialty Products, 79 DEF. COUNS. J. 128, 128 (2012); David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 29, 53-58 (2013); David E. Bernstein & Eric G. Lasker, Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702, 57 WM. & MARY L. REV. 1, 33 (2015); Richard Collin Mangrum, Comment on the Proposed Revision of Federal Rule 702: “Clarifying” the Court’s Gatekeeping Responsibility over Expert Testimony, 56 CREIGHTON LAW REVIEW 97, 106 & n.45 (2022); Thomas D. Schroeder, Toward a More Apparent Approach to Considering the Admission of Expert Testimony, 95 NOTRE DAME L. REV. 2039, 2045 (2020); Lawrence A. Kogan, Weight of the Evidence: A Lower Expert Evidence Standard Metastasizes in Federal Court, Washington Legal Foundation Critical Legal Issues WORKING PAPER Series no. 215 (Mar. 2020); Note, Judicial Conference Amends Rule 702. — Federal Rule of Evidence 702, 138 HARV. L. REV. 899, 903 (2025); Nathan A. Schachtman, Desultory Thoughts on Milward v. Acuity Specialty Products, DOI: 10.13140/RG.2.1.5011.5285 (Oct. 2015), available at https://www.researchgate.net/publication/282816421_Desultory_Thoughts_on_Milward_v_Acuity_Specialty_Products .

[3] See David DeMatteo & Kellie Wiltsie, When Amicus Curiae Briefs are Inimicus Curiae Briefs: Amicus Curiae Briefs and the Bypassing of Admissibility Standards, 72 AM. UNIV. L. REV. 1871 (2022) (noting that amicus briefs often include “unvetted and potentially inaccurate, misleading, or mischaracterized expert information,” without the procedural safeguards in place for vetting expert witnesses at trial).

[4] Milward v. Acuity Specialty Prods. Group, Inc., 969 F. Supp. 2d 101, 109 (D. Mass. 2013), aff’d sub nom., Milward v. Rust-Oleum Corp., 820 F.3d 469, 471, 477 (1st Cir. 2016).

[5] David E. Bernstein, The Misbegotten Judicial Resistance to the Daubert Revolution, 89 NOTRE DAME L. REV. 27, 53, 29 (2013).

[6] Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137 (D. Mass. 2009) (O’Toole, J.), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. denied, U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012).

[7] Andrew Y. Li, et al., Clustered incidence of adult acute promyelocytic leukemia in the vicinity of Baltimore, 61 LEUKEMIA & LYMPHOMA 2743 (2021); Hassan Ali, et al., Epidemiology and Survival Outcomes of Acute Promyelocytic Leukemia in Adults: A SEER Database Analysis, 144 BLOOD 5942 S1 (2024).

[8] Milward, 664 F. Supp. 2d at 142.

[9] See, e.g., PPG Industries, Inc. v. Wells, No. 21-0232 (Feb. 10, 2023 W.Va.S.Ct.); Hall v. ConocoPhillips, 248 F. Supp. 3d 1177 (W.D. Okla. 2017); In re Levaquin Prods. Liab. Litig., 739 F.3d 401 (8th Cir. 2014); Jacoby v. Rite Aid Corp., No. 1508 EDA 2012 (Dec. 9, 2013 Pa. Super.); Harris v. CSX Transp., Inc., 232 W.Va. 617, 753 S.E.2d 275 (2013); In re Baycol Prods. Litig., 495 F. Supp. 2d 977 (D. Minn. 2007); In re Rezulin Prods. Liab. Litig., MDL 1348, 441 F.Supp.2d 567 (S.D.N.Y. 2006) (advocating mythological “silent injury”); Perry v. Novartis, 564 F.Supp.2d 452 (E.D. Pa. 2008); Dodge v. Cotter Corp., 328 F.3d 1212 (10th Cir. 2003); Sutera v. The Perrier Group of America Inc., 986 F. Supp. 655 (D. Mass. 1997); Redland Soccer Club, Inc. v. Dep’t of Army, 835 F.Supp. 803 (M.D. Pa. 1993).

[10] Milward, 664 F.Supp. 2d at 143-44.

[11] Milward, 664 F.Supp. 2d at 144.

[12] Id. at 146.

[13] Id.

[14] The claim that a study lacks power is meaningless without a specification of the alternative hypothesis (the risk ratio that the researcher posits as the population parameter), a specified level of alpha (typically 0.05), and a specified probability model. While virtually all studies would have reasonable statistical power (say, an 80 percent probability) to detect a true risk ratio exceeding 10,000, no study would have high power to detect a risk ratio of 1.0001.
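
A minimal sketch, in Python, may make the point concrete: power can be computed only against a posited alternative risk ratio, together with an assumed baseline risk, sample size, and alpha level. The numbers below are hypothetical and chosen purely for illustration; the calculation uses a standard normal approximation for comparing two proportions, not any particular study’s design.

```python
# Minimal sketch: approximate power of a two-group comparison of proportions
# to detect a specified true risk ratio, using the usual normal approximation.
# Baseline risk, sample size, and alpha are hypothetical illustration values.
import math

from scipy.stats import norm

def power_for_risk_ratio(rr, p0=0.001, n_per_group=500, alpha=0.05):
    """Approximate power to reject the null (risk ratio = 1) when the true
    risk ratio is `rr`, given baseline risk p0 and n subjects per group."""
    p1 = min(rr * p0, 0.99)                 # risk in the exposed group
    p_bar = (p0 + p1) / 2                   # pooled risk under the null
    z_crit = norm.ppf(1 - alpha / 2)        # two-sided critical value
    num = abs(p1 - p0) * math.sqrt(n_per_group) - z_crit * math.sqrt(2 * p_bar * (1 - p_bar))
    den = math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))
    return norm.cdf(num / den)

for rr in (1.0001, 1.42, 2.0, 10.0, 10000.0):
    print(f"true risk ratio {rr:>8}: power ~ {power_for_risk_ratio(rr):.2f}")
# The same hypothetical study is all but certain to detect an enormous risk
# ratio and all but certain to miss a trivial one; "power" means nothing until
# the alternative hypothesis is specified.
```

Under these illustrative assumptions, the same study design has power approaching 100 percent against a risk ratio of 10,000 and essentially no power beyond the alpha level against a risk ratio of 1.0001.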

[15] Yi Zhongguo, et al. (National Investigative Group for the Survey of Leukemia & Aplastic Anemia), Countrywide Analysis of Risk Factors for Leukemia and Aplastic Anemia, 14 ACTA ACADEMIAE MEDICINAE SINICAE 185 (1992).

[16] Milward, 664 F. Supp. 2d at 148-49.

[17] Harvey M. Golomb, et al., Correlation of Occupation and Karyotype in Adults With Acute Nonlymphocytic Leukemia, 60 BLOOD 404 (1982).

[18] Bo Holmberg, Per Lundberg, Benzene: standards, occurrence, and exposure, 7 AM. J. INDUS. MED. 375 (1985).

[19] Golomb, supra note 17, at 407.

[20] See, e.g., Song-Nian Yin, et al., A cohort study of cancer among benzene-exposed workers in China: overall results, 29 AM. J. INDUS. MED. 227 (1996).

[21] Robert A. Rinsky, et al., Leukemia in Benzene Workers, 2 AM. J. INDUS. MED. 217 (1981); Mary B. Paxton, et al., Leukemia Risk Associated with Benzene Exposure in the Pliofilm Cohort: I. Mortality Update and Exposure Distribution, 14 RISK ANALYSIS 147 (1994); Mary B. Paxton, et al., Leukemia Risk Associated with Benzene Exposure in the Pliofilm Cohort II. Risk Estimates, 14 RISK ANALYSIS 155 (1994).

[22] Lois B. Travis, et al., Hematopoietic Malignancies and Related Disorders Among Benzene-Exposed Workers in China, 14 LEUKEMIA & LYMPHOMA 91, 99 (1994).

[23] See, e.g., Carl F. Cranor, REGULATING TOXIC SUBSTANCES: A PHILOSOPHY OF SCIENCE AND THE LAW at 33-34 (1993) (conflating random error with posterior probabilities: “One can think of α, β (the chances of type I and type II errors, respectively) and 1 − β as measures of the ‘risk of error’ or ‘standards of proof.’”); id. at 44, 47, 55, 72-76.

[24] 664 F. Supp. 2d at 140, 149.