TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Maestro and Mesothelioma – Wikipedia & False Claims

January 21st, 2024

Maestro is a biographical film about the late Leonard Bernstein. The film, starring Bradley Cooper as Bernstein, had a limited theatrical release before streaming on Netflix. As a work of biography, the film is peculiar in its focus on Bernstein’s sexuality and philandering, while paying virtually no attention to his radical chic politics or his dedication to teaching music appreciation.

In any event, the film sent me to Wikipedia to fact check some details of Bernstein’s life, and I was surprised to see that Wikipedia described Bernstein’s cause of death as involving mesothelioma:

“Bernstein announced his retirement from conducting on October 9, 1990.[174] He died five days later at the age of 72, in his New York apartment at The Dakota, of a heart attack brought on by mesothelioma.[175][2]”

Bernstein certainly did not have occupational exposure to amphibole asbestos, but he did smoke cigarettes, several packs a day, for decades. Mesothelioma seemed unlikely, unless perhaps he smoked Kent cigarettes in the 1950s, when they had crocidolite filters. As you can see from the above quote, the Wikipedia article cites two sources, a newspaper account and a book. Footnote number 2 is an obituary written by Donal Henahan and printed in the New York Times.[1] The Times reported that:

“Leonard Bernstein, one of the most prodigally talented and successful musicians in American history, died yesterday evening at his apartment at the Dakota on the Upper West Side of Manhattan. He was 72 years old.

*   *   *   *   *   *   *

Mr. Bernstein’s spokeswoman, Margaret Carson, said he died of a heart attack caused by progressive lung failure.”

There is no mention of mesothelioma in the Times article, and the citation provided does not support the assertion that mesothelioma was involved in the cause of Bernstein’s death. The obituary was published the day after Bernstein died, which suggests that there was no information from an autopsy, which would have been important in ascertaining any tissue pathology for an accurate and complete cause of death. In 1990, the diagnosis of malignant mesothelioma was often uncertain, even with extensive tissue available post-mortem.

The other citation provided by the Wikipedia article was even less impressive. Footnote 175 pointed to a book of short articles on musicians, with an entry for Bernstein.[2] The book tells us that:

“Bernstein is most remembered, perhaps, for his flamboyant conducting style. *** Leonard Bernstein died at his home from cardiac arrest brought on by mesothelioma.”

The blurb on Bernstein provides no support for the statement that cardiac arrest was brought on by mesothelioma, and the narrative struck me as odd in leaving out the progressive lung failure caused by non-malignant smoking-induced lung disease.

I set out to find what else may have been written about the causes of Bernstein’s death. I was surprised to find other references to mesothelioma, but all without any support. One online article seemed promising, but offered a glib conclusion without any source:

“Leonard Bernstein, a towering figure in American music, met his end on October 14, 1990, just five days after retiring from his illustrious career as a conductor. Found in his New York apartment, the cause of his death was a heart attack induced by mesothelioma, a consequence of a lifetime of smoking.”[3]

The lack of any foot- or end-notes disqualifies this source, and others, from establishing a diagnosis of mesothelioma. Other internet articles, inspired by the Cooper production of Maestro, made very similar statements, all without citing any source.[4] Some of the internet articles likely plagiarized others, but I was unable to find which source first gave rise to the conclusion that Bernstein died of complications of “mesothelioma” caused by smoking.

Whence came Wikipedia’s pronouncement that Bernstein died of, or with, mesothelioma? Two “mainstream” print newspapers provided some real information and insight. An article in the Washington Post elaborated on Bernstein’s final illness and the cause of his death:

“Leonard Bernstein, 72, a giant in the American musical community who was simultaneously one of this nation’s most respected and versatile composers and preeminent conductors, died yesterday at his Manhattan apartment. He died in the presence of his physician, who said the cause of death was sudden cardiac arrest caused by progressive lung failure.

On the advice of the doctor, Kevin M. Cahill, Bernstein had announced through a spokeswoman Tuesday that he would retire from conducting. Cahill said progressive emphysema complicated by a pleural tumor and a series of lung infections had left Bernstein too weak to continue working.”[5]

Ah, a pleural tumor, but no report or representation that it was malignant mesothelioma.

The Los Angeles Times, with the benefit of an extra three hours to prepare its obituary for a west coast audience, provided similar, detailed information about Bernstein’s death:

“Bernstein, known and beloved by the world as ‘Lenny’, died at 6:15 p.m. in the presence of his son, Alexander, and physician, Kevin M. Cahill, who said the cause of death was complications of progressive lung failure. On Cahill’s advice, the conductor had announced Tuesday that he would retire. Cahill said progressive emphysema complicated by a pleural tumor and a series of lung infections had left Bernstein too weak to continue working.”[6]

Now a pleural tumor can be benign or malignant. And if the tumor were malignant, it may or may not have been a primary tumor of the pleura. Metastatic lesions of the pleura, or in the lung parenchyma adjacent to the pleura, are common enough that the physician’s statement about a tumor of the pleura cannot be transformed into a conclusion about mesothelioma.[7]

Feeling good about having sorted a confusion, I thought I could add to the font of all knowledge, Wikipedia, by editing its unsupported statement about mesothelioma to “pleural tumor.” I made the edit, but within a few days, someone had changed the text back to mesothelioma, without adding any support. The strength of any statement is, of course, based entirely upon its support and the strength of its inferences. Wikipedia certainly can be a reasonable starting place to look for information, but it has no ability to support a claim, whether historical, scientific, or medical. Perhaps I should have added the citation to the Washington Post obituary when I made my edit. Still, it was clear that nothing in the article’s footnotes supported the text, and someone felt justified in returning the mention of mesothelioma based upon two completely unsupportive sources.

Not only is the Bernstein article in Wikipedia suspect, but there is actually an entry in Wikipedia for “Deaths from Mesothelioma,” which lists Bernstein as well. The article has but one sentence: “This is a list of individuals who have died as a result of mesothelioma, which is usually caused by exposure to asbestos.” Then follows a list of 67 persons, of varying degrees of noteworthiness, who supposedly died of mesothelioma. I wonder how many of the entries are false.


[1] Donal Henahan, “Leonard Bernstein, 72, Music’s Monarch, Dies,” New York Times (October 15, 1990).

[2] Scott Stanton, The Tombstone Tourist: Musicians at 29 (2003).

[3] Soumyadeep Ganguly, “Leonard Bernstein’s cause of death explored: How does Bradley Cooper Maestro end? Movie ending explored,” SK POP (modified Dec 25, 2023).

[4] See, e.g., Gargi Chatterjee, “How did Leonard Bernstein die?” pinkvilla (Dec 23, 2023).

[5] Bart Barnes, “Conductor Leonard Bernstein Dies at 72,” Wash. Post (Oct. 15, 1990) (emphasis added).

[6] Myrna Oliver, “Leonard Bernstein Dies; Conductor, Composer. Renaissance man of his art was 72. The longtime leader of the N.Y. Philharmonic carved a niche in history with ‘West Side Story’,” Los Angeles Times (Oct. 15, 1990) (emphasis added).

[7] See, e.g., Julie Desimpel, Filip M. Vanhoenacker, Laurens Carp, and Annemiek Snoeckx, “Tumor and tumorlike conditions of the pleura and juxtapleural region: review of imaging findings,” 12 Insights Imaging 97 (2021).

The Proper Study of Mankind

December 24th, 2023

“Know then thyself, presume not God to scan;

The proper study of Mankind is Man.”[1]

Kristen Ranges recently earned her law degree from the University of Miami School of Law, and her doctorate in Environmental Science and Policy from the University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science. Ranges’ dissertation was titled Animals Aiding Justice: The Deepwater Horizon Oil Spill and Ensuing Neurobehavioral Impacts as a Case Study for Using Animal Models in Toxic Tort Litigation – A Dissertation.[2] At first blush, Ranges would seem to be a credible interlocutor in the never-ending dispute over the role of whole animal toxicology (and in vitro toxicology) in determining human causation in tort litigation. Her dissertation title is, however, as Martin Short would say, a bit of a tell. Zebrafish become sad when exposed to oil spills, as do we all.

Ranges recently published a spin-off of her dissertation as a law review article with one of her professors: “Vermin of Proof: Arguments for the Admissibility of Animal Model Studies as Proof of Causation in Toxic Tort Litigation.”[3] Arguments for; no arguments against. We can thus understand that this is an advocacy piece, which is fair enough. The paper was not designed or titled to mislead anyone into thinking it would be a consideration of arguments for and against extrapolation from (non-human) animal studies to human beings. Perhaps you will think it churlish of me to point out that animal studies will rarely be admissible as evidence. They come into consideration in legal cases only through expert witnesses’ reliance upon them. So the issue is not whether animal studies are admissible, but rather whether expert witness opinion testimony that relies solely or excessively on animal studies for purposes of inferring causation is admissible under the relevant evidentiary rules. Talking about the admissibility of animal model studies signals, if nothing else, a serious lack of familiarity with the relevant evidentiary rules.

Ranges’ law review article is clearly, and without subtlety, an advocacy piece. She argues:

“However, judges, scholars, and other legal professionals are skeptical of the use of animal studies because of scientific and legal concerns, which range from interspecies disparities to prejudice of juries. These concerns are either unfounded or exaggerated. Animal model studies can be both reliable and relevant in toxic tort cases. Given the Federal Rules of Evidence, case law relevant to scientific evidence, and one of the goals of tort law – justice – judges should more readily admit these types of studies as evidence to help plaintiffs meet the burden of proof in toxic tort litigation.”[4]

For those of you who labor in this vineyard, I would suggest you read Ranges’ article and judge for yourself. What I see is a serious lack of scientific evidence for her claims, and a serious misunderstanding of the relevant law. One might, for starters, ask whether there are any I.A.R.C. category I (“known”) carcinogens based solely upon animal evidence, putting aside the agency’s epistemic dilution. Or has the U.S. Food & Drug Administration ever approved a medication as reasonably safe and effective based upon only animal studies?

Every dog owner and lover has likely been told by a veterinarian, or the Humane Society, that we should resist their lupine entreaties and withhold chocolate, raisins, walnuts, avocados, and certain other human foods. Despite their obvious intelligence and capacity for affection, when it comes to toxicology, dogs are not people, although some people act like the less reputable varieties of dogs.

Back in 1985, in connection with the Agent Orange litigation, the late Judge Jack Weinstein wrote what was correct then, and even more so today, that “laboratory animal studies are generally viewed with more suspicion than epidemiological studies, because they require making the assumption that chemicals behave similarly in different species.”[5] Judge Weinstein was no push-over for strident defense counsel or expert witnesses, but the legal consequences were nonetheless obvious to him, when he looked carefully at the animal studies that plaintiffs’ expert witnesses claimed supported their opinions. “[A]nimal studies are of so little probative value as to be inadmissible. They cannot be a predicate for an opinion under Rule 703.”[6] One of the several disconnects between the plaintiffs’ expert witnesses’ animal studies and the human diseases claimed was the disparity of dose and duration between the relied-upon studies and the servicemen claimants. Judge Weinstein observed that when the hand waving stopped, “[t]here is no evidence that plaintiffs were exposed to the far higher concentrations involved in both animal and industrial exposure studies.”[7]

Ranges and Owley unfairly deprecate the Supreme Court’s treatment of animal evidence in the 1997 Joiner opinion.[8] Mr. Joiner had been an electrician employed by a small city in Georgia, where he experienced dermal exposure, over several years, to polychlorinated biphenyls (PCBs), chemicals found in electrical transformer coolant. He alleged that he had developed small-cell lung cancer from his occasional occupational exposure. In the district court, a careful judge excluded the plaintiffs’ expert witnesses, who relied heavily upon animal studies and who cherry-picked and distorted the available epidemiology.[9] The Court of Appeals reversed, in an unsigned, non-substantive opinion that interjected an asymmetric standard of review.[10]

After granting review, the Supreme Court engaged with the substantive validity issues passed over by the intermediate appellate court. In addressing the plaintiffs’ expert witnesses’ reliance upon animal studies, the Court was struck by an extrapolation from a different species, different route of administration, different dose, different duration of exposure, and different disease.[11] Joiner was an adult human whose alleged exposure to PCBs was far less than the exposure of the baby mice that received injections of PCBs in a high concentration. The mice developed alveologenic adenomas, a rare tumor that is usually benign, not malignant.[12] The Joiner Court recognized that these multiple extrapolations were a bridge to nowhere, reversed the Court of Appeals, and reinstated the judgment of the district court. What is particularly salient about the Joiner decision, and about which you will find no discussion in the law review paper by Ranges and Owley, is how well the Joiner opinion has held up over the quarter century that has passed. Today, in the waning moments of 2023, there is still no valid, scientifically sound support for the claim that the sort of exposure Mr. Joiner had can cause small-cell lung cancer.[13]

Perhaps the most egregious lapses in scholarship occur when Ranges, a newly minted scientist, and her co-author, a full professor of law, write:

“For example, Bendectin, an antinausea medication prescribed to pregnant women, caused a slew of birth defects (hence its nickname ‘The Second Thalidomide’).”49[14]

I had to re-read this sentence many times to make sure I was not hallucinating. Ranges’ and Owley’s statement is, of course, demonstrably false. A double whopper, at least, and a jarring deviation from the standard of scholarly care.

But their statement is footnoted, you say. Here is what the cited article, footnote 49 in “Vermin of Proof,” says:

“RESULTS: The temporal trends in prevalence rates for specific birth defects examined from 1970 through 1992 did not show changes that reflected the cessation of Bendectin use over the 1980–84 period. Further, the NVP hospitalization rate doubled when Bendectin use ceased.

CONCLUSIONS: The population results of the ecological analyses complement the person-specific results of the epidemiological analyses in finding no evidence of a teratogenic effect from the use of Bendectin.”[15]

So the cited source actually says the exact opposite of what the authors assert. Apparently, students on law review at Georgetown University Law Center do not check citations for accuracy. Not only was the statement wrong in 1993, when the Supreme Court decided the famous Daubert case; it was wrong 20 years later, in 2013, when the United States Food and Drug Administration (FDA) approved Diclegis, a combination of doxylamine succinate and pyridoxine hydrochloride, the essential ingredients in Bendectin, for sale in the United States, for pregnant women experiencing nausea and vomiting.[16] The return of Bendectin to the market, although under a different name, was nothing less than a triumph of science over the will of the lawsuit industry.[17]

Channeling the likes of plaintiffs’ expert witness Carl Cranor (whom they cite liberally and credulously), Ranges and Owley argue for a vague “weight of the evidence” (WOE) methodology, in which several inconclusive and lighter-than-air pieces of evidence somehow magically combine in cold fusion to warrant a conclusion of causation. Others have gone down this dubious path before, but these authors’ embrace of the plaintiffs’ expert witnesses’ opinion in the Bendectin litigation reveals the insubstantiality and the invalidity of their method.[18] As Professor Ronald Allen put the matter:

“Given the weight of evidence in favor of Bendectin’s safety, it seems peculiar to argue for mosaic evidence [WOE] from a case in which it would have plainly been misleading.”[19]

It surely seems like a reductio ad absurdum of the proposed methodology.

One thing these authors get right is that most courts disparage and exclude expert witness opinion that relies exclusively or excessively upon animal toxicology.[20] They wrongly chastise these courts, however, for ignoring scientific opinion. In 2005, the Teratology Society issued a position paper on causation in teratology-related litigation,[21] in which the Society specifically addressed the authors’ claims:

“6. Human data are required for conclusions that there is a causal relationship between an exposure and an outcome in humans. Experimental animal data are commonly and appropriately used in establishing regulatory exposure limits and are useful in addressing biologic plausibility and mechanism questions, but are not by themselves sufficient to establish causation in a lawsuit. In vitro data may be helpful in exploring mechanisms of toxicity but are not by themselves evidence of causation.”[22]

Ranges and Owley are flummoxed that courts exclude expert witnesses who have relied upon animal studies when regulatory agencies use such studies with abandon. The case law on the distinction between precautionary standards in regulation and causation standards in tort law is clear, and explains the difference in approach, but these authors are determined to ignore the obvious difference.[23] The Teratology Society emphasized what should be hornbook law; namely, that regulatory standards for testing and warnings are not particularly germane to tort law standards for causation:

“2. The determination of causation in a lawsuit is not the same as a regulatory determination of a protective level of exposure. If a government agency has determined a regulatory exposure level for a chemical, the existence of that level is not evidence that the chemical produces toxicity in humans at that level or any other level. Regulatory levels use default assumptions that are improper in lawsuits. One such assumption is that humans will be as sensitive to the toxicity of a chemical as is the most sensitive experimental animal species. This assumption may be very useful in regulation but is not evidence that exposure to that chemical caused an adverse outcome in an individual plaintiff. Regulatory levels often incorporate uncertainty factors or margins of exposure. These factors may result in a regulatory level much lower than an exposure level shown to be harmful in any organism and are an additional reason for the lack of utility of regulatory levels in causation considerations.”[24]

The suggestion from Ranges and Owley that the judicial treatment of reliance upon animal studies is based upon ossified, ancient precedent, prejudice, and uncritical acceptance of defense counsel’s unsupported argument is simply wrong. There are numerous discussions of the difficulty of extrapolating teratogenicity from animal data to humans,[25] and ample basis for criticism of the glib extension of rodent carcinogenicity to humans.[26]

Ranges and Owley ignore the extensive scientific literature questioning extrapolation from high-exposure rodent models to much lower exposures in humans.[27] The invalidity of extrapolation can result in both false positives and false negatives. Indeed, the thalidomide case is a compelling example of the failure of animal testing. Thalidomide was tested on pregnant rats and rabbits without detecting teratogenicity; indeed, most animal species do not metabolize thalidomide or exhibit teratogenicity as seen in humans. Animal models simply do not have a sufficient positive predictive value to justify a conclusion of causation in humans, even if we accept a precautionary-principle recognition of such animal testing for regulatory purposes.[28]
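
The point about positive predictive value is just screening arithmetic under Bayes’ theorem. The formula below is standard; the plug-in numbers are purely hypothetical illustrations, not figures drawn from the animal-model literature:

$$\mathrm{PPV} = \frac{\mathrm{Se}\cdot\pi}{\mathrm{Se}\cdot\pi + (1-\mathrm{Sp})(1-\pi)}$$

where Se is the bioassay’s sensitivity (the probability of a positive result given a true human hazard), Sp is its specificity, and π is the prior probability that a tested chemical is in fact a human hazard. Even a hypothetical bioassay with Se = 0.9 and Sp = 0.8 yields a PPV of only about 0.33 when π = 0.1; two of every three positive results would be false positives.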

As improvident as Ranges’ pronouncements may be, finding her message amplified by Professor Ed Cheng on his podcast series, Excited Utterance, was even more disturbing. In November 2023, Cheng interviewed Kristen Ranges in an episode of his podcast, Vermin of Proof, in which he gave Ranges a chance to reprise her complaints about the judiciary’s handling of animal evidence, without much in the way of specificity, and with some credulous cheerleading to aid and abet. In his epilogue, Cheng wondered why toxicologic evidence is disfavored when such evidence is routinely used by scientists and regulators. What Cheng misses is that regulators use toxicologic evidence for regulation, not for assessments of human causation, and that the two enterprises are quite different. The regulatory exercise goes something like asking about the stall speed of a pig. It does not matter that pigs cannot fly; we skip that fact and press on to ask what the pig’s takeoff and stall speeds are.

Seventy years ago, no less a figure than Sir Austin Bradford Hill observed:

“We may subject mice, or other laboratory animals, to such an atmosphere of tobacco smoke that they can — like the old man in the fairy story — neither sleep nor slumber; they can neither breed nor eat. And lung cancers may or may not develop to a significant degree. What then? We may have thus strengthened the evidence, we may even have narrowed the search, but we must, I believe, invariably return to man for the final proof or proofs.”[29]


[1] Alexander Pope, “An Essay on Man” (1733), in Robin Sowerby, ed., Alexander Pope: Selected Poetry and Prose at 153 (1988).

[2] Kristen Ranges, Animals Aiding Justice: The Deepwater Horizon Oil Spill and Ensuing Neurobehavioral Impacts as a Case Study for Using Animal Models in Toxic Tort Litigation – A Dissertation (2023).

[3] Kristen Ranges & Jessica Owley, “Vermin of Proof: Arguments for the Admissibility of Animal Model Studies as Proof of Causation in Toxic Tort Litigation,” 34 Georgetown Envt’l L. Rev. 303 (2022) [Vermin].

[4] Vermin at 303.

[5] In re Agent Orange Prod. Liab. Litig., 611 F. Supp. 1223, 1241 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

[6] Id.

[7] Id.

[8] General Elec. Co. v. Joiner, 522 U.S. 136, 144 (1997) [Joiner].

[9] Joiner v. General Electric Co., 864 F. Supp. 1310 (N.D. Ga. 1994).

[10] Joiner v. General Electric Co., 78 F.3d 524 (11th Cir. 1996).

[11] Joiner, 522 U.S. at 144-45.

[12] See Leonid Roshkovan, Jeffrey C. Thompson, Sharyn I. Katz, Charuhas Deshpande, Taylor Jenkins, Anna K. Nowak, Roslyn Francis, Carole Dennie, Dominique Fabre, Sunil Singhal, and Maya Galperin-Aizenberg, “Alveolar adenoma of the lung: multidisciplinary case discussion and review of the literature,” 12 J. Thoracic Dis. 6847 (2020).

[13] See “How Have Important Rule 702 Holdings Held Up With Time?” (Mar. 20, 2015); “The Joiner Finale” (Mar. 23, 2015).

[14] Vermin at 312.

[15] Jeffrey S. Kutcher, Arnold Engle, Jacqueline Firth & Steven H. Lamm, “Bendectin and Birth Defects II: Ecological Analyses,” 67 Birth Defects Research Part A: Clinical and Molecular Teratology 88, 88 (2003).

[16] See FDA News Release, “FDA approves Diclegis for pregnant women experiencing nausea and vomiting,” (April 8, 2013).

[17] See Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013).

[18] Michael D. Green, “Pessimism About Milward,” 3 Wake Forest J. Law & Policy 41, 62-63 (2013); Susan Haack, “Irreconcilable Differences? The Troubled Marriage of Science and Law,” 72 Law & Contemporary Problems 1, 17 (2009); Susan Haack, “Proving Causation: The Holism of Warrant and the Atomism of Daubert,” 4 J. Health & Biomedical Law 273, 274-78 (2008).

[19] Ronald J. Allen & Esfand Nafisi, “Daubert and its Discontents,” 76 Brooklyn L. Rev. 132, 148 (2010). 

[20] See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 466, 475 (E.D. Pa. 2014) (noting that “causation opinions based primarily upon in vitro and live animal studies are unreliable and do not meet the Daubert standard.”), aff’d, 858 F.3d 787 (3rd Cir. 2017); Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296, 1308 (11th Cir. 2014) (affirming exclusion of testimony based on “secondary methodologies,” including animal studies, which offer “insufficient proof of general causation.”); The Sugar Ass’n v. McNeil-PPC, Inc., 2008 WL 11338092, *3 (C.D. Calif. July 21, 2008) (finding that plaintiffs’ expert witnesses, including Dr. Abou-Donia, “failed to provide the requisite analytical support for the extrapolation of their Five Opinions from rats to humans”); In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F. Supp. 2d 879, 891 (C.D. Cal. 2004) (observing that failure to compare similarities and differences across animals and humans could lead to the exclusion of opinion evidence); Cagle v. The Cooper Companies, 318 F. Supp. 2d 879, 891 (C.D. Calif. 2004) (citing Joiner, for the observation that animal studies are not generally admissible when contrary epidemiologic studies are available; and detailing significant disadvantages in relying upon animal studies, such as (1) differences in absorption, distribution, and metabolism; (2) the unrealistic, non-physiological exposures used in animal studies; and (3) the use of unverified assumptions about dose-response); Wills v. Amerada Hess Corp., No. 98 CIV. 7126(RPP), 2002 WL 140542, at *12 (S.D.N.Y. Jan. 31, 2002) (faulting expert’s reliance on animal studies because there was no evidence plaintiff had injected suspected carcinogen in same manner as studied animals, or at same dosage levels), aff’d, 379 F.3d 32 (2nd Cir. 2004) (Sotomayor, J.); Bourne v. E.I. du Pont de Nemours & Co., 189 F. Supp. 2d 482, 501 (S.D. W.Va. 2002) (benlate and birth defects), aff’d, 85 F. App’x 964 (4th Cir.), cert. denied, 543 U.S. 917 (2004); Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 593 (D.N.J. 2002) (noting that “[a]nimal bioassays are of limited use in determining whether a particular chemical causes a particular disease, or type of cancer, in humans.”); Soutiere v. BetzDearborn, Inc., No. 2:99-CV-299, 2002 WL 34381147, at *4 (D. Vt. July 24, 2002) (holding expert’s evidence inadmissible when “[a]t best there are animal studies that suggest a link between massive doses of [the substance in question] and the development of certain kinds of cancers, such that [the substance in question] is listed as a ‘suspected’ or ‘probable’ human carcinogen”); Glastetter v. Novartis Pharms. Corp., 252 F.3d 986, 991 (8th Cir. 2001); Hollander v. Sandoz Pharm. Corp., 95 F. Supp. 2d 1230, 1238 (W.D. Okla. 2000), aff’d, 289 F.3d 1193, 1209 (10th Cir. 2002) (rejecting the relevance of animal studies to causation arguments in the circumstances of the case); Allison v. McGhan Medical Corp., 184 F.3d 1300, 1313–14 (11th Cir. 1999); Raynor v. Merrell Pharms. Inc., 104 F.3d 1371, 1375-1377 (D.C. Cir. 1997) (observing that animal studies are unreliable, especially when “sound epidemiological studies produce opposite results from non-epidemiological ones, the rate of error of the latter is likely to be quite high”); Lust v. Merrell Dow Pharms., Inc., 89 F.3d 594, 598 (9th Cir. 1996); Barrett v. Atlantic Richfield Co., 95 F.3d 375 (5th Cir. 1996) (extrapolation from a rat study was speculation); Nat’l Bank of Comm. v. Dow Chem. Co., 965 F. Supp. 1490, 1527 (E.D. Ark. 1996) (“because of the difference in animal species, the methods and routes of administration of the suspect chemical agent, maternal metabolisms and other factors, animal studies, taken alone, are unreliable predictors of causation in humans”), aff’d, 133 F.3d 1132 (8th Cir. 1998); Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387, 1410-11 (D. Or. 1996) (with the help of court-appointed technical advisors, observing that animal studies taken alone fail to predict human disease reliably); Daubert v. Merrell Dow Pharms., Inc., 43 F.3d 1311, 1322 (9th Cir. 1995) (on remand from Supreme Court with directions to apply an epistemic standard derived from Rule 702 itself); Sorensen v. Shaklee Corp., 31 F.3d 638, 650 (8th Cir. 1994) (affirming exclusion of expert witness opinions based upon animal mutagenicity data not germane to the claimed harm); Elkins v. Richardson-Merrell, Inc., 8 F.3d 1068, 1073 (6th Cir. 1993); Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441, 1482 (D.V.I. 1994), aff’d, 46 F.3d 1120 (3d Cir. 1994) (per curiam); Renaud v. Martin Marietta Corp., Inc., 972 F.2d 304, 307 (10th Cir. 1992) (“The etiological evidence proffered by the plaintiff was not sufficiently reliable, being drawn from tests on non-human subjects without confirmatory epidemiological data.”) (“Dr. Jackson performed no calculations to determine whether the dose or route of administration of antidepressants to rats and monkeys in the papers that she cited in her report was equivalent to or substantially similar to human beings taking prescribed doses of Prozac.”); Bell v. Swift Adhesives, Inc., 804 F. Supp. 1577, 1579–81 (S.D. Ga. 1992) (excluding expert opinion of Dr. Janette Sherman, who opined that methylene chloride caused liver cancer, based largely upon animal studies); Conde v. Velsicol Chem. Corp., 804 F. Supp. 972, 1025-26 (S.D. Ohio 1992) (noting that epidemiology is “the primary generally accepted methodology for demonstrating a causal relation between a chemical compound and a set of symptoms or a disease”), aff’d, 24 F.3d 809 (6th Cir. 1994); Turpin v. Merrell Dow Pharm., Inc., 959 F.2d 1349, 1360-61 (6th Cir. 1992) (“The analytical gap between the [animal study] evidence presented and the inferences to be drawn on the ultimate issue of human birth defects is too wide. Under such circumstances, a jury should not be asked to speculate on the issue of causation.”); Brock v. Merrell Dow Pharm., 874 F.2d 307, 313 (5th Cir. 1989) (noting the “very limited usefulness of animal studies when confronted with questions of toxicity”); Richardson v. Richardson-Merrell, Inc., 857 F.2d 823, 830 (D.C. Cir. 1988) (“Positive results from in vitro studies may provide a clue signaling the need for further research, but alone do not provide a satisfactory basis for opining about causation in the human context.”); Lynch v. Merrell-Nat‘l Labs., 830 F.2d 1190, 1194 (1st Cir. 1987) (“Studies of this sort [animal studies], singly or in combination, do not have the capability of proving causation in human beings in the absence of any confirmatory epidemiological data.”). See also Merrell Dow Pharms., Inc. v. Havner, 953 S.W.2d 706, 730 (Tex. 1997); DePyper v. Navarro, No. 83-303467-NM, 1995 WL 788828, at *34 (Mich. Cir. Ct. Nov. 27, 1995), aff’d, No. 191949, 1998 WL 1988927 (Mich. Ct. App. Nov. 6, 1998); Nelson v. American Sterilizer Co., 566 N.W.2d 671 (Mich. Ct. App. 1997) (high-dose animal studies not reliable). But see Ambrosini v. Labarraque, 101 F.3d 129, 137-140 (D.C. Cir. 1996); Dyson v. Winfield, 113 F. Supp. 2d 44, 50-51 (D.D.C. 2000).

[21] Teratology Society Public Affairs Committee, “Position Paper: Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421 (2005) [Teratology Position Paper].

[22] Id. at 423.

[23] See “Improper Reliance Upon Regulatory Risk Assessments in Civil Litigation” (Mar. 19, 2023) (collecting cases).

[24] Teratology Position Paper at 422-423.

[25] See, e.g., Gideon Koren, Anne Pastuszak & Shinya Ito, “Drugs in Pregnancy,” 338 New England J. Med. 1128, 1131 (1998); Louis Lasagna, “Predicting Human Drug Safety from Animal Studies: Current Issues,” 12 J. Toxicological Sci. 439, 442-43 (1987).

[26] Bruce N. Ames & Lois S. Gold, “Too Many Rodent Carcinogens: Mitogenesis Increases Mutagenesis,” 249 Science 970, 970 (1990) (noting that chronic irritation induced by many chemicals at high exposures is itself a cause of cancer in rodent models); Bruce N. Ames & Lois Swirsky Gold, “Environmental Pollution and Cancer: Some Misconceptions,” in Jay H. Lehr, ed., Rational Readings on Environmental Concerns 151, 153 (1992); Mary Eubanks, “The Danger of Extrapolation: Humans and Rodents Differ in Response to PCBs,” 112 Envt’l Health Persps. A113 (2004).

[27] Andrea Gawrylewski, “The Trouble with Animal Models: Why did human trials fail?” 21 The Scientist 44 (2007); Michael B. Bracken, “Why animal studies are often poor predictors of human reactions to exposure,” 101 J. Roy. Soc. Med. 120 (2008); Fiona Godlee, “How predictive and productive is animal research?” 348 Brit. Med. J. g3719 (2014); John P. A. Ioannidis, “Extrapolating from Animals to Humans,” 4 Science Translational Med. 15 (2012); Pandora Pound & Michael Bracken, “Is animal research sufficiently evidence based to be a cornerstone of biomedical research?” 348 Brit. Med. J. g3387 (2014); Pandora Pound, Shah Ebrahim, Peter Sandercock, Michael B. Bracken, and Ian Roberts, “Where is the evidence that animal research benefits humans?” 328 Brit. Med. J. 514 (2004) (writing on behalf of the Reviewing Animal Trials Systematically (RATS) Group).

[28] See Ray Greek, Niall Shanks, and Mark J. Rice, “The History and Implications of Testing Thalidomide on Animals,” 11 J. Philosophy, Sci. & Law 1, 19 (2011).

[29] Austin Bradford Hill, “Observation and Experiment,” 248 New Engl. J. Med. 995, 999 (1953).

The Role of Peer Review in Rule 702 and 703 Gatekeeping

November 19th, 2023

“There is no expedient to which man will not resort to avoid the real labor of thinking.”
              Sir Joshua Reynolds (1723-92)

Some courts appear to duck the real labor of thinking, and the duty to gatekeep expert witness opinions, by deferring to expert witnesses who advert to their reliance upon peer-reviewed published studies. Does the law really support such deference, especially when problems with the relied-upon studies are revealed in discovery? A careful reading of the Supreme Court’s decision in Daubert, and of the Reference Manual on Scientific Evidence, provides no support for admitting expert witness opinion testimony that relies upon peer-reviewed published studies, when the studies are invalid or are based upon questionable research practices.[1]

In Daubert v. Merrell Dow Pharmaceuticals, Inc.,[2] the Supreme Court suggested that peer review of studies relied upon by a challenged expert witness should be a factor in determining the admissibility of that expert witness’s opinion. In thinking about the role of peer-review publication in expert witness gatekeeping, it is helpful to remember the context of how and why the Supreme Court was talking about peer review in the first place. In the trial court, the Daubert plaintiff had proffered an expert witness opinion that featured reliance upon an unpublished reanalysis of published studies. On the defense motion, the trial court excluded the claimant’s witness,[3] and the Ninth Circuit affirmed.[4] The intermediate appellate court expressed its view that unpublished, non-peer-reviewed reanalyses were deviations from generally accepted scientific discourse, and that other appellate courts, considering the alleged risks of Bendectin, refused to admit opinions based upon unpublished, non-peer-reviewed reanalyses of epidemiologic studies.[5] The Circuit expressed its view that reanalyses are generally accepted by scientists when they have been verified and scrutinized by others in the field. Unpublished reanalyses done solely for litigation would be an insufficient foundation for expert witness opinion.[6]

The Supreme Court, in Daubert, evaded the difficult issues involved in evaluating a statistical analysis that has not been published, by deciding the case on the ground that the lower courts had applied the wrong standard. The so-called Frye test, or what I call the “twilight zone” test, comes from the heralded 1923 case excluding opinion testimony based upon a lie detector:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”[7]

The Supreme Court, in Daubert, held that with the promulgation of the Federal Rules of Evidence in 1975, the twilight zone test was no longer legally valid. The guidance for admitting expert witness opinion testimony lay in Federal Rule of Evidence 702, which outlined an epistemic test for “knowledge” that would be helpful to the trier of fact. The Court then proceeded to articulate several non-definitive factors for “good science,” which might guide trial courts in applying Rule 702, such as testability or falsifiability, and a showing of a known or potential error rate. General acceptance, carried over from Frye, remained a consideration.[8] Courts have continued to build on this foundation to identify other relevant considerations in gatekeeping.[9]

One of the Daubert Court’s pertinent considerations was “whether the theory or technique has been subjected to peer review and publication.”[10] The Court, speaking through Justice Blackmun, provided a reasonably cogent, but probably now outdated, discussion of peer review:

 “Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, see S. Jasanoff, The Fifth Branch: Science Advisors as Policymakers 61-76 (1990), and in some instances well-grounded but innovative theories will not have been published, see Horrobin, “The Philosophical Basis of Peer Review and the Suppression of Innovation,” 263 JAMA 1438 (1990). Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of “good science,” in part because it increases the likelihood that substantive flaws in methodology will be detected. See J. Ziman, Reliable Knowledge: An Exploration of the Grounds for Belief in Science 130-133 (1978); Relman & Angell, “How Good Is Peer Review?,” 321 New Eng. J. Med. 827 (1989). The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”[11]

To the extent that peer review was touted by Justice Blackmun, it was because the peer-review process advanced the ultimate consideration of the scientific validity of the opinion or claim under consideration. Validity was the thing; peer review was just a crude proxy.

If the Court were writing today, it might well have written that peer review is often a feature of bad science, advanced by scientists who know that peer-reviewed publication is the price of admission to the advocacy arena. And of course, the wild proliferation of journals, including the “pay-to-play” journals, facilitates the festschrift.

Reference Manual on Scientific Evidence

Certainly, judicial thinking has evolved since 1993 and the decision in Daubert. Other considerations for gatekeeping have been added. Importantly, Daubert involved the interpretation of a statute, and in 2000, the statute was amended.

Since the Daubert decision, the Federal Judicial Center and the National Academies of Science have weighed in with what is intended to be guidance for judges and lawyers litigating scientific and technical issues. The Reference Manual on Scientific Evidence is currently in a third edition, but a fourth edition is expected in 2024.

How does the third edition[12] treat peer review?

An introduction by now-retired Associate Justice Stephen Breyer blandly reports the Daubert considerations, without elaboration.[13]

The most revealing and important chapter in the Reference Manual is the one on scientific method and procedure, and sociology of science, “How Science Works,” by Professor David Goodstein.[14] This chapter’s treatment is not always consistent. In places, the discussion of peer review is trenchant. At other places, it can be misleading. Goodstein’s treatment, at first, appears to be a glib endorsement of peer review as a substitute for critical thinking about a relied-upon published study:

“In the competition among ideas, the institution of peer review plays a central role. Scientific articles submitted for publication and proposals for funding often are sent to anonymous experts in the field, in other words, to peers of the author, for review. Peer review works superbly to separate valid science from nonsense, or, in Kuhnian terms, to ensure that the current paradigm has been respected.11 It works less well as a means of choosing between competing valid ideas, in part because the peer doing the reviewing is often a competitor for the same resources (space in prestigious journals, funds from government agencies or private foundations) being sought by the authors. It works very poorly in catching cheating or fraud, because all scientists are socialized to believe that even their toughest competitor is rigorously honest in the reporting of scientific results, which makes it easy for a purposefully dishonest scientist to fool a referee. Despite all of this, peer review is one of the venerated pillars of the scientific edifice.”[15]

A more nuanced and critical view emerges in footnote 11, from the above-quoted passage, when Goodstein discusses how peer review was framed by some amici curiae in the Daubert case:

“The Supreme Court received differing views regarding the proper role of peer review. Compare Brief for Amici Curiae Daryl E. Chubin et al. at 10, Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993) (No. 92-102) (“peer review referees and editors limit their assessment of submitted articles to such matters as style, plausibility, and defensibility; they do not duplicate experiments from scratch or plow through reams of computer-generated data in order to guarantee accuracy or veracity or certainty”), with Brief for Amici Curiae New England Journal of Medicine, Journal of the American Medical Association, and Annals of Internal Medicine in Support of Respondent, Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993) (No. 92-102) (proposing that publication in a peer-reviewed journal be the primary criterion for admitting scientific evidence in the courtroom). See generally Daryl E. Chubin & Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (1990); Arnold S. Relman & Marcia Angell, How Good Is Peer Review? 321 New Eng. J. Med. 827–29 (1989). As a practicing scientist and frequent peer reviewer, I can testify that Chubin’s view is correct.”[16]

So, if, as Professor Goodstein attests, Chubin is correct that peer review does not “guarantee accuracy or veracity or certainty,” the basis for veneration is difficult to fathom.

Later in Goodstein’s chapter, in a section entitled “V. Some Myths and Facts about Science,” the gloves come off:[17]

“Myth: The institution of peer review assures that all published papers are sound and dependable.

Fact: Peer review generally will catch something that is completely out of step with majority thinking at the time, but it is practically useless for catching outright fraud, and it is not very good at dealing with truly novel ideas. Peer review mostly assures that all papers follow the current paradigm (see comments on Kuhn, above). It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.”[18]

Goodstein is not a post-modern nihilist. He acknowledges that “real” science can be distinguished from “not real science.” He can hardly be seen to have given a full-throated endorsement to peer review as satisfying the gatekeeper’s obligation to evaluate whether a study can reasonably be relied upon, whether reliance upon a particular peer-reviewed study can constitute sufficient evidence to render an expert witness’s opinion helpful, or whether such reliance reflects the application of a reliable methodology.

Goodstein cites, with apparent approval, the amicus brief filed by the New England Journal of Medicine and other journals, which advised the Supreme Court that “good science” requires “a rigorous trilogy of publication, replication and verification before it is relied upon.”[19]

“Peer review’s ‘role is to promote the publication of well-conceived articles so that the most important review, the consideration of the reported results by the scientific community, may occur after publication.’”[20]

Outside of Professor Goodstein’s chapter, the Reference Manual devotes very little ink or analysis to the role of peer review in assessing Rule 702 or 703 challenges to witness opinions or specific studies.  The engineering chapter acknowledges that “[t]he topic of peer review is often raised concerning scientific and technical literature,” and helpfully supports Goodstein’s observations by noting that peer review “does not ensure accuracy or validity.”[21]

The chapter on neuroscience is one of the few chapters in the Reference Manual, other than Professor Goodstein’s, to address the limitations of peer review. Peer review, if absent, is highly suspicious, but its presence is only the beginning of an evaluation process that continues after publication:

Daubert’s stress on the presence of peer review and publication corresponds nicely to scientists’ perceptions. If something is not published in a peer-reviewed journal, it scarcely counts. Scientists only begin to have confidence in findings after peers, both those involved in the editorial process and, more important, those who read the publication, have had a chance to dissect them and to search intensively for errors either in theory or in practice. It is crucial, however, to recognize that publication and peer review are not in themselves enough. The publications need to be compared carefully to the evidence that is proffered.[22]

The neuroscience chapter goes on to discuss peer review also in the narrow context of functional magnetic resonance imaging (fMRI). The authors note that fMRI, as a medical procedure, has been the subject of thousands of peer-reviewed studies, but those peer reviews do little to validate the use of fMRI as a high-tech lie detector.[23] The mental health chapter notes in a brief footnote that the science of memory is now well accepted and has been subjected to peer review, and that “[c]areful evaluators” use only tests that have had their “reliability and validity confirmed in peer-reviewed publications.”[24]

Echoing other chapters, the engineering chapter also mentions peer review briefly in connection with qualifying as an expert witness, and in validating the value of accrediting societies.[25]  Finally, the chapter points out that engineering issues in litigation are often sufficiently novel that they have not been explored in peer-reviewed literature.[26]

Most of the other chapters of the Reference Manual, third edition, discuss peer review only in the context of qualifications and membership in professional societies.[27] The chapter on exposure science discusses peer review only in the narrow context of a claim that EPA guidance documents on exposure assessment are peer reviewed and are considered “authoritative.”[28]

Other chapters discuss peer review briefly and again only in very narrow contexts. For instance, the epidemiology chapter discusses peer review in connection with two very narrow issues peripheral to Rule 702 gatekeeping. First, the chapter raises the question (without providing a clear answer) whether non-peer-reviewed studies should be included in meta-analyses.[29] Second, the chapter asserts that “[c]ourts regularly affirm the legitimacy of employing differential diagnostic methodology,” to determine specific causation, on the basis of several factors, including the questionable claim that the methodology “has been subjected to peer review.”[30] There appears to be no discussion in this key chapter about whether, and to what extent, peer review of published studies can or should be considered in the gatekeeping of epidemiologic testimony. There is certainly nothing in the epidemiology chapter, or for that matter elsewhere in the Reference Manual, to suggest that reliance upon a peer-reviewed published study pretermits analysis of that study to determine whether it is indeed internally valid or reasonably relied upon by expert witnesses in the field.


[1] See Jop de Vrieze, “Large survey finds questionable research practices are common: Dutch study finds 8% of scientists have committed fraud,” 373 Science 265 (2021); Yu Xie, Kai Wang, and Yan Kong, “Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis,” 27 Science & Engineering Ethics 41 (2021).

[2] 509 U.S. 579 (1993).

[3]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 727 F.Supp. 570 (S.D.Cal.1989).

[4] 951 F. 2d 1128 (9th Cir. 1991).

[5]  951 F. 2d, at 1130-31.

[6] Id. at 1131.

[7] Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923) (emphasis added).

[8]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 590 (1993).

[9] See, e.g., In re TMI Litig. II, 911 F. Supp. 775, 787 (M.D. Pa. 1995) (considering the relationship of the technique to methods that have been established to be reliable, the uses of the method in the actual scientific world, the logical or internal consistency and coherence of the claim, the consistency of the claim or hypothesis with accepted theories, and the precision of the claimed hypothesis or theory).

[10] Id. at  593.

[11] Id. at 593-94.

[12] National Research Council, Reference Manual on Scientific Evidence (3rd ed. 2011) [RMSE].

[13] Id., “Introduction” at 1, 13.

[14] David Goodstein, “How Science Works,” RMSE 37.

[15] Id. at 44-45.

[16] Id. at 44-45 n. 11 (emphasis added).

[17] Id. at 48 (emphasis added).

[18] Id. at 49 n.16 (emphasis added).

[19] David Goodstein, “How Science Works,” RMSE 64 n.45 (citing Brief for the New England Journal of Medicine, et al., as Amici Curiae supporting Respondent, 1993 WL 13006387 at *2, in Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993)).

[20] Id. (citing Brief for the New England Journal of Medicine, et al., 1993 WL 13006387 at *3).

[21] Channing R. Robertson, John E. Moalli, David L. Black, “Reference Guide on Engineering,” RMSE 897, 938 (emphasis added).

[22] Henry T. Greely & Anthony D. Wagner, “Reference Guide on Neuroscience,” RMSE 747, 786.

[23] Id. at 776, 777.

[24] Paul S. Appelbaum, “Reference Guide on Mental Health Evidence,” RMSE 813, 866, 886.

[25] Channing R. Robertson, John E. Moalli, David L. Black, “Reference Guide on Engineering,” RMSE 897, 901, 931.

[26] Id. at 935.

[27] Daniel Rubinfeld, “Reference Guide on Multiple Regression,” RMSE 303, 328 (“[w]ho should be qualified as an expert?”); Shari Seidman Diamond, “Reference Guide on Survey Research,” RMSE 359, 375; Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” RMSE 633, 677, 678 (noting that membership in some toxicology societies turns in part on having published in peer-reviewed journals).

[28] Joseph V. Rodricks, “Reference Guide on Exposure Science,” RMSE 503, 508 (noting that EPA guidance documents on exposure assessment often are issued after peer review).

[29] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” RMSE 549, 608.

[30] Id. at 617-18 n.212.

Collegium Ramazzini & Its Fellows – The Lobby

November 19th, 2023

Back in 1997, Francis Douglas Kelly Liddell, a real scientist in the area of asbestos and disease, had had enough of the insinuations, slanders, and bad science from the minions of Irving John Selikoff.[1] Liddell broke with the norms of science and called out his detractors for what they were doing:

 “[A]n anti-asbestos lobby, based in the Mount Sinai School of Medicine of the City University of New York, promoted the fiction that asbestos was an all-pervading menace, and trumped up a number of asbestos myths for widespread dissemination, through media eager for bad news.”[2]

What Liddell did not realize is that the Lobby had become institutionalized in the form of an organization, the Collegium Ramazzini, started by Selikoff under false pretenses.[3] Although the Collegium operates with some degree of secrecy, the open and sketchy conduct of its members suggests that we could use the terms “the Lobby” and “the Collegium Ramazzini” interchangeably.

Ramazzini founder Irving Selikoff had an unfortunate track record for perverting the course of justice. Selikoff conspired with Ron Motley and others to bend judges with active asbestos litigation dockets by inviting them to a one-sided conference on asbestos science, and by paying for their travel and lodging. Presenters included key expert witnesses for plaintiffs; defense expert witnesses were conspicuously not invited to the conference. In his invitation to this ex parte soirée, Selikoff failed to mention that the funding came from plaintiffs’ counsel. Selikoff’s shenanigans led to the humiliation and disqualification of James M. Kelly,[4] the federal judge in charge of the asbestos school property damage litigation.

Neither Selikoff nor the co-conspirator counsel for plaintiffs ever apologized for their ruse. The disqualification did lead to a belated disclosure and mea culpa from the late Judge Jack Weinstein. Because of a trial in progress, Judge Weinstein did not attend the plaintiffs’ dog-and-pony show, Selikoff’s so-called “Third Wave” conference, but Judge Weinstein and a New York state trial judge, Justice Helen Freedman, attended an ex parte private luncheon meeting with Dr. Selikoff. Here is how Judge Weinstein described the event:

“But what I did may have been even worse [than Judge Kelly’s conduct that led to his disqualification]. A state judge and I were attempting to settle large numbers of asbestos cases. We had a private meeting with Dr. Irwin [sic] J. Selikoff at his hospital office to discuss the nature of his research. He had never testified and would never testify. Nevertheless, I now think that it was a mistake not to have informed all counsel in advance and, perhaps, to have had a court reporter present and to have put that meeting on the record.”[5]

Judge Weinstein’s false statement that Selikoff “had never testified”[6] not only reflects an incredible and uncharacteristic naiveté in a distinguished evidence law scholar; it also appeared in Judicature, a journal that was, and is, widely circulated to state and federal judges. The source of the lie appears to have been Selikoff himself, in the ethically dodgy ex parte meeting with judges actively presiding over asbestos personal injury cases.

The point apparently weighed on Judge Weinstein’s conscience. He repeated his mea culpa almost verbatim, along with the false statement about Selikoff’s having never testified, in a law review article in 1994, and then incorporated the misrepresentation into a full-length book.[7] I have no doubt that Judge Weinstein did not intend to mislead anyone; like many others, he had been duped by Selikoff’s deception.

There is no evidence that Selikoff was acting as an authorized agent of the Collegium Ramazzini in conspiring to influence trial judges, or in lying to Judge Weinstein and Justice Freedman, but Selikoff was the founder of the Collegium, and his conduct seems to have set a norm for the organization. Furthermore, the Third-Wave Conference was sponsored by the Collegium. The Collegium created an award in Selikoff’s name in 1993, not long after the Third Wave misconduct.[8] Perhaps the award was the Collegium’s ratification of Selikoff’s misdeeds. Two of the recipients, Stephen M. Levin and Yasunosuke Suzuki, were “regulars” as expert witnesses for plaintiffs in asbestos litigation. The Selikoff Award is funded by the Irving J. Selikoff Endowment of the Collegium Ramazzini. The Collegium can fairly be said to be the continuation of Selikoff’s work in the form of an advocacy organization.

Selikoff’s Third-Wave Conference and his lies to two key judges would not be the last of the efforts to pervert the course of justice. With the Selikoff imprimatur and template in hand, Fellows of the Collegium have carried on, by carrying on. Collegium Fellows Carl F. Cranor and Martyn T. Smith served as partisan paid expert witnesses in the notorious Milward case.[9]

After the trial court excluded the proffered opinions of Cranor and Smith, plaintiff appealed, with the help of an amicus brief filed by The Council for Education and Research on Toxics (CERT). The plaintiffs’ counsel, Cranor and Smith, CERT, and counsel for CERT all failed to disclose that CERT was founded by the two witnesses, Cranor and Smith, whose exclusion was at the heart of the appeal.[10] Among the 27 signatories to the CERT amicus brief, a majority (15) were fellows of the Collegium Ramazzini. Many of the signatories, whether or not members or fellows of the Collegium, were frequent testifiers for plaintiffs’ counsel.

None raised any ethical qualms about the obvious conflict of interest, namely, that scrupulous gatekeeping might hurt their testimonial income, or about their (witting or unwitting) participation in CERT’s conspiracy to pervert the course of justice.[11]

The CERT amici signatories are listed below. The bold names are identified as Collegium fellows at its current website. Others may have been members but not fellows. The asterisks indicate those who have testified in tort litigation; please accept my apologies if I missed anyone.

Nicholas A. Ashford,
Nachman Brautbar,*
David C. Christiani,*
Richard W. Clapp,*
James Dahlgren,*
Devra Lee Davis,
Malin Roy Dollinger,*
Brian G. Durie,
David A. Eastmond,
Arthur L. Frank,*
Frank H. Gardner,
Peter L. Greenberg,
Robert J. Harrison,
Peter F. Infante,*
Philip J. Landrigan,
Barry S. Levy,*
Melissa A. McDiarmid,
Myron Mehlman,
Ronald L. Melnick,*
Mark Nicas,*
David Ozonoff,*
Stephen M. Rappaport,
David Rosner,*
Allan H. Smith,*
Daniel Thau Teitelbaum,*
Janet Weiss,* and
Luoping Zhang

This D & C (deception and charade) was repeated on other occasions when Collegium fellows and members signed amicus briefs without any disclosures of conflicts of interest. In Rost v. Ford Motor Co.,[12] for instance, an amicus brief was filed by by “58 physicians and scientists,” many of whom were Collegium fellows.[13]

Ramazzini Fellows David Michaels and Celeste Monforton were both involved in the notorious Project on Scientific Knowledge and Public Policy (SKAPP) organization, which consistently misrepresented its funding from plaintiffs’ lawyers as having come from a “court fund.”[14]

Despite Selikoff’s palaver about how the Collegium would seek consensus and open discussions, it has become an echo-chamber for the rent-seeking mass-tort lawsuit industry, for the hyperbolic critics of any industry position, and for the credulous shills for any pro-labor position. In its statement about membership, the Collegium warns that

“Persons who have any type of links which may compromise the authenticity of their commitment to the mission of the Collegium Ramazzini do not qualify for Fellowship. Likewise, persons who have any conflict of interest that may negatively affect his or her impartiality as a researcher should not be nominated for Fellowship.”

This exclusionary criterion ensures a lack of viewpoint diversity, and makes the Collegium an effective proxy for the lawsuit industry in the United States.

Among the Collegium’s current and past fellows, we can find many familiar names from the annals of tort litigation, all expert witnesses for plaintiffs, and virtually always only for plaintiffs. After over 40 years at the bar, I do not recognize a single name of anyone who has ever testified on behalf of a defendant in a tort case.

Henry A. Anderson

Barry I. Castleman      

Martin Cherniack

David Christiani 

Arthur Frank

Lennart Hardell 

David G. Hoel

Stephen M. Levin

Ronald L. Melnick

David Michaels

Celeste Monforton

Albert Miller

Nachman Brautbar

Christopher Portier

Steven B. Markowitz

Christine Oliver                 

Colin L. Soskolne

Yasunosuke Suzuki

Daniel Thau Teitelbaum

Laura Welch


[1] “The Lobby – Cut on the Bias” (July 6, 2020).

[2] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[3] See “The Dodgy Origins of the Collegium Ramazzini” (Nov. 15, 2023).

[4] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992),” 38 Villanova L. Rev. 1219 (1993); Bruce A. Green, “May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy,” 29 Fordham Urb. L.J. 941, 996-98 (2002).

[5] Jack B. Weinstein, “Learning, Speaking, and Acting: What Are the Limits for Judges?” 77 Judicature 322, 326 (May-June 1994) (emphasis added).

[6] “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010).

[7] See Jack B. Weinstein, “Limits on Judges’ Learning, Speaking and Acting – Part I – Tentative First Thoughts: How May Judges Learn?” 36 Ariz. L. Rev. 539, 560 (1994) (“He [Selikoff] had never testified and would never testify.”); Jack B. Weinstein, Individual Justice in Mass Tort Litigation: The Effect of Class Actions, Consolidations, and Other Multi-Party Devices 117 (1995) (“A court should not coerce independent eminent scientists, such as the late Dr. Irving Selikoff, to testify if, like he, they prefer to publish their results only in scientific journals.”).

[8] See also “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011).

[9] Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp.2d 137, 140 (D.Mass.2009), rev’d, 639 F. 3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).

[10]  See “The Council for Education and Research on Toxics” (July 9, 2013).

[11] “Carl Cranor’s Inference to the Best Explanation” (Dec. 12, 2021).

[12] Rost v. Ford Motor Co., 151 A.3d 1032, 1052 (Pa. 2016).

[13] “The Amicus Curious Brief” (Jan. 4, 2018).

[14] See, e.g., “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).

The Dodgy Origins of the Collegium Ramazzini

November 15th, 2023

Or How Irving Selikoff and His Lobby (the Collegium Ramazzini) Fooled the Monsanto Corporation

Anyone who litigates occupational or environmental disease cases has heard of the Collegium Ramazzini. The group is named after a 17th-century Italian physician, Bernardino Ramazzini, who is sometimes referred to as the father of occupational medicine.[1] His children have been an unruly lot. In Ramazzini’s honor, the Collegium was founded just over 40 years ago, to acclaim and promises of neutrality and consensus.

Back in May 1983, a United Press International reporter chronicled the high aspirations and the bipartisan origins of the Collegium.[2] The UPI reporter noted that the group was founded by the late Irving Selikoff, who is also well known in litigation circles. Selikoff held himself out as an authority on occupational and environmental medicine, but his actual training in medicine was dodgy. His training in epidemiology and statistics was non-existent.

Selikoff was, however, masterful at marketing and proselytizing. Selikoff would become known for misrepresenting his training, and for creating a mythology that he did not participate in litigation, that crocidolite was not used in products in the United States, and that asbestos would become a major cause of cancer in the United States, among other things.[3] It is thus no surprise that Selikoff successfully masked the intentions of the Ramazzini group, and was thus able to capture the support of two key legislators, Senators Charles Mathias (Rep., Maryland) and Frank Lautenberg (Dem., New Jersey), along with officials from both organized labor and industry.

Selikoff was able to snooker the Senators and officials with empty talk of a new organization that would work to obtain scientific consensus on occupational and environmental issues. It did not take long after its founding in 1983 for the Collegium to become a conclave of advocates and zealots.

The formation of the Collegium may have been one of Selikoff’s greatest deceptions. According to the UPI news report, Selikoff represented that the Collegium would not lobby or seek to initiate legislation, but rather would interpret scientific findings in accessible language, show the policy implications of these findings, and make recommendations. This representation was falsified fairly quickly, but certainly by 1999, when the Collegium called for legislation banning the use of asbestos.  Selikoff had promised that the Collegium

“will advise on the adequacy of a standard, but will not lobby to have a standard set. Our function is not to condemn, but rather to be a conscience among scientists in occupational and environmental health.”

[Image: The Adventures of Pinocchio (1883); artwork by Enrico Mazzanti]

Senator Mathias proclaimed the group to be “dedicated to the improvement of the human condition.” Perhaps no one was more snookered than the Monsanto Corporation, which helped fund the Collegium back in 1983. Monte Throdahl, a Monsanto senior vice president, reportedly expressed his hope that the group would emphasize the considered judgments of disinterested scientists, and not the advocacy and rent seeking of “reporters or public interest groups,” on occupational medical issues. Forty years in, those hopes are long since gone. Recent Collegium meetings have been sponsored and funded by the National Institute of Environmental Health Sciences, the Centers for Disease Control and Prevention, the National Cancer Institute, and the Environmental Protection Agency. The time has come to cut off funding.


[1] Giuliano Franco & Francesca Franco, “Bernardino Ramazzini: The Father of Occupational Medicine,” 91 Am. J. Public Health 1382 (2001).

[2] Drew Von Bergen, “A group of international scientists, backed by two senators,” United Press International (May 10, 1983).

[3]Selikoff Timeline & Asbestos Litigation History” (Feb. 26, 2023); “The Lobby – Cut on the Bias” (July 6, 2020); “The Legacy of Irving Selikoff & Wicked Wikipedia” (Mar. 1, 2015). See also “Hagiography of Selikoff” (Sept. 26, 2015);  “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014); “Irving Selikoff – Media Plodder to Media Zealot” (Sept. 9, 2014).; “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “Selikoff and the Mystery of the Disappearing Amphiboles” (Dec. 10, 2010); “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010).

Consensus is Not Science

November 8th, 2023

Ted Simon, a toxicologist and a fellow board member at the Center for Truth in Science, has posted an intriguing piece in which he labels scientific consensus as a fool’s errand.[1]  Ted begins his piece by channeling the late Michael Crichton, who famously derided consensus in science, in his 2003 Caltech Michelin Lecture:

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science, consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

* * * *

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”[2]

Crichton’s (and Simon’s) critique of consensus is worth remembering in the face of recent proposals by Professor Edward Cheng,[3] and others,[4] to make consensus the touchstone for the admissibility of scientific opinion testimony.

Consensus or general acceptance can be a proxy for conclusions drawn from valid inferences, within reliably applied methodologies, based upon sufficient evidence, quantitatively and qualitatively. When expert witnesses opine contrary to a consensus, they raise serious questions regarding how they came to their conclusions. Carl Sagan declaimed that “extraordinary claims require extraordinary evidence,” but his principle was hardly novel. Some authors quote the French polymath Pierre Simon Marquis de Laplace, who wrote in 1810: “[p]lus un fait est extraordinaire, plus il a besoin d’être appuyé de fortes preuves” (“the more extraordinary a fact, the more it needs to be supported by strong proofs”),[5] but as the Quote Investigator documents,[6] the basic idea is much older, going back at least another century to a church rector who expressed his skepticism of a contemporary’s claim of direct communication with the Almighty: “Sure, these Matters being very extraordinary, will require a very extraordinary Proof.”[7]
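
Sagan’s aphorism has a straightforward Bayesian rendering. What follows is my own gloss, not anything Sagan or Laplace wrote out: the posterior odds of a hypothesis equal its prior odds multiplied by the likelihood ratio of the evidence.

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}
\]

An extraordinary claim is one with very low prior odds; for the posterior odds even to reach even money, the likelihood ratio must be correspondingly enormous, which is to say that the evidence must itself be extraordinary.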

Ted Simon’s essay is also worth consulting because he notes that many sources of apparent consensus are really faux consensus, nothing more than self-appointed intellectual authoritarians who systematically have excluded some points of view, while turning a blind eye to their own positional conflicts.

Lawyers, courts, and academics should be concerned that Cheng’s “consensus principle” will change the focus from evidence, methodology, and inference, to a surrogate or proxy for validity. And the sociological notion of consensus will then require litigation of whether some group really has announced a consensus. Consensus statements in some areas abound, but inquiring minds may want to know whether they are the result of rigorous, systematic reviews of the pertinent studies, and whether the available studies can support the claimed consensus.

Professor Cheng is hard at work on a book-length explication of his proposal, and some criticism will have to await the event.[8] Perhaps Cheng will overcome the objections placed against his proposal.[9] Some of the examples Professor Cheng has given, however, do not inspire confidence. Consider his errant, dramatic misreading of the American Statistical Association’s 2016 p-value consensus statement as holding, in Cheng’s words:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[10]

The 2016 Statement said no such thing, although a few statisticians attempted to distort the statement in the way that Cheng suggests. In 2021, a select committee of leading statisticians, appointed by the President of the ASA, issued a statement to make clear that the ASA had not embraced the Cheng misinterpretation.[11] This one example alone does not bode well for the viability of Cheng’s consensus principle.


[1] Ted Simon, “Scientific consensus is a fool’s errand made worse by IARC” (Oct. 2023).

[2] Michael Crichton, “Aliens Cause Global Warming,” Caltech Michelin Lecture (Jan. 17, 2003).

[3] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[4] See Norman J. Shachoy Symposium, The Consensus Rule: A New Approach to the Admissibility of Scientific Evidence (2022), 67 Villanova L. Rev. (2022); David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2022); Harry Collins, “The Owls: Some Difficulties in Judging Scientific Consensus,” 67 Villanova L. Rev. 877 (2022); Robert Evans, “The Consensus Rule: Judges, Jurors, and Admissibility Hearings,” 67 Villanova L. Rev. 883 (2022); Martin Weinel, “The Adversity of Adversarialism: How the Consensus Rule Reproduces the Expert Paradox,” 67 Villanova L. Rev. 893 (2022); Wendy Wagner, “The Consensus Rule: Lessons from the Regulatory World,” 67 Villanova L. Rev. 907 (2022); Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022).

[5] Pierre-Simon Laplace, Théorie analytique des probabilités (1812) (“The more extraordinary a fact, the more it needs to be supported by strong proofs.”). See Patrizio E. Tressoldi, “Extraordinary Claims Require Extraordinary Evidence: The Case of Non-Local Perception, a Classical and Bayesian Review of Evidences,” 2 Frontiers Psych. 117 (2011); Charles Coulston Gillispie, Pierre-Simon Laplace, 1749–1827: A Life in Exact Science (1997).

[6] “Extraordinary Claims Require Extraordinary Evidence” (Dec. 5, 2021).

[7] Benjamin Bayly, An Essay on Inspiration 362, part 2 (2nd ed. 1708).

[8] The Consensus Principle, under contract with the University of Chicago Press.

[9] See “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022); “Further Thoughts on Cheng’s Consensus Rule” (Oct. 3, 2022); “Consensus Rule – Shadows of Validity” (Apr. 26, 2023).

[10] Consensus Rule at 424 (citing but not quoting Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[11] Yoav Benjamini, Richard D. De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

Just Dissertations

October 27th, 2023

One of my childhood joys was roaming the stacks of libraries and browsing for arcane learning stored in aging books. Often, I had no particular goal in my roaming, and I flitted from topic to topic. Occasionally, however, I came across useful learning. It was in one college library, for instance, that I discovered the process for making nitrogen tri-iodide, which provided me with some simple-minded amusement for years. (I only narrowly avoided detection by Dean Brownlee for a prank involving NI3 in chemistry lab.)

Nowadays, most old books are off limits to the casual library visitor, but digital archives can satisfy my occasional compulsion to browse what is new and compelling in the world of research on topics of interest. And there can be no better source for new and topical research than browsing dissertations and theses, which are usually required to break new ground in scholarly research and debate. There are several online search tools for dissertations, such as ProQuest, EBSCO Open Dissertations, Theses and Dissertations, WorldCat Dissertations and Theses, Open Access Theses and Dissertations, and Yale Library Resources to Find Dissertations.

Some universities generously share the scholarship of their graduate students online, and there are some great gems freely available.[1] Other universities provide a catalogue of their students’ dissertations, the titles of which can be browsed and the texts of which can be downloaded. For lawyers interested in medico-legal issues, the London School of Hygiene & Tropical Medicine has a website, “LSHTM Research Online,” which is a delightful place to browse on a rainy afternoon, and which features a free, open-access repository of research. Most of the publications are dissertations, some 1,287 at present, on various medical and epidemiologic topics, from 1938 to the present.

The prominence of the London School of Hygiene & Tropical Medicine makes its historical research germane to medico-legal issues such as “state of the art,” notice, priority, knowledge, and intellectual provenance. A 1959 dissertation by J. D. Walters, a Surgeon Lieutenant in the Royal Navy, is included in the repository.[2] Walters’ dissertation is a treasure trove for the state-of-the-art case – who knew what, when – about asbestos health hazards, written before litigation distorted perspectives on the matter. Walters’ dissertation shows in contemporaneous scholarship, not hindsight second-guessing, that Sir Richard Doll’s 1955 study, flawed as it was by contemporaneous standards, was seen as establishing an association between asbestosis (not asbestos exposure) and lung cancer. Walters’ careful assessment of how asbestos was actually used in British dockyards documents the differences between British and American product use. The British dockyards had employed full-time laggers since 1946, who used spray asbestos and asbestos (amosite and crocidolite) mattresses, as well as lower asbestos-content insulation.

Walters reported cases of asbestosis among the laggers. Written four years before Irving Selikoff published on an asbestosis hazard among laggers, the predominant end-users of asbestos-containing insulation, Walters’ dissertation preempts Selikoff’s claim of priority in identifying the asbestos hazard, and it shows that large employers, such as the Royal Navy and the United States Navy, were well aware of asbestos hazards before companies began placing warning labels. Like Selikoff, Walters typically had no information about worker compliance with safety regulations, such as respirator use. Walters emphasized the need for industrial medical officers to be aware of the asbestosis hazard, and the means to prevent it. Noticeably absent was any suggestion that a warning label on bags of asbestos or boxes of pre-fabricated insulation was relevant to the medical officer’s work in controlling the hazard.

Among the litigation-relevant finds in the repository is the doctoral thesis of Francis Douglas Kelly Liddell,[3] on the mortality of the Quebec chrysotile workers, with most of the underlying data.[4] A dissertation by Keith Richard Sullivan reported on the mortality patterns of civilian workers at Royal Navy dockyards in England.[5] Sullivan found no increased risk of lung cancer, although excesses of asbestosis and mesothelioma occurred at all dockyards. A critical look at meta-analyses of formaldehyde and cancer outcomes in another dissertation shows prevalent biases in the available studies, and insufficient evidence of causation.[6]

Some of the other interesting dissertations with historical medico-legal relevance are:

Francis, The evaluation of small airway disease in the human lung with special reference to tests which are suitable for epidemiological screening; PhD thesis, London School of Hygiene & Tropical Medicine (1978) DOI: https://doi.org/10.17037/PUBS.04655290

Gillian Mary Regan, A Study of pulmonary function in asbestosis, PhD thesis, London School of Hygiene & Tropical Medicine (1977) DOI: https://doi.org/10.17037/PUBS.04655127

Christopher J. Sirrs, Health and Safety in the British Regulatory State, 1961-2001: the HSC, HSE and the Management of Occupational Risk, PhD thesis, London School of Hygiene & Tropical Medicine (2016) DOI: https://doi.org/10.17037/PUBS.02548737

Michael Etrata Rañopa, Methodological issues in electronic healthcare database studies of drug cancer associations: identification of cancer, and drivers of discrepant results, PhD thesis, London School of Hygiene & Tropical Medicine (2016). DOI: https://doi.org/10.17037/PUBS.02572609

Melanie Smuk, Missing Data Methodology: Sensitivity analysis after multiple imputation, PhD thesis, London School of Hygiene & Tropical Medicine (2015) DOI: https://doi.org/10.17037/PUBS.02212896

John Ross Tazare, High-dimensional propensity scores for data-driven confounder adjustment in UK electronic health records, PhD thesis, London School of Hygiene & Tropical Medicine (2022). DOI: https://doi.org/10.17037/PUBS.046647276/

Rebecca Jane Hardy, Meta-analysis techniques in medical research: a statistical perspective, PhD thesis, London School of Hygiene & Tropical Medicine (1995) DOI: https://doi.org/10.17037/PUBS.00682268

Jemma Walker, Bayesian modelling in genetic association studies, PhD thesis, London School of Hygiene & Tropical Medicine (2012) DOI: https://doi.org/10.17037/PUBS.01635516

Marieke Schoonen, Pharmacoepidemiology of autoimmune diseases, PhD thesis, London School of Hygiene & Tropical Medicine (2007) DOI: https://doi.org/10.17037/PUBS.04646551

Claudio John Verzilli, Method for the analysis of incomplete longitudinal data, PhD thesis, London School of Hygiene & Tropical Medicine (2003)  DOI: https://doi.org/10.17037/PUBS.04646517

Martine Vrijheid, Risk of congenital anomaly in relation to residence near hazardous waste landfill sites, PhD thesis, London School of Hygiene & Tropical Medicine (2000) DOI: https://doi.org/10.17037/PUBS.00682274


[1] See, e.g., Benjamin Nathan Schachtman, Traumedy: Dark Comedic Negotiations of Trauma in Contemporary American Literature (2016).

[2] J.D. Walters, Asbestos – a potential hazard to health in the ship building and ship repairing industries, DrPH thesis, London School of Hygiene & Tropical Medicine (1959); https://doi.org/10.17037/PUBS.01273049.

[3]The Lobby – Cut on the Bias” (July 6, 2020).

[4] Francis Douglas Kelly Liddell, Mortality of Quebec chrysotile workers in relation to radiological findings while still employed, PhD thesis, London School of Hygiene & Tropical Medicine (1978); DOI: https://doi.org/10.17037/PUBS.04656049

[5] Keith Richard Sullivan, Mortality patterns among civilian workers in Royal Navy Dockyards, PhD thesis, London School of Hygiene & Tropical Medicine (1994) DOI: https://doi.org/10.17037/PUBS.04656717

[6] Damien Martin McElvenny, Meta-analysis of Rare Diseases in Occupational Epidemiology, PhD thesis, London School of Hygiene & Tropical Medicine (2017) DOI: https://doi.org/10.17037/PUBS.03894558

Science & the Law – from the Proceedings of the National Academies of Science

October 5th, 2023

The current issue of the Proceedings of the National Academy of Sciences (PNAS) features a medley of articles on science generally, and forensic science in particular, in the law.[1] The general editor of the compilation appears to be editorial board member Thomas D. Albright, the Conrad T. Prebys Professor of Vision Research at the Salk Institute for Biological Studies.

I have not had time to plow through the set of offerings, but even a superficial inspection reveals that the articles will be of interest to lawyers and judges involved in the litigation of scientific issues. The authors seem to agree that, descriptively and prescriptively, validity is more important than expertise in the legal consideration of scientific evidence.

1. Thomas D. Albright, “A scientist’s take on scientific evidence in the courtroom,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Albright’s essay was edited by Henry Roediger, a psychologist at Washington University in St. Louis.

Abstract

Scientific evidence is frequently offered to answer questions of fact in a court of law. DNA genotyping may link a suspect to a homicide. Receptor binding assays and behavioral toxicology may testify to the teratogenic effects of bug repellant. As for any use of science to inform fateful decisions, the immediate question raised is one of credibility: Is the evidence a product of valid methods? Are results accurate and reproducible? While the rigorous criteria of modern science seem a natural model for this evaluation, there are features unique to the courtroom that make the decision process scarcely recognizable by normal standards of scientific investigation. First, much science lies beyond the ken of those who must decide; outside “experts” must be called upon to advise. Second, questions of fact demand immediate resolution; decisions must be based on the science of the day. Third, in contrast to the generative adversarial process of scientific investigation, which yields successive approximations to the truth, the truth-seeking strategy of American courts is terminally adversarial, which risks fracturing knowledge along lines of discord. Wary of threats to credibility, courts have adopted formal rules for determining whether scientific testimony is trustworthy. Here, I consider the effectiveness of these rules and explore tension between the scientists’ ideal that momentous decisions should be based upon the highest standards of evidence and the practical reality that those standards are difficult to meet. Justice lies in carefully crafted compromise that benefits from robust bonds between science and law.

2. Thomas D. Albright, David Baltimore, Anne-Marie Mazza, Jennifer L. Mnookin, and David S. Tatel, “Science, evidence, law, and justice,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Professor Baltimore is a Nobel laureate and researcher in biology, now at the California Institute of Technology. Anne-Marie Mazza is the director of the Committee on Science, Technology, and Law, of the National Academies of Sciences, Engineering, and Medicine. Jennifer Mnookin is the chancellor of the University of Wisconsin, Madison; previously, she was the dean of the UCLA School of Law. Judge Tatel is a federal judge on the United States Court of Appeals for the District of Columbia Circuit.

Abstract

For nearly 25 y, the Committee on Science, Technology, and Law (CSTL), of the National Academies of Sciences, Engineering, and Medicine, has brought together distinguished members of the science and law communities to stimulate discussions that would lead to a better understanding of the role of science in legal decisions and government policies and to a better understanding of the legal and regulatory frameworks that govern the conduct of science. Under the leadership of recent CSTL co-chairs David Baltimore and David Tatel, and CSTL director Anne-Marie Mazza, the committee has overseen many interdisciplinary discussions and workshops, such as the international summits on human genome editing and the science of implicit bias, and has delivered advisory consensus reports focusing on topics of broad societal importance, such as dual use research in the life sciences, voting systems, and advances in neural science research using organoids and chimeras. One of the most influential CSTL activities concerns the use of forensic evidence by law enforcement and the courts, with emphasis on the scientific validity of forensic methods and the role of forensic testimony in bringing about justice. As coeditors of this Special Feature, CSTL alumni Tom Albright and Jennifer Mnookin have recruited articles at the intersection of science and law that reveal an emerging scientific revolution of forensic practice, which we hope will engage a broad community of scientists, legal scholars, and members of the public with interest in science-based legal policy and justice reform.

3. Nicholas Scurich, David L. Faigman, and Thomas D. Albright, “Scientific guidelines for evaluating the validity of forensic feature-comparison methods,” 120 Proceedings of the National Academy of Sciences (2023).

Nicholas Scurich is the chair of the Department of Psychological Science at the University of California, Irvine. David Faigman has written prolifically about science in the law; he is now the chancellor and dean of the University of California College of the Law, San Francisco.

Abstract

When it comes to questions of fact in a legal context—particularly questions about measurement, association, and causality—courts should employ ordinary standards of applied science. Applied sciences generally develop along a path that proceeds from a basic scientific discovery about some natural process to the formation of a theory of how the process works and what causes it to fail, to the development of an invention intended to assess, repair, or improve the process, to the specification of predictions of the instrument’s actions and, finally, empirical validation to determine that the instrument achieves the intended effect. These elements are salient and deeply embedded in the cultures of the applied sciences of medicine and engineering, both of which primarily grew from basic sciences. However, the inventions that underlie most forensic science disciplines have few roots in basic science, and they do not have sound theories to justify their predicted actions or results of empirical tests to prove that they work as advertised. Inspired by the “Bradford Hill Guidelines”—the dominant framework for causal inference in epidemiology—we set forth four guidelines that can be used to establish the validity of forensic comparison methods generally. This framework is not intended as a checklist establishing a threshold of minimum validity, as no magic formula determines when particular disciplines or hypotheses have passed a necessary threshold. We illustrate how these guidelines can be applied by considering the discipline of firearm and tool mark examination.

4. Peter Stout, “The secret life of crime labs,” 120 Proceedings of the National Academy of Sciences e2303592120 (2023).

Peter Stout is a scientist with the Houston Forensic Science Center, in Houston, Texas. The Center describes itself as “an independent local government corporation,” which provides forensic “services” to the Houston police.

Abstract

Houston TX experienced a widely known failure of its police forensic laboratory. This gave rise to the Houston Forensic Science Center (HFSC) as a separate entity to provide forensic services to the City of Houston. HFSC is a very large forensic laboratory and has made significant progress at remediating the past failures and improving public trust in forensic testing. HFSC has a large and robust blind testing program, which has provided many insights into the challenges forensic laboratories face. HFSC’s journey from a notoriously failed lab to a model also gives perspective to the resource challenges faced by all labs in the country. Challenges for labs include the pervasive reality of poor-quality evidence. Also that forensic laboratories are necessarily part of a much wider system of interdependent functions in criminal justice making blind testing something in which all parts have a role. This interconnectedness also highlights the need for an array of oversight and regulatory frameworks to function properly. The major essential databases in forensics need to be a part of blind testing programs and work is needed to ensure that the results from these databases are indeed producing correct results and those results are being correctly used. Last, laboratory reports of “inconclusive” results are a significant challenge for laboratories and the system to better understand when these results are appropriate, necessary and most importantly correctly used by the rest of the system.

5. Brandon L. Garrett & Cynthia Rudin, “Interpretable algorithmic forensics,” 120 Proceedings of the National Academy of Sciences e2301842120 (2023).

Garrett teaches at the Duke University School of Law. Rudin teaches statistics at Duke University.

Abstract

One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or they simply conceal how it functions. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch 22: While black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.

6. Jed S. Rakoff & Goodwin Liu, “Forensic science: A judicial perspective,” 120 Proceedings of the National Academy of Sciences e2301838120 (2023).

Judge Rakoff has written previously on forensic evidence. He is a federal district court judge in the Southern District of New York. Goodwin Liu is a justice on the California Supreme Court. Their article was edited by Professor Mnookin.

Abstract

This article describes three major developments in forensic evidence and the use of such evidence in the courts. The first development is the advent of DNA profiling, a scientific technique for identifying and distinguishing among individuals to a high degree of probability. While DNA evidence has been used to prove guilt, it has also demonstrated that many individuals have been wrongly convicted on the basis of other forensic evidence that turned out to be unreliable. The second development is the US Supreme Court precedent requiring judges to carefully scrutinize the reliability of scientific evidence in determining whether it may be admitted in a jury trial. The third development is the publication of a formidable National Academy of Sciences report questioning the scientific validity of a wide range of forensic techniques. The article explains that, although one might expect these developments to have had a major impact on the decisions of trial judges whether to admit forensic science into evidence, in fact, the response of judges has been, and continues to be, decidedly mixed.

7. Jonathan J. Koehler, Jennifer L. Mnookin, and Michael J. Saks, “The scientific reinvention of forensic science,” 120 Proceedings of the National Academy of Sciences e2301840120 (2023).

Koehler is a professor of law at the Northwestern Pritzker School of Law. Saks is a professor of psychology at Arizona State University, and Regents Professor of Law at the Sandra Day O’Connor College of Law.

Abstract

Forensic science is undergoing an evolution in which a long-standing “trust the examiner” focus is being replaced by a “trust the scientific method” focus. This shift, which is in progress and still partial, is critical to ensure that the legal system uses forensic information in an accurate and valid way. In this Perspective, we discuss the ways in which the move to a more empirically grounded scientific culture for the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of forensic results. However, we caution that the ultimate success of this scientific reinvention likely depends on whether the courts begin to engage with forensic science claims in a more rigorous way.

8. William C. Thompson, “Shifting decision thresholds can undermine the probative value and legal utility of forensic pattern-matching evidence,” 120 Proceedings of the National Academy of Sciences e2301844120 (2023).

Thompson is professor emeritus in the Department of Criminology, Law & Society, University of California, Irvine.

Abstract

Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection theory to model examiners’ reported conclusions (e.g., identification, inconclusive, or exclusion), focusing on the connection between the examiner’s decision threshold and the probative value of the forensic evidence. It uses a Bayesian network model to explore how shifts in decision thresholds may affect rates and ratios of true and false convictions in a hypothetical legal system. It demonstrates that small shifts in decision thresholds, which may arise from contextual bias, can dramatically affect the value of forensic pattern-matching evidence and its utility in the legal system.

9. Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, Laura M. Stevens, Harriet M. J. Smith, Tobias Staudigl & Heather D. Flowe, “Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discriminability,” 120 Proceedings of the National Academy of Sciences e2301845120 (2023).

Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, and Heather D. Flowe are psychologists at the School of Psychology, University of Birmingham (United Kingdom). Harriet M. J. Smith is a psychologist in the School of Psychology, Nottingham Trent University, Nottingham, United Kingdom, and Tobias Staudigl is a psychologist in the Department of Psychology, Ludwig-Maximilians-Universität München, in Munich, Germany.

Abstract

Accurate witness identification is a cornerstone of police inquiries and national security investigations. However, witnesses can make errors. We experimentally tested whether an interactive lineup, a recently introduced procedure that enables witnesses to dynamically view and explore faces from different angles, improves the rate at which witnesses identify guilty over innocent suspects compared to procedures traditionally used by law enforcement. Participants encoded 12 target faces, either from the front or in profile view, and then attempted to identify the targets from 12 lineups, half of which were target present and the other half target absent. Participants were randomly assigned to a lineup condition: simultaneous interactive, simultaneous photo, or sequential video. In the front-encoding and profile-encoding conditions, Receiver Operating Characteristics analysis indicated that discriminability was higher in interactive compared to both photo and video lineups, demonstrating the benefit of actively exploring the lineup members’ faces. Signal-detection modeling suggested interactive lineups increase discriminability because they afford the witness the opportunity to view more diagnostic features such that the nondiagnostic features play a proportionally lesser role. These findings suggest that eyewitness errors can be reduced using interactive lineups because they create retrieval conditions that enable witnesses to actively explore faces and more effectively sample features.


[1] 120 Proceedings of the National Academy of Sciences (Oct. 10, 2023).

The IARC-hy of Evidence – Incoherent & Inconsistent Classifications of Carcinogenicity

September 19th, 2023

Recently, two lawyers wrote an article in a legal trade magazine about excluding epidemiologic evidence in civil litigation.[1] The article was wildly wide of the mark, with several conceptual and practical errors.[2] For starters, the authors discussed Rule 702 as excluding epidemiologic studies and evidence, when the rule addresses the admissibility of expert witness opinion testimony. The most egregious error, however, was the authors’ recommendation that counsel urge courts to treat the carcinogenicity classifications of the International Agency for Research on Cancer (IARC), and of regulatory agencies, as probative for or against causation.

The project of evaluating the evidence for, or against, the carcinogenicity of the myriad natural and synthetic agents to which humans are exposed is certainly important, and IARC has taken the project seriously. There have, however, been problems with IARC’s classifications of specific chemicals, pharmaceuticals, and exposure circumstances, and a basic problem with the classifications begins with the classes themselves. Classification requires defined classes. I don’t mean to be anti-semantic, but IARC’s definitions and its hierarchy of carcinogenicity are not entirely coherent.

The agency was established in 1965, and by the early 1970s, found itself in the business of preparing “monographs on the evaluation of carcinogenic risk of chemicals to man.” Originally, the IARC set out to classify the carcinogenicity of chemicals, but over the years, its scope increased to include complex mixtures, physical agents such as different forms of radiation, and biological organisms. To date, there have been 134 IARC monographs, addressing 1,045 “agents” (either substances or exposure circumstances).

From its beginnings, the IARC has conducted its classifications through working groups that meet to review and evaluate evidence, and classify the cancer hazards of “agents” under discussion. The breakdown of IARC’s classifications among four groups currently is:

Group 1 – Carcinogenic to humans (127 agents)

Group 2A – Probably carcinogenic to humans (95 agents)

Group 2B – Possibly carcinogenic to humans (323 agents)

Group 3 – Not classifiable as to its carcinogenicity to humans (500 agents)

Previously, the IARC classification included a Group 4, for agents that are probably not carcinogenic to human beings. After decades of review, the IARC placed only a single agent, caprolactam, in Group 4, apparently because the agency found everything else in the world to be presumptively a cause of cancer. The IARC could not find sufficiently strong evidence even for water, air, or basic foods to declare that they do not cause cancer in humans. Ultimately, the IARC abandoned Group 4, in favor of a presumption of universal carcinogenicity.

The IARC describes its carcinogen classification procedures, requirements, and rationales in a document known as “The Preamble.” Any discussion of IARC classifications, whether in scientific publications or in legal briefs, without reference to this document should be suspect. The Preamble seeks to define many of the words in the classificatory scheme, some in ways that are not intuitive. This document has been amended over time, and the most recent iteration can be found online at the IARC website.[3]

IARC claims to build its classifications upon “consensus” evaluations, based in turn upon considerations of

(a) the strength of evidence of carcinogenicity in humans,

(b) the evidence of carcinogenicity in experimental (non-human) animals, and

(c) the mechanistic evidence of carcinogenicity.

IARC further claims that its evaluations turn on the use of “transparent criteria and descriptive terms.”[4] This last claim, for some terms, is falsifiable.

The working groups are described as engaged in consensus evaluations, although past evaluations have been reached on a simple majority vote of the working group. The working groups are charged with considering the three lines of evidence, described above, for any given agent, and with reaching a synthesis in the form of the IARC classificatory scheme. The chart from the Preamble, below, roughly describes how working groups may “mix and match” lines of evidence, of varying degrees of robustness and validity (vel non), to reach a classification.

[Chart omitted: Table 4 of the IARC Preamble, showing the combinations of human, animal, and mechanistic evidence that support each classification.]

Agents placed in Category I are thus “carcinogenic to humans.” Interestingly, IARC does not refer to Category I carcinogens as “known” carcinogens, although many commentators are prone to do so. The implication of calling Category I agents “known carcinogens” is to distinguish Category IIA, IIB, and III as agents “not known to cause cancer.” The adjective that IARC uses, rather than “known,” is “sufficient” evidence in humans, but IARC also allows for reaching Category I with “limited,” or even “inadequate” human evidence if the other lines of evidence, in experimental animals or mechanistic evidence in humans, are sufficient.

In describing “sufficient” evidence, the IARC’s Preamble does not refer to epidemiologic evidence as potentially “conclusive” or “definitive”; rather its use of “sufficient” implies, perhaps non-transparently, that its labels of “limited” or “inadequate” evidence in humans refer to insufficient evidence. IARC gives an unscientific, inflated weight and understanding to “limited evidence of carcinogenicity,” by telling us that

“[a] causal interpretation of the positive association observed in the body of evidence on exposure to the agent and cancer is credible, but chance, bias, or confounding could not be ruled out with reasonable confidence.”[5]

Remarkably, for IARC, credible interpretations of causality can be based upon evidentiary displays that are confounded or biased.  In other words, non-credible associations may support IARC’s conclusions of causality. Causal interpretations of epidemiologic evidence are “credible” according to IARC, even though Sir Austin’s predicate of a valid association is absent.[6]

The IARC studiously avoids, however, noting that any classification is based upon “insufficient” evidence, even though that evidence may be less than sufficient, as in “limited” or “inadequate.” A close look at Table 4 reveals that some Category I classifications, and all Category IIA, IIB, and III classifications, are based upon insufficient evidence of carcinogenicity in humans.

Non-Probable Probabilities

The classification immediately below Category or Group I is Group 2A, for agents “probably carcinogenic to humans.” The IARC’s use of “probably” is problematic. Group I carcinogens require only “sufficient” evidence of human carcinogenicity, and there is no suggestion that any aspect of a Group I evaluation requires apodictic, conclusive, or even “definitive” evidence. Accordingly, the determination of Group I carcinogens will be based upon evidence that is essentially probabilistic. Group 2A is also defined as having only “limited evidence of carcinogenicity in humans”; in other words, insufficient evidence of carcinogenicity in humans, or epidemiologic studies with uncontrolled confounding and biases.

Importing IARC 2A classifications into legal or regulatory arenas will allow judgments or regulations based upon “limited evidence” in humans, which, as we have seen, can be based upon inconsistent observational studies, and studies that fail to measure and adjust for known and potential confounding risk factors and systematic biases. The 2A classification thus requires little substantively or semantically, and many 2A classifications leave juries and judges to determine whether a chemical or medication caused a human being’s cancer, when the basic predicates for Sir Austin Bradford Hill’s factors for causal judgment have not been met.[7]

An IARC evaluation of Group 2A, or “probably carcinogenic to humans,” would seem to satisfy the legal system’s requirement that an exposure to the agent of interest more likely than not causes the harm in question. Appearances and word usage in different contexts, however, can be deceiving. Probability is a continuous quantitative scale from zero to one. In Bayesian analyses, zero and one are unavailable because if either were our starting point, no amount of evidence could ever change our judgment of the probability of causation. (Cromwell’s Rule). The IARC informs us that its use of “probably” is purely idiosyncratic; the probability that a Group 2A agent causes cancer has “no quantitative” meaning. All the IARC intends is that a Group 2A classification “signifies a greater strength of evidence than possibly carcinogenic.”[8] Group 2A classifications are thus consistent with having posterior probabilities less than 0.5 (or 50 percent). A working group could judge the probability of a substance or a process to be carcinogenic to humans to be greater than zero, but no more than say ten percent, and still vote for a 2A classification, in keeping with the IARC Preamble. This low probability threshold for a 2A classification converts the judgment of “probably carcinogenic” into little more than precautionary prescriptions, rendered when the most probable assessment is either ignorance or lack of causality. There is thus a practical certainty, close to 100%, that a 2A classification will confuse judges and juries, as well as the scientific community.
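
The Cromwell’s Rule point can be made concrete with Bayes’ theorem; here is a minimal sketch in standard notation, my gloss rather than anything in the IARC Preamble:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

If the prior P(H) is set to 0, the numerator is 0, and the posterior remains 0 no matter what evidence E arrives; if the prior is set to 1, the posterior remains 1. A judgment of “probably carcinogenic” must therefore live strictly between the endpoints, and, as noted above, nothing in the Preamble pins that judgment above 50 percent.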

In addition to being based upon limited, that is insufficient, evidence of human carcinogenicity, Group 2A evaluations of “probable human carcinogenicity” connote “sufficient evidence” in experimental animals. An agent can be classified 2A even when the sufficient evidence of carcinogenicity occurs in only one of several non-human animal species, with the other animal species failing to show carcinogenicity. IARC 2A classifications can thus raise the thorny question in court whether a claimant is more like a rat or a mouse.

Courts should, because of the incoherent and diluted criteria for “probably carcinogenic,” exclude expert witness opinions based upon IARC 2A classifications as scientifically insufficient.[9] Given the distortion of ordinary language in its use of defined terms such as “sufficient,” “limited,” and “probable,” any evidentiary value to IARC 2A classifications, and expert witness opinion based thereon, is “substantially outweighed by a danger of … unfair prejudice, confusing the issues, [and] misleading the jury….”[10]

Everything is Possible

Category 2B denotes “possibly carcinogenic.” This year, the IARC announced that a working group had concluded that aspartame, an artificial sugar substitute, was “possibly carcinogenic.”[11] Such an evaluation, however, tells us nothing. If there are no studies at all of an agent, the agent could be said to be possibly carcinogenic. If there are inconsistent studies, even if the better designed studies are exculpatory, scientists could still say that the agent of interest was possibly carcinogenic. The 2B classification does not tell us anything, because everything is possible until there is sufficient evidence to inculpate an agent as a cause of cancer in humans, or to exculpate it.

It’s a Hazard, Not a Risk

IARC’s classification does not include an assessment of exposure levels. Consequently, there is no consideration of dose or exposure level at which an agent becomes carcinogenic. IARC’s evaluations are limited to whether the agent is or is not carcinogenic. The IARC explicitly concedes that exposure to a carcinogenic agent may carry little risk, but it cannot bring itself to say no risk, or even benefit at low exposures.

As noted, the IARC classification scheme refers to the strength of the evidence that an agent is carcinogenic, and not to the quantitative risk of cancer from exposure at a given level. The Preamble explains the distinction as fundamental:

“A cancer hazard is an agent that is capable of causing cancer, whereas a cancer risk is an estimate of the probability that cancer will occur given some level of exposure to a cancer hazard. The Monographs assess the strength of evidence that an agent is a cancer hazard. The distinction between hazard and risk is fundamental. The Monographs identify cancer hazards even when risks appear to be low in some exposure scenarios. This is because the exposure may be widespread at low levels, and because exposure levels in many populations are not known or documented.”[12]

This attempted explanation reveals important aspects of IARC’s project. First, there is an unproven assumption that there will be cancer hazards regardless of exposure levels. The IARC contemplates that there may be circumstances of low risk from low levels of exposure, but it elides the important issue of thresholds. Second, IARC’s distinction between hazard and risk is obscured by its own classifications. For instance, when IARC evaluated crystalline silica and classified it in Group I, it did so for only “occupational exposures.”[13] And yet, when IARC evaluated the hazard of coal exposure, it placed coal dust in Group 3, even though coal dust contains crystalline silica.[14] Similarly, in 2018, the IARC classified coffee in Group 3,[15] even though every drop of coffee contains acrylamide, which is, according to IARC, a Group 2A “probable human carcinogen.”[16]
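
The hazard-risk distinction can also be stated in simple probabilistic terms; this formalization is my own gloss, not the Preamble’s:

\[
R(d) = P(\text{cancer} \mid \text{exposure at dose } d)
\]

Hazard identification asks only the qualitative question whether R(d) > R(0) for some dose d, however large; risk assessment asks how large R(d) − R(0) is at the doses people actually experience. A Group 1 classification answers only the first question, and is silent on the second, which is why the classification can coexist with trivially small, or simply unknown, risks at real-world exposure levels.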


[1] Christian W. Castile and Stephen J. McConnell, “Excluding Epidemiological Evidence Under FRE 702,” For The Defense 18 (June 2023) [Castile].

[2] “Excluding Epidemiologic Evidence Under Federal Rule of Evidence 702” (Aug. 26, 2023).

[3] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble (2019).

[4] Jonathan M. Samet, Weihsueh A. Chiu, Vincent Cogliano, Jennifer Jinot, David Kriebel, Ruth M. Lunn, Frederick A. Beland, Lisa Bero, Patience Browne, Lin Fritschi, Jun Kanno, Dirk W. Lachenmeier, Qing Lan, Gerard Lasfargues, Frank Le Curieux, Susan Peters, Pamela Shubat, Hideko Sone, Mary C. White, Jon Williamson, Marianna Yakubovskaya, Jack Siemiatycki, Paul A. White, Kathryn Z. Guyton, Mary K. Schubauer-Berigan, Amy L. Hall, Yann Grosse, Veronique Bouvard, Lamia Benbrahim-Tallaa, Fatiha El Ghissassi, Beatrice Lauby-Secretan, Bruce Armstrong, Rodolfo Saracci, Jiri Zavadil, Kurt Straif, and Christopher P. Wild, “The IARC Monographs: Updated Procedures for Modern and Transparent Evidence Synthesis in Cancer Hazard Identification,” 112 J. Nat’l Cancer Inst. djz169 (2020).

[5] Preamble at 31.

[6] See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965) (noting that only when “[o]ur observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” do we move on to consider the nine articulated factors for determining whether an association is causal).

[7] Id.

[8] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble 31 (2019) (“The terms probably carcinogenic and possibly carcinogenic have no quantitative significance and are used as descriptors of different strengths of evidence of carcinogenicity in humans.”).

[9] See “Is the IARC lost in the weeds” (Nov. 30, 2019); “Good Night Styrene” (Apr. 18, 2019).

[10] Fed. R. Evid. 403.

[11] Elio Riboli, et al., “Carcinogenicity of aspartame, methyleugenol, and isoeugenol,” 24 The Lancet Oncology 848 (2023); IARC, “Aspartame hazard and risk assessment results released” (2023).

[12] Preamble at 2.

[13] IARC Monograph 68, at 41 (1997) (“For these reasons, the Working Group therefore concluded that overall the epidemiological findings support increased lung cancer risks from inhaled crystalline silica (quartz and cristobalite) resulting from occupational exposure.”).

[14] IARC Monograph 68, at 337 (1997).

[15] IARC Monograph No. 116, Drinking Coffee, Mate, and Very Hot Beverages (2018).

[16] IARC Monograph No. 60, Some Industrial Chemicals (1994).

PLPs & Five-Legged Dogs

September 1st, 2023

All lawyers have heard the puzzle: “How many legs does a dog have if you call his tail a leg?” The puzzle is often attributed to Abraham Lincoln, who used it at various times, including in jury speeches, but he did not originate it. The answer, of course, is: “Four. Saying that a tail is a leg does not make it a leg.” Quote investigators have traced the puzzle as far back as 1825, when newspapers quoted legislator John W. Hulbert as saying that something “reminded him of the story.”[1]

What do we call a person who becomes pregnant and delivers a baby?

A woman.

The current, trending fashion is to call such a person a PLP, a person who becomes pregnant and lactates. This façon de parler is particularly misleading if it is meant as an accommodation to the transgender population. Transgender women will not show up as pregnant or lactating, and transgender men will show up only if their transition is incomplete and has left them with functional female reproductive organs.

In 2010, Guinness World Records named Thomas Beatie the “World’s First Married Man to Give Birth.” Thomas Beatie is now legally a man, which is just another way of saying that he chose to identify as a man and gained legal recognition for his choice. Beatie was born female, went through puberty as a female, and matured into a biological woman, with ovaries and a uterus.

Beatie underwent partial gender reassignment surgery, consisting at least of a double mastectomy, and took testosterone replacement therapy (off label), but retained ovaries and a uterus.

Guinness makes a fine stout, and we may look upon it kindly for having nurtured the statistical thinking of William Sealy Gosset. Guinness, however, cannot make a dog have five legs simply by agreeing to call its tail a leg. Beatie was not the first pregnant man; rather, he was the first person, born with functional female reproductive organs, to have his male gender identity recognized by a state, and then to conceive and deliver a newborn. If Guinness wants to call this the first “legal man” to give birth, by semantic legerdemain, that is fine. Certainly we can and should be publicly respectful of transgendered persons, and work to prevent them from being harassed or embarrassed. There may well be many situations in which we would change our linguistic usage to acknowledge a transsexual male as the mother of a child.[2] We do not, however, have to change biology to suit their choices, or make useless gestures to have them feel included when their inclusion is not relevant to important scientific and medical issues.

Sadly, the National Academies of Sciences, Engineering, and Medicine (NASEM) would impose this politico-semanticism upon us while addressing the serious issue of whether women of child-bearing age should be included in clinical trials. At a recent workshop on “Developing a Framework to Address Legal, Ethical, Regulatory, and Policy Issues for Research Specific to Pregnant and Lactating Persons,”[3] the Academies introduced a particularly ugly neologism, “pregnant and lactating persons,” or PLP for short. The workshop proceedings report:

“Approximately 4 million pregnant people in the United States give birth annually, and 70 percent of these individuals take at least one prescription medication during their pregnancy. Yet, pregnant and lactating persons are often excluded from clinical trials, and often have to make treatment decisions without an adequate understanding of the benefits and risks to themselves and their developing fetus or newborn baby. An ad hoc committee of the National Academies of Sciences, Engineering, and Medicine will develop a framework for addressing medicolegal and liability issues when planning or conducting research specific to pregnant and lactating persons.”[4]

The full report from NASEM, with fulsome use of the PLP phrase, is now available.[5]

J.K. Rowling is not the only one who is concerned about the erasure of the female from our discourse. Certainly we can acknowledge that transgenderism is real, without allowing the exception to erase biological facts about reproduction. After all, Guinness’s first pregnant “legal man” could not lactate, as a result of bilateral mastectomies, and thus the “legal man” was not a pregnant person who could lactate. And the pregnant “legal man” had functioning ovaries and a uterus, which is not a matter of gender identity, but of the physiological functioning of biological female sex organs. Furthermore, including transgender women, or “legal women,” without functional ovaries and uterus, in clinical trials will not answer difficult questions about whether experimental therapies may harm women’s reproductive function or their offspring in utero or after birth.

The inclusion of women in clinical trials is a serious issue precisely because experimental therapies may hold risks for participating women’s offspring in utero. The law may not permit a proper informed consent by women on behalf of their conceptus. And because of the new latitude legislatures enjoy to impose religion-based bans on abortion, a woman who conceives while taking an experimental drug may not be able to terminate a pregnancy that has been irreparably harmed by the drug.

The creation of the PLP category really confuses rather than elucidates how we answer the ethical and medical questions involved in testing new drugs or treatments for women. NASEM’s linguistic gerrymandering may allow some persons who have suffered from gender dysphoria to feel “included,” and perhaps to have their choices “validated,” but the inclusion of transgender women, or partially transgendered men, will not help answer the important questions facing clinical researchers. Taxpayers who fund NASEM and NIH deserve better clarity and judgment in the use of governmental funds in supporting clinical trials.

When and whence comes this PLP neologism? An N-Gram search of the Google Books corpus shows that “pregnant person” appeared before 1975, and that the phrase has waxed and waned since.

N-Gram for pregnant person, conducted September 1, 2023
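
For readers who wish to recreate the chart, the Ngram Viewer exposes a JSON endpoint behind its web interface. That endpoint is unofficial and undocumented, so the URL and parameter names in this Python sketch (content, year_start, year_end, corpus, smoothing) are assumptions that Google may change without notice.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Unofficial JSON endpoint behind the Google Books Ngram Viewer.
# Assumption: this endpoint and its parameters are undocumented and
# may change or disappear without notice.
NGRAM_URL = "https://books.google.com/ngrams/json"

def ngram_series(phrase: str, start: int = 1900, end: int = 2019) -> dict:
    """Fetch yearly relative frequencies for `phrase` from the Ngram Viewer."""
    params = urlencode({
        "content": phrase,
        "year_start": start,
        "year_end": end,
        "corpus": "en-2019",  # assumed identifier for the English 2019 corpus
        "smoothing": 0,       # raw yearly values, no moving average
    })
    req = Request(f"{NGRAM_URL}?{params}", headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req) as resp:
        data = json.load(resp)  # a list of {ngram, timeseries, ...} objects
    if not data:
        return {}
    return dict(zip(range(start, end + 1), data[0]["timeseries"]))

if __name__ == "__main__":
    for year, freq in ngram_series("pregnant person").items():
        if freq:
            print(year, f"{freq:.3e}")
```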

A search of the National Library of Medicine PubMed database found several dozen hits, virtually all within the last two years. The earliest use was in 1970,[6] with a recrudescence 11 years later.[7]

From PubMed search for “pregnant person,” conducted Sept. 1, 2023
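
The PubMed tally, unlike the N-Gram chart, can be reproduced through NCBI’s documented, public E-utilities interface. The Python sketch below queries the esearch endpoint for the quoted phrase year by year; the date range and the tallying loop are my own illustrative choices, not a record of how the search above was actually run.

```python
import json
import time
from urllib.parse import urlencode
from urllib.request import urlopen

# NCBI E-utilities search endpoint for PubMed (documented by NCBI).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` published in `year`."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",  # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmax": 0,         # only the count is needed, not the record IDs
        "retmode": "json",
    })
    with urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    # Year-by-year hits for the quoted phrase, showing the recent surge.
    for year in range(1970, 2024):
        hits = pubmed_count('"pregnant person"', year)
        if hits:
            print(year, hits)
        time.sleep(0.4)  # respect NCBI's limit of 3 requests/second without an API key
```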

In 2021, the New England Journal of Medicine published a paper on the safety of Covid-19 vaccines in “pregnant persons.”[8] Last year, the Association of American Medical Colleges sponsored a report about physicians advocating for the inclusion of “pregnant people” in clinical trials, in a story that noted that “[p]regnant patients are often excluded from clinical trials for fear of causing harm to them or their babies, but leaders in maternal-fetal medicine say the lack of data can be even more harmful.”[9] And currently, the New York State Department of Health advises that “[d]ue to changes that occur during pregnancy, pregnant people may be more susceptible to viral respiratory infections.”[10]

It was not always so. Back in the dark ages of 2008, the National Cancer Institute issued guidelines on the inclusion of pregnant and breast-feeding women in clinical trials.[11] As recently as June 2021, the World Health Organization was still old school in discussing “pregnant and lactating women.”[12] The same year, over a dozen female scientists published a call to action about the inclusion of “pregnant women” in COVID-19 trials.[13]

Two years ago, I gingerly criticized the American Medical Association’s issuance of a linguistic manifesto on how physicians and scientists should use language to advance the Association’s notions of social justice.[14] For all its faults, however, the Association’s guide to “correct” usage was devoid of the phrases “pregnant persons” and “lactating persons.”[15] Pregnancy is a function of sex, not of gender.


[1] “Suppose You Call a Sheep’s Tail a Leg, How Many Legs Will the Sheep Have?” QuoteResearch (Nov. 15, 2015).

[2] Sam Dylan More, “The pregnant man – an oxymoron?” 7 J. Gender Studies 319 (1998).

[3] National Academies of Sciences, Engineering, and Medicine, “Research with Pregnant and Lactating Persons: Mitigating Risk and Liability: Proceedings of a Workshop–in Brief” (2023).

[4] NASEM, “Research with Pregnant and Lactating Persons: Mitigating Risk and Liability: Proceedings of a Workshop–in Brief” (2023).

[5] National Academies of Sciences, Engineering, and Medicine, Inclusion of Pregnant and Lactating Persons in Clinical Trials: Proceedings of a Workshop (2023).

[6] W.K. Keller, “The pregnant person,” 68 J. Ky. Med. Ass’n 454 (1970).

[7] Vibiana M. Andrade, “The toxic workplace: Title VII protection for the potentially pregnant person,” 4 Harvard Women’s Law J. 71 (1981).

[8] Tom T. Shimabukuro, Shin Y. Kim, Tanya R. Myers, Pedro L. Moro, Titilope Oduyebo, Lakshmi Panagiotakopoulos, Paige L. Marquez, Christine K. Olson, Ruiling Liu, Karen T. Chang, Sascha R. Ellington, Veronica K. Burkel, et al., for the CDC v-safe COVID-19 Pregnancy Registry Team, “Preliminary Findings of mRNA Covid-19 Vaccine Safety in Pregnant Persons,” 384 New Engl. J. Med. 2273 (2021).

[9] Bridget Balch, “Prescribing without data: Doctors advocate for the inclusion of pregnant people in clinical research,” AAMC (Mar. 22, 2022).

[10] New York State Department of Health, “Pregnancy & COVID-19,” last visited August 31, 2023.

[11] NCI, “Guidelines Regarding the Inclusion of Pregnant and Breast-Feeding Women on Cancer Clinical Treatment Trials” (May 29, 2008).

[12] WHO, “Update on WHO Interim recommendations on COVID-19 vaccination of pregnant and lactating women” (June 10, 2021).

[13] Melanie M. Taylor, Loulou Kobeissi, Caron Kim, Avni Amin, Anna E. Thorson, Nita B. Bellare, Vanessa Brizuela, Mercedes Bonet, Edna Kara, Soe Soe Thwin, Hamsadvani Kuganantham, Moazzam Ali, Olufemi T. Oladapo, Nathalie Broutet, “Inclusion of pregnant women in COVID-19 treatment trials: a review and global call to action,” 9 Lancet Global Health e366 (2021).

[14] American Medical Association, “Advancing Health Equity: A Guide to Language, Narrative and Concepts” (2021); see Harriet Hall, “The AMA’s Guide to Politically Correct Language: Advancing Health Equity,” Science Based Medicine (Nov. 2, 2021).

[15] “When the American Medical Association Woke Up” (Nov. 17, 2021).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.