TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

A Π-Day Celebration of Irrational Numbers and Other Things – Philadelphia Glyphosate Litigation

March 14th, 2024

Science can often be more complicated and nuanced than we might like. Back in 1897, the Indiana legislature attempted to establish that π was equal to 3.2.[1] Sure, that value was simpler and easier to use in calculations, but it was also wrong. The irreducible fact is that π is an irrational number, and Indiana’s attempt to change that fact was, well, irrational. And to celebrate irrationality, consider the lawsuit industry’s jihad against glyphosate, including its efforts to elevate a dodgy IARC evaluation while suppressing evidence of glyphosate’s scientific exonerations.


After Bayer lost three consecutive glyphosate cases in Philadelphia last year, observers were scratching their heads over why the company had lost when the scientific evidence strongly supports the defense. The Philadelphia Court of Common Pleas, not to be confused with Common Fleas, can be a rough place for corporate defendants. The local newspapers, to the extent people still read newspapers, are insufferably slanted in their coverage of health claims.

The plaintiffs’ verdicts garnered a good deal of local media coverage in Philadelphia.[2] Defense verdicts generally receive no ink from sensationalist newspapers such as the Philadelphia Inquirer. Regardless, media accounts, both lay and legal, are generally inadequate to tell us what happened, or what went wrong in the courtroom. The defense losses could be attributable to partial judges or juries, or to the difficulty of communicating subtle issues of scientific validity. Plaintiffs’ expert witnesses may seem more sure of themselves than defense experts, or plaintiffs’ counsel may connect better with juries primed by fear-mongering media. Without being in the courtroom, or at least studying the trial transcripts, outside observers are hard pressed to explain fully jury verdicts that go against the scientific evidence. The one thing jury verdicts are not, however, is a valid assessment of the strength of scientific evidence, inferences, and conclusions.

Although Philadelphia juries can be rough, they like to see a fight. (Remember Rocky.) It is not a place for genteel manners or delicate and subtle distinctions. Last week, Bayer broke its Philadelphia losing streak, with a win in Kline v. Monsanto Co.[3] Mr. Kline claimed that he developed non-Hodgkin’s lymphoma (NHL) from his long-term use of Roundup. The two-week trial, before Judge Ann Butchart, went to the jury, which deliberated two hours before returning a unanimous defense verdict. The jury found that the defendants, Monsanto and Nouryon Chemicals LLC, were not negligent, and that the plaintiff’s use of Roundup was not a factual cause of his lymphoma.[4]

Law360 reported that the Kline verdict was the first to follow a ruling, issued on Valentine’s Day, February 14, 2024, that excluded any courtroom reference to the hazard evaluation of glyphosate by the International Agency for Research on Cancer (IARC). The Law360 article indicated that IARC found that glyphosate can cause cancer; except, of course, IARC has never reached any such conclusion.

The IARC working group evaluated the evidence on glyphosate and classified the substance as a category IIA carcinogen, a category IARC labels as “probably” carcinogenic to humans. That label sounds close to what might be useful in a courtroom, except that IARC declares that “probably,” as used in its IIA classification, does not mean what people generally, and lawyers and judges specifically, mean by the word. For IARC, “probable” has no quantitative meaning, even though probability is a quantitative concept, which everyone understands to be measured on a scale from 0 to 1, or from 0% to 100%. An IARC IIA classification could thus represent a posterior probability of 1% in favor of carcinogenicity (and a 99% probability of non-carcinogenicity). In other words, on whether glyphosate causes cancer in humans, IARC says maybe, in its own made-up epistemic modality.

To find the idiosyncratic definition of “probable,” a diligent reader must go outside the monograph of interest, to the so-called Preamble, a separate document last revised in 2019. The first time the jury hears of the IARC pronouncement will be in the plaintiff’s case, and if the defense wishes to inform the jury of the special, idiosyncratic meaning of IARC “probable,” it must do so on cross-examination of hostile plaintiffs’ witnesses, or wait until it presents its own witnesses. Disclosing the IARC IIA classification hurts because the “probable” language lines up with what trial judges will instruct juries at the end of the case, when jurors are told that they need not believe that the plaintiff has eliminated all doubt; they need only find that the plaintiff has shown each element of his case to be “probable,” or more likely than not, in order to prevail. Once the jury has heard “probable,” the defense will have a hard time putting the toothpaste back in the tube. Of course, this is why the lawsuit industry loves IARC evaluations, with their fallacies of semantical distortion.[5]

Although identifying the causes of a jury verdict is more difficult than even determining carcinogenicity, Rosemary Pinto, one of plaintiff Kline’s lawyers, suggested that the exclusion of the IARC evaluation sank her case:

“We’re very disappointed in the jury verdict, which we plan to appeal, based upon adverse rulings in advance of the trial that really kept core components of the evidence out of the case. These included the fact that the EPA safety evaluation of Roundup has been vacated, who IARC (the International Agency for Research on Cancer) is and the relevance of their finding that Roundup is a probable human carcinogen [sic], and also the allowance into evidence of findings by foreign regulatory agencies disguised as foreign scientists. All of those things collectively, we believe, tilted the trial in Monsanto’s favor, and it was inconsistent with the rulings in previous Roundup trials here in Philadelphia and across the country.”[6]

Pinto was involved in the case, and so she may have some insight into why the jury ruled as it did. Still, issuing this pronouncement before interviewing the jurors seems little more than wishcasting. As philosopher Harry Frankfurt explained, “the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”[7] Pinto’s real aim was revealed in her statement that the IARC review was “crucial evidence that juries should be hearing.”[8]  

What is the genesis of Pinto’s complaint about the exclusion of IARC’s conclusions? The Valentine’s Day Order, issued by Judge Joshua H. Roberts, who heads up the Philadelphia County mass tort court, provided that:

AND NOW, this 14th day of February, 2024, upon consideration of Defendants’ Motion to Clarify the Court’s January 4, 2024 Order on Plaintiffs Motion in Limine No. 5 to Exclude Foreign Regulatory Registrations and/or Approvals of Glyphosate, GBHs, and/or Roundup, Plaintiffs’ Response, and after oral argument, it is ORDERED as follows:

  1. The Court’s Order of January 4, 2024, is AMENDED to read as follows: [ … ] it is ORDERED that the Motion is GRANTED without prejudice to a party’s introduction of foreign scientific evidence, provided that the evidence is introduced through an expert witness who has been qualified pursuant to Pa. R. E. 702.

  2. The Court specifically amends its Order of January 4, 2024, to exclude reference to IARC, and any other foreign agency and/or foreign regulatory agency.

  3. The Court reiterates that no party may introduce any testimony or evidence regarding a foreign agency and/or foreign regulatory agency which may result in a mini-trial regarding the protocols, rules, and/or decision making process of the foreign agency and/or foreign regulatory agency. [fn1]

  4. The trial judge shall retain full discretion to make appropriate evidentiary rulings on the issues covered by this Order based on the testimony and evidence elicited at trial, including but not limited to whether a party or witness has “opened the door.”[9]

Now, what was not covered in the legal media accounts was the curious irony that the exclusion of the IARC evaluation resulted from the plaintiffs’ own motion, an own goal of sorts. In previous Philadelphia trials, plaintiffs’ counsel vociferously objected to defense counsel’s and experts’ references to the determinations of foreign regulators, such as the European Union Assessment Group on Glyphosate (2017, 2022), Health Canada (2017), the European Food Safety Authority (2017, 2023), the Australian Pesticides and Veterinary Medicines Authority (2017), the German Federal Institute for Risk Assessment (2019), and others, all of which rejected the IARC evaluation and reported that glyphosate has not been shown to be carcinogenic.[10]

The gravamen of the plaintiffs’ objection was that such regulatory determinations were hearsay, and that they resulted from various procedures, using various criteria, which would require explanation, and would be subject to litigants’ challenges.[11] In other words, for each regulatory agency’s determination, there would be a “mini-trial,” or a “trial within a trial,” about the validity and accuracy of the foreign agency’s assessment.

In the earlier Philadelphia trials, the plaintiffs’ objections were largely sustained, which created a significant evidentiary bias in the courtrooms. Plaintiffs’ expert witnesses could freely discuss the IARC glyphosate evaluation, but the defense and its experts could not discuss the many determinations of the safety of glyphosate. Jurors were apparently left with the erroneous impression that the IARC evaluation was a consensus view of the entire world’s scientific community.

Now plaintiffs’ objection has a point, even though it seems to prove too much and must ultimately fail. In a trial, each side has expert witnesses who can offer an opinion about the key causal issue, whether glyphosate can cause NHL, and whether it caused this plaintiff’s NHL. Each expert witness will have written a report that identifies the facts and data relied upon, and that explains the inferences drawn and conclusions reached. The adversary can challenge the validity of the data, inferences, and conclusions because the opposing expert witness will be subject to cross-examination.

The facts and data relied upon will, however, be “hearsay,” coming from published studies not written by the expert witnesses at trial. Many aspects of the relied-upon studies will be taken on faith, without the testimony of the study participants, their healthcare providers, or the scientists who collected the data, chose how to analyze them, conducted the statistical and scientific analyses, and wrote up the methods and study findings. Permitting reliance upon any study thus allows for a “mini-trial,” or a “trial within a trial,” on each study cited and relied upon by the testifying expert witnesses. The complexity involved in expert witness opinion testimony is one of the foundational reasons for Rule 702’s gatekeeping regime in federal court, and in most state courts, but that regime is usually conspicuously absent from Pennsylvania courtrooms.

Furthermore, the plaintiffs’ objections to foreign regulatory determinations would apply to any review paper, and more important, it would apply to the IARC glyphosate monograph itself. After all, if expert witnesses are supposed to have reviewed the underlying studies themselves, and be competent to do so, and to have arrived at an opinion in some reliable way from the facts and data available, then they would have no need to advert to the IARC’s review on the general causation issue.  If an expert witness were allowed to invoke the IARC conclusion, presumably to bolster his or her own causation opinion, then the jury would need to resolve questions about:

  • who was on the working group;
  • how working group members were selected, or excluded;
  • how the working group arrived at its conclusion;
  • what the working group relied upon, or did not rely upon, and why;
  • what the group’s method was for synthesizing facts and data to reach its conclusion;
  • whether the working group was faithful to its stated methodology;
  • whether the working group committed any errors of statistical or scientific judgment along the way;
  • what potential biases the working group members had;
  • what the basis is for the IARC’s classificatory scheme; and
  • how IARC’s key terms such as “sufficient,” “limited,” “probable,” “possible,” etc., are defined and used by working groups.

Indeed, a very substantial trial could be had on the bona fides and methods of the IARC, and the glyphosate IARC working group in particular.

The curious irony behind the Valentine’s Day order is that plaintiffs’ counsel were generally winning their objections to the defense’s references to foreign regulatory determinations. But as pigs get fatter, hogs get slaughtered. Last year, plaintiffs’ counsel moved to “exclude foreign regulatory registrations and/or approvals of glyphosate.”[12] To be sure, plaintiffs’ counsel were not seeking merely the exclusion of glyphosate registrations, but also the exclusion of the scientific evaluations of regulatory agencies and their staff scientists and consulting scientists. Plaintiffs wanted trials in which juries would hear only about IARC, as though it were a scientific consensus. The many scientific regulatory considerations and rejections of the IARC evaluation would be purged from the courtroom.

On January 4, 2024, plaintiffs’ counsel obtained what they sought, an order that memorialized the tilted playing field they had largely been enjoying in Philadelphia courtrooms. Judge Roberts’ order was short and somewhat ambiguous:

“upon consideration of plaintiff’s motion in limine no. 5 to exclude foreign regulatory registrations and/or approvals of glyphosate, GBHs, and/or Roundup, any response thereto, the supplements of the parties, and oral argument, it is ORDERED that the motion is GRANTED without prejudice to a party’s introduction of foreign scientific evidence including, but not limited to, evidence from the International Agency for Research on Cancer (IARC), provided that such introduction does not refer to foreign regulatory agencies.”

The courtroom “real world” outcome after Judge Roberts’ order was an obscene verdict in the McKivison case. Again, there may have been many contributing causes to the McKivison verdict, including Pennsylvania’s murky and retrograde law of expert witness opinion testimony.[13] Mr. McKivison was in remission from NHL and had sustained no economic damages, and yet, on January 26, 2024, a jury in his case returned a punitive compensatory damages award of $250 million, and an even more punitive punitive damage award of $2 billion.[14] It seems at least plausible that the imbalance between admitting the IARC evaluation while excluding foreign regulatory assessments helped create a false narrative that scientists and regulators everywhere had determined glyphosate to be unsafe.

On February 2, 2024, the defense moved for a clarification of Judge Roberts’ January 4, 2024 order, which applied globally in the Philadelphia glyphosate litigation. The defendants complained that in their previous trial, after Judge Roberts’ order of January 4, 2024, they were severely prejudiced by being prohibited from referring to the conclusions and assessments of foreign scientists who worked for regulatory agencies. The complaint seems well founded. If a hearsay evaluation of glyphosate by an IARC working group is relevant and admissible, then the conclusions of foreign scientists about glyphosate are relevant and admissible, whether or not those scientists are employed by foreign regulatory agencies. Indeed, plaintiffs’ counsel routinely complained about Monsanto/Bayer’s “influence” over the United States Environmental Protection Agency, but the suggestion that the European Union’s regulators are in the pockets of Bayer is pretty farfetched. And the complaint about bias is peculiar coming from plaintiffs’ counsel, who command an outsized influence within the Collegium Ramazzini,[15] which in turn often dominates IARC working groups. Every agency and scientific group, including the IARC, has its “method,” its classificatory schemes, its definitions, and the like. By privileging the IARC conclusion, while excluding those of the many other agencies and groups, and by allowing plaintiffs’ counsel to argue that there is no real-world debate over glyphosate, Philadelphia courts play a malignant role in generating the huge verdicts seen in glyphosate litigation.

The defense motion for clarification also stressed that the issue whether glyphosate causes NHL or other human cancer is not the probandum for which foreign agency and scientific group statements are relevant.  Pennsylvania has a most peculiar, idiosyncratic law of strict liability, under which such statements may not be relevant to liability questions. Plaintiffs’ counsel, in glyphosate and most tort litigations, however, routinely assert negligence as well as punitive damages claims. Allowing plaintiffs’ counsel to create a false and fraudulent narrative that Monsanto has flouted the consensus of the entire scientific and regulatory community in failing to label Roundup with cancer warnings is a travesty of the rule of law.

What was too clever by half in the plaintiffs’ litigation approach was that their complaints about foreign regulatory assessments applied equally, if not more so, to the IARC glyphosate hazard evaluation itself. The glyphosate litigation is not likely to be as interminable as π, but it is just as irrational.

*      *     *      *      *     * 

Post Script.  Ten days after the verdict in Kline, and one day after the above post, the Philadelphia Inquirer released a story about the defense verdict. See Nick Vadala, “Monsanto wins first Roundup court case in recent string of Philadelphia lawsuits,” Phila. Inq. (Mar. 15, 2024).


[1] Bill 246, Indiana House of Representatives (1897); Petr Beckmann, A History of π at 174 (1971).

[2] See Robert Moran, “Philadelphia jury awards $175 million after deciding 83-year-old man got cancer from Roundup weed killer,” Phila. Inq. (Oct. 27, 2023); Nick Vadala, “Philadelphia jury awards $2.25 billion to man who claimed Roundup weed killer gave him cancer,” Phila. Inq. (Jan. 29, 2024).

[3] Phila. Ct. C.P. 2022-01641.

[4] George Woolston, “Monsanto Nabs 1st Win In Philly’s Roundup Trial Blitz,” Law360 (Mar. 5, 2024); Nicholas Malfitano, “After three initial losses, Roundup manufacturers get their first win in Philly courtroom,” Pennsylvania Record (Mar. 6, 2024).

[5] See David Hackett Fischer, “Fallacies of Semantical Distortion,” chap. 10, in Historians’ Fallacies: Toward a Logic of Historical Thought (1970); see also “IARC’s Fundamental Distinction Between Hazard and Risk – Lost in the Flood” (Feb. 1, 2024); “The IARC-hy of Evidence – Incoherent & Inconsistent Classification of Carcinogenicity” (Sept. 19, 2023).

[6] Malfitano, note 4 (quoting Pinto); see also Woolston, note 4 (quoting Pinto).

[7] Harry Frankfurt, On Bullshit at 63 (2005); see “The Philosophy of Bad Expert Witness Opinion Testimony” (Oct. 2, 2010).

[8] See Malfitano, note 4 (quoting Pinto).

[9] In re Roundup Prods. Litig., Phila. Cty. Ct. C.P., May Term 2022-0550, Control No. 24020394 (Feb. 14, 2024) (Roberts, J.). In a footnote, the court explained that “an expert may testify that foreign scientists have concluded that Roundup and glyphosate can be used safely and they do not cause cancer. In the example provided, there is no specific reference to an agency or regulatory body, and the jury is free to make a credibility determination based on the totality of the expert’s testimony. It is, however, impossible for this Court, in a pre-trial posture, to anticipate every iteration of a question asked or answer provided; it remains within the discretion of the trial judge to determine whether a question or answer is appropriate based on the context and the trial circumstances.”

[10] See National Ass’n of Wheat Growers v. Bonta, 85 F.4th 1263, 1270 (9th Cir. 2023) (“A significant number of . . . organizations disagree with IARC’s conclusion that glyphosate is a probable carcinogen”; … “[g]lobal studies from the European Union, Canada, Australia, New Zealand, Japan, and South Korea have all concluded that glyphosate is unlikely to be carcinogenic to humans.”).

[11] See, e.g., In re Seroquel, 601 F. Supp. 2d 1313, 1318 (M.D. Fla. 2009) (noting that references to foreign regulatory actions or decisions “without providing context concerning the regulatory schemes and decision-making processes involved would strip the jury of any framework within which to evaluate the meaning of that evidence”).

[12] McKivison v. Monsanto Co., Phila. Cty. Ct. C.P., No. 2022-00337, Plaintiff’s Motion in Limine No. 5 to Exclude Foreign Regulatory Registration and/or Approvals of Glyphosate, GBHs and/or Roundup.

[13] See Sherman Joyce, “New Rule 702 Helps Judges Keep Bad Science Out Of Court,” Law360 (Feb. 13, 2024) (noting Pennsylvania’s outlier status on evidence law that enables dodgy opinion testimony).

[14] P.J. D’Annunzio, “Monsanto Fights $2.25B Verdict After Philly Roundup Trial,” Law360 (Feb. 8, 2024).

[15] “Collegium Ramazzini & Its Fellows – The Lobby” (Nov. 19, 2023).

A Citation for Jurs & DeVito’s Unlawful U-Turn

February 27th, 2024

Antic proposals abound in the legal analysis of expert witness opinion evidence. In the courtroom, the standards for admitting or excluding such evidence are found in judicial decisions or in statutes. When legislatures have specified standards for admitting expert witness opinions, courts have a duty to apply the standards to the facts before them. Law professors are, of course, untethered from either precedent or statute, and so we may expect chaos to ensue when they wade into disputes about the proper scope of expert witness gatekeeping.

Andrew Jurs teaches about science and the law at the Drake University Law School, and Scott DeVito is an associate professor of law at the Jacksonville University School of Law. Together, they have recently produced one of the most antic of antic proposals in a fatuous call for the wholesale revision of the law of expert witnesses.[1]

Jurs and DeVito rightly point out that ever since the Supreme Court, in Daubert,[2] waded into the dispute over whether the historical Frye decision survived the enactment of the Federal Rules of Evidence, we have seen lower courts apply the legal standard inconsistently and sometimes incoherently. These authors, however, like many other academics, incorrectly label one or the other standard, Frye or Daubert, as stricter than the other. Applying the labels of stricter and weaker ignores that the two standards measure completely different things. Frye advances a sociological standard, and a Frye challenge can be answered by conducting a survey. Rule 702, as interpreted by Daubert, and as since revised and adopted by the Supreme Court and Congress, is an epistemic standard. Jurs and DeVito, like many other legal academic writers, apply a single adjective to standards that measure two different, incommensurate things. The authors’ repetition of this now 30-plus-year-old mistake is a poor start for a law review article that sets out to reform the widespread inconsistency in the application of Rule 702, in federal and in state courts.

In seeking greater adherence to the actual rule, and consistency among decisions, Jurs and DeVito might have urged judicial education, or blue-ribbon juries, or science courts, or greater use of court-appointed expert witnesses. Instead, they have put their marker down on abandoning all meaningful gatekeeping. Jurs and DeVito are intent upon repairing the inconsistency and incoherency in the application of Daubert by removing the standard altogether.

“To resolve the problem, we propose that the Courts replace the multiple Daubert factors with a single factor—testability—and that once the evidence meets this standard the judge should provide the jury with a proposed jury instruction to guide their analysis of the fact question addressed by the expert evidence.”[3]

In other words, because lower federal courts have routinely ignored the actual statutory language of Rule 702, and Supreme Court precedents, Jurs and DeVito would have courts invent a new standard that excludes virtually nothing, as long as someone can imagine a test for the asserted opinion. Remarkably, although they carry on about the “rule of law,” the authors fail to mention that judges have no authority to ignore the requirements of Rule 702. And perhaps even more stunning is that they have advanced their nihilistic proposal in the face of the remedial changes to Rule 702, designed to address judicial lawlessness in ignoring previously enacted versions of the rule. This antic proposal would bootstrap previous judicial “flyblowing” of a Congressional mandate into a prescription for abandoning any meaningful standard. The authors have articulated the Cole Porter standard: anything goes. Any opinion that can be tested is “science”; end of discussion. The rest is for the jury to decide as a question of fact, subject to the fact finder’s credibility determinations. This would be a Scott v. Sandford rule[4] for scientific validity; science has no claims of validity that the law is bound to respect.

Jurs and DeVito attempt a cynical trick. They argue that they would fix the problem of “an unpredictable standard” by reverting to what they say is Daubert’s first principle of ensuring the reliability of expert witness testimony, and limiting the evidentiary display at trial to “good science.” Cloaking their nihilism, the authors say that they want to promote “good science,” but advocate the admissibility of any and every opinion, as long as it is theoretically “testable.” In order to achieve this befuddled goal, they simply redefine scientific knowledge as “essentially” equal to testable propositions.[5]

Jurs and DeVito marshal evidence of judicial ignorance of key aspects of scientific method, such as error rate. We can all agree that judges frequently misunderstand key scientific concepts, but their misunderstandings and misapplications do not mean that the concepts are unimportant or unnecessary. Many judges seem unable to deliver an opinion that correctly defines p-value or confidence interval, but their inabilities do not allow us to dispense with the need to assess random error in statistical tests. Our faint-hearted authors never explain why the prevalence of judicial error must be a counsel of despair that drives us to bowdlerize scientific evidence into something it is not. We may simply need better training for judges, or better assistance for them in addressing complex claims. Ultimately, we need better judges.

For those judges who have taken their responsibility seriously, and who have engaged with the complexities of evaluating validity concerns raised in Rule 702 and 703 challenges, the Jurs and DeVito proposal must seem quite patronizing. The “Daubert” factors are simply too complex for you, so we will just give you crayons, or a single, meaningless factor that you cannot screw up.[6]

The authors set out a breezy, selective review of statements by a few scientists and philosophers of science. Rather than supporting their extreme reductionism, Jurs and DeVito’s review reveals that science is much more than identifying a “testable” proposition. Indeed, the article’s discussion of the philosophy and practice of science weighs strongly against the authors’ addled proposal.[7]

The authors, for example, note that Sir Isaac Newton emphasized the importance of empirical method.[8] Contrary to the article’s radical reductionism, the authors note that Sir Karl Popper and Albert Einstein stressed that the failure to obtain a predicted experimental result may render a theory “untenable,” which of course requires data and valid tests and inferences to assess. Quite a bit of motivated reasoning has led Jurs and DeVito to confuse a criterion of testability with the whole enterprise of science, and to ignore the various criteria of validity for collecting data, testing hypotheses, and interpreting results.

The authors suggest that their proposal will limit the judicial inquiry to the legal question of reliability, but the suggestion is mere farce. Reliability means obtaining the same or sufficiently similar results upon repeated testing, but these authors abjure testing itself. Furthermore, reliability as contemplated by the Supreme Court, in 1993, and by Rule 702 ever since, has meant the validity of the actual test that an expert witness advances in support of his or her opinion or claims.

Whimsically, and without evidence, Jurs and DeVito claim that their radical abandonment of gatekeeping will encourage scientists, in “fields that are testable, but not yet tested, to perform real, objective, and detailed research.” Their proposal, however, works to remove any such incentive because untested but testable research becomes freely admissible. Why would the lawsuit industry fund studies, which might not support their litigation claims, when the industry’s witnesses need only imagine a possible test to advance their claims, without the potential embarrassment by facts? The history of modern tort law teaches us that cheap speculation would quickly push out actual scientific studies.

The authors’ proposal would simply open the floodgates of speculation, conjecture, and untested hypotheses, and leave the rest to the vagaries of trials, mostly in front of jurors untrained in evaluating scientific and statistical evidence. Admittedly, some incurious and incompetent gatekeepers and triers of fact will be relieved to know that they will not have to evaluate actual scientific evidence, because it will have been eliminated by the Jurs and DeVito proposal to make mere testability the touchstone of admissibility.

To be sure, in Aristotelian terms, testability is logically and practically prior to testing, but that priority does not justify holding out testability as the “essence” of science, its alpha and omega.[9] Of course, one must have an hypothesis to engage in hypothesis testing, but science lies in the clever interrogation of nature, guided by the hypothesis. The scientific process lies in answering the question, not simply in formulating it.

As for the authors’ professed concern about “rule of law,” readers should note that the Jurs and DeVito article completely ignores the remedial amendment to Rule 702, which went into effect on December 1, 2023, to address the myriad inconsistencies, and failures to engage, in required gatekeeping of expert witness opinion testimony.[10]

The new Rule 702 is now law, with its remedial clarification that the proponent of expert witness opinion must show the court that the opinion is sufficiently supported by facts or data, Rule 702(b), and that the opinion “reflects a reliable application of the principles and methods to the facts of the case,” Rule 702(d). The Rule prohibits deferring the evaluation of sufficiency of support, or of reliability of the application of method, to the trier of fact; there is no statutory support for suggesting that these inquiries always or usually go to “weight and not admissibility.”

The Jurs and DeVito proposal would indeed be a U-Turn in the law of expert witness opinion testimony. Rather than promote the rule of law, they have issued an open, transparent call for licentiousness in the adjudication of scientific and technical issues.


[1] Andrew Jurs & Scott DeVito, “A Return to Rationality: Restoring the Rule of Law After Daubert’s Disastrous U-Turn,” 164 New Mexico L. Rev. 164 (2024) [cited below as U-Turn].

[2] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[3] U-Turn at 164, Abstract.

[4] 60 U.S. 393 (1857).

[5] U-Turn at 167.

[6] U-Turn at 192.

[7] See, e.g., U-Turn at 193 n.179, citing David C. Gooding, “Experiment,” in W.H. Newton-Smith, ed., A Companion to the Philosophy of Science 117 (2000) (emphasizing the role of actual experimentation, not the possibility of experimentation, in the development of science).

[8] U-Turn at 194.

[9] See U-Turn at 196.

[10] See Supreme Court Order, at 3 (Apr. 24, 2023); Supreme Court Transmittal Package (Apr. 24, 2023).

IARC’S Fundamental Distinction Between Hazard and Risk – Lost in the Flood

February 1st, 2024

Socrates viewed philosophy as beginning in wonder,[1] but Socrates and his philosophic heirs recognized that philosophy does not get down to business until it starts to clarify the terms of discussion. By the middle of the last century, failure to understand the logic of language replaced wonder as the beginning of philosophy.[2] Even if philosophy could not cure all conceptual pathology, most writers came to see that clarifying terms, concepts, and usage was an essential starting point in thinking clearly about a subject.[3]

Hazard versus Risk

Precision in scientific exposition often follows from the use of measurements, using agreed upon quantitative units, and accepted, accurate, reliable procedures for measurements. When scientists substitute qualitative measures for what are inherently quantitative measures, they frequently lapse into error. For example, beware of rodent studies that proclaim harms at “low doses,” which turn out to be low only in comparison with other rodent studies, while representing exposures orders of magnitude greater than those experienced by human beings.

Risk is a quantitative term meaning a rate of some specified outcome. A Dictionary of Epidemiology, for instance, defines risk as:

“The probability of an adverse or beneficial event in a defined population over a specified time interval. In epidemiology and in clinical research it is commonly measured through the cumulative incidence and the incidence proportion.”[4]

An increased risk thus requires a measurement of a rate or probability of an outcome greater than expected in the absence of the exposure of interest. We might be uncertain of the precise measure of the risk, or of an increased risk, but conceptually a risk connotes a rate or a probability that is, at least in theory, measurable.
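The quantitative character of risk, as defined above, reduces to simple arithmetic. A minimal sketch in Python, with hypothetical cohort numbers (the function names and the figures are illustrative, not drawn from any study or monograph):

```python
def incidence_proportion(new_cases: int, population_at_risk: int) -> float:
    """Cumulative incidence: the probability of the outcome in a defined
    population over a specified time interval."""
    return new_cases / population_at_risk

def risk_ratio(exposed_cases: int, exposed_n: int,
               unexposed_cases: int, unexposed_n: int) -> float:
    """Relative risk: the rate in the exposed divided by the rate
    expected in the absence of exposure (the unexposed rate)."""
    return (incidence_proportion(exposed_cases, exposed_n) /
            incidence_proportion(unexposed_cases, unexposed_n))

# Hypothetical ten-year cohort: 30 cases among 1,000 exposed,
# 20 cases among 2,000 unexposed.
rr = risk_ratio(30, 1000, 20, 2000)
print(round(rr, 1))  # 3.0: the exposed rate is triple the expected rate
```

An “increased risk,” in other words, is always a comparison of measured (or at least measurable) rates, which is exactly what a categorical hazard label omits.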

Hazard is a categorical concept; something is, or is not, a hazard without regard to the rate or incidence of harm. The Dictionary of Epidemiology’s definition of “hazard” captures the non-quantitative, categorical nature of an exposure’s being a hazard:

“The inherent capability of a natural or human-made agent or process to adversely affect human life, health, property, or activity, with the potential to cause a disease.”[5]

The International Agency for Research on Cancer (IARC) purports to set out a classification scheme for human cancer hazards. As used by IARC, its classification scheme involves a set of epistemic modal terms: “known,” “probably,” “possibly,” and “indeterminate.” These epistemic modalities characterize the strength of the evidence that an agent is carcinogenic, not the magnitude of any quantitative risk of cancer from exposure at a given level. The IARC Preamble, which attempts to describe the Agency’s methodology, explains that the distinction between hazard and risk is “fundamental”:

“A cancer hazard is an agent that is capable of causing cancer, whereas a cancer risk is an estimate of the probability that cancer will occur given some level of exposure to a cancer hazard. The Monographs assess the strength of evidence that an agent is a cancer hazard. The distinction between hazard and risk is fundamental. The Monographs identify cancer hazards even when risks appear to be low in some exposure scenarios. This is because the exposure may be widespread at low levels, and because exposure levels in many populations are not known or documented.”[6]

This attempted explanation reveals an important problem in IARC’s project, as stated in the Preamble: an unproven assumption that a cancer hazard exists regardless of exposure level. The IARC contemplates that there may be circumstances of low risk from low levels of exposure, but it elides the important issue of thresholds of exposure, below which there is no risk. The Preamble suggests that IARC does not attempt to provide evidence for or against meaningful thresholds, and this failure greatly undermines the project. Exposure circumstances may be such that there is no hazard at all, and so the risk is zero.

The purportedly fundamental distinction between hazard and risk is often blurred by the Agency in the monographs produced by working groups on specific exposure circumstances. Consider for instance how a working group characterized the “hazard” of inhalation of respirable crystalline silica:

“In making the overall evaluation, the Working Group noted that carcinogenicity in humans was not detected in all industrial circumstances studied. Carcinogenicity may be dependent on inherent characteristics of the crystalline silica or on external factors affecting its biological activity or distribution of its polymorphs.

Crystalline silica inhaled in the form of quartz or cristobalite from occupational sources is carcinogenic to humans (Group 1).”[7]

So some IARC classifications actually do specify that an exposure is not a hazard in all circumstances, a qualification implying that in some exposure circumstances the same exposure is not a hazard, and so the risk is zero.

We know something about the deliberations of the crystalline silica working group. The members were deadlocked for some time, and the switch of one vote ultimately gave a bare majority to reclassifying crystalline silica as a Group 1 carcinogen. Here is how working group member Corbett McDonald described the situation:

“The IARC Working Group, in 1997, had considerable difficulty in reaching a decision and might well not have done so had it not been made clear that it was concerned with hazard identification, not risk.”[8]

It was indeed Professor McDonald who changed his vote based upon this linguistic distinction between hazard and risk. His own description of the dataset, however, suggests that the elderly McDonald was railroaded by younger, more strident members of the group:

“Of the many studies reviewed by the Working Group … nine were identified as providing the least confounded evidence. Four studies which we considered positive included two of refractory brick workers, one in the diatomite industry and our own in pottery workers; the five which seemed negative or equivocal included studies of South Dakota gold miners, Danish stone workers, US stone workers and US granite workers. This further example that the truth is seldom pure and never simple underlines the difficulty of establishing a rational control policy for some carcinogenic materials.”[9]

In defense of his vote, McDonald meekly offered that

“[s]ome equally expert panel of scientists presented with the same information on another occasion could of course have reached a different verdict. The evidence was conflicting and difficult to assess and such judgments are essentially subjective.”

Of course, when the evidence is conflicting, it cannot be said to be sufficient. Not only was the epidemiologic evidence conflicting, but so was the whole-animal toxicology, which found a risk of tumors in rats, but not in mice or hamsters.

Aside from endorsing a Group 1 classification for crystalline silica, the working group ignored the purportedly fundamental distinction between hazard and risk by noting that not all exposure circumstances posed a hazard of cancer. The same working group did even greater violence to the supposed distinction in its evaluation of coal dust exposure and human cancer. Coal miners have been studied extensively for cancer risk, and the working group reviewed and evaluated the nature of their exposures and their cancer outcomes. Coal dust virtually always contains crystalline silica, which often makes up a sizable percentage (40% or so) of the total inhaled dust.[10] And yet, when the group considered the cancer rates among coal miners, and in animals, it concluded that there was “inadequate evidence,” both “in humans” and “in experimental animals,” for carcinogenicity. The same working group that agreed, on a divided vote, to place crystalline silica in Group 1, voted that “[c]oal dust cannot be classified as to its carcinogenicity to humans (Group 3).”[11]

The conceptual confusion between hazard and risk is compounded by the IARC’s use of epistemic modalities – known, probably, possibly, and indeterminate – to characterize the existence of a hazard. The Preamble, in Table 4, summarizes the categories and the “stream of evidence” needed to place any particular exposure in one epistemic modal class or another. What is inexplicable is how and why a single substance such as crystalline silica goes from being a known cancer hazard in some unspecified occupational settings to having indeterminate carcinogenicity when it makes up 40% of the inhaled dust in a coal mine.


The conceptual difficulty created by IARC’s fundamental distinction between hazard and risk is that risk may well vary across exposure circumstances, but there is no basis for varying the epistemic modality of the hazard assessment simply because coal dust is only, say, 40% crystalline silica. Some of the exposure circumstances evaluated for the Group 1 silica hazard classification actually involved lower silica content than coal. Granite quarrying, for example, involves exposure to rock dust that is roughly 30% crystalline silica.

The conceptual and epistemic confusion caused by IARC’s treatment of the same substance in different exposure circumstances is hardly unique to crystalline silica and coal dust. Benzene has long been classified as a Group 1 human carcinogen, for its ability to cause a specific form of leukemia.[12] Gasoline contains, on average, about one percent benzene, and so gasoline exposure inevitably involves benzene exposure. And yet, benzene exposure in the form of inhaling gasoline fumes is only a “possible” human carcinogen, Group 2B.[13]

Similarly, in 2018, the IARC classified the evidence for the human carcinogenicity of coffee as “indeterminate,” Group 3.[14] And yet every drop of coffee inevitably contains acrylamide, which is, according to IARC, a Group 2A “probable” human carcinogen.[15] Rent-seeking groups, such as the Council for Education and Research on Toxics (founded by Carl Cranor and Martyn Smith), have shamelessly tried to weaponize the IARC 2A classification for acrylamide by claiming a bounty against coffee sellers such as Starbucks in California Proposition 65 litigation.[16]

Similarly confusing, IARC designates acetaldehyde on its own a “possible” human carcinogen, Group 2B, even though acetaldehyde is invariably produced in the metabolism of ethyl alcohol, which itself is a Group 1 human carcinogen.[17] There may well be other instances of such confusion, and I would welcome examples from readers.

These disparate conclusions strain credulity, and undermine confidence that the hazard-risk distinction does any work at all. Hazard and risk do have different meanings, and I would not want to be viewed as anti-semantic. IARC’s use of the hazard-risk distinction, however, lends itself to the interpretation that hazard is simply risk without the quantification. This usage actually is worse than having no distinction at all, because it ignores the existence of thresholds below which exposure carries no risk, and it ignores routes of exposure and exposure circumstances that carry no risk at all. The vague and unquantified categorical determination that a substance is a hazard allows fearmongers to substitute subjective, emotive, and unscientific judgments for a scientific assessment of carcinogenicity under realistic conditions of use and exposure.


[1] Plato, Theaetetus 155d (Fowler transl. 1921).

[2] Ludwig Wittgenstein, Philosophical Investigations (1953).

[3] See, e.g., Richard M. Rorty, ed., The Linguistic Turn: Essays in Philosophical Method (1992); Nicholas Rescher, Concept Audits: A Philosophical Method (2016); Timothy Williamson, Philosophical Method: A Very Short Introduction 32 (2020) (discussing the need to clarify terms).

[4] Miquel Porta, Sander Greenland, Miguel Hernán, Isabel dos Santos Silva, John M. Last, and Andrea Burón, A Dictionary of Epidemiology 250 (6th ed. 2014).

[5] Id. at 128.

[6] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble (2019) (emphasis added).

[7] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans: Volume 68, Silica, Some Silicates, Coal Dust, and para-Aramid Fibrils 210-211 (1997).

[8] Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer: The Problem of Conflicting Evidence,” 8 Indoor Built Environment 121, 121 (1999).

[9] Id.

[10] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans: Volume 68, Silica, Some Silicates, Coal Dust, and para-Aramid Fibrils 340 (1997).

[11] Id. at 393.

[12] IARC Monograph, Volume 120: Benzene (2018).

[13] IARC Monographs on the Evaluation of Carcinogenic Risks to Humans: Volume 45, Occupational Exposures in Petroleum Refining; Crude Oil and Major Petroleum Fuels 194 (1989).

[14] IARC Monograph No. 116, Drinking Coffee, Mate, and Very Hot Beverages (2018).

[15] IARC Monograph No. 60, Some Industrial Chemicals (1994).

[16] See “Coffee with Cream, Sugar & a Dash of Acrylamide” (June 9, 2018); “The Council for Education and Research on Toxics” (July 9, 2013).

[17] IARC Monographs on the Evaluation of Carcinogenic Risks to Humans Volume 96 1278 (2010).

Consensus is Not Science

November 8th, 2023

Ted Simon, a toxicologist and a fellow board member at the Center for Truth in Science, has posted an intriguing piece in which he labels scientific consensus as a fool’s errand.[1]  Ted begins his piece by channeling the late Michael Crichton, who famously derided consensus in science, in his 2003 Caltech Michelin Lecture:

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science, consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

* * * *

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”[2]

Crichton’s (and Simon’s) critique of consensus is worth remembering in the face of recent proposals by Professor Edward Cheng,[3] and others,[4] to make consensus the touchstone for the admissibility of scientific opinion testimony.

Consensus or general acceptance can be a proxy for conclusions drawn from valid inferences, within reliably applied methodologies, based upon sufficient evidence, quantitatively and qualitatively. When expert witnesses opine contrary to a consensus, they raise serious questions regarding how they came to their conclusions. Carl Sagan declaimed that “extraordinary claims require extraordinary evidence,” but his principle was hardly novel. Some authors quote the French polymath Pierre Simon Marquis de Laplace, who wrote in 1810: “[p]lus un fait est extraordinaire, plus il a besoin d’être appuyé de fortes preuves,”[5] but as the Quote Investigator documents,[6] the basic idea is much older, going back at least another century to a church rector who expressed his skepticism of a contemporary’s claim of direct communication with the almighty: “Sure, these Matters being very extraordinary, will require a very extraordinary Proof.”[7]
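The Sagan-Laplace maxim has a natural Bayesian gloss (my formalization, not theirs): in odds form, posterior odds equal prior odds times the likelihood ratio, so a claim with a very low prior probability needs a very large likelihood ratio, that is, extraordinary evidence, to become credible. A minimal sketch, with illustrative numbers:

```python
def posterior_prob(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# An ordinary claim (prior 0.5) becomes credible on modest evidence
# (likelihood ratio 10); an extraordinary claim (prior 0.001) barely
# moves on the same evidence, and needs far stronger proof.
print(round(posterior_prob(0.5, 10), 3))      # 0.909
print(round(posterior_prob(0.001, 10), 3))    # 0.01
print(round(posterior_prob(0.001, 10000), 3)) # 0.909
```

On this reading, an expert witness opining against a well-founded consensus starts from a low prior, and owes the court correspondingly strong evidence.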

Ted Simon’s essay is also worth consulting because he notes that many sources of apparent consensus are really faux consensus, nothing more than self-appointed intellectual authoritarians who systematically have excluded some points of view, while turning a blind eye to their own positional conflicts.

Lawyers, courts, and academics should be concerned that Cheng’s “consensus principle” will change the focus from evidence, methodology, and inference, to a surrogate or proxy for validity. And the sociological notion of consensus will then require litigation of whether some group really has announced a consensus. Consensus statements in some areas abound, but inquiring minds may want to know whether they are the result of rigorous, systematic reviews of the pertinent studies, and whether the available studies can support the claimed consensus.

Professor Cheng is hard at work on a book-length explication of his proposal, and some criticism will have to await that event.[8] Perhaps Cheng will overcome the objections placed against his proposal.[9] Some of the examples Professor Cheng has given, however, do not inspire confidence, such as his dramatic misreading of the American Statistical Association’s 2016 p-value consensus statement as representing, in Cheng’s words:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[10]

The 2016 Statement said no such thing, although a few statisticians attempted to distort the statement in the way that Cheng suggests. In 2021, a select committee of leading statisticians, appointed by the President of the ASA, issued a statement to make clear that the ASA had not embraced the Cheng misinterpretation.[11] This one example alone does not bode well for the viability of Cheng’s consensus principle.
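For readers unfamiliar with the convention at issue: the 0.05 threshold is a customary cutoff applied to a computed p-value, not a statistical law, and the ASA statement urged context in its use rather than its abolition. A minimal sketch of the computation and the comparison (the normal approximation and the numbers are my own illustration, not from the ASA statement):

```python
import math

def two_sided_p_value(z: float) -> float:
    """Two-sided p-value for a standard-normal (z) test statistic,
    computed from the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# A test statistic of z = 1.96 sits right at the conventional boundary:
# p is just under 0.05, so the result is "statistically significant"
# by the customary threshold, but only barely.
p = two_sided_p_value(1.96)
print(round(p, 4))  # 0.05
print(p < 0.05)     # True
```

The point of the 2016 statement was that such a bright-line comparison should not substitute for judgment, not that the threshold “distorts” more than it helps.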


[1] Ted Simon, “Scientific consensus is a fool’s errand made worse by IARC” (Oct. 2023).

[2] Michael Crichton, “Aliens Cause Global Warming,” Caltech Michelin Lecture (Jan. 17, 2003).

[3] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [cited below as Consensus Rule].

[4] See Norman J. Shachoy Symposium, The Consensus Rule: A New Approach to the Admissibility of Scientific Evidence (2022), 67 Villanova L. Rev. (2022); David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2022); Harry Collins, “The Owls: Some Difficulties in Judging Scientific Consensus,” 67 Villanova L. Rev. 877 (2022); Robert Evans, “The Consensus Rule: Judges, Jurors, and Admissibility Hearings,” 67 Villanova L. Rev. 883 (2022); Martin Weinel, “The Adversity of Adversarialism: How the Consensus Rule Reproduces the Expert Paradox,” 67 Villanova L. Rev. 893 (2022); Wendy Wagner, “The Consensus Rule: Lessons from the Regulatory World,” 67 Villanova L. Rev. 907 (2022); Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022).

[5] Pierre-Simon Laplace, Théorie analytique des probabilités (1812) (“The more extraordinary a fact, the more it needs to be supported by strong proofs.”). See Tressoldi, “Extraordinary Claims Require Extraordinary Evidence: The Case of Non-Local Perception, a Classical and Bayesian Review of Evidences,” 2 Frontiers Psych. 117 (2011); Charles Coulston Gillispie, Pierre-Simon Laplace, 1749-1827: a life in exact science (1997).

[6] “Extraordinary Claims Require Extraordinary Evidence” (Dec. 5, 2021).

[7] Benjamin Bayly, An Essay on Inspiration 362, part 2 (2nd ed. 1708).

[8] The Consensus Principle, under contract with the University of Chicago Press.

[9] See “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022); “Further Thoughts on Cheng’s Consensus Rule” (Oct. 3, 2022); “Consensus Rule – Shadows of Validity” (Apr. 26, 2023).

[10] Consensus Rule at 424 (citing but not quoting Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[11] Yoav Benjamini, Richard D. DeVeaux, Bradly Efron, Scott Evans, Mark Glickman, Barry Braubard, Xuming He, Xiao Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

Just Dissertations

October 27th, 2023

One of my childhood joys was roaming the stacks of libraries and browsing for arcane learning stored in aging books. Often, I had no particular goal in my roaming, and I flitted from topic to topic. Occasionally, however, I came across useful learning. It was in one college library, for instance, that I discovered the process for making nitrogen tri-iodide, which provided me with some simple-minded amusement for years. (I only narrowly avoided detection by Dean Brownlee for a prank involving NI3 in chemistry lab.)

Nowadays, most old books are off limits to the casual library visitor, but digital archives can satisfy my occasional compulsion to browse what is new and compelling in the world of research on topics of interest. And there can be no better source for new and topical research than browsing dissertations and theses, which are usually required to break new ground in scholarly research and debate. There are several online search tools for dissertations, such as ProQuest, EBSCO Open Dissertation, Theses and Dissertations, WorldCat Dissertations and Theses, Open Access Theses and Dissertations, and Yale Library Resources to Find Dissertations.

Some universities generously share the scholarship of their graduate students online, and there are some great gems freely available.[1] Other universities provide a catalogue of their students’ dissertations, the titles of which can be browsed and the texts of which can be downloaded. For lawyers interested in medico-legal issues, the London School of Hygiene & Tropical Medicine has a website, “LSHTM Research Online,” which is a delightful place to browse on a rainy afternoon, and which features a free, open access repository of research. Most of the publications are dissertations, some 1,287 at present, on various medical and epidemiologic topics, from 1938 to the present.

The prominence of the London School of Hygiene & Tropical Medicine makes its historical research germane to medico-legal issues such as “state of the art,” notice, priority, knowledge, and intellectual provenance. A 1959 dissertation by J. D. Walters, a Surgeon Lieutenant in the Royal Navy, is included in the repository.[2] Walters’ dissertation is a treasure trove for the state-of-the-art case – who knew what when – about asbestos health hazards, written before litigation distorted perspectives on the matter. Walters’ dissertation shows in contemporaneous scholarship, not hindsight second-guessing, that Sir Richard Doll’s 1955 study, flawed as it was by contemporaneous standards, was seen as establishing an association between asbestosis (not asbestos exposure) and lung cancer. Walters’ careful assessment of how asbestos was actually used in British dockyards documents the differences between British and American product use. The British dockyards had employed full-time laggers since 1946, and they used spray asbestos, asbestos (amosite and crocidolite) mattresses, as well as lower asbestos content insulation.

Walters reported cases of asbestosis among the laggers. Written four years before Irving Selikoff published on an asbestosis hazard among laggers, the predominant end-users of asbestos-containing insulation, Walters’ dissertation preempts Selikoff’s claim of priority in identifying the asbestos hazard, and it shows that large employers, such as the Royal Navy, and the United States Navy, were well aware of asbestos hazards before companies began placing warning labels. Like Selikoff, Walters typically had no information about worker compliance with safety regulations, such as respirator use. Walters emphasized the need for industrial medical officers to be aware of the asbestosis hazard, and the means to prevent it. Noticeably absent was any suggestion that a warning label on bags of asbestos or boxes of pre-fabricated insulation was relevant to the medical officer’s work in controlling the hazard.

Among the litigation-relevant finds in the repository is the doctoral thesis of Francis Douglas Kelly Liddell,[3] on the mortality of the Quebec chrysotile workers, with most of the underlying data.[4] A dissertation by Keith Richard Sullivan reported on the mortality patterns of civilian workers at Royal Navy dockyards in England.[5] Sullivan found no increased risk of lung cancer, although excesses of asbestosis and mesothelioma occurred at all dockyards. A critical look at meta-analyses of formaldehyde and cancer outcomes in one dissertation shows prevalent biases in the available studies, and insufficient evidence of causation.[6]

Some of the other interesting dissertations with historical medico-legal relevance are:

Francis, The evaluation of small airway disease in the human lung with special reference to tests which are suitable for epidemiological screening; PhD thesis, London School of Hygiene & Tropical Medicine (1978) DOI: https://doi.org/10.17037/PUBS.04655290

Gillian Mary Regan, A Study of pulmonary function in asbestosis, PhD thesis, London School of Hygiene & Tropical Medicine (1977) DOI: https://doi.org/10.17037/PUBS.04655127

Christopher J. Sirrs, Health and Safety in the British Regulatory State, 1961-2001: the HSC, HSE and the Management of Occupational Risk, PhD thesis, London School of Hygiene & Tropical Medicine (2016) DOI: https://doi.org/10.17037/PUBS.02548737

Michael Etrata Rañopa, Methodological issues in electronic healthcare database studies of drug cancer associations: identification of cancer, and drivers of discrepant results, PhD thesis, London School of Hygiene & Tropical Medicine (2016). DOI: https://doi.org/10.17037/PUBS.02572609

Melanie Smuk, Missing Data Methodology: Sensitivity analysis after multiple imputation, PhD thesis, London School of Hygiene & Tropical Medicine (2015) DOI: https://doi.org/10.17037/PUBS.02212896

John Ross Tazare, High-dimensional propensity scores for data-driven confounder adjustment in UK electronic health records, PhD thesis, London School of Hygiene & Tropical Medicine (2022). DOI: https://doi.org/10.17037/PUBS.046647276/

Rebecca Jane Hardy, Meta-analysis techniques in medical research: a statistical perspective, PhD thesis, London School of Hygiene & Tropical Medicine (1995) DOI: https://doi.org/10.17037/PUBS.00682268

Jemma Walker, Bayesian modelling in genetic association studies, PhD thesis, London School of Hygiene & Tropical Medicine (2012) DOI: https://doi.org/10.17037/PUBS.01635516

Marieke Schoonen, Pharmacoepidemiology of autoimmune diseases, PhD thesis, London School of Hygiene & Tropical Medicine (2007) DOI: https://doi.org/10.17037/PUBS.04646551

Claudio John Verzilli, Method for the analysis of incomplete longitudinal data, PhD thesis, London School of Hygiene & Tropical Medicine (2003)  DOI: https://doi.org/10.17037/PUBS.04646517

Martine Vrijheid, Risk of congenital anomaly in relation to residence near hazardous waste landfill sites, PhD thesis, London School of Hygiene & Tropical Medicine (2000) DOI: https://doi.org/10.17037/PUBS.00682274


[1] See, e.g., Benjamin Nathan Schachtman, Traumedy: Dark Comedic Negotiations of Trauma in Contemporary American Literature (2016).

[2] J.D. Walters, Asbestos – a potential hazard to health in the ship building and ship repairing industries, DrPH thesis, London School of Hygiene & Tropical Medicine (1959); https://doi.org/10.17037/PUBS.01273049.

[3]The Lobby – Cut on the Bias” (July 6, 2020).

[4] Francis Douglas Kelly Liddell, Mortality of Quebec chrysotile workers in relation to radiological findings while still employed, PhD thesis, London School of Hygiene & Tropical Medicine (1978); DOI: https://doi.org/10.17037/PUBS.04656049

[5] Keith Richard Sullivan, Mortality patterns among civilian workers in Royal Navy Dockyards, PhD thesis, London School of Hygiene & Tropical Medicine (1994) DOI: https://doi.org/10.17037/PUBS.04656717

[6] Damien Martin McElvenny, Meta-analysis of Rare Diseases in Occupational Epidemiology, PhD thesis, London School of Hygiene & Tropical Medicine (2017) DOI: https://doi.org/10.17037/PUBS.03894558

Science & the Law – from the Proceedings of the National Academies of Science

October 5th, 2023

The current issue of the Proceedings of the National Academy of Sciences (PNAS) features a medley of articles on science generally, and forensic science, in the law.[1] The general editor of the compilation appears to be editorial board member Thomas D. Albright, the Conrad T. Prebys Professor of Vision Research at the Salk Institute for Biological Studies.

I have not had time to plow through the set of offerings, but even a superficial inspection reveals that the articles will be of interest to lawyers and judges involved in the litigation of scientific issues. The authors seem to agree that, descriptively and prescriptively, validity is more important than expertise in the legal consideration of scientific evidence.

1. Thomas D. Albright, “A scientist’s take on scientific evidence in the courtroom,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Albright’s essay was edited by Henry Roediger, a psychologist at the Washington University in St. Louis.

Abstract

Scientific evidence is frequently offered to answer questions of fact in a court of law. DNA genotyping may link a suspect to a homicide. Receptor binding assays and behavioral toxicology may testify to the teratogenic effects of bug repellant. As for any use of science to inform fateful decisions, the immediate question raised is one of credibility: Is the evidence a product of valid methods? Are results accurate and reproducible? While the rigorous criteria of modern science seem a natural model for this evaluation, there are features unique to the courtroom that make the decision process scarcely recognizable by normal standards of scientific investigation. First, much science lies beyond the ken of those who must decide; outside “experts” must be called upon to advise. Second, questions of fact demand immediate resolution; decisions must be based on the science of the day. Third, in contrast to the generative adversarial process of scientific investigation, which yields successive approximations to the truth, the truth-seeking strategy of American courts is terminally adversarial, which risks fracturing knowledge along lines of discord. Wary of threats to credibility, courts have adopted formal rules for determining whether scientific testimony is trustworthy. Here, I consider the effectiveness of these rules and explore tension between the scientists’ ideal that momentous decisions should be based upon the highest standards of evidence and the practical reality that those standards are difficult to meet. Justice lies in carefully crafted compromise that benefits from robust bonds between science and law.

2. Thomas D. Albright, David Baltimore & Anne-Marie Mazza, “Science, evidence, law, and justice,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Professor Baltimore is a Nobel laureate and researcher in biology, now at the California Institute of Technology. Anne-Marie Mazza is the director of the Committee on Science, Technology, and Law, of the National Academies of Sciences, Engineering, and Medicine. Jennifer Mnookin is the chancellor of the University of Wisconsin, Madison; previously, she was the dean of the UCLA School of Law. Judge Tatel is a federal judge on the United States Court of Appeals for the District of Columbia Circuit.

Abstract

For nearly 25 y, the Committee on Science, Technology, and Law (CSTL), of the National Academies of Sciences, Engineering, and Medicine, has brought together distinguished members of the science and law communities to stimulate discussions that would lead to a better understanding of the role of science in legal decisions and government policies and to a better understanding of the legal and regulatory frameworks that govern the conduct of science. Under the leadership of recent CSTL co-chairs David Baltimore and David Tatel, and CSTL director Anne-Marie Mazza, the committee has overseen many interdisciplinary discussions and workshops, such as the international summits on human genome editing and the science of implicit bias, and has delivered advisory consensus reports focusing on topics of broad societal importance, such as dual use research in the life sciences, voting systems, and advances in neural science research using organoids and chimeras. One of the most influential CSTL activities concerns the use of forensic evidence by law enforcement and the courts, with emphasis on the scientific validity of forensic methods and the role of forensic testimony in bringing about justice. As coeditors of this Special Feature, CSTL alumni Tom Albright and Jennifer Mnookin have recruited articles at the intersection of science and law that reveal an emerging scientific revolution of forensic practice, which we hope will engage a broad community of scientists, legal scholars, and members of the public with interest in science-based legal policy and justice reform.

3. Nicholas Scurich, David L. Faigman, and Thomas D. Albright, “Scientific guidelines for evaluating the validity of forensic feature-comparison methods,” 120 Proceedings of the National Academy of Sciences (2023).

Nicholas Scurich is the chair of the Department of Psychological Science at the University of California, Irvine. David Faigman has written prolifically about science in the law; he is now the chancellor and dean of the University of California College of the Law, San Francisco.

Abstract

When it comes to questions of fact in a legal context—particularly questions about measurement, association, and causality—courts should employ ordinary standards of applied science. Applied sciences generally develop along a path that proceeds from a basic scientific discovery about some natural process to the formation of a theory of how the process works and what causes it to fail, to the development of an invention intended to assess, repair, or improve the process, to the specification of predictions of the instrument’s actions and, finally, empirical validation to determine that the instrument achieves the intended effect. These elements are salient and deeply embedded in the cultures of the applied sciences of medicine and engineering, both of which primarily grew from basic sciences. However, the inventions that underlie most forensic science disciplines have few roots in basic science, and they do not have sound theories to justify their predicted actions or results of empirical tests to prove that they work as advertised. Inspired by the “Bradford Hill Guidelines”—the dominant framework for causal inference in epidemiology—we set forth four guidelines that can be used to establish the validity of forensic comparison methods generally. This framework is not intended as a checklist establishing a threshold of minimum validity, as no magic formula determines when particular disciplines or hypotheses have passed a necessary threshold. We illustrate how these guidelines can be applied by considering the discipline of firearm and tool mark examination.

4. Peter Stout, “The secret life of crime labs,” 120 Proceedings of the National Academy of Sciences e2303592120 (2023).

Peter Stout is a scientist with the Houston Forensic Science Center, in Houston, Texas. The Center describes itself as “an independent local government corporation,” which provides forensic “services” to the Houston police.

Abstract

Houston TX experienced a widely known failure of its police forensic laboratory. This gave rise to the Houston Forensic Science Center (HFSC) as a separate entity to provide forensic services to the City of Houston. HFSC is a very large forensic laboratory and has made significant progress at remediating the past failures and improving public trust in forensic testing. HFSC has a large and robust blind testing program, which has provided many insights into the challenges forensic laboratories face. HFSC’s journey from a notoriously failed lab to a model also gives perspective to the resource challenges faced by all labs in the country. Challenges for labs include the pervasive reality of poor-quality evidence. Also that forensic laboratories are necessarily part of a much wider system of interdependent functions in criminal justice making blind testing something in which all parts have a role. This interconnectedness also highlights the need for an array of oversight and regulatory frameworks to function properly. The major essential databases in forensics need to be a part of blind testing programs and work is needed to ensure that the results from these databases are indeed producing correct results and those results are being correctly used. Last, laboratory reports of “inconclusive” results are a significant challenge for laboratories and the system to better understand when these results are appropriate, necessary and most importantly correctly used by the rest of the system.

5. Brandon L. Garrett & Cynthia Rudin, “Interpretable algorithmic forensics,” 120 Proceedings of the National Academy of Sciences e2301842120 (2023).

Garrett teaches at the Duke University School of Law. Rudin teaches statistics at Duke University.

Abstract

One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or they simply conceal how it functions. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch 22: While black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.

6. Jed S. Rakoff & Goodwin Liu, “Forensic science: A judicial perspective,” 120 Proceedings of the National Academy of Sciences e2301838120 (2023).

Judge Rakoff has written previously on forensic evidence. He is a federal district court judge in the Southern District of New York. Goodwin Liu is a justice on the California Supreme Court. Their article was edited by Professor Mnookin.

Abstract

This article describes three major developments in forensic evidence and the use of such evidence in the courts. The first development is the advent of DNA profiling, a scientific technique for identifying and distinguishing among individuals to a high degree of probability. While DNA evidence has been used to prove guilt, it has also demonstrated that many individuals have been wrongly convicted on the basis of other forensic evidence that turned out to be unreliable. The second development is the US Supreme Court precedent requiring judges to carefully scrutinize the reliability of scientific evidence in determining whether it may be admitted in a jury trial. The third development is the publication of a formidable National Academy of Sciences report questioning the scientific validity of a wide range of forensic techniques. The article explains that, although one might expect these developments to have had a major impact on the decisions of trial judges whether to admit forensic science into evidence, in fact, the response of judges has been, and continues to be, decidedly mixed.

7. Jonathan J. Koehler, Jennifer L. Mnookin, and Michael J. Saks, “The scientific reinvention of forensic science,” 120 Proceedings of the National Academy of Sciences e2301840120 (2023).

Koehler is a professor of law at the Northwestern Pritzker School of Law. Saks is a professor of psychology at Arizona State University, and Regents Professor of Law, at the Sandra Day O’Connor College of Law.

Abstract

Forensic science is undergoing an evolution in which a long-standing “trust the examiner” focus is being replaced by a “trust the scientific method” focus. This shift, which is in progress and still partial, is critical to ensure that the legal system uses forensic information in an accurate and valid way. In this Perspective, we discuss the ways in which the move to a more empirically grounded scientific culture for the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of forensic results. However, we caution that the ultimate success of this scientific reinvention likely depends on whether the courts begin to engage with forensic science claims in a more rigorous way.

8. William C. Thompson, “Shifting decision thresholds can undermine the probative value and legal utility of forensic pattern-matching evidence,” 120 Proceedings of the National Academy of Sciences e2301844120 (2023).

Thompson is professor emeritus in the Department of Criminology, Law & Society, University of California, Irvine.

Abstract

Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection theory to model examiners’ reported conclusions (e.g., identification, inconclusive, or exclusion), focusing on the connection between the examiner’s decision threshold and the probative value of the forensic evidence. It uses a Bayesian network model to explore how shifts in decision thresholds may affect rates and ratios of true and false convictions in a hypothetical legal system. It demonstrates that small shifts in decision thresholds, which may arise from contextual bias, can dramatically affect the value of forensic pattern-matching evidence and its utility in the legal system.

9. Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, Laura M. Stevens, Harriet M. J. Smith, Tobias Staudigl & Heather D. Flowe, “Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discriminability,” 120 Proceedings of the National Academy of Sciences e2301845120 (2023).

Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, and Heather D. Flowe are psychologists at the School of Psychology, University of Birmingham (United Kingdom). Harriet M. J. Smith is a psychologist in the School of Psychology, Nottingham Trent University, Nottingham, United Kingdom, and Tobias Staudigl is a psychologist in the Department of Psychology, Ludwig-Maximilians-Universität München, in Munich, Germany.

Abstract

Accurate witness identification is a cornerstone of police inquiries and national security investigations. However, witnesses can make errors. We experimentally tested whether an interactive lineup, a recently introduced procedure that enables witnesses to dynamically view and explore faces from different angles, improves the rate at which witnesses identify guilty over innocent suspects compared to procedures traditionally used by law enforcement. Participants encoded 12 target faces, either from the front or in profile view, and then attempted to identify the targets from 12 lineups, half of which were target present and the other half target absent. Participants were randomly assigned to a lineup condition: simultaneous interactive, simultaneous photo, or sequential video. In the front-encoding and profile-encoding conditions, Receiver Operating Characteristics analysis indicated that discriminability was higher in interactive compared to both photo and video lineups, demonstrating the benefit of actively exploring the lineup members’ faces. Signal-detection modeling suggested interactive lineups increase discriminability because they afford the witness the opportunity to view more diagnostic features such that the nondiagnostic features play a proportionally lesser role. These findings suggest that eyewitness errors can be reduced using interactive lineups because they create retrieval conditions that enable witnesses to actively explore faces and more effectively sample features.


[1] 120 Proceedings of the National Academy of Sciences (Oct. 10, 2023).

The IARC-hy of Evidence – Incoherent & Inconsistent Classifications of Carcinogenicity

September 19th, 2023

Recently, two lawyers wrote an article in a legal trade magazine about excluding epidemiologic evidence in civil litigation.[1] The article was wildly wide of the mark, with several conceptual and practical errors.[2] For starters, the authors discussed Rule 702 as excluding epidemiologic studies and evidence, when the rule addresses the admissibility of expert witness opinion testimony. The authors’ most egregious recommendation, however, was that counsel urge courts to treat the classifications of chemicals with respect to carcinogenicity, by the International Agency for Research on Cancer (IARC) and by regulatory agencies, as probative for or against causation.

The project of evaluating the evidence for, or against, carcinogenicity of the myriad natural and synthetic agents to which humans are exposed is certainly important. Certainly, IARC has taken the project seriously. There have, however, been problems with IARC’s classifications of specific chemicals, pharmaceuticals, or exposure circumstances, but a basic problem with the classifications begins with the classes themselves. Classification requires defined classes. I don’t mean to be anti-semantic, but IARC’s definitions and its hierarchy of carcinogenicity are not entirely coherent.

The agency was established in 1965, and by the early 1970s, found itself in the business of preparing “monographs on the evaluation of carcinogenic risk of chemicals to man.” Originally, the IARC set out to classify the carcinogenicity of chemicals, but over the years, its scope increased to include complex mixtures, physical agents such as different forms of radiation, and biological organisms. To date, there have been 134 IARC monographs, addressing 1,045 “agents” (either substances or exposure circumstances).

From its beginnings, the IARC has conducted its classifications through working groups that meet to review and evaluate evidence, and classify the cancer hazards of “agents” under discussion. The breakdown of IARC’s classifications among four groups currently is:

Group 1 – Carcinogenic to humans (127 agents)

Group 2A – Probably carcinogenic to humans (95 agents)

Group 2B – Possibly carcinogenic to humans (323 agents)

Group 3 – Not classifiable as to its carcinogenicity to humans (500 agents)

Previously, the IARC classification included a Group 4, for agents that are probably not carcinogenic to human beings. After decades of review, the IARC placed only a single agent, caprolactam, in Group 4, apparently because the agency found everything else in the world to be presumptively a cause of cancer. The IARC could not find sufficiently strong evidence even for water, air, or basic foods to declare that they do not cause cancer in humans. Ultimately, the IARC abandoned Group 4, in favor of a presumption of universal carcinogenicity.

The IARC describes its carcinogen classification procedures, requirements, and rationales in a document known as “The Preamble.” Any discussion of IARC classifications, whether in scientific publications or in legal briefs, without reference to this document should be suspect. The Preamble seeks to define many of the words in the classificatory scheme, some in ways that are not intuitive. This document has been amended over time, and the most recent iteration can be found online at the IARC website.[3]

IARC claims to build its classifications upon “consensus” evaluations, based in turn upon considerations of

(a) the strength of evidence of carcinogenicity in humans,

(b) the evidence of carcinogenicity in experimental (non-human) animals, and

(c) the mechanistic evidence of carcinogenicity.

IARC further claims that its evaluations turn on the use of “transparent criteria and descriptive terms.”[4] This last claim, for some terms, is falsifiable.

The working groups are described as engaged in consensus evaluations, although past evaluations have been reached on a simple majority vote of the working group. The working groups are charged with considering the three lines of evidence, described above, for any given agent, and reaching a synthesis in the form of the IARC classificatory scheme. The chart below, from the Preamble, roughly describes how working groups may “mix and match” lines of evidence of varying degrees of robustness and validity (vel non) to reach a classification.

[Table 4, IARC Preamble]

Agents placed in Group 1 are thus “carcinogenic to humans.” Interestingly, IARC does not refer to Group 1 carcinogens as “known” carcinogens, although many commentators are prone to do so. The implication of calling Group 1 agents “known carcinogens” is to distinguish Groups 2A, 2B, and 3 as agents “not known to cause cancer.” The adjective that IARC uses, rather than “known,” is “sufficient” evidence in humans, but IARC also allows an agent to reach Group 1 with “limited,” or even “inadequate,” human evidence, if the other lines of evidence (in experimental animals, or mechanistic evidence in humans) are sufficient.

In describing “sufficient” evidence, the IARC’s Preamble does not refer to epidemiologic evidence as potentially “conclusive” or “definitive”; rather its use of “sufficient” implies, perhaps non-transparently, that its labels of “limited” or “inadequate” evidence in humans refer to insufficient evidence. IARC gives an unscientific, inflated weight and understanding to “limited evidence of carcinogenicity,” by telling us that

“[a] causal interpretation of the positive association observed in the body of evidence on exposure to the agent and cancer is credible, but chance, bias, or confounding could not be ruled out with reasonable confidence.”[5]

Remarkably, for IARC, credible interpretations of causality can be based upon evidentiary displays that are confounded or biased.  In other words, non-credible associations may support IARC’s conclusions of causality. Causal interpretations of epidemiologic evidence are “credible” according to IARC, even though Sir Austin’s predicate of a valid association is absent.[6]

The IARC studiously avoids, however, noting that any classification is based upon “insufficient” evidence, even though that evidence may be less than sufficient, as in “limited” or “inadequate.” A close look at Table 4 reveals that some Group 1 classifications, and all Group 2A, 2B, and 3 classifications, are based upon insufficient evidence of carcinogenicity in humans.

Non-Probable Probabilities

The classification immediately below Group 1 is Group 2A, for agents “probably carcinogenic to humans.” The IARC’s use of “probably” is problematic. Group 1 carcinogens require only “sufficient” evidence of human carcinogenicity, and there is no suggestion that any aspect of a Group 1 evaluation requires apodictic, conclusive, or even “definitive” evidence. Accordingly, the determination of Group 1 carcinogens will be based upon evidence that is essentially probabilistic. Group 2A is also defined as having only “limited evidence of carcinogenicity in humans”; in other words, insufficient evidence of carcinogenicity in humans, or epidemiologic studies with uncontrolled confounding and biases.

Importing IARC 2A classifications into legal or regulatory arenas will allow judgments or regulations based upon “limited evidence” in humans, which as we have seen, can be based upon inconsistent observational studies, and studies that fail to measure and adjust for known and potential confounding risk factors and systematic biases. The 2A classification thus requires little substantively or semantically, and many 2A classifications leave juries and judges to determine whether a chemical or medication caused a human being’s cancer, when the basic predicates for Sir Austin Bradford Hill’s factors for causal judgment have not been met.[7]

An IARC evaluation of Group 2A, or “probably carcinogenic to humans,” would seem to satisfy the legal system’s requirement that an exposure to the agent of interest more likely than not causes the harm in question. Appearances and word usage in different contexts, however, can be deceiving. Probability is a continuous quantitative scale from zero to one. In Bayesian analyses, zero and one are unavailable because if either were our starting point, no amount of evidence could ever change our judgment of the probability of causation. (Cromwell’s Rule). The IARC informs us that its use of “probably” is purely idiosyncratic; the probability that a Group 2A agent causes cancer has “no quantitative” meaning. All the IARC intends is that a Group 2A classification “signifies a greater strength of evidence than possibly carcinogenic.”[8] Group 2A classifications are thus consistent with having posterior probabilities less than 0.5 (or 50 percent). A working group could judge the probability of a substance or a process to be carcinogenic to humans to be greater than zero, but no more than say ten percent, and still vote for a 2A classification, in keeping with the IARC Preamble. This low probability threshold for a 2A classification converts the judgment of “probably carcinogenic” into little more than precautionary prescriptions, rendered when the most probable assessment is either ignorance or lack of causality. There is thus a practical certainty, close to 100%, that a 2A classification will confuse judges and juries, as well as the scientific community.
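The Bayesian point about zero and one, known as Cromwell’s Rule, can be illustrated with a few lines of arithmetic. The sketch below is mine, not IARC’s; the likelihood ratios and the ten-percent prior are hypothetical numbers chosen only for illustration.

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability from a prior probability and a likelihood ratio,
    using: posterior odds = prior odds * likelihood ratio."""
    numerator = prior * likelihood_ratio
    return numerator / (numerator + (1.0 - prior))

# Cromwell's Rule: a prior of exactly 0 or 1 is immune to any evidence.
print(bayes_update(0.0, 1000.0))  # 0.0 -- no evidence can move a zero prior
print(bayes_update(1.0, 0.001))   # 1.0 -- nor a prior of one

# A hypothetical working group judging a 10% probability of carcinogenicity
# could still vote for a 2A ("probably carcinogenic") classification,
# since IARC assigns the word "probably" no quantitative meaning.
print(round(bayes_update(0.10, 10.0), 3))  # 0.526
```

The first two calls show why zero and one are unavailable as starting points: no likelihood ratio, however extreme, can move them. The last call shows that even a low prior, updated on strong new evidence, barely clears one half, while IARC’s 2A label demands no such threshold at all.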

In addition to being based upon limited, that is insufficient, evidence of human carcinogenicity, Group 2A evaluations of “probable human carcinogenicity” connote “sufficient evidence” in experimental animals. An agent can be classified 2A even when the sufficient evidence of carcinogenicity occurs in only one of several non-human animal species, with the other animal species failing to show carcinogenicity. IARC 2A classifications can thus raise the thorny question in court whether a claimant is more like a rat or a mouse.

Courts should, because of the incoherent and diluted criteria for “probably carcinogenic,” exclude expert witness opinions based upon IARC 2A classifications as scientifically insufficient.[9] Given the distortion of ordinary language in its use of defined terms such as “sufficient,” “limited,” and “probable,” any evidentiary value to IARC 2A classifications, and expert witness opinion based thereon, is “substantially outweighed by a danger of … unfair prejudice, confusing the issues, [and] misleading the jury….”[10]

Everything is Possible

Group 2B denotes “possibly carcinogenic.” This year, the IARC announced that a working group had concluded that aspartame, an artificial sweetener, was “possibly carcinogenic.”[11] Such an evaluation, however, tells us nothing. If there are no studies at all of an agent, the agent could be said to be possibly carcinogenic. If there are inconsistent studies, even if the better designed studies are exculpatory, scientists could still say that the agent of interest was possibly carcinogenic. The 2B classification does not tell us anything because everything is possible until there is sufficient evidence to inculpate or exculpate an agent as a cause of cancer in humans.

It’s a Hazard, Not a Risk

IARC’s classification does not include an assessment of exposure levels. Consequently, there is no consideration of the dose or exposure level at which an agent becomes carcinogenic. IARC’s evaluations are limited to whether the agent is or is not carcinogenic. The IARC explicitly concedes that exposure to a carcinogenic agent may carry little risk, but it cannot bring itself to say no risk, or even a benefit, at low exposures.

As noted, the IARC classification scheme refers to the strength of the evidence that an agent is carcinogenic, and not to the quantitative risk of cancer from exposure at a given level. The Preamble explains the distinction as fundamental:

“A cancer hazard is an agent that is capable of causing cancer, whereas a cancer risk is an estimate of the probability that cancer will occur given some level of exposure to a cancer hazard. The Monographs assess the strength of evidence that an agent is a cancer hazard. The distinction between hazard and risk is fundamental. The Monographs identify cancer hazards even when risks appear to be low in some exposure scenarios. This is because the exposure may be widespread at low levels, and because exposure levels in many populations are not known or documented.”[12]

This attempted explanation reveals important aspects of IARC’s project. First, there is an unproven assumption that there will be cancer hazards regardless of the exposure levels. The IARC contemplates that there may be circumstances of low levels of risk from low levels of exposure, but it elides the important issue of thresholds. Second, IARC’s distinction between hazard and risk is obscured by its own classifications. For instance, when IARC evaluated crystalline silica and classified it in Group 1, it did so only for “occupational exposures.”[13] And yet, when IARC evaluated the hazard of coal exposure, it placed coal dust in Group 3, even though coal dust contains crystalline silica.[14] Similarly, in 2018, the IARC classified coffee in Group 3,[15] even though every drop of coffee contains acrylamide, which is, according to IARC, a Group 2A “probable human carcinogen.”[16]


[1] Christian W. Castile & Stephen J. McConnell, “Excluding Epidemiological Evidence Under FRE 702,” For The Defense 18 (June 2023) [Castile].

[2] “Excluding Epidemiologic Evidence Under Federal Rule of Evidence 702” (Aug. 26, 2023).

[3] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble (2019).

[4] Jonathan M. Samet, Weihsueh A. Chiu, Vincent Cogliano, Jennifer Jinot, David Kriebel, Ruth M. Lunn, Frederick A. Beland, Lisa Bero, Patience Browne, Lin Fritschi, Jun Kanno, Dirk W. Lachenmeier, Qing Lan, Gerard Lasfargues, Frank Le Curieux, Susan Peters, Pamela Shubat, Hideko Sone, Mary C. White, Jon Williamson, Marianna Yakubovskaya, Jack Siemiatycki, Paul A. White, Kathryn Z. Guyton, Mary K. Schubauer-Berigan, Amy L. Hall, Yann Grosse, Veronique Bouvard, Lamia Benbrahim-Tallaa, Fatiha El Ghissassi, Beatrice Lauby-Secretan, Bruce Armstrong, Rodolfo Saracci, Jiri Zavadil, Kurt Straif, and Christopher P. Wild, “The IARC Monographs: Updated Procedures for Modern and Transparent Evidence Synthesis in Cancer Hazard Identification,” 112 J. Nat’l Cancer Inst. djz169 (2020).

[5] Preamble at 31.

[6] See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965) (noting that only when “[o]ur observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” do we move on to consider the nine articulated factors for determining whether an association is causal).

[7] Id.

[8] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble 31 (2019) (“The terms probably carcinogenic and possibly carcinogenic have no quantitative significance and are used as descriptors of different strengths of evidence of carcinogenicity in humans.”).

[9] See “Is the IARC lost in the weeds” (Nov. 30, 2019); “Good Night Styrene” (Apr. 18, 2019).

[10] Fed. R. Evid. 403.

[11] Elio Riboli, et al., “Carcinogenicity of aspartame, methyleugenol, and isoeugenol,” 24 The Lancet Oncology 848–50 (2023); IARC, “Aspartame hazard and risk assessment results released” (2023).

[12] Preamble at 2.

[13] IARC Monograph 68, at 41 (1997) (“For these reasons, the Working Group therefore concluded that overall the epidemiological findings support increased lung cancer risks from inhaled crystalline silica (quartz and cristobalite) resulting from occupational exposure.”).

[14] IARC Monograph 68, at 337 (1997).

[15] IARC Monograph No. 116, Drinking Coffee, Mate, and Very Hot Beverages (2018).

[16] IARC Monograph no. 60, Some Industrial Chemicals (1994).

PLPs & Five-Legged Dogs

September 1st, 2023

All lawyers have heard the puzzle of “how many legs does a dog have if you call his tail a leg?” The puzzle is often misattributed to Abraham Lincoln, who used the puzzle at various times, including in jury speeches. The answer of course is: “Four. Saying that a tail is a leg does not make it a leg.” Quote investigators have traced the puzzle as far back as 1825, when newspapers quoted legislator John W. Hulbert as saying that something “reminded him of the story.”[1]

What do we call a person who becomes pregnant and delivers a baby?

A woman.

The current, trending fashion is to call such a person a PLP, a person who becomes pregnant and lactates. This façon de parler is particularly misleading if it is meant as an accommodation to the transgender population. Transgender women will not show up as pregnant or lactating, and transgender men will show up only if their transition is incomplete and has left them with functional female reproductive organs.

In 2010, Guinness World Records named Thomas Beatie the “World’s First Married Man to Give Birth.” Thomas Beatie is now legally a man, which is just another way of saying that he chose to identify as a man, and gained legal recognition for his choice. Beatie was born as a female, matured into a woman, and had ovaries and a uterus. Beatie was, in other words, biologically a female when she went through puberty and became biologically a woman.

Beatie underwent partial gender reassignment surgery, consisting of at least a double mastectomy, and took testosterone replacement therapy (off label), but retained ovaries and a uterus.

Guinness makes a fine stout, and we may look upon it kindly for having nurtured the statistical thinking of William Sealy Gosset. Guinness, however, cannot make a dog have five legs simply by agreeing to call its tail a leg. Beatie was not the first pregnant man; rather he was the first person, born with functional female reproductive organs, to have his male gender identity recognized by a state, who then conceived and delivered a newborn. If Guinness wants to call this the first “legal man” to give birth, by semantic legerdemain, that is fine. Certainly we can and should publicly be respectful of transgendered persons, and work to prevent them from being harassed or embarrassed. There may well be many situations in which we would change our linguistic usage to acknowledge a transsexual male as the mother of a child.[2] We do not, however, have to change biology to suit their choices, or to make useless gestures to have them feel included when their inclusion is not relevant to important scientific and medical issues.

Sadly, the NASEM would impose this politico-semanticism upon us while addressing the serious issue of whether women of child-bearing age should be included in clinical trials. At a recent workshop on “Developing a Framework to Address Legal, Ethical, Regulatory, and Policy Issues for Research Specific to Pregnant and Lactating Persons,”[3] the Academies introduced a particularly ugly neologism, “pregnant and lactating persons,” or PLP for short. The workshop reports:

“Approximately 4 million pregnant people in the United States give birth annually, and 70 percent of these individuals take at least one prescription medication during their pregnancy. Yet, pregnant and lactating persons are often excluded from clinical trials, and often have to make treatment decisions without an adequate understanding of the benefits and risks to themselves and their developing fetus or newborn baby. An ad hoc committee of the National Academies of Sciences, Engineering, and Medicine will develop a framework for addressing medicolegal and liability issues when planning or conducting research specific to pregnant and lactating persons.”[4]

The full report from NASEM, with fulsome use of the PLP phrase, is now available.[5]

J.K. Rowling is not the only one who is concerned about the erasure of the female from our discourse. Certainly we can acknowledge that transgenderism is real, without allowing the exception to erase biological facts about reproduction. After all, Guinness’s first pregnant “legal man” could not lactate, as a result of bilateral mastectomies, and thus the “legal man” was not a pregnant person who could lactate. And the pregnant “legal man” had functioning ovaries and uterus, which is not a matter of gender identity, but physiological functioning of biological female sex organs. Furthermore, including transgendered women, or “legal women,” without functional ovaries and uterus, in clinical trials will not answer difficult questions about whether experimental therapies may harm women’s reproductive function or their offspring in utero or after birth.

The inclusion of women in clinical trials is a serious issue precisely because experimental therapies may hold risks for participating women’s offspring in utero. The law may not permit a proper informed consent by women for their conceptus. And because of the new latitude legislatures enjoy to impose religion-based bans on abortion, a woman who conceives while taking an experimental drug may not be able to terminate a pregnancy that has been irreparably harmed by the drug.

The creation of the PLP category really confuses rather than elucidates how we answer the ethical and medical questions involved in testing new drugs or treatments for women. NASEM’s linguistic gerrymandering may allow some persons who have suffered from gender dysphoria to feel “included,” and perhaps to have their choices “validated,” but the inclusion of transgender women, or partially transgendered men, will not help answer the important questions facing clinical researchers. Taxpayers who fund NASEM and NIH deserve better clarity and judgment in the use of governmental funds in supporting clinical trials.

When and whence comes this PLP neologism?  An N-Gram search shows that “pregnant person” was found in the database before 1975, and that the phrase has waxed and waned since.

N-Gram for pregnant person, conducted September 1, 2023

A search of the National Library of Medicine PubMed database found several dozen hits, virtually all within the last two years. The earliest use was in 1970,[6] with a recrudescence 11 years later.[7]


From PubMed search for “pregnant person,” conducted Sept. 1, 2023 

In 2021, the New England Journal of Medicine published a paper on the safety of Covid-19 vaccines in “pregnant persons.”[8] As of last year, the Association of American Medical Colleges sponsored a report about physicians advocating for inclusion of “pregnant people” in clinical trials, in a story that noted that “[p]regnant patients are often excluded from clinical trials for fear of causing harm to them or their babies, but leaders in maternal-fetal medicine say the lack of data can be even more harmful.”[9] And currently, the New York State Department of Health advises that “[d]ue to changes that occur during pregnancy, pregnant people may be more susceptible to viral respiratory infections.”[10]

The PLP neologism was not always with us. Back in the dark ages, 2008, the National Cancer Institute issued guidelines on the inclusion of pregnant and breast-feeding women in clinical trials.[11] As recently as June 2021, the World Health Organization was still old school in discussing “pregnant and lactating women.”[12] The same year, over a dozen female scientists published a call to action about the inclusion of “pregnant women” in COVID-19 trials.[13]

Two years ago, I gingerly criticized the American Medical Association’s issuance of a linguistic manifesto on how physicians and scientists should use language to advance the Association’s notions of social justice.[14] Even that guide to “correct” usage, however, was devoid of the phrase “pregnant persons” or “lactating persons.”[15] Pregnancy is a function of sex, not of gender.


[1] “Suppose You Call a Sheep’s Tail a Leg, How Many Legs Will the Sheep Have?” QuoteResearch (Nov. 15, 2015).

[2] Sam Dylan More, “The pregnant man – an oxymoron?” 7 J. Gender Studies 319 (1998).

[3] National Academies of Sciences, Engineering, and Medicine, “Research with Pregnant and Lactating Persons: Mitigating Risk and Liability: Proceedings of a Workshop in Brief,” (2023).

[4] NASEM, “Research with Pregnant and Lactating Persons: Mitigating Risk and Liability: Proceedings of a Workshop–in Brief” (2023).

[5] National Academies of Sciences, Engineering, and Medicine, Inclusion of pregnant and lactating persons in clinical trials: Proceedings of a workshop (2023).

[6] W.K. Keller, “The pregnant person,” 68 J. Ky. Med. Ass’n 454 (1970).

[7] Vibiana M. Andrade, “The toxic workplace: Title VII protection for the potentially pregnant person,” 4 Harvard Women’s Law J. 71 (1981).

[8] Tom T. Shimabukuro, Shin Y. Kim, Tanya R. Myers, Pedro L. Moro, Titilope Oduyebo, Lakshmi Panagiotakopoulos, Paige L. Marquez, Christine K. Olson, Ruiling Liu, Karen T. Chang, Sascha R. Ellington, Veronica K. Burkel, et al., for the CDC v-safe COVID-19 Pregnancy Registry Team, “Preliminary Findings of mRNA Covid-19 Vaccine Safety in Pregnant Persons,” 384 New Engl. J. Med. 2273 (2021).

[9] Bridget Balch, “Prescribing without data: Doctors advocate for the inclusion of pregnant people in clinical research,” AAMC (Mar. 22, 2022).

[10] New York State Department of Health, “Pregnancy & COVID-19,” last visited August 31, 2023.

[11] NCI, “Guidelines Regarding the Inclusion of Pregnant and Breast-Feeding Women on Cancer Clinical Treatment Trials,” (May 29, 2008).

[12] WHO, “Update on WHO Interim recommendations on COVID-19 vaccination of pregnant and lactating women,” (June 10, 2021).

[13] Melanie M. Taylor, Loulou Kobeissi, Caron Kim, Avni Amin, Anna E Thorson, Nita B. Bellare, Vanessa Brizuela, Mercedes Bonet, Edna Kara, Soe Soe Thwin, Hamsadvani Kuganantham, Moazzam Ali, Olufemi T. Oladapo, Nathalie Broutet, “Inclusion of pregnant women in COVID-19 treatment trials: a review and global call to action,”9 Health Policy E366 (2021).

[14] American Medical Association, “Advancing Health Equity: A Guide to Language, Narrative and Concepts,” (2021); see Harriet Hall, “The AMA’s Guide to Politically Correct Language: Advancing Health Equity,” Science Based Medicine (Nov. 2, 2021).

[15] “When the American Medical Association Woke Up” (Nov. 17, 2021).

Tenpenny Down to Tuppence

August 22nd, 2023

Over two years ago, an osteopathic physician by the name of Sherri Tenpenny created a stir when she told the Ohio state legislature that Covid vaccines magnetize people or cause them to “interface with 5G towers.”[1] What became apparent at that time was that Tenpenny was herself a virulent disease vector of disinformation. Indeed, in its March 2021 report, the Center for Countering Digital Hate listed Tenpenny as a top anti-vaccination shyster. As a social media vector, she is ranked in the top dozen “influencers.”[2] No surprise, in addition to bloviating about Covid vaccines, someone with such quirky, non-evidence-based opinions turns up in litigation as an expert witness.[3]


At the time of Tenpenny’s ludicrous testimony before the Ohio state legislature, one astute observer remarked that the AMA Ethical Guidelines specify that medical societies and medical licensing boards are responsible for maintaining high standards for medical testimony, and must assess “claims of false or misleading testimony.”[4] When the testimony is false or misleading, these bodies should discipline the offender “as appropriate.”[5]

The State Medical Board of Ohio stepped up to its responsibility. After receiving hundreds of complaints (roughly 350) about Tenpenny’s testimony, the Ohio Board launched an investigation of Tenpenny, who was first licensed as an osteopathic physician in 1984.[6] The Board’s investigators tried to contact Tenpenny, who apparently evaded engaging with them. Eventually, Thomas Renz, a lawyer for Tenpenny, informed the Board that her “[d]eclin[ing] to cooperate in the Board’s bad faith and unjustified assault on her licensure, livelihood, and constitutional rights cannot be construed as an admission of any allegations against her.”

After multiple unsuccessful attempts to reach Tenpenny, the Board issued a citation, in 2022, against her for stonewalling the investigation. Tenpenny requested an administrative hearing, set for April 2023, when she would be able to submit her defense in writing. The Board refused to let Tenpenny evade questioning, and suspended her license for failure to comply with the investigation. According to the Board’s Order, “Dr. Tenpenny did not simply fail to cooperate with a Board investigation, she refused to cooperate. *** And that refusal was based on her unsupported and subjective belief regarding the Board’s motive for the investigation. Licensees of the Board cannot simply refuse to cooperate in investigations because they decide they do not like what they assume is the reason for the investigation.”[7]

According to the Board’s Order, Tenpenny has been fined $3,000, and she must satisfy the Board’s conditions before applying for reinstatement. The Ohio Board’s decision is largely based upon a procedural ruling that flowed from Tenpenny’s refusal to cooperate with the Board’s investigation. Most state medical boards have done little to nothing to address the substance of physician misconduct arising out of the COVID pandemic. This month, the American Board of Internal Medicine (ABIM) announced that it was revoking the board certifications of two physicians, Drs. Paul Marik and Pierre Kory, members of the Front Line COVID-19 Critical Care Alliance, for promoting disinformation and invalid opinions about COVID-19 therapies.[8] Ron Johnson, the quack senator from Wisconsin, predictably and transparently criticized the ABIM’s action with an ad hominem attack on the ABIM as the action of a corporate cabal. Quack physicians of course have a first amendment right to say whatever they please, but their licensure and their board certification are contingent on basic competence. Both the state boards and the certifying private groups have the right and responsibility to revoke licenses and privileges when physicians demonstrate incompetence and callousness in the face of a pandemic. There is no unqualified right to professional licenses or certifications.


[1] Andrea Salcedo, “A doctor falsely told lawmakers vaccines magnetize people: ‘They can put a key on their forehead. It sticks’,” Washington Post (June 9, 2021); Andy Downing, “What an exceedingly dumb time to be alive,” Columbus Alive (June 10, 2021); Jake Zuckerman, “She says vaccines make you magnetized. This West Chester lawmaker invited her testimony, chair says,” Ohio Capital Journal (July 14, 2021).

[2] The Disinformation Dozen (2021).

[3] Shaw v. Sec’y Health & Human Servs., No. 01-707V, 2009 U.S. Claims LEXIS 534, *84 n.40 (Fed. Cl. Spec. Mstr. Aug. 31, 2009) (excluding expert witness opinion testimony from Tenpenny).

[4]  “Epistemic Virtue – Dropping the Dime on TenpennyTortini (July 18, 2021).

[5] A.M.A. Code of Medical Ethics Opinion 9.7.1.

[6] Michael DePeau-Wilson, “Doc Who Said COVID Vax Magnetized People Has License Suspended,” MedPageToday (Aug. 11, 2023); David Gorski, “The Ohio State Medical Board has finally suspended the medical license of antivax quack Sherri Tenpenny,” Science-Based Medicine (Aug. 14, 2023).

[7] In re Sherri J. Tenpenny, D.O., Case No. 22-CRF-0168 State Medical Board of Ohio (Aug. 9, 2023).

[8] David Gorski, “The American Board of Internal Medicine finally acts against two misinformation-spreading doctors,” Science-Based Medicine (Aug. 7, 2023).

Links, Ties, and Other Hook Ups in Risk Factor Epidemiology

July 5th, 2023

Many journalists struggle with reporting the results from risk factor epidemiology. Recently, JAMA Network Open published an epidemiologic study (“Williams study”) that explored whether exposure to Agent Orange among United States military veterans was associated with bladder cancer.[1] The reported study found little to no association, but lay and scientific journalists described the study as finding a “link,”[2] or a “tie,”[3] thus suggesting causality. One web-based media report stated, without qualification, that Agent Orange “increases bladder cancer risk.”[4]


Even the authors of the Williams study described the results inconsistently and hyperbolically. Within the four corners of the published article, the authors described their having found a “modestly increased risk of bladder cancer,” and then, on the same page, they reported that “the association was very slight (hazard ratio, 1.04; 95% C.I.,1.02-1.06).”

In one place, the Williams study states it looked at a cohort of 868,912 veterans with exposure to Agent Orange, and evaluated bladder cancer outcomes against outcomes in 2,427,677 matched controls. Elsewhere, they report different numbers, which are hard to reconcile. In any event, the authors had a very large sample size, which had the power to detect theoretically small differences as “statistically significant” (p < 0.05). Indeed, the study was so large that even a very slight disparity in rates between the exposed and unexposed cohort members could be “statistically significantly” different, notwithstanding that systematic error certainly played a much larger role in the results than random error. In terms of absolute numbers, the researchers found 50,781 bladder cancer diagnoses, on follow-up of 28,672,655 person-years. There were overall 2.1% bladder cancers among the exposed servicemen, and 2.0% among the unexposed. Calling this putative disparity a “modest association” is a gross overstatement, and it is difficult to square the authors’ pronouncement of a “modest association” with a “very slight increased risk.”
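For readers who want to check the arithmetic, a back-of-the-envelope two-proportion z-test bears the point out. This is a sketch only: it uses the stated 2.1% and 2.0% incidence proportions and the stated cohort sizes, which, as noted, do not reconcile exactly with the reported case counts. At these sample sizes, even a one-tenth-of-one-percent absolute difference clears conventional "statistical significance" with room to spare:

```python
import math

# Figures as quoted in the Williams study (illustrative only; the
# paper's own counts do not reconcile exactly)
n_exposed, n_unexposed = 868_912, 2_427_677
cases_exposed = round(0.021 * n_exposed)      # 2.1% incidence among exposed
cases_unexposed = round(0.020 * n_unexposed)  # 2.0% incidence among unexposed

p1 = cases_exposed / n_exposed
p2 = cases_unexposed / n_unexposed
p_pooled = (cases_exposed + cases_unexposed) / (n_exposed + n_unexposed)

# Standard two-proportion z-test under the pooled null
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_exposed + 1 / n_unexposed))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

print(f"z = {z:.2f}, two-sided p = {p_value:.1e}")
# z comes out near 5.7 -- far beyond p < 0.05, despite a trivial
# 0.1 percentage-point absolute difference in incidence
```

The point of the exercise is that random error is effectively ruled out by sheer size, which tells us nothing about the systematic error that likely dominates such a result.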

The authors also reported that there was no association between Agent Orange exposure and aggressiveness of bladder cancer, with bladder wall muscle invasion taken to be the marker of aggressiveness. Given that the authors were willing to proclaim a hazard ratio of 1.04 as an association, this report of no association with aggressiveness is manifestly false. The Williams study found a decreased odds of a diagnosis of muscle-invasive bladder cancer among the exposed cases, with an odds ratio of 0.91, 95% CI 0.85-0.98 (p = 0.009). The study thus did not find an absence of an association, but rather an inverse association.

Causality

Under the heading of “Meaning,” the authors wrote that “[t]hese findings suggest an association between exposure to Agent Orange and bladder cancer, although the clinical relevance of this was unclear.” Despite disclaiming a causal interpretation of their results, Williams and colleagues wrote that their results “support prior investigations and further support bladder cancer to be designated as an Agent Orange-associated disease.”

Williams and colleagues note that the Institute of Medicine had suggested that the association between Agent Orange exposure and bladder cancer outcomes required further research.[5] Requiring additional research was apparently sufficient for the Department of Veterans Affairs, in 2021, to assume facts not in evidence, and to designate “bladder cancer as a cancer caused by Agent Orange exposure.”[6]

Williams and colleagues themselves appear to disavow a causal interpretation of their results: “we cannot determine causality given the retrospective nature of our study design.” They also acknowledged their inability to “exclude potential selection bias and misclassification bias.” Although the authors did not explore the issue, exposed servicemen may well have been under greater scrutiny, creating surveillance and diagnostic biases.

The authors failed to grapple with other, perhaps more serious biases and inadequacy of methodology in their study. Although the authors claimed to have controlled for the most important confounders, they failed to include diabetes as a co-variate in their analysis, even though diabetic patients have a more than doubled risk of bladder cancer, even after adjustment for smoking.[7] Diabetic patients would also have been likely to have had more visits to VA centers for healthcare and more opportunity to have been diagnosed with bladder cancer.

Furthermore, with respect to the known confounding variable of smoking, the authors trichotomized smoking history as “never,” “former,” or “current” smoker. Smoking information was missing for about 13% of the cohort. In a univariate analysis based upon smoking status (Table 4), the authors reported the following hazard ratios for bladder cancer, by smoking status:

Smoking status at bladder cancer diagnosis — hazard ratio (95% CI)

Never smoked        1.00  [Reference]

Current smoker     1.10  (1.00-1.21)

Former smoker      1.08  (1.00-1.18)

Unknown             1.17  (1.05-1.31)

This analysis of smoking risk points to the fragility of the Agent Orange analyses. First, the “unknown” smoking status is associated with roughly twice the excess risk of current or former smokers. Second, the hazard ratios for bladder cancer were, understandably, higher for current smokers (HR 1.10, 95% CI 1.00-1.21) and former smokers (HR 1.08, 95% CI 1.00-1.18) than for never-smoking veterans.

Third, the Williams study’s univariate analysis of smoking and bladder cancer generates risk ratios that are quite out of line with independent studies of smoking and bladder cancer risk. For instance, meta-analyses of studies of smoking and bladder cancer risk report risk ratios of 2.58 (95% C.I., 2.37–2.80) for any smoking, 3.47 (3.07–3.91) for current smoking, and 2.04 (1.85–2.25) for past smoking.[8] In excess-risk terms, these smoking-related bladder cancer risks are an order of magnitude greater than the univariate smoking risks in the Williams study, let alone its multivariate analysis of Agent Orange risk.
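A crude arithmetic comparison, using the figures just quoted and expressing each risk ratio as excess risk (RR − 1), illustrates the size of the gap; this is back-of-the-envelope arithmetic, not a formal statistical contrast:

```python
# Excess relative risk (RR - 1): meta-analytic smoking risks
# (Cumberbatch et al. 2016) versus the Williams study's univariate
# hazard ratios, as quoted in the text above
meta = {"current": 3.47, "former": 2.04}
williams = {"current": 1.10, "former": 1.08}

for status in ("current", "former"):
    ratio = (meta[status] - 1) / (williams[status] - 1)
    print(f"{status} smoker: excess risk roughly {ratio:.0f}x larger "
          f"in the meta-analysis than in the Williams study")
```

For current smokers the meta-analytic excess risk (2.47) is roughly twenty-five times the Williams study's (0.10); for former smokers, roughly thirteen times.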

Fourth, the authors engage in the common but questionable practice of categorizing a known confounder, smoking, which should ideally be modeled as a continuous variable with respect to quantity consumed, years smoked, and years since quitting.[9] The concern here, given that the study is very large, is not the loss of power,[10] but bias away from the null. Peter Austin has shown, by Monte Carlo simulation, that categorizing a continuous variable in a logistic regression inflates the rate of false-positive associations.[11] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. The large dataset used by Williams and colleagues, which they see as a plus, works against them by increasing the bias from the use of categorical variables for confounding variables.[12]

The Williams study raises serious questions not only about the quality of medical journalism, but also about how an executive agency such as the Veterans Administration evaluates scientific evidence. If the Williams study were to play a role in compensation determinations, it would seem that veterans with muscle-invasive bladder cancer would be turned away, while those veterans with less serious cancers would be compensated. But with 2.1% incidence versus 2.0%, how can compensation be rationally permitted in every case?


[1] Stephen B. Williams, Jessica L. Janes, Lauren E. Howard, Ruixin Yang, Amanda M. De Hoedt, Jacques G. Baillargeon, Yong-Fang Kuo, Douglas S. Tyler, Martha K. Terris, Stephen J. Freedland, “Exposure to Agent Orange and Risk of Bladder Cancer Among US Veterans,” 6 JAMA Network Open e2320593 (2023).

[2] Elana Gotkine, “Exposure to Agent Orange Linked to Risk of Bladder Cancer,” Buffalo News (June 28, 2023); Drew Amorosi, “Agent Orange exposure linked to increased risk for bladder cancer among Vietnam veterans,” HemOnc Today (June 27, 2023).

[3] Andrea S. Blevins Primeau, “Agent Orange Exposure Tied to Increased Risk of Bladder Cancer,” Cancer Therapy Advisor (June 30, 2023); Mike Bassett, “Agent Orange Exposure Tied to Bladder Cancer Risk in Veterans — Increased risk described as ‘modest’, and no association seen with aggressiveness of cancer,” Medpage Today (June 27, 2023).

[4] Darlene Dobkowski, “Agent Orange Exposure Modestly Increases Bladder Cancer Risk in Vietnam Veterans,” Cure Today (June 27, 2023).

[5] Institute of Medicine – Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides (Tenth Biennial Update), Veterans and Agent Orange: Update 2014 at 10 (2016) (upgrading previous finding of “inadequate” to “suggestive”).

[6] Williams study, citing U.S. Department of Veterans Affairs, “Agent Orange exposure and VA disability compensation.”

[7] Yeung Ng, I. Husain, N. Waterfall, “Diabetes Mellitus and Bladder Cancer – An Epidemiological Relationship?” 9 Path. Oncol. Research 30 (2003) (diabetic patients had an increased, significant odds ratio for bladder cancer compared with non diabetics even after adjustment for smoking and age [OR: 2.69 p=0.049 (95% CI 1.006-7.194)]).

[8] Marcus G. Cumberbatch, Matteo Rota, James W.F. Catto, and Carlo La Vecchia, “The Role of Tobacco Smoke in Bladder and Kidney Carcinogenesis: A Comparison of Exposures and Meta-analysis of Incidence and Mortality Risks?” 70 European Urology 458 (2016).

[9] See generally, “Confounded by Confounding in Unexpected Places” (Dec. 12, 2021).

[10] Jacob Cohen, “The cost of dichotomization,” 7 Applied Psychol. Measurement 249 (1983).

[11] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[12] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006); Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M. Cumberland, Gabriela Czanner, Catey Bunce, Caroline J. Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014); Julie R. Irwin & Gary H. McClelland, “Negative Consequences of Dichotomizing Continuous Predictor Variables,” 40 J. Marketing Research 366 (2003); Stanley E. Lazic, “Four simple ways to increase power without increasing the sample size,” PeerJ Preprints (23 Oct 2017).