TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Junior Goes to Washington

November 4th, 2024

I do not typically focus on politics per se in these pages, but sometimes politicians wander into the domain of public health, tort law, and the like. And when they do, they become “fair game” so to speak for comment.

Speaking of “fair game,” back in August, Robert Francis Kennedy, Jr. [Junior], admitted to dumping a dead bear in Central Park, Manhattan, and fabricating a scene to mislead authorities into believing that the bear had died from colliding with a bicycle.[1] Junior’s bizarre account of his criminal activities can be found on X, home to so many dodgy political figures.

Junior, who claims to be an animal lover and who somehow became a member of the New York bar, says he was driving in upstate New York, early in the morning, to go falconing in the Hudson Valley. On his drive, he witnessed a driver in front of him fatally hit a bear cub. We have only Junior’s word that it was another driver, and not he, who hit the bear.

Assuming that Junior was telling the truth (a big assumption), we cannot know how he could have ascertained that the bear had been fatally struck by the vehicle in front of his own. Junior continued his story:

“So I pulled over and I picked up the bear and put him in the back of my van, because I was gonna skin the bear. It was in very good condition and I was gonna put the meat in my refrigerator.”

Kennedy noted that New York law permits taking home a bear, killed on the road, but the law requires that the incident be reported to either the New York State Department of Environmental Conservation (DEC) or to the police, who will then issue a permit. In case you are interested in going roadkill collecting, you can contact the DEC at (518) 402-8883 or wildlife@dec.ny.gov.

Junior, the putative lawyer, flouted the law. He never obtained a permit from a law enforcement officer, but he took the bear carcass nonetheless. The bear never made it back to Junior’s sometime residence. The six-month-old, 44-pound bear cub carcass lay a-moldering in the back of his van, while Kennedy was busy with his falcons. Afterwards, Junior found himself out of time and needing to rush to Brooklyn for a dinner with friends at the Peter Luger Steak House. Obviously, Junior is not a vegetarian; nor is he beaten down by the economy. A porterhouse steak at Luger’s costs over $140 per person, and no credit cards are accepted from diners. The dinner went late, while the blow flies were having at the bear cub.

Junior had to run to the airport (presumably in Queens), and as he explained:

“I had to go to the airport, and the bear was in my car, and I didn’t want to leave the bear in the car because that would have been bad.”

Bad, indeed. Bad, without a permit. Bad, without being gutted. Bad, without being refrigerated.

Junior had a brainstorm, in the part of his brain that remains. He would commit yet another crime. (Unfortunately, the statute of limitations has likely run on the roadkill incident.) Junior dumped the dead bear, along with a bicycle, in Central Park. The geography is curious. Peter Luger’s is in Brooklyn, although the chain also has a restaurant in Great Neck. From either location, traveling into Manhattan would be quite a detour. There are plenty of parks closer to either restaurant location, or en route to the New York airports.

Junior’s crime was discovered the following day. Although the perpetrator was not identified until Junior’s confession, the crime scene was reported by none other than one of Junior’s Kennedy cousins, in the New York Times.[2]

Now, as any hunter knows, if Junior were to have any chance of actually using the bear meat, he needed to gut the animal immediately to prevent the viscera from contaminating the muscle tissue. His recklessness in handling the carcass reflects a profound ignorance of food safety. Junior might have made the meat available to the needy, but his disregard for the proper handling of a dead animal rendered the carcass worthless. Last weekend, Felonious Trump announced, at a rally, that he had told Junior that “you work on what we eat.”

Let them eat roadkill or Peter Luger steaks.

Women’s Health Issues

Trump, the Lothario of a porn actress, the grab-them-by-the-pussy, adjudicated sexual abuser,[3] has also announced that he will put Junior in charge of women’s health issues.[4] Junior appears to be a fellow traveler when it comes to “protecting” women. Back in July, Vanity Fair published the account of Ms. Eliza Cooney, a former babysitter for Junior’s children. According to Cooney, Junior groped her on several occasions.[5] Junior conveniently has no memory of the events, but nonetheless apologized profusely to Ms. Cooney.[6] Junior texted an “apology” to Ms. Cooney not long after the Vanity Fair article was published:

“I have no memory of this incident but I apologize sincerely for anything I ever did that made you feel uncomfortable or anything I did or said that offended you or hurt your feelings. I never intended you any harm. If I hurt you, it was inadvertent. I feel badly for doing so.”

Junior’s lack of memory may be due to his having lost some undisclosed amount of his brain to a worm that once resided there.[7] Even so, the apology combined with the profession of lack of memory was peculiar. Ms. Cooney, who is now 48, was understandably underwhelmed by Junior’s text messages:

“It was disingenuous and arrogant. I’m not sure how somebody has a true apology for something that they don’t admit to recalling. I did not get a sense of remorse.”[8]

Somehow the awfulness of placing Junior in “charge” of women’s health makes perfect sense in the administration of Donald Trump.

Health Agencies

If placing the integrity of women’s health and the safety of our food supply at risk is not enough to raise your concern, Trump apparently plans to let Junior have free rein with his “Make America Healthy Again” program. Just a few days ago, Trump announced that he was “going to let him [Junior] go wild on health. I’m going to let him go wild on the food. I’m going to let him go wild on the medicines.”[9]

Junior has forever hawked conspiracy theories and claims that vaccines cause autism and other diseases. As part of the lawsuit industry, Junior has sought to make money by demonizing vaccines and prescription medications. Recently, Howard Lutnick, the co-chair of the Trump transition team, after a lengthy conversation with Junior, recited Junior’s evidence-free claims that vaccines are not safe. According to Lutnick:

“I think it’ll be pretty cool to give him the data. Let’s see what he comes up with.”[10]

Pretty cool to let a monkey have a go at a typewriter, but it would take longer than the lifetime of the universe for a monkey to compose Hamlet.[11] Junior might well need that lifetime of the universe, squared, to interpret the extensive available safety and efficacy data on vaccines.
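The arithmetic behind the monkey quip is easy to sketch. Under the simplest model, a random typist on a k-key keyboard needs on the order of k^n keystrokes before a given n-character string appears. A back-of-the-envelope calculation (the 27-key alphabet, the typing speed, and the target phrase below are illustrative assumptions, not figures from the cited paper):

```python
ALPHABET = 27              # 26 letters plus a space bar (an assumption)
KEYS_PER_SECOND = 10       # an implausibly industrious monkey
UNIVERSE_SECONDS = 4.4e17  # roughly 13.8 billion years, in seconds

def expected_universe_lifetimes(text: str) -> float:
    """Expected random-typing time for `text`, in universe-lifetimes,
    using the ~ALPHABET**len(text) keystroke approximation."""
    keystrokes = ALPHABET ** len(text)
    seconds = keystrokes / KEYS_PER_SECOND
    return seconds / UNIVERSE_SECONDS

# Even one famous line, never mind the whole play, is hopeless.
ratio = expected_universe_lifetimes("to be or not to be")
print(f"{ratio:.2e} universe-lifetimes")
```

One eighteen-character line already costs millions of universe-lifetimes; the full play, with its tens of thousands of words, is beyond astronomical.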

Junior has been part of the lawsuit industry and the anti-vax conspiracist movement for years. When asked whether “banning certain vaccines might be on the table,” Trump told NBC that “Well, I’m going to talk to him and talk to other people, and I’ll make a decision, but he’s [Junior’s] a very talented guy and has strong views.”

Strong views; weak evidence.

Junior asserted last weekend that the aspiring Trump administration would move quickly to end fluoridation of drinking water, even though fluoridation of water supplies takes place at the state, county, and municipal level. When interviewed by NBC yesterday, Trump said he had not yet spoken to Junior about fluoride, “but it sounds OK to me. You know it’s possible.”[12] Junior, not particularly expert in anything, has opined that fluoride is “an industrial waste,” which he claims, sans good and sufficient evidence, is “linked” to cancer and other unspecified diseases and disorders.[13]

If there is one plausible explanation for this political positioning, it is that anti-vax propaganda plays into the anti-elite, anti-expert mindset of Trump and his followers. We should not be surprised that people who believe that Trump was a successful businessman, based upon a (non-)reality TV show and multiple bankruptcies, would also have no idea of what success would look like for the scientific community.

At the end of the 20th century, the Centers for Disease Control reflected on the great achievements in public health.[14] The Centers identified a fairly uncontroversial list of 10 successes:

(1) Vaccination

(2) Motor-vehicle safety

(3) Safer workplaces

(4) Control of infectious diseases

(5) Decline in deaths from coronary heart disease and stroke

(6) Safer and healthier foods

(7) Healthier mothers and babies

(8) Family planning

(9) Fluoridation of drinking water

(10) Recognition of tobacco use as a health hazard

A second Trump presidency, with Junior at his side, would unravel vaccination and fluoridation, two of the ten great public health achievements of the last century. Trump has already shown a callous disregard for the control of infectious diseases, with his handling of the coronavirus pandemic. Trump’s alignment with strident anti-abortion advocates and religious zealots has undermined the health of women, and ensured that many fetuses with severe congenital malformations must be brought to term. His right-wing anti-women constituency and their hostility to Planned Parenthood have undermined family planning. Trump’s coddling of American industry likely means less safe workplaces. Trump and Junior in positions of power would also likely mean less safe, less healthful foods. (A porterhouse or a McDonald’s Big Mac on every plate?) So basically, seven, perhaps eight, of the ten great achievements would be reversed.

Happy Election Day!


[1] Rachel Treisman, “RFK Jr. admits to dumping a dead bear in Central Park, solving a decade-old mystery,” Nat’l Public Radio (Aug. 5, 2024).

[2] Tatiana Schlossberg, “Bear Found in Central Park Was Killed by a Car, Officials Say,” N.Y. Times (Oct. 7, 2014).

[3] Larry Neumeister, Jennifer Peltz, and Michael R. Sisak, “Jury finds Trump liable for sexual abuse, awards accuser $5M,” Assoc’d Press News (May 9, 2023).

[4] “Trump brags about putting RFK Jr. in charge of women’s health,” MSNBC (Nov. 2024).

[5] Joe Hagan, “Robert Kennedy Jr’s Shocking History,” Vanity Fair (July 2, 2024).

[6] Mike Wendling, “RFK Jr texts apology to sexual assault accuser – reports,” BBC (July 12, 2024).

[7] Gabrielle Emanuel, “RFK Jr. is not alone. More than a billion people have parasitic worms,” Nat’l Public Radio (May 9, 2024).

[8] Peter Jamison, “RFK Jr. sent text apologizing to woman who accused him of sexual assault,” Washington Post (July 12, 2024).

[9] Bruce Y. Lee, “Trump States He’ll Let RFK Jr. ‘Go Wild’ On Health, Food, Medicines,” Forbes (Nov. 2, 2024).

[10] Dan Diamond, Lauren Weber, Josh Dawsey, Michael Scherer, and Rachel Roubein, “RFK Jr. set for major food, health role in potential Trump administration,” Wash. Post (Oct. 31, 2024).

[11] Stephen Woodcock & Jay Falletta, “A numerical evaluation of the Finite Monkeys Theorem,” 9 Franklin Open 100171 (2024).

[12] Jonathan J. Cooper, “RFK Jr. says Trump would push to remove fluoride from drinking water. ‘It’s possible,’ Trump says,” Assoc’d Press News (Nov. 3, 2024); William Kristol and Andrew Egger, “The Wheels on the Bus Go Off, and Off, and Off, and . . .,” The Bulwark (Nov. 4, 2024).

[13] Nadia Kounang, Carma Hassan and Deidre McPhillips, “RFK Jr. says fluoride is ‘an industrial waste’ linked to cancer, diseases and disorders. Here’s what the science says,” CNNHealth (Nov. 4, 2024).

[14] Centers for Disease Control, “Ten Great Public Health Achievements — United States, 1900-1999,” 48 Morbidity and Mortality Weekly Report 241 (Apr. 2, 1999).

800 Plaintiffs Fail to Show that Glyphosate Caused Their NHL

September 11th, 2024

Last week, Barbara Billauer, at the American Council on Science and Health[1] website, reported on the Australian court that found insufficient scientific evidence to support plaintiffs’ claims that they had developed non-Hodgkin’s lymphoma (NHL) from their exposure to Monsanto’s glyphosate product. The judgment had previously been reported by the Genetic Literacy Project,[2] which republished an Australian news report from July.[3] European news media seemed more astute, with The Guardian[4] and Reuters[5] reporting the court decision in July. The judgment was noteworthy, and yet the mainstream and legal media in the United States generally ignored the development. The Old Gray Lady and the WaPo, both of which have covered previous glyphosate cases in the United States, sayeth naught. Crickets at Law360.

On July 24, 2024, Justice Michael Lee, for the Federal Court of Australia, ruled that there was insufficient evidence to support the claims of 800 plaintiffs that their NHL had been caused by glyphosate exposure.[6] Because plaintiffs’ claims were aggregated in a class, the judgment against the class of 800 or so claimants was the most significant judgment in glyphosate litigation to date.

Justice Lee’s opinion is over 300 pages long, and I have had a chance only to skim it. Regardless of how the Australian court handled various issues, one thing is indisputable: the court has given a written record of its decision processes for the world to assess, critique, validate, or refute. Jury trials provide no similar opportunity to evaluate the reasoning processes (vel non) of the decision maker. The absence of transparency, and an opportunity to evaluate the soundness of verdicts in complex medical causation, raises the question whether jury trials really satisfy the legal due process requirements of civil adjudication.


[1] Barbara Pfeffer Billauer, “The RoundUp Judge Who Got It,” ACSH (Aug. 29, 2024).

[2] Kristian Silva, “Insufficient evidence that glyphosate causes cancer: Australian court tosses 800-person class action lawsuit,” ABC News (Australia) (July 26, 2024).

[3] Kristian Silva, “Major class action thrown out as Federal Court finds insufficient evidence to prove weedkiller Roundup causes cancer,” ABC Australian News (July 25, 2024).

[4] Australian Associated Press, “Australian judge dismisses class action claiming Roundup causes cancer,” The Guardian (July 25, 2024).

[5] Peter Hobson and Alasdair Pal, “Australian judge dismisses lawsuit claiming Bayer weedkiller causes blood cancer,” Reuters (July 25, 2024).

[6] McNickle v. Huntsman Chem. Co. Australia Pty Ltd (Initial Trial) [2024] FCA 807.

QRPs in Science and in Court

April 2nd, 2024

Lay juries usually function well in assessing the relevance of an expert witness’s credentials, experience, command of the facts, likeability, physical demeanor, confidence, and ability to communicate. Lay juries can understand and respond to arguments about personal bias, which no doubt is why trial lawyers spend so much time and effort to emphasize the size of fees and consulting income, and the propensity to testify only for one side. For procedural and practical reasons, however, lay juries do not function very well in assessing the actual merits of scientific controversies. And with respect to methodological issues that underlie the merits, juries barely function at all. The legal system imposes no educational or experiential qualifications for jurors, and trials are hardly the occasion to teach jurors the methodology, skills, and information needed to resolve methodological issues that underlie a scientific dispute.

Scientific studies, reviews, and meta-analyses are virtually never directly admissible in evidence in courtrooms in the United States. As a result, juries do not have the opportunity to read and ponder the merits of these sources, and assess their strengths and weaknesses. The working assumption of our courts is that juries are not qualified to engage directly with the primary sources of scientific evidence, and so expert witnesses are called upon to deliver opinions based upon a scientific record not directly in evidence. In the litigation of scientific disputes, our courts thus rely upon the testimony of so-called expert witnesses in the form of opinions. Not only must juries, the usual trier of fact in our courts, assess the credibility of expert witnesses, but they must assess whether expert witnesses are accurately describing studies that they cannot read in their entirety.

The convoluted path by which science enters the courtroom supports the liberal and robust gatekeeping process outlined under Rules 702 and 703 of the Federal Rules of Evidence. The court, not the jury, must make a preliminary determination, under Rule 104, that the facts and data of a study are reasonably relied upon by an expert witness (Rule 703). And the court, not the jury, again under Rule 104, must determine that expert witnesses possess appropriate qualifications for relevant expertise, and that these witnesses have proffered opinions sufficiently supported by facts or data, based upon reliable principles and methods, and reliably applied to the facts of the case (Rule 702). There is no constitutional right to bamboozle juries with inconclusive, biased, confounded, or crummy studies, or with selective and incomplete assessments of the available facts and data. Back in the days of “easy admissibility,” opinions could be tested on cross-examination, but the limited time and acumen of counsel, court, and jury cry out for meaningful scientific due process along the lines set out in Rules 702 and 703.

The evolutionary development of Rules 702 and 703 has promoted a salutary convergence between science and law. According to one historical overview of systematic reviews in science, the foundational period for such reviews (1970-1989) overlaps with the enactment of Rules 702 and 703, and the institutionalization of such reviews (1990-2000) coincides with the development of these Rules in a way that introduced some methodological rigor into scientific opinions that are admitted into evidence.[1]

The convergence between legal admissibility and scientific validity considerations has had the further result that scientific concerns over the quality and sufficiency of underlying data, over the validity of study design, analysis, reporting, and interpretation, and over the adequacy and validity of data synthesis, interpretation, and conclusions have become integral to the gatekeeping process. This convergence has the welcome potential to keep legal judgments more in line with best scientific evidence and practice.

The science-law convergence also means that courts must be apprised of, and take seriously, the problems of study reproducibility, and more broadly, the problems raised by questionable research practices (QRPs), or what might be called the patho-epistemology of science. The development, in the 1970s, and the subsequent evolution, of the systematic review represented the scientific community’s rejection of the old-school narrative reviews that selected a few of all studies to support a pre-existing conclusion. Similarly, the scientific community’s embarrassment, in the 1980s and 1990s, over the irreproducibility of study results, has in this century grown into an existential crisis over study reproducibility in the biomedical sciences.

In 2005, John Ioannidis published an article that brought the concern over “reproducibility” of scientific findings in bio-medicine to an ebullient boil.[2] Ioannidis pointed to several factors, which alone or in combination rendered most published medical findings likely false. Among the publication practices responsible for this unacceptably high error rate, Ioannidis identified the use of small sample sizes, data-dredging and p-hacking techniques, poor or inadequate statistical analysis, in the context of undue flexibility in research design, conflicts of interest, motivated reasoning, fads, and prejudices, and pressure to publish “positive” results.  The results, often with small putative effect sizes, across an inadequate number of studies, are then hyped by lay and technical media, as well as the public relations offices of universities and advocacy groups, only to be further misused by advocates, and further distorted to serve the goals of policy wonks. Social media then reduces all the nuances of a scientific study to an insipid meme.

Ioannidis’ critique resonated with lawyers. We who practice in health effects litigation are no strangers to dubious research methods, lack of accountability, herd-like behavior, and a culture of generating positive results, often out of political or economic sympathies. Although we must prepare to confront dodgy methods in front of a jury, asking for scientific due process that intervenes and decides the methodological issues with well-reasoned, written opinions in advance of trial does not seem like too much to ask.

The sense that we are awash in false-positive studies was heightened by subsequent papers. In 2011, Uri Simonsohn and others showed, by simulating various combinations of QRPs in psychological science, that researchers could attain a 61% false-positive rate for research outcomes.[3] The following year saw scientists at Amgen attempt replication of 53 important studies in hematology and oncology. They succeeded in replicating only six.[4] Also in 2012, Dr. Janet Woodcock, director of the Center for Drug Evaluation and Research at the Food and Drug Administration, “estimated that as much as 75 per cent of published biomarker associations are not replicable.”[5] In 2016, the journal Nature reported that over 70% of scientists who responded to a survey had unsuccessfully attempted to replicate another scientist’s experiments, and more than half had failed to replicate their own work.[6] Of the respondents, 90% agreed that there was a replication problem, and a majority of them believed that the problem was significant.
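The mechanics of QRP-driven inflation are easy to demonstrate by simulation. The sketch below is a toy model, not the actual code from the Simmons paper; the two liberties it takes (optional stopping, and a second, ad hoc outcome measure) are illustrative choices. Both groups are drawn from the same distribution, so every “significant” result is a false positive:

```python
import math
import random

random.seed(1)

def two_sided_p(xs, ys):
    """Two-sample test p-value, via a Welch z-approximation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

def experiment(qrp):
    # Null is true: "treatment" and "control" share one distribution.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if two_sided_p(a, b) < 0.05:
        return True
    if not qrp:
        return False
    # QRP 1: optional stopping -- add 10 subjects per group and retest.
    a += [random.gauss(0, 1) for _ in range(10)]
    b += [random.gauss(0, 1) for _ in range(10)]
    if two_sided_p(a, b) < 0.05:
        return True
    # QRP 2: try a second, ad hoc outcome measure.
    return two_sided_p([abs(x) for x in a], [abs(y) for y in b]) < 0.05

trials = 4000
honest = sum(experiment(qrp=False) for _ in range(trials)) / trials
hacked = sum(experiment(qrp=True) for _ in range(trials)) / trials
print(f"honest false-positive rate: {honest:.3f}")  # near the nominal 0.05
print(f"QRP false-positive rate:    {hacked:.3f}")  # well above 0.05
```

Even these two mild liberties push the false-positive rate well above the nominal 5%; Simmons and his colleagues reached 61% by stacking several such practices together.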

The scientific community reacted to the perceived replication crisis in a variety of ways, from conceptual clarification of the very notion of reproducibility,[7] to identification of improper uses and interpretations of key statistical concepts,[8] to guidelines for improved conduct and reporting of studies.[9]

Entire books dedicated to identifying the sources of, and the correctives for, undue researcher flexibility in the design, conduct, and analysis of studies, have been published.[10] In some ways, the Rule 702 and 703 case law is like the collected works of the Berenstain Bears, on how not to do studies.

The consequences of the replication crisis are real and serious. Badly conducted and interpreted science leads to research wastage,[11] loss of confidence in scientific expertise,[12] contemptible legal judgments, and distortion of public policy.

The proposed correctives to QRPs deserve the careful study of lawyers and judges who have a role in health effects litigation.[13] Whether as the proponent of an expert witness, or the challenger, several of the recurrent proposals, such as the call for greater data sharing and pre-registration of protocols and statistical analysis plans,[14] have real-world litigation salience. In many instances, they can and should direct lawyers’ efforts at discovery and challenging of the relied upon scientific studies in litigation.


[1] Quan Nha Hong & Pierre Pluye, “Systematic Reviews: A Brief Historical Overview,” 34 Education for Information 261 (2018); Mike Clarke & Iain Chalmers, “Reflections on the history of systematic reviews,” 23 BMJ Evidence-Based Medicine 122 (2018); Cynthia Farquhar & Jane Marjoribanks, “A short history of systematic reviews,” 126 Brit. J. Obstetrics & Gynaecology 961 (2019); Edward Purssell & Niall McCrae, “A Brief History of the Systematic Review,” chap. 2, in Edward Purssell & Niall McCrae, How to Perform a Systematic Literature Review: A Guide for Healthcare Researchers, Practitioners and Students 5 (2020).

[2] John P. A. Ioannidis “Why Most Published Research Findings Are False,” 1 PLoS Med 8 (2005).

[3] Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn, “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant,” 22 Psychological Sci. 1359 (2011).

[4] C. Glenn Begley and Lee M. Ellis, “Drug development: Raise standards for preclinical cancer research,” 483 Nature 531 (2012).

[5] Edward R. Dougherty, “Biomarker Development: Prudence, risk, and reproducibility,” 34 Bioessays 277, 279 (2012); Turna Ray, “FDA’s Woodcock says personalized drug development entering ‘long slog’ phase,” Pharmacogenomics Reporter (Oct. 26, 2011).

[6] Monya Baker, “Is there a reproducibility crisis,” 533 Nature 452 (2016).

[7] Steven N. Goodman, Daniele Fanelli, and John P. A. Ioannidis, “What does research reproducibility mean?,” 8 Science Translational Medicine 341 (2016); Felipe Romero, “Philosophy of science and the replicability crisis,” 14 Philosophy Compass e12633 (2019); Fiona Fidler & John Wilcox, “Reproducibility of Scientific Results,” Stanford Encyclopedia of Philosophy (2018), available at https://plato.stanford.edu/entries/scientific-reproducibility/.

[8] Andrew Gelman and Eric Loken, “The Statistical Crisis in Science,” 102 Am. Scientist 460 (2014); Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016); Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021).

[9] The International Society for Pharmacoepidemiology issued its first Guidelines for Good Pharmacoepidemiology Practices in 1996. The most recent revision, the third, was issued in June 2015. See “The ISPE Guidelines for Good Pharmacoepidemiology Practices (GPP),” available at https://www.pharmacoepi.org/resources/policies/guidelines-08027/. See also Erik von Elm, Douglas G. Altman, Matthias Egger, Stuart J. Pocock, Peter C. Gøtzsche, and Jan P. Vandenbroucke, for the STROBE Initiative, “The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement Guidelines for Reporting Observational Studies,” 18 Epidem. 800 (2007); Jan P. Vandenbroucke, Erik von Elm, Douglas G. Altman, Peter C. Gøtzsche, Cynthia D. Mulrow, Stuart J. Pocock, Charles Poole, James J. Schlesselman, and Matthias Egger, for the STROBE initiative, “Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration,” 147 Ann. Intern. Med. W-163 (2007); Shah Ebrahim & Mike Clarke, “STROBE: new standards for reporting observational epidemiology, a chance to improve,” 36 Internat’l J. Epidem. 946 (2007); Matthias Egger, Douglas G. Altman, and Jan P Vandenbroucke of the STROBE group, “Commentary: Strengthening the reporting of observational epidemiology—the STROBE statement,” 36 Internat’l J. Epidem. 948 (2007).

[10] See, e.g., Lee J. Jussim, Jon A. Krosnick, and Sean T. Stevens, eds., Research Integrity: Best Practices for the Social and Behavioral Sciences (2022); Joel Faintuch & Salomão Faintuch, eds., Integrity of Scientific Research: Fraud, Misconduct and Fake News in the Academic, Medical and Social Environment (2022); William O’Donohue, Akihiko Masuda & Scott Lilienfeld, eds., Avoiding Questionable Research Practices in Applied Psychology (2022); Klaas Sijtsma, Never Waste a Good Crisis: Lessons Learned from Data Fraud and Questionable Research Practices (2023).

[11] See, e.g., Iain Chalmers, Michael B Bracken, Ben Djulbegovic, Silvio Garattini, Jonathan Grant, A Metin Gülmezoglu, David W Howells, John P A Ioannidis, and Sandy Oliver, “How to increase value and reduce waste when research priorities are set,” 383 Lancet 156 (2014); John P A Ioannidis, Sander Greenland, Mark A Hlatky, Muin J Khoury, Malcolm R Macleod, David Moher, Kenneth F Schulz, and Robert Tibshirani, “Increasing value and reducing waste in research design, conduct, and analysis,” 383 Lancet 166 (2014).

[12] See, e.g., Friederike Hendriks, Dorothe Kienhues, and Rainer Bromme, “Replication crisis = trust crisis? The effect of successful vs failed replications on laypeople’s trust in researchers and research,” 29 Public Understanding Sci. 270 (2020).

[13] R. Barker Bausell, The Problem with Science: The Reproducibility Crisis and What to Do About It (2021).

[14] See, e.g., Brian A. Nosek, Charles R. Ebersole, Alexander C. DeHaven, and David T. Mellor, “The preregistration revolution,” 115 Proc. Nat’l Acad. Sci. 2600 (2018); Michael B. Bracken, “Preregistration of Epidemiology Protocols: A Commentary in Support,” 22 Epidemiology 135 (2011); Timothy L. Lash & Jan P. Vandenbroucke, “Should Preregistration of Epidemiologic Study Protocols Become Compulsory? Reflections and a Counterproposal,” 23 Epidemiology 184 (2012).

Nullius in verba

March 29th, 2024

The 1975 codification of the law of evidence, in the Federal Rules of Evidence, introduced a subtle, aspirational criterion for expert witness opinion – knowledge. As originally enacted, Rule 702 read:

“If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise.”[1]

In case anyone missed the point, the Advisory Committee Note for the original Rule 702 emphasized that the standard was an epistemic standard:

“An intelligent evaluation of facts is often difficult or impossible without the application of some scientific, technical, or other specialized knowledge. The most common source of this knowledge is the expert witness, although there are other techniques for supplying it.”[2]

Perhaps we should not be too surprised that the epistemic standard was missed by most judges, and even by most lawyers. For a very long time, the common law set out a minimal test for expert witness opinion testimony. The expert witness had to be qualified by training, experience, or education, and the opinion proffered had to be logically and legally relevant to the issues in the case.[3] The enactment of Rule 702, in 1975, barely made a dent in the regime of easy admissibility.

Before the Federal Rules of Evidence, there was, of course, the famous Frye case, which involved an appeal from the excluded expert witness opinion based upon William Marston’s polygraph machine. In 1923, the court in Frye affirmed the exclusion of the expert witness opinion, based upon the lack of general acceptance of the device’s reliability, with its famous twilight zone language:[4]

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”

With the explosion of tort litigation fueled by strict products liability doctrine, lawyers pressed Frye’s requirement of general acceptance into service as a bulwark against unreliable scientific opinions. Many courts, however, limited Frye to novel devices, and in 1993, the Supreme Court, in Daubert,[5] rejected the legal claim that Rule 702 had incorporated the common law “general acceptance” test. Looking to the language of the rule itself, the Supreme Court discerned that the rule laid down an epistemic test, not a call for sociological surveys about the prevalence of beliefs.

Resistance to the spirit and text of Rule 702 has been widespread and deep-seated. After Daubert, the Supreme Court decided three more cases to emphasize that the epistemic standard was “exacting” and that it would not go away.[6] In 2000, Rule 702 was amended substantively to incorporate some of the essence of the Supreme Court’s quartet,[7] requiring the proponent of expert witness opinion testimony to establish that the proffered testimony is based upon sufficient facts or data, is the product of reliable principles and methods, and is the result of reliably applying those principles and methods to the facts of the case.

The change in the law of expert witnesses, in the 1990s, left some academic commentators well-nigh apoplectic. One professor of evidence law at a large law school complained that the law was a “conceptual muddle containing within it a threat to liberty and popular participation in government.”[8] Many federal district and intermediate appellate courts responded by ignoring the language of Rule 702, by reverting to pre-Daubert precedent, or by inventing new standards and shifting the burden to the party challenging the expert witness opinion’s admissibility. For many commentators, lawyers, and judges, science had no validity concerns that the law was bound to respect.

The judicial evasion and avoidance of the requirements of Rule 702 did not go unnoticed. Professor David Bernstein and practicing lawyer Eric Lasker wrote a paper in 2015, to call attention to the judicial disregard of the requirements of Rule 702.[9]  Several years of discussion and debate ensued before the Judicial Conference Advisory Committee on Evidence Rules (AdCom), in 2021, acknowledged that “in a fair number of cases, the courts have found expert testimony admissible even though the proponent has not satisfied the Rule 702(b) and (d) requirements by a preponderance of the evidence.”[10] This frank acknowledgement led the AdCom to propose amending Rule 702, “to clarify and emphasize” that gatekeeping requires determining whether the proponent has demonstrated to the court “that it is more likely than not that the proffered testimony meets the admissibility requirements set forth in the rule.”[11]  The Proposed Committee Note written in support of amending Rule 702 observed that “many courts have held that the critical questions of the sufficiency of an expert’s basis, and the application of the expert’s methodology, are questions of weight and not admissibility. These rulings are an incorrect application of Rules 702 and 104(a).”[12]

The proposed new Rule 702 is now law,[13] with its remedial clarification that the proponent of expert witness opinion testimony must show the court that the opinion is sufficiently supported by facts or data,[14] that it is “the product of reliable principles and methods,”[15] and that it “reflects a reliable application of the principles and methods to the facts of the case.”[16] The Rule prohibits deferring the evaluation of the sufficiency of support, or the reliability of the application of method, to the trier of fact; there is no statutory support for suggesting that these inquiries always or usually go to “weight and not admissibility,” or that there is a presumption of admissibility.

We may not have reached the Age of Aquarius, but the days of “easy admissibility” should be confined to the dustbin of legal history. Rule 702 is quickly approaching its 50th birthday, with the last 30 years witnessing the implementation of the promise and potential of an epistemic standard of trustworthiness for expert witness opinion testimony. The rule, in its present form, should go a long way towards putting validity questions squarely before the court. Nullius in verba[17] has been the motto of the Royal Society since 1660; it should now guide expert witness practice in federal court as well.


[1] Pub. L. 93–595, §1, Jan. 2, 1975, 88 Stat. 1937 (emphasis added).

[2] Notes of Advisory Committee on Proposed Rules (1975) (emphasis added).

[3] See Charles T. McCormick, Handbook of the Law of Evidence 28-29, 363 (1954) (“Any relevant conclusions which are supported by a qualified expert witness should be received unless there are other reasons for exclusion.”).

[4] Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923).

[5] Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993).

[6] General Electric Co. v. Joiner, 522 U.S. 136 (1997); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999); Weisgram v. Marley Co., 528 U.S. 440 (2000).

[7] See notes 5, 6, supra.

[8] John H. Mansfield, “An Embarrassing Episode in the History of the Law of Evidence,” 34 Seton Hall L. Rev. 77, 77 (2003); see also John H. Mansfield, “Scientific Evidence Under Daubert,” 28 St. Mary’s L.J. 1, 23 (1996). Professor Mansfield was the John H. Watson, Jr., Professor of Law, at the Harvard Law School. Many epithets were thrown in the heat of battle to establish meaningful controls over expert witness testimony. See, e.g., Kenneth Chesebro, “Galileo’s Retort: Peter Huber’s Junk Scholarship,” 42 Am. Univ. L. Rev. 1637 (1993). Mr. Chesebro was counsel of record for plaintiffs-appellants in Daubert, well before he became a convicted racketeer in Georgia.

[9] David Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015).

[10] Report of AdCom (May 15, 2021), at https://www.uscourts.gov/rules-policies/archives/committee-reports/advisory-committee-evidence-rules-may-2021. See also AdCom, Minutes of Meeting at 4 (Nov. 13, 2020) (“[F]ederal cases . . . revealed a pervasive problem with courts discussing expert admissibility requirements as matters of weight.”), at https://www.uscourts.gov/rules-policies/archives/meeting-minutes/advisory-committee-evidence-rules-november-2020.

[11] Proposed Committee Note, Summary of Proposed New and Amended Federal Rules of Procedure (Oct. 19, 2022), at https://www.uscourts.gov/sites/default/files/2022_scotus_package_0.pdf

[12] Id. (emphasis added).

[13] In April 2023, Chief Justice Roberts transmitted the proposed Rule 702 to Congress, under the Rules Enabling Act, and highlighted that the amendment “shall take effect on December 1, 2023, and shall govern in all proceedings thereafter commenced and, insofar as just and practicable, all proceedings then pending.” S. Ct. Order, at 3 (Apr. 24, 2023), https://www.supremecourt.gov/orders/courtorders/frev23_5468.pdf; S. Ct. Transmittal Package (Apr. 24, 2023), <https://www.uscourts.gov/sites/default/files/2022_scotus_package_0.pdf>.

[14] Rule 702(b).

[15] Rule 702(c).

[16] Rule 702(d).

[17] Take no one’s word for it.

A Π-Day Celebration of Irrational Numbers and Other Things – Philadelphia Glyphosate Litigation

March 14th, 2024

Science can often be more complicated and nuanced than we might like. Back in 1897, the Indiana legislature attempted to establish that π was equal to 3.2.[1] Sure, that value was simpler and easier to use in calculations, but it was also wrong. The irreducible fact is that π is an irrational number, and Indiana’s attempt to change that fact was, well, irrational. To celebrate irrationality, consider the lawsuit industry’s jihad against glyphosate, including its efforts to elevate a dodgy IARC evaluation while suppressing evidence of glyphosate’s scientific exonerations.

After Bayer lost three consecutive glyphosate cases in Philadelphia last year, observers were scratching their heads over why the company had lost when the scientific evidence strongly supports the defense. The Philadelphia Court of Common Pleas, not to be confused with Common Fleas, can be a rough place for corporate defendants. The local newspapers, to the extent people still read newspapers, are insufferably slanted in their coverage of health claims.

The plaintiffs’ verdicts garnered a good deal of local media coverage in Philadelphia.[2] Defense verdicts generally receive no ink from sensationalist newspapers such as the Philadelphia Inquirer. Regardless, media accounts, both lay and legal, are generally inadequate to tell us what happened, or what went wrong, in the courtroom. The defense losses could be attributable to partial judges or juries, or to the difficulty of communicating subtle issues of scientific validity. Plaintiffs’ expert witnesses may seem more sure of themselves than defense experts, or plaintiffs’ counsel may connect better with juries primed by fear-mongering media. Without being in the courtroom, or at least studying the trial transcripts, outside observers are challenged to explain fully jury verdicts that go against the scientific evidence. The one thing jury verdicts are not, however, is a valid assessment of the strength of scientific evidence, inferences, and conclusions.

Although Philadelphia juries can be rough, they like to see a fight. (Remember Rocky.) It is not a place for genteel manners or delicate and subtle distinctions. Last week, Bayer broke its Philadelphia losing streak, with a win in Kline v. Monsanto Co.[3] Mr. Kline claimed that he developed non-Hodgkin’s lymphoma (NHL) from his long-term use of Roundup. The two-week trial, before Judge Ann Butchart, went to the jury last week; the jurors deliberated two hours before returning a unanimous defense verdict. The jury found that the defendants, Monsanto and Nouryon Chemicals LLC, were not negligent, and that the plaintiff’s use of Roundup was not a factual cause of his lymphoma.[4]

Law360 reported that the Kline verdict was the first to follow a ruling of Valentine’s Day, February 14, 2024, which excluded any courtroom reference to the hazard evaluation of glyphosate by the International Agency for Research on Cancer (IARC). The Law360 article indicated that IARC found that glyphosate can cause cancer, except of course that IARC has never reached such a conclusion.

The IARC working group evaluated the evidence for glyphosate and classified the substance as a category IIA carcinogen, a label it glosses as “probably” carcinogenic to humans. This label sounds close to what might be useful in a courtroom, except that IARC declares that “probably,” as used in its IIA classification, does not mean what people generally, and lawyers and judges specifically, mean by the word. For IARC, “probable” has no quantitative meaning. Probability, which everyone understands to be measured on a scale from 0 to 1, or from 0% to 100%, is thus, for IARC, not quantitative at all. An IARC IIA classification could accordingly represent a posterior probability of 1% in favor of carcinogenicity (and 99% against). In other words, on whether glyphosate causes cancer in humans, IARC says maybe, in its own made-up epistemic modality.

To find the idiosyncratic definition of “probable,” a diligent reader must go outside the monograph of interest to the so-called Preamble, a separate document, last revised in 2019. The first time the jury will hear of the IARC pronouncement will be in the plaintiff’s case, and if the defense wishes to inform the jury of the special, idiosyncratic meaning of IARC “probable,” it must do so on cross-examination of hostile plaintiffs’ witnesses, or wait until it presents its own witnesses. Disclosing the IARC IIA classification hurts because the “probable” language lines up with what the trial judges will instruct the juries at the end of the case, when the jurors are told that they need not believe that the plaintiff has eliminated all doubt; they need only find that the plaintiff has shown each element of his case to be “probable,” or more likely than not, in order to prevail. Once the jury has heard “probable,” the defense will have a hard time putting the toothpaste back in the tube. This, of course, is why the lawsuit industry loves IARC evaluations, with their fallacies of semantical distortion.[5]

Although identifying the causes of a jury verdict is more difficult than even determining carcinogenicity, Rosemary Pinto, one of plaintiff Kline’s lawyers, suggested that the exclusion of the IARC evaluation sank her case:

“We’re very disappointed in the jury verdict, which we plan to appeal, based upon adverse rulings in advance of the trial that really kept core components of the evidence out of the case. These included the fact that the EPA safety evaluation of Roundup has been vacated, who IARC (the International Agency for Research on Cancer) is and the relevance of their finding that Roundup is a probable human carcinogen [sic], and also the allowance into evidence of findings by foreign regulatory agencies disguised as foreign scientists. All of those things collectively, we believe, tilted the trial in Monsanto’s favor, and it was inconsistent with the rulings in previous Roundup trials here in Philadelphia and across the country.”[6]

Pinto was involved in the case, and so she may have some insight into why the jury ruled as it did. Still, issuing this pronouncement before interviewing the jurors seems little more than wishcasting. As philosopher Harry Frankfurt explained, “the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”[7] Pinto’s real aim was revealed in her statement that the IARC review was “crucial evidence that juries should be hearing.”[8]  

What is the genesis of Pinto’s complaint about the exclusion of IARC’s conclusions? The Valentine’s Day Order, issued by Judge Joshua H. Roberts, who heads up the Philadelphia County mass tort court, provided that:

AND NOW, this 14th day of February, 2024, upon consideration of Defendants’ Motion to Clarify the Court’s January 4, 2024 Order on Plaintiffs Motion in Limine No. 5 to Exclude Foreign Regulatory Registrations and/or Approvals of Glyphosate, GBHs, and/or Roundup, Plaintiffs’ Response, and after oral argument, it is ORDERED as follows:

  1. The Court’s Order of January 4, 2024, is AMENDED to read as follows: [ … ] it is ORDERED that the Motion is GRANTED without prejudice to a party’s introduction of foreign scientific evidence, provided that the evidence is introduced through an expert witness who has been qualified pursuant to Pa. R. E. 702.

  2. The Court specifically amends its Order of January 4, 2024, to exclude reference to IARC, and any other foreign agency and/or foreign regulatory agency.

  3. The Court reiterates that no party may introduce any testimony or evidence regarding a foreign agency and/or foreign regulatory agency which may result in a mini-trial regarding the protocols, rules, and/or decision making process of the foreign agency and/or foreign regulatory agency. [fn1]

  4. The trial judge shall retain full discretion to make appropriate evidentiary rulings on the issues covered by this Order based on the testimony and evidence elicited at trial, including but not limited to whether a party or witness has “opened the door.”[9]

Now what was not covered in the legal media accounts was the curious irony that the exclusion of the IARC evaluation resulted from plaintiffs’ own motion, an own goal of sorts. In previous Philadelphia trials, plaintiffs’ counsel vociferously objected to defense counsel’s and experts’ references to the determinations of foreign regulators, such as the European Union Assessment Group on Glyphosate (2017, 2022), Health Canada (2017), the European Food Safety Authority (2017, 2023), the Australian Pesticides and Veterinary Medicines Authority (2017), the German Federal Institute for Risk Assessment (2019), and others, which rejected the IARC evaluation and reported that glyphosate has not been shown to be carcinogenic.[10]

The gravamen of the plaintiffs’ objection was that such regulatory determinations were hearsay, and that they resulted from various procedures, using various criteria, which would require explanation, and would be subject to litigants’ challenges.[11] In other words, for each regulatory agency’s determination, there would be a “mini-trial,” or a “trial within a trial,” about the validity and accuracy of the foreign agency’s assessment.

In the earlier Philadelphia trials, the plaintiffs’ objections were largely sustained, which created a significant evidentiary bias in the courtrooms. Plaintiffs’ expert witnesses could freely discuss the IARC glyphosate evaluation, but the defense and its experts could not discuss the many determinations of the safety of glyphosate. Jurors were apparently left with the erroneous impression that the IARC evaluation was a consensus view of the entire world’s scientific community.

Now plaintiffs’ objection has a point, even though it seems to prove too much and must ultimately fail. In a trial, each side has expert witnesses who can offer an opinion about the key causal issue, whether glyphosate can cause NHL, and whether it caused this plaintiff’s NHL. Each expert witness will have written a report that identifies the facts and data relied upon, and that explains the inferences drawn and conclusions reached. The adversary can challenge the validity of the data, inferences, and conclusions because the opposing expert witness will be subject to cross-examination.

The facts and data relied upon will, however, be “hearsay,” coming from published studies not written by the expert witnesses at trial. There will be many aspects of the relied-upon studies that will be taken on faith without the testimony of the study participants, their healthcare providers, or the scientists who collected the data, chose how to analyze the data, conducted the statistical and scientific analyses, and wrote up the methods and study findings. Permitting reliance upon any study thus allows for a “mini-trial,” or a “trial within a trial,” on each study cited and relied upon by the testifying expert witnesses. This complexity involved in expert witness opinion testimony is one of the foundational reasons for Rule 702’s gatekeeping regime in federal court, and in most state courts, but gatekeeping is usually conspicuously absent in Pennsylvania courtrooms.

Furthermore, the plaintiffs’ objections to foreign regulatory determinations would apply to any review paper, and more important, it would apply to the IARC glyphosate monograph itself. After all, if expert witnesses are supposed to have reviewed the underlying studies themselves, and be competent to do so, and to have arrived at an opinion in some reliable way from the facts and data available, then they would have no need to advert to the IARC’s review on the general causation issue.  If an expert witness were allowed to invoke the IARC conclusion, presumably to bolster his or her own causation opinion, then the jury would need to resolve questions about:

  • who was on the working group;
  • how were working group members selected, or excluded;
  • how the working group arrived at its conclusion;
  • what did the working group rely upon, or not rely upon, and why;
  • what was the group’s method for synthesizing facts and data to reach its conclusion;
  • was the working group faithful to its stated methodology;
  • did the working group commit any errors of statistical or scientific judgment along the way;
  • what potential biases did the working group members have;
  • what is the basis for the IARC’s classificatory scheme; and
  • how are IARC’s key terms such as “sufficient,” “limited,” “probable,” “possible,” etc., defined and used by working groups.

Indeed, a very substantial trial could be had on the bona fides and methods of the IARC, and the glyphosate IARC working group in particular.

The curious irony behind the Valentine’s Day order is that plaintiffs’ counsel were generally winning their objections to the defense’s references to foreign regulatory determinations. But as pigs get fatter, hogs get slaughtered. Last year, plaintiffs’ counsel moved to “exclude foreign regulatory registrations and/or approvals of glyphosate.”[12] To be sure, plaintiffs’ counsel were not seeking merely the exclusion of glyphosate registrations, but also the exclusion of the scientific evaluations of regulatory agencies and their staff scientists and consulting scientists. Plaintiffs wanted trials in which juries would hear only about IARC, as though it represented a scientific consensus. The many scientific regulatory considerations and rejections of the IARC evaluation would be purged from the courtroom.

On January 4, 2024, plaintiffs’ counsel obtained what they sought, an order that memorialized the tilted playing field they had largely been enjoying in Philadelphia courtrooms. Judge Roberts’ order was short and somewhat ambiguous:

“upon consideration of plaintiff’s motion in limine no. 5 to exclude foreign regulatory registrations and/or approvals of glyphosate, GBHs, and/or Roundup, any response thereto, the supplements of the parties, and oral argument, it is ORDERED that the motion is GRANTED without prejudice to a party’s introduction of foreign scientific evidence including, but not limited to, evidence from the International Agency for Research on Cancer (IARC), provided that such introduction does not refer to foreign regulatory agencies.”

The courtroom “real world” outcome after Judge Roberts’ order was an obscene verdict in the McKivison case. Again, there may have been many contributing causes to the McKivison verdict, including Pennsylvania’s murky and retrograde law of expert witness opinion testimony.[13] Mr. McKivison was in remission from NHL and had sustained no economic damages, and yet, on January 26, 2024, a jury in his case returned a punitive compensatory damages award of $250 million, and an even more punitive punitive damage award of $2 billion.[14] It seems at least plausible that the imbalance between admitting the IARC evaluation while excluding foreign regulatory assessments helped create a false narrative that scientists and regulators everywhere had determined glyphosate to be unsafe.

On February 2, 2024, the defense moved for clarification of Judge Roberts’ January 4, 2024 order, which applied globally in the Philadelphia glyphosate litigation. The defendants complained that in their previous trial, after Judge Roberts’ order of January 4, 2024, they were severely prejudiced by being prohibited from referring to the conclusions and assessments of foreign scientists who worked for regulatory agencies. The complaint seems well founded. If a hearsay evaluation of glyphosate by an IARC working group is relevant and admissible, then the conclusions of foreign scientists about glyphosate are relevant and admissible, whether or not those scientists are employed by foreign regulatory agencies. Plaintiffs’ counsel routinely complained about Monsanto/Bayer’s “influence” over the United States Environmental Protection Agency, but the suggestion that the European Union’s regulators are in the pockets of Bayer is pretty farfetched. Indeed, the complaint about bias is peculiar coming from plaintiffs’ counsel, who command an out-sized influence within the Collegium Ramazzini,[15] which in turn often dominates IARC working groups. Every agency and scientific group, including the IARC, has its “method,” its classificatory schemes, its definitions, and the like. By privileging the IARC conclusion, while excluding those of the many other agencies and groups, and by allowing plaintiffs’ counsel to argue that there is no real-world debate over glyphosate, the Philadelphia courts play a malignant role in helping to generate the huge verdicts seen in glyphosate litigation.

The defense motion for clarification also stressed that the issue whether glyphosate causes NHL or other human cancer is not the probandum for which foreign agency and scientific group statements are relevant.  Pennsylvania has a most peculiar, idiosyncratic law of strict liability, under which such statements may not be relevant to liability questions. Plaintiffs’ counsel, in glyphosate and most tort litigations, however, routinely assert negligence as well as punitive damages claims. Allowing plaintiffs’ counsel to create a false and fraudulent narrative that Monsanto has flouted the consensus of the entire scientific and regulatory community in failing to label Roundup with cancer warnings is a travesty of the rule of law.

What was too clever by half in the plaintiffs’ litigation approach is that their complaints about foreign regulatory assessments applied equally, if not more so, to the IARC glyphosate hazard evaluation. The glyphosate litigation may not be as interminable as π, but it is every bit as irrational.

*      *     *      *      *     * 

Post Script.  Ten days after the verdict in Kline, and one day after the above post, the Philadelphia Inquirer released a story about the defense verdict. See Nick Vadala, “Monsanto wins first Roundup court case in recent string of Philadelphia lawsuits,” Phila. Inq. (Mar. 15, 2024).


[1] Bill 246, Indiana House of Representatives (1897); Petr Beckmann, A History of π at 174 (1971).

[2] See Robert Moran, “Philadelphia jury awards $175 million after deciding 83-year-old man got cancer from Roundup weed killer,” Phila. Inq. (Oct. 27, 2023); Nick Vadala, “Philadelphia jury awards $2.25 billion to man who claimed Roundup weed killer gave him cancer,” Phila. Inq. (Jan. 29, 2024).

[3] Phila. Ct. C.P. 2022-01641.

[4] George Woolston, “Monsanto Nabs 1st Win In Philly’s Roundup Trial Blitz,” Law360 (Mar. 5, 2024); Nicholas Malfitano, “After three initial losses, Roundup manufacturers get their first win in Philly courtroom,” Pennsylvania Record (Mar. 6, 2024).

[5] See David Hackett Fischer, “Fallacies of Semantical Distortion,” chap. 10, in Historians’ Fallacies: Toward a Logic of Historical Thought (1970); see also “IARC’s Fundamental Distinction Between Hazard and Risk – Lost in the Flood” (Feb. 1, 2024); “The IARC-hy of Evidence – Incoherent & Inconsistent Classification of Carcinogencity” (Sept. 19, 2023).

[6] Malfitano, note 4 (quoting Pinto); see also Law360, note 4 (quoting Pinto).

[7] Harry Frankfurt, On Bullshit at 63 (2005); see “The Philosophy of Bad Expert Witness Opinion Testimony” (Oct. 2, 2010).

[8] See Malfitano, note 4 (quoting Pinto).

[9] In re Roundup Prods. Litig., Phila. Cty. Ct. C.P., May Term 2022-0550, Control No. 24020394 (Feb. 14, 2024) (Roberts, J.). In a footnote, the court explained that “an expert may testify that foreign scientists have concluded that Roundup and· glyphosate can be used safely and they do not cause cancer. In the example provided, there is no specific reference to an agency or regulatory body, and the jury is free to make a credibility determination based on the totality of the expert’s testimony. It is, however, impossible for this Court, in a pre-trial posture, to anticipate every iteration of a question asked or answer provided; it remains within the discretion of the trial judge to determine whether a question or answer is appropriate based on the context and the trial circumstances.”

[10] See National Ass’n of Wheat Growers v. Bonta, 85 F.4th 1263, 1270 (9th Cir. 2023) (“A significant number of . . . organizations disagree with IARC’s conclusion that glyphosate is a probable carcinogen”; … “[g]lobal studies from the European Union, Canada, Australia, New Zealand, Japan, and South Korea have all concluded that glyphosate is unlikely to be carcinogenic to humans.”).

[11] See, e.g., In re Seroquel, 601 F. Supp. 2d 1313, 1318 (M.D. Fla. 2009) (noting that references to foreign regulatory actions or decisions “without providing context concerning the regulatory schemes and decision-making processes involved would strip the jury of any framework within which to evaluate the meaning of that evidence”).

[12] McKivison v. Monsanto Co., Phila. Cty. Ct. C.P., No. 2022-00337, Plaintiff’s Motion in Limine No. 5 to Exclude Foreign Regulatory Registration and/or Approvals of Glyphosate, GHBs and/or Roundup.

[13] See Sherman Joyce, “New Rule 702 Helps Judges Keep Bad Science Out Of Court,” Law360 (Feb. 13, 2024) (noting Pennsylvania’s outlier status on evidence law that enables dodgy opinion testimony).

[14] P.J. D’Annunzio, “Monsanto Fights $2.25B Verdict After Philly Roundup Trial,” Law360 (Feb. 8, 2024).

[15] “Collegium Ramazzini & Its Fellows – The Lobby” (Nov. 19, 2023).

A Citation for Jurs & DeVito’s Unlawful U-Turn

February 27th, 2024

Antic proposals abound in the legal analysis of expert witness opinion evidence. In the courtroom, the standards for admitting or excluding such evidence are found in judicial decisions or in statutes. When legislatures have specified standards for admitting expert witness opinions, courts have a duty to apply the standards to the facts before them. Law professors are, of course, untethered from either precedent or statute, and so we may expect chaos to ensue when they wade into disputes about the proper scope of expert witness gatekeeping.

Andrew Jurs teaches about science and the law at the Drake University Law School, and Scott DeVito is an associate professor of law at the Jacksonville University School of Law. Together, they have recently produced one of the most antic of antic proposals in a fatuous call for the wholesale revision of the law of expert witnesses.[1]

Jurs and DeVito rightly point out that since the Supreme Court, in Daubert,[2] waded into the dispute over whether the historical Frye decision survived the enactment of the Federal Rules of Evidence, we have seen lower courts apply the legal standard inconsistently and sometimes incoherently. These authors, however, like many other academics, incorrectly label one or the other standard, Frye or Daubert, as stricter than the other. Applying the labels of stricter and weaker ignores that the two standards measure completely different things. Frye advances a sociological standard, and a Frye challenge can be answered by conducting a survey. Rule 702, as interpreted by Daubert, and as since revised and adopted by the Supreme Court and Congress, is an epistemic standard. Jurs and DeVito, like many other legal academic writers, thus apply a single adjective to standards that measure two incommensurable things. The authors’ repetition of this now 30-plus-year-old mistake is a poor start for a law review article that sets out to remedy the widespread inconsistency in the application of Rule 702, in federal and in state courts.

In seeking greater adherence to the actual rule, and greater consistency among decisions, Jurs and DeVito might have urged judicial education, or blue-ribbon juries, or science courts, or greater use of court-appointed expert witnesses. Instead, they have put their marker down on abandoning all meaningful gatekeeping. Jurs and DeVito are intent upon repairing the inconsistency and incoherence in the application of Daubert by removing the standard altogether:

“To resolve the problem, we propose that the Courts replace the multiple Daubert factors with a single factor—testability—and that once the evidence meets this standard the judge should provide the jury with a proposed jury instruction to guide their analysis of the fact question addressed by the expert evidence.”[3]

In other words, because lower federal courts have routinely ignored the actual statutory language of Rule 702, and Supreme Court precedents, Jurs and DeVito would have courts invent a new standard that excludes virtually nothing, so long as someone can imagine a test for the asserted opinion. Remarkably, although they carry on about the “rule of law,” the authors fail to mention that judges have no authority to ignore the requirements of Rule 702. And perhaps even more stunning, they have advanced their nihilistic proposal in the face of the remedial changes to Rule 702, which were designed to address judicial lawlessness in ignoring previously enacted versions of the rule. This antic proposal would bootstrap previous judicial “flyblowing” of a Congressional mandate into a prescription for abandoning any meaningful standard. The authors have articulated the Cole Porter standard: anything goes. Any opinion that can be tested is “science”; end of discussion. The rest is for the jury to decide as a question of fact, subject to the fact finder’s credibility determinations. This would be a Scott v. Sandford rule[4] for scientific validity: science has no claims of validity that the law is bound to respect.

Jurs and DeVito attempt a cynical trick. They argue that they would fix the problem of “an unpredictable standard” by reverting to what they say is Daubert’s first principle of ensuring the reliability of expert witness testimony, and limiting the evidentiary display at trial to “good science.” Cloaking their nihilism, the authors say that they want to promote “good science,” but advocate the admissibility of any and every opinion, as long as it is theoretically “testable.” In order to achieve this befuddled goal, they simply redefine scientific knowledge as “essentially” equal to testable propositions.[5]

Jurs and DeVito marshal evidence of judicial ignorance of key aspects of scientific method, such as error rate. We can all agree that judges frequently misunderstand key scientific concepts, but their misunderstandings and misapplications do not mean that the concepts are unimportant or unnecessary. Many judges seem unable to deliver an opinion that correctly defines p-value or confidence interval, but their inabilities do not allow us to dispense with the need to assess random error in statistical tests. Our faint-hearted authors never explain why the prevalence of judicial error must be a counsel of despair that drives us to bowdlerize scientific evidence into something it is not. We may simply need better training for judges, or better assistance for them in addressing complex claims. Ultimately, we need better judges.
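The definitions that trip up the judiciary can be made concrete. The sketch below uses invented numbers and a simple normal approximation, chosen only to keep it self-contained; it illustrates what a p-value and a confidence interval do, and do not, measure.

```python
import math

# Hypothetical test: 60 adverse events observed where chance alone
# (p = 0.5) predicts 50 out of 100. The p-value is the probability,
# assuming the null hypothesis is true, of results at least this
# extreme; it is NOT the probability that the null hypothesis is true.
n, observed, p0 = 100, 60, 0.5
se0 = math.sqrt(p0 * (1 - p0) / n)            # standard error under the null
z = (observed / n - p0) / se0                 # z-score, here about 2.0
p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal approximation

# A 95% confidence interval describes the precision of the estimate;
# it is not a 95% probability that the true rate lies in this interval.
p_hat = observed / n
se_hat = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se_hat, p_hat + 1.96 * se_hat)

print(round(p_value, 4), tuple(round(x, 3) for x in ci))
```

Nothing in this arithmetic is exotic, which is rather the point: the concepts are teachable, and judicial confusion about them is an argument for education, not for abandoning the assessment of random error.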

For those judges who have taken their responsibility seriously, and who have engaged with the complexities of evaluating validity concerns raised in Rule 702 and 703 challenges, the Jurs and DeVito proposal must seem quite patronizing. The “Daubert” factors are simply too complex for you, so we will just give you crayons, or a single, meaningless factor that you cannot screw up.[6]

The authors set out a breezy, selective review of statements by a few scientists and philosophers of science. Rather than supporting their extreme reductionism, Jurs and DeVito’s review reveals that science is much more than identifying a “testable” proposition. Indeed, the article’s discussion of the philosophy and practice of science weighs strongly against the authors’ addled proposal.[7]

The authors, for example, note that Sir Isaac Newton emphasized the importance of empirical method.[8] Contrary to the article’s radical reductionism, the authors note that Sir Karl Popper and Albert Einstein stressed that the failure to obtain a predicted experimental result may render a theory “untenable,” which of course requires data and valid tests and inferences to assess. Quite a bit of motivated reasoning has led Jurs and DeVito to confuse a criterion of testability with the whole enterprise of science, and to ignore the various criteria of validity for collecting data, testing hypotheses, and interpreting results.

The authors suggest that their proposal will limit the judicial inquiry to the legal question of reliability, but this suggestion is mere farce. Reliability means obtaining the same or sufficiently similar results upon repeated testing, but these authors abjure testing itself. Furthermore, reliability as contemplated by the Supreme Court, in 1993, and by FRE 702 ever since, has meant the validity of the actual test that an expert witness advances in support of his or her opinion or claims.

Whimsically, and without evidence, Jurs and DeVito claim that their radical abandonment of gatekeeping will encourage scientists, in “fields that are testable, but not yet tested, to perform real, objective, and detailed research.” Their proposal, however, works to remove any such incentive, because untested but testable claims become freely admissible. Why would the lawsuit industry fund studies, which might not support its litigation claims, when its expert witnesses need only imagine a possible test to advance their claims, without the potential for embarrassment by facts? The history of modern tort law teaches that cheap speculation would quickly push out actual scientific studies.

The authors’ proposal would simply open the floodgates to speculation, conjecture, and untested hypotheses, and leave the rest to the vagaries of trials, mostly in front of jurors untrained in evaluating scientific and statistical evidence. Admittedly, some incurious and incompetent gatekeepers and triers of fact will be relieved to know that they will not have to evaluate actual scientific evidence, because it will have been eliminated by the Jurs and DeVito proposal to make mere testability the touchstone of admissibility.

To be sure, in Aristotelian terms, testability is logically and practically prior to testing, but these relationships do not justify holding out testability as the “essence” of science, its alpha and omega.[9] Of course, one must have an hypothesis to engage in hypothesis testing, but science lies in the clever interrogation of nature, guided by the hypothesis. The scientific process lies in answering the question, not simply in formulating it.

As for the authors’ professed concern about “rule of law,” readers should note that the Jurs and DeVito article completely ignores the remedial amendment to Rule 702, which went into effect on December 1, 2023, to address the myriad inconsistencies, and failures to engage, in required gatekeeping of expert witness opinion testimony.[10]

The new Rule 702 is now law, with its remedial clarification that the proponent of expert witness opinion must show the court that the opinion is sufficiently supported by facts or data, Rule 702(b), and that the opinion “reflects a reliable application of the principles and methods to the facts of the case,” Rule 702(d). The Rule prohibits deferring the evaluation of sufficiency of support, or reliability of the application of method, to the trier of fact; there is no statutory support for suggesting that these inquiries always or usually go to “weight and not admissibility.”

The Jurs and DeVito proposal would indeed be a U-Turn in the law of expert witness opinion testimony. Rather than promote the rule of law, they have issued an open, transparent call for licentiousness in the adjudication of scientific and technical issues.


[1] Andrew Jurs & Scott DeVito, “A Return to Rationality: Restoring the Rule of Law After Daubert’s Disastrous U-Turn,” 164 New Mexico L. Rev. 164 (2024) [cited below as U-Turn].

[2] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[3] U-Turn at 164, Abstract.

[4] 60 U.S. 393 (1857).

[5] U-Turn at 167.

[6] U-Turn at 192.

[7] See, e.g., U-Turn at 193 n.179, citing David C. Gooding, “Experiment,” in W.H. Newton-Smith, ed., A Companion to the Philosophy of Science 117 (2000) (emphasizing the role of actual experimentation, not the possibility of experimentation, in the development of science).

[8] U-Turn at 194.

[9] See U-Turn at 196.

[10] See Supreme Court Order, at 3 (Apr. 24, 2023); Supreme Court Transmittal Package (Apr. 24, 2023).

IARC’S Fundamental Distinction Between Hazard and Risk – Lost in the Flood

February 1st, 2024

Socrates viewed philosophy as beginning in wonder,[1] but Socrates and his philosophic heirs recognized that philosophy does not get down to business until it starts to clarify the terms of discussion. By the middle of the last century, failure to understand the logic of language replaced wonder as the beginning of philosophy.[2] Even if philosophy could not cure all conceptual pathology, most writers came to see that clarifying terms, concepts, and usage was an essential starting point in thinking clearly about a subject.[3]

Hazard versus Risk

Precision in scientific exposition often follows from the use of measurements, using agreed-upon quantitative units, and accepted, accurate, reliable procedures for measurement. When scientists substitute qualitative characterizations for what are inherently quantitative measures, they frequently lapse into error. For example, beware of rodent studies that proclaim harms at “low doses,” which turn out to be low only in comparison with other rodent studies, while representing exposures orders of magnitude greater than those experienced by human beings.

Risk is a quantitative term meaning a rate of some specified outcome. A Dictionary of Epidemiology, for instance, defines risk as:

“The probability of an adverse or beneficial event in a defined population over a specified time interval. In epidemiology and in clinical research it is commonly measured through the cumulative incidence and the incidence proportion.”[4]

An increased risk thus requires a measurement of a rate or probability of an outcome greater than expected in the absence of the exposure of interest. We might be uncertain of the precise measure of the risk, or of an increased risk, but conceptually a risk connotes a rate or a probability that is, at least in theory, measurable.
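The dictionary definition can be made concrete with a small worked example. The cohort counts below are hypothetical, chosen only to show how risk, as cumulative incidence, is computed and compared against a baseline rate.

```python
# Hypothetical two-group cohort over a fixed follow-up interval.
# Risk, as cumulative incidence, is simply cases / persons at risk.
exposed_cases, exposed_n = 30, 1000
unexposed_cases, unexposed_n = 10, 1000

risk_exposed = exposed_cases / exposed_n        # 0.03
risk_unexposed = unexposed_cases / unexposed_n  # 0.01 (the expected rate)

# An "increased risk" is a comparison against the rate expected
# in the absence of the exposure of interest.
relative_risk = risk_exposed / risk_unexposed   # 3.0
excess_risk = risk_exposed - risk_unexposed     # 0.02, the attributable risk

print(relative_risk, excess_risk)
```

The point of the arithmetic is that risk is inherently a measured quantity; a claim of increased risk presupposes numerators, denominators, and a comparison group, none of which a bare hazard label supplies.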

Hazard is a categorical concept; something is, or is not, a hazard, without regard to the rate or incidence of harm. The Dictionary of Epidemiology provides a definition of “hazard” that captures the non-quantitative, categorical nature of an exposure’s being a hazard:

“The inherent capability of a natural or human-made agent or process to adversely affect human life, health, property, or activity, with the potential to cause a disease.”[5]

The International Agency for Research on Cancer (IARC) purports to set out a classification scheme, for human cancer hazards. As used by IARC, its classification scheme involves a set of epistemic modal terms: “known,” “probably,” “possibly,” and “indeterminate.” These epistemic modalities characterize the strength of the evidence that an agent is carcinogenic, and not the magnitude of quantitative risk of cancer from exposure at a given level. The IARC Preamble, which attempts to describe the Agency’s methodology, explains that the distinction between hazard and risk is “fundamental”:

“A cancer hazard is an agent that is capable of causing cancer, whereas a cancer risk is an estimate of the probability that cancer will occur given some level of exposure to a cancer hazard. The Monographs assess the strength of evidence that an agent is a cancer hazard. The distinction between hazard and risk is fundamental. The Monographs identify cancer hazards even when risks appear to be low in some exposure scenarios. This is because the exposure may be widespread at low levels, and because exposure levels in many populations are not known or documented.”[6]

This attempted explanation reveals an important problem in IARC’s project, as stated in the Preamble. There is an unproven assumption that there will be cancer hazards regardless of exposure levels. The IARC contemplates that there may be circumstances of low risk from low levels of exposure, but it elides the important issue of thresholds of exposure, below which there is no risk. The Preamble suggests that IARC does not attempt to provide evidence for or against meaningful thresholds of hazardousness, and this failure greatly undermines the project. Exposure circumstances may be such that there is no hazard at all, and so the risk is zero.

The purported distinction between hazard and risk, supposedly fundamental, is often blurred by the Agency, in the monographs produced by working groups on specific exposure circumstances. Consider for instance how a working group characterized the “hazard” of inhalation of respirable crystalline silica:

“In making the overall evaluation, the Working Group noted that carcinogenicity in humans was not detected in all industrial circumstances studied. Carcinogenicity may be dependent on inherent characteristics of the crystalline silica or on external factors affecting its biological activity or distribution of its polymorphs.

Crystalline silica inhaled in the form of quartz or cristobalite from occupational sources is carcinogenic to humans (Group 1)”[7]

So some IARC classifications actually do specify that exposure to a substance is not a hazard in all circumstances, a qualification implying that the same exposure, in some exposure circumstances, is not a hazard, and so the risk is zero.

We know something about the deliberations of the crystalline silica working group. The members were deadlocked for some time, and the switch of one vote ultimately gave a bare majority for reclassifying crystalline silica as a Group 1 exposure. Here is how working group member Corbett McDonald described the situation:

“The IARC Working Group, in 1997, had considerable difficulty in reaching a decision and might well not have done so had it not been made clear that it was concerned with hazard identification, not risk.”[8]

It was indeed Professor McDonald who changed his vote based upon this linguistic distinction between hazard and risk. His own description of the dataset, however, suggests that the elderly McDonald was railroaded by younger, more strident members of the group:

“Of the many studies reviewed by the Working Group … nine were identified as providing the least confounded evidence. Four studies which we considered positive included two of refractory brick workers, one in the diatomite industry and our own in pottery workers; the five which seemed negative or equivocal included studies of South Dakota gold miners, Danish stone workers, US stone workers and US granite workers. This further example that the truth is seldom pure and never simple underlines the difficulty of establishing a rational control policy for some carcinogenic materials.”[9]

In defense of his vote, McDonald meekly offered that

“[s]ome equally expert panel of scientists presented with the same information on another occasion could of course have reached a different verdict. The evidence was conflicting and difficult to assess and such judgments are essentially subjective.”

Of course, when the evidence is conflicting, it cannot be said to be sufficient. Not only was the epidemiologic evidence conflicting, but so was the animal toxicology, which found a risk of tumors in rats, but not in mice or hamsters.

Aside from endorsing a Group 1 classification for crystalline silica, the working group ignored the purportedly fundamental distinction between hazard and risk by noting that not all exposure circumstances posed a hazard of cancer. The same working group did even greater violence to the supposed distinction between risk and hazard in its evaluation of coal dust exposure and human cancer. Coal miners have been studied extensively for cancer risk, and the working group reviewed and evaluated the nature of their exposures and their cancer outcomes. Coal dust virtually always contains crystalline silica, which often makes up a sizable percentage (40% or so) of miners’ total inhalational exposures.[10] And yet, when the group studied the cancer rates among coal miners, and in animals, it concluded that there was “inadequate evidence in humans,” and “in experimental animals,” for carcinogenicity. The same working group that agreed, on a divided vote, to place crystalline silica in Group 1, voted that “[c]oal dust cannot be classified as to its carcinogenicity to humans (Group 3).”[11]

The conceptual confusion between hazard and risk is compounded by the IARC’s use of epistemic modalities – known, probably, possibly, and indeterminate – to characterize the existence of a hazard. The Preamble, in Table 4, summarizes the categories and the “stream of evidence” needed to place any particular exposure in one epistemic modal class or another. What is inexplicable is how and why a single substance such as crystalline silica goes from being a known cancer hazard in some unspecified occupational setting to having indeterminate carcinogenicity when it makes up 40% of the inhaled dust in a coal mine.


The conceptual difficulty created by IARC’s fundamental distinction between hazard and risk is that risk might well vary across exposure circumstances, but there is no basis for varying the epistemic modality of the hazard assessment simply because coal dust is only, say, 40% crystalline silica. Some of the exposure circumstances evaluated for the Group 1 silica hazard classification actually involved silica contents lower than that of coal mine dust. Granite quarrying, for example, involves exposure to rock dust that is roughly 30% crystalline silica.

The conceptual and epistemic confusion caused by IARC’s treatment of the same substance in different exposure circumstances is hardly unique to its treatment of crystalline silica and coal dust. Benzene has long been classified as a Group 1 human carcinogen, for its ability to cause a specific form of leukemia.[12] Gasoline contains, on average, about one percent benzene, and so gasoline exposure inevitably involves benzene exposure. And yet, benzene exposure in the form of inhaled gasoline fumes is only a “possible” human carcinogen, Group 2B.[13]

Similarly, in 2018, the IARC classified the evidence for the human carcinogenicity of coffee as “indeterminate,” Group 3.[14] And yet every drop of coffee inevitably contains acrylamide, which is, according to IARC, a Group 2A “probable human carcinogen.”[15] Rent-seeking groups, such as the Council for Education and Research on Toxics (founded by Carl Cranor and Martyn Smith), have tried shamelessly to weaponize the IARC 2A classification for acrylamide by claiming a bounty against coffee sellers such as Starbucks in California Proposition 65 litigation.[16]

Similarly confusing, IARC designates acetaldehyde on its own a “possible” human carcinogen, Group 2B, even though acetaldehyde is invariably produced in the metabolism of ethyl alcohol, which is itself a Group 1 human carcinogen.[17] There may well be other instances of such confusion, and I would welcome examples from readers.

These disparate conclusions strain credulity, and undermine confidence that the hazard-risk distinction does any work at all. Hazard and risk do have different meanings, and I would not want to be viewed as anti-semantic. IARC’s use of the hazard-risk distinction, however, lends itself to the interpretation that hazard is simply risk without the quantification. This usage actually is worse than having no distinction at all, because it ignores the existence of thresholds below which exposure carries no risk, as well as ignoring different routes of exposure and exposure circumstances that carry no risk at all. The vague and unquantified categorical determination that a substance is a hazard allows fear mongers to substitute subjective, emotive, and unscientific judgments for scientific assessment of carcinogenicity under realistic conditions of use and exposure.


[1] Plato, Theaetetus 155d (Fowler transl. 1921).

[2] Ludwig Wittgenstein, Philosophical Investigations (1953).

[3] See, e.g., Richard M. Rorty, ed., The Linguistic Turn: Essays in Philosophical Method (1992); Nicholas Rescher, Concept Audits: A Philosophical Method (2016); Timothy Williamson, Philosophical Method: A Very Short Introduction 32 (2020) (discussing the need to clarify terms).

[4] Miquel Porta, Sander Greenland, Miguel Hernán, Isabel dos Santos Silva, John M. Last, and Andrea Burón, A Dictionary of Epidemiology 250 (6th ed. 2014).

[5] Id. at 128.

[6] IARC Monographs on the Identification of Carcinogenic Hazards to Humans – Preamble (2019) (emphasis added).

[7] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans: Volume 68, Silica, Some Silicates, Coal Dust, and para-Aramid Fibrils 210-211 (1997).

[8] Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer: The Problem of Conflicting Evidence,” 8 Indoor Built Environment 121, 121 (1999).

[9] Id.

[10] IARC Monograph on the Evaluation of Carcinogenic Risks to Humans: Volume 68, Silica, Some Silicates, Coal Dust, and para-Aramid Fibrils 340 (1997).

[11] Id. at 393.

[12] IARC Monograph, Volume 120: Benzene (2018).

[13] IARC Monographs on the Evaluation of Carcinogenic Risks to Humans: Volume 45, Occupational Exposures in Petroleum Refining; Crude Oil and Major Petroleum Fuels 194 (1989).

[14] IARC Monograph No. 116, Drinking Coffee, Mate, and Very Hot Beverages (2018).

[15] IARC Monograph no. 60, Some Industrial Chemicals (1994).

[16] See “Coffee with Cream, Sugar & a Dash of Acrylamide” (June 9, 2018); “The Council for Education and Research on Toxics” (July 9, 2013).

[17] IARC Monographs on the Evaluation of Carcinogenic Risks to Humans Volume 96 1278 (2010).

Consensus is Not Science

November 8th, 2023

Ted Simon, a toxicologist and a fellow board member at the Center for Truth in Science, has posted an intriguing piece in which he labels scientific consensus a fool’s errand.[1] Ted begins his piece by channeling the late Michael Crichton, who famously derided consensus in science in his 2003 Caltech Michelin Lecture:

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science, consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

* * * *

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”[2]

Crichton’s (and Simon’s) critique of consensus is worth remembering in the face of recent proposals by Professor Edward Cheng,[3] and others,[4] to make consensus the touchstone for the admissibility of scientific opinion testimony.

Consensus or general acceptance can be a proxy for conclusions drawn from valid inferences, within reliably applied methodologies, based upon sufficient evidence, quantitatively and qualitatively. When expert witnesses opine contrary to a consensus, they raise serious questions about how they came to their conclusions. Carl Sagan declaimed that “extraordinary claims require extraordinary evidence,” but his principle was hardly novel. Some authors quote the French polymath Pierre-Simon, Marquis de Laplace, who wrote in 1810: “[p]lus un fait est extraordinaire, plus il a besoin d’être appuyé de fortes preuves” (“the more extraordinary a fact, the more it needs to be supported by strong proofs”),[5] but as the Quote Investigator documents,[6] the basic idea is much older, going back at least another century to a church rector who expressed his skepticism of a contemporary’s claim of direct communication with the Almighty: “Sure, these Matters being very extraordinary, will require a very extraordinary Proof.”[7]
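Laplace’s maxim has a natural Bayesian reading: the lower the prior probability of a claim, the stronger the evidence must be before the claim becomes credible. A minimal sketch, with invented numbers, using Bayes’ rule in odds form:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# An "extraordinary" claim starts with a very low prior probability, so
# only very strong evidence can raise it even to a 50/50 posterior.
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.001  # hypothetical prior for an extraordinary claim

print(posterior(prior, 10))   # modest evidence barely moves the needle
print(posterior(prior, 999))  # evidence 999 times likelier under the claim
                              # is needed just to reach a 0.5 posterior
```

On this reading, ordinary evidence suffices for ordinary claims, while a claim contrary to a genuine consensus demands proportionately stronger proof.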

Ted Simon’s essay is also worth consulting because he notes that many sources of apparent consensus are really faux consensus, nothing more than self-appointed intellectual authoritarians who systematically have excluded some points of view, while turning a blind eye to their own positional conflicts.

Lawyers, courts, and academics should be concerned that Cheng’s “consensus principle” will change the focus from evidence, methodology, and inference, to a surrogate or proxy for validity. And the sociological notion of consensus will then require litigation of whether some group really has announced a consensus. Consensus statements in some areas abound, but inquiring minds may want to know whether they are the result of rigorous, systematic reviews of the pertinent studies, and whether the available studies can support the claimed consensus.

Professor Cheng is hard at work on a book-length explication of his proposal, and some criticism will have to await the event.[8] Perhaps Cheng will overcome the objections placed against his proposal.[9] Some of the examples Professor Cheng has given, however, do not inspire confidence. Consider his dramatic misreading of the American Statistical Association’s 2016 p-value consensus statement, which he errantly represents as holding that:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[10]

The 2016 Statement said no such thing, although a few statisticians attempted to distort the statement in the way that Cheng suggests. In 2021, a select committee of leading statisticians, appointed by the President of the ASA, issued a statement to make clear that the ASA had not embraced the Cheng misinterpretation.[11] This one example alone does not bode well for the viability of Cheng’s consensus principle.


[1] Ted Simon, “Scientific consensus is a fool’s errand made worse by IARC” (Oct. 2023).

[2] Michael Crichton, “Aliens Cause Global Warming,” Caltech Michelin Lecture (Jan. 17, 2003).

[3] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [cited below as Consensus Rule].

[4] See Norman J. Shachoy Symposium, The Consensus Rule: A New Approach to the Admissibility of Scientific Evidence (2022), 67 Villanova L. Rev. (2022); David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2022); Harry Collins, “The Owls: Some Difficulties in Judging Scientific Consensus,” 67 Villanova L. Rev. 877 (2022); Robert Evans, “The Consensus Rule: Judges, Jurors, and Admissibility Hearings,” 67 Villanova L. Rev. 883 (2022); Martin Weinel, “The Adversity of Adversarialism: How the Consensus Rule Reproduces the Expert Paradox,” 67 Villanova L. Rev. 893 (2022); Wendy Wagner, “The Consensus Rule: Lessons from the Regulatory World,” 67 Villanova L. Rev. 907 (2022); Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022).

[5] Pierre-Simon Laplace, Théorie analytique des probabilités (1812) (“The more extraordinary a fact, the more it needs to be supported by strong proofs.”). See Tressoldi, “Extraordinary Claims Require Extraordinary Evidence: The Case of Non-Local Perception, a Classical and Bayesian Review of Evidences,” 2 Frontiers Psych. 117 (2011); Charles Coulston Gillispie, Pierre-Simon Laplace, 1749-1827: A Life in Exact Science (1997).

[6] “Extraordinary Claims Require Extraordinary Evidence” (Dec. 5, 2021).

[7] Benjamin Bayly, An Essay on Inspiration 362, part 2 (2nd ed. 1708).

[8] The Consensus Principle, under contract with the University of Chicago Press.

[9] See “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022); “Further Thoughts on Cheng’s Consensus Rule” (Oct. 3, 2022); “Consensus Rule – Shadows of Validity” (Apr. 26, 2023).

[10] Consensus Rule at 424 (citing but not quoting Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[11] Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

Just Dissertations

October 27th, 2023

One of my childhood joys was roaming the stacks of libraries and browsing for arcane learning stored in aging books. Often, I had no particular goal in my roaming, and I flitted from topic to topic. Occasionally, however, I came across useful learning. It was in one college library, for instance, that I discovered the process for making nitrogen tri-iodide, which provided me with some simple-minded amusement for years. (I only narrowly avoided detection by Dean Brownlee for a prank involving NI3 in chemistry lab.)

Nowadays, most old books are off limits to the casual library visitor, but digital archives can satisfy my occasional compulsion to browse what is new and compelling in the world of research on topics of interest. And there can be no better source for new and topical research than browsing dissertations and theses, which are usually required to break new ground in scholarly research and debate. There are several online search tools for dissertations, such as ProQuest, EBSCO Open Dissertations, Theses and Dissertations, WorldCat Dissertations and Theses, Open Access Theses and Dissertations, and Yale Library Resources to Find Dissertations.

Some universities generously share the scholarship of their graduate students online, and there are some great gems freely available.[1] Other universities provide a catalogue of their students’ dissertations, the titles of which can be browsed and the texts of which can be downloaded. For lawyers interested in medico-legal issues, the London School of Hygiene & Tropical Medicine has a website, “LSHTM Research Online,” which is a delightful place to browse on a rainy afternoon, and which features a free, open-access repository of research. Most of the publications are dissertations, some 1,287 at present, on various medical and epidemiologic topics, from 1938 to the present.

The prominence of the London School of Hygiene & Tropical Medicine makes its historical research germane to medico-legal issues such as “state of the art,” notice, priority, knowledge, and intellectual provenance. A 1959 dissertation by J. D. Walters, a Surgeon Lieutenant in the Royal Navy, is included in the repository.[2] Walters’ dissertation is a treasure trove for the state-of-the-art case – who knew what, when – about asbestos health hazards, written before litigation distorted perspectives on the matter. Walters’ dissertation shows, in contemporaneous scholarship rather than hindsight second-guessing, that Sir Richard Doll’s 1955 study, flawed as it was by contemporaneous standards, was seen as establishing an association between asbestosis (not asbestos exposure) and lung cancer. Walters’ careful assessment of how asbestos was actually used in British dockyards documents the differences between British and American product use. The British dockyards had employed full-time laggers since 1946, and they used spray asbestos, asbestos (amosite and crocidolite) mattresses, as well as lower asbestos-content insulation.

Walters reported cases of asbestosis among the laggers. Written four years before Irving Selikoff published on an asbestosis hazard among laggers, the predominant end-users of asbestos-containing insulation, Walters’ dissertation preempts Selikoff’s claim of priority in identifying the hazard, and it shows that large employers, such as the Royal Navy and the United States Navy, were well aware of asbestos hazards before companies began placing warning labels. Like Selikoff, Walters typically had no information about worker compliance with safety regulations, such as respirator use. Walters emphasized the need for industrial medical officers to be aware of the asbestosis hazard, and the means to prevent it. Noticeably absent was any suggestion that warning labels on bags of asbestos or boxes of pre-fabricated insulation were relevant to the medical officer’s work in controlling the hazard.

Among the litigation relevant finds in the repository is the doctoral thesis of Francis Douglas Kelly Liddell,[3] on the mortality of the Quebec chrysotile workers, with most of the underlying data.[4] A dissertation by Keith Richard Sullivan reported on the mortality patterns of civilian workers at Royal Navy dockyards in England.[5] Sullivan found no increased risk of lung cancer, although excesses of asbestosis and mesothelioma occurred at all dockyards. A critical look at meta-analyses of formaldehyde and cancer outcomes in one dissertation shows prevalent biases in available studies, and insufficient evidence of causation.[6]

Some of the other interesting dissertations with historical medico-legal relevance are:

Francis, The evaluation of small airway disease in the human lung with special reference to tests which are suitable for epidemiological screening; PhD thesis, London School of Hygiene & Tropical Medicine (1978) DOI: https://doi.org/10.17037/PUBS.04655290

Gillian Mary Regan, A Study of pulmonary function in asbestosis, PhD thesis, London School of Hygiene & Tropical Medicine (1977) DOI: https://doi.org/10.17037/PUBS.04655127

Christopher J. Sirrs, Health and Safety in the British Regulatory State, 1961-2001: the HSC, HSE and the Management of Occupational Risk, PhD thesis, London School of Hygiene & Tropical Medicine (2016) DOI: https://doi.org/10.17037/PUBS.02548737

Michael Etrata Rañopa, Methodological issues in electronic healthcare database studies of drug cancer associations: identification of cancer, and drivers of discrepant results, PhD thesis, London School of Hygiene & Tropical Medicine (2016). DOI: https://doi.org/10.17037/PUBS.02572609

Melanie Smuk, Missing Data Methodology: Sensitivity analysis after multiple imputation, PhD thesis, London School of Hygiene & Tropical Medicine (2015) DOI: https://doi.org/10.17037/PUBS.02212896

John Ross Tazare, High-dimensional propensity scores for data-driven confounder adjustment in UK electronic health records, PhD thesis, London School of Hygiene & Tropical Medicine (2022). DOI: https://doi.org/10.17037/PUBS.046647276/

Rebecca Jane Hardy, Meta-analysis techniques in medical research: a statistical perspective, PhD thesis, London School of Hygiene & Tropical Medicine (1995) DOI: https://doi.org/10.17037/PUBS.00682268

Jemma Walker, Bayesian modelling in genetic association studies, PhD thesis, London School of Hygiene & Tropical Medicine (2012) DOI: https://doi.org/10.17037/PUBS.01635516

Marieke Schoonen, Pharmacoepidemiology of autoimmune diseases, PhD thesis, London School of Hygiene & Tropical Medicine (2007) DOI: https://doi.org/10.17037/PUBS.04646551

Claudio John Verzilli, Method for the analysis of incomplete longitudinal data, PhD thesis, London School of Hygiene & Tropical Medicine (2003) DOI: https://doi.org/10.17037/PUBS.04646517

Martine Vrijheid, Risk of congenital anomaly in relation to residence near hazardous waste landfill sites, PhD thesis, London School of Hygiene & Tropical Medicine (2000) DOI: https://doi.org/10.17037/PUBS.00682274


[1] See, e.g., Benjamin Nathan Schachtman, Traumedy: Dark Comedic Negotiations of Trauma in Contemporary American Literature (2016).

[2] J.D. Walters, Asbestos – a potential hazard to health in the ship building and ship repairing industries, DrPH thesis, London School of Hygiene & Tropical Medicine (1959); https://doi.org/10.17037/PUBS.01273049.

[3] “The Lobby – Cut on the Bias” (July 6, 2020).

[4] Francis Douglas Kelly Liddell, Mortality of Quebec chrysotile workers in relation to radiological findings while still employed, PhD thesis, London School of Hygiene & Tropical Medicine (1978); DOI: https://doi.org/10.17037/PUBS.04656049

[5] Keith Richard Sullivan, Mortality patterns among civilian workers in Royal Navy Dockyards, PhD thesis, London School of Hygiene & Tropical Medicine (1994) DOI: https://doi.org/10.17037/PUBS.04656717

[6] Damien Martin McElvenny, Meta-analysis of Rare Diseases in Occupational Epidemiology, PhD thesis, London School of Hygiene & Tropical Medicine (2017) DOI: https://doi.org/10.17037/PUBS.03894558

Science & the Law – from the Proceedings of the National Academy of Sciences

October 5th, 2023

The current issue of the Proceedings of the National Academy of Sciences (PNAS) features a medley of articles on science generally, and forensic science in particular, in the law.[1] The general editor of the compilation appears to be editorial board member Thomas D. Albright, the Conrad T. Prebys Professor of Vision Research at the Salk Institute for Biological Studies.

I have not had time to plow through the set of offerings, but even a superficial inspection reveals that the articles will be of interest to lawyers and judges involved in the litigation of scientific issues. The authors seem to agree that, descriptively and prescriptively, validity is more important than expertise in the legal consideration of scientific evidence.

1. Thomas D. Albright, “A scientist’s take on scientific evidence in the courtroom,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Albright’s essay was edited by Henry Roediger, a psychologist at Washington University in St. Louis.

Abstract

Scientific evidence is frequently offered to answer questions of fact in a court of law. DNA genotyping may link a suspect to a homicide. Receptor binding assays and behavioral toxicology may testify to the teratogenic effects of bug repellant. As for any use of science to inform fateful decisions, the immediate question raised is one of credibility: Is the evidence a product of valid methods? Are results accurate and reproducible? While the rigorous criteria of modern science seem a natural model for this evaluation, there are features unique to the courtroom that make the decision process scarcely recognizable by normal standards of scientific investigation. First, much science lies beyond the ken of those who must decide; outside “experts” must be called upon to advise. Second, questions of fact demand immediate resolution; decisions must be based on the science of the day. Third, in contrast to the generative adversarial process of scientific investigation, which yields successive approximations to the truth, the truth-seeking strategy of American courts is terminally adversarial, which risks fracturing knowledge along lines of discord. Wary of threats to credibility, courts have adopted formal rules for determining whether scientific testimony is trustworthy. Here, I consider the effectiveness of these rules and explore tension between the scientists’ ideal that momentous decisions should be based upon the highest standards of evidence and the practical reality that those standards are difficult to meet. Justice lies in carefully crafted compromise that benefits from robust bonds between science and law.

2. Thomas D. Albright, David Baltimore, Anne-Marie Mazza, Jennifer L. Mnookin & David S. Tatel, “Science, evidence, law, and justice,” 120 Proceedings of the National Academy of Sciences e2301839120 (2023).

Professor Baltimore is a Nobel laureate and researcher in biology, now at the California Institute of Technology. Anne-Marie Mazza is the director of the Committee on Science, Technology, and Law, of the National Academies of Sciences, Engineering, and Medicine. Jennifer Mnookin is the chancellor of the University of Wisconsin, Madison; previously, she was the dean of the UCLA School of Law. Judge Tatel is a federal judge on the United States Court of Appeals for the District of Columbia Circuit.

Abstract

For nearly 25 y, the Committee on Science, Technology, and Law (CSTL), of the National Academies of Sciences, Engineering, and Medicine, has brought together distinguished members of the science and law communities to stimulate discussions that would lead to a better understanding of the role of science in legal decisions and government policies and to a better understanding of the legal and regulatory frameworks that govern the conduct of science. Under the leadership of recent CSTL co-chairs David Baltimore and David Tatel, and CSTL director Anne-Marie Mazza, the committee has overseen many interdisciplinary discussions and workshops, such as the international summits on human genome editing and the science of implicit bias, and has delivered advisory consensus reports focusing on topics of broad societal importance, such as dual use research in the life sciences, voting systems, and advances in neural science research using organoids and chimeras. One of the most influential CSTL activities concerns the use of forensic evidence by law enforcement and the courts, with emphasis on the scientific validity of forensic methods and the role of forensic testimony in bringing about justice. As coeditors of this Special Feature, CSTL alumni Tom Albright and Jennifer Mnookin have recruited articles at the intersection of science and law that reveal an emerging scientific revolution of forensic practice, which we hope will engage a broad community of scientists, legal scholars, and members of the public with interest in science-based legal policy and justice reform.

3. Nicholas Scurich, David L. Faigman, and Thomas D. Albright, “Scientific guidelines for evaluating the validity of forensic feature-comparison methods,” 120 Proceedings of the National Academy of Sciences (2023).

Nicholas Scurich is the chair of the Department of Psychological Science at the University of California, Irvine. David Faigman has written prolifically about science in the law; he is now the chancellor and dean of the University of California College of the Law, San Francisco.

Abstract

When it comes to questions of fact in a legal context—particularly questions about measurement, association, and causality—courts should employ ordinary standards of applied science. Applied sciences generally develop along a path that proceeds from a basic scientific discovery about some natural process to the formation of a theory of how the process works and what causes it to fail, to the development of an invention intended to assess, repair, or improve the process, to the specification of predictions of the instrument’s actions and, finally, empirical validation to determine that the instrument achieves the intended effect. These elements are salient and deeply embedded in the cultures of the applied sciences of medicine and engineering, both of which primarily grew from basic sciences. However, the inventions that underlie most forensic science disciplines have few roots in basic science, and they do not have sound theories to justify their predicted actions or results of empirical tests to prove that they work as advertised. Inspired by the “Bradford Hill Guidelines”—the dominant framework for causal inference in epidemiology—we set forth four guidelines that can be used to establish the validity of forensic comparison methods generally. This framework is not intended as a checklist establishing a threshold of minimum validity, as no magic formula determines when particular disciplines or hypotheses have passed a necessary threshold. We illustrate how these guidelines can be applied by considering the discipline of firearm and tool mark examination.

4. Peter Stout, “The secret life of crime labs,” 120 Proceedings of the National Academy of Sciences e2303592120 (2023).

Peter Stout is a scientist with the Houston Forensic Science Center, in Houston, Texas. The Center describes itself as “an independent local government corporation,” which provides forensic “services” to the Houston police.

Abstract

Houston TX experienced a widely known failure of its police forensic laboratory. This gave rise to the Houston Forensic Science Center (HFSC) as a separate entity to provide forensic services to the City of Houston. HFSC is a very large forensic laboratory and has made significant progress at remediating the past failures and improving public trust in forensic testing. HFSC has a large and robust blind testing program, which has provided many insights into the challenges forensic laboratories face. HFSC’s journey from a notoriously failed lab to a model also gives perspective to the resource challenges faced by all labs in the country. Challenges for labs include the pervasive reality of poor-quality evidence. Also that forensic laboratories are necessarily part of a much wider system of interdependent functions in criminal justice making blind testing something in which all parts have a role. This interconnectedness also highlights the need for an array of oversight and regulatory frameworks to function properly. The major essential databases in forensics need to be a part of blind testing programs and work is needed to ensure that the results from these databases are indeed producing correct results and those results are being correctly used. Last, laboratory reports of “inconclusive” results are a significant challenge for laboratories and the system to better understand when these results are appropriate, necessary and most importantly correctly used by the rest of the system.

5. Brandon L. Garrett & Cynthia Rudin, “Interpretable algorithmic forensics,” 120 Proceedings of the National Academy of Sciences e2301842120 (2023).

Garrett teaches at the Duke University School of Law. Rudin teaches statistics at Duke University.

Abstract

One of the most troubling trends in criminal investigations is the growing use of “black box” technology, in which law enforcement rely on artificial intelligence (AI) models or algorithms that are either too complex for people to understand or they simply conceal how it functions. In criminal cases, black box systems have proliferated in forensic areas such as DNA mixture interpretation, facial recognition, and recidivism risk assessments. The champions and critics of AI argue, mistakenly, that we face a catch 22: While black box AI is not understandable by people, they assume that it produces more accurate forensic evidence. In this Article, we question this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be interpretable—can be more accurate than black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Debunking the black box performance myth has implications for forensic evidence, constitutional criminal procedure rights, and legislative policy. Absent some compelling—or even credible—government interest in keeping AI as a black box, and given the constitutional rights and public safety interests at stake, we argue that a substantial burden rests on the government to justify black box AI in criminal cases. We conclude by calling for judicial rulings and legislation to safeguard a right to interpretable forensic AI.

6. Jed S. Rakoff & Goodwin Liu, “Forensic science: A judicial perspective,” 120 Proceedings of the National Academy of Sciences e2301838120 (2023).

Judge Rakoff has written previously on forensic evidence. He is a federal district court judge in the Southern District of New York. Goodwin Liu is a justice on the California Supreme Court. Their article was edited by Professor Mnookin.

Abstract

This article describes three major developments in forensic evidence and the use of such evidence in the courts. The first development is the advent of DNA profiling, a scientific technique for identifying and distinguishing among individuals to a high degree of probability. While DNA evidence has been used to prove guilt, it has also demonstrated that many individuals have been wrongly convicted on the basis of other forensic evidence that turned out to be unreliable. The second development is the US Supreme Court precedent requiring judges to carefully scrutinize the reliability of scientific evidence in determining whether it may be admitted in a jury trial. The third development is the publication of a formidable National Academy of Sciences report questioning the scientific validity of a wide range of forensic techniques. The article explains that, although one might expect these developments to have had a major impact on the decisions of trial judges whether to admit forensic science into evidence, in fact, the response of judges has been, and continues to be, decidedly mixed.

7. Jonathan J. Koehler, Jennifer L. Mnookin, and Michael J. Saks, “The scientific reinvention of forensic science,” 120 Proceedings of the National Academy of Sciences e2301840120 (2023).

Koehler is a professor of law at the Northwestern Pritzker School of Law. Saks is a professor of psychology at Arizona State University, and Regents Professor of Law, at the Sandra Day O’Connor College of Law.

Abstract

Forensic science is undergoing an evolution in which a long-standing “trust the examiner” focus is being replaced by a “trust the scientific method” focus. This shift, which is in progress and still partial, is critical to ensure that the legal system uses forensic information in an accurate and valid way. In this Perspective, we discuss the ways in which the move to a more empirically grounded scientific culture for the forensic sciences impacts testing, error rate analyses, procedural safeguards, and the reporting of forensic results. However, we caution that the ultimate success of this scientific reinvention likely depends on whether the courts begin to engage with forensic science claims in a more rigorous way.

8. William C. Thompson, “Shifting decision thresholds can undermine the probative value and legal utility of forensic pattern-matching evidence,” 120 Proceedings of the National Academy of Sciences e2301844120 (2023).

Thompson is professor emeritus in the Department of Criminology, Law & Society, University of California, Irvine.

Abstract

Forensic pattern analysis requires examiners to compare the patterns of items such as fingerprints or tool marks to assess whether they have a common source. This article uses signal detection theory to model examiners’ reported conclusions (e.g., identification, inconclusive, or exclusion), focusing on the connection between the examiner’s decision threshold and the probative value of the forensic evidence. It uses a Bayesian network model to explore how shifts in decision thresholds may affect rates and ratios of true and false convictions in a hypothetical legal system. It demonstrates that small shifts in decision thresholds, which may arise from contextual bias, can dramatically affect the value of forensic pattern-matching evidence and its utility in the legal system.
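Thompson’s central point – that a small shift in an examiner’s decision threshold can dramatically change the probative value of a reported “identification” – can be illustrated with a toy signal-detection model. The distributions and threshold values below are my illustrative assumptions, not parameters from the article; the sketch simply shows how the likelihood ratio of an identification call moves with the threshold.

```python
from statistics import NormalDist

# Toy signal-detection model (illustrative assumptions, not Thompson's parameters):
# similarity scores for same-source pairs ~ N(2, 1), different-source pairs ~ N(0, 1).
same_source = NormalDist(mu=2.0, sigma=1.0)
diff_source = NormalDist(mu=0.0, sigma=1.0)

def identification_likelihood_ratio(threshold: float) -> float:
    """LR = P(examiner reports 'identification' | same source)
          / P(examiner reports 'identification' | different source),
    where the examiner reports an identification when score > threshold."""
    hit_rate = 1.0 - same_source.cdf(threshold)          # true 'identification' rate
    false_alarm_rate = 1.0 - diff_source.cdf(threshold)  # false 'identification' rate
    return hit_rate / false_alarm_rate

# Shifting the examiner's threshold changes the probative value (LR)
# of a reported identification by more than an order of magnitude:
for c in (1.0, 2.0, 3.0):
    print(f"threshold={c:.1f}  LR={identification_likelihood_ratio(c):.1f}")
```

On these assumed distributions, moving the threshold from 1.0 to 3.0 raises the likelihood ratio of an “identification” from roughly 5 to over 100, while also making identifications rarer – the tradeoff between threshold placement and probative value that Thompson models formally.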

9. Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, Laura M. Stevens, Harriet M. J. Smith, Tobias Staudigl & Heather D. Flowe, “Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discriminability,” 120 Proceedings of the National Academy of Sciences e2301845120 (2023).

Marlene Meyer, Melissa F. Colloff, Tia C. Bennett, Edward Hirata, Amelia Kohl, and Heather D. Flowe are psychologists at the School of Psychology, University of Birmingham (United Kingdom). Harriet M. J. Smith is a psychologist in the School of Psychology, Nottingham Trent University, Nottingham, United Kingdom, and Tobias Staudigl is a psychologist in the Department of Psychology, Ludwig-Maximilians-Universität München, in Munich, Germany.

Abstract

Accurate witness identification is a cornerstone of police inquiries and national security investigations. However, witnesses can make errors. We experimentally tested whether an interactive lineup, a recently introduced procedure that enables witnesses to dynamically view and explore faces from different angles, improves the rate at which witnesses identify guilty over innocent suspects compared to procedures traditionally used by law enforcement. Participants encoded 12 target faces, either from the front or in profile view, and then attempted to identify the targets from 12 lineups, half of which were target present and the other half target absent. Participants were randomly assigned to a lineup condition: simultaneous interactive, simultaneous photo, or sequential video. In the front-encoding and profile-encoding conditions, Receiver Operating Characteristics analysis indicated that discriminability was higher in interactive compared to both photo and video lineups, demonstrating the benefit of actively exploring the lineup members’ faces. Signal-detection modeling suggested interactive lineups increase discriminability because they afford the witness the opportunity to view more diagnostic features such that the nondiagnostic features play a proportionally lesser role. These findings suggest that eyewitness errors can be reduced using interactive lineups because they create retrieval conditions that enable witnesses to actively explore faces and more effectively sample features.


[1] 120 Proceedings of the National Academy of Sciences (Oct. 10, 2023).