TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Susan Haack on Judging Expert Testimony

December 19th, 2020

Susan Haack has written frequently about expert witness testimony in the United States legal system. At times, Haack’s observations are interesting and astute, perhaps more so because she has no training in the law or legal scholarship. She trained in philosophy, and her works no doubt are taken seriously because of her academic seniority; she is the Distinguished Professor in the Humanities, Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy and Professor of Law at the University of Miami.

On occasion, Haack has used her background and experience from teaching about epistemology to good effect in elucidating how epistemological issues are handled in the law. For instance, her exploration of the vice of credulity, as voiced by W.K. Clifford,[1] is a useful counterweight to the shrill agnotologists, Robert Proctor, Naomi Oreskes, and David Michaels.

Professor Haack has also been a source of confused, fuzzy, and errant advice when it comes to the issue of Rule 702 gatekeeping. Haack’s most recent article on “Judging Expert Testimony” is an example of some unfocused thinking about one of the most important aspects of modern litigation practice, admissibility challenges to expert witness opinion testimony.[2]

Uncontroversially, Haack finds the case law on expert witness gatekeeping lacking in “effective practical guidance,” and she seeks to offer courts, and presumably litigants, “operational help.” Haack sets out to explain “why the legal formulae” are not of practical use. Haack notes that terms such as “reliable” and “sufficient” are qualitative, and vague,[3] much like “obscene” and other adjectives that gave the courts such a difficult time. Rules with vague terms such as these give judges very little guidance. As a philosopher, Haack might have noted that the various judicial formulations of gatekeeping standards are couched as conclusions, devoid of explanatory force.[4] And she might have pointed out that the judicial tendency to confuse reliability with validity has muddled many court opinions and lawyers’ briefs.

Focusing specifically on the field of epidemiology, Haack attempts to help courts by offering questions that judges and lawyers should be asking. She tells us that the Reference Manual on Scientific Evidence is of little practical help, which is a bit unfair.[5] The Manual in its present form has problems, but ultimately the performance of gatekeepers can be improved only if the gatekeepers develop some aptitude and knowledge in the subject matter of the expert witnesses who are undergoing Rule 702 challenges. Haack seems unduly reluctant to acknowledge that gatekeeping will require subject matter expertise. The chapter on statistics in the current edition of the Manual, by David Kaye and the late David Freedman, is a rich resource for judges and lawyers in evaluating statistical evidence, including statistical analyses that appear in epidemiologic studies.

Why do judges struggle with epidemiologic testimony? Haack unwittingly shows the way by suggesting that “[e]pidemiological testimony will be to the effect that a correlation, an increased relative risk, has, or hasn’t, been found, between exposure to some substance (the alleged toxin at issue in the case) and some disease or disorder (the alleged disease or disorder the plaintiff claims to have suffered)… .”[6] Some philosophical parsing of “correlation” and “increased risk” might have been in order; they are two very different things. Haack suggests an incorrect identity between correlation and increased risk, an identity that has confused courts as well as some epidemiologists.
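
The difference is easy to see with a small numerical illustration. The following sketch, in Python, uses purely hypothetical counts (not data from any study discussed here) to contrast a relative risk computed from a 2×2 table with the phi coefficient, the correlation measure for two binary variables, computed from the same table.

```python
# Hypothetical 2x2 table: an exposed and an unexposed cohort of 1,000 each.
exposed_cases, exposed_total = 30, 1000      # illustrative numbers only
unexposed_cases, unexposed_total = 10, 1000

# Relative risk: the ratio of the risk in the exposed to the risk in the unexposed.
risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
relative_risk = risk_exposed / risk_unexposed

# Phi coefficient: the Pearson correlation between the two binary variables
# (exposure and disease) computed from the same table.
a, b = exposed_cases, exposed_total - exposed_cases
c, d = unexposed_cases, unexposed_total - unexposed_cases
phi = (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5

print(f"relative risk:   {relative_risk:.2f}")  # 3.00 -- a threefold increase in risk
print(f"phi correlation: {phi:.3f}")            # ~0.07 -- a weak correlation
```

The same hypothetical data yield a threefold relative risk but only a feeble correlation coefficient; treating the two measures as interchangeable is exactly the sort of conflation that has misled courts.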

Haack suggests asking various questions that are fairly obvious, such as questions about the soundness of the data, measurements, study design, and data interpretation. Haack gives the example of failing to ascertain exposure to an alleged teratogen during the first trimester of pregnancy as a failure of study design that could obscure a real association. Curiously, she claims that some of Merrell Dow’s studies of Bendectin did such a thing, citing not any publications but the second-hand account of a trial judge.[7] Beyond the objectionable lack of scholarship, the example comes from a medication exposure that has been exculpated, as much as is possible, from the dubious litigation claims of teratogenicity. The misleading example raises the question: why choose a Bendectin case, from a litigation that was punctuated by fraud and perjury from plaintiffs’ expert witnesses, and about a medication that has been shown to be safe and effective in pregnancy?[8]

Haack balks when it comes to statistical significance, which she tells us is merely based upon a convention, and set “high” to avoid false alarms.[9] Haack’s dismissive attitude cannot be squared with the absolute need to address random error and to assess whether the research claim has been meaningfully tested.[10] Haack would reduce the assessment of random error to the uncertainties of eyeballing sample size. She tells us that:

“But of course, the larger the sample is, then, other things being equal, the better the study. Andrew Wakefield’s dreadful work supposedly finding a correlation between MMR vaccination, bowel disorders, and autism—based on a sample of only 12 children — is a paradigm example of a bad study.”[11]

Sample size was the least of Wakefield’s problems, but more to the point, in some study designs for some hypotheses, a sample of 12 may be quite adequate to the task, and capable of generating robust and even statistically significant findings.
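
For readers who want to see why a sample of 12 is not automatically disqualifying, here is a minimal sketch, using scipy and a hypothetical sign test (not Wakefield’s data or design): if all twelve paired outcomes in a matched comparison favor one treatment, the exact two-sided p-value against a 50/50 null is already far below the conventional 0.05 threshold.

```python
# Hypothetical sign test on 12 paired outcomes, all favoring one treatment.
# Under the null hypothesis of no effect, each pair is a fair coin flip.
from scipy.stats import binomtest

result = binomtest(k=12, n=12, p=0.5, alternative="two-sided")
print(f"exact two-sided p-value: {result.pvalue:.5f}")  # about 0.00049
```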

Inevitably, Haack alights upon personal bias or conflicts of interest, as a subject of inquiry.[12] Of course, this is one of the few areas that judges and lawyers understand all too well, and do not need encouragement to pursue. Haack dives in, regardless, to advise asking:

“Do those who paid for or conducted a study have an interest in reaching a given conclusion (were they, for example, scientists working for manufacturers hoping to establish that their medication is effective and safe, or were they scientists working, like Wakefield, with attorneys for one party or another)?”[13]

Speaking of bias, we can detect some in how Haack frames the inquiry. Did scientists work for manufacturers (Boo!), or were they, “like Wakefield,” working for attorneys for a party? Haack cannot seem to bring herself to say that Wakefield, and many other expert witnesses, worked for plaintiffs and plaintiffs’ counsel, a.k.a., the lawsuit industry. Perhaps Haack counted such expert witnesses as working for those who manufacture lawsuits. Similarly, in her discussion of journal quality, she notes that some journals carry advertisements from manufacturers, or receive financial support from them. There is a distinct asymmetry in Haack’s lack of curiosity about journals that are run by scientists or physicians who belong to advocacy groups, or who regularly testify for plaintiffs’ counsel.

There are many other quirky opinions here, but I will conclude with the obvious point that in the epidemiologic literature, there is a huge gulf between reporting on associations and drawing causal conclusions. Haack asks her readers to remember “that epidemiological studies can only show correlations, not causation.”[14] This suggestion ignores Haack’s own discussion, in the same article, of certain clinical trial results, which do “show” causal relationships. And epidemiologic studies can show strong, robust, consistent associations, with exposure-response gradients, unlikely to be explained by random variation, and these findings collectively can show causation in appropriate cases.

My recommendation is to ignore Haack’s suggestions and to pay closer attention to the subject matter of the expert witness who is under challenge. If the subject matter is epidemiology, open a few good textbooks on the subject. On the legal side, a good treatise such as The New Wigmore will provide much more illumination and guidance for judges and lawyers than vague, general suggestions.[15]


[1] William Kingdon Clifford, “The Ethics of Belief,” in L. Stephen & F. Pollock, eds., The Ethics of Belief 70-96 (1877) (“In order that we may have the right to accept [someone’s] testimony as ground for believing what he says, we must have reasonable grounds for trusting his veracity, that he is really trying to speak the truth so far as he knows it; his knowledge, that he has had opportunities of knowing the truth about this matter; and his judgement, that he has made proper use of those opportunities in coming to the conclusion which he affirms.”), quoted in Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020).

[2]  Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020) [cited as Haack].

[3]  Haack at 21.

[4]  See, e.g., “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions”; “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702”; “Judicial Dodgers – Weight not Admissibility”; “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent.”

[5]  Haack at 21.

[6]  Haack at 22.

[7]  Haack at 24, citing Blum v. Merrell Dow Pharms., Inc., 33 Phila. Cty. Rep. 193, 214-17 (1996).

[8]  See, e.g., “Bendectin, Diclegis & The Philosophy of Science” (Oct. 23, 2013).

[9]  Haack at 23.

[10]  See generally Deborah Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018).

[11]  Haack at 23-24 (emphasis added).

[12]  Haack at 24.

[13]  Haack at 24.

[14]  Haack at 25.

[15]  David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2nd ed. 2011). A new edition is due out presently.

Is Your Daubert Motion Racist?

July 17th, 2020

In this week’s New York Magazine, Jonathan Chait points out there is now a vibrant anti-racism consulting industry that exists to help white (or White?) people to recognize the extent to which their race has enabled their success, in the face of systematic inequalities that burden people of color. Chait acknowledges that some of what this industry does is salutary and timely, but he also notes that there are disturbing elements in this industry’s messaging, which is nothing short of an attack on individualism as a racist myth that ignores that individuals are subsumed completely into their respective racial groups. Chait argues that many of the West’s most cherished values – individualism, due process, free speech and inquiry, and the rule of law – are imperiled by so-called “radical progressivism” and “identity politics.”[1]

It is hard to fathom how anti-racism can collapse all identity into racial categories, even if some inarticulate progressives say so. Chait’s claim, however, seems to be supported by the Smithsonian National Museum of African American History & Culture, and its webpages on “Talking about Race,” which provide an extended analysis of “whiteness,” “white privilege,” and the like.

On May 31, 2020, the Museum’s website published a graphic that presented its view of the “Aspects & Assumptions of Whiteness and White Culture in the United States,” which made many startling claims about what is “white,” and by implication, what is “non-white.” [The chart is set out below.] I will leave it to the sociologists, psychologists, and anthropologists to parse the discussion of “white-dominant culture,” and white “racial identity,” provided in the Museum’s webpages. In my view, the characterizations of “whiteness” were overtly racist and insulting to all races and ethnicities. As Chait points out, with an abundance of irony, Donald Trump would seem to be the epitome of non-white, by his disavowal of the Museum’s identification of white culture’s insistence that “hard work is the key to success.”

The aspect of the graphic summary of whiteness that I found most curious, most racist, and most insulting to people of all colors and ethnicities is the chart’s assertion that white culture places “Emphasis on the Scientific Method,” with its valuation of “[o]bjective, rational linear thinking”; “[c]ause and effect relationships”; and “[q]uantitative emphasis.” The implication is that non-whites do not emphasize or care about the scientific method. So the scientific method, with its concern over validity of inference, and over ruling out random and systematic errors, is just white privilege, and a microaggression against non-white people.

Really? Can the Smithsonian National Museum of African American History & Culture really mean that scientific punctilio is just another manifestation of racism and cultural imperialism? Chait seems to think so, quoting Glenn Singleton, president of Courageous Conversation, a racial-sensitivity training firm, who asserts that valuing “written communication over other forms” is “a hallmark of whiteness,” as is “scientific, linear thinking. Cause and effect.”

The Museum has apparently removed the graphic from its website, in response to a blitz of criticism from right-wing media and pundits.[2]  According to the Washington Post, the graphic has its origins in a 1978 book on White Awareness.[3] In response to the criticism, museum director Spencer Crew apologized and removed the graphic, agreeing that “it did not contribute to the discussion as planned.”[4]

The removal of the graphic is not really the point. Many people will now simply be bitter that they cannot publicly display their racist tropes. More important yet, many people will continue to believe that causal, rational, linear thinking is white, exclusionary, and even racist. Something to remember when you make your next Rule 702 motion.



[1]  Jonathan Chait, “Is the Anti-Racism Training Industry Just Peddling White Supremacy?” New York Magazine (July 16, 2020).

[2]  Laura Gesualdi-Gilmore, “‘DEEPLY INSULTING’ African American museum accused of ‘racism’ over whiteness chart linking hard work and nuclear family to white culture,” The Sun (July 16, 2020); “DC museum criticized for saying ‘delayed gratification’ and ‘decision-making’ are aspects of ‘whiteness’,” Fox News (July 16, 2020) (noting that the National Museum of African American History and Culture received a tremendous outcry after equating the nuclear family and self-reliance to whiteness); Sam Dorman, “African-American museum removes controversial chart linking ‘whiteness’ to self-reliance, decision-making: The chart didn’t contribute to the ‘productive conversation’ they wanted to see,” Fox News (July 16, 2020); Mairead McArdle, “African American History Museum Publishes Graphic Linking ‘Rational Linear Thinking,’ ‘Nuclear Family’ to White Culture,” Nat’l Rev. (July 15, 2020).

[3]  Judy H. Katz, White Awareness: Handbook for Anti-Racism Training (1978).

[4]  Peggy McGlone, “African American Museum site removes ‘whiteness’ chart after criticism from Trump Jr. and conservative media,” Wash. Post (July 17, 2020).

Ingham v. Johnson & Johnson – Passing Talc Off As Asbestos

June 26th, 2020

In the litigation of claims that cosmetic talc use causes ovarian cancer, plaintiffs were struggling to prove causation, despite missteps by the defense.[1] And then lawsuit industrialist Mark Lanier entered the fray and offered a meretriciously beguiling move: stop trying talc cases and start trying asbestos cases.

The Ingham appellate decision this week from the Missouri Court of Appeals appears to be a superficial affirmation of the Lanier strategy.[2] The court gave defendants some relief on jurisdictional issues, but largely affirmed the admissibility of Lanier’s expert witnesses on medical causation, both general and specific.[3]

After all, asbestos is an established cause of ovarian cancer. Or is it?

In 2006, the Institute of Medicine (now the National Academy of Medicine) addressed extra-pulmonary cancers caused by asbestos, without ever mentioning ovarian carcinoma.[4] Throughout the 20th century and for a decade into the 21st, many textbooks and reviews found themselves unable to conclude that asbestos of any type caused ovarian cancer. The world of opinions changed, however, in 2011, when a working group of the International Agency for Research on Cancer (IARC) met in Lyon, France, and issued its support for the general causation claim in a suspect document published in 2012.[5] The IARC has strict rules that prohibit anyone who has any connection with manufacturing industry from serving on its working groups, but the Agency allows consultants and contractors for the lawsuit industry to serve without limitation. The 2011 working group on fibers and dusts thus sported lawsuit industry acolytes such as Peter F. Infante, Jonathan Samet, and Philip J. Landrigan.

Given the composition of this working group, no one was surprised by its finding:

“The Working Group noted that a causal association between exposure to asbestos and cancer of the ovary was clearly established, based on five strongly positive cohort mortality studies of women with heavy occupational exposure to asbestos (Acheson et al., 1982; Wignall & Fox, 1982; Germani et al., 1999; Berry et al., 2000; Magnani et al., 2008). The conclusion received additional support from studies showing that women and girls with environmental, but not occupational exposure to asbestos (Ferrante et al., 2007; Reid et al., 2008, 2009) had positive, though non-significant, increases in both ovarian cancer incidence and mortality.”[6]

The herd mentality is fairly strong in the world of occupational medicine, but not everyone concurred. A group of Australian asbestos researchers (Reid, et al.) without lawsuit industry credentials published another meta-analysis in 2011, as well.[7] Although the Australian researchers reported an increased summary estimate of risk, they were careful to point out that this elevation may have resulted from disease misclassification:

“In the studies that did not examine ovarian cancer pathology, or confirmed cases of mesothelioma from a cancer or mesothelioma registry, misclassification of the cause of death in some cases is likely to have occurred, given that misclassification was reported in those studies that did reexamine cancer pathology specimens. Misclassification may result in an underestimate of peritoneal mesothelioma and an overestimate of ovarian cancer or the converse. Among women, peritoneal mesothelioma may be more likely to be classified as ovarian, colon, or stomach cancer, rather than a rare occupational cancer.”[8]

The authors noted that Irving Selikoff had first reported that a significant number of peritoneal cancers, likely mesothelial in origin, have been misclassified as ovarian cancers. Studies that relied upon death certificates only might thus be very misleading. Supporting the danger of misclassification, the Reid study reported that:

“Only the meta-analysis of those studies that reported ovarian cancer incidence (i.e., those studies that did not rely on cause of death certification to classify their cases of ovarian cancer) did not observe a significant excess risk.”[9]

Reid also reported the absence of other indicia of causation:

“No study showed a statistically significant trend  of ovarian cancer with degree of asbestos exposure. In addition, there was no evidence of a significant trend across studies as grouped exposure increased.”[10]

Other scientists and physicians have acknowledged the controversial nature of the IARC’s determination. In 2011, pathologist Samuel Hammar, who has testified regularly for the lawsuit industry, voiced concerns about the diagnostic accuracy of ovarian cancer cases in asbestos studies:

“It has been difficult to draw conclusions on the basis of epidemiologic studies of ovarian cancers because, histologically, their distinction between peritoneal mesothelioma and carcinomatous peritonei (including primary peritoneal serous papillary adenocarcinoma) is difficult. Ovarian tumors tend to grow by extension and uncommonly metastasize through the bloodstream, which is similar to tumors of mesothelial origin … .”[11]

In 2014, a working group of the Finnish Institute of Occupational Health noted that “despite the conclusions by IARC and the support from recent studies, the hypothesis that asbestos is [a] cause of ovarian cancer remains controversial.”[12] The same year, 2014, the relevant chapter in a leading textbook by Dr. Victor L. Roggli and colleagues opined that:

“the balance of the evidence available at present does not support an association between asbestos exposure and cancers of the female reproductive system.”[13]

Two years later, a text by Dr. Dorsett D. Smith cited “the lack of certainty of the pathologic diagnosis of ovarian cancer versus a peritoneal mesothelioma in epidemiologic studies” as making the epidemiology uninterpretable and any conclusions impossible.[14]

Against this backdrop of evidence, I took a look at what Johnson & Johnson had to say about the occupational asbestos epidemiology in its briefs, in section “B. Studies on asbestos and ovarian cancer.”[15] The defense acknowledged that plaintiffs’ expert witnesses Drs. Jacqueline Moline and Dean Felsher focused on the IARC conclusion, and on studies of heavy occupational exposure. J & J recited without comment or criticism what plaintiffs’ expert witnesses had testified, much of which was quite objectionable.[16]

For instance, Moline and Felsher both reprised the scientifically and judicially debunked views that there is “no known safe level of exposure,” from which they inferred the non-sequitur that “any amount above ordinary background levels – could cause ovarian cancer.”[17] From ignorance, nothing derives but conjecture.

Another example was Felsher’s testimony that asbestos can make the body of an ovarian cancer patient therapy-resistant. In response to these and other remarkable assertions, J & J countered with only the statement that their expert witness, Dr. Huh, “did not agree that all of this was true in the context of ovarian cancer.”[18]

Huh, indeed; that the defense expert witness disagreed with some of what plaintiffs’ witnesses claimed hardly frames an issue for excluding any expert witness’s opinion. Even more disturbing, there is no appellate point that corresponds to a motion to exclude Dr. Moline’s testimony.

The Egilman Challenge

There was, however, a challenge to the testimony of another expert witness, David Egilman, a frequent testifier for Mark Lanier and other lawsuit industrialists. One of the defendants’ challenges on appeal to the admissibility of Egilman’s testimony concerned his use of a 1972 NIOSH study that apparently quantified exposure in terms of fibers per cubic centimeter, without specifying whether all the measured fibers were asbestos fibers, as opposed to non-asbestos fibers, including talc fibers.

The Missouri Court of Appeals rejected this specific challenge in part because Egilman had explained that:

“whether the 1972 NIOSH study identified fibers specifically as ‘asbestos’ was inconsequential, as the only other possible fiber that could be present in a talc sample is a ‘talc fiber, which is chemically identical to anthophyllite asbestos and structurally the same’.”[19]

Talc typically crystallizes in small plates, but it can occur occasionally as fibers. Egilman, however, equated a talc fiber with an anthophyllite fiber, chemically and structurally.

Does Egilman’s opinion hold water?

No, Egilman has wet himself badly (assuming the Missouri appellate court quoted testimony accurately).

According to the Mineralogical Society of America’s Handbook of Mineralogy (and every other standard work on mineralogy I reviewed), anthophyllite and talc, whether in fibrous habit or not, are two different minerals, with very different chemical formulae, crystal chemistry, and structure.[20] Anthophyllite has the chemical formula (Mg,Fe²⁺)₂(Mg,Fe²⁺)₅Si₈O₂₂(OH)₂, and it is an amphibole, a double-chain silicate. Talc, on the other hand, is a phyllosilicate, a hydrated magnesium silicate with the chemical formula Mg₃Si₄O₁₀(OH)₂. Talc crystallizes in the triclinic class, although sometimes monoclinic, and its crystals are platy and very soft.

If the Missouri Court of Appeals characterized Egilman’s testimony correctly on this point, then Egilman gave patently false testimony. Talc and anthophyllite are different chemically and structurally.


[1]  See “The Slemp Case, Part I – Jury Verdict for Plaintiff – 10 Initial Observations”; “The Slemp Case, Part 2 – Openings”; “Slemp Trial Part 3 – The Defense Expert Witness – Huh”; “Slemp Trial Part 4 – Graham Colditz”; “Slemp Trial Part 5 – Daniel W. Cramer”; “Lawsuit Magic – Turning Talcum into Wampum”; “Talc Litigation Supported by Slippery Expert Witness” (2017).

[2]  Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (June 23, 2020) (Slip op.).

[3]  Cara Salvatore, “Missouri Appeals Court Slashes $4.7B Talc Verdict Against J&J,” Law360 (June 23, 2020).

[4]  Jonathan M. Samet, et al., Asbestos: Selected Cancers (I.O.M. Committee on Asbestos 2006).

[5]  International Agency for Research on Cancer, A Review of Human Carcinogens, Monograph Vol. 100, Part C: Arsenic, Metals, Fibres, and Dusts (2012).

[6]  Id. at 256. Some members followed up their controversial finding with an attempt to justify it with a meta-analysis; see M. Constanza Camargo, Leslie T. Stayner, Kurt Straif, Margarita Reina, Umaima Al-Alem, Paul A. Demers, and Philip J. Landrigan, “Occupational Exposure to Asbestos and Ovarian Cancer: A Meta-analysis,” 119 Envt’l Health Persp. 1211 (2011).

[7]  Alison Reid, Nick de Klerk, and Arthur W Musk, “Does Exposure to Asbestos Cause Ovarian Cancer? A Systematic Literature Review and Meta-Analysis,” 20 Cancer Epidemiol., Biomarkers & Prevention 1287 (2011) [Reid].

[8]  Reid at 1293, 1287.

[9]  Id. at 1293.

[10]  Id. at 1294.

[11]  Samuel Hammar, Richard A. Lemen, Douglas W. Henderson & James Leigh, “Asbestos and other cancers,” chap. 8, in Ronald F. Dodson & Samuel P. Hammar, eds., Asbestos: Risk Assessment, Epidemiology, and Health Effects 435 (2nd ed. 2011) (internal citation omitted).

[12]  Finnish Institute of Occupational Health, Asbestos, Asbestosis and Cancer – Helsinki Criteria for Diagnosis and Attribution 60 (2014) (concluding that there was an increased risk in cohorts of women with “relatively high asbestos exposures”).

[13]  Faye F. Gao and Tim D. Oury, “Other Neoplasia,” chap. 8, in Tim D. Oury, Thomas A. Sporn & Victor L. Roggli, eds., in Pathology of Asbestos-Associated Diseases 177, 188 (3d ed. 2014).

[14]  Dorsett D. Smith, The Health Effects of Asbestos: An Evidence-based Approach 208 (2016).

[15]  Brief of Appellants Johnson & Johnson and Johnson & Johnson Consumer Inc., at 29, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (filed Sept. 6, 2019) [J&J Brief].

[16]  Id. at 30.

[17]  See Mark A. Behrens & William L. Anderson, “The ‘Any Exposure’ Theory: An Unsound Basis for Asbestos Causation and Expert Testimony,” 37 SW. U. L. Rev. 479 (2008); William L. Anderson, Lynn Levitan & Kieran Tuckley, “The ‘Any Exposure’ Theory Round II — Court Review of Minimal Exposure Expert Testimony in Asbestos and Toxic Tort Litigation Since 2008,” 22 Kans. J. L. & Pub. Pol’y 1 (2012); William L. Anderson & Kieran Tuckley, “The Any Exposure Theory Round III: An Update on the State of the Case Law 2012 – 2016,” Defense Counsel J. 264 (July 2016); William L. Anderson & Kieran Tuckley, “How Much Is Enough? A Judicial Roadmap to Low Dose Causation Testimony in Asbestos and Tort Litigation,” 42 Am. J. Trial Advocacy 38 (2018).

[18]  Id. at 30.

[19]  Slip op. at 54.

[20]  John W. Anthony, Richard A. Bideaux, Kenneth W. Bladh, and Monte C. Nichols, Handbook of Mineralogy (Mineralogical Soc’y of America 2001).

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an on-line popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must commit to perform the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and English communication skills to meet the needs of this task.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter, who often covers science issues. In his Undark piece, Smith covered some of the oft-told history of the Daubert case, a history recounted better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt,” and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article will you find, however, the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases, however, eventually become obvious. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a scientific journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out the base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he describes some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations that Done perjured himself about his credentials. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger that claims from both sides in Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce “is not convinced the drug should be back on the market,” but really, why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and is well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law, for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggestion that the litigation system is a necessary adjunct to regulatory approval is at best misplaced and tenuous.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open-minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert did not see the case as having ruined his life. Indeed, Jason seemed to approve the basic principle of the Daubert case, and the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”


[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2]  Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

Counter Cancel Culture Part III – Fixing Science

February 14th, 2020

This is the last of three posts about Cancel Culture, and the National Association of Scholars (NAS) conference on Fixing Science, held February 7th and 8th, in Oakland, California.

In finding my participation in the National Association of Scholars’ conference on Fixing Science “worrying” and “concerning,” John Mashey takes his cues from the former OSHA Administrator, David Michaels. David Michaels has written much about industry conflicts of interest and efforts to influence scientific debates and discussions. He popularized the notion of “manufacturing doubt,”[1] with his book of that title. I leave it to others to decide whether Mashey’s adverting to Michaels’ work, in finding my writings on silica litigation “concerning” and “worrying,” is itself worrisome. In order to evaluate Mashey’s argument, such as it is, the reader should know something more about David Michaels, and his publications.[2]

As one might guess from its title, The Triumph of Doubt: Dark Money and the Science of Deception, Michaels’ new book appears to be a continuation of his attack on industry’s efforts to influence regulation. I confess not to have read this new book yet, but I am willing to venture a further guess that the industry Michaels is targeting is manufacturing industry, not the lawsuit industry, for which he has worked on many occasions. There is much irony (and no little hypocrisy) in Michaels’ complaints about dark money and the science of deception. For many years, Michaels ran the now-defunct Project on Scientific Knowledge and Public Policy (SKAPP), which was bankrolled by the plaintiffs’ counsel in the silicone gel breast implant litigation. Whenever SKAPP sponsored a conference, or a publication, the sponsors or authors dutifully gave a disclosure that the meeting or publication was underwritten by “a grant from the Common Benefit Trust, a fund established pursuant to a federal court order in the Silicone Gel Breast Implant Products Liability litigation.”

Non-lawyers might be forgiven for thinking that SKAPP and its propaganda had the imprimatur of the federal court system, but nothing could be further from the truth. A common benefits fund is the pool of money that is available to plaintiffs’ lawyers who serve on the steering committee of a large, multi-district litigation, to develop expert witnesses, analyze available scientific studies, and even commission studies of their own.[3] The source of the money was a “tax” imposed upon all settlements with defendants, which funneled the money into the so-called common benefits fund, controlled by the leadership of the plaintiffs’ counsel. When litigating the silicone gel breast implant cases involving claims of autoimmune disease became untenable due to an overwhelming scientific consensus against their causal claims,[4] the leadership of the plaintiffs’ steering committee gave the remaining money to SKAPP, rather than returning the money to the plaintiffs themselves.  David Michaels and his colleagues at SKAPP then misrepresented the source of the money as coming from a “trust fund” established by the federal court, which sounded rather like a neutral, disinterested source. This fund, however, was “walking around” money for the plaintiffs’ lawyers, which belonged to the settling plaintiffs, and which was diverted into a major propaganda effort against the judicial gatekeeping of expert witness opinion testimony.[5] A disinterested reader might well believe that David Michaels thus has some deep personal experience with “dark money,” and “the science of deception.” Mashey might be well advised to consider the adjacency issues raised by his placing such uncritical trust in what Michaels has published.

Regardless of David Michaels’ rhetoric, doubt is not such a bad thing in the face of uncertain and inconclusive evidence. In my view, we could use more doubt, and open-minded thought. Bertrand Russell is generally credited with having written some years ago:

“The biggest cause of trouble in the world today is that the stupid people are so sure about things and the intelligent folks are so full of doubts.”

What are we to make then of the charge by Dorothy Bishop that the conference would not be about regular scientific debate, but

“about weaponising the reproducibility debate to bolster the message that everything in science is uncertain — which is very convenient for those who wish to promote fringe ideas.”

I attended and presented at the conference because I have a long-standing interest in how scientific validity is assessed in the scientific and in the legal world. I have been litigating such issues in many different contexts for over 35 years, with notable scientific experts occasionally on either side. One phenomenon I have observed repeatedly is that expert witnesses of the greatest skill, experience, and knowledge are prone to cognitive biases, fallacies, and other errors. One of my jobs as a legal advocate is to make sure that my own expert witnesses engage fully with the evidence as well as how my adversaries are interpreting the evidence. In other words, expert witnesses of the highest scientific caliber succumb to biases in interpreting studies and evidence.

A quick anecdote, a war story, will, I hope, make the point. A few years ago, I was helping a scientist get ready to testify in a case involving welding fume exposure and Parkinson’s disease. The scientist arrived with some PowerPoint slides, one of which commented that a study relied upon by plaintiffs’ expert witnesses had a fatal design flaw that rendered its conclusions invalid. Another slide embraced a study, sponsored by a co-defendant company, which had a null result but the same design flaw called out in the study used by plaintiffs’ witnesses. It was one in the morning, but I gently pointed out the inconsistency, and the scientist immediately saw the problem and modified his slides.

The next day, my adversary noticed the absence of the codefendant’s study from the group of studies this scientist had relied upon. He cross-examined the scientist about why he had left out a study, which the codefendant had actually sponsored. The defense expert witness testified that the omitted study had the same design flaw as seen in the study embraced by plaintiffs’ expert witnesses, and that it had to be consigned to the same fate. The defense won this case, and long after the celebration died down, I received a very angry call from a lawyer for the codefendant. The embrace of bad studies and invalid inferences is not the exclusive province of the plaintiffs’ bar.

My response to Dorothy Bishop is that science ultimately has no political friends, although political actors will try to use criteria of validity selectively to arrive at convenient, and agreeable results. Do liberals ever advance junk science claims? Just say the words: Robert F. Kennedy, Jr. How bizarre and absurd for Kennedy to come out of a meeting with Trump’s organization, to proclaim a new vaccine committee to investigate autism outcomes! Although the issue has been explored in detail in medical journals for the last two decades, apparently there can even be bipartisan junk science. Another “litmus test” for conservatives would be whether they speak out against what are, in my view, unsubstantiated laws in several “Red States,” which mandate that physicians tell women who are seeking abortions that abortions cause breast cancer. There have been, to be sure, some studies that reported increased risks, but they were mostly case-control studies in which recall and reporting biases were uncontrolled. Much better, larger cohort studies done with unbiased information about history of abortions failed to support the association, which no medical organization has taken to be causal. This is actually a good example of irreproducibility that is corrected by the normal evolutionary process of scientific research, with political exploitation of the earlier, less valid studies.

Did presenters at the Fixing Science conference selectively present and challenge studies? It is difficult for me to say, not having a background in climate science. I participated in the conference to talk about how courts deal with problems of unreliable expert witness testimony and reliance upon unreliable studies. But what I heard at the conference included two main speakers arguing that climate change and its human cause are real. The thrust of the most data-rich presentation was that many climate models advanced are overstated and not properly calibrated. Is Bishop really saying that we cannot have a civil conversation about whether some climate change models are poorly done and poorly validated? Assuming that the position I heard is a reasonable interpretation of the data and the models, it establishes a “floor” in opposition to the ceilings asserted by other climate scientists. There are some implications; perhaps the National Association of Scholars should condemn Donald Trump and others who claim that climate change is a hoax. Of course, condemning Trump every time he says something false, stupid, and unsupported would be a full-time job. Having staked out an interest in climate change, the Association might well consider balancing the negative impression others have of it as “deniers.”

The Science Brief

Back in June 2018, the National Association of Scholars issued a Science Brief, which it described as its official position statement in the area. A link to the brief online was broken, but a copy of the brief was distributed to those who attended the Fixing Science conference in Oakland. The NAS website does contain an open letter from Dr. Peter Wood, the president of the NAS, who described the brief thus:

“the positions we have put forward in these briefs are not settled once and for all. We expect NAS members will critique them. Please read and consider them. Are there essential points we got wrong? Others that we left out? Are there good points that could be made better?

We are not aiming to compile an NAS catechism. Rather, we are asked frequently by members, academics who are weighing whether to join, reporters, and others what NAS ‘thinks’ about various matters. Our 2,600 members (and growing) no doubt think a lot of different things. We prize that intellectual diversity and always welcome voices of dissent on our website, in our conferences, and in our print publications. But it helps if we can present a statement that offers a first-order approximation of how NAS’s general principles apply to particular disciplines or areas of inquiry.

We also hope that these issue briefs will make NAS more visible and that they will assist scholars who are finding their way in the maze of contemporary academic life.”

As a preface to an attempt to address general principles, Peter Wood’s language struck me as liberal, in the best sense of open-minded and generous in spirit to the possibility of reasoned disagreement.

So what are the NAS principles when it comes to science? Because the Science Brief seems not to be online at the moment, I will quote it here at length:

OVERVIEW

The National Association of Scholars (NAS) supports the proper teaching and practice of science: the systematic exercise of reason, observation, hypothesis, and experiment aimed at understanding and making reliable predictions about the material world. We work to keep science as a mode of inquiry engaged in the disinterested pursuit of truth rather than a collection of ‘settled’ conclusions. We also work to integrate course requirements in the unique history of Western science into undergraduate core curricula and distribution requirements. The NAS promotes scientific freedom and transparency.

We support researchers’ freedom to formulate and test any scientific hypothesis, unconstrained by political inhibitions. We support researchers’ freedom to pursue any scientific experiment, within ethical research guidelines. We support transparent scientific research, to foster the scientific community’s collective search for truth.

The NAS supports course requirements on the history and the nature of the Western scientific tradition.

All students should learn a coherent general narrative of the history of science that tells how the scientific disciplines interrelate. We work to restore core curricula that include both the unique history of Western science and an introduction to the distinctive mode of Western scientific reasoning. We also work to add new requirements in statistics and experimental design for majors and graduate students in the sciences and social sciences.

The NAS works to reform the practice of modern science so that it generates reproducible results. Modern science and social science are crippled by a crisis of reproducibility. This crisis springs from a combination of misused statistics, slipshod research techniques, and political groupthink. We aim to eliminate the crisis of reproducibility by grounding scientific practice in the meticulous traditions of Western scientific thought and rigorous reproducibility standards.

The NAS works to eliminate the politicization of undergraduate science education.

Our priority is to dismantle advocacy-based science, which discards the exercise of rational skepticism in pursuit of truth when it explicitly declares that scientific inquiry should serve policy advocacy. We therefore work to remove advocacy-based science from the classroom and from university bureaucracies. We also criticize student movements that demand the replacement of disinterested scientific inquiry with advocacy-based science. We focus our critiques on disciplines such as climate science that are mostly engaged in policy advocacy.

The NAS tracks scientific controversies that affect public policy, studies the remedies that scientists propose, and criticizes laws, regulations, and proposed policies based upon advocacy-based science.

We do this to prevent a vicious cycle in which advocacy-based science justifies the misuse of government – and private funding to support yet more advocacy-based science. We also work to reform the administration of government science funding so as to prevent its capture by advocacy-scientists.  The NAS’s scientific reports draw on the expertise of its member scholars and staff, as well as independent scholars. Our aim is to provide professionally credible critiques of America’s science education and science-based public policy.

John Mashey in his critique of the NAS snarkily comments that folks at the NAS lack the expertise to make the assessments they call for. Considering that Mashey is a computer scientist, without training in the climate or life sciences, his comments fall short of their mark. Still, if he were to have something worthwhile to say, and he supported his statements by sufficient evidence and reasoning, I believe we should take it seriously.

Nonetheless, the NAS statement of principles, and its concerns about how science and statistics are taught, are unexceptionable. I suspect that neither Mashey nor anyone else is against scientific freedom, methodological rigor, and ethical, transparent research.

The scientific, mathematical, and statistical literacy of most judges and lawyers is poor indeed. The Law School Admission Test (LSAT) does not ask any questions about statistical reasoning. A jury trial is not a fair, adequate opportunity to teach jurors the intricacies of statistical and scientific methods. Most medical schools still do not teach a course in experimental design and statistical analysis. Until recently, the Medical College Admission Test (MCAT) did not ask any questions of a statistical nature, and the test still does not require applicants to have taken a full course in statistics. I do not believe any reasonable person could be against the NAS’s call for better statistical education for scientists, and I would add for policy makers. Certainly, Mashey offers no arguments or insights on this topic.

Perhaps Mashey is wary of the position that we should be skeptical of advocacy-based science, for fear that climate-change science will come in for unwelcomed attention. If the science is sound, the data accurate, and the models valid, then this science does not need to be privileged and protected from criticism. Whether Mashey cares to acknowledge the phenomenon or not, scientists do become personally invested in their hypotheses.

The NAS statement of principles in its Science Brief thus seems worthy of everyone’s support. Whether the NAS is scrupulous in applying its own principles to positions it takes will require investigation and cautious vigilance. Still, I think Mashey should not judge anyone harshly lest he be so judged. We are a country of great principles, but a long history of indifferent and sometimes poor implementation. To take just a few obvious examples, despite the stirring words in the Declaration of Independence about the equality of all men, native people, women, and African slaves were treated in distinctly unequal and deplorable ways. Although our Constitution was amended after the Civil War to enfranchise former slaves, our federal government, after an all-too-short period of Reconstruction, failed to enforce the letter or the spirit of the Civil War amendments for 100 years, and then some. Less than seven years after our Constitution was amended to include freedom from governmental interference with speech or publication, a Federalist Congress passed the Alien and Sedition Acts, which President Adams signed into law in 1798. It would take over 100 years before the United States Supreme Court would make a political reality of the full promise of the First Amendment.

In these sad historical events, one thing is clear. The promise and hope of clearly stated principles did ultimately prevail. To me, the lesson is not to belittle the principles or the people, but to hold the latter to the former. If Mashey believes that the NAS is inconsistent or hypocritical about its embrace of what otherwise seem like worthwhile first principles, he should say so. For my part, I think the NAS will find it difficult to avoid a charge of selectivity if it were to criticize climate-change science and not cast a wider net.

Finally, I can say that the event sponsored by the Independent Institute and the NAS featured speakers with diverse, disparate opinions. Some speakers denied that there was a “crisis,” and some saw the crisis as overwhelming and destructive of sound science. I heard some casual expressions of climate-change skepticism, but the most serious, sustained look at the actual data and models yielded an affirmation of anthropogenic climate change. In the area of health effects, the area of scientific study most relevant to what I do, I heard a fairly wide consensus about the need to infuse greater rigor into methodology and to reduce investigators’ freedom to cherry-pick data and hypotheses after data collection is finished. Even so, there were speakers with stark disagreements over methods. The conference was an important airing and exchange of many ideas. I believe that those who attended and participated went away with less orthodoxy and much to contemplate. The Independent Institute and the NAS deserve praise for having organized and sponsored the event. The intellectual courage of the sponsors in inviting such an intellectually diverse group of speakers undermines the charge by Mashey, Teytelman, and Bishop that the groups are simply shilling for Big Oil.


[1]        David Michaels, Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health (2008).

[2]        David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]        See, e.g., William Rubenstein, “On What a ‘Common Benefit Fee’ Is, Is Not, and Should Be,” Class Action Attorney Fee Digest 87, 89 (March 2009).

[4]        In 1999, after much deliberation, the Institute of Medicine issued a report that found the scientific claims in the silicone litigation to be without scientific support. Stuart Bondurant, et al., Safety of Silicone Breast Implants (I.O.M. 1999).

[5]        I have written about the lack of transparency and outright deception in SKAPP’s disclosures before; see “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “The Capture of the Public Health Community by the Litigation Industry” (Feb. 10, 2014); “Daubert’s Silver Anniversary – Retrospective View of Its Friends and Enemies” (Oct. 21, 2018); “David Michaels’ Public Relations Problem” (Dec. 2, 2011).

Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma

January 24th, 2020

The phosphodiesterase type 5 inhibitor medications (PDE5i) seem to arouse the litigation propensities of the lawsuit industry. The PDE5i medications (sildenafil, tadalafil, etc.) have multiple indications, but they are perhaps best known for their ability to induce penile erections, which in some situations can be a very useful outcome.

The launch of Viagra in 1998 was followed by litigation that claimed the drug caused heart attacks, and not the romantic kind. The only broken hearts, however, were those of the plaintiffs’ lawyers and their expert witnesses who saw their litigation claims excluded and dismissed.[1]

Then came claims that the PDE5i medications caused non-arteritic anterior ischemic optic neuropathy (“NAION”), based upon a dubious epidemiologic study by Dr. Gerald McGwin. This litigation demonstrated, if anything, that while love may be blind, erections need not be.[2] The NAION cases were consolidated in a multi-district litigation (MDL) in front of Judge Paul Magnuson, in the District of Minnesota. After considerable back and forth, Judge Magnuson ultimately concluded that the McGwin study was untrustworthy, and the NAION claims were dismissed.[3]

In 2014, the American Medical Association’s internal medicine journal published an observational epidemiologic study of sildenafil (Viagra) use and melanoma.[4] The authors of the study interpreted their study modestly, concluding:

“[s]ildenafil use may be associated with an increased risk of developing melanoma. Although this study is insufficient to alter clinical recommendations, we support a need for continued investigation of this association.”

Although the Li study eschewed causal conclusions and new clinical recommendations in view of the need for more research into the issue, the litigation industry filed lawsuits, claiming causality.[5]

In the new natural order of things, as soon as the litigation industry cranks out more than a few complaints, an MDL results, and the PDE5i – melanoma claims were no exception. By spring 2016, plaintiffs’ counsel had collected ten cases, a minyan, sufficient for an MDL.[6] The MDL plaintiffs named the manufacturers of sildenafil and tadalafil, two of the more widely prescribed PDE5i medications, as defendants on behalf of putative victims.

While the MDL cases were winding their way through discovery and possible trials, additional studies and meta-analyses were published. None of the subsequent studies, including the systematic reviews and meta-analyses, concluded that there was a causal association. Most scientists who were publishing on the issue opined that systematic error (generally confounding) prevented a causal interpretation of the data.[7]

Many of the observational studies found statistically significant increased relative risks of about 1.1 to 1.2 (10 to 20 percent), typically with upper bounds of their 95% confidence intervals below 2.0. The only scientists who inferred general causation from the available evidence were those who had been recruited and retained by plaintiffs’ counsel. As plaintiffs’ expert witnesses, they contended that the Li study, and the several studies that became available afterwards, collectively showed that PDE5i drugs cause melanoma in humans.
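
For readers who want to see why a relative risk of 1.1 or 1.2 can be statistically significant and yet weak, here is a minimal sketch in Python, using wholly hypothetical counts (not data from any of the PDE5i studies), of the standard calculation of a risk ratio and its 95% confidence interval:

```python
import math

# Wholly hypothetical counts, chosen only to illustrate the arithmetic:
# a large cohort can yield a statistically significant relative risk of
# about 1.15 whose 95% confidence interval still sits well below 2.0.
exposed_cases, exposed_total = 575, 50_000
unexposed_cases, unexposed_total = 500, 50_000

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed

# Standard Wald interval on the log scale for a risk ratio.
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total
    + 1 / unexposed_cases - 1 / unexposed_total
)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# With these made-up numbers: RR = 1.15, 95% CI roughly (1.02, 1.30) --
# statistically significant, yet "undeniably" not a strong association.
```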

Not surprisingly, given the absence of any non-litigation experts endorsing the causal conclusion, the defendants challenged plaintiffs’ proffered expert witnesses under Federal Rule of Evidence 702. Plaintiffs’ counsel also embraced judicial gatekeeping and challenged the defense experts. The MDL trial judge, the Hon. Richard Seeborg, held hearings with four days of viva voce testimony from four of plaintiffs’ expert witnesses (two on biological plausibility, and two on epidemiology), and three of the defense’s experts. Last week, Judge Seeborg ruled by granting in part, and denying in part, the parties’ motions.[8]

The Decision

The MDL trial judge’s opinion is noteworthy in many respects. First, Judge Richard Seeborg cited and applied Rule 702, a statute, and not dicta from case law that predates the most recent statutory version of the rule. As a legal process matter, this respect for judicial process and for the difference in legal authority between statutory and common law was refreshing. Second, the judge framed the Rule 702 issue, in line with the statute and Ninth Circuit precedent, as an inquiry into whether expert witnesses deviated from the standard of care of how scientists “conduct their research and reach their conclusions.”[9]

Biological Plausibility

Plaintiffs proffered three expert witnesses on biological plausibility, Drs. Rizwan Haq, Anand Ganesan, and Gary Piazza. All were subject to motions to exclude under Rule 702. Judge Seeborg denied the defense motions against all three of plaintiffs’ plausibility witnesses.[10]

The MDL judge determined that biological plausibility is neither necessary nor sufficient for inferring causation in science or in the law. The defense argued that the plausibility witnesses relied upon animal and cell culture studies that were unrealistic models of the human experience.[11] The MDL court, however, found that the standard for opinions on biological plausibility is relatively forgiving, and that the testimony of all three of plaintiffs’ proffered witnesses was admissible.

The subjective nature of opinions about biological plausibility is widely recognized in medical science.[12] Plausibility determinations are typically “Just So” stories, offered in the absence of hard evidence that postulated mechanisms are actually involved in a real causal pathway in human beings.

Causal Association

The real issue in the MDL hearings was the conclusion reached by plaintiffs’ expert witnesses that the PDE5i medications cause melanoma. The MDL court did not have to determine whether epidemiologic studies were necessary for such a causal conclusion. Plaintiffs’ counsel had proffered three expert witnesses with more or less expertise in epidemiology: Drs. Rehana Ahmed-Saucedo, Sonal Singh, and Feng Liu-Smith. All of plaintiffs’ epidemiology witnesses, and certainly all of defendants’ experts, implicitly if not explicitly embraced the proposition that analytical epidemiology was necessary to determine whether PDE5i medications can cause melanoma.

In their motions to exclude Ahmed-Saucedo, Singh, and Liu-Smith, the defense pointed out that, although many of the studies yielded statistically significant estimates of melanoma risk, none of the available studies adequately accounted for systematic bias in the form of confounding. Although the plaintiffs’ plausibility expert witnesses advanced “Just-So” stories about PDE5i and melanoma, the available studies showed an almost identical increased risk of basal cell carcinoma of the skin, which would be explained by confounding, but not by plaintiffs’ postulated mechanisms.[13]

The MDL court acknowledged that whether epidemiologic studies “adequately considered” confounding was “central” to the Rule 702 inquiry. Without any substantial analysis, however, the court gave its own ipse dixit that the existence vel non of confounding was an issue for cross-examination and the jury’s resolution.[14] Whether there was a reasonably valid association between PDE5i and melanoma was a jury question. This judicial refusal to engage with the issue of confounding was one of the disappointing aspects of the decision.

The MDL court was less forgiving when it came to the plaintiffs’ epidemiology expert witnesses’ assessment of the association as causal. All the parties’ epidemiology witnesses invoked Sir Austin Bradford Hill’s viewpoints or factors for judging whether associations were causal.[15] Although they embraced Hill’s viewpoints on causation, the plaintiffs’ epidemiologic expert witnesses had a much more difficult time faithfully applying them to the evidence at hand. The MDL court concluded that the plaintiffs’ witnesses deviated from their own professional standard of care in their analysis of the data.[16]

Hill’s first enumerated factor was “strength of association,” which is typically expressed epidemiologically as a risk ratio or a risk difference. The MDL court noted that the extant epidemiologic studies generally showed relative risks around 1.2 for PDE5i and melanoma, which was “undeniably” not a strong association.[17]

The plaintiffs’ epidemiology witnesses were at sea on how to explain away the lack of strength in the putative association. Dr. Ahmed-Saucedo retreated into an emphasis on how all or most of the studies found some increased risk, but the MDL court correctly found that this ruse was merely a conflation of strength with consistency of the observed associations. Dr. Ahmed-Saucedo’s dismissal of a dose-response relationship, another Hill factor, as unimportant sealed her fate. The MDL court found that her Bradford Hill analysis was “unduly results-driven,” and that her proffered testimony was not admissible.[18] The MDL court found that Dr. Feng Liu-Smith similarly conflated strength of association with consistency, an error that was too great a professional deviation from the standard of care.[19]

Dr. Sonal Singh fared no better after he contradicted his own prior testimony that there is an order of importance to the Hill factors, with “strength of association” at or near the top. In the face of a set of studies, none of which showed a strong association, Dr. Singh abandoned his own interpretative principle to suit the litigation needs of the case. His analysis placed the greatest weight on the Li study, which had the highest risk ratio, but he failed to advance any persuasive reason for his emphasis on one of the smallest studies available. The MDL court found Dr. Singh’s claim to have weighed strength of association heavily, despite the obvious absence of strong associations, puzzling and too great an analytical gap to abide.[20]

Judge Seeborg thus concluded that while the plaintiffs’ expert witnesses could opine that there was an association, which was arguably plausible, they could not, under Rule 702, contend that the association was causal. In attempting to advance an argument that the association met Bradford Hill’s factors for causality, the plaintiffs’ witnesses had ignored, misrepresented, or confused one of the most important factors, strength of the association, in a way that revealed their analyses to be results-driven and unfaithful to the methodology they claimed to have followed. Judge Seeborg emphasized a feature of the revised Rule 702, which often is ignored by his fellow federal judges:[21]

“Under the amendment, as under Daubert, when an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied. See Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 598 (9th Cir. 1996). The amendment specifically provides that the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case.”

Given that the plaintiffs’ witnesses purported to apply a generally accepted methodology, Judge Seeborg was left to question why they would conclude causality when no one else in their field had done so.[22] The epidemiologic issue had been around for several years, and addressed not just in observational studies, but systematically reviewed and meta-analyzed. The absence of published causal conclusions was not just an absence of evidence, but evidence of absence of expert support for how plaintiffs’ expert witnesses applied the Bradford Hill factors.

Reliance Upon Studies That Did Not Conclude Causation Existed

Parties challenging causal claims will sometimes point to the absence of a causal conclusion in the publications of the individual epidemiologic studies that are the main basis for the causal claim. In the PDE5i-melanoma cases, the defense advanced this argument unsuccessfully. The MDL court rejected the defense argument because an individual study rarely offers a comprehensive review of all the pertinent evidence for or against causality; the study authors are mostly concerned with conveying the results of their own study.[23] The authors may have a short discussion of other study results as the rationale for their own study, but such discussions are often limited in scope and purpose. Judge Seeborg, in this latest round of PDE5i litigation, thus did not fault plaintiffs’ witnesses’ reliance upon epidemiologic or mechanistic studies that individually did not assert causal conclusions; rather, it was the absence of causal conclusions in systematic reviews, meta-analyses, narrative reviews, regulatory agency pronouncements, or clinical guidelines that ultimately raised the fatal inference that the plaintiffs’ witnesses were not faithfully deploying a generally accepted methodology.

The defense argument that pointed to the individual epidemiologic studies themselves derives some legal credibility from the Supreme Court’s opinion in General Electric Co. v. Joiner, 522 U.S. 136 (1997). In Joiner, the SCOTUS took plaintiffs’ expert witnesses to task for drawing stronger conclusions than were offered in the papers upon which they relied. Chief Justice Rehnquist gave considerable weight to the consideration that the plaintiffs’ expert witnesses relied upon studies, the authors of which explicitly refused to interpret as supporting a conclusion of human disease causation.[24]

Joiner’s criticisms of the reliance upon studies that do not themselves reach causal conclusions have gained a foothold in the case law interpreting Rule 702. The Fifth Circuit, for example, has declared:[25]

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

This aspect of Joiner may properly limit the over-interpretation or misinterpretation of an individual study, which seems fine.[26] The Joiner case may, however, perpetuate an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions. The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ discussion section is that sometimes study authors grossly over-interpret their data. When it comes to scientific studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), the discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible in trials. In other words, the misuse of non-rigorous comments in published articles can cut both ways.

There have been, and will continue to be, occasions in which published studies contain data, relevant and important to the causation issue, but which studies also contain speculative, personal opinions expressed in the Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither side’s expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be a dispositive factor, other than raising questions for the reviewing court.

In exercising their gatekeeping function, trial judges should exercise care in how they assess expert witnesses’ reliance upon study data and analyses, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should, in some instances,  have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

Judge Seeborg sensibly seems to have distinguished between the absence of causal conclusions in individual epidemiologic studies and the absence of causal conclusions in any reputable medical literature.[27] He refused to be ensnared in the Joiner argument because:[28]

“Epidemiology studies typically only expressly address whether an association exists between agents such as sildenafil and tadalafil and outcomes like melanoma progression. As explained in In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1116 (N.D. Cal. 2018), ‘[w]hether the agents cause the outcomes, however, ordinarily cannot be proven by epidemiological studies alone; an evaluation of causation requires epidemiologists to exercise judgment about the import of those studies and to consider them in context’.”

This new MDL opinion, relying upon the Advisory Committee Notes to Rule 702, is thus a more felicitous statement of the goals of gatekeeping.

Confidence Intervals

As welcome as some aspects of Judge Seeborg’s opinion are, the decision is not without mistakes. The district judge, like so many of his judicial colleagues, trips over the proper interpretation of a confidence interval:[29]

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”

This statement is inescapably wrong. The 95 percent probability attaches to the capturing of the true parameter – the actual relative risk – in the long run of repeated confidence intervals that result from repeated sampling of the same sample size, in the same manner, from the same population. In Judge Seeborg’s example, the next sample might give a relative risk point estimate 1.9, and that new estimate will have a confidence interval that may run from just below 1.0 to over 3. A third sample might turn up a relative risk estimate of 0.8, with a confidence interval that runs from say 0.3 to 1.4. Neither the second nor the third sample would be reasonably incompatible with the first. A more accurate assessment of the true parameter is that it will be somewhere between 0.3 and 3, a considerably broader range for the 95 percent.
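
A short simulation makes the point. The sketch below, in Python, uses assumed, made-up parameters (a true relative risk fixed at 1.4, a 2% baseline risk, 2,000 subjects per arm) and repeatedly draws cohort studies from that population; roughly 95 percent of the resulting intervals cover the true value, which is all the “95 percent” ever promised:

```python
import math
import random

random.seed(0)

# Assumed, hypothetical population: true relative risk fixed at 1.4,
# baseline risk of 2% among the unexposed, 2,000 subjects per arm.
TRUE_RR, BASE_RISK, N = 1.4, 0.02, 2_000

def one_study():
    """Simulate one cohort study; return its 95% CI for the risk ratio."""
    exposed_cases = sum(random.random() < BASE_RISK * TRUE_RR for _ in range(N))
    unexposed_cases = sum(random.random() < BASE_RISK for _ in range(N))
    if exposed_cases == 0 or unexposed_cases == 0:
        return None
    rr = (exposed_cases / N) / (unexposed_cases / N)
    se = math.sqrt(1 / exposed_cases - 1 / N + 1 / unexposed_cases - 1 / N)
    return (math.exp(math.log(rr) - 1.96 * se),
            math.exp(math.log(rr) + 1.96 * se))

intervals = [ci for ci in (one_study() for _ in range(2_000)) if ci]
coverage = sum(lo <= TRUE_RR <= hi for lo, hi in intervals) / len(intervals)
print(f"Share of intervals covering the true RR: {coverage:.3f}")
# Roughly 0.95: the "95 percent" describes the long-run performance of the
# interval-generating procedure, not the probability that any one interval
# from any one study contains the true relative risk.
```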

Judge Seeborg’s error is sadly all too common. Whenever I see the error, I wonder whence it came. Often the error is in briefs of both plaintiffs’ and defense counsel. In this case, I did not see the erroneous assertion about confidence intervals made in plaintiffs’ or defendants’ briefs.


[1]  Brumley  v. Pfizer, Inc., 200 F.R.D. 596 (S.D. Tex. 2001) (excluding plaintiffs’ expert witness who claimed that Viagra caused heart attack); Selig v. Pfizer, Inc., 185 Misc. 2d 600 (N.Y. Cty. S. Ct. 2000) (excluding plaintiff’s expert witness), aff’d, 290 A.D. 2d 319, 735 N.Y.S. 2d 549 (2002).

[2]  “Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I” (July 7, 2012); “Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference” (July 8, 2012).

[3]  In re Viagra Prods. Liab. Litig., 572 F.Supp. 2d 1071 (D. Minn. 2008), 658 F. Supp. 2d 936 (D. Minn. 2009), and 658 F. Supp. 2d 950 (D. Minn. 2009).

[4]  Wen-Qing Li, Abrar A. Qureshi, Kathleen C. Robinson, and Jiali Han, “Sildenafil use and increased risk of incident melanoma in US men: a prospective cohort study,” 174 J. Am. Med. Ass’n Intern. Med. 964 (2014).

[5]  See, e.g., Herrara v. Pfizer Inc., Complaint in 3:15-cv-04888 (N.D. Calif. Oct. 23, 2015); Diana Novak Jones, “Viagra Increases Risk Of Developing Melanoma, Suit Says,” Law360 (Oct. 26, 2015).

[6]  See In re Viagra (Sildenafil Citrate) Prods. Liab. Litig., 176 F. Supp. 3d 1377, 1378 (J.P.M.L. 2016).

[7]  See, e.g., Jenny Z. Wang, Stephanie Le , Claire Alexanian, Sucharita Boddu, Alexander Merleev, Alina Marusina, and Emanual Maverakis, “No Causal Link between Phosphodiesterase Type 5 Inhibition and Melanoma,” 37 World J. Men’s Health 313 (2019) (“There is currently no evidence to suggest that PDE5 inhibition in patients causes increased risk for melanoma. The few observational studies that demonstrated a positive association between PDE5 inhibitor use and melanoma often failed to account for major confounders. Nonetheless, the substantial evidence implicating PDE5 inhibition in the cyclic guanosine monophosphate (cGMP)-mediated melanoma pathway warrants further investigation in the clinical setting.”); Xinming Han, Yan Han, Yongsheng Zheng, Qiang Sun, Tao Ma, Li Dai, Junyi Zhang, and Lianji Xu, “Use of phosphodiesterase type 5 inhibitors and risk of melanoma: a meta-analysis of observational studies,” 11 OncoTargets & Therapy 711 (2018).

[8]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., Case No. 16-md-02691-RS, Order Granting in Part and Denying in Part Motions to Exclude Expert Testimony (N.D. Calif. Jan. 13, 2020) [cited as Opinion].

[9]  Opinion at 8 (“determin[ing] whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions”), citing Daubert v. Merrell Dow Pharm., Inc. (Daubert II), 43 F.3d 1311, 1317 (9th Cir. 1995).

[10]  Opinion at 11.

[11]  Opinion at 11-13.

[12]  See Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, “Introduction,” chap. 1, in Kenneth J. Rothman, et al., eds., Modern Epidemiology at 29 (3d ed. 2008) (“no approach can transform plausibility into an objective causal criterion”).

[13]  Opinion at 15-16.

[14]  Opinion at 16-17.

[15]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965); see also “Woodside & Davis on the Bradford Hill Considerations” (April 23, 2013).

[16]  Opinion at 17 – 21.

[17]  Opinion at 18. The MDL court cited In re Silicone Gel Breast Implants Prod. Liab. Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004), for the proposition that relative risks greater than 2.0 permit the inference that the agent under study “was more likely than not responsible for a particular individual’s disease.”

[18]  Opinion at 18.

[19]  Opinion at 20.

[20]  Opinion at 19.

[21]  Opinion at 21, quoting from Rule 702, Advisory Committee Notes (emphasis in Judge Seeborg’s opinion).

[22]  Opinion at 21.

[23]  See “Follow the Data, Not the Discussion” (May 2, 2010).

[24]  Joiner, 522 U.S. at 145-46 (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

[25]  Huss v. Gayden, 571 F.3d 442  (5th Cir. 2009) (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia); see also McClain v. Metabolife Internat’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions); Happel v. Walmart, 602 F.3d 820, 826 (7th Cir. 2010) (observing that “is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven”).

[26]  In re Accutane Prods. Liab. Litig., 511 F. Supp. 2d 1288, 1291 (M.D. Fla. 2007) (“When an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study. That is, he must not draw overreaching conclusions.”) (internal citations omitted).

[27]  See Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 785 (D.N.J. 1996), aff’d, 118 F.3d 1577 (3d Cir. 1997) (“law warns against use of medical literature to draw conclusions not drawn in the literature itself …. Reliance upon medical literature for conclusions not drawn therein is not an accepted scientific methodology.”).

[28]  Opinion at 14.

[29]  Opinion at 4 – 5.

Is the IARC Lost in the Weeds?

November 30th, 2019

A couple of years ago, I met David Zaruk at a Society for Risk Analysis meeting, where we were both presenting. I was aware of David’s blogging and investigative journalism, but meeting him gave me a greater appreciation for the breadth and depth of his work. For those of you who do not know David, he is present in cyberspace as the Risk-Monger who blogs about risk and science communications issues. His blog has featured cutting-edge exposés about the distortions in risk communications perpetuated by the advocacy of non-governmental organizations (NGOs). Previously, I have recorded my objections to the intellectual arrogance of some such organizations that purport to speak on behalf of the public interest, when often they act in cahoots with the lawsuit industry in the manufacturing of tort and environmental litigation.

David’s writing on the lobbying and control of NGOs by plaintiffs’ lawyers from the United States should be required reading for everyone who wants to understand how litigation sausage is made. His series, “SlimeGate,” details the interplay among NGO lobbying, lawsuit industry maneuvering, and carcinogen determinations at the International Agency for Research on Cancer (IARC). The IARC, a branch of the World Health Organization, is headquartered in Lyon, France. The IARC convenes “working groups” to review the scientific studies of the carcinogenicity of various substances and processes. The IARC working groups produce “monographs” of their reviews, and the IARC publishes these monographs, in print and on-line. The United States is in the top tier of participating countries for funding the IARC.

The IARC was founded in 1965, when observational epidemiology was still very much an emerging science, with expertise concentrated in only a few countries. For its first few decades, the IARC enjoyed a good reputation, and its monographs were considered definitive reviews, especially under its first director, Dr. John Higginson, from 1966 to 1981.[1] By the end of the 20th century, the need for the IARC and its reviews had waned, as the methods of systematic review and meta-analysis had evolved significantly, and had become more widely standardized and practiced.

Understandably, the IARC has been concerned that the members of its working groups should be viewed as disinterested scientists. Unfortunately, this concern has been translated into an asymmetrical standard that excludes anyone with a hint of manufacturing connection, but keeps the door open for those scientists with deep lawsuit industry connections. Speaking on behalf of the plaintiffs’ bar, Michael Papantonio, a plaintiffs’ lawyer who founded Mass Torts Made Perfect, noted that “We [the lawsuit industry] operate just like any other industry.”[2]

David Zaruk has shown how this asymmetry has been exploited mercilessly by the lawsuit industry and its agents in connection with the IARC’s review of glyphosate.[3] The resulting IARC classification of glyphosate has led to a litigation firestorm and an all-out assault on agricultural sustainability and productivity.[4]

The anomaly of the IARC’s glyphosate classification has been noted by scientists as well. Dr. Geoffrey Kabat is a cancer epidemiologist, who has written perceptively on the misunderstandings and distortions of cancer risk assessments in various settings.[5] He has previously written about glyphosate in Forbes and elsewhere, but recently he has written an important essay on glyphosate in Issues in Science and Technology, which is published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. In his essay, Dr. Kabat details how the IARC’s evaluation of glyphosate is an outlier in the scientific and regulatory world, and is not well supported by the available evidence.[6]

The problems with the IARC are both substantive and procedural.[7] One of the key problems facing IARC evaluations is an incoherent classification scheme. IARC evaluations classify putative human carcinogenic risks into five categories: Group 1 (known), Group 2A (probably), Group 2B (possibly), Group 3 (unclassifiable), and Group 4 (probably not). Group 4 is virtually an empty set, with only one substance, caprolactam ((CH2)5C(O)NH), an organic compound used in the manufacture of nylon.

In the IARC evaluation at issue, glyphosate was placed into Group 2A, which would seem to satisfy the legal system’s requirement that an exposure more likely than not causes the harm in question. Appearances and word usage, however, can be deceiving. Probability is a continuous scale from zero to one. In Bayesian decision making, zero and one are unavailable because if either were our starting point, no amount of evidence could ever change our judgment of the probability of causation (Cromwell’s Rule). The IARC informs us that its use of “probably” is quite idiosyncratic; the probability that a Group 2A agent causes cancer has “no quantitative” meaning. All the IARC intends is that a Group 2A classification “signifies a greater strength of evidence than possibly carcinogenic.”[8]
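
Cromwell’s Rule is easy to demonstrate. The toy calculation below, a sketch in Python of Bayes’ theorem in odds form, shows that a prior probability of exactly zero or one cannot be moved by any evidence, however strong (the likelihood ratio of 1,000 is an arbitrary, hypothetical value):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    if prior == 0.0:
        return 0.0      # zero prior odds stay zero, whatever the evidence
    if prior == 1.0:
        return 1.0      # certainty likewise cannot be budged
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Even evidence carrying a likelihood ratio of 1,000 (an arbitrary,
# hypothetical value) moves a prior of 0 or 1 not at all.
for p in (0.0, 0.1, 0.5, 1.0):
    print(f"prior {p:.1f} -> posterior {posterior(p, 1000):.4f}")
```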

In other words, Group 2A classifications are consistent with posterior probabilities of less than 0.5 (or 50 percent). A working group could judge the probability that a substance or process is carcinogenic to humans to be greater than zero, but no more than five or ten percent, and still vote for a 2A classification, in keeping with the IARC Preamble. This low probability threshold for a 2A classification converts the judgment of “probably carcinogenic” into a precautionary prescription, rendered when the most probable assessment is either ignorance or lack of causality. There is thus a practical certainty, close to 100%, that a 2A classification will confuse judges and juries, as well as the scientific community.

In IARC-speak, a 2A “probability” connotes “sufficient evidence” in experimental animals, and “limited evidence” in humans. A substance can receive a 2A classification even when the sufficient evidence of carcinogenicity comes from a single non-human animal species, even though other animal species fail to show carcinogenicity. A 2A classification can raise the thorny question in court whether a claimant is more like a rat or a mouse.

Similarly, “limited evidence” in humans can be based upon inconsistent observational studies that fail to measure and adjust for known and potential confounding risk factors and systematic biases. The 2A classification requires little substantively or semantically, and many 2A classifications leave juries and judges to determine whether a chemical or medication caused a human being’s cancer, when the basic predicates for Sir Austin Bradford Hill’s factors for causal judgment have not been met.[9]

In courtrooms, IARC 2A classifications should be excluded as legally irrelevant, under Rule 403. Even if a 2A IARC classification were a credible judgment of causation, admitting evidence of the classification would be “substantially outweighed by a danger of … unfair prejudice, confusing the issues, [and] misleading the jury….”[10]

The IARC may be lost in the weeds, but there is no need to fret. A little Roundup™ will help.


[1]  See John Higginson, “The International Agency for Research on Cancer: A Brief History of Its History, Mission, and Program,” 43 Toxicological Sci. 79 (1998).

[2]  Sara Randazzo & Jacob Bunge, “Inside the Mass-Tort Machine That Powers Thousands of Roundup Lawsuits,” Wall St. J. (Nov. 25, 2019).

[3]  David Zaruk, “The Corruption of IARC,” Risk Monger (Aug. 24, 2019); David Zaruk, “Greed, Lies and Glyphosate: The Portier Papers,” Risk Monger (Oct. 13, 2017).

[4]  Ted Williams, “Roundup Hysteria,” Slate Magazine (Oct. 14, 2019).

[5]  See, e.g., Geoffrey Kabat, Hyping Health Risks: Environmental Hazards in Everyday Life and the Science of Epidemiology (2008); Geoffrey Kabat, Getting Risk Right: Understanding the Science of Elusive Health Risks (2016).

[6]  Geoffrey Kabat, “Who’s Afraid of Roundup?” 36 Issues in Science and Technology (Fall 2019).

[7]  See Schachtman, “Infante-lizing the IARC” (May 13, 2018); “The IARC Process is Broken” (May 4, 2016). See also Eric Lasker and John Kalas, “Engaging with International Carcinogen Evaluations,” Law360 (Nov. 14, 2019).

[8]        “IARC Preamble to the IARC Monographs on the Identification of Carcinogenic Hazards to Humans,” at Sec. B.5., p.31 (Jan. 2019); see also “IARC Advisory Group Report on Preamble” (Sept. 2019).

[9]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965) (noting that only when “[o]ur observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” do we move on to consider the nine articulated factors for determining whether an association is causal).

[10]  Fed. R. Evid. 403.

 

Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it presents as the scientific method of hypothesis testing, however, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to not reasonably compatible with the study’s data. In observational studies, the statistical analysis must acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

4.10 Statistical Significance

“The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
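
A back-of-the-envelope calculation shows why the conflation matters. The sketch below uses assumed values for the prior probability, power, and significance level (none drawn from the Bench Book or from any actual litigation) to compute the probability that a “statistically significant” finding reflects a real effect; on these assumptions the answer falls below the preponderance standard, and nowhere near 95 percent:

```python
# Assumed values, for illustration only (none drawn from the Bench Book
# or from any actual litigation).
prior = 0.05   # prior probability that the tested hypothesis is true
power = 0.80   # probability of a significant result if the hypothesis is true
alpha = 0.05   # probability of a significant result if it is false

p_significant = power * prior + alpha * (1 - prior)
p_true_given_significant = power * prior / p_significant
print(f"P(hypothesis true | significant result) = {p_true_given_significant:.2f}")
# About 0.46 on these assumptions: below the preponderance standard,
# and nowhere near 95 percent.
```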

The Bench Book runs into problems in interpreting legal decisions, which would seem softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision to Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected or held to be indeterminate by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism, in which science is just a matter of placing subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to point the judicial reader toward more in-depth treatments of the statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of this Bench Book was funded by the State Justice Institute (SJI), a creation of federal legislation enacted with the laudable goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. A majority of the directors (six) are state court judges; the remainder are one state court administrator and four members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, the American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit 501(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges could go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual for Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies of Sciences, the Bench Book’s authors are lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite to the Reference Manual or the Manual for Complex Litigation at any point in its discussion of the federal law of expert witnesses or of scientific or statistical method. Perhaps taxpayers would have been spared substantial expense if state judges were simply encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).

The Shmeta-Analysis in Paoli

July 11th, 2019

In the Paoli Railroad yard litigation, plaintiffs claimed injuries and increased risk of future cancers from environmental exposure to polychlorinated biphenyls (PCBs). This massive litigation showed up before federal district judge Hon. Robert F. Kelly,[1] in the Eastern District of Pennsylvania, who may well have been the first judge to grapple with a litigation attempt to use meta-analysis to show a causal association.

One of the plaintiffs’ expert witnesses was the late William J. Nicholson, who was a professor at Mt. Sinai School of Medicine, and a colleague of Irving Selikoff. Nicholson was trained in physics, and had no professional training in epidemiology. Nonetheless, Nicholson was Selikoff’s go-to colleague for performing epidemiologic studies. After Selikoff withdrew from active testifying for plaintiffs in tort litigation, Nicholson was one of his colleagues who jumped into the fray as a surrogate advocate for Selikoff.[2]

For his opinion that PCBs were causally associated with liver cancer in humans,[3] Nicholson relied upon a report he wrote for the Ontario Ministry of Labor. [cited here as “Report”].[4] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[5]

The defense challenged the admissibility of Nicholson’s meta-analysis, on several grounds. The trial court decided the challenge based upon the Downing case, which was the law in the Third Circuit, before the Supreme Court decided Daubert.[6] The Downing case allowed some opportunity for consideration of reliability and validity concerns; there is, however, disappointingly little discussion of any actual validity concerns in the courts’ opinions.

The defense challenge to Nicholson’s proffered testimony on liver cancer turned on its characterization of meta-analysis as a “novel” technique, which is generally unreliable, and its claim that Nicholson’s meta-analysis in particular was unreliable. None of the individual studies that contributed data showed any “connection” between PCBs and liver cancer; nor did any individual study conclude that there was a causal association.

Of course, the appropriate response to this situation, with no one study finding a statistically significant association, or concluding that there was a causal association, should have been “so what?” One of the reasons to do a meta-analysis is that no available study was sufficiently large to find a statistically significant association, if one were there. As for drawing conclusions of causal associations, it is not the role or place of an individual study to synthesize all the available evidence into a principled conclusion of causation.
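
A simple illustration, with entirely hypothetical study results, shows why. The sketch below pools four modestly elevated but individually non-significant relative risks with a standard inverse-variance, fixed-effect method (a generic textbook approach, not Nicholson’s); the pooled interval is narrower than any single study’s and can exclude 1.0:

```python
import math

# Hypothetical studies: each relative risk is modestly elevated, but none is
# statistically significant on its own (every confidence interval crosses 1.0).
studies = [  # (relative risk, 95% CI lower bound, 95% CI upper bound)
    (1.3, 0.8, 2.1),
    (1.4, 0.9, 2.2),
    (1.2, 0.7, 2.0),
    (1.5, 0.9, 2.5),
]

# Inverse-variance, fixed-effect pooling on the log scale.
weights, log_rrs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out the SE from the CI
    weights.append(1 / se ** 2)
    log_rrs.append(math.log(rr))

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_lo = math.exp(pooled_log - 1.96 * pooled_se)
ci_hi = math.exp(pooled_log + 1.96 * pooled_se)
print(f"Pooled RR = {math.exp(pooled_log):.2f}, 95% CI ({ci_lo:.2f}, {ci_hi:.2f})")
# With these invented inputs the pooled RR is about 1.35, 95% CI (1.06, 1.72):
# narrower than any single study's interval, and excluding 1.0.
```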

In any event, the trial court concluded that the proffered novel technique lacked sufficient reliability, that the meta-analysis would “overwhelm, confuse, or mislead the jury,” and that the proffered meta-analysis on liver cancer was not sufficiently relevant to the facts of the case (in which no plaintiff had developed, or had died of, liver cancer). The trial court noted that the Report had not been peer-reviewed, and that it had not been accepted or relied upon by the Ontario government for any finding or policy decision. The trial court also expressed its concern that the proffered testimony along the lines of the Report would possibly confuse the jury because it appeared to be “scientific” and because Nicholson appeared to be qualified.

The Appeal

The Court of Appeals for the Third Circuit, in an opinion by Judge Becker, reversed Judge Kelly’s exclusion of the Nicholson Report, in an opinion that is still sometimes cited, even though Downing is no longer good law in the Circuit or anywhere else.[7] The Court was ultimately not persuaded that the trial court had handled the exclusion of Nicholson’s Report and its meta-analysis correctly, and it remanded the case for a do-over analysis.

Judge Becker described Nicholson’s Report as a “meta-analysis,” which pooled or “combined the results of numerous epidemiologic surveys in order to achieve a larger sample size, adjusted the results for differences in testing techniques, and drew his own scientific conclusions.”[8] Through this method, Nicholson claimed to have shown that “exposure to PCBs can cause liver, gall bladder and biliary tract disorders … even though none of the individual surveys supports such a conclusion when considered in isolation.”[9]

Validity

The appellate court gave no weight to the possibility that a meta-analysis would confuse a jury, or that its “scientific nature” or Nicholson’s credentials would lead a jury to give it more weight than it deserved.[10] The Court of Appeals conceded, however, that exclusion would have been appropriate if the methodology itself was invalid. The appellate opinion further acknowledged that the defense had offered opposition to Nicholson’s Report in which it documented his failure to include data that were inconsistent with his conclusions, and charged that “Nicholson had produced a scientifically invalid study.”[11]

Judge Becker’s opinion for a panel of the Third Circuit provided no details about the cherry picking. The opinion never analyzed why this charge of cherry-picking and manipulation of the dataset did not invalidate the meta-analytic method generally, or Nicholson’s method as applied. The opinion gave no suggestion that this counter-affidavit was ever answered by the plaintiffs.

Generally, Judge Becker’s opinion dodged engagement with the specific threats to validity in Nicholson’s Report, and took refuge in the indisputable fact that hundreds of meta-analyses were published annually, and that the defense expert witnesses did not question the general reliability of meta-analysis.[12] These facts undermined the defense claim that meta-analysis was novel.[13] The reality, however, was that meta-analysis was in its infancy in bio-medical research.

When it came to the specific meta-analysis at issue, the court did not discuss or analyze a single pertinent detail of the Report. Despite its lack of engagement with the specifics of the Report’s meta-analysis, the court astutely observed that prevalent errors and flaws do not mean that a particular meta-analysis is “necessarily in error.”[14] Of course, without bothering to look, the court would not know whether the proffered meta-analysis was “actually in error.”

The appellate court would have given Nicholson’s Report a “pass” if it were an application of an accepted methodology. The defense’s remedy under this condition would be to cross-examine the opinion in front of a jury. If, on the other hand, Nicholson had altered an accepted methodology to skew its results, then the court’s gatekeeping responsibility under Downing would be invoked.

The appellate court went on to fault the trial court for failing to make sufficiently explicit findings as to whether the questioned meta-analysis was unreliable. From its perspective, the Court of Appeals saw the trial court as resolving the reliability issue upon the greater credibility of the defense expert witnesses who had branded the disputed meta-analysis as unreliable. Credibility determinations are for the jury, but the court left room for a challenge on reliability itself:[15]

“Assuming that Dr. Nicholson’s meta-analysis is the proper subject of Downing scrutiny, the district court’s decision is wanting, because it did not make explicit enough findings on the reliability of Dr. Nicholson’s meta-analysis to satisfy Downing. We decline to define the exact level at which a district court can exclude a technique as sufficiently unreliable. Reliability indicia vary so much from case to case that any attempt to define such a level would most likely be pointless. Downing itself lays down a flexible rule. What is not flexible under Downing is the requirement that there be a developed record and specific findings on reliability issues. Those are absent here. Thus, even if it may be possible to exclude Dr. Nicholson’s testimony under Downing, as an unreliable, skewed meta-analysis, we cannot make such a determination on the record as it now stands. Not only was there no hearing, in limine or otherwise, at which the bases for the opinions of the contesting experts could be evaluated, but the experts were also not even deposed. All of the expert evidence was based on affidavits.”

Peer Review

Understandably, the defense attacked Nicholson’s Report as not having been peer reviewed. Without any scrutiny of the scientific bona fides of the workers’ compensation agency, the appellate court acquiesced in Nicholson’s self-serving characterization of his Report as having been reviewed by “cooperating researchers” and the Panel of the Ontario Workers’ Compensation agency. Another partisan expert witness characterized Nicholson’s Report as a “balanced assessment,” and this seemed to appease the Third Circuit, which was wary of requiring peer review in the first place.[16]

Relevancy Prong

The defense had argued that Nicholson’s Report was irrelevant because no individual plaintiff claimed liver cancer.[17] The trial court largely accepted this argument, but the appellate court disagreed because of conclusory language in Nicholson’s affidavit, in which he asserted that “proof of an increased risk of liver cancer is probative of an increased risk of other forms of cancer.” The court seemed unfazed by the ipse dixit, asserted without any support. Indeed, Nicholson’s assertion was contradicted by his own Report, in which he reported that there were fewer cancers among PCB-exposed male capacitor manufacturing workers than expected,[18] and that the rate for all cancers for both men and women was lower than expected, with 132 observed and 139.40 expected.[19]

The trial court had also agreed with the defense’s suggestion that Nicholson’s report, and its conclusion of causality between PCB exposure and liver cancer, were irrelevant because the Report “could not be the basis for anyone to say with reasonable degree of scientific certainty that some particular person’s disease, not cancer of the liver, biliary tract or gall bladder, was caused by PCBs.”[20]

Analysis

It would likely have been lost on Judge Becker and his colleagues, but Nicholson presented SMRs (standardized mortality ratios) throughout his Report, and for the all cancers statistic, he gave an SMR of 95. What Nicholson clearly did in this, and in all other instances, was simply divide the observed number by the expected, and multiply by 100. This crude, simplistic calculation fails to present a standardized mortality ratio, which requires taking into account the age distribution of the exposed and the unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of false and misleading. And in case anyone remembers General Electric v. Joiner, Nicholson’s summary estimate of risk for lung cancer in men was below the expected rate.[21]
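
By way of contrast, a genuine SMR builds its expected count stratum by stratum. The sketch below, with made-up person-years, deaths, and reference rates, shows the indirect standardization that the label “standardized mortality ratio” presupposes:

```python
# Made-up person-years, deaths, and reference rates, only to show the shape
# of the calculation that the label "standardized mortality ratio" implies.

# (age stratum, cohort person-years, observed deaths, reference rate per 100,000 PY)
strata = [
    ("40-49", 12_000,  3,  20.0),
    ("50-59",  8_000,  9,  90.0),
    ("60-69",  4_000, 14, 300.0),
]

observed = sum(obs for _, _, obs, _ in strata)
expected = sum(py * rate / 100_000 for _, py, _, rate in strata)
smr = 100 * observed / expected
print(f"Observed = {observed}, Expected = {expected:.1f}, SMR = {smr:.0f}")
# Expected = 2.4 + 7.2 + 12.0 = 21.6, so the SMR is about 120. Dividing one
# raw total by another, with no stratum-specific expected counts behind it,
# is not a standardized ratio at all.
```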

Nicholson’s Report was replete with other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and even so, the analysis of the composite outcome rested upon only seven cases. Of those seven, some were not confirmed as primary liver cancer, and at least one was confirmed not to be a primary liver cancer.[22]

Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.
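Two of these points can be illustrated with a minimal sketch, again with hypothetical numbers: a one-tailed p-value is smaller than the corresponding two-tailed value, and testing many outcomes from the same data without adjustment inflates the chance of a spurious finding. Bonferroni is used below only as the simplest illustrative correction:

```python
# Hypothetical numbers only -- nothing here comes from Nicholson's Report.
from math import exp, factorial

def poisson_tail(k, mu):
    """P(X >= k) for a Poisson count with mean mu (an upper-tail, one-sided test)."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k))

observed, expected = 7, 3.5                 # hypothetical observed vs. expected deaths
p_one_tailed = poisson_tail(observed, expected)
p_two_tailed = min(1.0, 2 * p_one_tailed)   # crude two-sided value: double the tail
print(f"one-tailed p = {p_one_tailed:.3f}, two-tailed p = {p_two_tailed:.3f}")

# Testing, say, 20 cancer sites from the same cohort multiplies the chance of a
# spurious "significant" result; Bonferroni is the simplest, conservative fix.
n_comparisons = 20
p_adjusted = min(1.0, p_two_tailed * n_comparisons)
print(f"Bonferroni-adjusted p = {p_adjusted:.3f}")
```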

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies is not a valid meta-analysis; indeed, it is a well-known example of how to generate a Simpson’s Paradox, which can change the direction or magnitude of any association.[23]
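A purely hypothetical example shows how such pooling can mislead. In each of two strata (or studies) below, the exposed group has twice the risk of the unexposed group, yet simply summing events and denominators makes exposure look protective:

```python
# Hypothetical illustration of Simpson's paradox; the numbers are invented.
strata = {
    # stratum: (exposed cases, exposed n, unexposed cases, unexposed n)
    "low-risk":  (18, 900,  1, 100),
    "high-risk": (20, 100, 90, 900),
}

for name, (a, n1, c, n0) in strata.items():
    rr = (a / n1) / (c / n0)
    print(f"{name}: risk ratio = {rr:.2f}")        # 2.00 in each stratum

# naive pooling, as in an "add up the observed and expected" meta-analysis
a  = sum(v[0] for v in strata.values())
n1 = sum(v[1] for v in strata.values())
c  = sum(v[2] for v in strata.values())
n0 = sum(v[3] for v in strata.values())
print(f"pooled: risk ratio = {(a / n1) / (c / n0):.2f}")   # 0.42 -- reversed
```

A valid meta-analysis avoids this trap by estimating the association within each study and then combining the study-specific estimates with appropriate weights.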

Some may be tempted to criticize the defense for having focused its challenge on the “novelty” of Nicholson’s approach in Paoli. The real problem, of course, was the invalidity of Nicholson’s work, but both the trial court’s exclusion of Nicholson and the Court of Appeals’ reversal and remand of that exclusion illustrate the difficulty of getting judges, even well-respected judges, to accept their responsibility to engage with questioned scientific evidence.

Even in Paoli, no amount of ketchup could conceal the unsavoriness of Nicholson’s scrapple analysis. When the Paoli case reached the Court of Appeals again in 1994, Nicholson’s analysis was absent.[24] Apparently, the plaintiffs’ counsel had second thoughts about the whole matter. Today, under the revised Rule 702, there can be little doubt that Nicholson’s so-called meta-analysis would be excluded.


[1]  Not to be confused with the Judge Kelly of the same district, who was unceremoniously disqualified after attending an ex parte conference with plaintiffs’ lawyers and expert witnesses, at the invitation of Dr. Irving Selikoff.

[2]  Pace Philip J. Landrigan & Myron A. Mehlman, “In Memoriam – William J. Nicholson,” 40 Am. J. Indus. Med. 231 (2001). Landrigan and Mehlman assert, without any support, that Nicholson was an epidemiologist. Their own description of his career, his undergraduate work at MIT, his doctorate in physics from the University of Washington, his employment at the Watson Laboratory, before becoming a staff member in Irving Selikoff’s department in 1969, all suggest that Nicholson brought little to no experience in epidemiology to his work on occupational and environmental exposure epidemiology.

[3]  In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[4]  William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (ODP); IDSP Report No. 2 (Toronto, Ontario Dec. 1987).

[5]  Id. at 373.

[6]  United States v. Downing, 753 F.2d 1224 (3d Cir. 1985).

[7]  In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 111 S.Ct. 1584 (1991).

[8]  Id. at 845.

[9]  Id.

[10]  Id. at 841, 848.

[11]  Id. at 845.

[12]  Id. at 847-48.

[13]  See, e.g., Robert Rosenthal, Judgment studies: Design, analysis, and meta-analysis (1987); Richard J. Light & David B. Pillemer, Summing Up: the Science of Reviewing Research (1984); Thomas A. Louis, Harvey V. Fineberg & Frederick Mosteller, “Findings for Public Health from Meta-Analyses,” 6 Ann. Rev. Public Health 1 (1985); Kristan A. L’abbé, Allan S. Detsky & Keith O’Rourke, “Meta-analysis in clinical research,” 107 Ann. Intern. Med. 224 (1987).

[14]  Id. at 857.

[15]  Id. at 858.

[16]  Id. at 858.

[17]  Id. at 845.

[18]  Report, Table 16.

[19]  Report, Table 18.

[20]  In re Paoli, 916 F.2d at 847.

[21]  See General Electric v. Joiner, 522 U.S. 136 (1997); NAS, “How Have Important Rule 702 Holdings Held Up With Time?” (March 20, 2015).

[22]  Report, Table 22.

[23]  James A. Hanley, Gilles Thériault, Ralf Reintjes & Annette de Boer, “Simpson’s Paradox in Meta-Analysis,” 11 Epidemiology 613 (2000); H. James Norton & George Divine, “Simpson’s paradox and how to avoid it,” Significance 40 (Aug. 2015); George Udny Yule, “Notes on the Theory of Association of Attributes in Statistics,” 2 Biometrika 121 (1903).

[24]  In re Paoli RR Yard Litig., 35 F.3d 717 (3d Cir. 1994).

California Roasts Fear-Mongering Industry

June 16th, 2019

A year ago, California set out to create an exemption for coffee from its Proposition 65 regulations. The lawsuit industry, represented by the Council for Education and Research on Toxics (CERT), had been successfully deploying Prop 65’s private right of action provisions to pick the pockets of coffee vendors. Something had to give.

In 2010, Mr. Metzger, on behalf of CERT, sued Starbucks and 90 other coffee manufacturers and distributors, claiming they had failed to warn consumers about the cancer risks of acrylamide. CERT’s mission was to shake down the roasters and the vendors because coffee has minor amounts of acrylamide in it. Acrylamide in very high doses causes tumors in rats[1]; coffee consumption by humans is generally regarded as beneficial.

Earlier last year, a Los Angeles Superior Court ordered the coffee companies to put cancer warnings on their beverages. In the upcoming damages phase of the case, Metzger sought as much as $2,500 in civil penalties for each cup of coffee the defendants had sold over at least a decade. Suing companies for violating California’s Proposition 65 is like shooting fish in a barrel, but the State’s regulatory initiative to save California from the embarrassment of branding coffee a carcinogen was a major setback for CERT.

And so the Office of Environmental Health Hazard Assessment (OEHHA) began a rulemaking largely designed to protect the agency from the public relations nightmare created by the application of the governing statute and regulations to squeeze the coffee roasters and makers.[2] The California agency’s proposed regulation on acrylamide in coffee resulted in a stay of CERT’s enforcement action against Starbucks.[3] CERT’s lawyers were not pleased; they had already won a trial court judgment that damages were owed, with only the amount left to be set. In September 2018, CERT filed a lawsuit in Los Angeles Superior Court against the State of California challenging OEHHA’s proposed rule, saying it was being rammed through the agency on the order of the Office of the Governor in an effort to kill CERT’s suit against the coffee companies. Or maybe it was simply designed to allow people to drink their coffee without the Big Prop 65 warning.

Earlier this month, after reviewing voluminous submissions and holding a hearing, the OEHHA announced its ruling that Californians do not need to be warned that coffee causes cancer. Epistemically, coffee is not known to the State of California to be hazardous to human health.[4] According to Sam Delson, a spokesperson for the OEHHA, “Coffee is a complex mixture of hundreds of chemicals that includes both carcinogens and anti-carcinogens. … The overall effect of coffee consumption is not associated with any significant cancer risk.” The regulation saving coffee goes into effect in October 2019. CERT, no doubt, will press on in its litigation campaign against the State.

CERT is the ethically dodgy organization founded by C. Sterling Wolfe, a former environmental lawyer; Brad Lunn; Carl Cranor, a philosophy professor at University of California Riverside; and Martyn T. Smith, a toxicology professor at University of California Berkeley.[5] Metzger has been its lawyer for many years; indeed, Metzger and CERT share the same office. Smith has been the recipient of CERT’s largesse in funding toxicologic studies. Cranor and Smith have both testified for the lawsuit industry.

In the well-known Milward case,[6] both Cranor and Smith served as paid expert witnesses for the plaintiff. When the trial court excluded their proffered testimonies as unhelpful and unreliable, their own organization, CERT, came to the rescue by filing an amicus brief in the First Circuit. Supported by a large cast of fellow travelers, CERT perverted the course of justice by failing to disclose the intimate relationship between the “amicus” and the expert witnesses, Cranor and Smith, whose opinions had been successfully challenged.[7]

The OEHHA coffee regulation shows that not all regulation is bad.


[1]  National Cancer Institute, “Acrylamide and Cancer Risk.”

[2]  See Sam Delson, “Press Release: Proposed OEHHA regulation clarifies that cancer warnings are not required for coffee under Proposition 65” (June 15, 2018).

[3]  Council for Education and Research on Toxics v. Starbucks Corp., case no. B292762, Court of Appeal of the State of California, Second Appellate District.

[4]  Associated Press, “Perk Up: California Says Coffee Cancer Risk Insignificant,” N.Y. Times (June 3, 2019); Sara Randazzo, “Coffee Doesn’t Warrant a Cancer Warning in California, Agency Says; Industry scores win following finding on chemical found in beverage,” W.S.J. (June 3, 2019); Editorial Board, “Coffee Doesn’t Kill After All: California has a moment of sanity, and a lawyer is furious,” W.S.J. (June 5, 2019).

[5]  Michael Waters, “The Secretive Non-Profit Gaming California’s Health Laws,” The Outline (June 18, 2018); Beth Mole, “The secretive nonprofit that made millions suing companies over cancer warnings,” Ars Technica (June 6, 2019); NAS, “Coffee with Cream, Sugar & a Dash of Acrylamide” (June 9, 2018); NAS, “The Council for Education & Research on Toxics” (July 9, 2013); NAS, “Sand in My Shoe – CERTainly” (June 17, 2014) (CERT briefs supported by fellow-travelers, testifying expert witnesses Jerrold Abraham, Richard W. Clapp, Ronald Crystal, David A. Eastmond, Arthur L. Frank, Robert J. Harrison, Ronald Melnick, Lee Newman, Stephen M. Rappaport, David Joseph Ross, and Janet Weiss, all without disclosing conflicts of interest).

[6]  Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137, 148 (D.Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. den. sub nom. U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012), on remand, Milward v. Acuity Specialty Products Group, Inc., 969 F.Supp. 2d 101 (D.Mass. 2013) (excluding specific causation opinions as invalid; granting summary judgment), aff’d, 820 F.3d 469 (1st Cir. 2016).

[7]  NAS, “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). The fellow travelers who knowingly or unknowingly aided CERT’s scheme to pervert the course of justice, included some well-known testifiers for the lawsuit industry: Nicholas A. Ashford, Nachman Brautbar, David C. Christiani, Richard W. Clapp, James Dahlgren, Devra Lee Davis, Malin Roy Dollinger, Brian G. Durie, David A. Eastmond, Arthur L. Frank, Frank H. Gardner, Peter L. Greenberg, Robert J. Harrison, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, Melissa A. McDiarmid, Myron Mehlman, Ronald L. Melnick, Mark Nicas, David Ozonoff, Stephen M. Rappaport, David Rosner, Allan H. Smith, Daniel Thau Teitelbaum, Janet Weiss, and Luoping Zhang. See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018); Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).