Susan Haack has written frequently about expert witness testimony in the United States legal system. At times, Haack’s observations are interesting and astute, perhaps the more so because she has no formal training in law or legal scholarship. She trained in philosophy, and her work is no doubt taken seriously because of her academic seniority; she is Distinguished Professor in the Humanities, Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy, and Professor of Law at the University of Miami.
On occasion, Haack has used her background in teaching epistemology to good effect in elucidating how epistemological issues are handled in the law. For instance, her exploration of the vice of credulity, as voiced by W.K. Clifford,[1] is a useful counterweight to the shrill agnotologists, Robert Proctor, Naomi Oreskes, and David Michaels.
Professor Haack has also been a source of confused, fuzzy, and errant advice when it comes to the issue of Rule 702 gatekeeping. Haack’s most recent article, “Judging Expert Testimony,” is an example of unfocused thinking about one of the most important aspects of modern litigation practice: admissibility challenges to expert witness opinion testimony.[2]
Uncontroversially, Haack finds the case law on expert witness gatekeeping lacking in “effective practical guidance,” and she seeks to offer courts, and presumably litigants, “operational help.” Haack sets out to explain “why the legal formulae” are not of practical use. Haack notes that terms such as “reliable” and “sufficient” are qualitative, and vague,[3] much like “obscene” and other adjectives that gave the courts such a difficult time. Rules with vague terms such as these give judges very little guidance. As a philosopher, Haack might have noted that the various judicial formulations of gatekeeping standards are couched as conclusions, devoid of explanatory force.[4] And she might have pointed out that the judicial tendency to confuse reliability with validity has muddled many court opinions and lawyers’ briefs.
Focusing specifically on the field of epidemiology, Haack attempts to help courts by offering questions that judges and lawyers should be asking. She tells us that the Reference Manual on Scientific Evidence is of little practical help, which is a bit unfair.[5] The Manual in its present form has problems, but ultimately the performance of gatekeepers can be improved only if the gatekeepers develop some aptitude and knowledge in the subject matter of the expert witnesses who are undergoing Rule 702 challenges. Haack seems unduly reluctant to acknowledge that gatekeeping requires subject matter expertise. The chapter on statistics in the current edition of the Manual, by David Kaye and the late David Freedman, is a rich resource for judges and lawyers in evaluating statistical evidence, including the statistical analyses that appear in epidemiologic studies.
Why do judges struggle with epidemiologic testimony? Haack unwittingly shows the way by suggesting that “[e]pidemiological testimony will be to the effect that a correlation, an increased relative risk, has, or hasn’t, been found, between exposure to some substance (the alleged toxin at issue in the case) and some disease or disorder (the alleged disease or disorder the plaintiff claims to have suffered)… .”[6] Some philosophical parsing of “correlation” and “increased risk” as two very different things would have been in order. Haack suggests an incorrect identity between correlation and increased risk, one that has confused courts as well as some epidemiologists.
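To make the distinction concrete, consider the standard textbook definitions (a sketch of ordinary epidemiology and statistics, not anything drawn from Haack’s article):

```latex
% Relative risk: a ratio of conditional probabilities of disease
\mathrm{RR} \;=\; \frac{\Pr(\mathrm{disease} \mid \mathrm{exposed})}
                       {\Pr(\mathrm{disease} \mid \mathrm{unexposed})}
\qquad
% Correlation: a standardized measure of linear co-variation
\rho_{XY} \;=\; \frac{\mathrm{Cov}(X, Y)}{\sigma_X \, \sigma_Y}
```

A relative risk above 1.0 reports an increased probability of disease among the exposed; a correlation coefficient reports how two variables vary together on a scale from −1 to 1. The two measures answer different questions, and neither entails the other.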
Haack suggests asking various fairly obvious questions about the soundness of the data, the measurements, the study design, and the interpretation of the data. Haack gives the example of failing to ascertain exposure to an alleged teratogen during the first trimester of pregnancy as a failure of study design that could obscure a real association. Curiously, she claims that some of Merrell Dow’s studies of Bendectin did such a thing, citing not any publication but the second-hand account of a trial judge.[7] Beyond the objectionable lack of scholarship, the example involves a medication that has been exculpated, about as thoroughly as possible, of the dubious litigation claims of teratogenicity. The misleading example raises the question: why choose a Bendectin case, from a litigation punctuated by fraud and perjury from plaintiffs’ expert witnesses, involving a medication that has been shown to be safe and effective in pregnancy?[8]
Haack balks when it comes to statistical significance, which she tells us is merely a convention, with its threshold set “high” to avoid false alarms.[9] Haack’s dismissive attitude cannot be squared with the absolute need to address random error and to assess whether a research claim has been meaningfully tested.[10] Haack would reduce the assessment of random error to the uncertainty of eyeballing sample size. She tells us that:
“But of course, the larger the sample is, then, other things being equal, the better the study. Andrew Wakefield’s dreadful work supposedly finding a correlation between MMR vaccination, bowel disorders, and autism—based on a sample of only 12 children — is a paradigm example of a bad study.”[11]
Sample size was the least of Wakefield’s problems. More to the point, in some study designs, for some hypotheses, a sample of 12 may be quite adequate to the task, and capable of generating robust, even statistically significant, findings.
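A back-of-the-envelope calculation shows why. The sketch below uses hypothetical numbers (a one percent background rate and six observed cases, both invented solely for illustration) to show that twelve subjects can suffice for a conventionally significant result:

```python
# Minimal sketch: a sample of 12 can support a statistically
# significant finding when the background rate is low.
# The 1% background rate and the 6-of-12 observed count are
# hypothetical numbers chosen for illustration only.
from scipy.stats import binomtest

background_rate = 0.01   # assumed population rate of the outcome
observed = 6             # hypothetical cases seen in the sample
sample_size = 12

# One-sided exact binomial test of the null hypothesis that the
# sample rate equals the background rate.
result = binomtest(observed, sample_size, background_rate,
                   alternative="greater")
print(f"p-value: {result.pvalue:.2e}")
# Observing 6 of 12 against a 1% background rate yields a p-value
# on the order of 1e-9, far below the conventional 0.05 threshold.
```

The point is only arithmetic: when the expected background count is near zero, even a handful of cases can be highly improbable under the null hypothesis, sample of 12 or no.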
Inevitably, Haack alights upon personal bias and conflicts of interest as subjects of inquiry.[12] This is one of the few areas that judges and lawyers understand all too well, and they need no encouragement to pursue it. Haack nonetheless dives in, advising courts to ask:
“Do those who paid for or conducted a study have an interest in reaching a given conclusion (were they, for example, scientists working for manufacturers hoping to establish that their medication is effective and safe, or were they scientists working, like Wakefield, with attorneys for one party or another)?”[13]
Speaking of bias, we can detect some in how Haack frames the inquiry. Did scientists work for manufacturers (Boo!), or were they, “like Wakefield,” working with attorneys for a party? Haack cannot seem to bring herself to say that Wakefield, and many other expert witnesses, worked for plaintiffs and plaintiffs’ counsel, a.k.a. the lawsuit industry. Perhaps Haack counted such expert witnesses as working for those who manufacture lawsuits. Similarly, in her discussion of journal quality, she notes that some journals carry advertisements from manufacturers, or receive financial support from them. There is a distinct asymmetry in Haack’s lack of curiosity about journals run by scientists or physicians who belong to advocacy groups, or who regularly testify for plaintiffs’ counsel.
There are many other quirky opinions here, but I will conclude with the obvious point that, in the epidemiologic literature, there is a huge gulf between reporting associations and drawing causal conclusions. Haack asks her readers to remember “that epidemiological studies can only show correlations, not causation.”[14] This suggestion ignores her own article’s discussion of certain clinical trial results, which do “show” causal relationships. And epidemiologic studies can show strong, robust, consistent associations, with exposure-response gradients, not likely the result of random variation; collectively, such findings can show causation in appropriate cases.
My recommendation is to ignore Haack’s suggestions and to pay closer attention to the subject matter of the expert witness who is under challenge. If the subject matter is epidemiology, open a few good textbooks on the subject. On the legal side, a good treatise such as The New Wigmore will provide much more illumination and guidance for judges and lawyers than vague, general suggestions.[15]
[1] William Kingdon Clifford, “The Ethics of Belief,” in L. Stephen & F. Pollock, eds., The Ethics of Belief 70-96 (1877) (“In order that we may have the right to accept [someone’s] testimony as ground for believing what he says, we must have reasonable grounds for trusting his veracity, that he is really trying to speak the truth so far as he knows it; his knowledge, that he has had opportunities of knowing the truth about this matter; and his judgement, that he has made proper use of those opportunities in coming to the conclusion which he affirms.”), quoted in Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020).
[2] Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020) [cited as Haack].
[3] Haack at 21.
[4] See, e.g., “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions”; “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702”; “Judicial Dodgers – Weight not Admissibility”; “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent.”
[5] Haack at 21.
[6] Haack at 22.
[7] Haack at 24, citing Blum v. Merrell Dow Pharms., Inc., 33 Phila. Cty. Rep. 193, 214-17 (1996).
[8] See, e.g., “Bendectin, Diclegis & The Philosophy of Science” (Oct. 23, 2013).
[9] Haack at 23.
[10] See generally Deborah Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018).
[11] Haack at 23-24 (emphasis added).
[12] Haack at 24.
[13] Haack at 24.
[14] Haack at 25.
[15] David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2d ed. 2011). A new edition is due out presently.