How Law Professors Think — About Expert Evidence

In a recent law review article, two University of Virginia law professors question whether expert evidence should be subject to its own exclusionary rules. Frederick Schauer and Barbara A. Spellman, “Is Expert Evidence Really Different?” 89 Notre Dame L. Rev. 1 (2013) [hereinafter Schauer & Spellman]. Professors Schauer and Spellman argue that expert evidence is not really different from other kinds of evidence, and they suggest that the exclusionary procedures introduced by Daubert and its progeny are ill-conceived.

Gedankenexperiment

In order to understand the exact nature of the harms of “junk science,” the authors conduct an interesting Gedanken experiment:

“Suppose ten witnesses testify that they had never been sick a day in their lives, that they then moved in middle age to a community in close proximity to a defendant’s chemical plant, and that they were all diagnosed with the same form of cancer within a year. And suppose that this is the only evidence of causation.”

The authors conclude that this evidence is relevant under Federal Rule of Evidence 401, and sufficient to raise a triable issue of fact. From their conclusion, the authors argue further that all the dangers of mass tort causation evidence are independent of junk science, because a jury would be free to reach a verdict for plaintiffs based upon pure sympathy, anti-corporate animus, white-hat bias, or Robin Hood motives. The authors see, in their hypothetical, a jury reaching a judgment against the defendant

“regardless of any strong evidence of causation, and without any junk science whatsoever.”

Schauer and Spellman’s conclusions, however, are wrong. Their hypothetical evidentiary display is not even minimally logically relevant. They are correct that there is no strong evidence of causation, but whence comes the conclusion that no “junk science” would be involved in the jury’s determination? That determination would not have even the second-hand support of an expert witness opinion; it would be the jury’s first-hand, jejune interpretation of a completely indeterminate fact pattern.

These authors, after all, do not specify what kind of cancer is involved. Virtually no cancer has an induction period of less than a year. Their hypothetical does not specify what chemicals are “released,” by what route of exposure, at what level, or for what duration. Furthermore, the suggestion of logical relevance implies that the described occurrence is beyond what we would care to ascribe to chance alone, but we do not know the number of people involved, or the baseline risk of the cancer at issue. One in a million happens eight times a day, in New York City. Flipping a coin ten times and observing six heads and four tails would not support an inference that the coin favors heads over tails in a ratio of 1.5 to 1.
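The coin-flip point can be made precise with elementary binomial arithmetic. The sketch below is my own illustration, not from the article; the function name is invented:

```python
from math import comb

def prob_at_least(heads, flips=10, p=0.5):
    """Probability of at least `heads` heads in `flips` flips of a coin with P(heads) = p."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# A fair coin produces six or more heads in ten flips almost 38% of the time,
# so a 6-to-4 split is no evidence at all that the coin favors heads.
print(round(prob_at_least(6), 3))  # → 0.377
```

In other words, the “excess” of heads that the hypothetical inference seizes upon is well within the range that pure chance routinely produces.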

Schauer and Spellman might improve their hypothetical, but they are unlikely to free themselves of the need for expertise beyond the ken of a lay jury to evaluate the clinical, epidemiologic, scientific, and statistical issues raised by their supposed disease “outbreak.” And although they have, by design, taken the expert witness, the usual purveyor of junk science, out of the hypothetical, they have simply left the jury to draw a bogus inference. The junk is still at work; only now the jury is no longer a passive recipient of the inference, but is itself the author of the junk-science inference.

Schauer and Spellman’s Claim That Expert Evidence Is Not Really Different

The authors make the case that there are some instances in which expert witness opinion testimony is not so different from lay opinions about intoxication, the speed of a car, or eyewitness identification. But Schauer and Spellman are wrong to let such instances mislead them about expert witness testimony in many complex mass tort cases.

Such cases commonly involve several scientific disciplines, such as toxicology, epidemiology, exposure assessment, neuropsychology, and others. The expert witness for each discipline might have a few dozen studies that are germane to the issues in the case, and each one of those studies might cite or rely upon several papers for its background, methods, and inferences. Reference lists for each expert witness might run into the hundreds of articles, and in some cases, the experts might need to access underlying data and materials to understand fully the papers upon which they have relied. A careful reading of each paper might require an hour or more for the expert to understand the claims and limitations of the study. The expertise to understand the articles fully may have taken years or decades of education.

Juries do not have the time, the interest, the aptitude, the training, or the experience to read, understand, and synthesize the data and information in the studies themselves. Our trials involving complex technical issues are much like Plato’s allegory of the cave; the jury never sees the actual evidence, only shadows cast by evidence they are usually not permitted to see, and do not care to see when they have the chance. Juries decide technical issues based mostly upon the appearance of expertise, not upon evidence.

Some years ago, I tried an asbestos case against Charles “Joey” Grant, in front of Judge Armand Della Porter and a jury in the Philadelphia Court of Common Pleas. Joey represented a man who claimed that he had asbestosis from working at the Philadelphia Naval Shipyard. His chest X-ray interpretation was disputed, but he had small-airways obstruction, which his expert witness attributed to asbestosis. The defense expert thought smoking was a much more likely cause of the obstruction, but the plaintiff had denied smoking in his deposition. In order to test his assertion, the defense asked a private investigator to conduct surveillance of the plaintiff to determine whether or not he was a smoker.

The investigator, retired Alcohol Tobacco and Firearms agent Frank Buenas, tailed the plaintiff and observed and photographed him smoking.  Plaintiff’s counsel, Joey Grant, seized on my not having provided Buenas an authenticated photograph of the plaintiff, and challenged the identification and every aspect of the surveillance.  The direct examination lasted no more than 25 minutes; the cross-examination lasted about four hours.

Joey was a very good trial lawyer. He had just come out of the Philadelphia District Attorney’s office, after having successfully prosecuted Nicodemo “Little Nicky” Scarfo. Joey was also a good-looking African American man who played well to our all-female, all-African American jury. The issues of the surveillance, and of whether or not the plaintiff was a smoker, were understandable and accessible to the jurors, who were riveted by Joey’s cross-examination. Ultimately, the issues were resolved for the jury in dramatic fashion. The plaintiff, who continued to work at the Navy Yard, returned to court at the end of his shift, towards the end of the day in court. Over objection, I called him back to the stand. He had not heard the investigator’s testimony, but when I showed him Buenas’ photographs, he exclaimed “that’s my bald head!” The jurors practically lunged out of their seats when I published the photographs to the jury over Joey’s objection.

The point of the war story is to recount how the jury followed a protracted examination, and set aside their bias in favor of Joey and their prejudice against me, the white guy representing the “asbestos companies” in this reverse-bifurcated trial. The testimony involved a predicate issue of whether Buenas had followed and photographed the right man in the act of smoking. Would a jury, any jury, follow the testimony of a scientist who was being challenged on the methodological details of a single study, for four hours? Would any jury remain attentive and engaged in the testimony of expert witnesses who testified on direct and cross-examination in similar detail about hundreds of studies?

Having tried cases that involve both simple, straightforward issues, such as Buenas’ investigation and surveillance, and complex scientific issues, I believe the answer is obvious. None of the studies cited by Schauer and Spellman addresses the issue of complexity and how it is represented in the courtroom.

Most judges, when acting as the trier of fact, lack the interest, time, training, and competence to try complex cases. Consider the trial judge who conducted a bench trial of a claim of autoimmune disease caused by silicone. The trial judge was completely unable to assess the complex immunological, toxicological, and epidemiologic evidence in the case. See Barrow v. Bristol-Myers Squibb Co., No. 96-689-CIV-ORL-19B, 1998 WL 812318 (M.D. Fla. Oct. 29, 1998). In another courtroom, not far away, Judge Sam Pointer had appointed four distinguished expert witnesses under Rule 706. In re Silicone Gel Breast Implant Prods. Liab. Litig. (MDL 926), 793 F. Supp. 1098 (J.P.M.L. 1992) (No. CV92-P-10000-S), Order Nos. 31, 31B, 31E. In November 1998, a month after the Middle District of Florida trial court decided the Barrow case, the four court-appointed expert witnesses filed their joint and individual reports, which concluded that silicone in breast implants “do[es] not alter incidence or severity of autoimmune disease” and that women who have had such implants “do[] not display a silicone-induced systemic abnormality in the . . . cells of the immune system.” National Science Panel, Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction, Executive Summary at 5-6 (Nov. 17, 1998). The Panel also found that “[n]o association was evident between breast implants and any of the . . . connective tissue diseases . . . or the other autoimmune/rheumatic conditions” claimed by the thousands of women who had filed lawsuits. Id. at 6-7.

Another case, in the causal chain that produced the Daubert decision, might also have cautioned Professors Schauer and Spellman against oversimplifying the distinctions between expert and other evidence. Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986). The Wells case was a bench trial, in which the trial judge honestly professed not to have understood the epidemiologic evidence, and to have decided the case on his assessment of expert witness demeanor and atmospherics. And like the Barrow case, the Wells case was clearly wrongly decided. See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1” (Nov. 12, 2012) (footnote 1 collects a sample of citations criticizing Judge Shoob’s failure to engage with the scientific evidence).

Jury determinations, unlike the clearly erroneous determinations in Barrow and Wells, are “black boxes.” The public and the scientific community cannot really criticize jury decisions because the jury does not explain its inferences from the evidence. The lack of explanation, however, does not save juries from engaging in junk science. Outside the Gedanken experiment above, jurors can blame expert witnesses for their errors; within the experiment’s hypothetical, the junk science is still taking place, and it is the jury’s own.

The Discomfort With Daubert

The authors recount many of the charges against and criticisms of the Daubert decision. Schauer & Spellman at 2. They note that some commentators assert that the Justices in Daubert embraced a “clumsy philosophy of science.” But at least Justice Blackmun engaged with the philosophy of science and the epistemic requirements of Rule 702, and made some attempt to reconcile the truth-finding process in court with what happens in science. The attempted reconciliation was long overdue.

The authors also point out that some commentators have expressed concern that Daubert burdens mass tort and employment discrimination claimants who rely upon “non-traditional experts and expertise.” Id. To paraphrase Tim Minchin, there is non-traditional opinion known not to be true, and non-traditional opinion not known to be true, but if non-traditional opinion is known to be true, then we call it … knowledge. Perhaps Schauer and Spellman think that our courts should be more open and inclusive, warmer and fuzzier, for clinical ecologists, naturopaths, aromatherapists, homeopaths, and other epistemiopaths. My sense is that these non-traditional experts should be relegated to their own world of magical thinking.

The authors give voice to “the broad worry that law should not outsource its own irreducibly legal determinations to science and scientists with different goals and consequently different standards.” But science and the law are both engaged in truth determinations. Maybe the law should worry more about having jurors and judges make their determinations with different goals and standards. Maybe the law should be more concerned with truth and scientific accuracy, and should not outsource its irreducibly factual determinations to the epistemiopaths.

Expert witness testimony is clearly different in many important respects from lay witness testimony and other evidence. It is largely opinion. It relies upon many layers of hearsay, with many of the layers not subject to ready scrutiny and testing for veracity in a courtroom. Many layers of the “onion” represent evidence that would not be admissible under any likely evidentiary scenario in a courtroom.

And jurors are reduced to using proxies for assessing the truth of claims and defenses. Decisions in a courtroom are often made based upon witness demeanor, style, presentation, qualifications, and the appearance of confidence and expertise. The authors lament the lack of empirical support for this last point, but the criticism misses the larger point: the issue is not primarily an empirical one, and it is not limited to jury competence. Hon. Jed S. Rakoff, “Lecture: Are federal judges competent? Dilettantes in an age of economic expertise,” 17 Fordham J. Corp. & Fin. L. 4 (2012).

The Monty Hall Problem and Cognitive Biases

The “Monty Hall problem” was originally framed by statistician Steve Selvin in two 1975 publications in the American Statistician (Selvin 1975a; Selvin 1975b). The problem was then popularized by an exposition in Parade magazine in 1990, by Marilyn vos Savant. The problem is based upon Monty Hall’s television game show Let’s Make a Deal, and Monty’s practice of asking contestants whether they wanted to switch doors. A full description of the problem can be found elsewhere.

For our purposes here, the interesting point is that the correct answer was not intuitively obvious.  After vos Savant published the answer, many readers wrote indignant letters, claiming she was wrong.  Some of these letters were written by college and university professors of mathematics.  Vos Savant’s popularization of Selvin’s puzzle illustrates that there are cognitive biases, flaws in reasoning, and paradoxes, the avoidance of which requires some measure of intelligence and specialty training in probability and statistics.  See also Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” 185 Science 1124 (1974).
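The counterintuitive answer, that switching wins two-thirds of the time, is easy to verify by simulation. The following is a minimal sketch of my own, not from the article or from Selvin’s papers; the function names are invented:

```python
import random

def monty_hall_trial(switch, rng):
    """Play one round of the Monty Hall game; return True if the contestant wins."""
    doors = (0, 1, 2)
    prize = rng.randrange(3)    # prize hidden uniformly at random
    choice = rng.randrange(3)   # contestant's initial pick
    # Monty opens a door that hides no prize and is not the contestant's pick.
    opened = rng.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

def win_rate(switch, trials=100_000, seed=2014):
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(trials)) / trials
```

Running `win_rate(True)` comes out near 2/3, while `win_rate(False)` comes out near 1/3 — exactly the result that so many of vos Savant’s correspondents, including trained mathematicians, refused at first to believe.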

Our system of justice suffers enough from the current regime of trial by sound-bite. It should not be further undermined by the abandonment of judicial gatekeeping and judicial review of the quality and quantity of scientific evidence in our civil and criminal trials. Opponents of gatekeeping imply that the Federal Rules of Evidence should be “liberally” construed, by which they usually mean construed consistently with their own biases and prejudices. The adjective “liberal,” both traditionally and today, however, connotes being enlightened, free from superstition, bias, and prejudice, and promoting accuracy in judgments that are important to institutional respect and prestige. Let’s hope that 2014 is a better year for science in the courtroom.