Friendly Fire Takes Aim at Daubert – Beecher-Monas and the Undue Attack on Expert Witness Gatekeeping

Even though the principles of Daubert have been embraced by the entire Supreme Court in a rare unanimous opinion, see Weisgram v. Marley Co., 528 U.S. 440 (2000) (Ginsburg, J., writing for a unanimous Court), and incorporated into a revised Rule 702, ratified by Congress, the enemies of Daubert abound.  Some advocates simply cannot let go of the notion that they have a constitutional right to bamboozle juries with unreliable evidence.

Daubert has some friends who would kill it by reinterpreting and diluting the reliability and relevance requirements so that anything goes, and everything is admissible.  Perhaps the best example of such a “friend” is Professor Erica Beecher-Monas, who has written a book-length roadmap on how to eviscerate the gatekeeping concept.  See E. Beecher-Monas, Evaluating Scientific Evidence:  An Interdisciplinary Framework for Intellectual Due Process (New York 2007).

Erica Beecher-Monas (EBM, not to be confused with evidence-based medicine) starts off with a trenchant defense of the epistemic approach of Daubert, and an explanation of why proxies for scientific reliability and validity are doomed to fail.  EBM proceeds to offer a five-step program of “intellectual due process” to help trial courts carry out their screening:

1.  evaluate the challenged expert witness’s theory and hypothesis for their ability and power to explain the data;

2.  evaluate the data that weigh in favor of, and against, the expert witness’s theory; the gatekeeping court must weigh all the evidence collectively.  According to EBM, the expert witness’s “theory” should explain and account for most of the evidence, and it should explain the data that appear to weigh against the theory as well as the supporting evidence.

3.  invoke “supportable assumptions” to bridge the inevitable gaps between underlying data and theory; there are, according to the author, “scientifically justifiable default assumptions,” which should be honored to fill in the gaps in an expert witness’s reasoning and explanations.

4.  evaluate the testifying expert witness’s methodology; and

5.  evaluate the statistical and probabilistic inferences between underlying data and opinions.  The trial court must synthesize all the available information to evaluate how well the data, methodology, and “default assumptions,” taken together, support the proffered conclusions.

Id. at 6, 46–47.

This program sounds encouraging in theory.  As EBM describes how this “framework for analysis” should work, however, things go poorly, and the law and scientific method are misrepresented.  “Default assumptions” become the pretense to let in opinions that would gag the proverbial horsefly off the manure cart.

Not all is bad.  EBM offers some important insights into how courts should handle scientific evidence.  She defends the gatekeeping process because of the serious danger of a “dilution effect” among jurors, who, when overwhelmed with evidence of varying quality, discount the strong evidence along with the weak.  She reminds us that there are standards of care for research science and for clinical medicine, and standards for evaluating whether experimental results can be “honestly” attributed to the data.  Id. at 53.  Courts must evaluate whether the data and method really “show” the conclusion that the expert witness claims for them.  Id.  She criticizes those commentators who confuse the burden of proof with the statistical standard used in hypothesis testing for individual studies.  Id. at 65.
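That last distinction deserves a concrete illustration.  The sketch below is mine, not EBM’s, and the counts in the hypothetical 2×2 table are assumed purely for illustration; it separates the statistical standard applied to a single study (a p-value) from the civil burden of proof (more likely than not):

    # A minimal sketch with assumed, hypothetical counts; not from the book.
    # The p-value asks whether chance alone plausibly explains one study's
    # result; the burden of proof asks whether causation is more likely than not.
    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table: 30 cases among 100 exposed, 15 among 100 unexposed
    table = [[30, 70],
             [15, 85]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

    # A relative risk above 2.0 implies that more than half of the exposed
    # cases are attributable to the exposure: the usual bridge from an
    # epidemiologic measure to the "more likely than not" standard.
    relative_risk = (30 / 100) / (15 / 100)
    attributable_fraction = (relative_risk - 1) / relative_risk
    print(f"RR = {relative_risk:.2f}, attributable fraction = {attributable_fraction:.0%}")

A single study can clear the conventional five percent significance threshold while the size of the relative risk says nothing, by itself, about whether the preponderance standard is met; the two standards answer different questions, which is precisely the confusion EBM rightly flags.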

The narrative becomes confused and convoluted in addressing how trial courts should function as gatekeepers.  EBM is critical of how trial courts have discharged their gatekeeping responsibilities.  In many instances, EBM is unhappy with how judges carry out their evaluations, and she criticizes them on the basis of her own ipse dixit.  It turns out that “intellectual due process,” as EBM conceives of it, allows pretty much anything to be admitted in her ideal juridical world.

Some of EBM’s assertions about the law and the science are startling and deeply flawed.  In this post, I discuss some of the flawed scholarship, which has the potential to confuse and mislead.

1.  “Daubert, which requires only a scintilla of scientifically valid and relevant evidence to survive an admissibility determination.”  Id. at 82.

This assertion is wrong on its face.  Justice Blackmun, in writing his opinion in Daubert, discussed a “scintilla” of evidence, not in the context of making an admissibility determination of an expert witness’s opinion, but rather in the context of ruling on motions for directed verdict or for summary judgment:

“Additionally, in the event the trial court concludes that the scintilla of evidence presented supporting a position is insufficient to allow a reasonable juror to conclude that the position more likely than not is true, the court remains free to direct a judgment, Fed.Rule Civ.Proc. 50(a), and likewise to grant summary judgment, Fed.Rule Civ.Proc. 56.  Cf., e.g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F.2d 1349 (6th Cir.) (holding that scientific evidence that provided foundation for expert testimony, viewed in the light most favorable to plaintiffs, was not sufficient to allow a jury to find it more probable than not that defendant caused plaintiff’s injury), cert. denied, 506 U.S. 826 (1992); Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307 (5th Cir. 1989) (reversing judgment entered on jury verdict for plaintiffs because evidence regarding causation was insufficient), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990); Green 680-681 [Green, “Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation,” 86 Nw.U.L.Rev. 643 (1992)].”

Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 596 (1993) (emphasis added).

Justice Blackmun was emphasizing that Rule 702 is not the only tool on the trial judge’s workbench; he was not setting a standard for the quantum of evidence that must govern an admissibility determination of an expert witness’s opinion.  Even if Justice Blackmun had been discussing a scintilla of evidence in the context of addressing admissibility (rather than sufficiency), his citation to the Bendectin decisions in the Courts of Appeals makes clear that the “scintilla” of evidence offered by the party suffering entry of judgment might be fairly extensive in terms of expert witnesses’ opinions and their relied-upon studies.  Nonetheless, this “scintilla” could be, and was, insufficient to resist judgment in the face of evidence of higher quality and relevance.

EBM’s scholarship here is thus flawed at two levels.  First, she conflates admissibility with sufficiency (which elsewhere she faults various courts for doing, calling the conflation “pernicious”; see id. at 83).  Second, she fails to realize or acknowledge that the scintilla must be weighed against the entire evidentiary display.  Sometimes, as in the Bendectin litigation, the “scintilla” might include a fair amount of evidence, which is nonetheless trumped by evidence superior in quality and quantity; this trumping is what leads to the finding that the opining witnesses have offered unreliable opinions, unhelpful to the jury.

2.  “[C]onsistency of the observed effect is a criterion most scientists would deem important, but it may be absent even where there is a strong causal link, such as the link between smoking and lung cancer, which, although strong, is not inevitably observed.  Although it might be persuasive to find that there was a consistent specific association between exposure and a particular disease, such association is rarely observed.”  Id. at 59.

First, EBM offers no citation for the claim that the “link” between smoking and lung cancer is “not inevitably observed.”  The association is virtually always found in modern epidemiologic studies, and it is almost always statistically significant in adequately powered studies.  The important point about consistency is the repeated finding of an association, not likely due to chance, in many studies conducted by different investigators, in different populations, at different times, and with different study designs.  EBM muddles her unsupported, unsupportable assertion by then noting that a “consistent specific association” is rarely observed, but here she has moved, confusingly, to a different consideration – namely, the specificity of the association, not its consistency.  Admittedly, specificity is a weak factor in assessing the causality vel non of an association, but EBM’s reference to a “consistent specific association” seems designed to confuse and conflate two different factors in the analysis.
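The importance of consistency can be made concrete with a simple simulation.  In the sketch below, which is my own illustration and not drawn from any source, I assume a true relative risk of ten, roughly the order of magnitude reported for smoking and lung cancer; with an effect of that size, virtually every adequately powered study detects the association:

    # A minimal simulation under assumed parameters: each iteration is a
    # hypothetical study with 500 subjects per arm and a true relative risk of 10.
    import numpy as np
    from scipy.stats import fisher_exact

    rng = np.random.default_rng(0)
    risk_exposed, risk_unexposed = 0.10, 0.01   # assumed baseline risks (RR = 10)
    n_per_arm, n_studies = 500, 100

    significant = 0
    for _ in range(n_studies):
        cases_exp = rng.binomial(n_per_arm, risk_exposed)
        cases_unexp = rng.binomial(n_per_arm, risk_unexposed)
        table = [[cases_exp, n_per_arm - cases_exp],
                 [cases_unexp, n_per_arm - cases_unexp]]
        _, p = fisher_exact(table)
        significant += p < 0.05    # count studies that detect the association

    print(f"{significant} of {n_studies} simulated studies find p < 0.05")

An effect of that size is, for all practical purposes, inevitably observed in any study with reasonable power, which is just what the modern epidemiologic literature on smoking and lung cancer reflects.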

3.  “[A]nimal studies are superior to epidemiologic studies because of the lack of controls endemic to epidemiologic studies, the difficulty in designing and analyzing such studies, and their costliness.”  Id. at 70.

This is one of EBM’s more strident, stunning pronouncements.  Her book makes clear that, as an apologist for animal evidence, EBM deprecates and misunderstands epidemiologic evidence at almost every turn.  It is perhaps possible to interpret EBM charitably by suggesting that the uncontrolled epidemiologic studies she has in mind are “descriptive studies,” such as case reports or case series.  Such an interpretation is unwarranted, however, given EBM’s failure to qualify “epidemiologic studies.”  She paints with a broad brush, in a deliberate attempt to upend the generally accepted hierarchy of evidence.  Even a casual reading of the cases she cites, and of the Reference Manual on Scientific Evidence, shows that the epidemiologic studies that are important to real intellectual due process are precisely the ones that have appropriate controls.  Most of the world, even if not EBM, thinks of analytic epidemiologic studies when comparing and contrasting them with animal studies.

EBM offers no support for the asserted difficulty in designing and analyzing epidemiologic studies.  Is she making a personal, subjective declaration of her own difficulties?  The difficulties of judges and lawyers?  Or the difficulties of expert witnesses themselves?  To be sure, some lawyers have such difficulties, but then they made a sound career choice in going to law school rather than medical school.  (Perhaps they would do better yet in real estate litigation rather than in torts.)  Many physicians have “difficulty in designing and analyzing such studies,” but that is because these activities are outside the scope of their expertise, and until recently were rarely taught in medical schools.  In my experience, these activities have not been beyond the abilities of appropriately qualified expert witnesses, whether engaged by plaintiffs or defendants in civil litigation.

As for the “costliness” of epidemiologic studies, many studies can be conducted expeditiously and inexpensively.  Case-control studies can often be done relatively quickly and easily because they work from identified cases back to past exposures.  Cohort studies can often be assembled from administrative medical databases maintained for other purposes.  In the United States, such databases are harder to find, but several exist as a result of Medicare, the VA, the National Center for Health Statistics, and various managed-care programs.  In Scandinavia, the entire countries of Sweden and Denmark are ongoing epidemiologic studies because of their national healthcare systems.  Cohort and case-control studies have been quickly and inexpensively set up to study many important public health issues, ranging from MMR vaccines, thimerosal, and autism, to abortion and breast cancer, to welding and parkinsonism.  See, e.g., Lone Frank, “Epidemiology: When an Entire Country Is a Cohort,” 287 Science 2398-2399 (2000).  Plaintiffs’ counsel, often with more money at their disposal than the companies they sue, have organized and funded any number of epidemiologic studies.  EBM’s attempted excuses and justifications of why animal studies are “superior” to epidemiology fail.

Perhaps we should take a moment to have a small reality check:

Would we accept an FDA decision that approved a drug as safe and efficacious on the basis of rat studies alone, without insisting on a clinical trial in human beings?  How many drugs show great therapeutic promise in animal models only to fail on safety or efficacy, or both, when tested in humans?  I believe that the answers are: “no,” and “sadly, too many.”

4. “Clinical double-blind studies are rarely, if ever, available for litigation purposes.”  Id. at 69.

EBM again cites no support for this assertion, and she is plainly wrong.  Clinical trials have been important sources of evidence relied upon by both plaintiffs’ and defendants’ expert witnesses in pharmaceutical litigation, which makes up a large, increasing portion of all products liability litigation.  Even in cases involving occupational or environmental exposures, for which randomization would be impractical or unethical, double-blinded human clinical studies of toxicokinetics, or metabolic distribution and fate, are often important to both sides involved in litigating claims of personal injury.

5.  “[B]ecause there are so few good epidemiologic studies available, animal studies are often the primary source of information regarding the impact of chemicals.”  Id. at 73.

The field of occupational and environmental epidemiology is perhaps half a century old, with high-quality studies addressing many if not most of the chemicals involved in important personal injury litigations.  EBM’s claims about the prevalence of “good” studies, as well as the implicit claim about what proportion of lawsuits involve chemicals for which no epidemiologic data exist, are themselves devoid of any empirical support.

6.  “[S]cientific conclusions are couched in tentative phrases. ‘Association’ is preferred to ‘causation.’ Thus, failing to understand that causation, like other hypotheses, can never be proven true, courts may reject as unreliable even evidence that easily meets scientific criteria for validity.”  Id. at 55 (citing Hall, Havner, and Wright).

EBM writes that scientists prefer “association” to “causation,” but the law insists upon causation.  EBM fails to recognize that these are two separate, distinct concepts, not a mere semantic preference for the intellectually timid.  An opinion about association is not an opinion about causation.  Scientists prefer to speak of association when the criteria for validly inferring a causal conclusion are not met; this preference thus has important epistemic implications for courts that must ensure that opinions are really reliable and relevant.  EBM sets up a straw man – hypotheses can never be proven to be true – in order to advocate for the acceptance and the admissibility of hypotheses masquerading as conclusions.  The fact is that, notwithstanding the mechanics of hypothesis testing, many hypotheses come to be accepted as scientific fact.  Indeed, EBM’s talk here, and elsewhere, of opinions “proven” or “demonstrated” to be true is a sloppy incorporation of mathematical language that is best avoided in evaluating empirical scientific claims.  Scientific findings are “shown” or “inferred,” not demonstrated.  Not all opinions stall at the “association” stage; many justifiably move to opinions about causation.  The hesitancy of scientists to assert that an association is causal usually means that they, like judges who are conscientious about their gatekeeping duties, recognize that there is an unacceptable error rate in indiscriminately treating all associations as causal.
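That error-rate point can be put in rough numbers.  Assume, purely for illustration (the figures below are mine, not drawn from any cited source), that only a small fraction of litigated associations reflect true causation; a rule that equates statistical significance with causation then mislabels a large share of innocent exposures:

    # A back-of-the-envelope sketch with assumed, illustrative numbers.
    n_associations = 1000
    truly_causal = 50          # assume only 5% of studied associations are causal
    alpha, power = 0.05, 0.80  # conventional significance level and study power

    false_positives = (n_associations - truly_causal) * alpha  # chance "findings"
    true_positives = truly_causal * power                      # real effects found

    share_spurious = false_positives / (false_positives + true_positives)
    print(f"Share of 'significant' associations that are not causal: {share_spurious:.0%}")
    # Roughly 54% on these assumptions: more than half of the "significant"
    # associations would be false leads.

On those assumptions, treating every significant association as causal would be wrong more often than right; that is the error rate against which conscientious gatekeepers guard.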

(To Be Continued)