Beecher-Monas and the Attempt to Eviscerate Daubert from Within

Part 2 of a Critique of Evaluating Scientific Evidence, by Erica Beecher-Monas (EBM)

Giving advice to trial and appellate judges on how they should review scientific evidence can be a tricky business.  Such advice must reliably capture the nature of scientific reasoning in several different fields, such as epidemiology and toxicology, and show how such reasoning can and should be incorporated within a framework of statutes, evidentiary rules, and common law.  Erica Beecher-Monas’ book, Evaluating Scientific Evidence, fails to accomplish these goals.  What she does accomplish is the conflation of regulatory assumptions and precautionary principles with the science of health effects in humans.

7.  “Empowering one type of information or one kind of study to the exclusion of another makes no scientific evidentiary sense.”  Id. at 59.

It is telling that Erica Beecher-Monas (EBM) mentions neither the systematic review nor the technique of meta-analysis, which builds upon the systematic review.  Of course, these approaches, whether qualitative or quantitative, require a commitment to pre-specify a hierarchy of evidence, as well as inclusionary and exclusionary criteria for studies.  What EBM seems to hope to accomplish is the flattening of the hierarchy of evidence, so that all types of evidence become comparable in probative value.  This is not scientific; it is part of an agenda to turn Daubert into a standard of bare relevancy.  Systematic reviews do not literally exclude any “one kind” of study, but they recognize that not all study designs are equal.  The omission from EBM’s book speaks volumes.
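For readers who want to see what the omitted technique actually involves, here is a minimal sketch, in Python, of the core arithmetic of a fixed-effect, inverse-variance meta-analysis.  The study names and numbers are hypothetical, invented solely for illustration; a real systematic review would add pre-specified inclusion and exclusion criteria, assessment of study quality and heterogeneity, and sensitivity analyses.

```python
import math

# Hypothetical study results (illustrative only): relative risk and 95% CI.
studies = [
    {"name": "Cohort A",       "rr": 1.10, "ci": (0.85, 1.42)},
    {"name": "Cohort B",       "rr": 0.95, "ci": (0.80, 1.13)},
    {"name": "Case-control C", "rr": 1.30, "ci": (0.90, 1.88)},
]

def pooled_relative_risk(studies):
    """Fixed-effect, inverse-variance pooling of relative risks on the log scale."""
    weighted_sum, total_weight = 0.0, 0.0
    for s in studies:
        log_rr = math.log(s["rr"])
        lo, hi = s["ci"]
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        weight = 1.0 / se ** 2                           # inverse-variance weight
        weighted_sum += weight * log_rr
        total_weight += weight
    log_pooled = weighted_sum / total_weight
    se_pooled = math.sqrt(1.0 / total_weight)
    return (math.exp(log_pooled),
            (math.exp(log_pooled - 1.96 * se_pooled),
             math.exp(log_pooled + 1.96 * se_pooled)))

pooled_rr, pooled_ci = pooled_relative_risk(studies)
print(f"Pooled RR = {pooled_rr:.2f}, 95% CI {pooled_ci[0]:.2f} to {pooled_ci[1]:.2f}")
```

The sketch makes the point in miniature: pooling presupposes prior, disciplined judgments about which studies count and how much weight each deserves, which is precisely the hierarchy that EBM would flatten.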

8. “[T]he likelihood that someone whose health was adversely affected will have the courthouse doors slammed in his or her face,”  id. at 64, troubles EBM. 

EBM recognizes that inferences and scientific methodologies involve false positives and false negatives, but she appears disproportionately concerned with false negatives.  Of course, this solicitude assumes the very point at issue:  whether we have reasonably good knowledge that the claimant really was adversely affected.  A similar solicitude for the defendant who has had the courthouse door slammed on his head, in cases in which no harm was caused, is missing.  This imbalance leads EBM to excuse and defend gaps in plaintiffs’ evidentiary displays on scientific issues.

9.  “Gaps in scientific knowledge are inevitable, not fatal flaws.”  Id. at 51 (citing a work on risk assessment).

The author also seems to turn a blind eye to the size of the gaps.  Some gaps are simply too big to be bridged by assumptions.  Scientists have to be honest about their assumptions, and temper their desire to reach conclusions.  Expert witnesses often lack the requisite scientific temper to remain agnostic; they take positions when they should rightfully press for the gaps to be filled.  Expert witnesses outrun their headlights, yet EBM cites virtually no gatekeeping decision with approval.

Excusing gaps in risk assessment may make some sense, given that risk assessment is guided by the precautionary principle.  The proofs in a toxic tort case are not.  EBM’s assertion about the inevitability of “gaps” skirts the key question:  When are gaps too large to countenance, and too large to support a judgment?  The Joiner case made clear that when gaps are bridged only by the ipse dixit of an expert witness, courts should look hard to determine whether the conclusion is reasonably and reliably supported by the empirical evidence.  The alternative, which EBM seems to invite, is intellectual anarchy.

10.  “Extrapolation from rodent studies to human cancer causation is universally accepted as valid (at least by scientists) because ‘virtually all of the specific chemicals known to be carcinogenic in humans are also positive in rodent bioassays, and sometimes even at comparable dose and with similar organ specificity’.”  Id. at 71 n.55 (quoting Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991)).

When it comes to urging the primacy and superiority of animal evidence, EBM’s brief is relentless and baseless.

Remarkably, in the sentence quoted above, EBM has committed the logical fallacy of affirming the consequent.  From the premise that all human carcinogens are rat carcinogens, she infers that rat carcinogens are human carcinogens.  That inference is invalid:  the converse proposition does not follow from the premise.  And it is the converse proposition that would supply the desired, putative validity for extrapolating from rodent studies to humans.  Not only does EBM commit a non sequitur; she also quotes Dr. Weinstein’s article out of context, because his article makes quite clear that not all rat carcinogens are accepted causes of cancer in human beings.
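The structure of the fallacy can be made explicit.  In the notation below, which is mine rather than EBM’s or Weinstein’s and is offered only as an illustration, let H(x) mean “x is a human carcinogen” and R(x) mean “x is positive in rodent bioassays”:

\[
\forall x\,\bigl(H(x) \rightarrow R(x)\bigr) \;\not\Rightarrow\; \forall x\,\bigl(R(x) \rightarrow H(x)\bigr)
\]

Weinstein’s observation supplies only the left-hand premise; the right-hand proposition, which the extrapolation requires, does not follow.  For any particular chemical c, inferring H(c) from R(c) and the left-hand premise is the textbook form of affirming the consequent.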

11.  “Post-Daubert courts often exclude expert testimony in toxic tort cases simply because the underlying tests relate to animals rather than humans.”  Id. at 71 n.54.

Given EBM’s radical mission to “empower” animal evidence, we should not be too surprised that she is critical of Daubert decisions that have given lesser weight to animal evidence.  The above statement is another example of EBM’s over- and misstatement.  The cases cited, for instance the Hall decision by Judge Jones in the breast implant litigation, and the Texas Supreme Court’s decision in Havner, do not support the “simply because.”  Those cases addressed complex evidentiary displays that included animal studies, in vitro studies, chemical analyses, and epidemiologic studies.  The Hall decision was based upon Rule 702, but it was followed by Judge Jack Weinstein, who, after conducting two weeks of hearings, entered summary judgment sua sponte against the plaintiffs (animal evidence and all).  Recently, Judge Weinstein characterized the expert witnesses who supported the plaintiffs’ claims as “charlatans.”  See Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” Cardozo Law Review De Novo at 14, http://www.cardozolawreview.com/content/denovo/WEINSTEIN_2009_1.pdf (“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) (emphasis added).

Given that courts, scientists, court-appointed experts, and the Institute of Medicine have widely rejected the junk science behind breast implant claims, EBM’s insertion of “simply” in the sentence above simply speaks volumes about how she would evaluate the evidentiary display in Hall.  See also Evaluating Scientific Evidence at 81 n.99 (arguing that Hall was mistaken).  If the gatekeeping in the silicone breast implant litigation was mistaken, as EBM argues, it is difficult to imagine what slop would be kept out by a gatekeeper who chose to apply EBM’s “intellectual due process.”

12.  “Animal studies are more persuasive than epidemiology for demonstrating small increases of risk.”  Id. at 70.

EBM offers no support for this contention, and there is none unless one is concerned to demonstrate small risks for animals.  Even for the furry beasts themselves, the studies do not “demonstrate” (a mathematical concept) small increased risks at low doses comparable to the doses experienced by human beings. 
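The reason epidemiologic studies struggle to “demonstrate” small increases of risk is statistical power, and the same arithmetic cuts even harder against the few dozen animals in a typical bioassay.  The sketch below uses the standard normal-approximation sample-size formula for comparing two proportions; the baseline risk, relative risk, and error rates are assumptions chosen only to illustrate orders of magnitude, not figures drawn from EBM’s book or from any particular study.

```python
from scipy.stats import norm

def n_per_group(baseline_risk, relative_risk, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm to detect a given relative risk
    with a two-sided, two-proportion z-test (normal approximation)."""
    p1 = baseline_risk
    p2 = baseline_risk * relative_risk
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: a 1% baseline risk and a relative risk of 1.2 calls for
# roughly 40,000 subjects per group, far beyond any rodent bioassay.
print(round(n_per_group(baseline_risk=0.01, relative_risk=1.2)))
```

The particular numbers are assumed, but the shape of the problem is not:  detecting a genuinely small relative risk requires sample sizes that no bioassay approaches, which is why animal studies resort to high doses in the first place.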

EBM’s urging of “scientifically justifiable default assumptions” turns into advocacy for regulatory pronouncements grounded in the precautionary principle, which courts have consistently rejected as inapplicable to toxic tort litigation over personal injuries.

13.  “Nonthreshold effects, on the other hand, are characteristic of diseases (like some cancers) that are caused by genetic mutations.”  Id. at 75.

EBM offers no support for this assertion, and she ignores the growing awareness that the dose-response curves for many substances are hormetic; that is, the substance often exerts a beneficial or therapeutic effect at low doses, but may be harmful at high doses.  Alcohol is a known human carcinogen, but at low doses alcohol reduces cardiovascular mortality.  At moderate to high doses, alcohol causes female breast cancer and liver cancer.  Alcohol, however, must be consumed in sufficiently high, prolonged doses to cause permanent fibrotic and architectural changes in the liver (cirrhosis) before it increases the risk of liver cancer.  These counterexamples, and others, show that thresholds are often important features of the dose-response curves of carcinogens.
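A schematic equation, of my own devising and offered only as an illustration, shows why a hormetic dose-response is incompatible with a no-threshold default.  Write the risk at dose d as a simple J-shaped quadratic:

\[
R(d) \;=\; R_0 \;-\; a\,d \;+\; b\,d^{2}, \qquad a, b > 0 .
\]

Risk falls below the baseline R_0 for all doses below a/b and exceeds the baseline only above that point, so the curve itself embeds a threshold, exactly the feature that the no-threshold default assumes away.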

Similarly, EBM incorrectly argues that the default assumption of a linear dose-response pattern is reasonable because it is, according to her, widely accepted.  Id. at 74 n.65.  Her supporting citation is, however, to an EPA document on risk assessment, which has nothing to do with determinations of causality.  Risk assessments assume causality and attempt to place an upper bound on the magnitude of the hypothetical risk.  Again, EBM’s commitment to the precautionary principle and to regulatory approaches preempts scientific thinking.  If EBM had considered the actual and postulated mechanisms of carcinogenesis, even in sources she cites, she would have had to acknowledge that the linear no-threshold model makes no sense because it ignores the operation of multiple protective mechanisms that must be saturated and overwhelmed before carcinogenic exposures can actually induce clinically meaningful tumors in animals.  See, e.g., Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991) (mistakenly cited by EBM for the proposition that rodent carcinogens should be “assumed” to cause cancer in humans).
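The competing default assumptions can likewise be written as simple functional forms; the symbols are mine, and the forms are deliberately schematic.  The linear no-threshold default treats excess risk as proportional to dose all the way down, whereas a threshold model posits no excess risk until protective and repair mechanisms are saturated at some dose d_0:

\[
E_{\text{LNT}}(d) \;=\; \beta\,d, \qquad\qquad E_{\text{threshold}}(d) \;=\; \beta\,\max(0,\; d - d_0).
\]

Which form describes a given carcinogen is an empirical question; the point is only that adopting the first as a “default” is a regulatory policy choice, not a scientific finding about causation.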

14.  “Under this assumption [of the EPA], demonstrating the development of lung cancer in mice would be admissible to show human causation in any organ.  Because we know so little about cancer causation, there is justification for this as a workable but questionable assumption with respect to cancer.”  Id. at 77.

Extrapolation across species, across organs, and across disparate doses!  No gap is too wide or too deep to be traversed by EBM’s gatekeepers.  In arguing that extrapolation is a routine part of EPA risk assessment, EBM ignores that such extrapolation is not the basis for reaching scientific conclusions about health effects in human beings.  Regulatory science “mandates certainty,” the flip side of David Michaels’ caricature of industry as “manufacturing doubt.”

15.  “[T]he court in Hall was mistaken when it excluded the expert testimony because the studies relied on only showed that silicone could have caused the plaintiff’s diseases, not that it did.”  Id. at 81 n.99.

Admittedly, it is difficult to tell whether EBM is discussing general or specific causation in this sentence, but it certainly seems as if she is criticizing the Hall decision, by Judge Jones, because the expert witnesses for the plaintiff were unable to say that silicone did, in fact, cause Hall’s illness.  EBM appears to be diluting specific causation to a “might have had some effect” standard. 

Readers who have actually read the Hall decision, or who are familiar with the record in Hall, will know that one key expert witness for plaintiffs, an epidemiologist, Dr. David Goldsmith, conceded that he could not say that silicone more likely than not caused autoimmune disease.  A few weeks after testifying in Hall, Goldsmith changed his testimony.  In October 1996, in Judge Weinstein’s courtroom, based upon an abstract of a study that he had seen the night before testifying, Goldsmith asserted that he believed that silicone did cause autoimmune connective tissue disease, more likely than not.  Before Goldsmith left the stand, Judge Weinstein declared that he did not believe that Goldsmith’s testimony would be helpful to a jury.

So perhaps EBM is indeed claiming that testimony purporting to provide a causal conclusion need not be expressed to any degree of certainty beyond mere possibility.  This interpretation is consistent with what appears to be EBM’s dilution of “intellectual due process” to permit virtually any testimony at all that has the slightest patina of scientific opinion.

16.  “The underlying reason that courts appear to founder in this area [toxic torts] is that causation – an essential element for liability – is highly uncertain, scientifically speaking, and courts do not deal well with this uncertainty.”  Id. at 57.

Regulation in the face of uncertainty makes sense as an application of the precautionary principle, but litigation requires expert witness opinion that rises to the level of “scientific knowledge.”  Rule 702.  EBM’s candid acknowledgment is the very reason that Daubert is an essential tool for stripping out regulatory “science,” which may well support regulation against a potential, unproven hazard.  Regulations can be abrogated.  Judgments in litigation are forever.  The social goals and the evidentiary standards are different.

17.  “Causal inference is a matter of explanation.”  Id. at 43.

Here and elsewhere, EBM talks of causality as though it were only about explanation, when in fact the notion of causal inference includes an element of prediction as well.  EBM seems to downplay the predictive nature of scientific theories, perhaps because this is where theories founder and confront their error rate.  Inherent in any statement of causal inference is a prediction that if the factual antecedents are the same, the result will be the same.  Causation is more than a narrative of why the effect followed the cause.

EBM’s work feeds the illusion that courts can act as gatekeepers, wrapped in the appearance of “intellectual due process,” while finding just about any opinion admissible at the end of the day.  I could give further examples of the faux pas, ipse dixits, and non sequiturs in EBM’s Evaluating Scientific Evidence, but the reader will appreciate the overall point.  Her topic is important, but there are better places for judges and lawyers to seek guidance in this difficult area.  The Federal Judicial Center’s Reference Manual on Scientific Evidence, although not perfect, is at least free of the sustained ideological noise that afflicts EBM’s text.