For your delectation and delight, desultory dicta on the law of delicts.

Judicial Control of the Rate of Error in Expert Witness Testimony

May 28th, 2015

In Daubert, the Supreme Court set out several criteria or factors for evaluating the “reliability” of expert witness opinion testimony. The third factor in the Court’s enumeration was whether the trial court had considered “the known or potential rate of error” in assessing the scientific reliability of the proffered expert witness’s opinion. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993). The Court, speaking through Justice Blackmun, failed to provide much guidance on the nature of the errors subject to gatekeeping, on how to quantify the errors, and on how much error was too much. Rather than provide a taxonomy of error, the Court lumped “accuracy, validity, and reliability” together with a grand pronouncement that these measures were distinguished by no more than a “hen’s kick.” Id. at 590 n.9 (citing and quoting James E. Starrs, “Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702,” 26 Jurimetrics J. 249, 256 (1986)).

The Supreme Court’s failure to elucidate its “rate of error” factor has caused a great deal of mischief in the lower courts. In practice, trial courts have rejected engineering opinions on stated grounds of their lacking an error rate as a way of noting that the opinions were bereft of experimental and empirical evidential support[1]. For polygraph evidence, courts have used the error rate factor to obscure their policy prejudices against polygraphs, and to exclude test data even when the error rate is known, and rather low compared to what passes for expert witness opinion testimony in many other fields[2]. In the context of forensic evidence, the courts have rebuffed objections to random-match probabilities that would require that such probabilities be modified by the probability of laboratory or other error[3].

When it comes to epidemiologic and other studies that require statistical analyses, lawyers on both sides of the “v” frequently misunderstand p-values or confidence intervals to provide complete measures of error, and ignore the larger errors that result from bias, confounding, threats to study validity (internal and external), inappropriate data synthesis, and the like[4]. Not surprisingly, parties fallaciously argue that the Daubert criterion of “rate of error” is satisfied by expert witnesses’ reliance upon studies that in turn use conventional 95% confidence intervals and p-values below 0.05 as measures of statistical significance[5].

The lawyers who embrace confidence intervals and p-values as their sole measure of error rate fail to recognize that confidence intervals and p-values are means of assessing only one kind of error: random sampling error. Given the carelessness of the Supreme Court’s use of technical terms in Daubert, and its failure to engage with the actual evidence at issue in the case, it is difficult to know whether the Court intended to suggest that random error was the error rate it had in mind[6]. The statistics chapter in the Reference Manual on Scientific Evidence helpfully points out that the inferences that can be drawn from data turn on p-values and confidence intervals, as well as on study design, data quality, and the presence or absence of systematic errors, such as bias or confounding. Reference Manual on Scientific Evidence at 240 (3d ed. 2011) [Manual]. Random errors are reflected in the size of p-values or the width of confidence intervals, but these measures of random sampling error ignore systematic errors such as confounding and study biases. Id. at 249 & n.96.
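The distinction can be made concrete with a small simulation (my own illustration, not drawn from the Manual, with the function name and parameters invented for the sketch): when the only error is random sampling error, a nominal 95% confidence interval covers the true value about 95% of the time; introduce even a modest systematic bias, and the nominal coverage evaporates with no warning from the interval itself.

```python
import math
import random

def coverage(n_trials=2000, n=50, bias=0.0, seed=1):
    """Fraction of nominal 95% confidence intervals that cover the
    true mean (here 0). `bias` shifts every observation, mimicking a
    systematic error -- confounding, measurement bias -- that the
    interval's width never reflects."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = [rng.gauss(0, 1) + bias for _ in range(n)]
        m = sum(sample) / n
        sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
        half = 1.96 * sd / math.sqrt(n)
        hits += (m - half) <= 0 <= (m + half)
    return hits / n_trials

print(coverage(bias=0.0))  # close to the nominal 0.95
print(coverage(bias=0.5))  # far below 0.95
```

On these defaults, the unbiased coverage comes out near the nominal 95%, while a bias of half a standard deviation drives coverage far below the nominal level; nothing in the width of any single interval signals the difference.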

The Manual’s chapter on epidemiology takes an even stronger stance: the p-value for a given study does not provide a rate of error or even a probability of error for an epidemiologic study:

“Epidemiology, however, unlike some other methodologies—fingerprint identification, for example—does not permit an assessment of its accuracy by testing with a known reference standard. A p-value provides information only about the plausibility of random error given the study result, but the true relationship between agent and outcome remains unknown. Moreover, a p-value provides no information about whether other sources of error – bias and confounding – exist and, if so, their magnitude. In short, for epidemiology, there is no way to determine a rate of error.”

Manual at 575. This stance seems not entirely justified given that there are Bayesian approaches that would produce credibility intervals accounting for sampling and systematic biases. To be sure, such approaches have their own problems and they have received little to no attention in courtroom proceedings to date.

The authors of the Manual’s epidemiology chapter, who are usually forgiving of judicial error in interpreting epidemiologic studies, point to one United States Court of Appeals case that fallaciously interpreted confidence intervals as magically quantifying bias and confounding in a Bendectin birth defects case. Id. at 575 n.96[7]. The Manual could have gone further to point out that, in the context of multiple studies of different designs and analyses, the cognitive biases involved in evaluating, assessing, and synthesizing the studies are also ignored by statistical measures such as p-values and confidence intervals. Although the Manual notes that assessing the role of chance in producing a particular set of sample data is “often viewed as essential when making inferences from data,” the Manual never suggests that random sampling error is the only kind of error that must be assessed when interpreting data. The Daubert criterion would appear to encompass all varieties of error, not just random error.

The Manual’s suggestion that epidemiology does not permit an assessment of the accuracy of epidemiologic findings misrepresents the capabilities of modern epidemiologic methods. Courts can, and do, invoke gatekeeping approaches to weed out confounded study findings. See “Sorting Out Confounded Research – Required by Rule 702” (June 10, 2012). The “reverse Cornfield inequality” was an important analysis that helped establish the causal connection between tobacco smoke and lung cancer[8]. Olav Axelson studied and quantified the role of smoking as a confounder in epidemiologic analyses of other putative lung carcinogens[9]. Quantitative methods for identifying confounders have been widely deployed[10].

A recent study in birth defects epidemiology demonstrates the power of sibling cohorts in addressing the problem of residual confounding in observational population studies with limited information about confounding variables. Researchers looking at various birth defect outcomes among offspring of women who used certain antidepressants in early pregnancy generally found no associations in pooled data from Iceland, Norway, Sweden, Finland, and Denmark. A putative association between maternal antidepressant use and a specific kind of cardiac defect (right ventricular outflow tract obstruction, or RVOTO) did appear in the overall analysis: first trimester maternal exposure to selective serotonin reuptake inhibitors was associated with RVOTO defects, with an adjusted odds ratio of 1.48 (95% C.I., 1.15, 1.89). The association reversed, however, when the analysis was limited to the sibling subcohort, which yielded an adjusted odds ratio of 0.56 (95% C.I., 0.21, 1.49)[11]. This study and many others show how creative analyses can elucidate and quantify the direction and magnitude of confounding effects in observational epidemiology.
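The reported intervals are, in all likelihood, conventional Wald intervals computed on the log-odds scale; the sketch below (my own illustration, not the study’s code, with an invented function name) reproduces the overall interval from the point estimate and shows what such an interval quantifies: sampling variability, and nothing more.

```python
import math

def wald_or_ci(or_est, se_log_or, z=1.96):
    """Conventional 95% Wald confidence interval for an odds ratio,
    computed on the log-odds scale. Its width reflects random
    sampling error only; bias and confounding leave no trace."""
    log_or = math.log(or_est)
    return (math.exp(log_or - z * se_log_or),
            math.exp(log_or + z * se_log_or))

# Recover the implied standard error of the log odds ratio from the
# reported interval (OR 1.48, 95% C.I. 1.15 to 1.89), then rebuild it:
se = (math.log(1.89) - math.log(1.15)) / (2 * 1.96)
lo, hi = wald_or_ci(1.48, se)
print(round(lo, 2), round(hi, 2))  # approximately 1.15 and 1.90
```

That the rebuilt interval matches the published one confirms only internal arithmetic consistency; it says nothing about confounding, which is why the sibling analysis, not a narrower interval, was the decisive check.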

Systematic bias has also begun to succumb to more quantitative approaches. A recent guidance paper by well-known authors encourages the use of quantitative bias analysis to provide estimates of uncertainty due to systematic errors[12].
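One of the simplest tools in the quantitative bias analysis kit is external adjustment for a binary unmeasured confounder, along the lines Axelson used for smoking. The sketch below (function name and all numbers are my own illustrative choices, not figures from any cited study) divides an observed risk ratio by a bias factor built from the confounder’s strength and its prevalence among the exposed and unexposed.

```python
def confounding_bias_factor(rr_cd, p_exposed, p_unexposed):
    """Bias factor produced by a binary unmeasured confounder, per
    the standard external-adjustment formula: rr_cd is the
    confounder's risk ratio for the disease; p_exposed and
    p_unexposed are the confounder's prevalence among the exposed
    and unexposed groups."""
    return ((p_exposed * (rr_cd - 1) + 1) /
            (p_unexposed * (rr_cd - 1) + 1))

# Hypothetical scenario: smoking (disease risk ratio 10) is more
# common among the exposed (60%) than the unexposed (40%).
bias = confounding_bias_factor(10, 0.60, 0.40)
adjusted_rr = 1.8 / bias  # observed risk ratio 1.8, externally adjusted
print(round(bias, 2), round(adjusted_rr, 2))  # 1.39 1.29
```

On these assumptions, much of the observed excess risk would be an artifact of smoking; the point of the exercise is that the direction and rough magnitude of confounding can be estimated, not merely lamented.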

Although the courts have failed to articulate the nature and consequences of erroneous inference, some authors would reduce all of Rule 702 (and perhaps Rules 704 and 403 as well) to a requirement that proffered expert witnesses “account” for the known and potential errors in their opinions:

“If an expert can account for the measurement error, the random error, and the systematic error in his evidence, then he ought to be permitted to testify. On the other hand, if he should fail to account for any one or more of these three types of error, then his testimony ought not be admitted.”

Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 739 (2011).

Like most antic proposals to revise Rule 702, this reform vision shuts out the full range of Rule 702’s remedial scope. Scientists certainly try to identify potential sources of error, but they are not necessarily very good at it. See Richard Horton, “Offline: What is medicine’s 5 sigma?” 385 Lancet 1380 (2015) (“much of the scientific literature, perhaps half, may simply be untrue”). And as Holmes pointed out[13], certitude is not certainty, and expert witnesses are not likely to be good judges of their own inferential errors[14]. Courts continue to say and do wildly inconsistent things in the course of gatekeeping. Compare In re Zoloft (Sertraline Hydrochloride) Products, 26 F. Supp. 3d 449, 452 (E.D. Pa. 2014) (excluding expert witness) (“The experts must use good grounds to reach their conclusions, but not necessarily the best grounds or unflawed methods.”), with Gutierrez v. Johnson & Johnson, 2006 WL 3246605, at *2 (D.N.J. Nov. 6, 2006) (denying motions to exclude expert witnesses) (“The Daubert inquiry was designed to shield the fact finder from flawed evidence.”).

[1] See, e.g., Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error); Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate); Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way to produce an accurate rate of error); Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error); Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

[2] See, e.g., United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that claimed error rate came from highly controlled situations, and that “real world” situations led to much higher error (10%) false positive error rates); Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action).

[3] See, e.g., United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting defendant’s objection to government’s failure to quantify laboratory error rate); United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting objection to government witness’s providing separate match and error probability rates).

[4] For a typical judicial misstatement, see In re Zoloft Products, 26 F. Supp. 3d 449, 454 (E.D. Pa. 2014) (“A 95% confidence interval means that there is a 95% chance that the ‘true’ ratio value falls within the confidence interval range.”).

[5] From my experience, this fallacious argument is advanced by both plaintiffs’ and defendants’ counsel and expert witnesses. See also Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 751 & n.72 (2011).

[6] See David L. Faigman et al. eds., Modern Scientific Evidence: The Law and Science of Expert Testimony § 6:36, at 359 (2007–08) (“it is easy to mistake the p-value for the probability that there is no difference”).

[7] Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990). As with any error of this sort, there is always the question whether the judges were entrapped by the parties or their expert witnesses, or whether the judges came up with the fallacy on their own.

[8] See Joel B. Greenhouse, “Commentary: Cornfield, Epidemiology and Causality,” 38 Internat’l J. Epidem. 1199 (2009).

[9] Olav Axelson & Kyle Steenland, “Indirect methods of assessing the effects of tobacco use in occupational studies,” 13 Am. J. Indus. Med. 105 (1988); Olav Axelson, “Confounding from smoking in occupational epidemiology,” 46 Brit. J. Indus. Med. 505 (1989); Olav Axelson, “Aspects on confounding in occupational health epidemiology,” 4 Scand. J. Work Envt’l Health 85 (1978).

[10] See, e.g., David Kriebel, Ariana Zeka, Ellen A. Eisen, and David H. Wegman, “Quantitative evaluation of the effects of uncontrolled confounding by alcohol and tobacco in occupational cancer studies,” 33 Internat’l J. Epidem. 1040 (2004).

[11] Kari Furu, Helle Kieler, Bengt Haglund, Anders Engeland, Randi Selmer, Olof Stephansson, Unnur Anna Valdimarsdottir, Helga Zoega, Miia Artama, Mika Gissler, Heli Malm, and Mette Nørgaard, “Selective serotonin reuptake inhibitors and venlafaxine in early pregnancy and risk of birth defects: population based cohort study and sibling design,” 350 Brit. Med. J. 1798 (2015).

[12] Timothy L. Lash, Matthew P. Fox, Richard F. MacLehose, George Maldonado, Lawrence C. McCandless, and Sander Greenland, “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).

[13] Oliver Wendell Holmes, Jr., Collected Legal Papers at 311 (1920) (“Certitude is not the test of certainty. We have been cock-sure of many things that were not so.”).

[14] See, e.g., Daniel Kahneman & Amos Tversky, “Judgment under Uncertainty:  Heuristics and Biases,” 185 Science 1124 (1974).

Can an Expert Witness Be Too Biased to Be Allowed to Testify?

May 20th, 2015

The Case of Barry Castleman

Barry Castleman has been a fixture in asbestos litigation for over three decades. By all appearances, he was the creation of the litigation industry. Castleman received a bachelor of science degree in chemical engineering in 1968, and a master’s degree in environmental engineering in 1972. In 1975, he started as a research assistant to plaintiffs’ counsel in asbestos litigation, and in 1979, he commenced his testimonial adventures as a putative expert witness for plaintiffs’ counsel. While enrolled in a doctoral program, Castleman sent chapters of his thesis to litigation industry mentors for review and edits. In 1985, Castleman received a doctorate, with the assistance of a Ron Motley fellowship. See John M. Fitzpatrick, “Digging Deep to Attack Bias of Plaintiff Experts,” DRI Products Liability Seminar (2013).

Castleman candidly testified, on many occasions, that he was not an epidemiologist, a biostatistician, a toxicologist, a physician, a pathologist, or any other kind of healthcare professional. He is not a trained historian. Understandably, courts puzzled over exactly what someone like Castleman should be allowed to testify about. Many courts limited or excluded Castleman from remunerative testimonial roles[1]. Still, in the face of his remarkably inadequate training, education, and experience, Castleman persisted, and often prevailed, in making a living at testifying about the historical “state of the art” of medical knowledge about asbestos over time.

The result was often not pretty. Castleman worked not just as an expert witness, but also as an agent of plaintiffs’ counsel to suppress evidence. “The Selikoff – Castleman Conspiracy” (May 13, 2011). As a would-be historian, Castleman was controlled and directed by the litigation industry to avoid inconvenient evidence. “Discovery into the Origin of Historian Expert Witnesses’ Opinions” (Jan. 30, 2012). Despite his covert operations, and his exploitation of defendants’ internal documents, Castleman complained more than anyone about the scrutiny created by his self-chosen litigation roles. In 1985, pressed for materials he had considered in formulating his “opinions,” Castleman wrote a personal letter to the judge, the Hon. Hugh Gibson of Galveston, Texas, to object to lawful discovery into his activities:

“1. It threatens me ethically through demands that I divulge material submitted in confidence, endangering my good name and reputation.
2. It exposes me to potential liability arising from the release of correspondence and other materials provided to me by others who assumed I would honor their confidence.
3. It jeopardizes my livelihood in that material requested reveals strategies of parties with whom I consult, as well as other materials of a confidential nature.
4. It is far beyond the scope of relevant material to my qualifications and the area of expert testimony offered.
5. It is unprecedented in 49 prior trials and depositions where I have testified, in federal and state courts all over the United States, including many cases in Texas. Never before have I had to produce such voluminous and sensitive material in order to be permitted to testify.
6. It is excessively and unjustifiably intrusive into my personal and business life.
7. I have referenced most of the information I have in my 593-page book, “Asbestos: Medical and Legal Aspects.” The great majority of the information I have on actual knowledge of specific defendants has come from the defendants themselves.
8. All information that I have which is relevant to my testimony and qualifications has been the subject of numerous trials and depositions since 1979.”

Castleman Letter to Hon. Hugh Gibson (Nov. 5, 1985).

Forty years later, Castleman is still working for the litigation industry, and courts are still struggling to figure out what role he should be allowed as a testifying expert witness.

Last year, the Delaware Supreme Court had to order a new trial for R. T. Vanderbilt, in part because Castleman had blurted out non-responsive, scurrilous hearsay statements that:

(1) employees of Johns-Manville (a competitor of R.T. Vanderbilt) had called employees of Vanderbilt “liars;”

(2) R.T. Vanderbilt spent a great amount of money on studies and activities to undermine federal regulatory action on talc; and

(3) R.T. Vanderbilt was “buying senators and lobbying the government.”

The Delaware court held that Castleman’s gratuitous, unsolicited testimony on cross-examination was inadmissible, and that his conduct required a new trial.  R.T. Vanderbilt Co. v. Galliher, No. 510, 2013, 2014 WL 3674180 (Del. July 24, 2014).

Late last year, a federal court ruled, pre-trial, that Castleman may testify over Rule 702 objections because he “possesses ‘specialized knowledge’ regarding the literature relating to asbestos available during the relevant time periods,” and that his testimony “could be useful to the jury as a ‘sort of anthology’ of the copious available literature.” Krik v. Crane Co., No. 10-cv-7435, – F. Supp. 2d –, 2014 WL 5350463, at *3 (N.D. Ill. Oct. 21, 2014). Because Castleman was little more than a sounding board for citing and reading sections of the historical medical literature, the district court prohibited him from testifying as to the accuracy of any conclusions in the medical literature. Id.

Last week, another federal court took a different approach to keeping Castleman in business. In ruling on defendant’s Rule 702 objections to Castleman, the court held:

“I agree with defendant that plaintiffs have made no showing that Castleman is qualified to explain the meaning and significance of medical literature. Further, there is no suggestion in Krik that Castleman is qualified as an expert in that respect. To the extent that plaintiffs want Castleman simply to read excerpts from medical articles, they do not explain how doing so could be helpful to the jury. Accordingly, I am granting defendant’s motion as it relates to Castleman’s discussion of the medical literature.

However, Castleman’s report also includes discussions of articles in trade journals and government publications, which, presumably, would not require medical expertise to understand or summarize.”

Suoja v. Owens-Illinois, Inc., 2015 U.S. Dist. LEXIS 63170, at *3 (W.D. Wis. May 14, 2015). Judge Barbara Crabb thus disallowed medical state-of-the-art testimony from Castleman, but permitted him to resume his sounding board role for non-medical and other historical documents referenced in his Rule 26 report.

The strange persistence of Barry Castleman, and the inconsistent holdings of dozens of opinions strewn across the asbestos litigation landscape, raise the question whether someone so biased, so entrenched in a litigation role, so lacking in the requisite expertise, should simply be expunged from the judicial process. Rather than struggling to find some benign, acceptable role for Barry Castleman, perhaps courts should just say no. “How Testifying Historians Are Like Lawn-Mowing Dogs” (May 24, 2010).

[1] See, e.g., Van Harville v. Johns-Manville Sales Corp., CV-78-642-H (S.D. Ala. 1979); In re Related Asbestos Cases, 543 F. Supp. 1142, 1149 (N.D. Cal. 1982) (rejecting Castleman’s bid to be called an “expert”) (holding that the court was “not persuaded that Mr. Castleman, as a layperson, possesses the expertise necessary to read complex, technical medical articles and discern which portions of the articles would best summarize the authors’ conclusions”); Kendrick v. Owens-Corning Fiberglas Corp., No. C-85-178-AAm (E.D. Wash. 1986); In re King County Asbestos Cases of Levinson, Friedman, Vhugen, Duggan, Bland and Horowitz, No. 81-2-08702-7 (Washington St. Super. Ct. for King Cty. 1987); Franze v. Celotex Corp., C.A. No. 84-1316 (W.D. Pa.); Dunn v. Hess Oil Virgin Islands Corp., C.A. No. 1987-238 (D.V.I. May 16, 1989) (excluding testimony of Barry Castleman); Rutkowski v. Occidental Chem. Corp., No. 83 C 2339, 1989 WL 32030, at *1 (N.D. Ill. Feb. 16, 1989) (“Castleman lacks the medical background and experience to evaluate and analyze the articles in order to identify which parts of the articles best summarize the authors’ conclusions.”); In re Guam Asbestos Litigation, 61 F.3d 910, 1995 WL 411876 (9th Cir. 1995) (Kozinski, J., dissenting) (“I would also reverse because Barry Castleman was not qualified to testify as an expert witness on the subject of medical state of the art or anything else; he appears to have read a number of articles for the sole purpose of turning himself into an expert witness. Reductio ad absurdum.”); McClure v. Owens Corning Fiberglas Corp., 188 Ill. 2d 102, 720 N.E.2d 242 (1999) (rejecting probativeness of Castleman’s testimony about company conduct).

Professor Bernstein’s Critique of Regulatory Daubert

May 15th, 2015

In the law of expert witness gatekeeping, the distinction between scientific claims made in support of litigation positions and claims made in support of regulations is fundamental. In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one”), aff’d 818 F.2d 145 (2d Cir. 1987), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988). Although scientists proffer opinions in both litigation and regulatory proceedings, their opinions are usually evaluated by substantially different standards. In federal litigation, civil and criminal, expert witnesses must be qualified and have an epistemic basis for their opinions, to satisfy the statutory requirements of Federal Rule of Evidence 702, and they must have reasonably relied upon otherwise inadmissible evidence (such as the multiple layers of hearsay involved in an epidemiologic study) under Rule 703. In regulatory proceedings, scientists are not subject to admissibility requirements, and the sufficiency requirements set by the Administrative Procedure Act are extremely low[1].

Some industry stakeholders are aggrieved by the low standards for scientific decision making in certain federal agencies, and they have urged that the more stringent litigation evidentiary rules be imported into regulatory proceedings. There are several potential problems with such reform proposals. First, the epistemic requirements of science generally, or of Rules 702 and 703 in particular, are not particularly stringent. Scientific method leads to plenty of false positive and false negative conclusions, which are subject to daily challenge and revision. The point is not that scientific inference is so strict, but that ordinary reasoning is so flawed, inexact, and careless. Second, the call for “regulatory Daubert” ignores the mandates of some federal agencies’ enabling statutes and guiding regulations, which call for precautionary judgments, and which allow agencies to decide issues on evidentiary displays that fall short of epistemic warrants for claims of knowledge.

Many lawyers who represent industry stakeholders have pressed for extension of Daubert-type gatekeeping to federal agency decision making. The arguments for constraining agency action find support in the over-extended claims that agencies and so-called public interest science advocates make in support of agency measures. Advocates and agency personnel seem to believe that worst-case scenarios and overstated safety claims are required as “bargaining” positions to achieve the most restrictive and possibly the most protective regulation that can be gotten from the administrative procedure, while trumping industry’s concerns about costs and feasibility. Still, extending Daubert to regulatory proceedings could have the untoward result of lowering the epistemic bar for both regulators and litigation fact finders.

In a recent article, Professor David Bernstein questions the expansion of Daubert into some regulatory realms. David E. Bernstein, “What to Do About Federal Agency Science: Some Doubts About Regulatory Daubert,” 22 Geo. Mason L. Rev. 549 (2015) [cited as Bernstein]. His arguments are an important counterweight to those who insist on changing agency rulemaking and actions at every turn. Because Bernstein has been an acolyte and defender of scientific scruples and reasoning in the courts, his arguments are worth taking seriously.

Bernstein reminds us that bad policy, as seen in regulatory agency rulemaking or decisions, is not always a scientific issue. In any event, regulatory actions, unlike jury decisions, are not, or at least should not be, “black boxes.” The agency’s rationale and reasoning are publicly stated, subject to criticism, and open to revision. Jury decisions are opaque, non-transparent, potentially unreasoned, not carefully articulated, and not subject to revision absent remarkable failures of proof.

One line of argument[2] pursued by Professor Bernstein follows from his observation that Daubert procedures are required to curtail litigation expert witness “adversarial bias.” Id. at 555. Bernstein traces adversarial bias to three sources:

(1) conscious bias;

(2) unconscious bias; and

(3) selection bias.

Id. Conscious bias stems from deliberate attempts by “hired guns” to deliver opinions that satisfy the lawyers who retained them. Unconscious biases are the more subtle, but no less potent, determinants of expert witness behavior, created by financial dependence upon, and allegiance to, the witness’s paymaster. Selection bias results from lawyers’ ability to choose expert witnesses to support their claims, regardless of whether those witnesses’ opinions are representative of the scientific community. Id.

Professor Bernstein’s taxonomy of bias is important, but incomplete. First, the biases he identifies operate with full force in regulatory settings. Although direct financial remuneration is usually not a significant motivation for a scientist to testify before an agency, or to submit a whitepaper, professional advancement and cause advocacy are often powerful incentives at work. These incentives for self-styled public interest zealots may well create more powerful distortions of scientific judgment than any monetary factors in private litigation settings. As for selection bias, lawyers are ethically responsible for screening their expert witnesses, and there can be little doubt that once expert witnesses are disclosed, their opinions will align with their sponsoring parties’ interests. This systematic bias, however, does not necessarily mean that both sides’ expert witnesses will be unrepresentative or unscientific. In the silicone gel breast implant litigation (MDL 926), Judge Pointer, the presiding judge, insisted that both sides’ witnesses were “too extreme,” and he was stunned when his court-appointed expert witnesses filed reports that vindicated the defendants’ expert witnesses’ positions[3]. The defendants had selected expert witnesses who analyzed the data on sound scientific principles; the plaintiffs had selected expert witnesses who overreached in their interpretation of the evidence. Furthermore, many scientific disputes that find their way into the courtroom will not have the public profile of silicone gel breast implants, and for some there may be no body of scientific community opinion from which lawyers could select “outliers,” even if they wished to do so.

Professor Bernstein’s offered taxonomy of bias is incomplete because it does not include the most important biases that jurors (and many judges) struggle to evaluate:

random errors;

systematic biases;

confounding; and

cognitive biases.

These errors and biases, along with their consequent fallacies of reasoning, apply with equal force to agency and litigation science. Bernstein does point out, however, an important institutional difference between jury or judge trials and agency review of, and decisions based upon, scientific evidence: agencies often have extensive in-house expertise. Although agency expertise may sometimes be blinded by its policy agenda, agency procedures usually afford the public and the scientific community the opportunity to understand what the agency decided, and why, and to respond critically when necessary. In the case of the Food and Drug Administration, agency decisions, whether pro- or contra-industry, are dissected and critiqued by the scientific and statistical community with great care and relish. Nothing of the same sort is possible in response to a jury verdict.

Professor Bernstein is not a science nihilist, and he would not have reviewing courts give a pass to whatever nonsense federal agencies espouse. He calls for enforcement of available statutory requirements that agency action be based upon the “best available science,” and for requiring agencies to explicitly separate and state their policy and scientific judgments. Bernstein also urges greater use of agency peer review, such as occasionally seen from the Institute of Medicine (soon to be the National Academy of Medicine), and the use of Daubert-like criteria for testimony at agency hearings. Bernstein at 554.

Proponents of regulatory Daubert should take Professor Bernstein’s essay to heart, with a daily dose of atorvastatin. Importing Rule 702 into agency proceedings may well undermine the rule’s import in litigation, civil and criminal, while achieving little in the regulatory arena. Consider the pending OSHA rulemaking for lowering the permissible exposure limit (PEL) for crystalline silica in the workplace. OSHA, along with some public health organizations, has tried to justify this rulemaking with many overwrought claims about the hazards of crystalline silica exposure at current levels. Clearly, some workers continue to work in unacceptably hazardous conditions, but the harms sustained by these workers can be tied to violations of the current PEL; they are hardly an argument for lowering that PEL. Contrary to OSHA’s parade of horribles, silicosis mortality in the United States has steadily declined over the last several decades. The following chart draws upon NIOSH and other federal governmental data:

[Chart: Silicosis deaths, crude and age-adjusted death rates, for U.S. residents age 15 and over, 1968–2007]

from Susan E. Dudley & Andrew P. Morriss, “Will the Occupational Safety and Health Administration’s Proposed Standards for Occupational Exposure to Respirable Crystalline Silica Reduce Workplace Risk?” 35 Risk Analysis (2015), in press, doi: 10.1111/risa.12341 (NIOSH reference number: 2012F03–01, based upon multiple cause-of-death data from National Center for Health Statistics, National Vital Statistics System, with population estimates from U.S. Census Bureau).

The decline in silicosis mortality is all the more remarkable because it occurred despite stimulated reporting from silicosis litigation and misclassification of coal workers’ pneumoconiosis in coal-mining states.

The decline in silicosis mortality may be helpfully compared with the steady rise in mortality from accidental falls among men and women 65 years old, or older:

[Chart: CDC MMWR death rates from unintentional falls among adults aged ≥ 65 years, by sex, United States, 2000–2013]

Yahtyng Sheu, Li-Hui Chen, and Holly Hedegaard, “QuickStats: Death Rates* from Unintentional Falls† Among Adults Aged ≥ 65 Years, by Sex — United States, 2000–2013,” 64 CDC MMWR 450 (May 1, 2015). Over the observation period, these death rates roughly doubled in both men and women.

Is there a problem with OSHA rulemaking? Of course. The agency has gone off on a regulatory frolic and detour, trying to justify an onerous new PEL without any commitment to enforcing its current silica PEL. OSHA has invoked the prospect of medical risks, many of which are unproven, speculative, and remote, such as lung cancer, autoimmune disease, and kidney disease. The agency, however, is awash with PhDs, and I fear that Professor Bernstein is correct that the distortions of the science are not likely to be corrected by applying Rule 702 to agency factfinding. Courts, faced with complex prediction models and with disputed medical claims made by agency and industry scientists, will do what they usually do: shrug and defer. And the blowback of “judicially approved” agency science in litigation contexts will be a cure worse than the disease. At bottom, the agency twisting of science is driven by policy goals and considerations, which require public debate and scrutiny, sound executive judgment, and careful legislative oversight and guidance.

[1] Even under the very low evidentiary and procedural hurdles, federal agencies still manage to outrun their headlights on occasion. See, e.g., Industrial Union Department v. American Petroleum Institute, 448 U.S. 607 (1980) (The Benzene Case); Gulf South Insulation v. U.S. Consumer Product Safety Comm’n, 701 F.2d 1137 (5th Cir. 1983); Corrosion Proof Fittings v. EPA, 947 F.2d 1201 (5th Cir. 1991).

[2] See also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27, 31 (2013); David E. Bernstein, “Expert Witnesses, Adversarial Bias, and the (Partial) Failure of the Daubert Revolution,” 93 Iowa L. Rev. 451, 456–57 (2008).

[3] Judge Pointer was less than enthusiastic about performing any gatekeeping role. Unlike most of today’s MDL judges, he was content to allow trial judges in the transferor districts to decide Rule 702 and other pre-trial issues. See Note, “District Judge Takes Issue With Circuit Courts’ Application of Gatekeeping Role,” 3 Federal Discovery News (Aug. 1997) (noting that Chief Judge Pointer had criticized appellate courts for requiring district judges to serve as gatekeepers of expert witness testimony).

Sophisticated Intermediary Defense Prevails in New York

May 9th, 2015

Several years ago, the New York Appellate Division, 4th Department, reversed summary judgment for defendants in the cases of two workers who alleged that they had developed silicosis from exposure to defendants’ silica at the Olean, New York, facility of their employer, Dexter Corporation (now Henkel Corporation). The trial court motions were based upon the “sophisticated intermediary” defense, but the Appellate Division reversed, holding that there was a genuine issue of material fact with respect to potential confusion between amorphous and crystalline silica, based upon statements in an affidavit of a plaintiffs’ expert witness, made without personal knowledge. See Pete Brush, “NY Court Revives Workers’ Silica Inhalation Suits” (March 24, 2009).

On remand, further discovery and an amplified evidentiary record led to new motions for summary judgment. In a February 26, 2015, order, the New York Supreme Court for Cattaraugus County granted the motions, noting that “the sophisticated intermediary doctrine was tailor-made” for the facts of the two cases. Rickicki v. Borden Chemical, et al., Index No. 53395, and Crowley v. C-E Minerals, Inc., et al., Index No. 61024, N.Y. Supreme Ct., Cattaraugus Cty. (Feb. 26, 2015) (Patrick H. NeMoyer, J.), Slip op. at 24. See also Casetext, “Summary Judgment Re-entered After Remand from the NY Appellate Division in Rickicki v. Borden” (April 2, 2015); HarrisMartin, “N.Y. Trial Court Awards Summary Judgment to Silica Defendant, Recognizes Sophisticated Intermediary Doctrine” (March 27, 2015).

The Rickicki case turned 25 years old in February 2015.