TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Amicus Curious – Gelbach’s Foray into Lipitor Litigation

August 25th, 2022

Professor Schauer’s discussion of statistical significance, covered in my last post,[1] is curious for its disclaimer that “there is no claim here that measures of statistical significance map easily onto measures of the burden of proof.” Having made the disclaimer, Schauer proceeds to fall into the transposition fallacy, which contradicts his disclaimer, and which, generally speaking, is not a good thing to do for a law professor eager to advance the understanding of “The Proof.”

Perhaps more curious than Schauer’s error is his citation support for his disclaimer.[2] The cited paper by Jonah B. Gelbach is one of several of Gelbach’s papers that advance the claim that the p-value does indeed map onto posterior probability and the burden of proof. Gelbach’s claim has also been the centerpiece of his role as an advocate in support of plaintiffs in the Lipitor (atorvastatin) multi-district litigation (MDL) over claims that ingestion of atorvastatin causes diabetes mellitus.

Gelbach’s intervention as plaintiffs’ amicus is peculiar on many fronts. At the time of the Lipitor litigation, Sonal Singh was an epidemiologist and Assistant Professor of Medicine at the Johns Hopkins University. The MDL trial court initially held that Singh’s proffered testimony was inadmissible because of his failure to consider daily dose.[3] In a second attempt, Singh offered an opinion for the 10 mg daily dose of atorvastatin, based largely upon the results of a clinical trial known as ASCOT-LLA.[4]

The ASCOT-LLA trial randomized 19,342 participants with hypertension and at least three other cardiovascular risk factors to two different anti-hypertensive medications. A subgroup with total cholesterol levels less than or equal to 6.5 mmol/L was randomized to either daily 10 mg atorvastatin or placebo. The investigators planned five years of follow-up, but they stopped after 3.3 years because of clear benefit on the primary composite end point of non-fatal myocardial infarction and fatal coronary heart disease. At the time of stopping, there were 100 events of the primary pre-specified outcome in the atorvastatin group, compared with 154 events in the placebo group (hazard ratio 0.64 [95% CI 0.50 – 0.83], p = 0.0005).

The atorvastatin component of ASCOT-LLA had, in addition to its primary pre-specified outcome, seven secondary end points, and seven tertiary end points.  The emergence of diabetes mellitus in this trial population, which clearly was at high risk of developing diabetes, was one of the tertiary end points. Primary, secondary, and tertiary end points were reported in ASCOT-LLA without adjustment for the obvious multiple comparisons. In the treatment group, 3.0% developed diabetes over the course of the trial, whereas 2.6% developed diabetes in the placebo group. The unadjusted hazard ratio was 1.15 (0.91 – 1.44), p = 0.2493.[5] Given the 15 trial end points, an adjusted p-value for this particular hazard ratio, for diabetes, might well exceed 0.5, and even approach 1.0.
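The possible size of that adjustment can be sketched with a Bonferroni correction, the crudest of the standard multiplicity adjustments (a deliberately conservative illustration; the trialists themselves made no adjustment at all):

```python
# Illustrative Bonferroni adjustment for the ASCOT-LLA diabetes p-value.
# The trial tested 15 end points (1 primary, 7 secondary, 7 tertiary)
# and reported each p-value without any multiplicity adjustment.
p_unadjusted = 0.2493   # reported two-sided p for the diabetes hazard ratio
n_endpoints = 15        # number of end points tested in the trial

p_bonferroni = min(1.0, p_unadjusted * n_endpoints)
print(p_bonferroni)     # 0.2493 * 15 = 3.7395, capped at 1.0
```

Because some end points overlap, Bonferroni overcorrects here; but even a milder adjustment would push the diabetes p-value far above 0.5.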

On this record, Dr. Singh honestly acknowledged that statistical significance was important, and that the diabetes finding in ASCOT-LLA might have been the result of low statistical power or of no association at all. Based upon the trial data alone, he testified that “one can neither confirm nor deny that atorvastatin 10 mg is associated with significantly increased risk of type 2 diabetes.”[6] The trial court excluded Dr. Singh’s 10 mg/day causal opinion, but admitted his 80 mg/day opinion. On appeal, the Fourth Circuit affirmed the MDL district court’s rulings.[7]

Jonah Gelbach is a professor of law at the University of California at Berkeley. He attended Yale Law School, and received his doctorate in economics from MIT.

Professor Gelbach entered the Lipitor fray to present a single issue: whether statistical significance at conventionally demanding levels such as 5 percent is an appropriate basis for excluding expert testimony based on statistical evidence from a single study that did not achieve statistical significance.

Professor Gelbach is no stranger to antic proposals.[8] As amicus curious in the Lipitor litigation, Gelbach asserts that plaintiffs’ expert witness, Dr. Singh, was wrong in his testimony about not being able to confirm the ASCOT-LLA association because he, Gelbach, could confirm the association.[9] Ultimately, the Fourth Circuit did not discuss Gelbach’s contentions, which is not surprising considering that the asserted arguments and alleged factual considerations were not only dehors the record, but in contradiction of the record.

Gelbach’s curious claim is that any time a risk ratio, for an exposure and an outcome of interest, is greater than 1.0, with a p-value < 0.5,[10] the evidence should be not only admissible, but sufficient to support a conclusion of causation. Gelbach states his claim in the context of discussing a single randomized controlled trial (ASCOT-LLA), but his broad pronouncements are carelessly framed such that others may take them to apply to a single observational study, with its greater threats to internal validity.

Contra Kumho Tire

To get to his conclusion, Gelbach attempts to remove the constraints of traditional standards of significance probability. Kumho Tire teaches that expert witnesses must “employ[] in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”[11] For Gelbach, this “eminently reasonable admonition” does not impose any constraints on statistical inference in the courtroom. Statistical significance at traditional levels (p < 0.05) is for elitist scholarly work, not for the “practical” rent-seeking work of the tort bar. According to Gelbach, the inflation of the significance level ten-fold to p < 0.5 is merely a matter of “weight” and not admissibility of any challenged opinion testimony.

Likelihood Ratios and Posterior Probabilities

Gelbach maintains that any evidence with a likelihood ratio greater than one (LR > 1) is relevant, and should be admissible under Federal Rule of Evidence 401.[12] This argument ignores the other operative Federal Rules of Evidence, namely Rules 702 and 703, which impose additional criteria of admissibility for expert witness opinion testimony.

With respect to variance and random error, Gelbach tells us that any evidence that generates an LR > 1 should be admitted when “the statistical evidence is statistically significant below the 50 percent level, which will be true when the p-value is less than 0.5.”[13]

At times, Gelbach seems to be discussing the admissibility of the ASCOT-LLA study itself, and not the proffered opinion testimony of Dr. Singh. The study itself would not be admissible, although it is clearly the sort of hearsay an expert witness in the field may consider. If Dr. Singh had reframed and recalculated the statistical comparisons, then the Rule 703 requirement of “reasonable reliance” by scientists in the field of interest might not have been satisfied.

Gelbach also generates a posterior probability (0.77), which is based upon his calculations from data in the ASCOT-LLA trial, and not the posterior probability of Dr. Singh’s opinion. The posterior probability, as calculated, is problematic on many fronts.

Gelbach does not present his calculations (for the sake of brevity, he says), but he tells us that the ASCOT-LLA data yield a likelihood ratio of roughly 1.9, and a p-value of 0.126.[14] What the clinical trialists reported was a hazard ratio of 1.15, which is a weak association on most researchers’ scales, with a two-sided p-value of 0.25, five times higher than the usual 5 percent. Gelbach does not explain how or why his calculated p-value for the likelihood ratio is roughly half the unadjusted, two-sided p-value for the tertiary outcome from ASCOT-LLA.
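One arithmetic relationship consistent with those two numbers, offered here only as a conjecture since Gelbach does not show his work, is the standard conversion from a two-sided to a one-sided p-value, which for a symmetric test statistic simply halves the probability:

```python
# For a symmetric test statistic, the one-sided p-value counts departures
# in only one direction, and so is half the two-sided p-value.
p_two_sided = 0.2493          # reported for the diabetes hazard ratio
p_one_sided = p_two_sided / 2
print(p_one_sided)            # about 0.125, near Gelbach's reported 0.126
```

Whether this is in fact what Gelbach did cannot be determined from his brief.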

As noted, the reported diabetes hazard ratio of 1.15 was a tertiary outcome for the ASCOT trial, one of 15 calculated by the trialists, with p-values unadjusted for multiple comparisons.  The failure to adjust is perhaps excusable in that some (but certainly not all) of the outcome variables are overlapping or correlated. A sophisticated reader would not be misled; only when someone like Gelbach attempts to manufacture an inflated posterior probability without accounting for the gross underestimate in variance is there an insult to statistical science. Gelbach’s recalculated p-value for his LR, if adjusted for the multiplicity of comparisons in this trial, would likely exceed 0.5, rendering all his arguments nugatory.

Using the statistics as presented by the published ASCOT-LLA trial to generate a posterior probability also ignores the potential biases (systematic errors) in data collection, the unadjusted hazard ratios, the potential for departures from random sampling, errors in administering the participant recruiting and inclusion process, and other errors in measurements, data collection, data cleaning, and reporting.

Gelbach correctly notes that there is nothing methodologically inappropriate in advocating likelihood ratios, but he is less than forthcoming in explaining that such ratios translate into a posterior probability only if he posits a prior probability of 0.5.[15] His pretense to having simply stated “mathematical facts” unravels when we consider his extreme, unrealistic, and unscientific assumptions.
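The translation from likelihood ratio to posterior probability is a one-line application of Bayes’ theorem in odds form, and writing it out makes the hidden prior explicit. A minimal sketch (the skeptical 0.01 prior below is purely illustrative, not anyone’s actual figure):

```python
# Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio.
# With an even-odds prior of 0.5, prior odds are exactly 1, and the
# posterior probability collapses to LR / (1 + LR).
def posterior(prior, lr):
    """Posterior probability from a prior probability and a likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1.0 + post_odds)

lr = 1.9                               # Gelbach's reported likelihood ratio
print(round(posterior(0.5, lr), 3))    # even-odds prior: about 0.655
print(round(posterior(0.01, lr), 3))   # skeptical prior: about 0.019
```

The same likelihood ratio produces wildly different posteriors depending on the assumed prior, which is precisely why the choice of a 0.5 prior does all the work.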

The Problematic Prior

Gelbach glibly assumes that the starting point, the prior probability, for his analysis of Dr. Singh’s opinion is 50%. This is an old and common mistake,[16] long since debunked.[17] Gelbach’s assumption is part of an old controversy, which surfaced in early cases concerning disputed paternity. The assumption, however, is wrong legally and philosophically.

The law simply does not hand out 0.5 prior probability to both parties at the beginning of a trial. As Professor Jaffee noted almost 35 years ago:

“In the world of Anglo-American jurisprudence, every defendant, civil and criminal, is presumed not liable. So, every claim (civil or criminal) starts at ground zero (with no legal probability) and depends entirely upon proofs actually adduced.”[18]

Gelbach assumes that assigning “equal prior probability” to two adverse parties is fair, because the fact-finder would not start hearing evidence with any notion of which party’s contentions are correct. The 0.5/0.5 starting point, however, is neither fair nor is it the law.[19] The even odds prior is also not good science.

The defense is entitled to a presumption that it is not liable, and the plaintiff must start at zero.  Bayesians understand that this is the death knell of their beautiful model.  If the prior probability is zero, then Bayes’ Theorem tells us mathematically that no evidence, no matter how large a likelihood ratio, can move the prior probability of zero towards one. Bayes’ theorem may be a correct statement about inverse probabilities, but still be an inadequate or inaccurate model for how factfinders do, or should, reason in determining the ultimate facts of a case.
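The death-knell point is a direct consequence of Bayes’ theorem written out in full:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E \mid H)\, P(H) + P(E \mid \neg H)\, P(\neg H)}
```

With $P(H) = 0$, the numerator vanishes, so $P(H \mid E) = 0$ for every possible item of evidence $E$, no matter how large its likelihood ratio.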

We can see how unrealistic and unfair Gelbach’s implied prior probability is if we visualize the proof process as a football field. To win, plaintiffs do not need to score a touchdown; they need only cross the mid-field 50-yard line. Rather than making plaintiffs start at their own goal line, however, Gelbach would put them right on the 50-yard line. Since one toe over the mid-field line is victory, the plaintiff is spotted 99.99+% of its burden of presenting evidence to build up a 50% probability. Plaintiffs are allowed to scoot from the zero-yard line right up to the brink of success, where even the slightest breeze might blow them into winning cases. Somehow, in this model, plaintiffs no longer have to present evidence to traverse the first half of the field.

The even odds starting point is completely unrealistic in terms of the events upon which the parties are wagering. The ASCOT-LLA study might have shown a protective association between atorvastatin and diabetes, or it might have shown no association at all, or it might have shown a larger hazard ratio than measured in this particular sample. Recall that the confidence interval for the hazard ratio for diabetes ran from 0.91 to 1.44. In other words, parameters from 0.91 (a protective association), to 1.0 (no association), to 1.44 (a harmful association) were all reasonably compatible with the observed statistic, based upon this one study’s data. The potential outcomes are not binary, which makes the even odds starting point inappropriate.[20]


[1] “Schauer’s Long Footnote on Statistical Significance” (Aug. 21, 2022).

[2] Frederick Schauer, The Proof: Uses of Evidence in Law, Politics, and Everything Else 54-55 (2022) (citing Michelle M. Burtis, Jonah B. Gelbach, and Bruce H. Kobayashi, “Error Costs, Legal Standards of Proof, and Statistical Significance,” 25 Supreme Court Economic Rev. 1 (2017)).

[3] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., MDL No. 2:14–mn–02502–RMG, 2015 WL 6941132, at *1  (D.S.C. Oct. 22, 2015).

[4] Peter S. Sever, et al., “Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial,” 361 Lancet 1149 (2003). [cited here as ASCOT-LLA]

[5] ASCOT-LLA at 1153 & Table 3.

[6] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 174 F.Supp. 3d 911, 921 (D.S.C. 2016) (quoting Dr. Singh’s testimony).

[7] In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 892 F.3d 624, 638-39 (2018) (affirming MDL trial court’s exclusion in part of Dr. Singh).

[8] See “Expert Witness Mining – Antic Proposals for Reform” (Nov. 4, 2014).

[9] Brief for Amicus Curiae Jonah B. Gelbach in Support of Plaintiffs-Appellants, In re Lipitor Mktg., Sales Practices & Prods. Liab. Litig., 2017 WL 1628475 (April 28, 2017). [Cited as Gelbach]

[10] Gelbach at *2.

[11] Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

[12] Gelbach at *5.

[13] Gelbach at *2, *6.

[14] Gelbach at *15.

[15] Gelbach at *19-20.

[16] See Richard A. Posner, “An Economic Approach to the Law of Evidence,” 51 Stanford L. Rev. 1477, 1514 (1999) (asserting that the “unbiased fact-finder” should start hearing a case with even odds; “[I]deally we want the trier of fact to work from prior odds of 1 to 1 that the plaintiff or prosecutor has a meritorious case. A substantial departure from this position, in either direction, marks the trier of fact as biased.”).

[17] See, e.g., Richard D. Friedman, “A Presumption of Innocence, Not of Even Odds,” 52 Stan. L. Rev. 874 (2000). [Friedman]

[18] Leonard R. Jaffee, “Prior Probability – A Black Hole in the Mathematician’s View of the Sufficiency and Weight of Evidence,” 9 Cardozo L. Rev. 967, 986 (1988).

[19] Id. at p.994 & n.35.

[20] Friedman at 877.

Schauer’s Long Footnote on Statistical Significance

August 21st, 2022

One of the reasons that, in 2016, the American Statistical Association (ASA) issued, for the first time in its history, a consensus statement on p-values, was the persistent and sometimes deliberate misstatements and misrepresentations about the meaning of the p-value. Indeed, of the six principles articulated by the ASA, several were little more than definitional, designed to clear away misunderstandings.  Notably, “Principle Two” addresses one persistent misunderstanding and states:

“P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself.”[1]

The ASA consensus statement followed on the heels of an important published article, written by seven important authors in the fields of statistics and epidemiology.[2] One statistician,[3] who frequently shows up as an expert witness for multi-district litigation plaintiffs, described the article’s authors as the “A-Team” of statistics. In any event, the seven prominent thought leaders identified common statistical misunderstandings, including the belief that:

“2. The P value for the null hypothesis is the probability that chance alone produced the observed association; for example, if the P value for the null hypothesis is 0.08, there is an 8% probability that chance alone produced the association. No![4]

This is all basic statistics.
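The conditional direction that the ASA’s Principle Two insists upon can be made concrete with a toy computation: a p-value is calculated assuming the null hypothesis, so it answers “how probable are data at least this extreme, given the null,” not “how probable is the null, given the data.” A sketch for ten flips of a coin assumed fair (the eight-heads observation is hypothetical):

```python
from math import comb

# p-value for observing 8 heads in 10 flips of a coin *assumed* fair:
# the probability, computed under the null, of a result at least this
# extreme in either direction (<= 2 or >= 8 heads).
n = 10
extreme = [k for k in range(n + 1) if k <= 2 or k >= 8]
p_value = sum(comb(n, k) for k in extreme) / 2 ** n
print(p_value)  # 0.109375 -- a statement about the data given the null,
                # not about the probability that the coin is fair
```

Nothing in the calculation assigns any probability to the hypothesis that the coin is fair; the fairness of the coin is assumed throughout.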

Frederick Schauer is the David and Mary Harrison Distinguished Professor of Law at the University of Virginia. Schauer has contributed prolifically to legal scholarship, and his publications are often well-written and thoughtful analyses. Schauer’s recent book, The Proof: Uses of Evidence in Law, Politics, and Everything Else, published by the Harvard University Press, is a contribution to the literature of “legal epistemology,” and the foundations of evidence that lie beneath many of our everyday and courtroom approaches to resolving disputes.[5] Schauer’s book might be a useful addition to an undergraduate’s reading list for a course in practical epistemology, or for a law school course on evidence. The language of The Proof is clear and lively, but at times wanders into objectionable and biased political correctness. For example, Schauer channels Naomi Oreskes and her critique of manufacturing industry in his own discussion of “manufactured evidence,”[6] but studiously avoids any number of examples of explicit manufacturing of fraudulent evidence in litigation by the lawsuit industry.[7] Perhaps the most serious omission in this book on evidence is its failure to discuss the relative quality and hierarchy of evidence in science, medicine, and in policy. Readers will not find any mention of the methodology of systematic reviews or meta-analyses in Schauer’s work.

At the end of his chapter on burdens of proof, Schauer adds “A Long Footnote on Statistical Significance,” in which he expresses surprise that the subject of statistical significance is controversial. Schauer might well have brushed up on the statistical concepts he wanted to discuss.

Schauer’s treatment of statistical significance is both distinctly unbalanced and misstated. In an endnote,[8] Schauer cites some of the controversialists who have criticized significance tests, but none of the statisticians who have defended their use.[9]

As for conceptual accuracy, after giving a serviceable definition of the p-value, Schauer immediately goes astray:

“And this likelihood is conventionally described in terms of a p-value, where the p-value is the probability that positive results—rejection of the “null hypothesis” that there is no connection between the examined variables—were produced by chance.”[10]

And again, for emphasis, Schauer tells us:

“A p-value of greater than .05 – a greater than 5 percent probability that the same results would have been the result of chance – has been understood to mean that the results are not statistically significant.”[11]

And then once more for emphasis, in the context of an emotionally laden hypothetical about an experimental drug that “cures” a dread, incurable disease, with p = 0.20, Schauer tells us that he suspects most people would want to take the medication:

“recognizing that an 80 percent likelihood that the rejection of ineffectiveness was still good enough, at least if there were no other alternatives.”

Schauer wants to connect his discussion of statistical significance to degrees or varying strengths of evidence, but his discursion into statistical significance largely conflates precision with strength. Evidence can be statistically robust but not be very strong. If we imagine a very large randomized clinical trial that found that a medication lowered systolic blood pressure by 1 mm of mercury, p < 0.05, we would not consider that finding to constitute strong evidence for therapeutic benefit. If the observation of lowering blood pressure by 1 mm came from an observational study, p < 0.05, the finding might not even qualify as evidence in the views of sophisticated cardiovascular physicians and researchers.
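The precision-versus-strength point can be made numerical with a back-of-the-envelope two-sample z-test; all of the inputs below (the standard deviation, the sample size) are assumptions for illustration:

```python
from math import sqrt, erfc

# A clinically trivial effect can be statistically robust with enough data.
# Hypothetical inputs: a 1 mm Hg mean reduction in systolic blood pressure,
# an assumed standard deviation of 15 mm Hg, and 20,000 patients per arm.
effect, sd, n = 1.0, 15.0, 20_000
se = sd * sqrt(2.0 / n)            # standard error of the difference in means
z = effect / se
p_two_sided = erfc(z / sqrt(2.0))  # two-sided p-value under a normal model
print(z, p_two_sided)              # z is about 6.7; p is far below 0.05
```

The p-value is minuscule, yet no one would call a 1 mm Hg reduction strong evidence of therapeutic benefit; the small p reflects precision, not strength.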

Earlier in the chapter, Schauer points to instances in which substantial evidence for a conclusion is downplayed because it is not “conclusive,” or “definitive.” He is obviously keen to emphasize that evidence that is not “conclusive” may still be useful in some circumstances. In this context, Schauer yet again misstates the meaning of significance probability, when he tells us that:

“[j]ust as inconclusive or even weak evidence may still be evidence, and may still be useful evidence for some purposes, so too might conclusions – rejections of the null hypothesis – that are more than 5 percent likely to have been produced by chance still be valuable, depending on what follows from those conclusions.”[12]

And while Schauer is right that weak evidence may still be evidence, he seems loath to admit that weak evidence may be pretty crummy support for a conclusion. Take, for instance, a fair coin. We have an expected value on ten flips of five heads and five tails. We flip the coin ten times, but we observe six heads and four tails. Do we now have “evidence” that the expected value and the expected outcome are wrong? Not really. The probability of observing the expected outcome on the binomial model that most people would endorse for the thought experiment is 24.6%. The probability of not observing the expected value in ten flips is three times greater. If we look at an epidemiologic study, with a sizable number of participants, the “expected value” of 1.0, embodied in the null hypothesis, is an outcome that we would rarely expect to see, even if the null hypothesis is correct. Schauer seems to have missed this basic lesson of probability and statistics.
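The coin-flip arithmetic in the passage checks out under the binomial model:

```python
from math import comb

# Probability of observing exactly the expected outcome (5 heads, 5 tails)
# in ten flips of a fair coin, under the binomial model.
p_expected = comb(10, 5) / 2 ** 10
print(round(p_expected, 3))      # 0.246 -- the "expected" outcome
print(round(1 - p_expected, 3))  # 0.754 -- about three times more likely
```

Even when the null model is exactly true, the single most probable outcome occurs less than a quarter of the time.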

Perhaps even more disturbing is that Schauer fails to distinguish the other determinants of study validity and the warrants for inferring a conclusion at any level of certainty. There is a distinct danger that his comments about p-values will be taken to apply to various study designs, descriptive, observational, and experimental. And there is a further danger that incorrect definitions of the p-value and statistical significance probabilities will be used to misrepresent p-values as relating to posterior probabilities. Surely, a distinguished professor of law, at a top law school, in a book published by a prestigious publisher (Belknap Press), can do better. The message for legal practitioners is clear. If you need to define or discuss statistical concepts in a brief, seek out a good textbook on statistics. Do not rely upon other lawyers, even distinguished law professors, or judges, for accurate working definitions.


[1] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129, 131 (2016).

[2] Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 European J. Epidemiol. 337 (2016).[cited as “Seven Sachems”]

[3] Martin T. Wells.

[4] Seven Sachems at 340 (emphasis added).

[5] Frederick Schauer, The Proof: Uses of Evidence in Law, Politics, and Everything Else (2022). [Schauer] One nit: Schauer cites a paper by A. Philip Dawid, “Statistical Risk,” 194 Synthese 3445 (2017). The title of the paper is “On individual risk.”

[6] Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change (2010).

[7] See, e.g., In re Silica Prods. Liab. Litig., 398 F.Supp. 2d 563 (S.D.Tex. 2005); Transcript of Daubert Hearing at 23 (Feb. 17, 2005) (“great red flags of fraud”).

[8] See Schauer endnote 44 to Chapter 3, “The Burden of Proof,” citing Valentin Amrhein, Sander Greenland, and Blake McShane, “Scientists Rise Up against Statistical Significance,” www.nature.com (March 20, 2019), which in turn commented upon Blakeley B. McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett, “Abandon Statistical Significance,” 73 American Statistician 235 (2019).

[9] Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).

[10] Schauer at 55. To be sure, Schauer, in endnote 43 to Chapter 3, disclaims any identification of p-values or measures of statistical significance with posterior probabilities or probabilistic measures of the burden of proof. Nonetheless, in the text, he proceeds to do exactly what he disclaimed in the endnote.

[11] Schauer at 55.

[12] Schauer at 56.

Rule 702 is Liberal, Not Libertine; Epistemic, Not Mechanical

October 4th, 2021

One common criticism of expert witness gatekeeping after the Supreme Court’s Daubert decision has been that the decision contravenes the claimed “liberal thrust” of the Federal Rules of Evidence. The criticism has been repeated so often as to become a cliché, but its frequent repetition by lawyers and law professors hardly makes it true. The criticism fails to do justice to the range of interpretations of “liberal” in the English language, the context of expert witness common law, and the language of Rule 702, both before and after the Supreme Court’s Daubert decision.

The first problem with the criticism is that the word “liberal,” or the phrase “liberal thrust,” does not appear in the Federal Rules of Evidence. The drafters of the Rules did, however, set out the underlying purpose of the federal codification of common law evidence in Rule 102, with some care:

“These rules should be construed so as to administer every proceeding fairly, eliminate unjustifiable expense and delay, and promote the development of evidence law, to the end of ascertaining the truth and securing a just determination.”

Nothing could promote ascertaining truth and achieving just determinations more than eliminating weak and invalid scientific inference in the form of expert witness opinion testimony. Barring speculative, unsubstantiated, and invalid opinion testimony before trial certainly has the tendency to eliminate full trials, with their expense and delay. And few people would claim unfairness in deleting invalid opinions from litigation. If there is any “liberal thrust” in the purpose of the Federal Rules of Evidence, it serves to advance the truth-finding function of trials.

And yet some legal commentators go so far as to claim that Daubert was wrongly decided because it offends the “liberal thrust” of the federal rules.[1] Of course, it is true that the Supreme Court spoke of the basic standard of relevance in the Federal Rules as being a “liberal” standard.[2] And in holding that Rule 702 did not incorporate the so-called Frye general acceptance rule,[3] the Daubert Court observed that the drafting history of Rule 702 failed to mention Frye, just before invoking the liberal-thrust cliché:

“[A] rigid ‘general acceptance’ requirement would be at odds with the ‘liberal thrust’ of the Federal Rules and their ‘general approach of relaxing the traditional barriers to ‘opinion testimony’.”[4]

The court went on to cite one district court judge famously hostile to expert witness gatekeeping,[5] and to the “permissive backdrop” of the Rules, in holding that the Rules did not incorporate Frye,[6] which it characterized as an “austere” standard.[7]

While the Frye standard may have been “austere,” it was also widely criticized. Moreover, the standard was largely applied to scientific devices, not to the scientific process of causal inference. The Frye case itself addressed the admissibility of a systolic blood pressure deception test, an early attempt by William Marston to design a lasso of truth. When courts distinguished the Frye cases on grounds that they involved devices, not causal inferences, they left no meaningful standard in place.

As a procedural matter, the Frye general acceptance standard made little sense in the context of causal opinions. If the opinion itself was generally accepted, then of course it would have to be admitted. Indeed, if the proponent sought judicial notice of the opinion, a trial court would likely have to admit the opinion, and then bar any contrary opinion as not generally accepted.

To be sure, before the Daubert decision, defense counsel attempted to invoke the Frye standard in challenges to the underlying methodology used by expert witnesses to draw causal inferences. There were, however, few such applications. Although not exactly how Frye operated, the Supreme Court might have imagined that the Frye standard required all expert witness opinion testimony to be based on “sufficiently established and accepted scientific methods.” The actual language of the 1923 Frye case provides some ambivalent support with its twilight zone standard:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”[8]

There was always an interpretative difficulty in how exactly a trial court was supposed to poll the world’s scientific community to ascertain “general acceptance.” Moreover, the rule actually before the Daubert Court, Rule 702, spoke of “knowledge.” At best, “general acceptance,” whether of methods or of conclusions, was merely a proxy, and often a very inaccurate one, for an epistemic basis for disputed claims or conclusions at issue in litigation.

In cases involving causal claims before Daubert, expert witness opinions received scant attention from trial judges as long as the proffered expert witness met the very minimal standard of expertise needed to qualify to give an opinion. Furthermore, Rule 705 relieved expert witnesses of having to provide any bases for their opinions on direct examination. The upshot was that the standard for admissibility was authoritarian, not epistemic. If the proffered witness had a reasonable pretense to expertise, then the proffering party could parade him or her as an “authority,” on whose opinion the jury could choose to rely in its fact finding. Given this context, any epistemic standard would be “liberal” in freeing the jury or fact finder from the yoke of authoritarian expert witness ipse dixit.

And what exactly is the “liberal” in all this thrusting over Rule 702? Most dictionaries report that the word “liberal” traces back to the Latin liber, meaning “free.” The Latin word is thus the root of both liberty and libertine. One of the major, early uses of the adjective liberal was in the phrase “liberal arts,” meant to denote courses of study freed from authority, dogmas, and religious doctrine. The primary definition provided by the Oxford English Dictionary emphasizes this specific meaning:

“1. Originally, the distinctive epithet of those ‘arts’ or ‘sciences’ (see art 7) that were considered ‘worthy of a free man’; opposed to servile or mechanical.  … . Freq. in liberal arts.”

The Frye general acceptance standard was servile in the sense of its deference to others who were the acceptors, and it was mechanical in its reducing a rule that called for “knowledge” into a formula for nose-counting among the entire field in which an expert witness was testifying. In this light, the Daubert Court’s decision is obvious.

To be sure, the OED provides other subordinate or secondary definitions for “liberal,” such as 3.c:

“Of construction or interpretation: Inclining to laxity or indulgence; not rigorous.”

Perhaps this definition would suggest that a liberal interpretation of Rule 702 would lead to rejection of the Frye standard because Frye rigidly determined admissibility by a proxy that was not necessarily tied to the rule’s requirement of knowledge. Of course, knowledge or epistemic criteria in the Rule imply a different sort of rigor, one that is not servile or mechanical.

The epistemic criterion built into the original Rule 702, and carried forward in every amendment, accords with the secondary meanings given by the OED:

“4. a. Free from narrow prejudice; open-minded, candid.

b. esp. Free from bigotry or unreasonable prejudice in favour of traditional opinions or established institutions; open to the reception of new ideas or proposals of reform.”

The Daubert case represented a step in the direction of the classically liberal goal of advancing the truth-finding function of trials. The counter-revolution of let it all in, whether under the guise of finding challenges to expert witness opinion as going to “weight not admissibility,” or of inventing “presumptions of admissibility,” should be seen for what it is: a retrograde and illiberal movement in jurisprudential progress.


[1] See, e.g., Michael H. Graham, “The Expert Witness, Predicament: Determining ‘Reliable’ Under the Gatekeeping Test of Daubert, Kumho, and Proposed Amended Rule 702 of the Federal Rules of Evidence,” 54 U. Miami L. Rev. 317, 321 (2000) (“Daubert is a very incomplete case, if not a very bad decision. It did not, in any way, accomplish what it was meant to, i.e., encourage more liberal admissibility of expert witness evidence.”)

[2] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 587 (1993).

[3] Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).

[4] Id. at 588, citing Beech Aircraft Corp. v. Rainey, 488 U.S. 153, 169 (1988) (citing Rules 701 to 705); see also Edward J. Imwinkelried, “A Brief Defense of the Supreme Court’s Approach to the Interpretation of the Federal Rules of Evidence,” Indiana L. Rev. 267, 294 (1993) (writing of the “liberal structural design” of the Federal Rules).

[5] Jack B. Weinstein, “Rule 702 of the Federal Rules of Evidence is Sound; It Should Not Be Amended,” 138 F. R. D. 631 (1991) (“The Rules were designed to depend primarily upon lawyer-adversaries and sensible triers of fact to evaluate conflicts”).

[6] Daubert at 589.

[7] Id.

[8] Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (1923).

Expert Witness Reports Are Not Admissible

August 23rd, 2021

The tradition of antic proposals to change the law of evidence is old and venerable in the common law. In the early 19th century, Jeremy Bentham deviled the English bench and bar with sweeping proposals to place evidence law on a rational foundation. Bentham’s contributions to jurisprudence, like his utilitarianism, often ignored the realities of human experience and decision making. Although Bentham contributed little to the actual workings of courtroom law and procedure, he gave rise to a tradition of antic proposals that have long entertained law professors and philosophers.[1]

Bentham seemingly abhorred tradition, but his writings nonetheless spawned a tradition of antic proposals in the law. Expert witness testimony was uncommon in the early 19th century, but today hardly a case is tried without expert witnesses. We should not be surprised, therefore, by the rise of antic proposals for reforming the evidence law of expert witness opinion testimony.[2]

A key aspect of the Bentham tradition is to ignore the actual experience and conduct of human affairs. And so now we have a proposal to shorten trials by foregoing direct examination of expert witnesses and admitting the expert witnesses’ reports into evidence.[3] The argument contends that because the Rule 26 report requires disclosure of all the expert witness’s substantive opinions and all the bases for those opinions, the witness’s viva voce testimony is merely a recital of the report. The argument proceeds that reports can be helpful in understanding complex issues and in moving trials along more efficiently.

As much as all lawyers want to promote “understanding,” and make trials more efficient, the argument fails on multiple levels. First, judges can read the expert witness reports, in bench or in jury trials, to help themselves prepare for trial, without admitting the reports into evidence. Second, the rules of evidence, which are binding upon trial judges in both bench and jury trials, require that the testimony be helpful, not the reports. Third, the argument ignores that for the last several years, the federal rules have allowed lawyers to draft reports to a large extent, without any discovery into whose phraseology appears in a final report.

Even before the federal rules created an immunity to discovery into who drafted specific language of an expert report, it was not uncommon to find that there were at least some parts of an expert witness’s report that did not accurately summarize the witness’s views by the time he or she gave testimony. Often the process of discovery caused expert witnesses to modify their reports, whether through skillful inquiry at deposition, or through the submission of adversarial reports, or through changes in the evidentiary display between drafting the report and testifying at trial.

In other words, expert witnesses’ testimony rarely comes out exactly as it appears in words in Rule 26 reports. Furthermore, reports may be full of argumentative characterization of facts, which fail to survive routine objections and cross-examination. What is represented as a fact or a factual predicate of an opinion may never be cited in testimony because the expert’s representation was always false or hyperbolic. The expert witnesses are typically not percipient witnesses, and any alleged fact would not be admissible, under Rule 703, simply because it appeared in an expert witness’s report. Indeed, Rule 703 makes clear that expert witnesses can rely upon inadmissible hearsay as long as experts in their fields reasonably would do so in the ordinary course of their professions.

Voir dire of charts, graphs, and underlying data may result in large portions of an expert report becoming inadmissible. Not every objection will be submitted as a motion in limine; and not every objection rises to the level of a Rule 702 or 703 pre-trial motion to exclude the expert witness. Foundational lapses or gaps may render some parts of reports inadmissible.

The argument for admitting reports as evidence reflects a trend toward blowsy, frowsy jurisprudence. Judges should be listening carefully to testimony, both direct and cross, from expert witnesses. They will have transcripts at their disposal. Although the question and answer format of direct examination may take some time, it ensures the orderly presentation of admissible testimony.

Given that testimony often turns out differently from the unqualified statements in a pre-trial report, the proposed admissibility of reports will create evidentiary chaos whenever there is a disparity between report and testimony, or a failure to elicit as testimony something that is stated in the report. Courts and litigants need an unequivocal record of what is in evidence when moving to strike testimony, or for directed verdicts, new trials, or judgments notwithstanding the verdict.

The proposed abridgement of expert witness direct examinations would allow further gaming by not calling an expert witness once the witness’s report has been filed. Expert witnesses may conveniently become unavailable, after their reports have been admitted into evidence.

In multi-district litigations, the course of litigation may take years and even decades. Reports filed early on may not reflect current views or the current state of the science. Deeming filed reports “admissible” could have a significant potential to subvert accurate fact finding.

In Ake v. General Motors Corp.[4], Chief Judge Larimer faced a plaintiff who sought to offer in evidence a report written by plaintiffs’ expert witness, who was scheduled to testify at trial. The trial court held, however, that the report was inadmissible hearsay, for which no exception was available.[5] The report at issue was not a business record, which might be admissible under Rule 803(6), in that it was not made at or near the time of the events at issue, and preparing it was not part of the expert witness’s regularly conducted business activity.

There are plenty of areas of the law in which reforms are helpful and necessary. The formality of presenting an expert witness’s actual opinions, under oath, in open court, subject to objections and challenges, needs no abridgement.


[1] See, e.g., William Twining, “Bentham’s Theory of Evidence: Setting a Context,” 20 J. Bentham Studies 18 (2019); Kenneth M. Ehrenberg, “Less Evidence, Better Knowledge,” 2 McGill L.J. 173 (2015); Laird C. Kirkpatrick, “Scholarly and Institutional Challenges to the Law of Evidence: From Bentham to the ADR Movement,” 25 Loyola L.A. L. Rev. 837 (1992); Frederick N. Judson, “A Modern View of the Law Reforms of Jeremy Bentham,” 10 Columbia L. Rev. 41 (1910).

[2] See “Expert Witness Mining – Antic Proposals for Reform” (Nov. 4, 2014).

[3] Roger J. Marzulla, “Expert Reports: Objectionable Hearsay or Admissible Evidence in a Bench Trial?” A.B.A. (May 17, 2021).

[4] 942 F.Supp. 869 (W.D.N.Y. 1996).

[5] Ake v. General Motors Corp., 942 F.Supp. 869, 877 (W.D.N.Y. 1996).

People Get Ready – There’s a Reference Manual a Comin’

July 16th, 2021

Science is the key …

Back in February, I wrote about a National Academies’ workshop that featured some outstanding members of the scientific and statistical world, and which asked participants to identify new potential subjects for inclusion in a proposed fourth edition of the Reference Manual on Scientific Evidence.[1] Funding for that new edition is now secured, and the National Academies has published a précis of the February workshop. National Academies of Sciences, Engineering, and Medicine, Emerging Areas of Science, Engineering, and Medicine for the Courts: Proceedings of a Workshop – in Brief (Washington, DC 2021). The Rapporteurs for these proceedings provide a helpful overview of this meeting, which was not generally covered in the legal media.[2]

The goal of the workshop, which was supported by a planning committee, the Committee on Science, Technology, and Law, the National Academies, the Federal Judicial Center, and the National Science Foundation, was, of course, to identify chapters for a new, fourth edition, of Reference Manual on Scientific Evidence. The workshop was co-chaired by Dr. Thomas D. Albright, of the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, Judge on the U.S. Court of Appeals for the Federal Circuit.

The Rapporteurs duly noted Judge O’Malley’s workshop comments that she hoped the reconsideration of the Reference Manual could help close the gap between science and the law. It is thus encouraging that the Rapporteurs focused a large part of their summary on the presentation of Professor Xiao-Li Meng[3] on selection bias, which “can come from cherry picking data, which alters the strength of the evidence.” Meng identified the

“7 S’(ins)” of selection bias:

(1) selection of target/hypothesis (e.g., subgroup analysis);

(2) selection of data (e.g., deleting ‘outliers’ or using only ‘complete cases’);

(3) selection of methodologies (e.g., choosing tests to pass the goodness-of-fit);

(4) selective due diligence and debugging (e.g., triple checking only when the outcome seems undesirable);

(5) selection of publications (e.g., only when p-value <0.05);

(6) selections in reporting/summary (e.g., suppressing caveats); and

(7) selections in understanding and interpretation (e.g., our preference for deterministic, ‘common sense’ interpretation).”

Meng also addressed the problem of analyzing subgroup findings after finding no association in the full sample, dubious algorithms, selection bias in publishing “splashy” and nominally “statistically significant” results, and media bias and incompetence in disseminating study results. Meng discussed how these biases could affect the accuracy, validity, and reliability of research findings, including the findings relied upon by expert witnesses in court cases.
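Meng’s fifth “sin,” publication conditioned on p < 0.05, is easy to demonstrate numerically. The following short simulation is my own illustration, not anything presented at the workshop; the true effect size, sample size, and number of studies are arbitrary choices. It draws many small studies of a modest true effect and compares the average effect estimate across all studies with the average among only the nominally “significant” ones that would survive a p < 0.05 publication filter:

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # modest true mean difference, in standard-deviation units
N = 25              # small sample size per study
STUDIES = 2000      # number of simulated studies

def one_study():
    """Simulate one small study; return (effect estimate, significant?)."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    est = statistics.mean(sample)
    z = est / (1.0 / math.sqrt(N))   # standard error with known unit variance
    return est, abs(z) > 1.96        # two-sided p < 0.05

results = [one_study() for _ in range(STUDIES)]
all_estimates = [est for est, _ in results]
published = [est for est, sig in results if sig]  # the p < 0.05 filter

print(f"mean effect, all studies:      {statistics.mean(all_estimates):.2f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.2f}")
```

The unfiltered average recovers the true effect, while the “published” subset overstates it by roughly a factor of two or more, because only the studies with unusually large random errors clear the significance threshold. A court seeing only the published literature sees an inflated effect.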

The Rapporteurs’ emphasis on Professor Meng’s presentation was noteworthy because the current edition of the Reference Manual is generally lacking in a serious exploration of systematic bias and confounding. To be sure, the concepts are superficially addressed in the Manual’s chapter on epidemiology, but in a way that has allowed many district judges to shrug off serious questions of invalidity with the shibboleth that such questions “go to the weight, not the admissibility,” of challenged expert witness opinion testimony. Perhaps the pending revision to Rule 702 will help improve fidelity to the spirit and text of the rule.

Questions of bias and noise have come to receive more attention in the professional statistical and epidemiologic literature. In 2009, Professor Timothy Lash published an important book-length treatment of quantitative bias analysis.[4] Last year, statistician David Hand published a comprehensive, but readily understandable, book on “dark data” and the ways statistical and scientific inference are derailed.[5] One of the presenters at the February workshop, Nobel laureate Daniel Kahneman, published a book on “noise” just a few weeks ago.[6]

David Hand’s Dark Data (Chapter 10) sets out a useful taxonomy of the ways that data can be subverted by what the consumers of data do not know. The taxonomy would provide a useful organizational map for a new chapter of the Reference Manual:

A Taxonomy of Dark Data

Type 1: Data We Know Are Missing

Type 2: Data We Don’t Know Are Missing

Type 3: Choosing Just Some Cases

Type 4: Self-Selection

Type 5: Missing What Matters

Type 6: Data Which Might Have Been

Type 7: Changes with Time

Type 8: Definitions of Data

Type 9: Summaries of Data

Type 10: Measurement Error and Uncertainty

Type 11: Feedback and Gaming

Type 12: Information Asymmetry

Type 13: Intentionally Darkened Data

Type 14: Fabricated and Synthetic Data

Type 15: Extrapolating beyond Your Data

Providing guidance not only on “how we know,” but also on how we go astray (call it patho-epistemology), would be helpful for judges and lawyers. Hand’s book is really just a beginning to helping gatekeepers appreciate how superficially plausible health-effects claims are invalidated by the data relied upon by proffered expert witnesses.

* * * * * * * * * * * *

“There ain’t no room for the hopeless sinner
Who would hurt all mankind, just to save his own, believe me now
Have pity on those whose chances grow thinner”


[1]Reference Manual on Scientific Evidence v4.0” (Feb. 28, 2021).

[2] Steven Kendall, Joe S. Cecil, Jason A. Cantone, Meghan Dunn, and Aaron Wolf.

[3] Prof. Meng is the Whipple V. N. Jones Professor of Statistics at Harvard University. (“Seeking simplicity in statistics, complexity in wine, and everything else in fortune cookies.”)

[4] Timothy L. Lash, Matthew P. Fox, and Aliza K. Fink, Applying Quantitative Bias Analysis to Epidemiologic Data (2009).

[5] David J. Hand, Dark Data: Why What You Don’t Know Matters (2020).

[6] Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (2021).

Slide Rule 702

June 26th, 2021


The opposition to Daubert’s regime of gatekeeping by the lawsuit industry has been fierce. From the beginning, the resistance has found allies on the bench, who have made the application of Rule 702 to expert witnesses, in both civil and criminal cases, uneven at best. Back in 2015, Professor David Bernstein and Eric Lasker wrote an exposé about the unlawful disregard of the statutory language of Rule 702, and they called for, and proposed, an amendment to the rule.[1] At the time, I was skeptical of unleashing a change through the rules committee, given the uncertainty over what any amendment might ultimately look like.[2]

In the last several years, there have been some notable applications of Rule 702 in litigation involving sertraline, atorvastatin, sildenafil, and other medications, but aberrant decisions have continued to upend the rule of law in the area of expert witness gatekeeping. Last year, I noted that I had come to see the wisdom of Professor Bernstein’s proposal,[3] in the light of continued judicial dodging of Rule 702.[4] Numerous lawyers and legal organizations have chimed in to urge a revision to Rule 702.[5]

Earlier this week, the Committee on Rules of Practice & Procedure rolled out a proposed draft of an amended Rule 702.[6] The proposed new rule looks very much like the current rule:[7]

“Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent has demonstrated by a preponderance of the evidence that:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.”

In subsection (d), the draft strikes the current rule’s “the expert has reliably applied the principles and methods to the facts of the case” in favor of the new “reliable application” language, and the “preponderance of the evidence” clause in the preamble is new.

Despite what look like minor linguistic changes, the Rules Committee’s note suggests otherwise. First, the amendment is intended to emphasize that the burden of showing the admissibility requirements rests on the proponent of the challenged expert witness testimony. The burden, of course, has always been with the proponent, but some courts have deployed various stratagems to shift it, with conclusory assessments that the challenge “goes to the weight not the admissibility,” thereby dodging the judicial responsibility for gatekeeping. The Committee now would make clear that many courts have erred by treating the “critical questions of the sufficiency of an expert’s basis, and the application of the expert’s methodology” as going to weight and not admissibility.[8]

The Committee appears, however, to be struggling to provide guidance on when challenges do raise “matters of weight rather than admissibility.” For instance, the Committee Note suggests that:

“nothing in the amendment requires the court to nitpick an expert’s opinion in order to reach a perfect expression of what the basis and methodology can support. The Rule 104(a) standard does not require perfection. On the other hand, it does not permit the expert to make extravagant claims that are unsupported by the expert’s basis and methodology.”[9]

Somehow, I fear that the mantra of “weight not admissibility” has been or will be replaced by refusals to nitpick an expert’s opinion. How many nits does it take to make a causal claim “extravagant”?

Perhaps I am the one nitpicking now. The Committee has recognized the essential weakness of gatekeeping as frequently practiced in federal courts by emphasizing that judicial gatekeeping is “essential” and required by the institutional incompetence of jurors to determine whether expert witnesses have reliably applied sound methodology to the facts of the case:

“a trial judge must exercise gatekeeping authority with respect to the opinion ultimately expressed by a testifying expert. A testifying expert’s opinion must stay within the bounds of what can be concluded by a reliable application of the expert’s basis and methodology. Judicial gatekeeping is essential because just as jurors may be unable to evaluate meaningfully the reliability of scientific and other methods underlying expert opinion, jurors may also be unable to assess the conclusions of an expert that go beyond what the expert’s basis and methodology may reliably support.”[10]

If the sentiment of the Rule Committee’s draft note carries through to the Committee Note that accompanies the amended rule, then perhaps some good will come of this effort.


[1] David E. Bernstein & Eric G. Lasker,“Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1 (2015).

[2]On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

[3]Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[4]Dodgy Data Duck Daubert Decisions” (April 2, 2020); “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020); “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702” (May 13, 2020); “Judicial Dodgers – Weight not Admissibility” (May 28, 2020); “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent” (June 2, 2020).

[5] See, e.g., Daniel Higginbotham, “The Proposed Amendment to Federal Rule of Evidence 702 – Will it Work?” IADC Products Liability Newsletter (March 2021); Cary Silverman, “Fact or Fiction: Ensuring the Integrity of Expert Testimony,” U.S. Chamber Instit. Leg. Reform (Feb. 2021); Thomas D. Schroeder, “Federal Courts, Practice & Procedure: Toward a More Apparent Approach to Considering the Admission of Expert Testimony,” 95 Notre Dame L. Rev. 2039, 2043 (2020); Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 To Correct Judicial Misunderstanding about Expert Evidence,” Wash. Leg. Foundation (May 2020) 13-18 (noting numerous cases that fail to honor the spirit and language of Rule 702); Lawyers for Civil Justice, “Comment to the Advisory Comm. on Evidence Rules and its Rule 702 Subcommittee; A Note about the Note: Specific Rejection of Errant Case Law is Necessary for the Success of an Amendment Clarifying Rule 702’s Admissibility Requirements” 1 (Feb. 8, 2021) (arguing that “[t]he only unambiguous way for the Note to convey the intent of the amendment is to reject the specific offending caselaw by name.”).

[6] Committee on Rules of Practice & Procedure Agenda Book (June 22, 2021). See Cara Salvatore, “Court Rules Committee Moves to Stiffen Expert Standard,” Law360 (June 23, 2021).

[7] Id. at 836. The proposal has been the subject of submissions and debate for a while. See Jim Beck, “Civil Rules Committee Proposes to Toughen Rule 702,” Drug & Device Law (May 4, 2021).

[8] Committee on Rules of Practice & Procedure Agenda Book at 839 (June 22, 2021).

[9] Id.

[10] Id. at 838-39.

Reference Manual on Scientific Evidence v4.0

February 28th, 2021

The need for revisions to the third edition of the Reference Manual on Scientific Evidence (RMSE) has been apparent since its publication in 2011. A decade has passed, and the federal agencies involved in the third edition, the Federal Judicial Center (FJC) and the National Academies of Sciences, Engineering, and Medicine (NASEM), are assembling staff to prepare the long-needed revisions.

The first sign of life for this new edition came back on November 24, 2020, when the NASEM held a short, closed-door virtual meeting to discuss planning for a fourth edition.[1] The meeting was billed by the NASEM as “the first meeting of the Committee on Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence.” The Committee members heard from John S. Cooke (FJC Director), and Alan Tomkins and Reggie Sheehan, both of the National Science Foundation (NSF). The stated purpose of the meeting was to review the third edition of the RMSE to “identify areas of science, technology, and medicine that may be candidates for new or updated chapters in a proposed new (fourth) edition of the manual.” The only public pronouncement from the first meeting was that the committee would sponsor a workshop on the topic of new chapters for the RMSE, in early 2021.

The Committee’s second meeting took place a week later, again in closed session.[2] The stated purpose of the Committee’s second meeting was to review the third edition of the RMSE, and to discuss candidate areas for inclusion as new and updated chapters for a fourth edition.

Last week saw the Committee’s third, public meeting. The meeting spanned two days (Feb. 24 and 25, 2021), and was open to the public. The meeting was sponsored by NASEM, FJC, along with the NSF, and was co-chaired by Thomas D. Albright, Professor and Conrad T. Prebys Chair at the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, who sits on the United States Court of Appeals for the Federal Circuit. Identified members of the committee include:

Steven M. Bellovin, professor in the Computer Science department at Columbia University;

Karen Kafadar, Departmental Chair and Commonwealth Professor of Statistics at the University of Virginia, and former president of the American Statistical Association;

Andrew Maynard, professor, and director of the Risk Innovation Lab at the School for the Future of Innovation in Society, at Arizona State University;

Venkatachalam Ramaswamy, Director of the Geophysical Fluid Dynamics Laboratory of the National Oceanic and Atmospheric Administration (NOAA) Office of Oceanic and Atmospheric Research (OAR), studying climate modeling and climate change;

Thomas Schroeder, Chief Judge for the U.S. District Court for the Middle District of North Carolina;

David S. Tatel, United States Court of Appeals for the District of Columbia Circuit; and

Steven R. Kendall, Staff Officer

The meeting comprised five panel presentations, made up of remarkably accomplished and talented speakers. Each panel’s presentations were followed by discussion among the panelists, and the committee members. Some panels answered questions submitted from the public audience. Judge O’Malley opened the meeting with introductory remarks about the purpose and scope of the RMSE, and of the inquiry into additional possible chapters.

  1. Challenges in Evaluating Scientific Evidence in Court

The first panel consisted entirely of judges, who held forth on their approaches to judicial gatekeeping of expert witnesses, and their approach to scientific and technical issues. Chief Judge Schroeder moderated the presentations of panelists:

Barbara Parker Hervey, Texas Court of Criminal Appeals;

Patti B. Saris, Chief Judge of the United States District Court for the District of Massachusetts,  member of President’s Council of Advisors on Science and Technology (PCAST);

Leonard P. Stark, U.S. District Court for the District of Delaware; and

Sarah S. Vance, Judge (former Chief Judge) of the U.S. District Court for the Eastern District of Louisiana, chair of the Judicial Panel on Multidistrict Litigation.

2. Emerging Issues in the Climate and Environmental Sciences

Paul Hanle, of the Environmental Law Institute moderated presenters:

Joellen L. Russell, the Thomas R. Brown Distinguished Chair of Integrative Science and Professor at the University of Arizona in the Department of Geosciences;

Veerabhadran Ramanathan, Edward A. Frieman Endowed Presidential Chair in Climate Sustainability at the Scripps Institution of Oceanography at the University of California, San Diego;

Benjamin D. Santer, atmospheric scientist at Lawrence Livermore National Laboratory; and

Donald J. Wuebbles, the Harry E. Preble Professor of Atmospheric Science at the University of Illinois.

3. Emerging Issues in Computer Science and Information Technology

Josh Goldfoot, Principal Deputy Chief, Computer Crime & Intellectual Property Section, at U.S. Department of Justice, moderated panelists:

Jeremy J. Epstein, Deputy Division Director of Computer and Information Science and Engineering (CISE) and Computer and Network Systems (CNS) at the National Science Foundation;

Russ Housley, founder of Vigil Security, LLC;

Subbarao Kambhampati, professor of computer science at Arizona State University; and

Alice Xiang, Senior Research Scientist at Sony AI.

4. Emerging Issues in the Biological Sciences

Panel four was moderated by Professor Ellen Wright Clayton, the Craig-Weaver Professor of Pediatrics, and Professor of Law and of Health Policy at Vanderbilt Law School, at Vanderbilt University. Her panelists were:

Dana Carroll, distinguished professor in the Department of Biochemistry at the University of Utah School of Medicine;

Yaniv Erlich, Chief Executive Officer of Eleven Therapeutics, Chief Science Officer of MyHeritage;

Steven E. Hyman, director of the Stanley Center for Psychiatric Research at Broad Institute of MIT and Harvard; and

Philip Sabes, Professor Emeritus in Physiology at the University of California, San Francisco (UCSF).

5. Emerging Areas in Psychology, Data, and Statistical Sciences

Gary Marchant, Lincoln Professor of Emerging Technologies, Law and Ethics, at Arizona State University’s Sandra Day O’Connor College of Law, moderated panelists:

Xiao-Li Meng, the Whipple V. N. Jones Professor of Statistics, Harvard University, and the Founding Editor-in-Chief of Harvard Data Science Review;

Rebecca Doerge, Glen de Vries Dean of the Mellon College of Science at Carnegie Mellon University, member of the Dietrich College of Humanities and Social Sciences’ Department of Statistics and Data Science, and of the Mellon College of Science’s Department of Biological Sciences;

Daniel Kahneman, Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem; and

Goodwin Liu, Associate Justice of the California Supreme Court.

The Proceedings of this two-day meeting were recorded and will be published. The website materials do not make clear whether the verbatim remarks will be included, but regardless, the proceedings should warrant careful reading.

Judge O’Malley, in her introductory remarks, emphasized that the RMSE must be a neutral, disinterested source of information for federal judges, an aspirational judgment from which there can be no dissent. More controversial will be Her Honor’s assessment that epidemiologic studies can “take forever,” and other judges’ suggestion that plaintiffs lack financial resources to put forward credible, reliable expert witnesses. Judge Vance corrected the course of the discussion by pointing out that MDL plaintiffs were not disadvantaged, but no one pointed out that plaintiffs’ counsel were among the wealthiest individuals in the United States, and that they have been known to sponsor epidemiologic and other studies that wind up as evidence in court.

Panel One was perhaps the most discomforting experience, as it involved revelations about how sausage is made in the gatekeeping process. The panel was remarkable for including a state court judge, Judge Barbara Parker Hervey of the Texas Court of Criminal Appeals. Judge Hervey remarked that [in her experience] if we judges “can’t understand it, we won’t read it.” Her dictum raises interesting issues. No doubt, in some instances, the judicial failure of comprehension is the fault of the lawyers. What happens when the judges “can’t understand it”? Do they ask for further briefing? Or do they ask for a hearing with viva voce testimony from expert witnesses? The point was not followed up.

Judge Leonard P. Stark’s insights were interesting in that his docket in the District of Delaware is flooded with patent and Hatch-Waxman Act litigation. His extensive educational training is in politics and political science. The docket volume Judge Stark described, however, raised questions about how much attention he could give to any one case.

When the panel was asked how they dealt with scientific issues, Judge Saris discussed her presiding over In re Neurontin, which was a “big challenge for me to understand,” with no randomized trials or objective assessments by the litigants.[3] Judge Vance discussed her experience of presiding in a low-level benzene exposure case, in which plaintiff claimed that his acute myelogenous leukemia was caused by gasoline.[4]

Perhaps the key difference in approach to Rule 702 emerged when the judges were asked whether they read the underlying studies. Judge Saris did not answer directly, but stated she reads the reports. Judge Vance, on the other hand, noted that she reads the relied upon studies. In her gasoline-leukemia case, she read the relied-upon epidemiologic studies, which she described as a “hodge podge,” and which were misrepresented by the expert witnesses and counsel. She emphasized the distortions of the adversarial system and the need to moderate its excesses by validating what exactly the expert witnesses had relied upon.

This division in judicial approach was seen again when Professor Karen Kafadar asked how the judges dealt with peer review. Judge Saris seemed to suggest that a peer-reviewed published article was prima facie reliable. Others disagreed and noted that peer-reviewed articles can have findings that are overstated, or simply wrong. One speaker noted that Jerome Kassirer had downplayed the significance of, and the validation provided by, peer review, in the RMSE (3d ed. 2011).

Curiously, there was no discussion of Rule 703, either in Judge O’Malley’s opening remarks on the RMSE, or in the first panel discussion. When someone from the audience submitted a question about the role of Rule 703 in the gatekeeping process, the moderator did not read it.

Panel Two. The climate change panel was a tour de force of the case for anthropogenic climate change. To some, the presentations may have seemed like a reprise of The Day After Tomorrow. Indeed, the science was presented so confidently, if not stridently, that one of the committee members asked whether there could be any reasonable disagreement. The panelists responded essentially by pointing out that there could be no good faith opposition. The panelists were much less convincing on the issue of attributability. None of the speakers addressed the appropriateness vel non of climate change litigation, when the federal and state governments encouraged, licensed, and regulated the exploitation and use of fossil fuel reserves.

Panel Four. Dr. Clayton’s panel was fascinating and likely to lead to new chapters. Professor Hyman presented on heritability, a subject that did not receive much attention in the RMSE third edition. With the advent of genetic claims of susceptibility and defenses of mutation-induced disease, courts will likely need some good advice on navigating the science. Dana Carroll presented on human genome editing (CRISPR). Philip Sabes presented on brain-computer interfaces, which have progressed well beyond the level of sci-fi thrillers, such as The Brain That Wouldn’t Die (“Jan in the Pan”).

In addition to the therapeutic applications, Sabes discussed some of the potential forensic uses, such as lie detectors, pain quantification, and the like. Yaniv Erlich, of MyHeritage, discussed advances in forensic genetic genealogy, which have made a dramatic entrance into the common imagination through the apprehension of Joseph James DeAngelo, the Golden State Killer. The technique of triangulating DNA matches from consumer DNA databases has other applications, of course, such as identifying lost heirs and resolving paternity issues.

Panel Five. Professor Marchant’s panel may well have identified some of the most salient needs for the next edition of the RMSE. Nobel Laureate Daniel Kahneman presented some of the highlights from his forthcoming book about “noise” in human judgment.[5] Kahneman’s expansion upon his previous thinking about the sources of error in human – and scientific – judgment is a much needed addition to the RMSE. Along the same lines, Professor Xiao-Li Meng presented on selection bias, how it pervades scientific work, and how it detracts from the strength of evidence in the form of:

  1. cherry picking
  2. subgroup analyses
  3. unprincipled handling of outliers
  4. selection in methodologies (different tests)
  5. selection in due diligence (check only when you don’t like results)
  6. publication bias that results from publishing only impressive or statistically significant results
  7. selection in reporting (not reporting limitations, or not reporting all analyses)
  8. selection in understanding
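The effect of item 6 – publishing only impressive or statistically significant results – is easy to demonstrate by simulation. The sketch below is purely illustrative and not drawn from Professor Meng’s presentation: it assumes a true effect of exactly zero, runs many small “studies,” and then “publishes” only those whose sample mean clears a conventional two-standard-error threshold. The published literature ends up with a badly inflated average effect size even though the truth is null.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup (illustrative assumptions, not from the post):
# the true effect is zero, each study samples 25 observations, and
# a journal "publishes" only statistically significant results.
TRUE_EFFECT = 0.0
N_PER_STUDY = 25
N_STUDIES = 2000

def run_study():
    """Simulate one small study; return its mean and whether it is 'significant'."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    return mean, abs(mean) > 1.96 * se  # roughly p < 0.05, two-sided

all_means, published_means = [], []
for _ in range(N_STUDIES):
    mean, significant = run_study()
    all_means.append(mean)
    if significant:
        published_means.append(mean)

# The full set of studies centers on the true (null) effect; the
# "published" slice does not -- its average magnitude is inflated.
print(f"mean |effect|, all studies:      {statistics.fmean(map(abs, all_means)):.3f}")
print(f"mean |effect|, published only:   {statistics.fmean(map(abs, published_means)):.3f}")
```

A gatekeeper who sees only the published slice would conclude there is a real effect; the selection mechanism, not the data, supplies the apparent signal.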

Professor Meng’s insights are sorely lacking in the third edition of the RMSE, and among judicial gatekeepers generally. All too often, undue selectivity in methodologies and in relied-upon data is treated by judges as an issue that “goes to the weight, not the admissibility” of expert witness opinion testimony. In actuality, selection biases, and other systematic and cognitive biases, are as important as, if not more important than, random error assessments. Indeed, a close look at the RMSE third edition reveals a close embrace of the amorphous, anything-goes “weight of the evidence” approach in the epidemiology chapter. That chapter marginalizes meta-analyses and fails to mention systematic review techniques altogether. The chapter on clinical medicine, however, takes a divergent approach, emphasizing the hierarchy of evidence inherent in different study types, and the need for principled and systematic reviews of the available evidence.[6]

The Committee co-chairs and panel moderators did a wonderful job of identifying important new trends in genetics, data science, error assessment, and computer science, and they should be congratulated for their efforts. Judge O’Malley is certainly correct in saying that the RMSE must be a neutral source of information on statistical and scientific methodologies, and it needs to be revised and updated to address errors and omissions in the previous editions. The legal community should look for, and study, the published proceedings when they become available.

——————————————————————————————————

[1]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting” (Nov. 24, 2020).

[2]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting 2 (Virtual)” (Dec. 1, 2020).

[3]  In re Neurontin Marketing, Sales Practices & Prods. Liab. Litig., 612 F. Supp. 2d 116 (D. Mass. 2009) (Saris, J.).

[4]  Burst v. Shell Oil Co., 104 F.Supp.3d 773 (E.D. La. 2015) (Vance, J.), aff’d, ___ Fed. App’x ___, 2016 WL 2989261 (5th Cir. May 23, 2016), cert. denied, 137 S.Ct. 312 (2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[5]  Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (anticipated May 2021).

[6]  See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” Reference Manual on Scientific Evidence 723-24 (3d ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).

Center for Truth in Science

February 2nd, 2021

The Center for Truth in Science

Well, now I have had the complete 2020 experience, trailing into 2021. COVID-19, a.k.a. the Trump flu, happened. The worst for me is now mostly over, and I can see a light at the end of the tunnel. Fortunately, it is not the hypoxemic end-of-life light at the end of the tunnel.

Kurt Gödel famously noted that “the meaning of the world is the separation of wish and fact.” The work of science in fields that touch on religion, politics, and other dogmas requires nothing less than separating wish from fact. Sadly, most people are cut off from the world of science by ignorance, lack of education, and social media that blur the distinction between wish and fact, and ultimately replace the latter with the former.

It should go without saying that truth is science and science is truth, but our current crises show that truth and science are both victims of the same forces that blur wish with fact. We might think that a center established for “truth in science” is as otiose as a center for justice in the law, but all the social forces at work to blur wish and fact make such a center an imperative for our time.

The Center for Truth in Science was established last year, and has already weighed in on important issues and scientific controversies that occupy American courtrooms and legislatures. Championing “fact-based” science, the Center has begun to tackle some of the difficult contemporary scientific issues that loom large on the judicial scene – talc, glyphosate, per- and polyfluoroalkyl substances, and others – as well as methodological and conceptual problems that underlie these issues. (Of course, there is no other kind of science than fact-based, but there are many pseudo-, non-fact based knock offs out there.) The Center has already produced helpful papers on various topics, with many more important papers in progress. The Center’s website is a welcome resource for news and insights on science that matters for current policy decisions.

The Center is an important and exciting development, and its work promises to provide the tools to help us separate wish from fact. Nothing less than the meaning of the world is at stake.

A TrumPence for Your Thoughts

November 21st, 2020

Trigger Warning: Political Rant

“Let them call me rebel and welcome, I feel no concern from it; but I should suffer the misery of devils, were I to make a whore of my soul by swearing allegiance to one whose character is that of a sottish, stupid, stubborn, worthless, brutish man.”

Thomas Paine, “The Crisis, Number 1” (Dec. 23, 1776), in Ian Shapiro & Jane E. Calvert, eds., Selected Writings of Thomas Paine 53, 58 (2014).

♂, ♀, ✳, †, ∞

Person, woman, man, camera, TV

Back on October 20, 2020, televangelist Pat Robertson heard voices in his head, and interpreted them to be the voice of god, announcing the imminent victory of Donald Trump. How Robertson knows he was not hearing the devil, he does not say. Even gods get their facts and predictions wrong sometimes. We should always ask for the data and the analysis.

Trump’s “spiritual advisor,” mega-maga-church pastor and televangelist, Paula White, violated the ban on establishment of religion, and prayed for Trump’s victory.[1] Speaking in tongues, White made Trump seem articulate. White wandered from unconstitutional into blatantly criminal territory, however, when she sought the intervention of foreign powers in the election, by summoning angels from Africa and South America to help Trump win. Trump seemed not to take notice that these angels were undocumented, illegal aliens. In the end, the unlawful aliens proved ineffective. Our better angels prevailed over Ms. White’s immigrant angels. Now ICE will have to track these angels down and deport them back to their you-know-what countries of origin.

How did we get to this place? It is not that astute observers on the left and the right did not warn us.

Before Trump was elected in 2016, Justice Ruth Bader Ginsburg notoriously bashed Donald Trump, by calling him a “faker”:

“He has no consistency about him. He says whatever comes into his head at the moment. He really has an ego … How has he gotten away with not turning over his tax returns? The press seems to be very gentle with him on that.”[2]

Faker was a fitting epithet that captured Trump’s many pretensions. It is a word that has a broader meaning in the polyglot world of New York City, where both Justice Ginsburg and Donald Trump were born and grew up. The word has a similar range of connotations as trombenik, “a lazy person, ne’er-do-well, boastful loudmouth, bullshitter, bum.” Maybe we should modify trombenik to Trumpnik?

Justice Ginsburg’s public pronouncement was, of course, inappropriate, but accurate nonetheless. She did something, however, that Trump has never done in his public persona; she apologized:

“‘On reflection, my recent remarks in response to press inquiries were ill-advised and I regret making them’, Ginsburg said in a statement Thursday morning. ‘Judges should avoid commenting on a candidate for public office. In the future I will be more circumspect’.”[3]

Of course, Justice Ginsburg should have been more circumspect, but her disdain for Trump was not simply an aversion to his toxic politics and personality. Justice Ginsburg was a close friend of Justice Antonin Scalia, who was one of the most conservative justices on the Supreme Court bench. Ginsburg and Scalia could and did disagree vigorously and still share friendship and many common interests. Scalia was not a faker; Trump is.

Other conservative writers have had an equal or even a greater disdain for Trump. On this side of the Atlantic, principled conservatives rejected the moral and political chaos of Donald Trump. When Trump’s nomination as the Republican Party candidate for president seemed assured in June 2016, columnist George Will announced to the Federalist Society that he had changed his party affiliation from Republican to unaffiliated.[4]

On the other side of the Atlantic, conservative thinkers such as the late Sir Roger Scruton rolled their eyes at the prospect of Donald Trump’s masquerading as a conservative.[5] After Trump had the benefit of a few months to get his sea legs on the ship of state, Sir Roger noted that Trump was nothing more than a craven opportunist:

“Q. Does ‘Trumpism’ as an ideology exist, and if it does, is it conservative, or is it just opportunism?

A. It is opportunism. He probably does have conservative instincts, but let’s face it, he doesn’t have any thoughts that are longer than 140 characters, so how can he have a real philosophy?”[6]

Twitter did, at some point, double the number of characters permitted in a tweet, but Trump simply repeated himself more.

In the United States, we have had social conservatives, fiscal conservatives, classic liberal conservatives, and more recently, we have seen neo-cons, theo-cons, and Vichy cons. I suppose there have always been con-cons, but Trump has strongly raised the profile of this last subgroup. There can be little doubt that Donald John Trump has always been a con-con. Now we have Banana Republicans who have made a travesty of the rule of law. Four years in, we are all suffering from what Barack Obama termed “truth decay.”

Cancel culture has always been with us. Socrates, Jesus, and Julius Caesar were all canceled, with extreme prejudice. In the United States, Senator Joseph McCarthy developed cancel culture into a national pastime. In this century, the Woke Left has weaponized cancel culture into a serious social and intellectual problem. Now, Donald Trump wants to go one step further and cancel our republican form of democracy. Trump is attempting in plain sight to cancel a national election he lost.

Yes, I have wandered from my main mission on this blog to write about tort law and about how the law handles scientific and statistical issues. My desultory writings on this blog have largely focused on evidence in scientific controversies that find their way into the law. Our political structures are created and conditioned by our law, and our commitment to the rule of law, and the mistreatment of scientific issues by political actors is as pressing a concern, to me at least, as mistreatment of science by judges or lawyers. Trump has now made the post-modernists look like paragons of epistemic virtue. As exemplified in the political response to the pandemic, this political development has important implications for the public acceptance of science and evidence-based policies and positions in all walks of life.

Another blogger whose work on science and risk I respect is David Zaruk, who openly acknowledges that Donald Trump is an “ethically and intellectually flawed train wreck of a politician.”[7] Like Trump apologists James Lindsay and Ben Shapiro, however, Zaruk excuses the large turnout for Trump because Trump voters:

“are sick to death of being told by smug, arrogant, sanctimonious zealots how to think, how to feel and how to act. Nobody likes to be fixed and especially not by self-righteous, moralising mercenaries.”

But wait:  Isn’t this putative defense itself a smug, arrogant, sanctimonious, and zealous lecture that we should somehow be tolerant of Trump and his supporters? What about the sickness unto death over Trump’s endless propagation of lies and fraud? Trump has set an example that empowers his followers to do likewise. Zaruk’s reductionist analysis ignores important determinants of the vote. Many of the Trump voters were motivated by the most self-righteous of all moralizing mercenaries – leaders of Christian nationalism.[8] Zaruk’s acknowledgement of Trump’s deep ethical and intellectual flaws, while refraining from criticizing Trump voters, fits the pattern of the Trump-supporting mass social media that engages in the rhetoric of gas-lighting “what-about-ism.”[9]

Sure, no one likes to be told that they are bereft of moral, practical, and political judgment, but voting for Trump is complicit in advancing “a deeply ethically and intellectually flawed” opportunist. Labeling all of Trump’s opponents as “smug, arrogant, sanctimonious zealots” is really as empty as Trump’s list of achievements. Furthermore, Zaruk’s animadversions against the Woke Left miss the full picture of who is criticizing Trump and his “base.” The critique of Trump has come not just from so-called progressives but from deeply conservative writers such as Will and Scruton, and from pragmatic conservative political commentators such as George Conway, Amanda Carpenter, Sarah Longwell, and Charles Sykes. There is no moral equivalency between the possibility that the Wokies will influence a Biden administration and the certainty that truly deplorable people such as Bannon, Gingrich, Giuliani, Navarro, et alia, will both influence and control our nation’s policy agenda.

Of course, Trump voters may honestly believe that a Democratic administration will be on the wrong side of key issues, such as immigration, abortion, gun control, regulation, taxation, and the like. Certainly opponents of the Democratic positions on these issues could seek an honest broker to represent their views. Trump voters, however, cannot honestly endorse the character and morality of Mr. Trump, his cabinet, and his key Senate enablers. Trump has been the Vector-in-Chief of contagion and lies. As for Trump’s evangelical Christian supporters, they have an irreconcilable problem with our fundamental prohibition against state establishment of religion.

It has been a difficult year for Trump. He has had the full 2020 experience. He developed COVID, lost his job, and received an eviction notice. And now he finds himself with electile dysfunction. Trump has long been a hater and a denier. Without intending to libel his siblings, we can say that hating and denying are in his DNA. Trump hates and denies truth, evidence, valid inference, careful analysis and synthesis. He is the apotheosis of what happens when a corrupt, small-minded businessman surrounds himself with lackeys, yes-people, and emotionally damaged, financially dependent children.

Trump declared victory before the votes could be tallied, and he announced in advance, without evidence, that the election was rigged, but only if it turned out with the “appearance” of his losing. After the votes were tallied, and he had lost the popular vote by over 5,000,000 and the Electoral College by the same margin he had labeled a “landslide” four years earlier, he claimed victory, contrary to the evidence, just as he said he would. Sore loser. Millions of voting Americans, to whom Zaruk would give a moral pass, do not see this as a problem.

In The Queen’s Gambit, a Netflix series, the stern, taciturn janitor of a girls’ orphanage, Mr. Shaibel, taught Beth Harmon, a seven year old, how to play chess. In one of their early games, Beth has a clearly lost position, and Mr. Shaibel instructs her, “now you resign.” Beth protests that she still has moves she can make before there is a checkmate, but Mr. Shaibel sternly repeats himself, “no, now you resign.” Beth breaks into tears and runs out of the room, but she learned the lesson and developed the resiliency, focus, and sportsmanship to play competitive chess at the highest level. If only Mr. Shaibel could have taught our current president this lesson, perhaps he would understand that the American electorate, both the self-styled progressives and conservatives who care about decency and morality, have united in saying to him, “now you resign.”

Dr. Mary Trump, the President’s niece, has written an unflattering psychological analysis of Trump. It does not take a Ph.D. in clinical psychology to see the problem. Donald Trump and his family do not have a dog. Before Donald Trump, James K. Polk (11th president) was the last president not to have a dog in the White House (March 4, 1845 – March 4, 1849). Polk died three months after leaving office.

I suppose there are some good people who do not like dogs, but liking and caring for dogs, and being open to their affection back, certainly marks people as capable of empathy, concern, and love. I could forgive the Obamas for never having had dogs before moving into the White House; they were a hard working, ambitious two-career couple, living in a large city. They fixed their omission shortly upon Obama’s election. The absence of a dog from the Trump White House speaks volumes about Trump. In a rally speech, he mocked: “Can you imagine me walking a dog?” Of course, he would not want to walk a dog down a ramp. How interesting that of all the criticisms lodged against Trump, the observation that he lacked canine companionship struck such a nerve that he addressed the matter defensively in one of his rallies. And how sad that he could not imagine his son Barron walking a dog. It was probably Barron’s only hope of having another living creature close to him show concern. Of course, Melania could walk the dog, which would allow her to do something useful and entertaining (besides ignoring the Christmas decorations), especially in her high-heel dog-walking shoes.

Saturday, November 7, 2020. O joy, o rapture! People danced in the streets of the Upper East Side. Cars honked horns. People hung out their windows and banged pots. Grown men and women shed tears of joy and laughter. A beautiful New York day, VD Day, not venereal disease day, but victory over Donald. Trump can begin to plan for the Trump Presidential Lie-brary and adult book store.

But wait. Trump legal advisor Harmeet Dhillon tells Lou Dobbs on the Fox News Channel: “We’re waiting for the United States Supreme Court – of which the president has nominated three justices – to step in and do something. And hopefully Amy Coney Barrett will come through.” Well, that was not a terribly subtle indication of the corruption in Trump’s soul and on his legal team. Americans now know all about loyalty oaths to the leader, and the abdication of principles. Fealty to Trump is the only principle; just read the Republican Party Platform.

Former White House chief strategist Steven Bannon was not to be outdone in his demonstrations of fealty. Bannon called for Dr. Anthony Fauci and FBI Director Christopher Wray to be beheaded “as a warning to federal bureaucrats. You either get with the program or you are gone.” Bannon, of course, was not in a principal-agent relationship with Trump, as was Dhillon, but given that Trump has an opinion about everything on Twitter-Twatter, and that he was silent about Bannon’s call for decapitations, we have to take his silence as tacit agreement.

It does seem that many Republicans are clutching at straws to hang on. Fraud claims require pleading with particularity, and proof by clear and convincing evidence. Extraordinary claims require extraordinary evidence. First- and second-order hearsay will not suffice. Surely, Rudy the Wanker knows this; indeed, when he has appeared in court, he has readily admitted that he is not pursuing a fraud case.[10] In open court, Giuliani, with a straight face, told a federal judge that his client was denied the opportunity to ensure opacity at the polls.[11]

Under the eye of Newt Gingrich, former Republican Speaker of the House, poll workers should be jailed, and Attorney General William P. Barr should step in to the fray. Never failing to disappoint, Bully Barr obliged. Still, the Republican attempt to win by litigation, a distinctly un-conservative approach, has been failing.[12]

How will we know when our national nightmare is over? There will not be the usual concession speech. Look for Trump’s announcement of his candidacy for the 2024 presidential election.

Donald J. Trump Foundation, Trump Airlines, Trump Magazine, Trump Steaks, Trump Vodka, Trump Mortgage, Trump: The Game, Trump University, GoTrump.com, Trump Marriage #1, Trump Marriage #2, Trump Taj Mahal, Trump Plaza Hotel and Casino, Plaza Hotel, Trump Castle Hotel and Casino, Trump Hotels and Casino Resorts, Trump Entertainment Resorts, Trumpnet – all failures – are now gone. Soon Trump himself will be gone as well.


Post-Script

A dimly lit room filled with coffins. Spider webs stretch across the room. Rats scurry across the floor. Slowly, the tops of the coffins are pushed open from within by skeletal arms. The occupants of the coffins, skeletons, slowly get up and start talking.

Skeleton one: COVID, COVID, COVID, COVID, COVID, COVID, that’s all everyone wants to talk about.

Skeleton two: It’s no big deal; we were going to die anyway. Well at some point.

Skeleton three: And besides, now we are immune. Ha, ha, ha!

Skeleton four: Hey, look at us; we’re rounding the corner.

All, singing while dancing in a circle conga line:

We’ll be coming around the corner when he’s gone (toot, toot)

We’ll be coming around the corner when he’s gone (toot, toot)

We’ll be coming around the corner, we’ll be coming around the corner

We’ll be coming around the corner when he’s gone (toot, toot).


[1]  Wyatte Grantham-Philips, “Pastor Paula White calls on angels from Africa and South America to bring Trump victory,” USA TODAY (Nov. 5, 2020).

[2]  John Kruzel, “Justice Ruth Bader Ginsburg has taken to bashing Donald Trump in recent days,” (July 12, 2016).

[3]  Jessica Taylor, “Ginsburg Apologizes For ‘Ill-Advised’ Trump Comments,” Nat’l Public Radio (July 14, 2016).

[4]  Maggie Haberman, “George Will Leaves the G.O.P. Over Donald Trump,” N.Y. Times (June 25, 2016).

[5]  Roger Scruton, “What Trump Doesn’t Get About Conservatism,” N.Y. Times (July 4, 2018).

[6]  Tom Szigeti, “Sir Roger Scruton on Trump: ‘He doesn’t have any thoughts that are longer than 140 characters’,” Hungary Today (June 8, 2017).

[7]  David Zaruk, “The Trump Effect: Stop Telling me What to Think!,” RiskMonger (Nov. 5, 2020).

[8]  See Katherine Stewart, The Power Worshippers: Inside the Dangerous Rise of Religious Nationalism (2020).

[9]  See Amanda Carpenter, Gaslighting America – Why We Love It When Trump Lies to Us (2018).

[10]  Lisa Lerer, “‘This Is Not a Fraud Case’: Keep an eye on what President Trump’s lawyers say about supposed voter fraud in court, where lying under oath is a crime,” (Nov. 18, 2020).

[11]  Gail Collins, “Barr the Bad or Rudy the Ridiculous?” N.Y. Times (Nov. 18, 2020).

[12]  Jim Rutenberg, Nick Corasaniti and Alan Feuer, “With No Evidence of Fraud, Trump Fails to Make Headway on Legal Cases,” N.Y. Times (Nov. 7, 2020); Aaron Blake, “It goes from bad to worse for the Trump legal team,” Wash. Post (Nov. 13, 2020); Alan Feuer, “Trump Loses String of Election Lawsuits, Leaving Few Vehicles to Fight His Defeat,” N.Y. Times (Nov. 13, 2020); Jon Swaine & Elise Viebeck, “Trump campaign jettisons major parts of its legal challenge against Pennsylvania’s election results,” Wash. Post (Nov. 15, 2020).

Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent

June 2nd, 2020

The Advisory Committee notes to the year 2000 amendment to Federal Rule of Evidence 702 included a comment:

“A review of the case law after Daubert shows that the rejection of expert testimony is the exception rather than the rule. Daubert did not work a ‘seachange over federal evidence law’, and ‘the trial court’s role as gatekeeper is not intended to serve as a replacement for the adversary system’.”[internal citation omitted]

In reporting its review of the case law, perhaps the Committee was attempting to allay the anxiety of technophobic judges. But was the Committee also attempting to derive an “ought” from an “is”? Before the Supreme Court decided Daubert in 1993, virtually every admissibility challenge to expert witness opinion testimony failed. The trial courts were slow to adapt and to adopt the reframed admissibility standard. As the Joiner case illustrated, some Circuits were even slower to permit trial judges the discretion to assess the validity vel non of expert witnesses’ opinions.

The Committee’s observation about the “exceptional” nature of exclusions was thus unexceptional as a description of the case law before and shortly after Daubert was decided. And even if the Committee were describing a normative view, it is not at all clear how that view should translate into a ruling in a given case, without a very close analysis of the opinions at issue, under the Rule 702 criteria. In baseball, most hitters are thrown out at first base, but that fact does not help an umpire one whit in calling a specific runner “safe” or “out.” Nonetheless, courts have repeatedly offered the observation about the exceptional nature of exclusion as both an explanation and a justification of their decisions to admit testimony.[1] The Advisory Committee note has thus mutated into a mandate to err on the side of admissibility, as though deliberately committing error were a good thing for any judge to do.[2] First rule: courts shall not err, whether intentionally, recklessly, or negligently.

Close Calls and Resolving Doubts

Another mutant offspring of the “exception, not the rule” mantra is that “[a]ny doubts regarding the admissibility of an expert’s testimony should be resolved in favor of admissibility.”[3] Why not resolve the doubts and rule in accordance with the law? Or, if doubts remain, then charge them against the proponent, who has the burden of showing admissibility? Unlike baseball, in which a tie goes to the runner, in expert witness law a tie goes to the challenger, because the proponent of the challenged testimony has failed to show admissibility by a preponderance of the evidence. A better mantra: “exclusion when it is the Rule.”

Some courts re-imagine the Advisory Committee’s observation about exceptional exclusions as a recommendation that admitting Rule 702 expert witness opinion testimony is the preferred outcome. Again, that interpretation reverses the burden of proof and makes a mockery of equal justice and scientific due process.

Yet another similar judicial mutation is the notion that courts should refuse Rule 702 motions when they are “close calls.”[4] Telling the litigants that the call was close might help assuage the loser and temper the litigation enthusiasms of the winner, but it does not answer the key question: Did the proponent carry the burden of showing admissibility? Residual doubts would seem to weigh against the proponent.

Not all is lost. In one case, decided by a trial court within the Ninth Circuit, the trial judge explicitly pointed to the proponent’s failure to identify his findings and methodology as part of the basis for exclusion, not admission, of the challenged witness’s opinion testimony.[5] Difficulty in resolving whether the Rule 702 predicates were satisfied worked against, not for, the proponent, whose burden it was to show those predicates.

In another case, Judge David G. Campbell, of the District of Arizona, who has participated in the Rules Committee’s deliberations, showed the way by clearly stating that the exclusion of opinion testimony was required when the Rule 702 conditions were not met:

“Plaintiffs have not shown by a preponderance of the evidence that [the expert witness’s] causation opinions are based on sufficient facts or data to which reliable principles and methods have been applied reliably… .”[6]

Exclusion followed because the absent showings were “conditions for admissibility,” and not “mere” credibility considerations.

Trust Me, I’m a Liberal

One of the reasons that the Daubert Court rejected incorporating the Frye standard into Rule 702 was its view that a rigid “general acceptance” standard “would be at odds with the ‘liberal thrust’ of the Federal Rules.”[7] Some courts have cited this “liberal thrust” as though it explained or justified a particular decision to admit expert witness opinion testimony.[8]

The word “liberal” does not appear in the Federal Rules of Evidence.  Instead, the Rules contain an explicit statement of how judges must construe and apply the evidentiary provisions:

“These rules shall be construed to secure fairness in administration, elimination of unjustifiable expense and delay, and promotion of growth and development of the law of evidence to the end that the truth may be ascertained and proceedings justly determined.”[9]

A “liberal” approach, construed as a “let it all in” approach, would be ill-designed to secure fairness, eliminate unjustifiable expense and time of trial, or lead to just and correct outcomes. The “liberal” approach of letting in opinion testimony and letting the jury guess at questions of scientific validity would be a most illiberal result. The truth will not be readily ascertained if expert witnesses are permitted to pass off hypotheses and ill-founded conclusions as scientific knowledge.

Avoiding the rigidity of the Frye standard, which was so rigid that it was virtually never applied, certainly seems like a worthwhile judicial goal. But how do courts get from Justice Blackmun’s “liberal thrust” to a libertine “anything goes”? And why does “liberal” not connote the pursuit of truth, free of superstition? Can it be liberal to permit opinions based upon fallacious or flawed inferences, invalid studies, or cherry-picked data sets?

In reviewing the many judicial dodges used to avoid meaningful Rule 702 gatekeeping, I am mindful of Reporter Daniel Capra’s caveat that the ill-advised locutions used by judges do not necessarily mean that their decisions could not be justified in a carefully worded and reasoned opinion showing that Rule 702 and all its subparts were met. Of course, we could infer that the conditions for admissibility were met whenever an expert witness’s opinions were admitted, and ditch the whole process of having judges offer reasoned explanations. Due process, however, requires more. Judges need to specify why they denied Rule 702 challenges in terms of the statutory requirements for admissibility, so that other courts and the Bar can develop a principled jurisprudence of expert witness opinion testimony.


[1]  See, e.g., In re Scrap Metal Antitrust Litig., 527 F.3d 517, 530 (6th Cir. 2008) (“‘[R]ejection of expert testimony is the exception, rather than the rule,’ and we will generally permit testimony based on allegedly erroneous facts when there is some support for those facts in the record.”) (quoting Advisory Committee Note to 2000 Amendments to Rule 702); Citizens State Bank v. Leslie, No. 6-18-CV-00237-ADA, 2020 WL 1065723, at *4 (W.D. Tex. Mar. 5, 2020) (rejecting challenge to expert witness opinion “not based on sufficient facts”; excusing failure to assess factual basis with statement that “the rejection of expert testimony is the exception rather than the rule.”); In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., No. 2:18-CV-00136, 2019 WL 6894069, at *2 (S.D. Ohio Dec. 18, 2019) (committing naturalistic fallacy; “[A] review of the case law … shows that rejection of the expert testimony is the exception rather than the rule.”); Frankenmuth Mutual Insur. Co. v. Ohio Edison Co., No. 5:17CV2013, 2018 WL 9870044, at *2 (N.D. Ohio Oct. 9, 2018) (quoting Advisory Committee Note “exception”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006) (“Rejection of expert testimony, however, is still ‘the exception rather than the rule,’ Fed. R. Evid. 702 advisory committee’s note (2000 Amendments)[.] . . . Thus, in a close case the testimony should be allowed for the jury’s consideration.”) (internal quotation omitted).

[2]  Lombardo v. Saint Louis, No. 4:16-CV-01637-NCC, 2019 WL 414773, at *12 (E.D. Mo. Feb. 1, 2019) (“[T]he Court will err on the side of admissibility.”).

[3]  Mason v. CVS Health, 384 F. Supp. 3d 882, 891 (S.D. Ohio 2019).

[4]  Frankenmuth Mutual Insur. Co. v. Ohio Edison Co., No. 5:17CV2013, 2018 WL 9870044, at *2 (N.D. Ohio Oct. 9, 2018) (concluding “[a]lthough it is a very close call, the Court declines to exclude Churchwell’s expert opinions under Rule 702.”); In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., No. 2:18-CV-00136, 2019 WL 6894069, at *2 (S.D. Ohio Dec. 18, 2019) (suggesting doubts should be resolved in favor of admissibility).

[5]  Rovid v. Graco Children’s Prod. Inc., No. 17-CV-01506-PJH, 2018 WL 5906075, at *13 (N.D. Cal. Nov. 9, 2018), app. dism’d, No. 19-15033, 2019 WL 1522786 (9th Cir. Mar. 7, 2019).

[6]  Alsadi v. Intel Corp., No. CV-16-03738-PHX-DGC, 2019 WL 4849482, at *4 -*5 (D. Ariz. Sept. 30, 2019).

[7]  Daubert v. Merrell Dow Pharms., Inc. 509 U.S. 579, 588 (1993).

[8]  In re ResCap Liquidating Trust Litig., No. 13-CV-3451 (SRN/HB), 2020 WL 209790, at *3 (D. Minn. Jan. 14, 2020) (“Courts generally support an attempt to liberalize the rules governing the admission of expert testimony, and favor admissibility over exclusion.”)(internal quotation omitted); Collie v. Wal-Mart Stores East, L.P., No. 1:16-CV-227, 2017 WL 2264351, at *1 (M.D. Pa. May 24, 2017) (“Rule 702 embraces a ‘liberal policy of admissibility’, under which it is preferable to admit any evidence that may assist the factfinder[.]”); In re Zyprexa Prod. Liab. Litig., 489 F. Supp. 2d 230, 282 (E.D.N.Y. 2007); Billone v. Sulzer Orthopedics, Inc., No. 99-CV-6132, 2005 WL 2044554, at *3 (W.D.N.Y. Aug. 25, 2005) (“[T]he Supreme Court has emphasized the ‘liberal thrust’ of Rule 702, favoring the admissibility of expert testimony.”).

[9]  Federal Rule of Evidence 102 (“Purpose and Construction”) (emphasis added).