TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

People Get Ready – There’s a Reference Manual a Comin’

July 16th, 2021

Science is the key …

Back in February, I wrote about a National Academies’ workshop that featured some outstanding members of the scientific and statistical world, and which gave participants an opportunity to identify potential new subjects for inclusion in a proposed fourth edition of the Reference Manual on Scientific Evidence.[1] Funding for that new edition is now secured, and the National Academies has published a précis of the February workshop. National Academies of Sciences, Engineering, and Medicine, Emerging Areas of Science, Engineering, and Medicine for the Courts: Proceedings of a Workshop – in Brief (Washington, DC 2021). The Rapporteurs for these proceedings provide a helpful overview of the meeting, which was not generally covered in the legal media.[2]

The goal of the workshop, which was supported by a planning committee, the Committee on Science, Technology, and Law, the National Academies, the Federal Judicial Center, and the National Science Foundation, was, of course, to identify chapters for a new, fourth edition of the Reference Manual on Scientific Evidence. The workshop was co-chaired by Dr. Thomas D. Albright, of the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, Judge on the U.S. Court of Appeals for the Federal Circuit.

The Rapporteurs duly noted Judge O’Malley’s workshop comments that she hoped the reconsideration of the Reference Manual could help close the gap between science and the law. It is thus encouraging that the Rapporteurs focused a large part of their summary on the presentation of Professor Xiao-Li Meng[3] on selection bias, which “can come from cherry picking data, which alters the strength of the evidence.” Meng identified the “7 S(ins)” of selection bias:

(1) selection of target/hypothesis (e.g., subgroup analysis);

(2) selection of data (e.g., deleting ‘outliers’ or using only ‘complete cases’);

(3) selection of methodologies (e.g., choosing tests to pass the goodness-of-fit);

(4) selective due diligence and debugging (e.g., triple checking only when the outcome seems undesirable);

(5) selection of publications (e.g., only when p-value <0.05);

(6) selections in reporting/summary (e.g., suppressing caveats); and

(7) selections in understanding and interpretation (e.g., our preference for deterministic, ‘common sense’ interpretation).

Meng also addressed the problems of analyzing subgroup findings after failing to find an association in the full sample, of dubious algorithms, of selection bias in publishing “splashy” and nominally “statistically significant” results, and of media bias and incompetence in disseminating study results. He discussed how these biases undermine the accuracy, validity, and reliability of research findings, including the findings relied upon by expert witnesses in court cases.
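Meng’s point about publication selection is easy to demonstrate numerically. The short simulation below is only a minimal sketch of the idea, not anything presented at the workshop: it draws thousands of small two-arm studies of a modest true effect and compares the average estimated effect across all studies with the average among only those studies reaching p < 0.05. The “published” subset systematically overstates the true effect.

```python
# Minimal sketch (not from the workshop): how publishing only p < 0.05
# results inflates the apparent size of a true but modest effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2      # true mean difference, in standard-deviation units
n_per_arm = 50         # small studies
n_studies = 10_000

estimates, published = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    t_stat, p_value = stats.ttest_ind(treated, control)
    estimate = treated.mean() - control.mean()
    estimates.append(estimate)
    if p_value < 0.05:   # the selection step: only "significant" studies see print
        published.append(estimate)

print(f"True effect:                    {true_effect:.2f}")
print(f"Mean estimate, all studies:     {np.mean(estimates):.2f}")
print(f"Mean estimate, published only:  {np.mean(published):.2f}")
print(f"Fraction published:             {len(published)/n_studies:.2%}")
```

A gatekeeper who sees only the “published” subset sees an exaggerated effect, which is Meng’s publication sin in miniature.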

The Rapporteurs’ emphasis on Professor Meng’s presentation was noteworthy because the current edition of the Reference Manual is generally lacking in a serious exploration of systematic bias and confounding. To be sure, the concepts are superficially addressed in the Manual’s chapter on epidemiology, but in a way that has allowed many district judges to shrug off serious questions of invalidity with the shibboleth that such questions “go to the weight, not the admissibility,” of challenged expert witness opinion testimony. Perhaps the pending revision to Rule 702 will help improve fidelity to the spirit and text of the rule.

Questions of bias and noise have come to receive more attention in the professional statistical and epidemiologic literature. In 2009, Professor Timothy Lash published an important book-length treatment of quantitative bias analysis.[4] Last year, statistician David Hand published a comprehensive, but readily understandable, book on “Dark Data,” and the ways statistical and scientific inference are derailed.[5] One of the presenters at the February workshop, Nobel laureate Daniel Kahneman, published a book on “noise” just a few weeks ago.[6]

David Hand’s book, Dark Data, (Chapter 10) sets out a useful taxonomy of the ways that data can be subverted by what the consumers of data do not know. The taxonomy would provide a useful organizational map for a new chapter of the Reference Manual:

A Taxonomy of Dark Data

Type 1: Data We Know Are Missing

Type 2: Data We Don’t Know Are Missing

Type 3: Choosing Just Some Cases

Type 4: Self-Selection

Type 5: Missing What Matters

Type 7: Changes with Time

Type 8: Definitions of Data

Type 9: Summaries of Data

Type 11: Feedback and Gaming

Type 12: Information Asymmetry

Type 13: Intentionally Darkened Data

Type 14: Fabricated and Synthetic Data

Type 15: Extrapolating beyond Your Data

Providing guidance not only on “how we know,” but also on how we go astray (patho-epistemology), would be helpful for judges and lawyers. Hand’s book is really just a beginning to helping gatekeepers appreciate how superficially plausible health-effects claims are invalidated by the data relied upon by proffered expert witnesses.
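Hand’s Type 3 (“Choosing Just Some Cases”) and Type 4 (“Self-Selection”) can be illustrated just as briefly. The snippet below is a hypothetical sketch, not an example drawn from Hand’s book: when the chance that a measurement is recorded depends on the value being measured, an analysis of the “complete cases” is biased.

```python
# Hypothetical sketch: restricting an analysis to "complete cases"
# biases an estimate when missingness depends on the value being measured.
import numpy as np

rng = np.random.default_rng(7)
exposure = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # true exposure levels

# Suppose high exposures are more likely to go unrecorded (e.g., monitors
# saturate, subjects drop out): the probability of being observed falls
# as exposure rises.
p_observed = 1.0 / (1.0 + exposure)
observed = exposure[rng.random(exposure.size) < p_observed]

print(f"True mean exposure:            {exposure.mean():.2f}")
print(f"'Complete case' mean exposure: {observed.mean():.2f}")   # understated
```

The analyst who never sees the missing half of the data has no internal warning that the recorded mean is too low; that is what makes the data “dark.”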

* * * * * * * * * * * *

“There ain’t no room for the hopeless sinner
Who would hurt all mankind, just to save his own, believe me now
Have pity on those whose chances grow thinner”


[1] “Reference Manual on Scientific Evidence v4.0” (Feb. 28, 2021).

[2] Steven Kendall, Joe S. Cecil, Jason A. Cantone, Meghan Dunn, and Aaron Wolf.

[3] Prof. Meng is the Whipple V. N. Jones Professor of Statistics at Harvard University. (“Seeking simplicity in statistics, complexity in wine, and everything else in fortune cookies.”)

[4] Timothy L. Lash, Matthew P. Fox, and Aliza K. Fink, Applying Quantitative Bias Analysis to Epidemiologic Data (2009).

[5] David J. Hand, Dark Data: Why What You Don’t Know Matters (2020).

[6] Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (2021).

Slide Rule 702

June 26th, 2021

Note: A “fatal error” caused by an old theme has disrupted the layout of my website. I am working on it.

The opposition to Daubert’s regime of gatekeeping by the lawsuit industry has been fierce. From the beginning, the resistance has found allies on the bench, who have made the application of Rule 702 to expert witnesses, in both civil and criminal cases, uneven at best. Back in 2015, Professor David Bernstein and Eric Lasker wrote an exposé about the unlawful disregard of the statutory language of Rule 702, and they called for and proposed an amendment to the rule.[1] At the time, I was skeptical of unleashing a change through the rules committee, given the uncertainty about what any amendment might ultimately look like.[2]

In the last several years, there have been some notable applications of Rule 702 in litigation involving sertraline, atorvastatin, sildenafil, and other medications, but aberrant decisions have continued to upend the rule of law in the area of expert witness gatekeeping. Last year, I noted that I had come to see the wisdom of Professor Bernstein’s proposal,[3] in the light of continued judicial dodging of Rule 702.[4] Numerous lawyers and legal organizations have chimed in to urge a revision to Rule 702.[5]

Earlier this week, the Committee on Rules of Practice & Procedure rolled out a proposed draft of an amended Rule 702.[6] The proposed new rule looks very much like the current rule:[7]

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent has demonstrated by a preponderance of the evidence that:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.
Despite what looks like minor linguistic changes, the Rules Committee’s note suggests otherwise. First, the amendment is intended to emphasize that the burden of showing the admissibility requirements rests on the proponent of the challenged expert witness testimony. Of course, the burden has always been with the proponent, but some courts have deployed various stratagems to shift it, with conclusory assessments that a challenge “goes to the weight not the admissibility,” thereby dodging the judicial responsibility for gatekeeping. The Committee now would make clear that many courts have erred by treating the “critical questions of the sufficiency of an expert’s basis, and the application of the expert’s methodology” as going to weight and not admissibility.[8]

The Committee appears, however, to be struggling to provide guidance on when challenges do raise “matters of weight rather than admissibility.” For instance, the Committee Note suggests that:

“nothing in the amendment requires the court to nitpick an expert’s opinion in order to reach a perfect expression of what the basis and methodology can support. The Rule 104(a) standard does not require perfection. On the other hand, it does not permit the expert to make extravagant claims that are unsupported by the expert’s basis and methodology.”[9]

Somehow, I fear that the mantra of “weight not admissibility” has been or will be replaced by refusals to nitpick an expert’s opinion. How many nits does it take to make a causal claim “extravagant”?

Perhaps I am the one nitpicking now. The Committee has recognized the essential weakness of gatekeeping as frequently practiced in federal courts by emphasizing that judicial gatekeeping is “essential” and required by the institutional incompetence of jurors to determine whether expert witnesses have reliably applied sound methodology to the facts of the case:

“a trial judge must exercise gatekeeping authority with respect to the opinion ultimately expressed by a testifying expert. A testifying expert’s opinion must stay within the bounds of what can be concluded by a reliable application of the expert’s basis and methodology. Judicial gatekeeping is essential because just as jurors may be unable to evaluate meaningfully the reliability of scientific and other methods underlying expert opinion, jurors may also be unable to assess the conclusions of an expert that go beyond what the expert’s basis and methodology may reliably support.”[10]

If the sentiment of the Rule Committee’s draft note carries through to the Committee Note that accompanies the amended rule, then perhaps some good will come of this effort.


[1] David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1 (2015).

[2] “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

[3] “Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[4] “Dodgy Data Duck Daubert Decisions” (April 2, 2020); “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020); “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702” (May 13, 2020); “Judicial Dodgers – Weight not Admissibility” (May 28, 2020); “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent” (June 2, 2020).

[5] See, e.g., Daniel Higginbotham, “The Proposed Amendment to Federal Rule of Evidence 702 – Will it Work?” IADC Products Liability Newsletter (March 2021); Cary Silverman, “Fact or Fiction: Ensuring the Integrity of Expert Testimony,” U.S. Chamber Instit. Leg. Reform (Feb. 2021); Thomas D. Schroeder, “Federal Courts, Practice & Procedure: Toward a More Apparent Approach to Considering the Admission of Expert Testimony,” 95 Notre Dame L. Rev. 2039, 2043 (2020); Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 To Correct Judicial Misunderstanding about Expert Evidence,” Wash. Leg. Foundation (May 2020), at 13-18 (noting numerous cases that fail to honor the spirit and language of Rule 702); Lawyers for Civil Justice, “Comment to the Advisory Comm. on Evidence Rules and its Rule 702 Subcommittee: A Note about the Note: Specific Rejection of Errant Case Law is Necessary for the Success of an Amendment Clarifying Rule 702’s Admissibility Requirements” 1 (Feb. 8, 2021) (arguing that “[t]he only unambiguous way for the Note to convey the intent of the amendment is to reject the specific offending caselaw by name.”).

[6] Committee on Rules of Practice & Procedure Agenda Book (June 22, 2021). See Cara Salvatore, “Court Rules Committee Moves to Stiffen Expert Standard,” Law360 (June 23, 2021).

[7] Id. at 836. The proposal has been the subject of submissions and debate for a while. See Jim Beck, “Civil Rules Committee Proposes to Toughen Rule 702,” Drug & Device Law (May 4, 2021).

[8] Committee on Rules of Practice & Procedure Agenda Book at 839 (June 22, 2021).

[9] Id.

[10] Id. at 838-39.

Reference Manual on Scientific Evidence v4.0

February 28th, 2021

The need for revisions to the third edition of the Reference Manual on Scientific Evidence (RMSE) has been apparent since its publication in 2011. A decade has passed, and the federal agencies involved in the third edition, the Federal Judicial Center (FJC) and the National Academies of Sciences, Engineering, and Medicine (NASEM), are assembling staff to prepare the long-needed revisions.

The first sign of life for this new edition came back on November 24, 2020, when the NASEM held a short, closed-door virtual meeting to discuss planning for a fourth edition.[1] The meeting was billed by the NASEM as “the first meeting of the Committee on Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence.” The Committee members heard from John S. Cooke (FJC Director), and Alan Tomkins and Reggie Sheehan, both of the National Science Foundation (NSF). The stated purpose of the meeting was to review the third edition of the RMSE and to “identify areas of science, technology, and medicine that may be candidates for new or updated chapters in a proposed new (fourth) edition of the manual.” The only public pronouncement from the first meeting was that the committee would sponsor a workshop on the topic of new chapters for the RMSE in early 2021.

The Committee’s second meeting took place a week later, again in closed session.[2] The stated purpose of the Committee’s second meeting was to review the third edition of the RMSE, and to discuss candidate areas for inclusion as new and updated chapters for a fourth edition.

Last week saw the Committee’s third meeting. The meeting spanned two days (Feb. 24 and 25, 2021), and was open to the public. It was sponsored by the NASEM and the FJC, along with the NSF, and was co-chaired by Thomas D. Albright, Professor and Conrad T. Prebys Chair at the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, who sits on the United States Court of Appeals for the Federal Circuit. Identified members of the committee include:

Steven M. Bellovin, professor in the Computer Science department at Columbia University;

Karen Kafadar, Departmental Chair and Commonwealth Professor of Statistics at the University of Virginia, and former president of the American Statistical Association;

Andrew Maynard, professor, and director of the Risk Innovation Lab at the School for the Future of Innovation in Society, at Arizona State University;

Venkatachalam Ramaswamy, Director of the Geophysical Fluid Dynamics Laboratory of the National Oceanic and Atmospheric Administration (NOAA) Office of Oceanic and Atmospheric Research (OAR), studying climate modeling and climate change;

Thomas Schroeder, Chief Judge for the U.S. District Court for the Middle District of North Carolina;

David S. Tatel, United States Court of Appeals for the District of Columbia Circuit; and

Steven R. Kendall, Staff Officer

The meeting comprised five panel presentations, made up of remarkably accomplished and talented speakers. Each panel’s presentations were followed by discussion among the panelists, and the committee members. Some panels answered questions submitted from the public audience. Judge O’Malley opened the meeting with introductory remarks about the purpose and scope of the RMSE, and of the inquiry into additional possible chapters.

  1. Challenges in Evaluating Scientific Evidence in Court

The first panel consisted entirely of judges, who held forth on their approaches to judicial gatekeeping of expert witnesses, and their approach to scientific and technical issues. Chief Judge Schroeder moderated the presentations of panelists:

Barbara Parker Hervey, Texas Court of Criminal Appeals;

Patti B. Saris, Chief Judge of the United States District Court for the District of Massachusetts,  member of President’s Council of Advisors on Science and Technology (PCAST);

Leonard P. Stark, U.S. District Court for the District of Delaware; and

Sarah S. Vance, Judge (former Chief Judge) of the U.S. District Court for the Eastern District of Louisiana, chair of the Judicial Panel on Multidistrict Litigation.

  2. Emerging Issues in the Climate and Environmental Sciences

Paul Hanle, of the Environmental Law Institute, moderated presenters:

Joellen L. Russell, the Thomas R. Brown Distinguished Chair of Integrative Science and Professor at the University of Arizona in the Department of Geosciences;

Veerabhadran Ramanathan, Edward A. Frieman Endowed Presidential Chair in Climate Sustainability at the Scripps Institution of Oceanography at the University of California, San Diego;

Benjamin D. Santer, atmospheric scientist at Lawrence Livermore National Laboratory; and

Donald J. Wuebbles, the Harry E. Preble Professor of Atmospheric Science at the University of Illinois.

  3. Emerging Issues in Computer Science and Information Technology

Josh Goldfoot, Principal Deputy Chief, Computer Crime & Intellectual Property Section, at U.S. Department of Justice, moderated panelists:

Jeremy J. Epstein, Deputy Division Director of Computer and Information Science and Engineering (CISE) and Computer and Network Systems (CNS) at the National Science Foundation;

Russ Housley, founder of Vigil Security, LLC;

Subbarao Kambhampati, professor of computer science at Arizona State University; and

Alice Xiang, Senior Research Scientist at Sony AI.

  4. Emerging Issues in the Biological Sciences

Panel four was moderated by Professor Ellen Wright Clayton, the Craig-Weaver Professor of Pediatrics, and Professor of Law and of Health Policy at Vanderbilt Law School, at Vanderbilt University. Her panelists were:

Dana Carroll, distinguished professor in the Department of Biochemistry at the University of Utah School of Medicine;

Yaniv Erlich, Chief Executive Officer of Eleven Therapeutics, Chief Science Officer of MyHeritage;

Steven E. Hyman, director of the Stanley Center for Psychiatric Research at Broad Institute of MIT and Harvard; and

Philip Sabes, Professor Emeritus in Physiology at the University of California, San Francisco (UCSF).

  5. Emerging Areas in Psychology, Data, and Statistical Sciences

Gary Marchant, Lincoln Professor of Emerging Technologies, Law and Ethics, at Arizona State University’s Sandra Day O’Connor College of Law, moderated panelists:

Xiao-Li Meng, the Whipple V. N. Jones Professor of Statistics, Harvard University, and the Founding Editor-in-Chief of Harvard Data Science Review;

Rebecca Doerge, Glen de Vries Dean of the Mellon College of Science at Carnegie Mellon University, member of the Dietrich College of Humanities and Social Sciences’ Department of Statistics and Data Science, and of the Mellon College of Science’s Department of Biological Sciences;

Daniel Kahneman, Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem; and

Goodwin Liu, Associate Justice of the California Supreme Court.

The Proceedings of this two-day meeting were recorded and will be published. The website materials do not make clear whether verbatim remarks will be included, but regardless, the proceedings should warrant careful reading.

Judge O’Malley, in her introductory remarks, emphasized that the RMSE must be a neutral, disinterested source of information for federal judges, an aspirational judgment from which there can be no dissent. More controversial will be Her Honor’s assessment that epidemiologic studies can “take forever,” and other judges’ suggestion that plaintiffs lack financial resources to put forward credible, reliable expert witnesses. Judge Vance corrected the course of the discussion by pointing out that MDL plaintiffs were not disadvantaged, but no one pointed out that plaintiffs’ counsel were among the wealthiest individuals in the United States, and that they have been known to sponsor epidemiologic and other studies that wind up as evidence in court.

Panel One was perhaps the most discomforting experience, as it involved revelations about how sausage is made in the gatekeeping process. The panel was remarkable for including a state court judge from Texas, Judge Barbara Parker Hervey, of the Texas Court of Criminal Appeals. Judge Hervey remarked that [in her experience] if we judges “can’t understand it, we won’t read it.” Her dictum raises interesting issues. No doubt, in some instances, the judicial failure of comprehension is the fault of the lawyers. What happens when the judges “can’t understand it”? Do they ask for further briefing? Or do they ask for a hearing with viva voce testimony from expert witnesses? The point was not followed up.

Leonard P. Stark’s insights were interesting in that his docket in the District of Delaware is flooded with patent and Hatch-Waxman Act litigation. Judge Stark’s extensive educational training is in politics and political science. The docket volume Judge Stark described, however, raised issues about how much attention he could give to any one case.

When the panel was asked how they dealt with scientific issues, Judge Saris discussed her presiding over In re Neurontin, which was a “big challenge for me to understand,” with no randomized trials or objective assessments by the litigants.[3] Judge Vance discussed her experience of presiding in a low-level benzene exposure case, in which plaintiff claimed that his acute myelogenous leukemia was caused by gasoline.[4]

Perhaps the key difference in approach to Rule 702 emerged when the judges were asked whether they read the underlying studies. Judge Saris did not answer directly, but stated she reads the reports. Judge Vance, on the other hand, noted that she reads the relied upon studies. In her gasoline-leukemia case, she read the relied-upon epidemiologic studies, which she described as a “hodge podge,” and which were misrepresented by the expert witnesses and counsel. She emphasized the distortions of the adversarial system and the need to moderate its excesses by validating what exactly the expert witnesses had relied upon.

This division in judicial approach was seen again when Professor Karen Kafadar asked how the judges dealt with peer review. Judge Saris seemed to suggest that the peer-reviewed published article was prima facie reliable. Others disagreed and noted that peer reviewed articles can have findings that are overstated, and wrong. One speaker noted that Jerome Kassirer had downplayed the significance of, and the validation provided by, peer review, in the RMSE (3rd ed 2011).

Curiously, there was no discussion of Rule 703, either in Judge O’Malley’s opening remarks on the RMSE, or in the first panel discussion. When someone from the audience submitted a question about the role of Rule 703 in the gatekeeping process, the moderator did not read it.

Panel Two. The climate change panel was a tour de force of the case for anthropogenic climate change. To some, the presentations may have seemed like a reprise of The Day After Tomorrow. Indeed, the science was presented so confidently, if not stridently, that one of the committee members asked whether there could be any reasonable disagreement. The panelists responded essentially by pointing out that there could be no good faith opposition. The panelists were much less convincing on the issue of attributability. None of the speakers addressed the appropriateness vel non of climate change litigation, when the federal and state governments encouraged, licensed, and regulated the exploitation and use of fossil fuel reserves.

Panel Four. Dr. Clayton’s panel was fascinating and likely to lead to new chapters. Professor Hyman presented on heritability, a subject that did not receive much attention in the RMSE third edition. With the advent of genetic claims of susceptibility and defenses of mutation-induced disease, courts will likely need some good advice on navigating the science. Dana Carroll presented on human genome editing (CRISPR). Philip Sabes presented on brain-computer interfaces, which have progressed well beyond the level of sci-fi thrillers, such as The Brain That Wouldn’t Die (“Jan in the Pan”).

In addition to the therapeutic applications, Sabes discussed some of the potential forensic uses, such as lie detectors, pain quantification, and the like. Yaniv Erlich, of MyHeritage, discussed advances in forensic genetic genealogy, which have made a dramatic entrance into the common imagination through the apprehension of Joseph James DeAngelo, the Golden State Killer. The technique of triangulating DNA matches from consumer DNA databases has other applications, of course, such as identifying lost heirs and resolving paternity issues.

Panel Five. Professor Marchant’s panel may well have identified some of the most salient needs for the next edition of the RMSE. Nobel Laureate Daniel Kahneman presented some of the highlights from his forthcoming book about “noise” in human judgment.[5] Kahneman’s expansion upon his previous thinking about the sources of error in human – and scientific – judgment is a much needed addition to the RMSE. Along the same lines, Professor Xiao-Li Meng presented on selection bias, and how it pervades scientific work and detracts from the strength of evidence, in the form of:

  1. cherry picking
  2. subgroup analyses
  3. unprincipled handling of outliers
  4. selection in methodologies (different tests)
  5. selection in due diligence (check only when you don’t like results)
  6. publication bias that results from publishing only impressive or statistically significant results
  7. selection in reporting (not reporting limitations or all analyses)
  8. selection in understanding

Professor Meng’s insights are sorely lacking in the third edition of the RMSE, and among judicial gatekeepers generally. All too often, undue selectivity in methodologies and in relied-upon data is treated by judges as an issue that “goes to the weight, not the admissibility” of expert witness opinion testimony. In actuality, the selection biases, and other systematic and cognitive biases, are as important as, if not more important than, random error assessments. Indeed, a close look at the RMSE third edition reveals a close embrace of the amorphous, anything-goes “weight of the evidence” approach in the epidemiology chapter. That chapter marginalizes meta-analyses and fails to mention systematic review techniques altogether. The chapter on clinical medicine, however, takes a divergent approach, emphasizing the hierarchy of evidence inherent in different study types, and the need for principled and systematic reviews of the available evidence.[6]
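For readers who want to see what a principled quantitative synthesis looks like, the toy calculation below pools three invented relative risks by inverse-variance weighting, the core of a fixed-effect meta-analysis. The numbers are hypothetical and illustrate the method only; they come from no actual study or RMSE chapter.

```python
# Minimal sketch (hypothetical numbers): an inverse-variance, fixed-effect
# meta-analysis of relative risks, the kind of transparent pooling that a
# systematic review requires.
import math

# (relative risk, 95% CI lower bound, 95% CI upper bound) for three invented studies
studies = [(1.20, 0.90, 1.60), (1.05, 0.85, 1.30), (1.50, 1.00, 2.25)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the CI width
    w = 1.0 / se**2                                   # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"Pooled RR: {math.exp(pooled_log):.2f}")
print(f"95% CI:    {math.exp(pooled_log - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log + 1.96 * pooled_se):.2f}")
```

The point is not the particular pooled estimate but the transparency of the exercise: each study’s weight is stated, and a reader can check what was included and what was left out, which is precisely what an amorphous “weight of the evidence” narrative does not permit.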

The Committee co-chairs and panel moderators did a wonderful job of identifying important new trends in genetics, data science, error assessment, and computer science, and they should be congratulated for their efforts. Judge O’Malley is certainly correct in saying that the RMSE must be a neutral source of information on statistical and scientific methodologies, and it needs to be revised and updated to address errors and omissions in the previous editions. The legal community should look for, and study, the published proceedings when they become available.

——————————————————————————————————

[1]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting” (Nov. 24, 2020).

[2]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting 2 (Virtual)” (Dec. 1, 2020).

[3]  In re Neurontin Marketing, Sales Practices & Prods. Liab. Litig., 612 F. Supp. 2d 116 (D. Mass. 2009) (Saris, J.).

[4]  Burst v. Shell Oil Co., 104 F.Supp.3d 773 (E.D.La. 2015) (Vance, J.), aff’d, ___ Fed. App’x ___, 2016 WL 2989261 (5th Cir. May 23, 2016), cert. denied, 137 S.Ct. 312 (2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[5]  Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (anticipated May 2021).

[6]  See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” Reference Manual on Scientific Evidence 723-24 (3rd ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).

Center for Truth in Science

February 2nd, 2021

The Center for Truth in Science

Well, now I have had the complete 2020 experience, trailing into 2021. COVID-19, a.k.a. the Trump flu, happened. The worst for me is now mostly over, and I can see a light at the end of the tunnel. Fortunately, it is not the hypoxemic end-of-life light at the end of the tunnel.

Kurt Gödel famously noted that “the meaning of the world is the separation of wish and fact.” The work of science in fields that touch on religion, politics, and other dogmas requires nothing less than separating wish from fact. Sadly, most people are cut off from the world of science by ignorance, lack of education, and social media that blur the distinction between wish and fact, and ultimately replace the latter with the former.

It should go without saying that truth is science and science is truth, but our current crises show that truth and science are both victims of the same forces that blur wish with fact. We might think that a center established for “truth in science” is as otiose as a center for justice in the law, but all the social forces at work to blur wish and fact make such a center an imperative for our time.

The Center for Truth in Science was established last year, and has already weighed in on important issues and scientific controversies that occupy American courtrooms and legislatures. Championing “fact-based” science, the Center has begun to tackle some of the difficult contemporary scientific issues that loom large on the judicial scene – talc, glyphosate, per- and polyfluoroalkyl substances, and others – as well as the methodological and conceptual problems that underlie these issues. (Of course, there is no other kind of science than fact-based, but there are many pseudo-, non-fact-based knockoffs out there.) The Center has already produced helpful papers on various topics, with many more important papers in progress. The Center’s website is a welcome resource for news and insights on science that matters for current policy decisions.

The Center is an important and exciting development, and its work promises to provide the tools to help us separate wish from fact. Nothing less than the meaning of the world is at stake.

A TrumPence for Your Thoughts

November 21st, 2020

Trigger Warning: Political Rant

“Let them call me rebel and welcome, I feel no concern from it; but I should suffer the misery of devils, were I to make a whore of my soul by swearing allegiance to one whose character is that of a sottish, stupid, stubborn, worthless, brutish man.”

Thomas Paine, “The Crisis, Number 1” (Dec. 23, 1776), in Ian Shapiro & Jane E. Calvert, eds., Selected Writings of Thomas Paine 53, 58 (2014).

♂, ♀, ✳, †, ∞

Person, woman, man, camera, TV

Back on October 20, 2020, televangelist Pat Robertson heard voices in his head, and interpreted them to be the voice of god, announcing the imminent victory of Donald Trump. How Robertson knows he was not hearing the devil, he does not say. Even gods get their facts and predictions wrong sometimes. We should always ask for the data and the analysis.

Trump’s “spiritual advisor,” mega-maga-church pastor and televangelist, Paula White, violated the ban on establishment of religion, and prayed for Trump’s victory.[1] Speaking in tongues, White made Trump seem articulate. White wandered from unconstitutional into blatantly criminal territory, however, when she sought the intervention of foreign powers in the election, by summoning angels from Africa and South America to help Trump win. Trump seemed not to take notice that these angels were undocumented, illegal aliens. In the end, the unlawful aliens proved ineffective. Our better angels prevailed over Ms. White’s immigrant angels. Now ICE will have to track these angels down and deport them back to their you-know-what countries of origin.

How did we get to this place? It is not that astute observers on the left and the right did not warn us.

Before Trump was elected in 2016, Justice Ruth Bader Ginsburg notoriously bashed Donald Trump, by calling him a “faker”:

“He has no consistency about him. He says whatever comes into his head at the moment. He really has an ego … How has he gotten away with not turning over his tax returns? The press seems to be very gentle with him on that.”[2]

Faker was a fitting epithet that captured Trump’s many pretensions. It is a word that has a broader meaning in the polyglot world of New York City, where both Justice Ginsburg and Donald Trump were born and grew up. The word has a similar range of connotations as trombenik, “a lazy person, ne’er-do-well, boastful loudmouth, bullshitter, bum.” Maybe we should modify trombenik to Trumpnik?

Justice Ginsburg’s public pronouncement was, of course, inappropriate, but accurate nonetheless. She did something, however, that Trump has never done in his public persona; she apologized:

“‘On reflection, my recent remarks in response to press inquiries were ill-advised and I regret making them’, Ginsburg said in a statement Thursday morning. ‘Judges should avoid commenting on a candidate for public office. In the future I will be more circumspect’.”[3]

Of course, Justice Ginsburg should have been more circumspect, but her disdain for Trump was not simply an aversion to his toxic politics and personality. Justice Ginsburg was a close friend of Justice Antonin Scalia, who was one of the most conservative justices on the Supreme Court bench. Ginsburg and Scalia could and did disagree vigorously and still share friendship and many common interests. Scalia was not a faker; Trump is.

Other conservative writers have had an equal or even a greater disdain for Trump. On this side of the Atlantic, principled conservatives rejected the moral and political chaos of Donald Trump. When Trump’s nomination as the Republican Party candidate for president seemed assured in June 2016, columnist George Will announced to the Federalist Society that he had changed his party affiliation from Republican to unaffiliated.[4]

On the other side of the Atlantic, conservative thinkers such as the late Sir Roger Scruton rolled their eyes at the prospect of Donald Trump’s masquerading as a conservative.[5] After Trump had the benefit of a few months to get his sea legs on the ship of state, Sir Roger noted that Trump was nothing more than a craven opportunist:

“Q. Does ‘Trumpism’ as an ideology exist, and if it does, is it conservative, or is it just opportunism?

A. It is opportunism. He probably does have conservative instincts, but let’s face it, he doesn’t have any thoughts that are longer than 140 characters, so how can he have a real philosophy?”[6]

Twitter did, at some point, double the number of characters permitted in a tweet, but Trump simply repeated himself more.

In the United States, we have had social conservatives, fiscal conservatives, classic liberal conservatives, and more recently, we have seen neo-cons, theo-cons, and Vichy cons. I suppose there have always been con-cons, but Trump has strongly raised the profile of this last subgroup. There can be little doubt that Donald John Trump has always been a con-con. Now we have Banana Republicans who have made a travesty of the rule of law. Four years in, we are all suffering from what Barack Obama termed “truth decay.”

Cancel culture has always been with us. Socrates, Jesus, and Julius Caesar were all canceled, with extreme prejudice. In the United States, Senator Joseph McCarthy developed cancel culture into a national pastime. In this century, the Woke Left has weaponized cancel culture into a serious social and intellectual problem. Now, Donald Trump wants to go one step further and cancel our republican form of democracy. Trump is attempting in plain sight to cancel a national election he lost.

Yes, I have wandered from my main mission on this blog to write about tort law and about how the law handles scientific and statistical issues. My desultory writings on this blog have largely focused on evidence in scientific controversies that find their way into the law. Our political structures are created and conditioned by our law, and our commitment to the rule of law, and the mistreatment of scientific issues by political actors is as pressing a concern, to me at least, as mistreatment of science by judges or lawyers. Trump has now made the post-modernists look like paragons of epistemic virtue. As exemplified in the political response to the pandemic, this political development has important implications for the public acceptance of science and evidence-based policies and positions in all walks of life.

Another blogger whose work on science and risk I respect is David Zaruk, who openly acknowledges that Donald Trump is an “ethically and intellectually flawed train wreck of a politician.”[7] Like Trump apologists James Lindsay and Ben Shapiro, however, Zaruk excuses the large turnout for Trump because Trump voters:

“are sick to death of being told by smug, arrogant, sanctimonious zealots how to think, how to feel and how to act. Nobody likes to be fixed and especially not by self-righteous, moralising mercenaries.”

But wait:  Isn’t this putative defense itself a smug, arrogant, sanctimonious, and zealous lecture that we should somehow be tolerant of Trump and his supporters? What about the sickness unto death over Trump’s endless propagation of lies and fraud? Trump has set an example that empowers his followers to do likewise. Zaruk’s reductionist analysis ignores important determinants of the vote. Many of the Trump voters were motivated by the most self-righteous of all moralizing mercenaries – leaders of Christian nationalism.[8] Zaruk’s acknowledgement of Trump’s deep ethical and intellectual flaws, while refraining from criticizing Trump voters, fits the pattern of the Trump-supporting mass social media that engages in the rhetoric of gas-lighting “what-about-ism.”[9]

Sure, no one likes to be told that they are bereft of moral, practical, and political judgment, but voting for Trump is complicit in advancing “a deeply ethically and intellectually flawed” opportunist. Labeling all of Trump’s opponents as “smug, arrogant, sanctimonious zealots” is really as empty as Trump’s list of achievements. Furthermore, Zaruk’s animadversions against the Woke Left miss the full picture of who is criticizing Trump and his “base.” The critique of Trump has come not just from so-called progressives but from deeply conservative writers such as Will and Scruton, and from pragmatic conservative political commentators such as George Conway, Amanda Carpenter, Sarah Longwell, and Charles Sykes. There is no moral equivalency between the possibility that the Wokies will influence a Biden administration and the certainty that truly deplorable people such as Bannon, Gingrich, Giuliani, Navarro, et alia, will both influence and control our nation’s policy agenda.

Of course, Trump voters may honestly believe that a Democratic administration will be on the wrong side of key issues, such as immigration, abortion, gun control, regulation, taxation, and the like. Certainly opponents of the Democratic positions on these issues could seek an honest broker to represent their views. Trump voters, however, cannot honestly endorse the character and morality of Mr. Trump, his cabinet, and his key Senate enablers. Trump has been the Vector-in-Chief of contagion and lies. As for Trump’s evangelical Christian supporters, they have an irreconcilable problem with our fundamental prohibition against state establishment of religion.

It has been a difficult year for Trump. He has had the full 2020 experience. He developed COVID, lost his job, and received an eviction notice. And now he finds himself with electile dysfunction. Trump has long been a hater and a denier. Without intending to libel his siblings, we can say that hating and denying are in his DNA. Trump hates and denies truth, evidence, valid inference, careful analysis and synthesis. He is the apotheosis of what happens when a corrupt, small-minded businessman surrounds himself with lackeys, yes-people, and emotionally damaged, financially dependent children.

Trump declared victory before the votes could be tallied, and he announced in advance, without evidence, that the election was rigged, but only if it turned out with the “appearance” of his losing. After the votes were tallied, and he had lost the popular vote by over 5,000,000 votes and the Electoral College by the same margin he had labeled a “landslide” for himself four years earlier, he claimed victory, contrary to the evidence, just as he said he would. Sore loser. Millions of voting Americans, to whom Zaruk would give a moral pass, do not see this as a problem.

In The Queen’s Gambit, a Netflix series, the stern, taciturn janitor of a girls’ orphanage, Mr. Shaibel, taught Beth Harmon, a seven year old, how to play chess. In one of their early games, Beth has a clearly lost position, and Mr. Shaibel instructs her, “now you resign.” Beth protests that she still has moves she can make before there is a checkmate, but Mr. Shaibel sternly repeats himself, “no, now you resign.” Beth breaks into tears and runs out of the room, but she learned the lesson and developed the resiliency, focus, and sportsmanship to play competitive chess at the highest level. If only Mr. Shaibel could have taught our current president this lesson, perhaps he would understand that the American electorate, both the self-styled progressives and conservatives who care about decency and morality, have united in saying to him, “now you resign.”

Dr. Mary Trump, the President’s niece, has written an unflattering psychological analysis of Trump. It does not take a Ph.D. in clinical psychology to see the problem. Donald Trump and his family do not have a dog. Before Donald Trump, James K. Polk (11th president) was the last president not to have a dog in the White House (March 4, 1845 – March 4, 1849). Polk died three months after leaving office.

I suppose there are some good people who do not like dogs, but liking and caring for dogs, and being open to their affection back, certainly marks people as capable of empathy, concern, and love. I could forgive the Obamas for never having had dogs before moving into the White House; they were a hard-working, ambitious two-career couple, living in a large city. They fixed their omission shortly after Obama’s election. The absence of a dog from the Trump White House speaks volumes about Trump. In a rally speech, he mocked: “Can you imagine me walking a dog?” Of course, he would not want to walk a dog down a ramp. How interesting that of all the criticisms lodged against Trump, the observation that he lacked canine companionship struck such a nerve that he addressed the matter defensively in one of his rallies. And how sad that he could not imagine his son Barron walking a dog. It was probably Barron’s only hope of having another living creature close to him show concern. Of course, Melania could walk the dog, which would allow her to do something useful and entertaining (besides ignoring the Christmas decorations), especially in her high-heel dog-walking shoes.

Saturday, November 7, 2020. O joy, o rapture! People danced in the streets of the Upper East Side. Cars honked horns. People hung out their windows and banged pots. Grown men and women shed tears of joy and laughter. A beautiful New York day, VD Day, not venereal disease day, but victory over Donald. Trump can begin to plan for the Trump Presidential Lie-brary and adult book store.

But wait. Trump legal advisor Harmeet Dhillon tells Lou Dobbs on the Fox News Channel: “We’re waiting for the United States Supreme Court – of which the president has nominated three justices – to step in and do something. And hopefully Amy Coney Barrett will come through.” Well, that was not a terribly subtle indication of the corruption in Trump’s soul and on his legal team. Americans now know all about loyalty oaths to the leader, and the abdication of principles. Fealty to Trump is the only principle; just read the Republican Party Platform.

Former White House chief strategist Stephen Bannon was not to be outdone in his demonstrations of fealty. Bannon called for Dr. Anthony Fauci and FBI Director Christopher Wray to be beheaded “as a warning to federal bureaucrats. You either get with the program or you are gone.” Bannon, of course, was not in a principal-agent relationship with Trump, as was Dhillon, but given that Trump has an opinion about everything on Twitter-Twatter, and that he was silent about Bannon’s call for decapitations, we have to take his silence as tacit agreement.

It does seem that many Republicans are clutching at straws to hang on. Fraud claims require pleading with particularity, and proof by clear and convincing evidence. Extraordinary claims require extraordinary evidence. First- and second-order hearsay will not suffice. Surely, Rudy the Wanker knows this; indeed, when he has appeared in court, he has readily admitted that he is not pursuing a fraud case.[10] In open court, Giuliani, with a straight face, told a federal judge that his client was denied the opportunity to ensure opacity at the polls.[11]

Under the eye of Newt Gingrich, former Republican Speaker of the House, poll workers should be jailed, and Attorney General William P. Barr should step into the fray. Never failing to disappoint, Bully Barr obliged. Still, the Republican attempt to win by litigation, a distinctly un-conservative approach, has been failing.[12]

How will we know when our national nightmare is over? There will not be the usual concession speech. Look for Trump’s announcement of his candidacy for the 2024 presidential election.

Donald J. Trump Foundation, Trump Airlines, Trump Magazine, Trump Steaks, Trump Vodka, Trump Mortgage, Trump: The Game, Trump University, GoTrump.com, Trump Marriage #1, Trump Marriage #2, Trump Taj Mahal, Trump Plaza Hotel and Casino, Plaza Hotel, Trump Castle Hotel and Casino, Trump Hotels and Casino Resorts, Trump Entertainment Resorts, Trumpnet – all failures – are now gone. Soon Trump himself will be gone as well.


Post-Script

A dimly lit room filled with coffins. Spider webs stretch across the room. Rats scurry across the floor. Slowly, the tops of the coffins are pushed open from within by the arms of skeletons. The occupants of the coffins, skeletons, slowly get up and start talking.

Skeleton one: COVID, COVID, COVID, COVID, COVID, COVID, that’s all everyone wants to talk about.

Skeleton two: It’s no big deal; we were going to die anyway. Well at some point.

Skeleton three: And besides, now we are immune. Ha, ha, ha!

Skeleton four: Hey, look at us; we’re rounding the corner.

All, singing while dancing in a circle conga line:

We’ll be coming around the corner when he’s gone (toot, toot)

We’ll be coming around the corner when he’s gone (toot, toot)

We’ll be coming around the corner, we’ll be coming around the corner

We’ll be coming around the corner when he’s gone (toot, toot).


[1]  Wyatte Grantham-Philips, “Pastor Paula White calls on angels from Africa and South America to bring Trump victory,” USA TODAY (Nov. 5, 2020).

[2]  John Kruzel, “Justice Ruth Bader Ginsburg has taken to bashing Donald Trump in recent days,” (July 12, 2016).

[3]  Jessica Taylor, “Ginsburg Apologizes For ‘Ill-Advised’ Trump Comments,” Nat’l Public Radio (July 14, 2016).

[4]  Maggie Haberman, “George Will Leaves the G.O.P. Over Donald Trump,” N.Y. Times (June 25, 2016).

[5]  Roger Scruton, “What Trump Doesn’t Get About Conservatism,” N.Y. Times (July 4, 2018).

[6]  Tom Szigeti, “Sir Roger Scruton on Trump: ‘He doesn’t have any thoughts that are longer than 140 characters’,” Hungary Today (June 8, 2017).

[7]  David Zaruk, “The Trump Effect: Stop Telling me What to Think!,” RiskMonger (Nov. 5, 2020).

[8]  See Katherine Stewart, The Power Worshippers: Inside the Dangerous Rise of Religious Nationalism (2020).

[9]  See Amanda Carpenter, Gaslighting America – Why We Love It When Trump Lies to Us (2018).

[10]  Lisa Lerer, “‘This Is Not a Fraud Case’: Keep an eye on what President Trump’s lawyers say about supposed voter fraud in court, where lying under oath is a crime,” (Nov. 18, 2020).

[11]  Gail Collins, “Barr the Bad or Rudy the Ridiculous?” N.Y. Times (Nov. 18, 2020).

[12]  Jim Rutenberg, Nick Corasaniti and Alan Feuer, “With No Evidence of Fraud, Trump Fails to Make Headway on Legal Cases,” N.Y. Times (Nov. 7, 2020); Aaron Blake, “It goes from bad to worse for the Trump legal team,” Wash. Post (Nov. 13, 2020); Alan Feuer, “Trump Loses String of Election Lawsuits, Leaving Few Vehicles to Fight His Defeat,” N.Y. Times (Nov. 13, 2020); Jon Swaine & Elise Viebeck, “Trump campaign jettisons major parts of its legal challenge against Pennsylvania’s election results,” Wash. Post (Nov. 15, 2020).

Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent

June 2nd, 2020

The Advisory Committee notes to the year 2000 amendment to Federal Rule of Evidence 702 included a comment:

“A review of the case law after Daubert shows that the rejection of expert testimony is the exception rather than the rule. Daubert did not work a ‘seachange over federal evidence law’, and ‘the trial court’s role as gatekeeper is not intended to serve as a replacement for the adversary system’.”[internal citation omitted]

In describing its review of the case law, perhaps the Committee was attempting to allay the anxiety of technophobic judges. But was the Committee also attempting to derive an “ought” from an “is”? Before the Supreme Court decided Daubert in 1993, virtually every admissibility challenge to expert witness opinion testimony failed. The trial courts were slow to adapt and to adopt the reframed admissibility standard. As the Joiner case illustrated, some Circuits were even slower to permit trial judges the discretion to assess the validity vel non of expert witnesses’ opinions.

The Committee’s observation about the “exceptional” nature of exclusions was thus unexceptional as a description of the case law before and shortly after Daubert was decided. And even if the Committee were describing a normative view, it is not at all clear how that view should translate into a ruling in a given case, without a very close analysis of the opinions at issue, under the Rule 702 criteria. In baseball, most hitters are thrown out at first base, but that fact does not help an umpire one whit in calling a specific runner “safe” or “out.”  Nonetheless, courts have repeatedly offered the observation about the exceptional nature of exclusion as both an explanation and a justification of their opinions to admit testimony.[1] The Advisory Committee note has thus mutated into a mandate to err on the side of admissibility, as though deliberately committing error was a good thing for any judge to do.[2] First rule: courts shall not err, not intentionally, recklessly, or negligently.

Close Calls and Resolving Doubts

Another mutant offspring of the “exception, not the rule” mantra is that “[a]ny doubts regarding the admissibility of an expert’s testimony should be resolved in favor of admissibility.”[3] Why not resolve the doubts and rule in accordance with the law? Or, if doubts remain, then charge them against the proponent who has the burden of showing admissibility? Unlike baseball, in which a tie goes to the runner, in expert witness law, a tie goes to the challenger, because the party defending against the motion has failed to show admissibility by a preponderance of the evidence. A better mantra: “exclusion when it is the Rule.”

Some courts re-imagine the Advisory Committee’s observation about exceptional exclusions as a recommendation to admit Rule 702 expert witness opinion testimony as the preferred outcome. Again, that interpretation reverses the burden of proof and makes a mockery of equal justice and scientific due process.

Yet another similar judicial mutation is the notion that courts should refuse Rule 702 motions when they are “close calls.”[4] Telling the litigants that the call was close might help assuage the loser and temper the litigation enthusiasms of the winner, but it does not answer the key question: Did the proponent carry the burden of showing admissibility? Residual doubts would seem to weigh against the proponent.

Not all is lost. In one case, decided by a trial court within the Ninth Circuit, the trial judge explicitly pointed to the proponent’s failure to identify his findings and methodology as part of the basis for exclusion, not admission, of the challenged witness’s opinion testimony.[5] Difficulty in resolving whether the Rule 702 predicates were satisfied worked against, not for, the proponent, whose burden it was to show those predicates.

In another case, Judge David G. Campbell, of the District of Arizona, who has participated in the Rules Committee’s deliberations, showed the way by clearly stating that the exclusion of opinion testimony was required when the Rule 702 conditions were not met:

“Plaintiffs have not shown by a preponderance of the evidence that [the expert witness’s] causation opinions are based on sufficient facts or data to which reliable principles and methods have been applied reliably… .”[6]

Exclusion followed because the absent showings were “conditions for admissibility,” and not “mere” credibility considerations.

Trust Me, I’m a Liberal

One of the reasons that the Daubert Court rejected incorporating the Frye standard into Rule 702 was its view that a rigid “general acceptance” standard “would be at odds with the ‘liberal thrust’ of the Federal Rules.”[7] Some courts have cited this “liberal thrust” as though it explained or justified a particular decision to admit expert witness opinion testimony.[8]

The word “liberal” does not appear in the Federal Rules of Evidence.  Instead, the Rules contain an explicit statement of how judges must construe and apply the evidentiary provisions:

“These rules shall be construed to secure fairness in administration, elimination of unjustifiable expense and delay, and promotion of growth and development of the law of evidence to the end that the truth may be ascertained and proceedings justly determined.”[9]

A “liberal” approach, construed as a “let it all in” approach, would be ill-designed to secure fairness, eliminate unjustifiable expense and time of trial, or lead to just and correct outcomes. The “liberal” approach of letting in opinion testimony and letting the jury guess at questions of scientific validity would be a most illiberal result. The truth will not be readily ascertained if expert witnesses are permitted to pass off hypotheses and ill-founded conclusions as scientific knowledge.

Avoiding the rigidity of the Frye standard, which was so rigid that it was virtually never applied, certainly seems like a worthwhile judicial goal. But how do courts go from Justice Blackmun’s “liberal thrust” to a libertine “anything goes”? And why does liberal not connote a seeking of the truth, free of superstitions? Can it be liberal to permit opinions that are based upon fallacious or flawed inferences, invalid studies, or cherry-picked data sets?

In reviewing the many judicial dodges that are used to avoid engaging in meaningful Rule 702 gatekeeping, I am mindful of Reporter Daniel Capra’s caveat that the ill-advised locutions used by judges do not necessarily mean that their decisions could not have been justified in carefully worded and reasoned opinions showing that Rule 702 and all its subparts were met. Of course, we could infer that the conditions for admissibility were met whenever an expert witness’s opinions were admitted, and ditch the whole process of having judges offer reasoned explanations. Due process, however, requires more. Judges need to specify why they denied Rule 702 challenges in terms of the statutory requirements for admissibility, so that other courts and the Bar can develop a principled jurisprudence of expert witness opinion testimony.


[1]  See, e.g., In re Scrap Metal Antitrust Litig., 527 F.3d 517, 530 (6th Cir. 2008) (“‘[R]ejection of expert testimony is the exception, rather than the rule,’ and we will generally permit testimony based on allegedly erroneous facts when there is some support for those facts in the record.”) (quoting Advisory Committee Note to 2000 Amendments to Rule 702); Citizens State Bank v. Leslie, No. 6-18-CV-00237-ADA, 2020 WL 1065723, at *4 (W.D. Tex. Mar. 5, 2020) (rejecting challenge to expert witness opinion “not based on sufficient facts”; excusing failure to assess factual basis with statement that “the rejection of expert testimony is the exception rather than the rule.”); In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., No. 2:18-CV-00136, 2019 WL 6894069, at *2 (S.D. Ohio Dec. 18, 2019) (committing naturalistic fallacy; “[A] review of the case law … shows that rejection of the expert testimony is the exception rather than the rule.”): Frankenmuth Mutual Insur. Co. v. Ohio Edison Co., No. 5:17CV2013, 2018 WL 9870044, at *2 (N.D. Ohio Oct. 9, 2018) (quoting Advisory Committee Note “exception”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006)(“Rejection of expert testimony, however, is still ‘the exception rather than the rule,’ Fed.R.Evid. 702 advisory committee’s note (2000 Amendments)[.] . . . Thus, in a close case the testimony should be allowed for the jury’s consideration.”) (internal quotation omitted).

[2]  Lombardo v. Saint Louis, No. 4:16-CV-01637-NCC, 2019 WL 414773, at *12 (E.D. Mo. Feb. 1, 2019) (“[T]he Court will err on the side of admissibility.”).

[3]  Mason v. CVS Health, 384 F. Supp. 3d 882, 891 (S.D. Ohio 2019).

[4]  Frankenmuth Mutual Insur. Co. v. Ohio Edison Co., No. 5:17CV2013, 2018 WL 9870044, at *2 (N.D. Ohio Oct. 9, 2018) (concluding “[a]lthough it is a very close call, the Court declines to exclude Churchwell’s expert opinions under Rule 702.”); In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., No. 2:18-CV-00136, 2019 WL 6894069, at *2 (S.D. Ohio Dec. 18, 2019) (suggesting doubts should be resolved in favor of admissibility).

[5]  Rovid v. Graco Children’s Prod. Inc., No. 17-CV-01506-PJH, 2018 WL 5906075, at *13 (N.D. Cal. Nov. 9, 2018), app. dism’d, No. 19-15033, 2019 WL 1522786 (9th Cir. Mar. 7, 2019).

[6]  Alsadi v. Intel Corp., No. CV-16-03738-PHX-DGC, 2019 WL 4849482, at *4 -*5 (D. Ariz. Sept. 30, 2019).

[7]  Daubert v. Merrell Dow Pharms., Inc. 509 U.S. 579, 588 (1993).

[8]  In re ResCap Liquidating Trust Litig., No. 13-CV-3451 (SRN/HB), 2020 WL 209790, at *3 (D. Minn. Jan. 14, 2020) (“Courts generally support an attempt to liberalize the rules governing the admission of expert testimony, and favor admissibility over exclusion.”)(internal quotation omitted); Collie v. Wal-Mart Stores East, L.P., No. 1:16-CV-227, 2017 WL 2264351, at *1 (M.D. Pa. May 24, 2017) (“Rule 702 embraces a ‘liberal policy of admissibility’, under which it is preferable to admit any evidence that may assist the factfinder[.]”); In re Zyprexa Prod. Liab. Litig., 489 F. Supp. 2d 230, 282 (E.D.N.Y. 2007); Billone v. Sulzer Orthopedics, Inc., No. 99-CV-6132, 2005 WL 2044554, at *3 (W.D.N.Y. Aug. 25, 2005) (“[T]he Supreme Court has emphasized the ‘liberal thrust’ of Rule 702, favoring the admissibility of expert testimony.”).

[9]  Federal Rule of Evidence Rule 102 (“Purpose and Construction”) (emphasis added).

Dodgy Data Duck Daubert Decisions

March 11th, 2020

Judges say the darndest things, especially when it comes to their gatekeeping responsibilities under Federal Rules of Evidence 702 and 703. One of the darndest things judges say is that they do not have to assess the quality of the data underlying an expert witness’s opinion.

Even when acknowledging their obligation to “assess the reasoning and methodology underlying the expert’s opinion, and determine whether it is both scientifically valid and applicable to a particular set of facts,”[1] judges have excused themselves from having to look at the trustworthiness of the underlying data for assessing the admissibility of an expert witness’s opinion.

In McCall v. Skyland Grain LLC, the defendant challenged an expert witness’s reliance upon oral reports of clients. The witness, Mr. Bradley Walker, asserted that he regularly relied upon such reports in contexts similar to the allegations that the defendant had misapplied herbicide to plaintiffs’ crops. The trial court ruled that the defendant could cross-examine the declarant, who was available at trial, and concluded that the “reliability of that underlying data can be challenged in that manner and goes to the weight to be afforded Mr. Walker’s conclusions, not their admissibility.”[2] Remarkably, the district court never evaluated the reasonableness of Mr. Walker’s reliance upon client reports in this or any context.

In another federal district court case, Rodgers v. Beechcraft Corporation, the trial judge explicitly acknowledged the responsibility to assess whether the expert witness’s opinion was based upon “sufficient facts and data,” but disclaimed any obligation to assess the quality of the underlying data.[3] The trial court in Rodgers cited a Tenth Circuit case from 2005,[4] which in turn cited the Supreme Court’s 1993 decision in Daubert, for the proposition that the admissibility review of an expert witness’s opinion was limited to a quantitative sufficiency analysis, and precluded a qualitative analysis of the underlying data’s reliability. Quoting from another district court criminal case, the court in Rodgers announced that “the Court does not examine whether the facts obtained by the witness are themselves reliable – whether the facts used are qualitatively reliable is a question of the weight to be given the opinion by the factfinder, not the admissibility of the opinion.”[5]

In a 2016 decision, United States v. DishNetwork LLC, the court explicitly disclaimed that it was required to “evaluate the quality of the underlying data or the quality of the expert’s conclusions.”[6] This district court pointed to a Seventh Circuit decision, which maintained that  “[t]he soundness of the factual underpinnings of the expert’s analysis and the correctness of the expert’s conclusions based on that analysis are factual matters to be determined by the trier of fact, or, where appropriate, on summary judgment.”[7] The Seventh Circuit’s decision, however, issued in June 2000, several months before the effective date of the amendments to Federal Rule of Evidence 702 (December 2000).

In 2012, a magistrate judge issued an opinion along the same lines, in Bixby v. KBR, Inc.[8] After acknowledging what must be done in ruling on a challenge to an expert witness, the judge took joy in what could be overlooked. If the facts or data upon which the expert witness has relied are “minimally sufficient,” then the gatekeeper may conclude that “the nature or quality of the underlying data bear upon the weight to which the opinion is entitled or to the credibility of the expert’s opinion, and do not bear upon the question of admissibility.”[9]

There need not be any common law mysticism to the governing standard. The relevant law is, of course, a statute, which appears to be forgotten in many of the failed gatekeeping decisions:

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

It would seem that you could not produce testimony that is the product of reliable principles and methods by starting with unreliable underlying facts and data. Certainly, having a reliable method would require selecting reliable facts and data from which to start. What good would the reliable application of reliable principles to crummy data do?

The Advisory Committee Note to Rule 702 hints at an answer to the problem:

“There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ‘reasonable reliance’ requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information—whether admissible information or not—is governed by the requirements of Rule 702.”

The answer is only partially satisfactory. First, if the underlying data are independently admissible, then there may indeed be no gatekeeping of an expert witness’s reliance upon such data. Rule 703 imposes a reasonableness test for reliance upon inadmissible underlying facts and data, but appears to give otherwise admissible facts and data a pass. Second, the above judicial decisions do not mention any Rule 703 challenge to the expert witnesses’ reliance. If none was made, then there is a clear lesson for counsel. When framing a challenge to the admissibility of an expert witness’s opinion, show that the witness has unreasonably relied upon facts and data, from whatever source, in violation of Rule 703. Then show that without the unreasonably relied upon facts and data, the witness cannot show that his or her opinion satisfies Rule 702(a)-(d).


[1]  See, e.g., McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order (D. Colo. June 22, 2010) (Brimmer, J.) (citing Dodge v. Cotter Corp., 328 F.3d 1212, 1221 (10th Cir. 2003), citing in turn Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993)).

[2]  McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order at p.9 n.6 (D. Colo. June 22, 2010) (Brimmer, J.).

[3]  Rodgers v. Beechcraft Corp., Case No. 15-CV-129-CVE-PJC, Report & Recommendation at p.6 (N.D. Okla. Nov. 29, 2016).

[4]  Id., citing United States v. Lauder, 409 F.3d 1254, 1264 (10th Cir. 2005) (“By its terms, the Daubert opinion applies only to the qualifications of an expert and the methodology or reasoning used to render an expert opinion” and “generally does not, however, regulate the underlying facts or data that an expert relies on when forming her opinion.”), citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993).

[5]  Id., citing and quoting United States v. Crabbe, 556 F. Supp. 2d 1217, 1223 (D. Colo. 2008) (emphasis in original). In Crabbe, the district judge mostly excluded the challenged expert witnesses, thus rendering the court’s verbiage on the quality of data obiter dicta. The Rodgers pronouncements about the nature of gatekeeping proved harmless error when the court dismissed the case on other grounds. Rodgers v. Beechcraft Corp., 248 F. Supp. 3d 1158 (N.D. Okla. 2017) (granting summary judgment).

[6]  United States v. DishNetwork LLC, No. 09-3073, Slip op. at 4-5 (C.D. Ill. Jan. 13, 2016) (Myerscough, J.)

[7]  Smith v. Ford Motor Co., 215 F.3d 713, 718 (7th Cir. 2000).

[8]  Bixby v. KBR, Inc., Case 3:09-cv-00632-PK, Slip op. at 6-7 (D. Ore. Aug. 29, 2012) (Papak, M.J.)

[9]  Id. (citing Hangarter v. Provident Life & Accident Ins. Co., 373 F.3d 998, 1017 (9th Cir. 2004), quoting Children’s Broad Corp. v. Walt Disney Co., 357 F.3d 860, 865 (8th Cir. 2004) (“The factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”).

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an on-line popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must commit to perform the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and English communication skills to meet the needs of this task.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter, who often covers science issues. In his Undark piece, Smith covered some of the oft-told history of the Daubert case, a history that has been told better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt,” and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article, however, will you find the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases, however, eventually become obvious. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and how, in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a science journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out the base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he describes some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations that Done committed perjury about his credentials. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger as saying that claims from both sides in the Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce is “not convinced the drug should be back on the market,” but really, why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and that was well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law, for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggested necessity of relying upon the litigation system as an adjunct to regulatory approval is at best misplaced and tenuous.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open-minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert did not see the case as having ruined his life. Indeed, Jason seemed to approve of the basic principle of the Daubert case, and of the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”


[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2]  Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it represents as the scientific method of hypothesis testing, however, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to merely not reasonably compatible with the study’s data. In observational studies, the statistical analysis must acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

4.10 Statistical Significance

“The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
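For readers who want to see the distinction concretely, the short simulation below is my own illustration, written in Python, with no counterpart in the Bench Book; the true mean, standard deviation, and sample size are all invented. It shows that the “95%” describes the long-run behavior of the interval-generating procedure over repeated samples, which is a statement about precision, not a statement about the probability that any particular interval, or any party’s claim, is correct.

# A minimal simulation, written for this post and not drawn from the Bench
# Book, illustrating what the "95%" in a 95% confidence interval means.
import math
import random
import statistics

random.seed(1)

TRUE_MEAN = 10.0     # the unknown parameter, fixed by assumption here
SIGMA = 5.0          # assumed (known) population standard deviation
N = 30               # sample size for each hypothetical study
STUDIES = 10_000     # number of hypothetical replications
Z = 1.96             # normal critical value for a 95% interval

covered = 0
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    estimate = statistics.mean(sample)
    half_width = Z * SIGMA / math.sqrt(N)
    if estimate - half_width <= TRUE_MEAN <= estimate + half_width:
        covered += 1

# Roughly 95% of the intervals cover the true mean: a statement about the
# long-run precision of the estimating procedure, not about the probability
# that any one interval (or any one litigant's claim) is correct.
print(f"Coverage over {STUDIES:,} replications: {covered / STUDIES:.3f}")

Any single interval either contains the true value or it does not; the 95% attaches to the method, which is precisely why equating a confidence coefficient with a burden-of-proof probability misstates what the statistic conveys.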

The Bench Book runs into problems in interpreting legal decisions, which would seem softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision to Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected or held to be indeterminate by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism that science is just placing subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to point the judicial reader toward more in-depth help on the statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of this Bench Book was funded by the State Justice Institute (SJI), which is a creation of federal legislation enacted with the laudatory goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. A majority of the directors (six) are state court judges; the remainder comprises one state court administrator and four members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, the American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit 501(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges could go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual on Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies, the Bench Book’s authors are lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite the Reference Manual or the Manual for Complex Litigation at any point in its discussion of the federal law of expert witnesses or of scientific or statistical method. Perhaps taxpayers would have been spared substantial expense if state judges had simply been encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).

Daubert Retrospective – Statistical Significance

January 5th, 2019

The holiday break was an opportunity and an excuse to revisit the briefs filed in the Supreme Court by parties and amici, in the Daubert case. The 22 amicus briefs in particular provided a wonderful basis upon which to reflect how far we have come, and also how far we have to go, to achieve real evidence-based fact finding in technical and scientific litigation. Twenty-five years ago, Rules 702 and 703 vied for control over errant and improvident expert witness testimony. With Daubert decided, Rule 702 emerged as the winner. Sadly, most courts seem to ignore or forget about Rule 703, perhaps because of its awkward wording. Rule 702, however, received the judicial imprimatur to support the policing and gatekeeping of dysepistemic claims in the federal courts.

As noted last week,1 the petitioners (plaintiffs) in Daubert advanced several lines of fallacious and specious argument, some of which were lost in the shuffle and page limitations of the Supreme Court briefings. The plaintiffs’ transposition fallacy received barely a mention, although it did bring forth at least a footnote in an important and overlooked amicus brief filed by the American Medical Association (AMA), the American College of Physicians, and over a dozen other medical specialty organizations,2 all of which emphasized both the importance of statistical significance in interpreting epidemiologic studies and the fallacy of interpreting 95% confidence intervals as providing a measure of certainty about the estimated association as a parameter. The language of these associations’ amicus brief is noteworthy and still relevant to today’s controversies.

The AMA’s amicus brief, like the brief filed by the National Academy of Sciences and the American Association for the Advancement of Science, strongly endorsed a gatekeeping role for trial courts to exclude testimony not based upon rigorous scientific analysis:

“The touchstone of Rule 702 is scientific knowledge. Under this Rule, expert scientific testimony must adhere to the recognized standards of good scientific methodology including rigorous analysis, accurate and statistically significant measurement, and reproducibility.”3

Having incorporated the term “scientific knowledge,” Rule 702 could not permit anything less in expert witness testimony, lest it pollute federal courtrooms across the land.

Elsewhere, the AMA elaborated upon its reference to “statistically significant measurement”:

“Medical researchers acquire scientific knowledge through laboratory investigation, studies of animal models, human trials, and epidemiological studies. Such empirical investigations frequently demonstrate some correlation between the intervention studied and the hypothesized result. However, the demonstration of a correlation does not prove the hypothesized result and does not constitute scientific knowledge. In order to determine whether the observed correlation is indicative of a causal relationship, scientists necessarily rely on the concept of “statistical significance.” The requirement of statistical reliability, which tends to prove that the relationship is not merely the product of chance, is a fundamental and indispensable component of valid scientific methodology.”4

And then again, the AMA spelled out its position, in case the Court missed its other references to the importance of statistical significance:

“Medical studies, whether clinical trials or epidemiologic studies, frequently demonstrate some correlation between the action studied … . To determine whether the observed correlation is not due to chance, medical scientists rely on the concept of ‘statistical significance’. A ‘statistically significant’ correlation is generally considered to be one in which statistical analysis suggests that the observed relationship is not the result of chance. A statistically significant correlation does not ‘prove’ causation, but in the absence of such a correlation, scientific causation clearly is not proven.9”5

In its footnote 9, in the above quoted section of the brief, the AMA called out the plaintiffs’ transposition fallacy, without specifically citing to plaintiffs’ briefs:

“It is misleading to compare the 95% confidence level used in empirical research to the 51% level inherent in the preponderance of the evidence standard.”6

Actually, the plaintiffs’ ruse was much worse than misleading. The plaintiffs did not compare the two probabilities; they equated them. Some might call this ruse an outright fraud on the court. In any event, the AMA amicus brief remains an available, citable source for opposing this fraud and the casual dismissal of the importance of statistical significance.
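The logical gap can be made concrete with a back-of-the-envelope Bayesian calculation. The numbers below are hypothetical, chosen only to illustrate that the probability of obtaining a “significant” result when there is no true effect (the familiar 5% significance level) is not the same quantity as the probability that a claimed effect is real once a “significant” result is in hand; nothing in this sketch comes from the Daubert briefs or any actual study.

# Hypothetical inputs, chosen only to illustrate the transposition fallacy;
# nothing here comes from the Daubert briefs or any actual study.
prior_true = 0.10   # assumed share of tested hypotheses that are true
power = 0.80        # assumed P(significant result | true effect)
alpha = 0.05        # P(significant result | no effect) -- the familiar 5%

# Bayes' theorem: probability the effect is real, given a significant result.
p_significant = prior_true * power + (1 - prior_true) * alpha
posterior_true = (prior_true * power) / p_significant

print(f"P(effect is real | significant result) = {posterior_true:.2f}")
# With these assumptions the answer is about 0.64 -- nowhere near the 0.95
# one would get by "transposing" the 95% confidence coefficient into a
# probability that the claimed association is true.

Different priors and power assumptions yield different answers, which is the point: the posterior probability that the law actually cares about cannot simply be read off the significance level or the confidence coefficient.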

One other amicus brief touched on the plaintiffs’ statistical shenanigans. The Product Liability Advisory Council, National Association of Manufacturers, Business Roundtable, and Chemical Manufacturers Association jointly filed an amicus brief to challenge some of the excesses of the plaintiffs’ submissions.7  Plaintiffs’ expert witness, Shanna Swan, had calculated type II error rates and post-hoc power for some selected epidemiologic studies relied upon by the defense. Swan’s complaint had been that some studies had only a 20% probability (power) to detect a statistically significant doubling of limb reduction risk, with significance at p < 5%.8

The PLAC Brief pointed out that power calculations must assume an alternative hypothesis, and that the doubling-of-risk hypothesis had no basis in the evidentiary record. Although the PLAC complaint was correct, it missed the plaintiffs’ point that the defense had set exceeding a risk ratio of 2.0 as an important benchmark for attributing specific causation. Swan’s calculation of post-hoc power would have yielded an even lower probability for detecting risk ratios of 1.2 or so. More to the point, PLAC noted that other studies had much greater power, and that, collectively, the available studies would have had much greater power to have at least one study achieve statistical significance without dodgy re-analyses.
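How heavily power depends on the assumed alternative is easy to demonstrate. The sketch below uses a crude normal approximation on the log relative risk, with an invented cohort of 1,000 exposed and 1,000 unexposed subjects and a 1% baseline risk; it is not a reconstruction of Swan’s calculations, only an illustration of why the same study can have respectable power against a risk ratio of 2.0 and almost none against a risk ratio of 1.2.

# A rough, hypothetical power calculation (normal approximation on the log
# relative risk). Sample sizes and baseline risk are invented; this is not
# Swan's analysis, only a demonstration that power depends on the assumed
# alternative hypothesis.
from math import erf, log, sqrt

def normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approximate_power(rr: float, baseline_risk: float, n_per_group: int,
                      z_crit: float = 1.96) -> float:
    """Approximate power of a two-sided test of RR = 1 at the 5% level."""
    p0, p1 = baseline_risk, baseline_risk * rr
    se_log_rr = sqrt((1 - p0) / (n_per_group * p0)
                     + (1 - p1) / (n_per_group * p1))
    return normal_cdf(abs(log(rr)) / se_log_rr - z_crit)

# Invented cohort: 1,000 exposed and 1,000 unexposed, 1% baseline risk.
for rr in (1.2, 1.5, 2.0):
    power = approximate_power(rr, baseline_risk=0.01, n_per_group=1_000)
    print(f"Assumed risk ratio {rr}: approximate power {power:.2f}")

With these invented inputs, power runs from well under 10% against a risk ratio of 1.2 to roughly 45% against a risk ratio of 2.0, and pooling several such studies raises the chance that at least one would reach statistical significance, which was PLAC’s essential point about the body of Bendectin studies taken as a whole.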


1 “The Advocates’ Errors in Daubert” (Dec. 28, 2018).

2 American Academy of Allergy and Immunology, American Academy of Dermatology, American Academy of Family Physicians, American Academy of Neurology, American Academy of Orthopaedic Surgeons, American Academy of Pain Medicine, American Association of Neurological Surgeons, American College of Obstetricians and Gynecologists, American College of Pain Medicine, American College of Physicians, American College of Radiology, American Society of Anesthesiologists, American Society of Plastic and Reconstructive Surgeons, American Urological Association, and College of American Pathologists.

3 Brief of the American Medical Association, et al., as Amici Curiae, in Support of Respondent, in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 13006285, at *27 (U.S., Jan. 19, 1993)[AMA Brief].

4 AMA Brief at *4-*5 (emphasis added).

5 AMA Brief at *14-*15 (emphasis added).

6 AMA Brief at *15 & n.9.

7 Brief of the Product Liability Advisory Council, Inc., National Association of Manufacturers, Business Roundtable, and Chemical Manufacturers Association, as Amici Curiae in Support of Respondent, in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court no. 92-102, 1993 WL 13006288 (U.S., Jan. 19, 1993) [PLAC Brief].

8 PLAC Brief at *21.