TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

A Proclamation from the Task Force on Statistical Significance

June 21st, 2021

The American Statistical Association (ASA) has finally spoken up about statistical significance testing.[1] Sort of.

In February of this year, I wrote about the simmering controversy over statistical significance at the ASA.[2] Back in 2016, the ASA issued its guidance paper on p-values and statistical significance, which sought to correct misinterpretations and misrepresentations of “statistical significance.”[3] Lawsuit industry lawyers seized upon the ASA statement to proclaim a new freedom from having to exclude random error.[4] To achieve their ends, however, the plaintiffs’ bar had to distort the ASA guidance in statistically significant ways.

To add to the confusion, in 2019, the ASA Executive Director published an editorial that called for an end to statistical significance testing.[5] Because the editorial lacked disclaimers about whether it represented official ASA positions, scientists, statisticians, and lawyers on all sides were fooled into thinking the ASA had gone whole hog.[6] Then-ASA President Karen Kafadar stepped into the breach to explain that the Executive Director was not speaking for the ASA.[7]

In November 2019, members of the ASA board of directors (BOD) approved a motion to create a “Task Force on Statistical Significance and Replicability.”[8] Its charge was

“to develop thoughtful principles and practices that the ASA can endorse and share with scientists and journal editors. The task force will be appointed by the ASA President with advice and participation from the ASA BOD. The task force will report to the ASA BOD by November 2020.”

The members of the Task Force identified in the motion were:

Linda Young (Nat’l Agricultural Statistics Service & Univ. of Florida; Co-Chair)

Xuming He (Univ. Michigan; Co-Chair)

Yoav Benjamini (Tel Aviv Univ.)

Dick De Veaux (Williams College; ASA Vice President)

Bradley Efron (Stanford Univ.)

Scott Evans (George Washington Univ.; ASA Publications Representative)

Mark Glickman (Harvard Univ.; ASA Section Representative)

Barry Graubard (Nat’l Cancer Instit.)

Xiao-Li Meng (Harvard Univ.)

Vijay Nair (Wells Fargo & Univ. Michigan)

Nancy Reid (Univ. Toronto)

Stephen Stigler (Univ. Chicago)

Stephen Vardeman (Iowa State Univ.)

Chris Wikle (Univ. Missouri)

Tommy Wright (U.S. Census Bureau)

Despite the inclusion of highly accomplished and distinguished statisticians on the Task Force, there were isolated demurrers. Geoff Cumming, for one, clucked:

“Why won’t statistical significance simply whither and die, taking p < .05 and maybe even p-values with it? The ASA needs a Task Force on Statistical Inference and Open Science, not one that has its eye firmly in the rear view mirror, gazing back at .05 and significance and other such relics.”[9]

Despite the clucking, the Task Force arrived at its recommendations, but curiously, its report did not find a home in an ASA publication. Instead, “The ASA President’s Task Force Statement on Statistical Significance and Replicability” has now appeared as an “in press” publication at The Annals of Applied Statistics, where Karen Kafadar is the editor-in-chief.[10] The report is accompanied by an editorial by Kafadar.[11]

The Task Force advanced five basic propositions, which may have been obscured by some of the recent glosses on the ASA 2016 p-value statement:

  1. “Capturing the uncertainty associated with statistical summaries is critical.”
  2. “Dealing with replicability and uncertainty lies at the heart of statistical science. Study results are replicable if they can be verified in further studies with new data.”
  3. “The theoretical basis of statistical science offers several general strategies for dealing with uncertainty.”
  4. “Thresholds are helpful when actions are required.”
  5. “P-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data.”

All of this seems obvious and anodyne, but I suspect it will not silence the clucking.


[1] Deborah Mayo, “Alas! The ASA President’s Task Force Statement on Statistical Significance and Replicability,” Error Statistics (June 20, 2021).

[2] “Falsehood Flies – The ASA 2016 Statement on Statistical Significance” (Feb. 26, 2021).

[3] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016); see “The American Statistical Association’s Statement on and of Significance” (March 17, 2016).

[4] “The American Statistical Association Statement on Significance Testing Goes to Court – Part I” (Nov. 13, 2018); “The American Statistical Association Statement on Significance Testing Goes to Court – Part 2” (Mar. 7, 2019).

[5] “Has the American Statistical Association Gone Post-Modern?” (Mar. 24, 2019); “American Statistical Association – Consensus versus Personal Opinion” (Dec. 13, 2019). See also Deborah G. Mayo, “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean,” Error Statistics Philosophy (June 17, 2019); B. Haig, “The ASA’s 2019 update on P-values and significance,” Error Statistics Philosophy (July 12, 2019); Brian Tarran, “THE S WORD … and what to do about it,” Significance (Aug. 2019); Donald Macnaughton, “Who Said What,” Significance 47 (Oct. 2019).

[6] Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[7] Karen Kafadar, “The Year in Review … And More to Come,” AmStat News 3 (Dec. 2019) (emphasis added); see Kafadar, “Statistics & Unintended Consequences,” AmStat News 3, 4 (June 2019).

[8] Karen Kafadar, “Task Force on Statistical Significance and Replicability,” ASA Amstat Blog (Feb. 1, 2020).

[9] See, e.g., Geoff Cumming, “The ASA and p Values: Here We Go Again,” The New Statistics (Mar. 13, 2020).

[10] Yoav Benjamini, Richard D. De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics (2021) (in press), available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51526?confirm=79a17040.

[11] Karen Kafadar, “Editorial: Statistical Significance, P-Values, and Replicability,” 15 Annals of Applied Statistics (2021) (in press), available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51525?confirm=3079934e.

Judge Jack B. Weinstein – A Remembrance

June 17th, 2021

There is one less force of nature in the universe. Judge Jack Bertrand Weinstein died earlier this week, about two months shy of a century.[1] His passing has been noticed by the media, lawyers, and legal scholars.[2] In its obituary, the New York Times noted that Weinstein was known for his “bold jurisprudence and his outsize personality,” and that he was “revered, feared, and disparaged.” The obituary quoted Professor Peter H. Schuck, who observed that Weinstein was “something of a benevolent despot.”

As an advocate, I found Judge Weinstein to be anything but fearsome. His jurisprudence was often driven by intellectual humility rather than boldness or despotism. One area in which Judge Weinstein was diffident and restrained was in his exercise of gatekeeping of expert witness opinion. He and his friend, the late Professor Margaret Berger, were opponents of giving trial judges discretion to exclude expert witness opinions on grounds of validity and reliability. Their antagonism to gatekeeping was, no doubt, partly due to their sympathies for injured plaintiffs and their realization that plaintiffs’ expert witnesses often come up with dodgy scientific opinions to advance plaintiffs’ claims. In part, however, Judge Weinstein’s antagonism was due to his skepticism about judicial competence and his own intellectual humility.

Although epistemically humble, Judge Weinstein was not incurious. His interest in scientific issues occasionally got him into trouble, as when he was beguiled by Dr. Irving Selikoff and colleagues, who misled him on aspects of the occupational medicine of asbestos exposure. In 1990, Judge Weinstein issued a curious mea culpa. Because of a trial in progress, Judge Weinstein, along with a state judge, Justice Helen Freedman, attended an ex parte private luncheon meeting with Dr. Selikoff. Here is how Judge Weinstein described the event:

“But what I did may have been even worse [than Judge Kelly’s conduct that led to his disqualification]. A state judge and I were attempting to settle large numbers of asbestos cases. We had a private meeting with Dr. Irwin [sic] J. Selikoff at his hospital office to discuss the nature of his research. He had never testified and would never testify. Nevertheless, I now think that it was a mistake not to have informed all counsel in advance and, perhaps, to have had a court reporter present and to have put that meeting on the record.”[3]

Judge Weinstein’s point about Selikoff’s having never testified was demonstrably false, but I impute no scienter for false statements to the judge. The misrepresentation almost certainly originated with Selikoff. Dr. Selikoff had testified frequently up to the point at which he and plaintiffs’ counsel realized that his shaky credentials and his pronouncements on “state of the art,” were hurtful to the plaintiffs’ cause. Even if Selikoff had not been an accomplished testifier, any disinterested observer should, by 1990, have known that Selikoff was himself not a disinterested actor in medical asbestos controversies.[4] The meeting with Selikoff apparently weighed on Judge Weinstein’s conscience. He repeated his mea culpa almost verbatim, along with the false statement about Selikoff’s never having testified, in a law review article in 1994, and then incorporated the misrepresentation into a full-length book.[5]

In his famous handling of the Agent Orange class action, Judge Weinstein manipulated the defendants into settling, and only then applied his considerable analytical ability in dissecting the inadequacies of the plaintiffs’ causation case. Rather than place the weight of his decision on Rule 702, Judge Weinstein dismembered the causation claim by finding that the bulk of what the plaintiffs’ expert witnesses relied upon under Rule 703 was unreasonable. He then found that what remained, if anything, could not reasonably support a verdict for plaintiffs, and he entered summary judgment for the defense in the opt-out cases.[6]

In 1993, the U.S. Supreme Court breathed fresh life into the trial court’s power and obligation to review expert witness opinions and to exclude unsound opinions.[7] Several months before the Supreme Court charted this new direction on expert witness testimony, the silicone breast implant litigation, fueled by iffy science and iffier scientists, erupted.[8] In October 1994, the Judicial Panel on Multi-District Litigation created MDL 926, which consolidated the federal breast implant cases before Judge Sam Pointer, in the Northern District of Alabama. Unlike most contemporary MDL judges, however, Judge Pointer did not believe that Rule 702 and Rule 703 objections should be addressed by the MDL judge. Pointer believed strongly that the trial judges, in the individual remanded cases, should rule on objections to the validity of proffered expert witness opinion testimony. As a result, so-called Daubert hearings began taking place in district courts around the country, in parallel with other centralized proceedings in MDL 926.

By the summer of 1996, Judge Robert E. Jones had a full-blown Rule 702 attack on the plaintiffs’ expert witnesses before him, in a case remanded from MDL 926. In the face of the plaintiffs’ MDL leadership committee’s determined opposition, Judge Jones appointed four independent scientists to serve as scientific advisors. With their help, in December 1996, Judge Jones issued one of the seminal rulings in the breast implant litigation, and excluded the plaintiffs’ expert witnesses.[9]

While Judge Jones was studying the record, and writing his opinion in the Hall case, Judge Weinstein, with a judge from the Southern District of New York, conducted a two-week Rule 702 hearing, in his Brooklyn courtroom. Judge Weinstein announced at the outset that he had studied the record from the Hall case, and that he would incorporate it into his record for the cases remanded to the Southern and Eastern Districts of New York.

I had one of the first witnesses, Dr. Donnard Dwyer, before Judge Weinstein during that chilly autumn of 1996. Dwyer was a very earnest immunologist, who appeared on direct examination to endorse the methodological findings of the plaintiffs’ expert witnesses, including a very dodgy study by Dr. Douglas Shanklin. On cross-examination, I elicited Dwyer’s view that the Shanklin study involved fraudulent methodology and that he, Dwyer, would never use such a method or allow a graduate student to use it. This examination, of course, was great fun, and as I dug deeper with relish, Judge Weinstein stopped me, and asked the plaintiffs’ counsel rhetorically whether any of them intended to rely upon the discredited Shanklin study. My main adversary, Mike Williams, did not miss a beat; he jumped to his feet to say no, and that he did not know why I was belaboring this study. But then Denise Dunleavy, of Weitz & Luxenberg, knowing that Shanklin was her listed expert witness in many cases, rose to say that her expert witnesses would rely upon the Shanklin study. Incredulous, Weinstein looked at me, rolled his eyes, paused dramatically, and then waved his hand at me to continue.

Later in my cross-examination, I was inquiring about another study that reported a statistic from a small sample. The authors reported a confidence interval that included negative values for a test that could not have had any result less than zero. The sample was obviously skewed, and the authors had probably used an inappropriate parametric test, but Dwyer was about to commit to the invalidity of the study when Judge Weinstein stopped me. He was well aware that the normal approximation had created the aberrant result, and that perhaps the authors’ only sin was in failing to use a non-parametric test. I have not had many trial judges interfere so knowledgeably.
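
A minimal numerical sketch (purely hypothetical data, not the study’s actual numbers) illustrates the point Judge Weinstein grasped: for a small, skewed sample of a quantity that cannot be negative, the normal-approximation (“Wald”) interval can extend below zero, while a nonparametric bootstrap interval cannot.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical small, right-skewed sample of a quantity that cannot be negative
sample = rng.lognormal(mean=0.0, sigma=1.5, size=12)

# Normal-approximation ("Wald") 95% interval: mean +/- 1.96 * standard error.
# For skewed small samples, the lower bound frequently falls below zero,
# an impossible value for the underlying quantity.
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))
wald = (mean - 1.96 * se, mean + 1.96 * se)

# A nonparametric alternative: the bootstrap percentile interval, which cannot
# extend below zero, because every resampled mean is itself nonnegative.
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(10_000)]
boot = (np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5))

print(f"Wald 95% interval:      ({wald[0]:.2f}, {wald[1]:.2f})")
print(f"Bootstrap 95% interval: ({boot[0]:.2f}, {boot[1]:.2f})")
```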

In short order, on October 23, 1996, Judge Weinstein issued a brief, published opinion, in which he ducked the pending Rule 702 motions and granted partial summary judgment on the claims of systemic disease.[10] Only the lawyers involved in the matters would have known that there was no pending motion for summary judgment!

Following up on the grant of summary judgment, Judge Weinstein appointed a group of scientists and a legal scholar to help him assemble a panel of Rule 706 expert witnesses for future remanded cases. Law Professor Margaret Berger, along with Drs. Joel Cohen and Alan Wolff, began meeting with the lawyers to identify areas of expertise needed by the court, and what the process of court-appointment of neutral expert witnesses would look like.

The plaintiffs’ counsel were apoplectic. They argued to Judge Weinstein that Judge Pointer, in the MDL, should be supervising the process of assembling court-appointed experts. Of course, the plaintiffs’ lawyers knew that Judge Pointer, unlike Judges Jones and Weinstein, believed that both sides’ expert witnesses were extreme, and mistakenly believed that the truth lay in between. Judge Pointer was an even bigger foe of gatekeeping, and he was generally blind to the invalid evidence put forward by plaintiffs. In response to the plaintiffs’ counsel’s arguments, Judge Weinstein sardonically observed that if there were a real MDL judge, he should take it over.

Within a month or so, Judge Pointer did, in fact, take over the court-appointed expert witness process, and incorporated Judge Weinstein’s selection panel. The process did not go very smoothly in front of the MDL judge, who allowed the plaintiffs’ lawyers to slow it down by throwing in irrelevant documents and deploying rhetorical tricks. The court-appointed expert witnesses did not take kindly to the shenanigans, or to the bogus evidence. The expert panel’s unanimous rejection of the plaintiffs’ claims of connective tissue disease causation was an expensive, but long overdue, judgment from which there was no appeal. Not many commentators, however, know that the panel would never have happened but for Judge Weinstein’s clever judicial politics.

In April 1997, while Judge Pointer was getting started with the neutral expert selection panel,[11] the parties met with Judge Weinstein one last time to argue the defense motions to exclude the plaintiffs’ expert witnesses. Invoking the pendency of the Rule 706 court-appointed expert witness process in the MDL, Judge Weinstein quickly made his view clear that he would not rule on the motions. His Honor also made clear that if we pressed for a ruling, he would deny our motions, even though he had also ruled that plaintiffs could not make out a submissible case on causation.

I recall still the frustration that we, the defense counsel, felt that April day, when Judge Weinstein tried to explain why he would grant partial summary judgment but not rule on our motions contra plaintiffs’ expert witnesses. It would be many years before he let his judicial assessment see the light of day. More than two decades later, in a law review article, Judge Weinstein made clear that “[t]he breast implant litigation was largely based on a litigation fraud. … Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”[12] Indeed.

Judge Weinstein was incredibly smart and diligent, but he was human with human biases and human fallibilities. If he was a despot, he was at least kind and benevolent. In my experience, he was always polite to counsel and accommodating. Appearing before Judge Weinstein was a pleasure and an education.


[1] Laura Mansnerus, “Jack B. Weinstein, U.S. Judge With an Activist Streak, Is Dead at 99,” N.Y. Times (June 15, 2021).

[2] Christopher J. Robinette, “Judge Jack Weinstein 1921-2021,” TortsProf Blog (June 15, 2021).

[3] Jack B. Weinstein, “Learning, Speaking, and Acting: What Are the Limits for Judges?” 77 Judicature 322, 326 (May-June 1994).

[4] “Selikoff Timeline & Asbestos Litigation History” (Dec. 20, 2018).

[5] See Jack B. Weinstein, “Limits on Judges’ Learning, Speaking and Acting – Part I- Tentative First Thoughts: How May Judges Learn?” 36 Ariz. L. Rev. 539, 560 (1994) (“He [Selikoff] had never testified and would never testify.”); Jack B. Weinstein, Individual Justice in Mass Tort Litigation: The Effect of Class Actions, Consolidations, and other Multi-Party Devices 117 (1995) (“A court should not coerce independent eminent scientists, such as the late Dr. Irving Selikoff, to testify if, like he, they prefer to publish their results only in scientific journals.”)

[6] In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 785 (E.D.N.Y. 1984), aff’d, 818 F.2d 145, 150-51 (2d Cir. 1987) (approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988); In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

[7] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[8] Reuters, “Record $25 Million Awarded In Silicone-Gel Implants Case,” N.Y. Times (Dec. 24, 1992).

[9] See Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Ore. 1996).

[10] In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996).

[11] MDL 926 Order 31 (May 31, 1996) (order to show cause why a national Science Panel should not be appointed under Federal Rule of Evidence 706); MDL 926 Order No. 31C (Aug. 23, 1996) (appointing Drs. Barbara S. Hulka, Peter Tugwell, and Betty A. Diamond); Order No. 31D (Sept. 17, 1996) (appointing Dr. Nancy I. Kerkvliet).

[12] Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (emphasis added).

Scientists Suing Scientists, and Behaving Badly

June 2nd, 2021

In his 1994 Nobel Prize acceptance speech, the Hungarian-born chemist George Andrew Olah acknowledged an aspect of science that rarely is noted in popular discussions:

“[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, need a few good enemies!”[1]

If you take science seriously, you must treat error as something for which we should always be vigilant, and something we are committed to eliminating. As Olah and Von Békésy acknowledged, sometimes an enemy is required. It would thus seem quite unscientific to complain that an enemy was harassing you, when she was criticizing your data, study design, methods, or motives.

Elisabeth Margaretha Harbers-Bik would be a good enemy to have. Trained in the Netherlands in microbiology, Dr. Bik came to the United States, where for some years she conducted research at Stanford University. In 2018, Bik began in earnest a new career in analyzing published scientific studies for image duplication and manipulation, and other dubious practices.[2]

Her blog, Scientific Integrity Digest, should be on the reading list of every lawyer who labors in the muck of science repurposed for litigation. You never know when your adversary’s expert witness will be featured in the pages of the Digest!

Dr. Bik is not a lone ranger; there are other scientists who have committed to cleaning up the scientific literature. After an illustrious career as an editor of prestigious journals, and a director of the Rockefeller University Press, Dr. Mike Rossner founded Image Data Integrity, Inc., to stamp out image fraud and error in scientific publications.

On March 16, 2020, a gaggle of French authors, including Dr. Didier Raoult, uploaded a pre-print of a paper to medRxiv, reporting on hydroxychloroquine (HCQ) and azithromycin in Covid-19 patients. The authors submitted their manuscript that same day to the International Journal of Antimicrobial Agents, which accepted it in 24 hours or less, on March 17, 2020. The journal published the paper online, three days after acceptance, on March 20th. Peer-review, to the extent it took place, was abridged.[3]

The misleading title of the paper, “Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial,” may have led some untutored observers into thinking the paper reported a study high in the hierarchy of evidence. Instead, the paper was a rather flawed observational study, or perhaps just a concatenation of anecdotes. In any event, the authors reported that patients who had received both medications cleared SARS-CoV-2 the fastest.

Four days after the paper’s online publication in a supposedly peer-reviewed journal, Elisabeth Bik posted an insightful analysis of it.[4] Her blog post showed how badly any peer review had failed, identifying an apparent conflict of interest and various methodological flaws, including missing data on six (of 26) patients, one of whom died, and three of whom worsened on therapy.

Raoult’s paper, and his overly zealous advocacy for HCQ, did not go unnoticed in the world of kooks, speculators, and fraudfeasors. Elon Musk tweeted about Raoult’s paper; Fox News amplified Musk’s tweet, which then made its way into that swamp of misinformation, Trump’s mind and his twitterverse.[5]

In the wake of the hoopla over Raoult’s paper, the journal’s owner admitted that the paper did not live up to the society’s standards. The publisher, Elsevier, called for an independent investigation. The French Infectious Diseases Society accused Raoult of spreading false information about hydroxychloroquine’s efficacy in Covid-19 patients. To date, there have been no further official discussions of disciplinary actions or proceedings at the Society.

Raoult apparently stewed over Bik’s criticisms and her debunking of his over-interpretation of his flawed HCQ study. Last month, Raoult filed a complaint with a French prosecutor, which marked the commencement of legal proceedings against Bik for harassment and “extortion.” The extortion charge is based upon nothing more than Bik’s having a Patreon account to support her search for fraud and error in the published medical literature.[6]

The initial expression of outrage over Raoult’s bad behavior came from Citizen4Science, a French not-for-profit organization that works to promote scientific integrity. According to Dr. Fabienne Blum, president of Citizen4Science, the organization issued its press release on May 5, 2021, to call on authorities to investigate and to intervene in Raoult’s harassment of scientists. The press release about “the French scandal” was signed by scientists and non-scientists from around the world; it currently remains open for signatures, which number well over 4,000. “Harassment of scientific spokespersons and defenders of scientific integrity: Citizen4Science calls on the authorities to intervene urgently” (May 5, 2021). Dr. Blum and Citizen4Science are now themselves harassed on Twitter, where they have been labeled “Bik’s gang.” Inevitably, they will be sued as well.

On June 1st, Dr. Raoult posted his self-serving take on the controversy on that scholarly forum known as YouTube. An English translation of Raoult’s diatribe can be found at Citizen4Science’s website. Perhaps others have noted that Raoult refers to Bik as “Madame” (or Mrs.) Bik, rather than as Dr. Bik, which leads to some speculation that Raoult has trouble taking criticism from intelligent women.

Projecting his own worst characteristics onto his adversaries, Raoult lodged accusations against Bik that closely reflected his own behavior. Haven’t we seen someone in public life who operates just like this? Raoult has criticized Bik in the lay media, and he has released personal information about her, including her residential address. Raoult’s intemperate and inappropriate personal attacks on Bik have led several hundred scientists to sign an open letter in support of Bik.[7]

This scientist doth protest too much, methinks.


[1] George Andrew Olah, Nobel Prize Speech (1994) (quoting George Von Békésy, Experiments in Hearing 8 (1960)).

[2] Elisabeth M. Bik, Arturo Casadevall, and Ferric C. Fang, “The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications,” 7 mBio e00809 (2016); Daniele Fanelli, Rodrigo Costas, Ferric C. Fang, Arturo Casadevall, Elisabeth M. Bik, “Testing Hypotheses on Risk Factors for Scientific Misconduct via Matched-Control Analysis of Papers Containing Problematic Image Duplications,” 25 Science & Engineering Ethics 771 (2019); see also Jayashree Rajagopalan, “I have found about 2,000 problematic papers, says Dr. Elisabeth Bik,” Editage Insights (Aug. 8, 2019).

[3] Philippe Gautret, Jean-Christophe Lagier, Philippe Parola, Van Thuan Hoang, Line Meddeb, Morgane Mailhe, Barbara Doudier, Johan Courjon, Valérie Giordanengo, Vera Esteves Vieira, Hervé Tissot Dupont, Stéphane Honoré, Philippe Colson, Eric Chabrière, Bernard La Scola, Jean-Marc Rolain, Philippe Brouqui, and Didier Raoult, “Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial,” 56 Internat’l J. Antimicrob. Agents 105949 (2020).

[4] Bik, “Thoughts on the Gautret et al. paper about Hydroxychloroquine and Azithromycin treatment of COVID-19 infections,” Scientific Integrity Digest (March 24, 2020).

[5] Charles Piller, “‘This is insane!’ Many scientists lament Trump’s embrace of risky malaria drugs for coronavirus,” Science Mag. (Mar. 26, 2020).

[6] Melissa Davey, “World expert in scientific misconduct faces legal action for challenging integrity of hydroxychloroquine study,” The Guardian (May 22, 2021); Kristina Fiore, “HCQ Doc Sues Critic,” MedPage Today (May 26, 2021).

[7] Lonni Besançon, Alexander Samuel, Thibault Sana, Mathieu Rebeaud, Anthony Guihur, Marc Robinson-Rechavi, Nicolas Le Berre, Matthieu Mulot, Gideon Meyerowitz-Katz, Maisonneuve, Brian A. Nosek, “Open Letter: Scientists stand up to protect academic whistleblowers and post-publication peer review,” (May 18, 2021).

The Practicing Law Institute’s Second Edition of Products Liability Litigation

May 30th, 2021

In late March, the Practicing Law Institute released the second edition of its treatise on products liability. George D. Sax, Stephanie A. Scharf, Sarah R. Marmor, eds., Product Liability Litigation: Current Law, Strategies and Best Practices (2d ed. 2021).

The new edition is now in two volumes, which cover substantive products liability law, as well as legal theory, policy, and strategy considerations important to both sides of the products liability bar. The work of the editors, Stephanie A. Scharf and her colleagues, George D. Sax and Sarah R. Marmor, in managing this process is nothing short of Homeric. The authors are mostly practitioners, with a wealth of practical experience. There are a good number of friends, colleagues, and adversaries among the chapters’ authors, so any recommendation I make should be tempered by my disclosure.

Unlike with the first edition, the PLI has doubled down on control of the copyright license, and so I am no longer able to upload my chapter on statistical evidence to ResearchGate, Academia.edu, or my own website. But here is the outline index to my contribution, Chapter 28, “Statistical Evidence in Products Liability Litigation”:

  • 28:1 History and Overview
  • 28:2 Litigation Context of Statistical Issues
  • 28:3 Qualifications of Expert Witnesses Who Give Testimony on Statistical Issues
  • 28:4 Admissibility of Statistical Evidence – Rules 702 and 703
  • 28:5 Significance Probability
  • 28:5.1 Definition of Significance Probability (The “p-value”)
  • 28:5.2 Misstatements about Statistical Significance
  • 28:5.3 Transposition Fallacy
  • 28:5.4 Confusion Between Significance Probability and Burden of Proof
  • 28:5.5 Hypothesis Testing
  • 28:5.6 Confidence Intervals
  • 28:5.7 Inappropriate Use of Statistics – Matrixx Initiatives
    • [A]     Sequelae of Matrixx Initiatives
    • [B]     Is Statistical Significance Necessary?
  • 28:5.8 American Statistical Association’s Statement on P-Values
  • 28:6 Statistical Power
  • 28:6.1 Definition of Statistical Power
  • 28:6.2 Cases Involving Statistical Power
  • 28:7 Evidentiary Rule of Completeness
  • 28:8 Meta-Analysis
  • 28:8.1 Definition and History of Meta-Analysis
  • 28:8.2 Consensus Statements
  • 28:8.3 Use of Meta-Analysis in Litigation
  • 28:8.4 Competing Models for Meta-Analysis
  • 28:8.5 Recent Cases Involving Meta-Analyses
  • 28:9 Statistical Inference in Securities Fraud Cases Against Pharmaceutical Manufacturers
  • 28:10 Multiple Testing
  • 28:11 Ethical Considerations Raised by Statistical Expert Witness Testimony
  • 28:12 Conclusion

A detailed table of contents for the entire treatise is available at the PLI’s website. The authors and their chapters are set out below.

Chapter 1. What Product Liability Might Look Like in the Twenty-First Century (James M. Beck)

Chapter 2. Recent Trends in Product Claims and Product Defenses (Lori B. Leskin & Angela R. Vicari)

Chapter 3. Game-Changers: Defending Products Cases with Child Plaintiffs (Sandra Giannone Ezell & Diana M. Miller)

Chapter 4. Preemption Defenses (Joseph G. Petrosinelli, Ana C. Reyes & Amy Mason Saharia)

Chapter 5. Defending Class Action Lawsuits (Mark Herrmann, Pearson N. Bownas & Katherine Garceau Sobiech)

Chapter 6. Litigation in Foreign Countries Against U.S. Companies (Joseph G. Petrosinelli & Ana C. Reyes)

Chapter 7. Emerging Issues in Pharmaceutical Litigation (Allen P. Waxman, Loren H. Brown & Brooke Kim)

Chapter 8. Recent Developments in Asbestos, Talc, Silica, Tobacco, and E-Cigarette/Vaping Litigation in the U.S. and Canada (George Gigounas, Arthur Hoffmann, David Jaroslaw, Amy Pressman, Nancy Shane Rappaport, Wendy Michael, Christopher Gismondi, Stephen H. Barrett, Micah Chavin, Adam A. DeSipio, Ryan McNamara, Sean Newland, Becky Rock, Greg Sperla & Michael Lisanti)

Chapter 9. Emerging Issues in Medical Device Litigation (David R. Geiger, Richard G. Baldwin, Stephen G.W. Stich & E. Jacqueline Chávez)

Chapter 10. Emerging Issues in Automotive Product Liability Litigation (Eric P. Conn, Howard A. Fried, Thomas N. Lurie & Nina A. Rosenbach)

Chapter 11. Emerging Issues in Food Law and Litigation (Sarah L. Brew & Joelle Groshek)

Chapter 12. Regulating Cannabis Products (James H. Rotondo, Steven A. Cash & Kaitlin A. Canty)

Chapter 13. Blockchain Technology and Its Impact on Product Litigation (Justin Wales & Matt Kohen)

Chapter 14. Emerging Trends: Smart Technology and the Internet of Things (Christopher C. Hossellman & Damion M. Young)

Chapter 15. The Law of Damages in Product Liability Litigation (Evan D. Buxner & Dionne L. Koller)

Chapter 16. Using Early Case Assessments to Develop Strategy (Mark E. (Rick) Richardson)

Chapter 17. Impact of Insurance Policies (Kamil Ismail, Linda S. Woolf & Richard M. Barnes)

Chapter 18. Advantages and Disadvantages of Multidistrict Litigation (Wendy R. Fleishman)

Chapter 19. Strategies for Co-Defending Product Actions (Lem E. Montgomery III & Anna Little Morris)

Chapter 20. Crisis Management and Media Strategy (Joanne M. Gray & Nilda M. Isidro)

Chapter 21. Class Action Settlements (Richard B. Goetz, Carlos M. Lazatin & Esteban Rodriguez)

Chapter 22. Mass Tort Settlement Strategies (Richard B. Goetz & Carlos M. Lazatin)

Chapter 23. Arbitration (Beth L. Kaufman & Charles B. Updike)

Chapter 24. Privilege in a Global Product Economy (Marina G. McGuire)

Chapter 25. E-Discovery—Practical Considerations (Denise J. Talbert, John C. Vaglio, Jeremiah S. Wikler & Christy A. Pulis)

Chapter 26. Expert Evidence—Law, Strategies and Best Practices (Stephanie A. Scharf, George D. Sax, Sarah R. Marmor & Morgan G. Churma)

Chapter 27. Court-Appointed and Unconventional Expert Issues (Jonathan M. Hoffman)

Chapter 28. Statistical Evidence in Products Liability Litigation (Nathan A. Schachtman)

Chapter 29. Post-Sale Responsibilities in the United States and Foreign Countries (Kenneth Ross & George W. Soule)

Chapter 30. Role of Corporate Executives (Samuel Goldblatt & Benjamin R. Dwyer)

Chapter 31. Contacting Corporate Employees (Sharon L. Caffrey, Kenneth M. Argentieri & Rachel M. Good)

Chapter 32. Spoliation of Product Evidence (Paul E. Benson & Adam E. Witkov)

Chapter 33. Presenting Complex Scientific Evidence (Morton D. Dubin II & Nina Trovato)

Chapter 34. How to Win a Dismissal When the Plaintiff Declares Bankruptcy (Anita Hotchkiss & Earyn Edwards)

Chapter 35. Juries (Christopher C. Spencer)

Chapter 36. Preparing for the Appeal (Wendy F. Lumish & Alina Alonso Rodriguez)

Chapter 37. Global Reach: Foreign Defendants in the United States (Lisa J. Savitt)

Reference Manual on Scientific Evidence v4.0

February 28th, 2021

The need for revisions to the third edition of the Reference Manual on Scientific Evidence (RMSE) has been apparent since its publication in 2011. A decade has passed, and the federal agencies involved in the third edition, the Federal Judicial Center (FJC) and the National Academies of Sciences, Engineering, and Medicine (NASEM), are assembling staff to prepare the long-needed revisions.

The first sign of life for this new edition came back on November 24, 2020, when the NASEM held a short, closed-door virtual meeting to discuss planning for a fourth edition.[1] The meeting was billed by the NASEM as “the first meeting of the Committee on Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence.” The Committee members heard from John S. Cooke (FJC Director), and Alan Tomkins and Reggie Sheehan, both of the National Science Foundation (NSF). The stated purpose of the meeting was to review the third edition of the RMSE to identify “areas of science, technology, and medicine that may be candidates for new or updated chapters in a proposed new (fourth) edition of the manual.” The only public pronouncement from the first meeting was that the committee would sponsor a workshop on the topic of new chapters for the RMSE, in early 2021.

The Committee’s second meeting took place a week later, again in closed session.[2] The stated purpose of the Committee’s second meeting was to review the third edition of the RMSE, and to discuss candidate areas for inclusion as new and updated chapters for a fourth edition.

Last week saw the Committee’s third meeting, its first open to the public. The meeting spanned two days (Feb. 24 and 25, 2021), was sponsored by the NASEM and the FJC, along with the NSF, and was co-chaired by Thomas D. Albright, Professor and Conrad T. Prebys Chair at the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, who sits on the United States Court of Appeals for the Federal Circuit. Identified members of the committee include:

Steven M. Bellovin, professor in the Computer Science department at Columbia University;

Karen Kafadar, Departmental Chair and Commonwealth Professor of Statistics at the University of Virginia, and former president of the American Statistical Association;

Andrew Maynard, professor, and director of the Risk Innovation Lab at the School for the Future of Innovation in Society, at Arizona State University;

Venkatachalam Ramaswamy, Director of the Geophysical Fluid Dynamics Laboratory of the National Oceanic and Atmospheric Administration (NOAA) Office of Oceanic and Atmospheric Research (OAR), studying climate modeling and climate change;

Thomas Schroeder, Chief Judge for the U.S. District Court for the Middle District of North Carolina;

David S. Tatel, United States Court of Appeals for the District of Columbia Circuit; and

Steven R. Kendall, Staff Officer.

The meeting comprised five panel presentations, made up of remarkably accomplished and talented speakers. Each panel’s presentations were followed by discussion among the panelists, and the committee members. Some panels answered questions submitted from the public audience. Judge O’Malley opened the meeting with introductory remarks about the purpose and scope of the RMSE, and of the inquiry into additional possible chapters.

  1. Challenges in Evaluating Scientific Evidence in Court

The first panel consisted entirely of judges, who held forth on their approaches to judicial gatekeeping of expert witnesses, and to scientific and technical issues. Chief Judge Schroeder moderated the presentations of panelists:

Barbara Parker Hervey, Texas Court of Criminal Appeals;

Patti B. Saris, Chief Judge of the United States District Court for the District of Massachusetts, member of the President’s Council of Advisors on Science and Technology (PCAST);

Leonard P. Stark, U.S. District Court for the District of Delaware; and

Sarah S. Vance, Judge (former Chief Judge) of the U.S. District Court for the Eastern District of Louisiana, chair of the Judicial Panel on Multidistrict Litigation.

  2. Emerging Issues in the Climate and Environmental Sciences

Paul Hanle, of the Environmental Law Institute, moderated presenters:

Joellen L. Russell, the Thomas R. Brown Distinguished Chair of Integrative Science and Professor at the University of Arizona in the Department of Geosciences;

Veerabhadran Ramanathan, Edward A. Frieman Endowed Presidential Chair in Climate Sustainability at the Scripps Institution of Oceanography at the University of California, San Diego;

Benjamin D. Santer, atmospheric scientist at Lawrence Livermore National Laboratory; and

Donald J. Wuebbles, the Harry E. Preble Professor of Atmospheric Science at the University of Illinois.

  3. Emerging Issues in Computer Science and Information Technology

Josh Goldfoot, Principal Deputy Chief, Computer Crime & Intellectual Property Section, at U.S. Department of Justice, moderated panelists:

Jeremy J. Epstein, Deputy Division Director of Computer and Information Science and Engineering (CISE) and Computer and Network Systems (CNS) at the National Science Foundation;

Russ Housley, founder of Vigil Security, LLC;

Subbarao Kambhampati, professor of computer science at Arizona State University; and

Alice Xiang, Senior Research Scientist at Sony AI.

  4. Emerging Issues in the Biological Sciences

Panel four was moderated by Professor Ellen Wright Clayton, the Craig-Weaver Professor of Pediatrics, and Professor of Law and of Health Policy at Vanderbilt Law School, at Vanderbilt University. Her panelists were:

Dana Carroll, distinguished professor in the Department of Biochemistry at the University of Utah School of Medicine;

Yaniv Erlich, Chief Executive Officer of Eleven Therapeutics, Chief Science Officer of MyHeritage;

Steven E. Hyman, director of the Stanley Center for Psychiatric Research at Broad Institute of MIT and Harvard; and

Philip Sabes, Professor Emeritus in Physiology at the University of California, San Francisco (UCSF).

  5. Emerging Areas in Psychology, Data, and Statistical Sciences

Gary Marchant, Lincoln Professor of Emerging Technologies, Law and Ethics, at Arizona State University’s Sandra Day O’Connor College of Law, moderated panelists:

Xiao-Li Meng, the Whipple V. N. Jones Professor of Statistics, Harvard University, and the Founding Editor-in-Chief of Harvard Data Science Review;

Rebecca Doerge, Glen de Vries Dean of the Mellon College of Science at Carnegie Mellon University, member of the Dietrich College of Humanities and Social Sciences’ Department of Statistics and Data Science, and of the Mellon College of Science’s Department of Biological Sciences;

Daniel Kahneman, Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem; and

Goodwin Liu, Associate Justice of the California Supreme Court.

The proceedings of this two-day meeting were recorded and will be published. The website materials do not make clear whether the verbatim remarks will be included, but regardless, the proceedings should warrant careful reading.

Judge O’Malley, in her introductory remarks, emphasized that the RMSE must be a neutral, disinterested source of information for federal judges, an aspirational judgment from which there can be no dissent. More controversial will be Her Honor’s assessment that epidemiologic studies can “take forever,” and other judges’ suggestion that plaintiffs lack financial resources to put forward credible, reliable expert witnesses. Judge Vance corrected the course of the discussion by pointing out that MDL plaintiffs were not disadvantaged, but no one pointed out that plaintiffs’ counsel were among the wealthiest individuals in the United States, and that they have been known to sponsor epidemiologic and other studies that wind up as evidence in court.

Panel One was perhaps the most discomforting experience, as it involved revelations about how the sausage is made in the gatekeeping process. The panel was remarkable for including a state court judge, Judge Barbara Parker Hervey, of the Texas Court of Criminal Appeals. Judge Hervey remarked that, in her experience, if we judges “can’t understand it, we won’t read it.” Her dictum raises interesting issues. No doubt, in some instances, the judicial failure of comprehension is the fault of the lawyers. But what happens when the judges “can’t understand it”? Do they ask for further briefing? Or do they ask for a hearing with viva voce testimony from expert witnesses? The point was not followed up.

Leonard P. Stark’s insights were interesting in that his docket in the District of Delaware is flooded with patent and Hatch-Waxman Act litigation. Judge Stark’s extensive educational training is in politics and political science. The docket volume Judge Stark described, however, raised issues about how much attention he could give to any one case.

When the panel was asked how they dealt with scientific issues, Judge Saris discussed her presiding over In re Neurontin, which was a “big challenge for me to understand,” with no randomized trials or objective assessments by the litigants.[3] Judge Vance discussed her experience of presiding in a low-level benzene exposure case, in which plaintiff claimed that his acute myelogenous leukemia was caused by gasoline.[4]

Perhaps the key difference in approach to Rule 702 emerged when the judges were asked whether they read the underlying studies. Judge Saris did not answer directly, but stated she reads the reports. Judge Vance, on the other hand, noted that she reads the relied upon studies. In her gasoline-leukemia case, she read the relied-upon epidemiologic studies, which she described as a “hodge podge,” and which were misrepresented by the expert witnesses and counsel. She emphasized the distortions of the adversarial system and the need to moderate its excesses by validating what exactly the expert witnesses had relied upon.

This division in judicial approach was seen again when Professor Karen Kafadar asked how the judges dealt with peer review. Judge Saris seemed to suggest that a peer-reviewed published article was prima facie reliable. Others disagreed and noted that peer-reviewed articles can have findings that are overstated, and wrong. One speaker noted that Jerome Kassirer had downplayed the significance of, and the validation provided by, peer review, in the RMSE (3d ed. 2011).

Curiously, there was no discussion of Rule 703, either in Judge O’Malley’s opening remarks on the RMSE, or in the first panel discussion. When someone from the audience submitted a question about the role of Rule 703 in the gatekeeping process, the moderator did not read it.

Panel Two. The climate change panel was a tour de force of the case for anthropogenic climate change. To some, the presentations may have seemed like a reprise of The Day After Tomorrow. Indeed, the science was presented so confidently, if not stridently, that one of the committee members asked whether there could be any reasonable disagreement. The panelists responded essentially by pointing out that there could be no good faith opposition. The panelists were much less convincing on the issue of attributability. None of the speakers addressed the appropriateness vel non of climate change litigation, when the federal and state governments encouraged, licensed, and regulated the exploitation and use of fossil fuel reserves.

Panel Four. Dr. Clayton’s panel was fascinating and likely to lead to new chapters. Professor Hyman presented on heritability, a subject that did not receive much attention in the RMSE third edition. With the advent of genetic claims of susceptibility and defenses of mutation-induced disease, courts will likely need some good advice on navigating the science. Dana Carroll presented on human genome editing (CRISPR). Philip Sabes presented on brain-computer interfaces, which have progressed well beyond the level of sci-fi thrillers, such as The Brain That Wouldn’t Die (“Jan in the Pan”).

In addition to the therapeutic applications, Sabes discussed some of the potential forensic uses, such as lie detectors, pain quantification, and the like. Yaniv Erlich, of MyHeritage, discussed advances in forensic genetic genealogy, which have made a dramatic entrance into the common imagination through the apprehension of Joseph James DeAngelo, the Golden State Killer. The technique of triangulating DNA matches from consumer DNA databases has other applications, of course, such as identifying lost heirs and resolving paternity issues.

Panel Five. Professor Marchant’s panel may well have identified some of the most salient needs for the next edition of the RMSE. Nobel Laureate Daniel Kahneman presented some of the highlights from his forthcoming book about “noise” in human judgment.[5] Kahneman’s expansion upon his previous thinking about the sources of error in human – and scientific – judgment is a much-needed addition to the RMSE. Along the same lines, Professor Xiao-Li Meng presented on selection bias, how it pervades scientific work, and how it detracts from the strength of evidence, in the form of (see the simulation sketch after this list):

  1. cherry picking
  2. subgroup analyses
  3. unprincipled handling of outliers
  4. selection in methodologies (different tests)
  5. selection in due diligence (check only when you don’t like results)
  6. publication bias that results from publishing only impressive or statistically significant results
  7. selection in reporting (not reporting limitations or all analyses)
  8. selection in understanding
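
Meng’s publication-bias item (no. 6) lends itself to a quick demonstration. The following toy simulation (entirely hypothetical, and not anything Meng presented) runs many small studies of a tiny true effect, but “publishes” only the impressive, statistically significant positive results; the “published” literature then overstates the true effect severalfold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.1            # small true mean difference (arbitrary units)
n, n_studies = 25, 2_000     # many small, underpowered studies

all_estimates, published = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    if p < 0.05 and estimate > 0:        # only "impressive" results see print
        published.append(estimate)

print(f"true effect:                 {true_effect:.2f}")
print(f"mean of all estimates:       {np.mean(all_estimates):.2f}")  # near 0.10
print(f"mean of 'published' results: {np.mean(published):.2f}")      # much larger
```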

Professor Meng’s insights are sorely lacking in the third edition of the RMSE, and among judicial gatekeepers generally. All too often, undue selectivity in methodologies and in relied-upon data is treated by judges as an issue that “goes to the weight, not the admissibility” of expert witness opinion testimony. In actuality, selection biases, and other systematic and cognitive biases, are as important as, if not more important than, random error assessments. Indeed, a close look at the RMSE third edition reveals a close embrace of the amorphous, anything-goes “weight of the evidence” approach in the epidemiology chapter. That chapter marginalizes meta-analyses and fails to mention systematic review techniques altogether. The chapter on clinical medicine, however, takes a divergent approach, emphasizing the hierarchy of evidence inherent in different study types, and the need for principled and systematic reviews of the available evidence.[6]

The Committee co-chairs and panel moderators did a wonderful job of identifying important new trends in genetics, data science, error assessment, and computer science, and they should be congratulated for their efforts. Judge O’Malley is certainly correct in saying that the RMSE must be a neutral source of information on statistical and scientific methodologies, and it needs to be revised and updated to address errors and omissions in the previous editions. The legal community should look for, and study, the published proceedings when they become available.

——————————————————————————————————

[1]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting” (Nov. 24, 2020).

[2]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting 2 (Virtual)” (Dec. 1, 2020).

[3]  In re Neurontin Marketing, Sales Practices & Prods. Liab. Litig., 612 F. Supp. 2d 116 (D. Mass. 2009) (Saris, J.).

[4]  Burst v. Shell Oil Co., 104 F. Supp. 3d 773 (E.D. La. 2015) (Vance, J.), aff’d, ___ Fed. App’x ___, 2016 WL 2989261 (5th Cir. May 23, 2016), cert. denied, 137 S. Ct. 312 (2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[5]  Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (anticipated May 2021).

[6]  See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” Reference Manual on Scientific Evidence 723-24 (3d ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).

Carl Cranor’s Inference to the Best Explanation

February 12th, 2021

Carl Cranor pays me the dubious honor of quoting my assessment of weight of the evidence (WOE) pseudo-methodology as used by lawsuit industry expert witnesses, in one of his recent publications:

“Take all the evidence, throw it into the hopper, close your eyes, open your heart, and guess the weight. You could be a lucky winner! The weight of the evidence suggests that the weight-of-the-evidence (WOE) method is little more than subjective opinion, but why care if it helps you to get to a verdict!”[1]

Cranor’s intent was to deride my comments, but they hold up fairly well. I have always maintained that if I were wrong, I would eat my words, knowing that they would be quite digestible. Nothing to eat here, though.

In his essay in Public Affairs Quarterly, Cranor attempts to explain and support his advocacy of WOE in the notorious Milward case, in which Cranor, along with his friend and business partner, Martyn Smith, served as a partisan, paid expert witness.[2] Not disclosed in this article is that after the trial court excluded the opinions of Cranor and Smith under Federal Rule of Evidence 702, and plaintiff appealed, the lawsuit industry, acting through The Council for Education and Research on Toxics (CERT), filed an amicus brief to persuade the Court of Appeals to reverse the exclusion. The plaintiffs’ counsel, Cranor and Smith, and CERT failed to disclose that CERT was founded by the two witnesses, Cranor and Smith, whose exclusion was at issue.[3] Many of the lawsuit industry’s regular testifiers were signatories, and none raised any ethical qualms about the obvious conflict of interest, or about the conspiracy to pervert the course of justice.[4]

Cranor equates WOE with “inference to the best explanation,” which reductively strips science of its predictive and reproducible nature. Readers may get the sense that he is operating in the realm of narrative, not science, and they would be correct. Cranor goes on to conflate WOE methodology with “diagnostic induction” and “differential diagnosis.”[5] The latter term is well understood in both medicine and law to involve the assessment of an individual patient’s condition, based upon what is already known upon good and sufficient bases. The term has no accepted or justifiable meaning for assessing general causation. Cranor’s approach would pretermit the determination of general causation by making the disputed cause a differential.

Cranor offers several considerations in support of his WOE-ful methodology. First, he notes that the arguments for causal claims are not deductive. True, but that observation does nothing to advance his advocacy for WOE and inference to the best explanation.

Second, Cranor describes a search for relevant evidence once the scientific issue (hypothesis?) is formulated. Again, there is nothing unique about this described step, but Cranor intentionally leaves out considerations of validity, as in extrapolations between high and low dose, or between species. Similarly, he leaves out considerations of validity of study designs (such as whether any weight would be given to case studies, cross-sectional, or ecological studies) or of validity of individual studies.

Cranor’s third step is the formulation of a “sufficiently complete range of reasonable and plausible explanations to account for the evidence.” Again, nothing unique here about WOE, except that Cranor’s WOE abridges the process by ignoring the very real possibility that we do not have the correct plausible explanation available.

Fourth, according to Cranor, scientists rank, explicitly or implicitly, the putative “explanations” by plausibility and persuasiveness, based upon the evidence at hand, in view of general toxicological and background knowledge.[6] Note the absence of consideration of the predictive abilities of the competing explanations, or any felt need to assess the quality of evidence or the validity of study design.

For Cranor, the fifth consideration is to use the initial plausibility assessments, made on incomplete understanding of the phenomena, and on incomplete evidence, to direct “additionally relevant/available evidence to separate founded explanations from less well-founded ones.” Obviously missing from Cranor’s scheme is the idea of trying to challenge or test hypotheses severely, to see whether they withstand such challenges.

Sixth, Cranor suggests that “all scientifically relevant information” should be considered in moving to the “best supported” explanation. Because “best” is determined based upon what is available, regardless of the quality of the data, or the validity of the inference, Cranor rigs his WOE-ful methodology in favor of eliminating “indeterminate” as a possible conclusion.

In a seventh step, Cranor points to the need to “integrate, synthesize, and assess or evaluate,” all lines of “available relevant evidence.” There is nothing truly remarkable about this step, which clearly requires judgment. Cranor notes that there can be convergence of disparate lines of evidence, or divergence, and that some selection of “lines” of evidence may be endorsed as supporting the “more persuasive conclusion” of causality.[7] In other words, a grand gemish.

Cranor’s WOE-ful approach leaves out any consideration of random error, or systematic bias, or data quality, or study design. The words “bias” and “confounding” do not appear in Cranor’s essay, and he erroneously discusses “error” and “error rates,” only to disparage them as the machinations of defense lawyers in litigation. Similarly, Cranor omits any serious mention of reproducibility, or of the need to formulate predictions that have the ability to falsify tentative conclusions.

Quite stridently, Cranor insists that there is no room for any actual weighting of study types or designs. In apparent earnest, Cranor writes that:

“this conclusion is in accordance with a National Cancer Institute (NCI) recommendation that ‘there should be no hierarchy [among different types of scientific methods to determine cancer causation]. Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity’.”[8]

There is much whining and special pleading about the difficulty, expense, and lack of statistical power of epidemiologic studies, even though the last point is a curious backdoor endorsement of statistical significance. The first two points ignore the availability of large administrative databases from which large cohorts can be identified and studied, with tremendous statistical power. Case-control studies can in some instances be assembled quickly as studies nested in existing cohorts.
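
To make the power point concrete, here is a minimal sketch in Python, using the standard normal-approximation power formula for comparing two proportions. All of the inputs (the baseline risk of 2 per 1,000, the relative risk of 1.5, and the cohort sizes) are invented for illustration, not drawn from any particular study:

```python
# A sketch of how cohort size drives statistical power, using the standard
# normal approximation for comparing two proportions. All numbers are
# illustrative, not drawn from any particular study.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, rr, n_per_group, alpha=0.05):
    """Approximate power to detect a relative risk `rr` against a
    baseline risk `p1`, with `n_per_group` subjects in each arm."""
    p2 = p1 * rr
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    return norm.cdf((abs(p2 - p1) - z_alpha * se_null) / se_alt)

# A modest hand-assembled cohort versus a large administrative database:
print(power_two_proportions(0.002, 1.5, 5_000))    # ~0.17 -- badly underpowered
print(power_two_proportions(0.002, 1.5, 500_000))  # ~1.00 -- ample power
```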

As I have noted elsewhere,[9] Cranor’s attempt to level all types of evidence starkly misrepresents the cited “NCI” source, which is not at all an NCI recommendation, but rather a “meeting report” of a workshop of non-epidemiologists.[10] The cited source is not an official pronouncement of the NCI, the authors were not NCI scientists, and the NCI did not sponsor the meeting. The meeting report appeared in the journal Cancer Research as a paid advertisement, not in the NCI’s Journal of the National Cancer Institute as a scholarly article:

“The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.”[11]

Tellingly, Cranor’s deception was relied upon and cited by the First Circuit, in its Milward decision.[12] The scholarly fraud hit its mark. As a result of Cranor’s own dubious actions, the Milward decision has both ethical and scholarly black clouds hovering over it. The First Circuit should withdraw the decision as improvidently decided.

The article ends with Cranor’s triumphant view of Milward,[13] which he published previously, along with the plaintiffs’ lawyer who hired him.[14] What Cranor leaves out is that the First Circuit’s holding is now suspect because of the court’s uncritical acceptance of Cranor’s own misrepresentations and CERT’s omissions of conflict-of-interest disclosures, as well as the subsequent procedural history of the case. After the Circuit reversed the Rule 702 exclusions, and the Supreme Court denied the petition for a writ of certiorari, the case returned to the federal district court, where the defense lodged a Rule 702 challenge to expert witness opinion that attributed plaintiff’s acute promyelocytic leukemia to benzene exposure. This specific causation issue had not been addressed in the earlier proceedings. The trial court sustained the challenge, which left the plaintiff unable to show specific causation. The result was summary judgment for the defense, which the First Circuit affirmed on appeal.[15] The upshot of the subsequent proceedings, with their dispositive ruling for the defense on specific causation, is that the earlier ruling on general causation is no longer necessary to the final judgment, and thus not the holding of the case when all the proceedings are considered.

In the end, Cranor’s WOE leaves us with a misdirected search for an “explanation of causation,” rather than a testable, tested, reproducible, and valid “inference of causation.” Cranor’s attempt to invoke the liberalization of the Federal Rules of Evidence ignores the true meaning of “liberal” in being free from dogma and authority. Evidence does not equal eminence, and expert witnesses in court must show their data and defend their inferences, whatever their explanations may be.

——————————————————————————————————–

[1]  Carl F. Cranor, “How Courts’ Reviews of Science in Toxic Tort Cases Have Changed and Why That’s a Good Thing,” 31 Public Affairs Q. 280 (2017), quoting from Schachtman, “WOE-fully Inadequate Methodology – An Ipse Dixit by Another Name” (May 1, 2012).

[2]  Milward v. Acuity Specialty Products Group, Inc., 639 F. 3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).

[3]  See “The Council for Education and Research on Toxics” (July 9, 2013).

[4]  Among the signatories were Nachman Brautbar, David C. Christiani, Richard W. Clapp, James Dahlgren, Arthur L. Frank, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, David Ozonoff, David Rosner, Allan H. Smith, and Daniel Thau Teitelbaum.

[5]  Cranor at 286-87.

[6]  Cranor at 287.

[7]  Cranor at 287-88.

[8]  Cranor at 290.

[9]  “Cranor’s Defense of Milward at the CPR’s Celebration” (May 12, 2013).

[10]  Michele Carbone, Jack Gruber, and May Wong, “Modern criteria to establish human cancer etiology,” 14 Semin. Cancer Biol. 397 (2004).

[11]  Michele Carbone, George Klein, Jack Gruber and May Wong, “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Research 5518 (2004).

[12]  Milward v. Acuity Specialty Products Group, Inc., 639 F. 3d 11, 17 (1st Cir. 2011) (“when a group from the National Cancer Institute was asked to rank the different types of evidence, it concluded that ‘[t]here should be no such hierarchy’.”), cert. denied, 132 S.Ct. 1002 (2012).

[13]  Cranor at 292.

[14]  See “Wake Forest Publishes the Litigation Industry’s Views on Milward” (April 20, 2013).

[15]  Milward v. Acuity Specialty Products Group, Inc., 969 F. Supp. 2d 101 (D. Mass. 2013), aff’d sub nom. Milward v. Rust-Oleum Corp., 820 F.3d 469 (1st Cir. 2016).

Center for Truth in Science

February 2nd, 2021

The Center for Truth in Science

Well, now I have had the complete 2020 experience, trailing into 2021. Covid-19, a.k.a. the Trump flu, happened. The worst for me is now mostly over, and I can see a light at the end of the tunnel. Fortunately, it is not the hypoxemic end-of-life light at the end of the tunnel.

Kurt Gödel famously noted that “the meaning of the world is the separation of wish and fact.” The work of science in fields that touch on religion, politics, and other dogmas requires nothing less than separating wish from fact. Sadly, most people are cut off from the world of science by ignorance, lack of education, and social media that blur the distinction between wish and fact, and ultimately replace the latter with the former.

It should go without saying that truth is science and science is truth, but our current crises show that truth and science are both victims of the same forces that blur wish with fact. We might think that a center established for “truth in science” is as otiose as a center for justice in the law, but all the social forces at work to blur wish and fact make such a center an imperative for our time.

The Center for Truth in Science was established last year, and has already weighed in on important issues and scientific controversies that occupy American courtrooms and legislatures. Championing “fact-based” science, the Center has begun to tackle some of the difficult contemporary scientific issues that loom large on the judicial scene – talc, glyphosate, per- and polyfluoroalkyl substances, and others – as well as methodological and conceptual problems that underlie these issues. (Of course, there is no other kind of science than fact-based, but there are many pseudo-, non-fact-based knock-offs out there.) The Center has already produced helpful papers on various topics, with many more important papers in progress. The Center’s website is a welcome resource for news and insights on science that matters for current policy decisions.

The Center is an important and exciting development, and its work promises to provide the tools to help us separate wish from fact. Nothing less than the meaning of the world is at stake.

Susan Haack on Judging Expert Testimony

December 19th, 2020

Susan Haack has written frequently about expert witness testimony in the United States legal system. At times, Haack’s observations are interesting and astute, perhaps more so because she has no training in the law or legal scholarship. She trained in philosophy, and her works no doubt are taken seriously because of her academic seniority; she is the Distinguished Professor in the Humanities, Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy and Professor of Law at the University of Miami.

On occasion, Haack has used her background and experience from teaching about epistemology to good effect in elucidating how epistemological issues are handled in the law. For instance, her exploration of the vice of credulity, as voiced by W.K. Clifford,[1] is a useful counterweight to the shrill agnotologists, Robert Proctor, Naomi Oreskes, and David Michaels.

Professor Haack has also been a source of confused, fuzzy, and errant advice when it comes to the issue of Rule 702 gatekeeping. Haack’s most recent article on “Judging Expert Testimony” is an example of some unfocused thinking about one of the most important aspects of modern litigation practice, admissibility challenges to expert witness opinion testimony.[2]

Uncontroversially, Haack finds the case law on expert witness gatekeeping lacking in “effective practical guidance,” and she seeks to offer courts, and presumably litigants, “operational help.” Haack sets out to explain “why the legal formulae” are not of practical use. Haack notes that terms such as “reliable” and “sufficient” are qualitative, and vague,[3] much like “obscene” and other adjectives that gave the courts such a difficult time. Rules with vague terms such as these give judges very little guidance. As a philosopher, Haack might have noted that the various judicial formulations of gatekeeping standards are couched as conclusions, devoid of explanatory force.[4] And she might have pointed out that the judicial tendency to confuse reliability with validity has muddled many court opinions and lawyers’ briefs.

Focusing specifically on the field of epidemiology, Haack attempts to help courts by offering questions that judges and lawyers should be asking. She tells us that the Reference Manual for Scientific Evidence is of little practical help, which is a bit unfair.[5] The Manual in its present form has problems, but ultimately the performance of gatekeepers can be improved only if the gatekeepers develop some aptitude and knowledge in the subject matter of the expert witnesses who are undergoing Rule 702 challenges. Haack seems unduly reluctant to acknowledge that gatekeeping will require subject matter expertise. The chapter on statistics in the current edition of the Manual, by David Kaye and the late David Freedman, is a rich resource for judges and lawyers in evaluating statistical evidence, including statistical analyses that appear in epidemiologic studies.

Why do judges struggle with epidemiologic testimony? Haack unwittingly shows the way by suggesting that “[e]pidemiological testimony will be to the effect that a correlation, an increased relative risk, has, or hasn’t, been found, between exposure to some substance (the alleged toxin at issue in the case) and some disease or disorder (the alleged disease or disorder the plaintiff claims to have suffered)… .”[6] Some philosophical parsing of the difference between “correlation” and “increased risk” as two very different things might have been in order, as the sketch below illustrates. Haack suggests an incorrect identity between correlation and increased risk that has confused courts as well as some epidemiologists.
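
A minimal numerical sketch makes the distinction plain. With an invented 2×2 table for a rare outcome (the counts below are hypothetical), the relative risk is a full three-fold increase, while the correlation (phi) coefficient computed from the very same table is nearly zero:

```python
# Relative risk versus correlation from the same hypothetical 2x2 table.
# Counts are invented for illustration; for a rare outcome, a tripled risk
# coexists with a near-zero correlation coefficient.
from math import sqrt

a, b = 30, 9_970   # exposed:   cases, non-cases
c, d = 10, 9_990   # unexposed: cases, non-cases

relative_risk = (a / (a + b)) / (c / (c + d))
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(f"relative risk = {relative_risk:.1f}")  # 3.0
print(f"phi (correlation) = {phi:.3f}")        # ~0.022
```

A tripled risk and a correlation of about 0.02 describe the same data; treating the two measures as interchangeable is exactly the confusion that Haack’s phrasing invites.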

Haack suggests asking various questions that are fairly obvious, such as the soundness of the data, measurements, study design, and data interpretation. Haack gives the example of failing to ascertain exposure to an alleged teratogen during the first trimester of pregnancy as a failure of study design that could obscure a real association. Curiously, she claims that some of Merrell Dow’s studies of Bendectin did such a thing, not by citing to any publications but to the second-hand accounts of a trial judge.[7] Beyond the objectionable lack of scholarship, the example comes from a medication exposure that has been exculpated as much as possible from the dubious litigation claims made of its teratogenicity. The misleading example begs the question: why choose a Bendectin case, from a litigation that was punctuated by fraud and perjury from plaintiffs’ expert witnesses, and a medication that has been shown to be safe and effective in pregnancy?[8]

Haack balks when it comes to statistical significance, which she tells us is merely based upon a convention, and set “high” to avoid false alarms.[9] Haack’s dismissive attitude cannot be squared with the absolute need to address random error and to assess whether the research claim has been meaningfully tested.[10] Haack would reduce the assessment of random error to the uncertainties of eyeballing sample size. She tells us that:

“But of course, the larger the sample is, then, other things being equal, the better the study. Andrew Wakefield’s dreadful work supposedly finding a correlation between MMR vaccination, bowel disorders, and autism—based on a sample of only 12 children — is a paradigm example of a bad study.”[11]

Sample size was the least of Wakefield’s problems, but more to the point, in some study designs for some hypotheses, a sample of 12 may be quite adequate to the task, and capable of generating robust and even statistically significant findings, as a simple calculation shows.
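
Here is a minimal sketch of the arithmetic, under assumed and purely hypothetical conditions: in a paired or sign-test design scoring only the direction of each subject’s response, with a null hypothesis of a 50/50 split, twelve out of twelve responses in the predicted direction is statistically significant by a wide margin:

```python
# Exact binomial (sign-test) calculation for a hypothetical n = 12 design.
# Under the null, each subject responds in the predicted direction with
# probability 0.5; if all 12 do, the exact two-sided p-value is tiny.
n, k, p_null = 12, 12, 0.5

p_one_sided = p_null ** n          # P(12 of 12 under the null)
p_two_sided = 2 * p_one_sided      # symmetric two-sided doubling

print(f"one-sided p = {p_one_sided:.5f}")  # 0.00024
print(f"two-sided p = {p_two_sided:.5f}")  # 0.00049
```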

Inevitably, Haack alights upon personal bias or conflicts of interest, as a subject of inquiry.[12] Of course, this is one of the few areas that judges and lawyers understand all too well, and do not need encouragement to pursue. Haack dives in, regardless, to advise asking:

“Do those who paid for or conducted a study have an interest in reaching a given conclusion (were they, for example, scientists working for manufacturers hoping to establish that their medication is effective and safe, or were they scientists working, like Wakefield, with attorneys for one party or another)?”[13]

Speaking of bias, we can detect some in how Haack frames the inquiry. Do scientists work for manufacturers (Boo!) or were they “like Wakefield” working for attorneys for a party? Haack cannot seem to bring herself to say that Wakefield, and many other expert witnesses, worked for plaintiffs and plaintiffs’ counsel, a.k.a., the lawsuit industry. Perhaps Haack included such expert witnesses as working for those who manufacture lawsuits. Similarly, in her discussion of journal quality, she notes that some journals carry advertisements from manufacturers, or receive financial support from them. There is a distinct asymmetry in Haack’s lack of curiosity about journals that are run by scientists or physicians who belong to advocacy groups, or who regularly testify for plaintiffs’ counsel.

There are many other quirky opinions here, but I will conclude with the obvious point that in the epidemiologic literature, there is a huge gulf between reporting on associations and drawing causal conclusions. Haack asks her readers to remember “that epidemiological studies can only show correlations, not causation.”[14] This suggestion ignores her own article’s discussion of certain clinical trial results, which do “show” causal relationships. And epidemiologic studies can show strong, robust, consistent associations, with exposure-response gradients, not likely consistent with random variation, and these findings collectively can show causation in appropriate cases.
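
On the exposure-response point, a Cochran-Armitage trend test shows how a monotone gradient can itself be tested against random variation. The strata, denominators, and case counts below are invented solely for illustration:

```python
# Cochran-Armitage test for trend across hypothetical exposure strata.
# The scores, denominators, and case counts are invented for illustration.
from math import sqrt, erfc

scores = [0, 1, 2, 3]             # exposure stratum scores (none .. high)
n = [1_000, 1_000, 1_000, 1_000]  # subjects per stratum
cases = [10, 15, 22, 30]          # observed cases per stratum

N = sum(n)
p_bar = sum(cases) / N

t_stat = sum(x * (r - ni * p_bar) for x, r, ni in zip(scores, cases, n))
variance = p_bar * (1 - p_bar) * (
    sum(ni * x**2 for x, ni in zip(scores, n))
    - sum(ni * x for x, ni in zip(scores, n)) ** 2 / N
)
z = t_stat / sqrt(variance)
p_two_sided = erfc(abs(z) / sqrt(2))

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.5f}")  # z ~ 3.45, p ~ 0.0006
```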

My recommendation is to ignore Haack’s suggestions and to pay closer attention to the subject matter of the expert witness who is under challenge. If the subject matter is epidemiology, open a few good textbooks on the subject. On the legal side, a good treatise such as The New Wigmore will provide much more illumination and guidance for judges and lawyers than vague, general suggestions.[15]


[1] William Kingdon Clifford, “The Ethics of Belief,” in L. Stephen & F. Pollock, eds., The Ethics of Belief 70-96 (1877) (“In order that we may have the right to accept [someone’s] testimony as ground for believing what he says, we must have reasonable grounds for trusting his veracity, that he is really trying to speak the truth so far as he knows it; his knowledge, that he has had opportunities of knowing the truth about this matter; and his judgement, that he has made proper use of those opportunities in coming to the conclusion which he affirms.”), quoted in Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020).

[2]  Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020) [cited as Haack].

[3]  Haack at 21.

[4]  See, e.g., “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions”; “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702”; “Judicial Dodgers – Weight not Admissibility”; “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent.”

[5]  Haack at 21.

[6]  Haack at 22.

[7]  Haack at 24, citing Blum v. Merrell Dow Pharms., Inc., 33 Phila. Cty. Rep. 193, 214-17 (1996).

[8]  See, e.g., “Bendectin, Diclegis & The Philosophy of Science” (Oct. 23, 2013).

[9]  Haack at 23.

[10]  See generally Deborah Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018).

[11]  Haack at 23-24 (emphasis added).

[12]  Haack at 24.

[13]  Haack at 24.

[14]  Haack at 25.

[15]  David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2nd ed. 2011). A new edition is due out presently.

Is Your Daubert Motion Racist?

July 17th, 2020

In this week’s New York Magazine, Jonathan Chait points out there is now a vibrant anti-racism consulting industry that exists to help white (or White?) people to recognize the extent to which their race has enabled their success, in the face of systematic inequalities that burden people of color. Chait acknowledges that some of what this industry does is salutary and timely, but he also notes that there are disturbing elements in this industry’s messaging, which is nothing short of an attack on individualism as a racist myth that ignores how individuals are subsumed completely into their respective racial groups. Chait argues that many of the West’s most cherished values – individualism, due process, free speech and inquiry, and the rule of law – are imperiled by so-called “radical progressivism” and “identity politics.”[1]

It is hard to fathom how anti-racism can collapse all identity into racial categories, even if some inarticulate progressives say so. Chait’s claim, however, seems to be supported by the Smithsonian National Museum of African American History & Culture, and its webpages on “Talking about Race,” which provides an extended analysis of “whiteness,” “white privilege,” and the like.

On May 31, 2020, the Museum’s website published a graphic that presented its view of the “Aspects & Assumptions of Whiteness and White Culture in the United States,” which made many startling claims about what is “white,” and by implication, what is “non-white.” [The chart is set out below.] I will leave it to the sociologists, psychologists, and anthropologists to parse the discussion of “white-dominant culture,” and white “racial identity,” provided in the Museum’s webpages. In my view, the characterizations of “whiteness” were overtly racist and insulting to all races and ethnicities. As Chait points out, with an abundance of irony, Donald Trump would seem to be the epitome of non-white, by his disavowal of the Museum’s identification of white culture’s insistence that “hard work is the key to success.”

The aspect of the graphic summary of whiteness that I found most curious, most racist, and most insulting to people of all colors and ethnicities is the chart’s assertion that white culture places “Emphasis on the Scientific Method,” with its valuation of “[o]bjective, rational linear thinking”; “[c]ause and effect relationships”; and “[q]uantitative emphasis.” The implication is that non-whites do not emphasize or care about the scientific method. So the scientific method, with its concern over validity of inference, and ruling out random and systematic errors, is just white privilege, and a microaggression against non-white people.

Really? Can the Smithsonian National Museum of African American History & Culture really mean that scientific punctilio is just another manifestation of racism and cultural imperialism? Chait seems to think so, quoting Glenn Singleton, president of Courageous Conversation, a racial-sensitivity training firm, who asserts that valuing “written communication over other forms” is “a hallmark of whiteness,” as is “scientific, linear thinking. Cause and effect.”

The Museum has apparently removed the graphic from its website, in response to a blitz of criticism from right-wing media and pundits.[2]  According to the Washington Post, the graphic has its origins in a 1978 book on White Awareness.[3] In response to the criticism, museum director Spencer Crew apologized and removed the graphic, agreeing that “it did not contribute to the discussion as planned.”[4]

The removal of the graphic is not really the point. Many people will now simply be bitter that they cannot publicly display their racist tropes. More important yet, many people will continue to believe that causal, rational, linear thinking is white, exclusionary, and even racist. Something to remember when you make your next Rule 702 motion.

[Chart: “Aspects & Assumptions of Whiteness and White Culture in the United States,” as formerly displayed on the Museum’s website.]


[1]  Jonathan Chait, “Is the Anti-Racism Training Industry Just Peddling White Supremacy?” New York Magazine (July 16, 2020).

[2]  Laura Gesualdi-Gilmore, “‘DEEPLY INSULTING’ African American museum accused of ‘racism’ over whiteness chart linking hard work and nuclear family to white culture,” The Sun (July 16, 2020); “DC museum criticized for saying ‘delayed gratification’ and ‘decision-making’ are aspects of ‘whiteness’,” Fox News (July 16, 2020) (noting that the National Museum of African American History and Culture received a tremendous outcry after equating the nuclear family and self-reliance to whiteness); Sam Dorman, “African-American museum removes controversial chart linking ‘whiteness’ to self-reliance, decision-making: The chart didn’t contribute to the ‘productive conversation’ they wanted to see,” Fox News (July 16, 2020); Mairead McArdle, “African American History Museum Publishes Graphic Linking ‘Rational Linear Thinking,’ ‘Nuclear Family’ to White Culture,” Nat’l Rev. (July 15, 2020).

[3]  Judy H. Katz, White Awareness: Handbook for Anti-Racism Training (1978).

[4]  Peggy McGlone, “African American Museum site removes ‘whiteness’ chart after criticism from Trump Jr. and conservative media,” Wash. Post (July 17, 2020).

Sharpiegate – Trump’s Assault on Scientific Expertise

July 10th, 2020

Trump lies so often, so irresponsibly, so ruthlessly, that the American people have become numb to the assault on truth. Remarkably, Trump’s lies are frequently casual, random, non-ideological, and wanton. When the lies are about scientifically verifiable processes and outcomes, the lies are particularly reprehensible because they further dumb the American people’s shaky aptitude for scientific discourse.

Take Trump’s lie last September that Hurricane Dorian would hit Alabama much harder than had been anticipated. Thousands of lies later, perhaps only a few may remember the doctored weather map, on which a falsified projection had been drawn with a sharpie pen, to suggest that the hurricane was moving towards southeastern Alabama. A few days later, the National Oceanic and Atmospheric Administration (NOAA) issued a statement that purported to support Trump’s bogus forecast.[1]

Now, almost a year later, the Inspector General for the Commerce Department, Peggy Gustafson, has issued a report that lambasts the White House (Trump and cronies) for pressuring the NOAA into issuing its unscientific, unsupportable statement.[2] The Inspector General found that the NOAA had politicized a straightforward scientific assessment, backed the Trumpian forecast, criticized the agency’s own scientists, and eroded public trust in the agency, by succumbing to pressure from the White House.

Of course, 40 percent of the United States’ electorate will not care, as long as they have their theocracy. Ms. Gustafson’s days are numbered, even as the End Times draw nigh for Trump. You may not need a weatherman to know which way the wind blows, but you do if you want to know which way the wind will blow.

Remember, that 40 percent could be on your jury. And there may be another 40 percent that blows the other way. Sharpiegate is a poignant reminder that abuse of science occurs in all three branches of government.


[1]  Andrew Freedman & Jason Samenow, “Investigation rebukes Commerce Department for siding with Trump over forecasters during Hurricane Dorian: Report confirms Commerce officials responded to orders from the White House,” Wash. Post (July 9, 2020).

[2]  Gustafson, Evaluation of NOAA’s September 6, 2019, Statement about Hurricane Dorian Forecasts (June 26, 2020).