TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Cancel Causation

March 9th, 2021

The Subversion of Causation into Normative Feelings

The late Professor Margaret Berger argued for the abandonment of general causation, or cause-in-fact, as an element of tort claims under the law.[1] Her antipathy to the requirement of showing causation ultimately led her to deprecate efforts to inject due scientific care into the gatekeeping of causation opinions. After a long, distinguished career as a law professor, Berger died in November 2010. Her animus against causation and Rule 702, however, was so strong that in her chapter in the third edition of the Reference Manual on Scientific Evidence, which came out almost one year after her death, she embraced the First Circuit’s notorious anti-Daubert decision in Milward, which also post-dated her passing.[2]

Despite this posthumous writing and publication by Professor Berger, there have been no further instances of Zombie scholarship or ghost authorship.  Nonetheless, the assault on causation has been picked up by Professor Alexandra D. Lahav, of the University of Connecticut School of Law, in a recent essay posted online.[3] Lahav’s essay is an extension of her work, “The Knowledge Remedy,” published last year.[4]

This second essay, entitled “Chancy Causation in Tort Law,” is the plaintiffs’ brief against counterfactual causation, which Lahav acknowledges is the dominant test for factual causation.[5] Lahav begins with a reasonable, reasonably understandable distinction between deterministic (necessary and sufficient) and probabilistic (or chancy in her parlance) causation.

The putative victim of a toxic exposure (such as glyphosate and Non-Hodgkin’s lymphoma) cannot show that his exposure was a necessary and sufficient determinant of his developing NHL. Not everyone similarly exposed develops NHL; and not everyone with NHL has been exposed to glyphosate. In Lahav’s terminology, specific causation in such a case is “chancy.” Lahav asserts, but never proves, that the putative victim “could never prove that he would not have developed cancer if he had not been exposed to that herbicide.”[6]

Lahav’s example presents a causal claim that involves both general and specific causation, and that is easily distinguishable from a claim that death was caused by being run over by a high-speed train. Despite this difference, Lahav never marshals any evidence to show why the putative glyphosate victim cannot show a probability that his case is causally related by adverting to the magnitude of the relative risk created by the prior exposure.

Repeatedly, Lahav asserts that when causation is chancy – probabilistic – it can never be shown by counterfactual causal reasoning, which she claims “assumes deterministic causation.” And she further asserts that because probabilistic causation cannot fit the counterfactual model, it can never “meet the law’s demand for a binary determination of cause.”[7]

Contrary to these ipse dixits, probabilistic causation can, at both the general and the specific, or individual, levels be described in terms of counterfactuals. The modification requires us, of course, to describe the baseline situation as a rate or frequency of events, and the post-exposure world as one with a modified rate or frequency. The exposure is the cause of the change in event rates. Modern physics has had to content itself with probability statements, rather than the precise deterministic “billiard ball” physics that is so useful in a game of snooker, but less so in describing quarks. In the first half of the 20th century, the biological sciences learned, with some difficulty, that they too must embrace probabilistic models, in genetic science as well as in epidemiology. Many biological causation models are completely stated in terms of probabilities that are modified by specified conditions.
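The rate-based counterfactual can be made concrete with a small numerical sketch (the incidence figures below are invented for illustration, not drawn from any study):

```python
# Hypothetical incidence rates, invented for illustration only.
baseline_rate = 10 / 100_000   # annual disease rate absent the exposure
exposed_rate = 30 / 100_000    # annual disease rate with the exposure

relative_risk = exposed_rate / baseline_rate   # a threefold risk
excess_rate = exposed_rate - baseline_rate     # the change the exposure causes

# The counterfactual is stated at the level of rates: but for the
# exposure, the group's event rate would have been baseline_rate.
print(relative_risk, excess_rate)
```

Nothing in this formulation assumes determinism; the counterfactual contrast is between two event rates, not between two certainties.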

When Lahav gets to providing an example of where chancy causation fails in reasoning about individual causation, she gives a meaningless hypothetical of a woman, Mary, who is a smoker who develops lung cancer. To remove any resemblance to real-world cases, Lahav postulates that Mary had a 20% increased risk of lung cancer from smoking (a relative risk of 1.2). Thus, Lahav suggests that:

“[i]f Mary is a smoker and develops lung cancer, even after she has developed lung cancer it would still be the case that the cause of her cancer could only be described as a likelihood of 20 percent greater than what it would have been otherwise. Her doctor would not be able to say to her ‘Mary, if you had not smoked, you would not have developed this cancer’ because she might have developed it in any event.”

A more pertinent, less counterfactual hypothetical is that Mary had a 2,000% increase in risk from her tobacco smoking. This corresponds to relative risks in the range of 20, seen in many, if not most, epidemiologic studies of smoking and lung cancer. For such a risk, the individual probability of causation would be well over 0.9.
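The arithmetic behind such probabilities of causation is simple. On the standard assumption that the excess risk is distributed stochastically across the exposed group, the attributable fraction among the exposed is (RR − 1)/RR. A minimal sketch (the function name is mine, for illustration):

```python
def probability_of_causation(relative_risk: float) -> float:
    """Attributable fraction among the exposed: (RR - 1) / RR.

    Assumes the excess risk falls stochastically across the exposed
    group -- the standard, and contested, assumption.
    """
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk

# Lahav's hypothetical: RR of 1.2 yields a probability of causation
# of roughly 0.17, far below the preponderance threshold of 0.5.
# Smoking and lung cancer: RR of 20 yields 0.95, well above 0.9.
```

The contrast explains why Lahav’s choice of a relative risk of 1.2 does the argumentative work: it is the stipulated magnitude, not any conceptual defect in chancy causation, that keeps Mary below the threshold.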

To be sure, there are critics of using the probability of causation, because it assumes that the risk is distributed stochastically, which may not be correct. Claimants are, of course, free to try to show that more of the risk fell on them for some reason, but this requires evidence!

Lahav attempts to answer this point, but her argument runs off its rails.  She notes that:

“[i]f there is an 80% chance that a given smoker’s cancer is caused by smoking, and Mary smoked, some might like to say that she has met her burden of proof.

This approach confuses the strength of the evidence with its content. Assume that it is more likely than not, based on recognized scientific methodology, that for 80% of smokers who contract lung cancer their cancer is attributable to smoking. That fact does not answer the question of whether we ought to infer that Mary’s cancer was caused by smoking. I use the word ought advisedly here. Suppose Mary and the cigarette company stipulate that 80% of people like Mary will contract lung cancer, the burden of proof has been met. The strength of the evidence is established. The next question regards the legal permissibility of an inference that bridges the gap between the run of cases and Mary. The burden of proof cannot dictate the answer. It is a normative question of whether to impose liability on the cigarette company for Mary’s harm.”[8]

Lahav is correct that an 80% probability of causation might be based upon very flimsy evidence, and so that probability alone cannot establish that the plaintiff has a “submissible” case. But if the 80% probability of causation is stipulated, and not subject to challenge, then Lahav’s claim is remarkable and contrary to most of the scholarship that has followed the infamous Blue Bus hypothetical. Indeed, she is making the very argument that tobacco companies made in opposition to the use of epidemiologic evidence in tobacco cases, in the 1950s and 1960s.

Lahav advances a perverse skepticism that any inferences about individuals can be drawn from information about rates or frequencies in groups of similar individuals. Yes, there may always be some debate about what is “similar,” but successive studies may well draw the net tighter around the appropriate class. Lahav’s skepticism, indeed her outright denialism, is common among some in the legal academy, but it ignores that group-to-individual inferences are drawn in epidemiology in multiple contexts. Regressions for disease prediction are based upon individual data within groups, and the regression equations are then applied to future individuals to help predict those individuals’ probability of future disease (such as heart attack or breast cancer), or their probability of cancer-free survival after a specific therapy. Group-to-individual inferences are, of course, also the basis for prescribing decisions in clinical medicine. These are not normative inferences; they are based upon evidence-based causal thinking.

Lahav suggests that the metaphor of a “link” between exposure and outcome implies “something is determined and knowable, which is not possible in chancy causation cases.”[9] Not only is the link metaphor used all the time by sloppy journalists and some scientists, but when they use it, they mostly use it in the context of what Lahav would characterize as “chancy causation.” Even when speaking more carefully, and eschewing the link metaphor, scientists speak of probabilistic causation as something that is real, based upon evidence and valid inferences, not normative judgments or emotive reactions.

The probabilistic nature of the probability of causation does not affect its epistemic status.

The law does not assume that binary deterministic causality, as Lahav describes, is required to apply “but for” or counterfactual analysis. Juries are instructed to determine whether the party with the burden of proof has prevailed on each element of the claim, by a preponderance of the evidence. This civil jury instruction is almost always explained in terms of a posterior probability greater than 0.5, whether the claimed tort is a car crash or a case of Non-Hodgkin’s lymphoma.

Elsewhere, Lahav struggles with the concept of probability. Her essay suggests that

“[p]robability follows certain rules, or tendencies, but these regular laws do not abolish chance. There is a chance that the exposure caused his cancer, and a chance that it did not.”[10]

The use of chance here, in contradistinction to probability, is so idiosyncratic, and unexplained, that it is impossible to know what is meant.

Manufactured Doubt

Lahav’s essay twice touches upon a strawperson argument that stretches to claim that “manufacturing doubt” does not undermine her arguments about the nature of chancy causation. To Lahav, the likes of David Michaels have “demonstrated” that manufactured uncertainty is a genuine problem, but not one that affects her main claims. Nevertheless, Lahav remarkably sees no problem with manufactured certainty in the advocacy science of many authors.[11]

Lahav swallows Michaels’ line, lure and all, and goes so far as to describe Rule 702 challenges to causal claims as having the “negative effect” of producing “incentives to sow doubt about epidemiologic studies using methodological battles, a strategy pioneered by the tobacco companies … .”[12] There is no corresponding concern about the negative effect of producing incentives to overstate the findings, or the validity of inferences, in order to get to a verdict for claimants.

Post-Modern Causation

What we have then is the ultimate post-modern program, which asserts that cause is “irreducibly chancy,” and thus indeterminate, and rightfully in the realm of “normative decisions.”[13] Lahav maintains there is an extreme plasticity to the very concept of causation:

“Causation in tort law can be whatever judges want it to be… .”[14]

I for one sincerely doubt it. And if judges make up some Lahav-inspired concept of normative causation, the scientific community would rightfully scoff.

Taking Lahav’s earlier paper, “The Knowledge Remedy,” along with this paper, the reader will see that Lahav is arguing for a rather extreme, radical precautionary principle approach to causation. There is a germ of truth that gatekeeping is affected by the moral quality of the defendant or its product. In the early days of the silicone gel breast implant litigation, some judges were influenced by suggestions that breast implants were frivolous products, made and sold to cater to male fantasies. Later, upon more mature reflection, judges recognized that roughly one third of breast implant surgeries were post-mastectomy, and that silicone was an essential biomaterial.  The recognition brought a sea change in critical thinking about the evidence proffered by claimants, and ultimately brought a recognition that the claimants were relying upon bogus and fraudulent evidence.[15]

—————————————————————————————–

[1]  Margaret A. Berger, “Eliminating General Causation: Notes towards a New Theory of Justice and Toxic Torts,” 97 Colum. L. Rev. 2117 (1997).

[2]  Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[3]  Alexandra D. Lahav, “Chancy Causation in Tort,” (May 15, 2020) [cited as Chancy], available at https://ssrn.com/abstract=3633923 or http://dx.doi.org/10.2139/ssrn.3633923.

[4]  Alexandra D. Lahav, “The Knowledge Remedy,” 98 Texas L. Rev. 1361 (2020). See “The Knowledge Remedy Proposal” (Nov. 14, 2020).

[5]  Chancy at 2 (citing American Law Institute, Restatement (Third) of Torts: Physical & Emotional Harm § 26 & com. a (2010) (describing legal history of causal tests)).

[6]  Id. at 2-3.

[7]  Id.

[8]  Id. at 10.

[9]  Id. at 12.

[10]  Id. at 2.

[11]  Id. at 8 (citing David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020), among others).

[12]  Id. at 18.

[13]  Id. at 6.

[14]  Id. at 3.

[15]  Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Reference Manual on Scientific Evidence v4.0

February 28th, 2021

The need for revisions to the third edition of the Reference Manual on Scientific Evidence (RMSE) has been apparent since its publication in 2011. A decade has passed, and the federal agencies involved in the third edition, the Federal Judicial Center (FJC) and the National Academies of Sciences, Engineering, and Medicine (NASEM), are assembling staff to prepare the long-needed revisions.

The first sign of life for this new edition came back on November 24, 2020, when the NASEM held a short, closed-door virtual meeting to discuss planning for a fourth edition.[1] The meeting was billed by the NASEM as “the first meeting of the Committee on Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence.” The Committee members heard from John S. Cooke (FJC Director), and Alan Tomkins and Reggie Sheehan, both of the National Science Foundation (NSF). The stated purpose of the meeting was to review the third edition of the RMSE to identify “areas of science, technology, and medicine that may be candidates for new or updated chapters in a proposed new (fourth) edition of the manual.” The only public pronouncement from the first meeting was that the committee would sponsor a workshop on the topic of new chapters for the RMSE, in early 2021.

The Committee’s second meeting took place a week later, again in closed session.[2] The stated purpose of the Committee’s second meeting was to review the third edition of the RMSE, and to discuss candidate areas for inclusion as new and updated chapters for a fourth edition.

Last week saw the Committee’s third, public meeting. The meeting spanned two days (Feb. 24 and 25, 2021), and was open to the public. The meeting was sponsored by NASEM, FJC, along with the NSF, and was co-chaired by Thomas D. Albright, Professor and Conrad T. Prebys Chair at the Salk Institute for Biological Studies, and the Hon. Kathleen McDonald O’Malley, who sits on the United States Court of Appeals for the Federal Circuit. Identified members of the committee include:

Steven M. Bellovin, professor in the Computer Science department at Columbia University;

Karen Kafadar, Departmental Chair and Commonwealth Professor of Statistics at the University of Virginia, and former president of the American Statistical Association;

Andrew Maynard, professor, and director of the Risk Innovation Lab at the School for the Future of Innovation in Society, at Arizona State University;

Venkatachalam Ramaswamy, Director of the Geophysical Fluid Dynamics Laboratory of the National Oceanic and Atmospheric Administration (NOAA) Office of Oceanic and Atmospheric Research (OAR), studying climate modeling and climate change;

Thomas Schroeder, Chief Judge for the U.S. District Court for the Middle District of North Carolina;

David S. Tatel, United States Court of Appeals for the District of Columbia Circuit; and

Steven R. Kendall, Staff Officer

The meeting comprised five panel presentations, made up of remarkably accomplished and talented speakers. Each panel’s presentations were followed by discussion among the panelists, and the committee members. Some panels answered questions submitted from the public audience. Judge O’Malley opened the meeting with introductory remarks about the purpose and scope of the RMSE, and of the inquiry into additional possible chapters.

  1. Challenges in Evaluating Scientific Evidence in Court

The first panel consisted entirely of judges, who held forth on their approaches to judicial gatekeeping of expert witnesses, and their approach to scientific and technical issues. Chief Judge Schroeder moderated the presentations of panelists:

Barbara Parker Hervey, Texas Court of Criminal Appeals;

Patti B. Saris, Chief Judge of the United States District Court for the District of Massachusetts,  member of President’s Council of Advisors on Science and Technology (PCAST);

Leonard P. Stark, U.S. District Court for the District of Delaware; and

Sarah S. Vance, Judge (former Chief Judge) of the U.S. District Court for the Eastern District of Louisiana, chair of the Judicial Panel on Multidistrict Litigation.

  2. Emerging Issues in the Climate and Environmental Sciences

Paul Hanle, of the Environmental Law Institute moderated presenters:

Joellen L. Russell, the Thomas R. Brown Distinguished Chair of Integrative Science and Professor at the University of Arizona in the Department of Geosciences;

Veerabhadran Ramanathan, Edward A. Frieman Endowed Presidential Chair in Climate Sustainability at the Scripps Institution of Oceanography at the University of California, San Diego;

Benjamin D. Santer, atmospheric scientist at Lawrence Livermore National Laboratory; and

Donald J. Wuebbles, the Harry E. Preble Professor of Atmospheric Science at the University of Illinois.

  3. Emerging Issues in Computer Science and Information Technology

Josh Goldfoot, Principal Deputy Chief, Computer Crime & Intellectual Property Section, at U.S. Department of Justice, moderated panelists:

Jeremy J. Epstein, Deputy Division Director of Computer and Information Science and Engineering (CISE) and Computer and Network Systems (CNS) at the National Science Foundation;

Russ Housley, founder of Vigil Security, LLC;

Subbarao Kambhampati, professor of computer science at Arizona State University; and

Alice Xiang, Senior Research Scientist at Sony AI.

  4. Emerging Issues in the Biological Sciences

Panel four was moderated by Professor Ellen Wright Clayton, the Craig-Weaver Professor of Pediatrics, and Professor of Law and of Health Policy at Vanderbilt Law School, at Vanderbilt University. Her panelists were:

Dana Carroll, distinguished professor in the Department of Biochemistry at the University of Utah School of Medicine;

Yaniv Erlich, Chief Executive Officer of Eleven Therapeutics, Chief Science Officer of MyHeritage;

Steven E. Hyman, director of the Stanley Center for Psychiatric Research at Broad Institute of MIT and Harvard; and

Philip Sabes, Professor Emeritus in Physiology at the University of California, San Francisco (UCSF).

  5. Emerging Areas in Psychology, Data, and Statistical Sciences

Gary Marchant, Lincoln Professor of Emerging Technologies, Law and Ethics, at Arizona State University’s Sandra Day O’Connor College of Law, moderated panelists:

Xiao-Li Meng, the Whipple V. N. Jones Professor of Statistics, Harvard University, and the Founding Editor-in-Chief of Harvard Data Science Review;

Rebecca Doerge, Glen de Vries Dean of the Mellon College of Science at Carnegie Mellon University, member of the Dietrich College of Humanities and Social Sciences’ Department of Statistics and Data Science, and of the Mellon College of Science’s Department of Biological Sciences;

Daniel Kahneman, Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem; and

Goodwin Liu, Associate Justice of the California Supreme Court.

The Proceedings of this two-day meeting were recorded and will be published. It is unclear from the website materials whether the verbatim remarks will be included, but regardless, the proceedings should warrant careful reading.

Judge O’Malley, in her introductory remarks, emphasized that the RMSE must be a neutral, disinterested source of information for federal judges, an aspirational judgment from which there can be no dissent. More controversial will be Her Honor’s assessment that epidemiologic studies can “take forever,” and other judges’ suggestion that plaintiffs lack financial resources to put forward credible, reliable expert witnesses. Judge Vance corrected the course of the discussion by pointing out that MDL plaintiffs were not disadvantaged, but no one pointed out that plaintiffs’ counsel were among the wealthiest individuals in the United States, and that they have been known to sponsor epidemiologic and other studies that wind up as evidence in court.

Panel One was perhaps the most discomforting experience, as it involved revelations about how sausage is made in the gatekeeping process. The panel was remarkable for including a state court judge from Texas, Judge Barbara Parker Hervey, of the Texas Court of Criminal Appeals. Judge Hervey remarked that [in her experience] if we judges “can’t understand it, we won’t read it.” Her dictum raises interesting issues. No doubt, in some instances, the judicial failure of comprehension is the fault of the lawyers. What happens when the judges “can’t understand it”? Do they ask for further briefing? Or do they ask for a hearing with viva voce testimony from expert witnesses? The point was not followed up.

Leonard P. Stark’s insights were interesting in that his docket in the District of Delaware is flooded with patent and Hatch-Waxman Act litigation. Judge Stark’s extensive educational training is in politics and political science. The docket volume Judge Stark described, however, raised issues about how much attention he could give to any one case.

When the panel was asked how they dealt with scientific issues, Judge Saris discussed her presiding over In re Neurontin, which was a “big challenge for me to understand,” with no randomized trials or objective assessments by the litigants.[3] Judge Vance discussed her experience of presiding in a low-level benzene exposure case, in which plaintiff claimed that his acute myelogenous leukemia was caused by gasoline.[4]

Perhaps the key difference in approach to Rule 702 emerged when the judges were asked whether they read the underlying studies. Judge Saris did not answer directly, but stated she reads the reports. Judge Vance, on the other hand, noted that she reads the relied upon studies. In her gasoline-leukemia case, she read the relied-upon epidemiologic studies, which she described as a “hodge podge,” and which were misrepresented by the expert witnesses and counsel. She emphasized the distortions of the adversarial system and the need to moderate its excesses by validating what exactly the expert witnesses had relied upon.

This division in judicial approach was seen again when Professor Karen Kafadar asked how the judges dealt with peer review. Judge Saris seemed to suggest that the peer-reviewed published article was prima facie reliable. Others disagreed and noted that peer reviewed articles can have findings that are overstated, and wrong. One speaker noted that Jerome Kassirer had downplayed the significance of, and the validation provided by, peer review, in the RMSE (3rd ed 2011).

Curiously, there was no discussion of Rule 703, either in Judge O’Malley’s opening remarks on the RMSE, or in the first panel discussion. When someone from the audience submitted a question about the role of Rule 703 in the gatekeeping process, the moderator did not read it.

Panel Two. The climate change panel was a tour de force of the case for anthropogenic climate change. To some, the presentations may have seemed like a reprise of The Day After Tomorrow. Indeed, the science was presented so confidently, if not stridently, that one of the committee members asked whether there could be any reasonable disagreement. The panelists responded essentially by pointing out that there could be no good faith opposition. The panelists were much less convincing on the issue of attributability. None of the speakers addressed the appropriateness vel non of climate change litigation, when the federal and state governments encouraged, licensed, and regulated the exploitation and use of fossil fuel reserves.

Panel Four. Dr. Clayton’s panel was fascinating and likely to lead to new chapters. Professor Hyman presented on heritability, a subject that did not receive much attention in the RMSE third edition. With the advent of genetic claims of susceptibility and defenses of mutation-induced disease, courts will likely need some good advice on navigating the science. Dana Carroll presented on human genome editing (CRISPR). Philip Sabes presented on brain-computer interfaces, which have progressed well beyond the level of sci-fi thrillers, such as The Brain That Wouldn’t Die (“Jan in the Pan”).

In addition to the therapeutic applications, Sabes discussed some of the potential forensic uses, such as lie detectors, pain quantification, and the like. Yaniv Erlich, of MyHeritage, discussed advances in forensic genetic genealogy, which have made a dramatic entrance into the common imagination through the apprehension of Joseph James DeAngelo, the Golden State killer. The technique of triangulating DNA matches from consumer DNA databases has other applications, of course, such as identifying lost heirs and resolving paternity issues.

Panel Five. Professor Marchant’s panel may well have identified some of the most salient needs for the next edition of the RMSE. Nobel Laureate Daniel Kahneman presented some of the highlights from his forthcoming book about “noise” in human judgment.[5] Kahneman’s expansion upon his previous thinking about the sources of error in human – and scientific – judgment is a much-needed addition to the RMSE. Along the same lines, Professor Xiao-Li Meng presented on selection bias, and how it pervades scientific work and detracts from the strength of evidence, in the form of:

  1. cherry picking
  2. subgroup analyses
  3. unprincipled handling of outliers
  4. selection in methodologies (different tests)
  5. selection in due diligence (check only when you don’t like results)
  6. publication bias that results from publishing only impressive or statistically significant results
  7. selection in reporting, not reporting limitations of all analyses
  8. selection in understanding

Professor Meng’s insights are sorely lacking in the third edition of the RMSE, and among judicial gatekeepers generally. All too often, undue selectivity in methodologies and in relied-upon data is treated by judges as an issue that “goes to the weight, not the admissibility” of expert witness opinion testimony. In actuality, selection biases, and other systematic and cognitive biases, are as important as, if not more important than, random error assessments. Indeed, a close look at the RMSE third edition reveals a close embrace of the amorphous, anything-goes “weight of the evidence” approach in the epidemiology chapter. That chapter marginalizes meta-analyses and fails to mention systematic review techniques altogether. The chapter on clinical medicine, however, takes a divergent approach, emphasizing the hierarchy of evidence inherent in different study types, and the need for principled and systematic reviews of the available evidence.[6]

The Committee co-chairs and panel moderators did a wonderful job to identify important new trends in genetics, data science, error assessment, and computer science, and they should be congratulated for their efforts. Judge O’Malley is certainly correct in saying that the RMSE must be a neutral source of information on statistical and scientific methodologies, and it needs to be revised and updated to address errors and omissions in the previous editions. The legal community should look for, and study, the published proceedings when they become available.

——————————————————————————————————

[1]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting” (Nov. 24, 2020).

[2]  See “Emerging Areas of Science, Engineering, and Medicine for the Courts: Identifying Chapters for a Fourth Edition of The Reference Manual on Scientific Evidence – Committee Meeting 2 (Virtual)” (Dec. 1, 2020).

[3]  In re Neurontin Marketing, Sales Practices & Prods. Liab. Litig., 612 F. Supp. 2d 116 (D. Mass. 2009) (Saris, J.).

[4]  Burst v. Shell Oil Co., 104 F.Supp.3d 773 (E.D.La. 2015) (Vance, J.), aff’d, ___ Fed. App’x ___, 2016 WL 2989261 (5th Cir. May 23, 2016), cert. denied, 137 S.Ct. 312 (2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[5]  Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein, Noise: A Flaw in Human Judgment (anticipated May 2021).

[6]  See John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” Reference Manual on Scientific Evidence 723-24 (3d ed. 2011) (discussing hierarchy of medical evidence, with systematic reviews at the apex).

On Praising Judicial Decisions – In re Viagra

February 8th, 2021

We live in strange times. A virulent form of tribal stupidity gave us Trumpism, a personality cult in which it is impossible to function in the Republican party and criticize der Führer. Even a diehard right-winger such as Liz Cheney, who dared to criticize Trump, is censured, for nothing more than being disloyal to a cretin who fomented an insurrection that resulted in the murder of a Capitol police officer and the deaths of several other people.[1]

Unfortunately, a similar, even if less extreme, tribal chauvinism affects legal commentary, from both sides of the courtroom. When Judge Richard Seeborg issued an opinion, early in 2020, in the melanoma – phosphodiesterase type 5 inhibitor (PDE5i) litigation,[2] I praised the decision for not shirking the gatekeeping responsibility even when the causal claim was based upon multiple, consistent, statistically significant observational studies that showed an association between PDE5i medications and melanoma.[3] Although many of the plaintiffs’ relied-upon studies reported statistically significant associations between PDE5i use and melanoma occurrence, they also found similar-size associations with non-melanoma skin cancers. Because skin carcinomas were not part of the hypothesized causal mechanism, the study findings strongly suggested a common, unmeasured confounding variable such as skin damage from ultraviolet light. The plaintiffs’ expert witnesses’ failure to account for confounding was fatal under Rule 702, and Judge Seeborg’s recognition of this defect, and his willingness to go beyond multiple, consistent, statistically significant associations, was what made the decision important.

There were, however, problems and even a blatant error in the decision that required attention. Although the error was harmless in that its correction would not have required, or even suggested, a different result, Judge Seeborg, like many other judges and lawyers, tripped up over the proper interpretation of a confidence interval:

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”[4]

This statement about the true value is simply wrong. The provenance of this error is old, but the mistake was unfortunately amplified in the Third Edition of the Reference Manual on Scientific Evidence,[5] in its chapter on epidemiology.[6] The chapter, which is often cited, twice misstates the meaning of a confidence interval:

“A confidence interval provides both the relative risk (or other risk measure) found in the study and a range (interval) within which the risk likely would fall if the study were repeated numerous times.”[7]

and

“A confidence interval is a range of possible values calculated from the results of a study. If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the same population. Thus, the width of the interval reflects random error.”[8]

The 95% confidence interval does represent random error: 1.96 standard errors above and below the point estimate from the sample data. The confidence interval is not the range of possible values, which could well be anything, but the range of estimates reasonably compatible with this one, particular study’s sample statistic.[9] Intervals have lower and upper bounds, which are themselves random variables, with approximately normal (or some other specified) distributions. The essence of the interval is that no value within it would be rejected as a null hypothesis based upon the data collected for the particular sample. Although the chapter on statistics in the Reference Manual accurately describes confidence intervals, judges and many lawyers are misled by the misstatements in the epidemiology chapter.
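The frequentist meaning can be made concrete with a short simulation (a sketch of my own, with arbitrary illustrative numbers, not anything drawn from the Manual or the opinion): generate many studies from a fixed true value, compute each study’s 95% interval, and observe that about 95% of the intervals cover the truth, even though any single realized interval either contains the true value or does not.

```python
import math
import random

def ci_coverage(n=50, true_value=0.0, sims=2000, seed=1):
    """Simulate repeated studies and count how often the 95% confidence
    interval for a sample mean covers the fixed true value.

    The '95%' describes the long-run coverage of the *procedure*, not
    the probability that any one realized interval contains the truth --
    the very point the quoted passage gets wrong."""
    random.seed(seed)
    covered = 0
    for _ in range(sims):
        sample = [random.gauss(true_value, 1.0) for _ in range(n)]
        mean = sum(sample) / n
        se = 1.0 / math.sqrt(n)  # known sigma = 1, for simplicity
        lo, hi = mean - 1.96 * se, mean + 1.96 * se
        if lo <= true_value <= hi:
            covered += 1
    return covered / sims

print(ci_coverage())  # close to 0.95
```

Each simulated interval either captures the fixed truth or misses it; the 95% figure is a property of the long run of repetitions, which is why the "95 percent chance that the true value is between .8 and 1.9" formulation is wrong.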

Given the misdirection created by the Federal Judicial Center’s manual, Judge Seeborg’s erroneous definition of a confidence interval is understandable, but it should be noted in the context of praising the important gatekeeping decision in In re Viagra. Certainly our litigation tribalism should not “allow us to believe” impossible things.[11] The time to revise the Reference Manual is long overdue.

_____________________________________________________________________

[1]  John Ruwitch, “Wyoming GOP Censures Liz Cheney For Voting To Impeach Trump,” Nat’l Pub. Radio (Feb. 6, 2021).

[2]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., 424 F. Supp. 3d 781 (N.D. Cal. 2020) [Viagra].

[3]  See “Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma” (Jan. 24, 2020).

[4]  Id. at 787.

[5]  Federal Judicial Center, Reference Manual on Scientific Evidence (3rd ed. 2011).

[6]  Michael D. Green, D. Michal Freedman, & Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549 (3rd ed. 2011).

[7]  Id. at 573.

[8]  Id. at 580.

[9] Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers 171, 173-74 (3rd ed. 2015). See also Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

[10]  See, e.g., Derek C. Smith, Jeremy S. Goldkind, and William R. Andrichik, “Statistically Significant Association: Preventing the Misuse of the Bradford Hill Criteria to Prove Causation in Toxic Tort Cases,” 86 Defense Counsel J. 1 (2020) (mischaracterizing the meaning of confidence intervals based upon the epidemiology chapter in the Reference Manual).

[11]  See, e.g., James Beck, “Tort Pandemic Countermeasures? The Ten Best Prescription Drug/Medical Device Decisions of 2020,” Drug and Device Law Blog (Dec. 30, 2020) (suggesting that Judge Seeborg’s decision represented the rejection of plausibility and a single “association” as insufficient); Steven Boranian, “General Causation Experts Excluded In Viagra/Cialis MDL,” (Jan. 23, 2020).

Susan Haack on Judging Expert Testimony

December 19th, 2020

Susan Haack has written frequently about expert witness testimony in the United States legal system. At times, Haack’s observations are interesting and astute, perhaps more so because she has no training in the law or legal scholarship. She trained in philosophy, and her works no doubt are taken seriously because of her academic seniority; she is the Distinguished Professor in the Humanities, Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy and Professor of Law at the University of Miami.

On occasion, Haack has used her background and experience from teaching about epistemology to good effect in elucidating how epistemological issues are handled in the law. For instance, her exploration of the vice of credulity, as voiced by W.K. Clifford,[1] is a useful counterweight to the shrill agnotologists, Robert Proctor, Naomi Oreskes, and David Michaels.

Professor Haack has also been a source of confused, fuzzy, and errant advice when it comes to the issue of Rule 702 gatekeeping. Haack’s most recent article on “Judging Expert Testimony” is an example of some unfocused thinking about one of the most important aspects of modern litigation practice, admissibility challenges to expert witness opinion testimony.[2]

Uncontroversially, Haack finds the case law on expert witness gatekeeping lacking in “effective practical guidance,” and she seeks to offer courts, and presumably litigants, “operational help.” Haack sets out to explain “why the legal formulae” are not of practical use. Haack notes that terms such as “reliable” and “sufficient” are qualitative, and vague,[3] much like “obscene” and other adjectives that gave the courts such a difficult time. Rules with vague terms such as these give judges very little guidance. As a philosopher, Haack might have noted that the various judicial formulations of gatekeeping standards are couched as conclusions, devoid of explanatory force.[4] And she might have pointed out that the judicial tendency to confuse reliability with validity has muddled many court opinions and lawyers’ briefs.

Focusing specifically on the field of epidemiology, Haack attempts to help courts by offering questions that judges and lawyers should be asking. She tells us that the Reference Manual for Scientific Evidence is of little practical help, which is a bit unfair.[5] The Manual in its present form has problems, but ultimately the performance of gatekeepers can be improved only if the gatekeepers develop some aptitude and knowledge in the subject matter of the expert witnesses who are undergoing Rule 702 challenges. Haack seems unduly reluctant to acknowledge that gatekeeping will require subject matter expertise. The chapter on statistics in the current edition of the Manual, by David Kaye and the late David Freedman, is a rich resource for judges and lawyers in evaluating statistical evidence, including statistical analyses that appear in epidemiologic studies.

Why do judges struggle with epidemiologic testimony? Haack unwittingly shows the way by suggesting that “[e]pidemiological testimony will be to the effect that a correlation, an increased relative risk, has, or hasn’t, been found, between exposure to some substance (the alleged toxin at issue in the case) and some disease or disorder (the alleged disease or disorder the plaintiff claims to have suffered)… .”[6] Some philosophical parsing of the difference between “correlation” and “increased risk,” two very different things, might have been in order. Haack suggests an incorrect identity between correlation and increased risk that has confused courts as well as some epidemiologists.

Haack suggests asking various fairly obvious questions about the soundness of the data, measurements, study design, and data interpretation. Haack gives the example of failing to ascertain exposure to an alleged teratogen during the first trimester of pregnancy as a failure of study design that could obscure a real association. Curiously, she claims that some of Merrell Dow’s studies of Bendectin did such a thing, citing not any publications but the second-hand account of a trial judge.[7] Beyond the objectionable lack of scholarship, the example comes from a medication exposure that has been exculpated as thoroughly as possible from the dubious litigation claims of its teratogenicity. The misleading example raises the question why Haack would choose a Bendectin case, from a litigation that was punctuated by fraud and perjury from plaintiffs’ expert witnesses, involving a medication that has been shown to be safe and effective in pregnancy.[8]

Haack balks when it comes to statistical significance, which she tells us is merely based upon a convention, and set “high” to avoid false alarms.[9] Haack’s dismissive attitude cannot be squared with the absolute need to address random error and to assess whether the research claim has been meaningfully tested.[10] Haack would reduce the assessment of random error to the uncertainties of eyeballing sample size. She tells us that:

“But of course, the larger the sample is, then, other things being equal, the better the study. Andrew Wakefield’s dreadful work supposedly finding a correlation between MMR vaccination, bowel disorders, and autism—based on a sample of only 12 children — is a paradigm example of a bad study.”[11]

Sample size was the least of Wakefield’s problems, but more to the point, in some study designs for some hypotheses, a sample of 12 may be quite adequate to the task, and capable of generating robust and even statistically significant findings.
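The point about small samples can be illustrated with an exact binomial (sign) test, on hypothetical numbers of my own choosing: in a paired design with only 12 subjects, a lopsided result under a 50/50 null hypothesis yields a p-value well below the conventional 0.05 threshold.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): a one-sided exact p-value,
    as used in the sign test for paired observations."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical paired design: 12 subjects, each scored for whether the
# outcome improved under treatment; the null is a 50/50 coin flip.
p_value = binom_sf(11, 12)  # 11 of 12 improve
print(round(p_value, 5))    # 0.00317
```

With 11 of 12 subjects improving, the exact p-value is 13/4096 ≈ 0.003, which shows that, for an appropriate design and hypothesis, twelve subjects can generate a statistically significant finding.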

Inevitably, Haack alights upon personal bias or conflicts of interest, as a subject of inquiry.[12] Of course, this is one of the few areas that judges and lawyers understand all too well, and do not need encouragement to pursue. Haack dives in, regardless, to advise asking:

“Do those who paid for or conducted a study have an interest in reaching a given conclusion (were they, for example, scientists working for manufacturers hoping to establish that their medication is effective and safe, or were they scientists working, like Wakefield, with attorneys for one party or another)?”[13]

Speaking of bias, we can detect some in how Haack frames the inquiry. Do scientists work for manufacturers (Boo!) or were they “like Wakefield” working for attorneys for a party? Haack cannot seem to bring herself to say that Wakefield, and many other expert witnesses, worked for plaintiffs and plaintiffs’ counsel, a.k.a., the lawsuit industry. Perhaps Haack included such expert witnesses as working for those who manufacture lawsuits. Similarly, in her discussion of journal quality, she notes that some journals carry advertisements from manufacturers, or receive financial support from them. There is a distinct lack of symmetry discernible in the lack of Haack’s curiosity about journals that are run by scientists or physicians who belong to advocacy groups, or who regularly testify for plaintiffs’ counsel.

There are many other quirky opinions here, but I will conclude with the obvious point that in the epidemiologic literature, there is a huge gulf between reporting associations and drawing causal conclusions. Haack asks her readers to remember “that epidemiological studies can only show correlations, not causation.”[14] This suggestion ignores her own article’s discussion of certain clinical trial results, which do “show” causal relationships. And epidemiologic studies can show strong, robust, consistent associations, with exposure-response gradients, not likely explained by random variation; collectively, these findings can show causation in appropriate cases.

My recommendation is to ignore Haack’s suggestions and to pay closer attention to the subject matter of the expert witness who is under challenge. If the subject matter is epidemiology, open a few good textbooks on the subject. On the legal side, a good treatise such as The New Wigmore will provide much more illumination and guidance for judges and lawyers than vague, general suggestions.[15]


[1] William Kingdon Clifford, “The Ethics of Belief,” in L. Stephen & F. Pollock, eds., The Ethics of Belief 70-96 (1877) (“In order that we may have the right to accept [someone’s] testimony as ground for believing what he says, we must have reasonable grounds for trusting his veracity, that he is really trying to speak the truth so far as he knows it; his knowledge, that he has had opportunities of knowing the truth about this matter; and his judgement, that he has made proper use of those opportunities in coming to the conclusion which he affirms.”), quoted in Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020).

[2]  Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020) [cited as Haack].

[3]  Haack at 21.

[4]  See, e.g., “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions”; “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702”; “Judicial Dodgers – Weight not Admissibility”; “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent.”

[5]  Haack at 21.

[6]  Haack at 22.

[7]  Haack at 24, citing Blum v. Merrell Dow Pharms., Inc., 33 Phila. Cty. Rep. 193, 214-17 (1996).

[8]  See, e.g., “Bendectin, Diclegis & The Philosophy of Science” (Oct. 23, 2013).

[9]  Haack at 23.

[10]  See generally Deborah Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018).

[11]  Haack at 23-24 (emphasis added).

[12]  Haack at 24.

[13]  Haack at 24.

[14]  Haack at 25.

[15]  David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2nd ed. 2011). A new edition is due out presently.

The Knowledge Remedy Proposal

November 14th, 2020

Alexandra D. Lahav is the Ellen Ash Peters Professor of Law at the University of Connecticut School of Law. This year’s symposium issue of the Texas Law Review has published Professor Lahav’s article, “The Knowledge Remedy,” which calls for the imposition of a duty to conduct studies by defendants, to provide evidence relevant to plaintiffs’ product liability claims. Alexandra D. Lahav, “The Knowledge Remedy,” 98 Texas L. Rev. 1361 (2020) [cited as Lahav].

Professor Lahav’s advocated reform is based upon the premises that (1) the requisite studies needed for causal assessment “are too costly for plaintiffs to fund,” (2) are not done by manufacturers, or (3) are not done in good faith, and (4) are not conducted or adequately funded by government. Lahav believes that plaintiffs are injured by exposure to chemicals but they cannot establish causation in court because the defendant “hid its head in the sand,” or worse, “engaged in misconduct to prevent or hide research into its products.”[1] Lahav thus argues that when defendants have been found to have engaged in misconduct, courts should order them to fund studies into risks posed by their products.

Lahav’s claims are either empty or non-factual. The suggestion that plaintiffs are injured by products but cannot “prove” causation begs the question of how she knows that these people were injured by the products at issue. In law professors’ language, Lahav has committed the fallacy of petitio principii.

Lahav’s poor-mouthing on behalf of claimants is factually unsupported in this article. Lahav tells us that:

“studies are too expensive for individuals or even groups to fund.”

This assertion is never backed up with any data or evidence about the expense involved. Case-control studies for rare outcomes suffer from potential threats to their validity, but they can be assembled relatively quickly and inexpensively. Perhaps a more dramatic refutation of Lahav’s assertion comes from the cohort studies done in administrative databases, such as the national healthcare databases of Denmark or Sweden, or the Veterans Administration database in the United States. These studies involve querying existing databases for the exposures and outcomes of interest, with appropriate controls; such studies are frequently of as high quality and validity as can be had in observational analytical epidemiology.

There are, of course, examples of corporate defendants’ misconduct in sponsoring or conducting studies. There is also evidence of misconduct in plaintiffs’ sponsorship of studies,[2] and outright fraud.[3] And certainly there is evidence of misconduct or misdirection in governmentally funded and sponsored research, sometimes done in cahoots with plaintiffs’ counsel.[4]

Perhaps more important for the intended audience of the Texas Law Review, Lahav’s assertion is demonstrably false. Plaintiffs, plaintiffs’ counsel, and plaintiffs’ advocacy groups have funded studies, often surreptitiously, in many litigations, including those involving claims of harm from Bair Hugger, asbestos, silicone gel breast implants, welding fume, Zofran, isotretinoin, and others. Lahav’s repetition of the claim does not make it true.[5] Plaintiffs and their proxies, including scientific advocates, can and do conduct studies, very much with a view toward supporting litigation claims. Mass tort litigation is a big business, often run by lawyer oligarchs of the plaintiffs’ bar. Ignorantia facti is not an excuse for someone who argues for a radical re-ordering of an already fragile litigation system.

Lahav also complains that studies take so long that the statute of limitations will run on the injury claims before the scientific studies can be completed. There is a germ of truth in this complaint, but the issue could be resolved with minor procedural modifications. Plaintiffs could be allowed a procedure to propound a simple interrogatory to manufacturing firms to ask whether they believe that causality exists between their product and a specific kind of harm, or whether a claimant should reasonably know that such causality exists to warrant pursuing a legal claim. If the manufacturers answer in the negative, then the firms would not be able to assert a limitations defense for any injury that arose on or before the date of its answer. Perhaps the court could allow the matter to stay on its docket and require that the defendant answer the question annually. Plaintiffs and their proxies would be able to sponsor studies necessary to support their claims, and putative defendants would be on notice that such studies are underway.

Without any serious consideration of the extant regulations, Lahav extends her claims of inadequate testing and lax regulation even to pharmaceutical products, which are subject to extensive requirements of showing safety and efficacy, both before and after approval for marketing. Lahav’s advocacy ignores that an individual epidemiologic study rarely “demonstrates” causation, and that many such studies are required before the scientific community can accept a causal hypothesis as “proven.” Lahav’s knowledge remedy is mostly an ignorance ruse.


[1]  Lahav at 1361.

[2]  For a recent, egregious example, see In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) (uncovering dark data and dark money behind April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019)). See also In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657);  “April Fool – Zambelli-Weiner Must Disclose” (April 2, 2020); “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[3]  See Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) (emphasis added).

[4]  See, e.g., Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63, 65 (2005).

[5]  Lahav at 1369-70.

Hacking at the “A” Cell

November 10th, 2020

At the heart of epidemiologic studies and clinical trials is the contingency table. The term, contingency table, was introduced by Karl Pearson in the early 20th century as a way to explore independence, vel non, among variables in a multivariate model. The simplest version of the table is the “2 by 2” table that is at the heart of case-control and other studies:

                                Cases                          Controls
                        (with outcome of interest)     (without outcome of interest)      Marginal total

Exposure of Interest
Present                           A                               B                      A + B (all exposed)

Exposure of Interest
Absent                            C                               D                      C + D (all non-exposed)

Marginal total              A + C (all cases)              B + D (all controls)          A + B + C + D (total observed)


A measure of association between the exposure of interest and the outcome of interest can be shown in the odds ratio (OR), which can be assessed for random error on the assumption of no association.

OR = (A/C) / (B/D) = (A × D) / (B × C)

The validity of the OR turns on faithfully applying the same method of ascertaining exposure to cases and controls alike. When investigators expand the “A” cell by loosening their criteria for exposure among the cases, we say that they have engaged in “hacking the A cell.”
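A toy calculation, with hypothetical counts of my own invention, shows how loosening the exposure criteria for cases alone inflates the odds ratio:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts, for illustration only: no association.
print(odds_ratio(20, 80, 100, 400))  # 1.0

# "Hacking the A cell": reclassify 10 unexposed cases as exposed by
# loosening the exposure definition for cases only. The same people
# are in the study, but the apparent association grows.
print(odds_ratio(30, 80, 90, 400))   # ~1.67
```

The controls’ counts never change; merely moving cases from the “C” cell into the “A” cell manufactures an elevated odds ratio from a null data set.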

Something akin to hacking the A cell occurred in the large epidemiologic study known as the “Yale Hemorrhagic Stroke Project (HSP),” which was the centerpiece of the plaintiffs’ case in In re Phenylpropanolamine Products Liability Litigation. Although the HSP was sponsored by manufacturers, it was conducted independently, without any manufacturer oversight beyond the protocol. The FDA reviewed the HSP results, and ultimately the HSP was published in the New England Journal of Medicine.[1]

The HSP was challenged in a Rule 702 hearing in the Multi-District Litigation (MDL). The MDL judge, Judge Rothstein, conducted hearings and entertained extensive briefings on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon the HSP. The hearings, however, could not go beyond doubts raised by the published paper, and Judge Rothstein permitted plaintiffs’ expert witnesses’ proffered testimony based upon the study, finding that:

“The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.”[2]

The HSP study was subjected to much greater analysis in litigation.  After the MDL concluded its abridged gatekeeping process, the defense successfully sought the underlying data to the HSP. These data unraveled the HSP paper by showing that the study investigators had deviated from the protocol in a way to increase the number of exposed cases (A cell), with the obvious result of increasing the OR reported by the study.

Both sides of the PPA litigation accused the other side of “hacking at the A cell,” but juries seemed to understand that the hacking had started before the paper was published. A notable string of defense verdicts ensued. After one of the early defense verdicts, plaintiffs’ counsel challenged the defendant’s reliance upon underlying data that went behind the peer-reviewed publication. The trial court rejected the request for a new trial, and spoke to the importance of challenging the superficial imprimatur of peer review for the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers. Yale gets — Yale gets a big black eye on this.”[3]

Today we can see the equivalent of “A” cell hacking in a rather sleazy attempt by the Banana Republicans to steal a presidential election they lost. Cry-baby conservatives are seeking recounts where they lost, but not where they won. They are challenging individual ballots on the basis of outcome. They are raising speculative questions about the electoral processes of entire states, even where the states in question have handed them notable wins down ballot.


[1]  Walter N. Kernan, Catherine M. Viscoli, Lawrence M. Brass, Joseph P. Broderick, Thomas Brott, Edward Feldmann, Lewis B. Morgenstern,  Janet Lee Wilterdink, and Ralph I. Horwitz, “Phenylpropanolamine and the Risk of Hemorrhagic Stroke,” 343 New Engl. J. Med. 1826 (2000). See “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense” (Aug. 14, 2011).

[2]  In re Phenylpropanolamine Prod. Liab. Litig., 289 F. Supp. 2d 1230, 1239 (W.D. Wash. 2003) (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science).  There were many layers of peer review for the HSP study, all of which proved ultimately ineffectual compared with the closer scrutiny that the HSP received in litigation, where underlying data were produced.

[3]  O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46 -47 (March 18, 2004) (Hon. Anthony J. Mohr).

Is Your Daubert Motion Racist?

July 17th, 2020

In this week’s New York Magazine, Jonathan Chait points out that there is now a vibrant anti-racism consulting industry that exists to help white (or White?) people recognize the extent to which their race has enabled their success, in the face of systematic inequalities that burden people of color. Chait acknowledges that some of what this industry does is salutary and timely, but he also notes that there are disturbing elements in its messaging, which amount to nothing short of an attack on individualism as a racist myth, on the view that individuals are subsumed completely into their respective racial groups. Chait argues that many of the West’s most cherished values – individualism, due process, free speech and inquiry, and the rule of law – are imperiled by so-called “radical progressivism” and “identity politics.”[1]

It is hard to fathom how anti-racism can collapse all identity into racial categories, even if some inarticulate progressives say so. Chait’s claim, however, seems to be supported by the Smithsonian National Museum of African American History & Culture, and its webpages on “Talking about Race,” which provides an extended analysis of “whiteness,” “white privilege,” and the like.

On May 31, 2020, the Museum’s website published a graphic that presented its view of the “Aspects & Assumptions of Whiteness and White Culture in the United States,” which made many startling claims about what is “white,” and by implication, what is “non-white.” [The chart is set out below.] I will leave it to the sociologists, psychologists, and anthropologists to parse the discussion of “white-dominant culture,” and white “racial identity,” provided in the Museum’s webpages. In my view, the characterizations of “whiteness” were overtly racist and insulting to all races and ethnicities. As Chait points out, with an abundance of irony, Donald Trump would seem to be the epitome of non-white, by his disavowal of the Museum’s identification of white culture’s insistence that “hard work is the key to success.”

The aspect of the graphic summary of whiteness that I found most curious, most racist, and most insulting to people of all colors and ethnicities is the chart’s assertion that white culture places “Emphasis on the Scientific Method,” with its valuation of “[o]bjective, rational linear thinking”; “[c]ause and effect relationships”; and “[q]uantitative emphasis.” The implication is that non-whites do not emphasize or care about the scientific method. So the scientific method, with its concern over validity of inference and its ruling out of random and systematic errors, is just white privilege, and a microaggression against non-white people.

Really? Can the Smithsonian National Museum of African American History & Culture really mean that scientific punctilio is just another manifestation of racism and cultural imperialism? Chait seems to think so, quoting Glenn Singleton, president of Courageous Conversation, a racial-sensitivity training firm, who asserts that valuing “written communication over other forms” is “a hallmark of whiteness,” as is “scientific, linear thinking. Cause and effect.”

The Museum has apparently removed the graphic from its website, in response to a blitz of criticism from right-wing media and pundits.[2]  According to the Washington Post, the graphic has its origins in a 1978 book on White Awareness.[3] In response to the criticism, museum director Spencer Crew apologized and removed the graphic, agreeing that “it did not contribute to the discussion as planned.”[4]

The removal of the graphic is not really the point. Many people will now simply be bitter that they cannot publicly display their racist tropes. More important yet, many people will continue to believe that causal, rational, linear thinking is white, exclusionary, and even racist. Something to remember when you make your next Rule 702 motion.



[1]  Jonathan Chait, “Is the Anti-Racism Training Industry Just Peddling White Supremacy?” New York Magazine (July 16, 2020).

[2]  Laura Gesualdi-Gilmore, “‘DEEPLY INSULTING’ African American museum accused of ‘racism’ over whiteness chart linking hard work and nuclear family to white culture,” The Sun (July 16, 2020); “DC museum criticized for saying ‘delayed gratification’ and ‘decision-making’ are aspects of ‘whiteness’,” Fox News (July 16, 2020) (noting that the National Museum of African American History and Culture received a tremendous outcry after equating the nuclear family and self-reliance to whiteness); Sam Dorman, “African-American museum removes controversial chart linking ‘whiteness’ to self-reliance, decision-making; the chart didn’t contribute to the ‘productive conversation’ they wanted to see,” Fox News (July 16, 2020); Mairead McArdle, “African American History Museum Publishes Graphic Linking ‘Rational Linear Thinking,’ ‘Nuclear Family’ to White Culture,” Nat’l Rev. (July 15, 2020).

[3]  Judy H. Katz, White Awareness: Handbook for Anti-Racism Training (1978).

[4]  Peggy McGlone, “African American Museum site removes ‘whiteness’ chart after criticism from Trump Jr. and conservative media,” Wash. Post (July 17, 2020).

Ingham v. Johnson & Johnson – A Case of Meretricious Mensuration?

July 3rd, 2020

There are a few incontrovertible facts underlying the Ingham fiasco. First, only God can make asbestos; it is not a man-made substance. Second, “asbestos” is not a mineralogical or geological term. The word asbestos developed in an industrial context to designate any one of six different minerals that occurred in a fibrous habit and had commercial application. Five of the six asbestos minerals are double-chain silicates in the amphibole family: actinolite, anthophyllite, crocidolite, grunerite (known by its non-mineralogical name, amosite, an acronym derived from “Asbestos Mines of South Africa”), and tremolite. The sixth asbestos mineral is a serpentine family silicate: chrysotile.

Many other minerals occur in a fibrous habit, but not all fibrous minerals are asbestos. Some of the minerals designated as asbestos occur in both fibrous and non-fibrous habits: actinolite, anthophyllite, grunerite, and tremolite. An analytical report that found one of these minerals could not automatically be interpreted as having found “asbestos.” The fibrous nature of the mineral would have to be ascertained, as well as its chemical and structural nature.

The asbestos mineral crocidolite is known as riebeckite when non-fibrous; and chrysotile is the fibrous form that comes from a group of serpentine minerals, including non-fibrous lizardite and antigorite.[1]

The term “asbestiform” is often used to distinguish the fibrous habit of those asbestos minerals that can occur in fibrous or non-fibrous form. The term, however, is also used to refer to any inorganic fiber, natural or synthetic that resembles the long, thin habit of the asbestos minerals.[2]

What is a fiber?

The asbestos minerals were commercially useful in large part because of their fibrous habit, which allowed them to be woven into cloth or used as heat-resistant binders in insulation materials. The fibers are very long, thin structures, with aspect ratios in the hundreds or thousands. Some fibers can fracture into long, thin fibrils. Some of the asbestos minerals can also appear in their non-fibrous habit as small cleavage fragments, which may have aspect ratios ranging from 1 to 10. The EPA’s counting protocols count fragments with aspect ratios of 3 or greater as “fibers,” but that does not mean that there is strong evidence that amphibole cleavage fragments with aspect ratios of 3 cause cancer.
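The counting rule described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the particle dimensions are invented, and the threshold is the aspect ratio of 3 described in the text, not the full regulatory counting protocol (which also imposes length and width limits).

```python
# Hypothetical sketch of an aspect-ratio counting rule like the one
# described in the text: any particle with length/width >= 3 is tallied
# as a "fiber," whether it is a true asbestiform fiber or an amphibole
# cleavage fragment.

def count_fibers(particles, min_aspect_ratio=3.0):
    """Count particles whose length/width ratio meets the threshold."""
    return sum(1 for length, width in particles
               if width > 0 and length / width >= min_aspect_ratio)

# Invented particles as (length, width) in micrometers.
particles = [
    (20.0, 0.1),   # asbestiform fiber, aspect ratio 200
    (6.0, 2.0),    # cleavage fragment, aspect ratio 3 -- still counted
    (4.0, 2.0),    # cleavage fragment, aspect ratio 2 -- not counted
]

print(count_fibers(particles))  # → 2 (the fiber and the 3:1 fragment)
```

The point of the sketch is that such a rule cannot, by itself, distinguish asbestiform fibers from cleavage fragments: both the 200:1 fiber and the 3:1 fragment are counted identically.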

According to Johnson & Johnson’s principal brief, the plaintiffs’ expert witness William Longo counted any amphibole particle long and thin enough to satisfy a particular regulatory definition of “fiber” set out by the Environmental Protection Agency (EPA).[3]

Unfortunately, in its opening brief, J&J never explained clearly what separates the asbestiform from the non-asbestiform in the counting process. The appeal presents other potential problems. From a review of the appellants’ briefs, it seems unclear whether J&J disputed Longo’s adherence to the EPA definition of asbestiform. In any event, J&J appears not to have challenged the claim that any “asbestiform” fiber as defined by regulatory agencies can cause cancer. Moreover, plaintiffs’ expert witness, Dr. Jacqueline Moline, opined that cleavage fragments, or non-asbestiform amphiboles cause cancer.[4] This opinion seems highly dubious,[5] but there was NO appellate point in the defendants’ appellate brief to allege error in admitting Moline’s testimony. In addition, the appellate court’s opinion stated plaintiffs’ position that each and every exposure was a substantial causal factor without any suggestion that there was a challenge to the admissibility of this opinion.

What was the estimated exposure?

The plaintiffs’ expert witnesses appeared to be wildly inconsistent in their quantitative estimations of asbestos exposure from the ordinary use of J&J’s talcum powder. According to J&J’s appellate brief:

“Dr. Longo testified that plaintiffs’ use of the Powders would have exposed them to levels of asbestos at least ‘10 to 20 times above the amount in every day air that you breathe’. Tr. 1071. He put these exposure levels in the ‘same category’ as occupational levels. Tr. 1073.”[6]

There are many estimates of the ambient asbestos levels in “every day air,” but one estimate on the high side was given by the National Research Council, in 1984, as 0.0004 fibers/cm3.[7] Using Longo’s upper estimate of 20 times the “every day” level yields exposures of 0.008 f/cm3, a level that is well below the current permissible exposure level set by the U.S. Occupational Safety and Health Administration. Historically, workers in occupational cohorts experienced asbestos exposures at or even above 50 f/cm3.[8]
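The arithmetic in the paragraph above is simple enough to check in a few lines. The ambient figure and Longo’s multiplier come from the text; the OSHA permissible exposure limit of 0.1 f/cm³ (an eight-hour time-weighted average) is supplied here for comparison as the current regulatory value.

```python
# Checking the exposure arithmetic in the text.
ambient = 0.0004          # f/cm^3, NRC 1984 high-side ambient estimate
longo_multiplier = 20     # Longo's upper estimate: 20x "every day" air
osha_pel = 0.1            # f/cm^3, current OSHA 8-hour TWA limit

estimated = ambient * longo_multiplier
print(f"Estimated exposure: {estimated:.3f} f/cm^3")        # 0.008
print(f"Fraction of OSHA PEL: {estimated / osha_pel:.0%}")  # 8%
```

Even taking Longo’s upper multiplier at face value, the resulting figure sits at less than a tenth of the current occupational limit, which is the point the text makes against the “same category as occupational levels” characterization.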

David Egilman also gave inflated estimates of the plaintiffs’ exposures, which he equated with “occupational exposure.” Egilman opined, based upon Longo’s simulation study, a NIOSH study that counted all fibers, and a published study of another talc product, that the amount of asbestos dust released during personal use of J&J’s product was as high as 2.2 f/cm3 during the application process. These estimates were not time-weighted averages, and they would be many orders of magnitude lower if analyzed as part of an eight-hour work day. Nonetheless, Egilman concluded that the plaintiffs’ exposures to J&J’s talc products more than doubled their ovarian cancer risk over baseline.[9]
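The dilution effect of time-weighting can be sketched in a couple of lines. The 2.2 f/cm³ peak is Egilman’s figure from the text; the two-minute application time is purely a hypothetical chosen for illustration.

```python
# An 8-hour time-weighted average (TWA) dilutes a short, high peak over
# the full workday: TWA = C_peak * t_peak / 480 min, assuming zero
# exposure for the rest of the day.
peak = 2.2          # f/cm^3 during application (Egilman's figure)
minutes = 2         # hypothetical duration of a talc application
workday = 8 * 60    # 480-minute workday

twa = peak * minutes / workday
print(f"8-hour TWA: {twa:.4f} f/cm^3")  # far below the 2.2 peak
```

On this (hypothetical) duration, the peak concentration shrinks by more than two orders of magnitude once averaged over the workday, which is why reporting only the un-weighted peak overstates the exposure.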

In my previous post on Ingham, I noted how scientifically ignorant and irresponsible Egilman’s testimony was with respect to equating talc and anthophyllite.[10] The Missouri Court of Appeals presented Egilman’s opinion as though it were well supported, and gave perfunctory consideration to J&J’s complaint about this testimony:

“Plaintiffs concede that Dr. Egilman’s intensity values for diapering came from a test that counted all types of fibers released by a sample of the Powders, including fibers that are not asbestos (principally talc fibers). RB124.  Suggesting that any of those fibers was asbestos would be speculative; assuming all of them were, as Dr. Egilman did, is absurd. Plaintiffs respond with the radical (and scientifically false) assertion that talc fibers are ‘chemically identical’ to anthophyllite asbestos fibers and therefore equivalent. Id. But plaintiffs never argued at trial, much less proved, that talc is identical to asbestos. Indeed, their own expert, Dr. Longo, distinguished between anthophyllite fibers and talc. See Tr.1062.”[11]

We should all sympathize with a litigant that has been abused by absurd opinion testimony. The Court of Appeals took a more insouciant approach:

“Defendants maintain Dr. Egilman’s measurements ‘lacked a reasonable factual basis’ for several reasons. However, their arguments are insufficient to render Dr. Egilman’s testimony inadmissible. ‘[Q]uestions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissbility and should be left for the jury’s consideration.’  Primrose Operating Co. v. Nat’l Am. Ins. Co., 382 F.3d 546, 562 (5th Cir. 2004) (alterations in original) (internal quotations omitted). The problems Defendants cite with Dr. Egilman’s testimony go to the weight of his testimony, not its admissibility.”[12]

Curiously, the Missouri Court of Appeals cited a federal court decision that applied an incorrect standard for evaluating the admissibility of expert witness opinion testimony.[13] It is inconceivable that the validity of an expert witness’s bases, and of his inferences therefrom, should be beyond the judicial gatekeeper’s scrutiny. If Egilman consulted a Mercator projection map, from which he concluded the world was flat, would the Court of Appeals from the “Show Me” state shrug and say show it to the jury?

Perhaps even more remarkable than Longo’s and Egilman’s meretricious mensuration was Egilman’s opinion that personal use of talc more than doubled the plaintiffs’ risk of ovarian cancer. In the meta-analyses of studies of occupational asbestos exposure, the summary risk estimates were well below two.[14]


[1]  See “Serpentine subgroup,” in Wikipedia.

[2]  Lester Breslow, et al., Asbestiform Fibers: Nonoccupational Health Risks at 7 (Nat’l Research Council 1984).

[3]  Appellants’ Brief at 38, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (Sept. 6, 2019) (Tr. 1171-73).

[4]  Respondents’ Brief at 37, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (Dec. 19, 2019) (Tr.5.3369).

[5]  See, e.g., John F. Gamble & Graham W. Gibbs, “An evaluation of the risks of lung cancer and mesothelioma from exposure to amphibole cleavage fragments,” 52 Regulatory Toxicol. & Pharmacol. S154 (2008).

[6]  Appellants’ Brief at 52.

[7]  Lester Breslow, et al., Asbestiform Fibers: Nonoccupational Health Risks at 3 (Nat’l Research Council 1984).

[8]  Irving John Selikoff, “Statistical Compassion,” 44 J. Clin. Epidemiol. 141S, 142S (1991).

[9]  Ingham v. Johnson & Johnson, Slip op. at 52-53, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (June 23, 2020) (Slip op.).

[10]  See “Ingham v. Johnson & Johnson – Passing Talc Off As Asbestos,” (June 26, 2020).

[11]  Appellants’ Reply Brief at 43, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (Mar. 3, 2020).

[12]  Slip op. at 53.

[13]  See “Judicial Dodgers – Weight not Admissibility” (May 28, 2020) (collecting authorities).

[14]  See M. Constanza Camargo, Leslie T. Stayner, Kurt Straif, Margarita Reina, Umaima Al-Alem, Paul A. Demers, and Philip J. Landrigan, “Occupational Exposure to Asbestos and Ovarian Cancer: A Meta-analysis,” 119 Envt’l Health Persp. 1211 (2011); Alison Reid, Nick de Klerk, and Arthur W Musk, “Does Exposure to Asbestos Cause Ovarian Cancer? A Systematic Literature Review and Meta-Analysis,” 20 Cancer Epidemiol., Biomarkers & Prevention 1287 (2011).

Ingham v. Johnson & Johnson – Passing Talc Off As Asbestos

June 26th, 2020

In talc exposure litigation of ovarian cancer claims, plaintiffs were struggling to show that cosmetic talc use caused ovarian cancer, despite missteps by the defense.[1] And then lawsuit industrialist Mark Lanier entered the fray and offered a meretriciously beguiling move: Stop trying talc cases and start trying asbestos cases.

The Ingham appellate decision this week from the Missouri Court of Appeals appears to be a superficial affirmation of the Lanier strategy.[2] The court gave defendants some relief on jurisdictional issues, but largely affirmed the admissibility of Lanier’s expert witnesses on medical causation, both general and specific.[3]

After all, asbestos is an established cause of ovarian cancer. Or is it?

In 2006, the Institute of Medicine (now the National Academy of Medicine) addressed extra-pulmonary cancers caused by asbestos, without ever mentioning ovarian carcinoma.[4] Many textbooks and reviews found themselves unable to conclude that asbestos of any type caused ovarian cancer throughout the 20th century and a decade into the 21st century. The world of opinions changed, however, in 2011, when a working group of the International Agency for Research on Cancer (IARC) met in Lyon, France, and issued its support for the general causation claim in a suspect document published in 2012.[5] The IARC has strict rules that prohibit anyone who has any connection with manufacturing industry from serving on its working groups, but the Agency allows consultants and contractors for the lawsuit industry to serve without limitation. The 2011 working group on fibers and dusts thus sported lawsuit industry acolytes such as Peter F. Infante, Jonathan Samet, and Philip J. Landrigan.

Given the composition of this working group, no one was surprised by its finding:

“The Working Group noted that a causal association between exposure to asbestos and cancer of the ovary was clearly established, based on five strongly positive cohort mortality studies of women with heavy occupational exposure to asbestos (Acheson et al., 1982; Wignall & Fox, 1982; Germani et al., 1999; Berry et al., 2000; Magnani et al., 2008). The conclusion received additional support from studies showing that women and girls with environmental, but not occupational exposure to asbestos (Ferrante et al., 2007; Reid et al., 2008, 2009) had positive, though non-significant, increases in both ovarian cancer incidence and mortality.”[6]

The herd mentality is fairly strong in the world of occupational medicine, but not everyone concurred. A group of Australian asbestos researchers (Reid, et al.) without lawsuit industry credentials published another meta-analysis in 2011, as well.[7] Although the Australian researchers reported an increased summary estimate of risk, they were careful to point out that this elevation may have resulted from disease misclassification:

“In the studies that did not examine ovarian cancer pathology, or confirmed cases of mesothelioma from a cancer or mesothelioma registry, misclassification of the cause of death in some cases is likely to have occurred, given that misclassification was reported in those studies that did reexamine cancer pathology specimens. Misclassification may result in an underestimate of peritoneal mesothelioma and an overestimate of ovarian cancer or the converse. Among women, peritoneal mesothelioma may be more likely to be classified as ovarian, colon, or stomach cancer, rather than a rare occupational cancer.”[8]

The authors noted that Irving Selikoff had first reported that a significant number of peritoneal cancers, likely mesothelial in origin, have been misclassified as ovarian cancers. Studies that relied upon death certificates only might thus be very misleading. Supporting the danger of misclassification, the Reid study reported that:

“Only the meta-analysis of those studies that reported ovarian cancer incidence (i.e., those studies that did not rely on cause of death certification to classify their cases of ovarian cancer) did not observe a significant excess risk.”[9]

Reid also reported the absence of other indicia of causation:

“No study showed a statistically significant trend of ovarian cancer with degree of asbestos exposure. In addition, there was no evidence of a significant trend across studies as grouped exposure increased.”[10]

Other scientists and physicians have acknowledged the controversial nature of the IARC’s determination. In 2011, pathologist Samuel Hammar, who has testified regularly for the lawsuit industry, voiced concerns about the diagnostic accuracy of ovarian cancer cases in asbestos studies:

“It has been difficult to draw conclusions on the basis of epidemiologic studies of ovarian cancers because, histologically, their distinction between peritoneal mesothelioma and carcinomatous peritonei (including primary peritoneal serous papillary adenocarcinoma) is difficult. Ovarian tumors tend to grow by extension and uncommonly metastasize through the bloodstream, which is similar to tumors of mesothelial origin … .”[11]

In 2014, a working group of the Finnish Institute of Occupational Health noted that “despite the conclusions by IARC and the support from recent studies, the hypothesis that asbestos is [a] cause of ovarian cancer remains controversial.”[12] The same year, 2014, the relevant chapter in a leading textbook by Dr. Victor L. Roggli and colleagues opined that:

“the balance of the evidence available at present does not support an association between asbestos exposure and cancers of the female reproductive system.”[13]

Two years later, a text by Dr. Dorsett D. Smith cited “the lack of certainty of the pathologic diagnosis of ovarian cancer versus a peritoneal mesothelioma in epidemiologic studies” as making the epidemiology uninterpretable and any conclusions impossible.[14]

Against this backdrop of evidence, I took a look at what Johnson & Johnson had to say about the occupational asbestos epidemiology in its briefs, in section “B. Studies on asbestos and ovarian cancer.”[15] The defense acknowledged that plaintiffs’ expert witnesses Drs. Jacqueline Moline and Dean Felsher focused on the IARC conclusion, and on studies of heavy occupational exposure. J & J recited without comment or criticism what plaintiffs’ expert witnesses had testified, much of which was quite objectionable.[16]

For instance, Moline and Felsher both reprised the scientifically and judicially debunked views that there is “no known safe level of exposure,” from which they inferred the non-sequitur that “any amount above ordinary background levels – could cause ovarian cancer.”[17] From ignorance, nothing derives but conjecture.

Another example was Felsher’s testimony that asbestos can make the body of an ovarian cancer patient therapy-resistant. In response to these and other remarkable assertions, J & J countered with only the statement that their expert witness, Dr. Huh, “did not agree that all of this was true in the context of ovarian cancer.”[18]

Huh, indeed; that the defense expert witness disagreed with some of what plaintiffs’ witnesses claimed hardly frames an issue for exclusion of any expert witness’s opinion. Even more disturbing, there is no appellate point that corresponds to a motion to exclude Dr. Moline’s testimony.

The Egilman Challenge

There was a challenge to the testimony of another expert witness, David Egilman, a frequent testifier for Mark Lanier and other lawsuit industrialists. One of the challenges that the defendants made on appeal to the admissibility of Dr. David Egilman’s testimony was his use of a 1972 NIOSH study that apparently quantified exposure in terms of fibers per cubic centimeter, without specifying whether all fibers in the measurement were asbestos fibers, as opposed to non-asbestos fibers, including talc fibers.

The Missouri Court of Appeals rejected this specific challenge in part because Egilman had explained that:

“whether the 1972 NIOSH study identified fibers specifically as ‘asbestos’ was inconsequential, as the only other possible fiber that could be present in a talc sample is a ‘talc fiber, which is chemically identical to anthophyllite asbestos and structurally the same’.”[19]

Talc typically crystallizes in small plates, but it can occur occasionally as fibers. Egilman, however, equated a talc fiber as chemically and structurally identical to an anthophyllite fiber.

Does Egilman’s opinion hold water?

No, Egilman has wet himself badly (assuming the Missouri appellate court quoted testimony accurately).

According to the Mineralogical Society of America’s Handbook of Mineralogy (and every other standard work on mineralogy I reviewed), anthophyllite and talc, whether in fibrous habit or not, are two different minerals, with very different chemical formulae, crystal chemistry, and structure.[20] Anthophyllite has the chemical formula (Mg,Fe²⁺)₂(Mg,Fe²⁺)₅Si₈O₂₂(OH)₂, and is an amphibole double-chain silicate. Talc, on the other hand, is a phyllosilicate, a hydrated magnesium silicate with the chemical formula Mg₃Si₄O₁₀(OH)₂. Talc crystallizes in the triclinic class, although sometimes monoclinic, and its crystals are platy and very soft.

If the Missouri Court of Appeals characterized Egilman’s testimony correctly on this point, then Egilman gave patently false testimony. Talc and anthophyllite are different chemically and structurally.


[1]  See “The Slemp Case, Part I – Jury Verdict for Plaintiff – 10 Initial Observations”; “The Slemp Case, Part 2 – Openings”; “Slemp Trial Part 3 – The Defense Expert Witness – Huh”; “Slemp Trial Part 4 – Graham Colditz”; “Slemp Trial Part 5 – Daniel W. Cramer”; “Lawsuit Magic – Turning Talcum into Wampum”; “Talc Litigation Supported by Slippery Expert Witness” (2017).

[2]  Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (June 23, 2020) (Slip op.).

[3]  Cara Salvatore, “Missouri Appeals Court Slashes $4.7B Talc Verdict Against J&J,” Law360 (June 23, 2020).

[4]  Jonathan M. Samet, et al., Asbestos: Selected Cancers (I.O.M. Committee on Asbestos: Selected Health Effects 2006).

[5]  International Agency for Research on Cancer, A Review of Human Carcinogens, Monograph Vol. 100, Part C: Arsenic, Metals, Fibres, and Dusts (2012).

[6]  Id. at 256. Some members followed up their controversial finding with an attempt to justify it with a meta-analysis; see M. Constanza Camargo, Leslie T. Stayner, Kurt Straif, Margarita Reina, Umaima Al-Alem, Paul A. Demers, and Philip J. Landrigan, “Occupational Exposure to Asbestos and Ovarian Cancer: A Meta-analysis,” 119 Envt’l Health Persp. 1211 (2011).

[7]  Alison Reid, Nick de Klerk, and Arthur W Musk, “Does Exposure to Asbestos Cause Ovarian Cancer? A Systematic Literature Review and Meta-Analysis,” 20 Cancer Epidemiol., Biomarkers & Prevention 1287 (2011) [Reid].

[8]  Reid at 1293, 1287.

[9]  Id. at 1293.

[10]  Id. at 1294.

[11]  Samuel Hammar, Richard A. Lemen, Douglas W. Henderson & James Leigh, “Asbestos and other cancers,” chap. 8, in Ronald F. Dodson & Samuel P. Hammar, eds., Asbestos: Risk Assessment, Epidemiology, and Health Effects 435 (2nd ed. 2011) (internal citation omitted).

[12]  Finnish Institute of Occupational Health, Asbestos, Asbestosis and Cancer – Helsinki Criteria for Diagnosis and Attribution 60 (2014) (concluding that there was an increased risk in cohorts of women with “relatively high asbestos exposures”).

[13]  Faye F. Gao and Tim D. Oury, “Other Neoplasia,” chap. 8, in Tim D. Oury, Thomas A. Sporn & Victor L. Roggli, eds., in Pathology of Asbestos-Associated Diseases 177, 188 (3d ed. 2014).

[14]  Dorsett D. Smith, The Health Effects of Asbestos: An Evidence-based Approach 208 (2016).

[15]  Brief of Appellants Johnson & Johnson and Johnson & Johnson Consumer Inc., at 29, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (filed Sept. 6, 2019) [J&J Brief].

[16]  Id. at 30.

[17]  See Mark A. Behrens & William L. Anderson, “The ‘Any Exposure’ Theory: An Unsound Basis for Asbestos Causation and Expert Testimony,” 37 SW. U. L. Rev. 479 (2008); William L. Anderson, Lynn Levitan & Kieran Tuckley, “The ‘Any Exposure’ Theory Round II — Court Review of Minimal Exposure Expert Testimony in Asbestos and Toxic Tort Litigation Since 2008,” 22 Kans. J. L. & Pub. Pol’y 1 (2012); William L. Anderson & Kieran Tuckley, “The Any Exposure Theory Round III: An Update on the State of the Case Law 2012 – 2016,” Defense Counsel J. 264 (July 2016); William L. Anderson & Kieran Tuckley, “How Much Is Enough? A Judicial Roadmap to Low Dose Causation Testimony in Asbestos and Tort Litigation,” 42 Am. J. Trial Advocacy 38 (2018).

[18]  Id. at 30.

[19]  Slip op. at 54.

[20]  John W. Anthony, Richard A. Bideaux, Kenneth W. Bladh, and Monte C. Nichols, Handbook of Mineralogy (Mineralogical Soc’y of America 2001).

David Madigan’s Graywashed Meta-Analysis in Taxotere MDL

June 12th, 2020

Once again, a meta-analysis is advanced as a basis for an expert witness’s causation opinion, and once again, the opinion is the subject of a Rule 702 challenge. The litigation is In re Taxotere (Docetaxel) Products Liability Litigation, a multi-district litigation (MDL) proceeding before Judge Jane Triche Milazzo, who sits on the United States District Court for the Eastern District of Louisiana.

Taxotere is the brand name for docetaxel, a chemotherapeutic medication used either alone or in conjunction with other chemotherapy, to treat a number of different cancers. Hair loss is a side effect of Taxotere, but in the MDL, plaintiffs claim that they experienced permanent hair loss, about which, in their view, they were not adequately warned. The litigation thus involved issues of exactly what “permanent” means, medical causation, adequacy of warnings in the Taxotere package insert, and warnings causation.

Defendant Sanofi challenged plaintiffs’ statistical expert witness, David Madigan, a frequent testifier for the lawsuit industry. In its Rule 702 motion, Sanofi argued that Madigan had relied upon two randomized clinical trials (TAX 316 and GEICAM 9805) that evaluated “ongoing alopecia” to reach conclusions about “permanent alopecia.” Sanofi made the point that “ongoing” is not “permanent,” and that trial participants who had ongoing alopecia may have had their hair grow back. Madigan’s reliance upon an end point different from what plaintiffs complained about made his analysis irrelevant. The MDL court rejected Sanofi’s argument, with the observation that Madigan’s analysis was not irrelevant for using the wrong end point, only less persuasive, and that Sanofi’s criticism was one that “Sanofi can highlight for the jury on cross-examination.”[1]

Did Judge Milazzo engage in judicial dodging by rejecting the relevancy argument and emphasizing the truism that Sanofi could highlight the discrepancy on cross-examination? In the sense that the disconnect can easily be shown by highlighting the different event rates for the differently defined alopecia outcomes, the Sanofi argument seems like one that a jury could easily grasp and resolve. The judicial shrug, however, begs the question why the defendant should have to address a data analysis that does not support the plaintiffs’ contention about “permanence.” The federal rules are supposed to advance the finding of the truth and the fair, speedy resolution of cases.

Sanofi’s more interesting argument, from the perspective of Rule 702 case law, was its claim that Madigan had relied upon a flawed methodology in analyzing the two clinical trials:

“Sanofi emphasizes that the results of each study individually produced no statistically significant results. Sanofi argues that Dr. Madigan cannot now combine the results of the studies to achieve statistical significance. The Court rejects Sanofi’s argument and finds that Sanofi’s concern goes to the weight of Dr. Madigan’s testimony, not to its admissibility.34”[2]

There seems to be a lot going on in the Rule 702 challenge that is not revealed in the cryptic language of the MDL district court. First, the court deployed the jurisprudentially horrific, conclusory language to dismiss a challenge that “goes to the weight …, not to … admissibility.” As discussed elsewhere, this judicial locution is rarely true, fails to explain the decision, and shows a lack of engagement with the actual challenge.[3] Of course, aside from the inanity of the expression, and the failure to explain or justify the denial of the Rule 702 challenge, the MDL court may have been able to provide a perfectly adequate explanation.

Second, the footnote in the quoted language, number 34, was to the infamous Milward case,[4] with the explanatory parenthetical that the First Circuit had reversed a district court for excluding testimony of an expert witness who had sought to “draw conclusions based on combination of studies, finding that alleged flaws identified by district court go to weight of testimony not admissibility.”[5] As discussed previously, the widespread use of the “weight not admissibility” locution, even by the Court of Appeals, does not justify it. More important, however, the invocation of Milward suggests that any alleged flaws in combining study results in a meta-analysis are always matters for the jury, no matter how arcane, technical, or threatening to validity they may be.

So was Judge Milazzo engaged in judicial dodging in Her Honor’s opinion in Taxotere? Although the citation to Milward tends to inculpate, the cursory description of the challenge raises questions whether the challenge itself was valid in the first place. Fortunately, in this era of electronic dockets, finding the actual Rule 702 motion is not very difficult, and we can inspect the challenge to see whether it was dodged or given short shrift. Remarkably, the reality is much more complicated than the simple, simplistic rejection by the MDL court would suggest.

Sanofi’s brief attacks three separate analyses proffered by David Madigan, and not surprisingly, the MDL court did not address every point made by Sanofi.[6] Sanofi’s point about the inappropriateness of conducting the meta-analysis was its third in its supporting brief:

“Third, Dr. Madigan conducted a statistical analysis on the TAX316 and GEICAM9805/TAX301 clinical trials separately and combined them to do a ‘meta-analysis’. But Dr. Madigan based his analysis on unproven assumptions, rendering his methodology unreliable. Even without those assumptions, Dr. Madigan did not find statistical significance for either of the clinical trials independently, making this analysis unhelpful to the trier of fact.”[7]

This introductory statement of the issue is itself not particularly helpful because it fails to explain why combining two individual clinical trials (“RCTs”), each not having “statistically significant” results, by meta-analysis would be unhelpful. Sanofi’s brief identified other problems with Madigan’s analyses, but eventually returned to the meta-analysis issue, with the heading:

“Dr. Madigan’s analysis of the individual clinical trials did not result in statistical significance, thus is unhelpful to the jury and will unfairly prejudice Sanofi.”[8]

After a discussion of some of the case law about statistical significance, Sanofi pressed its case against Madigan. Madigan’s statistical analysis of each of two RCTs apparently did not reach statistical significance, and Sanofi complained that permitting Madigan to present these two analyses with results that were “not statistically very impressive,” would confuse and mislead the jury.[9]

“Dr. Madigan tried to avoid that result here [of having two statistically non-significant results] by conducting a ‘meta-analysis’ — a greywashed term meaning that he combined two statistically insignificant results to try to achieve statistical significance. Madigan Report at 20 ¶ 53. Courts have held that meta-analyses are admissible, but only when used to reduce the numerical instability on existing statistically significant differences, not as a means to achieve statistical significance where it does not exist. RMSE at 361–362, fn76.”

Now the claims here are quite unsettling, especially considering that they were lodged in a defense brief, in an MDL, with many cases at stake, made on behalf of an important pharmaceutical company, represented by two large, capable national or international law firms.

First, what does the defense brief signify by placing ‘meta-analysis’ in quotes? Are these scare quotes to suggest that Madigan was passing off something as a meta-analysis that failed to be one? If so, nothing in the remainder of the brief explains such an interpretation. Meta-analysis has been around for decades, and the reporting of meta-analyses of observational or experimental studies has been the subject of numerous consensus and standard-setting papers over the last two decades. Furthermore, the FDA has now issued a draft guidance for the use of meta-analyses in pharmacoepidemiology. The scare quotes are at best unexplained, at worst inappropriate. If the authors had something else in mind, they did not explain their use of quotation marks around meta-analysis.

Second, the defense lawyers referred to meta-analysis as a “greywashed” term. I am always eager to expand my vocabulary, and so I looked up the word in various dictionaries of statistical and epidemiologic terms. Nothing there. Perhaps it was not a technical term, so I checked with the venerable Oxford English Dictionary. No relevant entries.

Pushed to the wall, I checked the font of all knowledge – the internet. To be sure, I found definitions, but nothing that could explain this odd locution in a brief filed in an important motion:

gray-washing: “noun In calico-bleaching, an operation following the singeing, consisting of washing in pure water in order to wet out the cloth and render it more absorbent, and also to remove some of the weavers’ dressing.”

graywashed: “adj. adopting all the world’s cultures but not really belonging to any of them; in essence, liking a little bit of everything but not everything of a little bit.”

Those definitions do not appear pertinent.

Another website offered a definition based upon the “blogsphere”:

Graywash: “A fairly new term in the blogsphere, this means an investigation that deals with an offense strongly, but not strongly enough in the eyes of the speaker.”

Hmmm. Still not on point.

Another one from “Urban Dictionary” might capture something of what was being implied:

Graywashing: “The deliberate, malicious act of making art having characters appear much older and uglier than they are in the book, television, or video game series.”

Still, I am not sure how this is an argument that a federal judge can respond to in a motion affecting many cases.

Perhaps, you say, I am quibbling over word choices, and I am not sufficiently in tune with the way people talk in the Eastern District of Louisiana. I plead guilty to both counts. The third, and most important, point is the defense assertion that meta-analyses are admissible only “when used to reduce the numerical instability on existing statistically significant differences, not as a means to achieve statistical significance where it does not exist.”

This assertion is truly puzzling. Meta-analyses involve so many layers of hearsay that they will themselves virtually never be admissible; admissibility of the meta-analysis is virtually never the issue. When an expert witness has conducted a meta-analysis, or has relied upon one, the important legal question is whether the witness may reasonably rely upon the meta-analysis (under Rule 703) for an inference that satisfies Rule 702. The meta-analysis itself does not come into evidence and does not go out to the jury for its deliberations.

But what about the defense brief’s “only when” language, which clearly implies that courts have held that expert witnesses may rely upon meta-analyses only to reduce “numerical instability on existing statistically significant differences”? This seems clearly wrong, because achieving statistical significance by combining studies that show no “instability” in their point estimates, but that individually lack statistical significance, is a perfectly legitimate and valid goal of meta-analysis. Consider a situation in which, for some reason, sample size in each study is limited by the available observations, but we have ten studies, each with a point estimate of 1.5, and each with a 95% confidence interval of (0.88, 2.5). This hypothetical presents no instability of point estimates, and yet a perfectly valid meta-analysis would leave the summary point estimate at 1.5 while narrowing the confidence interval so that its lower bound excludes 1.0. In the real world, meta-analyses are conducted on studies whose point estimates of risk vary because of random and non-random error, but there is no reason that a meta-analysis cannot reduce random error to show that the summary point estimate is statistically significant at a pre-specified alpha, even though no constituent study was statistically significant.
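The arithmetic of the hypothetical is easy to verify. The sketch below is a standard fixed-effect, inverse-variance pooling on the log-risk scale; the numbers match the hypothetical above (they are illustrative, not drawn from the Madigan briefing). Ten individually non-significant studies, each with a relative risk of 1.5 and a 95% CI of (0.88, 2.5), pool to a summary relative risk of 1.5 with a 95% CI of roughly (1.27, 1.77), excluding 1.0:

```python
import math

# Hypothetical from the text: ten studies, each with relative risk 1.5
# and 95% CI (0.88, 2.5) -- each one individually non-significant,
# because each interval spans 1.0.
studies = [(1.5, 0.88, 2.5)] * 10

# Fixed-effect, inverse-variance pooling on the log-risk scale.
# The standard error of each study is recovered from its CI width:
# SE = (ln(upper) - ln(lower)) / (2 * 1.96).
total_weight = 0.0
weighted_sum = 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1.0 / se ** 2          # inverse-variance weight
    total_weight += w
    weighted_sum += w * math.log(rr)

pooled_log = weighted_sum / total_weight
pooled_se = math.sqrt(1.0 / total_weight)   # shrinks ~ 1/sqrt(10)

pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)

print(f"pooled RR = {pooled_rr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

The point estimate is unchanged at 1.5 (there is no “instability” to reduce), yet the pooled confidence interval’s lower bound rises above 1.0, which is exactly the use of meta-analysis the defense brief declared illegitimate.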

Sanofi’s lawyers did not cite any case for the remarkable proposition they advanced, but they did cite the Reference Manual on Scientific Evidence (RMSE). Earlier in the brief, the defense had cited the third edition (2011) of this work, and so I turned to the cited page (“RMSE at 361–362, fn76”) only to find the introduction to the chapter on survey research, with footnotes 1 through 6.

After a diligent search through the third edition, I could not find any other language remotely supportive of the assertion by Sanofi’s counsel. There are important discussions about how a poorly conducted meta-analysis, or a meta-analysis that was heavily weighted in a direction by a methodologically flawed study, could render an expert witness’s opinion inadmissible under Rule 702.[10] Indeed, the third edition has a more sustained discussion of meta-analysis under the heading “VI. What Methods Exist for Combining the Results of Multiple Studies,”[11] but nothing in that discussion comes close to supporting the remarkable assertion by defense counsel.

On a hunch, I checked the second edition of the RMSE, published in 2000. There was indeed a footnote 76, on page 361, which discussed meta-analysis. The discussion comes in the midst of the superseded edition’s chapter on epidemiology. Nothing in the text or in the cited footnote, however, appears to support the defense’s contention that meta-analyses are appropriate only when each included clinical trial has independently reported a statistically significant result.

If this analysis is correct, the MDL court was fully justified in rejecting the defense argument that combining two statistically non-significant clinical trials to yield a statistically significant result was methodologically infirm. No cases were cited, and the Reference Manual does not support the contention. Furthermore, no statistical text or treatise on meta-analysis supports the Sanofi claim. Sanofi did not support its motion with any affidavits of experts on meta-analysis.

Now there were other arguments advanced in support of excluding David Madigan’s testimony. Indeed, there was a very strong methodological challenge to Madigan’s decision to include the two RCTs in his meta-analysis, quite apart from those RCTs’ lack of statistical significance on the endpoint at issue. In the words of the Sanofi brief:

“Both TAX clinical trials examined two different treatment regimens, TAC (docetaxel in combination with doxorubicin and cyclophosphamide) versus FAC (5-fluorouracil in combination with doxorubicin and cyclophosphamide). Madigan Report at 18–19 ¶¶ 47–48. Dr. Madigan admitted that TAC is not Taxotere alone, Madigan Dep. 305:21–23 (Ex. B); however, he did not rule out doxorubicin or cyclophosphamide in his analysis. Madigan Dep. 284:4–12 (“Q. You can’t rule out other chemotherapies as causes of irreversible alopecia? … A. I can’t rule out — I do not know, one way or another, whether other chemotherapy agents cause irreversible alopecia.”).”[12]

Now unlike the statistical significance argument, this argument is rather straightforward: it turns on the clinical heterogeneity of the two trials, which seems clearly to point to the invalidity of a meta-analysis that combines them. Sanofi’s lawyers could easily have supported this point with statements from standard textbooks and from non-testifying experts (but alas did not). Sanofi did support its challenge, however, with citations to an important litigation and to Fifth Circuit precedent.[13]

This closer look at the actual challenge to David Madigan’s opinions suggests that Sanofi’s counsel may have diluted very strong arguments about heterogeneity in the exposure variable, and in the outcome variable, by advancing what seems a very doubtful argument based upon the lack of statistical significance of the individual studies in Madigan’s meta-analysis.

Sanofi advanced two very strong points: first, about the irrelevant outcome variable definitions used by Madigan; and second, about the complexity of Taxotere’s being used with other, and different, chemotherapeutic agents in each of the two trials that Madigan combined.[14] The MDL court addressed the first point in a perfunctory and ultimately unsatisfactory fashion, and did not address the second point at all.

Ultimately, the result was that Madigan was given a pass to offer extremely tenuous causation opinions in an MDL. Given that Madigan has proffered tendentious opinions in the past, and has been characterized as “an expert on a mission,” whose opinions are “conclusion driven,”[15] the missteps in the briefing and the MDL court’s abridgement of the gatekeeping process are regrettable. Also regrettable is that the merits or demerits of a Rule 702 challenge cannot be fairly evaluated from cursory, conclusory judicial decisions riddled with meaningless verbiage such as “the challenge goes to the weight and not the admissibility of the witness.” Access to the actual Rule 702 motion shed important light not only on the inadequacy of one point in the motion, but also on the complexity and fullness of a challenge that the MDL court’s decision never fully addressed. It is possible that a reply or supplemental brief, or oral argument, filled in gaps, corrected errors, or modified the motion, and that the above analysis has missed some important aspect of what happened in the Taxotere MDL. If so, all the more reason that we need better judicial gatekeeping, especially when a decision can affect thousands of pending cases.[16]


[1]  In re Taxotere (Docetaxel) Prods. Liab. Litig., 2019 U.S. Dist. LEXIS 143642, at *13 (E.D. La. Aug. 23, 2019) [Op.]

[2]  Op. at *13-14.

[3]  “Judicial Dodgers – Weight not Admissibility” (May 28, 2020).

[4]  Milward v. Acuity Specialty Prods. Grp., Inc., 639 F.3d 11, 17-22 (1st Cir. 2011).

[5]  Op. at *13-14 (quoting and citing Milward, 639 F.3d at 17-22).

[6]  Memorandum in Support of Sanofi Defendants’ Motion to Exclude Expert Testimony of David Madigan, Ph.D., Document 6144, in In re Taxotere (Docetaxel) Prods. Liab. Litig. (E.D. La. Feb. 8, 2019) [Brief].

[7]  Brief at 2; see also Brief at 14 (restating without initially explaining why combining two statistically non-significant RCTs by meta-analysis would be unhelpful).

[8]  Brief at 16.

[9]  Brief at 17 (quoting from Madigan Dep. 256:14–15).

[10]  Michael D. Green, Michael Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” at 581n.89, in Fed. Jud. Center, Reference Manual on Scientific Evidence (3d ed. 2011).

[11]  Id. at 606.

[12]  Brief at 14.

[13]  Brief at 14, citing Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953, at *7 (E.D. La. June 16, 2015) (Vance, J.) (quoting LeBlanc v. Chevron USA, Inc., 396 F. App’x 94, 99 (5th Cir. 2010)) (“[A] study that notes ‘that the subjects were exposed to a range of substances and then nonspecifically note[s] increases in disease incidence’ can be disregarded.”), aff’d, 650 F. App’x 170 (5th Cir. 2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[14]  Brief at 14-16.

[15]  In re Accutane Litig., 2015 WL 753674, at *19 (N.J.L.Div., Atlantic Cty., Feb. 20, 2015), aff’d, 234 N.J. 340, 191 A.3d 560 (2018). See “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015); “N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses” (Aug. 8, 2018).

[16]  Cara Salvatore, “Sanofi Beats First Bellwether In Chemo Drug Hair Loss MDL,” Law360 (Sept. 27, 2019).