TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Reference Manual on Scientific Evidence – 3rd Edition is Past Its Expiry

October 17th, 2021

INTRODUCTION

The new, third edition of the Reference Manual on Scientific Evidence was released to the public in September 2011, as a joint production of the Federal Judicial Center and the National Research Council of the National Academies. Within a year of its publication, I wrote that the Manual needed attention on several key issues. Now that a committee is at work on the fourth edition, I am reprising the critique, slightly modified, in the hope that it may make a difference for the fourth edition.

The Development Committee for the third edition was co-chaired by Professor Jerome Kassirer, of Tufts University School of Medicine, and the Hon. Gladys Kessler, of the United States District Court for the District of Columbia. The members of the Development Committee included:

  • Ming W. Chin, Associate Justice, The Supreme Court of California
  • Pauline Newman, Judge, Court of Appeals for the Federal Circuit
  • Kathleen O’Malley, Judge, Court of Appeals for the Federal Circuit (formerly a district judge on the Northern District of Ohio)
  • Jed S. Rakoff, Judge, Southern District of New York
  • Channing Robertson, Professor of Engineering, Stanford University
  • Joseph V. Rodricks, Principal, Environ
  • Allen Wilcox, Senior Investigator, National Institute of Environmental Health Sciences
  • Sandy L. Zabell, Professor of Statistics and Mathematics, Weinberg College of Arts and Sciences, Northwestern University

Joe S. Cecil, Project Director, Program on Scientific and Technical Evidence, in the Federal Judicial Center’s Division of Research, who shepherded the first two editions, served as consultant to the Committee.

At over 1,000 pages, the third edition of the Reference Manual on Scientific Evidence (RMSE 3d) offered much to digest. Much of its coverage of the individual scientific and technical disciplines was solid. Although that information is readily available from other sources, there is some value in collecting the material in a single volume for the convenience of judges and lawyers. Of course, given that this information is provided to judges from an ostensibly neutral, credible source, lawyers will naturally focus on what is doubtful or controversial in the RMSE. To date, there have been only a few reviews and acknowledgments of the third edition.[1]

Like previous editions, the substantive scientific areas were covered in discrete chapters, written by subject-matter specialists, often along with a lawyer who addressed the legal implications and judicial treatment of that subject matter. From my perspective, the chapters on statistics, epidemiology, and toxicology have been the most important in my practice and teaching, and I have focused on issues raised by those chapters.

The strengths of the chapter on statistical evidence, updated from the second edition, remained, as did some of the strengths and flaws of the chapter on epidemiology.  In addition, there was a good deal of overlap among the chapters on statistics, epidemiology, and medical testimony.  This overlap was at first blush troubling because the RMSE has the potential to confuse and obscure issues by having multiple authors address them inconsistently.  This is an area where reviewers of the upcoming edition should pay close attention.

I. Reference Manual’s Disregard of Study Validity in Favor of the “Whole Tsumish”

There was a deep discordance among the chapters in the third Reference Manual as to how judges should approach scientific gatekeeping issues. The third edition vacillated between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.[2]

The third edition featured an updated version of the late Professor Margaret Berger’s chapter from the second edition, “The Admissibility of Expert Testimony.”[3]  Berger’s chapter criticized “atomization,” a process she described pejoratively as a “slicing-and-dicing” approach.[4]  Drawing on the publications of Daubert-critic Susan Haack, Berger rejected the notion that courts should examine the reliability of each study independently.[5]  Berger contended that the “proper” scientific method, as evidenced by the work of the International Agency for Research on Cancer, the Institute of Medicine, the National Institutes of Health, the National Research Council, and the National Institute of Environmental Health Sciences, “is to consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.”[6]

Berger’s contention, however, was profoundly misleading.  Of course, scientists undertaking a systematic review should identify all the relevant studies, but some of the “relevant” studies may well be insufficiently reliable (because of internal or external validity issues) to answer the research question at hand. All the cited agencies, and other research organizations and researchers, exclude studies that are fundamentally flawed, whether as a result of bias, confounding, erroneous data analyses, or related problems.  Berger cited no support for her remarkable suggestion that scientists do not make “reliability” judgments about available studies when assessing the “totality of the evidence.”

Professor Berger, who had a distinguished career as a law professor and evidence scholar, died in November 2010.  She was no friend of Daubert,[7] but remarkably her antipathy had outlived her.  Berger’s critical discussion of “atomization” cited the notorious decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), which was decided four months after her passing.[8]

Professor Berger’s contention about the need to avoid assessments of individual studies in favor of the whole “tsumish” must also be rejected because Federal Rule of Evidence 703 requires that each study considered by an expert witness “qualify” for reasonable reliance by virtue of the study’s containing facts or data “of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.”  One of the deeply troubling aspects of the Milward decision is that it reversed the trial court’s sensible decision to exclude a toxicologist, Dr. Martyn Smith, who outran his headlights on issues having to do with a field in which he was clearly inexperienced – epidemiology.

Scientific studies, and especially epidemiologic studies, involve multiple levels of hearsay.  A typical epidemiologic study may contain hearsay leaps from patient to clinician, to laboratory technicians, to specialists interpreting test results, back to the clinician for a diagnosis, to a nosologist for disease coding, to a national or hospital database, to a researcher querying the database, to a statistician analyzing the data, to a manuscript that details data, analyses, and results, to editors and peer reviewers, back to study authors, and on to publication.  Those leaps do not mean that the final results are untrustworthy, only that the study itself is not likely admissible in evidence.

The inadmissibility of scientific studies is not problematic because Rule 703 permits testifying expert witnesses to formulate opinions based upon facts and data, which are not independently admissible in evidence. The distinction between relied upon and admissible studies is codified in the Federal Rules of Evidence, and in virtually every state’s evidence law.

Referring to studies, without qualification, as admissible in themselves is usually wrong as a matter of evidence law.  The error has the potential to encourage carelessness in gatekeeping expert witnesses’ opinions for their reliance upon inadmissible studies.  The error is doubly wrong if this approach to expert witness gatekeeping is taken as license to permit expert witnesses to rely upon any marginally relevant study of their choosing.  It is therefore disconcerting that the RMSE 3d failed to make the appropriate distinction between admissibility of studies and admissibility of expert witness opinion that has reasonably relied upon appropriate studies.

Consider the following statement from the chapter on epidemiology:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible, as it tends to make an issue in dispute more or less likely.”[9]

Curiously, the advice from the authors of the epidemiology chapter, by speaking to a single study’s validity, was at odds with Professor Berger’s caution against slicing and dicing. The authors of the epidemiology chapter seemed to be stressing that scientifically valid studies should be admissible.  Their footnote emphasized and confused the point:

See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990); cf. Kehm v. Procter & Gamble Co., 580 F. Supp. 890, 902 (N.D. Iowa 1982) (“These [epidemiologic] studies were highly probative on the issue of causation—they all concluded that an association between tampon use and menstrually related TSS [toxic shock syndrome] cases exists.”), aff’d, 724 F.2d 613 (8th Cir. 1984). Hearsay concerns may limit the independent admissibility of the study, but the study could be relied on by an expert in forming an opinion and may be admissible pursuant to Fed. R. Evid. 703 as part of the underlying facts or data relied on by the expert. In Ellis v. International Playtex, Inc., 745 F.2d 292, 303 (4th Cir. 1984), the court concluded that certain epidemiologic studies were admissible despite criticism of the methodology used in the studies. The court held that the claims of bias went to the studies’ weight rather than their admissibility. Cf. Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1109 (5th Cir. 1991) (“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility. . . .”).[10]

This footnote’s suggestion, however, that studies relied upon by an expert in forming an opinion may be admissible pursuant to Rule 703, was unsupported by, and contrary to, Rule 703 and the overwhelming weight of case law interpreting and applying the rule.[11] The citation to a pre-Daubert decision, Christophersen, was doubtful as a legal argument, and managed to engender much confusion.

Furthermore, Kehm and Ellis, the cases cited in this footnote by the authors of the epidemiology chapter, both involved “factual findings” in public investigative or evaluative reports, which were independently admissible under Federal Rule of Evidence 803(8)(C). See Ellis, 745 F.2d at 299-303; Kehm, 724 F.2d at 617-18.  As such, the cases hardly support the chapter’s suggestion that Rule 703 is a rule of admissibility for epidemiologic studies.

Here the RMSE 3d, in one sentence, confused Rule 703 with an exception to the rule against hearsay; absent such an exception, statistically based epidemiologic studies cannot be received in evidence.  The point is reasonably clear, however, that the studies “may be offered” in testimony to explain an expert witness’s opinion. Under Rule 705, that offer may also be refused. The offer, however, is to “explain,” not to have the studies admitted in evidence.  The RMSE 3d was certainly not alone in advancing the notion that studies are themselves admissible.  Other well-respected evidence scholars have lapsed into this error.[12]

Evidence scholars should not conflate admissibility of the epidemiologic (or other) studies with the ability of an expert witness to advert to a study to explain his or her opinion.  The testifying expert witness really should not be allowed to become a conduit for off-hand comments and opinions in the introduction or discussion section of relied upon articles, and the wholesale admission of such hearsay opinions undermines the trial court’s control over opinion evidence.  Rule 703 authorizes reasonable reliance upon “facts and data,” not every opinion that creeps into the published literature.

II. Toxicology for Judges

The toxicology chapter, “Reference Guide on Toxicology,” in RMSE 3d was written by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the Princeton, New Jersey office of Buchanan Ingersoll, P.C.

  1. Conflicts of Interest

At the question and answer session of the Reference Manual’s public release ceremony, in September 2011, one gentleman rose to note that some of the authors were lawyers with big firm affiliations, which he supposed must mean that they represent mostly defendants.  Based upon his premise, he asked what the review committee had done to ensure that conflicts of interest did not skew or distort the discussions in the affected chapters.  Dr. Kassirer and Judge Kessler responded by pointing out that the chapters were peer reviewed by outside reviewers, and reviewed by members of the supervising review committee.  The questioner seemed reassured, but now that I have looked at the toxicology chapter, I am not so sure.

The questioner’s premise that a member of a large firm will represent mostly defendants, and thus have a pro-defense bias, is probably a common perception among unsophisticated lay observers, but it does not hold up.  For instance, some large firms represent insurance companies intent upon denying coverage to product manufacturers.  Counsel for those insurers often take the plaintiffs’ side of the underlying disputed issue in order to claim an exclusion to the contract of insurance, under a claim that the harm was “expected or intended.”  Similarly, the common perception ignores the reality of defense lawyers’ true conflict:  although gatekeeping helps the defense lawyers’ clients, it takes away legal work from the firms that represent defendants in the litigations that are pretermitted by effective judicial gatekeeping.  Erosion of gatekeeping concepts, by contrast, inures to the benefit of plaintiffs, their counsel, and the expert witnesses engaged on behalf of plaintiffs in litigation.

The questioner’s supposition in the case of the toxicology chapter, however, was doubly flawed.  Had he known more about the authors, he probably would not have asked his question.  First, the lawyer author, Ms. Henifin, despite her large-firm affiliation, has taken some aggressive positions contrary to the interests of manufacturers.[13]  As for the scientist author of the toxicology chapter, Professor Goldstein, the casual reader of the chapter may want to know that he has testified in any number of toxic tort cases, almost invariably on the plaintiffs’ side.  Unlike the defense lawyer, who loses business revenue when courts shut down unreliable claims, plaintiffs’ testifying or consulting expert witnesses stand to gain from minimalist expert witness opinion gatekeeping.  Given the economic asymmetries, the reader may well want to know that Professor Goldstein was excluded as an expert witness in some high-profile toxic tort cases.[14]  There do not appear to be any disclosures of Professor Goldstein’s (or any other scientist author’s) conflicts of interest in RMSE 3d.  Having pointed out this conflict, I would note that financial conflicts of interest pale in comparison with ideological conflicts of interest, which often propel scientists into service as expert witnesses to advance their political agendas.

  2. Hormesis

One way that ideological conflicts might be revealed is to look for imbalances in the presentation of toxicologic concepts.  Most lawyers who litigate cases that involve exposure-response issues are familiar with the “linear no threshold” (LNT) concept that is used frequently in regulatory risk assessments, and which has metastasized to toxic tort litigation, where LNT often has no proper place.

LNT is a dubious assumption because it claims to “know” the dose response at very low exposure levels in the absence of data.  There is a thin plausibility for LNT for genotoxic chemicals claimed to be carcinogens, but even that plausibility evaporates when one realizes that there are DNA defense and repair mechanisms to genotoxicity, which must first be saturated, overwhelmed, or inhibited, before there can be a carcinogenic response. The upshot is that low exposures that do not swamp DNA repair and tumor suppression proteins will not cause cancer.
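The difference between the two assumptions can be made concrete with a toy calculation. The sketch below is purely illustrative – the slope and threshold values are hypothetical and model no real agent or risk assessment:

```python
# Illustrative comparison of linear-no-threshold (LNT) and threshold
# dose-response models. All numbers are hypothetical.

def lnt_risk(dose, slope=0.001):
    """LNT: excess risk is assumed proportional to dose, all the way to zero."""
    return slope * dose

def threshold_risk(dose, threshold=50.0, slope=0.001):
    """Threshold model: no excess risk until defense and repair capacity
    (the hypothetical threshold) is exceeded; linear above it."""
    return max(0.0, slope * (dose - threshold))

for dose in (1, 10, 50, 100, 500):
    print(f"dose={dose:>4}: LNT={lnt_risk(dose):.4f}  "
          f"threshold={threshold_risk(dose):.4f}")
```

At low doses the two models diverge completely: LNT imputes a nonzero excess risk to every exposure, however trivial, while the threshold model returns zero until repair capacity is saturated – which is precisely the dispute over extrapolating regulatory LNT assumptions into causation litigation.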

Hormesis is today an accepted concept describing a dose-response relationship that shows a benefit at low doses but harm at high doses. The toxicology chapter in the Reference Manual contains several references to LNT but none to hormesis.  That font of all knowledge, Wikipedia, reports that hormesis is controversial, but so is LNT.  This is the sort of imbalance that may well reflect an ideological bias.

One of the leading textbooks on toxicology describes hormesis[15]:

“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”

Similarly, the Encyclopedia of Toxicology describes hormesis as an important phenomenon in toxicologic science[16]:

“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”

One might think that hormesis would also be of great interest to federal judges, but they will not learn about it from reading the Reference Manual.
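The biphasic, J-shaped curve that the textbook passages describe is easy to sketch numerically. The model below is a purely hypothetical illustration of the shape – a saturating low-dose benefit set against a linearly increasing high-dose harm – and does not describe any particular agent:

```python
# Purely illustrative hormetic (biphasic, J-shaped) dose-response:
# a modest net benefit at low doses, net harm at higher doses.
import math

def hormetic_response(dose, benefit=0.5, k=0.1, slope=0.02):
    """Net adverse effect (hypothetical): a saturating low-dose
    stimulatory benefit subtracted from linearly increasing toxicity."""
    stimulation = benefit * (1 - math.exp(-k * dose))  # low-dose benefit
    toxicity = slope * dose                            # high-dose harm
    return toxicity - stimulation

# Negative values = net benefit (the hormetic zone); positive = net harm.
for dose in (0, 5, 10, 25, 50, 100):
    print(f"dose={dose:>3}: net effect = {hormetic_response(dose):+.3f}")
```

With these made-up parameters the net effect dips below zero at low doses and climbs above it at higher doses – the J-shape that an LNT assumption, by construction, cannot represent.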

Hormesis research has come into its own.  The International Dose-Response Society, which “focus[es] on the dose-response in the low-dose zone,” publishes a journal, Dose-Response, and a newsletter, BELLE:  Biological Effects of Low Level Exposure.  In 2009, two leading researchers in the area of hormesis published a collection of important papers:  Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (2009).

A check in PubMed shows that LNT has more “hits” than “hormesis” or “hormetic,” but the latter terms still return over 1,267 references, hardly insubstantial.  In actuality, there are many more hormetic relationships identified in the scientific literature, which often fails to label the relationship with the terms hormesis or hormetic.[17]

The Reference Manual’s omission of hormesis was regrettable.  Its inclusion of references to LNT but not to hormesis suggests a biased treatment of the subject.

  3. Questionable Substantive Opinions

Readers and litigants would fondly hope that the toxicology chapter would not put forward partisan substantive positions on issues that are currently the subject of active litigation.  Fervently, we would hope that any substantive position advanced would at least be well documented.

For at least one issue, the toxicology chapter disappointed significantly.  Table 1 in the chapter presents a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.” No documentation or citations are provided for this table.  Most of the exposure agent/disease outcome relationships in the table are well accepted, but curiously at least one agent-disease pair, which is the subject of current litigation, is wildly off the mark:

“Parkinson’s disease and manganese”[18]

Had the chapter’s authors looked, they would have found that Parkinson’s disease is almost universally accepted to be of unknown cause, at least outside of courtrooms.  They would also have found that the issue has been addressed carefully, and that the claimed relationship or “concern” has been rejected by the leading researchers in the field (who have no litigation ties).[19]  Table 1 thus betrays a certain lack of objectivity, and its inclusion of the highly controversial manganese–Parkinson’s disease relationship suggests a good deal of partisanship.

  4. When All You Have Is a Hammer, Everything Looks Like a Nail

The substantive area author, Professor Goldstein, is not a physician; nor is he an epidemiologist.  His professional focus on animal and cell research appeared to color and bias the opinions offered in this chapter:[20]

“In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology.  If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans.”

Such extrapolations may make sense in regulatory contexts, where precautionary judgments are of interest, but they can hardly be said to be generally accepted in scientific communities, or in civil actions over actual causation.  There are too many counterexamples to cite, but consider crystalline silica, silicon dioxide.  Silica causes something resembling lung cancer in rats, but not in mice, guinea pigs, or hamsters.  It hardly makes sense to ask juries to decide whether the plaintiff is more like a rat than a mouse.

For a sober second opinion to the toxicology chapter, one may consider the views of some well-known authors:

“Whereas the concordance was high between cancer-causing agents initially discovered in humans and positive results in animal studies (Tomatis et al., 1989; Wilbourn et al., 1984), the same could not be said for the reverse relationship: carcinogenic effects in animals frequently lacked concordance with overall patterns in human cancer incidence (Pastoor and Stevens, 2005).”[21]

III. New Reference Manual’s Uneven Treatment of Causation and of Conflicts of Interest

The third edition of the Reference Manual on Scientific Evidence (RMSE) appeared to get off to a good start in the Preface by Judge Kessler and Dr. Kassirer, when they noted that the Supreme Court mandated federal courts to:

“examine the scientific basis of expert testimony to ensure that it meets the same rigorous standard employed by scientific researchers and practitioners outside the courtroom.”

RMSE at xiii.  The preface faltered, however, on two key issues, causation and conflicts of interest, which are taken up as an introduction to the third edition.

  1. Causation

The authors reported in somewhat squishy terms that causal assessments are judgments:

“Fundamentally, the task is an inferential process of weighing evidence and using judgment to conclude whether or not an effect is the result of some stimulus. Judgment is required even when using sophisticated statistical methods. Such methods can provide powerful evidence of associations between variables, but they cannot prove that a causal relationship exists. Theories of causation (evolution, for example) lose their designation as theories only if the scientific community has rejected alternative theories and accepted the causal relationship as fact. Elements that are often considered in helping to establish a causal relationship include predisposing factors, proximity of a stimulus to its putative outcome, the strength of the stimulus, and the strength of the events in a causal chain.”[22]

The authors left the inferential process as a matter of “weighing evidence,” without saying anything about how the scientific community does its “weighing.” The language about “proving” causation is also unclear, because “proof” in scientific parlance connotes a demonstration of the sort found in logic or mathematics. Talk of proving empirical propositions suggests a bar set so high that courts must inevitably acquiesce in a very low threshold of evidence.  The question, of course, is how low judges can and will go to admit evidence.

The authors thus introduced hand waving and excuses for why evidence can be weighed differently in court proceedings from the world of science:

“Unfortunately, judges may be in a less favorable position than scientists to make causal assessments. Scientists may delay their decision while they or others gather more data. Judges, on the other hand, must rule on causation based on existing information. Concepts of causation familiar to scientists (no matter what stripe) may not resonate with judges who are asked to rule on general causation (i.e., is a particular stimulus known to produce a particular reaction) or specific causation (i.e., did a particular stimulus cause a particular consequence in a specific instance). In the final analysis, a judge does not have the option of suspending judgment until more information is available, but must decide after considering the best available science.”[23]

But the “best available science” may be pretty crummy, and the temptation to turn desperation into evidence (“well, it’s the best we have now”) is often severe.  The authors of the Preface thus remarkably signaled that “inconclusive” is not a judgment open to judges charged with expert witness gatekeeping.  If the authors truly meant to suggest that judges should go with whatever is dished out as “the best available science,” then they overlooked the obvious:  Rule 702 opens the door to “scientific, technical, or other specialized knowledge,” not to hunches, suggestive but inconclusive evidence, or wishful thinking about how the science may turn out when further along.  Courts have the option of excluding expert witness opinion testimony that is based upon incomplete or inconclusive evidence. The authors went fairly far afield to suggest, erroneously, that the incomplete and the inconclusive are good enough and should be admitted.

  2. Conflicts of Interest

Surprisingly, given the scope of the scientific areas covered in the RMSE, the authors discussed conflicts of interest (COI) at some length.  Conflicts of interest are a fact of life in all endeavors, and it is understandable to counsel judges and juries to try to identify, assess, and control them.  COIs, however, are weak proxies for unreliability.  The emphasis given here was undue, because it entices federal judges into thinking that they can discern unreliability from COI, when they should be focused on the data, inferences, and analyses.

What becomes fairly clear is that the authors of the Preface set out to use COI as a basis for giving litigation plaintiffs a pass, and for holding back studies sponsored by corporate defendants.

“Conflict of interest manifests as bias, and given the high stakes and adversarial nature of many courtroom proceedings, bias can have a major influence on evidence, testimony, and decisionmaking. Conflicts of interest take many forms and can be based on religious, social, political, or other personal convictions. The biases that these convictions can induce may range from serious to extreme, but these intrinsic influences and the biases they can induce are difficult to identify. Even individuals with such prejudices may not appreciate that they have them, nor may they realize that their interpretations of scientific issues may be biased by them. Because of these limitations, we consider here only financial conflicts of interest; such conflicts are discoverable. Nonetheless, even though financial conflicts can be identified, having such a conflict, even one involving huge sums of money, does not necessarily mean that a given individual will be biased. Having a financial relationship with a commercial entity produces a conflict of interest, but it does not inevitably evoke bias. In science, financial conflict of interest is often accompanied by disclosure of the relationship, leaving to the public the decision whether the interpretation might be tainted. Needless to say, such an assessment may be difficult. The problem is compounded in scientific publications by obscure ways in which the conflicts are reported and by a lack of disclosure of dollar amounts.

Judges and juries, however, must consider financial conflicts of interest when assessing scientific testimony. The threshold for pursuing the possibility of bias must be low. In some instances, judges have been frustrated in identifying expert witnesses who are free of conflict of interest because entire fields of science seem to be co-opted by payments from industry. Judges must also be aware that the research methods of studies funded specifically for purposes of litigation could favor one of the parties. Though awareness of such financial conflicts in itself is not necessarily predictive of bias, such information should be sought and evaluated as part of the deliberations.”[24]

All in all, rather misleading advice.  Financial conflicts are not the only conflicts that can be “discovered.”  Often expert witnesses will have political and organizational alignments, which will show deep-seated ideological alignments with the party for which they are testifying.  For instance, in one silicosis case, an expert witness in the field of history of medicine testified, at an examination before trial, that his father suffered from a silica-related disease.  This witness’s alignment with Marxist historians and his identification with radical labor movements made his non-financial conflicts obvious, although these COI would not necessarily have been apparent from his scholarly publications alone.

How low will the bar be set for discovering COI?  If testifying expert witnesses rely upon textbooks, articles, and essays, will federal courts open the authors/hearsay declarants up to searching discovery of their finances? What is really at stake here is that the issues of accuracy, precision, and reliability are lost in the ad hominem project of discovering COIs.

Also misleading was the suggestion that “entire fields of science seem to be co-opted by payments from industry.”  Do the authors mean to exclude the plaintiffs’ lawyer lawsuit industry, which has become one of the largest rent-seeking organizations, and one of the most politically powerful groups, in this country?  In litigations in which I have been involved, I have certainly seen plaintiffs’ counsel, or their proxies – labor unions, federal agencies, or “victim support groups” – provide substantial funding for studies.  The Preface authors themselves show an untoward bias by pointing out industry payments without giving balanced attention to other interested parties’ funding of scientific studies.

The attention to COI was also surprising given that one of the key chapters, for toxic tort practitioners, was written by Dr. Bernard D. Goldstein, who has testified in toxic tort cases, mostly (but not exclusively) for plaintiffs.[25]  In one such case, Makofsky, Dr. Goldstein’s participation was particularly revealing because he was forced to explain why he was willing to opine that benzene caused acute lymphocytic leukemia, despite the plethora of published studies finding no statistically significant relationship.  Dr. Goldstein resorted to the inaccurate notion that scientific “proof” of causation requires 95 percent certainty, whereas he imposed only a 51 percent certainty for his medico-legal testimonial adventures.[26] Dr. Goldstein also attempted to justify the discrepancy from the published literature by adverting to the lower standards used by federal regulatory agencies and treating physicians.  

These explanations were particularly concerning because they reflect basic errors in statistics and in causal reasoning.  The 95 percent derives from the coefficient of confidence used in confidence intervals, but the probability involved there is not the probability that the association is correct, and it has nothing to do with the degree of belief that an association is real or causal.  (Thankfully the RMSE chapter on statistics got this right, but my fear is that judges will skip over the more demanding chapter on statistics and place undue weight on the toxicology chapter.)  The reference to federal agencies (OSHA, EPA, etc.) and to treating physicians was meant, no doubt, to invoke precautionary-principle concepts as a justification for some vague, ill-defined, lower standard of causal assessment.  These references were really covert invitations to shift the burden of proof.

The Preface authors might well have taken their own counsel and conducted a more searching assessment of COI among the authors of the Reference Manual.  Better yet, the authors might have focused the judiciary on the data and the analysis.

IV. Reference Manual on Scientific Evidence (3d edition) on Statistical Significance

How does the new Reference Manual on Scientific Evidence treat statistical significance?  Inconsistently and at times incoherently.

A. Professor Berger’s Introduction

In her introductory chapter, the late Professor Margaret A. Berger raised the question of what role statistical significance should play in evaluating a study’s support for causal conclusions[27]:

“What role should statistical significance play in assessing the value of a study? Epidemiological studies that are not conclusive but show some increased risk do not prove a lack of causation. Some courts find that they therefore have some probative value,62 at least in proving general causation.63”

This seems rather backwards.  Berger’s suggestion that inconclusive studies do not prove lack of causation seems nothing more than a tautology. Certainly the failure to rule out causation is not probative of causation. How can that tautology support the claim that inconclusive studies “therefore” have some probative value? Berger’s argument seems obviously invalid, or perhaps text that badly needed a posthumous editor.  And what epidemiologic studies are conclusive?  Are the studies individually or collectively conclusive?  Berger introduced a tantalizing concept, which was not spelled out anywhere in the Manual.

Berger’s chapter raised other, serious problems. If the relied-upon studies are not statistically significant, how should we understand the testifying expert witness to have ruled out random variability as an explanation for the disparity observed in the study or studies?  Berger did not answer these important questions, but her rhetoric elsewhere suggested that trial courts should not look too hard at the statistical support (or its lack) for proffered expert witness testimony.

Berger’s citations in support were curiously inaccurate.  Footnote 62 cites the Cook case:

“62. See Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071 (D. Colo. 2006) (discussing why the court excluded expert’s testimony, even though his epidemiological study did not produce statistically significant results).”

Berger’s citation was disturbingly incomplete.[28] The expert witness in Cook, Dr. Clapp, did rely upon his own study, which did not obtain a statistically significant result, but the trial court admitted the expert witness’s testimony; the court denied the Rule 702 challenge to Clapp, and permitted him to testify about a statistically non-significant ecological study. Given that the judgment of the district court was later reversed on appeal, the citation is all the more misleading.

Footnote 63 is no better:

“63. In re Viagra Prods., 572 F. Supp. 2d 1071 (D. Minn. 2008) (extensive review of all expert evidence proffered in multidistricted product liability case).”

With respect to the concept of statistical significance, the Viagra case centered around the motion to exclude plaintiffs’ expert witness, Gerald McGwin, who relied upon three studies, none of which obtained a statistically significant result in its primary analysis.  The Viagra court’s review was hardly extensive; the court did not report, discuss, or consider the appropriate point estimates in most of the studies, the confidence intervals around those point estimates, or any aspect of systematic error in the three studies.  At best, the court’s review was perfunctory.  When the defendant brought to light the lack of data integrity in McGwin’s own study, the Viagra MDL court reversed itself, and granted the motion to exclude McGwin’s testimony.[29]  Berger’s chapter omitted the cautionary tale of McGwin’s serious, pervasive errors, and how they led to his ultimate exclusion. Berger’s characterization of the review was incorrect, and her failure to cite the subsequent procedural history, misleading.

B. Chapter on Statistics

The Third Edition’s chapter on statistics was relatively free of value judgments about significance probability, and, therefore, an improvement over Berger’s introduction.  The authors carefully described significance probability and p-values, and explained[30]:

“Small p-values argue against the null hypothesis. Statistical significance is determined by reference to the p-value; significance testing (also called hypothesis testing) is the technique for computing p-values and determining statistical significance.”

Although the chapter conflated the positions often taken to be Fisher’s interpretation of p-values and Neyman’s conceptualization of hypothesis testing as a dichotomous decision procedure, this treatment was unfortunately fairly standard in introductory textbooks.  The authors may have felt that presenting multiple interpretations of p-values was asking too much of judges and lawyers, but the oversimplification invited a false sense of certainty about the inferences that can be drawn from statistical significance.

Kaye and Freedman, however, did offer some important cautions about the untoward consequences of using significance testing as a dichotomous outcome[31]:

“Artifacts from multiple testing are commonplace. Because research that fails to uncover significance often is not published, reviews of the literature may produce an unduly large number of studies finding statistical significance.111 Even a single researcher may examine so many different relationships that a few will achieve statistical significance by mere happenstance. Almost any large dataset—even pages from a table of random digits—will contain some unusual pattern that can be uncovered by diligent search. Having detected the pattern, the analyst can perform a statistical test for it, blandly ignoring the search effort. Statistical significance is bound to follow.

There are statistical methods for dealing with multiple looks at the data, which permit the calculation of meaningful p-values in certain cases.112 However, no general solution is available, and the existing methods would be of little help in the typical case where analysts have tested and rejected a variety of models before arriving at the one considered the most satisfactory (see infra Section V on regression models). In these situations, courts should not be overly impressed with claims that estimates are significant. Instead, they should be asking how analysts developed their models.113”

This important qualification to statistical significance was omitted from the overlapping discussion in the chapter on epidemiology, where it was very much needed.
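Kaye and Freedman's point about data dredging can be made concrete with a short simulation. The sketch below is my own illustration, not drawn from the Manual: test enough purely random "relationships" at the 0.05 level, and a few will attain statistical significance by mere happenstance.

```python
# A sketch (not from the Manual) of the multiple-testing artifact Kaye and
# Freedman describe: among many tests of pure noise, some reach "significance".
import random
from math import comb

def p_two_sided(n, k):
    """Exact two-sided p-value for k successes in n trials under a fair-coin null."""
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

random.seed(12345)  # arbitrary seed, chosen only for reproducibility

# Fifty "relationships," every one of them generated under the null hypothesis:
hits = sum(
    p_two_sided(100, sum(random.random() < 0.5 for _ in range(100))) < 0.05
    for _ in range(50)
)
print(hits, "of 50 null tests reached p < 0.05")  # typically a few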

C. Chapter on Multiple Regression

The chapter on regression did not add much to the earlier and later discussions.  The author asked rhetorically what the appropriate level of statistical significance is, and answered:

“In most scientific work, the level of statistical significance required to reject the null hypothesis (i.e., to obtain a statistically significant result) is set conventionally at 0.05, or 5%.47”

Daniel Rubinfeld, “Reference Guide on Multiple Regression,” in RMSE3d 303, 320.

D. Chapter on Epidemiology

The chapter on epidemiology[32] mostly muddled the discussion set out in Kaye and Freedman’s chapter on statistics.

“The two main techniques for assessing random error are statistical significance and confidence intervals. A study that is statistically significant has results that are unlikely to be the result of random error, although any criterion for ‘significance’ is somewhat arbitrary. A confidence interval provides both the relative risk (or other risk measure) found in the study and a range (interval) within which the risk likely would fall if the study were repeated numerous times.”

The suggestion that a statistically significant study has results unlikely to be the product of chance, without reminding the reader that the finding is predicated upon the assumptions that there is no association and that the probability model is correct, came close to crossing the line into the transposition fallacy so nicely described and warned against in the statistics chapter. The problem was that “results” is ambiguous as between the data (as extreme as, or more extreme than, what was observed) and the point estimate of the mean or proportion in the sample; and the assumptions that lead to a p-value were not disclosed.

The suggestion that alpha is “arbitrary” was “somewhat” correct, but this truncated discussion was distinctly unhelpful to judges, who are likely to take “arbitrary” to mean “I will get reversed.”  The selection of alpha is conventional to some extent, and arbitrary in the sense that the law’s setting an age of majority or a voting age is arbitrary.  Some young adults, say 17.8 years old, may be better educated, more engaged in politics, and better informed about current events than many 35 year olds, but the law must set a cut-off.  Two year olds are demonstrably unfit, and 82 year olds are surely past the threshold of maturity requisite for political participation. A court might admit an opinion based upon a study of rare diseases, with tight control of bias and confounding, when p = 0.051, but that is hardly a justification for ignoring random error altogether, or for admitting an opinion based upon a study in which the disparity observed had a p = 0.15.

The epidemiology chapter correctly called out judicial decisions that confuse “effect size” with statistical significance[33]:

“Understandably, some courts have been confused about the relationship between statistical significance and the magnitude of the association. See Hyman & Armstrong, P.S.C. v. Gunderson, 279 S.W.3d 93, 102 (Ky. 2008) (describing a small increased risk as being considered statistically insignificant and a somewhat larger risk as being considered statistically significant.); In re Pfizer Inc. Sec. Litig., 584 F. Supp. 2d 621, 634–35 (S.D.N.Y. 2008) (confusing the magnitude of the effect with whether the effect was statistically significant); In re Joint E. & S. Dist. Asbestos Litig., 827 F. Supp. 1014, 1041 (S.D.N.Y. 1993) (concluding that any relative risk less than 1.50 is statistically insignificant), rev’d on other grounds, 52 F.3d 1124 (2d Cir. 1995).”

Actually this confusion is not understandable at all.  The distinction has been the subject of teaching since the first edition of the Reference Manual, and two of the cited cases post-date the second edition.  The Southern District of New York asbestos case, of course, predated the first Manual.  To be sure, courts have on occasion badly misunderstood significance probability and significance testing.  The authors of the epidemiology chapter could well have added In re Viagra to the list of courts that confused effect size with statistical significance.[34]

The epidemiology chapter appropriately chastised courts for confusing significance probability with the probability that the null hypothesis, or its complement, is correct[35]:

“A common error made by lawyers, judges, and academics is to equate the level of alpha with the legal burden of proof. Thus, one will often see a statement that using an alpha of .05 for statistical significance imposes a burden of proof on the plaintiff far higher than the civil burden of a preponderance of the evidence (i.e., greater than 50%).  See, e.g., In re Ephedra Prods. Liab. Litig., 393 F. Supp. 2d 181, 193 (S.D.N.Y. 2005); Marmo v. IBP, Inc., 360 F. Supp. 2d 1019, 1021 n.2 (D. Neb. 2005) (an expert toxicologist who stated that science requires proof with 95% certainty while expressing his understanding that the legal standard merely required more probable than not). But see Giles v. Wyeth, Inc., 500 F. Supp. 2d 1048, 1056–57 (S.D. Ill. 2007) (quoting the second edition of this reference guide).

Comparing a selected p-value with the legal burden of proof is mistaken, although the reasons are a bit complex and a full explanation would require more space and detail than is feasible here. Nevertheless, we sketch out a brief explanation: First, alpha does not address the likelihood that a plaintiff’s disease was caused by exposure to the agent; the magnitude of the association bears on that question. See infra Section VII. Second, significance testing only bears on whether the observed magnitude of association arose  as a result of random chance, not on whether the null hypothesis is true. Third, using stringent significance testing to avoid false-positive error comes at a complementary cost of inducing false-negative error. Fourth, using an alpha of .5 would not be equivalent to saying that the probability the association found is real is 50%, and the probability that it is a result of random error is 50%.”

The footnotes went on to explain further the difference between alpha probability and burden of proof probability, but somewhat misleadingly asserted that “significance testing only bears on whether the observed magnitude of association arose as a result of random chance, not on whether the null hypothesis is true.”[36]  The significance probability does not address the probability that the observed statistic is the result of random chance; rather it describes the probability of observing at least as large a departure from the expected value if the null hypothesis is true.  Of course, if this cumulative probability is sufficiently low, then the null hypothesis is rejected, and this would seem to bear upon whether the null hypothesis is true.  Kaye and Freedman’s chapter on statistics did much better at describing p-values and avoiding the transposition fallacy.
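The correct reading, as the statistics chapter has it, can be made concrete with a small, self-contained computation. The numbers below are my own hypothetical illustration, not the Manual's: the p-value is the probability, computed under the assumption that the null hypothesis is true, of data at least as extreme as those observed.

```python
# Hypothetical illustration: 60 heads in 100 flips of a coin presumed fair.
# The p-value is P(data at least this extreme | null is true) -- not the
# probability that the null hypothesis (or the observed association) is true.
from math import comb

def binom_two_sided_p(n, k, p=0.5):
    """Exact two-sided p-value for a symmetric binomial null (p = 0.5), k > n/2."""
    upper = sum(comb(n, i) for i in range(k, n + 1)) * p ** n
    return min(1.0, 2 * upper)

print(f"p = {binom_two_sided_p(100, 60):.3f}")  # p = 0.057
```

A p of 0.057 here says nothing about a "94.3 percent chance" that the coin is biased; it is a tail probability computed under the null, which is exactly the distinction the epidemiology chapter blurred.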

When they stayed on message, the authors of the epidemiology chapter were certainly correct that significance probability cannot be translated into an assessment of the probability that the null hypothesis, or the obtained sampling statistic, is correct.  What these authors omitted, however, was a clear statement that the many courts and counsel who have misstated this fact do not create any worthwhile precedent, persuasive or binding.

The epidemiology chapter ultimately failed to help judges in assessing statistical significance:

“There is some controversy among epidemiologists and biostatisticians about the appropriate role of significance testing.85 To the strictest significance testers, any study whose p-value is not less than the level chosen for statistical significance should be rejected as inadequate to disprove the null hypothesis. Others are critical of using strict significance testing, which rejects all studies with an observed p-value below that specified level. Epidemiologists have become increasingly sophisticated in addressing the issue of random error and examining the data from a study to ascertain what information they may provide about the relationship between an agent and a disease, without the necessity of rejecting all studies that are not statistically significant.86 Meta-analysis, as well, a method for pooling the results of multiple studies, sometimes can ameliorate concerns about random error.87  Calculation of a confidence interval permits a more refined assessment of appropriate inferences about the association found in an epidemiologic study.88”

Id. at 578-79.  Mostly true, but again rather unhelpful to judges and lawyers.  Some of the controversy, to be sure, was instigated by statisticians and epidemiologists who would elevate Bayesian methods, and eliminate the use of significance probability and testing altogether. As for those scientists who still work within the dominant frequentist statistical paradigm, the chapter authors divided the world into “strict” testers and those critical of “strict” testing.  Where, however, is the boundary? Does criticism of “strict” testing imply embrace of “non-strict” testing, or of no testing at all?  I can sympathize with a judge who permits reliance upon a series of studies that all go in the same direction, with each having a confidence interval that just misses excluding the null hypothesis.  Meta-analysis in such a situation might not just ameliorate concerns about random error; it might eliminate them.  But what of those scientists critical of strict testing?  Their criticism certainly does not suggest or imply that courts can or should ignore random error; yet that is exactly what happened in the early going in In re Viagra Products Liab. Litig.[37]  The epidemiology chapter’s reference to confidence intervals was correct in part: confidence intervals permit a more refined assessment because they measure random error directly, on the scale of the magnitude of association, along with the point estimate obtained from, and conditioned on, the sample.  Confidence intervals, however, do not eliminate the need to interpret the extent of random error; they simply present the standard error more directly.

V. Power in the Reference Manual on Scientific Evidence

The Third Edition treated statistical power in three of its chapters, those on statistics, epidemiology, and medical testimony.  Unfortunately, the treatments were not always consistent.

The chapter on statistics has been consistently among the most frequently ignored content of the three editions of the Reference Manual.  The third edition offered a good introduction to basic concepts of sampling, random variability, significance testing, and confidence intervals.[38]  Kaye and Freedman provided an acceptable non-technical definition of statistical power[39]:

“More precisely, power is the probability of rejecting the null hypothesis when the alternative hypothesis … is right. Typically, this probability will depend on the values of unknown parameters, as well as the preset significance level α. The power can be computed for any value of α and any choice of parameters satisfying the alternative hypothesis. Frequentist hypothesis testing keeps the risk of a false positive to a specified level (such as α = 5%) and then tries to maximize power. Statisticians usually denote power by the Greek letter beta (β). However, some authors use β to denote the probability of accepting the null hypothesis when the alternative hypothesis is true; this usage is fairly standard in epidemiology. Accepting the null hypothesis when the alternative holds true is a false negative (also called a Type II error, a missed signal, or a false acceptance of the null hypothesis).”

The definition was not, however, without problems.  First, it introduced a nomenclature issue likely to confuse judges and lawyers. Kaye and Freedman used β to denote statistical power, but they acknowledged that epidemiologists use β to denote the probability of a Type II error.  And indeed, both the chapter on epidemiology and the chapter on medical testimony used β for the Type II error rate, and thus denoted power as the complement of β, or (1 − β).[40]

Second, the reason for introducing the confusion about β was doubtful.  Kaye and Freedman suggested that statisticians usually denote power by β, but they offered no citations.  A quick review (not necessarily complete or even a random sample) suggests that many modern statistics texts denote power as (1 − β).[41]   At the end of the day, there really was no reason for the conflicting nomenclature and the confusion it would likely engender.  Indeed, the duplicative handling of statistical power, and of other concepts, suggested that it is time to eliminate the repetitive discussions in favor of one clear, thorough discussion in the statistics chapter.

Third, Kaye and Freedman problematically referred to β as the probability of accepting the null hypothesis, when elsewhere they more carefully instructed that a non-significant finding results in not rejecting the null hypothesis, as opposed to accepting it.  Id. at 253.[42]

Fourth, Kaye and Freedman’s discussion of power, unlike most of their chapter, offered advice that was controversial and unclear:

“On the other hand, when studies have a good chance of detecting a meaningful association, failure to obtain significance can be persuasive evidence that there is nothing much to be found.”[43]

Note that the authors left open what a legal or clinically meaningful association is, and thus offered no real guidance to judges on how to evaluate power after data are collected and analyzed.  As Professor Sander Greenland has argued, in legal contexts, this reliance upon observed power (as opposed to power as a guide in determining appropriate sample size in the planning stages of a study) was arbitrary and “unsalvageable as an analytic tool.”[44]
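For concreteness, the quantity the chapters call power (1 − β) is readily computed once an alternative hypothesis is specified. The sketch below is my own, with hypothetical numbers, not the Manual's method; it shows a two-sided z-test, and also why the choice of alternative drives the answer.

```python
# Power = probability of rejecting the null when a specified alternative is
# true. Hypothetical numbers; a sketch, not the Manual's method.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sided_z(true_effect, se, z_alpha=1.96):
    """Power of a two-sided z-test at alpha = 0.05 (critical value 1.96)."""
    z = true_effect / se
    return norm_cdf(z - z_alpha) + norm_cdf(-z - z_alpha)

# The same study (standard error 0.15 on the log-risk scale), two alternatives:
print(f"{power_two_sided_z(0.3, 0.15):.2f}")  # ~0.52 for a true log RR of 0.3
print(f"{power_two_sided_z(0.6, 0.15):.2f}")  # ~0.98 for a true log RR of 0.6
```

The point in the text follows directly: a completed study has no single "power"; the answer depends entirely on which alternative one chooses to plug in, which is why critics such as Greenland prefer confidence intervals to post-hoc power.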

The chapter on epidemiology offered similar controversial advice on the use of power[45]:

“When a study fails to find a statistically significant association, an important question is whether the result tends to exonerate the agent’s toxicity or is essentially inconclusive with regard to toxicity.93 The concept of power can be helpful in evaluating whether a study’s outcome is exonerative or inconclusive.94  The power of a study is the probability of finding a statistically significant association of a given magnitude (if it exists) in light of the sample sizes used in the study. The power of a study depends on several factors: the sample size; the level of alpha (or statistical significance) specified; the background incidence of disease; and the specified relative risk that the researcher would like to detect.95  Power curves can be constructed that show the likelihood of finding any given relative risk in light of these factors. Often, power curves are used in the design of a study to determine what size the study populations should be.96”

Although the authors correctly emphasized the need to specify an alternative hypothesis, their discussion and advice were empty of how that alternative should be selected in legal contexts.  The suggestion that power curves can be constructed was, of course, true, but irrelevant unless courts know where on the power curve they should be looking.  The authors were also correct that power is used to determine adequate sample size under specified conditions; but again, the use of power curves in this setting is today rather uncommon.  Investigators select a level of power corresponding to an acceptable Type II error rate, and an alternative hypothesis that would be clinically meaningful for their research, in order to determine their sample size. Translating clinical into legal meaningfulness is not always straightforward.

In a footnote, the authors of the epidemiology chapter noted that Professor Rothman has been “one of the leaders in advocating the use of confidence intervals and rejecting strict significance testing.”[46] What the chapter failed to mention, however, is that Rothman has also been outspoken in rejecting the post-hoc power calculations that the epidemiology chapter seemed to invite:

“Standard statistical advice states that when the data indicate a lack of significance, it is important to consider the power of the study to detect as significant a specific alternative hypothesis. The power of a test, however, is only an indirect indicator of precision, and it requires an assumption about the magnitude of the effect. In planning a study, it is reasonable to make conjectures about the magnitude of an effect to compute study-size requirements or power. In analyzing data, however, it is always preferable to use the information in the data about the effect to estimate it directly, rather than to speculate about it with study-size or power calculations (Smith and Bates, 1992; Goodman and Berlin, 1994; Hoenig and Heisey, 2001). Confidence limits and (even more so) P-value functions convey much more of the essential information by indicating the range of values that are reasonably compatible with the observations (albeit at a somewhat arbitrary alpha level), assuming the statistical model is correct. They can also show that the data do not contain the information necessary for reassurance about an absence of effect.”[47]

The selective, incomplete scholarship of the epidemiology chapter on the issue of statistical power was not only unfortunate, but it distorted the authors’ evaluation of the sparse case law on the issue of power.  For instance, they noted:

“Even when a study or body of studies tends to exonerate an agent, that does not establish that the agent is absolutely safe. See Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767 (N.D. Ohio 2010). Epidemiology is not able to provide such evidence.”[48]

Here the authors, Green, Freedman, and Gordis, shifted the burden to the defendant, and then went to an even further extreme of making the burden of proof one of absolute certainty in the product’s safety.  This is not, and never has been, a legal standard. The cases they cited amplified the error. In Cooley, for instance, the defense expert would have opined that welding fume exposure did not cause parkinsonism or Parkinson’s disease.  Although the expert witness had not conducted a meta-analysis, he had reviewed the confidence intervals around the point estimates of the available studies.  Many of the point estimates were at or below 1.0, and in some cases, the upper bound of the confidence interval excluded 1.0.  The trial court expressed its concern that the expert witness had inferred “evidence of absence” from “absence of evidence.”  Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767, 773 (N.D. Ohio 2010).  This concern, however, was misguided given that many studies had tested the claimed association, and that virtually every case-control and cohort study had found risk ratios at or below 1.0, or very close to 1.0.  What the court in Cooley, and the authors of the epidemiology chapter in the third edition, lost sight of is that when a hypothesis is repeatedly tested, with failure to reject the null hypothesis, with point estimates at or very close to 1.0, and with narrow confidence intervals, the claimed association is probably incorrect.[49]

The Cooley court’s comments might have had some validity when applied to a single study, but not to the impressive body of exculpatory epidemiologic evidence that pertained to welding fume and Parkinson’s disease.  Shortly after the Cooley case was decided, a published meta-analysis of welding fume or manganese exposure demonstrated a reduced level of risk for Parkinson’s disease among persons occupationally exposed to welding fumes or manganese.[50]

VI. The Treatment of Meta-Analysis in the Third Edition

Meta-analysis is a statistical procedure for aggregating data and statistics from individual studies into a single summary statistical estimate of the population measurement of interest.  The first meta-analysis is typically attributed to Karl Pearson, circa 1904, who sought a method to overcome the limitations of small sample size and low statistical power.  Statistical methods for meta-analysis in epidemiology and the social sciences, however, did not mature until the 1970s.  Even then, the biomedical scientific community remained skeptical of, if not outright hostile to, meta-analysis until relatively recently.

The hostility to meta-analysis, especially in the context of observational epidemiologic studies, was colorfully expressed by two capable epidemiologists, Samuel Shapiro and Alvan Feinstein, as late as the 1990s:

“Meta-analysis begins with scientific studies….  [D]ata from these studies are then run through computer models of bewildering complexity which produce results of implausible precision.”

* * * *

“I propose that the meta-analysis of published non-experimental data should be abandoned.”[51]

The professional skepticism about meta-analysis was reflected in some of the early judicial assessments of meta-analysis in court cases.  In the 1980s and early 1990s, some trial judges erroneously dismissed meta-analysis as a flawed statistical procedure that claimed to make something out of nothing.[52]

In In re Paoli Railroad Yard PCB Litigation, Judge Robert Kelly excluded plaintiffs’ expert witness Dr. William Nicholson and his testimony based upon his unpublished meta-analysis of health outcomes among PCB-exposed workers.  Judge Kelly found that the meta-analysis was a novel technique, and that Nicholson’s meta-analysis was not peer reviewed.  Furthermore, the meta-analysis assessed health outcomes not experienced by any of the plaintiffs before the trial court.[53]

The Court of Appeals for the Third Circuit reversed the exclusion of Dr. Nicholson’s testimony, and remanded for reconsideration with instructions.[54]  The Circuit noted that meta-analysis was not novel, and that the lack of peer-review was not an automatic disqualification.  Acknowledging that a meta-analysis could be performed poorly using invalid methods, the appellate court directed the trial court to evaluate the validity of Dr. Nicholson’s work on his meta-analysis. On remand, however, it seems that plaintiffs chose – wisely – not to proceed with Nicholson’s meta-analysis.[55]

In one of many skirmishes over colorectal cancer claims in asbestos litigation, Judge Sweet in the Southern District of New York was unimpressed by efforts to aggregate data across studies.  Judge Sweet declared that:

“no matter how many studies yield a positive but statistically insignificant SMR for colorectal cancer, the results remain statistically insignificant. Just as adding a series of zeros together yields yet another zero as the product, adding a series of positive but statistically insignificant SMRs together does not produce a statistically significant pattern.”[56]

The plaintiffs’ expert witness who had offered the unreliable testimony, Dr. Steven Markowitz, like Nicholson, another foot soldier in Dr. Irving Selikoff’s litigation machine, did not offer a formal meta-analysis to justify his assessment that multiple non-significant studies, taken together, rule out chance as a likely explanation for an aggregate finding of an increased risk.

Judge Sweet was quite justified in rejecting this back of the envelope, non-quantitative meta-analysis.  His suggestion, however, that multiple non-significant studies could never collectively serve to rule out chance as an explanation for an overall increased rate of disease in the exposed groups is completely wrong.  Judge Sweet would have better focused on the validity issues in key studies, the presence of bias and confounding, and the completeness of the proffered meta-analysis.  The Second Circuit reversed the entry of summary judgment, and remanded the colorectal cancer claim for trial.[57]  Over a decade later, with even more accumulated studies and data, the Institute of Medicine found the evidence for asbestos plaintiffs’ colorectal cancer claims to be scientifically insufficient.[58]

Courts continue to go astray with an erroneous belief that multiple studies, all without statistically significant results, cannot yield a statistically significant summary estimate of increased risk.  See, e.g., Baker v. Chevron USA, Inc., 2010 WL 99272, *14-15 (S.D. Ohio 2010) (addressing a meta-analysis by Dr. Infante on multiple myeloma outcomes in studies of benzene-exposed workers).  There were many sound objections to Infante’s meta-analysis, but the suggestion that multiple studies without statistical significance could not yield a summary estimate of risk with statistical significance was not one of them.
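The statistical point is easy to demonstrate with a toy fixed-effect, inverse-variance meta-analysis. The study results below are hypothetical numbers of my own choosing, not drawn from any litigation: three studies, each with a 95% confidence interval that includes a relative risk of 1.0, pool to a summary estimate whose interval excludes 1.0.

```python
# Toy fixed-effect meta-analysis on the log relative risk scale.
# Hypothetical study results; each CI, taken alone, includes RR = 1.0.
from math import exp, log, sqrt

studies = [(1.30, 0.18), (1.25, 0.16), (1.35, 0.20)]  # (RR, SE of log RR)

for rr, se in studies:
    lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
    print(f"study RR {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # each spans 1.0

weights = [1.0 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1.0 / sum(weights))
lo, hi = exp(pooled - 1.96 * pooled_se), exp(pooled + 1.96 * pooled_se)
print(f"pooled RR {exp(pooled):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # excludes 1.0
```

Judge Sweet's zeros analogy fails because non-significant studies are not zeros: each contributes a point estimate and a weight, and pooling shrinks the summary standard error.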

In the last two decades, meta-analysis has emerged as an important technique for addressing random variation in studies, as well as some of the limitations of frequentist statistical methods.  In the 1980s, articles reporting meta-analyses were rare to non-existent.  In 2009, there were over 2,300 articles with “meta-analysis” in their title, or in their keywords, indexed in the PubMed database of the National Library of Medicine.[59]

The techniques for aggregating data have been studied, refined, and employed extensively in thousands of methods and application papers in the last decade. Consensus guideline papers have been published for meta-analyses of clinical trials as well as observational studies.[60]  Meta-analyses, of observational studies and of randomized clinical trials, routinely are relied upon by expert witnesses in pharmaceutical and so-called toxic tort litigation.[61]

The second edition of the Reference Manual on Scientific Evidence gave very little attention to meta-analysis; the third edition did not add very much to the discussion.  The time has come for the next edition to address meta-analyses, and criteria for their validity or invalidity.

  1. Statistics Chapter

The statistics chapter of the third edition gave scant attention to meta-analysis.  The chapter noted, in a footnote, that there are formal procedures for aggregating data across studies, and that the power of the aggregated data will exceed the power of the individual, included studies.  The footnote then cautioned that meta-analytic procedures “have their own weakness,”[62] without detailing what that weakness is. The time has come to spell out the weaknesses so that trial judges can evaluate opinion testimony based upon meta-analyses.
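The footnote’s claim about power is easy to make concrete. With an assumed true log risk ratio of 0.26 (risk ratio ≈ 1.3), a single study with a log-scale standard error of 0.2 has little chance of reaching two-sided significance at the 0.05 level, while four such studies pooled (standard error halved) have much more. The numbers here are invented for illustration:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

TRUE_LOG_RR = 0.26   # assumed true effect (risk ratio ~ 1.3) -- invented
Z_CRIT = 1.96        # two-sided test at the 0.05 level

def power(se):
    # Probability of rejecting the null when the true log RR is TRUE_LOG_RR
    z = TRUE_LOG_RR / se
    return (1 - norm_cdf(Z_CRIT - z)) + norm_cdf(-Z_CRIT - z)

single_study = power(0.20)      # one study, SE 0.2 on the log scale
pooled_four = power(0.20 / 2)   # four pooled studies: SE shrinks by sqrt(4)
print(round(single_study, 2), round(pooled_four, 2))
```

Under these assumptions the single study has power of roughly 25%, while the pooled analysis has power of roughly 74%, which is the footnote’s point made quantitative.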

The glossary at the end of the statistics chapter offers a definition of meta-analysis:

“meta-analysis. Attempts to combine information from all studies on a certain topic. For example, in the epidemiological context, a meta-analysis may attempt to provide a summary odds ratio and confidence interval for the effect of a certain exposure on a certain disease.”[63]

This definition was inaccurate in ways that could yield serious mischief.  Virtually all meta-analyses are, or should be, built upon a systematic review that sets out to collect all available studies on a research issue of interest.  It is a rare meta-analysis, however, that includes “all” studies in its quantitative analysis.  The meta-analytic process involves a pre-specification of inclusionary and exclusionary criteria for the quantitative analysis of the summary estimate of risk.  Those criteria may limit the quantitative analysis to randomized trials, or to analytical epidemiologic studies.  Furthermore, meta-analyses frequently and appropriately have pre-specified exclusionary criteria that relate to study design or quality.

On a more technical note, the offered definition suggests that the summary estimate of risk will be an odds ratio, which may or may not be true.  Meta-analyses of risk ratios may yield summary estimates of risk in terms of relative risk or hazard ratios, or even of risk differences.  The meta-analysis may combine data on means rather than proportions as well.

  2. Epidemiology Chapter

The chapter on epidemiology delved into meta-analysis in greater detail than the statistics chapter, and offered apparently inconsistent advice.  The overall gist of the chapter, however, can perhaps best be summarized by the definition offered in this chapter’s glossary:

“meta-analysis. A technique used to combine the results of several studies to enhance the precision of the estimate of the effect size and reduce the plausibility that the association found is due to random sampling error.  Meta-analysis is best suited to pooling results from randomly controlled experimental studies, but if carefully performed, it also may be useful for observational studies.”[64]

It is now time to tell trial judges what “careful” means in the context of conducting and reporting and relying upon meta-analyses.

The epidemiology chapter appropriately noted that meta-analysis can help address concerns over random error in small studies.[65]  Having told us that properly conducted meta-analyses of observational studies can be helpful, the chapter then proceeded to hedge considerably[66]:

“Meta-analysis is most appropriate when used in pooling randomized experimental trials, because the studies included in the meta-analysis share the most significant methodological characteristics, in particular, use of randomized assignment of subjects to different exposure groups. However, often one is confronted with nonrandomized observational studies of the effects of possible toxic substances or agents. A method for summarizing such studies is greatly needed, but when meta-analysis is applied to observational studies – either case-control or cohort – it becomes more controversial.174 The reason for this is that often methodological differences among studies are much more pronounced than they are in randomized trials. Hence, the justification for pooling the results and deriving a single estimate of risk, for example, is problematic.175

The stated objection to pooling results for observational studies was certainly correct, but many research topics have sufficient studies available to allow for appropriate selectivity in framing inclusionary and exclusionary criteria to address the objection.  The chapter went on to credit the critics of meta-analyses of observational studies.  As they did in the second edition of the Reference Manual, the authors in the third edition repeated their cites to, and quotes from, early papers by John Bailar, who was then critical of such meta-analyses:

“Much has been written about meta-analysis recently and some experts consider the problems of meta-analysis to outweigh the benefits at the present time. For example, John Bailar has observed:

‘[P]roblems have been so frequent and so deep, and overstatements of the strength of conclusions so extreme, that one might well conclude there is something seriously and fundamentally wrong with the method. For the present . . . I still prefer the thoughtful, old-fashioned review of the literature by a knowledgeable expert who explains and defends the judgments that are presented. We have not yet reached a stage where these judgments can be passed on, even in part, to a formalized process such as meta-analysis.’

John Bailar, “Assessing Assessments,” 277 Science 528, 529 (1997).”[67]

Bailar’s subjective preference for “old-fashioned” reviews, which often cherry-picked the included studies, is, well, old-fashioned.  More to the point, it is questionable science, and a distinctly minority viewpoint in light of substantial improvements in the conduct and reporting of systematic reviews and meta-analyses of observational studies.  Bailar may be correct that some meta-analyses should never have left the protocol stage, but the third edition of the Reference Manual failed to provide the judiciary with the tools to appreciate the distinction between good and bad meta-analyses.

This categorical rejection, cited with apparent approval, is amplified by a recitation of some real or apparent problems with meta-analyses of observational studies.  What is missing is a discussion of how many of these problems can be and are dealt with in contemporary practice[68]:

“A number of problems and issues arise in meta-analysis. Should only published papers be included in the meta-analysis, or should any available studies be used, even if they have not been peer reviewed? Can the results of the meta-analysis itself be reproduced by other analysts? When there are several meta-analyses of a given relationship, why do the results of different meta-analyses often disagree? The appeal of a meta-analysis is that it generates a single estimate of risk (along with an associated confidence interval), but this strength can also be a weakness, and may lead to a false sense of security regarding the certainty of the estimate. A key issue is the matter of heterogeneity of results among the studies being summarized.  If there is more variance among study results than one would expect by chance, this creates further uncertainty about the summary measure from the meta-analysis. Such differences can arise from variations in study quality, or in study populations or in study designs. Such differences in results make it harder to trust a single estimate of effect; the reasons for such differences need at least to be acknowledged and, if possible, explained.176 People often tend to have an inordinate belief in the validity of the findings when a single number is attached to them, and many of the difficulties that may arise in conducting a meta-analysis, especially of observational studies such as epidemiologic ones, may consequently be overlooked.177

The epidemiology chapter authors were entitled to their opinion, but their discussion left the judiciary uninformed about current practice, and best practices, in epidemiology.  A categorical rejection of meta-analyses of observational studies is at odds with the chapter’s own claim that such meta-analyses can be helpful if properly performed. What was needed, and is missing, is a meaningful discussion to help the judiciary determine whether a meta-analysis of observational studies was properly performed.
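In current practice, the heterogeneity concern in the quoted passage is not merely acknowledged; it is routinely quantified, with standard statistics such as Cochran’s Q and Higgins’ I². A minimal sketch, using invented study results:

```python
import math

# Hypothetical study results (risk ratio, SE of log RR), invented to show
# how heterogeneity is quantified; not real data from any study above.
studies = [(1.1, 0.2), (1.8, 0.2), (0.9, 0.2), (2.2, 0.2)]

logs = [math.log(rr) for rr, _ in studies]
weights = [1 / se**2 for _, se in studies]

# Fixed-effect summary estimate on the log scale
pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)

# Cochran's Q: weighted squared deviations of study results from the summary
Q = sum(w * (y - pooled)**2 for w, y in zip(weights, logs))
df = len(studies) - 1

# Higgins' I^2: share of the variability beyond what chance alone explains
I2 = max(0.0, (Q - df) / Q)
print(round(Q, 1), round(100 * I2), "%")  # Q ~ 13 on 3 df, I^2 ~ 77%
```

Here Q far exceeds its chi-squared expectation under homogeneity, and an I² of roughly 77% signals substantial heterogeneity; a careful meta-analyst would then turn to a random-effects model or explain the differences among studies, rather than report a single fixed-effect number.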

  3. Medical Testimony Chapter

The chapter on medical testimony is the third pass at meta-analysis in the third edition of the Reference Manual.  The second edition’s chapter on medical testimony ignored meta-analysis completely; the new edition addresses meta-analysis in the context of the hierarchy of study designs[69]:

“Other circumstances that set the stage for an intense focus on medical evidence included

(1) the development of medical research, including randomized controlled trials and other observational study designs;

(2) the growth of diagnostic and therapeutic interventions;141

(3) interest in understanding medical decision making and how physicians reason;142 and

(4) the acceptance of meta-analysis as a method to combine data from multiple randomized trials.143

This language from the medical testimony chapter curiously omitted meta-analysis of observational studies, but the footnote reference (note 143) then inconsistently discussed two meta-analyses of observational, rather than experimental, studies.[70]  The chapter then added further confusion by giving a more detailed listing of the hierarchy of medical evidence in the form of different study designs[71]:

3. Hierarchy of medical evidence

With the explosion of available medical evidence, increased emphasis has been placed on assembling, evaluating, and interpreting medical research evidence.  A fundamental principle of evidence-based medicine (see also Section IV.C.5, infra) is that the strength of medical evidence supporting a therapy or strategy is hierarchical.  When ordered from strongest to weakest, systematic review of randomized trials (meta-analysis) is at the top, followed by single randomized trials, systematic reviews of observational studies, single observational studies, physiological studies, and unsystematic clinical observations.150 An analysis of the frequency with which various study designs are cited by others provides empirical evidence supporting the influence of meta-analysis followed by randomized controlled trials in the medical evidence hierarchy.151 Although they are at the bottom of the evidence hierarchy, unsystematic clinical observations or case reports may be the first signals of adverse events or associations that are later confirmed with larger or controlled epidemiological studies (e.g., aplastic anemia caused by chloramphenicol,152 or lung cancer caused by asbestos153). Nonetheless, subsequent studies may not confirm initial reports (e.g., the putative association between coffee consumption and pancreatic cancer).154

This discussion further muddied the waters by using a parenthetical to suggest that meta-analyses of randomized clinical trials are equivalent to systematic reviews of such studies — “systematic review of randomized trials (meta-analysis).” Of course, systematic reviews are not meta-analyses, although they are usually a necessary precondition for conducting a proper meta-analysis.  The relationship between the procedures for a systematic review and a meta-analysis is in need of clarification, but the judiciary will not find it in the third edition of the Reference Manual.

CONCLUSION

The idea of the Reference Manual was important to support trial judges’ efforts to engage in gatekeeping in unfamiliar subject matter areas. In its third incarnation, the Manual has become a standard starting place for discussion, but on several crucial issues, the third edition was unclear, imprecise, contradictory, or muddled. The organizational committee and authors for the fourth edition have a fair amount of work on their hands; there is clearly room for improvement.


[1] Adam Dutkiewicz, “Book Review: Reference Manual on Scientific Evidence, Third Edition,” 28 Thomas M. Cooley L. Rev. 343 (2011); John A. Budny, “Book Review: Reference Manual on Scientific Evidence, Third Edition,” 31 Internat’l J. Toxicol. 95 (2012); James F. Rogers, Jim Shelson, and Jessalyn H. Zeigler, “Changes in the Reference Manual on Scientific Evidence (Third Edition),” Internat’l Ass’n Def. Csl. Drug, Device & Biotech. Comm. Newsltr. (June 2012).  See Schachtman “New Reference Manual’s Uneven Treatment of Conflicts of Interest.” (Oct. 12, 2011).

[2] The Manual did not do quite so well in addressing its own conflicts of interest.  See, e.g., infra at notes 7, 20.

[3] RMSE 3d 11 (2011).

[4] Id. at 19.

[5] Id. at 20 & n. 51 (citing Susan Haack, “An Epistemologist in the Bramble-Bush: At the Supreme Court with Mr. Joiner,” 26 J. Health Pol. Pol’y & L. 217–37 (1999)).

[6] Id. at 19-20 & n.52.

[7] Professor Berger filed an amicus brief on behalf of plaintiffs, in Rider v. Sandoz Pharms. Corp., 295 F.3d 1194 (11th Cir. 2002).

[8] Id. at 20 n.51. (The editors noted misleadingly that the published chapter was Berger’s last revision, with “a few edits to respond to suggestions by reviewers.”). I have written elsewhere of the ethical cloud hanging over this Milward decision. See “Carl Cranor’s Inference to the Best Explanation” (Feb. 12, 2021); “From here to CERT-ainty” (June 28, 2018); “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018).

[9] RMSE 3d at 610 (internal citations omitted).

[10] RMSE 3d at 610 n.184 (emphasis, in bold, added).

[11] Interestingly, the authors of this chapter seem to abandon their suggestion that studies relied upon “might qualify for the learned treatise exception to the hearsay rule, Fed. R. Evid. 803(18), or possibly the catchall exceptions, Fed. R. Evid. 803(24) & 804(5),” which was part of their argument in the Second Edition.  RMSE 2d at 335 (2000).  See also RMSE 3d at 214 (discussing statistical studies as generally “admissible,” but acknowledging that admissibility may be no more than permission to explain the basis for an expert’s opinion, which is hardly admissibility at all).

[12] David L. Faigman, et al., Modern Scientific Evidence:  The Law and Science of Expert Testimony v.1, § 23:1,at 206 (2009) (“Well conducted studies are uniformly admitted.”).

[13] See Richard M. Lynch and Mary S. Henifin, “Causation in Occupational Disease: Balancing Epidemiology, Law and Manufacturer Conduct,” 9 Risk: Health, Safety & Environment 259, 269 (1998) (conflating distinct causal and liability concepts, and arguing that legal and scientific causal criteria should be abrogated when manufacturing defendant has breached a duty of care).

[14]  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005).  No; you will not find the Parker case cited in the Manual‘s chapter on toxicology. (Parker is, however, cited in the chapter on exposure science even though it is a state court case.).

[15] Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (internal citations omitted).

[16] Philip Wexler, Bethesda, et al., eds., 2 Encyclopedia of Toxicology 96 (2005).

[17] See Edward J. Calabrese and Robyn B. Blain, “The hormesis database: The occurrence of hormetic dose responses in the toxicological literature,” 61 Regulatory Toxicology and Pharmacology 73 (2011) (reviewing about 9,000 dose-response relationships for hormesis, to create a database of various aspects of hormesis).  See also Edward J. Calabrese and Robyn B. Blain, “The occurrence of hormetic dose responses in the toxicological literature, the hormesis database: An overview,” 202 Toxicol. & Applied Pharmacol. 289 (2005) (earlier effort to establish hormesis database).

[18] Reference Manual at 653.

[19] See, e.g., Karin Wirdefeldt, Hans-Olaf Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ Health Perspect. 1071, 1078 (2010) (“The available evidence from human and nonhuman primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD [Parkinson’s disease].”)

[20] RMSE3ed at 646.

[21] Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sciences 223, 224 (2011).

[22] RMSE3d at xiv.

[23] RMSE3d at xiv.

[24] RMSE3d at xiv-xv.

[25] See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006); Exxon Corp. v. Makofski, 116 SW 3d 176 (Tex. Ct. App. 2003).

[26] Goldstein here and elsewhere has confused significance probability with the posterior probability required by courts and scientists.

[27] Margaret A. Berger, “The Admissibility of Expert Testimony,” in RMSE3d 11, 24 (2011).

[28] Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ (May 24, 2012).

[29] In re Viagra Products Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009). 

[31] Id. at 256-57.

[32] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 573.

[33] Id. at 573 n.68.

[34] See In re Viagra Products Liab. Litig., 572 F. Supp. 2d 1071, 1081 (D. Minn. 2008).

[35] RMSE3d at 577 n.81.

[36] Id.

[37] 572 F. Supp. 2d 1071, 1081 (D. Minn. 2008).

[38] David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” in RMSE3d 209 (2011).

[39] Id. at 254 n.106.

[40] See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 582, 626; John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, Abogado, “Reference Guide on Medical Testimony,” in RMSE3d 687, 724.  This confusion in nomenclature is regrettable, given the difficulty many lawyers and judges seem to have in following discussions of statistical concepts.

[41] See, e.g., Richard D. De Veaux, Paul F. Velleman, and David E. Bock, Intro Stats 545-48 (3d ed. 2012); Rand R. Wilcox, Fundamentals of Modern Statistical Methods 65 (2d ed. 2010).

[42] See also Daniel Rubinfeld, “Reference Guide on Multiple Regression,” in RMSE3d 303, 321 (describing a p-value > 5% as leading to failing to reject the null hypothesis).

[43] RMSE3d at 254.

[44] See Sander Greenland, “Nonsignificance Plus High Power Does Not Imply Support Over the Alternative,” 22 Ann. Epidemiol. 364, 364 (2012).

[45] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” RMSE3d 549, 582.

[46] RMSE3d at 579 n.88.

[47] Kenneth Rothman, Sander Greenland, and Timothy Lash, Modern Epidemiology 160 (3d ed. 2008).  See also Kenneth J. Rothman, “Significance Questing,” 105 Ann. Intern. Med. 445, 446 (1986) (“[Simon] rightly dismisses calculations of power as a weak substitute for confidence intervals, because power calculations address only the qualitative issue of statistical significance and do not take account of the results already in hand.”).

[48] RMSE3d at 582 n.93; id. at 582 n.94 (“Thus, in Smith v. Wyeth-Ayerst Labs. Co., 278 F.Supp. 2d 684, 693 (W.D.N.C. 2003), and Cooley v. Lincoln Electric Co., 693 F. Supp. 2d 767, 773 (N.D. Ohio 2010), the courts recognized that the power of a study was critical to assessing whether the failure of the study to find a statistically significant association was exonerative of the agent or inconclusive.”).

[49] See, e.g., Anthony J. Swerdlow, Maria Feychting, Adele C. Green, Leeka Kheifets, David A. Savitz, International Commission for Non-Ionizing Radiation Protection Standing Committee on Epidemiology, “Mobile Phones, Brain Tumors, and the Interphone Study: Where Are We Now?” 119 Envt’l Health Persp. 1534, 1534 (2011) (“Although there remains some uncertainty, the trend in the accumulating evidence is increasingly against the hypothesis that mobile phone use can cause brain tumors in adults.”).

[50] James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).

[51] Samuel Shapiro, “Meta-analysis/Smeta-analysis,” 140 Am. J. Epidem. 771, 777 (1994).  See also Alvan Feinstein, “Meta-Analysis: Statistical Alchemy for the 21st Century,” 48 J. Clin. Epidem. 71 (1995).

[52] Allen v. Int’l Bus. Mach. Corp., No. 94-264-LON, 1997 U.S. Dist. LEXIS 8016, at *71–*74 (suggesting that meta-analysis of observational studies was controversial among epidemiologists).

[53] 706 F. Supp. 358, 373 (E.D. Pa. 1988).

[54] In re Paoli R.R. Yard PCB Litig., 916 F.2d 829, 856-57 (3d Cir. 1990), cert. denied, 499 U.S. 961 (1991); Hines v. Consol. Rail Corp., 926 F.2d 262, 273 (3d Cir. 1991).

[55] See “The Shmeta-Analysis in Paoli” (July 11, 2019).

[56] In re Joint E. & S. Dist. Asbestos Litig., 827 F. Supp. 1014, 1042 (S.D.N.Y. 1993).

[57] 52 F.3d 1124 (2d Cir. 1995).

[58] Institute of Medicine, Asbestos: Selected Cancers (Wash. D.C. 2006).

[59] See Michael O. Finkelstein and Bruce Levin, “Meta-Analysis of ‘Sparse’ Data: Perspectives from the Avandia Cases,” Jurimetrics J. (2011).

[60] See Donna Stroup, et al., “Meta-analysis of Observational Studies in Epidemiology: A Proposal for Reporting,” 283 J. Am. Med. Ass’n 2008 (2000) (MOOSE statement); David Moher, Deborah Cook, Susan Eastwood, Ingram Olkin, Drummond Rennie, and Donna Stroup, “Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement,” 354 Lancet 1896 (1999).  See also Jesse Berlin & Carin Kim, “The Use of Meta-Analysis in Pharmacoepidemiology,” in Brian Strom, ed., Pharmacoepidemiology 681, 683–84 (4th ed. 2005); Zachary Gerbarg & Ralph Horwitz, “Resolving Conflicting Clinical Trials: Guidelines for Meta-Analysis,” 41 J. Clin. Epidemiol. 503 (1988).

[61] See Finkelstein & Levin, supra at note 59. See also In re Bextra and Celebrex Marketing Sales Practices and Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1174, 1184 (N.D. Cal. 2007) (holding that reliance upon “[a] meta-analysis of all available published and unpublished randomized clinical trials” was reasonable and appropriate, and criticizing the expert witnesses who urged the complete rejection of meta-analysis of observational studies).

[62] RMSE 3d at 254 n.107.

[63] Id. at 289.

[64] Reference Guide on Epidemiology, RMSE3d at 624.  See also id. at 581 n. 89 (“Meta-analysis is better suited to combining results from randomly controlled experimental studies, but if carefully performed it may also be helpful for observational studies, such as those in the epidemiologic field.”).

[65] Id. at 579; see also id. at 607 n. 171.

[66] Id. at 607.

[67] Id. at 607 n.177.

[68] Id. at 608.

[69] RMSE 3d at 722-23.

[70] Id. at 723 n.143 (“143. … Video Software Dealers Ass’n v. Schwarzenegger, 556 F.3d 950, 963 (9th Cir. 2009) (analyzing a meta-analysis of studies on video games and adolescent behavior); Kennecott Greens Creek Min. Co. v. Mine Safety & Health Admin., 476 F.3d 946, 953 (D.C. Cir. 2007) (reviewing the Mine Safety and Health Administration’s reliance on epidemiological studies and two meta-analyses).”).

[71] Id. at 723-24.

A Proclamation from the Task Force on Statistical Significance

June 21st, 2021

The American Statistical Association (ASA) has finally spoken up about statistical significance testing.[1] Sort of.

Back in February of this year, I wrote about the simmering controversy over statistical significance at the ASA.[2] Back in 2016, the ASA issued its guidance paper on p-values and statistical significance, which sought to correct misinterpretations and misrepresentations of “statistical significance.”[3] Lawsuit industry lawyers seized upon the ASA statement to proclaim a new freedom from having to exclude random error.[4] To obtain their ends, however, the plaintiffs’ bar had to distort the ASA guidance in statistically significant ways.

To add to the confusion, in 2019, the ASA Executive Director published an editorial that called for an end to statistical significance testing.[5] Because the editorial lacked disclaimers about whether or not it represented official ASA positions, scientists, statisticians, and lawyers on all sides were fooled into thinking the ASA had gone whole hog.[6] Then ASA President Karen Kafadar stepped into the breach to explain that the Executive Director was not speaking for the ASA.[7]

In November 2019, members of the ASA board of directors (BOD) approved a motion to create a “Task Force on Statistical Significance and Replicability.”[8] Its charge was

“to develop thoughtful principles and practices that the ASA can endorse and share with scientists and journal editors. The task force will be appointed by the ASA President with advice and participation from the ASA BOD. The task force will report to the ASA BOD by November 2020.”

The members of the Task Force identified in the motion were:

Linda Young (Nat’l Agricultural Statistics Service & Univ. of Florida; Co-Chair)

Xuming He (Univ. Michigan; Co-Chair)

Yoav Benjamini (Tel Aviv Univ.)

Dick De Veaux (Williams College; ASA Vice President)

Bradley Efron (Stanford Univ.)

Scott Evans (George Washington Univ.; ASA Publications Representative)

Mark Glickman (Harvard Univ.; ASA Section Representative)

Barry Graubard (Nat’l Cancer Instit.)

Xiao-Li Meng (Harvard Univ.)

Vijay Nair (Wells Fargo & Univ. Michigan)

Nancy Reid (Univ. Toronto)

Stephen Stigler (Univ. Chicago)

Stephen Vardeman (Iowa State Univ.)

Chris Wikle (Univ. Missouri)

Tommy Wright (U.S. Census Bureau)

Despite the inclusion of highly accomplished and distinguished statisticians on the Task Force, there were isolated demurrers. Geoff Cumming, for one, clucked:

“Why won’t statistical significance simply whither and die, taking p < .05 and maybe even p-values with it? The ASA needs a Task Force on Statistical Inference and Open Science, not one that has its eye firmly in the rear view mirror, gazing back at .05 and significance and other such relics.”[9]

Despite the clucking, the Task Force arrived at its recommendations, but curiously, its report did not find a home in an ASA publication. Instead, “The ASA President’s Task Force Statement on Statistical Significance and Replicability” has now appeared as an “in press” publication at The Annals of Applied Statistics, where Karen Kafadar is the editor in chief.[10] The report is accompanied by an editorial by Kafadar.[11]

The Task Force advanced five basic propositions, which may have been obscured by some of the recent glosses on the ASA 2016 p-value statement:

  1. “Capturing the uncertainty associated with statistical summaries is critical.”
  2. “Dealing with replicability and uncertainty lies at the heart of statistical science. Study results are replicable if they can be verified in further studies with new data.”
  3. “The theoretical basis of statistical science offers several general strategies for dealing with uncertainty.”
  4. “Thresholds are helpful when actions are required.”
  5. “P-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data.”
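Propositions 4 and 5 are easy to illustrate with a toy calculation (the numbers are invented and are not from the Task Force report): a p-value measures how surprising the observed data would be if the null hypothesis were true, and a threshold such as 0.05 supplies a decision rule when some action must be taken.

```python
import math

# Toy example: 60 adverse events among 100 treated patients, where the
# null hypothesis predicts a 50% event rate.  (Invented numbers.)
n, k, p0 = 100, 60, 0.5

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Exact two-sided p-value: total probability of every outcome at least as
# unlikely under the null as the outcome actually observed.
p_value = sum(binom_pmf(n, i, p0) for i in range(n + 1)
              if binom_pmf(n, i, p0) <= binom_pmf(n, k, p0))
print(round(p_value, 3))  # ~0.057: suggestive, but above a 0.05 threshold
```

Properly interpreted, the result says only that the data are fairly surprising under the null hypothesis; it does not say there is a 94% probability that the null is false, a conflation the ASA’s 2016 statement explicitly cautioned against.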

All of this seems obvious and anodyne, but I suspect it will not silence the clucking.


[1] Deborah Mayo, “Alas! The ASA President’s Task Force Statement on Statistical Significance and Replicability,” Error Statistics (June 20, 2021).

[2] “Falsehood Flies – The ASA 2016 Statement on Statistical Significance” (Feb. 26, 2021).

[3] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016); see “The American Statistical Association’s Statement on and of Significance” (March 17, 2016).

[4] “The American Statistical Association Statement on Significance Testing Goes to Court – Part I” (Nov. 13, 2018); “The American Statistical Association Statement on Significance Testing Goes to Court – Part 2” (Mar. 7, 2019).

[5] “Has the American Statistical Association Gone Post-Modern?” (Mar. 24, 2019); “American Statistical Association – Consensus versus Personal Opinion” (Dec. 13, 2019). See also Deborah G. Mayo, “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean,” Error Statistics Philosophy (June 17, 2019); B. Haig, “The ASA’s 2019 update on P-values and significance,” Error Statistics Philosophy (July 12, 2019); Brian Tarran, “THE S WORD … and what to do about it,” Significance (Aug. 2019); Donald Macnaughton, “Who Said What,” Significance 47 (Oct. 2019).

[6] Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[7] Karen Kafadar, “The Year in Review … And More to Come,” AmStat News 3 (Dec. 2019) (emphasis added); see Kafadar, “Statistics & Unintended Consequences,” AmStat News 3,4 (June 2019).

[8] Karen Kafadar, “Task Force on Statistical Significance and Replicability,” ASA Amstat Blog (Feb. 1, 2020).

[9] See, e.g., Geoff Cumming, “The ASA and p Values: Here We Go Again,” The New Statistics (Mar. 13, 2020).

[10] Yoav Benjamini, Richard D. De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 2021, available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51526?confirm=79a17040.

[11] Karen Kafadar, “Editorial: Statistical Significance, P-Values, and Replicability,” 15 Annals of Applied Statistics 2021, available at https://www.e-publications.org/ims/submission/AOAS/user/submissionFile/51525?confirm=3079934e.

The Practicing Law Institute’s Second Edition of Products Liability Litigation

May 30th, 2021

In late March, the Practicing Law Institute released the second edition of its treatise on products liability. George D. Sax, Stephanie A. Scharf, Sarah R. Marmor, eds., Product Liability Litigation: Current Law, Strategies and Best Practices (2nd ed. 2021).

The new edition is now in two volumes, which cover substantive products liability law, as well as the legal theory, policy, and strategy considerations important to both those who pursue and those who defend products liability claims. The work of the editors, Stephanie A. Scharf and her colleagues, George D. Sax and Sarah R. Marmor, in managing this process is nothing short of Homeric. The authors are mostly practitioners, with a wealth of practical experience. A good number of friends, colleagues, and adversaries are among the chapters' authors, so any recommendation I make should be tempered by that disclosure.

Unlike with the first edition, the PLI has doubled down on control of the copyright license, and so I am no longer able to upload my chapter on statistical evidence to ResearchGate, Academia.edu, or my own website. But here is the outline index to my contribution, Chapter 28, "Statistical Evidence in Products Liability Litigation":

  • 28:1 History and Overview
  • 28:2 Litigation Context of Statistical Issues
  • 28:3 Qualifications of Expert Witnesses Who Give Testimony on Statistical Issues
  • 28:4 Admissibility of Statistical Evidence – Rules 702 and 703
  • 28:5 Significance Probability
  • 28:5.1 Definition of Significance Probability (The “p-value”)
  • 28:5.2 Misstatements about Statistical Significance
  • 28:5.3 Transposition Fallacy
  • 28:5.4 Confusion Between Significance Probability and Burden of Proof
  • 28:5.5 Hypothesis Testing
  • 28:5.6 Confidence Intervals
  • 28:5.7 Inappropriate Use of Statistics – Matrixx Initiatives
    • [A]     Sequelae of Matrixx Initiatives
    • [B]     Is Statistical Significance Necessary?
  • 28:5.8 American Statistical Association's Statement on P-Values
  • 28:6 Statistical Power
  • 28:6.1 Definition of Statistical Power
  • 28:6.2 Cases Involving Statistical Power
  • 28:7 Evidentiary Rule of Completeness
  • 28:8 Meta-Analysis
  • 28:8.1 Definition and History of Meta-Analysis
  • 28:8.2 Consensus Statements
  • 28:8.3 Use of Meta-Analysis in Litigation
  • 28:8.4 Competing Models for Meta-Analysis
  • 28:8.5 Recent Cases Involving Meta-Analyses
  • 28:9 Statistical Inference in Securities Fraud Cases Against Pharmaceutical Manufacturers
  • 28:10 Multiple Testing
  • 28:11 Ethical Considerations Raised by Statistical Expert Witness Testimony
  • 28:12 Conclusion

A detailed table of contents for the entire treatise is available at the PLI's website. The authors and their chapters are set out below.

Chapter 1. What Product Liability Might Look Like in the Twenty-First Century (James M. Beck)

Chapter 2. Recent Trends in Product Claims and Product Defenses (Lori B. Leskin & Angela R. Vicari)

Chapter 3. Game-Changers: Defending Products Cases with Child Plaintiffs (Sandra Giannone Ezell & Diana M. Miller)

Chapter 4. Preemption Defenses (Joseph G. Petrosinelli, Ana C. Reyes & Amy Mason Saharia)

Chapter 5. Defending Class Action Lawsuits (Mark Herrmann, Pearson N. Bownas & Katherine Garceau Sobiech)

Chapter 6. Litigation in Foreign Countries Against U.S. Companies (Joseph G. Petrosinelli & Ana C. Reyes)

Chapter 7. Emerging Issues in Pharmaceutical Litigation (Allen P. Waxman, Loren H. Brown & Brooke Kim)

Chapter 8. Recent Developments in Asbestos, Talc, Silica, Tobacco, and E-Cigarette/Vaping Litigation in the U.S. and Canada (George Gigounas, Arthur Hoffmann, David Jaroslaw, Amy Pressman, Nancy Shane Rappaport, Wendy Michael, Christopher Gismondi, Stephen H. Barrett, Micah Chavin, Adam A. DeSipio, Ryan McNamara, Sean Newland, Becky Rock, Greg Sperla & Michael Lisanti)

Chapter 9. Emerging Issues in Medical Device Litigation (David R. Geiger, Richard G. Baldwin, Stephen G.W. Stich & E. Jacqueline Chávez)

Chapter 10. Emerging Issues in Automotive Product Liability Litigation (Eric P. Conn, Howard A. Fried, Thomas N. Lurie & Nina A. Rosenbach)

Chapter 11. Emerging Issues in Food Law and Litigation (Sarah L. Brew & Joelle Groshek)

Chapter 12. Regulating Cannabis Products (James H. Rotondo, Steven A. Cash & Kaitlin A. Canty)

Chapter 13. Blockchain Technology and Its Impact on Product Litigation (Justin Wales & Matt Kohen)

Chapter 14. Emerging Trends: Smart Technology and the Internet of Things (Christopher C. Hossellman & Damion M. Young)

Chapter 15. The Law of Damages in Product Liability Litigation (Evan D. Buxner & Dionne L. Koller)

Chapter 16. Using Early Case Assessments to Develop Strategy (Mark E. (Rick) Richardson)

Chapter 17. Impact of Insurance Policies (Kamil Ismail, Linda S. Woolf & Richard M. Barnes)

Chapter 18. Advantages and Disadvantages of Multidistrict Litigation (Wendy R. Fleishman)

Chapter 19. Strategies for Co-Defending Product Actions (Lem E. Montgomery III & Anna Little Morris)

Chapter 20. Crisis Management and Media Strategy (Joanne M. Gray & Nilda M. Isidro)

Chapter 21. Class Action Settlements (Richard B. Goetz, Carlos M. Lazatin & Esteban Rodriguez)

Chapter 22. Mass Tort Settlement Strategies (Richard B. Goetz & Carlos M. Lazatin)

Chapter 23. Arbitration (Beth L. Kaufman & Charles B. Updike)

Chapter 24. Privilege in a Global Product Economy (Marina G. McGuire)

Chapter 25. E-Discovery—Practical Considerations (Denise J. Talbert, John C. Vaglio, Jeremiah S. Wikler & Christy A. Pulis)

Chapter 26. Expert Evidence—Law, Strategies and Best Practices (Stephanie A. Scharf, George D. Sax, Sarah R. Marmor & Morgan G. Churma)

Chapter 27. Court-Appointed and Unconventional Expert Issues (Jonathan M. Hoffman)

Chapter 28. Statistical Evidence in Products Liability Litigation (Nathan A. Schachtman)

Chapter 29. Post-Sale Responsibilities in the United States and Foreign Countries (Kenneth Ross & George W. Soule)

Chapter 30. Role of Corporate Executives (Samuel Goldblatt & Benjamin R. Dwyer)

Chapter 31. Contacting Corporate Employees (Sharon L. Caffrey, Kenneth M. Argentieri & Rachel M. Good)

Chapter 32. Spoliation of Product Evidence (Paul E. Benson & Adam E. Witkov)

Chapter 33. Presenting Complex Scientific Evidence (Morton D. Dubin II & Nina Trovato)

Chapter 34. How to Win a Dismissal When the Plaintiff Declares Bankruptcy (Anita Hotchkiss & Earyn Edwards)

Chapter 35. Juries (Christopher C. Spencer)

Chapter 36. Preparing for the Appeal (Wendy F. Lumish & Alina Alonso Rodriguez)

Chapter 37. Global Reach: Foreign Defendants in the United States (Lisa J. Savitt)

Cancel Causation

March 9th, 2021

The Subversion of Causation into Normative Feelings

The late Professor Margaret Berger argued for the abandonment of general causation, or cause-in-fact, as an element of tort claims under the law.[1] Her antipathy to the requirement of showing causation ultimately led her to deprecate efforts to inject due scientific care into the gatekeeping of causation opinions. After a long, distinguished career as a law professor, Berger died in November 2010. Her animus against causation and Rule 702, however, was so strong that in her chapter in the third edition of the Reference Manual on Scientific Evidence, which came out almost one year after her death, she embraced the First Circuit's notorious anti-Daubert decision in Milward, which also post-dated her passing.[2]

Despite this posthumous writing and publication by Professor Berger, there have been no further instances of Zombie scholarship or ghost authorship.  Nonetheless, the assault on causation has been picked up by Professor Alexandra D. Lahav, of the University of Connecticut School of Law, in a recent essay posted online.[3] Lahav’s essay is an extension of her work, “The Knowledge Remedy,” published last year.[4]

This second essay, entitled “Chancy Causation in Tort Law,” is the plaintiffs’ brief against counterfactual causation, which Lahav acknowledges is the dominant test for factual causation.[5] Lahav begins with a reasonable, reasonably understandable distinction between deterministic (necessary and sufficient) and probabilistic (or chancy in her parlance) causation.

The putative victim of a toxic exposure (such as glyphosate and Non-Hodgkin’s lymphoma) cannot show that his exposure was a necessary and sufficient determinant of his developing NHL. Not everyone similarly exposed develops NHL; and not everyone with NHL has been exposed to glyphosate. In Lahav’s terminology, specific causation in such a case is “chancy.” Lahav asserts, but never proves, that the putative victim “could never prove that he would not have developed cancer if he had not been exposed to that herbicide.”[6]

Lahav's example presents a causal claim that involves both general and specific causation, and which is easily distinguishable from a claim that a death was caused by being run over by a high-speed train. Despite this difference, Lahav never marshals any evidence to show why the putative glyphosate victim cannot show the probability that his case is causally related by adverting to the magnitude of the relative risk created by the prior exposure.

Repeatedly, Lahav asserts that when causation is chancy – probabilistic – it can never be shown by counterfactual causal reasoning, which she claims “assumes deterministic causation.” And she further asserts that because probabilistic causation cannot fit the counterfactual model, it can never “meet the law’s demand for a binary determination of cause.”[7]

Contrary to these ipse dixits, probabilistic causation can be described in terms of counterfactuals at both the general and the specific, or individual, level. The modification requires us, of course, to describe the baseline situation as a rate or frequency of events, and the post-exposure world as one with a modified rate or frequency. The exposure is the cause of the change in event rates. Modern physics has addressed whether we must be content with probability statements, rather than the precise, deterministic "billiard ball" physics that is so useful in a game of snooker, but less so in describing quarks. In the first half of the 20th century, the biological sciences learned, with some difficulty, that they must embrace probabilistic models, in genetics as well as in epidemiology. Many biological causation models are completely stated in terms of probabilities that are modified by specified conditions.

When Lahav gets to providing an example of where chancy causation fails in reasoning about individual causation, she gives a meaningless hypothetical of a woman, Mary, who is a smoker who develops lung cancer. To remove any resemblance to real-world cases, Lahav postulates that Mary had a 20% increased risk of lung cancer from smoking (a relative risk of 1.2). Thus, Lahav suggests that:

“[i]f Mary is a smoker and develops lung cancer, even after she has developed lung cancer it would still be the case that the cause of her cancer could only be described as a likelihood of 20 percent greater than what it would have been otherwise. Her doctor would not be able to say to her ‘Mary, if you had not smoked, you would not have developed this cancer’ because she might have developed it in any event.”

A more pertinent, less counterfactual hypothetical is that Mary had a 2,000% increase in risk from her tobacco smoking. This corresponds to the relative risks in the range of 20 seen in many, if not most, epidemiologic studies of smoking and lung cancer. For such a risk, the individual probability of causation would be well over 0.9.
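The arithmetic behind these probability-of-causation figures is the standard attributable fraction among the exposed, PC = (RR − 1)/RR, sketched here in a minimal snippet (the function name is mine; the formula assumes the excess risk is spread evenly across the exposed group):

```python
def probability_of_causation(relative_risk: float) -> float:
    """Attributable fraction among the exposed: (RR - 1) / RR."""
    if relative_risk <= 1.0:
        return 0.0  # no excess risk, nothing to attribute
    return (relative_risk - 1.0) / relative_risk

# Lahav's hypothetical: a 20% increased risk (RR = 1.2)
print(round(probability_of_causation(1.2), 3))   # 0.167 -- far below 0.5
# A smoking-type relative risk of 20 (a 2,000% increase in risk)
print(round(probability_of_causation(20.0), 3))  # 0.95 -- well over 0.9
```

The contrast shows why the choice of hypothetical matters: at a relative risk of 1.2, individual attribution falls well short of the more-likely-than-not threshold, while at a relative risk of 20, it comfortably exceeds it.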

To be sure, there are critics of using the probability of causation because it assumes that the risk is distributed stochastically, which may not be correct. Claimants are, of course, free to try to show that more of the risk fell on them for some reason, but this requires evidence!

Lahav attempts to answer this point, but her argument runs off its rails.  She notes that:

“[i]f there is an 80% chance that a given smoker’s cancer is caused by smoking, and Mary smoked, some might like to say that she has met her burden of proof.

This approach confuses the strength of the evidence with its content. Assume that it is more likely than not, based on recognized scientific methodology, that for 80% of smokers who contract lung cancer their cancer is attributable to smoking. That fact does not answer the question of whether we ought to infer that Mary’s cancer was caused by smoking. I use the word ought advisedly here. Suppose Mary and the cigarette company stipulate that 80% of people like Mary will contract lung cancer, the burden of proof has been met. The strength of the evidence is established. The next question regards the legal permissibility of an inference that bridges the gap between the run of cases and Mary. The burden of proof cannot dictate the answer. It is a normative question of whether to impose liability on the cigarette company for Mary’s harm.”[8]

Lahav is correct that an 80% probability of causation might be based upon very flimsy evidence, and so that probability alone cannot establish that the plaintiff has a “submissible” case. If the 80% probability of causation is stipulated, and not subject to challenge, then Lahav’s claim is remarkable and contrary to most of the scholarship that has followed the infamous Blue Bus hypothetical. Indeed, she is making the very argument that tobacco companies made in opposition to the use of epidemiologic evidence in tobacco cases, in the 1950s and 1960s.

Lahav advances a perverse skepticism that any inferences about individuals can be drawn from information about rates or frequencies in groups of similar individuals. Yes, there may always be some debate about what is "similar," but successive studies may well draw the net tighter around what is the appropriate class. Lahav's skepticism, and her outright denialism, are common among some in the legal academy, but they ignore that group-to-individual inferences are drawn in epidemiology in multiple contexts. Regressions for disease prediction are based upon individual data within groups, and the regression equations are then applied to future individuals to help predict those individuals' probability of future disease (such as heart attack or breast cancer), or their probability of cancer-free survival after a specific therapy. Group-to-individual inferences are, of course, also the basis for prescribing decisions in clinical medicine. These are not normative inferences; they are based upon evidence-based causal thinking.

Lahav suggests that the metaphor of a “link” between exposure and outcome implies “something is determined and knowable, which is not possible in chancy causation cases.”[9] Not only is the link metaphor used all the time by sloppy journalists and some scientists, but when they use it, they mostly use it in the context of what Lahav would characterize as “chancy causation.” Even when speaking more carefully, and eschewing the link metaphor, scientists speak of probabilistic causation as something that is real, based upon evidence and valid inferences, not normative judgments or emotive reactions.

The probabilistic nature of the probability of causation does not affect its epistemic status.

The law does not assume that binary deterministic causality, as Lahav describes, is required to apply “but for” or counterfactual analysis. Juries are instructed to determine whether the party with the burden of proof has prevailed on each element of the claim, by a preponderance of the evidence. This civil jury instruction is almost always explained in terms of a posterior probability greater than 0.5, whether the claimed tort is a car crash or a case of Non-Hodgkin’s lymphoma.

Elsewhere, Lahav struggles with the concept of probability. Her essay suggests that

“[p]robability follows certain rules, or tendencies, but these regular laws do not abolish chance. There is a chance that the exposure caused his cancer, and a chance that it did not.”[10]

The use of chance here, in contradistinction to probability, is so idiosyncratic, and unexplained, that it is impossible to know what is meant.

Manufactured Doubt

Lahav’s essay twice touches upon a strawperson argument that stretches to claim that “manufacturing doubt” does not undermine her arguments about the nature of chancy causation. To Lahav, the likes of David Michaels have “demonstrated” that manufactured uncertainty is a genuine problem, but not one that affects her main claims. Nevertheless, Lahav remarkably sees no problem with manufactured certainty in the advocacy science of many authors.[11]

Lahav swallows Michaels' line, lure and all, and goes so far as to describe Rule 702 challenges to causal claims as having the "negative effect" of producing "incentives to sow doubt about epidemiologic studies using methodological battles, a strategy pioneered by the tobacco companies … ."[12] There is no corresponding concern about the negative effect of producing incentives to overstate findings, or the validity of inferences, in order to get to a verdict for claimants.

Post-Modern Causation

What we have then is the ultimate post-modern program, which asserts that cause is “irreducibly chancy,” and thus indeterminate, and rightfully in the realm of “normative decisions.”[13] Lahav maintains there is an extreme plasticity to the very concept of causation:

“Causation in tort law can be whatever judges want it to be… .”[14]

I for one sincerely doubt it. And if judges make up some Lahav-inspired concept of normative causation, the scientific community would rightfully scoff.

Taking Lahav's earlier paper, "The Knowledge Remedy," along with this paper, the reader will see that Lahav is arguing for a rather extreme, radical precautionary-principle approach to causation. There is a germ of truth that gatekeeping is affected by the moral quality of the defendant or its product. In the early days of the silicone gel breast implant litigation, some judges were influenced by suggestions that breast implants were frivolous products, made and sold to cater to male fantasies. Later, upon more mature reflection, judges recognized that roughly one third of breast implant surgeries were post-mastectomy, and that silicone was an essential biomaterial. The recognition brought a sea change in critical thinking about the evidence proffered by claimants, and ultimately led to the recognition that the claimants were relying upon bogus and fraudulent evidence.[15]

—————————————————————————————–

[1]  Margaret A. Berger, “Eliminating General Causation: Notes towards a New Theory of Justice and Toxic Torts,” 97 Colum. L. Rev. 2117 (1997).

[2] Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[3]  Alexandra D. Lahav, “Chancy Causation in Tort,” (May 15, 2020) [cited as Chancy], available at https://ssrn.com/abstract=3633923 or http://dx.doi.org/10.2139/ssrn.3633923.

[4]  Alexandra D. Lahav, "The Knowledge Remedy," 98 Texas L. Rev. 1361 (2020). See "The Knowledge Remedy Proposal" (Nov. 14, 2020).

[5]  Chancy at 2 (citing American Law Institute, Restatement (Third) of Torts: Physical & Emotional Harm § 26 & com. a (2010) (describing legal history of causal tests)).

[6]  Id. at 2-3.

[7]  Id.

[8]  Id. at 10.

[9]  Id. at 12.

[10]  Id. at 2.

[11]  Id. at 8 (citing David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020), among others).

[12]  Id. at 18.

[13]  Id. at 6.

[14]  Id. at 3.

[15]  Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Falsehood Flies – The ASA 2016 Statement on Statistical Significance

February 26th, 2021

Under the heading of "falsehood flies," we have the attempt by the American Statistical Association (ASA) to correct misinterpretations and misrepresentations of "statistical significance," in a 2016 consensus statement.[1] Almost before the ink was dry, lawsuit industry lawyers seized upon the ASA statement to proclaim a new freedom from having to exclude random error.[2] Those misrepresentations were easily enough defeated by the actual text of the ASA statement, as long as lawyers bothered to read it carefully.

In 2019, Ronald Wasserstein, the ASA executive director, along with two other authors wrote an editorial, which explicitly called for the abandonment of using “statistical significance.” Although the piece, published in the American Statistician, was labeled “editorial,”[3] I predicted that Wasserstein’s official title, which appears in the editorial, and the absence of a disclaimer that the piece was not an ex cathedra pronouncement, would lead to widespread confusion, abuse, and further misrepresentations of the ASA’s views.[4]

Some people pooh-poohed the danger of confusion, but I was doubtful, given the experience with what happened with the anodyne 2016 ASA statement. What I did not realize until recently was that the Wasserstein editorial was misunderstood to be an official policy statement by the ASA’s own publication, Significance!

Significance is a bimonthly magazine on statistics for educated laypeople, published jointly by the ASA and the Royal Statistical Society. In August 2019, the editor of Significance, Brian Tarran, published an editorial that clearly reflected his interpretation of the Wasserstein editorial as an official ASA pronouncement.[5] Indeed, Tarran cited the Wasserstein 2019 editorial as the ASA "recommendation."

Donald Macnaughton, President of MatStat Research Consulting Inc., in Toronto, wrote a letter to point out Tarran's error.[6] Macnaughton noted that Wasserstein had disclaimed an official imprimatur for his ideas in various oral presentations, and that the editors of the New England Journal of Medicine had explicitly rejected the editorial's call for abandoning statistical significance.[7]

In reply, Tarran graciously acknowledged the mistake, and pointed to an ASA press release that had led him astray:

“Thank you for this clarification. Our mistake was to give too much weight to the headline of a press release, ‘ASA Calls Time on “Statistically Significant” in Science Research’ (bit.ly/2UBWKNq).”

Inquiring minds might wonder why the ASA allowed such a press release to go out.

In 2019, then President of the ASA, Karen Kafadar, wrote on multiple occasions, in AmStat News, to correct any confusion or misimpression created by Wasserstein’s editorial:

“One final challenge, which I hope to address in my final month as ASA president, concerns issues of significance, multiplicity, and reproducibility. In 2016, the ASA published a statement that simply reiterated what p-values are and are not. It did not recommend specific approaches, other than ‘good statistical practice … principles of good study design and conduct, a variety of numerical and graphical summaries of data, understanding of the phenomenon under study, interpretation of results in context, complete reporting and proper logical and quantitative understanding of what data summaries mean’.

The guest editors of the March 2019 supplement to The American Statistician went further, writing: ‘The ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of “statistical significance” be abandoned. We take that step here. … [I]t is time to stop using the term “statistically significant” entirely’.

Many of you have written of instances in which authors and journal editors – and even some ASA members – have mistakenly assumed this editorial represented ASA policy. The mistake is understandable: The editorial was coauthored by an official of the ASA. In fact, the ASA does not endorse any article, by any author, in any journal – even an article written by a member of its own staff in a journal the ASA publishes.”[8]

Kafadar did not address the hyperactivity of the ASA public relations office, but her careful statement of the issues should put the matter to bed. There are now citable sources to rebut the claim that the ASA has recommended the complete abandonment of significance testing.

——————————————————————————————————————–

[1]  Ronald L. Wasserstein & Nicole A. Lazar, "The ASA's Statement on p-Values: Context, Process, and Purpose," 70 The Am. Statistician 129 (2016); see "The American Statistical Association's Statement on and of Significance" (March 17, 2016).

[2]  “The American Statistical Association Statement on Significance Testing Goes to Court – Part I” (Nov. 13, 2018); “The American Statistical Association Statement on Significance Testing Goes to Court – Part 2” (Mar. 7, 2019).

[3]  Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[4]  "Has the American Statistical Association Gone Post-Modern?" (Mar. 24, 2019); "American Statistical Association – Consensus versus Personal Opinion" (Dec. 13, 2019). See also Deborah G. Mayo, "The 2019 ASA Guide to P-values and Statistical Significance: Don't Say What You Don't Mean," Error Statistics Philosophy (June 17, 2019); B. Haig, "The ASA's 2019 update on P-values and significance," Error Statistics Philosophy (July 12, 2019).

[5]  Brian Tarran, “THE S WORD … and what to do about it,” Significance (Aug. 2019).

[6]  Donald Macnaughton, “Who Said What,” Significance 47 (Oct. 2019).

[7]  See "Statistical Significance at the New England Journal of Medicine" (July 19, 2019); see also Deborah G. Mayo, "The NEJM Issues New Guidelines on Statistical Reporting: Is the ASA P-Value Project Backfiring?" Error Statistics Philosophy (July 19, 2019).

[8]  Karen Kafadar, "The Year in Review … And More to Come," AmStat News 3 (Dec. 2019) (emphasis added); see Kafadar, "Statistics & Unintended Consequences," AmStat News 3, 4 (June 2019).

On Praising Judicial Decisions – In re Viagra

February 8th, 2021

We live in strange times. A virulent form of tribal stupidity gave us Trumpism, a personality cult in which it is impossible to function in the Republican party and criticize der Führer. Even a diehard right-winger such as Liz Cheney, who dared to criticize Trump, was censured, for nothing more than being disloyal to a cretin who fomented an insurrection that resulted in the murder of a Capitol police officer and the deaths of several other people.[1]

Unfortunately, a similar, even if less extreme, tribal chauvinism affects legal commentary, from both sides of the courtroom. When Judge Richard Seeborg issued an opinion, early in 2020, in the melanoma – phosphodiesterase type 5 inhibitor (PDE5i) litigation,[2] I praised the decision for not shirking the gatekeeping responsibility even when the causal claim was based upon multiple, consistent, statistically significant observational studies that showed an association between PDE5i medications and melanoma.[3] Although many of the plaintiffs' relied-upon studies reported statistically significant associations between PDE5i use and melanoma occurrence, they also found similar-sized associations with non-melanoma skin cancers. Because skin carcinomas were not part of the hypothesized causal mechanism, the study findings strongly suggested a common, unmeasured confounding variable, such as skin damage from ultraviolet light. The plaintiffs' expert witnesses' failure to account for confounding was fatal under Rule 702, and Judge Seeborg's recognition of this defect, and his willingness to go beyond multiple, consistent, statistically significant associations, were what made the decision important.

There were, however, problems and even a blatant error in the decision that required attention. Although the error was harmless in that its correction would not have required, or even suggested, a different result, Judge Seeborg, like many other judges and lawyers, tripped up over the proper interpretation of a confidence interval:

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”[4]

This statement about the true value is simply wrong. The provenance of this error is old, but the mistake was unfortunately amplified in the Third Edition of the Reference Manual on Scientific Evidence,[5] in its chapter on epidemiology.[6] The chapter, which is often cited, twice misstates the meaning of a confidence interval:

“A confidence interval provides both the relative risk (or other risk measure) found in the study and a range (interval) within which the risk likely would fall if the study were repeated numerous times.”[7]

and

“A confidence interval is a range of possible values calculated from the results of a study. If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the same population. Thus, the width of the interval reflects random error.”[8]

The 95% confidence interval does represent random error: 1.96 standard errors above and below the point estimate from the sample data. The confidence interval is not the range of possible values, which could well be anything, but the range of estimates reasonably compatible with this one particular study's sample statistic.[9] Intervals have lower and upper bounds, which are themselves random variables, with approximately normal (or some other specified) distributions. The essence of the interval is that no value within it would be rejected as a null hypothesis based upon the data collected for the particular sample. Although the chapter on statistics in the Reference Manual accurately describes confidence intervals, judges and many lawyers are misled by the misstatements in the epidemiology chapter.[10]
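The repeated-sampling meaning of the 95% figure can be made concrete with a small simulation, a sketch with invented numbers rather than any litigation data: draw many samples from a population whose true mean is known, compute a 95% interval from each, and count how often the intervals cover the truth. The 95% describes that long-run frequency of the procedure, not a 95% probability that any one computed interval contains the true value.

```python
import random

random.seed(42)
TRUE_MEAN, SD, N, Z, TRIALS = 10.0, 5.0, 100, 1.96, 2000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    mean = sum(sample) / N
    # sample standard deviation (n - 1 denominator)
    sd = (sum((x - mean) ** 2 for x in sample) / (N - 1)) ** 0.5
    half_width = Z * sd / N ** 0.5  # 1.96 standard errors each way
    covered += (mean - half_width) <= TRUE_MEAN <= (mean + half_width)

# Roughly 95% of the intervals cover the true mean; each individual
# interval, once computed, either covers it or does not.
print(covered / TRIALS)
```

Each computed interval either contains the true mean or it does not; the probabilistic statement attaches to the interval-generating procedure across repetitions, which is exactly the distinction the quoted opinion missed.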

Given the misdirection created by the Federal Judicial Center’s manual, Judge Seeborg’s erroneous definition of a confidence interval is understandable, but it should be noted in the context of praising the important gatekeeping decision in In re Viagra. Certainly our litigation tribalism should not “allow us to believe” impossible things.[11] The time to revise the Reference Manual is long overdue.

_____________________________________________________________________

[1]  John Ruwitch, “Wyoming GOP Censures Liz Cheney For Voting To Impeach Trump,” Nat’l Pub. Radio (Feb. 6, 2021).

[2]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., 424 F. Supp. 3d 781 (N.D. Cal. 2020) [Viagra].

[3]  See "Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma" (Jan. 24, 2020).

[4]  Id. at 787.

[5]  Federal Judicial Center, Reference Manual on Scientific Evidence (3rd ed. 2011).

[6]  Michael D. Green, D. Michal Freedman, & Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549 (3rd ed. 2011).

[7]  Id. at 573.

[8]  Id. at 580.

[9] Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers 171, 173-74 (3rd ed. 2015). See also Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

[10]  See, e.g., Derek C. Smith, Jeremy S. Goldkind, and William R. Andrichik, “Statistically Significant Association: Preventing the Misuse of the Bradford Hill Criteria to Prove Causation in Toxic Tort Cases,” 86 Defense Counsel J. 1 (2020) (mischaracterizing the meaning of confidence intervals based upon the epidemiology chapter in the Reference Manual).

[11]  See, e.g., James Beck, “Tort Pandemic Countermeasures? The Ten Best Prescription Drug/Medical Device Decisions of 2020,” Drug and Device Law Blog (Dec. 30, 2020) (suggesting that Judge Seeborg’s decision represented the rejection of plausibility and a single “association” as insufficient); Steven Boranian, “General Causation Experts Excluded In Viagra/Cialis MDL,” (Jan. 23, 2020).

Pernicious Probabilities in the Supreme Court

December 11th, 2020

Based upon Plato’s attribution,[1] philosophers credit the pre-Socratic philosopher Heraclitus, who was in his prime about 500 B.C., with the oracular observation that πάντα χωρεῖ καὶ οὐδὲν μένει, or in more elaborate English:

all things pass and nothing stays, and comparing existing things to the flow of a river, he says you could not step twice into the same river.

Time changes us all. Certainly 2016 is not 2020, and the general elections held in November of those two years were not the same elections, and certainly not the same electorate. No one would need a statistician to know that the population of voters in 2016 was different from that in 2020.  Inevitably, some voters from 2016 died in the course of the Trump presidency; some no doubt died as a result of Trump’s malfeasance in handling the pandemic. Inevitably, some new voters came of age or became citizens and were thus eligible to vote in 2020, when they could not vote in 2016. Some potential voters who were unregistered in 2016 became new registrants. Non-voters in 2016 chose to vote in 2020, and some voters in 2016 chose not to vote in 2020. Overall, many more people turned out to vote in 2020 than turned out in 2016.

The candidates in 2016 and 2020 were different as well. On the Republican side, we had ostensibly the same candidate, but in 2020, Trump was the incumbent and had a record of dismal moral and political failures, four years in duration. Many Republicans who fooled themselves into believing that the Office of the Presidency would transform Trump into an honest political actor came to realize that he was, and always has been, and always will be, a moral leper. These “apostate” Republicans effectively organized across the country, through groups like the Lincoln Project and the Bulwark, against Trump, and for the Democratic candidate, Joseph Biden.

In the 2016 election, Hillary Clinton outspent Donald Trump, but Trump used social media more effectively, with considerable help from Vladimir Putin. In the 2020 election, Russian hackers did not have to develop a disinformation campaign; the incumbent president had been doing so for four years.

On the Democratic side of the 2016 and 2020 elections, there was a dramatic change in the line-up. In 2016, candidate Hillary Clinton inspired many feminists because of her XX twenty-third chromosome pair. She also suffered significant damage in primary battles with social democrat Bernie Sanders, whose supporters were alienated by the ham-fisted prejudices of the Clinton supporters on the Democratic National Committee. Many of Sanders’ supporters stayed home on election day, 2016. In 2020, Sanders and the left wing of the Democratic party made peace with the centrist candidate Joseph Biden, in recognition that the alternative – Trump – involved existential risks to our republican democracy.

In 2016, third party candidates, from the Green Party and the Libertarian Party, attracted more votes than they did in 2020. The 2016 election saw more votes siphoned from the two major party candidates by third parties because of the unacceptable choice between Trump and Clinton for several percent of the voting public. In 2020, with Trump’s authoritarian kleptocracy fully disclosed to Americans, a symbolic vote for a third-party candidate was tantamount to the unacceptable decision to not vote at all.

In 2016, after eight years of Obama’s presidency, the economy and the health of the nation were good. In 2020, the general election occurred in the midst of a pandemic and great economic suffering. Many more people voted by absentee or mail-in ballot than voted in that manner in 2016. State legislatures anticipated the deluge of mail-in ballots; some by facilitating early counting, and some by prohibiting early counting. The Trump administration anticipated the large uptick in mail-in ballots by manipulating the Post Office’s funding, by anticipatory charges of fraud in mail-in procedures, and by spreading lies and disinformation about COVID-19, along with spreading the infection itself.

On December 8, 2020, without apparently tiring of losing so much, the Trump Campaign orchestrated the filing of the big one, the “kraken lawsuit.” The State of Texas filed a complaint in the United States Supreme Court, in an attempt to invoke that court’s original jurisdiction to adjudicate Texas’ complaint that it was harmed by voting procedures in four states in which Trump lost the popular vote. All four states had certified their results before Texas filed its audacious lawsuit. Legal commentators were skeptical and derisive of the kraken’s legal theories.[2] Even the stalwart National Review saw the frivolity.[3]

Charles J. Cicchetti[4] is an economist who is a director at the Berkeley Research Group. Previously, Cicchetti held academic positions at the University of Southern California, and at the Energy and Environmental Policy Center at Harvard University’s John F. Kennedy School of Government. At the heart of the kraken is a declaration from Cicchetti, who tells us, under penalty of perjury, that he was “formally trained statistics and econometrics [sic][5] and accepted as an expert witness in civil proceedings.”[6] Declaration of Charles J. Cicchetti, Ph.D., Dec. 6, 2020, filed in support of Texas’ motion, at ¶ 2.

Cicchetti’s declaration is not a model of clarity, but it is clear that he conducted several statistical analyses. He was quite transparent in stating the basic assumption for all his analyses; namely, that the outcomes for the two Democratic candidates, Clinton and Biden, for the two major-party match-ups, Clinton versus Trump and Biden versus Trump, and for in-person and mail-in voters, were all randomly drawn from the same population. Id. at ¶ 7. Using a binomial model, Cicchetti calculated Z-scores for the observed disparities in rates, which provided very good evidence to reject the “same population” assumptions.

Based upon very large Z-scores, Cicchetti rejected the null hypothesis of “same population” and of Biden = Clinton. Id. at ¶ 20. But nothing of importance follows from this. We knew before the analysis that Biden ≠ Clinton, and the various populations compared were definitely not the same. Cicchetti might have stopped there and preserved his integrity and reputation, but he went further.

He treated the four states, Georgia, Michigan, Pennsylvania, and Wisconsin, as independent tests, which of course they are not. All states had different populations from 2016 to 2020; all had no pandemic in 2016, and a pandemic in 2020; all had been exposed to four years of Trump’s incompetence, venality, corruption, bigotry, and bullying. Cicchetti gilded the lily with the independence assumption, and came up with even lower, more meaningless probabilities that the populations were the same. And then he stepped into the abyss of the fallacy and non sequitur:

“In my opinion, this difference in the Clinton and Biden performance warrants further investigation of the vote tally particularly in large metropolitan counties within and adjacent to the urban centers in Atlanta, Philadelphia, Pittsburgh, Detroit and Milwaukee.”

Id. at ¶ 30. Cicchetti’s suggestion that there is anything amiss, which warrants investigation, follows only from a maga, mega-transposition fallacy. The high Z-score does not mean that the observed result is not accurate or fair; it means only that the starting assumptions were outlandishly false.
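The binomial arithmetic itself is trivial; the mischief lies entirely in the premise. Here is a minimal sketch, with made-up vote totals (not Cicchetti’s actual figures), of the two-sample Z-score one obtains under the “same population” assumption:

```python
import math

# Hypothetical vote totals (illustrative only; not Cicchetti's figures)
clinton_votes, total_2016 = 2_900_000, 6_000_000
biden_votes, total_2020 = 3_400_000, 7_000_000

p1 = clinton_votes / total_2016  # Clinton's share, 2016
p2 = biden_votes / total_2020    # Biden's share, 2020

# Pooled share under the (false) null hypothesis that both elections
# sampled the very same population of voters
p = (clinton_votes + biden_votes) / (total_2016 + total_2020)
se = math.sqrt(p * (1 - p) * (1/total_2016 + 1/total_2020))
z = (p2 - p1) / se

print(f"Z = {z:.1f}")
# With millions of "observations," even a sub-percentage-point shift in
# vote share produces a huge Z. The score rejects only the "same
# population" null -- which no one believed -- not the honesty of the count.
```

Transposing that rejection into a probability that the reported tallies were fraudulent is precisely the fallacy at work in the declaration.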

Early versus Late Counting

Texas’ claim that there is something “odd” about the reporting before and after 3 a.m., on the morning after Election Day fares no better. Cicchetti tells us that “many Americans went to sleep election night with President Donald Trump (Trump) winning key battleground states, only to learn the next day that Biden surged ahead.” Id. at ¶ 7.

Well, Americans who wanted to learn the final count should not have gone to sleep, for several days. Again, the later-counted mail-in votes came from a segment of the population that was obviously different from the in-person voters. Cicchetti’s statistical analysis shows that we should reject any assumption that they were the same, but who would make that assumption? The expected values for the mail-in ballots differed from the expected values for in-person votes; the difference was driven by Republican lies and disinformation about Covid-19, and by laws that prohibited early counting. Not surprisingly, the Trumpist propaganda had an effect, and there was a disparity between the rates at which Trump and Biden supporters voted in person and by mail-in ballot. The late counting and reporting of mail-in ballots was further ensured by laws in some states that prohibited counting before Election Day. Trump was never winning in the referenced “key battleground” states; he was ahead in some states at 2:59 a.m., but the count changed after all lawfully cast ballots had been counted.

The Response to Cicchetti’s Analyses

The statistical “argument,” such as it is, has not fooled anyone outside of maga-land.[7] Cicchetti’s analysis has been derided as “ludicrous” and as “incompetence” by Professors Kenneth Mayer and David Post. Mayer described the analysis as one that will be “used in undergraduate statistics classes as a canonical example of how not to do statistics.”[8] It might even make its way into a Berenstain Bear book on statistics. Andrew Gelman called the analysis “horrible,” and likened the declaration to the infamous Dreyfus case.[9]

The Texas lawsuit speaks volumes of the insincerity of the Trumpist Republican party. The rantings of Pat Robertson, asking God to intervene in the election to keep Trump in office, are more likely to have an effect.[10] The only issue the kraken fairly raises is whether the plaintiff, and plaintiff intervenor, should be sanctioned for “multipl[ying] the proceedings in any case unreasonably and vexatiously.”[11]


[1]  Plato, Cratylus 402a = A6.

[2] Adam Liptak, “Texas files an audacious suit with the Supreme Court challenging the election results,” N.Y. Times (Dec. 8, 2020); Jeremy W. Peters and Maggie Haberman, “17 Republican Attorneys General Back Trump in Far-Fetched Election Lawsuit,” N.Y. Times (Dec. 9, 2020); Paul J. Weber, “Trump’s election fight puts embattled Texas AG in spotlight,” Wash. Post (Dec. 9, 2020).

[3] Andrew C. McCarthy, “Texas’s Frivolous Lawsuit Seeks to Overturn Election in Four Other States,” Nat’l Rev. (Dec. 9, 2020); Robert VerBruggen, “The Dumb Statistical Argument in Texas’s Election Lawsuit,” Nat’l Rev. (Dec. 9, 2020).

[4] Not to be confused with Chicolini, Sylvania’s master spy.

[5] Apparently not formally trained in English.

[6] See, e.g., K N Energy, Inc. v. Cities of Alliance & Oshkosh, 266 Neb. 882, 670 N.W.2d 319 (2003), Center for Biological Diversity v. Pizarchik, 858 F. Supp. 2d 1221 (D. Colo. 2012), National Paint & Coatings Ass’n, v. City of Chicago, 835 F. Supp. 421 (N.D. Ill. 1993), National Paint & Coatings Ass’n, v. City of Chicago, 835 F. Supp. 414 (N.D. Ill. 1993); Mississippi v. Entergy Mississippi, Inc. (S.D. Miss. 2012); Hiko Energy, LLC v. Pennsylvania Public Utility Comm’n, 209 A.3d 246 (Pa. 2019).

[7] Philip Bump, “Trump’s effort to steal the election comes down to some utterly ridiculous statistical claims,” Wash. Post (Dec. 9, 2020); Jeremy W. Peters, David Montgomery, Linda Qiu & Adam Liptak, “Two reasons the Texas election case is faulty: flawed legal theory and statistical fallacy,” N.Y. Times (Dec. 10, 2020); David Post, “More on Statistical Stupidity at SCOTUS,” Volokh Conspiracy (Dec. 9, 2020).

[8] Eric Litke, “Lawsuit claim that statistics prove fraud in Wisconsin, elsewhere is wildly illogical,” PolitiFact (Dec. 9, 2020).

[9] Andrew Gelman, “The p-value is 4.76×10^−264 1 in a quadrillion,” Statistical Modeling, Causal Inference, and Social Science (Dec. 8, 2020).

[10]  Evan Brechtel, “Pat Robertson Calls on God to ‘Intervene’ in the Election to Keep Trump President in Bonkers Rant” (Dec. 10, 2020).

[11] See “Counsel’s liability for excessive costs,” 28 U.S. Code § 1927.

Regressive Methodology in Pharmaco-Epidemiology

October 24th, 2020

Medications are rigorously tested for safety and efficacy in clinical trials before approval by regulatory agencies such as the U.S. Food & Drug Administration (FDA) or the European Medicines Agency (EMA). The approval process, however, contemplates that more data about safety and efficacy will emerge from the use of approved medications in pharmacoepidemiologic studies conducted outside of clinical trials. Litigation of safety outcomes rarely arises from claims based upon the pivotal clinical trials that were conducted for regulatory approval and licensing. The typical courtroom scenario is that a safety outcome is called into question by pharmacoepidemiologic studies that purport to find associations or causality between the use of a specific medication and the claimed harm.

The International Society for Pharmacoepidemiology (ISPE), established in 1989, describes itself as an international professional organization intent on advancing health through pharmacoepidemiology, and related areas of pharmacovigilance. The ISPE website defines pharmacoepidemiology as

“the science that applies epidemiologic approaches to studying the use, effectiveness, value and safety of pharmaceuticals.”

The ISPE conceptualizes pharmacoepidemiology as “real-world” evidence, in contrast to randomized clinical trials:

“Randomized controlled trials (RCTs) have served and will continue to serve as the major evidentiary standard for regulatory approvals of new molecular entities and other health technology. Nonetheless, RWE derived from well-designed studies, with application of rigorous epidemiologic methods, combined with judicious interpretation, can offer robust evidence regarding safety and effectiveness. Such evidence contributes to the development, approval, and post-marketing evaluation of medicines and other health technology. It enables patient, clinician, payer, and regulatory decision-making when a traditional RCT is not feasible or not appropriate.”

ISPE Position on Real-World Evidence (Feb. 12, 2020) (emphasis in original).

The ISPE publishes an official journal, Pharmacoepidemiology and Drug Safety, and sponsors conferences and seminars, all of which are watched by lawyers pursuing and defending drug and device health safety claims. The endorsement by the ISPE of the American Statistical Association’s 2016 statement on p-values is thus of interest not only to statisticians, but to lawyers and claimants involved in drug safety litigation.

The ISPE, through its board of directors, formally endorsed the ASA 2016 p-value statement on April 1, 2017 (no fooling) in a statement that can be found at its website:

The International Society for Pharmacoepidemiology, ISPE, formally endorses the ASA statement on the misuse of p-values and accepts it as an important step forward in the pursuit of reasonable and appropriate interpretation of data.

On March 7, 2016, the American Statistical Association (ASA) issued a policy statement that warned the scientific community about the use of P-values and statistical significance for interpretation of reported associations. The policy statement was accompanied by an introduction that characterized the reliance on significance testing as a vicious cycle of teaching significance testing because it was expected, and using it because that was what was taught. The statement and many accompanying commentaries illustrated that p-values were commonly misinterpreted to imply conclusions that they cannot imply. Most notably, “p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.” Also, “a p-value does not provide a good measure of evidence regarding a model or hypothesis.” Furthermore, reliance on p-values for data interpretation has exacerbated the replication problem of scientific work, as replication of a finding is often confused with replicating the statistical significance of a finding, on the erroneous assumption that replication should lead to studies getting similar p-values.

This official statement from the ASA has ramifications for a broad range of disciplines, including pharmacoepidemiology, where use of significance testing and misinterpretation of data based on P-values is still common. ISPE has already adopted a similar stance and incorporated it into our GPP [ref] guidelines. The ASA statement, however, carries weight on this topic that other organizations cannot, and will inevitably lead to changes in journals and classrooms.

There are points of interpretation of the ASA Statement, which can be discussed and debated. What is clear, however, is that the ASA never urged the abandonment of p-values or even of statistical significance. The Statement contained six principles, some of which did nothing other than to attempt to correct prevalent misunderstandings of p-values. The third principle stated that “[s]cientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.” (emphasis added).

This principle, as stated, thus hardly advocated for the abandonment of a threshold in testing; rather it made the unexceptional point that the ultimate scientific conclusion (say about causality) required more assessment than only determining whether a p-value passed a specified threshold.

Presumably, the ISPE’s endorsement of the ASA’s 2016 Statement embraces all six of the articulated principles, including the ASA’s fourth principle:

4. Proper inference requires full reporting and transparency

“P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable. Cherry-picking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and ‘p-hacking,’ leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided. One need not formally carry out multiple statistical tests for this problem to arise: Whenever a researcher chooses what to present based on statistical results, valid interpretation of those results is severely compromised if the reader is not informed of the choice and its basis. Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted, and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”

The ISPE’s endorsement of the ASA 2016 Statement references the ISPE’s own “Guidelines for Good Pharmacoepidemiology Practices (GPP),” which were promulgated initially in 1996, and revised as recently as June 2015. Good practices, as of 2015, provided that:

“Interpretation of statistical measures, including confidence intervals, should be tempered with appropriate judgment and acknowledgements of potential sources of error and limitations of the analysis, and should never be taken as the sole or rigid basis for concluding that there is or is not a relation between an exposure and outcome. Sensitivity analyses should be conducted to examine the effect of varying potentially critical assumptions of the analysis.”

All well and good, but this “good practices” statement might be taken as a bit anemic, given that it contains no mention of, or caution against, unqualified or unadjusted confidence intervals or p-values that come from multiple testing or comparisons. The ISPE endorsement of the ASA Statement now expands upon the ISPE’s good practices to include the avoidance of multiplicity and the disclosure of the full extent of analyses conducted in a study.

What happens in the “real world” of publishing, outside the board room?

Last month, the ISPE conducted its (virtual) 36th International Conference on Pharmacoepidemiology & Therapeutic Risk Management. The abstracts and poster presentations from this Conference were published last week as a Special Issue of the ISPE journal. I spot-checked the journal contents to see how well the presentations lived up to the ISPE’s statistical aspirations.

One poster presentation addressed statin use and skin cancer risk in a French prospective cohort.[1] The authors described their cohort of French women, who were 40 to 65 years old in 1990, and were followed forward. Exposure to statin medications was assessed from 2004 through 2014. The analysis included outcomes of any skin cancer, melanoma, basal-cell carcinoma (BCC), and squamous-cell carcinoma (SCC), among 66,916 women. Here is how the authors describe their findings:

There was no association between ever use of statins and skin cancer risk: the HRs were 0.96 (95% CI = 0.87-1.05) for overall skin cancer, 1.18 (95% CI = 0.96-1.47) for melanoma, 0.89 (95% CI = 0.79-1.01) for BCC, and 0.90 (95% CI = 0.67-1.21) for SCC. Associations did not differ by statin molecule nor by duration or dose of use. However, women who started to use statins before age 60 were at increased risk of BCC (HR = 1.45, 95% CI = 1.07-1.96 for ever vs never use).

To be fair, this was a poster presentation, but this short description of findings makes clear that the investigators examined at least the following subgroups:

Exposure subgroups:

  • specific statin drug
  • duration of use
  • dosage
  • age strata

and

Outcome subgroups:

  • melanoma
  • basal-cell carcinoma
  • squamous-cell carcinoma

The reader is not told how many specific statins, how many duration groups, dosage groups, and age strata were involved in the exposure analysis. My estimate is that the exposure subgroups were likely in excess of 100. With three disease outcome subgroups, the total subgroup analyses thus likely exceeded 300. The authors did not provide any information about the full extent of their analyses.

Here is how the authors reported their conclusion:

“These findings of increased BCC risk in statin users before age 60 deserve further investigations.”

Now, the authors did not use the phrase “statistically significant,” but it is clear that they have characterized a finding of “increased BCC risk in statin users before age 60,” and in no other subgroup, and they have done so based upon a reported nominal “HR = 1.45, 95% CI = 1.07-1.96 for ever vs never use.” It is also clear that the authors have made no allowance, adjustment, modification, or qualification, for the wild multiplicity arising from their estimated 300 or so subgroups. Instead, they made an unqualified statement about “increased BCC risk,” and they offered an opinion about the warrant for further studies.
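The cost of that many uncorrected looks at the data is easy to quantify. A short sketch, taking the 300-subgroup estimate at face value and assuming (generously) independent tests at the conventional 0.05 threshold:

```python
n_tests = 300  # rough estimate of subgroup analyses (an assumption, per the discussion)
alpha = 0.05   # conventional significance threshold

# Expected number of nominally "significant" findings if statins had no
# effect on any outcome in any subgroup
expected_false_positives = n_tests * alpha

# Probability of at least one nominally "significant" finding under the null
prob_at_least_one = 1 - (1 - alpha) ** n_tests

print(f"Expected chance findings: {expected_false_positives:.0f}")
print(f"P(at least one): {prob_at_least_one:.7f}")
# Roughly 15 "significant" subgroup results are expected by chance alone,
# and at least one is a near certainty -- which is why a lone unadjusted
# subgroup finding carries so little weight.
```

A Bonferroni correction, by contrast, would demand p < 0.05/300 before crediting any single subgroup result.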

Endorsement of good statistical practices is a welcome professional organizational activity, but it is rather meaningless unless the professional societies begin to implement the good practices in their article selection, editing, and publishing activities.


[1]  Marie Al Rahmoun, Yahya Mahamat-Saleh, Iris Cervenka, Gianluca Severi, Marie-Christine Boutron-Ruault, Marina Kvaskoff, and Agnès Fournier, “Statin use and skin cancer risk: A French prospective cohort study,” 29 Pharmacoepidemiol. & Drug Safety s645 (2020).

The Defenestration of Sir Ronald Aylmer Fisher

August 20th, 2020

Fisher has been defenestrated. Literally.

Sir Ronald Fisher was a brilliant statistician. Born in 1890, he won a scholarship to Gonville and Caius College, in Cambridge University, in 1909. Three years later, he gained first class honors in Mathematics, and he went on to have extraordinary careers in genetics and statistics. In 1929, Fisher was elected to the Royal Society, and in 1952, Queen Bessy knighted him for his many contributions to the realm, including his work on experimental design and data interpretation, and his bridging the Mendelian theory of genetics and Darwin’s theory of evolution. In 1998, Bradley Efron described Fisher as “the single most important figure in 20th century statistics.”[1] And in 2010, University College, London, established the “R. A. Fisher Chair in Statistical Genetics” in honor of Fisher’s remarkable contributions to both genetics and statistics. Fisher’s college put up a stained-glass window to celebrate its accomplished graduate.

Fisher was, through his interest in genetics, also drawn to eugenics, the application of genetic learning to political problems. For instance, he favored abolishing extra social support to large families, in favor of support proportional to the father’s wages. Fisher also entertained, with some seriousness, grand claims about the connection between the rise and fall of civilizations and the loss of fertility among the upper classes.[2] While a student at Caius College, Fisher joined the Cambridge Eugenics Society, as did John Maynard Keynes. For reasons having to do with professional jealousies, Fisher’s appointment at University College London, in 1933, was as a professor of Eugenics, not Statistics.

After World War II, an agency of the United Nations, the United Nations Educational, Scientific and Cultural Organization (UNESCO) sought to forge a scientific consensus against racism, and Nazi horrors.[3] Fisher participated in the UNESCO commission, which he found to be “well-intentioned” but errant for failing to acknowledge inter-group differences “in their innate capacity for intellectual and emotional development.”[4]

Later in the UNESCO report, Fisher’s objections are described as the same as those of Hermann Joseph Muller, who won the Nobel Prize for Medicine in 1946. The report provides Fisher’s objections in his own words:

“As you ask for remarks and suggestions, there is one that occurs to me, unfortunately of a somewhat fundamental nature, namely that the Statement as it stands appears to draw a distinction between the body and mind of men, which must, I think, prove untenable. It appears to me unmistakable that gene differences which influence the growth or physiological development of an organism will ordinarily pari passu influence the congenital inclinations and capacities of the mind. In fact, I should say that, to vary conclusion (2) on page 5, ‘Available scientific knowledge provides a firm basis for believing that the groups of mankind differ in their innate capacity for intellectual and emotional development,’ seeing that such groups do differ undoubtedly in a very large number of their genes.”[5]

Fisher’s comments may not be totally anodyne by today’s standards, but he had also commented that:

“the practical international problem is that of learning to share the resources of this planet amicably with persons of materially different nature, and that this problem is being obscured by entirely well-intentioned efforts to minimize the real differences that exist.”[6]

Fisher’s comments seem to reflect his beliefs in the importance of the genetic contribution to “intelligence and emotional development,” which today retain both their plausibility and controversial status. Fisher’s participation in the UNESCO effort, and his emphasis on sharing resources peacefully, seem to speak against malignant racism, and distinguish him from the ugliness of the racism expressed by the Marxist statistician (and eugenicist) Karl Pearson.[7]

Cancel Culture Catches Up With Sir Ronald A. Fisher

Nonetheless, the Woke mob has had its daggers out for Sir Ronald for some time. Back in June of this year, graffiti covered the walls of Caius College, calling for the defenestration of Fisher. A more sedate group circulated a petition for the removal of the Fisher window.[8] Later that month, the university removed the Fisher window, literally defenestrating him.[9]

The de-platforming of Fisher was not confined to the campus of a college in Cambridge University. Fisher spent some of his most productive years, outside the university, at the Rothamsted Experimental Station. Not to be found deficient in the metrics of social justice, Rothamsted Research issued a statement, on June 9, 2020, concerning its most famous resident scientist:

“Ronald Aylmer Fisher is often considered to have founded modern statistics. Starting in 1919, Fisher worked at Rothamsted Experimental Station (as it was called then) for 14 years.

Among his many interests, Fisher supported the philosophy of eugenics, which was not uncommon among intellectuals in Europe and America in the early 20th Century.

The Trustees of the Lawes Agricultural Trust, therefore, consider it appropriate to change the name of the Fisher Court accommodation block (opened in 2018 and named after the old Fisher Building that it replaced) to ‘AnoVa Court’, after the analysis of variance statistical test developed by Fisher’s team at Rothamsted, and which is widely used today. Arrangements for this change of name are currently being made.”

I suppose that soon it will be verboten to mention Fisher’s Exact Test.

Daniel Cleather, a scientist and self-proclaimed anarchist, goes further and claims that the entire enterprise of statistics is racist.[10] Cleather argues that mathematical models of reality are biased against causal explanation, and that this bias supports eugenics and politically conservative goals. Cleather claims that statistical methods were developed “by white supremacists for the express purpose of demonstrating that white men are better than other people.” Cleather never delivers any evidence, however, to support his charges, but he no doubt feels strongly about it, and feels unsafe in the presence of Fisher’s work on experimental methods.

It is interesting to compare the disparate treatment that other famous scholars and scientists are receiving from the Woke. Aristotle was a great philosopher and “natural philosopher” scientist. There is a well-known philosophical society, the Aristotelian Society, obviously named for Aristotle, as is fitting. In the aftermath of the killings of George Floyd, Breonna Taylor, and Ahmaud Arbery, the Aristotelian Society engaged in this bit of moral grandstanding, of which The Philosopher would have likely disapproved:

A statement from the Aristotelian Society

“The recent killings of George Floyd, Breonna Taylor and Ahmaud Arbery have underlined the systemic racism and racial injustice that continue to pervade not just US but also British society. The Aristotelian Society stands resolutely opposed to racism and discrimination in any form. In line with its founding principles, the Society is committed to ensuring that all its members can meet on an equal footing in the promotion of philosophy. In order to achieve this aim, we will continue to work to identify ways that we can improve, in consultation with others. We recognise it as part of the mission of the Society to actively promote philosophical work that engages productively with issues of race and racism.”

I am sure it occurred to the members of the Society that Aristotle had expressed a view that some people were slaves by nature.[11] Today, we certainly do not celebrate Aristotle for this view, but we have not defenestrated him for a view much more hateful than any expressed by Sir Ronald. My point is merely that the vaunted Aristotelian Society is well able to look at the entire set of accomplishments of Aristotle, and not throw him out the window for his views on slavery. Still, if you have art work depicting Aristotle, you may be wise to put it out of harm’s way.

If Aristotle’s transgressions were too ancient for the Woke mob, then consider those of Nathan Roscoe Pound, who was the Dean of Harvard Law School from 1916 to 1936. Pound wrote on jurisprudential issues, and he is generally regarded as the founder of “sociological jurisprudence,” which seeks to understand law as influenced and determined by sociological conditions. Pound is celebrated especially by the plaintiffs’ bar, for his work for the National Association of Claimants’ Compensation Attorneys, which was the precursor to the Association of Trial Lawyers of America, and the current, rent-seeking, American Association for Justice. A group of “compensation lawyers” founded the Roscoe Pound-American Trial Lawyers Foundation (now The Pound Civil Justice Institute) in 1956, to build on Pound’s work.

Pound died in 1964, but he lives on in the hearts of social justice warriors, who seem oblivious to Pound’s affinity for Hitler and Nazism.[12] Pound’s enthusiasm was not a momentary lapse, but lasted a decade, according to Daniel R. Coquillette, professor of American legal history at Harvard Law School.[13] Although Pound is memorialized in various ways as a great leader throughout Harvard Law School, Coquillette says that volume two of his history of the school will address the sordid business of Pound’s Nazi leanings. In the meantime, no one is spraying graffiti on Pound’s portraits, photographs, and memorabilia, which are scattered throughout the School.

I would not want my defense of Fisher to be taken as a Trumpist “what-about” rhetorical diversion. Still, the Woke criteria for defenestrations seem, at best, to be applied inconsistently. More important, the Woke seem to have no patience for examining the positive contributions made by those they denounce. In Fisher’s (and Aristotle’s) case, the balance between good and bad ideas, and the creativity and brilliance of his important contributions, should allow people of good will to celebrate his many achievements, without moral hand waving. If the Aristotelian Society can keep its name, then Gonville & Caius College, Cambridge, should be able to keep its stained-glass window memorial to Fisher.


[1]        Bradley Efron, “R. A. Fisher in the 21st century,” 13 Statistical Science 95, 95 (1998).

[2]        See Ronald A. Fisher, The Genetical Theory of Natural Selection 228-55 (1930) (chap. XI, “Social Selection of Fertility,” addresses the “decay of ruling classes”).

[3]        UNESCO, The Race Concept: Results of an Inquiry (1952).

[4]        Id. at 27 (noting that “Sir Ronald Fisher has one fundamental objection to the Statement, which, as he himself says, destroys the very spirit of the whole document. He believes that human groups differ profoundly ‘in their innate capacity for intellectual and emotional development.’”).

[5]        Id. at 56.

[6]        Id. at 27.

[7]        Karl Pearson & Margaret Moul, “The Problem of Alien Immigration into Great Britain, Illustrated by an Examination of Russian and Polish Jewish Children, Part I,” 1 Ann. Human Genetics 5 (1925) (opining that Jewish immigrants “will develop into a parasitic race. […] Taken on the average, and regarding both sexes, this alien Jewish population is somewhat inferior physically and mentally to the native population.” ); “Part II,” 2 Ann. Human Genetics 111 (1927); “Part III,” 3 Ann. Human Genetics 1 (1928).

[8]        “Petition: Remove the window in honour of R. A. Fisher at Gonville & Caius, University of Cambridge.” See Genevieve Holl-Allen, “Students petition for window commemorating eugenicist to be removed from college hall; The petition surpassed 600 signatures in under a day,” The Cambridge Tab (June 2020).

[9]        Eli Cahan, “Amid protests against racism, scientists move to strip offensive names from journals, prizes, and more,” Science (July 2, 2020); Sam Kean, “Ronald Fisher, a Bad Cup of Tea, and the Birth of Modern Statistics: A lesson in humility begets a scientific revolution,” Distillations (Science History Institute) (Aug. 6, 2019). Bayesians have been all-too-happy to throw shade at Fisher. See Eric-Jan Wagenmakers & Johnny van Doorn, “This Statement by Sir Ronald Fisher Will Shock You,” Bayesian Spectacles (July 2, 2020).

[10]      Daniel Cleather, “Is Statistics Racist?” Medium (Mar. 9, 2020).

[11]      Aristotle, Politics, 1254b16–21.

[12]      James Q. Whitman, Hitler’s American Model: The United States and the Making of Nazi Race Law 15 & n. 39 (2017); Stephen H. Norwood, The Third Reich in the Ivory Tower 56-57 (2009); Peter Rees, “Nathan Roscoe Pound and the Nazis,”  60 Boston Coll. L. Rev. 1313 (2019); Ron Grossman, “Harvard accused of coddling Nazis,” Chicago Tribune (Nov. 30, 2004).

[13]      Garrett W. O’Brien, “The Hidden History of the Harvard Law School Library’s Treasure Room,” The Crimson (Mar. 28, 2020).

David Madigan’s Graywashed Meta-Analysis in Taxotere MDL

June 12th, 2020

Once again, a meta-analysis is advanced as a basis for an expert witness’s causation opinion, and once again, the opinion is the subject of a Rule 702 challenge. The litigation is In re Taxotere (Docetaxel) Products Liability Litigation, a multi-district litigation (MDL) proceeding before Judge Jane Triche Milazzo, who sits on the United States District Court for the Eastern District of Louisiana.

Taxotere is the brand name for docetaxel, a chemotherapy medication used either alone or in conjunction with other chemotherapies to treat a number of different cancers. Hair loss is a side effect of Taxotere, but in the MDL, plaintiffs claim that they have experienced permanent hair loss, which, in their view, was not adequately warned about. The litigation thus involved issues of exactly what “permanent” means, medical causation, adequacy of warnings in the Taxotere package insert, and warnings causation.

Defendant Sanofi challenged plaintiffs’ statistical expert witness, David Madigan, a frequent testifier for the lawsuit industry. In its Rule 702 motion, Sanofi argued that Madigan had relied upon two randomized clinical trials (TAX 316 and GEICAM 9805) that evaluated “ongoing alopecia” to reach conclusions about “permanent alopecia.” Sanofi made the point that “ongoing” is not “permanent,” and that trial participants who had ongoing alopecia may have had their hair grow back. Madigan’s reliance upon an end point different from the one about which plaintiffs complained, Sanofi argued, made his analysis irrelevant. The MDL court rejected Sanofi’s argument, with the observation that Madigan’s analysis was not irrelevant for using the wrong end point, only less persuasive, and that Sanofi’s criticism was one that “Sanofi can highlight for the jury on cross-examination.”[1]

Did Judge Milazzo engage in judicial dodging in rejecting the relevancy argument and emphasizing the truism that Sanofi could highlight the discrepancy on cross-examination? In the sense that the disconnect can easily be shown by highlighting the different event rates for the differently defined alopecia outcomes, Sanofi’s argument seems like one that a jury could easily grasp and weigh. The judicial shrug, however, raises the question why the defendant should have to address a data analysis that does not support the plaintiffs’ contention about “permanence.” The federal rules are supposed to advance the finding of the truth and the fair, speedy resolution of cases.

Sanofi’s more interesting argument, from the perspective of Rule 702 case law, was its claim that Madigan had relied upon a flawed methodology in analyzing the two clinical trials:

“Sanofi emphasizes that the results of each study individually produced no statistically significant results. Sanofi argues that Dr. Madigan cannot now combine the results of the studies to achieve statistical significance. The Court rejects Sanofi’s argument and finds that Sanofi’s concern goes to the weight of Dr. Madigan’s testimony, not to its admissibility.34”[2]

There seems to be a lot going on in the Rule 702 challenge that is not revealed in the cryptic language of the MDL district court. First, the court deployed the jurisprudentially horrific, conclusory language to dismiss a challenge that “goes to the weight …, not to … admissibility.” As discussed elsewhere, this judicial locution is rarely true, fails to explain the decision, and shows a lack of engagement with the actual challenge.[3] Of course, aside from the inanity of the expression, and the failure to explain or justify the denial of the Rule 702 challenge, the MDL court may have been able to provide a perfectly adequate explanation.

Second, the footnote in the quoted language, number 34, was to the infamous Milward case,[4] with the explanatory parenthetical that the First Circuit had reversed a district court for excluding testimony of an expert witness who had sought to “draw conclusions based on combination of studies, finding that alleged flaws identified by district court go to weight of testimony not admissibility.”[5] As discussed previously, the widespread use of the “weight not admissibility” locution, even by the Court of Appeals, does not justify it. More important, however, the invocation of Milward suggests that any alleged flaws in combining study results in a meta-analysis are always matters for the jury, no matter how arcane, technical, or threatening to validity they may be.

So was Judge Milazzo engaged in judicial dodging in Her Honor’s opinion in Taxotere? Although the citation to Milward tends to inculpate, the cursory description of the challenge raises questions whether the challenge itself was valid in the first place. Fortunately, in this era of electronic dockets, finding the actual Rule 702 motion is not very difficult, and we can inspect the challenge to see whether it was dodged or given short shrift. Remarkably, the reality is much more complicated than the MDL court’s simplistic rejection would suggest.

Sanofi’s brief attacks three separate analyses proffered by David Madigan, and not surprisingly, the MDL court did not address every point made by Sanofi.[6] Sanofi’s point about the inappropriateness of conducting the meta-analysis was its third in its supporting brief:

“Third, Dr. Madigan conducted a statistical analysis on the TAX316 and GEICAM9805/TAX301 clinical trials separately and combined them to do a ‘meta-analysis’. But Dr. Madigan based his analysis on unproven assumptions, rendering his methodology unreliable. Even without those assumptions, Dr. Madigan did not find statistical significance for either of the clinical trials independently, making this analysis unhelpful to the trier of fact.”[7]

This introductory statement of the issue is itself not particularly helpful because it fails to explain why combining two individual randomized clinical trials (“RCTs”), each without “statistically significant” results, by meta-analysis would be unhelpful. Sanofi’s brief identified other problems with Madigan’s analyses, but eventually returned to the meta-analysis issue, with the heading:

“Dr. Madigan’s analysis of the individual clinical trials did not result in statistical significance, thus is unhelpful to the jury and will unfairly prejudice Sanofi.”[8]

After a discussion of some of the case law about statistical significance, Sanofi pressed its case against Madigan. Madigan’s statistical analysis of each of two RCTs apparently did not reach statistical significance, and Sanofi complained that permitting Madigan to present these two analyses with results that were “not statistically very impressive,” would confuse and mislead the jury.[9]

“Dr. Madigan tried to avoid that result here [of having two statistically non-significant results] by conducting a ‘meta-analysis’ — a greywashed term meaning that he combined two statistically insignificant results to try to achieve statistical significance. Madigan Report at 20 ¶ 53. Courts have held that meta-analyses are admissible, but only when used to reduce the numerical instability on existing statistically significant differences, not as a means to achieve statistical significance where it does not exist. RMSE at 361–362, fn76.”

Now the claims here are quite unsettling, especially considering that they were lodged in a defense brief, in an MDL, with many cases at stake, made on behalf of an important pharmaceutical company, represented by two large, capable national or international law firms.

First, what does the defense brief signify by placing ‘meta-analysis’ in quotes? Are these scare quotes to suggest that Madigan was passing off something as a meta-analysis that failed to be one? If so, there is nothing in the remainder of the brief that explains such an interpretation. Meta-analysis has been around for decades, and reporting meta-analyses of observational or of experimental studies has been the subject of numerous consensus and standard-setting papers over the last two decades. Furthermore, the FDA has now issued a draft guidance for the use of meta-analyses in pharmacoepidemiology. Scare quotes are at best unexplained and at worst inappropriate. If the authors had something else in mind, they did not explain the meaning of using quotes around meta-analysis.

Second, the defense lawyers referred to meta-analysis as a “greywashed” term. I am always eager to expand my vocabulary, and so I looked up the word in various dictionaries of statistical and epidemiologic terms. Nothing there. Perhaps it was not a technical term, so I checked with the venerable Oxford English Dictionary. No relevant entries.

Pushed to the wall, I checked the font of all knowledge – the internet. To be sure, I found definitions, but nothing that could explain this odd locution in a brief filed in an important motion:

gray-washing: “noun In calico-bleaching, an operation following the singeing, consisting of washing in pure water in order to wet out the cloth and render it more absorbent, and also to remove some of the weavers’ dressing.”

graywashed: “adj. adopting all the world’s cultures but not really belonging to any of them; in essence, liking a little bit of everything but not everything of a little bit.”

Those definitions do not appear pertinent.

Another website offered a definition based upon the “blogsphere”:

Graywash: “A fairly new term in the blogsphere, this means an investigation that deals with an offense strongly, but not strongly enough in the eyes of the speaker.”

Hmmm. Still not on point.

Another one from “Urban Dictionary” might capture something of what was being implied:

Graywashing: “The deliberate, malicious act of making art having characters appear much older and uglier than they are in the book, television, or video game series.”

Still, I am not sure how this is an argument that a federal judge can respond to in a motion affecting many cases.

Perhaps, you say, I am quibbling with word choices, and I am not sufficiently in tune with the way people talk in the Eastern District of Louisiana. I plead guilty to both counts. But the third, and most important point, is the defense assertion that meta-analyses are only admissible “when used to reduce the numerical instability on existing statistically significant differences, not as a means to achieve statistical significance where it does not exist.”

This assertion is truly puzzling. Meta-analyses involve so many layers of hearsay that they will virtually never be admissible. Admissibility of the meta-analyses is virtually never the issue. When an expert witness has conducted a meta-analysis, or has relied upon one, the important legal question is whether the witness may reasonably rely upon the meta-analysis (under Rule 703) for an inference that satisfies Rule 702. The meta-analysis itself does not come into evidence, and does not go out to the jury for its deliberations.

But what about the defense brief’s “only when” language that clearly implies that courts have held that expert witnesses may rely upon meta-analyses only to reduce “numerical instability on existing statistically significant differences”? This seems clearly wrong because achieving statistical significance from studies that have no “instability” in their point estimates but individually lack statistical significance is a perfectly legitimate and valid goal. Consider a situation in which, for some reason, sample size in each study is limited by the available observations, but we have 10 studies, each with a point estimate of 1.5, and each with a 95% confidence interval of (0.88, 2.5). This hypothetical situation presents no instability of point estimates, and the meta-analytic summary would shrink the confidence interval around the common point estimate so that the lower bound would exclude 1.0, in a perfectly valid analysis. In the real world, meta-analyses are conducted on studies with point estimates of risk that vary, because of random and non-random error, but there is no reason that meta-analyses cannot reduce random error to show that the summary point estimate is statistically significant at a pre-specified alpha, even though no constituent study was statistically significant.
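The arithmetic of the ten-study hypothetical can be checked with a short sketch of a fixed-effect, inverse-variance meta-analysis on the log-risk scale. The function name is mine, and the sketch assumes, for illustration only, that each study’s confidence interval is roughly symmetric on the log scale:

```python
import math

def fixed_effect_meta(estimates, lowers, uppers, z=1.96):
    """Pool ratio estimates by inverse-variance (fixed-effect) weighting."""
    log_est = [math.log(e) for e in estimates]
    # Back out each study's standard error from its CI width on the log scale
    ses = [(math.log(u) - math.log(l)) / (2 * z) for l, u in zip(lowers, uppers)]
    weights = [1 / se ** 2 for se in ses]
    pooled_log = sum(w * le for w, le in zip(weights, log_est)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Ten hypothetical studies, each RR = 1.5 with 95% CI (0.88, 2.5),
# none individually statistically significant.
est, lo, hi = fixed_effect_meta([1.5] * 10, [0.88] * 10, [2.5] * 10)
print(f"Pooled RR {est:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# prints: Pooled RR 1.50, 95% CI (1.27, 1.77)
```

Each constituent interval includes 1.0, yet the pooled interval’s lower bound is about 1.27, comfortably above 1.0, which is exactly the legitimate gain in precision that the text describes.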

Sanofi’s lawyers did not cite to any case for the remarkable proposition they advanced, but they did cite the Reference Manual on Scientific Evidence (RMSE). Earlier in the brief, the defense cited to this work in its third edition (2011), and so I turned to the cited page (“RMSE at 361–362, fn76”) only to find the introduction to the chapter on survey research, with footnotes 1 through 6.

After a diligent search through the third edition, I could not find any other language remotely supportive of the assertion by Sanofi’s counsel. There are important discussions about how a poorly conducted meta-analysis, or a meta-analysis that was heavily weighted in a direction by a methodologically flawed study, could render an expert witness’s opinion inadmissible under Rule 702.[10] Indeed, the third edition has a more sustained discussion of meta-analysis under the heading “VI. What Methods Exist for Combining the Results of Multiple Studies,”[11] but nothing in that discussion comes close to supporting the remarkable assertion by defense counsel.

On a hunch, I checked the second edition of RMSE, published in the year 2000. There was indeed a footnote 76, on page 361, which discussed meta-analysis. The discussion comes in the midst of the superseded edition’s chapter on epidemiology. Nothing, however, in the text or in the cited footnote appears to support the defense’s contention that meta-analyses are appropriate only when each included clinical trial has independently reported a statistically significant result.

If this analysis is correct, the MDL court was fully justified in rejecting the defense argument that combining two statistically non-significant clinical trials to yield a statistically significant result was methodologically infirm. No cases were cited, and the Reference Manual does not support the contention. Furthermore, no statistical text or treatise on meta-analysis supports the Sanofi claim. Sanofi did not support its motion with any affidavits of experts on meta-analysis.

Now there were other arguments advanced in support of excluding David Madigan’s testimony. Indeed, there was a very strong methodological challenge to Madigan’s decision to include the two RCTs in his meta-analysis, other than those RCTs’ lack of statistical significance on the end point at issue. In the words of the Sanofi brief:

“Both TAX clinical trials examined two different treatment regimens, TAC (docetaxel in combination with doxorubicin and cyclophosphamide) versus FAC (5-fluorouracil in combination with doxorubicin and cyclophosphamide). Madigan Report at 18–19 ¶¶ 47–48. Dr. Madigan admitted that TAC is not Taxotere alone, Madigan Dep. 305:21–23 (Ex. B); however, he did not rule out doxorubicin or cyclophosphamide in his analysis. Madigan Dep. 284:4–12 (“Q. You can’t rule out other chemotherapies as causes of irreversible alopecia? … A. I can’t rule out — I do not know, one way or another, whether other chemotherapy agents cause irreversible alopecia.”).”[12]

Now unlike the statistical significance argument, this argument is rather straightforward and turns on the clinical heterogeneity of the two trials, which seems clearly to point to the invalidity of a meta-analysis of them. Sanofi’s lawyers could easily have supported this point with statements from standard textbooks and non-testifying experts (but alas did not). Sanofi did support their challenge, however, with citations to an important litigation and Fifth Circuit precedent.[13]

This closer look at the actual challenge to David Madigan’s opinions suggests that Sanofi’s counsel may have diluted very strong arguments about heterogeneity in the exposure variable, and in the outcome variable, by advancing what seems a very doubtful argument based upon the lack of statistical significance of the individual studies in Madigan’s meta-analysis.

Sanofi advanced two very strong points, first about the irrelevant outcome variable definitions used by Madigan, and second about the complexity of Taxotere’s being used with other, and different, chemotherapeutic agents in each of the two trials that Madigan combined.[14] The MDL court addressed the first point in a perfunctory and ultimately unsatisfactory fashion, but did not address the second point at all.

Ultimately, the result was that Madigan was given a pass to offer extremely tenuous opinions in an MDL on causation. Given that Madigan has proffered tendentious opinions in the past, and has been characterized as “an expert on a mission,” whose opinions are “conclusion driven,”[15] the missteps in the briefing, and the MDL court’s abridgement of the gatekeeping process are regrettable. Also regrettable is that the merits or demerits of a Rule 702 challenge cannot be fairly evaluated from cursory, conclusory judicial decisions riddled with meaningless verbiage such as “the challenge goes to the weight and not the admissibility of the witness.” Access to the actual Rule 702 motion helped shed important light on the inadequacy of one point in the motion but also the complexity and fullness of the challenge that was not fully addressed in the MDL court’s decision. It is possible that a Reply or a Supplemental brief, or oral argument, may have filled in gaps, corrected errors, or modified the motion, and the above analysis missed some important aspect of what happened in the Taxotere MDL. If so, all the more reason that we need better judicial gatekeeping, especially when a decision can affect thousands of pending cases.[16]


[1]  In re Taxotere (Docetaxel) Prods. Liab. Litig., 2019 U.S. Dist. LEXIS 143642, at *13 (E.D. La. Aug. 23, 2019) [Op.]

[2]  Op. at *13-14.

[3]  “Judicial Dodgers – Weight not Admissibility” (May 28, 2020).

[4]  Milward v. Acuity Specialty Prods. Grp., Inc., 639 F.3d 11, 17-22 (1st Cir. 2011).

[5]  Op. at *13-14 (quoting and citing Milward, 639 F.3d at 17-22).

[6]  Memorandum in Support of Sanofi Defendants’ Motion to Exclude Expert Testimony of David Madigan, Ph.D., Document 6144, in In re Taxotere (Docetaxel) Prods. Liab. Litig. (E.D. La. Feb. 8, 2019) [Brief].

[7]  Brief at 2; see also Brief at 14 (restating without initially explaining why combining two statistically non-significant RCTs by meta-analysis would be unhelpful).

[8]  Brief at 16.

[9]  Brief at 17 (quoting from Madigan Dep. 256:14–15).

[10]  Michael D. Green, Michael Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” at 581n.89, in Fed. Jud. Center, Reference Manual on Scientific Evidence (3d ed. 2011).

[11]  Id. at 606.

[12]  Brief at 14.

[13]  Brief at 14, citing Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953, at *7 (E.D. La. June 16, 2015) (Vance, J.) (quoting LeBlanc v. Chevron USA, Inc., 396 F. App’x 94, 99 (5th Cir. 2010)) (“[A] study that notes ‘that the subjects were exposed to a range of substances and then nonspecifically note[s] increases in disease incidence’ can be disregarded.”), aff’d, 650 F. App’x 170 (5th Cir. 2016). See “The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case” (June 25, 2015).

[14]  Brief at 14-16.

[15]  In re Accutane Litig., 2015 WL 753674, at *19 (N.J.L.Div., Atlantic Cty., Feb. 20, 2015), aff’d, 234 N.J. 340, 191 A.3d 560 (2018). See “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015); “N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses” (Aug. 8, 2018).

[16]  Cara Salvatore, “Sanofi Beats First Bellwether In Chemo Drug Hair Loss MDL,” Law360 (Sept. 27, 2019).