TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

ALI Reporters Are Snookered by Racette Fallacy

April 27th, 2015

In the Reference Manual on Scientific Evidence, the authors of the epidemiology chapter advance instances of acceleration of onset of disease as an example of a situation in which reliance upon doubling of risk will not provide a reliable probability of causation calculation[1]. In a previous post, I suggested that the authors’ assertion may be unfounded. See “Reference Manual on Scientific Evidence on Relative Risk Greater Than Two For Specific Causation Inference” (April 25, 2014). Several epidemiologic methods would permit the calculation of relative risk within specific time windows from first exposure.
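The arithmetic behind the doubling-of-risk argument is straightforward: the probability of causation is conventionally estimated as the attributable fraction among the exposed, (RR − 1)/RR, which reaches 50% when the relative risk equals 2.0. A minimal sketch (the function name and sample values are mine, for illustration only):

```python
def probability_of_causation(relative_risk):
    """Attributable fraction among the exposed: (RR - 1) / RR."""
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk

# At the traditional doubling-of-risk threshold, RR = 2.0, the probability is 0.5:
print(probability_of_causation(2.0))  # 0.5

# With time-window-specific relative risks (e.g., by years since first exposure),
# the same calculation can be run within each window:
for rr in (1.5, 2.0, 3.0):
    print(rr, round(probability_of_causation(rr), 3))
```

If valid relative risks can be computed within time windows from first exposure, the same arithmetic applies window by window, which is why acceleration of onset need not defeat the calculation.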

The American Law Institute (ALI) Reporters, for the Restatement of Torts, make similar claims.[2] First, the Reporters, citing the Manual’s second edition, repeat the Manual’s claim that:

 “Epidemiologists, however, do not seek to understand causation at the individual level and do not use incidence rates in group studies to determine the cause of an individual’s disease.”

American Law Institute, Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28(a) cmt. c(4) & rptrs. notes (2010) [Comment c(4)]. In making this claim, the Reporters ignore an extensive body of epidemiologic studies on genetic associations and on biomarkers, which do address causation, implicitly or explicitly, on an individual level.

The Reporters also repeat the Manual’s doubtful claim that acceleration of onset of disease prevents an assessment of attributable risk, although they acknowledge that an average earlier age of onset could support a damages calculation, as opposed to damages for an injury that would not have occurred at all but for the tortious exposure. Comment c(4). The Reporters go a step further than the Manual, however, and provide an example of the acceleration-of-onset studies that they have in mind:

“For studies whose results suggest acceleration, see Brad A. Racette, “Welding-Related Parkinsonism: Clinical Features, Treatments, and Pathophysiology,” 56 Neurology 8, 12 (2001) (stating that authors “believe that welding acts as an accelerant to cause [Parkinson’s Disease]… .”).”

The citation to Racette’s 2001 paper[3] is curious, interesting, disturbing, and perhaps revealing. In this 2001 paper, Racette misrepresented the type of study he claimed to have done, and the inferences he drew from his case series are invalid. Anyone experienced in the field of epidemiology would have dismissed this study, its conclusions, and its suggested relation between welding and parkinsonism.

Dr. Brad A. Racette teaches and practices neurology at Washington University in St. Louis, across the river from a hotbed of mass tort litigation, Madison County, Illinois. In the 1990s, Racette received referrals from plaintiffs’ attorneys to evaluate their clients in litigation over exposure to welding fumes. Plaintiffs were claiming that their occupational exposures caused them to develop manganism, a distinctive parkinsonism that differs from Parkinson’s disease [PD], but has signs and symptoms that might be confused with PD by unsophisticated physicians unfamiliar with both manganism and PD.

After the publication of his 2001 paper, Racette became the darling of felon Dicky Scruggs and other plaintiffs’ lawyers. The litigation industrialists invited Racette and his team down to Alabama and Mississippi, to conduct screenings of welding tradesmen, recruited by Scruggs and his team, for potential lawsuits for PD and parkinsonism. The result was a paper that helped Scruggs propel a litigation assault against the welding industry.[4]

Racette’s 2001 paper, like many of his papers, was accompanied by a press release, in which he was quoted as stating that “[m]anganism is a very different disease” from PD. Gila Reckess, “Welding, Parkinson’s link suspected” (Feb. 9, 2001)[5].

Racette’s 2001 paper provoked a strongly worded letter that called Racette and his colleagues out for misrepresenting the nature of their work:

“The authors describe their work as a case–control study. Racette et al. ascertained welders with parkinsonism and compared their concurrent clinical features to those of subjects with PD. This is more consistent with a cross-sectional design, as the disease state and factors of interest were ascertained simultaneously. Cross-sectional studies are descriptive and therefore cannot be used to infer causation.”

*****

“The data reported by Racette et al. do not necessarily support any inference about welding as a risk factor in PD. A cohort study would be the best way to evaluate the role of welding in PD.”

Bernard Ravina, Andrew Siderowf, John Farrar, Howard Hurtig, “Welding-related parkinsonism: Clinical features, treatment, and pathophysiology,” 57 Neurology 936, 936 (2001).

As we will see, Dr. Ravina and his colleagues were charitable to suggest that the study was more compatible with a cross-sectional study. Racette had set out to determine “whether welding-related parkinsonism differs from idiopathic PD.” He claimed that he had “performed a case-control study,” with a case group of welders and two control groups. His inferences drawn from his “data” are, however, fallacious because he employed an invalid study design.

In reality, Racette’s paper was nothing more than a chart review, a case series of 15 “welders” in the context of a movement disorder clinic. After his clinical and radiographic evaluation, Racette found that these 15 cases were clinically indistinguishable from PD, and thus unlike manganism. Racette did not reveal whether any of these 15 welders had been referred by plaintiffs’ counsel; nor did he suggest that these welding tradesmen made up a disproportionate number of his patient base in St. Louis, Missouri.

Racette compared his 15 selected career welders with PD to his general movement disorders clinic patient population. From that patient population, Racette deployed two “control” groups, one matched for age and sex with the 15 welders, and the other unmatched. The American Law Institute Reporters are indeed correct that Racette suggested that the average age of onset for these 15 welders was lower than that for his non-welder patients, but their uncritical embrace overlooked the fact that Racette’s suggestion does not support his claimed inference that, in welders, “welding exposure acts as an accelerant to cause PD.”

Racette’s claimed inference is remarkable because he did not perform an analytical epidemiologic study that was capable of generating causal inferences. His paper incongruously presents odds ratios, although the controls have PD, the disease of interest, which invalidates any analytical inference from his case series. Given the referral and selection biases inherent in tertiary-care specialty practices, this paper can provide no reliable inferences about associations or differences in ages of onset. Even within the confines of a case series misrepresented to be a case-control study, Racette acknowledged that “[s]ubsequent comparisons of the welders with age-matched controls showed no significant differences.”

NOT A CASE-CONTROL STUDY

That Racette wrongly identified his paper as a case-control study is beyond debate. How the journal Neurology accepted the paper for publication is a mystery. The acceptance of its inference by the ALI Reporters, lawyers, and judges is regrettable.

Structurally, Racette’s paper could never qualify as a case-control study, or as any other analytical epidemiologic study. Here is how a leading textbook on case-control studies defines the design:

“In a case-control study, individuals with a particular condition or disease (the cases) are selected for comparison with a series of individuals in whom the condition or disease is absent (the controls).”

James J. Schlesselman, Case-control Studies. Design, Conduct, Analysis at 14 (N.Y. 1982)[6].

Every patient in Racette’s paper, welders and non-welders alike, has the outcome of interest, PD. No epidemiologic study design corresponds to what Racette did, and there is no way to draw any useful inference from Racette’s comparisons. His paper violates the key principle of a proper case-control study; namely, all subjects must be selected independently of the study exposure under investigation. Schlesselman stressed that identifying an eligible case or control must not depend upon that person’s exposure status for any factor under consideration. Id. Racette’s 2001 paper deliberately violated this basic principle.
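For readers who want the mechanics, the standard case-control calculation contrasts exposure odds among diseased cases against exposure odds among disease-free controls. The hypothetical counts below are mine; the point is that Racette’s design supplies no disease-free row at all, so the computation cannot even be set up:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from the standard 2x2 case-control table."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts: 30 of 100 cases exposed vs. 10 of 100 disease-free controls.
print(round(odds_ratio(30, 70, 10, 90), 2))  # 3.86

# In Racette's paper every subject has PD; there is no disease-free control group,
# so the control row of the table does not exist and no odds ratio is interpretable.
```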

Racette’s study design, with only cases with the outcome of interest appearing in the analysis, recklessly obscures the underlying association between the exposure (welding) and age in the population. We would, of course, expect self-identified welders to be younger than the average Parkinson’s disease patient because welding is physical work that requires good health. An equally fallacious study could be cobbled together to “show” that the age-of-onset of Parkinson’s disease for sitcom actors (such as Michael J. Fox) is lower than the age-of-onset of Parkinson’s disease for Popes (such as John Paul II). Sitcom actors are generally younger as a group than Popes. Comparing age of onset between disparate groups that have different age distributions generates a biased comparison and an erroneous inference.
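The bias is easy to demonstrate with made-up numbers. In the hypothetical below (all figures invented for illustration), two groups face identical age-specific risks, yet the younger-skewing group shows a markedly lower average age at onset:

```python
# Invented numbers: both groups share identical age-specific disease risks.
risk_per_year = {50: 0.01, 60: 0.02, 70: 0.04, 80: 0.08}

# "Welders" skew young (welding is physical work); clinic patients skew old.
welders = {50: 800, 60: 150, 70: 40, 80: 10}
clinic = {50: 50, 60: 150, 70: 400, 80: 400}

def mean_onset_age(age_counts):
    """Average age at onset among the cases arising from each age stratum."""
    cases = {age: n * risk_per_year[age] for age, n in age_counts.items()}
    total = sum(cases.values())
    return sum(age * c for age, c in cases.items()) / total

print(round(mean_onset_age(welders), 1))  # 56.4
print(round(mean_onset_age(clinic), 1))   # 75.4
```

With identical risks at every age, the “earlier onset” among the young group is an artifact of its age distribution, which is precisely the fallacy in inferring acceleration from Racette’s clinic comparison.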

The invalidity and fallaciousness of Racette’s approach to studying the age of onset of PD in welders, and his uncritical inferences, have been extensively commented upon in the general epidemiologic literature. For instance, studies that compared the age at death for left-handed versus right-handed persons reported an observed nine-year earlier death for left handers, leading to (unfounded) speculation that the earlier mortality resulted from birth and life stressors and accidents for left handers living in a world designed to accommodate right-handed persons[7]. The inference has been shown to be fallacious, the result of social pressure in the early twentieth century to push left handers to use their right hands, a prejudicial practice that abated over the decades of the last century. Left handers born later in the century were less likely to be “switched,” whereas persons born earlier, and now dying, were less likely to be classified as left-handed, producing a birth-cohort effect[8]. When proper prospective cohort studies were conducted, valid data showed that left-handers and right-handers have equivalent mortality rates[9].

Epidemiologist Ken Rothman addressed the fallacy of Racette’s paper at some length in one of his books:

“Suppose we study two groups of people and look at the average age at death among those who die. In group A, the average age of death is 4 years; in group B, it is 28 years. Can we say that being a member of group A is riskier than being a member of group B? We cannot… . Suppose that group A comprises nursery school students and group B comprises military commandos. It would be no surprise that the average age at death of people who are currently military commandos is 28 years or that the average age of people who are currently nursery students is 4 years. …

In a study of factory workers, an investigator inferred that the factory work was dangerous because the average age of onset of a particular kind of cancer was lower in these workers than among the general population. But just as for the nursery school students and military commandos, if these workers were young, the cancers that occurred among them would have to be occurring in young people. Furthermore, the age of onset of a disease does not take into account what proportion of people get the disease.

These examples reflect the fallacy of comparing the average age at which death or disease strikes rather than comparing the risk of death between groups of the same age.”

Kenneth J. Rothman, “Introduction to Epidemiologic Thinking,” in Epidemiology: An Introduction at 5-6 (N.Y. 2002).

And here is how another author of Modern Epidemiology[10] addressed the Racette fallacy in a different context involving PD:

“Valid studies of age-at-onset require no underlying association between the risk factor and aging or birth cohort in the source population. They must also consider whether a sufficient induction time has passed for the risk factor to have an effect. When these criteria and others cannot be satisfied, age-specific or standardized risks or rates, or a population-based case-control design, must be used to study the association between the risk factor and outcome. These designs allow the investigator to disaggregate the relation between aging and the prevalence of the risk factor, using familiar methods to control confounding in the design or analysis. When prior knowledge strongly suggests that the prevalence of the risk factor changes with age in the source population, case-only studies may support a relation between the risk factor and age-at-onset, regardless of whether the inference is justified.”

Jemma B. Wilk & Timothy L. Lash, “Risk factor studies of age-at-onset in a sample ascertained for Parkinson disease affected sibling pairs: a cautionary tale,” 4 Emerging Themes in Epidemiology 1 (2007) (internal citations omitted) (emphasis added).

A properly designed epidemiologic study would have avoided Racette’s fallacy. A relevant cohort study would have enrolled welders in the study at the outset of their careers, and would have continued to follow them even if they changed occupations. A case-control study would have enrolled cases with PD and controls without PD (or more broadly, parkinsonism), with cases and controls selected independently of their exposure to welding fumes. Either method would have determined the rate of PD in both groups, absolutely or relatively. Racette’s paper, which completely lacked non-PD cases, could not have possibly accomplished his stated objectives, and it did not support his claims.

Racette’s questionable work provoked a mass tort litigation and ultimately federal Multi-District Litigation 1535.[11] Analytical epidemiologic studies, however, consistently showed no association between welding and PD. A meta-analysis published in 2012 ended the debate[12] as a practical matter, and MDL 1535 is no more. How strange that the ALI Reporters chose the Racette work as an example for their claims about acceleration of onset!


[1] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549, 614 (Wash., DC 3d ed. 2011).

[2] Michael D. Green was an ALI Reporter, and of course, an author of the chapter in the Reference Manual.

[3] Brad A. Racette, L. McGee-Minnich, S. M. Moerlein, J. W. Mink, T. O. Videen, and Joel S. Perlmutter, “Welding-related parkinsonism: clinical features, treatment, and pathophysiology,” 56 Neurology 8 (2001).

[4] See Brad A. Racette, S.D. Tabbal, D. Jennings, L. Good, Joel S. Perlmutter, and Brad Evanoff, “Prevalence of parkinsonism and relationship to exposure in a large sample of Alabama welders,” 64 Neurology 230 (2005); Brad A. Racette, et al., “A rapid method for mass screening for parkinsonism,” 27 Neurotoxicology 357 (2006) (duplicate publication of the earlier, 2005, paper).

[5] Previously available at <http://record.wustl.edu/archive/2001/02-09-01/articles/welding.html>, last visited on June 27, 2005.

[6] See also Brian MacMahon & Dimitrios Trichopoulos, Epidemiology. Principles and Methods at 229 (2d ed. 1996) (“A case-control study is an inquiry in which groups of individuals are selected based on whether they do (the cases) or do not (the controls) have the disease of which the etiology is to be studied.”); Jennifer L. Kelsey, W.D. Thompson, A.S. Evans, Methods in Observational Epidemiology at 148 (N.Y. 1986) (“In a case-control study, persons with a given disease (the cases) and persons without the disease (the controls) are selected … .”).

[7] See, e.g., Diane F. Halpern & Stanley Coren, “Do right-handers live longer?” 333 Nature 213 (1988); Diane F. Halpern & Stanley Coren, “Handedness and life span,” 324 New Engl. J. Med. 998 (1991).

[8] Kenneth J. Rothman, “Left-handedness and life expectancy,” 325 New Engl. J. Med. 1041 (1991) (pointing out that by the age-of-onset comparison method, nursery education would be found more dangerous than paratrooper training, given that the age at death of pre-schoolers who died would be much lower than that of paratroopers who died); see also Martin Bland & Doug Altman, “Do the left-handed die young?” Significance 166 (Dec. 2005).

[9] See Philip A. Wolf, Ralph B. D’Agostino, Janet L. Cobb, “Left-handedness and life expectancy,” 325 New Engl. J. Med. 1042 (1991); Marcel E. Salive, Jack M. Guralnik & Robert J. Glynn, “Left-handedness and mortality,” 83 Am. J. Public Health 265 (1993); Olga Basso, Jørn Olsen, Niels Holm, Axel Skytthe, James W. Vaupel, and Kaare Christensen, “Handedness and mortality: A follow-up study of Danish twins born between 1900 and 1910,” 11 Epidemiology 576 (2000). See also Martin Wolkewitz, Arthur Allignol, Martin Schumacher, and Jan Beyersmann, “Two Pitfalls in Survival Analyses of Time-Dependent Exposure: A Case Study in a Cohort of Oscar Nominees,” 64 Am. Statistician 205 (2010); Michael F. Picco, Steven Goodman, James Reed, and Theodore M. Bayless, “Methodologic pitfalls in the determination of genetic anticipation: the case of Crohn’s disease,” 134 Ann. Intern. Med. 1124 (2001).

[10] Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, eds., Modern Epidemiology (3d ed. 2008).

[11] Dicky Scruggs served on the Plaintiffs’ Steering Committee until his conviction on criminal charges.

[12] James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).

Cherry Picking; Systematic Reviews; Weight of the Evidence

April 5th, 2015

In a paper prepared for one of Professor Margaret Berger’s symposia on law and science, Lisa Bero, a professor of clinical pharmacy in the University of California San Francisco’s School of Pharmacy, identified a major source of error in published reviews of putative health effects:

“The biased citation of studies in a review can be a major source of error in the results of the review. Authors of reviews can influence their conclusions by citing only studies that support their preconceived, desired outcome.”

Lisa Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J. L. & Policy 569, 576 (2006). Biased citation, consideration, and reliance are major sources of methodological error in courtroom proceedings as well. Sometimes astute judges recognize and bar expert witnesses who would pass off their opinions as well considered when they are propped up only by biased citation. Unfortunately, courts have been inconsistent, sometimes rewarding cherry picking of studies by admitting biased opinions[1], sometimes unhorsing the would-be expert witnesses by excluding their opinions[2].

Given that cherry picking or “biased citation” is recognized in the professional community as a rather serious methodological sin, judges may be astonished to learn that neither phrase, “cherry picking” nor “biased citation,” appears in the third edition of the Reference Manual on Scientific Evidence. Of course, the Manual could have dealt with the underlying issue of biased citation by affirmatively promoting the procedure of systematic review, but here again, the Manual falls short. There is no discussion of systematic review in the chapters on toxicology[3], epidemiology[4], or statistics[5]. Only the chapter on clinical medicine discusses the systematic review, briefly[6]. The absence of support for the procedures of systematic reviews, combined with the occasional cheerleading for “weight of the evidence,” in which expert witnesses subjectively include and weight studies to reach pre-ordained opinions, tends to undermine the reliability of the latest edition of the Manual[7].


[1] Spray-Rite Serv. Corp. v. Monsanto Co., 684 F.2d 1226, 1242 (7th Cir. 1982) (failure to consider factors identified by opposing side’s expert did not make testimony inadmissible).

[2] In re Zoloft, 26 F. Supp. 3d 449 (E.D. Pa. 2014) (excluding perinatal epidemiologist, Anick Bérard, for biased cherry picking of data points); In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding opinions of Drs. Arthur Kornbluth and David Madigan because of their unjustified dismissal of studies that contradicted or undermined their opinions); In re Bextra & Celebrex Mktg. Sales Practices & Prods. Liab. Litig., 524 F. Supp. 2d 1166, 1175–76, 1179 (N.D. Cal. 2007) (holding that expert witnesses may not ‘‘cherry-pick[ ]’’ observational studies to support a conclusion that is contradicted by randomized controlled trials, meta-analyses of such trials, and meta-analyses of observational studies; excluding expert witness who ‘‘ignores the vast majority of the evidence in favor of the few studies that support her conclusion’’); Grant v. Pharmavite, LLC, 452 F. Supp. 2d 903, 908 (D. Neb. 2006) (excluding expert witness opinion testimony that plaintiff’s use of black cohosh caused her autoimmune hepatitis) (“Dr. Corbett’s failure to adequately address the body of contrary epidemiological evidence weighs heavily against admission of his testimony.”); Downs v. Perstorp Components, Inc., 126 F. Supp. 2d 1090, 1124-29 (E.D. Tenn. 1999) (expert’s opinion raised seven “red flags” indicating that his testimony was litigation biased), aff’d, 2002 U.S. App. Lexis 382 (6th Cir. Jan. 4, 2002).

[3] Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” in Reference Manual on Scientific Evidence 633 (3d ed. 2011).

[4] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549 (3d ed. 2011).

[5] David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” in Reference Manual on Scientific Evidence 209 (3d ed. 2011).

[6] John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Federal Judicial Center and National Research Council, Reference Manual on Scientific Evidence 687 (3d ed. 2011).

[7] See Margaret A. Berger, “The Admissibility of Expert Testimony,” in Reference Manual on Scientific Evidence 11, 20 & n.51 (3d ed. 2011) (posthumously citing Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), with approval, for reversing exclusion of expert witnesses who advanced “weight of the evidence” opinions).

Johnson of Accutane – Keeping the Gate in the Garden State

March 28th, 2015

Nelson Johnson is the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a rattling good yarn, which formed the basis for a thinly fictionalized story of Atlantic City under the control of mob boss (and Republican politician) Enoch “Nucky” Johnson. HBO transformed Johnson’s book into a multi-season series, with Steve Buscemi playing Nucky Johnson (Thompson in the series). Robert Strauss, “Judge Nelson Johnson: Atlantic City’s Godfather — A Q&A with Judge Nelson Johnson,” New Jersey Monthly (Aug. 16, 2010).

Nelson Johnson is also known as the Honorable Nelson Johnson, a trial court judge in Atlantic County, New Jersey, where he inherited some of the mass tort docket of Judge Carol Higbee. Judge Higbee has since ascended to the Appellate Division of the New Jersey Superior Court. One of the litigations Judge Johnson presides over is the mosh pit of isotretinoin (Accutane) cases, involving claims that the acne medication causes irritable bowel syndrome (IBS) and Crohn’s disease (CD). Judge Johnson is not only an accomplished writer of historical fiction, but he is also an astute evaluator of the facts and data, and the accompanying lawyers’ rhetoric, thrown about in pharmaceutical products liability litigation.

Perhaps more than his predecessor ever displayed, Judge Johnson recently demonstrated his aptitude for facts and data in serving as a gatekeeper of scientific evidence, as required by the New Jersey Supreme Court in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). Faced with a complex evidentiary display on the validity and reliability of the scientific evidence, Judge Johnson entertained extensive briefings, testimony, and oral argument. When the dust settled, the court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in settling the dust, Judge Johnson dispatched several bogus and misleading “lines of evidence,” which have become standard ploys to clog New Jersey and other courthouses.

Case Reports

As so often is the case when there is no serious scientific evidence of harm in pharmaceutical cases, plaintiffs in the Accutane litigation relied heavily upon case and adverse event reports. Id. at *11. Judge Johnson was duly unimpressed, and noted that:

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16.

Bootstrapped, Manufactured Evidence

With respect to case reports that are submitted to the FDA’s Adverse Event Reporting System (FAERS), Judge Johnson acknowledged the “serious limitations” of the hearsay anecdotes that make up such reports. Despite the value of adverse event reports in generating signals for future investigation, Judge Johnson, citing the FDA’s own description of the reporting system, concluded that the system’s anecdotal data are “not evidentiary in a court of law.” Id. at *14 (quoting FDA’s description of FAERS).

Judge Johnson took notice of another fact; namely, that the litigation industry creates evidence that it then uses to claim causal connections in the courtroom. Plaintiffs’ lawyers in pharmaceutical cases routinely file MedWatch adverse event reports, which inflate the very “signal” of harm from medication use that they then claim supports causation. This evidentiary bootstrapping machine was hard at work in the isotretinoin litigation. See Derrick J. Stobaugh, Parakkal Deepak, and Eli D. Ehrenpreis, “Alleged Isotretinoin-Associated Inflammatory Bowel Disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 398 (2013) (“Attorney-initiated reports inflate the pharmacovigilance signal of isotretinoin-associated IBD in the FAERS.”). Judge Johnson gave a wry hat tip to plaintiffs’ counsel’s industry, acknowledging that the litigation industry itself had inflated this signal-generating process:

“The legal profession is a bulwark of our society, yet the courts should never underestimate the resourcefulness of some attorneys.”

In re Accutane, 2015 WL 753674, at *15.

Bias and Confounding

The epidemiologic studies referenced by the parties had identified a fairly wide range of “risk factors” for irritable bowel syndrome, including many factors prevalent in Westernized countries, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that four medications are known to be risk factors for IBD: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics.

In reviewing the plaintiffs’ expert witnesses’ methodology, Judge Johnson found that they had been inordinately and inappropriately selective in the studies chosen for reliance. The challenged witnesses had discounted and discarded most of the available studies in favor of two studies that were small, biased, and not population based. Indeed, one of the studies evidenced substantial selection bias by using referrals to obtain study participants, a process deprecated by the trial court as “cherry picking the subjects.” Id. at *18. “The scientific literature does not support reliance upon such insignificant studies to arrive at conclusions.” Id.

Animal Studies

Both sides in the isotretinoin cases seemed to concede the relative unimportance of animal studies. The trial court discussed the limitations on animal studies, especially the absence of a compelling animal model of human irritable bowel syndrome. Id. at *18.

Cherry Picking and Other Crafty Stratagems

With respect to the complete scientific evidentiary display, plaintiffs asserted that their expert witnesses had considered everything, but then failed to account for most of the evidence. Judge Johnson found this approach deceptive and further evidence of a cherry-picking, pathological methodology:

‘‘Finally, coursing through Plaintiffs’ presentation is a refrain that is a ruse. Repeatedly, counsel for the Plaintiffs and their witnesses spoke of ‛lines of evidence”, emphasizing that their experts examined ‛the same lines of evidence’ as did the experts for the Defense. Counsels’ sophistry is belied by the fact that the examination of the ‘lines of evidence’ by Plaintiffs’ experts was highly selective, looking no further than they wanted to—cherry picking the evidence—in order to find support for their conclusion-driven testimony in support of a hypothesis made of disparate pieces, all at the bottom of the medical evidence hierarchy.’’

Id. at *21.

New Jersey Rule of Evidence 703

The New Jersey rules of evidence, like the Federal Rules, impose a reasonableness limit on what sorts of otherwise inadmissible evidence an expert witness may rely upon. See “Rule of Evidence 703 — Problem Child of Article VII” (Sept. 9, 2011). Although Judge Johnson did not invoke Rule 703 specifically, he was clearly troubled by plaintiffs’ expert witnesses’ reliance upon an unadjusted odds ratio from an abstract, which did not address substantial confounding from a known causal risk factor – antibiotic use. Judge Johnson concluded that reliance upon the higher, unadjusted risk figure, contrary to the authors’ own methods and conclusions, and without a cogent explanation for doing so, was “pure advocacy” on the part of the witnesses. In re Accutane, 2015 WL 753674, at *17; see also id. at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404, 417 (1992), for the proposition that “when an expert relies on such data as epidemiological studies, the trial court should review the studies, as well as other information proffered by the parties, to determine if they are of a kind on which such experts ordinarily rely.”).

Discordance Between Courtroom and Professional Opinions

One of plaintiffs’ expert witnesses, Dr. Arthur Kornbluth, actually had studied the putative association between isotretinoin and CD before he became intensively involved in litigation as an expert witness. In re Accutane, 2015 WL 753674, at *7. Having an expert witness who is a real-world expert can be a plus, but not when that expert witness maintains a double standard for assessing causal connections. Back in 2009, Kornbluth published an article, “Ulcerative Colitis Practice Guidelines in Adults,” in The American Journal of Gastroenterology. Id. at *10. This positive achievement became a large demerit when cross-examination at the Kemp hearing revealed that Kornbluth had considered but rejected the urgings of a colleague, Dr. David Sachar, to comment on isotretinoin as a cause of irritable bowel syndrome. In front of Judge Johnson, Dr. Kornbluth felt no such scruples. Id. at *11. Dr. Kornbluth’s stature in the field of gastroenterology, along with his silence on the issue in his own field, created a striking contrast with his stridency about causation in the courtroom. The contrast raised the trial court’s level of scrutiny and skepticism about his causal opinions in the New Jersey litigation. Id. (citing and quoting Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 528 (W.D. Pa. 2003) (“Expert opinions generated as the result of litigation have less credibility than opinions generated as the result of academic research or other forms of ‘pure’ research.”) (“The expert’s motivation for his/her study and research is important. … We may not ignore the fact that a scientist’s normal work place is the lab or field, not the courtroom or the lawyer’s office.”)).

Meta-Analysis

Meta-analysis has become an important facet of pharmaceutical and other products liability litigation[1]. Fortunately for Judge Johnson, he had before him an extremely capable expert witness, Dr. Stephen Goodman, to explain meta-analysis generally, and two meta-analyses performed on isotretinoin and inflammatory bowel outcomes. In re Accutane, 2015 WL 753674, at *8. Dr. Goodman explained that:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

Id. Dr. Goodman further explained that plaintiffs’ expert witnesses’ failure to perform a meta-analysis was telling, given that meta-analysis “can get us closer to the truth.” Id.
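For readers who want a concrete sense of what Dr. Goodman was describing, the core of a fixed-effect meta-analysis is nothing more than a precision-weighted average of the individual study results, so that no single study determines the outcome. The sketch below uses invented study results, not the isotretinoin data:

```python
import math

# Fixed-effect, inverse-variance meta-analysis on the log odds ratio
# scale. The study results below are invented for illustration only.
studies = [
    # (odds ratio, 95% CI lower bound, 95% CI upper bound)
    (1.20, 0.80, 1.80),
    (0.95, 0.70, 1.29),
    (1.10, 0.85, 1.42),
]

weights, weighted_sum = [], 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
    w = 1.0 / se**2                                  # inverse-variance weight
    weights.append(w)
    weighted_sum += w * log_or

pooled_log = weighted_sum / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(round(pooled_or, 2), tuple(round(x, 2) for x in ci))
```

No single study is thrown out; each simply contributes in proportion to its precision, which is why a witness who discards or cherry-picks studies is departing from the method rather than applying it.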

Some Nitpicking

Specific Causation

After such a commanding judicial performance by Judge Johnson, nitpicking on specific causation might strike some as ungrateful. For some reason, however, Judge Johnson cited several cases on the appropriateness of expert witnesses’ reliance upon epidemiologic studies for assessing specific causation or for causal apportionment between two or more causes. In re Accutane, 2015 WL 753674, at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404 (1992), Caterinicchio v. Pittsburgh Corning, 127 N.J. 428 (1992), and Dafler v. Raymark Inc., 259 N.J. Super. 17, 36 (App. Div. 1992), aff’d. o.b. 132 N.J. 96 (1993)). Fair enough, but specific causation was not at issue in the Accutane Kemp hearing, and the Landrigan and Caterinicchio cases are irrelevant to general causation.

In both Landrigan and Caterinicchio, the defendants moved for directed verdicts by arguing that, even assuming arguendo that asbestos causes colon cancer, the plaintiffs’ expert witnesses had not presented opinions sufficient to support findings that Landrigan’s and Caterinicchio’s colon cancers were caused by asbestos. See “Landrigan v. The Celotex Corporation, Revisited” (June 4, 2013). General causation was thus never at issue, and the holdings never addressed the admissibility of the expert witnesses’ causation opinions. What was at issue, in the directed verdicts and in the appeals taken from the judgments entered on those verdicts, was only the sufficiency of opinions that equated increased risks of less than 2.0 with specific causation.
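The arithmetic behind the relative-risk-of-2.0 cases is compact. On the usual (and, as the essay above notes, contestable) assumptions, the probability that an exposed person’s disease is attributable to the exposure equals the attributable fraction among the exposed, which exceeds 50 percent only when the relative risk exceeds 2.0:

```python
# Attributable fraction among the exposed: AF = (RR - 1) / RR.
# On the standard assumptions, AF is read as the probability that an
# exposed individual's disease was caused by the exposure, so AF
# crosses 0.5 ("more likely than not") exactly when RR exceeds 2.0.

def attributable_fraction(rr):
    """Fraction of disease among the exposed attributable to exposure."""
    if rr <= 1.0:
        return 0.0
    return (rr - 1.0) / rr

for rr in (1.5, 2.0, 3.0):
    print(rr, round(attributable_fraction(rr), 3))
# 1.5 -> 0.333; 2.0 -> 0.5; 3.0 -> 0.667
```

A relative risk of 1.5 thus translates, on these assumptions, to only a one-in-three probability of specific causation, which is why the sufficiency arguments in Landrigan and Caterinicchio turned on risks below 2.0.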

Judge Johnson, in discussing previous case law, suggests that the New Jersey Supreme Court reversed and remanded the Landrigan case for trial, holding that “epidemiologists could help juries determine causation in toxic tort cases and rejected the proposition that epidemiological studies must show a relative risk factor of 2.0 before gaining acceptance by a court.” In re Accutane, 2015 WL 753674, at *5, citing Landrigan, 127 N.J. at 419. A close and fair reading of Landrigan, however, shows that the case was about a directed verdict, 127 N.J. at 412, and not about a challenge to the use of epidemiologic studies generally, or to their use to show general causation.

Necessity of Precise Biological Mechanism

In the Accutane hearings, the plaintiffs’ counsel and their expert witnesses failed to provide a precise biological mechanism of the cause of IBD. Judge Johnson implied that any study asserting that Accutane caused IBD “would, of necessity, require an explication of a precise biological mechanism of the cause of IBD and no one has yet to venture more than alternate and speculative hypotheses on that question.” In re Accutane, 2015 WL 753674, at *8. Conclusions of causality, however, do not always come accompanied by understood biological mechanisms, and Judge Johnson demonstrated that the methods and evidence relied upon by plaintiffs’ expert witnesses could not, in any event, allow them to draw causal conclusions.

Interpreting Results Contrary to Publication Authors’ Interpretations

There is good authority, no less than the United States Supreme Court in Joiner, for the view that there is something suspect about expert witnesses’ interpreting a published study’s results contrary to the authors’ own interpretation. Judge Johnson found that the plaintiffs’ expert witnesses in the Accutane litigation had inferred that two studies showed increased risk when the authors of those studies had concluded that their studies did not appear to show an increased risk. Id. at *17. There will be times, however, when a published study has incorrectly interpreted its own data, and when “real” expert witnesses can, and should, interpret the data appropriately. Accutane was not such a case. In In re Accutane, Judge Johnson carefully documented and explained how the plaintiffs’ expert witnesses’ supposed reinterpretation was little more than attempted obfuscation. His Honor concluded that the witnesses’ distortion of, and “reliance upon these two studies is fatal and reveals the lengths to which legal counsel and their experts are willing to contort the facts and torture the logic associated with Plaintiffs’ hypothesis.” Id. at *18.


[1] “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (The Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).

The Mythology of Linear No-Threshold Cancer Causation

March 13th, 2015

“For the great enemy of the truth is very often not the lie—deliberate, contrived, and dishonest—but the myth—persistent, persuasive, and unrealistic. Too often we hold fast to the clichés of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought.”

John F. Kennedy, Yale University Commencement (June 11, 1962)

         *        *        *        *        *        *        *        *        *

The linear no-threshold model for risk assessment has its origins in a dubious attempt by scientists to play at policy making[1]. The model has survived as a political strategy for injecting the precautionary principle into regulatory decision making, but it has turned into a malignant myth in litigation over low-dose exposures to putative carcinogens. Ignorance or uncertainty about low-dose exposures is turned into an affirmative opinion that the low-dose exposures are actually causative. Call it contrived, call it dishonest, or call it a myth; the LNT model is an intellectual cliché.

The LNT cliché pervades American media as well as courtrooms. Earlier this week, the New York Times provided a lovely example of the myth taking center stage, without explanation or justification. Lumber Liquidators is under regulatory and litigation attack for having sold Chinese laminate wood flooring made with formaldehyde-containing materials. According to a “60 Minutes” investigation, the flooring off-gases formaldehyde at concentrations in excess of regulatory permissible levels. See Aaron M. Kessler & Rachel Abrams, “Homeowners Try to Assess Risks From Chemical in Floors,” New York Times (Mar. 10, 2015).

The Times reporters, in discussing whether a risk exists to people who live in houses and apartments with the Lumber Liquidators flooring, sought out and quoted the opinion of Marilyn Howarth:

“Any exposure to a carcinogen can increase your risk of cancer,” said Marilyn Howarth, a toxicologist at the University of Pennsylvania’s Perelman School of Medicine.

Id. Dr. Howarth, however, is not a toxicologist; she is an occupational and environmental physician, and serves as the Director of Occupational and Environmental Consultation Services at the Hospital of the University of Pennsylvania. She is also an adjunct associate professor of emergency medicine, and the Director of the Community Outreach and Engagement Core, Center of Excellence in Environmental Toxicology, at the University of Pennsylvania Perelman School of Medicine. Without detracting from Dr. Howarth’s fine credentials, the New York Times reporters might have noticed that Dr. Howarth’s publications are primarily on latex allergies, not on the effects of low-dose exposure to carcinogens.

The point is not to diminish Dr. Howarth’s accomplishments, but to criticize the Times reporters for seeking out the opinion of a physician whose expertise is not well matched to the question they raised about risks, and then publishing that opinion even though it is demonstrably wrong. Clearly, some carcinogens, and perhaps all, do not increase risk at “any exposure.” Consider ethanol, which is known to cause cancer of the larynx, liver, female breast, and perhaps other organs[2]. Despite this known causation, no one would assert that “any exposure” to alcohol-containing food and drink increases the risk of these cancers. And the same could be said for most, if not all, carcinogens. The human body has defense mechanisms against carcinogens, including DNA repair and programmed cell death, which work to prevent carcinogenesis from low-dose exposures.

The no-threshold hypothesis is, at best, a hypothesis, and one for which there is affirmative evidence that it should be rejected for some cancers[3]. The factual status claimed for the LNT is a myth; it is an opinion, and a poorly supported opinion at that.
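The disagreement between the competing dose-response models can be made concrete with a toy calculation. Every parameter below is invented; the point is only to show where the models diverge:

```python
# Two toy dose-response models for excess cancer risk (all numbers
# invented). LNT extrapolates a straight line through zero dose; a
# threshold model assigns no excess risk below some dose.

BETA = 1e-4        # hypothetical slope: excess risk per unit dose
THRESHOLD = 50.0   # hypothetical threshold dose

def lnt_excess_risk(dose):
    return BETA * dose

def threshold_excess_risk(dose):
    return BETA * (dose - THRESHOLD) if dose > THRESHOLD else 0.0

for dose in (10.0, 100.0):
    print(dose, lnt_excess_risk(dose), threshold_excess_risk(dose))
```

At the low dose, the LNT model predicts a positive excess risk while the threshold model predicts none; the two models diverge precisely in the low-dose region where the litigation lives. The dispute over formaldehyde off-gassing from flooring is a dispute over which of these curves describes reality, a question the “any exposure” sound bite simply assumes away.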

         *        *        *        *        *        *        *        *        *

“There are, in fact, two things: science and opinion. The former brings knowledge, the latter ignorance.”

Hippocrates of Cos


[1] See Edward J. Calabrese, “Cancer risk assessment foundation unraveling: New historical evidence reveals that the US National Academy of Sciences (US NAS), Biological Effects of Atomic Radiation (BEAR) Committee Genetics Panel falsified the research record to promote acceptance of the LNT,” 89 Arch. Toxicol. 649 (2015); Edward J. Calabrese & Michael K. O’Connor, “Estimating Risk of Low Radiation Doses – A Critical Review of the BEIR VII Report and its Use of the Linear No-Threshold (LNT) Hypothesis,” 182 Radiation Research 463 (2014); Edward J. Calabrese, “Origin of the linearity no threshold (LNT) dose–response concept,” 87 Arch. Toxicol. 1621 (2013); Edward J. Calabrese, “The road to linearity at low doses became the basis for carcinogen risk assessment,” 83 Arch. Toxicol. 203 (2009).

[2] See, e.g., IARC Monographs on the Evaluation of Carcinogenic Risks to Humans – Alcohol Consumption and Ethyl Carbamate; volume 96 (2010).

[3] See, e.g., Jerry M. Cuttler, “Commentary on Fukushima and Beneficial Effects of Low Radiation,” 11 Dose-Response 432 (2013); Jerry M. Cuttler, “Remedy for Radiation Fear – Discard the Politicized Science,” 12 Dose Response 170 (2014).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics who have served as expert witnesses and who have written post-mortems of their involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts of the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. The article makes some mistaken points, but it also offers valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and for worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], whom he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 (the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP,” caused cancer. Baxter Healthcare challenged the classification, and, according to Greenland, the defense experts erroneously interpreted inconclusive studies as evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in humans. If Baxter’s expert invoked the IARC working group’s classification as a complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point: the IARC’s determination that DEHP is not classifiable as to carcinogenicity directly contradicts California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion of the IARC working group’s reclassification of DEHP from “possibly carcinogenic” to “not classifiable,” in light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.”

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may readily be seen as an advocate coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as in other cases, such as the silicone gel breast implant and thimerosal cases) suggests that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

The foregoing points are rather severe limitations of Greenland’s article, but lawyers and judges should also look to what is good and helpful in it. Greenland is correct to call out expert witnesses, regardless of party affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context could lead untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have counseled more care. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new; many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility in making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by their nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time-limited tasks faced by attorneys and courts.”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in alerting courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inference. Greenland emphasizes that validity considerations often are much stronger and more important than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
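A small taste of what quantitative bias analysis involves: given assumed bias parameters for an unmeasured confounder, a classic external-adjustment formula backs out what the observed relative risk would look like once that confounder is accounted for. The inputs below are hypothetical, and this one-line formula is a simplified sketch, not a substitute for the methods in the Lash paper:

```python
# External adjustment of an observed relative risk for an unmeasured
# binary confounder (a classic Bross-style sensitivity formula).
# Every input below is a hypothetical bias parameter, not data.

def adjusted_rr(rr_obs, p_exposed, p_unexposed, rr_confounder):
    """Divide the observed RR by the bias factor implied by a
    confounder with prevalence p_exposed among the exposed and
    p_unexposed among the unexposed, and a confounder-disease
    relative risk of rr_confounder."""
    bias = ((p_exposed * (rr_confounder - 1.0) + 1.0) /
            (p_unexposed * (rr_confounder - 1.0) + 1.0))
    return rr_obs / bias

# A seemingly elevated RR of 1.8 falls below the null once a strong,
# unevenly distributed confounder is posited.
print(round(adjusted_rr(1.8, 0.5, 0.1, 4.0), 2))
```

The exercise makes Greenland’s point concrete: a statistically significant association can be entirely consistent with no causal effect at all, which is why validity analysis, and not the p-value, should carry the gatekeeping weight.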


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished scientist-physician who served as a defense expert witness in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] SeeSignificance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics, who has served as an expert witness, who has written post-mortems of his involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts to the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 ( the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inclusive studies with evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in human. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity is directly contradictory of California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may be readily be seen as an advocate for coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggest that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

These foregoing points are rather severe limitations to Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party of affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new. Many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility to making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then the testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in assuring courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inferences. Greenland emphasizes that validity considerations are often much stronger, and more important, than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished scientist-physician, who served as a defense expert witness in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Fixodent Study Causes Lockjaw in Plaintiffs’ Counsel

February 4th, 2015

Litigation Drives Science

Back in 2011, the Fixodent MDL Court sustained Rule 702 challenges to plaintiffs’ expert witnesses. “Hypotheses are verified by testing, not by submitting them to lay juries for a vote.” In re Denture Cream Prods. Liab. Litig., 795 F. Supp. 2d 1345, 1367 (S.D.Fla.2011), aff’d, Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296 (11th Cir. 2014). The Court found that the plaintiffs had raised a superficially plausible hypothesis, but that they had not verified the hypothesis by appropriate testing[1].

Like dentures to Fixodent, the plaintiffs stuck to their claims, and set out to create the missing evidence. Plaintiffs’ counsel contracted with Dr. Salim Shah and his companies Sarfez Pharmaceuticals, Inc. and Sarfez USA, Inc. (“Sarfez”) to conduct human research in India, to support their claims that zinc in denture cream causes neurological damage[2]. In re Denture Cream Prods. Liab. Litig., Misc. Action 13-384 (RBW), 2013 U.S. Dist. LEXIS 93456, *2 (D.D.C. July 3, 2013). When the defense learned of this study, and of plaintiffs’ counsel’s payments of over $300,000 to support it, they sought discovery of the raw data, study protocol, statistical analyses, and other materials from plaintiffs’ counsel. Plaintiffs’ counsel protested that they did not have all the materials, and directed defense counsel to Sarfez. Although other courts have made counsel produce similar materials from the scientists and independent contractors they engaged, in this case, defense counsel followed the trail of documents to the contractor, Sarfez, with subpoenas in hand. Id. at *3-4.

The defense served a Rule 45 subpoena on Sarfez, which produced some, but not all, responsive documents. Procter & Gamble pressed for the missing materials, including study protocols, analytical reports, and raw data. Id. at *12-13. Judge Reggie Walton upheld the subpoena, which sought underlying data and non-privileged correspondence, as within the scope of Rules 26(b) and 45, and not unduly burdensome. Id. at *9-10, *20. Sarfez attempted to argue that the requested materials, listed as email attachments, might not exist, but Judge Walton branded the suggestion “disingenuous.” Attachments to emails should be produced along with the emails. Id. at *12 (citing and collecting cases). Although Judge Walton did not grant a request for forensic recovery of hard-drive data or for sanctions, His Honor warned Sarfez that it might be required to bear the cost of forensic data recovery if it did not comply with the court’s order. Id. at *15, *22.

Plaintiffs Put Their Study Into Play

The study at issue in the subpoena was designed by Frederick K. Askari, M.D., Ph.D., an associate professor of hepatology, in the University of Michigan Health System. In re Denture Cream Prods. Liab. Litig., No. 09–2051–MD, 2015 WL 392021, at *7 (S.D. Fla. Jan. 28, 2015). At the instruction of plaintiffs’ counsel, Dr. Askari sought to study the short-term effects of Fixodent on copper absorption in humans. Working in India, Askari conducted the study on 24 participants, who were given a controlled diet for 36 days. Of the 24 participants, 12, randomly selected, received 12 grams of Fixodent per day (containing 204 mg. of zinc). Another six participants, randomly selected, were given zinc acetate, three times per day (150 mg of zinc), and the remaining six participants received placebo, three times per day.

A study protocol was approved by an independent group[3], id. at *9, and the study was supposed to be conducted with a double blind. Id. at *7. Not surprisingly, those participants who received doses of Fixodent or zinc acetate had higher urinary levels of zinc (p < 0.05). The important issue, however, was whether the dietary zinc levels affected copper excretion in a way that would support plaintiffs’ claims that copper levels were lowered sufficiently by Fixodent to cause a syndromic neurological disorder. The MDL Court ultimately concluded that plaintiffs’ expert witnesses’ opinions on general causation were not sufficiently supported to satisfy the requirements of Rule 702, and upheld defense challenges to those expert witnesses. In doing so, the MDL Court had much of interest to say about case reports, weight of the evidence, and other important issues. This post, however, concentrates on the deviations of one study, commissioned by plaintiffs’ counsel, from the scientific standard of care. The Askari “research” makes for a fascinating case study of how not to conduct a study in a litigation caldron.

Non-Standard Deviations

The First Deviation – Changing the Ascertainment Period After the Data Are Collected

The protocol apparently identified a primary endpoint to be:

“the mean increase in [copper 65] excretion in fecal matter above the baseline (mg/day) averaged over the study period … to test the hypothesis that the release of [zinc] either from Fixodent or Zinc Acetate impairs [copper 65] absorption as measured in feces.”

The study outcome, on the primary end point, was clear. The plaintiffs’ testifying statistician, Hongkun Wang, stated in her deposition that the fecal copper (whether isotope Cu63 or Cu65) was not different across the three groups (Fixodent, zinc acetate, and placebo). Id. at *9[4]. Even Dr. Askari himself admitted that the total fecal copper levels were not increased in the Fixodent group compared with the placebo control group. Id. at *9.[5]

Apparently after obtaining the data, and finding no difference in the pre-specified end point of average fecal copper levels between Fixodent and placebo groups, Askari turned to a new end point, measured in a different way, not described in the protocol as the primary end point.

The Second Deviation – Changing Primary End Point After the Data Are Collected

In the early (days 3, 4, and 5) and late (days 31, 32, and 33) parts of the Study, participants received a dose of purified copper 65[6] to help detect the “blockade of copper.” Id. at *8. The participants’ fecal copper 65 levels were compared to their naturally occurring copper 63 levels. According to Dr. Askari:

“if copper is being blocked in the Fixodent and zinc acetate test subjects from exposure to the zinc in the test product (Fixodent) and positive control (zinc acetate), the ratio of their fecal output of copper 65 as compared to their fecal output of copper 63 would increase relative to the control subjects, who were not dosed with zinc. In short, a higher ratio of copper 65 to copper 63 reflects blocking of copper.”

Id.

Askari analyzed the ratio of the two copper isotopes (Cu65/Cu63), limiting the period of observation to study days 31 to 33. Id. at *9. Askari thus changed the outcome to be measured, the timing of the measurement, and the manner of measurement (average over the entire period versus amount on days 31 to 33). On this post hoc, non-prespecified end point, Askari claimed to have found “significant” differences.
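The statistical vice in this sort of after-the-fact end point selection can be quantified. The sketch below is a hypothetical illustration (the function name and the choice of numbers are mine, not anything from the Askari study): if an analyst is free to pick whichever of k candidate end points happens to cross the conventional 5 percent threshold, the chance of at least one “significant” finding arising by chance alone is 1 − 0.95^k, which grows quickly with k.

```python
# Hypothetical sketch, not an analysis of the study's data: the family-wise
# false-positive probability when k independent end points, each truly null,
# are tested at significance level alpha.

def chance_of_false_positive(k: int, alpha: float = 0.05) -> float:
    """Probability that at least one of k independent tests of true null
    hypotheses yields p < alpha by chance alone."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:>2} end points -> {chance_of_false_positive(k):.3f}")
# With 10 candidate end points, the chance of a spurious "significant"
# result is roughly 40 percent, not 5 percent.
```

This is why pre-specification of a primary end point matters: the nominal p-value reported for a post hoc end point understates the true probability of a chance finding.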

The MDL Court expressed its skepticism and concern over the difference between the protocol’s specified end point, and one that came into the study only after the data were obtained and analyzed. The plaintiffs claimed that it was their (and Askari’s) intention from the initial stages of designing the Fixodent Blockade Study to use the Cu65/Cu63 ratio as the primary end point. According to the plaintiffs, the isotope ratio was simply better articulated and “clarified” as the primary end point in the final report than it was in the protocol. The Court was not amused or assuaged by the plaintiffs’ assurances. The study sponsor, Dr. Salim Shah, could not point to a draft protocol that indicated the isotope ratio as the end point; nor could Dr. Shah identify a request for this analysis by Wang until after the study was concluded. Id. at *9.[7]

Ultimately, the Court declared that whether the protocol was changed post hoc, after the primary end point yielded a disappointing analysis, or the isotope ratio was carelessly omitted from the protocol, the design or conduct of the study was “incompatible with reliable scientific methodology.”

The Third Deviation – Changing the Standard of “Significance” After the Data Are Collected and P-Values Are Computed

The protocol for the Blockade study called for a pre-determined Type I error rate (p-value) of no more than 5 percent.[8] Id. at *10. The difference in the isotope ratio showed an attained level of significance probability of 5.7 percent, and thus even the post hoc end point missed the prespecified level of significance. The final protocol changed the value of “significance” to 10 percent, to permit the plaintiffs to declare a “statistically significant” result. Dr. Wang admitted in deposition that she doubled the acceptable level of Type I error only after she obtained the data and calculated the p-value of 0.057. Id. at *10.[9]
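The arithmetic consequence of that change is mechanical. As a rough sketch (a hypothetical simulation using only the standard library, not the study’s data), under a true null hypothesis a well-behaved p-value is uniformly distributed on [0, 1], so moving the cutoff from 0.05 to 0.10 after seeing p = 0.057 simply doubles the long-run rate at which chance findings get declared “significant”:

```python
import random

# Hypothetical simulation, not the study's data: under the null, p-values
# are uniform, so the fraction falling below the cutoff equals the cutoff.

def false_positive_rate(alpha: float, trials: int = 100_000) -> float:
    """Fraction of simulated null p-values that fall below alpha."""
    rng = random.Random(42)  # fixed seed for a reproducible sketch
    return sum(rng.random() < alpha for _ in range(trials)) / trials

print(false_positive_rate(0.05))  # close to 0.05
print(false_positive_rate(0.10))  # close to 0.10
```

Choosing the cutoff after computing the p-value thus guarantees that any result between 0.05 and 0.10 can be rescued, which is precisely the contrivance the Court condemned.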

The Court found that this deliberate moving of the statistical goal post reflected a “lack of objectivity and reliability,” which smacked of contrivance[10].

The Court found that the study’s deviations from the protocol demonstrated a lack of objectivity. The inadequacy of the Study’s statistical analysis plan supported the Court’s conclusion that Dr. Askari’s supposed finding of a “statistically significant” difference in fecal copper isotope ratio between Fixodent and placebo group participants was “not based on sufficiently reliable and objective scientific methodology” and thus could not support plaintiffs’ expert witnesses’ general causation claims.

The Fourth Deviation – Failing to Take Steps to Preserve the Blind

The protocol called for a double-blinded study, with neither the participants nor the clinical investigators knowing which participant was in which group. Rather than delivering similar-looking capsules to the three different groups, however, each group received starkly different-looking capsules. Id. at *11. The capsules for one set were apparently so large that the investigators worried whether the participants would comply with the dosing regimen.

The Fifth Deviation – Failing to Take Steps to Keep Biological Samples From Becoming Contaminated

Documents and emails from Dr. Shah acknowledged that there had been “difficulties in storing samples at appropriate temperature.” Id. at *11. Fecal samples were “exposed to unfrozen and undesirable temperature conditions.” Dr. Shah called for remedial steps from the Study manager, but there was no documentation that such steps were taken to correct the problem. Id.

The Consequences of Discrediting the Study

Dr. Askari opined that the Study, along with other evidence, shows that Fixodent can cause copper deficiency myeloneuropathy (“CDM”). The plaintiffs, of course, argued that the Defendants’ criticisms of the Fixodent Study’s methodology went merely to the “weight rather than admissibility.” Id. at *9. Askari’s study was but one leg of the stool, but the defense’s thorough discrediting of the study was an important step in collapsing the support for the plaintiffs’ claims. As the MDL Court explained:

“The Court cannot turn a blind eye to the myriad, serious methodological flaws in the Fixodent Blockade Study and conclude they go to weight rather than admissibility. While some of these flaws, on their own, may not be serious enough to justify exclusion of the Fixodent Blockade Study; taken together, the Court finds Fixodent Blockade Study is not “good science,” and is not admissible. Daubert, 509 U.S. at 593 (internal quotation marks and citation omitted).”

Id. at *11.

A study, such as the Fixodent Blockade Study, is not itself admissible, but the deconstruction of the study upon which plaintiffs’ expert witnesses relied led directly to the Court’s decision to exclude those witnesses. The Court omitted any reference to Federal Rule of Evidence 703, which addresses the requirements for facts and data, otherwise inadmissible, upon which expert witnesses may rely in reaching their opinions.


 

[1] See “Philadelphia Plaintiff’s Claims Against Fixodent Prove Toothless” (May 2, 2012); Jacoby v. Rite Aid Corp., 2012 Phila. Ct. Com. Pl. LEXIS 208 (2012), aff’d, 93 A.3d 503 (Pa. Super. 2013); “Pennsylvania Superior Court Takes The Bite Out of Fixodent Claims” (Dec. 12, 2013).

[2] See “Using the Rule 45 Subpoena to Obtain Research Data” (July 24, 2013).

[3] The group was identified as the Ethica Norma Ethical Committee.

[4] Citing Wang Dep. at 56:7–25 (Aug. 13, 2013), and Wang Analysis of Fixodent Blockade Study [ECF No. 2197–56] (noting “no clear treatment effect on Cu63 or Cu65”).

[5] Askari Dep. at 69:21–24, June 20, 2013.

[6] Copper 65 is not a typical tracer; it is not radioactive. Naturally occurring copper consists almost exclusively of two stable (non-radioactive) isotopes: Cu65, about 31 percent, and Cu63, about 69 percent. See, e.g., Manuel Olivares, Bo Lönnerdal, Steve A Abrams, Fernando Pizarro, and Ricardo Uauy, “Age and copper intake do not affect copper absorption, measured with the use of 65Cu as a tracer, in young infants,” 76 Am. J. Clin. Nutr. 641 (2002); T.D. Lyon, et al., “Use of a stable copper isotope (65Cu) in the differential diagnosis of Wilson’s disease,” 88 Clin. Sci. 727 (1995).

[7] Shah Dep. at 87:12–25; 476:2–536:12; 138:6–142:12 (June 5, 2013).

[8] The reported decision leaves unclear how the analysis would proceed, whether by ANOVA for the three groups, or t-tests, and whether there was multiple testing.

[9] Wang Dep. at 151:13–152:7; 153:15–18.

[10] 2015 WL 392021, at *10, citing Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (“A scientist who has a formed opinion as to the answer he is going to find before he even begins his research may be less objective than he needs to be in order to produce reliable scientific results.”); Rink v. Cheminova, Inc., 400 F.3d 1286, 1293 n. 7 (11th Cir.2005) (“In evaluating the reliability of an expert’s method … a district court may properly consider whether the expert’s methodology has been contrived to reach a particular result.” (alteration added)).

 

Zoloft MDL Relieves Matrixx Depression

January 30th, 2015

When the Supreme Court delivered its decision in Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011), a colleague, David Venderbush of Alston & Bird LLP, and I wrote a Washington Legal Foundation Legal Backgrounder, in which we predicted that plaintiffs’ counsel would distort the holding, and inflate the dicta, of the opinion. Schachtman & Venderbush, “Matrixx Unbounded: High Court’s Ruling Needlessly Complicates Scientific Evidence Principles,” 26 (14) Legal Backgrounder (June 17, 2011)[1]. Our prediction was sadly all too accurate. Not only was the context of Matrixx distorted, but several district courts appeared to adopt its dicta on statistical significance as though they represented the holding of the case[2].

The Matrixx decision, along with the few district court opinions that had embraced its dicta[3], was urged as the basis for denying a defense challenge to the proffered testimony of Dr. Anick Bérard, a Canadian perinatal epidemiologist, in the Zoloft MDL. The trial court, however, correctly discerned several methodological shortcomings and failures, including Dr. Bérard’s reliance upon claims of statistical significance from studies that conducted dozens and hundreds of multiple comparisons. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Plaintiffs (through their Plaintiffs’ Steering Committee (PSC) in the Zoloft MDL) were undaunted and moved for reconsideration, asserting that the MDL trial court had failed to give appropriate weight to the Supreme Court’s decision in Matrixx, and a Third Circuit decision in DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941 (3d Cir. 1990). The MDL trial judge, however, deftly rebuffed the plaintiffs’ use of Matrixx, and their attempt to banish consideration of random error in the interpretation of epidemiologic studies. In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2015 WL 314149 (E.D. Pa. Jan. 23, 2015) (Rufe, J.) (denying PSC’s motion for reconsideration).

In rejecting the motion for reconsideration, the Zoloft MDL trial judge noted that the PSC had previously cited Matrixx, and that the Court had addressed the case in its earlier ruling. 2015 WL 314149, at *2-3. The MDL Court then proceeded to expand upon its earlier ruling, and to explain how Matrixx was largely irrelevant to the Rule 702 context of Pfizer’s challenge to Dr. Bérard. There were, to be sure, some studies with nominal statistically significant results, for some birth defects, among children of mothers who took Zoloft in their first trimester of pregnancy. As Judge Rufe explained, statistical significance, or the lack thereof, was only one item in a fairly long list of methodological deficiencies in Dr. Bérard’s causation opinions:

“The [original] opinion set forth a detailed and multi-faceted rationale for finding Dr. Bérard’s testimony unreliable, including her inattention to the principles of replication and statistical significance, her use of certain principles and methods without demonstrating either that they are recognized by her scientific community or that they should otherwise be considered scientifically valid, the unreliability of conclusions drawn without adequate hypothesis testing, the unreliability of opinions supported by a ‛cherry-picked’ sub-set of research selected because it was supportive of her opinions (without adequately addressing non-supportive findings), and Dr. Bérard’s failure to reconcile her currently expressed opinions with her prior opinions and her published, peer-reviewed research. Taking into account all these factors, as well as others discussed in the Opinion, the Court found that Dr. Bérard departed from well-established epidemiological principles and methods, and that her opinion on human causation must be excluded.”

Id. at *1.

In citing the multiple deficiencies of the proffered expert witness, the Zoloft MDL Court thus put its decision well within the scope of the Third Circuit’s recent precedent of affirming the exclusion of Dr. Bennet Omalu, in Pritchard v. Dow Agro Sciences, 430 F. App’x 102, 104 (3d Cir. 2011). The Zoloft MDL Court further defended its ruling by pointing out that it had not created a legal standard requiring statistical significance, but rather had made a factual finding that epidemiologists, such as the challenged witness, Dr. Anick Bérard, would use some measure of statistical significance in reaching conclusions in the discipline of epidemiology. 2015 WL 314149, at *2[4].

On the plaintiffs’ motion for reconsideration, the Zoloft Court revisited the Matrixx case, properly distinguishing the case as a securities fraud case about materiality of non-disclosed information, not about causation. 2015 WL 314149, at *4. Although the MDL Court could and should have identified the Matrixx language as clearly obiter dicta, it did confidently distinguish the Supreme Court holding about pleading materiality from its own task of gatekeeping expert witness testimony on causation in a products liability case:

“Because the facts and procedural posture of the Zoloft MDL are so dissimilar from those presented in Matrixx, this Court reviewed but did not rely upon Matrixx in reaching its decision regarding Dr. Bérard. However, even accepting the PSC’s interpretation of Matrixx, the Court’s Opinion is consistent with that ruling, as the Court reviewed Dr. Bérard’s methodology as a whole, and did not apply a bright-line rule requiring statistically significant findings.”

Id. at *4.

In mounting their challenge to the MDL Court’s earlier ruling, the Zoloft plaintiffs asserted that the Court had failed to credit Dr. Bérard’s reliance upon what Dr. Bérard called the “Rothman approach.” This approach, attributed to Professor Kenneth Rothman, had received some attention in the Bendectin litigation in the Third Circuit, where plaintiffs sought to be excused from their failure to show statistically significant associations when claiming causation between maternal use of Bendectin and infant birth defects. DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941 (3d Cir. 1990). The Zoloft MDL Court pointed out that the Circuit, in DeLuca, had never affirmatively endorsed Professor Rothman’s “approach,” but had reversed and remanded the Bendectin case to the district court for a hearing under Rule 702:

“by directing such an overall evaluation, however, we do not mean to reject at this point Merrell Dow’s contention that a showing of a .05 level of statistical significance should be a threshold requirement for any statistical analysis concluding that Bendectin is a teratogen regardless of the presence of other indicia of reliability. That contention will need to be addressed on remand. The root issue it poses is what risk of what type of error the judicial system is willing to tolerate. This is not an easy issue to resolve and one possible resolution is a conclusion that the system should not tolerate any expert opinion rooted in statistical analysis where the results of the underlying studies are not significant at a .05 level.”

2015 WL 314149, at *4 (quoting from DeLuca, 911 F.2d at 955). After remand, the district court excluded the DeLuca plaintiffs’ expert witnesses, and granted summary judgment, based upon the dubious methods employed by plaintiffs’ expert witnesses in cherry-picking data, recalculating risk ratios in published studies, and ignoring bias and confounding in studies. The Third Circuit affirmed the judgment for Merrell Dow. DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042 (D.N.J. 1992), aff’d, 6 F.3d 778 (3d Cir. 1993).

In the Zoloft MDL, the plaintiffs not only offered an erroneous interpretation of the Third Circuit’s precedents in DeLuca; they also failed to show that the “Rothman” approach had become generally accepted in the more than two decades since DeLuca. 2015 WL 314149, at *4. Indeed, the hearing record was quite muddled about what the “Rothman” approach involved, other than glib, vague suggestions that the approach would have countenanced Dr. Bérard’s selective, over-reaching analysis of the extant epidemiologic studies. The plaintiffs did not call Rothman as an expert witness; nor did they offer any of Rothman’s publications as exhibits at the Zoloft hearing. Although Professor Rothman has criticized the overemphasis upon p-values and significance testing, he has never suggested that researchers and scientists should ignore random error in interpreting research data. Nevertheless, plaintiffs attempted to invoke some vague notion of a Rothman approach that would ignore confidence intervals, attained significance probability, multiplicity, bias, and confounding. Ultimately, the MDL Court would have none of it. The Court held that the Rothman approach (whatever it may be), as applied by Dr. Bérard, did not satisfy Rule 702.

The testimony at the Rule 702 hearing on the so-called “Rothman approach” had been sketchy at best. Dr. Bérard protested, perhaps too much, when asked about her having ignored p-values:

“I’m not the only one saying that. It’s really the evolution of the thinking of the importance of statistical significance. One of my professors and also a friend of mine at Harvard, Ken Rothman, actually wrote on it – wrote on the topic. And in his book at the end he says obviously what I just said, validity should not be confused with precision, but the third bullet point, it’s saying that the lack of statistical significance does not invalidate results because sometimes you are in the context of rare events, few cases, few exposed cases, small sample size, exactly – you know even if you start with hundreds of thousands of pregnancies because you are looking at rare events and if you want to stratify by exposure category, well your stratum becomes smaller and smaller and your precision decreases. I’m not the only one saying that. Ken Rothman says it as well, so I’m not different from the others. And if you look at many of the studies published nowadays, they also discuss that as well.”

Notes of Testimony of Dr. Anick Bérard, at 76:21- 77:14 (April 9, 2014). See also Notes of Testimony of Dr. Anick Bérard, at 211 (April 11, 2014) (discussing non-statistically significant findings as a “trend,” and asserting that the lack of a significant finding does not mean that there is “no effect”). Bérard’s invocation of Rothman here is accurate but unhelpful. Rothman and Bérard are not alone in insisting that confidence intervals provide a measure of precision of an estimate, and that we should be careful not to interpret the lack of significance to mean no effect. But the lack of significance cannot be used to interpret data to show an effect.
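Dr. Bérard’s point about precision, as far as it goes, is simple arithmetic. A small illustration (in Python, with hypothetical counts of my own invention, not data from any Zoloft study) shows how a shrinking stratum widens a risk ratio’s 95% confidence interval without moving the point estimate:

```python
import math

def risk_ratio_ci(a, n1, b, n0, z=1.96):
    """Risk ratio and 95% confidence interval (Katz log method).
    a of n1 exposed subjects are cases; b of n0 unexposed subjects are cases."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)  # standard error of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Identical point estimate (RR = 2.0); only the stratum size changes.
print(risk_ratio_ci(40, 1000, 20, 1000))  # large stratum: interval excludes 1
print(risk_ratio_ci(4, 100, 2, 100))      # small stratum: interval crosses 1
```

The wide interval in the second case measures imprecision; it does not convert a statistically non-significant estimate into affirmative evidence of an effect.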

At the Rule 702 hearing, the PSC tried to bolster Dr. Bérard’s supposed reliance upon the “Rothman approach” in cross-examining Pfizer’s expert witness, Dr. Stephen Kimmel:

“Q. You know who Dr. Rothman is, the epidemiologist?
A. Yes.
Q. You actually took a course from Dr. Rothman, didn’t you?
A. I did when I was a student way back.
Q. He is a well-known epidemiologist, isn’t he?
A. Yes, he is.
Q. He has published this book, Modern Epidemiology. Do you have a copy of this?
A. I do.
Q. Do you – Have you ever read it?
A. I read his earlier edition. I have not read the most recent edition.
Q. There’s two other authors, Sander Greenland and Tim Lash. Do you know either one of them?
A. I know Sander. I don’t know Tim.
Q. Dr. Rothman has some – he has written about confidence intervals and statistical significance for some time, hasn’t he?
A. He has.
Q. Do you agree with him that statistical significance is a not matter of validity. It’s a matter of precision?
A. It’s a matter of – well, confidence intervals are matters of precision. P-values are not.
Q. Okay. I want to put up a table and see if you are in agreement with Dr. Rothman. This is the third edition of Modern Epidemiology. And he has – and ignore my brother’s handwriting. But there is an hypothesized rate ratio under 10-3. It says: p-value function from which one can find all confidence limits for a hypothetical study with a rate ratio estimate of 3.1 Do you see that there?
A. Yes. I don’t see the top of the figure, not that it matters.
Q. I want to make sure. The way I understand this, he is giving us a hypothesis that we have a relative risk of 3.1 and it [presumably a 95% confidence interval] crosses 1, meaning it’s not statistically significant. Is that fair?
A. Well, if you are using a value of .05, yes. And again, if this is a single test and there’s a lot of things that go behind it. But, yes, so this is a total hypothetical.
Q. Yes.
A. I’m sorry. He’s saying here is a hypothetical based on math. And so here is – this is what we would propose.
Q. Yes, I want to highlight what he says about this figure and get your thoughts on it. He says:
The message of figure 10-3 is that the example data are more compatible with a moderate to strong association than with no association, assuming the statistical model used to construct the function is correct.
A. Yes.
Q. Would you agree with that statement?
A. Assuming the statistical model is correct. And the problem is, this is a hypothetical.
Q. Sure. So let’s just assume. So what this means to sort of put some meat on the bone, this means that although we cross 1 and therefore are statistically
significant [sic, non-significant], he says the more likely truth here is that there is a moderate to strong effect rather than no effect?
A. Well, you know he has hypothesized this. This is not used in common methods practice in pharmacoepi. Dr. Rothman has lots of ideas but it’s not part of our standard scientific method.

Notes of Testimony of Dr. Stephen Kimmel, at 126:2 to 128:20.

Nothing very concrete about the “Rothman approach” was put before the MDL Court, either through Dr. Bérard or Dr. Kimmel. There are, however, other instructive aspects to plaintiffs’ counsel’s examination. First, the referenced portion of the text, Modern Epidemiology, is a discussion of p-value functions, not of p-values or of confidence intervals per se. Modern Epidemiology at 158-59 (3d ed. 2008). Dr. Bérard never discussed p-value functions in her report or in her testimony, and Dr. Kimmel testified, without contradiction, that such p-value functions are “not used in common methods practice.” Second, plaintiffs’ counsel never marked and offered the Rothman text as an exhibit for the MDL Court to consider. Third, the cross-examiner first asked about the implication for a hypothetical association, and then, when he wanted to “put some meat on the bone,” changed the word used in Rothman’s text, “association,” to “effect.” The word “effect” does not appear in Rothman’s text in the referenced discussion of p-value functions. Fortunately, the MDL Court was not poisoned by the “meat on the bone.”
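For readers curious what the referenced discussion actually involves, here is a minimal sketch of a p-value function (in Python; the 3.1 estimate echoes the hypothetical in the testimony, while the standard error is my own assumption). For each hypothesized rate ratio, the function reports the two-sided p-value the observed estimate would generate; the curve peaks at the point estimate, and the 95% confidence limits sit where it crosses 0.05:

```python
import math

def p_value_function(estimate, se_log, hypotheses):
    """Two-sided p-value for each hypothesized rate ratio, given a point
    estimate and a standard error on the log scale (normal approximation)."""
    def norm_sf(z):  # upper-tail probability of the standard normal
        return 0.5 * math.erfc(z / math.sqrt(2))
    return {rr0: 2 * norm_sf(abs(math.log(estimate) - math.log(rr0)) / se_log)
            for rr0 in hypotheses}

# Hypothetical study: rate ratio estimate of 3.1, imprecise enough that
# the 95% confidence interval crosses 1.
pvals = p_value_function(3.1, 0.60, [1.0, 2.0, 3.1, 6.0, 10.0])
for rr0, p in pvals.items():
    print(f"hypothesized RR = {rr0:>4}: p = {p:.3f}")
```

On these assumed numbers, the null (RR = 1) yields a p-value just above 0.05, while moderate-to-strong hypotheses yield much larger p-values, which is all Rothman’s “message” about compatibility amounts to. Nothing in the exercise licenses ignoring bias, confounding, or multiplicity.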

The Pit and the Pendulum

Another document glibly referenced but not provided to the MDL Court was the publication of Sir Austin Bradford Hill’s presidential address to the Royal Society of Medicine on causation. The MDL Court acknowledged that the PSC had argued that the emphasis upon statistical significance was contrary to Hill’s work and teaching. 2015 WL 314149, at *5. In the Court’s words:

“the PSC argues that the Court’s finding regarding the importance of statistical significance in the field of epidemiology is inconsistent with the work of Bradford Hill. The PSC points to a 1965 address by Sir Austin Bradford Hill, which it has not previously presented to the Court, except in opening statements of the Daubert hearings.20 The PSC failed to put forth evidence establishing that Bradford Hill’s statement that ‛I wonder whether the pendulum has not swung too far [in requiring statistical significance before drawing conclusions]’ has, in the decades since that 1965 address, altered the importance of statistical significance to scientists in the field of epidemiology.”

Id. This failure, identified by the Court, is hardly surprising. The snippet of a quotation from Hill would not sustain the plaintiffs’ sweeping generalization. The quoted language in context may help to explain why Hill’s paper was not provided:

“I wonder whether the pendulum has not swung too far – not only with the attentive pupils but even with the statisticians themselves. To decline to draw conclusions without standard errors can surely be just as silly? Fortunately I believe we have not yet gone so far as our friends in the USA where, I am told, some editors of journals will return an article because tests of significance have not been applied. Yet there are innumerable situations in which they are totally unnecessary – because the difference is grotesquely obvious, because it is negligible, or because, whether it be formally significant or not, it is too small to be of any practical importance. What is worse the glitter of the t table diverts attention from the inadequacies of the fare. Only a tithe, and an unknown tithe, of the factory personnel volunteer for some procedure or interview, 20% of patients treated in some particular way are lost to sight, 30% of a randomly-drawn sample are never contacted. The sample may, indeed, be akin to that of the man who, according to Swift, ‘had a mind to sell his house and carried a piece of brick in his pocket, which he showed as a pattern to encourage purchasers.’ The writer, the editor and the reader are unmoved. The magic formulae are there.”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 299 (1965).

In the Zoloft cases, no expert witness was prepared to state that the disparity was “grotesquely obvious” or “negligible.” Bradford Hill’s larger point was that bias and confounding often dwarf considerations of random error, and that there are many instances in which significance testing is unavailing or unhelpful. And in some studies, with large “effect sizes,” statistical significance testing may be beside the point.

Hill’s presidential address to the Royal Society of Medicine commemorated his successes in epidemiology, and we need only turn to Hill’s own work to see how prevalent was his use of measurements of significance probability. See, e.g., Richard Doll & Austin Bradford Hill, “Smoking and Carcinoma of the Lung: Preliminary Report,” Brit. Med. J. 740 (Sept. 30, 1950); Medical Research Council, “Streptomycin Treatment of Pulmonary Tuberculosis,” Brit. Med. J. 769 (Oct. 30, 1948).

Considering the misdirection on Rothman and on Hill, the Zoloft MDL Court did an admirable job in unraveling the Matrixx trap set by counsel. The Court insisted upon parsing the Bradford Hill factors[5], over Pfizer’s objection, despite the plaintiffs’ failure to show “an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” which Bradford Hill insisted was the prerequisite for the exploration of the nine factors he set out in his classic paper. Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). Given the outcome, the Court’s questionable indulgence of plaintiffs’ position was ultimately harmless.


[1] See alsoThe Matrixx – A Comedy of Errors,” and “Matrixx Unloaded,” (Mar. 29, 2011), “The Matrixx Oversold,” and “De-Zincing the Matrixx.”

[2] SeeSiracusano Dicta Infects Daubert Decisions” (Sept. 22, 2012).

[3] See, e.g., In re Chantix (Varenicline) Prods. Liab. Litig., 2012 U.S. Dist. LEXIS 130144, at *22 (N.D. Ala. 2012); Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012); In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780 (E.D. Mo. 2013).

[4] The Court’s reasoning on this point begged the question whether an ordinary clinician, ignorant of the standards, requirements, and niceties of statistical reasoning and inference, would be allowed to testify, unconstrained by any principled epidemiologic reasoning about random or systematic error. It is hard to imagine that Rule 702 would countenance such an end-run around the requirements of sound science.

[5] Adhering to Bradford Hill’s own admonition might have saved the Court the confusion of describing statistical significance as a measure of strength of association. 2015 WL 314149, at *2.

The Lie Detector and Wonder Woman – Quirks and Quacks of Legal History

January 27th, 2015

From 1923, until the United States Supreme Court decided the Daubert case in 1993, Frye was cited as “controlling authority” on questions of the admissibility of scientific opinion testimony and test results. The decision is infuriatingly cryptic and unhelpful about the background and context of the specific case, as well as about how it might be applied to future controversies. Of the opinion’s 669 words, these are the ones typically cited as the guiding “rule” with respect to expert witness opinion testimony:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”

Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923).

As most scholars of evidence realize, the back story of the Frye case is rich and bizarre. The expert witness involved, William Marston, was a lawyer and scientist who had developed a systolic blood pressure test to be used as a “lie detector.” Marston was also an advocate of free love and, with his wife and his mistress, the creator of Wonder Woman and her lasso of truth.

Jill Lepore, a professor of history at Harvard University, has written a historical account of Marston and his colleagues. Jill Lepore, The Secret History of Wonder Woman (N.Y. 2014). More recently, Lepore has written an important law review article on the historical and legal record of the Frye case, which lies concealed in the terse 669 words of the Court of Appeals’ opinion. Jill Lepore, “On Evidence: Proving Frye as a Matter of Law, Science, and History,” 124 Yale L.J. 1092 (2015).

Lepore’s history is an important gloss on the Frye case, but her paper points to a larger, more prevalent, chronic problem in the law, one that especially afflicts judicial decisions on scientific or technical issues. As an historian, Lepore is troubled, as we all should be, by the censoring, selecting, suppressing, and distorting of facts that go into judicial decisions. From cases and their holdings, lawyers are taught to infer rules that guide their conduct at the bar, and their clients’ conduct and expectations, but everyone requires fair access to the evidence to determine what facts are material to the decision.

As Professor Lepore puts it:

“Marston is missing from Frye because the law of evidence, case law, the case method, and the conventions of legal scholarship — together, and relentlessly — hide facts.”

Id. at 1097. Generalizing from Marston and the Frye case, Lepore notes that:

“Case law is like that, too, except that it doesn’t only fail to notice details; it conceals them.”

Id. at 1099.

Lepore documents that Marston’s psychological research was rife with cherry picking and data dredging. Id. at 1113-14. Despite his degree magna cum laude in philosophy from Harvard College, his LL.B. from Harvard Law School (with no particular distinction), and his Ph.D. from Harvard University, Marston was not a rigorous scientist. In exploring the historical and legal record not recounted in the Frye decision, Lepore provides a wonderful early example of what has become a familiar phenomenon of modern litigation: an expert witness who seeks to achieve acceptance for a dubious opinion or device in the courtroom rather than in the court of scientific opinion. Id. at 1122. The trial judge in Frye’s murder case, Justice McCoy, was an astute judge, and quite modest about his ability to evaluate the validity of Marston’s opinions, but he had more than sufficient perspicacity to discern that Marston’s investigation was “wildly unscientific,” with no control groups. Id. at 1135. The trial record of defense counsel’s proffer, and Justice McCoy’s rulings and comments from the bench, reproduced in Lepore’s article, anticipate much of the scholarship surrounding both the Frye and Daubert cases.

Lepore complains that the important historical record, including Marston’s correspondence with Professor Wigmore, the criminal fraud charges against Marston, and the correspondence of Frye’s lawyers, lies “filed, undigitized” in various archives. Id. at 1150. Although Professor Lepore tirelessly cites to internet URL sources when available, she could have easily made the primary historical materials available for all, using readily available internet technology. Lepore’s main thesis should encourage lawyers and law scholars to look beyond appellate decisions as the data for legal analysis.

The Rhetoric of Playing Dumb on Statistical Significance – Further Comments on Oreskes

January 17th, 2015

As a matter of policy, I leave the comment field turned off on this blog. I don’t have the time or patience to moderate discussions, but that is not to say that I don’t value feedback. Many readers have written, with compliments, concurrences, criticisms, and corrections. Some correspondents have given me valuable suggestions and materials. I believe I can say that aside from a few scurrilous emails, the feedback generally has been constructive, and welcomed.

My last post was on Naomi Oreskes’ opinion piece in the Sunday New York Times[1]. Professor Deborah Mayo asked me for permission to re-post the substance of this post, and to link to the original[2]. Mayo’s blog does allow for comments, and much to my surprise, the posts drew a great deal of attention, links, comment, and twittering. The number and intensity of the comments, as well as the other blog posts and tweets, seemed out of proportion to the point I was trying to make about misinterpreting confidence intervals and other statistical concepts. I suspect that some climate skeptics received my criticisms of Oreskes with a degree of schadenfreude, and that some who criticized me did so because they fear any challenge to Oreskes as a climate-change advocate. So be it. As I made clear in my post, I was not seeking to engage Oreskes on climate change or her judgments on that issue. What I saw in Oreskes’ article was the same rhetorical move made in the courtroom, and in scientific publications, in which plaintiffs and environmentalists attempt to claim a scientific imprimatur for their conclusions without adhering to the rigor required for scientific judgments[3].

Some of the comments about Professor Oreskes caused me to take a look at her recent book, Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (N.Y. 2010). Interestingly, much of the substance of Oreskes’ newspaper article comes directly from this book. In the context of reporting on the dispute over the EPA’s meta-analysis of studies on passive smoking and lung cancer, Oreskes addressed the 95 percent issue:

“There’s nothing magic about 95 percent. It could be 80 percent. It could be 51 percent. In Vegas if you play a game with 51 percent odds in your favor, you’ll still come out ahead if you play long enough. The 95 percent confidence level is a social convention, a value judgment. And the value it reflects is one that says that the worst mistake a scientist can make is to fool herself: to think an effect is real when it is not. Statisticians call this a type I error. You can think of it as being gullible, naive, or having undue faith in your own ideas.89 To avoid it, scientists place the burden of proof on the person claiming a cause and effect. But there’s another kind of error-type 2-where you miss effects that are really there. You can think of that as being excessively skeptical or overly cautious. Conventional statistics is set up to be skeptical and avoid type I errors. The 95 percent confidence standard means that there is only 1 chance in 20 that you believe something that isn’t true. That is a very high bar. It reflects a scientific worldview in which skepticism is a virtue, credulity is not.90 As one Web site puts it, ‘A type I error is often considered to be more serious, and therefore more important to avoid, than a type II error’.91 In fact, some statisticians claim that type 2 errors aren’t really errors at all,  just missed opportunities.92

Id. at 156-57 (emphasis added). Oreskes’ rendering of the confidence level in her book adds further ambiguity by never specifying what the “something” is that you might wrongly believe to be true. If it is the assumed parameter, then she has made the same error as she did in the Times. Oreskes’ further discussion of the EPA environmental tobacco smoke meta-analysis makes her meaning clearer, and her interpretation of statistical significance less defensible:

“Even if 90 percent is less stringent than 95 percent, it still means that there is a 9 in 10 chance that the observed results did not occur by chance. Think of it this way. If you were nine-tenths sure about a crossword puzzle answer, wouldn’t you write it in?94

Id.  Throughout her discussion, Oreskes fails to acknowledge that the p-value assumes the correctness of the null hypothesis in order to assess the strength of the specific data as evidence against the null. As I have pointed out elsewhere, this misinterpretation of significance testing is a rhetorical strategy to evade significance testing, as well as to obscure the role of bias and confounding in accounting for data that differs from an expected value.
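The inversion is easy to demonstrate with a toy simulation (Python, illustrative only), generating studies in which the null hypothesis is true by construction. Roughly 10 percent clear a 0.10 significance hurdle, because that is what the significance level controls: the long-run rate of false alarms assuming no effect, not the probability that any particular observed result is real:

```python
import math
import random

random.seed(1)  # deterministic, for reproducibility

def norm_sf(z):  # upper-tail probability of the standard normal
    return 0.5 * math.erfc(z / math.sqrt(2))

# 100,000 simulated studies in which the null is true by construction:
# each test statistic is nothing but standard normal noise.
trials = 100_000
pvals = [2 * norm_sf(abs(random.gauss(0, 1))) for _ in range(trials)]

# About 10% of these truly null studies are "significant" at the 0.10 level.
hits = sum(p < 0.10 for p in pvals) / trials
print(f"fraction of null studies with p < 0.10: {hits:.3f}")
```

A “9 in 10 chance that the observed results did not occur by chance” would be a posterior probability, which no significance level supplies without further assumptions about how often real effects are there to be found.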

Oreskes also continues to maintain that a failure to reject the null is playing “dumb” and placing:

“the burden of proof on the victim, rather than, for example, the manufacturer of a harmful product-and we may fail to protect some people who are really getting hurt.”

Id. So again, the same petitio principii as we saw in the Times. Victimhood is exactly what remains to be established. Oreskes cannot assume it, and then criticize time-tested methods that fail to deliver a confirmatory judgment.

There are endnotes in her book, but the authors fail to cite any serious statistics text. The only reference of even arguable relevance is a dubious one: Stephen T. Ziliak & Deirdre N. McCloskey, The Cult of Statistical Significance (Univ. Michigan 2008). Enough said[4].

With a little digging, I learned that Oreskes and Conway are science fiction writers, and perhaps we should judge them by literary rather than scientific standards. See Naomi Oreskes & Erik M. Conway, “The Collapse of Western Civilization: A View from the Future,” 142 Dædalus 41 (2013). I do not imply any pejorative judgment of Oreskes for advancing her apocalyptic vision of the future of Earth’s environment as a work of fiction. Her literary work is a worthy thought experiment that has the potential to lead us to accept her precautionary judgments; and at least her publication, in Dædalus, is clearly labeled science fiction.

Oreskes’ future fantasy is, not surprisingly, exactly what Oreskes, the historian of science, now predicts in terms of catastrophic environmental change. Looking back from the future, the science fiction authors attempt to explore the historical origins of the catastrophe, only to discover that it is the fault of everyone who disagreed with Naomi Oreskes in the early 21st century. Heavy blame is laid at the feet of the ancestor scientists (Oreskes’ contemporaries) who insisted upon scientific and statistical standards for inferring conclusions from observational data. Implicit in the science fiction tale is the welcome acknowledgment that science should make accurate predictions.

In Oreskes’ science fiction, these scientists of yesteryear, today’s adversaries of climate-change advocates, were “almost childlike” in their felt need to adopt “strict” standards, and in their adherence to severe tests derived from their ancestors’ religious asceticism. In other words, significance testing is a form of self-flagellation. Lest you think I exaggerate, consider the actual words of Oreskes and Conway:

“In an almost childlike attempt to demarcate their practices from those of older explanatory traditions, scientists felt it necessary to prove to themselves and the world how strict they were in their intellectual standards. Thus, they placed the burden of proof on novel claims, including those about climate. Some scientists in the early twenty-first century, for example, had recognized that hurricanes were intensifying, but they backed down from this conclusion under pressure from their scientific colleagues. Much of the argument surrounded the concept of statistical significance. Given what we now know about the dominance of nonlinear systems and the distribution of stochastic processes, the then-dominant notion of a 95 percent confidence limit is hard to fathom. Yet overwhelming evidence suggests that twentieth-century scientists believed that a claim could be accepted only if, by the standards of Fisherian statistics, the possibility that an observed event could have happened by chance was less than 1 in 20. Many phenomena whose causal mechanisms were physically, chemically, or biologically linked to warmer temperatures were dismissed as “unproven” because they did not adhere to this standard of demonstration.

Historians have long argued about why this standard was accepted, given that it had no substantive mathematical basis. We have come to understand the 95 percent confidence limit as a social convention rooted in scientists’ desire to demonstrate their disciplinary severity. Just as religious orders of prior centuries had demonstrated moral rigor through extreme practices of asceticism in dress, lodging, behavior, and food–in essence, practices of physical self-denial–so, too, did natural scientists of the twentieth century attempt to demonstrate their intellectual rigor through intellectual self-denial.14 This practice led scientists to demand an excessively stringent standard for accepting claims of any kind, even those involving imminent threats.”

142 Dædalus at 44.

The science fiction piece in Dædalus has now morphed into a short book, which is billed within as a “haunting, provocative work of science-based fiction.” Naomi Oreskes & Erik M. Conway, The Collapse of Western Civilization: A View from the Future (N.Y. 2014). Under the cover of fiction, Oreskes and Conway provide their idiosyncratic, fictional definition of statistical significance, in a “Lexicon of Archaic Terms,” at the back of the book:

statistical significance  The archaic concept that an observed phenomenon could only be accepted as true if the odds of it happening by chance were very small, typically taken to be no more than 1 in 20.”

Id. at 61-62. Of course, in writing fiction, you can make up anything you like. Caveat lector.


 

[1] SeePlaying Dumb on Statistical Significance” (Jan. 4, 2015).

[2] SeeSignificance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[3] SeeRhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

[4] SeeThe Will to Ummph” (Jan. 10, 2012).