TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Don’t Double Dip Data

March 9th, 2015

Meta-analyses have become commonplace in epidemiology and in other sciences. When well conducted and transparently reported, meta-analyses can be extremely helpful. In several litigations, meta-analyses determined the outcome of the medical causation issues. In the silicone gel breast implant litigation, after defense expert witnesses proffered meta-analyses[1], court-appointed expert witnesses adopted the approach and featured meta-analyses in their reports to the MDL court[2].

In the welding fume litigation, plaintiffs’ expert witness offered a crude, non-quantified, “vote counting” exercise to argue that welding causes Parkinson’s disease[3]. In rebuttal, one of the defense expert witnesses offered a quantitative meta-analysis, which provided strong evidence against plaintiffs’ claim.[4] Although the welding fume MDL court excluded the defense expert’s meta-analysis from the pre-trial Rule 702 hearing as untimely, plaintiffs’ counsel soon thereafter initiated settlement discussions of the entire set of MDL cases. Subsequently, the defense expert witness, with his professional colleagues, published an expanded version of the meta-analysis.[5]

And last month, a meta-analysis proffered by a defense expert witness helped dispatch a long-festering litigation in New Jersey’s multi-county isotretinoin (Accutane) litigation. In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan).

Of course, when a meta-analysis is done improperly, the resulting analysis may be worse than none at all. Some methodological flaws involve arcane statistical concepts and procedures, and may be easily missed. Other flaws are flagrant and call for a gatekeeping bucket brigade.

When a merchant puts his hand on the scale at the check-out counter, we call that fraud. When George Costanza dipped his chip twice into the chip dip, he was properly called out for his boorish and unsanitary practice. When a statistician or epidemiologist produces a meta-analysis that double counts crucial data to inflate a summary estimate of association, or to create spurious precision in the estimate, we don’t need to crack open Modern Epidemiology or the Reference Manual on Scientific Evidence to know that something fishy has taken place.

In litigation involving claims that selective serotonin reuptake inhibitors cause birth defects, plaintiffs’ expert witness, a perinatal epidemiologist, relied upon two published meta-analyses[6]. In an examination before trial, this epidemiologist was confronted with the double counting (and other data entry errors) in the relied-upon meta-analyses, and she readily agreed that the meta-analyses were improperly done and that she had to abandon her reliance upon them.[7] The result of the expert witness’s deposition epiphany, however, was that she no longer had the illusory benefit of an aggregation of data, with an outcome supporting her opinion. The further consequence was that her opinion succumbed to a Rule 702 challenge. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Double counting of studies, or of subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advanced training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies. Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009). Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Id. Senn urges analysts to “[b]e vigilant about double counting,” id. at *4, and recommends that journals “should withdraw meta-analyses promptly when mistakes are found,” id. at *1.
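
The arithmetic of the distortion is simple enough to show in a few lines. Below is a minimal sketch of a fixed-effect, inverse-variance meta-analysis of odds ratios (my illustration, not anything from Senn’s paper; the three study results are invented), run once correctly and once with the first study entered twice. The duplicate drags the summary estimate toward the double-counted study and narrows the confidence interval, manufacturing exactly the spurious precision described above.

```python
import math

def fixed_effect_summary(log_ors, ses):
    """Inverse-variance weighted (fixed-effect) summary of log odds ratios."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lor for w, lor in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Three hypothetical studies: log odds ratios and their standard errors.
log_ors = [math.log(1.4), math.log(0.9), math.log(1.1)]
ses = [0.20, 0.25, 0.15]

print(fixed_effect_summary(log_ors, ses))           # the honest analysis
print(fixed_effect_summary(log_ors + [log_ors[0]],  # study 1 counted twice:
                           ses + [ses[0]]))         # narrower CI, shifted estimate
```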

Similar advice abounds in books and journals[8]. Professor Sander Greenland addresses the issue in his chapter on meta-analysis in Modern Epidemiology:

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap. * * * * *

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added).

Just remember George Costanza; don’t double dip that chip, and don’t double dip in the data.


[1] See, e.g., Otto Wong, “A Critical Assessment of the Relationship between Silicone Breast Implants and Connective Tissue Diseases,” 23 Regulatory Toxicol. & Pharmacol. 74 (1996).

[2] See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction:  A Report by a National Science Panel to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998)”; Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000).

[3] Deposition of Dr. Juan Sanchez-Ramos, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

[4] Deposition of Dr. James Mortimer, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).

[5] James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012).

[6] Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar, and Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006).

[7] “Q So the question was: Have you read it carefully and do you understand everything that was done in the Nikfar meta-analysis?

A Yes, I think so.

* * *

Q And Nikfar stated that she included studies, correct, in the cardiac malformation meta-analysis?

A That’s what she says.

* * *

Q So if you look at the STATA output, the demonstrative, the — the forest plot, the second study is Kornum 2010. Do you see that?

A Am I —

Q You’re looking at figure four, the cardiac malformations.

A Okay.

Q And Kornum 2010, —

A Yes.

Q — that’s a study you relied upon.

A Mm-hmm.

Q Is that right?

A Yes.

Q And it’s on this forest plot, along with its odds ratio and confidence interval, correct?

A Yeah.

Q And if you look at the last study on the forest plot, it’s the same study, Kornum 2010, same odds ratio and same confidence interval, true?

A You’re right.

Q And to paraphrase My Cousin Vinny, no self-respecting epidemiologist would do a meta-analysis by including the same study twice, correct?

A Well, that was an error. Yeah, you’re right.

***

Q Instead of putting 2 out of 98, they extracted the data and put 9 out of 28.

A Yeah. You’re right.

Q So there’s a numerical transposition that generated a 25-fold increased risk; is that right?

A You’re correct.

Q And, again, to quote My Cousin Vinny, this is no way to do a meta-analysis, is it?

A You’re right.”

Testimony of Anick Bérard, Kuykendall v. Forest Labs, at 223:14-17; 238:17-20; 239:11-240:10; 245:5-12 (Cole County, Missouri; Nov. 15, 2013). According to a Google Scholar search, the Rahimi 2006 meta-analysis had been cited 90 times; the Nikfar 2012 meta-analysis, 11 times, as recently as this month. See, e.g., Etienne Weisskopf, Celine J. Fischer, Myriam Bickle Graz, Mathilde Morisod Harari, Jean-Francois Tolsa, Olivier Claris, Yvan Vial, Chin B. Eap, Chantal Csajka & Alice Panchaud, “Risk-benefit balance assessment of SSRI antidepressant use during pregnancy and lactation based on best available evidence,” 14 Expert Op. Drug Safety 413 (2015); Kimberly A. Yonkers, Katherine A. Blackwell & Ariadna Forray, “Antidepressant Use in Pregnant and Postpartum Women,” 10 Ann. Rev. Clin. Psychol. 369 (2014); Abbie D. Leino & Vicki L. Ellingrod, “SSRIs in pregnancy: What should you tell your depressed patient?” 12 Current Psychiatry 41 (2013).

[8] Julian Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008) (“7.2.2 Identifying multiple reports from the same study. Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramèr 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”); see also id. at 298 (Table 10.1.a “Definitions of some types of reporting biases”); id. at 304-05 (10.2.2.1 Duplicate (multiple) publication bias … “The inclusion of duplicated data may therefore lead to overestimation of intervention effects.”); Julian P.T. Higgins, Peter W. Lane, Betsy Anagnostelis, Judith Anzures-Cabrera, Nigel F. Baker, Joseph C. Cappelleri, Scott Haughie, Sally Hollis, Steff C. Lewis, Patrick Moneuse & Anne Whitehead, “A tool to assess the quality of a meta-analysis,” 4 Research Synthesis Methods 351, 363 (2013) (“A common error is to double-count individuals in a meta-analysis.”); Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, P.J. Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann. Intern. Med. W-65, W-75 (2009) (“Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes).”) (internal citations omitted); Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R. Tramèr, “Different patterns of duplicate publication: an analysis of articles used in systematic reviews,” 291 J. Am. Med. Ass’n 974 (2004); John Andy Wood, “Methodology for Dealing With Duplicate Study Effects in a Meta-Analysis,” 11 Organizational Research Methods 79, 79 (2008) (“Dependent studies, duplicate study effects, nonindependent studies, and even covert duplicate publications are all terms that have been used to describe a threat to the validity of the meta-analytic process.”) (internal citations omitted); Martin R. Tramèr, D. John M. Reynolds, R. Andrew Moore, Henry J. McQuay, “Impact of covert duplicate publication on meta-analysis: a case study,” 315 Brit. Med. J. 635 (1997); Beverley J. Shea, Jeremy M. Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter, “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews,” 7 BMC Medical Research Methodology 10 (2007) (systematic reviews must inquire whether there was “duplicate study selection and data extraction”).

The Legacy of Irving Selikoff & Wicked Wikipedia

March 7th, 2015

January 15 of this year would have been Irving J. Selikoff’s 100th birthday. Selikoff left a lifetime legacy of having improved public health awareness, with a shadow of some rather questionable opinions and conduct in the world of litigation[1]. Given Selikoff’s fame and prestige among public health advocates and labor union activists, it is remarkable that now, more than twenty years after his death, there are no major biographies of him. Even Selikoff’s Wikipedia page[2] is skimpy and devoid of many details of his activities.

There are some comical aspects to the Selikoff Wikipedia page, some of which revolve around someone’s anonymous disparagement of my writing about Selikoff:

“Part of the contrary perspective was presented by a Nathan A. Schachtman, an adjunct lecturer at the Columbia Law School. He suggested that Selikoff and his supporters may have organized ‘a lopsided medical conference, arranged for the conference to feature defendant’s expert witnesses, set out to give short shrift to opposing points of view, invited key judges to attend the conference, and paid for the judges’ travel and hotel expenses’. This quote from Schachtman came from a web site he maintained, unlike the quote from McCulloch and Tweedale, whose comments were published only after being accepted by reviewers for a refereed academic journal.” The entry cites “Nathan A. Schachtman,” www.law.columbia.edu, Columbia Law School (retrieved September 16, 2013).

Make no mistake about it; I wasn’t “suggesting”; I was stating a fact. As for the reviewers who “refereed” the journal article by McCulloch and Tweedale, I have shown that this peer review was not worth a warm bucket of spit[3].

One of the disturbing aspects of Wikipedia is that contributors can hide behind I.P. addresses or pseudonyms. Whoever attempted to quote my blog posting distorted my meaning by selective and incomplete quotation, suggesting that the conference featured defendants’ expert witnesses. I can understand that the dumbot wanted to remain anonymous while misleading in this way, but what I wrote was:

“One can only imagine the hue and cry that would arise if a defendant company had funded a lopsided medical conference, arranged for the conference to feature defendant’s expert witnesses, set out to give short shrift to opposing points of view, invited key judges to attend the conference, and paid for the judges’ travel and hotel expenses.”

The counterfactual point, obviously, was that if defense counsel had conspired with defense expert witnesses to hold an ex parte conference with sitting judges, featuring the work of defense experts, there would have been acrimonious denunciations from the public health community about the evils of corporate influence. In the Wikipedia article, the only reference to Selikoff’s participation in the conspiracy with the litigation industry was an attack on my writing, and a distortion of my posting by incomplete citation. The misquotation was welcome in one respect: it motivated me to register with Wikipedia to correct the misattribution.

There are two archives of Selikoff documents, one at Mt. Sinai Hospital in New York[4], and the other in St. Louis[5]. Jock McCulloch described Selikoff as having “avoided litigation” and having “fought to keep his papers away from the legal arena.”[6] The first part of McCulloch’s description is demonstrably wrong, but the efforts to suppress access to his papers, and data, are sadly all too true. The accusations of “cover up” flow so freely against industry, but why the cover up of Selikoff’s papers? And who would trust the Mt. Sinai custodians?

The Asbestos Disease Awareness Organization (ADAO) claims to be an “independent asbestos victims’ organization,” started in 2004. Its website points out that the ADAO is a registered 501(c)(3) nonprofit corporation, which “does not make legal referrals.” The ADAO posted a kind memorial to the late Dr. Selikoff: “Dr. Irving Selikoff: Clinician, Researcher, Public Health Advocate and Occupational Health Pioneer (1915 – 2015)” (Jan. 15, 2015).

For almost ten years, the ADAO has been recognizing “exceptional leaders” with the Dr. Irving Selikoff Lifetime Achievement Award, for the recipient’s efforts to increase awareness and prevention of asbestos-related diseases.

Remarkably, many of the “exceptional leaders,” in the eyes of the ADAO, are (or were before their deaths) regular testifiers for the litigation industry:

Paul Brodeur 2006

Yasunosuke Suzuki 2006

Michael Harbut 2008

Barry Castleman 2008

Stephen Levin 2009

Arthur Frank 2012

Richard Lemen 2012

Celeste Monforton 2013

David Egilman 2014

Brodeur, of course, did not testify; he wrote for the New Yorker, including a series that became the book, Outrageous Misconduct: The Asbestos Industry on Trial. This book became an important lobbying tool for plaintiffs’ counsel with judges and legislatures. His subsequent book, The Great Power-Line Cover-Up: How the Utilities and Government Are Trying to Hide the Cancer Hazard Posed by Electromagnetic Fields (1993), revealed his aptitude for overinterpreting studies and failing to appreciate validity concerns. See Sander Greenland, Asher R. Sheppard, William T. Kaune, Charles Poole, and Michael A. Kelsh, “A Pooled Analysis of Magnetic Fields, Wire Codes, and Childhood Leukemia,” 11 Epidemiology 624 (2000).

Harbut was the proponent, in the silicone gel breast implant litigation, of a half-baked theory about a role for platinum in causing autoimmune disease among claimants. The FDA and the Institute of Medicine easily dispatched Harbut’s theory. Suzuki, Castleman, Levin, Frank, and Lemen testify (or did testify when alive) with some frequency and regularity in asbestos litigation, on behalf of the litigation industry. Egilman, to his credit, is perhaps the lone recipient who has spoken out[7], on one or more occasions, against the depredations of the litigation industry’s unethical[8] and unlawful screenings, but he has openly acknowledged his bias against corporate industry (although not against the litigation industry). See David S. Egilman, “Corporate and Government Suppression of Research” (2004). And Monforton was one of the movers and shakers in establishing SKAPP[9], which misrepresented its funding sources while lobbying against the legal requirements of reliability and validity for scientific expert witness opinion testimony.


[1] See “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010); “Selikoff and the Mystery of the Disappearing Asbestosis” (Dec. 6, 2010); “Selikoff and the Mystery of the Disappearing Amphiboles” (Dec. 10, 2010); “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011); “Irving Selikoff and the Right to Peaceful Dissembling” (June 5, 2013); “The Mt. Sinai Catechism” (June 7, 2013); “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “What Happens When Historians Have Bad Memories” (Mar. 15, 2014); “The Last Squirmish Between Irving Selikoff and Sir Richard Doll” (Sept. 9, 2014); “Irving Selikoff – Media Plodder to Media Zealot” (Sept. 9, 2014); “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014). See also Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself so as to Preserve the Appearance of Justice under 28 U.S.C. § 455,” 38 Vill. L. Rev. 1219 (1993); W.K.C. Morgan, “Asbestos and cancer: history and public policy,” 49 Br. J. Indus. Med. 451, 451 (1992).

[2] Wikipedia, “Irving Selikoff” (last visited March 6, 2015).

[3] “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014).

[4] 83 Am. J. Pub. Health 609, 609 (1993) (describing the Irving J. Selikoff Asbestos Archives and Research Center holdings of Dr. Selikoff’s research documents).

[5] http://beckerarchives.wustl.edu/?p=collections/controlcard&id=6725

[6] Jock McCulloch and Geoffrey Tweedale, Defending The Indefensible: The Global Asbestos Industry and its Fight for Survival 271 (Oxford 2008) (describing how even after his death, the Selikoff papers have still not been made generally available, but thanking Valerie Josephson, Philip Landrigan, and Stephen Levin, for helping McCulloch gain access to the papers).

[7] David Egilman & Susanna Rankin Bohme, “Attorney-Directed Screenings Can Be Hazardous,” 45 Am. J. Indus. Med. 305 (2004).

[8] Nathan A. Schachtman & Cynthia J. Rhodes, “Medico-Legal Issues in Occupational Lung Disease Litigation,” 27 Sem. Roentgenology 140 (1992).

[9] “SKAPP-A-LOT” (April 30, 2010); “Conflicted Public Interest Groups” (Nov. 3, 2013).

Bentham’s Legacy – Quantification of Fact Finding

March 1st, 2015

Jeremy Bentham, radical philosopher, was a source of many antic proposals. Perhaps his most antic proposal was to have himself stuffed, mounted, and displayed in the halls of University College London, where he may still be observed during normal school hours. In ethical theory, Bentham advocated an extreme ethical reductionism, known as utilitarianism. Bentham shared Edmund Burke’s opposition to the invocation of natural rights, but unlike Burke, Bentham was an ardent foe of the American Revolution.

Bentham was also a non-practicing lawyer who had an inexhaustible capacity for rationalistic revisions of legal practice. Among his revisionary schemes, Bentham proposed to reduce or translate qualitative beliefs to a numerical scale, like a thermometer. Jeremy Bentham, 6 The Works of Jeremy Bentham; Rationale of Evidence, Rationale of Judicial Evidence at 225 (1843); 1 Rationale of Judicial Evidence Specially Applied to Judicial Practice at 76 (1827). The legal profession, that is, lawyers who actually tried or judged cases, did not think much of Bentham’s proposal:

“The notions of those who have proposed that mere moral probabilities or relations could ever be represented by numbers or space, and thus be subjected to arithmetical analysis, cannot but be regarded as visionary and chimerical.”

Thomas Starkie, A Practical Treatise of the Law of Evidence 225 (2d ed. 1833). Having graduated from St. John’s College, Cambridge University, as senior wrangler, Starkie was no slouch in mathematics, and he was an accomplished lawyer and judge later in life.

Starkie’s pronouncement upon Bentham’s proposal was, in the legal profession, a final judgment. The idea of having witnesses provide a decigrade or centigrade scale of belief in facts never caught on in the law. No evidentiary code or set of rules allows for, or requires, such quantification, but on the fringes, Bentham’s ideas still resonate with some observers who would require juries or judges to quantify their findings of fact:

“Consequently statistical ideas should be used in court and have already been used in the analysis of forensic data. But there are other areas to explore. Thus I do not think a jury should be required to decide guilty or innocent; they should provide their probability of guilt. The judge can then apply MEU [maximised expected utility] by incorporating society’s utility. Hutton could usefully have used some probability. A lawyer and I wrote a paper on the evidential worth of failure to produce evidence.”

Dennis V. Lindley, “Bayesian Thoughts,” Significance 73, 74-75 (June 2004). Some might say that Lindley was trash picking in the dustbin of legal history.
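
For readers who have not met Lindley’s shorthand, MEU works this way: the jury reports its probability of guilt, society supplies a utility for each verdict-and-truth combination, and the court enters whichever verdict has the higher expected utility. A minimal sketch, with utilities invented purely for illustration (a Blackstone-flavored assignment in which convicting the innocent is ten times worse than acquitting the guilty):

```python
def meu_verdict(p_guilt, u):
    """Return the verdict that maximizes expected utility, given the jury's
    reported probability of guilt and society's utilities u[(verdict, truth)]."""
    eu_convict = (p_guilt * u[("convict", "guilty")]
                  + (1 - p_guilt) * u[("convict", "innocent")])
    eu_acquit = (p_guilt * u[("acquit", "guilty")]
                 + (1 - p_guilt) * u[("acquit", "innocent")])
    return "convict" if eu_convict > eu_acquit else "acquit"

# Hypothetical utilities: a false conviction is ten times worse than a false acquittal.
utilities = {("convict", "guilty"): 1.0, ("convict", "innocent"): -10.0,
             ("acquit", "guilty"): -1.0, ("acquit", "innocent"): 1.0}

for p in (0.50, 0.80, 0.95):
    print(p, meu_verdict(p, utilities))  # convicts only when p exceeds about 0.85
```

On these invented numbers, the conviction threshold falls at a probability of guilt of about 0.85; Starkie, one suspects, would still not have been mollified.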

Lies, False Memories, or Bad Biochemistry?

February 23rd, 2015

NBC suspended Brian Williams for six months for a bout of misremembering[1]. The electronic media hardly skipped a sine wave. David Brooks thought that the public reaction was “barbaric” [sic][2], and counseled more empathy. Indeed, some commentators jumped on Williams’ apology as an admission of lying.[3] Science Times writer Tara Parker-Pope more charitably asked whether Brian Williams was a victim of false memory[4]. The media fretted that Williams, their most trustworthy voice and face, had undermined the public’s trust in the major news services. Why worry, though? There was little trust left to lose in nightly news programs that serve up news as entertainment.

Memory is key to what trial lawyers do in presenting evidence in support of narratives about past events. False memory, whether of the innocent or the guilty kind, is a major threat to the judicial system’s attempt to discern the truth based upon narrative testimony. Lawyers recognize that memories are fragile and subject to manipulation. The tenuous connection between past events and current recollection is an omnipresent challenge to the basic process of juridical fact finding. The most cynical members[5] of the bar, no doubt, exploit the amorphous quality of memory, but even at its best, memory can be a slender reed to support our judgments.

Consider the criminal prosecution of Adnan Syed, which turned on the dodgy testimony of Jay Wilds. As recounted in Sarah Koenig’s Serial podcasts, the State of Maryland tried Syed twice. The first trial ended in a mistrial before the case could be submitted to a jury. The murder charges turned on the factual accounts of one witness, Jay Wilds, who gave a wildly different account each time he talked about his participation in the cover-up of the murder of Hae Min Lee.

Koenig’s series not only breathed life into Syed’s efforts to obtain a hearing on his ineffective-assistance challenge, but it also demonstrated the power of hindsight and other cognitive biases in the criminal justice system, as in all human endeavors[6]. Witnesses, including Syed, struggled to give accurate accounts of past events, even a few weeks after the fact, whatever that was.

Every lawyer who has tried cases can recount instances in which witnesses failed to recall, or recalled erroneously, essential facts in the litigation. In one recent case I had in upstate New York, a witness, who appeared very honest to the defense counsel in the deposition room, testified that he had never seen or read any warnings concerning crystalline silica in the pottery factory where he had worked for several decades. When the employer produced documents from its safety program, the documentary record showed that the plaintiff sat on a safety committee, and that he had helped to prepare safety warning placards for the workplace. Memories are like that, especially when they are inconvenient.

In 2013, Mauro Costa-Mattioli and his research group showed that actin polymerization, fostered by the mTORC2 pathway, is essential for long-term memory in mice and flies. See Wei Huang, Ping Jun Zhu, Shixing Zhang, Hongyi Zhou, Loredana Stoica, Mauricio Galiano, Krešimir Krnjević, Gregg Roman & Mauro Costa-Mattioli, “mTORC2 controls actin polymerization required for consolidation of long-term memory,” 16 Nature Neurosci. 441 (2013). Well, perhaps Brian Williams simply had a surfeit of actin polymerization working to remodel his memory in a false and misleading way? And perhaps my plaintiff had a deficit of actin remodeling of his neurons?


[1] Emily Steel & Ravi Somaiya, “Brian Williams Suspended From NBC for 6 Months Without Pay,” N.Y. Times (Feb. 10, 2015).

[2] David Brooks, “The Act of Rigorous Forgiving,” N.Y. Times (Feb. 10, 2015). Brooks would probably think this use of “sic” was barbaric rather than merely barbarous pedantry in pointing out a diction error.

[3] “NBC’S Brian Williams admits on air to lying: Apologizes for bogus story of his ‛personal heroics’.”

[4] Tara Parker-Pope, “Was Brian Williams a Victim of False Memory?” N.Y. Times (Feb. 9, 2015) (the print edition carried a different, more biting headline: “False Memory vs. Bald Faced Lie”).

[5] See “Preparing for your deposition,” a.k.a. the Baron & Budd asbestos memo, Wikipedia; Walter Olson, “Thanks for the Memories: How lawyers get the testimony they want,” Reason (June 1998).

[6] Not everyone agrees. JaneAnne Murray, a lawyer with criminal defense experience, took a surprising approach in suggesting that Adnan Syed should have pleaded guilty, in part because the prosecution’s case against Syed was strong. See JaneAnne Murray, “Why Adnan Syed of ‘Serial’ Should Have Pleaded Guilty,” N.Y. Times (Jan. 22, 2015) (“In this case, the injustice may lie not in the conviction, but in the failure to negotiate the charges. The unstructured presentation of the facts in “Serial” obscured a strong case for the prosecution.”). Actually, Koenig’s Serial podcasts appeared to play down the strength of the defense, and what should and could have been done with the testimony of the chief prosecution witness, Jay Wilds. Koenig lauded Syed’s defense lawyer, Cristina Gutierrez, as having a reputation for brilliance, which seemed conspicuously absent in the segments of trial tapes excerpted on Serial. In any event, the Maryland Court of Special Appeals has only recently granted Syed an appeal on his claim for post-conviction relief. Emma G. Fitzsimmons, “Appeal to Be Heard in ‘Serial’ Murder Case,” N.Y. Times (Feb. 7, 2015).

Plaintiffs Sue Friendly Physicians for Access to Opinions and to Data

February 14th, 2015

Access to underlying data continues to be a difficult and contentious issue in litigation and in the world of public health. Healthcare providers have made intemperate demands upon pharmaceutical manufacturers to produce underlying data from clinical trials, but often the same providers object the loudest when asked to share underlying data from their own observational or experimental studies.

Last month saw a judicial rejection of an effort by New York researchers to resist production of underlying data. Plaintiffs in the World Trade Center health effects litigation sought to compel their treating physicians at the Mt. Sinai Health System to testify to their opinions about general and specific causation of the plaintiffs’ alleged injuries. Previously, in discovery depositions of the plaintiffs’ treating physicians as fact witnesses, Mt. Sinai Hospital’s attorneys objected to plaintiffs’ counsel’s efforts to elicit causation opinions not recorded in the plaintiffs’ personal medical records. Plaintiffs then moved to compel the treating physicians to give opinions on causation, and to produce underlying data from their published papers on the claimed health effects of exposure to World Trade Center dust. The hospital opposed these motions on grounds that its neutrality in the litigation would be compromised by compulsory testimony and production of data.

In a short opinion, Judge Alvin K. Hellerstein refused plaintiffs’ motions to compel testimony, but granted their motion to compel production of underlying data. In re World Trade Center Lower Manhattan Disaster Site Litig., ___ F.R.D. ___, 2015 WL 220988, at *1-2 (S.D.N.Y. Jan. 15, 2015). Judge Hellerstein acknowledged that, in rare instances, federal trial courts have compelled experts to provide opinion testimony. Carter–Wallace, Inc. v. Otte, 474 F.2d 529, 536 (2d Cir.1972) (“The weight of authority holds that, although it is not the usual practice, a court does have the power to subpoena an expert witness and … require him [or her] to state whatever opinions he [or she] may have previously formed.”).

Judge Hellerstein cited the five factors articulated in Kaufman v. Edelstein, as governing the exercise of discretion in compelling expert testimony:

(1) “[T]he degree to which the expert is being called because of his knowledge of facts relevant to the case rather than in order to give opinion testimony”;

(2) “the difference between testifying to a previously formed or expressed opinion and forming a new one”;

(3) “the possibility that, for other reasons, the witness is a unique expert”;

(4) “the extent to which the calling party is able to show the unlikelihood that any comparable witness will willingly testify”; and

(5) “the degree to which the witness is able to show that he has been oppressed by having continually to testify[.]”

2015 WL 220988, at *2-3, citing and quoting Kaufman v. Edelstein, 539 F.2d 811, 822 (2d Cir.1976).

Considering these factors, Judge Hellerstein refused to enforce the subpoena ad testificandum against the Mt. Sinai physicians. Plaintiffs have their own retained expert witness, Dr. Tee Guidotti, who was prepared to testify about both specific and general causation of the plaintiffs’ claimed injuries. The underlying data gathered by Mt. Sinai physicians and scientists, however, were another matter:

“However, it is undisputed that the scope and detail of the research conducted by the Mt. Sinai WTC Health Program is unparalleled. This litigation can only benefit from a full record and the inclusion of the relevant data underlying the research conducted by the Mt. Sinai WTC Health Program is appropriate. Accordingly, Mt. Sinai is ordered to produce all Mt. Sinai WTC Health Program data pursuant to the protocol established in In re World Trade Center Disaster Site Litigation, 21–mc–100, accounting for the redaction of personal identifying information and other sensitive patient material.”

Id. at *4; see also id. at *1.

Cynical observers may wonder whether the Mt. Sinai opposition to the subpoenas and motions to compel was posturing. The hospital and many of its physicians have been outspoken advocates on many occupational and environmental issues. Perhaps, like Brer Rabbit, they were begging not to be thrown into the briar patch. Or maybe they realized that they could not resist the subpoena for data unless they also declined to testify about their opinions. In any event, Judge Hellerstein maintained the right of expert witnesses to keep their opinions to themselves and to avoid participating in the litigation system, while ensuring that the data are available to all. The plaintiffs may well have been too clever by half in bringing their motion.

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics who, having served as an expert witness, has written post-mortems of his involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts of the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 (the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP,” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inconclusive studies as evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in humans. If Baxter’s expert invoked the IARC working group’s classification as a complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point: the IARC’s determination that DEHP is not classifiable as to carcinogenicity directly contradicts California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in light of its conclusion that the mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.”

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may readily be seen as an advocate coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as in other cases, such as the silicone gel breast implant and thimerosal cases) suggests that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

These foregoing points are rather severe limitations of Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections of the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given how often plaintiffs’ counsel confuse the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.
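
The confusion is worth dispelling concretely. The 95% in a 95% confidence interval is a long-run property of the interval-generating procedure, not the probability that the studied hypothesis is true, and certainly not a quantification of proof beyond a reasonable doubt. A minimal simulation sketch (mine, with invented data, not an example from Greenland’s article):

```python
import math
import random
import statistics

random.seed(1)
TRUE_MEAN, SD, N, TRIALS = 10.0, 2.0, 30, 10_000
T_CRIT = 2.045  # two-sided 97.5th percentile of t with 29 degrees of freedom

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)
    if m - T_CRIT * se <= TRUE_MEAN <= m + T_CRIT * se:
        covered += 1

# ~0.95: a property of the repeated procedure, not proof of any hypothesis
print(covered / TRIALS)
```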

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new; many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility in making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might incorrectly be taken to mean that “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by study authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts.”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in reminding courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inference. Greenland emphasizes that validity considerations are often stronger and more important than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading list of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
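
To make the point concrete, the simplest form of quantitative bias analysis adjusts an observed risk ratio by the bias factor attributable to an unmeasured confounder of assumed strength and prevalence. The sketch below is my illustration of that standard simple adjustment, with invented inputs (it is not code from the Lash paper): a nominally “statistically significant” risk ratio of 1.5 nearly vanishes under a plausible confounding scenario.

```python
def adjusted_rr(rr_observed, rr_cd, p1, p0):
    """Divide an observed risk ratio by the bias factor from an unmeasured
    confounder: rr_cd is the confounder-disease risk ratio; p1 and p0 are the
    confounder's prevalence among the exposed and unexposed, respectively."""
    bias_factor = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    return rr_observed / bias_factor

# Hypothetical inputs: observed RR of 1.5; a confounder that triples disease
# risk and is present in 50% of the exposed but only 20% of the unexposed.
print(adjusted_rr(1.5, rr_cd=3.0, p1=0.5, p0=0.2))  # ~1.05: the association all but vanishes
```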


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished physician-scientist, who served as a defense expert witness in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra, n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics, who has served as an expert witness, who has written post-mortems of his involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts to the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 ( the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inclusive studies with evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in human. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity is directly contradictory of California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may be readily be seen as an advocate for coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggest that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

These foregoing points are rather severe limitations to Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party of affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new; many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility in making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might easily be mistaken for a claim that “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” underlying the published statistical analyses and conclusions. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts.”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in alerting courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inference. Greenland emphasizes that validity considerations are often much stronger, and more important, than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
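To give a flavor of what quantitative bias analysis involves, here is a minimal sketch, in Python, of one of the simplest techniques in that literature: externally adjusting an observed risk ratio for an unmeasured binary confounder. All of the numbers are hypothetical illustrations, not data from any actual study or from the Lash paper.

```python
# A toy external adjustment of a risk ratio for an unmeasured binary
# confounder (a standard textbook formula; all inputs are hypothetical).

def adjust_rr(rr_obs: float, rr_cd: float, p1: float, p0: float) -> float:
    """Return the confounder-adjusted risk ratio.

    rr_obs -- observed exposure-disease risk ratio
    rr_cd  -- confounder-disease risk ratio
    p1, p0 -- prevalence of the confounder among exposed / unexposed
    """
    bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
    return rr_obs / bias

# An observed RR of 1.8 shrinks to about 1.26 if a strong confounder
# (RR = 3.0) afflicts half the exposed but only a fifth of the
# unexposed -- with no contribution at all from random error.
print(round(adjust_rr(1.8, 3.0, 0.5, 0.2), 2))  # 1.26
```

Even this toy calculation shows how a seemingly robust association can shrink toward the null once a plausible confounder is quantified – which is precisely why bias analysis belongs in the gatekeeping inquiry.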


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished scientist physician, who served as a defense expert witness, in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra, n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Fixodent Study Causes Lockjaw in Plaintiffs’ Counsel

February 4th, 2015

Litigation Drives Science

Back in 2011, the Fixodent MDL Court sustained Rule 702 challenges to plaintiffs’ expert witnesses. “Hypotheses are verified by testing, not by submitting them to lay juries for a vote.” In re Denture Cream Prods. Liab. Litig., 795 F. Supp. 2d 1345, 1367 (S.D.Fla.2011), aff’d, Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296 (11th Cir. 2014). The Court found that the plaintiffs had raised a superficially plausible hypothesis, but that they had not verified the hypothesis by appropriate testing[1].

Like dentures to Fixodent, the plaintiffs stuck to their claims, and set out to create the missing evidence. Plaintiffs’ counsel contracted with Dr. Salim Shah and his companies Sarfez Pharmaceuticals, Inc. and Sarfez USA, Inc. (“Sarfez”) to conduct human research in India, to support their claims that zinc in denture cream causes neurological damage[2]. In re Denture Cream Prods. Liab. Litig., Misc. Action 13-384 (RBW), 2013 U.S. Dist. LEXIS 93456, *2 (D.D.C. July 3, 2013). When the defense learned of this study, and of plaintiffs’ counsel’s payments of over $300,000 to support it, they sought discovery of raw data, the study protocol, statistical analyses, and other materials from plaintiffs’ counsel. Plaintiffs’ counsel protested that they did not have all the materials, and directed defense counsel to Sarfez. Although other courts have made counsel produce similar materials from the scientists and independent contractors they engaged, in this case defense counsel followed the trail of documents to the contractor, Sarfez, with subpoenas in hand. Id. at *3-4.

The defense served a Rule 45 subpoena on Sarfez, which produced some, but not all, responsive documents. Procter & Gamble pressed for the missing materials, including study protocols, analytical reports, and raw data. Id. at *12-13. Judge Reggie Walton held the subpoena, which sought underlying data and non-privileged correspondence, to be within the scope of Rules 26(b) and 45, and not unduly burdensome. Id. at *9-10, *20. Sarfez attempted to argue that the requested materials, listed as email attachments, might not exist, but Judge Walton branded the suggestion “disingenuous”; attachments to emails should be produced along with the emails. Id. at *12 (citing and collecting cases). Although Judge Walton did not grant a request for forensic recovery of hard-drive data or for sanctions, His Honor warned Sarfez that it might be required to bear the cost of forensic data recovery if it did not comply with the court’s order. Id. at *15, *22.

Plaintiffs Put Their Study Into Play

The study at issue in the subpoena was designed by Frederick K. Askari, M.D., Ph.D., an associate professor of hepatology in the University of Michigan Health System. In re Denture Cream Prods. Liab. Litig., No. 09–2051–MD, 2015 WL 392021, at *7 (S.D. Fla. Jan. 28, 2015). At the instruction of plaintiffs’ counsel, Dr. Askari sought to study the short-term effects of Fixodent on copper absorption in humans. Working in India, Askari conducted the study on 24 participants, who were given a controlled diet for 36 days. Of the 24 participants, 12, randomly selected, received 12 grams of Fixodent per day (containing 204 mg of zinc). Another six participants, randomly selected, were given zinc acetate three times per day (150 mg of zinc), and the remaining six participants received placebo, three times per day.

A study protocol was approved by an independent group[3], id. at *9, and the study was supposed to be conducted with a double blind. Id. at *7. Not surprisingly, those participants who received doses of Fixodent or zinc acetate had higher urinary levels of zinc (pee < 0.05). The important issue, however, was whether the dietary zinc levels affected copper excretion in a way that would support plaintiffs’ claims that copper levels were lowered sufficiently by Fixodent to cause a syndromic neurological disorder. The MDL Court ultimately concluded that plaintiffs’ expert witnesses’ opinions on general causation were not sufficiently supported to satisfy the requirements of Rule 702, and upheld defense challenges to those expert witnesses. In doing so, the MDL Court had much of interest to say about case reports, weight of the evidence, and other important issues. This post, however, concentrates on the deviations of one study, commissioned by plaintiffs’ counsel, from the scientific standard of care. The Askari “research” makes for a fascinating case study of how not to conduct a study in a litigation caldron.

Non-Standard Deviations

The First Deviation – Changing the Ascertainment Period After the Data Are Collected

The protocol apparently identified the primary end point as:

“the mean increase in [copper 65] excretion in fecal matter above the baseline (mg/day) averaged over the study period … to test the hypothesis that the release of [zinc] either from Fixodent or Zinc Acetate impairs [copper 65] absorption as measured in feces.”

The study outcome, on the primary end point, was clear. The plaintiffs’ testifying statistician, Hongkun Wang, stated in her deposition that the fecal copper (whether isotope Cu63 or Cu65) was not different across the three groups (Fixodent, zinc acetate, and placebo). Id. at *9[4]. Even Dr. Askari himself admitted that the total fecal copper levels were not increased in the Fixodent group compared with the placebo control group. Id. at *9.[5]

Apparently after obtaining the data, and finding no difference in the pre-specified end point of average fecal copper levels between Fixodent and placebo groups, Askari turned to a new end point, measured in a different way, not described in the protocol as the primary end point.

The Second Deviation – Changing Primary End Point After the Data Are Collected

In the early (days 3, 4, and 5) and late (days 31, 32, and 33) parts of the Study, participants received a dose of purified copper 65[6] to help detect the “blockade of copper.” Id. at *8. The participants’ fecal copper 65 levels were compared to their naturally occurring copper 63 levels. According to Dr. Askari:

“if copper is being blocked in the Fixodent and zinc acetate test subjects from exposure to the zinc in the test product (Fixodent) and positive control (zinc acetate), the ratio of their fecal output of copper 65 as compared to their fecal output of copper 63 would increase relative to the control subjects, who were not dosed with zinc. In short, a higher ratio of copper 65 to copper 63 reflects blocking of copper.”

Id.

Askari analyzed the ratio of the two copper isotopes (Cu65/Cu63), and limited the period of observation to study days 31 to 33. Id. at *9. Askari thus changed the outcome to be measured, the timing of the measurement, and the manner of measurement (average over the entire period versus amount on days 31 to 33). On this post hoc, non-prespecified end point, Askari claimed to have found “significant” differences.
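The statistical vice in such maneuvering is easy to demonstrate. When an analyst is free to hunt, after the fact, among multiple candidate end points, observation windows, and manners of measurement, the probability of turning up at least one spuriously “significant” result grows rapidly. Here is a minimal simulation sketch in Python, assuming ten independent true-null comparisons with the reported group sizes (12 versus 6); the numbers are illustrative only, not the study’s actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_endpoints = 20_000, 10  # ten candidate end points, all true nulls

hits = 0
for _ in range(n_sims):
    # Each "end point" compares two groups drawn from the same distribution,
    # with the reported group sizes (12 treated, 6 placebo).
    pvals = [stats.ttest_ind(rng.normal(size=12), rng.normal(size=6)).pvalue
             for _ in range(n_endpoints)]
    hits += min(pvals) < 0.05  # did any end point look "significant"?

print(f"Chance of at least one nominally significant end point: {hits / n_sims:.2f}")
# ~0.40 -- roughly 1 - 0.95**10, or eight times the nominal five percent rate.
```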

The MDL Court expressed its skepticism and concern over the difference between the protocol’s specified end point, and the one that came into the study only after the data were obtained and analyzed. The plaintiffs claimed that it was their (and Askari’s) intention from the initial stages of designing the Fixodent Blockade Study to use the Cu65/Cu63 ratio as the primary end point. According to the plaintiffs, the isotope ratio was simply better articulated and “clarified” as the primary end point in the final report than it was in the protocol. The Court was not amused or assuaged by the plaintiffs’ assurances. The study sponsor, Dr. Salim Shah, could not point to a draft protocol that identified the isotope ratio as the end point; nor could Dr. Shah identify a request for this analysis by Wang until after the study was concluded. Id. at *9.[7]

Ultimately, the Court declared that whether the protocol was changed post hoc, after the primary end point yielded a disappointing analysis, or the isotope ratio was carelessly omitted from the protocol, the design or conduct of the study was “incompatible with reliable scientific methodology.”

The Third Deviation – Changing the Standard of “Significance” After the Data Are Collected and P-Values Are Computed

The protocol for the Blockade study called for a pre-determined Type I error rate (significance level) of no more than 5 percent.[8] Id. at *10. The difference in the isotope ratio showed an attained significance probability of 5.7 percent, and thus even the post hoc end point missed the prespecified level of significance. The final protocol changed the value of “significance” to 10 percent, to permit the plaintiffs to declare a “statistically significant” result. Dr. Wang admitted in deposition that she doubled the acceptable level of Type I error only after she obtained the data and calculated the p-value of 0.057. Id. at *10.[9]
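Why this matters is elementary: under a true null hypothesis, p-values are (approximately) uniformly distributed, so the chosen significance level simply is the rate of false-positive findings the analyst will tolerate. Doubling alpha after seeing p = 0.057 doubles that tolerance, as this toy check in Python illustrates (simulated values, for illustration only):

```python
import numpy as np

# Under a true null, p-values are uniform on [0, 1], so the Type I error
# rate equals whatever alpha is chosen -- before or after seeing the data.
rng = np.random.default_rng(1)
p = rng.uniform(size=1_000_000)  # p-values from a million true-null tests

print((p < 0.05).mean())  # ~0.05: the pre-specified false-positive rate
print((p < 0.10).mean())  # ~0.10: twice as many false positives
```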

The Court found that this deliberate moving of the statistical goal post reflected a “lack of objectivity and reliability,” which smacked of contrivance[10].

The Court found that the study’s deviations from the protocol demonstrated a lack of objectivity. The inadequacy of the Study’s statistical analysis plan supported the Court’s conclusion that Dr. Askari’s supposed finding of a “statistically significant” difference in fecal copper isotope ratio between Fixodent and placebo group participants was “not based on sufficiently reliable and objective scientific methodology” and thus could not support plaintiffs’ expert witnesses’ general causation claims.

The Fourth Deviation – Failing to Take Steps to Preserve the Blind

The protocol called for a double-blinded study, with neither the participants nor the clinical investigators knowing which participant was in which group. Rather than delivering capsules that looked similar to the three different groups, each group received starkly different looking capsules. Id. at *11. The capsules for one set were apparently so large that the investigators worried whether the participants would comply with the dosing regimen.

The Fifth Deviation – Failing to Take Steps to Keep Biological Samples From Becoming Contaminated

Documents and emails from Dr. Shah acknowledged that there had been “difficulties in storing samples at appropriate temperature.” Id. at *11. Fecal samples were “exposed to unfrozen and undesirable temperature conditions.” Dr. Shah called for remedial steps from the Study manager, but there was no documentation that such steps were taken to correct the problem. Id.

The Consequences of Discrediting the Study

Dr. Askari opined that the Study, along with other evidence, shows that Fixodent can cause copper deficiency myeloneuropathy (“CDM”). The plaintiffs, of course, argued that the Defendants’ criticisms of the Fixodent Study’s methodology went merely to the “weight rather than admissibility.” Id. at *9. Askari’s study was but one leg of the stool, but the defense’s thorough discrediting of the study was an important step in collapsing the support for the plaintiffs’ claims. As the MDL Court explained:

“The Court cannot turn a blind eye to the myriad, serious methodological flaws in the Fixodent Blockade Study and conclude they go to weight rather than admissibility. While some of these flaws, on their own, may not be serious enough to justify exclusion of the Fixodent Blockade Study; taken together, the Court finds Fixodent Blockade Study is not “good science,” and is not admissible. Daubert, 509 U.S. at 593 (internal quotation marks and citation omitted).”

Id. at *11.

A study, such as the Fixodent Blockade Study, is not itself admissible, but the deconstruction of the study upon which plaintiffs’ expert witnesses relied led directly to the Court’s decision to exclude those witnesses. The Court omitted any reference to Federal Rule of Evidence 703, which addresses the requirements for facts and data, otherwise inadmissible, upon which expert witnesses may rely in reaching their opinions.


 

[1] See “Philadelphia Plaintiff’s Claims Against Fixodent Prove Toothless” (May 2, 2012); Jacoby v. Rite Aid Corp., 2012 Phila. Ct. Com. Pl. LEXIS 208 (2012), aff’d, 93 A.3d 503 (Pa. Super. 2013); “Pennsylvania Superior Court Takes The Bite Out of Fixodent Claims” (Dec. 12, 2013).

[2] See “Using the Rule 45 Subpoena to Obtain Research Data” (July 24, 2013).

[3] The group was identified as the Ethica Norma Ethical Committee.

[4] Citing Wang Dep. at 56:7–25, Aug. 13, 2013, and Wang Analysis of Fixodent Blockade Study [ECF No. 2197–56] (noting “no clear treatment effect on Cu63 or Cu65”).

[5] Askari Dep. at 69:21–24, June 20, 2013.

[6] Copper 65 is not a typical tracer; it is not radioactive. Naturally occurring copper consists almost exclusively of two stable (non-radioactive) isotopes: Cu65, about 31 percent, and Cu63, about 69 percent. See, e.g., Manuel Olivares, Bo Lönnerdal, Steve A Abrams, Fernando Pizarro, and Ricardo Uauy, “Age and copper intake do not affect copper absorption, measured with the use of 65Cu as a tracer, in young infants,” 76 Am. J. Clin. Nutr. 641 (2002); T.D. Lyon, et al., “Use of a stable copper isotope (65Cu) in the differential diagnosis of Wilson’s disease,” 88 Clin. Sci. 727 (1995).

[7] Shah Dep. at 87:12–25; 476:2–536:12; 138:6–142:12, June 5, 2013.

[8] The reported decision leaves unclear how the analysis would proceed, whether by ANOVA for the three groups, or t-tests, and whether there was multiple testing.

[9] Wang Dep. at 151:13–152:7; 153:15–18.

[10] 2015 WL 392021, at *10, citing Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (“A scientist who has a formed opinion as to the answer he is going to find before he even begins his research may be less objective than he needs to be in order to produce reliable scientific results.”); Rink v. Cheminova, Inc., 400 F.3d 1286, 1293 n. 7 (11th Cir.2005) (“In evaluating the reliability of an expert’s method … a district court may properly consider whether the expert’s methodology has been contrived to reach a particular result.” (alteration added)).

 

Zoloft MDL Relieves Matrixx Depression

January 30th, 2015

When the Supreme Court delivered its decision in Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011), a colleague, David Venderbush of Alston & Bird LLP, and I wrote a Washington Legal Foundation Legal Backgrounder, in which we predicted that plaintiffs’ counsel would distort the holding and inflate the dicta of the opinion. Schachtman & Venderbush, “Matrixx Unbounded: High Court’s Ruling Needlessly Complicates Scientific Evidence Principles,” 26 (14) Legal Backgrounder (June 17, 2011)[1]. Our prediction was sadly all too accurate. Not only was the context of Matrixx distorted, but several district courts appeared to adopt the dicta on statistical significance as though they represented the holding of the case[2].

The Matrixx decision, along with the few district court opinions that had embraced its dicta[3], was urged as the basis for denying a defense challenge to the proffered testimony of Dr. Anick Bérard, a Canadian perinatal epidemiologist, in the Zoloft MDL. The trial court, however, correctly discerned several methodological shortcomings and failures, including Dr. Bérard’s reliance upon claims of statistical significance from studies that conducted dozens and hundreds of multiple comparisons. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Plaintiffs (through their Plaintiffs’ Steering Committee (PSC) in the Zoloft MDL) were undaunted and moved for reconsideration, asserting that the MDL trial court had failed to give appropriate weight to the Supreme Court’s decision in Matrixx, and a Third Circuit decision in DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941 (3d Cir. 1990). The MDL trial judge, however, deftly rebuffed the plaintiffs’ use of Matrixx, and their attempt to banish consideration of random error in the interpretation of epidemiologic studies. In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2015 WL 314149 (E.D. Pa. Jan. 23, 2015) (Rufe, J.) (denying PSC’s motion for reconsideration).

In rejecting the motion for reconsideration, the Zoloft MDL trial judge noted that the PSC had previously cited Matrixx, and that the Court had addressed the case in its earlier ruling. 2015 WL 314149, at *2-3. The MDL Court then proceeded to expand upon its earlier ruling, and to explain how Matrixx was largely irrelevant to the Rule 702 context of Pfizer’s challenge to Dr. Bérard. There were, to be sure, some studies with nominally statistically significant results, for some birth defects, among children of mothers who took Zoloft in their first trimester of pregnancy. As Judge Rufe explained, statistical significance, or the lack thereof, was only one item in a fairly long list of methodological deficiencies in Dr. Bérard’s causation opinions:

“The [original] opinion set forth a detailed and multi-faceted rationale for finding Dr. Bérard’s testimony unreliable, including her inattention to the principles of replication and statistical significance, her use of certain principles and methods without demonstrating either that they are recognized by her scientific community or that they should otherwise be considered scientifically valid, the unreliability of conclusions drawn without adequate hypothesis testing, the unreliability of opinions supported by a ‛cherry-picked’ sub-set of research selected because it was supportive of her opinions (without adequately addressing non-supportive findings), and Dr. Bérard’s failure to reconcile her currently expressed opinions with her prior opinions and her published, peer-reviewed research. Taking into account all these factors, as well as others discussed in the Opinion, the Court found that Dr. Bérard departed from well-established epidemiological principles and methods, and that her opinion on human causation must be excluded.”

Id. at *1.

In citing the multiple deficiencies of the proffered expert witness, the Zoloft MDL Court thus put its decision well within the scope of the Third Circuit’s recent precedent affirming the exclusion of Dr. Bennet Omalu, in Pritchard v. Dow Agro Sciences, 430 F. App’x 102, 104 (3d Cir. 2011). The Zoloft MDL Court further defended its ruling by pointing out that it had not created a legal standard requiring statistical significance, but rather had made a factual finding that epidemiologists, such as the challenged witness, Dr. Anick Bérard, would use some measure of statistical significance in reaching conclusions in her discipline of epidemiology. 2015 WL 314149, at *2[4].

On the plaintiffs’ motion for reconsideration, the Zoloft Court revisited the Matrixx case, properly distinguishing the case as a securities fraud case about materiality of non-disclosed information, not about causation. 2015 WL 314149, at *4. Although the MDL Court could and should have identified the Matrixx language as clearly obiter dicta, it did confidently distinguish the Supreme Court holding about pleading materiality from its own task of gatekeeping expert witness testimony on causation in a products liability case:

“Because the facts and procedural posture of the Zoloft MDL are so dissimilar from those presented in Matrixx, this Court reviewed but did not rely upon Matrixx in reaching its decision regarding Dr. Bérard. However, even accepting the PSC’s interpretation of Matrixx, the Court’s Opinion is consistent with that ruling, as the Court reviewed Dr. Bérard’s methodology as a whole, and did not apply a bright-line rule requiring statistically significant findings.”

Id. at *4.

In mounting their challenge to the MDL Court’s earlier ruling, the Zoloft plaintiffs asserted that the Court had failed to credit Dr. Bérard’s reliance upon what she called the “Rothman approach.” This approach, attributed to Professor Kenneth Rothman, had received some attention in the Bendectin litigation in the Third Circuit, where plaintiffs sought to be excused from their failure to show statistically significant associations when claiming causation between maternal use of Bendectin and infant birth defects. DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941 (3d Cir. 1990). The Zoloft MDL Court pointed out that the Circuit, in DeLuca, had never affirmatively endorsed Professor Rothman’s “approach,” but had reversed and remanded the Bendectin case to the district court for a hearing under Rule 702:

“by directing such an overall evaluation, however, we do not mean to reject at this point Merrell Dow’s contention that a showing of a .05 level of statistical significance should be a threshold requirement for any statistical analysis concluding that Bendectin is a teratogen regardless of the presence of other indicia of reliability. That contention will need to be addressed on remand. The root issue it poses is what risk of what type of error the judicial system is willing to tolerate. This is not an easy issue to resolve and one possible resolution is a conclusion that the system should not tolerate any expert opinion rooted in statistical analysis where the results of the underlying studies are not significant at a .05 level.”

2015 WL 314149, at *4 (quoting DeLuca, 911 F.2d at 955). After remand, the district court excluded the DeLuca plaintiffs’ expert witnesses, and granted summary judgment, based upon the dubious methods employed by plaintiffs’ expert witnesses in cherry picking data, recalculating risk ratios in published studies, and ignoring bias and confounding in studies. The Third Circuit affirmed the judgment for Merrell Dow. DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042 (D.N.J. 1992), aff’d, 6 F.3d 778 (3d Cir. 1993).

In the Zoloft MDL, the plaintiffs not only offered an erroneous interpretation of the Third Circuit’s precedents in DeLuca; they also failed to show that the “Rothman” approach had become generally accepted in the more than two decades since DeLuca. 2015 WL 314149, at *4. Indeed, the hearing record was quite muddled about what the “Rothman” approach involved, other than glib, vague suggestions that the approach would have countenanced Dr. Bérard’s selective, over-reaching analysis of the extant epidemiologic studies. The plaintiffs did not call Rothman as an expert witness; nor did they offer any of Rothman’s publications as exhibits at the Zoloft hearing. Although Professor Rothman has criticized the overemphasis upon p-values and significance testing, he has never suggested that researchers and scientists should ignore random error in interpreting research data. Nevertheless, plaintiffs attempted to invoke some vague notion of a Rothman approach that would ignore confidence intervals, attained significance probability, multiplicity, bias, and confounding. Ultimately, the MDL Court would have none of it. The Court held that the Rothman approach (whatever that is), as applied by Dr. Bérard, did not satisfy Rule 702.

The testimony at the Rule 702 hearing on the so-called “Rothman approach” had been sketchy at best. Dr. Bérard protested, perhaps too much, when asked about her having ignored p-values:

“I’m not the only one saying that. It’s really the evolution of the thinking of the importance of statistical significance. One of my professors and also a friend of mine at Harvard, Ken Rothman, actually wrote on it – wrote on the topic. And in his book at the end he says obviously what I just said, validity should not be confused with precision, but the third bullet point, it’s saying that the lack of statistical significance does not invalidate results because sometimes you are in the context of rare events, few cases, few exposed cases, small sample size, exactly – you know even if you start with hundreds of thousands of pregnancies because you are looking at rare events and if you want to stratify by exposure category, well your stratum becomes smaller and smaller and your precision decreases. I’m not the only one saying that. Ken Rothman says it as well, so I’m not different from the others. And if you look at many of the studies published nowadays, they also discuss that as well.”

Notes of Testimony of Dr. Anick Bérard, at 76:21-77:14 (April 9, 2014). See also Notes of Testimony of Dr. Anick Bérard, at 211 (April 11, 2014) (discussing non-statistically significant findings as a “trend,” and asserting that the lack of a significant finding does not mean that there is “no effect”). Bérard’s invocation of Rothman here is accurate but unhelpful. Rothman and Bérard are not alone in insisting that confidence intervals provide a measure of the precision of an estimate, and that we should be careful not to interpret the lack of significance to mean no effect. But the lack of significance cannot be turned around and used to interpret the data as showing an effect.
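The distinction is easy to make concrete. A confidence interval conveys how precisely the data measure an association; a wide interval spanning 1.0 neither proves nor disproves an effect. A minimal sketch in Python, using a purely hypothetical rate ratio and standard error (not taken from any Zoloft study):

```python
import math

rr = 3.1          # hypothetical point estimate of the rate ratio
se_log_rr = 0.65  # assumed standard error on the log scale
z = 1.96          # two-sided 95% confidence coefficient

lower = math.exp(math.log(rr) - z * se_log_rr)
upper = math.exp(math.log(rr) + z * se_log_rr)
print(f"RR = {rr}, 95% CI: ({lower:.2f}, {upper:.2f})")
# RR = 3.1, 95% CI: (0.87, 11.08) -- wide, imprecise, and crossing 1.0:
# not proof of "no effect," but not proof of an effect either.
```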

At the Rule 702 hearing, the PSC tried to bolster Dr. Bérard’s supposed reliance upon the “Rothman approach” in cross-examining Pfizer’s expert witness, Dr. Stephen Kimmel:

“Q. You know who Dr. Rothman is, the epidemiologist?
A. Yes.
Q. You actually took a course from Dr. Rothman, didn’t you?
A. I did when I was a student way back.
Q. He is a well-known epidemiologist, isn’t he?
A. Yes, he is.
Q. He has published this book, Modern Epidemiology. Do you have a copy of this?
A. I do.
Q. Do you – Have you ever read it?
A. I read his earlier edition. I have not read the most recent edition.
Q. There’s two other authors, Sander Greenland and Tim Lash. Do you know either one of them?
A. I know Sander. I don’t know Tim.
Q. Dr. Rothman has some – he has written about confidence intervals and statistical significance for some time, hasn’t he?
A. He has.
Q. Do you agree with him that statistical significance is a not matter of validity. It’s a matter of precision?
A. It’s a matter of – well, confidence intervals are matters of precision. P-values are not.
Q. Okay. I want to put up a table and see if you are in agreement with Dr. Rothman. This is the third edition of Modern Epidemiology. And he has – and ignore my brother’s handwriting. But there is an hypothesized rate ratio under 10-3. It says: p-value function from which one can find all confidence limits for a hypothetical study with a rate ratio estimate of 3.1 Do you see that there?
A. Yes. I don’t see the top of the figure, not that it matters.
Q. I want to make sure. The way I understand this, he is giving us a hypothesis that we have a relative risk of 3.1 and it [presumably a 95% confidence interval] crosses 1, meaning it’s not statistically significant. Is that fair?
A. Well, if you are using a value of .05, yes. And again, if this is a single test and there’s a lot of things that go behind it. But, yes, so this is a total hypothetical.
Q. Yes.
A. I’ sorry. He’s saying here is a hypothetical based on math. And so here is – this is what we would propose.
Q. Yes, I want to highlight what he says about this figure and get your thoughts on it. He says:
The message of figure 10-3 is that the example data are more compatible with a moderate to strong association than with no association, assuming the statistical model used to construct the function is correct.
A. Yes.
Q. Would you agree with that statement?
A. Assuming the statistical model is correct. And the problem is, this is a hypothetical.
Q. Sure. So let’s just assume. So what this means to sort of put some meat on the bone, this means that although we cross 1 and therefore are statistically
significant [sic, non-significant], he says the more likely truth here is that there is a moderate to strong effect rather than no effect?
A. Well, you know he has hypothesized this. This is not used in common methods practice in pharmacoepi. Dr. Rothman has lots of ideas but it’s not part of our standard scientific method.

Notes of Testimony of Dr. Stephen Kimmel, at 126:2 to 128:20.

Nothing very concrete about the “Rothman approach” was put before the MDL Court, either through Dr. Bérard or Dr. Kimmel. There are, however, other instructive aspects to the plaintiffs’ counsel’s examination. First, the referenced portion of the text, Modern Epidemiology, is a discussion of p-value functions, not of p-values or of confidence intervals per se. Modern Epidemiology at 158-59 (3d ed. 2008). Dr. Bérard never discussed p-value functions in her report or in her testimony, and Dr. Kimmel testified, without contradiction, that such p-value functions are “not used in common methods practice.” Second, the plaintiffs’ counsel never marked and offered the Rothman text as an exhibit for the MDL Court to consider. Third, the cross-examiner first asked about the implication for a hypothetical association, and then, when he wanted to “put some meat on the bone,” changed the word used in Rothman’s text, “association,” to “effect.” The word “effect” does not appear in Rothman’s text at the referenced discussion of p-value functions. Fortunately, the MDL Court was not poisoned by the “meat on the bone.”
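For readers curious about what the cross-examination was gesturing at: a p-value function simply reports, for every hypothesized rate ratio, the two-sided p-value that the data yield against that hypothesis. Here is a minimal sketch in Python, using the hypothetical point estimate of 3.1 from the examination, and an assumed standard error chosen so that the 95% interval crosses 1.0; none of these numbers comes from any actual study.

```python
import math
from scipy.stats import norm

rr_hat, se = 3.1, 0.65  # hypothetical estimate and assumed log-scale SE

def p_value(rr_null: float) -> float:
    """Two-sided p-value for the hypothesis that the true RR equals rr_null."""
    z = (math.log(rr_hat) - math.log(rr_null)) / se
    return 2 * norm.sf(abs(z))

for rr0 in (1.0, 2.0, 3.1, 5.0, 10.0):
    print(f"RR0 = {rr0:>4}: p = {p_value(rr0):.3f}")
# RR0 = 1.0: p ~ 0.082; RR0 = 3.1: p = 1.000; RR0 = 10.0: p ~ 0.072
```

The function peaks at the point estimate and shows these hypothetical data roughly as compatible with a rate ratio of 10 as with a rate ratio of 1.0 – Rothman’s point about precision, not a license to convert non-significance into proof of an effect.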

The Pit and the Pendulum

Another document glibly referenced but not provided to the MDL Court was the publication of Sir Austin Bradford Hill’s presidential address to the Royal Society of Medicine on causation. The MDL Court acknowledged that the PSC had argued that the emphasis upon statistical significance was contrary to Hill’s work and teaching. 2015 WL 314149, at *5. In the Court’s words:

“the PSC argues that the Court’s finding regarding the importance of statistical significance in the field of epidemiology is inconsistent with the work of Bradford Hill. The PSC points to a 1965 address by Sir Austin Bradford Hill, which it has not previously presented to the Court, except in opening statements of the Daubert hearings. The PSC failed to put forth evidence establishing that Bradford Hill’s statement that ‛I wonder whether the pendulum has not swung too far [in requiring statistical significance before drawing conclusions]’ has, in the decades since that 1965 address, altered the importance of statistical significance to scientists in the field of epidemiology.”

Id. This failure, identified by the Court, is hardly surprising. The snippet of a quotation from Hill would not sustain the plaintiffs’ sweeping generalization. The quoted language in context may help to explain why Hill’s paper was not provided:

“I wonder whether the pendulum has not swung too far – not only with the attentive pupils but even with the statisticians themselves. To decline to draw conclusions without standard errors can surely be just as silly? Fortunately I believe we have not yet gone so far as our friends in the USA where, I am told, some editors of journals will return an article because tests of significance have not been applied. Yet there are innumerable situations in which they are totally unnecessary – because the difference is grotesquely obvious, because it is negligible, or because, whether it be formally significant or not, it is too small to be of any practical importance. What is worse the glitter of the t table diverts attention from the inadequacies of the fare. Only a tithe, and an unknown tithe, of the factory personnel volunteer for some procedure or interview, 20% of patients treated in some particular way are lost to sight, 30% of a randomly-drawn sample are never contacted. The sample may, indeed, be akin to that of the man who, according to Swift, ‘had a mind to sell his house and carried a piece of brick in his pocket, which he showed as a pattern to encourage purchasers.’ The writer, the editor and the reader are unmoved. The magic formulae are there.”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 299 (1965).

In the Zoloft cases, no expert witness was prepared to state that the disparity was “grotesquely obvious,” or “negligible.” And Bradford Hill’s larger point was that bias and confounding often dwarf considerations of random error, and that there are many instances in which significance testing is unavailing or unhelpful. And in some studies, with large “effect sizes,” statistical significance testing may be beside the point.

Hill’s presidential address to the Royal Society of Medicine commemorated his successes in epidemiology, and we need only turn to Hill’s own work to see how prevalent was his use of measurements of significance probability. See, e.g., Richard Doll & Austin Bradford Hill, “Smoking and Carcinoma of the Lung: Preliminary Report,” Brit. Med. J. 740 (Sept. 30, 1950); Medical Research Council, “Streptomycin Treatment of Pulmonary Tuberculosis,” Brit. Med. J. 769 (Oct. 30, 1948).

Considering the misdirection on Rothman and on Hill, the Zoloft MDL Court did an admirable job in unraveling the Matrixx trap set by counsel. The Court insisted upon parsing the Bradford Hill factors[5], over Pfizer’s objection, despite the plaintiffs’ failure to show “an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” which Bradford Hill insisted was the prerequisite for the exploration of the nine factors he set out in his classic paper. Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). Given the outcome, the Court’s questionable indulgence of plaintiffs’ position was ultimately harmless.


[1] See also “The Matrixx – A Comedy of Errors,” and “Matrixx Unloaded” (Mar. 29, 2011), “The Matrixx Oversold,” and “De-Zincing the Matrixx.”

[2] See “Siracusano Dicta Infects Daubert Decisions” (Sept. 22, 2012).

[3] See, e.g., In re Chantix (Varenicline) Prods. Liab. Litig., 2012 U.S. Dist. LEXIS 130144, at *22 (N.D. Ala. 2012); Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012); In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780 (E.D. Mo. 2013).

[4] The Court’s reasoning on this point begged the question whether an ordinary clinician, ignorant of the standards, requirements, and niceties of statistical reasoning and inference, would be allowed to testify, unconstrained by any principled epidemiologic reasoning about random or systematic error. It is hard to imagine that Rule 702 would countenance such an end-run around the requirements of sound science.

[5] Adhering to Bradford Hill’s own admonition might have saved the Court the confusion of describing statistical significance as a measure of strength of association. 2015 WL 314149, at *2.

The Lie Detector and Wonder Woman – Quirks and Quacks of Legal History

January 27th, 2015

From 1923, until the United States Supreme Court decided the Daubert case in 1993, Frye was cited as “controlling authority” on questions of the admissibility of scientific opinion testimony and test results. The decision is infuriatingly cryptic and unhelpful as to the background or context of the specific case, as well as to how it might be applied to future controversies. Of the opinion’s 669 words, these are the ones typically cited as the guiding “rule” with respect to expert witness opinion testimony:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”

Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923).

As most scholars of evidence realize, the back story of the Frye case is rich and bizarre. The expert witness involved, William Marston, was a lawyer and scientist, who had made advances in a systolic blood pressure cuff to be used as a “lie detector.” Marston was also an advocate of free love and, with his wife and his mistress, the inventor of Wonder Woman and her lasso of truth.

Jill Lepore, a professor of history at Harvard University, has written an historical account of Marston and his colleagues. Jill Lepore, The Secret History of Wonder Woman (N.Y. 2014). More recently, Lepore has written an important law review article on the historical and legal record of the Frye case, which is concealed in the terse 669 words of the Court of Appeals’ opinion. Jill Lepore, “On Evidence: Proving Frye as a Matter of Law, Science, and History,” 124 Yale L.J. 1092 (2015).

Lepore’s history is an important gloss on the Frye case, but her paper points to a larger, more prevalent, chronic problem in the law, one that especially afflicts judicial decisions of scientific or technical issues. As an historian, Lepore is troubled, as we all should be, by the censoring, selecting, suppressing, and distorting of facts that go into judicial decisions. From cases and their holdings, lawyers are taught to infer rules that guide their conduct at the bar, and their clients’ conduct and expectations, but everyone requires fair access to the evidence to determine what facts are material to decision.

As Professor Lepore puts it:

“Marston is missing from Frye because the law of evidence, case law, the case method, and the conventions of legal scholarship — together, and relentlessly — hide facts.”

Id. at 1097. Generalizing from Marston and the Frye case, Lepore notes that:

“Case law is like that, too, except that it doesn’t only fail to notice details; it conceals them.”

Id. at 1099.

Lepore documents that Marston’s psychological research was rife with cherry picking and data dredging. Id. at 1113-14. Despite his degree magna cum laude in philosophy from Harvard College, his LL.B. from Harvard Law School (with no particular distinction), and his Ph.D. from Harvard University, Marston was not a rigorous scientist. In exploring the historical and legal record not recounted in the Frye decision, Lepore’s history provides a wonderful early example of what has become a familiar phenomenon of modern litigation: an expert witness who seeks to achieve acceptance for a dubious opinion or device in the courtroom rather than in the court of scientific opinion. Id. at 1122. The trial judge in Frye’s murder case, Justice McCoy, was an astute judge, quite modest about his ability to evaluate the validity of Marston’s opinions, but he had more than sufficient perspicacity to discern that Marston’s investigation was “wildly unscientific,” with no control groups. Id. at 1135. The trial record of defense counsel’s proffer, and Justice McCoy’s rulings and comments from the bench, reproduced in Lepore’s article, anticipate and predict much of the scholarship surrounding both the Frye and Daubert cases.

Lepore complains that the important historical record, including Marston’s correspondence with Professor Wigmore, the criminal fraud charges against Marston, and the correspondence of Frye’s lawyers, lies “filed, undigitized” in various archives. Id. at 1150. Although Professor Lepore tirelessly cites to internet URL sources when available, she could have easily made the primary historical materials available for all, using readily available internet technology. Lepore’s main thesis should encourage lawyers and law scholars to look beyond appellate decisions as the data for legal analysis.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.