TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Peer Review, Protocols, and QRPs

April 3rd, 2024

In Daubert, the Supreme Court decided a legal question about the proper interpretation of a statute, Rule 702, and then remanded the case to the Court of Appeals for the Ninth Circuit for further proceedings. The Court did, however, weigh in with dicta about several considerations in admissibility decisions. In particular, the Court identified four non-dispositive factors: whether the challenged opinion has been empirically tested, whether it has been published and peer reviewed, and whether the underlying scientific technique or method supporting the opinion has an acceptable rate of error and has gained general acceptance.[1]

The context in which peer review was discussed in Daubert is of some importance to understanding why the Court held peer review out as a consideration. One of the bases for the defense challenges to some of the plaintiffs’ expert witnesses’ opinions in Daubert was their reliance upon re-analyses of published studies to suggest that there was indeed an increased risk of birth defects, if only the publication authors had used some other control group or taken some other analytical approach. Re-analyses can be important, but these re-analyses of published Bendectin studies were post hoc, litigation-driven, and obviously result-oriented. The Court’s discussion of peer review reveals that it was not simply creating a box to be checked before a trial court could admit an expert witness’s opinions. Peer review was suggested as a consideration because:

“submission to the scrutiny of the scientific community is a component of ‘good science,’ in part because it increases the likelihood that substantive flaws in methodology will be detected. The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”[2]

Peer review, or the lack thereof, for the challenged expert witnesses’ re-analyses was called out because it raised suspicions of a lack of validity. Nothing in Daubert, in later decisions, or, more importantly, in Rule 702 itself supports admitting expert witness testimony just because the witness relied upon peer-reviewed studies, especially when the studies are invalid or are based upon questionable research practices. The Court was careful to point out that peer-reviewed publication was “not a sine qua non of admissibility; it does not necessarily correlate with reliability, … .”[3] The Court thus showed that it was well aware that well-grounded (and thus admissible) opinions may not have been previously published, and that the existence of peer review was simply a potential aid in answering the essential question, whether the proponent of a proffered opinion has shown “the scientific validity of a particular technique or methodology on which an opinion is premised.”[4]

Since 1993, much has changed in the world of bio-science publishing. The wild proliferation of journals, including predatory and “pay-to-play” journals, has disabused most observers of the notion that peer review provides evidence of the validity of methods. Along with the exponential growth in publications has come an exponential growth in expressions of concern and outright retractions of articles, as chronicled and detailed at Retraction Watch.[5] Some journals encourage authors to nominate the peer reviewers for their manuscripts; some journals let authors block some scientists as peer reviewers of their submitted manuscripts. If the Supreme Court were writing today, it might well note that peer review is often a feature of bad science, advanced by scientists who know that peer-reviewed publication is the price of admission to the advocacy arena.

Since the Supreme Court decided Daubert, the Federal Judicial Center and the National Academies of Sciences have provided the Reference Manual on Scientific Evidence, now in its third edition, and with a fourth edition on the horizon, to assist judges and lawyers involved in the litigation of scientific issues. Professor Goodstein, in his chapter “How Science Works,” in the third edition, provides the most extensive discussion of peer review in the Manual, and emphasizes that peer review “works very poorly in catching cheating or fraud.”[6] Goodstein invokes his own experience as a peer reviewer to note that “peer review referees and editors limit their assessment of submitted articles to such matters as style, plausibility, and defensibility; they do not duplicate experiments from scratch or plow through reams of computer-generated data in order to guarantee accuracy or veracity or certainty.”[7] Indeed, Goodstein’s essay in the Reference Manual characterizes the ability of peer review to warrant study validity as a “myth”:

Myth: The institution of peer review assures that all published papers are sound and dependable.

Fact: Peer review generally will catch something that is completely out of step with majority thinking at the time, but it is practically useless for catching outright fraud, and it is not very good at dealing with truly novel ideas. … It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.[8]

Goodstein’s experience as a peer reviewer is hardly idiosyncratic. One standard text on the ethical conduct of research reports that peer review is often ineffective or incompetent, and that it may not even catch simple statistical or methodological errors.[9] According to the authors, Shamoo and Resnik:

“[p]eer review is not good at detecting data fabrication or falsification partly because reviewers usually do not have access to the material they would need to detect fraud, such as the original data, protocols, and standard operating procedures.”[10]

Indeed, without access to protocols, statistical analysis plans, and original data, peer review often cannot identify good-faith or negligent deviations from the standard of scientific care. Some evidence supporting this negative assessment of peer review comes from testing the counterfactual: reviewers were able to detect questionable, selective reporting when they had access to the study authors’ research protocols.[11]

Study Protocol

The study protocol provides the scientific rationale for a study; it clearly defines the research question, the data collection process, and the key exposure and outcome variables; and it describes the methods to be applied, all before data collection commences.[12] The protocol also typically pre-specifies the statistical data analysis. The epidemiology chapter of the current edition of the Reference Manual on Scientific Evidence offers only the bland observation that epidemiologists attempt to minimize bias in observational studies with “data collection protocols.”[13] Epidemiologists and statisticians are much clearer in emphasizing the importance, indeed the necessity, of having a study protocol before commencing data collection. Back in 1988, John Bailar and Frederick Mosteller explained that it was critical, in reporting statistical analyses, to inform readers about how and when the authors devised the study design, and whether they set the design criteria out in writing before they began to collect data.[14]

The necessity of a study protocol is “self-evident,”[15] and essential to research integrity.[16] The International Society for Pharmacoepidemiology has issued Guidelines for “Good Pharmacoepidemiology Practices,”[17] which call for every study to have a written protocol. Among the requirements set out in these guidelines are descriptions of the research method, the study design, operational definitions of the exposure and outcome variables, and the projected study sample size. The Guidelines provide that a detailed statistical analysis plan may be specified after data collection begins, but before any analysis commences.

Expert witness opinions on health effects are built upon studies, and so it behooves legal counsel to identify the methodological strengths and weaknesses of key studies through questioning whether they have protocols, whether the protocols were methodologically appropriate, and whether the researchers faithfully followed their protocols and their statistical analysis plans. Determining the peer review status of a publication, on the other hand, will often not advance a challenge based upon improvident methodology.

In some instances, a published study will have sufficiently detailed descriptions of methods and data that readers, even lawyers, can evaluate their scientific validity or reliability (vel non). In some cases, however, readers will be no better off than the peer reviewers who were deprived of access to protocols, statistical analysis plans, and original data. When a particular study is crucial support for an adversary’s expert witness, a reasonable litigation goal may well be to obtain the protocol and statistical analysis plan, and if need be, the original underlying data. The decision to undertake such discovery is difficult. Discovery of non-party scientists can be expensive and protracted; it will almost certainly be contentious. When expert witnesses rely upon one or a few studies, which telegraph internal validity, this litigation strategy may provide the strongest evidence against the study’s being reasonably relied upon, or its providing “sufficient facts and data” to support an admissible expert witness opinion.


[1] Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 593-594 (1993).

[2] Id. at 594 (internal citations omitted) (emphasis added).

[3] Id.

[4] Id. at 593-94.

[5] Retraction Watch, at https://retractionwatch.com/.

[6] Reference Manual on Scientific Evidence at 37, 44-45 (3rd ed. 2011) [Manual].

[7] Id. at 44-45 n.11.

[8] Id. at 48 (emphasis added).

[9] Adil E. Shamoo and David B. Resnik, Responsible Conduct of Research 133 (4th ed. 2022).

[10] Id.

[11] An-Wen Chan, Asbjørn Hróbjartsson, Mette T. Haahr, Peter C. Gøtzsche, and David G. Altman, “Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles,” 291 J. Am. Med. Ass’n 2457 (2004).

[12] Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 477 (2nd ed. 2014).

[13] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 573 (3rd ed. 2011) (“Study designs are developed before they begin gathering data.”).

[14] John Bailar & Frederick Mosteller, “Guidelines for Statistical Reporting in Articles for Medical Journals,” 108 Ann. Intern. Med. 266, 268 (1988).

[15] Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 477 (2nd ed. 2014).

[16] Sandra Alba, et al., “Bridging research integrity and global health epidemiology statement: guidelines for good epidemiological practice,” 5 BMJ Global Health e003236, at p.3 & passim (2020).

[17] See “The ISPE Guidelines for Good Pharmacoepidemiology Practices (GPP),” available at <https://www.pharmacoepi.org/resources/policies/guidelines-08027/>.

Doctor Moline – Why Can’t You Be True?

December 18th, 2022

Doctor Moline, why can’t you be true?

Oh, Doc Moline, why can’t you be true?

You done started doing the things you used to do.

Mass torts are the product of the lawsuit industry, and since the 1960s, this industry has produced tort claims on a truly industrial scale. The industry now has an economic ally and adjunct in the litigation finance industry, and it has been boosted by the desuetude of laws against champerty and maintenance. The way that mass torts are adjudicated in some places could easily be interpreted as legalized theft.

One governor on the rapaciousness of the lawsuit industry has been the requirement that claims actually be proven in court. Since the Supreme Court’s ruling in Daubert, the defense bar has been able, on notable occasions, to squelch some instances of false claiming. Just as equity often varies with the length of the Chancellor’s foot, gatekeeping of scientific opinion about causation often varies with the scientific acumen of the trial judge. From the decision in Daubert itself, gatekeeping has been under assault from the lawsuit industry and its allies. I have, in these pages, detailed the efforts of the now defunct Project on Scientific Knowledge and Public Policy (SKAPP) to undermine any gatekeeping of scientific opinion testimony for scientific or statistical validity. SKAPP, as well as other organizations, and some academics, in aid of the lawsuit industry, have lobbied for the abandonment of the requirement of proving causation, or for the dilution of the scientific standards for expert opinions of causation.[1] The counter to this advocacy has been, and continues to be, an insistence that the traditional elements of a case, including general and specific causation, be sufficiently proven, with opinion testimony that satisfies the legal knowledge requirement for such testimony.

Alas, expert witness testimony can go awry in other ways besides merely failing to satisfy the validity and relevance requirements of the law of evidence.[2] One remedy I had not previously contemplated is suing the offending expert witness for defamation or “product disparagement.”

It is now half a century since occupational exposures to various asbestos fibers came under general federal regulatory control, with regulatory requirements that employers warn their employees about the hazards involved with asbestos exposure. This federally enforced dissemination of information about asbestos hazards created a significant problem for the asbestos lawsuit industry. Cases of mesothelioma have always occurred among persons non-occupationally exposed to asbestos, but as occupational exposure declined, the relative proportion of mesothelioma cases with no obvious occupational exposures increased. The lawsuit industry could not stand around and let these tragic cases go to waste.

Cosmetic talc variably contains some mineral particulate that comes under the category of “elongate mineral particles” (EMP), which the lawsuit industry could assert is “asbestos.” As a result, this industry has been able to reprise asbestos litigation as a new morality tale against cosmetic talc producers and sellers. LTL Management LLC was formerly known as Johnson & Johnson Consumer Inc. [J&J], a manufacturer and seller of cosmetic talc. J&J became a major target of the lawsuit industry in mesothelioma (and ovarian cancer) cases, based upon claims that EMP/asbestos in cosmetic talc caused the claimants’ cancers. The lawsuit industry recruited its usual retinue of expert witnesses to support its litigation efforts.

Standing out in this retinue was Dr. Jacqueline Moline. On December 16, J&J did something that rarely happens in the world of mass torts; it sued Dr. Moline for fraud, injurious falsehood and product disparagement, and violations of the Lanham Act (§ 43(a), 15 U.S.C. § 1125(a)).[3] The gravamen of the complaint is that Dr. Moline, in 2020, published a case series of 33 persons who supposedly used cosmetic talc products and later developed malignant mesothelioma. According to her article, the 33 patients had no other exposures to asbestos, which, she concluded, showed that cosmetic talc use can cause mesothelioma:

“Objective: To describe 33 cases of malignant mesothelioma among individuals with no known asbestos exposure other than cosmetic talcum powder.

Methods: Cases were referred for medico-legal evaluation, and tissue digestions were performed in some cases. Tissue digestion for the six cases described was done according to standard methodology.

Results: Asbestos of the type found in talcum powder was found in all six cases evaluated. Talcum powder usage was the only source of asbestos for all 33 cases.

Conclusions: Exposure to asbestos-contaminated talcum powders can cause mesothelioma. Clinicians should elicit a history of talcum powder usage in all patients presenting with mesothelioma.”[4]

Jacqueline Moline and Ronald Gordon both gave anemic conflicts disclosures: “Authors J.M. and R.G. have served as expert witnesses in asbestos litigation, including talc litigation for plaintiffs.”[5] Co-author Maya Alexandri was a lawyer at the time of publication; she is now a physician practicing emergency medicine, and also a fabulist. The article does not disclose the nature of Dr. Alexandri’s legal practice.

Dr. Moline is a professor and chair of occupational medicine at the Zucker School of Medicine at Hofstra/Northwell. She received her medical degree from the University of Chicago-Pritzker School of Medicine and a Master of Science degree in community medicine from the Mount Sinai School of Medicine. She completed a residency in internal medicine at Yale New Haven Hospital and an occupational and environmental medicine residency at Mount Sinai Medical Center. Dr. Moline is also a major-league testifier for the lawsuit industry.  Over the last quarter century, she has testified from sea to shining sea, for plaintiffs in asbestos, talc, and other litigations.[6]

According to J&J, Dr. Moline has been listed as an expert witness for plaintiffs in over 200 talc mesothelioma cases against J&J. There are, of course, other target defendants in this litigation, and the actual case count is likely higher. Moline has given testimony in 46 talc cases against J&J, and has testified at trial in 16 of those cases.[7] J&J estimates that she has made millions of dollars in service of the lawsuit industry.[8]

The authors’ own description of the manuscript makes clear the concern over the validity of personal and occupational histories of the 33 cases: “This manuscript is the first to describe mesothelioma among talcum powder consumers. Our case study suggest [sic] that cosmetic talcum powder use may help explain the high prevalence of idiopathic mesothelioma cases, particularly among women, and stresses the need for improved exposure history elicitation among physicians.”[9]

The Complaint alleges that Moline knew that her article, testimony, and public statements about the absence of occupational asbestos exposure in the subjects of her case series were false. After having her testimony either excluded by trial courts or held on appeal to be legally insufficient,[10] Moline set out to have a peer-reviewed publication that would support her claims. Because mesothelioma is sometimes considered, uncritically, as pathognomonic of amphibole asbestos exposure, Moline was obviously keen to establish the absence of occupational exposure in any of the 33 cases.

Alas, the truth appears to have caught up with Moline because some of the 33 cases were in litigation, in which the detailed histories of each case would be discovered. Defense counsel sought to connect the dots between the details of each of the 33 cases and the details of pending or past lawsuits. The federal district court decision in the case of Bell v. American International Industries blew open the doors of Moline’s alleged fraud.[11] Betty Bell claimed that her use of cosmetic talc had caused her to develop mesothelioma. What Dr. Moline and Bell’s counsel were bound to have known was that Bell had had occupational exposure to asbestos. Before filing a civil action against talc product suppliers, Bell had filed workers’ compensation claims against two textile industry employers.[12] Judge Osteen’s opinion in Bell documents the anxious zeal that plaintiffs’ counsel brought to bear in trying to suppress the true nature of Ms. Bell’s exposure. After Judge Osteen excoriated Moline and plaintiffs’ counsel for their efforts to conceal information about Bell’s occupational asbestos exposures, and about her inclusion in the 33-case series, plaintiffs’ counsel dismissed her case.

Another of the 33 cases was the New Jersey case brought by Stephen Lanzo, for whom Moline testified as an expert witness.[13] In the course of the Lanzo case, the defense developed facts showing Mr. Lanzo’s prior asbestos exposure. Crocidolite fibers were found in his body, even though the amphibole crocidolite is not a fiber type found in talc. Crocidolite is orders of magnitude more potent in causing human mesotheliomas than other asbestos fiber types.[14] Despite these facts, Dr. Moline appears to have included Lanzo as one of the 33 cases in her article.

And then there were others, too.


[1] See “Skappology” (May 26, 2020); “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).

[2] See, e.g., “Legal Remedies for Suspect Medical Science in Products Cases – Part One” (June 2, 2020); “Part Two” (June 3, 2020); “Part Three” (June 5, 2020); “Part 4” (June 7, 2020); “Part 5” (June 8, 2020).

[3] LTL Management LLC v. Dr. Jacqueline Miriam Moline, Adv. Proc. No. 22- ____, in Chap. 11, Case No. 21-30589, Bankruptcy Ct., D.N.J. (Dec. 16, 2022) [Complaint]

[4] Jacqueline Moline, Kristin Bevilacqua, Maya Alexandri, and Ronald E. Gordon, “Mesothelioma Associated with the Use of Cosmetic Talc,” 62 J. Occup. & Envt’l Med. 11 (Jan. 2020) (emphasis added) [cited as Moline]

[5] Dr. Gordon has had other litigation activities of interest. See William C. Rempel, “Alleged Mob Case May Best Illustrate How Not to Play the Game : Crime: Scheme started in a Texas jail and ended with reputed mobsters charged in $30-million laundering scam,” L.A. Times (July 4, 1993).

[6] See, e.g., Fowler v. Akzo Nobel Chemicals, Inc., 251 N.J. 300, 276 A. 3d 1146 (2022); Lanzo v. Cyprus Amax Minerals Co., 467 N.J. Super. 476, 254 A.3d 691 (App. Div. 2021); Fishbain v. Colgate-Palmolive Co., No. A-1786-15T2 (N.J. App. Div. 2019); Buttitta v. Allied Signal, Inc., N.J. App. Div. (2017); Kaenzig v. Charles B. Chrystal Co., N.J. App. Div. (2015); Anderson v. A.J. Friedman Supply Co., 416 N.J. Super. 46, 3 A.3d 545 (App. Div. 2010); Cioni v. Avon Prods., Inc., 2022 NY Slip Op 33197(U) (2022); Zicklin v. Bergdorf Goodman Inc., 2022 NY Slip Op 32119(U) (N.Y.Sup. N.Y. Cty. 2022); Nemeth v. Brenntag North America, 183 A.D.3d 211, 123 N.Y.S.3d 12 (2020), rev’d, 38 N.Y.3d 336, 345 (2022) (Moline’s testimony insufficient); Olson v. Brenntag North America, Inc., 2020 NY Slip Op 33741(U) (N.Y.Sup. N.Y. Cty. 2020), rev’d, 207 A.D.3d 415, 416 (N.Y. 1st Dep’t 2022) (holding Moline’s testimony on causation insufficient); Moldow v. A.I. Friedman, L.P., 2019 NY Slip Op 32060(U) (N.Y.Sup. N.Y. Cty. 2019); Zoas v. BASF Catalysts, LLC, 2018 NY Slip Op 33009(U) (N.Y.Sup. N.Y. Cty. 2018); Prokocimer v. Avon Prods., Inc., 2018 NY Slip Op 33170(U) (Dec. 11, 2018); Shulman v. Brenntag North America, Inc., 2018 NY Slip Op 32943(U) (N.Y.Sup. N.Y. Cty. 2018); Pistone v. American Biltrite, Inc., 2018 NY Slip Op 30851(U) (2018); Evans v. 3M Co., 2017 NY Slip Op 30756(U) (N.Y.Sup. N.Y. Cty. 2017); Juni v. A.O. Smith Water Prods., 48 Misc.3d 460, 11 N.Y.S.3d 416 (2015), aff’d, 32 N.Y.3d 1116, 116 N.E.3d 75, 91 N.Y.S.3d 784 (2018); Konstantin v. 630 Third Ave. Associates, 121 A.D. 3d 230, 990 N.Y.S. 2d 174 (2014); Lopez v. Gem Gravure Co., 50 A.D.3d 1102, 858 N.Y.S.2d 226 (2008); Lopez v. Superflex, Ltd., 31 A.D. 3d 914, 819 N.Y.S. 2d 165 (2006); DeMeyer v. Advantage Auto, 9 Misc. 3d 306, 797 N.Y.S.2d 743 (2005); Amorgianos v. National RR Passenger Corp., 137 F. Supp. 2d 147 (E.D.N.Y. 2001), aff’d, 303 F. 3d 256 (2d Cir. 2002); Chapp v. Colgate-Palmolive Co., 2019 Wisc. App. 54, 935 N.W.2d 553 (2019); McNeal v. Whittaker, Clark & Daniels, Inc., 80 Cal. App. 5th 853 (2022); Burnett v. American Internat’l Indus., Case No. 3:20-CV-3046 (W.D. Ark. Jan. 27, 2022); McAllister v. McDermott, Inc., Civ. Action No. 18-361-SDD-RLB (M.D.La. Aug. 14, 2020); Hanson v. Colgate-Palmolive Co., 353 F. Supp. 3d 1273 (S.D. Ga. 2018); Norman-Bloodsaw v. Lawrence Berkeley Laboratory, 135 F. 3d 1260 (9th Cir. 1998); Carroll v. Akebono Brake Corp., 514 P. 3d 720 (Wash. App. 2022).

[7] Complaint ¶15.

[8] Complaint ¶19.

[9] Moline at 11.

[10] See, e.g., In re New York City Asbestos Litig. (Juni), 148 A.D.3d 233, 236-37, 239 (N.Y. App. Div. 1st Dep’t 2017), aff’d, 32 N.Y.3d 1116, 1122 (2018); Nemeth v. Brenntag North America, 183 A.D.3d 211, 123 N.Y.S.3d 12 (N.Y. App. Div. 2020), rev’d, 38 N.Y.3d 336, 345 (2022); Olson v. Brenntag North America, Inc., 2020 NY Slip Op 33741(U) (N.Y.Sup. Ct. N.Y. Cty. 2020), rev’d, 207 A.D.3d 415, 416 (N.Y. App. Div. 1st Dep’t 2022).

[11] Bell v. American Internat’l Indus. et al., No. 1:17-CV-00111, 2022 U.S. Dist. LEXIS 199180 (M.D.N.C. Sept. 13, 2022) (William Lindsay Osteen, Jr., J.). See Daniel Fisher, “Key talc/cancer study cited by plaintiffs hid evidence of other exposure, lawyers say” (Dec. 1, 2022).

[12] According to the Complaint against Moline, Bell had filed workers’ compensation claims with the North Carolina Industrial Commission, back in 2015, declaring under oath that she had been exposed to asbestos while working for two textile manufacturing employers, Hoechst Celanese Corporation and Pillowtex Corporation. Complaint at ¶ 102. As frequently happens in civil actions, the claimant dismisses the workers’ compensation claim without prejudice, to pursue the more lucrative payday in a civil action, without the burden of employers’ liens against the recovery. Complaint at ¶ 102.

[13] See “New Jersey Appellate Division Calls for Do-Over in Baby Powder Dust Up” (May 22, 2021).

[14] David H. Garabrant & Susan T. Pastula, “A comparison of asbestos fiber potency and elongate mineral particle (EMP) potency for mesothelioma in humans,” 361 Toxicology & Applied Pharmacol. 127 (2018) (“relative potency of chrysotile:amosite:crocidolite was 1:83:376”). See also D. Wayne Berman & Kenny S. Crump, “Update of Potency Factors for Asbestos-Related Lung Cancer and Mesothelioma,” 38(S1) Critical Reviews in Toxicology 1 (2008).

The Knowledge Remedy Proposal

November 14th, 2020

Alexandra D. Lahav is the Ellen Ash Peters Professor of Law at the University of Connecticut School of Law. This year’s symposium issue of the Texas Law Review has published Professor Lahav’s article, “The Knowledge Remedy,” which calls for imposing upon defendants a duty to conduct studies, to provide evidence relevant to plaintiffs’ product liability claims. Alexandra D. Lahav, “The Knowledge Remedy,” 98 Texas L. Rev. 1361 (2020) [cited as Lahav].

Professor Lahav’s advocated reform is based upon the premises that (1) the requisite studies needed for causal assessment “are too costly for plaintiffs to fund,” (2) are not done by manufacturers, or (3) are not done in good faith, and (4) are not conducted or adequately funded by government. Lahav believes that plaintiffs are injured by exposure to chemicals but they cannot establish causation in court because the defendant “hid its head in the sand,” or worse, “engaged in misconduct to prevent or hide research into its products.”[1] Lahav thus argues that when defendants have been found to have engaged in misconduct, courts should order them to fund studies into risks posed by their products.

Lahav’s claims are either empty or non-factual. The suggestion that plaintiffs are injured by products but cannot “prove” causation begs the question how she knows that these people were injured by the products at issue. In law professors’ language, Lahav has committed the fallacy of petitio principii.

Lahav’s poor-mouthing on behalf of claimants is factually unsupported in this article. Lahav tells us that:

“studies are too expensive for individuals or even groups to fund.”

This assertion is never backed up with any data or evidence about the expense involved. Case-control studies for rare outcomes suffer from potential threats to their validity, but they can be assembled relatively quickly and inexpensively. Perhaps a more dramatic refutation of Lahav’s assertions comes from the cohort studies done in administrative databases, such as the national healthcare databases of Denmark or Sweden, or the Veterans’ Administration database in the United States. These studies involve querying existing databases for the exposures and outcomes of interest, with appropriate controls; such studies are frequently of as high quality and validity as can be had in observational analytical epidemiology.

There are, of course, examples of corporate defendants’ misconduct in sponsoring or conducting studies. There is also evidence of misconduct in plaintiffs’ sponsorship of studies,[2] and outright fraud.[3] And certainly there is evidence of misconduct or misdirection in governmentally funded and sponsored research, sometimes done in cahoots with plaintiffs’ counsel.[4]

Perhaps more important for the intended audience of the Texas Law Review, Lahav’s assertion is demonstrably false. Plaintiffs, plaintiffs’ counsel, and plaintiffs’ advocacy groups have funded studies, often surreptitiously, in many litigations, including those involving claims of harm from Bair Hugger, asbestos, silicone gel breast implants, welding fume, Zofran, isotretinoin, and others. Lahav’s repetition of the claim does not make it true.[5] Plaintiffs and their proxies, including scientific advocates, can and do conduct studies, very much with a view toward supporting litigation claims. Mass tort litigation is a big business, often run by lawyer oligarchs of the plaintiffs’ bar. Ignorantia facti is not an excuse for someone who argues for a radical re-ordering of an already fragile litigation system.

Lahav also complains that studies take so long that the statute of limitations will run on the injury claims before the scientific studies can be completed. There is a germ of truth in this complaint, but the issue could be resolved with minor procedural modifications. Plaintiffs could be allowed a procedure to propound a simple interrogatory to manufacturing firms to ask whether they believe that causality exists between their product and a specific kind of harm, or whether a claimant should reasonably know that such causality exists to warrant pursuing a legal claim. If the manufacturers answer in the negative, then the firms would not be able to assert a limitations defense for any injury that arose on or before the date of its answer. Perhaps the court could allow the matter to stay on its docket and require that the defendant answer the question annually. Plaintiffs and their proxies would be able to sponsor studies necessary to support their claims, and putative defendants would be on notice that such studies are underway.

Without any serious consideration of the extant regulations, Lahav even extends her claims of inadequate testing and lax regulation to pharmaceutical products, which are subject to extensive requirements of showing safety and efficacy, both before and after approval for marketing. Lahav’s advocacy ignores that an individual epidemiologic study rarely “demonstrates” causation, and that many such studies are required before the scientific community can accept a causal hypothesis as “proven.” Lahav’s knowledge remedy is mostly an ignorance ruse.


[1]  Lahav at 1361.

[2]  For a recent, egregious example, see In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) (uncovering dark data and dark money behind April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019)). See also In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657);  “April Fool – Zambelli-Weiner Must Disclose” (April 2, 2020); “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[3]  See Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) (emphasis added).

[4]  See, e.g., Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63, 65 (2005).

[5]  Lahav at 1369-70.

Hacking at the “A” Cell

November 10th, 2020

At the heart of epidemiologic studies and clinical trials is the contingency table. The term “contingency table” was introduced by Karl Pearson in the early 20th century as a way to explore the independence, vel non, of variables in a multivariate model. The simplest version of the table is the “2 by 2” table that is at the heart of case-control and other studies:

                               Cases                        Controls
                               (with outcome of interest)   (without outcome of interest)   Marginal total
Exposure of interest present   A                            B                               A + B (all exposed)
Exposure of interest absent    C                            D                               C + D (all non-exposed)
Marginal total                 A + C (all cases)            B + D (all controls)            A + B + C + D (total observed in study)

A measure of association between the exposure of interest and the outcome of interest can be shown in the odds ratio (OR), which can be assessed for random error on the assumption of no association.

OR = (A/C) / (B/D) = (A × D) / (B × C)

The validity of the OR turns on faithfully applying the same criteria for classifying exposure and outcome, regardless of which cell a subject would otherwise occupy. When investigators expand the “A” cell by loosening their criteria for exposure among the cases, we say that they have engaged in “hacking the A cell.”
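A short numerical sketch in Python shows the arithmetic. The counts below are invented solely for illustration; they come from no study discussed here.

def odds_ratio(a, b, c, d):
    """Return the odds ratio for a 2x2 table.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Balanced, hypothetical counts: no association.
print(odds_ratio(10, 20, 90, 180))   # 1.0

# Reclassify five unexposed cases as "exposed" (hacking the A cell),
# leaving the controls untouched: the odds ratio rises to about 1.59.
print(odds_ratio(15, 20, 85, 180))   # ~1.59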

Something akin to hacking the A cell occurred in the large epidemiologic study known as the “Yale Hemorrhagic Stroke Project” (HSP), which was the centerpiece of the plaintiffs’ case in In re Phenylpropanolamine Products Liability Litigation. Although the HSP was sponsored by manufacturers, it was conducted independently, without any manufacturer oversight beyond the protocol. The FDA reviewed the HSP results, and ultimately the HSP was published in the New England Journal of Medicine.[1]

The HSP was challenged in a Rule 702 hearing in the Multi-District Litigation (MDL). The MDL judge, Judge Rothstein, conducted hearings and entertained extensive briefings on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon the HSP. The hearings, however, could not go beyond doubts raised by the published paper, and Judge Rothstein permitted plaintiffs’ expert witnesses’ proffered testimony based upon the study, finding that:

“The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.”[2]

The HSP study was subjected to much greater analysis in litigation. After the MDL concluded its abridged gatekeeping process, the defense successfully sought the underlying data of the HSP. These data unraveled the HSP paper by showing that the study investigators had deviated from the protocol in ways that increased the number of exposed cases (the A cell), with the obvious result of increasing the OR reported by the study.

Both sides of the PPA litigation accused the other side of “hacking at the A cell,” but juries seemed to understand that the hacking had started before the paper was published. A notable string of defense verdicts ensued. After one of the early defense verdicts, plaintiffs’ counsel challenged the defendant’s reliance upon underlying data that went behind the peer-reviewed publication. The trial court rejected the request for a new trial, and spoke to the importance of looking past the superficial imprimatur of peer review of the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers. Yale gets — Yale gets a big black eye on this.”[3]

Today we can see the equivalent of “A” cell hacking in a rather sleazy attempt by the Banana Republicans to steal a presidential election they lost. Cry-baby conservatives are seeking recounts where they lost, but not where they won. They are challenging individual ballots on the basis of outcome. They are raising speculative questions about the electoral processes of entire states, even where the states in question have handed them notable wins down ballot.


[1]  Walter N. Kernan, Catherine M. Viscoli, Lawrence M. Brass, Joseph P. Broderick, Thomas Brott, Edward Feldmann, Lewis B. Morgenstern, Janet Lee Wilterdink, and Ralph I. Horwitz, “Phenylpropanolamine and the Risk of Hemorrhagic Stroke,” 343 New Engl. J. Med. 1826 (2000). See “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense” (Aug. 14, 2011).

[2]  In re Phenylpropanolamine Prod. Liab. Litig., 289 F. Supp. 2d 1230, 1239 (W.D. Wash. 2003) (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science). There were many layers of peer review for the HSP study, all of which proved ultimately ineffectual compared with the closer scrutiny that the HSP received in litigation, where underlying data were produced.

[3]  O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46 -47 (March 18, 2004) (Hon. Anthony J. Mohr).

Data Games – A Techno Thriller

April 22nd, 2020

Sherlock Holmes, Hercule Poirot, Miss Marple, Father Brown, Harry Bosch, Nancy Drew, Joe and Frank Hardy, Sam Spade, Columbo, Lennie Briscoe, Inspector Clouseau, and Dominic Da Vinci:

Move over; there is a new super sleuth in town.

Meet Professor Ken Wheeler.

Ken is a statistician, and so by profession, he is a data detective. In his day job, he teaches at a northeastern university, where his biggest challenges are managing the expectations of students and administrators, while trying to impart statistical learning. At home, Ken rarely manages to meet the expectations of his wife and son. But as some statisticians are wont to do, Ken sometimes takes on consulting gigs that require him to use his statistical skills to help litigants sort out the role of chance in cases that run from discrimination claims to rare health effects. In this contentious, sharp-elbowed environment, Ken excels. And truth be told, Ken actually finds great satisfaction in identifying the egregious errors and distortions of adversary statisticians.

Wheeler’s sleuthing usually involves ascertaining random error or uncovering a lurking variable, but in Herbert I. Weisberg’s just-published novel, Data Games: A Techno Thriller, Wheeler is drawn into a high-stakes conspiracy of intrigue, violence, and fraud that goes way beyond the run-of-the-mine p-hacking and data dredging.

An urgent call from a scientific consulting firm puts Ken Wheeler in the midst of imminent disaster for a pharmaceutical manufacturer, whose immunotherapy anti-cancer wonder drug, Verbana, is under attack. A group of apparently legitimate scientists have obtained the dataset from Verbana’s pivotal clinical trial, and they appear on the verge of blowing Verbana out of the formulary with a devastating analysis that will show that the drug causes early dementia. Wheeler’s mission is to debunk the debunking analysis when it comes.

For those readers who are engaged in the litigation defense of products liability claims against medications, the scenario is familiar enough. The scientific group studying Verbana’s alleged side effect seems on the up-and-up, but they appear to be engaged in a cherry-picking exercise, guided by a dubious theory of biological plausibility, known as the “Kreutzfeld hypothesis.”

It is not often that mystery novels turn on surrogate outcomes, biomarkers, genomic medicine, and predictive analytics, but Data Games is no ordinary mystery. And Wheeler is no ordinary detective. To be sure, the middle-aged Wheeler drives a middle-aged BMW, not a Bond car, and certainly not a Bonferroni. And Wheeler’s toolkit may not include a Glock, but he can handle the lasso, the jackknife, and the logit, and serve them up with SAS. Wheeler sees patterns where others see only chaos.

Unlike the typical Hollywood rubbish about stereotyped evil pharmaceutical companies, the hero of Data Games finds that there are sinister forces behind what looks like an honest attempt to uncover safety problems with Verbana. These sinister forces will use anything to achieve their illicit ends, including superficially honest academics with white hats. The attack on Verbana gets the FDA’s attention and an urgent hearing in White Oak, where Wheeler shines.

The author of Data Games, Herbert I. Weisberg, is himself a statistician, and a veteran of some of the dramatic data games he writes about in this novel. Weisberg is perhaps better known for his “homework” books, such as Willful Ignorance: The Mismeasure of Uncertainty (2014), and Bias and Causation: Models and Judgment for Valid Comparisons (2010). If, however, you ever find yourself in a pandemic lockdown, Weisberg’s Data Games: A Techno Thriller is a perfect way to escape. For under $3, you will be entertained, and you might even learn something about probability and statistics.

April Fool – Zambelli-Weiner Must Disclose

April 2nd, 2020

Back in the summer of 2019, Judge Saylor, the MDL judge presiding over the Zofran birth defect cases, ordered epidemiologist Dr. Zambelli-Weiner to produce documents relating to an epidemiologic study of Zofran,[1] as well as to her claimed confidential consulting relationship with plaintiffs’ counsel.[2]

This previous round of motion practice and discovery established that Zambelli-Weiner was a paid consultant in advance of litigation, that her Zofran study was funded by plaintiffs’ counsel, and that she presented at a Las Vegas conference, for plaintiffs’ counsel only, on [sic] how to make mass torts perfect. Furthermore, she had made false statements to the court about her activities.[3]

Zambelli-Weiner ultimately responded to the discovery requests, but she and plaintiffs’ counsel withheld several documents as confidential, pursuant to the MDL’s procedure for protective orders. Yesterday, April 1, 2020, Judge Saylor entered an order granting GlaxoSmithKline’s motion to de-designate four documents that plaintiffs claimed to be confidential.[4]

Zambelli-Weiner sought to resist GSK’s motion to compel disclosure of the documents on a claim that GSK was seeking the documents to advance its own litigation strategy. Judge Saylor acknowledged that Zambelli-Weiner’s psycho-analysis might be correct, but that GSK’s motive was not the critical issue. According to Judge Saylor, the proper inquiry was whether the claim of confidentiality was proper in the first place, and whether removing the cloak of secrecy was appropriate under the facts and circumstances of the case. Indeed, the court found “persuasive public-interest reasons” to support disclosure, including providing the FDA and the EMA a complete, unvarnished view of Zambelli-Weiner’s research.[5] Of course, the plaintiffs’ counsel, in close concert with Zambelli-Weiner, had created GSK’s need for the documents.

This discovery battle has no doubt been fought because plaintiffs and their testifying expert witnesses rely heavily upon the Zambelli-Weiner study to support their claim that Zofran causes birth defects. The present issue is whether four of the documents produced by Dr. Zambelli-Weiner pursuant to subpoena should continue to enjoy confidential status under the court’s protective order. GSK argued that the documents were never properly designated as confidential, and alternatively, the court should de-designate the documents because, among other things, the documents would disclose information important to medical researchers and regulators.

Judge Saylor’s Order considered GSK’s objections to plaintiffs’ and Zambelli-Weiner’s withholding four documents:

(1) Zambelli-Weiner’s Zofran study protocol;

(2) Undisclosed, hidden analyses that compared birth defect rates for children born to mothers who used Zofran with the rates seen with the use of other anti-emetic medications;

(3) An earlier draft of Zambelli-Weiner’s Zofran study, which she had prepared to submit to the New England Journal of Medicine; and

(4) Zambelli-Weiner’s advocacy document, a “Causation Briefing Document,” which she prepared for plaintiffs’ lawyers.

Judge Saylor noted that none of the withheld documents would typically be viewed as confidential. None contained “sensitive personal, financial, or medical information.”[6] The court dismissed Zambelli-Weiner’s contention that the documents all contained “business and proprietary information” as conclusory and meritless. Neither she nor plaintiffs’ counsel explained how the requested documents implicated proprietary information when Zambelli-Weiner’s only business at issue is to assist in making lawsuits. The court observed that she is not “engaged in the business of conducting research to develop a pharmaceutical drug or other proprietary medical product or device,” and that the work at issue related solely to her paid consultancy for plaintiffs’ lawyers. Neither she nor the plaintiffs’ lawyers showed how public disclosure would hurt her proprietary or business interests. Of course, if Zambelli-Weiner had been dishonest in carrying out the Zofran study, as reflected in study deviations from its protocol, her professional credibility and her business of conducting such studies might well suffer. Zambelli-Weiner, however, was not prepared to affirm the antecedent of that hypothetical. In any event, the court found that whatever right Zambelli-Weiner might have enjoyed to avoid discovery evaporated with her previous dishonest representations to the MDL court.[7]

The Zofran Study Protocol

GSK sought production of the Zofran study protocol, which in theory contained the research plan for the Zofran study and the analyses the researchers intended to conduct. Zambelli-Weiner attempted to resist production on the specious theory that she had not published the protocol, but the court found this “non-publication” irrelevant to the claim of confidentiality. Most professional organizations, such as the International Society for Pharmacoepidemiology (“ISPE”), which ultimately published Zambelli-Weiner’s study, encourage the publication and sharing of study protocols.[8] Disclosure of protocols helps ensure the integrity of studies by allowing readers to assess whether the researchers have adhered to their study plan, or have engaged in ad hoc data dredging in search of a desired result.[9]

The Secret, Undisclosed Analyses

Perhaps even more egregious than withholding the study protocol was the refusal to disclose unpublished analyses comparing the rate of birth defects among children born to mothers who used Zofran with the birth defect rates of children with in utero exposure to other anti-emetic medications.  In ruling that Zambelli-Weiner must produce the unpublished analyses, the court expressed its skepticism over whether these analyses could ever have been confidential. Under ISPE guidelines, researchers must report findings that significantly affect public health, and the relative safety of Zofran is essential to its evaluation by regulators and prescribing physicians.

Not only was Zambelli-Weiner’s failure to include these analyses in her published article ethically problematic, but she apparently hid these analyses from the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, which specifically inquired of Zambelli-Weiner whether she had performed such analyses. As a result, the PRAC recommended a label change based upon Zambelli-Weiner’s failure to disclose material information. Furthermore, the plaintiffs’ counsel represented that they intended to oppose GSK’s citizen petition to the FDA, based upon the Zambelli-Weiner study. The apparently fraudulent non-disclosure of relevant analyses could not have been more fraught with public health significance. The MDL court found that the public health need trumped any (doubtful) claim to confidentiality.[10] Against the obvious public interest, Zambelli-Weiner offered no “compelling countervailing interest” in keeping her secret analyses confidential.

There were other aspects to the data-dredging rationale not discussed in the court’s order. Without seeing the secret analyses of other anti-emetics, readers were deprived of an important opportunity to assess actual and potential confounding in her study. Perhaps even more important, the statistical tools that Zambelli-Weiner used, including any measurements of p-values and confidence intervals, and any declarations of “statistical significance,” were rendered meaningless by her secret, undisclosed, multiple testing. As noted by the American Statistical Association (ASA) in its 2016 position statement, “4. Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry-picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”[11]
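The point is easy to demonstrate by simulation. The Python sketch below is purely illustrative and has nothing to do with any analysis in the Zofran litigation; it simply runs twenty comparisons on data with no true effect, and, on average, about one of them will cross the conventional 0.05 threshold by chance alone.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_analyses = 20      # number of analyses run, only some of which get reported
significant = 0

for _ in range(n_analyses):
    # Both groups are drawn from the same distribution; the true effect is zero.
    group_a = rng.normal(loc=0.0, scale=1.0, size=50)
    group_b = rng.normal(loc=0.0, scale=1.0, size=50)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        significant += 1

print(f"{significant} of {n_analyses} null comparisons had p < 0.05")
# Reporting only the comparisons that cross the threshold, without disclosing
# the rest, makes noise look like evidence.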

The Draft Manuscript for the New England Journal of Medicine

The MDL court wasted little time and ink in dispatching Zambelli-Weiner’s claim of confidentiality for her draft New England Journal of Medicine manuscript. The court found that she failed to explain how any differences in content between this manuscript and the published version constituted “proprietary business information,” or how disclosure would cause her any actual prejudice.

Zambelli-Weiner’s Litigation Road Map

In a world where social justice warriors complain about organizations such as Exponent, for its litigation support of defense efforts, the revelation that Zambelli-Weiner was helping to quarterback the plaintiffs’ offense deserves greater recognition. Zambelli-Weiner’s litigation road map was clearly created to help Grant & Eisenhofer, P.A., the plaintiffs’ lawyers, create a causation strategy (to which she would add her Zofran study). Such a document from a consulting expert witness is typically the sort of document that enjoys confidentiality and protection from litigation discovery. The MDL court, however, looked beyond Zambelli-Weiner’s role as a “consulting witness” to her involvement in designing and conducting research. The broader extent of her involvement in producing studies and communicating with regulators made her litigation “strategery” “almost certainly relevant to scientists and regulatory authorities” charged with evaluating her study.[12]

Despite Zambelli-Weiner’s protestations that she had made a conflict-of-interest disclosure, the MDL court found her disclosure anemic, and found the public interest in knowing the full extent of her involvement in advising plaintiffs’ counsel, long before the study was conducted, to be great.[13]

The legal media has been uncommonly quiet about the rulings on April Zambelli-Weiner, in the Zofran litigation. From the Union of Concerned Scientists, and other industry scolds such as David Egilman, David Michaels, and Carl Cranor – crickets. Meanwhile, while the appeal over the admissibility of her testimony is pending before the Pennsylvania Supreme Court,[14] Zambelli-Weiner continues to create an unenviable record in Zofran, Accutane,[15] Mirena,[16] and other litigations.


[1]  April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

[2]  See In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657) [cited as In re Zofran].

[3]  “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[4]  In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) [Order].

[5]  Order at n.3.

[6]  Order at 3.

[7]  See In re Zofran, 392 F. Supp. 3d at 186.

[8]  Order at 4. See also Xavier Kurz, Susana Perez-Gutthann, the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidemiology & Drug Safety 245 (2018).

[9]  Order at note 2 (citing Charles J. Walsh & Marc S. Klein, “From Dog Food to Prescription Drug Advertising: Litigating False Scientific Establishment Claims Under the Lanham Act,” 22 Seton Hall L. Rev. 389, 431 (1992) (noting that adherence to study protocol “is essential to avoid ‘data dredging’—looking through results without a predetermined plan until one finds data to support a claim”).

[10]  Order at 5, citing Anderson v. Cryovac, Inc., 805 F.2d 1, 8 (1st Cir. 1986) (describing public-health concerns as “compelling justification” for requiring disclosing of confidential information).

[11]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

See also “The American Statistical Association’s Statement on and of Significance” (March 17, 2016); “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).

[12]  Order at 6.

[13]  Cf. Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002) (arguing that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions).

[14]  Walsh v. BASF Corp., GD #10-018588 (Pa. Ct. C.P. Allegheny Cty. Oct. 5, 2016) (finding that Zambelli-Weiner’s and Nachman Brautbar’s opinions, that pesticides generally cause acute myelogenous leukemia and that even the smallest exposure to benzene increases the risk of leukemia, offended generally accepted scientific methodology), rev’d, 2018 Pa. Super. 174, 191 A.3d 838, 842-43 (Pa. Super. 2018), appeal granted, 203 A.3d 976 (Pa. 2019).

[15]  In re Accutane Litig., No. A-4952-16T1 (N.J. App. Div. Jan. 17, 2020) (affirming exclusion of Zambelli-Weiner as an expert witness).

[16]  In re Mirena IUD Prods. Liab. Litig., 169 F. Supp. 3d 396 (S.D.N.Y. 2016) (excluding Zambelli-Weiner in part).

Dodgy Data Duck Daubert Decisions

March 11th, 2020

Judges say the darndest things, especially when it comes to their gatekeeping responsibilities under Federal Rules of Evidence 702 and 703. One of the darndest things judges say is that they do not have to assess the quality of the data underlying an expert witness’s opinion.

Even when acknowledging their obligation to “assess the reasoning and methodology underlying the expert’s opinion, and determine whether it is both scientifically valid and applicable to a particular set of facts,”[1] judges have excused themselves from having to look at the trustworthiness of the underlying data for assessing the admissibility of an expert witness’s opinion.

In McCall v. Skyland Grain LLC, the defendant challenged an expert witness’s reliance upon oral reports of clients. The witness, Mr. Bradley Walker, asserted that he regularly relied upon such reports in contexts similar to the allegations that the defendant had misapplied herbicide to plaintiffs’ crops. The trial court ruled that the defendant could cross-examine the declarant, who was available at trial, and concluded that the “reliability of that underlying data can be challenged in that manner and goes to the weight to be afforded Mr. Walker’s conclusions, not their admissibility.”[2] Remarkably, the district court never evaluated the reasonableness of Mr. Walker’s reliance upon client reports in this or any context.

In another federal district court case, Rodgers v. Beechcraft Corporation, the trial judge explicitly acknowledged the responsibility to assess whether the expert witness’s opinion was based upon “sufficient facts and data,” but disclaimed any obligation to assess the quality of the underlying data.[3] The trial court in Rodgers cited a Tenth Circuit case from 2005,[4] which in turn cited the Supreme Court’s 1993 decision in Daubert, for the proposition that the admissibility review of an expert witness’s opinion was limited to a quantitative sufficiency analysis, and precluded a qualitative analysis of the underlying data’s reliability. Quoting from another district court criminal case, the court in Rodgers announced that “the Court does not examine whether the facts obtained by the witness are themselves reliable – whether the facts used are qualitatively reliable is a question of the weight to be given the opinion by the factfinder, not the admissibility of the opinion.”[5]

In a 2016 decision, United States v. DishNetwork LLC, the court explicitly disclaimed that it was required to “evaluate the quality of the underlying data or the quality of the expert’s conclusions.”[6] This district court pointed to a Seventh Circuit decision, which maintained that  “[t]he soundness of the factual underpinnings of the expert’s analysis and the correctness of the expert’s conclusions based on that analysis are factual matters to be determined by the trier of fact, or, where appropriate, on summary judgment.”[7] The Seventh Circuit’s decision, however, issued in June 2000, several months before the effective date of the amendments to Federal Rule of Evidence 702 (December 2000).

In 2012, a magistrate judge issued an opinion along the same lines, in Bixby v. KBR, Inc.[8] After acknowledging what must be done in ruling on a challenge to an expert witness, the judge took joy in what could be overlooked. If the facts or data upon which the expert witness has relied are “minimally sufficient,” then the gatekeeper can conclude that “the nature or quality of the underlying data bear upon the weight to which the opinion is entitled or to the credibility of the expert’s opinion, and do not bear upon the question of admissibility.”[9]

There need not be any common law mysticism to the governing standard. The relevant law is, of course, a statute, which appears to be forgotten in many of the failed gatekeeping decisions:

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

It would seem that you could not produce testimony that is the product of reliable principles and methods by starting with unreliable underlying facts and data. Certainly, having a reliable method would require selecting reliable facts and data from which to start. What good would come of the reliable application of reliable principles to crummy data?

The Advisory Committee Notes to Rule 702 hint at an answer to the problem:

“There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ‘reasonable reliance’ requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information—whether admissible information or not—is governed by the requirements of Rule 702.”

The answer is only partially satisfactory. First, if the underlying data are independently admissible, then there may indeed be no gatekeeping of an expert witness’s reliance upon such data. Rule 703 imposes a reasonableness test for reliance upon inadmissible underlying facts and data, but appears to give otherwise admissible facts and data a pass. Second, the above judicial decisions do not mention any Rule 703 challenge to the expert witnesses’ reliance. If so, then there is a clear lesson for counsel. When framing a challenge to the admissibility of an expert witness’s opinion, show that the witness has unreasonably relied upon facts and data, from whatever source, in violation of Rule 703. Then show that without the unreasonably relied upon facts and data, the witness cannot show that his or her opinion satisfies Rule 702(a)-(d).


[1]  See, e.g., McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order (D. Colo. June 22, 2010) (Brimmer, J.) (citing Dodge v. Cotter Corp., 328 F.3d 1212, 1221 (10th Cir. 2003), citing in turn Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993)).

[2]  McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order at p.9 n.6 (D. Colo. June 22, 2010) (Brimmer, J.).

[3]  Rodgers v. Beechcraft Corp., Case No. 15-CV-129-CVE-PJC, Report & Recommendation at p.6 (N.D. Okla. Nov. 29, 2016).

[4]  Id., citing United States v. Lauder, 409 F.3d 1254, 1264 (10th Cir. 2005) (“By its terms, the Daubert opinion applies only to the qualifications of an expert and the methodology or reasoning used to render an expert opinion” and “generally does not, however, regulate the underlying facts or data that an expert relies on when forming her opinion.”), citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993).

[5]  Id., citing and quoting United States v. Crabbe, 556 F. Supp. 2d 1217, 1223 (D. Colo. 2008) (emphasis in original). In Crabbe, the district judge mostly excluded the challenged expert witness, thus rendering its verbiage on the quality of data obiter dicta. The pronouncements about the nature of gatekeeping proved harmless error when the court dismissed the case on other grounds. Rodgers v. Beechcraft Corp., 248 F. Supp. 3d 1158 (N.D. Okla. 2017) (granting summary judgment).

[6]  United States v. DishNetwork LLC, No. 09-3073, Slip op. at 4-5 (C.D. Ill. Jan. 13, 2016) (Myerscough, J.).

[7]  Smith v. Ford Motor Co., 215 F.3d 713, 718 (7th Cir. 2000).

[8]  Bixby v. KBR, Inc., Case 3:09-cv-00632-PK, Slip op. at 6-7 (D. Or. Aug. 29, 2012) (Papak, M.J.).

[9]  Id. (citing Hangarter v. Provident Life & Accident Ins. Co., 373 F.3d 998, 1017 (9th Cir. 2004), quoting Children’s Broad. Corp. v. Walt Disney Co., 357 F.3d 860, 865 (8th Cir. 2004) (“The factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”)).

Litigation Science – In re Zambelli-Weiner

April 8th, 2019

Back in 2001, in the aftermath of the silicone gel breast implant litigation, I participated in a Federal Judicial Center (FJC) television production of “Science in the Courtroom, program 6” (2001). Program six was a round-table discussion among the directors (past, present, and future) of the FJC, all of whom were sitting federal judges, with two lawyers in private practice, Elizabeth Cabraser and me.1 One of the more exasperating moments in our conversation came when Ms. Cabraser, who represented plaintiffs in the silicone litigation, complained that Daubert was unfair because corporate defendants were able to order up supporting scientific studies, whereas poor plaintiffs’ counsel did not have the means to gin up studies that confirmed what they knew to be true.2 Refraining from talking over her required all the self-restraint I could muster, but I did eventually respond by denying her glib generalization and offering the silicone litigation as one in which plaintiffs, plaintiffs’ counsel, and plaintiffs’ support groups were all involved in funding and directing some of the sketchiest studies, most of which managed to find homes in so-called peer-reviewed journals of some sort, even if not the best.

The litigation connections of the plaintiff-sponsored studies in the silicone litigation were not apparent on the face of the published articles. The partisan funding and provenance of the studies were mostly undisclosed and required persistent discovery and subpoenas. Cabraser’s propaganda reinforced the recognition of what so-called mass tort litigation had taught me about all scientific studies: “trust but verify.” Verification is especially important for studies that are sponsored by litigation-industry actors who have no reputation at stake in the world of healthcare.

Verification is not a straightforward task, however. Peer-review publication usually provides some basic information about “methods and materials,” but rarely if ever do published articles provide sufficient data and detail about methodology to replicate the reported analysis. In legal proceedings, verification of studies conducted and relied upon by testifying expert witnesses is facilitated by the rules of expert witness discovery. In federal court, expert witnesses must specify all opinions and all bases for their opinions. When such witnesses rely upon their own studies, and thus have had privileged access to the complete data and all analyses, courts have generally permitted full inquiry into the underlying materials of relied-upon studies. On the other hand, when the author of a relied-upon study is a “stranger to the litigation,” neither a party nor a retained expert witness, courts have generally permitted more limited discovery of the study’s full data set and analyses. Regardless of the author’s status, the question remains how litigants are to challenge an adversary’s expert witness’s trusted reliance upon a study, which cannot be “verified.”

Most lawyers would prefer, of course, to call an expert witness who has actually conducted studies pertinent to the issues in the case. The price, however, of allowing the other side to discover the underlying data and materials of the author expert witness’s studies may be too high. The relied-upon studies may well end up discredited, as well as the professional reputation of the expert witness. The litigation industry has adapted to these rules of discovery by avoiding, in most instances, calling testifying expert witnesses who have published studies that might be vulnerable.3

One work-around to the discovery rules lies in the use of “consulting, non-testifying expert witnesses.” The law permits the use of such expert witnesses to some extent to facilitate candid consultations with expert witnesses, usually without concerns that communications will be shared with the adversary party and witnesses. The hope is that such candid communications will permit realistic assessment of partisan positions, as well as allowing scientists and scholars to participate in an advisory capacity without the burden of depositions, formal report writing, and appearances at judicial hearings and trials. The confidentiality of consulting expert witnesses is open to abuse by counsel who would engage the consultants to conduct and publish studies, which can then be relied upon by the testifying expert witnesses. The upshot is that legal counsel can manipulate the published literature in a favorable way, without having to disclose their financial sponsorship or influence of the published studies used by their testifying expert witnesses.

This game of hiding study data and sponsorship through the litigation industry’s use of confidential consulting expert witnesses pervades so-called mass tort litigation, which provides ample financial incentives for study sponsorship and control. Defendants will almost always be unable to play the game without detection. A simple interrogatory or other discovery request about funding of studies will reveal the attempt to pass off a party-sponsored study as having been conducted by disinterested scientists. Furthermore, most scientists will feel obligated to reveal corporate funding as a potential conflict of interest, in their submission of manuscripts for publication.

Revealing litigation-industry (plaintiffs’) funding of studies is more complicated. First, the funding may be through one firm, which is not the legal counsel in the case for which discovery is being conducted. In such instances, the plaintiff’s lawyers can truthfully declare that they lack personal knowledge of any financial support for studies relied upon by their testifying expert witnesses. Second, the plaintiffs’ law firm, not being a party, is not itself subject to discovery. Even if the plaintiffs’ lawyers funded a study, they can claim, with plausible deniability, that they funded the study in connection with another client’s case, not the client who is plaintiff in the case in which discovery is sought. Third, the plaintiffs’ firm may take the position, however dubious it might be, that the funding of the relied-upon study was simply a confidential consultation with the authors of that study, and not subject to discovery.

The now pending litigation against ondansetron (Zofran) provides the most recent example of the dubious use of consulting expert witnesses to hide party sponsorship of an epidemiologic study. The plaintiffs, who are claiming that Zofran causes birth defects in this multi-district litigation assigned to Judge F. Dennis Saylor, have designated Dr. Carol Luik as their sole testifying expert witness on epidemiology. Dr. Luik, in turn, has relied substantially upon a study conducted by Dr. April Zambelli-Weiner.4

According to motion papers filed by defendants,5 the plaintiffs’ counsel initially claimed that they had no knowledge of any financial support or conflicts for Dr. Zambelli-Weiner. The conflict-of-interest disclosure in Zambelli-Weiner’s paper was, to say the least, suspicious:

“The authors declare that there was no outside involvement in study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; and in the decision to submit the manuscript for publication.”

“As an organization TTi reports receiving funds from plaintiff law firms involved in ondansetron litigation and a manufacturer of ondansetron.”

According to its website, TTi

“is an economically disadvantaged woman-owned small business headquartered in Westminster, Maryland. We are focused on the development, evaluation, and implementation of technologies and solutions that advance the transformation of data into actionable knowledge. TTi serves a diverse clientele, including all stakeholders in the health space (governments, payors, providers, pharmaceutical and device companies, and foundations) who have a vested interest in advancing research to improve patient outcomes, population health, and access to care while reducing costs and eliminating health disparities.”

According to defendants’ briefing, and contrary to plaintiffs’ initial claims and Zambelli-Weiner’s anemic conflicts disclosure, plaintiffs’ counsel eventually admitted that “Plaintiffs’ Leadership Attorneys paid $210,000 as financial support relating to” Zambelli-Weiner’s epidemiologic study. The women at TTi are apparently less economically disadvantaged than advertised.

The Zofran defendants served subpoenas duces tecum and ad testificandum on two of the study authors, Drs. April Zambelli-Weiner and Russell Kirby. Curiously, the plaintiffs (who would seem to have no interest in defending the third-party subpoenas) sought a protective order by arguing that defendants were harassing “third-party scientists.” Their motion for protection conveniently and disingenuously omitted that Zambelli-Weiner had been a paid consultant to the Zofran plaintiffs.

Judge Saylor refused to quash the subpoenas, and Zambelli-Weiner appeared herself, through counsel, to seek a protective order. Her supporting affidavit averred that she had not been retained as an expert witness, and that she had no documents “concerning any data analyses or results that were not reported in the [published study].” Zambelli-Weiner’s attempt to evade discovery was embarrassed by her having presented a “Zofran Litigation Update” with Plaintiffs’ counsel Robert Jenner and Elizabeth Graham at a national conference for plaintiffs’ attorneys. Judge Saylor was not persuaded, and the MDL court refused Dr. Zambelli-Weiner’s motion. The law and the public have a right to every man’s, and every woman’s (even if economically disadvantaged), evidence.6

Tellingly, in the aftermath of the motions to quash, Zambelli-Weiner’s counsel, Scott Marder, abandoned his client by filing an emergency motion to withdraw, because “certain of the factual assertions in Dr. Zambelli-Weiner’s Motion for Protective Order and Affidavit were inaccurate.” Mr. Marder also honorably notified defense counsel that he could no longer represent that Zambelli-Weiner’s document production was complete.

Early this year, on January 29, 2019, Zambelli-Weiner submitted, through new counsel, a “Supplemental Affidavit,” wherein she admitted she had been a “consulting expert” witness for the law firm of Grant & Eisenhofer on the claimed teratogenicity of Zofran.7 Zambelli-Weiner also produced a few extensively redacted documents. On February 1, 2019, Zambelli-Weiner testified at deposition that the moneys she received from Grant & Eisenhofer were not to fund her Zofran study, but for other, “unrelated work.” Her testimony was at odds with the plaintiffs’ counsel’s confession that the $210,000 related to her Zofran study.

Zambelli-Weiner’s etiolated document production was confounded by the several hundred pages of documents produced by her fellow author, Dr. Russell Kirby. When confronted with documents from Kirby’s production, Zambelli-Weiner’s lawyer unilaterally suspended the deposition.

Deja Vu All Over Again

Federal courts have seen the Zambelli maneuver before. In litigation over claimed welding fume health effects, plaintiffs’ counsel Richard (Dickie) Scruggs and colleagues funded some neurological researchers to travel to Alabama and Mississippi to “screen” plaintiffs and potential plaintiffs in litigation over claims of neurological injury and disease from welding fume exposure, with a novel videotaping methodology. The plaintiffs’ lawyers rounded up the research subjects (a.k.a. clients and potential clients), talked to them before the medical evaluations, and administered the study questionnaires. The study subjects were clearly aware of Mr. Scruggs’ “research” hypothesis, and had already promised him 40% of any recovery.8

After their sojourn to Alabama and Mississippi, at Scruggs’ expense, the researchers wrote up their results, with little or no detail of the circumstances of how they had acquired their research “participants,” or of those participants’ motives to give accurate or inaccurate medical and employment history information.9

Defense counsel served subpoenas upon both Dr. Racette and his institution, Washington University in St. Louis, for the study protocol, underlying data, data codes, and all statistical analyses. Racette and Washington University resisted sharing their data and materials with every page in the Directory of Non-Transparent Research. They claimed that the subpoenas sought production of testimony, information and documents in violation of:

“(1) the Federal Regulations set forth in the Department of Health and Human Services Policy for Protection of Human Research Subjects,

(2) the Federal regulations set forth in the HIPPA Regulations,

(3) the physician/patient privilege,

(4) the research scholar’s privilege,

(5) the trade secret/confidential research privilege and

(6) the scope of discovery as codified by the Federal Rules of Civil Procedure and the Missouri Rules of Civil Procedure.”

After a long discovery fight, the MDL court largely enforced the subpoenas.10 The welding MDL court ordered Racette to produce

“a ‘limited data set’ which links the specific categories requested by defendants: diagnosis, occupation, and age. This information may be produced as a ‘deidentified’ data set, such that the categories would be linked to each particular patient, without using any individual patient identifiers. This data set should: (1) allow matching of each study participant’s occupational status and age with his or her neurological condition, as diagnosed by the study’s researchers; and (2) to the greatest extent possible (except for necessary de-identification), show original coding and any code-keys.”

After the defense had the opportunity to obtain and analyze the underlying data in the Scruggs-Racette study, the welding plaintiffs retreated from their epidemiologic case. Various defense expert witnesses analyzed the underlying data produced by Racette, and prepared devastating rebuttal reports. These reports were served upon plaintiffs’ counsel, whose expert witnesses never attempted any response. Reliance upon Racette’s study was withdrawn or abandoned. After the underlying data were shared with the parties to MDL 1535, no scientist appeared to defend the results in the published papers.11 The Racette Alabama study faded into the background of the subsequent welding-fume cases and trials.

The motion battle in the welding MDL revealed interesting contradictions, similar to those seen in the Zambelli-Weiner affair. For example, Racette claimed he had no relationship whatsoever with plaintiffs’ counsel, other than showing up by happenstance in Alabama at places where Scruggs’ clients also just happened to show up. Racette claimed that the men and women he screened were his patients, but he had no license to practice in Alabama, where the screenings took place. Plaintiffs’ counsel disclaimed that Racette was a treating physician, which acknowledgment would have made the individuals’ screening results discoverable in their individual cases. And more interestingly, plaintiffs’ counsel claimed that both Dr. Racette and Washington University were “non-testifying, consulting experts utilized to advise and assist Plaintiffs’ counsel with respect to evaluating and assessing each of their client’s potential lawsuit or claim (or not).”12

Over the last decade or so, best practices and codes of conduct for the relationship between pharmacoepidemiologists and study funders have been published.13 These standards apply with equal force to public agencies, private industry, and regulatory authorities. Perhaps it is time for them to specify that they apply to the litigation industry as well.


1 See Smith v. Wyeth-Ayerst Labs. Co., 278 F. Supp. 2d 684, 710 & n. 56 (W.D.N.C. 2003).

2 Ironically, Ms. Cabraser has published her opinion that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions, a view which would have simplified and greatly shortened the silicone gel breast implant litigation. See Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002).

3 Litigation concerning Viagra is one notable example where plaintiffs’ counsel called an expert witness who was the author of the very study that supposedly supported their causal claim. It did not go well for the plaintiffs or the expert witness. See Lori B. Leskin & Bert L. Slonim, “A Primer on Challenging Peer-Reviewed Scientific Literature in Mass Tort and Product Liability Actions,” 25 Toxics L. Rptr. 651 (Jul. 1, 2010).

4 April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

5 Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

6 See Branzburg v. Hayes, 408 U.S. 665, 674 (1972).

7 Affidavit of April Zambelli-Weiner, dated January 9, 2019 (Doc. No. 1272).

8 The plaintiffs’ lawyers’ motive and opportunity to poison the study by coaching their “clients” was palpable. See David B. Resnik & David J. McCann, “Deception by Research Participants,” 373 New Engl. J. Med. 1192 (2015).

9 See Brad A. Racette, S.D. Tabbal, D. Jennings, L. Good, J.S. Perlmutter, and Brad Evanoff, “Prevalence of parkinsonism and relationship to exposure in a large sample of Alabama welders,” 64 Neurology 230 (2005); Brad A. Racette, et al., “A rapid method for mass screening for parkinsonism,” 27 Neurotoxicology 357 (2006) (a largely duplicative report of the Alabama welders study).

10 See, e.g., In re Welding Fume Prods. Liab. Litig., MDL 1535, 2005 WL 5417815 (N.D. Ohio Oct. 18, 2005) (upholding defendants’ subpoena for protocol, data, data codes, statistical analyses, and other things from Dr. Racette’s Alabama study on welding and parkinsonism).

11 Racette sought and obtained a protective order for the data produced, and thus I still cannot share the materials he provided without asking that any reviewer sign the court-mandated protective order. Revealingly, Racette was concerned about who had seen his underlying data, and he obtained a requirement in the court’s non-disclosure affidavit that anyone who reviews the underlying data will not sit on peer review of his publications or his grant applications. See Motion to Compel List of Defendants’ Reviewers of Data Produced by Brad A. Racette, M.D., and Washington University Pursuant to Protective Order, in In re Welding Fume Products Liab. Litig., MDL No. 1535, Case 1:03-cv-17000-KMO, Document 1642-1 (N.D. Ohio Feb. 14, 2006). Curiously, Racette never moved to compel a list of Plaintiffs’ Reviewers!

12 Plaintiffs’ Motion for Protective Order, Motion to Reconsider Order Requiring Discovery from Dr. Racette, and Request for In Camera Inspection as to Any Responses or Information Provided by Dr. Racette, filed in Solis v. Lincoln Elec. Co., Case No. 1:03-CV-17000, MDL 1535 (N.D. Ohio May 8, 2006).

13 See, e.g., Xavier Kurz, Susana Perez‐Gutthann, and the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidem. & Drug Safety 245 (2018).

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statistician’s principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including in litigation and regulatory rule making. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalls” and “thou shall nots,” but I will focus on the issues that are more likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they would probably do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on p-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless of whether they used p-values or confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

“The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

“The statistical analyst openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data when appropriate without revealing the identities of study participants. Statistical analysts should publicly correct any disseminated data and analyses in their own work, as well as work to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that’s a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.

Multiplicity

The ASA Guidelines address the impropriety of substantive statistical errors, such as:

“[r]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one “significant” result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, and the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.
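The arithmetic behind the Guidelines’ warning is easy to make concrete. Below is a minimal Python sketch, with purely illustrative numbers and the simplifying assumption of independent tests (which real study comparisons rarely are), showing how quickly the chance of at least one spurious “significant” finding grows when several tests are run at the conventional 0.05 level.

    # Illustrative only: with k independent tests of true null hypotheses at
    # alpha = 0.05, the probability of at least one false-positive "significant"
    # result is 1 - (1 - alpha)**k.
    alpha = 0.05
    for k in (1, 5, 10, 20):
        familywise = 1 - (1 - alpha) ** k
        print(f"{k:>2} tests: P(at least one false positive) = {familywise:.2f}")
    # Prints roughly 0.05, 0.23, 0.40, and 0.64 for 1, 5, 10, and 20 tests.
    # A Bonferroni-style adjustment would test each comparison at alpha / k to
    # hold the family-wise error rate near alpha.

Whether or not one adjusts alpha, as Rothman would resist doing, the numbers show why the Guidelines treat nondisclosure of the full extent of testing as “highly misleading.”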


2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

Gatekeeping of Expert Witnesses Needs a Bair Hug

December 20th, 2017

For every Rule 702 (“Daubert”) success story, there are multiple gatekeeping failures. See David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).1 Exemplars of inadequate expert witness gatekeeping in state or federal court abound, and overwhelm the bar. The only solace one might find is that the abuse-of-discretion appellate standard of review keeps the bad decisions from precedentially outlawing the good ones.

Judge Joan Ericksen recently provided another Berenstain Bears’ example of how not to keep the expert witness gate, in litigation over claims that the Bair Hugger forced-air warming devices (“Bair Huggers”) cause infections. In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017). Although Her Honor properly cited and quoted Rule 702 (2000), a new standard is announced in a bold heading:

“Under Federal Rule of Evidence 702, the Court need only exclude expert testimony that is so fundamentally unsupported that it can offer no assistance to the jury.”

Id. at *1. This new standard thus permits largely unsupported opinion that can offer bad assistance to the jury. As Judge Ericksen demonstrates, this new standard, which has no warrant in the statutory text of Rule 702 or its advisory committee notes, allows expert witnesses to rely upon studies that have serious internal and external validity flaws.

Jonathan Samet, a specialist in pulmonary medicine, not infectious disease or statistics, is one of the plaintiffs’ principal expert witnesses. Samet relies in large measure upon an observational study,2 which purports to find an increased odds ratio for use of the Bair Hugger among infection cases in one particular hospital. The defense epidemiologist, Jonathan B. Borak, criticized the McGovern observational study on several grounds, including that the study was highly confounded by the presence of other known infection risks. Id. at *6. Judge Ericksen characterized Borak’s opinion as an assertion that the McGovern study was an “insufficient basis” for the plaintiffs’ claims. A fair reading of even Judge Ericksen’s précis of Borak’s proffered testimony requires the conclusion that Borak’s opinion was that the McGovern study was invalid because of data collection errors and confounding. Id.

Judge Ericksen’s judicial assessment, taken from the disagreement between Samet and Borak, is that there are issues with the McGovern study, which go to “weight of the evidence.” This finding obscures, however, that there were strong challenges to the internal and external validity of the study. Drawing causal inferences from an invalid observational study is a methodological issue, not a weight-of-the-evidence problem for the jury to resolve. This MDL opinion never addresses the Rule 703 issue, whether an epidemiologic expert would reasonably rely upon such a confounded study.
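To see why confounding is a question of validity rather than mere “weight,” consider a minimal, hypothetical sketch in Python. The numbers are invented for illustration and have nothing to do with the actual McGovern data; they simply assume a confounder (say, surgical complexity) that drives both forced-air warming use and baseline infection risk.

    # Hypothetical 2x2 tables: (exposed cases, exposed non-cases,
    # unexposed cases, unexposed non-cases).
    def odds_ratio(a, b, c, d):
        return (a * d) / (b * c)

    complex_surgery = (8, 72, 2, 18)  # mostly exposed; higher baseline infection risk
    simple_surgery = (1, 19, 4, 76)   # mostly unexposed; lower baseline infection risk

    print(odds_ratio(*complex_surgery))  # 1.0 within the high-risk stratum
    print(odds_ratio(*simple_surgery))   # 1.0 within the low-risk stratum

    # Pool the strata, ignoring the confounder, and an association appears.
    crude = tuple(x + y for x, y in zip(complex_surgery, simple_surgery))
    print(round(odds_ratio(*crude), 2))  # about 1.55, despite no effect in either stratum

An elevated crude odds ratio of this kind is an artifact of study design and analysis; whether an epidemiologist may reasonably rely upon such a result is a methodological and Rule 703 question, not merely a matter of weight for the jury.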

The defense proffered the opinion of Theodore R. Holford, who criticized Dr. Samet for drawing causal inferences from the McGovern observational study. Holford, a professor of biostatistics at Yale University’s School of Public Health, analyzed the raw data behind the McGovern study. Id. at *8. The plaintiffs challenged Holford’s opinions on the ground that he relied on data in “non-final” form, from a temporally expanded dataset. Even more intriguingly, given that the plaintiffs did not present a statistician expert witness, plaintiffs argued that Holford’s opinions should be excluded because

(1) he insufficiently justified his use of a statistical test, and

(2) he “emphasizes statistical significance more than he would in his professional work.”

Id.

The MDL court dismissed the plaintiffs’ challenge on the mistaken conclusion that the alleged contradictions between Holford’s practice and his testimony “impugn his credibility at most.” If there were truly such a deviation from the statistical standard of care, the issue would be methodological, not a credibility question of whether Holford was telling the truth. And as for the alleged over-emphasis on statistical significance, the MDL court again falls back on the glib conclusions that the allegation goes to the weight, not the admissibility, of expert witness opinion testimony, and that plaintiffs can elicit testimony from Dr. Samet as to how and why Professor Holford over-emphasized statistical significance. Id. Inquiring minds, at the bar and in the academy, are left with no information about what the real issues are in the case.

Generally, both sides’ challenges to expert witnesses were denied.3 The real losers, however, were the scientific and medical communities, bench, bar, and general public. The MDL court glibly and incorrectly treated methodological issues as “credibility” issues, confused sufficiency with validity, and banished methodological failures to consideration by the trier of fact for “weight.” Confounding was mistreated as simply a debating point between the parties’ expert witnesses. The reader of Judge Ericksen’s opinion never learns what statistical test was used by Professor Holford, what justification was needed but allegedly absent for the test, why the justification was contested, and what other test was alleged by plaintiffs to have been a “better” statistical test. As for the emphasis given statistical significance, the reader is left in the dark about exactly what that emphasis was, and how it led to Holford’s conclusions and opinions, and what the proper emphasis should have been.

Eventually appellate review of the Bair Hugger MDL decision must turn on whether the district court abused its discretion. Although appellate courts give trial judges discretion to resolve Rule 702 issues, the appellate courts cannot reach reasoned decisions when the inferior courts fail to give even a cursory description of what the issues were, and how and why they were resolved as they were.


2 P. D. McGovern, M. Albrecht, K. G. Belani, C. Nachtsheim, P. F. Partington, I. Carluke, and M. R. Reed, “Forced-Air Warming and Ultra-Clean Ventilation Do Not Mix: An Investigation of Theatre Ventilation, Patient Warming and Joint Replacement Infection in Orthopaedics,” 93 J. Bone & Joint Surg. Br. 1537 (2011). The article as published contains no disclosures of potential or actual conflicts of interest. A persistent rumor has it that the investigators were funded by a commercial rival to the manufacturer of the Bair Hugger at issue in Judge Ericksen’s MDL. See generally Melissa D. Kellam, Loraine S. Dieckmann, and Paul N. Austin, “Forced-Air Warming Devices and the Risk of Surgical Site Infections,” 98 Ass’n periOperative Registered Nurses (AORN) J. 354 (2013).

3 A challenge to plaintiffs’ expert witness Yadin David was sustained to the extent he sought to offer opinions about the defendant’s state of mind. Id. at *5.