TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Zhang’s Glyphosate Meta-Analysis Succumbs to Judicial Scrutiny

August 5th, 2024

Back in March 2015, the International Agency for Research on Cancer (IARC) issued its working group’s monograph on glyphosate weed killer. The report classified glyphosate as a “probable carcinogen,” a label that is highly misleading. For IARC, “probable” does not mean more likely than not; indeed, the term has no quantitative meaning at all. The all-important statement of IARC methods, “The Preamble,” makes this clear.[1]

In the case of glyphosate, the IARC working group concluded that the epidemiologic evidence for an association between glyphosate exposure and cancer (specifically non-Hodgkin’s lymphoma (NHL)) was limited, which is IARC’s euphemism for insufficient. Instead of epidemiology, IARC’s glyphosate conclusion was based largely upon rodent studies, but even the animal evidence relied upon by IARC was dubious. The IARC working group cherry-picked a few arguably “positive” rodent study results with increases in tumors, while ignoring exculpatory rodent studies with decreasing tumor yield.[2]

Although the IARC hazard classification was uncritically embraced by the lawsuit industry, most regulatory agencies, even indulging precautionary principle reasoning, rejected the claim of carcinogenicity. The United States Environmental Protection Agency (EPA), the European Food Safety Authority, the Food and Agriculture Organization (in conjunction with the World Health Organization), the European Chemicals Agency, Health Canada, and the German Federal Institute for Risk Assessment, among others, found that the scientific evidence did not support the claim that glyphosate causes NHL. Very soon after its publication, however, the IARC monograph became the proximate cause of a huge litigation effort by the lawsuit industry against Monsanto.

The personal injury cases against Monsanto, filed in federal court, were aggregated for pre-trial proceedings as MDL 2741, before Judge Vince Chhabria of the Northern District of California. Judge Chhabria denied Monsanto’s early Rule 702 motions, and the cases thus proceeded to trial, with mixed results.

In 2019, the Zhang study, a curious meta-analysis of some of the available glyphosate epidemiologic studies, appeared in Mutation Research / Reviews in Mutation Research, a toxicology journal that seemed an unlikely venue for a meta-analysis of epidemiologic studies. The authors combined selected results from one large cohort study, the Agricultural Health Study, with results from five case-control studies, to reach a summary relative risk of 1.41 (95% confidence interval, 1.13–1.75).[3] According to the authors, their “current meta-analysis of human epidemiological studies suggests a compelling link between exposures to GBHs [glyphosate-based herbicides] and increased risk for NHL.”

The Zhang meta-analysis was not well received in regulatory and scientific circles. The EPA found that Zhang had used inappropriate methods in her meta-analysis.[4] Academic authors also panned the Zhang meta-analysis, in both scholarly[5] and popular articles.[6] The senior author of the Zhang paper, Lianne Sheppard, a Professor in the University of Washington Departments of Environmental and Occupational Health Sciences, and Biostatistics, attempted to defend the study in Forbes.[7] Professor Geoffrey Kabat very adeptly showed that this defense was futile.[8] Despite the very serious and real objections to the validity of the Zhang meta-analysis, plaintiffs’ expert witnesses, such as Beate Ritz, an epidemiologist at U.C.L.A., testified that she trusted and relied upon the analysis.[9]

For five years, the Zhang study was a debating point for lawyers and expert witnesses in the glyphosate litigation, without significant judicial gatekeeping. It took the entrance of Luoping Zhang herself as an expert witness in the glyphosate litigation, and the procedural oddity of her placing exclusive reliance upon her own meta-analysis, to bring the meta-analysis into the unforgiving light of judicial scrutiny.

Zhang is a biochemist and toxicologist at the University of California, Berkeley. Along with two other co-authors of her 2019 meta-analysis paper, she had been a member of the EPA’s 2016 scientific advisory panel on glyphosate. After plaintiffs’ counsel disclosed Zhang as an expert witness, she disclosed her anticipated testimony, as required by Federal Rule of Civil Procedure 26, by attaching and adopting by reference the contents of two of her published papers. The first paper was her 2019 meta-analysis; the other paper discussed putative mechanisms. Neither paper concluded that glyphosate causes NHL. Zhang’s disclosure did not add materially to her 2019 published analysis of six epidemiologic studies on glyphosate and NHL.

The defense challenged the validity of Dr. Zhang’s proffered opinions, and her exclusive reliance upon her own 2019 meta-analysis required the MDL court to pay attention to the failings of that paper, which had previously escaped critical judicial scrutiny. In June 2024, after an oral hearing in Bulone v. Monsanto, at which Dr. Zhang testified, Judge Chhabria ruled that Zhang’s proffered testimony, and the meta-analysis upon which she relied, amounted to “junk science.”[10]

Judge Chhabria, perhaps encouraged by the recently fortifying amendment to Rule 702, issued a remarkable opinion that paid close attention to the indicia of validity of an expert witness’s opinion and the underlying meta-analysis. Judge Chhabria quickly spotted the disconnect between Zhang’s published papers and what is required for an admissible causation opinion. The mechanism paper did not address the extant epidemiology, and both sides in the MDL had emphasized that the epidemiology was critically important for determining whether there was, or was not, causation.

Zhang’s meta-analysis did evaluate some, but not all, of the available epidemiology, but the paper’s conclusion stopped considerably short of the needed opinion on causation. Zhang and colleagues had concluded that there was a “compelling link” between exposures to [glyphosate-based herbicides] and increased risk for NHL. In their paper’s key figure, showcasing the summary estimate of relative risk of 1.41 (95% C.I., 1.13–1.75), Zhang and her co-authors concluded only that exposure was “associated with an increased risk of NHL.” According to Judge Chhabria, in incorporating her 2019 paper into her Rule 26 report, Zhang failed to add a proper holistic causation analysis, as other expert witnesses had done by considering the Bradford Hill predicates and considerations.

Judge Chhabria picked up on another problem, one with both legal and scientific implications. A meta-analysis is out of date as soon as a subsequent epidemiologic study that would have satisfied its inclusion criteria becomes available. Since the publication of her meta-analysis in 2019, additional studies had in fact been published. At the hearing, Dr. Zhang acknowledged that several of them would qualify for inclusion in the meta-analysis, per her own stated methods. Her failure to update the meta-analysis made her report incomplete and inadmissible for a court matter in 2024.

Judge Chhabria might have stopped there, but he took a closer look at the meta-analysis to explore whether it was a valid analysis, on its own terms. Much as Chief Judge Nancy Rosenstengel had done with the made-for-litigation meta-analysis concocted by Martin Wells in the paraquat litigation,[11] Judge Chhabria examined whether Zhang had been faithful to her own stated methods. Like Chief Judge Rosenstengel’s analysis, Judge Chhabria’s analysis stands as a strong rebuttal to the uncharitable opinion of Professor Edward Cheng, who has asserted that judges lack the expertise to evaluate the “expert opinions” before them.[12]

Judge Chhabria accepted the intellectual challenge that Rule 702 mandates. With the EPA memorandum lighting the way, Judge Chhabria readily discerned that “the challenged meta-analysis was not reliably performed.” He declared that the Zhang meta-analysis was “junk science,” with “deep methodological problems.”

Zhang claimed that she was basing the meta-analysis on the subgroups of six studies with the heaviest glyphosate exposure. This claim was undermined by the absence of any exposure-response gradient in the study deemed by Zhang to be of the highest quality. Furthermore, of the remaining five studies, three studies failed to provide any exposure-dependent analysis other than a comparison of NHL rates among “ever” versus “never” glyphosate exposure. As a result of this heterogeneity, Zhang used all the data from studies without exposure characterizations, but only limited data from the other studies that analyzed NHL by exposure levels. And because the highest quality study was among those that provided exposure level correlations, Zhang’s meta-analysis used only some of the data from it.

The analytical problems created by Zhang’s meta-analytical approach were compounded by the included studies’ having measured glyphosate exposures differently, with different cut-points for inclusion as heavily exposed. Some of the excluded study participants would have had heavier exposures than those included in the summary analysis.

In the universe of included studies, some provided adjusted results from multi-variate analyses that included other pesticide exposures. Other studies reported only unadjusted results. Even though Zhang’s method stated a preference for adjusted analyses, she inexplicably failed to use adjusted data in the case of one study that provided both adjusted and unadjusted results.

As shown in Judge Chhabria’s review, Zhang’s methodological errors created an incoherent analysis, with methods that could not be justified. Even accepting its own stated methodology, the meta-analysis was an exercise in cherry picking. In the court’s terms, it was, without qualification, “junk science.”

After the filing of briefs, Judge Chhabria provided the parties an oral hearing, with an opportunity for viva voce testimony. Dr. Zhang thus had a full opportunity to defend her meta-analysis. The hearing, however, did not go well for her. Zhang could not talk intelligently about the studies included, or how they defined high exposure. Zhang’s lack of familiarity with her own opinion and published paper was yet a further reason for excluding her testimony.

As might be expected, plaintiffs’ counsel attempted to hide behind peer review. Plaintiffs’ counsel attempted to shut down Rule 702 scrutiny of the Zhang meta-analysis by suggesting that the trial court had no business digging into validity concerns given that Zhang had published her meta-analysis in what apparently was a peer reviewed journal. Judge Chhabria would have none of it. In his opinion, publication in a peer-reviewed journal cannot obscure the glaring methodological defects of the relied upon meta-analysis. The court observed that “[p]re-publication editorial peer review, just by itself, is far from a guarantee of scientific reliability.”[13] The EPA memorandum was thus a more telling indicator of the validity issues than the publication in a nominally peer-reviewed journal.

Contrary to some law professors who are now seeking to dismantle expert witness gatekeeping as beyond a judge’s competence, Judge Chhabria dismissed the suggestion that he lacked the expertise to adjudicate the validity issues. Indeed, he displayed a better understanding of the meta-analytic process than did Dr. Zhang. As the court observed, one of the goals of MDL assignments was to permit a single trial judge to have time to engage with the scientific issues and to develop “fluency” in the relevant scientific studies. Indeed, when MDL judges have the fluency in the scientific concepts to address Rule 702 or 703 issues, it would be criminal for them to ignore it.

The Bulone opinion should encourage lawyers to get “into the weeds” of expert witness opinions. There is nothing that a little clear thinking – and glyphosate – cannot clear away. Indeed, now that the weeds of Zhang’s meta-analysis are cleared away, it is hard to fathom that any other expert witness can rely upon it without running afoul of both Federal Rules of Evidence 702 and 703.

There were a few issues not addressed in Bulone. As seen in her oral hearing testimony, Zhang probably lacked the qualifications to proffer the meta-analysis. The bar for qualification as an expert witness, however, is sadly very low. One other issue that might well have been addressed is Zhang’s use of a fixed effect model for her meta-analysis. Considering that she was pooling data from cohort and case-control studies, some with and some without adjustments for confounders, with different measures of exposure, and some with and some without exposure-dependent analyses, Zhang and her co-authors were not justified in using a fixed effect model for arriving at a summary estimate of relative risk. Admittedly, this error could easily have been lost in the flood of others.
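To see why the choice between a fixed effect and a random effects model matters, here is a minimal sketch in Python, using purely hypothetical study inputs rather than the actual glyphosate data:

    import math

    # A minimal sketch with hypothetical inputs (not the Zhang/glyphosate data):
    # six studies' log-odds-ratio estimates and standard errors, pooled first with
    # a fixed-effect model and then with a DerSimonian-Laird random-effects model.
    # With heterogeneous studies, the random-effects pooling yields a wider
    # interval; a fixed-effect model understates the uncertainty of the summary.
    est = [0.05, 0.60, -0.20, 0.90, 0.35, 0.10]   # hypothetical ln(OR) values
    se = [0.10, 0.20, 0.25, 0.30, 0.15, 0.22]     # hypothetical standard errors

    w = [1 / s ** 2 for s in se]                  # inverse-variance weights
    fixed = sum(wi * ei for wi, ei in zip(w, est)) / sum(w)

    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, est))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)     # between-study variance

    w_re = [1 / (s ** 2 + tau2) for s in se]
    random_effects = sum(wi * ei for wi, ei in zip(w_re, est)) / sum(w_re)

    print("fixed-effect:   OR %.2f, SE(log) %.3f" % (math.exp(fixed), (1 / sum(w)) ** 0.5))
    print("random-effects: OR %.2f, SE(log) %.3f" % (math.exp(random_effects), (1 / sum(w_re)) ** 0.5))

In this illustration, the random-effects standard error is roughly twice the fixed-effect one; pooling disparate studies with a fixed effect model overstates the precision of the summary relative risk.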

Postscript

Glyphosate is not merely a scientific issue. Its manufacturer, Monsanto, is the frequent target of media outlets (such as Telesur) from autocratic countries, such as Communist China and its client state, Venezuela.[14]

Long live the heroes of Tiananmen Square.


[1] “The IARC-hy of Evidence – Incoherent & Inconsistent Classifications of Carcinogenicity,” Tortini (Sept. 19, 2023).

[2] Robert E Tarone, “On the International Agency for Research on Cancer classification of glyphosate as a probable human carcinogen,” 27 Eur. J. Cancer Prev. 82 (2018).

[3] Luoping Zhang, Iemaan Rana, Rachel M. Shaffer, Emanuela Taioli, Lianne Sheppard, “Exposure to glyphosate-based herbicides and risk for non-Hodgkin lymphoma: A meta-analysis and supporting evidence,” 781 Mutation Research/Reviews in Mutation Research 186 (2019).

[4] David J. Miller, Acting Chief Toxicology and Epidemiology Branch Health Effects Division, U.S. Environmental Protection Agency, Memorandum to Christine Olinger, Chief Risk Assessment Branch I, “Glyphosate: Epidemiology Review of Zhang et al. (2019) and Leon et al. (2019) publications for Response to Comments on the Proposed Interim Decision” (Jan. 6, 2020).

[5] Geoffrey C. Kabat, William J. Price, Robert E. Tarone, “On recent meta-analyses of exposure to glyphosate and risk of non-Hodgkin’s lymphoma in humans,” 32 Cancer Causes & Control 409 (2021).

[6] Geoffrey Kabat, “Paper Claims A Link Between Glyphosate And Cancer But Fails To Show Evidence,” Science 2.0 (Feb. 18, 2019).

[7] Lianne Sheppard, “Glyphosate Science is Nuanced. Arguments about it on the Internet? Not so much,” Forbes (Feb. 20, 2020).

[8] Geoffrey Kabat, “EPA Refuted A Meta-Analysis Claiming Glyphosate Can Cause Cancer And Senior Author Lianne Sheppard Doubled Down,” Science 2.0 (Feb. 26, 2020).

[9] Maria Dinzeo, “Jurors Hear of New Study Linking Roundup to Cancer,” Courthouse News Service (April 8, 2019).

[10] Bulone v. Monsanto Co., Case No. 16-md-02741-VC, MDL 2741 (N.D. Cal. June 20, 2024). See Hank Campbell, “Glyphosate legal update: Meta-study used by ambulance-chasing tort lawyers targeting Bayer’s Roundup as carcinogenic deemed ‘junk science nonsense’ by trial judge,” Genetic Literacy Project (June 24, 2024).

[11] In re Paraquat Prods. Liab. Litig., No. 3:21-MD-3004-NJR, 2024 WL 1659687 (S.D. Ill. Apr. 17, 2024) (opinion sur Rule 702 motion), appealed sub nom., Fuller v. Syngenta Crop Protection, LLC, No. 24-1868 (7th Cir. May 17, 2024). See “Paraquat Shape-Shifting Expert Witness Quashed,” Tortini (April 24, 2024).

[12] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022). See “Cheng’s Proposed Consensus Rule for Expert Witnesses,” Tortini (Sept. 15, 2022); “Further thoughts on Cheng’s Consensus Rule,” Tortini (Oct. 3, 2022).

[13] Bulone, citing Valentine v. Pioneer Chlor Alkali Co., 921 F. Supp. 666, 674-76 (D. Nev. 1996), for its distinction between “editorial peer review” and “true peer review,” the latter including post-publication assessment of a paper, which is the more important consideration for Rule 702 purposes.

[14] Anne Applebaum, Autocracy, Inc.: The Dictators Who Want to Run the World 66 (2024).

Paraquat Shape-Shifting Expert Witness Quashed

April 24th, 2024

Another multi-district litigation (MDL) has hit a jarring speed bump. Claims for Parkinson’s disease (PD), allegedly caused by exposure to paraquat dichloride (paraquat), were consolidated, in June 2021, for pre-trial coordination in MDL No. 3004, in the Southern District of Illinois, before Chief Judge Nancy J. Rosenstengel. Like many health-effects litigation claims, the plaintiffs’ claims in these paraquat cases turn on epidemiologic evidence. To present their causation case in the first MDL trial cases, plaintiffs’ counsel nominated a statistician, Martin T. Wells. Last week, Judge Rosenstengel found Wells’ opinion so infected by invalid methodologies and inferences as to be inadmissible under the most recent version of Rule 702.[1] Summary judgment in the trial cases followed.[2]

Back in the 1980s, paraquat gained some legal notoriety in one of the most retrograde Rule 702 decisions.[3] Both the herbicide and Rule 702 survived, however, and they both remain in wide use. For the last two decades, there have been widespread challenges to the safety of paraquat, and in particular there have been claims that paraquat can cause PD or parkinsonism under some circumstances. Despite this background, the plaintiffs’ counsel in MDL 3004 began with four problems.

First, paraquat is closely regulated for agricultural use in the United States. Under federal law, paraquat can be used to control the growth of weeds only “by or under the direct supervision of a certified applicator.”[4] The regulatory record created an uphill battle for plaintiffs.[5] Under the Federal Insecticide, Fungicide, and Rodenticide Act (“FIFRA”), the U.S. EPA has regulatory and enforcement authority over the use, sale, and labeling of paraquat.[6] As part of its regulatory responsibilities, in 2019, the EPA systematically reviewed available evidence to assess whether there was an association between paraquat and PD. The agency’s review concluded that “there is limited, but insufficient epidemiologic evidence at this time to conclude that there is a clear associative or causal relationship between occupational paraquat exposure and PD.”[7] In 2021, the EPA issued its Interim Registration Review Decision, and reapproved the registration of paraquat. In doing so, the EPA concluded that “the weight of evidence was insufficient to link paraquat exposure from pesticidal use of U.S. registered products to Parkinson’s disease in humans.”[8]

Second, beyond the EPA, there were no other published reviews, systematic or otherwise, which reached a conclusion that paraquat causes PD.[9]

Third, the plaintiffs’ claims faced another serious impediment. Their counsel placed their reliance upon Professor Martin Wells, a statistician on the faculty of Cornell University. Unfortunately for plaintiffs, Wells has been known to operate as a “cherry picker,” and his methodology has previously been reviewed in an unfavorable light. Another MDL court, which had examined a review and meta-analysis propounded by Wells, found that his reports “were marred by a selective review of data and inconsistent application of inclusion criteria.”[10]

Fourth, the plaintiffs’ claims were before Chief Judge Nancy J. Rosenstengel, who was willing to do the hard work required under Rule 702, especially as it has recently been amended to clarify and emphasize the gatekeeper’s responsibility to evaluate validity issues in the proffered opinions of expert witnesses. As her 97-page decision evinces, Judge Rosenstengel conducted four days of hearings, which included viva voce testimony from Martin Wells, and she obviously read the underlying papers and reviews, as well as the briefs and the Reference Manual on Scientific Evidence, with great care. What followed did not go well for Wells or the plaintiffs’ claims.[11] Judge Rosenstengel has written an opinion that may be the first careful judicial consideration of the basic requirements of a systematic review.

The court noted that systematic reviewers carefully define a research question and what kinds of empirical evidence will be reviewed, and then collect, summarize, and, if feasible, synthesize the available evidence into a conclusion.[12] The court emphasized that systematic reviewers should “develop a protocol for the review before commencement and adhere to the protocol regardless of the results of the review.”[13]

Wells proffered a meta-analysis, and a “weight of the evidence” (WOE) review from which he concluded that paraquat causes PD and nearly triples the risk of the disease among workers exposed to the herbicide.[14] In his reports, Wells identified a universe of at least 36 studies, but included seven in his meta-analysis. The defense had identified another two studies that were germane.[15]

Chief Judge Rosenstengel’s opinion is noteworthy for its fine attention to detail, detail that matters to the validity of the expert witness’s enterprise. Martin Wells set out to do a meta-analysis, which was all well and good. With a universe of 36 studies, with sub-findings, alternative analyses, and changing definitions of relevant exposure, the devil lay in the details.

The MDL court was careful to point out that it was not gainsaying Wells’ decision to limit his meta-analysis to case-control studies, or his grading of any particular study as being of low quality. Systematic reviews and meta-analyses are generally accepted techniques that are part of a scientific approach to causal inference, but each has standards, predicates, and requirements for valid use. Expert witnesses must not only use a reliable methodology; Rule 702(d) requires that they reliably apply their chosen methodology to the facts at hand in reaching their conclusions.[16]

The MDL court concluded that Wells’ meta-analysis was not sufficiently reliable under Rule 702 because he failed faithfully and reliably to apply his own articulated methodology. The court followed Wells’ lead in identifying the source and content of his chosen methodology, and simply examined his proffered opinion for compliance with that methodology.[17] The basic principles of validity for conducting meta-analyses were not, in any event, really contested. These principles and requirements were clearly designed to ensure and enhance the reliability of meta-analyses by pre-empting results-driven, reverse-engineered summary estimates of association.

The court found that Wells failed clearly to pre-specify his eligibility criteria. He then redefined exposure criteria, study inclusion and eligibility criteria, and study quality criteria after looking at the evidence. He also inconsistently applied his stated criteria, all in an apparent effort to exclude less favorable study outcomes. These ad hoc steps were some of Wells’ deviations from the standards to which he paid lip service.

The court did not exclude Wells because it disagreed with his substantive decisions to include or exclude any particular study, or with his quality grading of any study. Rather, Dr. Wells’ meta-analysis did not pass muster under Rule 702 because its methodology was unclear, inconsistently applied, not replicable, and at times transparently reverse-engineered.[18]

The court’s evaluation of Wells was unflinchingly critical. Wells’ proffered opinions “required several methodological contortions and outright violations of the scientific standards he professed to apply.”[19] From his first involvement in this litigation, Wells had violated the basic rules of conducting systematic reviews and meta-analyses.[20] His definition of “occupational” exposure meandered to suit his desire to include one study (with low variance) that might otherwise have been excluded.[21] Rather than pre-specifying his review process, his study inclusion criteria, and his quality scores, Wells engaged in an unwritten “holistic” review process, which he conceded was not objectively replicable. Wells’ approach left him free to include studies he wanted in his meta-analysis, and then provide post hoc justifications.[22] His failure to identify his inclusion/exclusion criteria was a “methodological red flag” in Dr. Wells’ meta-analysis, which suggested his reverse engineering of the whole analysis, the “very antithesis of a systematic review.”[23]

In what the court described as “methodological shapeshifting,” Wells blatantly and inconsistently graded studies he wanted to include, and had already decided to include in his meta-analysis, to be of higher quality.[24] The paraquat MDL court found, unequivocally, that Wells had “failed to apply the same level of intellectual rigor to his work in the four trial selection cases that would be required of him and his peers in a non-litigation setting.”[25]

It was also not lost upon the MDL court that Wells had shifted from a fixed effect to a random effects meta-analysis between his principal and rebuttal reports.[26] Basic to the meta-analytical enterprise is a predicate systematic review, properly done, with pre-specification of inclusion and exclusion criteria for what studies will go into any meta-analysis. The MDL court noted that both sides had cited Borenstein’s textbook on meta-analysis,[27] and that Wells had himself cited the Cochrane Handbook[28] for the basic proposition that objective and scientifically valid study selection criteria should be clearly stated in advance to ensure the objectivity of the analysis.

There was, of course, legal authority for this basic proposition about prespecification. Given that the selection of studies that go into a systematic review and meta-analysis can be dispositive of its conclusion, undue subjectivity or ad hoc inclusion can easily arrange a desired outcome.[29] Furthermore, meta-analysis carries with it the opportunity to mislead a lay jury with a single (and inflated) risk ratio,[30] obtained by the operator’s manipulation of inclusion and exclusion criteria. This opportunity required the MDL court to examine the methodological rigor of the proffered meta-analysis carefully, to evaluate whether it reflected a valid pooling of data or was concocted to win a case.[31]

Martin Wells had previously acknowledged the dangers of manipulation and subjective selectivity inherent in systematic reviews and meta-analyses. The MDL court quoted from Wells’ testimony in Martin v. Actavis:

QUESTION: You would certainly agree that the inclusion-exclusion criteria should be based upon objective criteria and not simply because you were trying to get to a particular result?

WELLS: No, you shouldn’t load the – sort of cook the books.

QUESTION: You should have prespecified objective criteria in advance, correct?

WELLS: Yes.[32]

The MDL court also picked up on a subtle but important methodological point about which odds ratio to use in a meta-analysis when a study provides multiple analyses of the same association. In his first paraquat deposition, Wells cited the Cochrane Handbook for the proposition that if a crude risk ratio and a risk ratio from a multivariate analysis are both presented in a given study, then the adjusted risk ratio (and its corresponding measure of standard error, seen in its confidence interval) is generally preferable, to reduce the play of confounding.[33] Wells violated this basic principle by ignoring the multivariate analysis in the study that dominated his meta-analysis (Liou) in favor of the unadjusted bivariate analysis. Given that Wells accepted this basic principle, the MDL court found that Wells likely selected the minimally adjusted odds ratio over the multivariate adjusted odds ratio for inclusion in his meta-analysis in order to have the smaller variance (and thus greater weight) from the former. This maneuver was disqualifying under Rule 702.[34]
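To illustrate the variance-weight point with hypothetical numbers rather than the Liou figures, a short sketch:

    import math

    # Hypothetical confidence intervals, for illustration only: in inverse-variance
    # pooling a study's weight is 1/SE^2 of its log estimate, so the estimate with
    # the narrower interval carries more weight in the summary result.
    def weight_from_ci(lower, upper):
        """Approximate inverse-variance weight from a reported 95% CI on an odds ratio."""
        se_log = (math.log(upper) - math.log(lower)) / (2 * 1.96)
        return 1 / se_log ** 2

    print(weight_from_ci(1.2, 8.0))    # minimally adjusted estimate, narrower CI -> larger weight
    print(weight_from_ci(0.9, 12.0))   # fully adjusted estimate, wider CI -> smaller weight

In this hypothetical, the estimate with the narrower interval carries nearly twice the weight, which is precisely why swapping a minimally adjusted result for the adjusted one can pull the summary estimate toward the operator’s preferred conclusion.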

All in all, the paraquat MDL court’s Rule 702 ruling was a convincing demonstration that non-expert generalist judges, with assistance from subject-matter experts, treatises, and legal counsel, can evaluate and identify deviations from methodological standards of care.


[1] In re Paraquat Prods. Liab. Litig., Case No. 3:21-md-3004-NJR, MDL No. 3004, Slip op., ___ F. Supp. 3d ___ (S.D. Ill. Apr. 17, 2024) [Slip op.].

[2] In re Paraquat Prods. Liab. Litig., Op. sur motion for judgment, Case No. 3:21-md-3004-NJR, MDL No. 3004 (S.D. Ill. Apr. 17, 2024). See also Brendan Pierson, “Judge rejects key expert in paraquat lawsuits, tosses first cases set for trial,” Reuters (Apr. 17, 2024); Hailey Konnath, “Trial-Ready Paraquat MDL Cases Tossed After Testimony Axed,” Law360 (Apr. 18, 2024).

[3] Ferebee v. Chevron Chem. Co., 552 F. Supp. 1297 (D.D.C. 1982), aff’d, 736 F.2d 1529 (D.C. Cir.), cert. denied, 469 U.S. 1062 (1984). See “Ferebee Revisited,” Tortini (Dec. 28, 2017).

[4] See 40 C.F.R. § 152.175.

[5] Slip op. at 31.

[6] 7 U.S.C. § 136w; 7 U.S.C. § 136a(a); 40 C.F.R. § 152.175. The agency must periodically review the registration of the herbicide. 7 U.S.C. § 136a(g)(1)(A). See Ruckelshaus v. Monsanto Co., 467 U.S. 986, 991-92 (1984).

[7] See Austin Wray & Aaron Niman, Memorandum, Paraquat Dichloride: Systematic review of the literature to evaluate the relationship between paraquat dichloride exposure and Parkinson’s disease at 35 (June 26, 2019).

[8] See also Jeffrey Brent and Tammi Schaeffer, “Systematic Review of Parkinsonian Syndromes in Short- and Long-Term Survivors of Paraquat Poisoning,” 53 J. Occup. & Envt’l Med. 1332 (2011) (“An analysis [of] the world’s entire published experience found no connection between high-dose paraquat exposure in humans and the development of parkinsonism.”).

[9] Douglas L. Weed, “Does paraquat cause Parkinson’s disease? A review of reviews,” 86 Neurotoxicology 180, 180 (2021).

[10] In re Incretin-Based Therapies Prods. Liab. Litig., 524 F. Supp. 3d 1007, 1038, 1043 (S.D. Cal. 2021), aff’d, No. 21-55342, 2022 WL 898595 (9th Cir. Mar. 28, 2022) (per curiam). See “Madigan’s Shenanigans and Wells Quelled in Incretin-Mimetic Cases,” Tortini (July 15, 2022).

[11] The MDL court obviously worked hard to learn the basic principles of epidemiology. The court relied extensively upon the epidemiology chapter in the Reference Manual on Scientific Evidence. Much of that material is very helpful, but its exposition of statistical concepts is at times confused and erroneous. It is unfortunate that courts do not pay more attention to the more precise and accurate exposition in the chapter on statistics. Citing the epidemiology chapter, the MDL court gave an incorrect interpretation of the p-value: “A statistically significant result is one that is unlikely the product of chance.” Slip op. at 17 n.11. And then again, citing the Reference Manual, the court declared that “[a] p-value of .1 means that there is a 10% chance that values at least as large as the observed result could have been the product of random error.” Id. Similarly, the MDL court gave an incorrect interpretation of the confidence interval. In a footnote, the court tells us that “[r]esearchers ordinarily assert a 95% confidence interval, meaning that ‘there is a 95% chance that the “true” odds ratio value falls within the confidence interval range’. In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., MDL No. 2342, 2015 WL 7776911, at *2 (E.D. Pa. Dec. 2, 2015).” Slip op. at 17 n.12. Citing another court for the definition of a statistical concept is a risky business.

[12] Slip op. at 20, citing Lisa A. Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J.L. & Pol’y 569, 570 (2006).

[13] Slip op. at 21, quoting Bero, at 575.

[14] Slip op. at 3.

[15] The nine studies at issue were as follows: (1) H.H. Liou, et al., “Environmental risk factors and Parkinson’s disease; A case-control study in Taiwan,” 48 Neurology 1583 (1997); (2) Caroline M. Tanner, et al., “Rotenone, Paraquat and Parkinson’s Disease,” 119 Envt’l Health Persps. 866 (2011) (a nested case-control study within the Agricultural Health Study (“AHS”)); (3) Clyde Hertzman, et al., “A Case-Control Study of Parkinson’s Disease in a Horticultural Region of British Columbia,” 9 Movement Disorders 69 (1994); (4) Anne-Maria Kuopio, et al., “Environmental Risk Factors in Parkinson’s Disease,” 14 Movement Disorders 928 (1999); (5) Katherine Rugbjerg, et al., “Pesticide exposure and risk of Parkinson’s disease – a population-based case-control study evaluating the potential for recall bias,” 37 Scandinavian J. of Work, Env’t & Health 427 (2011); (6) Jordan A. Firestone, et al., “Occupational Factors and Risk of Parkinson’s Disease: A Population-Based Case-Control Study,” 53 Am. J. of Indus. Med. 217 (2010); (7) Amanpreet S. Dhillon, “Pesticide / Environmental Exposures and Parkinson’s Disease in East Texas,” 13 J. of Agromedicine 37 (2008); (8) Marianne van der Mark, et al., “Occupational exposure to pesticides and endotoxin and Parkinson’s disease in the Netherlands,” 71 J. Occup. & Envt’l Med. 757 (2014); (9) Srishti Shrestha, et al., “Pesticide use and incident Parkinson’s disease in a cohort of farmers and their spouses,” 191 Envt’l Research (2020).

[16] Slip op. at 75.

[17] Slip op. at 73.

[18] Slip op. at 75, citing In re Mirena IUS Levonorgestrel-Related Prod. Liab. Litig. (No. II), 341 F. Supp. 3d 213, 241 (S.D.N.Y. 2018) (“Opinions that assume a conclusion and reverse-engineer a theory to fit that conclusion are . . . inadmissible.”) (internal citation omitted), aff’d, 982 F.3d 113 (2d Cir. 2020); In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., No. 12-md-2342, 2015 WL 7776911, at *16 (E.D. Pa. Dec. 2, 2015) (excluding expert’s opinion where he “failed to consistently apply the scientific methods he articulat[ed], . . . deviated from or downplayed certain well established principles of his field, and . . . inconsistently applied methods and standards to the data so as to support his a priori opinion.”), aff’d, 858 F.3d 787 (3d Cir. 2017).

[19] Slip op. at 35.

[20] Slip op. at 58.

[21] Slip op. at 55.

[22] Slip op. at 41, 64.

[23] Slip op. at 59-60, citing In re Lipitor (Atorvastatin Calcium) Mktg., Sales Pracs. & Prod. Liab. Litig., 892 F.3d 624, 634 (4th Cir. 2018) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

[24] Slip op. at 67, 69-70, citing In re Zoloft (Sertraline Hydrochloride) Prod. Liab. Litig., 858 F.3d 787, 795-97 (3d Cir. 2017) (“[I]f an expert applies certain techniques to a subset of the body of evidence and other techniques to another subset without explanation, this raises an inference of unreliable application of methodology.”); In re Bextra and Celebrex Mktg. Sales Pracs. & Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1179 (N.D. Cal. 2007) (excluding an expert witness’s causation opinion because of his result-oriented, inconsistent evaluation of data sources).

[25] Slip op. at 40.

[26] Slip op. at 61 n.44.

[27] Michael Borenstein, Larry V. Hedges, Julian P. T. Higgins, and Hannah R. Rothstein, Introduction to Meta-Analysis (2d ed. 2021).

[28] Jacqueline Chandler, James Thomas, Julian P. T. Higgins, Matthew J. Page, Miranda Cumpston, Tianjing Li, Vivian A. Welch, eds., Cochrane Handbook for Systematic Reviews of Interventions (2d ed. 2023).

[29] Slip op. at 56, citing In re Zimmer Nexgen Knee Implant Prod. Liab. Litig., No. 11 C 5468, 2015 WL 5050214, at *10 (N.D. Ill. Aug. 25, 2015).

[30] Slip op. at 22. The court noted that the Reference Manual on Scientific Evidence cautions that “[p]eople often tend to have an inordinate belief in the validity of the findings when a single number is attached to them, and many of the difficulties that may arise in conducting a meta-analysis, especially of observational studies such as epidemiological ones, may consequently be overlooked.” Id., quoting from Manual, at 608.

[31] Slip op. at 57, citing Deutsch v. Novartis Pharms. Corp., 768 F. Supp. 2d 420, 457-58 (E.D.N.Y. 2011) (“[T]here is a strong risk of prejudice if a Court permits testimony based on an unreliable meta-analysis because of the propensity for juries to latch on to the single number.”).

[32] Slip op. at 64, quoting from Notes of Testimony of Martin Wells, in In re Testosterone Replacement Therapy Prod. Liab. Litig., Nos. 1:14-cv-1748, 15-cv-4292, 15-cv-426, 2018 WL 7350886 (N.D. Ill. Apr. 2, 2018).

[33] Slip op. at 70.

[34] Slip op. at 71-72, citing People Who Care v. Rockford Bd. of Educ., 111 F.3d 528, 537-38 (7th Cir. 1997) (“[A] statistical study that fails to correct for salient explanatory variables . . . has no value as causal explanation and is therefore inadmissible in federal court.”); In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1140 (N.D. Cal. 2018).

How Access to a Protocol and Underlying Data Gave Yale Researchers a Big Black Eye

April 13th, 2024

Prelude to Litigation

Phenylpropanolamine (PPA) is a direct α-adrenergic agonist that was widely used as a medication to control cold symptoms and to suppress appetite for weight loss.[1] In 1972, an over-the-counter (OTC) Advisory Review Panel considered the safety and efficacy of PPA-containing nasal decongestant medications, leading, in 1976, to a recommendation that the Food and Drug Administration (FDA) label these medications as “generally recognized as safe and effective.” Several years later, another Panel recommended that PPA-containing weight control products also be recognized as safe and effective.

Six years later, in 1982, another FDA panel recommended that PPA be considered safe and effective for appetite suppression in dieting. Two epidemiologic studies of PPA and hemorrhagic stroke (HS) were conducted in the 1980s. One study, by Hershel Jick and colleagues, presented as a letter to the editor, reported a relative risk of 0.58, with a 95% exact confidence interval of 0.03 to 2.9.[2] A year later, two researchers, reporting a study based upon Medicaid databases, found no significant association between HS and PPA.[3]

The FDA, however, did not approve a final monograph recognizing PPA’s “safe and effective” status, because of occasional reports of hemorrhagic stroke in patients who used PPA-containing medications, mostly young women who had used PPA appetite suppressants for dieting. In 1982, the FDA requested information on the effects of PPA on blood pressure, particularly with respect to weight-loss medications. The agency deferred a proposed 1985 final monograph because of the blood pressure issue.

The FDA deemed the data inadequate to answer its safety concerns. Congressional and agency hearings in the early 1990s amplified some public concern, but in 1990, the Director of Cardio-Renal Drug Products, at the Center for Drug Evaluation and Research, found several well-supported facts, based upon robust evidence. Blood pressure studies in humans showed a biphasic response. PPA initially causes blood pressure to rise above baseline (a pressor effect), and then to fall below baseline (a depressor effect). These blood pressure responses are dose-related, and diminish with repeated use. Patients develop tolerance to the pressor effects within a few hours. The Center concluded that at doses of 50 mg of PPA and below, the pressor effects of the medication are smaller, indeed smaller than normal daily variations in basal blood pressure. Humans develop tolerance to the pressor effects quickly, within the time frame of a single dose. The only time period in which even a theoretical risk might exist is within a few hours, or less, of a patient’s taking the first dose of PPA medication. Doses of 25 mg of immediate-release PPA could not realistically be considered to pose any absolute safety risk, and such doses have a reasonable safety margin.[4]

In 1991, Dr. Heidi Jolson, an FDA scientist, wrote that the agency’s spontaneous adverse event reporting system “suggested” that PPA appetite suppressants increased the risk of cerebrovascular accidents. A review of stroke data, including the adverse event reports, by epidemiology consultants failed to support a causal association between PPA and hemorrhagic stroke. The reviewers, however, acknowledged that the available data did not permit them to rule out a risk of HS. The FDA adopted the reviewers’ recommendation for a prospective, large case-control study designed to take into account the known physiological effects of PPA on blood pressure.[5]

What emerged from this regulatory indecision was a decision to conduct another epidemiologic study. In November 1992, a manufacturers’ group, now known as the Consumer Healthcare Products Association (CHPA), proposed a case-control study that would become known as the Hemorrhagic Stroke Project (HSP). In March 1993, the group submitted a proposed protocol, along with a suggestion that the study be conducted by several researchers at Yale University. After feedback from the public and the Yale researchers, the group submitted a final protocol in April 1994. Both the researchers and the sponsors agreed to a scientific advisory group that would operate independently and oversee the study. The study began in September 1994. The FDA deferred action on a final monograph for PPA, and product marketing continued.

The Yale HSP authors delivered their final report on their case-control study to the FDA in May 2000.[6] The HSP included 702 HS cases and 1,376 controls, men and women, ages 18 to 49. The report authors concluded that “the results of the HSP suggest that PPA increases the risk for hemorrhagic stroke.”[7] The study had taken over five years to design, conduct, and analyze. In September 2000, the FDA’s Office of Post-Marketing Drug Risk Assessment released the results, with its own interpretation and conclusion that dramatically exceeded the HSP authors’ own interpretation.[8] The FDA’s Nonprescription Drugs Advisory Committee then voted, on October 19, 2000, to recommend that PPA be reclassified as “unsafe.” The Committee’s meeting, however, was attended by several leading epidemiologists who pointed to important methodological problems and limitations in the design and execution of the HSP.[9]

In November 2000, the FDA’s Nonprescription Drugs Advisory Committee determined that there was a significant association between PPA and HS, and recommended that PPA not be considered safe for OTC use. The FDA never addressed causality; nor did it have to do so under governing law. The FDA’s actions led the drug companies voluntarily to withdraw PPA-containing products.

The December 21, 2000, issue of The New England Journal of Medicine featured a revised version of the HSP report as its lead article.[10] Under the journal’s guidelines for statistical reporting, the authors were required to present two-tailed p-values or confidence intervals. Results from the HSP Final Report looked considerably less impressive after the obtained significance probabilities were doubled. Only the finding for appetite suppressant use was branded an independent risk factor:

“The results suggest that phenylpropanolamine in appetite suppressants, and possibly in cough and cold remedies, is an independent risk factor for hemorrhagic stroke in women.”[11]

The HSP had multiple pre-specified aims, and several other statistical comparisons and analyses were added along the way. No statistical adjustment was made for these multiple comparisons, but their presence in the study must be considered. Perhaps that is why the authors merely suggest that PPA in appetite suppressants was an independent risk factor for HS in women. Under current statistical guidelines for the New England Journal of Medicine, this suggestion might require even further qualification and weakening.[12]

The HSP study faced difficult methodological issues. The detailed and robust identification of PPA’s blood pressure effects in humans focused attention on the crucial timing of an HS in relation to ingestion of a PPA medication. Any use, or any use within the last seven or 30 days, would be fairly irrelevant to the pathophysiology of a cerebral hemorrhage. The HSP authors settled on a definition of “first use” as any use of a PPA product within 24 hours, and no other uses in the previous two weeks.[13] Given the rapid onset of pressor and depressor effects, and the adaptation response, this definition of first use was generous and likely included many irrelevant exposed cases, but at least the definition attempted to incorporate the phenomena of short-lived effect and adaptation. The appetite suppressant association did not involve any “first use,” which makes the one “suggested” increased risk much less certain and relevant.

Under the alternative definition of exposure, in addition to “first use,” ingestion of the PPA-containing medication counted if it took place on “the index day before the focal time and the preceding three calendar days.” Again, given the known pharmacokinetics and physiological effects of PPA, this three-day (plus) window seems doubtfully relevant.

All instances of “first use” occurred among men and women who used a cough or cold remedy, with an adjusted OR of 3.14, a 95% confidence interval (CI) of 0.96–10.28, and p = 0.06. The very wide confidence interval, in excess of an order of magnitude, reveals the fragility of the statistical inference. There were but 8 first use exposed stroke cases (out of 702), and 5 exposed controls (out of 1,376).

When this first use analysis is broken down between men and women, the result becomes even more fragile. Among men, there was only one first use exposure in 319 male HS patients, and one first use exposure in 626 controls, for an adjusted OR of 2.95, CI 0.15 – 59.59, and p = 0.48. Among women, there were 7 first use exposures among 383 female HS patients, and 4 first use exposures among 750 controls, with an adjusted OR of 3.13, CI 0.86 – 11.46, p = 0.08.

The small numbers of actual first exposure events speak loudly for the inconclusiveness and fragility of the study results, and the sensitivity of the results to any methodological deviations or irregularities. Of course, for the one “suggested” association for appetite suppressant use among women, the results were even more fragile. None of the appetite suppressant cases were “first use,” which raises serious questions about whether anything meaningful was measured. There were six (non-first use) exposed among 383 female HS patients, with only a single exposed female control among 750. The authors presented an adjusted OR of 15.58, with a p-value of 0.02. The CI, however, spanned more than two orders of magnitude, 1.51 – 182.21, which makes the result well-nigh uninterpretable. One of the six appetite suppressant cases was also a user of cough-cold remedies, and she was double counted in the study’s analyses. This double-counted case had a body-mass index of 19, which is certainly not overweight, and is at the low end of normal.[14] The one appetite suppressant control was obese.
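As a rough check on the fragility of these numbers, one can recompute a crude, unadjusted odds ratio and Wald confidence interval from the reported counts; the sketch below is illustrative only, since the published 15.58 is a covariate-adjusted figure:

    import math

    # Crude odds ratio and Wald 95% CI recomputed from the counts described above:
    # 6 exposed cases of 383 female HS cases, 1 exposed control of 750 female
    # controls. The point is the enormous width of the interval, driven almost
    # entirely by the single exposed control.
    a, b = 6, 383 - 6       # exposed cases, unexposed cases
    c, d = 1, 750 - 1       # exposed controls, unexposed controls

    crude_or = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(crude_or) - 1.96 * se_log_or)
    upper = math.exp(math.log(crude_or) + 1.96 * se_log_or)
    print(f"crude OR = {crude_or:.1f}, 95% CI {lower:.1f} to {upper:.1f}")
    # roughly: crude OR = 11.9, 95% CI 1.4 to 99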

For the more expansive “any use” analysis of PPA cough-cold medication, the results were significantly unimpressive. There were six exposed male cases among 391 male HS cases, and 13 exposed controls, for an adjusted odds ratio of 0.62, CI 0.20 – 1.92, p = 0.41. Although not a statistically significant inverse association, the sample results for men were incompatible with a hypothetical doubling of risk. For women, on the expansive exposure definition, there were 16 exposed cases among 383 female cases, with 19 exposed controls out of 750 female controls. The odds ratio for female use of PPA cough-cold medication was 1.54, CI 0.76 – 3.14, p = 0.23.

Aside from doubts about whether the HSP measured meaningful exposures, the small number of exposed cases and controls presents insuperable interpretative difficulties for the study. First, working with a case-control design and odds ratios, there should be some acknowledgment that odds ratios always exaggerate the observed association size compared with a relative risk.[15] Second, the authors knew that confounding would be an important consideration in evaluating any observed association. Known and suspected risk factors were consistently more prevalent among cases than controls.[16]
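On the first point, a small worked illustration with hypothetical risks:

    # Hypothetical risks, for illustration only: an exposure that doubles a 10%
    # baseline risk yields a relative risk of 2.0 but an odds ratio of 2.25; the
    # two measures converge only when the outcome is rare.
    p0, p1 = 0.10, 0.20                               # baseline and exposed risks
    relative_risk = p1 / p0                           # 2.0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))    # 2.25
    print(relative_risk, odds_ratio)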

The HSP authors valiantly attempted to control for confounding in two ways. They selected controls by a technique known as random digit dialing, to find two controls for each case, matched on telephone exchange, sex, age, and race. The HSP authors, however, used imperfectly matched controls rather than lose the corresponding case from their study.[17] For other co-variates, the authors used multi-variate logistic regression to provide odds ratios that were adjusted for potential confounding from the measured covariates. At least two of the co-variates, alcohol and cocaine use, involved potential legal or moral judgment in a study population under age 50, which almost certainly would have skewed interview results.

An even more important threat to methodological validity was that key co-variates, such as smoking, alcohol use, hypertension, and cocaine use, were incorporated into the adjustment regression as dichotomous variables; body mass index was entered as a polychotomous variable. Monte Carlo simulation shows that categorizing a continuous variable in logistic regression inflates the rate of false positive associations.[18] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. Numerous authors have warned of the cost and danger of dichotomizing continuous variables, in losing information, statistical power, and reliability.[19] In the field of pharmaco-epidemiology, the bias created by dichotomization of a continuous variable is harmful from both the perspective of statistical estimation and hypothesis testing.[20] Readers will be misled into believing that a study has adjusted for important co-variates by the false allure of a fully adjusted model.
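The simulation result described above can be sketched in outline with numpy and statsmodels, using assumed parameters rather than the HSP data:

    import numpy as np
    import statsmodels.api as sm

    # Monte Carlo sketch (assumed parameters, not the HSP data): the outcome depends
    # only on a continuous confounder z; the exposure x is correlated with z but has
    # no effect of its own. Adjusting for a dichotomized z leaves residual
    # confounding, so the truly null exposure is declared "significant" far more
    # often than the nominal 5% rate.
    rng = np.random.default_rng(0)
    n, sims, alpha = 2000, 200, 0.05
    hits_dichotomized = hits_continuous = 0
    for _ in range(sims):
        z = rng.normal(size=n)
        x = (z + rng.normal(size=n) > 0).astype(float)   # exposure correlated with z
        p = 1.0 / (1.0 + np.exp(-(-2.0 + z)))            # outcome risk depends on z only
        y = rng.binomial(1, p)
        for dichotomize in (True, False):
            cov = (z > 0).astype(float) if dichotomize else z
            design = sm.add_constant(np.column_stack([x, cov]))
            fit = sm.Logit(y, design).fit(disp=0)
            if fit.pvalues[1] < alpha:                   # p-value for the null exposure x
                if dichotomize:
                    hits_dichotomized += 1
                else:
                    hits_continuous += 1
    print("false-positive rate, dichotomized z:", hits_dichotomized / sims)
    print("false-positive rate, continuous z:  ", hits_continuous / sims)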

Finally, with respect to the use of logistic regression to control confounding and provide adjusted odds ratios, there is the problem of the small number of events. Although the overall sample size is adequate for logistic regression, cell sizes of one, or two, or three, raise serious questions about the use of large-sample statistical methods for analysis of the HSP results.[21]

A Surfeit of Sub-Groups

The study protocol identified three (really four or five) specific goals, to estimate the associations: (1) between PPA use and HS; (2) between HS and type of PPA use (cough-cold remedy or appetite suppression); and (3) in women, between PPA appetite suppressant use and HS, and between PPA first use and HS.[22]

With two different definitions of “exposure,” and some modifications added along the way, with two sexes, two different indications (cold remedy and appetite suppression), and with non-pre-specified analyses such as men’s cough-cold PPA use, there was ample opportunity to inflate the Type I error rate. As the authors of the HSP final report acknowledged, they were able to identify only 60 “exposed” cases and controls.[23] In the context of a large case-control study, the authors were able to identify some nominally statistically significant outcomes (PPA appetite suppressant and HS), but these were based upon very small numbers (six and one exposed, cases and controls, respectively), which made the results very uncertain considering the potential biases and confounding.

Design and Implementation Problems

Case-control studies always present some difficulty of obtaining controls that are similar to cases except that they did not experience the outcome of interest. As noted, controls were selected using “random digit dialing” in the same area code as the cases. The investigators were troubled by poor response rates from potential controls. They deviated from standard methodology for enrolling controls through random digit dialing by enrolling the first eligible control who agreed to participate, while failing to call back candidates who had asked to speak at another time.[24]

The exposure prevalence rate among controls was considerably lower than PPA-product marketing research would have predicted, which raises questions about underreporting of exposure among controls; underestimated exposure among controls would inflate any observed odds ratios. Of course, it seems eminently reasonable to predict that persons who were suffering from head colds or the flu might not answer their phones or might request a call back. People who are obese might be reluctant to tell a stranger on the telephone that they are using a medication to suppress their appetite.

In the face of this obvious opportunity for selection bias, there was also ample room for recall bias. Cases were asked about medication use just before an unforgettable catastrophic event in their lives. Controls were asked about medication use before a day within the range of the previous week. More controls were interviewed by phone than were cases. Given the small number of exposed cases and controls, recall bias created by the differential circumstances, interview settings, and procedures was never excluded.

Lumpen Epidemiology: ICH vs. SAH

Every epidemiologic study or clinical trial has an exposure and outcome of interest, in a population of interest. The point is to compare exposed and unexposed persons, of relevant age, gender, and background, with comparable risk factors other than the exposure of interest, to determine if the exposure makes any difference in the rate of events of the outcome of interest.

Composite end points represent “lumping” together different individual end points for consideration as a single outcome. The validity of composite end points depends upon assumptions, which will have to be made at the time investigators design their study and write their protocol.  After the data are collected and analyzed, the assumptions may or may not be supported.

Lumping may offer some methodological benefits, such as increasing statistical power or reducing sample size requirements. Standard epidemiologic practice, however, as reflected in numerous textbooks and methodology articles, requires the reporting of the individual constitutive end points, along with the composite result. Even when the composite end point was employed based upon a view that the component end points are sufficiently related, that view must itself ultimately be tested by showing that the individual end points are, in fact, concordant, with risk ratios in the same direction.

The medical literature contains many clear statements cautioning consumers of medical studies against misleading claims based upon composite end points. In 2004, the British Medical Journal published a useful paper, “Users’ guide to detecting misleading claims in clinical research reports.” One of the authors’ suggestions to readers was:

“Beware of composite endpoints.”[25]

The one methodological point on which virtually all writers agree is that authors should report the results for the component end points separately, to permit readers to evaluate the individual results.[26] A leading biostatistical methodologist, the late Douglas Altman, cautioned readers against assuming that the overall estimate of association can be interpreted for each individual end point, and advised authors to provide “[a] clear listing of the individual endpoints and the number of participants experiencing them” to permit a more meaningful interpretation of composite outcomes.[27]

The HSP authors used a composite of hemorrhagic strokes, which was composed of both intracerebral hemorrhages (ICH) and subarachnoid hemorrhages (SAH). In their New England Journal of Medicine article, the authors presented the composite end point, but not the risk ratios for the two individual end points. Before they published the article, one of the authors wrote his fellow authors to advise them that because ICH and SAH are very different medical phenomena, they should present the individual end points in their analysis.[28]
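A toy illustration, with made-up counts rather than the HSP data, shows how a composite can conceal discordant components:

    # Made-up counts, for illustration only; each count pair is (exposed, unexposed).
    # The two component end points run in opposite directions, but the composite
    # odds ratio lands in between and hides the discordance.
    controls = (18, 582)      # shared control group
    ich_cases = (12, 188)     # component A: elevated odds ratio (~2.1)
    sah_cases = (4, 196)      # component B: inverse odds ratio (~0.7)

    def odds_ratio(cases, controls):
        a, b = cases
        c, d = controls
        return (a * d) / (b * c)

    composite_cases = (ich_cases[0] + sah_cases[0], ich_cases[1] + sah_cases[1])
    for label, cases in (("ICH", ich_cases), ("SAH", sah_cases), ("ICH + SAH composite", composite_cases)):
        print(label, round(odds_ratio(cases, controls), 2))
    # the composite comes out around 1.35, telling the reader little about either component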

The HSP researchers eventually did publish an analysis of SAH and PPA use.[29] The authors identified 425 SAH cases, of which 312 met the criteria for aneurysmal SAH. They looked at many potential risk factors such as smoking (OR = 5.07), family history (OR = 3.1), marijuana (OR = 2.38), cocaine (OR = 24.97), hypertension (OR = 2.39), aspirin (OR = 1.24), alcohol (OR = 2.95), education, as well as PPA.

Only a bivariate analysis was presented for PPA, with an odds ratio of 1.15, p = 0.87. No confidence intervals were presented. The authors were a bit more forthcoming about the potential role of bias and confounding in this publication than they were in their earlier 2000 HSP paper. “Biases that might have affected this analysis of the HSP include selection and recall bias.”[30]

Judge Rothstein’s Rule 702 opinion reports that the “Defendants assert that this article demonstrates the lack of an association between PPA and SAHs resulting from the rupture of an aneurysm.”[31] If the defendants actually claimed a “demonstration” of “the lack of association,” then shame, and more shame, on them! First, the cited study provided only a bivariate analysis for PPA and SAH. The odds ratio of 1.15 pales in comparison with the risk ratios reported for many other common exposures. We can only speculate what happens to the 1.15 when the PPA exposure is placed in a fully adjusted model with all important covariates. Second, the p-value of 0.87 does not tell us that the 1.15 is unreal or due to chance. The HSP researchers reported a 15% increase in the odds ratio, a result very compatible with no increased risk at all. Perhaps if the defendants had been more modest in their characterization, they would not have given the court the basis to find that “defendants distort and misinterpret the Stroke Article.”[32]

Rejecting the defendants’ characterization, the court drew upon an affidavit from plaintiffs’ expert witness, Kenneth Rothman, who explained that a p-value cannot provide evidence of lack of an effect.[33] A high p-value, with its corresponding 95% confidence interval that includes 1.0, can, however, show that the sample data are compatible with the null hypothesis. What Judge Rothstein missed, and the defendants may not have said effectively, is that the statistical analysis was a test of a hypothesis, and the test failed to allow us to reject the null hypothesis. The plaintiffs were left with an indeterminate analysis, from which they really could not honestly claim an association between PPA use and aneurysmal SAH.
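
A minimal sketch, with invented 2x2 counts (the published SAH analysis reported only the odds ratio and p-value, with no cell counts or confidence interval), shows why a high p-value marks an indeterminate result rather than proof of no effect:

```python
# Hypothetical counts chosen only to yield an odds ratio near 1.15 with a high
# p-value; not the published data.
import math
from statistics import NormalDist

# exposed cases, exposed controls, unexposed cases, unexposed controls
a, b, c, d = 6, 7, 306, 409

log_or = math.log((a * d) / (b * c))
se = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Wald standard error of ln(OR)
z = log_or / se
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
ci_low = math.exp(log_or - 1.96 * se)
ci_high = math.exp(log_or + 1.96 * se)

print(f"OR = {math.exp(log_or):.2f}, two-sided p = {p_two_sided:.2f}")
print(f"95% CI: {ci_low:.2f} to {ci_high:.2f}")
# The interval spans 1.0 (compatible with no effect) and also spans 2.0
# (compatible with a doubled risk): the data are indeterminate either way.
```

On these hypothetical counts, the 95% interval runs from well below 1.0 to well above 2.0; the data are compatible both with no effect and with a doubled risk, which is the very definition of an indeterminate result.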

I Once Was Blind, But Now I See

The HSP protocol called for interviewers to be blinded to the study hypothesis, but this guard against bias was abandoned.[34]  The HSP report acknowledged that “[b]linding would have provided extra protection against unequal ascertainment of PPA exposure in case subjects compared with control subjects.”[35]

The study was conducted out of four sites, and at least one of the sites violated protocol by informing cases that they were participating in a study designed to evaluate PPA and HS.[36] The published article in the New England Journal of Medicine misleadingly claimed that study participants were blinded to its research hypothesis.[37] Although the plaintiffs’ expert witnesses tried to slough off this criticism, the lack of blinding among interviewers and study subjects amplifies recall biases, especially when study subjects and interviewers may have been reluctant to discuss fully several of the co-variate exposures, such as cocaine, marijuana, and alcohol use.[38]

No Causation At All

Scientists and the general population alike have been conditioned to view the controversy over tobacco smoking and lung cancer as a contrivance of the tobacco industry. What is lost in this conditioning is the context of Sir Austin Bradford Hill’s triumphant 1965 Royal Society of Medicine presidential address. Hill, along with his colleague Sir Richard Doll, was not overly concerned with the tobacco industry, but rather with the important methodological criticisms posited by three leading statistical scientists, Joseph Berkson, Jerzy Neyman, and Sir Ronald Fisher. Hill and Doll’s success in showing that tobacco smoking causes lung cancer required a sufficient rebuttal of these critics. The 1965 speech is often cited for its articulation of nine factors to consider in evaluating an association, but the necessary condition is often overlooked. In his speech, Hill identified the situation before the nine factors come into play:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”[39]

The starting point, before the Bradford Hill nine factors come into play, requires a “clear-cut” association, which is “beyond what we would care to attribute to the play of chance.” What is a “clear-cut” association? The most reasonable interpretation of Bradford Hill is that the starting point is an association that is not the result of chance, bias, or confounding.

Looking at the state of the science after the HSP was published, there were two studies that failed to find any association between PPA and HS. The HSP authors “suggested” an association between PPA appetite suppressants and HS, but with six cases and one control, this was hardly beyond the play of chance. And none of the putative associations was “clear-cut” in removing bias and confounding as explanations for the observations.

And Then Litigation Cometh

A tsunami of state and federal cases followed the publication of the HSP study.[40] The Judicial Panel on Multi-district Litigation gave Judge Barbara Rothstein, in the Western District of Washington, responsibility for the pre-trial management of the federal PPA cases. Given the problems with the HSP, the defense unsurprisingly lodged Rule 702 challenges to plaintiffs’ expert witnesses’ opinions, and Rule 703 challenges to reliance upon the HSP.[41]

In June 2003, Judge Rothstein issued her decision on the defense motions. After reviewing a selective regulatory history of PPA, the court turned to epidemiology, and its statistical analysis.  Although misunderstanding of p-values and confidence intervals is endemic among the judiciary, the descriptions provided by Judge Rothstein portended a poor outcome:

“P-values measure the probability that the reported association was due to chance, while confidence intervals indicate the range of values within which the true odds ratio is likely to fall.”[42]

Both descriptions are seriously incorrect,[43] which is especially concerning given that Judge Rothstein would go on, in 2003, to become the director of the Federal Judicial Center, where she would oversee work on the Reference Manual on Scientific Evidence.
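
To see why both descriptions miss the mark, consider a short simulation sketch (illustrative only, not the HSP data): the 95% figure describes how often intervals constructed this way cover a fixed true value over repeated sampling, not the probability that the true odds ratio sits inside any one computed interval, and the p-value is not the probability that the reported association was due to chance.

```python
# Illustration only: simulate many case-control studies in which the true odds
# ratio is exactly 1.0 (no association), and check how the usual 95% confidence
# intervals and p-values behave.
import math, random
from statistics import NormalDist

random.seed(1)
true_or = 1.0                          # the null hypothesis is true by construction
n_cases = n_controls = 500
p_exposure = 0.10                      # same exposure prevalence in cases and controls
z95 = NormalDist().inv_cdf(0.975)

covered = significant = 0
n_sims = 2000
for _ in range(n_sims):
    a = sum(random.random() < p_exposure for _ in range(n_cases))     # exposed cases
    b = sum(random.random() < p_exposure for _ in range(n_controls))  # exposed controls
    c, d = n_cases - a, n_controls - b                                # unexposed
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    covered += math.exp(log_or - z95 * se) <= true_or <= math.exp(log_or + z95 * se)
    significant += 2 * (1 - NormalDist().cdf(abs(log_or / se))) < 0.05

print(f"Intervals covering the true OR: {covered / n_sims:.1%}")               # ~95%
print(f"'Significant' results despite a true null: {significant / n_sims:.1%}")  # ~5%
# The 95% attaches to the procedure over repeated samples, not to any single
# computed interval; and a low p-value is not the probability of chance.
```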

The MDL court also managed to make a hash of the one-tailed test used in the HSP report. That report was designed to inform regulatory action, where actual conclusions of causation are not necessary. When the HSP authors submitted their paper to the New England Journal of Medicine, they of course had to comply with the standards of that journal, and they doubled their reported p-values to comply with the journal’s requirement of a two-tailed test. Some key results of the HSP no longer had p-values below 5 percent, as the defense was keen to point out in its briefings.

From the sources it cited, the court clearly did not understand the issue, which was the need to control for random error. The court declared that it had found:

“that the HSP’s one-tailed statistical analysis complies with proper scientific methodology, and concludes that the difference in the expression of the HSP’s findings [and in the published article] falls far short of impugning the study’s reliability.”[44]

This finding ignores the very different contexts between regulatory action and causation in civil litigation. The court’s citation to an early version of the Reference Manual on Scientific Evidence further illustrates its confusion:

“Since most investigators of toxic substances are only interested in whether the agent increases the incidence of disease (as distinguished from providing protection from the disease), a one-tailed test is often viewed as appropriate.”

*****

“a rigid rule [requiring a two-tailed test] is not required if p-values and significance levels are used as clues rather than as mechanical rules for statistical proof.”[45]

In a sense, given the prevalence of advocacy epidemiology, many researchers are interested only in showing an increased risk. Nonetheless, the point of evaluating p-values is to assess the random error involved in sampling a population, and that sampling generates a rate of error even when the null hypothesis is assumed to be absolutely correct. Random error can go in either direction, resulting in risk ratios above or below 1.0. Indeed, the probability of observing a risk ratio of exactly 1.0, in a large study, is incredibly small even if the null hypothesis is correct. The risk ratio for men who had used a PPA product was below 1.0, which also counsels in favor of a two-tailed test. Trading on the confusion between regulatory and litigation findings, the court proceeded to mischaracterize the parties’ interests in designing the HSP as limited to whether PPA increased the risk of stroke. In the MDL, the parties did not want “clues,” or help on what FDA policy should be; they wanted a test of the causal hypothesis.
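
The arithmetic of the dispute is simple, as the following sketch shows; the odds ratio and standard error are hypothetical, chosen only to illustrate how doubling a one-tailed p-value can move a result across the conventional 0.05 line:

```python
# Hypothetical inputs, not the HSP's actual numbers: the same data yield a
# one-tailed p-value below 0.05 but a two-tailed p-value above it.
import math
from statistics import NormalDist

def p_values(odds_ratio, se_log_or):
    z = math.log(odds_ratio) / se_log_or
    one_sided = 1 - NormalDist().cdf(z)              # tests only "risk is increased"
    two_sided = 2 * (1 - NormalDist().cdf(abs(z)))   # allows error in either direction
    return one_sided, two_sided

one, two = p_values(odds_ratio=1.49, se_log_or=0.23)   # illustrative values
print(f"one-tailed p = {one:.3f}, two-tailed p = {two:.3f}")
# roughly 0.04 one-tailed, but about 0.08 two-tailed
```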

In a footnote, the court pointed to testimony of Dr. Ralph Horwitz, one of the HSP investigators, who stated that “[a]ll parties involved in designing the HSP were interested solely in testing whether PPA increased the risk of stroke.” The parties, of course, were not designing the HSP to support litigation claims.[46] The court also cited, in this footnote, a then recent case that found a one-tailed p-value inappropriate “where that analysis assumed the very fact in dispute.” The plaintiffs’ reliance upon the one-sided p-values in the unpublished HSP report did exactly that.[47] The court tried to excuse the failure to rule out random error by pointing to language in the published HSP article, where the authors stated that inconclusive findings raised “concern regarding safety.”[48]

In analyzing the defense challenge to the opinions based upon the HSP, Judge Rothstein committed both legal and logical fallacies. First, citing Professor David Faigman’s treatise for the proposition that epidemiology is widely accepted because the “general techniques are valid,” the court found that the HSP, and reliance upon it, was valid, despite the identified problems. The issue was not whether epidemiological techniques are valid, but whether the techniques used in the HSP were valid. The devilish details of the HSP in particular largely went ignored.[49] From a legal perspective, Judge Rothstein’s opinion can be seen to place a burden upon the defense to show invalidity, by invoking a presumption of validity. This shifting of the burden was then, and is now, contrary to the law.

Perhaps the most obvious dodge of the court’s gatekeeping responsibility came with the conclusory assertion that the “Defendants’ ex post facto dissection of the HSP fails to undermine its reliability. Scientific studies almost invariably contain flaws.”[50] Perhaps it is sobering to consider that all human beings have flaws, and yet somehow we distinguish between sinners and saints, and between criminals and heroes. The court shirked its responsibility to look at the identified flaws to determine whether they threatened the HSP’s internal validity, as well as its external validity for the plaintiffs’ claims of hemorrhagic strokes in each of the many subgroups considered in the HSP, and for outcomes not considered at all, such as myocardial infarction and ischemic stroke. Given that there was but one key epidemiologic study relied upon to support the plaintiffs’ extravagant causal claims, the identified flaws might be expected to lead to some epistemic humility.

The PPA MDL court exhibited a willingness to cherry-pick HSP results to support its low-grade gatekeeping. For instance, the court recited that “[b]ecause no men reported use of appetite suppressants and only two reported first use of a PPA-containing product, the investigators could not determine whether PPA posed an increased risk for hemorrhagic stroke in men.”[51] There was, of course, another definition of PPA exposure that yielded a total of 19 exposed men, about one-third of all exposed cases and controls. All of the exposed men had used OTC PPA cough-cold remedies: six were HS cases and 13 were controls, with a reported odds ratio of 0.62 (95% C.I., 0.20–1.92; p = 0.41). Although the result for men was not statistically significant, the point estimate for the sample was a risk ratio below one, with a confidence interval that excludes a doubling of the risk based upon this sample statistic. The number of male HS exposed cases was the same as the number of female HS appetite-suppressant cases, which somehow did not disturb the court.

Superficially, the PPA MDL court appeared to place great weight on the fact of peer-reviewed publication in a prestigious journal, by well-credentialed scientists and clinicians. In the court’s words, “[t]he prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.”[52] Although Professor Susan Haack’s writings on law and science are often errant, her analysis of this kind of blind reliance on peer review is noteworthy:

“though peer-reviewed publication is now standard practice at scientific and medical journals, I doubt that many working scientists imagine that the fact that a work has been accepted for publication after peer review is any guarantee that it is good stuff, or that it’s not having been published necessarily undermines its value. The legal system, however, has come to invest considerable epistemic confidence in peer-reviewed publication  — perhaps for no better reason than that the law reviews are not peer-reviewed!”[53]

Ultimately, the PPA MDL court revealed that it was quite inattentive to the validity concerns of the HSP. Among the cases filed in the federal court were heart attack and ischemic stroke claims.  The HSP did not address those claims, and the MDL court was perfectly willing to green light the claims on the basis of case reports and expert witness hand waving about “plausibility.”  Not only was this reliance upon case reports plus biological plausibility against the weight of legal authority, it was against the weight of scientific opinion, as expressed by the HSP authors themselves:

“Although the case reports called attention to a possible association between the use of phenylpropanolamine and the risk of hemorrhagic stroke, the absence of control subjects meant that these studies could not produce evidence that meets the usual criteria for valid scientific inference”[54]

If no epidemiology at all was necessary for the ischemic stroke and myocardial infarction claims, then a deeply flawed epidemiologic study was even better than nothing. And peer review and prestige were merely window dressing.

The HSP study was subjected to much greater analysis in actual trial litigation.  Before the MDL court concluded its abridged gatekeeping, the defense successfully sought the underlying data to the HSP. Plaintiffs’ counsel and the Yale investigators resisted and filed motions to quash the defense subpoenas. The MDL court denied the motions and required the parties to collaborate on redaction of medical records to be produced.[55]

In a law review article published a few years after the PPA Rule 702 decision, Judge Rothstein immodestly described the PPA MDL as a “model mass tort,” and without irony characterized herself as having taken “an aggressive role in determining the admissibility of scientific evidence [].”[56]

The MDL court’s PPA decision stands as a landmark of judicial incuriousness and credulity. The court conducted hearings and entertained extensive briefings on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon one epidemiologic study, the Yale Hemorrhagic Stroke Project (HSP). In the end, publication in a prestigious peer-reviewed journal proved to be a proxy for independent review and an excuse not to exercise critical judgment: “The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.” Id. at 1239 (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science). The admissibility challenges were denied.

Exuberant Praise for Judge Rothstein

In 2009, at an American Law Institute – American Bar Association continuing legal education seminar on expert witnesses and environmental litigation, Anthony Roisman presented on “Daubert & Its Progeny – Finding & Selecting Experts – Direct & Cross-Examination.” Roisman has been active in various plaintiff advocacy organizations, including serving as the head of the American Trial Lawyers’ Association Section on Toxic, Environmental & Pharmaceutical Torts (STEP). In his 2009 lecture, Roisman praised Rothstein’s PPA Rule 702 decision as “the way Daubert should be interpreted.” More concerning was Roisman’s revelation that Judge Rothstein wrote the PPA decision “fresh from a seminar conducted by the Tellus Institute, which is an organization set up of scientists to try to bring some common sense to the courts’ interpretation of science, which is what is going on in a Daubert case.”[57]

Roisman’s endorsement of the PPA decision may have been purely result-oriented jurisprudence, but what of his enthusiasm for the “learning” that Judge Rothstein received fresh from the Tellus Institute? What exactly is, or was, the Tellus Institute?

In June 2003, the same month as Judge Rothstein’s PPA decision, the Tellus Institute supported a group known as Scientific Knowledge and Public Policy (SKAPP), in publishing an attack on the Daubert decision. The Tellus-SKAPP paper, “Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of,” appeared online in 2003.[58]

David Michaels, a plaintiffs’ expert in chemical exposure cases, and a founder of SKAPP, has typically described his organization as having been funded by the Common Benefit Trust, “a fund established pursuant to a court order in the Silicone Gel Breast Implant Liability litigation.”[59] What Michaels hides is that this “Trust” is nothing other than the common benefits fund set up in MDL 926, as it is for most MDLs, to permit plaintiffs’ counsel to retain and present expert witnesses in the common proceedings. In other words, it was the plaintiffs’ lawyers’ walking-around money. The Tellus Institute, SKAPP’s sister organization, was clearly aligned with SKAPP. Alas, Richard Clapp, who was a testifying expert witness for PPA plaintiffs, was an active member of the Tellus Institute at the time of the judicial educational seminar for Judge Rothstein.[60] Clapp is listed as a member of the planning committee responsible for preparing the anti-Daubert pamphlet. In 2005, as director of the Federal Judicial Center, Judge Rothstein attended another conference, the “Coronado Conference,” which was sponsored by SKAPP.[61]

Roisman’s revelation in 2009, after the dust had settled on the PPA litigation, may well put Judge Rothstein in the same category as Judge James Kelly, against whom the U.S. Court of Appeals for the Third Circuit issued a writ of mandamus for recusal. Judge Kelly was invited to attend a conference on asbestos medical issues, set up by Dr. Irving Selikoff with scientists who testified for plaintiffs’ counsel. The conference was funded by plaintiffs’ counsel. The co-conspirators, Selikoff and plaintiffs’ counsel, paid for Judge Kelly’s transportation and lodgings, without revealing the source of the funding.[62]

In the case of Selikoff and Motley’s effort to subvert the neutrality of Judge James M. Kelly in the school district asbestos litigation, and pervert the course of justice, the conspiracy was detected in time for a successful recusal effort. In the PPA litigation, there was no disclosure of the efforts by the anti-Daubert advocacy group, the Tellus Institute, to undermine the neutrality of a federal judge. 

Aftermath of Failed MDL Gatekeeping

Ultimately, the HSP study received much more careful analysis before juries. Although the cases that went to trial involved plaintiffs with catastrophic injuries, and a high-profile article in the New England Journal of Medicine, the jury verdicts were overwhelmingly in favor of the defense.[63]

In the first case that went to trial (but second to verdict), the defense presented a thorough scientific critique of the HSP. The underlying data and medical records that had been produced in response to a Rule 45 subpoena in the MDL allowed juries to see that the study investigators had deviated from the protocol in ways that increased the number of exposed cases, with the obvious result of increasing the odds ratios reported. Juries were ultimately much more curious about evidence and testimony on reclassifications of exposure that drove up the odds ratios for PPA use than they were about the performance of logistic regressions.

The HSP investigators were well aware of the potential for medication use to occur after the onset of stroke symptoms (headache), which may have sent a person to the medicine chest for an OTC cold remedy. Case 71-0039 was just such a case, as shown by the medical records and the HSP investigators’ initial classification of the case. On dubious grounds, however, the study reclassified the time of stroke onset to after the PPA-medication use, a change the investigators knew increased their chances of finding an association.

The reclassification of Case 20-0092 was even more egregious. The patient was originally diagnosed as having experienced a transient ischemic attack (TIA), after a CT of the head showed no bleed; at that point, Case 20-0092 was not a case. For the TIA, the patient was given heparin, an appropriate therapy but one that is known to cause bleeding. The following day, an MRI of the head revealed an HS. The HSP nonetheless classified Case 20-0092 as a case.

In Case 18-0025, the patient experienced a headache in the morning, and took a PPA-medication (Contac) for relief. The stroke was already underway when the Contac was taken, but the HSP reversed the order of events.

Case 62-0094 presented an interesting medical history that included an event no one in the HSP considered including in the interview protocol. In addition to a history of heavy smoking, alcohol, cocaine, heroin, and marijuana use, and a history of seizure disorder, Case 62-0094 suffered a traumatic head injury immediately before developing a SAH. Treating physicians ascribed the SAH to traumatic injury, but understandably there were no controls that were identified with similar head injury within the exposure period.

Each side of the PPA litigation accused the other of “hacking at the A cell,” but juries seemed to understand that the hacking had started before the paper was published.

In a case involving two plaintiffs, in Los Angeles, where the jury heard the details of how the HSP cases were analyzed, the jury returned two defense verdicts. In post-trial motions, plaintiffs’ counsel challenged the defendant’s reliance upon underlying data in the HSP, which went behind the peer-reviewed publication, and which showed that the peer review had failed to prevent serious errors. In essence, the plaintiffs’ counsel claimed that the defense’s scrutiny of the underlying data and of the investigators’ misclassifications was itself not a “generally accepted” method, and thus inadmissible. The trial court rejected the plaintiffs’ claim and their request for a new trial, and spoke to the value of looking past the superficial imprimatur of peer review of the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers.

********

Yale gets — Yale gets a big black eye on this.”[64]

Epidemiologist Charles Hennekens, who had been a consultant to PPA-medication manufacturers, published a critique of the HSP study in 2006. The Hennekens critique included many of the criticisms lodged by Hennekens himself, as well as by epidemiologists Lewis Kuller, Noel Weiss, and Brian Strom, at an October 2000 FDA meeting, before the HSP was published. Richard Clapp, Tellus Institute activist and expert witness for PPA plaintiffs, and Michael Williams, lawyer for PPA claimants, wrote a letter criticizing Hennekens.[65] David Michaels, an expert witness for plaintiffs in other chemical exposure cases, and a founder of SKAPP, which collaborated with the Tellus Institute on its anti-Daubert campaign, wrote a letter accusing Hennekens of “mercenary epidemiology,” for engaging in re-analysis of a published study. Michaels never complained about the litigation-inspired re-analyses put forward by plaintiffs’ witnesses in the Bendectin litigation. Plaintiffs’ lawyers and their expert witnesses had much to gain by starting the litigation and trying to expand its reach. Defense lawyers and their expert witnesses effectively put themselves out of business by shutting it down.[66]


[1] Rachel Gorodetsky, “Phenylpropanolamine,” in Philip Wexler, ed., 7 Encyclopedia of Toxicology 559 (4th ed. 2024).

[2] Hershel Jick, Pamela Aselton, and Judith R. Hunter,  “Phenylpropanolamine and Cerebral Hemorrhage,” 323 Lancet 1017 (1984).

[3] Robert R. O’Neill & Stephen W. Van de Carr, “A Case-Control Study of Adrenergic  Decongestants and Hemorrhagic CVA Using a Medicaid Data Base” m.s. (1985).

[4] Raymond Lipicky, Center for Drug Evaluation and Research, PPA, Safety Summary at 29 (Aug. 9, 1900).

[5] Center for Drug Evaluation and Research, US Food and Drug Administration, “Epidemiologic Review of Phenylpropanolamine Safety Issues” (April 30, 1991).

[6] Ralph I. Horwitz, Lawrence M. Brass, Walter N. Kernan, Catherine M. Viscoli, “Phenylpropanolamine & Risk of Hemorrhagic Stroke – Final Report of the Hemorrhagic Stroke Project” (May 10, 2000).

[7] Id. at 3, 26.

[8] Lois La Grenade & Parivash Nourjah, “Review of study protocol, final study report and raw data regarding the incidence of hemorrhagic stroke associated with the use of phenylopropanolamine,” Division of Drug Risk Assessment, Office of Post-Marketing Drug Risk Assessment (OPDRA) (Sept. 27, 2000). These authors concluded that the HSP report provided “compelling evidence of increased risk of hemorrhagic stroke in young people who use PPA-containing appetite suppressants. This finding, taken in association with evidence provided by spontaneous reports and case reports published in the medical literature leads us to recommend that these products should no longer be available for over the counter use.”

[9] Among those who voiced criticisms of the design, methods, and interpretation of the HSP study were Noel Weiss, Lewis Kuller, Brian Strom, and Janet Daling. Many of the criticisms would prove to be understated in the light of post-publication review.

[10] Walter N. Kernan, Catherine M. Viscoli, Lawrence M. Brass, J.P. Broderick, T. Brott, and Edward Feldmann, “Phenylpropanolamine and the risk of hemorrhagic stroke,” 343 New Engl. J. Med. 1826 (2000) [cited as Kernan]

[11] Kernan, supra note 10, at 1826 (emphasis added).

[12] David Harrington, Ralph B. D’Agostino, Sr., Constantine Gatsonis, Joseph W. Hogan, David J. Hunter, Sharon-Lise T. Normand, Jeffrey M. Drazen, and Mary Beth Hamel, “New Guidelines for Statistical Reporting in the Journal,” 381 New Engl. J. Med. 285 (2019).

[13] Kernan, supra note 10, at 1827.

[14] Transcript of Meeting on Safety Issues of Phenylpropanolamine (PPA) in Over-the-Counter Drug Products 117 (Oct. 19, 2000).

[15] See, e.g., Huw Talfryn Oakley Davies, Iain Kinloch Crombie, and Manouche Tavakoli, “When can odds ratios mislead?” 316 Brit. Med. J. 989 (1998); Thomas F. Monaghan, Rahman, Christina W. Agudelo, Alan J. Wein, Jason M. Lazar, Karel Everaert, and Roger R. Dmochowski, “Foundational Statistical Principles in Medical Research: A Tutorial on Odds Ratios, Relative Risk, Absolute Risk, and Number Needed to Treat,” 18 Internat’l J. Envt’l Research & Public Health 5669 (2021).

[16] Kernan, supra note 10, at 1829, Table 2.

[17] Kernan, supra note 10, at 1827.

[18] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[19] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M Cumberland, Gabriela Czanner, Catey Bunce, Caroline J Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014).

[20] Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009).

[21] Peter Peduzzi, John Concato, Elizabeth Kemper, Theodore R. Holford, and Alvan R. Feinstein, “A simulation study of the number of events per variable in logistic regression analysis?” 49 J. Clin. Epidem. 1373 (1996).

[22] HSP Final Report at 5.

[23] HSP Final Report at 26.

[24] Byron G. Stier & Charles H. Hennekens, “Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project: A Reappraisal in the Context of Science, the Food and Drug Administration, and the Law,” 16 Ann. Epidem. 49, 50 (2006) [cited as Stier & Hennekens].

[25] Victor M. Montori, Roman Jaeschke, Holger J. Schünemann, Mohit Bhandari, Jan L Brozek, P. J. Devereaux, and Gordon H. Guyatt, “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004). 

[26] Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 1840 (2d ed. 2014) (47.5.8 Use of Composite Endpoints); Stuart J. Pocock, John J. V. McMurray, and Tim J. Collier, “Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials,” 66 J. Am. Coll. Cardiol. 2648, 2650-51 (2015) (“Interpret composite endpoints carefully.”); Schulz & Grimes, “Multiplicity in randomized trials I:  endpoints and treatments,” 365 Lancet 1591, 1595 (2005).

[27] Eric Lim, Adam Brown, Adel Helmy, Shafi Mussa & Douglas Altman, “Composite Outcomes in Cardiovascular Research: A Survey of Randomized Trials,” 149 Ann. Intern. Med. 612 (2008).

[28] See, e.g., Thomas Brott email to Walter Kernan (Sept. 10, 2000).

[29] Joseph P. Broderick, Catherine M. Viscoli, Thomas Brott, Walter N. Kernan, Lawrence M. Brass, Edward Feldmann, Lewis B. Morgenstern, Janet Lee Wilterdink, and Ralph I. Horwitz, “Major Risk Factors for Aneurysmal Subarachnoid Hemorrhage in the Young Are Modifiable,” 34 Stroke 1375 (2003).

[30] Id. at 1379.

[31] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1243 (W.D. Wash. 2003).

[32] Id. at 1243.

[33] Id., citing Rothman Affidavit, ¶ 7; Kenneth J. Rothman, Epidemiology:  An Introduction at 117 (2002).

[34] HSP Final Report at 26 (“HSP interviewers were not blinded to the case-control status of study subjects and some were aware of the study purpose.”); Walter Kernan Dep. at 473-74, In re PPA Prods. Liab. Litig., MDL 1407 (W.D. Wash.) (Sept. 19, 2002).

[35] HSP Final Report at 26.

[36] Stier & Hennekens, note 24 supra, at 51.

[37] Kernan, supra note 10, at 1831.

[38] See Christopher T. Robertson & Aaron S. Kesselheim, Blinding as a Solution to Bias – Strengthening Biomedical Science, Forensic Science, and the Law 53 (2016); Sandy Zabell, “The Virtues of Being Blind,” 29 Chance 32 (2016).

[39] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965).

[40] See Barbara J. Rothstein, Francis E. McGovern, and Sarah Jael Dion, “A Model Mass Tort: The PPA Experience,” 54 Drake L. Rev. 621 (2006); Linda A. Ash, Mary Ross Terry, and Daniel E. Clark, Matthew Bender Drug Product Liability § 15.86 PPA (2003).

[41] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230 (W.D. Wash. 2003).

[42] Id. at 1236 n.1

[43] Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers 171, 173-74 (3rd ed. 2015). See also Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

[44] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1241 (W.D. Wash. 2003).

[45] Id. (citing Reference Manual at 126-27, 358 n. 69). The edition of the Manual was not identified by the court.

[46] Id. at n.9, citing deposition of Ralph Horowitz [sic].

[47] Id., citing Good v. Fluor Daniel Corp., 222 F.Supp. 2d 1236, 1242-43 (E.D. Wash. 2002).

[48] Id. at 1241, citing Kernan at 183.

[49] In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1239 (W.D. Wash. 2003) (citing 2 Modern Scientific Evidence: The Law and Science of Expert Testimony § 28-1.1, at 302-03 (David L. Faigman,  et al., eds., 1997) (“Epidemiologic studies have been well received by courts trying mass tort suits. Well-conducted studies are uniformly admitted. The widespread acceptance of epidemiology is based in large part on the belief that the general techniques are valid.”).

[50] Id. at 1240. The court cited the Reference Manual on Scientific Evidence 337 (2d ed. 2000), for this universal attribution of flaws to epidemiology studies (“It is important to recognize that most studies have flaws. Some flaws are inevitable given the limits of technology and resources.”) Of course, when technology and resources are limited, expert witnesses are permitted to say “I cannot say.” The PPA MDL court also cited another MDL court, which declared that “there is no such thing as a perfect epidemiological study.” In re Orthopedic Bone Screw Prods. Liab. Litig., MDL No. 1014, 1997 WL 230818, at *8-9 (E.D.Pa. May 5, 1997).

[51] Id. at 1236.

[52] Id. at 1239.

[53] Susan Haack, “Irreconcilable Differences? The Troubled Marriage of Science and Law,” 72 Law & Contemp. Problems 1, 19 (2009) (internal citations omitted). It may be telling that Haack has come to publish much of her analysis in law reviews. See Nathan Schachtman, “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense,” Tortini (Aug. 14, 2011).

[54] Kernan, supra note 10, at 1831.

[55] In re Propanolamine Prods. Litig., MDL 1407, Order re Motion to Quash Subpoenas re Yale Study’s Hospital Records (W.D. Wash. Aug. 16, 2002). Two of the HSP investigators wrote an article, over a decade later, to complain about litigation efforts to obtain data from ongoing studies. They did not mention the PPA case. Walter N. Kernan, Catherine M. Viscoli, and Mathew C. Varughese, “Litigation Seeking Access to Data From Ongoing Clinical Trials: A Threat to Clinical Research,” 174 J. Am. Med. Ass’n Intern. Med. 1502 (2014).

[56] Barbara J. Rothstein, Francis E. McGovern, and Sarah Jael Dion, “A Model Mass Tort: The PPA Experience,” 54 Drake L. Rev. 621, 638 (2006).

[57] Anthony Roisman, “Daubert & Its Progeny – Finding & Selecting Experts – Direct & Cross-Examination,” ALI-ABA 2009. Roisman’s remarks about the role of Tellus Institute start just after minute 8, on the recording, available from the American Law Institute, and the author.

[58] See “Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of: A Publication of the Project on Scientific Knowledge and Public Policy, coordinated by the Tellus Institute” (2003).

[59] See, e.g., David Michaels, Doubt is Their Product: How Industry’s War on Science Threatens Your Health 267 (2008).

[60] See Richard W. Clapp & David Ozonoff, “Environment and Health: Vital Intersection or Contested Territory?” 30 Am. J. L. & Med. 189, 189 (2004) (“This Article also benefited from discussions with colleagues in the project on Scientific Knowledge and Public Policy at Tellus Institute, in Boston, Massachusetts.”).

[61] See Barbara Rothstein, “Bringing Science to Law,” 95 Am. J. Pub. Health S1 (2005) (“The Coronado Conference brought scientists and judges together to consider these and other tensions that arise when science is introduced in courts.”).

[62] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992),” 38 Villanova L. Rev. 1219 (1993); Bruce A. Green, “May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy,” 29 Fordham Urb. L.J. 941, 996-98 (2002).

[63] Alison Frankel, “A Line in the Sand,” The Am. Lawyer – Litigation (2005); Alison Frankel, “The Mass Tort Bonanza That Wasn’t,” The Am. Lawyer (Jan. 6, 2006).

[64] O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46-47 (March 18, 2004) (Hon. Anthony J. Mohr), aff’d sub nom. O’Neill v. Novartis Consumer Health, Inc., 147 Cal. App. 4th 1388, 55 Cal. Rptr. 3d 551, 558-61 (2007).

[65] Richard Clapp & Michael L. Williams, “Regarding ‘Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project’,” 16 Ann. Epidem. 580 (2006).

[66] David Michaels, “Regarding ‘Phenylpropanolamine and Hemorrhagic Stroke in the Hemorrhagic Stroke Project’: Mercenary Epidemiology – Data Reanalysis and Reinterpretation for Sponsors with Financial Interest in the Outcome,” 16 Ann. Epidem. 583 (2006). Hennekens responded to these letters. Stier & Hennekens, note 24, supra.

Links, Ties, and Other Hook Ups in Risk Factor Epidemiology

July 5th, 2023

Many journalists struggle with reporting the results from risk factor epidemiology. Recently, JAMA Network Open published an epidemiologic study (“Williams study”) that explored whether exposure to Agent Orange among United States military veterans was associated with bladder cancer.[1] The study found little to no association, but lay and scientific journalists described the study as finding a “link,”[2] or a “tie,”[3] thus suggesting causality. One web-based media report stated, without qualification, that Agent Orange “increases bladder cancer risk.”[4]

 

Even the authors of the Williams study described the results inconsistently and hyperbolically. Within the four corners of the published article, the authors described themselves as having found a “modestly increased risk of bladder cancer,” and then, on the same page, they reported that “the association was very slight (hazard ratio, 1.04; 95% C.I., 1.02-1.06).”

In one place, the Williams study states that it looked at a cohort of 868,912 veterans with exposure to Agent Orange, and evaluated bladder cancer outcomes against outcomes in 2,427,677 matched controls. Elsewhere, the authors report different numbers, which are hard to reconcile. In any event, the authors had a very large sample size, which had the power to detect theoretically small differences as “statistically significant” (p < 0.05). Indeed, the study was so large that even a very slight disparity in rates between the exposed and unexposed cohort members could be “statistically significantly” different, notwithstanding that systematic error certainly played a much larger role in the results than random error. In terms of absolute numbers, the researchers found 50,781 bladder cancer diagnoses, on follow-up of 28,672,655 person-years. Overall, 2.1% of the exposed servicemen were diagnosed with bladder cancer, as against 2.0% of the unexposed. Calling this putative disparity a “modest association” is a gross overstatement, and it is difficult to square the authors’ pronouncement of a “modest association” with a “very slight increased risk.”
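
A back-of-the-envelope sketch, reconstructed from the rounded figures quoted above (which, as noted, do not fully reconcile with the reported totals), shows how a sample of this size turns a trivial absolute difference into a “statistically significant” one:

```python
# Approximate reconstruction from the rounded percentages (2.1% vs. 2.0%) and
# the cohort sizes reported by Williams et al.; illustrative only.
import math
from statistics import NormalDist

n_exposed, n_unexposed = 868_912, 2_427_677
cases_exposed = round(0.021 * n_exposed)       # approx. 18,247
cases_unexposed = round(0.020 * n_unexposed)   # approx. 48,554

p1, p2 = cases_exposed / n_exposed, cases_unexposed / n_unexposed
risk_ratio = p1 / p2
pooled = (cases_exposed + cases_unexposed) / (n_exposed + n_unexposed)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_unexposed))
z = (p1 - p2) / se
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"risk ratio ~ {risk_ratio:.2f}, absolute difference ~ {p1 - p2:.3%}")
print(f"z ~ {z:.1f}, two-sided p ~ {p_two_sided:.1e}")
# A 0.1 percentage-point difference yields a p-value far below 0.05, while
# saying nothing about the systematic error (confounding, surveillance bias)
# discussed below.
```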

The authors also reported that there was no association between Agent Orange exposure and aggressiveness of bladder cancer, with bladder wall muscle invasion taken to be the marker of aggressiveness. Given that the authors were willing to proclaim a hazard ratio of 1.04 as an association, this report of no association with aggressiveness is manifestly false. The Williams study found a decreased odds of a diagnosis of muscle-invasive bladder cancer among the exposed cases, with an odds ratio of 0.91, 95% CI 0.85-0.98 (p = 0.009). The study thus did not find an absence of an association, but rather an inverse association.

Causality

Under the heading of “Meaning,” the authors wrote that “[t]hese findings suggest an association between exposure to Agent Orange and bladder cancer, although the clinical relevance of this was unclear.” Despite disclaiming a causal interpretation of their results, Williams and colleagues wrote that their results “support prior investigations and further support bladder cancer to be designated as an Agent Orange-associated disease.”

Williams and colleagues note that the Institute of Medicine had suggested that the association between Agent Orange exposure and bladder cancer outcomes required further research.[5] Requiring additional research was apparently sufficient for the Department of Veterans Affairs, in 2021, to assume facts not in evidence, and to designate “bladder cancer as a cancer caused by Agent Orange exposure.”[6]

Williams and colleagues themselves appear to disavow a causal interpretation of their results: “we cannot determine causality given the retrospective nature of our study design.” They also acknowledged their inability to “exclude potential selection bias and misclassification bias.” Although the authors did not explore the issue, exposed servicemen may well have been under greater scrutiny, creating surveillance and diagnostic biases.

The authors failed to grapple with other, perhaps more serious, biases and inadequacies of methodology in their study. Although the authors claimed to have controlled for the most important confounders, they failed to include diabetes as a co-variate in their analysis, even though diabetic patients have a more than doubled risk of bladder cancer, even after adjustment for smoking.[7] Diabetic patients would also have been likely to have had more visits to VA centers for healthcare, and thus more opportunity to have been diagnosed with bladder cancer.

Furthermore, with respect to the known confounding variable of smoking, the authors trichotomized smoking history as “never,” “former,” or “current” smoker. The authors were missing smoking information for about 13% of the cohort. In a univariate analysis based upon smoking status (Table 4), the authors reported the following hazard ratios for bladder cancer, by smoking status:

Smoking status at bladder cancer diagnosis — hazard ratio (95% CI):

Never smoked: 1 [Reference]
Current smoker: 1.10 (1.00-1.21)
Former smoker: 1.08 (1.00-1.18)
Unknown: 1.17 (1.05-1.31)

This analysis of smoking risk points to the fragility of the Agent Orange analyses. First, “unknown” smoking status is associated with roughly twice the excess risk seen for current or former smokers. Second, the risk ratios for bladder cancer were, understandably, higher for current smokers (HR 1.10; 95% CI, 1.00-1.21) and former smokers (HR 1.08; 95% CI, 1.00-1.18) than for never-smoking veterans.

Third, the Williams study’s univariate analysis of smoking and bladder cancer generates risk ratios that are quite out of line with independent studies of smoking and bladder cancer risk. For instance, meta-analyses of studies of smoking and bladder cancer risk report risk ratios of 2.58 (95% C.I., 2.37–2.80) for any smoking, 3.47 (3.07–3.91) for current smoking, and 2.04 (1.85–2.25) for past smoking.[8] The excess risks of bladder cancer associated with smoking in these meta-analyses are thus an order of magnitude greater than those in the Williams study’s univariate analysis of smoking, as well as in its multivariate analysis of Agent Orange.

Fourth, the authors engage in the common, but questionable, practice of categorizing a known confounder, smoking, which should ideally be analyzed as a continuous variable with respect to quantity consumed, years smoked, and years since quitting.[9] The question here, given that the study is very large, is not the loss of power,[10] but bias away from the null. Peter Austin has shown, by Monte Carlo simulation, that categorizing a continuous variable in a logistic regression inflates the rate of finding false-positive associations.[11] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. The large dataset used by Williams and colleagues, which they see as a plus, works against them by increasing the bias from the use of categorical variables for confounding variables.[12]
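
A minimal sketch in the spirit of the Austin simulations (not their actual code or parameter choices) illustrates the mechanism: when a continuous confounder is forced into quartiles, residual confounding within the categories can make a truly null exposure appear “statistically significant” far more often than the nominal 5% of the time.

```python
# Illustrative Monte Carlo sketch: the exposure has no true effect on the
# outcome, yet categorizing the confounder inflates the false-positive rate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sims, alpha = 2000, 200, 0.05
false_positives = 0

for _ in range(n_sims):
    confounder = rng.normal(size=n)                                  # continuous confounder
    exposure = (confounder + rng.normal(size=n) > 0).astype(float)   # correlated exposure
    # Outcome depends on the confounder only; the true exposure effect is zero.
    p = 1 / (1 + np.exp(-(-1.0 + 1.0 * confounder)))
    y = rng.binomial(1, p)

    # Analyst's model: exposure plus the confounder cut into quartile indicators.
    quartile = np.digitize(confounder, np.quantile(confounder, [0.25, 0.5, 0.75]))
    dummies = np.eye(4)[quartile][:, 1:]                             # drop reference level
    X = sm.add_constant(np.column_stack([exposure, dummies]))
    fit = sm.Logit(y, X).fit(disp=0)
    false_positives += fit.pvalues[1] < alpha                        # p-value for exposure

print(f"Type I error rate for the null exposure: {false_positives / n_sims:.1%}")
# Typically well above the nominal 5% when the confounder is categorized;
# refitting with the continuous confounder in place of the quartile dummies
# brings the rate back down toward 5%.
```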

The Williams study raises serious questions not only about the quality of medical journalism, but also about how an executive agency such as the Veterans Administration evaluates scientific evidence. If the Williams study were to play a role in compensation determinations, it would seem that veterans with muscle-invasive bladder cancer would be turned away, while those veterans with less serious cancers would be compensated. But with 2.1% incidence versus 2.0%, how can compensation be rationally permitted in every case?


[1] Stephen B. Williams, Jessica L. Janes, Lauren E. Howard, Ruixin Yang, Amanda M. De Hoedt, Jacques G. Baillargeon, Yong-Fang Kuo, Douglas S. Tyler, Martha K. Terris, Stephen J. Freedland, “Exposure to Agent Orange and Risk of Bladder Cancer Among US Veterans,” 6 JAMA Network Open e2320593 (2023).

[2] Elana Gotkine, “Exposure to Agent Orange Linked to Risk of Bladder Cancer,” Buffalo News (June 28, 2023); Drew Amorosi, “Agent Orange exposure linked to increased risk for bladder cancer among Vietnam veterans,” HemOnc Today (June 27, 2023).

[3] Andrea S. Blevins Primeau, “Agent Orange Exposure Tied to Increased Risk of Bladder Cancer,” Cancer Therapy Advisor (June 30, 2023); Mike Bassett, “Agent Orange Exposure Tied to Bladder Cancer Risk in Veterans — Increased risk described as ‘modest’, and no association seen with aggressiveness of cancer,” Medpage Today (June 27, 2023).

[4] Darlene Dobkowski, “Agent Orange Exposure Modestly Increases Bladder Cancer Risk in Vietnam Veterans,” Cure Today (June 27, 2023).

[5] Institute of Medicine – Committee to Review the Health Effects in Vietnam Veterans of Exposure to Herbicides (Tenth Biennial Update), Veterans and Agent Orange: Update 2014 at 10 (2016) (upgrading previous finding of “inadequate” to “suggestive”).

[6] Williams study, citing U.S. Department of Veterans Affairs, “Agent Orange exposure and VA disability compensation.”

[7] Yeung Ng, I. Husain, N. Waterfall, “Diabetes Mellitus and Bladder Cancer – An Epidemiological Relationship?” 9 Path. Oncol. Research 30 (2003) (diabetic patients had an increased, significant odds ratio for bladder cancer compared with non diabetics even after adjustment for smoking and age [OR: 2.69 p=0.049 (95% CI 1.006-7.194)]).

[8] Marcus G. Cumberbatch, Matteo Rota, James W.F. Catto, and Carlo La Vecchia, “The Role of Tobacco Smoke in Bladder and Kidney Carcinogenesis: A Comparison of Exposures and Meta-analysis of Incidence and Mortality Risks?” 70 European Urology 458 (2016).

[9] See generally, “Confounded by Confounding in Unexpected Places” (Dec. 12, 2021).

[10] Jacob Cohen, “The cost of dichotomization,” 7 Applied Psychol. Measurement 249 (1983).

[11] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[12] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006); Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M. Cumberland, Gabriela Czanner, Catey Bunce, Caroline J. Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014); Julie R. Irwin & Gary H. McClelland, “Negative Consequences of Dichotomizing Continuous Predictor Variables,” 40 J. Marketing Research 366 (2003); Stanley E. Lazic, “Four simple ways to increase power without increasing the sample size,” PeerJ Preprints (23 Oct 2017).

Judicial Flotsam & Jetsam – Retractions

June 12th, 2023

In scientific publishing, when scientists make a mistake, they publish an erratum or a corrigendum. If the mistake vitiates the study, then the erring scientists retract their article. To be sure, sometimes the retraction comes after an obscene delay, with the authors kicking and screaming.[1] Sometimes the retraction comes at the request of the authors, better late than never.[2]

Retractions in the biomedical journals, whether voluntary or not, are on the rise.[3] The process and procedures for retraction of articles often lack transparency. Many articles are retracted without explanation or disclosure of specific problems about the data or the analysis.[4] Sadly, however, misconduct in the form of plagiarism and data falsification is a frequent reason for retractions.[5] The lack of transparency for retractions, and sloppy scholarship, combine to create Zombie papers, which are retracted but continue to be cited in subsequent publications.[6]

LEGAL RETRACTIONS

The law treats errors very differently. Being a judge usually means that you never have to say you are sorry. Judge Andrew Hurwitz has argued that our legal system would be better served if judges could and did “freely acknowledged and transparently corrected the occasional ‘goof’.”[7] Alas, as Judge Hurwitz notes, very few published decisions acknowledge mistakes.[8]

In the world of scientific jurisprudence, the judicial reticence to acknowledge mistakes is particularly dangerous, and it leads directly to the proliferation of citations to cases that make egregious mistakes. In the niche area of judicial assessment of scientific and statistical evidence, the proliferation of erroneous statements is especially harmful because it interferes with thinking clearly about the issues before courts. Judges believe that they have argued persuasively for a result, not by correctly marshaling statistical and scientific concepts, but by relying upon precedents erroneously arrived at by other judges in earlier cases. Regardless of how many cases are cited (and there are many possible “precedents”), the true parameter does not have a 95% probability of lying within the interval given by a given 95% confidence interval.[9] Similarly, as much as judges would like p-values and confidence intervals to eliminate the need to worry about systematic error, their saying so cannot make it so.[10] Even a mighty federal judge cannot make the p-value probability, or its complement, substitute for the posterior probability of a causal claim.[11]

Some cases in the books are so egregiously decided that it is truly remarkable that they would be cited for any proposition. I call these scientific Dred Scott cases, which illustrate that sometimes science has no criteria of validity that the law is bound to respect. One such Dred Scott case was the result of a bench trial in a federal district court in Atlanta, in Wells v. Ortho Pharmaceutical Corporation.[12]

Wells was notorious for its poor assessment of all the determinants of scientific causation.[13] The decision was met with a storm of opprobrium from the legal and medical community.[14] No scientists or legal scholars offered a serious defense of Wells on the scientific merits. Even the notorious plaintiffs’ expert witness, Carl Cranor, could muster only a distanced agnosticism:

“In Wells v. Ortho Pharmaceutical Corp., which involved a claim that birth defects were caused by a spermicidal jelly, the U.S. Court of Appeals for the 11th Circuit followed the principles of Ferebee and affirmed a plaintiff’s verdict for about five million dollars. However, some members of the medical community chastised the legal system essentially for ignoring a well-established scientific consensus that spermicides are not teratogenic. We are not in a position to judge this particular issue, but the possibility of such results exists.”[15]

Cranor apparently could not bring himself to note that it was not just scientific consensus that was ignored; the Wells case ignored the basic scientific process of examining relevant studies for both internal and external validity.

Notwithstanding this scholarly consensus and condemnation, we have witnessed the repeated recrudescence of the Wells decision. In Matrixx Initiatives, Inc. v. Siracusano,[16] in 2011, the Supreme Court, speaking through Justice Sotomayor, wandered into a discussion, irrelevant to its holding, whether statistical significance was necessary for a determination of the causality of an association:

“We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir. 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”[17]

The quoted language is remarkable for two reasons. First, the Best and Westberry cases did not involve statistics at all. They addressed specific causation inferences from what is generally known as differential etiology. Second, the citation to Wells was noteworthy because the case has nothing to do with adverse event reports or the lack of statistical significance.

Wells involved a claim of birth defects caused by the use of a spermicidal jelly contraceptive, which had been the subject of several studies, at least one of which yielded a nominally statistically significant increase in detected birth defects over what was expected.

Wells could thus hardly be an example of a case in which there was a judgment of causation based upon a scientific study that lacked statistical significance in its findings. Of course, finding statistical significance is just the beginning of assessing the causality of an association. The most remarkable and disturbing aspect of the citation to Wells, however, was that the Court was unaware of, or ignored, the case’s notoriety, and the scholarly and scientific consensus that criticized the decision for its failure to evaluate the entire evidentiary display, as well as for its failure to rule out bias and confounding in the studies relied upon by the plaintiff.

Justice Sotomayor’s decision for a unanimous Court is not alone in its failure of scholarship and analysis in embracing the dubious precedent of Wells. Many other courts have done much the same, both in state[18] and in federal courts,[19] and both before and after the Supreme Court decided Daubert, and even after Rule 702 was amended in 2000.[20] Perhaps even more disturbing is that the current edition of the Reference Manual on Scientific Evidence glibly cites to the Wells case, for the dubious proposition that

“Generally, researchers are conservative when it comes to assessing causal relationships, often calling for stronger evidence and more research before a conclusion of causation is drawn.”[21]

We are coming up on the 40th anniversary of the Wells judgment. It is long past time to stop citing the case. Perhaps we have reached the stage of dealing with scientific evidence at which errant and aberrant cases should be retracted, and clearly marked as retracted in the official reporters, and in the electronic legal databases. Certainly the technology exists to link the scholarly criticism with a case citation, just as we link subsequent judicial treatment by overruling, limiting, and criticizing.


[1] Laura Eggertson, “Lancet retracts 12-year-old article linking autism to MMR vaccines,” 182 Canadian Med. Ass’n J. E199 (2010).

[2] Notice of retraction for Teng Zeng & William Mitch, “Oral intake of ranitidine increases urinary excretion of N-nitrosodimethylamine,” 37 Carcinogenesis 625 (2016), published online (May 4, 2021) (retraction requested by authors with an acknowledgement that they had used incorrect analytical methods for their study).

[3] Tianwei He, “Retraction of global scientific publications from 2001 to 2010,” 96 Scientometrics 555 (2013); Bhumika Bhatt, “A multi-perspective analysis of retractions in life sciences,” 126 Scientometrics 4039 (2021); Raoul R.Wadhwa, Chandruganesh Rasendran, Zoran B. Popovic, Steven E. Nissen, and Milind Y. Desai, “Temporal Trends, Characteristics, and Citations of Retracted Articles in Cardiovascular Medicine,” 4 JAMA Network Open e2118263 (2021); Mario Gaudino, N. Bryce Robinson, Katia Audisio, Mohamed Rahouma, Umberto Benedetto, Paul Kurlansky, Stephen E. Fremes, “Trends and Characteristics of Retracted Articles in the Biomedical Literature, 1971 to 2020,” 181 J. Am. Med. Ass’n Internal Med. 1118 (2021); Nicole Shu Ling Yeo-Teh & Bor Luen Tang, “Sustained Rise in Retractions in the Life Sciences Literature during the Pandemic Years 2020 and 2021,” 10 Publications 29 (2022).

[4] Elizabeth Wager & Peter Williams, “Why and how do journals retract articles? An analysis of Medline retractions 1988-2008,” 37 J. Med. Ethics 567 (2011).

[5] Ferric C. Fang, R. Grant Steen, and Arturo Casadevall, “Misconduct accounts for the majority of retracted scientific publications,” 109 Proc. Nat’l Acad. Sci. 17028 (2012); L.M. Chambers, C.M. Michener, and T. Falcone, “Plagiarism and data falsification are the most common reasons for retracted publications in obstetrics and gynaecology,” 126 Br. J. Obstetrics & Gyn. 1134 (2019); M.S. Marsh, “Separating the good guys and gals from the bad,” 126 Br. J. Obstetrics & Gyn. 1140 (2019).

[6] Tzu-Kun Hsiao and Jodi Schneider, “Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine,” 2 Quantitative Science Studies 1144 (2021).

[7] Andrew D. Hurwitz, “When Judges Err: Is Confession Good for the Soul?” 56 Ariz. L. Rev. 343, 343 (2014).

[8] See id. at 343-44 (quoting Justice Story who dealt with the need to contradict a previously published opinion, and who wrote “[m]y own error, however, can furnish no ground for its being adopted by this Court.” U.S. v. Gooding, 25 U.S. 460, 478 (1827)).

[9] See, e.g., DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042, 1046 (D.N.J. 1992) (“A 95% confidence interval means that there is a 95% probability that the ‘true’ relative risk falls within the interval”), aff’d, 6 F.3d 778 (3d Cir. 1993); In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F.Supp.2d 879, 897 (C.D. Cal. 2004); Eli Lilly & Co. v. Teva Pharms., USA, 2008 WL 2410420, *24 (S.D. Ind. 2008) (stating incorrectly that “95% percent of the time, the true mean value will be contained within the lower and upper limits of the confidence interval range”). See also “Confidence in Intervals and Diffidence in the Courts” (Mar. 4, 2012).

[10] See, e.g., Brock v. Merrill Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989) (“Fortunately, we do not have to resolve any of the above questions [as to bias and confounding], since the studies presented to us incorporate the possibility of these factors by the use of a confidence interval.”). This howler has been widely acknowledged in the scholarly literature. See David Kaye, David Bernstein, and Jennifer Mnookin, The New Wigmore – A Treatise on Evidence: Expert Evidence § 12.6.4, at 546 (2d ed. 2011); Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 86-87 (2009) (criticizing the blatantly incorrect interpretation of confidence intervals by the Brock court).

[11] In re Ephedra Prods. Liab. Litig., 393 F.Supp. 2d 181, 191 (S.D.N.Y. 2005) (Rakoff, J.) (“Generally accepted scientific convention treats a result as statistically significant if the P-value is not greater than .05. The expression ‘P=.05’ means that there is one chance in twenty that a result showing increased risk was caused by a sampling error — i.e., that the randomly selected sample accidentally turned out to be so unrepresentative that it falsely indicates an elevated risk.”); see also In re Phenylpropanolamine (PPA) Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1236 n.1 (W.D. Wash. 2003) (“P-values measure the probability that the reported association was due to chance… .”). Although the erroneous Ephedra opinion continues to be cited, it has been debunked in the scholarly literature. See Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 65 (2009); Nathan A. Schachtman, “Statistical Evidence in Products Liability Litigation,” at 28-13, chap. 28, in Stephanie A. Scharf, George D. Sax, & Sarah R. Marmor, eds., Product Liability Litigation: Current Law, Strategies and Best Practices (2d ed. 2021).

[12] Wells v. Ortho Pharm. Corp., 615 F. Supp. 262 (N.D. Ga.1985), aff’d & modified in part, remanded, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).

[13] I have discussed the Wells case in a series of posts, “Wells v. Ortho Pharm. Corp., Reconsidered,” (2012), part one, two, three, four, five, and six.

[14] See, e.g., James L. Mills and Duane Alexander, “Teratogens and ‘Litogens’,” 15 New Engl. J. Med. 1234 (1986); Samuel R. Gross, “Expert Evidence,” 1991 Wis. L. Rev. 1113, 1121-24 (1991) (“Unfortunately, Judge Shoob’s decision is absolutely wrong. There is no scientifically credible evidence that Ortho-Gynol Contraceptive Jelly ever causes birth defects.”). See also Editorial, “Federal Judges v. Science,” N.Y. Times, December 27, 1986, at A22 (unsigned editorial) (“That Judge Shoob and the appellate judges ignored the best scientific evidence is an intellectual embarrassment.”);  David E. Bernstein, “Junk Science in the Courtroom,” Wall St. J. at A 15 (Mar. 24,1993) (pointing to Wells as a prominent example of how the federal judiciary had embarrassed American judicial system with its careless, non-evidence based approach to scientific evidence); Bert Black, Francisco J. Ayala & Carol Saffran-Brinks, “Science and the Law in the Wake of Daubert: A New Search for Scientific Knowledge,” 72 Texas L. Rev. 715, 733-34 (1994) (lawyers and leading scientist noting that the district judge “found that the scientific studies relied upon by the plaintiffs’ expert were inconclusive, but nonetheless held his testimony sufficient to support a plaintiffs’ verdict. *** [T]he court explicitly based its decision on the demeanor, tone, motives, biases, and interests that might have influenced each expert’s opinion. Scientific validity apparently did not matter at all.”) (internal citations omitted); Bert Black, “A Unified Theory of Scientific Evidence,” 56 Fordham L. Rev. 595, 672-74 (1988); Paul F. Strain & Bert Black, “Dare We Trust the Jury – No,” 18 Brief  7 (1988); Bert Black, “Evolving Legal Standards for the Admissibility of Scientific Evidence,” 239 Science 1508, 1511 (1988); Diana K. Sheiness, “Out of the Twilight Zone: The Implications of Daubert v. Merrill Dow Pharmaceuticals, Inc.,” 69 Wash. L. Rev. 481, 493 (1994); David E. Bernstein, “The Admissibility of Scientific Evidence after Daubert v. Merrell Dow Pharmacueticals, Inc.,” 15 Cardozo L. Rev. 2139, 2140 (1993) (embarrassing decision); Troyen A. Brennan, “Untangling Causation Issues in Law and Medicine: Hazardous Substance Litigation,” 107 Ann. Intern. Med. 741, 744-45 (1987) (describing the result in Wells as arising from the difficulties created by the Ferebee case; “[t]he Wells case can be characterized as the court embracing the hypothesis when the epidemiologic study fails to show any effect”); Troyen A. Brennan, “Causal Chains and Statistical Links: Some Thoughts on the Role of Scientific Uncertainty in Hazardous Substance Litigation,” 73 Cornell L. Rev. 469, 496-500 (1988); David B. Brushwood, “Drug induced birth defects: difficult decisions and shared responsibilities,” 91 W. Va. L. Rev. 51, 74 (1988); Kenneth R. Foster, David E. Bernstein, and Peter W. Huber, eds., Phantom Risk: Scientific Inference and the Law 28-29, 138-39 (1993) (criticizing Wells decision); Peter Huber, “Medical Experts and the Ghost of Galileo,” 54 Law & Contemp. Problems 119, 158 (1991); Edward W. Kirsch, “Daubert v. Merrell Dow Pharmaceuticals: Active Judicial Scrutiny of Scientific Evidence,” 50 Food & Drug L.J. 
213 (1995) (“a case in which a court completely ignored the overwhelming consensus of the scientific community”); Hans Zeisel & David Kaye, Prove It With Figures: Empirical Methods in Law and Litigation § 6.5, at 93(1997) (noting the multiple comparisons in studies of birth defects among women who used spermicides, based upon the many reported categories of birth malformations, and the large potential for even more unreported categories); id. at § 6.5 n.3, at 271 (characterizing Wells as “notorious,” and noting that the case became a “lightning rod for the legal system’s ability to handle expert evidence.”); Edward K. Cheng , “Independent Judicial Research in the ‘Daubert’ Age,” 56 Duke L. J. 1263 (2007) (“notoriously concluded”); Edward K. Cheng, “Same Old, Same Old: Scientific Evidence Past and Present,” 104 Michigan L. Rev. 1387, 1391 (2006) (“judge was fooled”); Harold P. Green, “The Law-Science Interface in Public Policy Decisionmaking,” 51 Ohio St. L.J. 375, 390 (1990); Stephen L. Isaacs & Renee Holt, “Drug regulation, product liability, and the contraceptive crunch: Choices are dwindling,” 8 J. Legal Med. 533 (1987); Neil Vidmar & Shari S. Diamond, “Juries and Expert Evidence,” 66 Brook. L. Rev. 1121, 1169-1170 (2001); Adil E. Shamoo, “Scientific evidence and the judicial system,” 4 Accountability in Research 21, 27 (1995); Michael S. Davidson, “The limitations of scientific testimony in chronic disease litigation,” 10 J. Am. Coll. Toxicol. 431, 435 (1991); Charles R. Nesson & Yochai Benkler, “Constitutional Hearsay: Requiring Foundational Testing and Corroboration under the Confrontation Clause,” 81 Virginia L. Rev. 149, 155 (1995); Stephen D. Sugarman, “The Need to Reform Personal Injury Law Leaving Scientific Disputes to Scientists,” 248 Science 823, 824 (1990); Jay P. Kesan, “A Critical Examination of the Post-Daubert Scientific Evidence Landscape,” 52 Food & Drug L. J. 225, 225 (1997); Ora Fred Harris, Jr., “Communicating the Hazards of Toxic Substance Exposure,” 39 J. Legal Ed. 97, 99 (1989) (“some seemingly horrendous decisions”); Ora Fred Harris, Jr., “Complex Product Design Litigation: A Need for More Capable Fact-Finders,” 79 Kentucky L. J. 510 & n.194 (1991) (“uninformed judicial decision”); Barry L. Shapiro & Marc S. Klein, “Epidemiology in the Courtroom: Anatomy of an Intellectual Embarrassment,” in Stanley A. Edlavitch, ed., Pharmacoepidemiology 87 (1989); Marc S. Klein, “Expert Testimony in Pharmaceutical Product Liability Actions,” 45 Food, Drug, Cosmetic L. J. 393, 410 (1990); Michael S. Lehv, “Medical Product Liability,” Ch. 39, in Sandy M. Sanbar & Marvin H. Firestone, eds., Legal Medicine 397, 397 (7th ed. 2007); R. Ryan Stoll, “A Question of Competence – Judicial Role in Regulation of Pharmaceuticals,” 45 Food, Drug, Cosmetic L. J. 279, 287 (1990); Note, “A Question of Competence: The Judicial Role in the Regulation of Pharmaceuticals,” Harvard L. Rev. 773, 781 (1990); Peter H. Schuck, “Multi-Culturalism Redux: Science, Law, and Politics,” 11 Yale L. & Policy Rev. 1, 13 (1993); Howard A. Denemark, “Improving Litigation Against Drug Manufacturers for Failure to Warn Against Possible Side  Effects: Keeping Dubious Lawsuits from Driving Good Drugs off the Market,” 40 Case Western Reserve L.  Rev. 413, 438-50 (1989-90); Howard A. Denemark, “The Search for Scientific Knowledge in Federal Courts in the Post-Frye Era: Refuting the Assertion that Law Seeks Justice While Science Seeks Truth,” 8 High Technology L. J. 235 (1993)

[15] Carl Cranor & Kurt Nutting, “Scientific and Legal Standards of Statistical Evidence in Toxic Tort and Discrimination Suits,” 9 Law & Philosophy 115, 123 (1990) (internal citations omitted).

[16] 131 S.Ct. 1309 (2011) [Matrixx]

[17] Id. at 1319.

[18] Baroldy v. Ortho Pharmaceutical Corp., 157 Ariz. 574, 583, 760 P.2d 574 (Ct. App. 1988); Earl v. Cryovac, A Div. of WR Grace, 115 Idaho 1087, 772 P. 2d 725, 733 (Ct. App. 1989); Rubanick v. Witco Chemical Corp., 242 N.J. Super. 36, 54, 576 A. 2d 4 (App. Div. 1990), aff’d in part, 125 N.J. 421, 442, 593 A. 2d 733 (1991); Minnesota Min. & Mfg. Co. v. Atterbury, 978 S.W. 2d 183, 193 n.7 (Tex. App. 1998); E.I. Dupont de Nemours v. Castillo ex rel. Castillo, 748 So. 2d 1108, 1120 (Fla. Dist. Ct. App. 2000); Bell v. Lollar, 791 N.E.2d 849, 854 (Ind. App. 2003); King v. Burlington Northern & Santa Fe Ry, 277 Neb. 203, 762 N.W.2d 24, 35 & n.16 (2009).

[19] City of Greenville v. WR Grace & Co., 827 F. 2d 975, 984 (4th Cir. 1987); American Home Products Corp. v. Johnson & Johnson, 672 F. Supp. 135, 142 (S.D.N.Y. 1987); Longmore v. Merrell Dow Pharms., Inc., 737 F. Supp. 1117, 1119 (D. Idaho 1990); Conde v. Velsicol Chemical Corp., 804 F. Supp. 972, 1019 (S.D. Ohio 1992); Joiner v. General Elec. Co., 864 F. Supp. 1310, 1322 (N.D. Ga. 1994) (which case ultimately ended up in the Supreme Court); Bowers v. Northern Telecom, Inc., 905 F. Supp. 1004, 1010 (N.D. Fla. 1995); Pick v. American Medical Systems, 958 F. Supp. 1151, 1158 (E.D. La. 1997); Baker v. Danek Medical, 35 F. Supp. 2d 875, 880 (N.D. Fla. 1998).

[20] Rider v. Sandoz Pharms. Corp., 295 F. 3d 1194, 1199 (11th Cir. 2002); Kilpatrick v. Breg, Inc., 613 F. 3d 1329, 1337 (11th Cir. 2010); Siharath v. Sandoz Pharms. Corp., 131 F. Supp. 2d 1347, 1359 (N.D. Ga. 2001); In re Meridia Prods. Liab. Litig., Case No. 5:02-CV-8000 (N.D. Ohio 2004); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1177 (E.D. Wash. 2009); Doe v. Northwestern Mutual Life Ins. Co., (D. S.C. 2012); In re Chantix (Varenicline) Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1286, 1288, 1290 (N.D. Ala. 2012); Farmer v. Air & Liquid Systems Corp. at n.11 (M.D. Ga. 2018); In re Abilify Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1306 (N.D. Fla. 2018).

[21] Michael D. Green, D. Michal Freedman & Leon Gordis, “Reference Guide on Epidemiology,” 549, 599 n.143, in Federal Judicial Center, National Research Council, Reference Manual on Scientific Evidence (3d ed. 2011).

Confounded by Confounding in Unexpected Places

December 12th, 2021

In assessing an association for causality, the starting point is “an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance.”[1] In other words, before we even embark on consideration of Bradford Hill’s nine considerations, we should have ruled out chance, bias, and confounding as an explanation for the claimed association.[2]

Although confounding is sometimes considered as a type of systematic bias, its importance warrants its own category. Historically, courts have been rather careless in addressing confounding. The Supreme Court, in a case decided before Daubert and the statutory modifications to Rule 702, ignored the role of confounding in a multiple regression model used to support racial discrimination claims. In language that would be reprised many times to avoid and evade the epistemic demands of Rule 702, the Court held, in Bazemore, that the omission of variables in multiple regression models raises an issue that affects “the  analysis’ probativeness, not its admissibility.”[3]
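
To see why omitted variables matter, consider a minimal simulation, sketched here in Python with statsmodels and purely hypothetical data (the variable names and coefficients are illustrative, not drawn from Bazemore or from any actual study). When a lurking variable drives both the exposure and the outcome, a regression that omits it assigns the exposure an effect it does not have; putting the lurking variable back into the model makes the spurious effect collapse.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
n = 5_000

# Hypothetical data: a lurking variable drives both the "exposure" and the outcome.
confounder = rng.normal(size=n)
exposure = 0.7 * confounder + rng.normal(size=n)   # the exposure has no causal effect
outcome = 1.0 * confounder + rng.normal(size=n)    # the outcome depends only on the confounder

# Regression that omits the confounder: the exposure picks up a spurious "effect."
incomplete = sm.OLS(outcome, sm.add_constant(exposure)).fit()
print(incomplete.params)   # exposure coefficient comes out substantially positive

# Regression that includes the confounder: the spurious effect collapses toward zero.
complete = sm.OLS(outcome, sm.add_constant(np.column_stack([exposure, confounder]))).fit()
print(complete.params)
```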

When courts have not ignored confounding,[4] they have sidestepped its consideration by imparting magical abilities to confidence intervals to take care of the problems posed by lurking variables.[5]

The advent of the Reference Manual on Scientific Evidence allowed a ray of hope to shine on health effects litigation. Several important cases have been decided by judges who have taken note of the importance of assessing studies for confounding.[6] As a new, fourth edition of the Manual is being prepared, its editors and authors should not lose sight of the work that remains to be done.

The Third Edition of the Federal Judicial Center’s and the National Academies of Sciences, Engineering & Medicine’s Reference Manual on Scientific Evidence (RMSE3d 2011) addressed confounding in several chapters, not always consistently. The chapter on statistics defined “confounder” in terms of a variable’s correlation with both the independent and the dependent variable:

“[a] confounder is correlated with the independent variable and the dependent variable. An association between the dependent and independent variables in an observational study may not be causal, but may instead be due to confounding”[7]

The chapter on epidemiology, on the other hand, defined a confounder as a risk factor for the disease that is also associated with the exposure of interest:

“A factor that is both a risk factor for the disease and a factor associated with the exposure of interest. Confounding refers to a situation in which an association between an exposure and outcome is all or partly the result of a factor that affects the outcome but is unaffected by the exposure.”[8]

Unfortunately, the epidemiology chapter never defined “risk factor.” The term certainly seems much less neutral than a “correlated” variable, which lacks any suggestion of causality. Perhaps there is some implied help from the authors of the epidemiology chapter when they described a case of confounding by “known causal risk factors,” which suggests that some risk factors may not be causal.[9] To muck up the analysis, however, the epidemiology chapter went on to define “risk” as “[a] probability that an event will occur (e.g., that an individual will become ill or die within a stated period of time or by a certain age).”[10]

Both the statistics and the epidemiology chapters provide helpful examples of confounding and speak to the need for excluding confounding as the basis for an observed association. The statistics chapter, for instance, described confounding as a threat to “internal validity,”[11] and the need to inquire whether the adjustments in multivariate studies were “sensible and sufficient.”[12]

The epidemiology chapter in one passage instructed that when “an association is uncovered, further analysis should be conducted to assess whether the association is real or a result of sampling error, confounding, or bias.”[13] Elsewhere in the same chapter, the precatory becomes mandatory.[14]

Legally Unexplored Source of Substantial Confounding

As the Reference Manual implies, merely attempting to control for confounding is not enough; the controlling must be done carefully and sufficiently. Under the heading of sufficiency and due care, there are epidemiologic studies that purport to control for confounding, but fail rather dramatically. The use of administrative databases, whether based upon national healthcare or insurance claims, has become commonplace in chronic disease epidemiology. Their large size obviates many concerns about power to detect rare disease outcomes. Unfortunately, there is often a significant threat to the validity of such studies, which are based upon data sets that characterize patients as diabetic, hypertensive, obese, or smokers, vel non. By dichotomizing what are really continuous variables, these data sets exact a significant price in the multivariate models used in epidemiology.

Of course, physicians frequently create guidelines for normal versus abnormal, and these divisions or categories show up in medical records, in databases, and ultimately in epidemiologic studies. The actual measurements are not always available, and the use of a categorical variable may appear to simplify the statistical analysis of the dataset. Unfortunately, the results can be quite misleading. Consider the measurement of blood pressure in a study that is evaluating whether an exposure variable (such as medication use or an environmental contaminant) is associated with an outcome such as cardiovascular or renal disease. Hypertension, if present, would clearly be a confounder, but the use of a categorical variable for hypertension would greatly undermine the validity of the study. If many of the study participants with hypertension had their condition well controlled by medication, then the categorical variable would dilute the adjustment for the role of hypertension in driving the association between the exposure and outcome variables of interest. Even if none of the hypertensive patients had good control, the reduction of all hypertension to a category, rather than a continuous measurement, is a path to the loss of information and the creation of bias.

Almost 40 years ago, Jacob Cohen showed that dichotomization of continuous variables results in a loss of power.[15] Twenty years later, Peter Austin showed in a Monte Carlo simulation that categorizing a continuous variable in a logistic regression inflates the rate of finding false positive associations.[16] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. Of course, the national databases often have huge sample sizes, which only serves to increase the bias from the use of categorical variables for confounding variables.
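
A rough simulation in the spirit of Austin’s (sketched here in Python with statsmodels; the sample size, coefficients, and median cut point are invented for illustration) shows the mechanism: when the only adjustment for a continuous confounder is an above-or-below-the-median category, a truly null exposure is declared “statistically significant” far more often than the nominal five percent of the time.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=7)

def false_positive_rate(n=2_000, sims=200, dichotomize=True):
    """Share of simulated studies in which a truly null exposure looks 'significant' (p < .05)."""
    hits = 0
    for _ in range(sims):
        z = rng.normal(size=n)                              # continuous confounder (think: blood pressure)
        x = 0.6 * z + rng.normal(size=n)                    # exposure correlated with the confounder
        y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * z)))     # outcome driven only by the confounder
        z_adj = (z > np.median(z)).astype(float) if dichotomize else z
        model = sm.Logit(y, sm.add_constant(np.column_stack([x, z_adj]))).fit(disp=0)
        if model.pvalues[1] < 0.05:                         # p-value for the (truly null) exposure term
            hits += 1
    return hits / sims

print("adjusting with a dichotomized confounder:", false_positive_rate(dichotomize=True))
print("adjusting with the continuous confounder:", false_positive_rate(dichotomize=False))
# The first rate runs well above the nominal 0.05; the second stays near it.
```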

The late Douglas Altman, who did so much to steer the medical literature toward greater validity, warned that dichotomizing continuous variables was known to cause loss of information, statistical power, and reliability in medical research.[17]

In the field of pharmaco-epidemiology, the bias created by dichotomization of a continuous variable is harmful from the perspectives of both statistical estimation and hypothesis testing.[18] While readers are misled into believing that the study adjusts for important covariates, the study will have lost information and power, with the result of presenting false-positive results that have the false allure of a fully adjusted model. Indeed, this bias from inadequate control of confounding infects several pending pharmaceutical multi-district litigations.


Supreme Court

General Electric Co. v. Joiner, 522 U.S. 136, 145-46 (1997) (holding that an expert witness’s reliance on a study was misplaced when the subjects of the study “had been exposed to numerous potential carcinogens”)

First Circuit

Bricklayers & Trowel Trades Internat’l Pension Fund v. Credit Suisse Securities (USA) LLC, 752 F.3d 82, 89 (1st Cir. 2014) (affirming exclusion of expert witness who failed to account for confounding in event studies), aff’g 853 F. Supp. 2d 181, 188 (D. Mass. 2012)

Second Circuit

Wills v. Amerada Hess Corp., 379 F.3d 32, 50 (2d Cir. 2004) (holding expert witness’s specific causation opinion that plaintiff’s squamous cell carcinoma had been caused by polycyclic aromatic hydrocarbons was unreliable, when plaintiff had smoked and drunk alcohol)

Deutsch v. Novartis Pharms. Corp., 768 F.Supp. 2d 420, 432 (E.D.N.Y. 2011) (“When assessing the reliability of a epidemiologic study, a court must consider whether the study adequately accounted for ‘confounding factors.’”)

Schwab v. Philip Morris USA, Inc., 449 F. Supp. 2d 992, 1199–1200 (E.D.N.Y. 2006), rev’d on other grounds, 522 F.3d 215 (2d Cir. 2008) (describing confounding in studies of low-tar cigarettes, where authors failed to account for confounding by failing to assess healthier life styles in users)

Third Circuit

In re Zoloft Prods. Liab. Litig., 858 F.3d 787, 793 (3d Cir. 2017) (affirming exclusion of causation expert witness)

Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 591 (D.N.J. 2002), aff’d, 68 Fed. Appx. 356 (3d Cir. 2003)(bias, confounding, and chance must be ruled out before an association  may be accepted as showing a causal association)

Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434 (W.D.Pa. 2003) (excluding expert witnesses in Parlodel case; noting that causality assessments and case reports fail to account for confounding)

Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441 (D.V.I. 1994) (unanswered questions about confounding required summary judgment  against plaintiff in Primatene Mist birth defects case)

Fifth Circuit

Knight v. Kirby Inland Marine, Inc., 482 F.3d 347, 353 (5th Cir. 2007) (affirming exclusion of expert witnesses) (“Of all the organic solvents the study controlled for, it could not determine which led to an increased risk of cancer …. The study does not provide a reliable basis for the opinion that the types of chemicals appellants were exposed to could cause their particular injuries in the general population.”)

Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953, *7 (E.D. La. June 16, 2015) (excluding expert witness causation opinion that failed to account for other confounding exposures that could have accounted for the putative association), aff’d, 650 F. App’x 170 (5th Cir. 2016)

LeBlanc v. Chevron USA, Inc., 513 F. Supp. 2d 641, 648-50 (E.D. La. 2007) (excluding expert witness testimony that purported to show causality between plaintiff’s benzene exposure and myelofibrosis), vacated, 275 Fed. App’x 319 (5th Cir. 2008) (remanding case for consideration of new government report on health effects of benzene)

Castellow v. Chevron USA, 97 F. Supp. 2d 780 (S.D. Tex. 2000) (discussing confounding in passing; excluding expert witness causation opinion in gasoline exposure AML case)

Kelley v. American Heyer-Schulte Corp., 957 F. Supp. 873 (W.D. Tex. 1997) (confounding in breast implant studies)

Sixth Circuit

Pluck v. BP Oil Pipeline Co., 640 F.3d 671 (6th Cir. 2011) (affirming exclusion of specific causation opinion that failed to rule out confounding factors)

Nelson v. Tennessee Gas Pipeline Co., 243 F.3d 244, 252-54 (6th Cir. 2001) (expert witness’s failure to account for confounding factors in cohort study of alleged PCB exposures rendered his opinion unreliable)

Turpin v. Merrell Dow Pharms., Inc., 959 F. 2d 1349, 1355 -57 (6th Cir. 1992) (discussing failure of some studies to evaluate confounding)

Adams v. Cooper Indus. Inc., 2007 WL 2219212, 2007 U.S. Dist. LEXIS 55131 (E.D. Ky. 2007) (differential diagnosis includes ruling out confounding causes of plaintiffs’ disease).

Seventh Circuit

People Who Care v. Rockford Bd. of Educ., 111 F.3d 528, 537–38 (7th Cir. 1997) (noting importance of considering role of confounding variables in educational achievement);

Caraker v. Sandoz Pharms. Corp., 188 F. Supp. 2d 1026, 1032, 1036 (S.D. Ill 2001) (noting that “the number of dechallenge/rechallenge reports is too scant to reliably screen out other causes or confounders”)

Eighth Circuit

Penney v. Praxair, Inc., 116 F.3d 330, 333-334 (8th Cir. 1997) (affirming exclusion of expert witness who failed to account for the confounding effects of age, medications, and medical history in interpreting PET scans)

Marmo v. Tyson Fresh Meats, Inc., 457 F.3d 748, 758 (8th Cir. 2006) (affirming exclusion of specific causation expert witness opinion)

Ninth Circuit

Coleman v. Quaker Oats Co., 232 F.3d 1271, 1283 (9th Cir. 2000) (p-value of “3 in 100 billion” was not probative of age discrimination when “Quaker never contend[ed] that the disparity occurred by chance, just that it did not occur for discriminatory reasons. When other pertinent variables were factored in, the statistical disparity diminished and finally disappeared.”)

In re Viagra & Cialis Prods. Liab. Litig., 424 F.Supp. 3d 781 (N.D. Cal. 2020) (excluding causation opinion on grounds including failure to account properly for confounding)

Avila v. Willits Envt’l Remediation Trust, 2009 WL 1813125, 2009 U.S. Dist. LEXIS 67981 (N.D. Cal. 2009) (excluding expert witness opinion that failed to rule out confounding factors of other sources of exposure or other causes of disease), aff’d in relevant part, 633 F.3d 828 (9th Cir. 2011)

In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp.2d 1230 (W.D.Wash. 2003) (ignoring study validity in a litigation arising almost exclusively from a single observational study that had multiple internal and external validity problems; relegating assessment of confounding to cross-examination)

In re Bextra and Celebrex Marketing Sales Practice, 524 F. Supp. 2d 1166, 1172 – 73 (N.D. Calif. 2007) (discussing invalidity caused by confounding in epidemiologic studies)

In re Silicone Gel Breast Implants Products Liab. Lit., 318 F.Supp. 2d 879, 893 (C.D.Cal. 2004) (observing that controlling for potential confounding variables is required, among other findings, before accepting epidemiologic studies as demonstrating causation).

Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142 (E.D. Wash. 2009) (noting that confounding must be ruled out)

Valentine v. Pioneer Chlor Alkali Co., Inc., 921 F. Supp. 666 (D. Nev. 1996) (excluding plaintiffs’ expert witnesses, including Dr. Kilburn, for reliance upon study that failed to control for confounding)

Tenth Circuit

Hollander v. Sandoz Pharms. Corp., 289 F.3d 1193, 1213 (10th Cir. 2002) (noting importance of accounting for confounding variables in causation of stroke)

In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1233 (D. Colo. 1998) (alternative explanations, such as confounding, should be ruled out before accepting causal claims).

Eleventh Circuit

In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F.Supp. 3d 1291 (N.D.Fla. 2018) (discussing confounding in studies but credulously accepting challenged explanations from David Madigan) (citing Bazemore, a pre-Daubert, decision that did not address a Rule 702 challenge to opinion testimony)

District of Columbia Circuit

American Farm Bureau Fed’n v. EPA, 559 F.3d 512 (D.C. Cir. 2009) (noting that data relied upon in setting particulate matter standards addressing visibility should avoid the confounding effects of humidity)

STATES

Delaware

In re Asbestos Litig., 911 A.2d 1176 (New Castle Cty., Del. Super. 2006) (discussing confounding; denying motion to exclude plaintiffs’ expert witnesses’ chrysotile causation opinions)

Minnesota

Goeb v. Tharaldson, 615 N.W.2d 800, 808, 815 (Minn. 2000) (affirming exclusion of Drs. Janette Sherman and Kaye Kilburn, in Dursban case, in part because of expert witnesses’ failures to consider confounding adequately).

New Jersey

In re Accutane Litig., 234 N.J. 340, 191 A.3d 560 (2018) (affirming exclusion of plaintiffs’ expert witnesses’ causation opinions; deprecating reliance upon studies not controlled for confounding)

In re Proportionality Review Project (II), 757 A.2d 168 (N.J. 2000) (noting the importance of assessing the role of confounders in capital sentences)

Grassis v. Johns-Manville Corp., 591 A.2d 671, 675 (N.J. Super. Ct. App. Div. 1991) (discussing the possibility that confounders may lead to an erroneous inference of a causal relationship)

Pennsylvania

Porter v. SmithKline Beecham Corp., No. 3516 EDA 2015, 2017 WL 1902905 (Pa. Super. May 8, 2017) (affirming exclusion of expert witness causation opinions in Zoloft birth defects case; discussing the importance of excluding confounding)

Tennessee

McDaniel v. CSX Transportation, Inc., 955 S.W.2d 257 (Tenn. 1997) (affirming trial court’s refusal to exclude expert witness opinion that failed to account for confounding)


[1] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965) (emphasis added).

[2] See, e.g., David A. Grimes & Kenneth F. Schulz, “Bias and Causal Associations in Observational Research,” 359 The Lancet 248 (2002).

[3] Bazemore v. Friday, 478 U.S. 385, 400 (1986) (reversing Court of Appeal’s decision that would have disallowed a multiple regression analysis that omitted important variables). Buried in a footnote, the Court did note, however, that “[t]here may, of course, be some regressions so incomplete as to be inadmissible as irrelevant; but such was clearly not the case here.” Id. at 400 n.10. What the Court missed, of course, is that the regression may be so incomplete as to be unreliable or invalid. The invalidity of the regression in Bazemore does not appear to have been raised as an evidentiary issue under Rule 702. None of the briefs in the Supreme Court or the judicial opinions cited or discussed Rule 702.

[4] “Confounding in the Courts” (Nov. 2, 2018).

[5] See, e.g., Brock v. Merrill Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989) (“Fortunately, we do not have to resolve any of the above questions [as to bias and confounding], since the studies presented to us incorporate the possibility of these factors by the use of a confidence interval.”). This howler has been widely acknowledged in the scholarly literature. See David Kaye, David Bernstein, and Jennifer Mnookin, The New Wigmore – A Treatise on Evidence: Expert Evidence § 12.6.4, at 546 (2d ed. 2011); Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 86-87 (2009) (criticizing the blatantly incorrect interpretation of confidence intervals by the Brock court).

[6] “On Praising Judicial Decisions – In re Viagra” (Feb. 8, 2021); see “Ruling Out Bias and Confounding Is Necessary to Evaluate Expert Witness Causation Opinions” (Oct. 28, 2018); “Rule 702 Requires Courts to Sort Out Confounding” (Oct. 31, 2018).

[7] David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in RMSE3d 211, 285 (3d ed. 2011).

[8] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 621.

[9] Id. at 592.

[10] Id. at 627.

[11] Id. at 221.

[12] Id. at 222.

[13] Id. at 567-68 (emphasis added).

[14] Id. at 572 (describing chance, bias, and confounding, and noting that “[b]efore any inferences about causation are drawn from a study, the possibility of these phenomena must be examined”); id. at 511 n.22 (observing that “[c]onfounding factors must be carefully addressed”).

[15] Jacob Cohen, “The cost of dichotomization,” 7 Applied Psychol. Measurement 249 (1983).

[16] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[17] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M Cumberland, Gabriela Czanner, Catey Bunce, Caroline J Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014).

[18] Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009).

On Praising Judicial Decisions – In re Viagra

February 8th, 2021

We live in strange times. A virulent form of tribal stupidity gave us Trumpism, a personality cult in which it is impossible to function in the Republican party and criticize der Führer. Even a diehard right-winger such as Liz Cheney, who dared to criticize Trump, was censured, for nothing more than being disloyal to a cretin who fomented an insurrection that resulted in the murder of a Capitol police officer and the deaths of several other people.[1]

Unfortunately, a similar, even if less extreme, tribal chauvinism affects legal commentary, from both sides of the courtroom. When Judge Richard Seeborg issued an opinion, early in 2020, in the melanoma – phosphodiesterase type 5 inhibitor (PDE5i) litigation,[2] I praised the decision for not shirking the gatekeeping responsibility even when the causal claim was based upon multiple, consistent, statistically significant observational studies that showed an association between PDE5i medications and melanoma.[3] Although many of the plaintiffs’ relied-upon studies reported statistically significant associations between PDE5i use and melanoma occurrence, they also found similar-sized associations with non-melanoma skin cancers. Because skin carcinomas were not part of the hypothesized causal mechanism, the study findings strongly suggested a common, unmeasured confounding variable such as skin damage from ultraviolet light. The plaintiffs’ expert witnesses’ failure to account for confounding was fatal under Rule 702, and Judge Seeborg’s recognition of this defect, and his willingness to go beyond multiple, consistent, statistically significant associations, was what made the decision important.

There were, however, problems and even a blatant error in the decision that required attention. Although the error was harmless in that its correction would not have required, or even suggested, a different result, Judge Seeborg, like many other judges and lawyers, tripped up over the proper interpretation of a confidence interval:

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”[4]

This statement about the true value is simply wrong. The provenance of this error is old, but the mistake was unfortunately amplified in the Third Edition of the Reference Manual on Scientific Evidence,[5] in its chapter on epidemiology.[6] The chapter, which is often cited, twice misstates the meaning of a confidence interval:

“A confidence interval provides both the relative risk (or other risk measure) found in the study and a range (interval) within which the risk likely would fall if the study were repeated numerous times.”[7]

and

“A confidence interval is a range of possible values calculated from the results of a study. If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the same population. Thus, the width of the interval reflects random error.”[8]

The 95% confidence interval does represent random error, 1.96 standard errors above and below the point estimate from the sample data. The confidence interval is not the range of possible values, which could well be anything, but the range of estimates reasonably compatible with this one particular study’s sample statistic.[9] Intervals have lower and upper bounds, which are themselves random variables, with approximately normal (or some other specified) distributions. The essence of the interval is that no value within the interval would be rejected as a null hypothesis based upon the data collected for the particular sample. Although the chapter on statistics in the Reference Manual accurately describes confidence intervals, judges and many lawyers are misled by the misstatements in the epidemiology chapter.[10]
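
The correct, frequentist reading is easy to demonstrate by simulation. The sketch below (in Python, with made-up parameters) draws repeated samples from a known population and computes a conventional normal-approximation 95% interval from each; roughly 95 percent of the intervals cover the true mean, which is the sense in which the “95%” describes the procedure rather than any single realized interval.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
true_mean, sigma, n, runs = 10.0, 2.0, 50, 10_000

covered = 0
for _ in range(runs):
    sample = rng.normal(true_mean, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lo <= true_mean <= hi)

# The "95%" describes the long-run performance of the interval-generating procedure,
# not a 95% probability that any single realized interval contains the true value.
print(f"share of intervals covering the true mean: {covered / runs:.3f}")   # roughly 0.95
```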

Given the misdirection created by the Federal Judicial Center’s manual, Judge Seeborg’s erroneous definition of a confidence interval is understandable, but it should be noted in the context of praising the important gatekeeping decision in In re Viagra. Certainly our litigation tribalism should not “allow us to believe” impossible things.[11] The time to revise the Reference Manual is long overdue.

_____________________________________________________________________

[1]  John Ruwitch, “Wyoming GOP Censures Liz Cheney For Voting To Impeach Trump,” Nat’l Pub. Radio (Feb. 6, 2021).

[2]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., 424 F. Supp. 3d 781 (N.D. Cal. 2020) [Viagra].

[3]  See “Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma” (Jan. 24, 2020).

[4]  Id. at 787.

[5]  Federal Judicial Center, Reference Manual on Scientific Evidence (3rd ed. 2011).

[6]  Michael D. Green, D. Michal Freedman, & Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549 (3rd ed. 2011).

[7]  Id. at 573.

[8]  Id. at 580.

[9] Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers 171, 173-74 (3rd ed. 2015). See also Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

[10]  See, e.g., Derek C. Smith, Jeremy S. Goldkind, and William R. Andrichik, “Statistically Significant Association: Preventing the Misuse of the Bradford Hill Criteria to Prove Causation in Toxic Tort Cases,” 86 Defense Counsel J. 1 (2020) (mischaracterizing the meaning of confidence intervals based upon the epidemiology chapter in the Reference Manual).

[11]  See, e.g., James Beck, “Tort Pandemic Countermeasures? The Ten Best Prescription Drug/Medical Device Decisions of 2020,” Drug and Device Law Blog (Dec. 30, 2020) (suggesting that Judge Seeborg’s decision represented the rejection of plausibility and a single “association” as insufficient); Steven Boranian, “General Causation Experts Excluded In Viagra/Cialis MDL,” (Jan. 23, 2020).

Lawsuit Industry Advertising Indirectly Stimulates Adverse Event Reporting

February 4th, 2021

The lawsuit industry spends many millions of dollars each year to persuade people that they are ill from the medications they use, and that lawsuit industry lawyers will enrich them for their woes. But does the lawyer advertising stimulate the reporting of adverse events by consumers’ filing of MedWatch reports in the Federal Adverse Event Reporting System (FAERS)?

The question is of some significance. Adverse event reporting is a recognized, important component of pharmacovigilance. Regulatory agencies around the world look to an increased rate of reporting of a specific adverse event as a potential signal that there may be an underlying association between medication use and the reported harm. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication – adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]
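
For readers unfamiliar with the arithmetic, the core of a disproportionality analysis is a two-by-two table that compares the drug–event pair of interest against everything else in the reporting database. A minimal sketch, with invented counts and the standard formulas for the proportional reporting ratio and the reporting odds ratio, shows how modest the computation is, and why its output can be no more than a screening “signal.”

```python
import math

# Invented counts from a hypothetical spontaneous-reporting database:
#                         event of interest    all other events
#   drug of interest            a = 40              b = 1,960
#   all other drugs             c = 900             d = 97,100
a, b, c, d = 40, 1_960, 900, 97_100

prr = (a / (a + b)) / (c / (c + d))        # proportional reporting ratio
ror = (a * d) / (b * c)                    # reporting odds ratio
se_log_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci = (math.exp(math.log(ror) - 1.96 * se_log_ror),
      math.exp(math.log(ror) + 1.96 * se_log_ror))

print(f"PRR = {prr:.2f}, ROR = {ror:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
# A PRR or ROR well above 1.0 is, at most, a screening "signal" for further study;
# nothing in the arithmetic speaks to causation.
```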

One of the vexing misuses of pharmacovigilance techniques in pharmaceutical products litigation is the use of adverse event reporting, either as case reports or in the form of disproportionality analyses, to support claims of causal inference. In some litigations, lawsuit industry lawyers have argued that case reports in the FAERS, standing alone, support their claims of causation.[2] Desperate to make their case through anecdotes, plaintiffs’ counsel will sometimes retreat to the claim that they want to introduce the MedWatch reports in support of a lesser claim that the reports put the defendant on “notice.” Typically, the notice argument leaves open exactly what the content of the notice is, but the clear intent is to argue notice that (1) there is an increased risk, and (2) the defendant was aware of the increased risk.[3]

Standard textbooks on pharmacovigilance and pharmacoepidemiology, as well as regulatory agency guidance, emphatically reject the use of FAERS anecdotes, or their transmogrification into disproportionality analyses (DPAs), to support causal claims. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, elaborates on DPAs as an example of data mining, and instructs us that:

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[4]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[5] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[6] Because of the uncertainties in DPAs, the FDA urges “extreme caution” in comparing reporting rates, and generally considers DPA and similar analyses as “exploratory or hypothesis-generating.”[7]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[8]

Because the lawsuit industry frequently relies upon and over-endorses DPAs in its pharmaceutical litigations, inquiring minds may want to know whether the industry itself is stimulating reporting of adverse events through its media advertising.

Recently, two investigators published a study that attempted to look at whether lawsuit industry advertising was associated with stimulation of adverse event reporting in the FAERS.[9] Tippett and Chen conducted a multivariate regression analysis of FAERS reporting, with independent variables of Google searches, attorney advertising, and FDA actions that would affect reporting, over the course of a single year (mid-2015 to mid-2016). The authors analyzed 412,901 adverse event reports to FAERS, involving 28 groups of drugs that were the subject of solicitous advertising.

The authors reported that they found associations (statistically significant, p < 0.05) for regression coefficients for FDA safety actions and Google searches, but not for attorney advertising. Using lag periods of one, two, three, and four weeks, or one or two months, between FAERS reporting and the variables did not result in statistically significant coefficients for lawyer advertising.
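
In outline, the authors’ design amounts to regressing weekly report counts on lagged versions of the candidate drivers. The sketch below (Python, with wholly fabricated weekly series; the variable names are mine, not the authors’) is only meant to show where the lag assumption enters the model, because that assumption turns out to be the study’s weak point, as discussed below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(seed=3)
weeks = 52

# Wholly fabricated weekly series standing in for the authors' data.
df = pd.DataFrame({
    "faers_reports": rng.poisson(100, size=weeks),
    "google_searches": rng.normal(50, 10, size=weeks),
    "attorney_ads": rng.normal(20, 5, size=weeks),
    "fda_action": rng.binomial(1, 0.1, size=weeks),
})

lag = 4   # weeks between the candidate driver and reporting; the authors tried 1-4 weeks and 1-2 months
X = df[["google_searches", "attorney_ads", "fda_action"]].shift(lag).dropna()
y = df["faers_reports"].iloc[lag:]

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)     # a coefficient for each lagged predictor
print(fit.pvalues)    # and the p-value used to call it "significant" or not
```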

The authors variably described their finding as “preliminarily” supporting a claim that FAERS reporting is not stimulated by “direct attorney submission or drug injury advertising,” or as failing to find “a statistically significant relationship between drug injury advertising and adverse event reports.”[10] The authors claim that their analyses show that litigation advertisements “do not appear to have spurred patients, providers, attorneys, or other individuals to file a FAERS report, as shown in our regression and graphical results.”[11]

There are substantial problems with this study. For most of the 28 drugs and drug groups studied, attorneys made up a very small proportion of all submitters of adverse event reports. The authors present no pre-study power analysis for this aspect of their study. The authors do not tell us how many analyses they have done before the one presented in this journal article, but they do acknowledge having done “exploratory analyses.” Contrary to the 2016 guidance of the American Statistical Association,[12] they present no actual p-values, and they provide no confidence or prediction intervals for their coefficients. The study did not include local television advertising, and so the reported statistical non-significance of attorney advertising must be qualified to show the limitations of the authors’ data.

Perhaps the most serious problem with this observational study of attorney advertising and stimulated reporting is the way in which the authors framed their hypothesis. Advertising stimulates people to call the toll-free number to learn more about how they too may hit the litigation jackpot. The point of attorney advertising is to persuade people to become legal clients, not to file MedWatch forms. In the weeks and months that follow, paralegals interview the callers and collect information, and only then are FAERS reports filed. Lag times of one to four weeks are generally irrelevant, as is the hypothesis studied and reported upon in this article.

After decades of working in this area, I have never seen an advertisement that encourages filing a MedWatch report, and the authors do not suggest otherwise. Advertising is only the initial part of a client intake mechanism that would result in the viewers’ making a telephone call, with a subsequent interview by law firm personnel, a review of the putative claim, and the viewers’ obtaining and signing retainer agreements and authorizations to obtain medical records. The scope of the study, which looked at FAERS filings and attorney advertisements after short lag periods, could not detect an association given how long recruitment takes.

The authors speculate, without evidence, that the lawsuit industry may discourage their clients from filing MedWatch reports and that the industry lawyers may hesitate to file the reports to avoid serving as a fact witness in their client’s case.[13] Indeed, the authors themselves adduce compelling evidence to the contrary, in the context of the multidistrict litigation over claimed harms from the use of testosterone therapies.

In their aggregate analysis of the 28 drugs and drug groups, the authors found that the lawsuit industry submitted only six percent of MedWatch reports. This low percentage would have been much lower yet but for the very high proportion (68%) of lawyer-submitted reports concerning the use of testosterone. The litigation-driven filings lagged the relevant attorney advertising by about six months, which should have caused the authors to re-evaluate their conclusions and their observational design that looked for correlations within one or two months. The testosterone data shows rather clearly that attorney advertising leads to recruitment of clients, which in turn leads to the filing of litigation-driven adverse event reports.

As the authors explain, attorney advertising and trolling for clients occurred in the summer of 2015, but FAERS reporting did not increase until an extreme burst of filings took place several months later. The authors’ graph tells the story even better.

So the correct conclusion is that attorney advertising stimulates client recruitment, which results in mass filings of MedWatch reports.

___________________________________________________________________

[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004). See also “Disproportionality Analyses Misused by Lawsuit Industry” (Apr. 20, 2020).

[2]  See, e.g., Fred S. Longer, “The Federal Judiciary’s Super Magnet,” 45 Trial 18, 18 (July 2009) (arguing that “adverse events . . . established a causal association between Piccolomal and liver disease at statistically significant levels”).

[3]  See, e.g., Paul D. Rheingold, “Drug Products Liability and Malpractice Cases,” 17 Am. Jur. 1, Trials, Cumulative Supplement (1970 & Supp. 2019) (“Adverse event reports (AERs) created by manufacturers when users of their over-the-counter pain reliever experienced adverse events or problems, were admissible to show notice” of the elevated risk.).

[4]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[5]  Id. at 9.

[6]  Id.

[7]  Id. at 11.

[8] EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added). See also Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom, Stephen E. Kimmel & Sean Hennessy, eds., Pharmacoepidemiology 185 (6th ed. 2020).

[9]  Elizabeth C. Tippett & Brian K. Chen, “Does Attorney Advertising Stimulate Adverse Event Reporting?” 74 Food & Drug Law J. 501 (2020) [Tippett].

[10]  Id. at 502.

[11]  Id.

[12]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016).

[13]  Tippett at 591.

Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma

January 24th, 2020

The phosphodiesterase type 5 inhibitor (PDE5i) medications seem to arouse the litigation propensities of the lawsuit industry. The PDE5i medications (sildenafil, tadalafil, etc.) have multiple indications, but they are perhaps best known for their ability to induce penile erections, which in some situations can be a very useful outcome.

The launch of Viagra in 1998 was followed by litigation that claimed the drug caused heart attacks, and not the romantic kind. The only broken hearts, however, were those of the plaintiffs’ lawyers and their expert witnesses who saw their litigation claims excluded and dismissed.[1]

Then came claims that the PDE5i medications caused non-arteritic anterior ischemic optic neuropathy (“NAION”), based upon a dubious epidemiologic study by Dr. Gerald McGwin. This litigation demonstrated, if anything, that while love may be blind, erections need not be.[2] The NAION cases were consolidated in a multi-district litigation (MDL) in front of Judge Paul Magnuson, in the District of Minnesota. After considerable back and forth, Judge Magnuson ultimately concluded that the McGwin study was untrustworthy, and the NAION claims were dismissed.[3]

In 2014, the American Medical Association’s internal medicine journal published an observational epidemiologic study of sildenafil (Viagra) use and melanoma.[4] The authors of the study interpreted their study modestly, concluding:

“[s]ildenafil use may be associated with an increased risk of developing melanoma. Although this study is insufficient to alter clinical recommendations, we support a need for continued investigation of this association.”

Although the Li study eschewed causal conclusions and new clinical recommendations in view of the need for more research into the issue, the litigation industry filed lawsuits, claiming causality.[5]

In the new natural order of things, as soon as the litigation industry cranks out more than a few complaints, an MDL results, and the PDE5i – melanoma claims were no exception. By spring 2016, plaintiffs’ counsel had collected ten cases, a minyan, sufficient for an MDL.[6] The MDL plaintiffs named the manufacturers of sildenafil and tadalafil, two of the more widely prescribed PDE5i medications, on behalf of putative victims.

While the MDL cases were winding their way through discovery and possible trials, additional studies and meta-analyses were published. None of the subsequent studies, including the systematic reviews and meta-analyses, concluded that there was a causal association. Most scientists who were publishing on the issue opined that systematic error (generally confounding) prevented a causal interpretation of the data.[7]

Many of the observational studies found statistically significant increases in relative risk of about 1.1 to 1.2 (10 to 20 percent), typically with the upper bounds of their 95% confidence intervals below 2.0. The only scientists who inferred general causation from the available evidence were those who had been recruited and retained by plaintiffs’ counsel. As plaintiffs’ expert witnesses, they contended that the Li study, and the several studies that became available afterwards, collectively showed that PDE5i drugs cause melanoma in humans.
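For readers who want to see where such numbers come from, here is a minimal sketch, in Python, of how a risk ratio and its 95 percent confidence interval are computed from a two-by-two table, using the standard log-risk-ratio (Katz) method. The counts are hypothetical, chosen only to yield a relative risk near 1.2 with an upper confidence bound below 2.0; they are not taken from the Li study or any other study in this litigation.

```python
# Hedged illustration: hypothetical counts, not data from any actual PDE5i study.
import math

def relative_risk_ci(a, n1, c, n0, z=1.96):
    """Risk ratio for exposed (a cases / n1 subjects) vs. unexposed (c / n0),
    with a Wald 95% confidence interval computed on the log scale (Katz method)."""
    rr = (a / n1) / (c / n0)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical: 120 melanoma cases among 20,000 users; 100 among 20,000 non-users.
rr, lo, hi = relative_risk_ci(120, 20_000, 100, 20_000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # about 1.20, CI roughly 0.92 to 1.56
```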

Not surprisingly, given the absence of any non-litigation experts endorsing the causal conclusion, the defendants challenged plaintiffs’ proffered expert witnesses under Federal Rule of Evidence 702. Plaintiffs’ counsel also embraced judicial gatekeeping and challenged the defense experts. The MDL trial judge, the Hon. Richard Seeborg, held hearings with four days of viva voce testimony from four of plaintiffs’ expert witnesses (two on biological plausibility, and two on epidemiology), and three of the defense’s experts. Last week, Judge Seeborg ruled by granting in part, and denying in part, the parties’ motions.[8]

The Decision

The MDL trial judge’s opinion is noteworthy in many respects. First, Judge Richard Seeborg cited and applied Rule 702, a statute, and not dicta from case law that predates the most recent statutory version of the rule. As a legal process matter, this respect for judicial process and the difference in legal authority between statutory and common law was refreshing. Second, the judge framed the Rule 702 issue, in line with the statute and Ninth Circuit precedent, as an inquiry into whether expert witnesses deviated from the standard of care of how scientists “conduct their research and reach their conclusions.”[9]

Biological Plausibility

Plaintiffs proffered three expert witnesses on biological plausibility, Drs. Rizwan Haq, Anand Ganesan, and Gary Piazza. All were subject to motions to exclude under Rule 702. Judge Seeborg denied the defense motions against all three of plaintiffs’ plausibility witnesses.[10]

The MDL judge determined that biological plausibility is neither necessary nor sufficient for inferring causation in science or in the law. The defense argued that the plausibility witnesses relied upon animal and cell culture studies that were unrealistic models of the human experience.[11] The MDL court, however, found that the standard for opinions on biological plausibility is relatively forgiving, and that the testimony of all three of plaintiffs’ proffered witnesses was admissible.

The subjective nature of opinions about biological plausibility is widely recognized in medical science.[12] Plausibility determinations are typically “Just So” stories, offered in the absence of hard evidence that postulated mechanisms are actually involved in a real causal pathway in human beings.

Causal Association

The real issue in the MDL hearings was the conclusion reached by plaintiffs’ expert witnesses that the PDE5i medications cause melanoma. The MDL court did not have to determine whether epidemiologic studies were necessary for such a causal conclusion. Plaintiffs’ counsel had proffered three expert witnesses with more or less expertise in epidemiology: Drs. Rehana Ahmed-Saucedo, Sonal Singh, and Feng Liu-Smith. All of plaintiffs’ epidemiology witnesses, and certainly all of defendants’ experts, implicitly if not explicitly embraced the proposition that analytical epidemiology was necessary to determine whether PDE5i medications can cause melanoma.

In their motions to exclude Ahmed-Saucedo, Singh, and Liu-Smith, the defense pointed out that, although many of the studies yielded statistically significant estimates of melanoma risk, none of the available studies adequately accounted for systematic bias in the form of confounding. Although the plaintiffs’ plausibility expert witnesses advanced “Just-So” stories about PDE5i and melanoma, the available studies showed an almost identical increased risk of basal cell carcinoma of the skin, which would be explained by confounding, but not by plaintiffs’ postulated mechanisms.[13]
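The force of the basal cell carcinoma point is easier to see with a toy simulation. The sketch below uses parameters of my own invention, not anything drawn from the record or the published studies; it shows how a single confounder (think sun exposure or frequent dermatology visits) that raises both the chance of receiving a PDE5i prescription and the chance of a skin-cancer diagnosis can generate nearly identical crude risk ratios for melanoma and for basal cell carcinoma, even though the drug causes neither.

```python
# Hedged illustration: invented parameters; the drug has no causal effect on either outcome.
import random

random.seed(1)
N = 1_000_000
users = {"melanoma": [0, 0], "basal cell": [0, 0]}     # [cases, subjects] among drug users
nonusers = {"melanoma": [0, 0], "basal cell": [0, 0]}  # [cases, subjects] among non-users

for _ in range(N):
    c = random.random() < 0.30                          # confounder present in 30% of people
    drug = random.random() < (0.30 if c else 0.10)      # confounder triples the probability of exposure
    p_mel = 0.002 if c else 0.001                       # confounder doubles melanoma risk
    p_bcc = 0.010 if c else 0.005                       # ... and doubles basal cell carcinoma risk
    group = users if drug else nonusers
    for outcome, p in (("melanoma", p_mel), ("basal cell", p_bcc)):
        group[outcome][1] += 1
        if random.random() < p:
            group[outcome][0] += 1

for outcome in ("melanoma", "basal cell"):
    risk_u = users[outcome][0] / users[outcome][1]
    risk_n = nonusers[outcome][0] / nonusers[outcome][1]
    print(f"{outcome}: crude risk ratio = {risk_u / risk_n:.2f}")  # both come out near 1.25
```

In this toy model the confounder, not the drug, drives both elevations, which is why the parallel elevation for an outcome that the postulated mechanism cannot explain is a warning sign of confounding rather than causation.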

The MDL court acknowledged that whether epidemiologic studies “adequately considered” confounding was “central” to the Rule 702 inquiry. Without any substantial analysis, however, the court gave its own ipse dixit that the existence vel non of confounding was an issue for cross-examination and the jury’s resolution.[14] Whether there was a reasonably valid association between PDE5i and melanoma was a jury question. This judicial refusal to engage with the issue of confounding was one of the disappointing aspects of the decision.

The MDL court was less forgiving when it came to the plaintiffs’ epidemiology expert witnesses’ assessment of the association as causal. All the parties’ epidemiology witnesses invoked Sir Austin Bradford Hill’s viewpoints or factors for judging whether associations were causal.[15] Although they embraced Hill’s viewpoints on causation, the plaintiffs’ epidemiologic expert witnesses had a much more difficult time faithfully applying them to the evidence at hand. The MDL court concluded that the plaintiffs’ witnesses deviated from their own professional standard of care in their analysis of the data.[16]

Hill’s first enumerated factor was “strength of association,” which is typically expressed epidemiologically as a risk ratio or a risk difference. The MDL court noted that the extant epidemiologic studies generally showed relative risks around 1.2 for PDE5i and melanoma, which was “undeniably” not a strong association.[17]

The plaintiffs’ epidemiology witnesses were at sea on how to explain away the lack of strength in the putative association. Dr. Ahmed-Saucedo retreated into an emphasis on how all or most of the studies found some increased risk, but the MDL court correctly found that this ruse merely conflated strength with consistency of the observed associations. Her dismissal of a dose-response relationship, another Hill factor, as unimportant sealed her fate. The MDL court found that her Bradford Hill analysis was “unduly results-driven,” and that her proffered testimony was not admissible.[18] The MDL court found that Dr. Feng Liu-Smith similarly conflated strength of association with consistency, an error that was too great a deviation from the professional standard of care.[19]

Dr. Sonal Singh fared no better after he contradicted his own prior testimony that there is an order of importance to the Hill factors, with “strength of association” at or near the top. In the face of a set of studies, none of which showed a strong association, Dr. Singh abandoned his own interpretative principle to suit the litigation needs of the case. His analysis placed the greatest weight on the Li study, which had the highest risk ratio, but he failed to advance any persuasive reason for his emphasis on one of the smallest studies available. The MDL court found Dr. Singh’s claim to have weighed strength of association heavily, despite the obvious absence of strong associations, puzzling, and too great an analytical gap to abide.[20]

Judge Seeborg thus concluded that while the plaintiffs’ expert witnesses could opine that there was an association, which was arguably plausible, they could not, under Rule 702, contend that the association was causal. In attempting to argue that the association satisfied Bradford Hill’s factors for causality, the plaintiffs’ witnesses had ignored, misrepresented, or confused one of the most important factors, strength of the association, in a way that revealed their analyses to be results-driven and unfaithful to the methodology they claimed to have followed. Judge Seeborg emphasized a feature of the revised Rule 702, which often is ignored by his fellow federal judges:[21]

“Under the amendment, as under Daubert, when an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied. See Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 598 (9th Cir. 1996). The amendment specifically provides that the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case.”

Given that the plaintiffs’ witnesses purported to apply a generally accepted methodology, Judge Seeborg was left to question why they would conclude causality when no one else in their field had done so.[22] The epidemiologic issue had been around for several years, and had been addressed not just in observational studies, but also in systematic reviews and meta-analyses. The absence of published causal conclusions was not just an absence of evidence, but evidence of the absence of expert support for how plaintiffs’ expert witnesses applied the Bradford Hill factors.

Reliance Upon Studies That Did Not Conclude Causation Existed

Parties challenging causal claims will sometimes point to the absence of a causal conclusion in the publications of the individual epidemiologic studies that are the main basis for the causal claim. In the PDE5i-melanoma cases, the defense advanced this argument unsuccessfully. The MDL court rejected it because an individual study rarely offers a comprehensive review of all the pertinent evidence for or against causality; the study authors are mostly concerned with conveying the results of their own study.[23] The authors may include a short discussion of other study results as the rationale for their own study, but such discussions are often limited in scope and purpose. Judge Seeborg, in this latest round of PDE5i litigation, thus did not fault plaintiffs’ witnesses’ reliance upon epidemiologic or mechanistic studies that individually did not assert causal conclusions; rather, it was the absence of causal conclusions in systematic reviews, meta-analyses, narrative reviews, regulatory agency pronouncements, or clinical guidelines that ultimately raised the fatal inference that the plaintiffs’ witnesses were not faithfully deploying a generally accepted methodology.

The defense argument that pointed to the individual epidemiologic studies themselves derives some legal credibility from the Supreme Court’s opinion in General Electric Co. v. Joiner, 522 U.S. 136 (1997). In Joiner, the SCOTUS took plaintiffs’ expert witnesses to task for drawing stronger conclusions than were offered in the papers upon which they relied. Chief Justice Rehnquist gave considerable weight to the consideration that the plaintiffs’ expert witnesses relied upon studies, the authors of which explicitly refused to interpret as supporting a conclusion of human disease causation.[24]

Joiner’s criticisms of the reliance upon studies that do not themselves reach causal conclusions have gained a foothold in the case law interpreting Rule 702. The Fifth Circuit, for example, has declared:[25]

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

This aspect of Joiner may properly limit the over-interpretation or misinterpretation of an individual study, which seems fine.[26] The Joiner case may, however, perpetuate an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions.  The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ discussion section is that sometimes study authors grossly over-interpret their data.  When it comes to scientific studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), then the discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible in trials. In other words, the misuse of non-rigorous comments in published articles can cut both ways.

There have been, and will continue to be, occasions in which published studies contain data, relevant and important to the causation issue, but which studies also contain speculative, personal opinions expressed in the Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither side’s expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be a dispositive factor, other than raising questions for the reviewing court.

In exercising their gatekeeping function, trial judges should exercise care in how they assess expert witnesses’ reliance upon study data and analyses, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should, in some instances,  have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

Judge Seeborg sensibly seems to have distinguished between the absence of causal conclusions in individual epidemiologic studies and the absence of causal conclusions in any reputable medical literature.[27] He refused to be ensnared in the Joiner argument because:[28]

“Epidemiology studies typically only expressly address whether an association exists between agents such as sildenafil and tadalafil and outcomes like melanoma progression. As explained in In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1116 (N.D. Cal. 2018), ‘[w]hether the agents cause the outcomes, however, ordinarily cannot be proven by epidemiological studies alone; an evaluation of causation requires epidemiologists to exercise judgment about the import of those studies and to consider them in context’.”

This new MDL opinion, relying upon the Advisory Committee Notes to Rule 702, is thus a more felicitous statement of the goals of gatekeeping.

Confidence Intervals

As welcome as some aspects of Judge Seeborg’s opinion are, the decision is not without mistakes. The district judge, like so many of his judicial colleagues, trips over the proper interpretation of a confidence interval:[29]

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”

This statement is inescapably wrong. The 95 percent probability attaches to the process of capturing the true parameter – the actual relative risk – in the long run of confidence intervals generated by repeated sampling of the same sample size, in the same manner, from the same population. In Judge Seeborg’s example, the next sample might give a relative risk point estimate of 1.9, and that new estimate would have a confidence interval that might run from just below 1.0 to over 3. A third sample might turn up a relative risk estimate of 0.8, with a confidence interval running from, say, 0.3 to 1.4. Neither the second nor the third sample would be reasonably viewed as incompatible with the first. A more accurate assessment is that the true parameter lies somewhere between 0.3 and 3, a considerably broader range than the single 95 percent interval in the court’s example suggests.
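The correct reading of the 95 percent is easy to demonstrate by simulation. In the sketch below, with parameters I have made up for illustration, a “study” of fixed size is repeatedly drawn from a population whose true relative risk is known; roughly 95 percent of the resulting intervals capture that true value in the long run, which is all the 95 percent ever promised. No single interval, including the one in the court’s example, carries a 95 percent probability of containing the true relative risk.

```python
# Hedged illustration: hypothetical true relative risk and baseline risk.
import math, random

random.seed(2)
TRUE_RR = 1.4
P0 = 0.02                  # risk in the unexposed
P1 = TRUE_RR * P0          # risk in the exposed
N = 2_000                  # subjects per arm in each repeated "study"
REPS = 2_000               # number of repeated studies

covered = 0
for _ in range(REPS):
    a = sum(random.random() < P1 for _ in range(N))    # exposed cases
    c = sum(random.random() < P0 for _ in range(N))    # unexposed cases
    rr = (a / N) / (c / N)
    se = math.sqrt(1/a - 1/N + 1/c - 1/N)              # standard error of the log risk ratio
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    covered += lower <= TRUE_RR <= upper                # does this interval capture the truth?

print(f"{covered / REPS:.1%} of {REPS} intervals captured the true relative risk")  # close to 95%
```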

Judge Seeborg’s error is sadly all too common. Whenever I see the error, I wonder whence it came. Often the error appears in the briefs of both plaintiffs’ and defense counsel. In this case, however, I did not see the erroneous assertion about confidence intervals in either side’s briefs.


[1]  Brumley  v. Pfizer, Inc., 200 F.R.D. 596 (S.D. Tex. 2001) (excluding plaintiffs’ expert witness who claimed that Viagra caused heart attack); Selig v. Pfizer, Inc., 185 Misc. 2d 600 (N.Y. Cty. S. Ct. 2000) (excluding plaintiff’s expert witness), aff’d, 290 A.D. 2d 319, 735 N.Y.S. 2d 549 (2002).

[2]  “Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I” (July 7, 2012); “Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference” (July 8, 2012).

[3]  In re Viagra Prods. Liab. Litig., 572 F.Supp. 2d 1071 (D. Minn. 2008), 658 F. Supp. 2d 936 (D. Minn. 2009), and 658 F. Supp. 2d 950 (D. Minn. 2009).

[4]  Wen-Qing Li, Abrar A. Qureshi, Kathleen C. Robinson, and Jiali Han, “Sildenafil use and increased risk of incident melanoma in US men: a prospective cohort study,” 174 J. Am. Med. Ass’n Intern. Med. 964 (2014).

[5]  See, e.g., Herrara v. Pfizer Inc., Complaint in 3:15-cv-04888 (N.D. Calif. Oct. 23, 2015); Diana Novak Jones, “Viagra Increases Risk Of Developing Melanoma, Suit Says,” Law360 (Oct. 26, 2015).

[6]  See In re Viagra (Sildenafil Citrate) Prods. Liab. Litig., 176 F. Supp. 3d 1377, 1378 (J.P.M.L. 2016).

[7]  See, e.g., Jenny Z. Wang, Stephanie Le , Claire Alexanian, Sucharita Boddu, Alexander Merleev, Alina Marusina, and Emanual Maverakis, “No Causal Link between Phosphodiesterase Type 5 Inhibition and Melanoma,” 37 World J. Men’s Health 313 (2019) (“There is currently no evidence to suggest that PDE5 inhibition in patients causes increased risk for melanoma. The few observational studies that demonstrated a positive association between PDE5 inhibitor use and melanoma often failed to account for major confounders. Nonetheless, the substantial evidence implicating PDE5 inhibition in the cyclic guanosine monophosphate (cGMP)-mediated melanoma pathway warrants further investigation in the clinical setting.”); Xinming Han, Yan Han, Yongsheng Zheng, Qiang Sun, Tao Ma, Li Dai, Junyi Zhang, and Lianji Xu, “Use of phosphodiesterase type 5 inhibitors and risk of melanoma: a meta-analysis of observational studies,” 11 OncoTargets & Therapy 711 (2018).

[8]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., Case No. 16-md-02691-RS, Order Granting in Part and Denying in Part Motions to Exclude Expert Testimony (N.D. Calif. Jan. 13, 2020) [cited as Opinion].

[9]  Opinion at 8 (“determin[ing] whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions”), citing Daubert v. Merrell Dow Pharm., Inc. (Daubert II), 43 F.3d 1311, 1317 (9th Cir. 1995).

[10]  Opinion at 11.

[11]  Opinion at 11-13.

[12]  See Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, “Introduction,” chap. 1, in Kenneth J. Rothman, et al., eds., Modern Epidemiology at 29 (3d ed. 2008) (“no approach can transform plausibility into an objective causal criterion”).

[13]  Opinion at 15-16.

[14]  Opinion at 16-17.

[15]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965); see also “Woodside & Davis on the Bradford Hill Considerations” (April 23, 2013).

[16]  Opinion at 17-21.

[17]  Opinion at 18. The MDL court cited In re Silicone Gel Breast Implants Prod. Liab. Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004), for the proposition that relative risks greater than 2.0 permit the inference that the agent under study “was more likely than not responsible for a particular individual’s disease.”

[18]  Opinion at 18.

[19]  Opinion at 20.

[20]  Opinion at 19.

[21]  Opinion at 21, quoting from Rule 702, Advisory Committee Notes (emphasis in Judge Seeborg’s opinion).

[22]  Opinion at 21.

[23]  See “Follow the Data, Not the Discussion” (May 2, 2010).

[24]  Joiner, 522 U.S. at 145-46 (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

[25]  Huss v. Gayden, 571 F.3d 442 (5th Cir. 2009) (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia)); see also McClain v. Metabolife Internat’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions); Happel v. Walmart, 602 F.3d 820, 826 (7th Cir. 2010) (observing that it “is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven”).

[26]  In re Accutane Prods. Liab. Litig., 511 F. Supp. 2d 1288, 1291 (M.D. Fla. 2007) (“When an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study. That is, he must not draw overreaching conclusions.”) (internal citations omitted).

[27]  See Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 785 (D.N.J. 1996), aff’d, 118 F.3d 1577 (3d Cir. 1997) (“law warns against use of medical literature to draw conclusions not drawn in the literature itself …. Reliance upon medical literature for conclusions not drawn therein is not an accepted scientific methodology.”).

[28]  Opinion at 14.

[29]  Opinion at 4-5.