TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Data Games – A Techno Thriller

April 22nd, 2020

Sherlock Holmes, Hercule Poirot, Miss Marple, Father Brown, Harry Bosch, Nancy Drew, Joe and Frank Hardy, Sam Spade, Columbo, Lennie Briscoe, Inspector Clouseau, and Dominic Da Vinci:

Move over; there is a new super sleuth in town.

Meet Professor Ken Wheeler.

Ken is a statistician, and so by profession, he is a data detective. In his day job, he teaches at a northeastern university, where his biggest challenges are managing the expectations of students and administrators, while trying to impart statistical learning. At home, Ken rarely manages to meet the expectations of his wife and son. But as some statisticians are wont to do, Ken sometimes takes on consulting gigs that require him to use his statistical skills to help litigants sort out the role of chance in cases that run from discrimination claims to rare health effects. In this contentious, sharp-elbowed environment, Ken excels. And truth be told, Ken actually finds great satisfaction in identifying the egregious errors and distortions of adversary statisticians.

Wheeler’s sleuthing usually involves ascertaining random error or uncovering a lurking variable, but in Herbert I. Weisberg’s just-published novel, Data Games: A Techno Thriller, Wheeler is drawn into a high-stakes conspiracy of intrigue, violence, and fraud that goes way beyond the run-of-the-mine p-hacking and data dredging.

An urgent call from a scientific consulting firm puts Ken Wheeler in the midst of imminent disaster for a pharmaceutical manufacturer, whose immunotherapy anti-cancer wonder drug, Verbana, is under attack. A group of apparently legitimate scientists have obtained the dataset from Verbana’s pivotal clinical trial, and they appear on the verge of blowing Verbana out of the formulary with a devastating analysis that will show that the drug causes early dementia. Wheeler’s mission is to debunk the debunking analysis when it comes.

For those readers who are engaged in the litigation defense of products liability claims against medications, the scenario is familiar enough. The scientific group studying Verbana’s alleged side effect seems on the up-and-up, but they appear to be engaged in a cherry-picking exercise, guided by a dubious theory of biological plausibility, known as the “Kreutzfeld hypothesis.”

It is not often that mystery novels turn on surrogate outcomes, biomarkers, genomic medicine, and predictive analytics, but Data Games is no ordinary mystery. And Wheeler is no ordinary detective. To be sure, the middle-aged Wheeler drives a middle-aged BMW, not a Bond car, and certainly not a Bonferroni. And Wheeler’s toolkit may not include a Glock, but he can handle the lasso, the jackknife, and the logit, and serve them up with SAS. Wheeler sees patterns where others see only chaos.

Unlike the typical Hollywood rubbish about stereotyped evil pharmaceutical companies, the hero of Data Games finds that there are sinister forces behind what looks like an honest attempt to uncover safety problems with Verbana. These sinister forces will use anything to achieve their illicit ends, including superficially honest academics with white hats. The attack on Verbana gets the FDA’s attention and an urgent hearing in White Oak, where Wheeler shines.

The author of Data Games, Herbert I. Weisberg, is himself a statistician, and a veteran of some of the dramatic data games he writes about in this novel. Weisberg is perhaps better known for his “homework” books, such as Willful Ignorance: The Mismeasure of Uncertainty (2014), and Bias and Causation: Models and Judgment for Valid Comparisons (2010). If, however, you ever find yourself in a pandemic lockdown, Weisberg’s Data Games: A Techno Thriller is a perfect way to escape. For under $3, you will be entertained, and you might even learn something about probability and statistics.

Disproportionality Analyses Misused by Lawsuit Industry

April 20th, 2020

Adverse event reporting is a recognized, important component of pharmacovigilance. Regulatory agencies around the world further acknowledge that an increased rate of reporting of a specific adverse event may signal the possible existence of an association. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication–adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]

Most sane and sensible epidemiologists recognize that the low quality, inconsistencies, and biases of the data in adverse event reporting databases render studies of disproportionate reporting “poor surrogates for controlled epidemiologic studies.” In the face of incomplete and inconsistent reporting, so-called disproportionality analyses (“DPA”) assume that incomplete reporting will be constant for all events for a specific medication. Regulatory attention, product labeling, lawyer advertising and client recruitment, social media and publicity, and time since launch are all known to affect reporting rates, and to ensure that reporting rates for some event types for a specific medication will be higher. Thus, the DPA assumptions are virtually always false and unverifiable.[2]

DPAs are non-analytical epidemiologic studies that cannot rise in quality or probativeness above the level of the anecdotes upon which they are based. DPAs may generate signals or hypotheses, but they cannot test hypotheses of causality. Although simple in concept, DPAs involve some complicated computations that imbue them with an aura of “proofiness.” As would-be studies that lack probativeness for causality, they are thus ideal tools for the lawsuit industry to support litigation campaigns against drugs and medical devices. Indeed, if a statistical technique is difficult to understand but relatively easy to perform and even easier to pass off to unsuspecting courts and juries, then you can count on its metastatic use in litigation. The DPA has become one of the favorite tools of the lawsuit industry’s statisticians. This litigation use, however, cannot obscure the simple fact that the relative reporting risk provided by a DPA can never rise to the level of a relative risk.
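
To make concrete what a disproportionality analysis actually computes, here is a minimal sketch of the two most common measures, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), calculated from a 2×2 table of spontaneous reports. All counts are invented for illustration; nothing below estimates risk in any population, only relative frequencies of reporting within a hypothetical database.

```python
import math

# Hypothetical counts from a spontaneous-report database (invented numbers):
#                          event of interest    all other events
# drug of interest                a = 40              b = 1,960
# all other drugs                 c = 200             d = 97,800
a, b, c, d = 40, 1960, 200, 97800

# Proportional reporting ratio: the share of the drug's reports that mention
# the event, divided by the same share among reports for all other drugs.
prr = (a / (a + b)) / (c / (c + d))

# Reporting odds ratio: the odds of the event among the drug's reports versus
# the odds of the event among all other drugs' reports.
ror = (a * d) / (b * c)

# A rough 95% confidence interval for the ROR on the log scale.
se_log_ror = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(ror) - 1.96 * se_log_ror)
ci_high = math.exp(math.log(ror) + 1.96 * se_log_ror)

print(f"PRR = {prr:.2f}")
print(f"ROR = {ror:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# Both figures describe reporting patterns only; neither is a relative risk,
# because the denominators are reports, not exposed and unexposed patients.
```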

In one case in which a Parkinson’s disease patient claimed that his compulsive gambling was caused by his use of the drug Requip, the plaintiff’s expert witness attempted to invoke a DPA in support of his causal claim. In granting a Rule 702 motion to exclude the expert witnesses who relied upon a DPA, the trial judge rejected the probativeness of DPAs, based upon the FDA’s rejection of such analyses for anything other than signal detection.[3]

In the Accutane litigation, statistician David Madigan attempted to support his fatally weak causation opinion with a DPA for Crohn’s disease and Accutane adverse event reports. According to the New Jersey Supreme Court, Madigan claimed that his DPA showed “striking signal of disproportionality” indicative of a “strong association” between Accutane use and Crohn’s disease.[4]  With the benefit of a thorough review by the trial court, the New Jersey Supreme Court found other indicia of unreliability in Madigan’s opinions, such that it was not fooled by Madigan’s shenanigans. In any event, no signal of disproportionality could ever show an association between medication use and a disease; at best the DPA can show only an association between reporting of the medication use and the outcome of interest.

In litigation over Mirena and intracranial hypertension, one of the lawsuit industry’s regulars, Mahyar Etminan, published a DPA based upon the FDA’s Adverse Event Reporting System, which purported to find an increased reporting odds ratio.[5] Unthinkingly, the plaintiffs’ other testifying expert witnesses relied upon Etminan’s study. When a defense expert witness pointed out that Etminan had failed to adjust for age and gender in his multivariate analysis,[6] Etminan repudiated his findings.[7] Remarkably, when Etminan published his original DPA in 2015, he declared that he had no conflicts, but when he published his repudiation, he disclosed that he “has been an expert witness in Mirena litigation in the past but is no longer part of the litigation.” The Etminan kerfuffle helped scuttle the plaintiffs’ assault on Mirena.[8]
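
For readers unfamiliar with what “adjusting” a reporting odds ratio involves, the sketch below fits a logistic regression to simulated report-level data, once without and once with age and sex as covariates. This is only a generic illustration of covariate adjustment in a disproportionality analysis, on made-up data; it is not Etminan’s model, his variables, or his data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

# Simulated report-level data: each row is one adverse-event report.
age = rng.normal(40, 12, n)
female = rng.binomial(1, 0.5, n)

# Reports involving the hypothetical drug skew younger and more female.
p_drug = 1 / (1 + np.exp(-(-2.0 - 0.03 * (age - 40) + 1.5 * female)))
drug = rng.binomial(1, p_drug)

# The event of interest is also reported more often for younger women,
# but it has no simulated dependence on the drug itself.
p_event = 1 / (1 + np.exp(-(-4.0 - 0.04 * (age - 40) + 1.5 * female)))
event = rng.binomial(1, p_event)

df = pd.DataFrame({"event": event, "drug": drug, "age": age, "female": female})

crude = smf.logit("event ~ drug", data=df).fit(disp=0)
adjusted = smf.logit("event ~ drug + age + female", data=df).fit(disp=0)

print(f"crude reporting odds ratio:    {np.exp(crude.params['drug']):.2f}")
print(f"adjusted reporting odds ratio: {np.exp(adjusted.params['drug']):.2f}")
# Because age and sex drive both drug reporting and event reporting here, the
# crude ROR sits well above 1, while the adjusted ROR falls back toward 1,
# even though no drug effect was simulated.
```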

DPAs have, on occasion, bamboozled federal judges into treating them as analytical epidemiology that can support causal claims. For instance, misrepresentations or misunderstandings of what DPAs can and cannot do carried the day in a Rule 702 contest on the admissibility of opinion testimony by statistician Rebecca Betensky. In multidistrict litigation over the safety of inferior vena cava (“IVC”) filters, plaintiffs’ counsel retained Betensky to prepare a DPA of adverse events reported for the defendants’ retrievable filters. The MDL judge’s description of Betensky’s opinion demonstrates that her DPA was either misrepresented or misunderstood:

“In this MDL, Dr. Betensky opines generally that there is a higher risk of adverse events for Bard’s retrievable IVC filters than for its permanent SNF.”[9]

The court clearly took Betensky to be opining about risk and not the risk of reporting. The court’s opinion goes on to describe Betensky’s calculation of a “reporting risk ratio,” but the court nonetheless held that she could testify that the retrievable IVC filters increased the risk of the claimed adverse events, and not merely that there was an increase in reporting risk ratios.

Betensky acknowledged that the reporting risk ratios were “imperfect estimates of the actual risk ratios,”[10] but nevertheless dismissed all caveats about the inability of DPAs to assess actual increased risk. The trial court quoted Dr. Betensky’s attempt to infuse analytical rigor into a data mining exercise:

“[A]dverse events are generally considered to be underreported to the databases, and potentially differentially by severity of adverse event and by drug or medical device. . . . It is important to recognize that underreporting in and of itself is not problematic. Rather, differential underreporting of the higher risk device is what leads to bias. And even if there was differential underreporting of the higher risk device, given the variation in reporting relative risks across adverse events, the differential reporting would have had to have been highly variable across adverse events. This does not seem plausible given the severity of the adverse events considered. Given the magnitude of the RRR’s [relative reporting ratios], and their variability across adverse events, it seems implausible that differential underreporting by filter could fully explain the deviation of the observed RRR’s from 1.”[11]

Of course, this explanation fails to account for differential over-reporting for the newer, but less risky or equally risky, device. Betensky dismissed notoriety bias as having caused an increase in reporting adverse events because her DPA ended with 2014, before the FDA had issued a warning letter. The lawsuit industry, however, was on the attack against IVC filters, years before 2014.[12] Similarly, Betensky dismissed consideration of the Weber effect, but her analysis apparently failed to acknowledge that the notoriety and Weber effects are just two of many possible biases in DPAs.
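
The arithmetic behind that criticism is simple enough to show in a few lines. In the sketch below, with entirely invented numbers unrelated to the IVC filter record, two devices cause the same adverse event at exactly the same true rate, but events are reported more completely for the newer, publicized device; the observed reporting ratio then departs from 1.0 even though the true relative risk is 1.0.

```python
# Invented numbers for illustration only.
patients_new, patients_old = 50_000, 50_000
true_event_rate = 0.002                 # identical true risk for both devices

events_new = patients_new * true_event_rate
events_old = patients_old * true_event_rate

# Suppose 60% of events with the newer, litigated device get reported,
# but only 20% of events with the older comparator device.
reported_new = events_new * 0.60
reported_old = events_old * 0.20

observed_ratio = (reported_new / patients_new) / (reported_old / patients_old)
print(f"true relative risk = 1.00; observed reporting ratio = {observed_ratio:.2f}")
# The apparent 3-fold "signal" is produced entirely by differential reporting.
```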

In the face of her credentials, the MDL trial judge retreated to the usual chestnuts that are served up when a Rule 702 challenge is denied.  Judge Campbell thus observed that “[i]t is not the job of the court to insure that the evidence heard by the jury is error-free, but to insure that it is sufficiently reliable to be considered by the jury.”[13]  The trial judge professed a need to be “be careful not to conflate questions of admissibility of expert testimony with the weight appropriately to be accorded to such testimony by the fact finder.”[14] The court denied the claim that Betensky had engaged in an ipse dixit, by engaging in its own ipse dixit. Judge Campbell found that Betensky had explained her assumptions, had acknowledged shortcomings, and had engaged in various sensitivity tests of the validity of her DPA; and so he concluded that Betensky did not present “a case where ‘there is simply too great an analytical gap between the data and the opinion proffered’.”[15]

By closing off inquiry into the limits of the DPA methodology, Judge Campbell managed to stumble into a huge analytical gap he blindly ignored, or was unaware of. Even the best DPAs cannot substitute for analytical epidemiology in a scientific methodology of determining causation. The ipse dixit becomes apparent when we consider that the MDL gatekeeping opinion on Rebecca Betensky fails to mention the extensive body of regulatory and scientific opinion about the distinct methodologic limitations of DPA. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, instructs us that

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[16]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[17] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[18] Because of the uncertainties in DPAs,

“FDA suggests that a comparison of two or more reporting rates be viewed with extreme caution and generally considered exploratory or hypothesis-generating. Reporting rates can by no means be considered incidence rates, for either absolute or comparative purposes.”[19]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[20]
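
The “independence model” that the EMA guideline describes can be illustrated with a short sketch: the expected count for a drug–event pair is the count one would see if the drug and the event appeared in reports independently of one another, and the signal of disproportionate reporting is the ratio of observed to expected counts. The totals below are invented for illustration.

```python
# Invented totals from a hypothetical spontaneous-report database.
n_total = 1_000_000     # all reports in the database
n_drug = 2_000          # reports mentioning the drug of interest
n_event = 5_000         # reports mentioning the event of interest
n_observed = 40         # reports mentioning both the drug and the event

# Under an independence model, the expected count for the drug-event pair is
# the product of the marginal report counts divided by the database total.
n_expected = n_drug * n_event / n_total

observed_to_expected = n_observed / n_expected
print(f"expected = {n_expected:.1f}, observed = {n_observed}, "
      f"observed/expected = {observed_to_expected:.1f}")
# An observed/expected ratio well above 1 is a statistical signal of
# disproportionate reporting; as the guideline says, it implies nothing
# about whether the drug causes the event.
```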

The current version of perhaps the leading textbook on pharmacoepidemiology is completely in accord with the above regulatory guidances. In addition to emphasizing the limitations on data quality from adverse event reporting, and the inability to interpret temporal trends, the textbook authors clearly characterize DPAs as generating signals, and as unable to serve as hypothesis tests:

“a signal of disproportionality is a measure of a statistical association within a collection of AE/ADR reports (rather than in a population), and it is not a measure of causality. In this regard, it is important to underscore that the use of data mining is for signal detection – that is, for hypothesis  generation – and that further work is needed to evaluate the signal.”[21]

Reporting ratios are not, and cannot serve as, measures of incidence or prevalence, because adverse event databases do not capture all the events of interest, and so these ratios “must be interpreted cautiously.”[22] The authors further emphasize that “well-designed pharmacoepidemiology or clinical studies are needed to assess the signal.”[23]

The authors of this chapter are all scientists and officials at the FDA’s Center for Drug Evaluation and Research and the World Health Organization. Although they properly disclaimed writing on behalf of their agencies, those agencies have independently embraced the same concepts in other publications. The consensus view of the hypothesis-generating nature of DPAs can easily be seen in surveying the relevant literature.[24] Passing off a DPA as a study that supports causal inference is not a mere matter of “weight,” nor is its exclusion a matter of barring any opinion that has some potential for error. The misuse of Betensky’s DPA is a methodological error that goes to the heart of what Congress intended to be screened and excluded by Rule 702.


[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004).

[2]  Id. See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 68-69 (2nd ed. 2017) (noting the example of the WHO’s DPA that found a 10-fold reporting rate increase for statins and ALS, which reporting association turned out to be spurious).

[3]  Wells v. SmithKline Beecham Corp., 2009 WL 564303, at *12 (W.D. Tex. 2009) (citing and quoting from the FDA’s Guidance for Industry: Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment (2005)), aff’d, 601 F.3d 375 (5th Cir. 2010). But see In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1324 (N.D. Fla. 2018) (noting that the finding of a DPA that compared Abilify with other anti-psychotics helped to show that a traditional epidemiologic study was not confounded by the indication for depressive symptoms).

[4]  In re Accutane Litig., 234 N.J. 340, 191 A.3d 560, 574 (2018).

[5]  See Mahyar Etminan, Hao Luo, and Paul Gustafson, et al., “Risk of intracranial hypertension with intrauterine levonorgestrel,” 6 Therapeutic Advances in Drug Safety 110 (2015).

[6]  Deborah Friedman, “Risk of intracranial hypertension with intrauterine levonorgestrel,” 7 Therapeutic Advances in Drug Safety 23 (2016).

[7]  Mahyar Etminan, “Revised disproportionality analysis of Mirena and benign intracranial hypertension,” 8 Therapeutic Advances in Drug Safety 299 (2017).

[8]  In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 387 F. Supp. 3d 323, 331 (S.D.N.Y. 2019) (Engelmayer, J.).

[9]  In re Bard IVC Filters Prods. Liab. Litig., No. MDL 15-02641-PHX DGC, Order Denying Motion to Exclude Rebecca Betensky at 2 (D. Ariz. Jan. 22, 2018) (Campbell, J.) (emphasis added) [Order]

[10]  Id. at 4.

[11]  Id.

[12]  See Matt Fair, “C.R. Bard’s Faulty Filters Pose Health Risks, Suit Says,” Law360 (Aug. 10, 2012); See, e.g., Derrick J. Stobaugh, Parakkal Deepak, & Eli D. Ehrenpreis, “Alleged isotretinoin-associated inflammatory bowel disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 393 (2013) (documenting stimulated reporting from litigation activities).

[13]  Order at 6, quoting from Southwire Co. v. J.P. Morgan Chase & Co., 528 F. Supp. 2d 908, 928 (W.D. Wis. 2007).

[14]  Id., citing In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010).

[15]  Id., citing and quoting from In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010) (quoting General Electric v. Joiner, 522 U.S. 136, 146 (1997)).

[16]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[17]  Id. at 9.

[18]  Id.

[19]  Id. at 11 (emphasis added).

[20]  EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added).

[21]  Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 185 (6th ed. 2020) (emphasis added).

[22]  Id. at 187.

[23]  Id. See also Andrew Bate, Gianluca Trifirò, Paul Avillach & Stephen J.W. Evans, “Data Mining and Other Informatics Approaches to Pharmacoepidemiology,” chap. 27, in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 685-88 (6th ed. 2020) (acknowledging the importance of DPAs for detecting signals that must then be tested with analytical epidemiology) (authors from industry, Pfizer, and academia, including NYU School of Medicine, Harvard Medical School, and London School of Hygiene and Tropical Medicine).

[24]  See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 61 (2nd ed. 2017) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk.” *** “Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Ronald D. Mann & Elizabeth B. Andrews, Pharmacovigilance 240 (2d ed. 2007) (“Importantly, data mining cannot prove or refute causal associations between drugs and events. Data mining simply identifies disproportionality of drug–event reporting patterns in databases. The absence of a signal does not rule out a safety problem. Similarly, the presence of a signal is not a proof of a causal relationship between a drug and an adverse event.”); Patrick Waller, An Introduction to Pharmacovigilance 49 (2010) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk. Whilst it is true that the greater the degree of disproportionality, the more reason there is to look further, the only real utility of the numbers is to decide whether or not there are more cases than might reasonably have been expected. Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Sidney N. Kahn, “You’ve found a safety signal–now what? Regulatory implications of industry signal detection activities,” 30 Drug Safety 615 (2007).

Dark Money, Scott Augustine, and Hot Air

April 11th, 2020

Fraud by the litigation industry takes many different forms. In the massive silicosis litigation unleashed in Mississippi and Texas in the early 2000s, plaintiffs’ lawyers colluded with physicians to concoct dubious diagnoses of silicosis. Fraudulent diagnoses of silicosis led to dismissals of thousands of cases, as well as the professional defrocking of some physician witnesses.[1] For those trying to keep up with the lawsuit industry’s publishing arm, discussion of the Great Silicosis Fraud is completely absent from David Michaels’ recent book, The Triumph of Doubt.[2] So too is any mention of “dark money” that propelled the recently concluded Bair Hugger litigation.

Back in 2017, I wrote about the denial of a Rule 702 motion in the Bair Hugger litigation.[3] At the time, I viewed the trial court’s denial, on the facts of the case, to be a typical failure of gatekeeping.[4] Events in the Bair Hugger cases were only warming up in 2017.

After the court’s ruling, 3M took the first bellwether case to trial and won a jury verdict on May 30, 2018. Perhaps this jury verdict encouraged the MDL trial judge to take 3M’s motion for reconsideration of the Rule 702 motion seriously. In July 2019, the MDL court granted 3M’s motion to exclude the opinion testimony of plaintiffs’ general causation and mechanism expert witnesses, Drs. Jarvis, Samet, Stonnington, and Elghobashi.[5] Without these witnesses, over 5,000 plaintiffs, who had been misled about the merits of their cases, were stranded and set up for dismissal. On August 2, 2019, the MDL cases were dismissed for want of evidentiary support on causation. On August 29, 2019, plaintiffs filed a joint notice of appeal to the Eighth Circuit.

The two Bair Hugger Rule 702 federal court decisions focused (or failed to focus) on scientific considerations. Most of the story of “dark money” and the manufacturing of science to support the litigation were suppressed in the Rule 702 motion practice, and in the federal jury trial. In her second Rule 702 reconsideration opinion, the MDL judge did mention undisclosed conflicts of interest by authors of the key studies relied upon by plaintiffs’ witnesses.[6]

To understand how the Bair Hugger litigation got started, and to obtain a full understanding of what the scientific evidence was, a disinterested observer will have to read the state court decisions. Defendant 3M moved to exclude plaintiffs’ causation expert witnesses, in its Minnesota state court cases, under the so-called Frye standard. The state court granted the motion, excluding plaintiffs’ witnesses for advancing a novel scientific theory that lacked acceptance in the relevant scientific community. The Minnesota Court of Appeals affirmed, with a decision that talked rather more freely about the plaintiffs’ counsel’s dark money. In re 3M Bair Hugger Litig., 924 N.W.2d 16 (Minn. App. 2019) [cited as Bair Hugger].

As the Minnesota Court of Appeals explained, a forced-air warming device (FAWD) is a very important, useful device to keep patients’ body temperatures normal during surgery. The “Bair Hugger” is a FAWD, invented in 1987 by Dr. Scott Augustine, an anesthesiologist who at the time was the chief executive officer of Augustine Medical, Inc. Bair Hugger at 19.

In the following 15 years, the Bair Hugger became the leading FAWD in the world. In 2002, the federal government notified Augustine that it was investigating him for Medicare fraud. Augustine resigned from the company that bore his name, and the company purged the taint by reorganizing as Arizant Healthcare Inc. (Arizant), which continued to make the Bair Hugger. In the following year, 2003, Augustine pleaded guilty to fraud and paid a $2 million fine. His sentence included a five-year ban from involvement in federal health-care programs.

During the years of his banishment, fraudfeasor Augustine developed a rival product and then embarked upon a global attack on the safety of his own earlier invention, the Bair Hugger. In the United Kingdom, his claim that the Bair Hugger increased the risk of surgical site infections was rejected by the UK National Institute for Health and Clinical Excellence. A German court enjoined Augustine from falsely claiming that the Bair Hugger led to increased bacterial contamination.[7] The United States FDA considered and rejected Augustine’s claims, and recommended the use of FAWDs.

In 2009, Augustine began to work as a non-testifying expert witness with the Houston, Texas, plaintiffs’ law firm of Kennedy Hodges LLP. A series of publications resulted in which the authors attempted to raise questions about the safety of the Bair Hugger. By 2013, with the medical literature “seeded” with several studies attacking the Bair Hugger, the Kennedy Hodges law firm began to manufacture lawsuits against Arizant and 3M (which had bought the Bair Hugger product line from Arizant in 2010). Bair Hugger at 20.

The seeding studies were marketing and litigation propaganda used by Augustine to encourage the all-too-complicit lawsuit industry to ramp up production of complaints against 3M over the Bair Hugger. Several of the plaintiffs’ studies included as an author a young statistician, Mark Albrecht, an employee of, or a contractor for, Augustine’s new companies, Augustine Temperature Management and Augustine Medical. Even when disclosures were made, they were at best “anemic”:

“The author or one or more of the authors have received or will receive benefits for personal or professional use from a commercial party related directly or indirectly to the subject of this article.”[8]

Some of these studies included a disclosure that Albrecht was funded or employed by Augustine, but they did not disclose the protracted, bitter feud or Augustine’s confessed fraudulent conduct. Another author of some of the plaintiffs’ studies was David Leaper, who was a highly paid “consultant” to Augustine at the time of the work on the studies. None of the studies disclosed Leaper’s consultancy for Augustine:

  1. Mark Albrecht, Robert Gauthier, and David Leaper, “Forced air warming, a source of airborne contamination in the operating room?” 1 Orthopedic Rev. (Pavia) e28 (2009)
  2. Mark Albrecht, Robert L. Gauthier, Kumar Belani, Mark Litchy, and David Leaper, “Forced-air warming blowers: An evaluation of filtration adequacy and airborne contamination emissions in the operating room,” 39 Am. J. Infection Control 321 (2011)
  3. P.D. McGovern, Mark Albrecht, Kumar Belani, C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537 (2011)
  4. K.B. Dasari, Mark Albrecht, and M. Harper, “Effect of forced-air warming on the performance of operating-theatre laminar-flow ventilation,” 67 Anaesthesia 244 (2012)
  5. Mike Reed, Oliver Kimberger, Paul D. McGovern, and Mark C. Albrecht, “Forced-Air Warming Design: Evaluation of Intake Filtration, Internal Microbial Buildup, and Airborne-Contamination Emissions,” 81 Am. Ass’n Nurse Anesthetists 275 (2013)
  6. Kumar Belani, Mark Albrecht, Paul McGovern, Mike Reed, and Christopher Nachtsheim, “Patient warming excess heat: the effects on orthopedic operating room ventilation performance,” 117 Anesthesia & Analgesia 406 (2013)

In one study, Augustine’s employee Mark Albrecht conducted the experiment with one of the authors, but was not listed as an author although he wrote an early draft of the study. Augustine provided all the equipment used in the experiment. The published paper failed to disclose any of these questionable activities:

  1. A.J. Legg & A.J. Hamer, “Forced-air patient warming blankets disrupt unidirectional flow,” 95 Bone & Joint J. 407 (2013)

Another study had more peripheral but still questionable involvement of Augustine, whose company lent the authors equipment used to conduct the study, without proper acknowledgment and disclosure:

  1. A.J. Legg, T. Cannon, and A. J. Hamer, “Do forced-air warming devices disrupt unidirectional downward airflow?” 94 J. Bone & Joint Surg. – British 254 (2012)

In addition to the defects in the authors’ disclosures, 3M discovered that two of the studies had investigated whether the Bair Hugger spread bacteria in the surgical area. Although the experiments found no spread with the Bair Hugger, the researchers never publicly disclosed their exculpatory evidence.[9]

Augustine’s marketing campaign, through these studies, ultimately fell flat at the FDA, which denied his citizen’s petition and recommended that surgeons continue to use FAWDs such as the Bair Hugger.[10] Augustine’s proxy litigation war against 3M also fizzled, unless the 8th Circuit revives his vendetta. Nonetheless, the Augustine saga raises serious questions about how litigation funding of “scientific studies” will vex the search for the truth in pharmaceutical products litigation. The Augustine attempt to pollute the medical literature was relatively apparent, but dark money from undisclosed financiers may require greater attention from litigants and from journal editors.


[1]  In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D.Tex. 2005).

[2]  David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]  In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017).

[4]  “Gatekeeping of Expert Witnesses Needs a Bair Hug” (Dec. 20, 2017).

[5]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., MDL No. 15-2666, 2019 WL 4394812 (D. Minn. July 31, 2019). See Joe G. Hollingsworth & Caroline Barker, “Exclusion of Junk Science in ‘Bair Hugger’ MDL Shows Daubert Is Still Breathing,” Wash. Leg. Foundation (Jan 23, 2020); Christine Kain, Patrick Reilly, Hannah Anderson and Isabelle Chammas, “Top 5 Drug And Medical Device Developments Of 2019,” Law360 (Jan. 9, 2020).

[6]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., 2019 WL 4394812, at *10 n.13 (D. Minn. July 31, 2019) (observing that “[i]n the published study, the authors originally declared no conflicts of interest”).

[7]  Dr. Augustine has never been a stranger to the judicial system. See, e.g., Augustine Medical, Inc. v. Gaymar Industries, Inc., 181 F.3d 1291 (Fed. Cir. 1999); Augustine Medical, Inc. v. Progressive Dynamics, Inc., 194 F.3d 1367 (Fed. Cir. 1999); Cincinnati Sub-Zero Products, Inc. v. Augustine Medical, Inc., 800 F. Supp. 1549 (S.D. Ohio 1992).

[8]  P.D. McGovern, Mark Albrecht, Kumar Belani, and C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537, 1544 (2011).

[9]  See https://www.truthaboutbairhugger.com/truth-science-behind-claims-3m-bair-hugger-system-look-augustine-connections-research-studies/.

[10]  William Maisel, “Information about the Use of Forced Air Thermal Regulating Systems – Letter to Health Care Providers”; Center for Devices and Radiological Health, U.S. Food and Drug Administration (Aug. 30, 2017).

April Fool – Zambelli-Weiner Must Disclose

April 2nd, 2020

Back in the summer of 2019, Judge Saylor, the MDL judge presiding over the Zofran birth defect cases, ordered epidemiologist Dr. April Zambelli-Weiner to produce documents relating to an epidemiologic study of Zofran,[1] as well as her claimed confidential consulting relationship with plaintiffs’ counsel.[2]

This previous round of motion practice and discovery established that Zambelli-Weiner was a paid consultant in advance of litigation, that her Zofran study was funded by plaintiffs’ counsel, and that she presented at a Las Vegas conference, for plaintiffs’ counsel only, on [sic] how to make mass torts perfect. Furthermore, she had made false statements to the court about her activities.[3]

Zambelli-Weiner ultimately responded to the discovery requests, but she and plaintiffs’ counsel withheld several documents as confidential, pursuant to the MDL’s procedure for protective orders. Yesterday, April 1, 2020, Judge Saylor granted GlaxoSmithKline’s motion to de-designate four documents that plaintiffs claimed to be confidential.[4]

Zambelli-Weiner sought to resist GSK’s motion to compel disclosure of the documents on a claim that GSK was seeking the documents to advance its own litigation strategy. Judge Saylor acknowledged that Zambelli-Weiner’s psycho-analysis might be correct, but that GSK’s motive was not the critical issue. According to Judge Saylor, the proper inquiry was whether the claim of confidentiality was proper in the first place, and whether removing the cloak of secrecy was appropriate under the facts and circumstances of the case. Indeed, the court found “persuasive public-interest reasons” to support disclosure, including providing the FDA and the EMA a complete, unvarnished view of Zambelli-Weiner’s research.[5] Of course, the plaintiffs’ counsel, in close concert with Zambelli-Weiner, had created GSK’s need for the documents.

This discovery battle has no doubt been fought because plaintiffs and their testifying expert witnesses rely heavily upon the Zambelli-Weiner study to support their claim that Zofran causes birth defects. The present issue is whether four of the documents produced by Dr. Zambelli-Weiner pursuant to subpoena should continue to enjoy confidential status under the court’s protective order. GSK argued that the documents were never properly designated as confidential, and alternatively, the court should de-designate the documents because, among other things, the documents would disclose information important to medical researchers and regulators.

Judge Saylor’s Order considered GSK’s objections to plaintiffs’ and Zambelli-Weiner’s withholding four documents:

(1) Zambelli-Weiner’s Zofran study protocol;

(2) Undisclosed, hidden analyses that compared birth defect rates for children born to mothers who used Zofran with the rates seen with the use of other anti-emetic medications;

(3) An earlier draft of Zambelli-Weiner’s Zofran study, which she had prepared to submit to the New England Journal of Medicine; and

(4) Zambelli-Weiner’s advocacy document, a “Causation Briefing Document,” which she prepared for plaintiffs’ lawyers.

Judge Saylor noted that none of the withheld documents would typically be viewed as confidential. None contained “sensitive personal, financial, or medical information.”[6]  The court dismissed Zambelli-Weiner’s contention that the documents all contained “business and proprietary information” as conclusory and meritless. Neither she nor plaintiffs’ counsel explained how the requested documents implicated proprietary information when Zambelli-Weiner’s only business at issue is to assist in making lawsuits. The court observed that she is not “engaged in the business of conducting research to develop a pharmaceutical drug or other proprietary medical product or device,” and that her work at issue related solely to her paid consultancy to plaintiffs’ lawyers. Neither she nor the plaintiffs’ lawyers showed how public disclosure would hurt her proprietary or business interests. Of course, if Zambelli-Weiner had been dishonest in carrying out the Zofran study, as reflected in study deviations from its protocol, her professional credibility and her business of conducting such studies might well suffer. Zambelli-Weiner, however, was not prepared to affirm the antecedent of that hypothetical. In any event, the court found that whatever right Zambelli-Weiner might have enjoyed to avoid discovery evaporated with her previous dishonest representations to the MDL court.[7]

The Zofran Study Protocol

GSK sought production of the Zofran study protocol, which in theory contained the research plan for the Zofran study and the analyses the researchers intended to conduct. Zambelli-Weiner attempted to resist production on the specious theory that she had not published the protocol, but the court found this “non-publication” irrelevant to the claim of confidentiality. Most professional organizations, such as the International Society of Pharmacoepidemiology (“ISPE”), which ultimately published Zambelli-Weiner’s study, encourage the publication and sharing of study protocols.[8] Disclosure of protocols helps ensure the integrity of studies by allowing readers to assess whether the researchers have adhered to their study plan, or have engaged in ad hoc data dredging in search of a desired result.[9]

The Secret, Undisclosed Analyses

Perhaps even more egregious than withholding the study protocol was the refusal to disclose unpublished analyses comparing the rate of birth defects among children born to mothers who used Zofran with the birth defect rates of children with in utero exposure to other anti-emetic medications.  In ruling that Zambelli-Weiner must produce the unpublished analyses, the court expressed its skepticism over whether these analyses could ever have been confidential. Under ISPE guidelines, researchers must report findings that significantly affect public health, and the relative safety of Zofran is essential to its evaluation by regulators and prescribing physicians.

Not only was Zambelli-Weiner’s failure to include these analyses in her published article ethically problematic, but she apparently hid these analyses from the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, which specifically inquired of Zambelli-Weiner whether she had performed such analyses. As a result, the PRAC recommended a label change based upon Zambelli-Weiner’s failure to disclose material information. Furthermore, the plaintiffs’ counsel represented that they intended to oppose GSK’s citizen petition to the FDA, based upon the Zambelli-Weiner study. The apparently fraudulent non-disclosure of relevant analyses could not have been more fraught with public health significance. The MDL court found that the public health need trumped any (doubtful) claim to confidentiality.[10] Against the obvious public interest, Zambelli-Weiner offered no “compelling countervailing interest” in keeping her secret analyses confidential.

There were other aspects to the data-dredging rationale not discussed in the court’s order. Without seeing the secret analyses of other anti-emetics, readers were deprived of an important opportunity to assess actual and potential confounding in her study. Perhaps even more important, the statistical tools that Zambelli-Weiner used, including any measurements of p-values and confidence intervals, and any declarations of “statistical significance,” were rendered meaningless by her secret, undisclosed, multiple testing. As noted by the American Statistical Association (ASA) in its 2016 position statement, “4. Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”[11]
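
The force of the ASA’s point about undisclosed multiple analyses can be seen in a one-line calculation: if an analyst quietly runs k independent comparisons on data in which nothing is going on, the chance of finding at least one “statistically significant” p-value below 0.05 grows rapidly with k. A minimal sketch:

```python
# Chance of at least one p < 0.05 among k independent comparisons of null data.
for k in (1, 5, 10, 20):
    p_any = 1 - 0.95 ** k
    print(f"{k:>2} comparisons: P(at least one 'significant' result) = {p_any:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
# Without knowing how many analyses were run and how the reported ones were
# selected, a published p-value cannot be interpreted.
```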

The Draft Manuscript for the New England Journal of Medicine

The MDL court wasted little time and ink in dispatching Zambelli-Weiner’s claim of confidentiality for her draft New England Journal of Medicine manuscript. The court found that she failed to explain how any differences in content between this manuscript and the published version constituted “proprietary business information,” or how disclosure would cause her any actual prejudice.

Zambelli-Weiner’s Litigation Road Map

In a world where social justice warriors complain about organizations such as Exponent for its litigation support of defense efforts, the revelation that Zambelli-Weiner was helping to quarterback the plaintiffs’ offense deserves greater recognition. Zambelli-Weiner’s litigation road map was clearly created to help Grant & Eisenhofer, P.A., the plaintiffs’ lawyers, create a causation strategy (to which she would add her Zofran study). Such a document from a consulting expert witness is typically the sort of document that enjoys confidentiality and protection from litigation discovery. The MDL court, however, looked beyond Zambelli-Weiner’s role as a “consulting witness” to her involvement in designing and conducting research. The broader extent of her involvement in producing studies and communicating with regulators made her litigation “strategery” “almost certainly relevant to scientists and regulatory authorities” charged with evaluating her study.[12]

Despite Zambelli-Weiner’s protestations that she had made a disclosure of conflict of interest, the MDL court found her disclosure anemic and the public interest in knowing the full extent of her involvement in advising plaintiffs’ counsel, long before the study was conducted, great.[13]

The legal media has been uncommonly quiet about the rulings on April Zambelli-Weiner, in the Zofran litigation. From the Union of Concerned Scientists, and other industry scolds such as David Egilman, David Michaels, and Carl Cranor – crickets. Meanwhile, while the appeal over the admissibility of her testimony is pending before the Pennsylvania Supreme Court,[14] Zambelli-Weiner continues to create an unenviable record in Zofran, Accutane,[15] Mirena,[16] and other litigations.


[1]  April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

[2]  See In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657) [cited as In re Zofran].

[3]  “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[4]  In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) [Order].

[5]  Order at n.3

[6]  Order at 3.

[7]  See In re Zofran, 392 F. Supp. 3d at 186.

[8]  Order at 4. See also Xavier Kurz, Susana Perez-Gutthann, the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidemiology & Drug Safety 245 (2018).

[9]  Order at note 2 (citing Charles J. Walsh & Marc S. Klein, “From Dog Food to Prescription Drug Advertising: Litigating False Scientific Establishment Claims Under the Lanham Act,” 22 Seton Hall L. Rev. 389, 431 (1992) (noting that adherence to study protocol “is essential to avoid ‘data dredging’—looking through results without a predetermined plan until one finds data to support a claim”)).

[10]  Order at 5, citing Anderson v. Cryovac, Inc., 805 F.2d 1, 8 (1st Cir. 1986) (describing public-health concerns as “compelling justification” for requiring disclosing of confidential information).

[11]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016)

See also “The American Statistical Association’s Statement on and of Significance” (March 17, 2016); “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).

[12]  Order at 6.

[13]  Cf. Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002) (arguing that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions).

[14]  Walsh v. BASF Corp., GD #10-018588 (Oct. 5, 2016, Pa. Ct. C.P. Allegheny Cty., Pa.) (finding that Zambelli-Weiner’s and Nachman Brautbar’s opinions that pesticides generally cause acute myelogenous leukemia, and that even the smallest exposure to benzene increases the risk of leukemia, offended generally accepted scientific methodology), rev’d, 2018 Pa. Super. 174, 191 A.3d 838, 842-43 (Pa. Super. 2018), appeal granted, 203 A.3d 976 (Pa. 2019).

[15]  In re Accutane Litig., No. A-4952-16T1 (N.J. App. Div. Jan. 17, 2020) (affirming exclusion of Zambelli-Weiner as an expert witness).

[16]  In re Mirena IUD Prods. Liab. Litig., 169 F. Supp. 3d 396 (S.D.N.Y. 2016) (excluding Zambelli-Weiner in part).