For your delectation and delight, desultory dicta on the law of delicts.

Regressive Methodology in Pharmaco-Epidemiology

October 24th, 2020

Medications are rigorously tested for safety and efficacy in clinical trials before approval by regulatory agencies such as the U.S. Food & Drug Administration (FDA) or the European Medicines Agency (EMA). The approval process, however, contemplates that more data about safety and efficacy will emerge from the use of approved medications in pharmacoepidemiologic studies conducted outside of clinical trials. Litigation of safety outcomes rarely arises from claims based upon the pivotal clinical trials that were conducted for regulatory approval and licensing. The typical courtroom scenario is that a safety outcome is called into question by pharmacoepidemiologic studies that purport to find associations or causality between the use of a specific medication and the claimed harm.

The International Society for Pharmacoepidemiology (ISPE), established in 1989, describes itself as an international professional organization devoted to advancing health through pharmacoepidemiology and related areas, such as pharmacovigilance. The ISPE website defines pharmacoepidemiology as

“the science that applies epidemiologic approaches to studying the use, effectiveness, value and safety of pharmaceuticals.”

The ISPE conceptualizes pharmacoepidemiology as “real-world” evidence, in contrast to randomized clinical trials:

“Randomized controlled trials (RCTs) have served and will continue to serve as the major evidentiary standard for regulatory approvals of new molecular entities and other health technology. Nonetheless, RWE derived from well-designed studies, with application of rigorous epidemiologic methods, combined with judicious interpretation, can offer robust evidence regarding safety and effectiveness. Such evidence contributes to the development, approval, and post-marketing evaluation of medicines and other health technology. It enables patient, clinician, payer, and regulatory decision-making when a traditional RCT is not feasible or not appropriate.”

ISPE Position on Real-World Evidence (Feb. 12, 2020) (emphasis in original).

The ISPE publishes an official journal, Pharmacoepidemiology and Drug Safety, and sponsors conferences and seminars, all of which are watched by lawyers pursuing and defending drug and device health safety claims. The endorsement by the ISPE of the American Statistical Association’s 2016 statement on p-values is thus of interest not only to statisticians, but to lawyers and claimants involved in drug safety litigation.

The ISPE, through its board of directors, formally endorsed the ASA 2016 p-value statement on April 1, 2017 (no fooling) in a statement that can be found at its website:

The International Society for Pharmacoepidemiology, ISPE, formally endorses the ASA statement on the misuse of p-values and accepts it as an important step forward in the pursuit of reasonable and appropriate interpretation of data.

On March 7, 2016, the American Statistical Association (ASA) issued a policy statement that warned the scientific community about the use of P-values and statistical significance for interpretation of reported associations. The policy statement was accompanied by an introduction that characterized the reliance on significance testing as a vicious cycle of teaching significance testing because it was expected, and using it because that was what was taught. The statement and many accompanying commentaries illustrated that p-values were commonly misinterpreted to imply conclusions that they cannot imply. Most notably, “p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.” Also, “a p-value does not provide a good measure of evidence regarding a model or hypothesis.” Furthermore, reliance on p-values for data interpretation has exacerbated the replication problem of scientific work, as replication of a finding is often confused with replicating the statistical significance of a finding, on the erroneous assumption that replication should lead to studies getting similar p-values.

This official statement from the ASA has ramifications for a broad range of disciplines, including pharmacoepidemiology, where use of significance testing and misinterpretation of data based on P-values is still common. ISPE has already adopted a similar stance and incorporated it into our GPP [ref] guidelines. The ASA statement, however, carries weight on this topic that other organizations cannot, and will inevitably lead to changes in journals and classrooms.

Some points of interpretation of the ASA Statement can be discussed and debated. What is clear, however, is that the ASA never urged the abandonment of p-values or even of statistical significance. The Statement contained six principles, some of which did nothing other than attempt to correct prevalent misunderstandings of p-values. The third principle stated that “[s]cientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.” (emphasis added).

This principle, as stated, thus hardly advocated for the abandonment of a threshold in testing; rather it made the unexceptional point that the ultimate scientific conclusion (say about causality) required more assessment than only determining whether a p-value passed a specified threshold.
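The replication point quoted above can be illustrated with a short simulation (my sketch, not drawn from the ASA or ISPE materials): repeated studies of one and the same modest true effect yield p-values scattered across nearly the whole unit interval, so whether any single study “passes” a 0.05 threshold is largely a matter of chance.

```python
import math
import random

random.seed(42)

def one_study_p(n=50, effect=0.3):
    """Two-arm study: n observations per arm, true mean difference = effect.
    Returns a two-sided p-value from a normal-approximation z-test."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_b - mean_a) / math.sqrt(var_a / n + var_b / n)
    # two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 1,000 replications of the identical study design and identical true effect
p_values = [one_study_p() for _ in range(1000)]
significant = sum(p < 0.05 for p in p_values)

print(f"p-values range from {min(p_values):.4f} to {max(p_values):.2f}")
print(f"fraction 'significant' at 0.05: {significant / 1000:.2f}")
```

With this design, roughly a third of the replications cross the 0.05 threshold and the rest do not, even though every simulated study measured the same real effect.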

Presumably, the ISPE’s endorsement of the ASA’s 2016 Statement embraces all six of the articulated principles, including the ASA’s fourth principle:

4. Proper inference requires full reporting and transparency

“P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable. Cherry-picking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and ‘p-hacking,’ leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided. One need not formally carry out multiple statistical tests for this problem to arise: Whenever a researcher chooses what to present based on statistical results, valid interpretation of those results is severely compromised if the reader is not informed of the choice and its basis. Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted, and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”

The ISPE’s endorsement of the ASA 2016 Statement references the ISPE’s own Guidelines for Good Pharmacoepidemiology Practices (GPP), which were promulgated initially in 1996 and revised as recently as June 2015. Good practices, as of 2015, provided that:

“Interpretation of statistical measures, including confidence intervals, should be tempered with appropriate judgment and acknowledgements of potential sources of error and limitations of the analysis, and should never be taken as the sole or rigid basis for concluding that there is or is not a relation between an exposure and outcome. Sensitivity analyses should be conducted to examine the effect of varying potentially critical assumptions of the analysis.”

All well and good, but this “good practices” statement might be taken as a bit anemic, given that it contains no mention of, or caution against, unqualified or unadjusted confidence intervals or p-values that come from multiple testing or comparisons. The ISPE endorsement of the ASA Statement now expands upon the ISPE’s good practices to include the avoidance of multiplicity and the disclosure of the full extent of analyses conducted in a study.

What happens in the “real world” of publishing, outside the board room?

Last month, the ISPE conducted its (virtual) 36th International Conference on Pharmacoepidemiology & Therapeutic Risk Management. The abstracts and poster presentations from this Conference were published last week as a Special Issue of the ISPE journal. I spot-checked the journal contents to see how well the presentations lived up to the ISPE’s statistical aspirations.

One poster presentation addressed statin use and skin cancer risk in a French prospective cohort.[1]  The authors described their cohort of French women, who were 40 to 65 years old in 1990, and were followed forward. Exposure to statin medications was assessed from 2004 through 2014. The analysis included outcomes of any skin cancer, melanoma, basal-cell carcinoma (BCC), and squamous-cell carcinoma (SCC), among 66,916 women. Here is how the authors described their findings:

There was no association between ever use of statins and skin cancer risk: the HRs were 0.96 (95% CI = 0.87-1.05) for overall skin cancer, 1.18 (95% CI = 0.96-1.47) for melanoma, 0.89 (95% CI = 0.79-1.01) for BCC, and 0.90 (95% CI = 0.67-1.21) for SCC. Associations did not differ by statin molecule nor by duration or dose of use. However, women who started to use statins before age 60 were at increased risk of BCC (HR = 1.45, 95% CI = 1.07-1.96 for ever vs never use).

To be fair, this was a poster presentation, but this short description of findings makes clear that the investigators looked at at least the following subgroups:

Exposure subgroups:

  • specific statin drug
  • duration of use
  • dosage
  • age strata


Outcome subgroups:

  • melanoma
  • basal-cell carcinoma
  • squamous-cell carcinoma

The reader is not told how many specific statins, how many duration groups, dosage groups, and age strata were involved in the exposure analysis. My estimate is that the exposure subgroups were likely in excess of 100. With three disease outcome subgroups, the total subgroup analyses thus likely exceeded 300. The authors did not provide any information about the full extent of their analyses.
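Elementary probability shows why the undisclosed multiplicity matters. The sketch below takes my estimate of roughly 300 subgroup analyses (not a figure the authors disclosed) and, for simplicity, assumes the tests are independent:

```python
# Roughly 300 subgroup analyses is this post's estimate, not the authors'
# disclosure; independence of the tests is assumed for simplicity.
n_tests = 300
alpha = 0.05

# If every one of the 300 null hypotheses were true:
p_at_least_one = 1 - (1 - alpha) ** n_tests   # chance of >= 1 nominal "hit"
expected_false_positives = n_tests * alpha    # expected count of such hits

print(f"P(at least one false positive) = {p_at_least_one:.7f}")
print(f"Expected false positives       = {expected_false_positives:.0f}")
```

In other words, with 300 nominal tests at the 0.05 level, about 15 “statistically significant” findings are expected even if statins have no effect on any skin cancer outcome in any subgroup.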

Here is how the authors reported their conclusion:

“These findings of increased BCC risk in statin users before age 60 deserve further investigations.”

Now, the authors did not use the phrase “statistically significant,” but it is clear that they have characterized a finding of “increased BCC risk in statin users before age 60,” and in no other subgroup, and they have done so based upon a reported nominal “HR = 1.45, 95% CI = 1.07-1.96 for ever vs never use.” It is also clear that the authors have made no allowance, adjustment, modification, or qualification, for the wild multiplicity arising from their estimated 300 or so subgroups. Instead, they made an unqualified statement about “increased BCC risk,” and they offered an opinion about the warrant for further studies.
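A rough back-calculation makes the point concrete. This sketch assumes the hazard ratio estimate is log-normally distributed (the usual approximation for Cox-model estimates) and reuses my estimate of about 300 comparisons, which is not a figure the authors reported:

```python
import math

# Reported result: HR = 1.45, 95% CI 1.07-1.96, for BCC among women who
# started statins before age 60. Assume the log-HR is normally distributed.
hr, lo, hi = 1.45, 1.07, 1.96

se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # standard error of log-HR
z = math.log(hr) / se
p_nominal = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Crude Bonferroni threshold using this post's estimate of ~300 comparisons
alpha_adjusted = 0.05 / 300

print(f"nominal p-value   ~ {p_nominal:.4f}")
print(f"Bonferroni cutoff = {alpha_adjusted:.5f}")
print(f"survives adjustment? {p_nominal < alpha_adjusted}")
```

The nominal p-value of roughly 0.016 sits about two orders of magnitude above the Bonferroni cutoff of roughly 0.00017. Bonferroni is conservative, but even far gentler multiplicity adjustments would leave this subgroup finding unremarkable.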

Endorsement of good statistical practices is a welcome professional organizational activity, but it is rather meaningless unless the professional societies begin to implement the good practices in their article selection, editing, and publishing activities.

[1]  Marie Al Rahmoun, Yahya Mahamat-Saleh, Iris Cervenka, Gianluca Severi, Marie-Christine Boutron-Ruault, Marina Kvaskoff, and Agnès Fournier, “Statin use and skin cancer risk: A French prospective cohort study,” 29 Pharmacoepidemiol. & Drug Safety s645 (2020).


May 26th, 2020

The Genetic Literacy Project (GLP) asks:

“Who is David and who is Goliath in the lobbying battle over agricultural biotechnology? Activists? Agro-business? In a commitment to transparency, the GLP has mined 5 years of data to help the public understand the funding network that shapes the biotechnology debate.”

The amount of money flowing into the campaign against genetically modified organisms (GMOs) is astonishing, but it does not stop the hypocritical complaints against industry’s sponsorship of studies to help show the safety of GMOs. In a recent on-line article, the GLP has published charts to map contributions from not-for-profit non-governmental organizations to anti-biotechnology advocacy groups. Close to a billion dollars ($850M) flowed into the coffers of these organizations from 2012 to 2016. The GLP’s work on tracking this funding is commendable for bringing balance to the debate about the effect of corporate money on health and environmental issues. “Corporate” here includes the lawsuit industry and the advocacy industries.

Well actually, it would be a wonderful world if the GLP’s tracking were unnecessary. In one such alternative universe, people would ask to examine the evidence for and against claims, and they would have a healthy respect for uncertainty.

Studies funded by parties are routinely relied upon in litigation, and they are often pivotal in how courts decide significant claims of environmental or occupational harm.[1] Unfortunately, the sponsorship of studies by plaintiffs’ counsel, third-party litigation funding entities, and advocacy groups is often obscured or hidden.

* * * * * * * * * * * *

I recently happened upon an article of interest in an obscure journal, by a well-known author.[2]  The author, John C. Bailar, formerly an Editor-in-Chief of the Journal of the National Cancer Institute, was a professor emeritus in the University of Chicago’s Department of Public Health Sciences. He died in September 2016. Bailar was a graduate of the Yale University medical school, and also held a doctorate in statistics.

There is nothing ground breaking in Bailar’s article, but it is a nice summary of the ways that errors can creep into the scientific literature, short of actual fabrication or falsification of data.[3] It is also worth reading because it is an article that comes from one of the several Coronado Conferences, sponsored by an advocacy organization that has fraudulently concealed its funding, The Project on Scientific Knowledge and Public Policy, aka SKAPP.

To be sure, authors of SKAPP-funded articles have invariably cited their funding from SKAPP, and Bailar was no exception. Bailar made the following acknowledgements:

“Support for this paper was provided by The Project on Scientific Knowledge and Public Policy (SKAPP) at The George Washington University School of Public Health and Health Services. It is revised from a paper presented at SKAPP’s March 2006 Coronado Conference “Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science.” The papers from that conference will be published elsewhere.”[4]

The acknowledgement of support was rather anemic by SKAPP standards. Most SKAPP-funded articles recited something closer to the following, provided by David Michaels, who headed up SKAPP and worked as an expert witness for the litigation industry, until becoming the head of the Occupational Safety and Health Administration (OSHA) in President Obama’s administration:[5]

“DM [David Michaels] and CM [Celeste Monforton] are employed by the George Washington University School of Public Health and Health Services as part of the Project on Scientific Knowledge and Public Policy (SKAPP). Their salaries, in part, are funded by the Common Benefit Litigation Expense Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability litigation. SKAPP’s funding is unrestricted; its funders are not given advance notice or the opportunity to review or approve any documents produced by the project. PL [Peter Lurie] is with Public Citizen’s Health Research Group.”

Michaels’ statement was perhaps a little more forthcoming, but few scientists or lay persons would know that his salary and support came from plaintiffs’ lawyers as part of an active litigation effort. Although Michaels claimed that the funding was unrestricted, the sponsor, plaintiffs’ counsel, much like Big Tobacco funders, created a substantial selection effect by choosing beneficiaries who would deliver its pre-approved message. The Common Benefit Trust may sound like an eleemosynary, public-spirited organization, with the imprimatur of the federal court system. It was not.

Was Bailar influenced by his source of funding? His topic would have permitted him many examples from the annals of science or litigation, but interestingly one of the few examples Bailar chose to give details about was a scientific dispute between the semiconductor industry and Richard Clapp, who was acting as an expert witness in litigation against that industry. Although Clapp used a study design known to be inaccurate and biased, Bailar touted Clapp’s research over that sponsored by members of the industry. Richard Clapp, in addition to having been an expert witness for the litigation industry on many occasions, also happened to have been a member of SKAPP’s advisory committee. Hmmm.

Whence comes SKAPP funding?  SKAPP trades on most readers’ lack of familiarity with how “common benefit funds” are established.  They sound like some sort of disembodied charitable trust, such as the Pew. In fact, the silicone common benefit trust was nothing more than a funding device for mass federal litigation involving silicone breast implants. Ironically, the funding came from a litigation in which one leading judge described plaintiffs’ expert witnesses as “charlatans,” and the litigation claims as largely based upon fraud.[6] Cynics might believe that Bailar’s choice of Clapp versus the semiconductor industry, regardless of the merits, was driven by a desire to please SKAPP & Clapp.

The common benefit fund for the silicone-gel breast implant litigation was created by Order 13, “Establishing Plaintiffs’ Litigation Expense Fund to Compensate and Reimburse Attorneys for Services Performed and Expenses Incurred for Common Benefit.” The late Judge Sam Pointer, appointed to preside over MDL 926, In re Silicone Gel Breast Implants Products Liability Litigation, Master File No. CV 92-P-10000-S, entered the order on July 23, 1993.  Some of the pertinent terms of Order 13 illustrate how it was supposed to operate:

This order is entered in order to provide for the fair and equitable sharing among plaintiffs of the cost of special services performed and expenses incurred by attorneys acting for the common benefit of all plaintiffs.

  1. Plaintiffs’ Litigation Expense Fund to be Established. Plaintiffs’ National Liaison Counsel … are directed to establish an interest-bearing account to receive and disburse funds as provided in this order.


  2. Assessment.

(a)    All plaintiffs and their attorneys who, after this date, either agree — for a monetary consideration — to settle, compromise, dismiss, or reduce the amount of a claim or, with or without a trial, recover a judgment for monetary damages or other monetary relief, including both compensatory and punitive damages, with respect to a breast implant claim are hereby assessed:

(1)    5% of the “gross monetary recovery,” if the agreement is made or the judgment is entered after this date and before November 1, 1993, or

(2)    6% of the “gross monetary recovery,” if the agreement is made or the judgment is entered after October 31, 1993.

Defendants are directed to withhold this assessment from amounts paid to plaintiffs and their counsel, and to pay the assessment into the fund as a credit against the settlement or judgment.  ***

  3. Disbursements.

(a)    Payments may be made from the fund to attorneys who provide services or incur expenses for the joint and common benefit of plaintiffs in addition to their own client or clients.  Attorneys eligible are not limited to Plaintiffs’ National Liaison Counsel and members of Plaintiffs’ National Steering Committee, but include, for example, other attorneys called upon by them to assist in performing their responsibilities, State Liaison Counsel, and other attorneys performing similar responsibilities in state court actions in which the presiding state-court judge has imposed similar obligations upon plaintiffs to contribute to the fund.

(b)    Payments will be allowed only to compensate for special services performed, and to reimburse for special expenses incurred, for the joint and common benefit of all plaintiffs.


(c)    No amounts will be disbursed without review and approval by a committee of federal and state judicial officers to be designated by the court.  The committee may, however, utilize the services of a special master to assist in this review, and may authorize one or more of its members to act for the committee in approving particular types of applications for disbursement.

(d)    If the fund exceeds the amount needed to make payments as provided in this order, the court will order a refund to those who have contributed to the fund. Any such refund will be made in proportion to the amount of the contributions.
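For illustration only, the assessment mechanics can be reduced to a few lines of code. The 5% and 6% rates and the November 1, 1993 cutoff come from Order 13 as quoted above; the function name and dollar figures are hypothetical:

```python
from datetime import date

# Illustrative sketch of the Order 13 assessment. The order also limits
# assessments to recoveries obtained after its entry on July 23, 1993,
# which this sketch ignores.
def order13_assessment(gross_recovery: float, entry_date: date) -> float:
    """Return the common-benefit assessment withheld from a recovery."""
    rate = 0.05 if entry_date < date(1993, 11, 1) else 0.06
    return gross_recovery * rate

# A hypothetical $100,000 recovery, before and after the cutoff date:
print(order13_assessment(100_000, date(1993, 9, 15)))   # 5% rate applies
print(order13_assessment(100_000, date(1993, 12, 1)))   # 6% rate applies
```

Defendants withheld these amounts from payments to plaintiffs and paid them into the fund, which is how plaintiffs’ counsel accumulated the common-benefit monies discussed below.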

For a while, a defense lawyer, representing the defendants in the silicone MDL, participated in discussions concerning MDL 926 Order 13 funds, until the plaintiffs’ lawyers decided that his services were not needed, and excluded him from discussions of the use of the monies. The reality is that the plaintiffs’ lawyers in the silicone litigation were able to bamboozle the slim oversight committee into approving a propaganda campaign against Daubert gatekeeping, and that recipients of the plaintiffs’ lawyers’ largesse were able to misrepresent their funding as though it were from a federal court.

There are further ironies connected with the silicone common benefit trust. First, the silicone litigation was effectively over when the court-appointed expert witnesses’ reports announced that the plaintiffs’ expert witnesses lacked sound scientific evidence to support conclusions of causation. SKAPP’s website reports that its activities started around 2002, by which time the court-appointed witnesses, as well as the British Ministry of Health and the Institute of Medicine’s select committee, had reported that there was no basis for the plaintiffs’ causal claims in litigation.[7] The second irony is that SKAPP, through its sponsorship of various research and writing projects, had made the recipients of SKAPP money, by the terms of Order 13, agents of the silicone plaintiffs’ lawyers and their clients. Recipients of SKAPP funding who did not disclose that their support or salaries came from the coffers of plaintiffs’ counsel were engaged in misleading their readers and the scientific and legal communities.

I have written often in the past about SKAPP as an agent of plaintiffs’ counsel in mass tort litigation.[8] The concern is not new, but it has continuing significance because of the asymmetrical standard advanced by the lawsuit industry and its scientific advisors who seek to disqualify manufacturing industry and its scientific advisors from participating in scientific debate and argument about various health claims.[9]

[1]  See, e.g., Leaf River Forest Prods. v. Ferguson, 662 So. 2d 648, 657 (Miss. 1995) (litigation involving defense expert witness’s reliance upon dioxin studies funded by defendant paper mills); Maurer v. Heyer-Schulte Corp., No. Civ. A. 92-3485, 2002 WL 31819160 at *3 (E.D. La. Dec. 13, 2002) (granting defendant’s summary judgment against plaintiff’s claim that breast implants caused her harm; citing defendants’ sponsored epidemiologic studies showing no causal link, including epidemiologic study conducted in Sweden); Nat’l Res. Def. Council v. Evans, 232 F. Supp. 2d 1003, 1013 (N.D. Cal. 2002) (“commend[ing] defendants’ sponsorship of independent scientific research…”); FTC v. Pantron I, Corp., 1991 U.S. Dist. LEXIS 21858 (C.D. Cal. Sept 6, 1991) (finding study funded by defendants met “basic and fundamental requirements for scientific validity and reliability”).

[2]  John C. Bailar, “How to distort the scientific record without actually lying: truth, and the arts of science,” 11 European J. Oncol. 217 (2006).

[3]  Id. at 218.

[4]  Id. at 223.

[5]  David Michaels, Celeste Monforton & Peter Lurie, “Selected science: an industry campaign to undermine an OSHA hexavalent chromium standard,” 65 Envt’l Health 5 (2006).

[6]     Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009).

[7]   Independent Review Group, Silicone Breast Implants: The Report of the Independent Review Group 8, 22-23 (July 1998) (concluding that there was no demonstrable risk of connective tissue disease from silicone breast implants); Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (1999) (rejecting plaintiffs’ theories and litigation claims of systemic disease).

[8]   “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013). See also Walter Olson, Schools for Misrule: Legal Academia and an Overlawyered America 121-22 (2011); David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1, 39 & n.211 (2015); Ted Frank, “Daubert Debate,” Overlawyered (July 5, 2003); Peter Nordberg, “Bernstein on SKAPP (part 1),” Daubert on the Web (July 2, 2003).

[9]   Consider the media hysteria over President Obama’s nomination of Dr. Robert Califf to serve as Commissioner of the Food and Drug Administration. The criticism was based upon his having served as the founding director of the Duke Clinical Research Institute, which received funding directly from pharmaceutical companies. The Senate confirmed Califf (89 to 4), but the controversy highlights the hypocrisy in play. Brady Dennis, “Senate confirms Robert Califf as new FDA commissioner,” Wash. Post (Feb. 24, 2016).

Data Games – A Techno Thriller

April 22nd, 2020

Sherlock Holmes, Hercule Poirot, Miss Marple, Father Brown, Harry Bosch, Nancy Drew, Joe and Frank Hardy, Sam Spade, Columbo, Lennie Briscoe, Inspector Clouseau, and Dominic Da Vinci:

Move over; there is a new super sleuth in town.

Meet Professor Ken Wheeler.

Ken is a statistician, and so by profession, he is a data detective. In his day job, he teaches at a northeastern university, where his biggest challenges are managing the expectations of students and administrators, while trying to impart statistical learning. At home, Ken rarely manages to meet the expectations of his wife and son. But as some statisticians are wont to do, Ken sometimes takes on consulting gigs that require him to use his statistical skills to help litigants sort out the role of chance in cases that run from discrimination claims to rare health effects. In this contentious, sharp-elbowed environment, Ken excels. And truth be told, Ken finds great satisfaction in identifying the egregious errors and distortions of adversary statisticians.

Wheeler’s sleuthing usually involves ascertaining random error or uncovering a lurking variable, but in Herbert I. Weisberg’s just-published novel, Data Games: A Techno Thriller, Wheeler is drawn into a high-stakes conspiracy of intrigue, violence, and fraud that goes way beyond run-of-the-mine p-hacking and data dredging.

An urgent call from a scientific consulting firm puts Ken Wheeler in the midst of imminent disaster for a pharmaceutical manufacturer, whose immunotherapy anti-cancer wonder drug, Verbana, is under attack. A group of apparently legitimate scientists have obtained the dataset from Verbana’s pivotal clinical trial, and they appear on the verge of blowing Verbana out of the formulary with a devastating analysis that will show that the drug causes early dementia. Wheeler’s mission is to debunk the debunking analysis when it comes.

For those readers who are engaged in the litigation defense of products liability claims against medications, the scenario is familiar enough. The scientific group studying Verbana’s alleged side effect seems on the up-and-up, but they appear to be engaged in a cherry-picking exercise, guided by a dubious theory of biological plausibility, known as the “Kreutzfeld hypothesis.”

It is not often that mystery novels turn on surrogate outcomes, biomarkers, genomic medicine, and predictive analytics, but Data Games is no ordinary mystery. And Wheeler is no ordinary detective. To be sure, the middle-aged Wheeler drives a middle-aged BMW, not a Bond car, and certainly not a Bonferroni. And Wheeler’s toolkit may not include a Glock, but he can handle the lasso, the jackknife, and the logit, and serve them up with SAS. Wheeler sees patterns where others see only chaos.

Unlike the typical Hollywood rubbish about stereotyped evil pharmaceutical companies, the hero of Data Games finds that there are sinister forces behind what looks like an honest attempt to uncover safety problems with Verbana. These sinister forces will use anything to achieve their illicit ends, including superficially honest academics with white hats. The attack on Verbana gets the FDA’s attention and an urgent hearing in White Oak, where Wheeler shines.

The author of Data Games, Herbert I. Weisberg, is himself a statistician, and a veteran of some of the dramatic data games he writes about in this novel. Weisberg is perhaps better known for his “homework” books, such as Willful Ignorance: The Mismeasure of Uncertainty (2014), and Bias and Causation: Models and Judgment for Valid Comparisons (2010). If, however, you ever find yourself in a pandemic lockdown, Weisberg’s Data Games: A Techno Thriller is a perfect way to escape. For under $3, you will be entertained, and you might even learn something about probability and statistics.

Dark Money, Scott Augustine, and Hot Air

April 11th, 2020

Fraud by the litigation industry takes many different forms. In the massive silicosis litigation unleashed in Mississippi and Texas in the early 2000s, plaintiffs’ lawyers colluded with physicians to concoct dubious diagnoses of silicosis. Fraudulent diagnoses of silicosis led to dismissals of thousands of cases, as well as the professional defrocking of some physician witnesses.[1] For those trying to keep up with the lawsuit industry’s publishing arm, discussion of the Great Silicosis Fraud is completely absent from David Michaels’ recent book, The Triumph of Doubt.[2] So too is any mention of the “dark money” that propelled the recently concluded Bair Hugger litigation.

Back in 2017, I wrote about the denial of a Rule 702 motion in the Bair Hugger litigation.[3] At the time, I viewed the trial court’s denial, on the facts of the case, to be a typical failure of gatekeeping.[4] Events in the Bair Hugger cases were only warming up in 2017.

After the court’s ruling, 3M took the first bellwether case to trial and won a jury verdict on May 30, 2018. Perhaps this verdict encouraged the MDL trial judge to take 3M’s motion for reconsideration of the Rule 702 ruling seriously. In July 2019, the MDL court granted 3M’s motion to exclude the opinion testimony of plaintiffs’ general causation and mechanism expert witnesses, Drs. Jarvis, Samet, Stonnington, and Elghobashi.[5] Without these witnesses, over 5,000 plaintiffs, who had been misled about the merits of their cases, were stranded and set up for dismissal. On August 2, 2019, the MDL cases were dismissed for want of evidentiary support on causation. On August 29, 2019, plaintiffs filed a joint notice of appeal to the Eighth Circuit.

The two Bair Hugger Rule 702 federal court decisions focused (or failed to focus) on scientific considerations. Most of the story of “dark money” and the manufacturing of science to support the litigation was suppressed in the Rule 702 motion practice, and in the federal jury trial. In her second Rule 702 reconsideration opinion, the MDL judge did mention undisclosed conflicts of interest by authors of the key studies relied upon by plaintiffs’ witnesses.[6]

To understand how the Bair Hugger litigation got started, and to obtain a full understanding of what the nature of the scientific evidence was, a disinterested observer will have to read the state court decisions. Defendant 3M moved, in the Minnesota state court cases, to exclude plaintiffs’ causation expert witnesses under the so-called Frye standard. The state judge excluded plaintiffs’ witnesses for advancing a novel scientific theory that lacked acceptance in the relevant scientific community. The Minnesota Court of Appeals affirmed, with a decision that talked rather more freely about the plaintiffs’ counsel’s dark money. In re 3M Bair Hugger Litig., 924 N.W.2d 16 (Minn. App. 2019) [cited as Bair Hugger].

As the Minnesota Court of Appeals explained, a forced-air warming device (FAWD) is a very important, useful device for keeping patients’ body temperatures normal during surgery. The “Bair Hugger” is a FAWD, invented in 1987 by Dr. Scott Augustine, an anesthesiologist who at the time was the chief executive officer of Augustine Medical, Inc. Bair Hugger at 19.

Over the following 15 years, the Bair Hugger became the leading FAWD in the world. In 2002, the federal government notified Augustine that it was investigating him for Medicare fraud. Augustine resigned from the company that bore his name, and the company purged the taint by reorganizing as Arizant Healthcare Inc. (Arizant), which continued to make the Bair Hugger. In the following year, 2003, Augustine pleaded guilty to fraud and paid a $2 million fine. His sentence included a five-year ban from involvement in federal health-care programs.

During the years of his banishment, fraudfeasor Augustine developed a rival product and then embarked upon a global attack on the safety of his own earlier invention, the Bair Hugger. In the United Kingdom, his claim that the Bair Hugger increased risks of surgical site infections was rejected by the UK National Institute for Health and Clinical Excellence. A German court enjoined Augustine from falsely claiming that the Bair Hugger led to increased bacterial contamination.[7] The United States FDA considered and rejected Augustine’s claims, and recommended the use of FAWDs.

In 2009, Augustine began to work as a non-testifying expert witness with the Houston, Texas, plaintiffs’ law firm of Kennedy Hodges LLP. A series of publications resulted in which the authors attempted to raise questions about the safety of the Bair Hugger. By 2013, with the medical literature “seeded” with several studies attacking the Bair Hugger, the Kennedy Hodges law firm began to manufacture law suits against Arizant and 3M (which had bought the Bair Hugger product line from Arizant in 2010). Bair Hugger at 20.

The seeding studies were marketing and litigation propaganda used by Augustine to encourage the all-too-complicit lawsuit industry to ramp up production of complaints against 3M over the Bair Hugger. Several of the plaintiffs’ studies included as an author a young statistician, Mark Albrecht, an employee of, or a contractor for, Augustine’s new companies, Augustine Temperature Management and Augustine Medical. Even when disclosures were made, they were at best “anemic”:

“The author or one or more of the authors have received or will receive benefits for personal or professional use from a commercial party related directly or indirectly to the subject of this article.”[8]

Some of these studies included a disclosure that Albrecht was funded or employed by Augustine, but they did not disclose the protracted, bitter feud or Augustine’s confessed fraudulent conduct. Another author of some of the plaintiffs’ studies was David Leaper, who was a highly paid “consultant” to Augustine at the time of the work on the studies. None of the studies disclosed Leaper’s consultancy for Augustine:

  1. Mark Albrecht, Robert Gauthier, and David Leaper, “Forced air warming, a source of airborne contamination in the operating room?” 1 Orthopedic Rev. (Pavia) e28 (2009)
  2. Mark Albrecht, Robert L. Gauthier, Kumar Belani, Mark Litchy, and David Leaper, “Forced-air warming blowers: An evaluation of filtration adequacy and airborne contamination emissions in the operating room,” 39 Am. J. Infection Control 321 (2011)
  3. P.D. McGovern, Mark Albrecht, Kumar Belani, C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537 (2011)
  4. K.B. Dasari, Mark Albrecht, and M. Harper, “Effect of forced-air warming on the performance of operating-theatre laminar-flow ventilation,” 67 Anaesthesia 244 (2012)
  5. Mike Reed, Oliver Kimberger, Paul D. McGovern, and Mark C. Albrecht, “Forced-Air Warming Design: Evaluation of Intake Filtration, Internal Microbial Buildup, and Airborne-Contamination Emissions,” 81 Am. Ass’n Nurse Anesthetists 275 (2013)
  6. Kumar Belani, Mark Albrecht, Paul McGovern, Mike Reed, and Christopher Nachtsheim, “Patient warming excess heat: the effects on orthopedic operating room ventilation performance,” 117 Anesthesia & Analgesia 406 (2013)

In one study, Augustine’s employee Mark Albrecht conducted the experiment with one of the authors, and wrote an early draft of the study, but was not listed as an author. Augustine provided all the equipment used in the experiment. The published paper failed to disclose any of these questionable activities:

  1. A.J. Legg & A.J. Hammer, “Forced-air patient warming blankets disrupt unidirectional flow,” 95 Bone & Joint J. 407 (2013)

Another study had more peripheral but still questionable involvement of Augustine, whose company lent the authors equipment used to conduct the study, without proper acknowledgment and disclosure:

  1. A.J. Legg, T. Cannon, and A. J. Hamer, “Do forced-air warming devices disrupt unidirectional downward airflow?” 94 J. Bone & Joint Surg. – British 254 (2012)

In addition to the defects in the authors’ disclosures, 3M discovered that two of the studies had investigated whether the Bair Hugger spread bacteria in the surgical area. Although the experiments found no spread with the Bair Hugger, the researchers never publicly disclosed their exculpatory evidence.[9]

Augustine’s marketing campaign, through these studies, ultimately fell flat at the FDA, which denied his citizen’s petition and recommended that surgeons continue to use FAWDs such as the Bair Hugger.[10] Augustine’s proxy litigation war against 3M also fizzled, unless the 8th Circuit revives his vendetta. Nonetheless, the Augustine saga raises serious questions about how litigation funding of “scientific studies” will vex the search for the truth in pharmaceutical products litigation. The Augustine attempt to pollute the medical literature was relatively apparent, but dark money from undisclosed financiers may require greater attention from litigants and from journal editors.

[1]  In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D.Tex. 2005).

[2]  David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]  In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017).

[4]  “Gatekeeping of Expert Witnesses Needs a Bair Hug” (Dec. 20, 2017).

[5]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., MDL No. 15-2666, 2019 WL 4394812 (D. Minn. July 31, 2019). See Joe G. Hollingsworth & Caroline Barker, “Exclusion of Junk Science in ‘Bair Hugger’ MDL Shows Daubert Is Still Breathing,” Wash. Leg. Foundation (Jan. 23, 2020); Christine Kain, Patrick Reilly, Hannah Anderson and Isabelle Chammas, “Top 5 Drug And Medical Device Developments Of 2019,” Law360 (Jan. 9, 2020).

[6]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., 2019 WL 4394812, at *10 n.13 (D. Minn. July 31, 2019) (observing that “[i]n the published study, the authors originally declared no conflicts of interest”).

[7]  Dr. Augustine has never been a stranger to the judicial system. See, e.g., Augustine Medical, Inc. v. Gaymar Industries, Inc., 181 F.3d 1291 (Fed. Cir. 1999); Augustine Medical, Inc. v. Progressive Dynamics, Inc., 194 F.3d 1367 (Fed. Cir. 1999); Cincinnati Sub-Zero Products, Inc. v. Augustine Medical, Inc., 800 F. Supp. 1549 (S.D. Ohio 1992).

[8]  P.D. McGovern, Mark Albrecht, Kumar Belani, and C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537, 1544 (2011).

[9]  See

[10]  William Maisel, “Information about the Use of Forced Air Thermal Regulating Systems – Letter to Health Care Providers”; Center for Devices and Radiological Health, U.S. Food and Drug Administration (Aug. 30, 2017).