TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Ingham v. Johnson & Johnson – Passing Talc Off As Asbestos

June 26th, 2020

In talc exposure litigation of ovarian cancer claims, plaintiffs were struggling to show that cosmetic talc use caused ovarian cancer, despite missteps by the defense.[1] And then lawsuit industrialist Mark Lanier entered the fray and offered a meretriciously beguiling move: Stop trying talc cases and start trying asbestos cases.

The Ingham appellate decision this week from the Missouri Court of Appeals appears to be a superficial affirmation of the Lanier strategy.[2] The court gave defendants some relief on jurisdictional issues, but largely affirmed the admissibility of Lanier’s expert witnesses on medical causation, both general and specific.[3]

After all, asbestos is an established cause of ovarian cancer. Or is it?

In 2006, the Institute of Medicine (now the National Academy of Medicine) addressed extra-pulmonary cancers caused by asbestos, without ever mentioning ovarian carcinoma.[4] Throughout the 20th century, and for a decade into the 21st, many textbooks and reviews were unable to conclude that asbestos of any type caused ovarian cancer. The world of opinions changed, however, in 2011, when a working group of the International Agency for Research on Cancer (IARC) met in Lyon, France, and issued its support for the general causation claim in a suspect document published in 2012.[5] The IARC has strict rules that prohibit anyone with any connection to manufacturing industry from serving on its working groups, but the Agency allows consultants and contractors for the lawsuit industry to serve without limitation. The 2011 working group on fibers and dusts thus sported lawsuit industry acolytes such as Peter F. Infante, Jonathan Samet, and Philip J. Landrigan.

Given the composition of this working group, no one was surprised by its finding:

“The Working Group noted that a causal association between exposure to asbestos and cancer of the ovary was clearly established, based on five strongly positive cohort mortality studies of women with heavy occupational exposure to asbestos (Acheson et al., 1982; Wignall & Fox, 1982; Germani et al., 1999; Berry et al., 2000; Magnani et al., 2008). The conclusion received additional support from studies showing that women and girls with environmental, but not occupational exposure to asbestos (Ferrante et al., 2007; Reid et al., 2008, 2009) had positive, though non-significant, increases in both ovarian cancer incidence and mortality.”[6]

The herd mentality is fairly strong in the world of occupational medicine, but not everyone concurred. A group of Australian asbestos researchers (Reid et al.), without lawsuit industry credentials, also published a meta-analysis in 2011.[7] Although the Australian researchers reported an increased summary estimate of risk, they were careful to point out that this elevation may have resulted from disease misclassification:

“In the studies that did not examine ovarian cancer pathology, or confirmed cases of mesothelioma from a cancer or mesothelioma registry, misclassification of the cause of death in some cases is likely to have occurred, given that misclassification was reported in those studies that did reexamine cancer pathology specimens. Misclassification may result in an underestimate of peritoneal mesothelioma and an overestimate of ovarian cancer or the converse. Among women, peritoneal mesothelioma may be more likely to be classified as ovarian, colon, or stomach cancer, rather than a rare occupational cancer.”[8]

The authors noted that Irving Selikoff had first reported that a significant number of peritoneal cancers, likely mesothelial in origin, had been misclassified as ovarian cancers. Studies that relied upon death certificates alone might thus be very misleading. Supporting the danger of misclassification, the Reid study reported that:

“Only the meta-analysis of those studies that reported ovarian cancer incidence (i.e., those studies that did not rely on cause of death certification to classify their cases of ovarian cancer) did not observe a significant excess risk.”[9]
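
To see how much leverage a few misclassified deaths can exert, consider a minimal sketch, with entirely hypothetical counts (illustrative assumptions, not figures from Reid or any other study), of a standardized mortality ratio computed with and without a handful of peritoneal mesotheliomas coded on death certificates as ovarian cancer:

# Hypothetical illustration of death-certificate misclassification bias.
# All counts are invented for illustration; they are not from Reid et al.
expected_ovarian_deaths = 20.0    # expected from general-population rates
true_ovarian_deaths = 22          # deaths that really were ovarian cancer
misclassified_mesotheliomas = 6   # peritoneal mesotheliomas coded as ovarian cancer
smr_true = true_ovarian_deaths / expected_ovarian_deaths
smr_observed = (true_ovarian_deaths + misclassified_mesotheliomas) / expected_ovarian_deaths
print(f"SMR without misclassification: {smr_true:.2f}")   # 1.10
print(f"SMR with misclassification:    {smr_observed:.2f}")   # 1.40

On these assumed numbers, a modest, unremarkable excess becomes a forty percent apparent elevation, without any change in the cohort’s true ovarian cancer experience.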

Reid also reported the absence of other indicia of causation:

“No study showed a statistically significant trend  of ovarian cancer with degree of asbestos exposure. In addition, there was no evidence of a significant trend across studies as grouped exposure increased.”[10]

Other scientists and physicians have acknowledged the controversial nature of the IARC’s determination. In 2011, pathologist Samuel Hammar, who has testified regularly for the lawsuit industry, voiced concerns about the diagnostic accuracy of ovarian cancer cases in asbestos studies:

“It has been difficult to draw conclusions on the basis of epidemiologic studies of ovarian cancers because, histologically, their distinction between peritoneal mesothelioma and carcinomatous peritonei (including primary peritoneal serous papillary adenocarcinoma) is difficult. Ovarian tumors tend to grow by extension and uncommonly metastasize through the bloodstream, which is similar to tumors of mesothelial origin … .”[11]

In 2014, a working group of the Finnish Institute of Occupational Health noted that “despite the conclusions by IARC and the support from recent studies, the hypothesis that asbestos is [a] cause of ovarian cancer remains controversial.”[12] The same year, 2014, the relevant chapter in a leading textbook by Dr. Victor L. Roggli and colleagues opined that:

“the balance of the evidence available at present does not support an association between asbestos exposure and cancers of the female reproductive system.”[13]

Two years later, a text by Dr. Dorsett D. Smith cited “the lack of certainty of the pathologic diagnosis of ovarian cancer versus a peritoneal mesothelioma in epidemiologic studies” as making the epidemiology uninterpretable and any conclusions impossible.[14]

Against this backdrop of evidence, I took a look at what Johnson & Johnson had to say about the occupational asbestos epidemiology in its briefs, in section “B. Studies on asbestos and ovarian cancer.”[15] The defense acknowledged that plaintiffs’ expert witnesses Drs. Jacqueline Moline and Dean Felsher focused on the IARC conclusion, and on studies of heavy occupational exposure. J & J recited without comment or criticism what plaintiffs’ expert witnesses had testified, much of which was quite objectionable.[16]

For instance, Moline and Felsher both reprised the scientifically and judicially debunked views that there is “no known safe level of exposure,” from which they inferred the non-sequitur that “any amount above ordinary background levels – could cause ovarian cancer.”[17] From ignorance, nothing derives but conjecture.

Another example was Felsher’s testimony that asbestos can make the body of an ovarian cancer patient therapy-resistant. In response to these and other remarkable assertions, J & J countered with only the statement that their expert witness, Dr. Huh, “did not agree that all of this was true in the context of ovarian cancer.”[18]

Huh, indeed; that the defense expert witness disagreed with some of what plaintiffs’ witnesses claimed hardly frames an issue for exclusion of any expert witness’s opinion. Even more disturbing, there is no appellate point that corresponds to a motion to exclude Dr. Moline’s testimony.

The Egilman Challenge

There was a challenge to the testimony of another expert witness, David Egilman, a frequent testifier for Mark Lanier and other lawsuit industrialists. One of the defendants’ appellate challenges to the admissibility of Egilman’s testimony was his use of a 1972 NIOSH study that apparently quantified exposure in terms of fibers per cubic centimeter, without specifying whether all of the measured fibers were asbestos fibers, as opposed to non-asbestos fibers, including talc fibers.

The Missouri Court of Appeals rejected this specific challenge in part because Egilman had explained that:

“whether the 1972 NIOSH study identified fibers specifically as ‘asbestos’ was inconsequential, as the only other possible fiber that could be present in a talc sample is a ‘talc fiber, which is chemically identical to anthophyllite asbestos and structurally the same’.”[19]

Talc typically crystallizes in small plates, but it can occur occasionally as fibers. Egilman, however, treated a talc fiber as chemically and structurally identical to an anthophyllite fiber.

Does Egilman’s opinion hold water?

No, Egilman has wet himself badly (assuming the Missouri appellate court quoted testimony accurately).

According to the Mineralogical Society of America’s Handbook of Mineralogy (and every other standard work on mineralogy I reviewed), anthophyllite and talc, whether in fibrous habit or not, are two different minerals, with very different chemical formulae, crystal chemistry, and structure.[20] Anthophyllite has the chemical formula (Mg,Fe²⁺)₂(Mg,Fe²⁺)₅Si₈O₂₂(OH)₂, and is an amphibole double-chain silicate. Talc, on the other hand, is a phyllosilicate, a hydrated magnesium silicate with the chemical formula Mg₃Si₄O₁₀(OH)₂. Talc crystallizes in the triclinic class, although sometimes monoclinic, and its crystals are platy and very soft.

If the Missouri Court of Appeals characterized Egilman’s testimony correctly on this point, then Egilman gave patently false testimony. Talc and anthophyllite are different chemically and structurally.


[1]  See “The Slemp Case, Part I – Jury Verdict for Plaintiff – 10 Initial Observations”; “The Slemp Case, Part 2 – Openings”; “Slemp Trial Part 3 – The Defense Expert Witness – Huh”; “Slemp Trial Part 4 – Graham Colditz”; “Slemp Trial Part 5 – Daniel W. Cramer”; “Lawsuit Magic – Turning Talcum into Wampum”; “Talc Litigation Supported by Slippery Expert Witness” (2017).

[2]  Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (June 23, 2020) (Slip op.).

[3]  Cara Salvatore, “Missouri Appeals Court Slashes $4.7B Talc Verdict Against J&J,” Law360 (June 23, 2020).

[4]  Jonathan M. Samet, et al., Asbestos: Selected Cancers (I.O.M. Committee on Asbestos: Selected Health Effects 2006).

[5]  International Agency for Research on Cancer, A Review of Human Carcinogens, Monograph Vol. 100, Part C: Arsenic, Metals, Fibres, and Dusts (2012).

[6]  Id. at 256. Some members followed up their controversial finding with an attempt to justify it with a meta-analysis; see M. Constanza Camargo, Leslie T. Stayner, Kurt Straif, Margarita Reina, Umaima Al-Alem, Paul A. Demers, and Philip J. Landrigan, “Occupational Exposure to Asbestos and Ovarian Cancer: A Meta-analysis,” 119 Envt’l Health Persp. 1211 (2011).

[7]  Alison Reid, Nick de Klerk, and Arthur W Musk, “Does Exposure to Asbestos Cause Ovarian Cancer? A Systematic Literature Review and Meta-Analysis,” 20 Cancer Epidemiol., Biomarkers & Prevention 1287 (2011) [Reid].

[8]  Reid at 1293, 1287.

[9]  Id. at 1293.

[10]  Id. at 1294.

[11]  Samuel Hammar, Richard A. Lemen, Douglas W. Henderson & James Leigh, “Asbestos and other cancers,” chap. 8, in Ronald F. Dodson & Samuel P. Hammar, eds., Asbestos: Risk Assessment, Epidemiology, and Health Effects 435 (2nd ed. 2011) (internal citation omitted).

[12]  Finnish Institute of Occupational Health, Asbestos, Asbestosis and Cancer – Helsinki Criteria for Diagnosis and Attribution 60 (2014) (concluding that there was an increased risk in cohorts of women with “relatively high asbestos exposures”).

[13]  Faye F. Gao and Tim D. Oury, “Other Neoplasia,” chap. 8, in Tim D. Oury, Thomas A. Sporn & Victor L. Roggli, eds., in Pathology of Asbestos-Associated Diseases 177, 188 (3d ed. 2014).

[14]  Dorsett D. Smith, The Health Effects of Asbestos: An Evidence-based Approach 208 (2016).

[15]  Brief of Appellants Johnson & Johnson and Johnson & Johnson Consumer Inc., at 29, in Ingham v. Johnson & Johnson, No. ED107476, Missouri Court of Appeals for the Eastern District (St. Louis) (filed Sept. 6, 2019) [J&J Brief].

[16]  Id. at 30.

[17]  See Mark A. Behrens & William L. Anderson, “The ‘Any Exposure’ Theory: An Unsound Basis for Asbestos Causation and Expert Testimony,” 37 SW. U. L. Rev. 479 (2008); William L. Anderson, Lynn Levitan & Kieran Tuckley, “The ‘Any Exposure’ Theory Round II — Court Review of Minimal Exposure Expert Testimony in Asbestos and Toxic Tort Litigation Since 2008,” 22 Kans. J. L. & Pub. Pol’y 1 (2012); William L. Anderson & Kieran Tuckley, “The Any Exposure Theory Round III: An Update on the State of the Case Law 2012 – 2016,” Defense Counsel J. 264 (July 2016); William L. Anderson & Kieran Tuckley, “How Much Is Enough? A Judicial Roadmap to Low Dose Causation Testimony in Asbestos and Tort Litigation,” 42 Am. J. Trial Advocacy 38 (2018).

[18]  Id. at 30.

[19]  Slip op. at 54.

[20]  John W. Anthony, Richard A. Bideaux, Kenneth W. Bladh, and Monte C. Nichols, Handbook of Mineralogy (Mineralogical Soc’y of America 2001).

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an online popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must take on much the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and communication skills the task requires.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter who often covers science issues. In his Undark piece, Smith covered some of the oft-told history of the Daubert case, a history recounted better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt” – and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article, however, will you find the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases, however, eventually become obvious. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a science journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out the base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.
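
The background arithmetic that Smith never supplies is simple enough to sketch. The figures below are the ones cited in the paragraph above (four million annual births and a roughly three percent major-malformation rate); nothing in the calculation is specific to Bendectin or to any particular defect:

# Back-of-the-envelope background risk, using the figures cited above.
annual_live_births = 4_000_000
major_malformation_rate = 0.03    # ~3% of live births have a major malformation
expected_malformations = annual_live_births * major_malformation_rate
print(f"Expected major congenital malformations per year: {expected_malformations:,.0f}")
# roughly 120,000 per year, most with no identifiable cause

Against a base rate of that magnitude, an expert witness needs something more than a mother’s recollection of exposure to attribute any one child’s defect to a medication.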

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he describes some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations that Done perjured himself about his credentials. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger as saying that claims from both sides in the Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce is “not convinced the drug should be back on the market,” but really why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggested necessity of reliance upon the litigation system as an adjunct to regulatory approval is at best misplaced and tenuous.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open-minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert did not see the case as having ruined his life. Indeed, Jason seemed to approve the basic principle of the Daubert case, and the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”


[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2]  Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

Counter Cancel Culture Part III – Fixing Science

February 14th, 2020

This is the last of three posts about Cancel Culture, and the National Association of Scholars (NAS) conference on Fixing Science, held February 7th and 8th, in Oakland, California.

In finding my participation in the National Association of Scholars’ conference on Fixing Science “worrying” and “concerning,” John Mashey takes his cues from the former OSHA Administrator, David Michaels. David Michaels has written much about industry conflicts of interests and efforts to influence scientific debates and discussions. He popularized the notion of “manufacturing doubt” with his book, Doubt is Their Product.[1] I leave it to others to decide whether Mashey’s adverting to Michaels’ work, in finding my writings on silica litigation “concerning” and “worrying,” is itself worrisome. In order to evaluate Mashey’s argument, such as it is, the reader should know something more about David Michaels and his publications.[2]

As one might guess from its title, The Triumph of Doubt: Dark Money and the Science of Deception, Michaels’ new book appears to be a continuation of his attack on industry’s efforts to influence regulation. I confess not to have read this new book yet, but I am willing to venture a further guess that the industry Michaels is targeting is manufacturing industry, not the lawsuit industry, for which he has worked on many occasions. There is much irony (and no little hypocrisy) in Michaels’ complaints about dark money and the science of deception. For many years, Michaels ran the now defunct Project on Scientific Knowledge and Public Policy (SKAPP), which was bankrolled by the plaintiffs’ counsel in the silicone gel breast implant litigation. Whenever SKAPP sponsored a conference or a publication, the sponsors or authors dutifully gave a disclosure that the meeting or publication was underwritten by “a grant from the Common Benefit Trust, a fund established pursuant to a federal court order in the Silicone Gel Breast Implant Products Liability litigation.”

Non-lawyers might be forgiven for thinking that SKAPP and its propaganda had the imprimatur of the federal court system, but nothing could be further from the truth. A common benefits fund is the pool of money that is available to plaintiffs’ lawyers who serve on the steering committee of a large, multi-district litigation, to develop expert witnesses, analyze available scientific studies, and even commission studies of their own.[3] The source of the money was a “tax” imposed upon all settlements with defendants, which funneled the money into the so-called common benefits fund, controlled by the leadership of the plaintiffs’ counsel. When litigating the silicone gel breast implant cases involving claims of autoimmune disease became untenable due to an overwhelming scientific consensus against their causal claims,[4] the leadership of the plaintiffs’ steering committee gave the remaining money to SKAPP, rather than returning the money to the plaintiffs themselves.  David Michaels and his colleagues at SKAPP then misrepresented the source of the money as coming from a “trust fund” established by the federal court, which sounded rather like a neutral, disinterested source. This fund, however, was “walking around” money for the plaintiffs’ lawyers, which belonged to the settling plaintiffs, and which was diverted into a major propaganda effort against the judicial gatekeeping of expert witness opinion testimony.[5] A disinterested reader might well believe that David Michaels thus has some deep personal experience with “dark money,” and “the science of deception.” Mashey might be well advised to consider the adjacency issues raised by his placing such uncritical trust in what Michaels has published.

Regardless of David Michaels’ rhetoric, doubt is not such a bad thing in the face of uncertain and inconclusive evidence. In my view, we could use more doubt, and open-minded thought. Bertrand Russell is generally credited with having written some years ago:

“The biggest cause of trouble in the world today is that the stupid people are so sure about things and the intelligent folks are so full of doubts.”

What are we to make then of the charge by Dorothy Bishop that the conference would not be about regular scientific debate, but

“about weaponising the reproducibility debate to bolster the message that everything in science is uncertain — which is very convenient for those who wish to promote fringe ideas.”

I attended and presented at the conference because I have a long-standing interest in how scientific validity is assessed in the scientific and in the legal world. I have been litigating such issues in many different contexts for over 35 years, with notable scientific experts occasionally on either side. One phenomenon I have observed repeatedly is that expert witnesses of the greatest skill, experience, and knowledge are prone to cognitive biases, fallacies, and other errors. One of my jobs as a legal advocate is to make sure that my own expert witnesses engage fully with the evidence, as well as with how my adversaries are interpreting the evidence. In other words, even expert witnesses of the highest scientific caliber can succumb to biases in interpreting studies and evidence.

A quick war story will, I hope, make the point. A few years ago, I was helping a scientist get ready to testify in a case involving welding fume exposure and Parkinson’s disease. The scientist arrived with some PowerPoint slides, one of which commented that a study relied upon by plaintiffs’ expert witnesses had a fatal design flaw that rendered its conclusions invalid. Another slide embraced a study, sponsored by a co-defendant company, which had a null result but the same design flaw called out in the study used by plaintiffs’ witnesses. It was one in the morning, but I gently pointed out the inconsistency, and the scientist immediately saw the problem and modified his slides.

The next day, my adversary noticed the absence of the codefendant’s study in the group of studies this scientist had relied upon. He cross-examined the scientist about why he had left out a study that the codefendant had actually sponsored. The defense expert witness testified that the omitted study had the same design flaw as seen in the study embraced by plaintiffs’ expert witnesses, and that it had to be consigned to the same fate. The defense won this case, and long after the celebration died down, I received a very angry call from a lawyer for the codefendant. The embrace of bad studies and invalid inferences is not the exclusive province of the plaintiffs’ bar.

My response to Dorothy Bishop is that science ultimately has no political friends, although political actors will try to use criteria of validity selectively to arrive at convenient, and agreeable results. Do liberals ever advance junk science claims? Just say the words: Robert F. Kennedy, Jr. How bizarre and absurd for Kennedy to come out of a meeting with Trump’s organization, to proclaim a new vaccine committee to investigate autism outcomes! Although the issue has been explored in detail in medical journals for the last two decades, apparently there can even be bipartisan junk science. Another “litmus test” for conservatives would be whether they speak out against what are, in my view, unsubstantiated laws in several “Red States,” which mandate that physicians tell women who are seeking abortions that abortions cause breast cancer. There have been, to be sure, some studies that reported increased risks, but they were mostly case-control studies in which recall and reporting biases were uncontrolled. Much better, larger cohort studies done with unbiased information about history of abortions failed to support the association, which no medical organization has taken to be causal. This is actually a good example of irreproducibility that is corrected by the normal evolutionary process of scientific research, with political exploitation of the earlier, less valid studies.

Did presenters at the Fixing Science conference selectively present and challenge studies? It is difficult for me to say, not having a background in climate science. I participated in the conference to talk about how courts deal with problems of unreliable expert witness testimony and reliance upon unreliable studies. But what I heard at the conference were two main speakers who argued that climate change and its human cause were real. The thrust of the most data-rich presentation was that many climate models that have been advanced are overstated and not properly calibrated. Is Bishop really saying that we cannot have a civil conversation about whether some climate change models are poorly done and validated? Assuming that the position I heard is a reasonable interpretation of the data and the models, it establishes a “floor” in opposition to the ceilings asserted by other climate scientists. There are some implications; perhaps the National Association of Scholars should condemn Donald Trump and others who claim that climate change is a hoax. Of course, condemning Trump every time he says something false, stupid, and unsupported would be a full-time job. Having staked out an interest in climate change, the Association might well consider balancing the negative impression others have of it as “deniers.”

The Science Brief

Back in June 2018, the National Association of Scholars issued a Science Brief, which it described as its official position statement in the area. A link to the brief online was broken, but a copy of the brief was distributed to those who attended the Fixing Science conference in Oakland. The NAS website does contain an open letter from Dr. Peter Wood, the president of the NAS, who described the brief thus:

“the positions we have put forward in these briefs are not settled once and for all. We expect NAS members will critique them. Please read and consider them. Are there essential points we got wrong? Others that we left out? Are there good points that could be made better?

We are not aiming to compile an NAS catechism. Rather, we are asked frequently by members, academics who are weighing whether to join, reporters, and others what NAS ‘thinks’ about various matters. Our 2,600 members (and growing) no doubt think a lot of different things. We prize that intellectual diversity and always welcome voices of dissent on our website, in our conferences, and in our print publications. But it helps if we can present a statement that offers a first-order approximation of how NAS’s general principles apply to particular disciplines or areas of inquiry.

We also hope that these issue briefs will make NAS more visible and that they will assist scholars who are finding their way in the maze of contemporary academic life.”

As a preface to an attempt to address general principles, Peter Wood’s language struck me as liberal, in the best sense of open-minded and generous in spirit to the possibility of reasoned disagreement.

So what are the NAS principles when it comes to science? Because the Science Brief seems not to be online at the moment, I will quote it here at length:

OVERVIEW

The National Association of Scholars (NAS) supports the proper teaching and practice of science: the systematic exercise of reason, observation, hypothesis, and experiment aimed at understanding and making reliable predictions about the material world. We work to keep science as a mode of inquiry engaged in the disinterested pursuit of truth rather than a collection of ‘settled’ conclusions. We also work to integrate course requirements in the unique history of Western science into undergraduate core curricula and distribution requirements. The NAS promotes scientific freedom and transparency.

We support researchers’ freedom to formulate and test any scientific hypothesis, unconstrained by political inhibitions. We support researchers’ freedom to pursue any scientific experiment, within ethical research guidelines. We support transparent scientific research, to foster the scientific community’s collective search for truth.

The NAS supports course requirements on the history and the nature of the Western scientific tradition.

All students should learn a coherent general narrative of the history of science that tells how the scientific disciplines interrelate. We work to restore core curricula that include both the unique history of Western science and an introduction to the distinctive mode of Western scientific reasoning. We also work to add new requirements in statistics and experimental design for majors and graduate students in the sciences and social sciences.

The NAS works to reform the practice of modern science so that it generates reproducible results. Modern science and social science are crippled by a crisis of reproducibility. This crisis springs from a combination of misused statistics, slipshod research techniques, and political groupthink. We aim to eliminate the crisis of reproducibility by grounding scientific practice in the meticulous traditions of Western scientific thought and rigorous reproducibility standards.

The NAS works to eliminate the politicization of undergraduate science education.

Our priority is to dismantle advocacy-based science, which discards the exercise of rational skepticism in pursuit of truth when it explicitly declares that scientific inquiry should serve policy advocacy. We therefore work to remove advocacy-based science from the classroom and from university bureaucracies. We also criticize student movements that demand the replacement of disinterested scientific inquiry with advocacy-based science. We focus our critiques on disciplines such as climate science that are mostly engaged in policy advocacy.

The NAS tracks scientific controversies that affect public policy, studies the remedies that scientists propose, and criticizes laws, regulations, and proposed policies based upon advocacy-based science.

We do this to prevent a vicious cycle in which advocacy-based science justifies the misuse of government – and private funding to support yet more advocacy-based science. We also work to reform the administration of government science funding so as to prevent its capture by advocacy-scientists.  The NAS’s scientific reports draw on the expertise of its member scholars and staff, as well as independent scholars. Our aim is to provide professionally credible critiques of America’s science education and science-based public policy.

John Mashey in his critique of the NAS snarkily comments that folks at the NAS lack the expertise to make the assessments they call for. Considering that Mashey is a computer scientist, without training in the climate or life sciences, his comments fall short of their mark. Still, if he were to have something worthwhile to say, and he supported his statements by sufficient evidence and reasoning, I believe we should take it seriously.

Nonetheless, the NAS statement of principles, and its concerns about how science and statistics are taught, are unexceptionable. I suspect that neither Mashey nor anyone else is against scientific freedom, methodological rigor, and ethical, transparent research.

The scientific, mathematical, and statistical literacy of most judges and lawyers is poor indeed. The Law School Admission Test (LSAT) does not ask any questions about statistical reasoning. A jury trial is not a fair, adequate opportunity to teach jurors the intricacies of statistical and scientific methods. Most medical schools still do not teach a course in experimental design and statistical analysis. Until recently, the Medical College Admission Test (MCAT) did not ask any questions of a statistical nature, and the test still does not require applicants to have taken a full course in statistics. I do not believe any reasonable person could be against the NAS’s call for better statistical education for scientists, and I would add for policy makers. Certainly, Mashey offers no arguments or insights on this topic.

Perhaps Mashey is wary of the position that we should be skeptical of advocacy-based science, for fear that climate-change science will come in for unwelcomed attention. If the science is sound, the data accurate, and the models valid, then this science does not need to be privileged and protected from criticism. Whether Mashey cares to acknowledge the phenomenon or not, scientists do become personally invested in their hypotheses.

The NAS statement of principles in its Science Brief thus seems worthy of everyone’s support. Whether the NAS is scrupulous in applying its own principles to positions it takes will require investigation and cautious vigilance. Still, I think Mashey should not judge anyone harshly lest he be so judged. We are a country of great principles, but with a long history of indifferent and sometimes poor implementation. To take just a few obvious examples, despite the stirring words in the Declaration of Independence about the equality of all men, native people, women, and African slaves were treated in distinctly unequal and deplorable ways. Although our Constitution was amended after the Civil War to enfranchise former slaves, our federal government, after an all-too-short period of Reconstruction, failed to enforce the letter or the spirit of the Civil War amendments for 100 years, and then some. Less than seven years after our Constitution was amended to include freedom from governmental interference with speech or publication, a Federalist Congress passed the Alien and Sedition Acts, which President Adams signed into law in 1798. It would take over 100 years before the United States Supreme Court would make a political reality of the full promise of the First Amendment.

In these sad historical events, one thing is clear: the promise and hope of clearly stated principles did ultimately prevail. To me, the lesson is not to belittle the principles or the people, but to hold the latter to the former. If Mashey believes that the NAS is inconsistent or hypocritical about its embrace of what otherwise seem like worthwhile first principles, he should say so. For my part, I think the NAS will find it difficult to avoid a charge of selectivity if it were to criticize climate change science, and not cast a wider net.

Finally, I can say that the event sponsored by the Independent Institute and the NAS featured speakers with diverse, disparate opinions. Some speakers denied that there was a “crisis,” and some saw the crisis as overwhelming and destructive of sound science. I heard some casual opinions of climate change skepticism, but from the most serious, sustained look at the actual data and models, an affirmation of anthropogenic climate change. In the area of health effects, the scientific study more relevant to what I do, I heard a fairly wide consensus about the need to infuse greater rigor into methodology and to reduce investigators’ freedom to cherry pick data and hypotheses after data collection is finished. Even so, there were speakers with stark disagreement over methods. The conference was an important airing and exchanging of many ideas. I believe that those who attended and who participated went away with less orthodoxy and much to contemplate. The Independent Institute and the NAS deserve praise for having organized and sponsored the event. The intellectual courage of the sponsors in inviting such an intellectually diverse group of speakers undermines the charge by Mashey, Teytelman, and Bishop that the groups are simply shilling for Big Oil.


[1]        David Michaels, Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health (2008).

[2]        David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]        See, e.g., William Rubenstein, “On What a ‘Common Benefit Fee’ Is, Is Not, and Should Be,” Class Action Attorney Fee Digest 87, 89 (March 2009).

[4]        In 1999, after much deliberation, the Institute of Medicine issued a report that found the scientific claims in the silicone litigation to be without scientific support. Stuart Bondurant, et al., Safety of Silicone Breast Implants (I.O.M. 1999).

[5]        I have written about the lack of transparency and outright deception in SKAPP’s disclosures before; see “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “The Capture of the Public Health Community by the Litigation Industry” (Feb. 10, 2014); “Daubert’s Silver Anniversary – Retrospective View of Its Friends and Enemies” (Oct. 21, 2018); “David Michaels’ Public Relations Problem” (Dec. 2, 2011).

Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma

January 24th, 2020

The phosphodiesterase type 5 inhibitor medications (PDE5i) seem to arouse the litigation propensities of the lawsuit industry. The PDE5i medications (sildenafil, tadalafil, etc.) have multiple indications, but they are perhaps best known for their ability to induce penile erections, which in some situations can be a very useful outcome.

The launch of Viagra in 1998 was followed by litigation that claimed the drug caused heart attacks, and not the romantic kind. The only broken hearts, however, were those of the plaintiffs’ lawyers and their expert witnesses who saw their litigation claims excluded and dismissed.[1]

Then came claims that the PDE5i medications caused non-arteritic anterior ischemic optic neuropathy (“NAION”), based upon a dubious epidemiologic study by Dr. Gerald McGwin. This litigation demonstrated, if anything, that while love may be blind, erections need not be.[2] The NAION cases were consolidated in a multi-district litigation (MDL) in front of Judge Paul Magnuson, in the District of Minnesota. After considerable back and forth, Judge Magnuson ultimately concluded that the McGwin study was untrustworthy, and the NAION claims were dismissed.[3]

In 2014, the American Medical Association’s internal medicine journal published an observational epidemiologic study of sildenafil (Viagra) use and melanoma.[4] The authors of the study interpreted their study modestly, concluding:

“[s]ildenafil use may be associated with an increased risk of developing melanoma. Although this study is insufficient to alter clinical recommendations, we support a need for continued investigation of this association.”

Although the Li study eschewed causal conclusions and new clinical recommendations in view of the need for more research into the issue, the litigation industry filed lawsuits, claiming causality.[5]

In the new natural order of things, as soon as the litigation industry cranks out more than a few complaints, an MDL results, and the PDE5i – melanoma claims were no exception. By spring 2016, plaintiffs’ counsel had collected ten cases, a minyan, sufficient for an MDL.[6] The MDL plaintiffs, on behalf of putative victims, named as defendants the manufacturers of sildenafil and tadalafil, two of the more widely prescribed PDE5i medications.

While the MDL cases were winding their way through discovery and possible trials, additional studies and meta-analyses were published. None of the subsequent studies, including the systematic reviews and meta-analyses, concluded that there was a causal association. Most scientists who were publishing on the issue opined that systematic error (generally confounding) prevented a causal interpretation of the data.[7]

Many of the observational studies found statistically significant increases in relative risk of about 1.1 to 1.2 (10 to 20 percent), typically with upper bounds of the 95% confidence intervals less than 2.0. The only scientists who inferred general causation from the available evidence were those who had been recruited and retained by plaintiffs’ counsel. As plaintiffs’ expert witnesses, they contended that the Li study, and the several studies that became available afterwards, collectively showed that PDE5i drugs cause melanoma in humans.
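
For readers who want to see what a relative risk of that size looks like in numbers, here is a minimal sketch with hypothetical cohort counts, chosen only to reproduce a statistically significant risk ratio near 1.2 with an upper confidence bound well below 2.0; they are not the counts from the Li study or any other published study:

import math
# Hypothetical cohort counts, chosen only to illustrate a weak association.
cases_exposed, n_exposed = 600, 100_000          # melanoma cases among PDE5i users
cases_unexposed, n_unexposed = 5_000, 1_000_000  # melanoma cases among non-users
rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
# standard error of ln(RR) for cumulative incidence data
se_log_rr = math.sqrt(1/cases_exposed - 1/n_exposed + 1/cases_unexposed - 1/n_unexposed)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")   # RR = 1.20, 95% CI (1.10, 1.31)

A risk ratio of 1.2 can be “statistically significant” with enough person-time, but it sits well within the range that modest confounding or other systematic error can produce.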

Not surprisingly, given the absence of any non-litigation experts endorsing the causal conclusion, the defendants challenged plaintiffs’ proffered expert witnesses under Federal Rule of Evidence 702. Plaintiffs’ counsel also embraced judicial gatekeeping and challenged the defense experts. The MDL trial judge, the Hon. Richard Seeborg, held hearings with four days of viva voce testimony from four of plaintiffs’ expert witnesses (two on biological plausibility, and two on epidemiology), and three of the defense’s experts. Last week, Judge Seeborg ruled by granting in part, and denying in part, the parties’ motions.[8]

The Decision

The MDL trial judge’s opinion is noteworthy in many respects. First, Judge Richard Seeborg cited and applied Rule 702, a statute, and not dicta from case law that predates the most recent statutory version of the rule. As a legal process matter, this respect for judicial process and the difference in legal authority between statutory and common law was refreshing. Second, the judge framed the Rule 702 issue, in line with the statute, and Ninth Circuit precedent, as an inquiry whether expert witnesses deviated from the standard of care of how scientists “conduct their research and reach their conclusions.”[9]

Biological Plausibility

Plaintiffs proffered three expert witnesses on biological plausibility, Drs. Rizwan Haq, Anand Ganesan, and Gary Piazza. All were subject to motions to exclude under Rule 702. Judge Seeborg denied the defense motions against all three of plaintiffs’ plausibility witnesses.[10]

The MDL judge determined that biological plausibility is neither necessary nor sufficient for inferring causation in science or in the law. The defense argued that the plausibility witnesses relied upon animal and cell culture studies that were unrealistic models of the human experience.[11] The MDL court, however, found that the standard for opinions on biological plausibility is relatively forgiving, and that the testimony of all three of plaintiffs’ proffered witnesses was admissible.

The subjective nature of opinions about biological plausibility is widely recognized in medical science.[12] Plausibility determinations are typically “Just So” stories, offered in the absence of hard evidence that postulated mechanisms are actually involved in a real causal pathway in human beings.

Causal Association

The real issue in the MDL hearings was the conclusion reached by plaintiffs’ expert witnesses that the PDE5i medications cause melanoma. The MDL court did not have to determine whether epidemiologic studies were necessary for such a causal conclusion. Plaintiffs’ counsel had proffered three expert witnesses with more or less expertise in epidemiology: Drs. Rehana Ahmed-Saucedo, Sonal Singh, and Feng Liu-Smith. All of plaintiffs’ epidemiology witnesses, and certainly all of defendants’ experts, implicitly if not explicitly embraced the proposition that analytical epidemiology was necessary to determine whether PDE5i medications can cause melanoma.

In their motions to exclude Ahmed-Saucedo, Singh, and Liu-Smith, the defense pointed out that, although many of the studies yielded statistically significant estimates of melanoma risk, none of the available studies adequately accounted for systematic bias in the form of confounding. Although the plaintiffs’ plausibility expert witnesses advanced “Just-So” stories about PDE5i and melanoma, the available studies showed an almost identical increased risk of basal cell carcinoma of the skin, which would be explained by confounding, but not by plaintiffs’ postulated mechanisms.[13]
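
The quantitative point is easy to illustrate with the standard bias formula for an unmeasured binary confounder: a factor that modestly raises the risk of both skin cancers and is more common among PDE5i users can, by itself, generate a risk ratio in the observed range. The sketch below is purely illustrative; the confounder strength and prevalences are assumptions for the sake of argument, not estimates from the litigation record:

def apparent_rr_from_confounding(rr_confounder_outcome, prev_exposed, prev_unexposed):
    """Risk ratio produced solely by an unmeasured binary confounder,
    assuming no true effect of the exposure (standard bias formula)."""
    numerator = rr_confounder_outcome * prev_exposed + (1 - prev_exposed)
    denominator = rr_confounder_outcome * prev_unexposed + (1 - prev_unexposed)
    return numerator / denominator

# Suppose greater sun exposure (or more frequent dermatologic surveillance)
# doubles the risk of both melanoma and basal cell carcinoma, and is present
# in 60% of PDE5i users but only 30% of non-users (assumed values).
print(f"{apparent_rr_from_confounding(2.0, 0.6, 0.3):.2f}")   # ~1.23, with no causal effect

That the basal cell carcinoma estimates clustered in the same range is exactly what such a shared confounder would predict, and what the plaintiffs’ postulated melanoma-specific mechanisms would not.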

The MDL court acknowledged that whether epidemiologic studies “adequately considered” confounding was “central” to the Rule 702 inquiry. Without any substantial analysis, however, the court gave its own ipse dixit that the existence vel non of confounding was an issue for cross-examination and the jury’s resolution.[14] Whether there was a reasonably valid association between PDE5i and melanoma was a jury question. This judicial refusal to engage with the issue of confounding was one of the disappointing aspects of the decision.

The MDL court was less forgiving when it came to the plaintiffs’ epidemiology expert witnesses’ assessment of the association as causal. All the parties’ epidemiology witnesses invoked Sir Austin Bradford Hill’s viewpoints or factors for judging whether associations were causal.[15] Although they embraced Hill’s viewpoints on causation, the plaintiffs’ epidemiologic expert witnesses had a much more difficult time faithfully applying them to the evidence at hand. The MDL court concluded that the plaintiffs’ witnesses deviated from their own professional standard of care in their analysis of the data.[16]

Hill’s first enumerated factor was “strength of association,” which is typically expressed epidemiologically as a risk ratio or a risk difference. The MDL court noted that the extant epidemiologic studies generally showed relative risks around 1.2 for PDE5i and melanoma, which was “undeniably” not a strong association.[17]

The plaintiffs’ epidemiology witnesses were at sea on how to explain away the lack of strength in the putative association. Dr. Ahmed-Saucedo retreated into an emphasis on how all or most of the studies found some increased risk, but the MDL court correctly found that this ruse was merely a conflation of strength with consistency of the observed associations. Dr. Ahmed-Saucedo’s dismissal of dose-response relationship, another Hill factor, as unimportant sealed her fate. The MDL court found that her Bradford Hill analysis was “unduly results-driven,” and that her proffered testimony was not admissible.[18] Similarly, the MDL court found that Dr. Feng Liu-Smith conflated strength of association with consistency, an error that was too great a deviation from the professional standard of care.[19]

Dr. Sonal Singh fared no better after he contradicted his own prior testimony that there is an order of importance to the Hill factors, with “strength of association” at or near the top. In the face of a set of studies, none of which showed a strong association, Dr. Singh abandoned his own interpretative principle to suit the litigation needs of the case. His analysis placed the greatest weight on the Li study, which had the highest risk ratio, but he failed to advance any persuasive reason for his emphasis on one of the smallest studies available. The MDL court found Dr. Singh’s claim to have weighed strength of association heavily, despite the obvious absence of strong associations, puzzling and too great an analytical gap to abide.[20]

Judge Seeborg thus concluded that while the plaintiffs’ expert witnesses could opine that there was an association, which was arguably plausible, they could not, under Rule 702, contend that the association was causal. In attempting to argue that the association met Bradford Hill’s factors for causality, the plaintiffs’ witnesses had ignored, misrepresented, or confused one of the most important factors, strength of the association, in a way that revealed their analyses to be results-driven and unfaithful to the methodology they claimed to have followed. Judge Seeborg emphasized a feature of the revised Rule 702, which is often ignored by his fellow federal judges:[21]

“Under the amendment, as under Daubert, when an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied. See Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 598 (9th Cir. 1996). The amendment specifically provides that the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case.”

Given that the plaintiffs’ witnesses purported to apply a generally accepted methodology, Judge Seeborg was left to question why they would conclude causality when no one else in their field had done so.[22] The epidemiologic issue had been around for several years, and addressed not just in observational studies, but systematically reviewed and meta-analyzed. The absence of published causal conclusions was not just an absence of evidence, but evidence of absence of expert support for how plaintiffs’ expert witnesses applied the Bradford Hill factors.

Reliance Upon Studies That Did Not Conclude Causation Existed

Parties challenging causal claims will sometimes point to the absence of a causal conclusion in the publications of the individual epidemiologic studies that are the main basis for the causal claim. In the PDE5i-melanoma cases, the defense advanced this argument unsuccessfully. The MDL court rejected it because an individual study rarely undertakes a comprehensive review of all the pertinent evidence for or against causality; the study authors are mostly concerned with conveying the results of their own study.[23] The authors may offer a short discussion of other study results as the rationale for their own study, but such discussions are often limited in scope and purpose. Judge Seeborg, in this latest round of PDE5i litigation, thus did not fault plaintiffs’ witnesses for relying upon epidemiologic or mechanistic studies that individually did not assert causal conclusions; rather, it was the absence of causal conclusions in systematic reviews, meta-analyses, narrative reviews, regulatory agency pronouncements, and clinical guidelines that ultimately raised the fatal inference that the plaintiffs’ witnesses were not faithfully deploying a generally accepted methodology.

The defense argument that pointed to the individual epidemiologic studies themselves derives some legal credibility from the Supreme Court’s opinion in General Electric Co. v. Joiner, 522 U.S. 136 (1997). In Joiner, the Supreme Court took plaintiffs’ expert witnesses to task for drawing stronger conclusions than were offered in the papers upon which they relied. Chief Justice Rehnquist gave considerable weight to the fact that the plaintiffs’ expert witnesses relied upon studies whose authors explicitly refused to interpret them as supporting a conclusion of human disease causation.[24]

Joiner’s criticisms of the reliance upon studies that do not themselves reach causal conclusions have gained a foothold in the case law interpreting Rule 702. The Fifth Circuit, for example, has declared:[25]

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

This aspect of Joiner may properly limit the over-interpretation or misinterpretation of an individual study, which seems unobjectionable.[26] The Joiner case may, however, perpetuate an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions. The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ discussion section is that study authors sometimes grossly over-interpret their own data. When it comes to studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), the discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible at trial. In other words, the misuse of non-rigorous comments in published articles can cut both ways.

There have been, and will continue to be, occasions in which published studies contain data relevant and important to the causation issue, while also containing speculative, personal opinions in their Introduction and Discussion sections. The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses. Neither side’s expert witnesses should be judged by those out-of-court opinions. Perhaps the hearsay discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be a dispositive factor, other than to raise questions for the reviewing court.

In discharging their gatekeeping function, trial judges should take care in how they assess expert witnesses’ reliance upon study data and analyses when those witnesses disagree with the hearsay authors’ conclusions or discussions. Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should, in some instances, have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

Judge Seeborg seems, sensibly, to have distinguished between the absence of causal conclusions in individual epidemiologic studies and the absence of causal conclusions in any reputable medical literature.[27] He refused to be ensnared by the Joiner argument because:[28]

“Epidemiology studies typically only expressly address whether an association exists between agents such as sildenafil and tadalafil and outcomes like melanoma progression. As explained in In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1116 (N.D. Cal. 2018), ‘[w]hether the agents cause the outcomes, however, ordinarily cannot be proven by epidemiological studies alone; an evaluation of causation requires epidemiologists to exercise judgment about the import of those studies and to consider them in context’.”

This new MDL opinion, relying upon the Advisory Committee Notes to Rule 702, is thus a more felicitous statement of the goals of gatekeeping.

Confidence Intervals

As welcome as some aspects of Judge Seeborg’s opinion are, the decision is not without mistakes. The district judge, like so many of his judicial colleagues, trips over the proper interpretation of a confidence interval:[29]

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”

This statement is simply wrong. The 95 percent probability attaches to the interval-constructing procedure: in the long run of repeated sampling of the same size, in the same manner, from the same population, 95 percent of the resulting confidence intervals will capture the true parameter – the actual relative risk. In Judge Seeborg’s example, the next sample might give a relative risk point estimate of 1.9, and that new estimate will have a confidence interval that may run from just below 1.0 to over 3. A third sample might turn up a relative risk estimate of 0.8, with a confidence interval that runs from, say, 0.3 to 1.4. Neither the second nor the third sample would be incompatible with the first. A more accurate assessment of the true parameter is that it lies somewhere between 0.3 and 3, a considerably broader range than the single reported 95 percent interval suggests.
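
For readers who want to see the point demonstrated rather than asserted, the short Python simulation below (with invented cohort sizes and true risks, chosen only for illustration) shows that the 95 percent figure describes how often the interval-constructing procedure captures the true relative risk across many repeated samples, not the probability that any single reported interval contains it.

import math
import random

random.seed(1)

n_exp, n_unexp = 1000, 1000      # hypothetical cohort sizes
p_exp, p_unexp = 0.06, 0.05      # hypothetical true risks; true relative risk = 1.2
true_rr = p_exp / p_unexp

trials, covered, completed = 10_000, 0, 0
for _ in range(trials):
    a = sum(random.random() < p_exp for _ in range(n_exp))      # exposed cases
    c = sum(random.random() < p_unexp for _ in range(n_unexp))  # unexposed cases
    if a == 0 or c == 0:
        continue  # skip the vanishingly rare degenerate sample
    rr = (a / n_exp) / (c / n_unexp)
    se = math.sqrt(1/a - 1/n_exp + 1/c - 1/n_unexp)  # large-sample SE of ln(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    covered += (lo <= true_rr <= hi)
    completed += 1

print(f"coverage over repeated samples: {covered / completed:.3f}")  # about 0.95

Run repeatedly, the reported coverage hovers around 0.95, which is all the confidence coefficient promises; it says nothing about the probability that any one interval, once computed, contains the true relative risk.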

Judge Seeborg’s error is sadly all too common. Whenever I see it, I wonder whence it came; often the source is the parties’ briefs, from plaintiffs’ and defense counsel alike. In this case, however, I did not find the erroneous assertion about confidence intervals in either side’s briefs.


[1]  Brumley v. Pfizer, Inc., 200 F.R.D. 596 (S.D. Tex. 2001) (excluding plaintiffs’ expert witness who claimed that Viagra caused heart attack); Selig v. Pfizer, Inc., 185 Misc. 2d 600 (N.Y. Cty. S. Ct. 2000) (excluding plaintiff’s expert witness), aff’d, 290 A.D. 2d 319, 735 N.Y.S. 2d 549 (2002).

[2]  “Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I” (July 7, 2012); “Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference” (July 8, 2012).

[3]  In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071 (D. Minn. 2008), 658 F. Supp. 2d 936 (D. Minn. 2009), and 658 F. Supp. 2d 950 (D. Minn. 2009).

[4]  Wen-Qing Li, Abrar A. Qureshi, Kathleen C. Robinson, and Jiali Han, “Sildenafil use and increased risk of incident melanoma in US men: a prospective cohort study,” 174 J. Am. Med. Ass’n Intern. Med. 964 (2014).

[5]  See, e.g., Herrara v. Pfizer Inc., Complaint in 3:15-cv-04888 (N.D. Calif. Oct. 23, 2015); Diana Novak Jones, “Viagra Increases Risk Of Developing Melanoma, Suit Says,” Law360 (Oct. 26, 2015).

[6]  See In re Viagra (Sildenafil Citrate) Prods. Liab. Litig., 176 F. Supp. 3d 1377, 1378 (J.P.M.L. 2016).

[7]  See, e.g., Jenny Z. Wang, Stephanie Le, Claire Alexanian, Sucharita Boddu, Alexander Merleev, Alina Marusina, and Emanual Maverakis, “No Causal Link between Phosphodiesterase Type 5 Inhibition and Melanoma,” 37 World J. Men’s Health 313 (2019) (“There is currently no evidence to suggest that PDE5 inhibition in patients causes increased risk for melanoma. The few observational studies that demonstrated a positive association between PDE5 inhibitor use and melanoma often failed to account for major confounders. Nonetheless, the substantial evidence implicating PDE5 inhibition in the cyclic guanosine monophosphate (cGMP)-mediated melanoma pathway warrants further investigation in the clinical setting.”); Xinming Han, Yan Han, Yongsheng Zheng, Qiang Sun, Tao Ma, Li Dai, Junyi Zhang, and Lianji Xu, “Use of phosphodiesterase type 5 inhibitors and risk of melanoma: a meta-analysis of observational studies,” 11 OncoTargets & Therapy 711 (2018).

[8]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., Case No. 16-md-02691-RS, Order Granting in Part and Denying in Part Motions to Exclude Expert Testimony (N.D. Calif. Jan. 13, 2020) [cited as Opinion].

[9]  Opinion at 8 (“determin[ing] whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions”), citing Daubert v. Merrell Dow Pharm., Inc. (Daubert II), 43 F.3d 1311, 1317 (9th Cir. 1995).

[10]  Opinion at 11.

[11]  Opinion at 11-13.

[12]  See Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, “Introduction,” chap. 1, in Kenneth J. Rothman, et al., eds., Modern Epidemiology at 29 (3d ed. 2008) (“no approach can transform plausibility into an objective causal criterion”).

[13]  Opinion at 15-16.

[14]  Opinion at 16-17.

[15]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965); see also “Woodside & Davis on the Bradford Hill Considerations” (April 23, 2013).

[16]  Opinion at 17-21.

[17]  Opinion at 18. The MDL court cited In re Silicone Gel Breast Implants Prod. Liab. Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004), for the proposition that relative risks greater than 2.0 permit the inference that the agent under study “was more likely than not responsible for a particular individual’s disease.”

[18]  Opinion at 18.

[19]  Opinion at 20.

[20]  Opinion at 19.

[21]  Opinion at 21, quoting from Rule 702, Advisory Committee Notes (emphasis in Judge Seeborg’s opinion).

[22]  Opinion at 21.

[23]  See “Follow the Data, Not the Discussion” (May 2, 2010).

[24]  Joiner, 522 U.S. at 145-46 (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

[25]  Huss v. Gayden, 571 F.3d 442 (5th Cir. 2009) (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia)); see also McClain v. Metabolife Internat’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions); Happel v. Walmart, 602 F.3d 820, 826 (7th Cir. 2010) (observing that it “is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven”).

[26]  In re Accutane Prods. Liab. Litig., 511 F. Supp. 2d 1288, 1291 (M.D. Fla. 2007) (“When an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study. That is, he must not draw overreaching conclusions.”) (internal citations omitted).

[27]  See Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 785 (D.N.J. 1996), aff’d, 118 F.3d 1577 (3d Cir. 1997) (“law warns against use of medical literature to draw conclusions not drawn in the literature itself …. Reliance upon medical literature for conclusions not drawn therein is not an accepted scientific methodology.”).

[28]  Opinion at 14.

[29]  Opinion at 4-5.