TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

April Fool – Zambelli-Weiner Must Disclose

April 2nd, 2020

Back in the summer of 2019, Judge Saylor, the MDL judge presiding over the Zofran birth defect cases, ordered epidemiologist Dr. Zambelli-Weiner to produce documents relating to an epidemiologic study of Zofran,[1] as well as her claimed confidential consulting relationship with plaintiffs’ counsel.[2]

This previous round of motion practice and discovery established that Zambelli-Weiner was a paid consultant in advance of litigation, that her Zofran study was funded by plaintiffs’ counsel, and that she presented at a Las Vegas conference, for plaintiffs’ counsel only, on [sic] how to make mass torts perfect. Furthermore, she had made false statements to the court about her activities.[3]

Zambelli-Weiner ultimately responded to the discovery requests, but she and plaintiffs’ counsel withheld several documents as confidential, pursuant to the MDL’s procedure for protective orders. Yesterday, April 1, 2020, Judge Saylor granted GlaxoSmithKline’s motion to de-designate four documents that plaintiffs claimed to be confidential.[4]

Zambelli-Weiner sought to resist GSK’s motion to compel disclosure of the documents on a claim that GSK was seeking the documents to advance its own litigation strategy. Judge Saylor acknowledged that Zambelli-Weiner’s psycho-analysis might be correct, but that GSK’s motive was not the critical issue. According to Judge Saylor, the proper inquiry was whether the claim of confidentiality was proper in the first place, and whether removing the cloak of secrecy was appropriate under the facts and circumstances of the case. Indeed, the court found “persuasive public-interest reasons” to support disclosure, including providing the FDA and the EMA a complete, unvarnished view of Zambelli-Weiner’s research.[5] Of course, the plaintiffs’ counsel, in close concert with Zambelli-Weiner, had created GSK’s need for the documents.

This discovery battle has no doubt been fought because plaintiffs and their testifying expert witnesses rely heavily upon the Zambelli-Weiner study to support their claim that Zofran causes birth defects. The present issue is whether four of the documents produced by Dr. Zambelli-Weiner pursuant to subpoena should continue to enjoy confidential status under the court’s protective order. GSK argued that the documents were never properly designated as confidential, and alternatively, the court should de-designate the documents because, among other things, the documents would disclose information important to medical researchers and regulators.

Judge Saylor’s Order considered GSK’s objections to plaintiffs’ and Zambelli-Weiner’s withholding four documents:

(1) Zambelli-Weiner’s Zofran study protocol;

(2) Undisclosed, hidden analyses that compared birth defect rates for children born to mothers who used Zofran with the rates seen with the use of other anti-emetic medications;

(3) An earlier draft of Zambelli-Weiner’s Zofran study, which she had prepared to submit to the New England Journal of Medicine; and

(4) Zambelli-Weiner’s advocacy document, a “Causation Briefing Document,” which she prepared for plaintiffs’ lawyers.

Judge Saylor noted that none of the withheld documents would typically be viewed as confidential. None contained “sensitive personal, financial, or medical information.”[6] The court dismissed Zambelli-Weiner’s contention that the documents all contained “business and proprietary information,” as conclusory and meritless. Neither she nor plaintiffs’ counsel explained how the requested documents implicated proprietary information when Zambelli-Weiner’s only business at issue is to assist in making lawsuits. The court observed that she is not “engaged in the business of conducting research to develop a pharmaceutical drug or other proprietary medical product or device,” and that the research at issue related solely to her paid consultancy to plaintiffs’ lawyers. Neither she nor the plaintiffs’ lawyers showed how public disclosure would hurt her proprietary or business interests. Of course, if Zambelli-Weiner had been dishonest in carrying out the Zofran study, as reflected in study deviations from its protocol, her professional credibility and her business of conducting such studies might well suffer. Zambelli-Weiner, however, was not prepared to affirm the antecedent of that hypothetical. In any event, the court found that whatever right Zambelli-Weiner might have enjoyed to avoid discovery evaporated with her previous dishonest representations to the MDL court.[7]

The Zofran Study Protocol

GSK sought production of the Zofran study protocol, which in theory contained the research plan for the Zofran study and the analyses the researchers intended to conduct. Zambelli-Weiner attempted to resist production on the specious theory that she had not published the protocol, but the court found this “non-publication” irrelevant to the claim of confidentiality. Most professional organizations, such as the International Society of Pharmacoepidemiology (“ISPE”), which ultimately published Zambelli-Weiner’s study, encourage the publication and sharing of study protocols.[8] Disclosure of protocols helps ensure the integrity of studies by allowing readers to assess whether the researchers have adhered to their study plan, or have engaged in ad hoc data dredging in search of a desired result.[9]

The Secret, Undisclosed Analyses

Perhaps even more egregious than withholding the study protocol was the refusal to disclose unpublished analyses comparing the rate of birth defects among children born to mothers who used Zofran with the birth defect rates of children with in utero exposure to other anti-emetic medications.  In ruling that Zambelli-Weiner must produce the unpublished analyses, the court expressed its skepticism over whether these analyses could ever have been confidential. Under ISPE guidelines, researchers must report findings that significantly affect public health, and the relative safety of Zofran is essential to its evaluation by regulators and prescribing physicians.

Not only was Zambelli-Weiner’s failure to include these analyses in her published article ethically problematic, but she apparently hid these analyses from the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, which specifically inquired of Zambelli-Weiner whether she had performed such analyses. As a result, the PRAC recommended a label change based upon Zambelli-Weiner’s failure to disclose material information. Furthermore, the plaintiffs’ counsel represented that they intended to oppose GSK’s citizen petition to the FDA, based upon the Zambelli-Weiner study. The apparently fraudulent non-disclosure of relevant analyses could not have been more fraught for public health significance. The MDL court found that the public health need trumped any (doubtful) claim to confidentiality.[10] Against the obvious public interest, Zambelli-Weiner offered no “compelling countervailing interest” in keeping her secret analyses confidential.

There were other aspects to the data-dredging rationale not discussed in the court’s order. Without seeing the secret analyses of other anti-emetics, readers were deprived of an important opportunity to assess actual and potential confounding in her study. Perhaps even more important, the statistical tools that Zambelli-Weiner used, including any measurements of p-values and confidence intervals, and any declarations of “statistical significance,” were rendered meaningless by her secret, undisclosed, multiple testing. As noted by the American Statistical Association (ASA) in its 2016 position statement, “4. Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”[11]
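The ASA’s point is easy to demonstrate with a short simulation (purely illustrative; the numbers below are invented and have nothing to do with the Zofran data). If a researcher runs twenty independent comparisons on pure noise and reports only the most favorable one, a nominally “significant” result turns up in well over half of studies, even though nothing real is there:

```python
import math
import random

def two_sample_p(a, b):
    # two-sided p-value from a normal approximation to the
    # difference-in-means test (adequate for this illustration)
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
trials, tests_per_study, hits = 1000, 20, 0
for _ in range(trials):
    # every "outcome" is pure noise: there is no true effect anywhere
    ps = [two_sample_p([random.gauss(0, 1) for _ in range(30)],
                       [random.gauss(0, 1) for _ in range(30)])
          for _ in range(tests_per_study)]
    if min(ps) < 0.05:       # report only the best-looking comparison
        hits += 1

rate = hits / trials
print(rate)  # near 1 - 0.95**20 ≈ 0.64, far above the nominal 0.05
```

The simulated false-positive rate lands near 1 − 0.95²⁰ ≈ 0.64, which is why undisclosed multiple analyses make a reported p-value uninterpretable.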

The Draft Manuscript for the New England Journal of Medicine

The MDL court wasted little time and ink in dispatching Zambelli-Weiner’s claim of confidentiality for her draft New England Journal of Medicine manuscript. The court found that she failed to explain how any differences in content between this manuscript and the published version constituted “proprietary business information,” or how disclosure would cause her any actual prejudice.

Zambelli-Weiner’s Litigation Road Map

In a world where social justice warriors complain about organizations such as Exponent, for its litigation support of defense efforts, the revelation that Zambelli-Weiner was helping to quarterback the plaintiffs’ offense deserves greater recognition. Zambelli-Weiner’s litigation road map was clearly created to help Grant & Eisenhofer, P.A., the plaintiffs’ lawyers, create a causation strategy (to which she would add her Zofran study). Such a document from a consulting expert witness is typically the sort of document that enjoys confidentiality and protection from litigation discovery. The MDL court, however, looked beyond Zambelli-Weiner’s role as a “consulting witness” to her involvement in designing and conducting research. The broader extent of her involvement in producing studies and communicating with regulators made her litigation “strategery” “almost certainly relevant to scientists and regulatory authorities” charged with evaluating her study.[12]

Despite Zambelli-Weiner’s protestations that she had made a disclosure of conflict of interest, the MDL court found her disclosure anemic, and the public interest in knowing the full extent of her involvement in advising plaintiffs’ counsel, long before the study was conducted, great.[13]

The legal media has been uncommonly quiet about the rulings on April Zambelli-Weiner in the Zofran litigation. From the Union of Concerned Scientists, and other industry scolds such as David Egilman, David Michaels, and Carl Cranor – crickets. Meanwhile, while the appeal over the admissibility of her testimony is pending before the Pennsylvania Supreme Court,[14] Zambelli-Weiner continues to create an unenviable record in Zofran, Accutane,[15] Mirena,[16] and other litigations.


[1]  April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

[2]  See In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657) [cited as In re Zofran].

[3]  “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[4]  In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) [Order].

[5]  Order at n.3.

[6]  Order at 3.

[7]  See In re Zofran, 392 F. Supp. 3d at 186.

[8]  Order at 4. See also Xavier Kurz, Susana Perez-Gutthann, the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidemiology & Drug Safety 245 (2018).

[9]  Order at note 2 (citing Charles J. Walsh & Marc S. Klein, “From Dog Food to Prescription Drug Advertising: Litigating False Scientific Establishment Claims Under the Lanham Act,” 22 Seton Hall L. Rev. 389, 431 (1992) (noting that adherence to study protocol “is essential to avoid ‘data dredging’—looking through results without a predetermined plan until one finds data to support a claim”)).

[10]  Order at 5, citing Anderson v. Cryovac, Inc., 805 F.2d 1, 8 (1st Cir. 1986) (describing public-health concerns as “compelling justification” for requiring disclosing of confidential information).

[11]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

See also “The American Statistical Association’s Statement on and of Significance” (March 17, 2016); “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).

[12]  Order at 6.

[13]  Cf. Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002) (arguing that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions).

[14]  Walsh v. BASF Corp., GD #10-018588 (Oct. 5, 2016, Pa. Ct. C.P. Allegheny Cty., Pa.) (finding that Zambelli-Weiner’s and Nachman Brautbar’s opinions that pesticides generally cause acute myelogenous leukemia, that even the smallest exposure to benzene increases the risk of leukemia offended generally accepted scientific methodology), rev’d, 2018 Pa. Super. 174, 191 A.3d 838, 842-43 (Pa. Super. 2018), appeal granted, 203 A.3d 976 (Pa. 2019).

[15]  In re Accutane Litig., No. A-4952-16T1, (Jan. 17, 2020 N.J. App. Div.) (affirming exclusion of Zambelli-Weiner as an expert witness).

[16]  In re Mirena IUD Prods. Liab. Litig., 169 F. Supp. 3d 396 (S.D.N.Y. 2016) (excluding Zambelli-Weiner in part).

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an on-line popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must commit to perform the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and English communication skills to meet the needs of this task.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter, who often covers science issues. In his Undark piece, Smith covered some of the oft-told history of the Daubert case, a story told better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt” – and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article will you find, however, the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases, however, eventually become obvious. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a scientific journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he described some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations of perjury on credentials by Done. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger as saying that claims from both sides in the Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce “is not convinced the drug should be back on the market,” but really why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggested necessity of reliance upon the litigation system as an adjunct to regulatory approval is at best misplaced and tenuous.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert did not see the case as having ruined his life. Indeed, Jason seemed to approve the basic principle of the Daubert case, and the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”


[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2]  Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

The Shmeta-Analysis in Paoli

July 11th, 2019

In the Paoli Railroad yard litigation, plaintiffs claimed injuries and increased risk of future cancers from environmental exposure to polychlorinated biphenyls (PCBs). This massive litigation showed up before federal district judge Hon. Robert F. Kelly,[1] in the Eastern District of Pennsylvania, who may well have been the first judge to grapple with a litigation attempt to use meta-analysis to show a causal association.

One of the plaintiffs’ expert witnesses was the late William J. Nicholson, who was a professor at Mt. Sinai School of Medicine, and a colleague of Irving Selikoff. Nicholson was trained in physics, and had no professional training in epidemiology. Nonetheless, Nicholson was Selikoff’s go-to colleague for performing epidemiologic studies. After Selikoff withdrew from active testifying for plaintiffs in tort litigation, Nicholson was one of his colleagues who jumped into the fray as a surrogate advocate for Selikoff.[2]

For his opinion that PCBs were causally associated with liver cancer in humans,[3] Nicholson relied upon a report he wrote for the Ontario Ministry of Labor. [cited here as “Report”].[4] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[5]

The defense challenged the admissibility of Nicholson’s meta-analysis, on several grounds. The trial court decided the challenge based upon the Downing case, which was the law in the Third Circuit, before the Supreme Court decided Daubert.[6] The Downing case allowed some opportunity for consideration of reliability and validity concerns; there is, however, disappointingly little discussion of any actual validity concerns in the courts’ opinions.

The defense challenge to Nicholson’s proffered testimony on liver cancer turned on its characterization of meta-analysis as a “novel” technique, which is generally unreliable, and its claim that Nicholson’s meta-analysis in particular was unreliable. None of the individual studies that contributed data showed any “connection” between PCBs and liver cancer; nor did any individual study conclude that there was a causal association.

Of course, the appropriate response to this situation, with no one study finding a statistically significant association, or concluding that there was a causal association, should have been “so what?” One of the reasons to do a meta-analysis is that no available study was sufficiently large to find a statistically significant association, if one were there. As for drawing conclusions of causal associations, it is not the role or place of an individual study to synthesize all the available evidence into a principled conclusion of causation.
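The power rationale can be sketched with a toy fixed-effect (inverse-variance) pooling of the kind meta-analysis performs. The numbers below are invented for illustration and are not Nicholson’s data: five small studies, each with the same modest effect but individually non-significant, combine into a clearly significant pooled estimate.

```python
import math

# Hypothetical per-study results (illustrative only, not Nicholson's data):
# log relative risks and their standard errors for five small cohorts.
log_rr = [0.30, 0.25, 0.35, 0.28, 0.32]
se     = [0.20, 0.22, 0.19, 0.21, 0.20]

def p_two_sided(z):
    # two-sided p-value for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

# individually: every study's p-value exceeds 0.05
assert all(p_two_sided(b / s) > 0.05 for b, s in zip(log_rr, se))

# fixed-effect (inverse-variance) pooling
w = [1 / s ** 2 for s in se]
pooled = sum(wi * bi for wi, bi in zip(w, log_rr)) / sum(w)
pooled_se = math.sqrt(1 / sum(w))
z = pooled / pooled_se

print(round(math.exp(pooled), 2))      # pooled relative risk ≈ 1.35
print(round(p_two_sided(z), 4))        # pooled p-value well below 0.05
```

Pooling shrinks the standard error roughly as the square root of the combined weight, which is precisely why a set of underpowered studies can, in aggregate, detect an association that no single study could.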

In any event, the trial court concluded that the proffered novel technique lacked sufficient reliability, that the meta-analysis would “overwhelm, confuse, or mislead the jury,” and that the proffered meta-analysis on liver cancer was not sufficiently relevant to the facts of the case (in which no plaintiff had developed, or had died of, liver cancer). The trial court noted that the Report had not been peer-reviewed, and that it had not been accepted or relied upon by the Ontario government for any finding or policy decision. The trial court also expressed its concern that the proffered testimony along the lines of the Report would possibly confuse the jury because it appeared to be “scientific” and because Nicholson appeared to be qualified.

The Appeal

The Court of Appeals for the Third Circuit, in an opinion by Judge Becker, reversed Judge Kelly’s exclusion of the Nicholson Report, in an opinion that is still sometimes cited, even though Downing is no longer good law in the Circuit or anywhere else.[7] The Court was ultimately not persuaded that the trial court had handled the exclusion of Nicholson’s Report and its meta-analysis correctly, and it remanded the case for a do-over analysis.

Judge Becker described Nicholson’s Report as a “meta-analysis,” which pooled or “combined the results of numerous epidemiologic surveys in order to achieve a larger sample size, adjusted the results for differences in testing techniques, and drew his own scientific conclusions.”[8] Through this method, Nicholson claimed to have shown that “exposure to PCBs can cause liver, gall bladder and biliary tract disorders … even though none of the individual surveys supports such a conclusion when considered in isolation.”[9]

Validity

The appellate court gave no weight to the possibility that a meta-analysis would confuse a jury, or that its “scientific nature” or Nicholson’s credentials would lead a jury to give it more weight than it deserved.[10] The Court of Appeals conceded, however, that exclusion would have been appropriate if the methodology itself was invalid. The appellate opinion further acknowledged that the defense had offered opposition to Nicholson’s Report in which it documented his failure to include data that were inconsistent with his conclusions, and that “Nicholson had produced a scientifically invalid study.”[11]

Judge Becker’s opinion for a panel of the Third Circuit provided no details about the cherry picking. The opinion never analyzed why this charge of cherry-picking and manipulation of the dataset did not invalidate the meta-analytic method generally, or Nicholson’s method as applied. The opinion gave no suggestion that this counter-affidavit was ever answered by the plaintiffs.

Generally, Judge Becker’s opinion dodged engagement with the specific threats to validity in Nicholson’s Report, and took refuge in the indisputable fact that hundreds of meta-analyses were published annually, and that the defense expert witnesses did not question the general reliability of meta-analysis.[12] These facts undermined the defense claim that meta-analysis was novel.[13] The reality, however, was that meta-analysis was in its infancy in bio-medical research.

When it came to the specific meta-analysis at issue, the court did not discuss or analyze a single pertinent detail of the Report. Despite its lack of engagement with the specifics of the Report’s meta-analysis, the court astutely observed that prevalent errors and flaws do not mean that a particular meta-analysis is “necessarily in error.”[14] Of course, without bothering to look, the court would not know whether the proffered meta-analysis was “actually in error.”

The appellate court would have given Nicholson’s Report a “pass” if it were an application of an accepted methodology. The defense’s remedy under this condition would be to cross-examine the opinion in front of a jury. If, on the other hand, Nicholson had altered an accepted methodology to skew its results, then the court’s gatekeeping responsibility under Downing would be invoked.

The appellate court went on to fault the trial court for failing to make sufficiently explicit findings as to whether the questioned meta-analysis was unreliable. From its perspective, the Court of Appeals saw the trial court as resolving the reliability issue upon the greater credibility of defense expert witnesses in branding the disputed meta-analysis as unreliable. Credibility determinations are for the jury, but the court left room for a challenge on reliability itself:[15]

“Assuming that Dr. Nicholson’s meta-analysis is the proper subject of Downing scrutiny, the district court’s decision is wanting, because it did not make explicit enough findings on the reliability of Dr. Nicholson’s meta-analysis to satisfy Downing. We decline to define the exact level at which a district court can exclude a technique as sufficiently unreliable. Reliability indicia vary so much from case to case that any attempt to define such a level would most likely be pointless. Downing itself lays down a flexible rule. What is not flexible under Downing is the requirement that there be a developed record and specific findings on reliability issues. Those are absent here. Thus, even if it may be possible to exclude Dr. Nicholson’s testimony under Downing, as an unreliable, skewed meta-analysis, we cannot make such a determination on the record as it now stands. Not only was there no hearing, in limine or otherwise, at which the bases for the opinions of the contesting experts could be evaluated, but the experts were also not even deposed. All of the expert evidence was based on affidavits.”

Peer Review

Understandably, the defense attacked Nicholson’s Report as not having been peer reviewed. Without any scrutiny of the scientific bona fides of the workers’ compensation agency, the appellate court acquiesced in Nicholson’s self-serving characterization of his Report as having been reviewed by “cooperating researchers” and the Panel of the Ontario Workers’ Compensation agency. Another partisan expert witness characterized Nicholson’s Report as a “balanced assessment,” and this seemed to appease the Third Circuit, which was wary of requiring peer review in the first place.[16]

Relevancy Prong

The defense had argued that Nicholson’s Report was irrelevant because no individual plaintiff claimed liver cancer.[17] The trial court largely accepted this argument, but the appellate court disagreed because of conclusory language in Nicholson’s affidavit, in which he asserted that “proof of an increased risk of liver cancer is probative of an increased risk of other forms of cancer.” The court seemed unfazed by the ipse dixit, asserted without any support. Indeed, Nicholson’s assertion was contradicted by his own Report, in which he reported that there were fewer cancers among PCB-exposed male capacitor manufacturing workers than expected,[18] and that the rate for all cancers for both men and women was lower than expected, with 132 observed and 139.40 expected.[19]

The trial court had also agreed with the defense’s suggestion that Nicholson’s report, and its conclusion of causality between PCB exposure and liver cancer, were irrelevant because the Report “could not be the basis for anyone to say with reasonable degree of scientific certainty that some particular person’s disease, not cancer of the liver, biliary tract or gall bladder, was caused by PCBs.”[20]

Analysis

It would likely have been lost on Judge Becker and his colleagues, but Nicholson presented SMRs (standardized mortality ratios) throughout his Report, and for the all cancers statistic, he gave an SMR of 95. What Nicholson clearly did in this, and in all other instances, was simply divide the observed number by the expected, and multiply by 100. This crude, simplistic calculation fails to present a standardized mortality ratio, which requires taking into account the age distribution of the exposed and the unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of false and misleading. And in case anyone remembers General Electric v. Joiner, Nicholson’s summary estimate of risk for lung cancer in men was below the expected rate.[21]
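The distinction can be made concrete. The sketch below uses entirely hypothetical numbers (not Nicholson’s data) to contrast a proper, indirectly standardized SMR, in which expected deaths are computed stratum by stratum, with the crude observed-over-expected shortcut that ignores the age distribution of the cohort:

```python
# Sketch of why a crude observed/expected ratio is not an SMR.
# All numbers are hypothetical, for illustration only.

# Cohort person-years and observed deaths, by age stratum
cohort = {
    "young": {"person_years": 5000, "observed": 6},
    "old":   {"person_years": 1000, "observed": 12},
}
# Age-specific mortality rates in the reference population
reference_rates = {"young": 0.001, "old": 0.010}
# Reference-population person-years (used only for the crude overall rate)
reference_py = {"young": 100_000, "old": 100_000}

# Indirectly standardized SMR: expected deaths computed per stratum,
# so the cohort's own age distribution is respected.
observed = sum(s["observed"] for s in cohort.values())
expected = sum(cohort[a]["person_years"] * reference_rates[a] for a in cohort)
smr = 100 * observed / expected          # 100 * 18 / 15 = 120.0

# Crude shortcut: one overall reference rate applied to total person-years.
overall_rate = (sum(reference_py[a] * reference_rates[a] for a in reference_py)
                / sum(reference_py.values()))            # 0.0055
crude_expected = overall_rate * sum(s["person_years"] for s in cohort.values())
crude_ratio = 100 * observed / crude_expected            # ~54.5

print(f"SMR = {smr:.1f}, crude ratio = {crude_ratio:.1f}")
```

Because the hypothetical cohort is younger than the reference population, the crude ratio (about 55) masks the excess mortality that proper age standardization (SMR = 120) reveals; the two can diverge in either direction.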

Nicholson’s Report was replete with many other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and still his analysis of the composite outcome was based upon only seven cases. Of those seven cases, some were not confirmed as primary liver cancer, and at least one was confirmed not to be a primary liver cancer.[22]

Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.
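To see how much these two choices flatter a borderline result, consider a hypothetical test statistic (none of these numbers come from the Report). A one-tailed p-value is half the two-tailed value, and an uncorrected p-value ignores the inflation of false-positive risk from repeated comparisons on the same data:

```python
# Hypothetical numbers showing how a one-tailed p-value, with no
# multiplicity correction, flatters a borderline result.
import math

def one_tailed_p(z):
    # Upper-tail probability of the standard normal distribution
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 1.7                        # a borderline test statistic (hypothetical)
p_one = one_tailed_p(z)        # ~0.045: looks "significant"
p_two = 2 * p_one              # ~0.089: two-tailed, not significant

# Bonferroni correction for, say, 10 comparisons from the same dataset
m = 10
p_adjusted = min(1.0, p_two * m)   # ~0.89: nowhere near significance

print(f"one-tailed {p_one:.3f}, two-tailed {p_two:.3f}, adjusted {p_adjusted:.2f}")
```

The same data thus yield a nominally “significant” one-tailed p-value and a thoroughly unimpressive two-tailed, multiplicity-adjusted one.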

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies is not a valid meta-analysis; indeed, it is a well-known example of how to generate a Simpson’s Paradox, which can change the direction or magnitude of any association.[23]
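The paradox is easy to reproduce with invented numbers (again, not Nicholson’s data). In the sketch below, each of two hypothetical studies shows a risk ratio above 1.0, yet adding the cells across studies and recomputing, as Nicholson did, reverses the association; a stratified estimator such as Mantel–Haenszel, which keeps the studies separate, preserves it:

```python
# Hypothetical illustration of Simpson's Paradox from adding events
# across studies, versus a stratified (Mantel-Haenszel) pooled estimate.

# Each study: (exposed_events, exposed_n, unexposed_events, unexposed_n)
studies = [
    (90, 100, 800, 1000),   # risk ratio = 0.90 / 0.80 = 1.125
    (50, 1000, 4, 100),     # risk ratio = 0.05 / 0.04 = 1.25
]

def risk_ratio(a, n1, b, n0):
    return (a / n1) / (b / n0)

per_study = [risk_ratio(*s) for s in studies]   # both > 1

# Naive pooling: add the cells across studies, then recompute the ratio.
A = sum(s[0] for s in studies); N1 = sum(s[1] for s in studies)
B = sum(s[2] for s in studies); N0 = sum(s[3] for s in studies)
pooled_naive = risk_ratio(A, N1, B, N0)         # ~0.17 -- direction reversed

# Mantel-Haenszel pooled risk ratio: weights within each study, so the
# within-study association survives pooling.
num = sum(a * n0 / (n1 + n0) for a, n1, b, n0 in studies)
den = sum(b * n1 / (n1 + n0) for a, n1, b, n0 in studies)
pooled_mh = num / den                           # ~1.13, consistent

print(per_study, pooled_naive, pooled_mh)
```

Because the two studies have very different exposure proportions, the added-up totals compare unlike groups, and the pooled ratio flips from harmful to apparently protective.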

Some may be tempted to criticize the defense for having focused its challenge on the “novelty” of Nicholson’s approach in Paoli. The problem, of course, was the invalidity of Nicholson’s work, but both the trial court’s exclusion of Nicholson, and the Court of Appeals’ reversal and remand of the exclusion decision, illustrate the problem in getting judges, even well-respected judges, to accept their responsibility to engage with questioned scientific evidence.

Even in Paoli, no amount of ketchup could conceal the unsavoriness of Nicholson’s scrapple analysis. When the Paoli case reached the Court of Appeals again in 1994, Nicholson’s analysis was absent.[24] Apparently, the plaintiffs’ counsel had second thoughts about the whole matter. Today, under the revised Rule 702, there can be little doubt that Nicholson’s so-called meta-analysis should have been excluded.


[1]  Not to be confused with the Judge Kelly of the same district, who was unceremoniously disqualified after attending an ex parte conference with plaintiffs’ lawyers and expert witnesses, at the invitation of Dr. Irving Selikoff.

[2]  Pace Philip J. Landrigan & Myron A. Mehlman, “In Memoriam – William J. Nicholson,” 40 Am. J. Indus. Med. 231 (2001). Landrigan and Mehlman assert, without any support, that Nicholson was an epidemiologist. Their own description of his career, his undergraduate work at MIT, his doctorate in physics from the University of Washington, his employment at the Watson Laboratory, before becoming a staff member in Irving Selikoff’s department in 1969, all suggest that Nicholson brought little to no experience in epidemiology to his work on occupational and environmental exposure epidemiology.

[3]  In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[4]  William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (ODP); IDSP Report No. 2 (Toronto, Ontario Dec. 1987).

[5]  Id. at 373.

[6]  United States v. Downing, 753 F.2d 1224 (3d Cir. 1985).

[7]  In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 111 S.Ct. 1584 (1991).

[8]  Id. at 845.

[9]  Id.

[10]  Id. at 841, 848.

[11]  Id. at 845.

[12]  Id. at 847-48.

[13]  See, e.g., Robert Rosenthal, Judgment studies: Design, analysis, and meta-analysis (1987); Richard J. Light & David B. Pillemer, Summing Up: the Science of Reviewing Research (1984); Thomas A. Louis, Harvey V. Fineberg & Frederick Mosteller, “Findings for Public Health from Meta-Analyses,” 6 Ann. Rev. Public Health 1 (1985); Kristan A. L’abbé, Allan S. Detsky & Keith O’Rourke, “Meta-analysis in clinical research,” 107 Ann. Intern. Med. 224 (1987).

[14]  Id. at 857.

[15]  Id. at 858.

[16]  Id. at 858.

[17]  Id. at 845.

[18]  Report, Table 16.

[19]  Report, Table 18.

[20]  In re Paoli, 916 F.2d at 847.

[21]  See General Electric v. Joiner, 522 U.S. 136 (1997); NAS, “How Have Important Rule 702 Holdings Held Up With Time?” (March 20, 2015).

[22]  Report, Table 22.

[23]  James A. Hanley, Gilles Thériault, Ralf Reintjes & Annette de Boer, “Simpson’s Paradox in Meta-Analysis,” 11 Epidemiology 613 (2000); H. James Norton & George Divine, “Simpson’s paradox and how to avoid it,” Significance 40 (Aug. 2015); George Udny Yule, “Notes on the Theory of Association of Attributes in Statistics,” 2 Biometrika 121 (1903).

[24]  In re Paoli RR Yard Litig., 35 F.3d 717 (3d Cir. 1994).

The Contrivance Standard for Gatekeeping

March 23rd, 2019

According to Google ngram, the phrase “junk science” made its debut circa 1975, lagging junk food by about five years. See “The Rise and Rise of Junk Science” (Mar. 8, 2014). I have never much liked the phrase “junk science” because it suggests that courts need only be wary of the absurd and ridiculous in their gatekeeping function. Some expert witness opinions are, in fact, serious scientific contributions, just not worthy of being advanced as scientific conclusions. Perhaps better than “junk” would be patho-epistemologic opinions, or maybe even wissenschmutz, but even these terms might obscure that the opinion that needs to be excluded derives from serious scientific work, only it is not yet ready to be held forth as a scientific conclusion that can colorably be called knowledge.

Another formulation of my term, patho-epistemology, is the Eleventh Circuit’s lovely “Contrivance Standard.” Rink v. Cheminova, Inc., 400 F.3d 1286, 1293 & n.7 (11th Cir. 2005). In Rink, the appellate court held that the district court had acted within its discretion to exclude expert witness testimony because it had properly confined its focus to the challenged expert witness’s methodology, not his credibility:

“In evaluating the reliability of an expert’s method, however, a district court may properly consider whether the expert’s methodology has been contrived to reach a particular result. See Joiner, 522 U.S. at 146, 118 S.Ct. at 519 (affirming exclusion of testimony where the methodology was called into question because an ‘analytical gap’ existed ‘between the data and the opinion proffered’); see also Elcock v. Kmart Corp., 233 F.3d 734, 748 (3d Cir. 2000) (questioning the methodology of an expert because his ‘novel synthesis’ of two accepted methodologies allowed the expert to ‘offer a subjective judgment … in the guise of a reliable expert opinion’).”

Note the resistance, however, to the Supreme Court’s mandate of gatekeeping. District courts must apply the statutes, Rules of Evidence 702 and 703. There is no legal authority for the suggestion that a district court merely “may properly consider whether the expert’s methodology has been contrived.” Rink, 400 F.3d at 1293 n.7 (emphasis added).