Open Admissions for Expert Witnesses in Chantix Litigation

Chantix is a medication that helps people stop smoking.  Smoking kills people, but make a licensed drug and the lawsuits will come.

Earlier this month, Judge Inge Prytz Johnson, the MDL trial judge in the Chantix litigation, filed an opinion that rejected Pfizer’s challenges to plaintiffs’ general causation expert witnesses.  Memorandum Opinion and Order, In re Chantix (Varenicline) Products Liability Litigation, MDL No. 2092, Case 2:09-cv-02039-IPJ Document 642 (N.D. Ala. Aug. 21, 2012) [hereafter cited as Chantix].

Plaintiffs claimed that Chantix causes depression and suicidality, sometimes severe enough to result in suicide, attempted or completed.  Chantix at 3-4.  Others have written about Judge Johnson’s decision.  See Lacayo, “Win Some, Lose Some: Recent Federal Court Rulings on Daubert Challenges to Plaintiffs’ Experts,” (Aug. 30, 2012).

The breadth and depth of the errors in the trial court’s analysis, or lack of analysis, remain, however, to be explored.



The Chantix MDL court notes several times that the defendant “harped” on this or that issue; the reader might think the defendant was a music label rather than a pharmaceutical manufacturer.  One of the defendant’s chords that failed to resonate with the trial judge was the point that the plaintiffs’ expert witnesses relied upon statistically non-significant results.  Here is how the trial court reported the issue:

“While the defendant repeatedly harps on the importance of statistically significant data, the United States Supreme Court recently stated that ‘[a] lack of statistically significant data does not mean that medical experts have no reliable basis for inferring a causal link between a drug and adverse events …. medical experts rely on other evidence to establish an inference of causation.’ Matrixx Initiatives, Inc. v. Siracusano, 131 S.Ct. 1309, 1319 (2011).”

Chantix at 22.

Well, it was only a matter of time before the Supreme Court’s dictum would be put to this predictably erroneous interpretation.  See “The Matrixx Oversold” (April 4, 2011).

Matrixx involved a motion to dismiss the complaint, which the trial court granted, but the Ninth Circuit reversed.  No evidence was offered; nor was any ruling that evidence was unreliable or insufficient at issue. The Supreme Court affirmed the Circuit on the issue whether pleading statistical significance was necessary.  Matrixx Initiatives took this position in the hopes of avoiding the merits, and so the issue of causation was never before the Supreme Court.  A unanimous Supreme Court held that because FDA regulatory action does not require reliable evidence to support a causal conclusion, pleading materiality for a securities fraud suit does not require an allegation of causation, and thus does not require an allegation of statistically significant evidence. Everything that the Court said about statistical significance and causation was obiter dictum, and rather ill-considered dictum at that.

The Supreme Court thus wandered far beyond its holding to suggest that courts “frequently permit expert testimony on causation based on evidence other than statistical significance.” Matrixx Initiatives, Inc. v. Siracusano, 131 S.Ct. 1309, 1319 (2011) (citing Wells v. Ortho Pharm. Corp., 788 F.2d 741, 744-745 (11th Cir. 1986)).  But the Supreme Court’s citation to Wells, in Justice Sotomayor’s opinion, failed to support the point she was trying to make, or the decision that the trial court announced in Chantix.

Wells involved a claim of birth defects caused by the use of spermicidal jelly contraceptive.  At least one study reported a statistically significant increase in detected birth defects over the expected rate.  Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d, and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).  Wells is not an example of a case in which an expert witness opined about causation in the absence of a scientific study with statistical significance. Of course, finding statistical significance is just the beginning of assessing the causality of an association; the Wells case was and remains notorious for the expert witness’s poor assessment of all the determinants of scientific causation, including the validity of the studies relied upon.

The Wells decision was met with severe criticism in the 1980s.  The decision was widely criticized for its failure to evaluate the entire evidentiary display, as well as for its failure to rule out bias and confounding in the studies relied upon by the plaintiff.  See, e.g., James L. Mills and Duane Alexander, “Teratogens and ‘Litogens’,” 315 New Engl. J. Med. 1234 (1986); Samuel R. Gross, “Expert Evidence,” 1991 Wis. L. Rev. 1113, 1121-24 (1991) (“Unfortunately, Judge Shoob’s decision is absolutely wrong. There is no scientifically credible evidence that Ortho-Gynol Contraceptive Jelly ever causes birth defects.”). See also Editorial, “Federal Judges v. Science,” N.Y. Times, December 27, 1986, at A22 (unsigned editorial); David E. Bernstein, “Junk Science in the Courtroom,” Wall St. J. at A15 (Mar. 24, 1993) (pointing to Wells as a prominent example of how the federal judiciary had embarrassed the American judicial system with its careless, non-evidence based approach to scientific evidence). A few years later, another case in the same judicial district, against the same defendant, for the same product, resulted in the grant of summary judgment.  Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561 (N.D. Ga. 1991) (supposedly distinguishing Wells on the basis of more recent studies).

Neither the Justices in Matrixx Initiatives nor the trial court in Chantix can be excused for their poor scholarship, or their failure to note that Wells was overruled sub silentio by the Supreme Court’s own subsequent decisions in Daubert, Joiner, Kumho Tire, and Weisgram.  And if the weight of precedent did not kill the concept, then there is the simple matter of a supervening statute:  the 2000 amendment of Rule 702, of Federal Rules of Evidence.



The Supreme Court in Matrixx Initiatives was careful to distinguish causal judgments from regulatory action, but then went on in dictum to conflate the two.  The trial judge in Chantix showed no similar analytical care.  Judge Johnson held that the asserted absence of statistical significance was not a basis for excluding plaintiffs’ expert witnesses’ opinions on general causation.  Her Honor adverted to the Matrixx Initiatives dictum that the FDA “does not apply any single metric for determining when additional inquiry or action is necessary.” Matrixx, 131 S.Ct. at 1320.  Chantix at 22.  Judge Johnson noted:

“that ‘[n]ot only does the FDA rely on a wide range of evidence of causation, it sometimes acts on the basis of evidence that suggests, but does not prove, causation…. the FDA may make regulatory decisions against drugs based on postmarketing evidence that gives rise to only a suspicion of causation’.  Matrixx, id. The court declines to hold the plaintiffs’ experts to a more exacting standard as the defendant requests.”

Chantix at 23.

In the trial court’s analysis, the difference between regulatory action and civil litigation fact adjudication is obliterated.  This, however, is not the law of the United States, which has consistently acknowledged the difference. See, e.g., IUD v. API, 448 U.S. 607, 656 (1980) (“agency is free to use conservative assumptions in interpreting the data on the side of overprotection rather than underprotection.”)

As the Second Edition of the Reference Manual on Scientific Evidence (which was the outdated edition cited by the court in Chantix) explains:

“[p]roof of risk and proof of causation entail somewhat different questions because risk assessment frequently calls for a cost-benefit analysis. The agency assessing risk may decide to bar a substance or product if the potential benefits are outweighed by the possibility of risks that are largely unquantifiable because of presently unknown contingencies. Consequently, risk assessors may pay heed to any evidence that points to a need for caution, rather than assess the likelihood that a causal relationship in a specific case is more likely than not.”

Margaret A. Berger, “The Supreme Court’s Trilogy on the Admissibility of Expert Testimony,” in Reference Manual on Scientific Evidence at 33 (Fed. Jud. Ctr. 2d ed. 2000).



Judge Johnson insisted that the “court’s focus was solely on the principles and methodology, not on the conclusions they generate.” Chantix at 9.  This insistence, however, is contrary to the established law of Rule 702.

Although the United States Supreme Court attempted, in Daubert, to draw a distinction between the reliability of an expert witness’s methodology and conclusion, that Court soon realized that the distinction was flawed. If an expert witness’s proffered testimony is discordant with regulatory and scientific conclusions, a reasonable, disinterested scientist would be led to question the reliability of the testimony’s methodology and its inferences from facts and data to its conclusion.  The Supreme Court recognized this connection in General Electric Co. v. Joiner, and the connection between methodology and conclusions was ultimately incorporated into a statute, the revised Federal Rule of Evidence 702:

“[I]f scientific, technical or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training or education, may testify thereto in the form of an opinion or otherwise, if

  1. the testimony is based upon sufficient facts or data,
  2. the testimony is the product of reliable principles and methods; and
  3. the witness has applied the principles and methods reliably to the facts of the case.”

When the testimony is a conclusion about causation, Rule 702 directs an inquiry into whether that conclusion is based upon sufficient facts or data, and whether that conclusion is the product of reliable principles and methods.  The court’s focus should indeed be on the conclusion as well as the methodology claimed to generate the conclusion.  The Chantix MDL court thus ignored the clear mandate of a statute, Rule 702(1), and applied dictum from Daubert that was superseded by Joiner and by an Act of Congress.  The ruling is thus legally invalid to the extent it departs from the statute.



For obscure reasons, Judge Johnson sought to deprecate the need to rely upon epidemiologic studies, whether placebo-controlled clinical trials or observational studies.  See Chantix at 25 (citing Rider v. Sandoz Pharm. Corp., 295 F.3d 1194, 1198-99 (11th Cir. 2002)). Of course, the language cited in Rider came from a pre-Daubert, pre-Joiner, case, Wells v. Ortho Pharm. Corp., 788 F.2d 741, 745 (11th Cir. 1986) (holding that “a cause-effect relationship need not be clearly established by animal or epidemiological studies”).  This dubious legal lineage cannot support the glib dismissal of the need for epidemiologic evidence.



According to Judge Johnson, plaintiffs’ expert witness Shira Kramer considered all the evidence relevant to Chantix and neuropsychiatric side effects, in what Kramer described as a “weight of the evidence” analysis.  Chantix at 26.  In her report, Kramer had written that determinations about the weight of evidence are “subjective interpretations” based upon “various lines of scientific evidence.” Id. (citing and quoting Kramer’s report). Kramer also claimed that every scientist “brings a unique set of experiences, training and expertise …. Philosophical differences exist between experts…. Therefore, it is not surprising that differences of opinion exist among scientists. Such differences of opinion are not necessarily evidence of flawed scientific reasoning or methodology, but rather differences in judgment between scientists.” Id.

Without any support from scientific literature, or the Reference Manual on Scientific Evidence, Judge Johnson accepted Kramer’s explanation of a totally subjective, unprincipled approach as a scientific methodology.  Not surprisingly, Judge Johnson cited the First Circuit’s similarly vacuous embrace of a “weight of the evidence” (WOE) analysis in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 22 (1st Cir. 2011).  Chantix at 51.



Judge Johnson noted, contrary to her earlier suggestion that Shira Kramer had considered all the studies, that Kramer had excluded data from her analysis.  Kramer’s exclusions may have rested upon pre-specified principles, or they may have been completely ad hoc, like the unstated weighting principles of her WOE analysis.  In its gatekeeping role, however, the trial court expressed complete indifference to Kramer’s selectivity in excluding data.  “Why Dr. Kramer chose to include or exclude data from specific clinical trials is a matter for cross-examination.”  Chantix at 27.  This indifference is an abdication of the court’s gatekeeping responsibility.



The trial court attempted to justify its willingness to mute defendant’s harping on statistical significance by adverting to the concept of statistical power:

“Oftentimes, epidemiological studies lack the statistical power needed for definitive conclusions, either because they are small or the suspected adverse effect is particularly rare. Id. [citing Michael D. Green et al., “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 333, 335 (Fed. Judicial Ctr. 2d ed. 2000)]… .”

Chantix at 29 n.16.

To be fair to the trial court, the Reference Manual invited this illegitimate use of statistical power because it, at times, omits the specification that statistical power requires not only a level of statistical significance to be attained, but also a specified alternative hypothesis against which power is assessed.  See Power in the Courts — Part One; Power in the Courts — Part Two.  The trial court offered no alternative hypothesis against which any measure of power was to be assessed.

Judge Johnson did not report any power analyses, and she certainly did not report any quantification of power or lack thereof against some specific alternative hypothesis.  Judge Johnson’s invocation of power was just that – power used arbitrarily, without data, evidence, or reason.
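The point can be made concrete with a short calculation.  The sketch below (the function, the baseline risks, and the sample sizes are all illustrative, drawn from no Chantix study or filing) approximates the power of a two-sided, two-sample test of proportions by the usual normal approximation, and shows why a claim that a study “lacked power” is empty until one names the alternative hypothesis: the very same hypothetical trial has strong power against one alternative and weak power against another.

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function, via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p0, p1, n_per_arm, z_crit=1.96):
    """Approximate power of a two-sided test (alpha = 0.05) comparing an
    adverse-event rate p0 (control arm) with p1 (treated arm), with
    n_per_arm subjects in each arm.  Power is defined only against the
    *specified* alternative p1; without it there is nothing to compute.
    """
    d = abs(p1 - p0)
    p_bar = (p0 + p1) / 2.0
    se_null = sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)   # SE under H0
    se_alt = sqrt(p0 * (1.0 - p0) / n_per_arm
                  + p1 * (1.0 - p1) / n_per_arm)              # SE under H1
    return norm_cdf((d - z_crit * se_null) / se_alt)

# Same hypothetical trial (1,000 per arm, 1% baseline risk), two
# different alternatives: power is high against a tripling of risk,
# low against a 50% increase.
print(two_proportion_power(0.01, 0.03, 1000))
print(two_proportion_power(0.01, 0.015, 1000))
```

Nothing in the opinion supplies these inputs, which is the point: without a stated alternative and sample size, “low power” is rhetoric, not analysis.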



As with the invocation of statistical power, the trial court also invoked the concept of confidence intervals to suggest that such intervals provide a more refined approach to assessing statistical significance:

“A study found to have ‘results that are unlikely to be the result of random error’ is ‘statistically significant’. Reference Guide on Epidemiology, supra, at 354. Statistical significance, however, does not indicate the strength of an association found in a study. Id. at 359. ‘A study may be statistically significant but may find only a very weak association; conversely, a study with small sample sizes may find a high relative risk but still not be statistically significant.’ Id. To reach a ‘more refined assessment of appropriate inferences about the association found in an epidemiologic study’, researchers rely on another statistical technique known as a ‘confidence interval’. Id. at 360.”

Chantix at 30 n.17.  True, true, but immaterial.  The trial court, again, never carries through with the direction given by the Reference Manual.  Not a single confidence interval is presented.  No confidence intervals are subjected to this more refined assessment.  Why have more refined assessments when even the cruder assessments are not done?
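The “more refined assessment” the Reference Manual describes is not hard to carry out.  A minimal sketch (the counts are hypothetical, not figures from any Chantix study) computes a relative risk and its approximate 95% confidence interval by the standard log-transform (Katz) method, the kind of calculation the court never performed:

```python
from math import exp, log, sqrt

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk and approximate 95% confidence interval (Katz log
    method): a events among n1 exposed subjects, b events among n2
    unexposed subjects."""
    rr = (a / n1) / (b / n2)
    se_log_rr = sqrt((1.0 / a - 1.0 / n1) + (1.0 / b - 1.0 / n2))
    lower = exp(log(rr) - z * se_log_rr)
    upper = exp(log(rr) + z * se_log_rr)
    return rr, lower, upper

# Two hypothetical 2x2 tables.  The first interval lies wholly above
# 1.0, the rough equivalent of statistical significance at the 0.05
# level; the second is wide and straddles 1.0, an imprecise,
# non-significant result.
print(relative_risk_ci(30, 1000, 10, 1000))
print(relative_risk_ci(3, 1000, 2, 1000))
```

The exercise takes a few lines; the refinement comes from actually looking at the interval’s width and location, which the Chantix opinion never did for a single study.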



The trial court somehow had the notion that all it had to do was state that every disputed fact and opinion went to the weight, not the admissibility, and then pass the case to a presumably more scientifically literate jury.  To be sure, the court engaged in a good deal of hand waving, going through the motions of deciding contested issues.  Not only did Judge Johnson smash poor Pfizer’s harp, Her Honor unhinged the gate that federal judges are supposed to keep.  Chantix declares that it is now open admissions for expert witnesses testifying to causation in federal cases.  This is a judgment in search of an appeal.
