TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Multiplicity in the Third Circuit

September 21st, 2017

In Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283 (W.D. Pa.), plaintiffs claimed that their employer’s reduction in force unlawfully targeted workers over 50 years of age. Plaintiffs lacked any evidence of employer animus against old folks, and thus attempted to make out a statistical disparate impact claim. The plaintiffs placed their chief reliance upon an expert witness, Michael A. Campion, to analyze a dataset of workers agreed to have been the subject of the R.I.F. For the last 30 years, Campion has been on the faculty at Purdue University. His academic training and graduate degrees are in industrial and organizational psychology. Campion has served as an editor of Personnel Psychology, and as a past president of the Society for Industrial and Organizational Psychology. Campion’s academic website notes that he manages a small consulting firm, Campion Consulting Services1.

The defense sought to characterize Campion as not qualified to offer his statistical analysis2. Campion did, however, have some statistical training as part of his master’s level training in psychology, and his professional publications did occasionally involve statistical analyses. To be sure, Campion’s statistical acumen paled in comparison with that of the defense expert witness, James Rosenberger, a fellow and a former vice president of the American Statistical Association, as well as a full professor of statistics at Pennsylvania State University. The threshold for qualification, however, is low, and the defense’s attack on Campion’s qualifications failed to attract the court’s serious attention.

On the merits, the defense subjected Campion to a strong challenge on whether he had misused data. The defense’s expert witness, Prof. Rosenberger, filed a report that questioned Campion’s data handling and statistical analyses. The defense claimed that Campion had engaged in questionable data manipulation by including, in his RIF analysis, workers who had been terminated when their plant was transferred to another company, as well as workers who retired voluntarily.

Using simple z-score tests, Campion compared the ages of terminated and non-terminated employees in four subgroups, ages 40+, 45+, 50+, and 55+. He did not conduct an analysis of the 60+ subgroup on the claim that this group had too few members for the test to have sufficient power3. Campion found a small z-score for the 40+ versus <40 age-group comparison (z = 1.51), which is not close to statistical significance at the 5% level. On the defense’s legal theory, this was the crucial comparison to be made under the Age Discrimination in Employment Act (ADEA). The plaintiffs, however, maintained that they could make out a case of disparate impact by showing age discrimination in age subgroups that started above the minimum specified by the ADEA. Although age is a continuous variable, Campion decided to conduct z-score tests on subgroups based upon five-year increments. For the 45+, 50+, and 55+ age subgroups, he found z-scores that ranged from 2.15 to 2.46, and he concluded that there was evidence of disparate impact in the higher age subgroups4. Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283, 2015 WL 4232600, at *11 (W.D. Pa. July 13, 2015) (McVerry, S.J.)
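For readers who want the mechanics: the z-score test described above is a standard two-sample comparison of termination proportions. A minimal sketch, using only Python’s standard library and hypothetical counts (not the Karlo data):

```python
import math

def two_proportion_z(term_old, n_old, term_young, n_young):
    """z-statistic for the difference between two termination rates,
    using the pooled proportion to estimate the standard error."""
    p_old = term_old / n_old
    p_young = term_young / n_young
    p_pool = (term_old + term_young) / (n_old + n_young)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_young))
    return (p_old - p_young) / se

# Hypothetical counts: 30 of 200 workers in an older subgroup terminated,
# versus 40 of 600 workers below the age cutoff.
z = two_proportion_z(30, 200, 40, 600)
```

An absolute z-score above about 1.96 corresponds to statistical significance at the conventional two-sided 5% level, before any adjustment for multiple comparisons.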

The defense, and apparently the defense expert witnesses, branded Campion’s analysis as “data snooping,” which required correction for multiple comparisons. In the defense’s view, the multiple age subgroups required a Bonferroni correction that would have diminished the critical p-value for “significance” by a factor of four. The trial court agreed with the defense contention about data snooping and multiple comparisons, and excluded Campion’s opinion of disparate impact, which had been based upon finding statistically significant disparities in the 45+, 50+, and 55+ age subgroups. 2015 WL 4232600, at *13. The trial court noted that Campion, in finding significant disparities in terminations in the subgroups, but not in the 40+ versus <40 analysis:

[did] not apply any of the generally accepted statistical procedures (i.e., the Bonferroni procedure) to correct his results for the likelihood of a false indication of significance. This sort of subgrouping ‘analysis’ is data-snooping, plain and simple.”

Id. After excluding Campion’s opinions under Rule 702, as well as other evidence in support of plaintiffs’ disparate impact claim, the trial court granted summary judgment on the discrimination claims. Karlo v. Pittsburgh Glass Works, LLC, No. 2:10–cv–1283, 2015 WL 5156913 (W.D. Pa. Sept. 2, 2015).

On plaintiffs’ appeal, the Third Circuit took the wind out of the attack on Campion by holding that the ADEA prohibits disparate impacts based upon age generally, and not merely upon the status of being at least 40 years old; subgroup claims, such as those of workers 50 and older, are thus cognizable. Karlo v. Pittsburgh Glass Works, LLC, 849 F.3d 61, 66-68 (3d Cir. 2017). This holding took the legal significance out of the statistical insignificance of Campion’s comparison of 40+ versus <40 age-group termination rates. Campion’s subgroup analyses were back in play, but the Third Circuit still faced the question whether Campion’s conclusions, based upon unadjusted z-scores and p-values, offended Rule 702.

The Third Circuit noted that the district court had identified three grounds for excluding Campion’s statistical analyses:

(1) Dr. Campion used facts or data that were not reliable;

(2) he failed to use a statistical adjustment called the Bonferroni procedure; and

(3) his testimony lacks ‘‘fit’’ to the case because subgroup claims are not cognizable.

849 F.3d at 81. The first issue was raised by the defense’s claims of Campion’s sloppy data handling, and inclusion of voluntarily retired workers and workers who were terminated when their plant was turned over to another company. The Circuit did not address these data handling issues, which it left for the trial court on remand. Id. at 82. The third ground went out of the case with the appellate court’s resolution of the scope of the ADEA. The Circuit did, however, engage on the issue whether adjustment for multiple comparisons was required by Rule 702.

On the “data-snooping” issue, the Circuit concluded that the trial court had applied “an incorrectly rigorous standard for reliability.” Id. The Circuit acknowledged that

[i]n theory, a researcher who searches for statistical significance in multiple attempts raises the probability of discovering it purely by chance, committing Type I error (i.e., finding a false positive).”

849 F.3d at 82. The defense expert witness contended that applying the Bonferroni adjustment, which would have reduced the critical significance probability level from 5% to 1%, would have rendered Campion’s analyses not statistically significant, and thus not probative of disparate impact. Given that plaintiffs’ cases were entirely statistical, the adjustment would have been fatal to their cases. Id. at 82.
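The arithmetic behind the defense’s contention can be sketched with Python’s standard library. The four tests correspond to the 40+, 45+, 50+, and 55+ subgroups described above; note that 0.05/4 is strictly 0.0125, which the opinions round to roughly 1%:

```python
from statistics import NormalDist

# Familywise significance level, and the number of subgroup tests
# (40+, 45+, 50+, and 55+, as described in the opinions).
alpha, m = 0.05, 4

# Bonferroni divides the familywise level among the tests.
per_test = alpha / m  # 0.0125 -- described in the opinions as roughly 1%

# The two-sided critical z-value rises correspondingly.
z_unadjusted = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
z_adjusted = NormalDist().inv_cdf(1 - per_test / 2)  # about 2.50
```

On these numbers, Campion’s subgroup z-scores of 2.15 to 2.46 clear the unadjusted 1.96 threshold but fall short of the Bonferroni-adjusted critical value, which is the nub of the defense’s argument.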

At the trial level and on appeal, plaintiffs and Campion had objected to the data-snooping charge on the grounds that

(1) he had analyzed only four subgroups;

(2) virtually all subgroups were statistically significant;

(3) his methodology was “hypothesis driven” and involved logical increments in age to explore whether the strength of the evidence of age disparity in terminations continued in each, increasingly older subgroup;

(4) his method was analogous to replications with different samples; and

(5) his result was confirmed by a single, supplemental analysis.

Id. at 83. According to the plaintiffs, Campion’s approach was based upon the reality that age is a continuous, not a dichotomous variable, and he was exploring a single hypothesis. A.240-241; Brief of Appellants at 26. Campion’s explanations do mitigate somewhat the charge of “data snooping,” but they do not explain why Campion did not use a statistical analysis that treated age as a continuous variable, at the outset of his analysis. The single, supplemental analysis was never described or reported by the trial or appellate courts.

The Third Circuit concluded that the district court had applied a ‘‘merits standard of correctness,’’ which is higher than what Rule 702 requires. Specifically, the district court, having identified a potential methodological flaw, did not further evaluate whether Campion’s opinion relied upon good grounds. 849 F.3d at 83. The Circuit vacated the judgment below, and remanded the case to the district court for the opportunity to apply the correct standard.

The trial court’s acceptance that an adjustment was appropriate or required hardly seems a “merits standard.” The use of a proper adjustment for multiple comparisons is very much a methodological concern. If Campion could reach his conclusion only by way of an inappropriate methodology, then his conclusion surely would fail the requirements of Rule 702. The trial court did, however, appear to accept, without explicit evidence, that the failure to apply the Bonferroni correction made it impossible for Campion to present sound scientific argument for his conclusion that there had been disparate impact. The trial court’s opinion also suggests that the Bonferroni correction itself, as opposed to some more appropriate correction, was required.

Unfortunately, the reported opinions do not provide the reader with a clear account of what the analyses would have shown on the correct data set, without improper inclusions and exclusions, and with appropriate statistical adjustments. Presumably, the parties are left to make their cases on remand.

Based upon citations to sources that described the Bonferroni adjustment as “good statistical practice,” but one that is ‘‘not widely or consistently adopted’’ in the behavioral and social sciences, the Third Circuit observed that in some cases, failure to adjust for multiple comparisons may “simply diminish the weight of an expert’s finding.”5 The observation is problematic given that Kumho Tire suggests that an expert witness must use “in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 150, (1999). One implication is that courts are prisoners to prevalent scientific malpractice and abuse of statistical methodology. Another implication is that courts need to look more closely at the assumptions and predicates for various statistical tests and adjustments, such as the Bonferroni correction.

These worrisome implications are exacerbated by the appellate court’s insistence that the question whether a study’s result was properly calculated or interpreted “goes to the weight of the evidence, not to its admissibility.”6 Combined with citations to pre-Daubert statistics cases7, judicial comments such as these can appear to reflect a general disregard for the statutory requirements of Rules 702 and 703. Claims of statistical significance, in studies with multiple exposures and multiple outcomes, frequently are not adjusted for multiple comparisons, without notation, explanation, or justification. The consequence is that study results are often over-interpreted and over-sold. Methodological errors related to multiple testing or over-claiming statistical significance are commonplace in tort litigation over “health-effects” studies of birth defects, cancer, and other chronic diseases that require epidemiologic evidence8.

In Karlo, the claimed methodological error is beset by its own methodological problems. As the court noted, adjustments for multiple comparisons are not free from methodological controversy9. One noteworthy textbook10 labels the Bonferroni correction an “awful response” to the problem of multiple comparisons. Aside from this strident criticism, there are alternative approaches to statistical adjustment for multiple comparisons. In the context of the Karlo case, the Bonferroni might well be awful because Campion’s four subgroups are hardly independent tests. Because each subgroup is nested within the next higher age subgroup, the subgroup test results will be strongly correlated in a way that defeats the mathematical assumptions of the Bonferroni correction. On remand, the trial court in Karlo must still make its Rule 702 gatekeeping decision on whether Campion properly considered the role of multiple subgroups, and of multiple analyses run on different models.
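The nesting point can be illustrated by simulation. In the sketch below, a hypothetical workforce (not the Karlo data) is subjected to terminations generated independently of age, so any “significant” subgroup result is a pure false positive; the z-statistics for the 45+ and 50+ subgroup tests nonetheless come out strongly correlated across repetitions, which is exactly the dependence that the Bonferroni correction’s independence assumption ignores:

```python
import math
import random

def z_stat(t_old, n_old, t_young, n_young):
    """Two-proportion z-statistic with a pooled standard error."""
    p_pool = (t_old + t_young) / (n_old + n_young)
    if p_pool in (0.0, 1.0):
        return 0.0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_young))
    return (t_old / n_old - t_young / n_young) / se

random.seed(1)
ages = [random.randint(40, 64) for _ in range(800)]  # hypothetical workforce

z45, z50 = [], []
for _ in range(2000):
    # Terminate each worker with 10% probability, independent of age.
    fired = [random.random() < 0.10 for _ in ages]
    for cutoff, out in ((45, z45), (50, z50)):
        n_old = sum(a >= cutoff for a in ages)
        t_old = sum(f for a, f in zip(ages, fired) if a >= cutoff)
        t_young = sum(fired) - t_old
        out.append(z_stat(t_old, n_old, t_young, len(ages) - n_old))

# Pearson correlation between the two nested subgroup test statistics.
m45, m50 = sum(z45) / len(z45), sum(z50) / len(z50)
cov = sum((a - m45) * (b - m50) for a, b in zip(z45, z50)) / len(z45)
sd45 = math.sqrt(sum((a - m45) ** 2 for a in z45) / len(z45))
sd50 = math.sqrt(sum((b - m50) ** 2 for b in z50) / len(z50))
r = cov / (sd45 * sd50)
```

In this setup the correlation comes out well above 0.5. Because the tests are far from independent, Bonferroni’s equal division of the significance level overcorrects; adjustments that account for the dependence among the tests would be more apt.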


1 Although Campion describes his consulting business as small, he seems to turn up in quite a few employment discrimination cases. See, e.g., Chen-Oster v. Goldman, Sachs & Co., 10 Civ. 6950 (AT) (JCF) (S.D.N.Y. 2015); Brand v. Comcast Corp., Case No. 11 C 8471 (N.D. Ill. July 5, 2014); Powell v. Dallas Morning News L.P., 776 F. Supp. 2d 240, 247 (N.D. Tex. 2011) (excluding Campion’s opinions), aff’d, 486 F. App’x 469 (5th Cir. 2012).

2 See Defendant’s Motion to Bar Dr. Michael Campion’s Statistical Analysis, 2013 WL 11260556.

3 There was no mention of an effect size for the lower age subgroups, or of a power calculation for the 60+ subgroup’s probability of showing a z-score greater than two. Similarly, there was no discussion or argument about why this subgroup could not have been evaluated with Fisher’s exact test. In deciding the appeal, the Third Circuit observed that “Dr. Rosenberger test[ed] a subgroup of sixty-and-older employees, which Dr. Campion did not include in his analysis because ‘[t]here are only 14 terminations, which means the statistical power to detect a significant effect is very low’. A.244–45.” Karlo v. Pittsburgh Glass Works, LLC, 849 F.3d 61, 82 n.15 (3d Cir. 2017).

4 In the trial court’s words, the z-score converts the difference in termination rates into standard deviations. Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283, 2015 WL 4232600, at *11 n.13 (W.D. Pa. July 13, 2015). According to the trial court, Campion gave a rather dubious explanation of the meaning of the z-score: “[w]hen the number of standard deviations is less than –2 (actually–1.96), there is a 95% probability that the difference in termination rates of the subgroups is not due to chance alone” Id. (internal citation omitted).

5 See 849 F.3d 61, 83 (3d Cir. 2017) (citing and quoting from Paetzold & Willborn § 6:7, at 308 n.2) (describing the Bonferroni adjustment as ‘‘good statistical practice,’’ but ‘‘not widely or consistently adopted’’ in the behavioral and social sciences); see also E.E.O.C. v. Autozone, Inc., No. 00-2923, 2006 WL 2524093, at *4 (W.D. Tenn. Aug. 29, 2006) (‘‘[T]he Court does not have a sufficient basis to find that … the non-utilization [of the Bonferroni adjustment] makes [the expert’s] results unreliable.’’). And of course, the Third Circuit invoked the Daubert chestnut: ‘‘Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.’’ Daubert, 509 U.S. 579, 596 (1993).

6 See 849 F.3d at 83 (citing Leonard v. Stemtech Internat’l Inc., 834 F.3d 376, 391 (3d Cir. 2016)).

7 See 849 F.3d 61, 83 (3d Cir. 2017), citing Bazemore v. Friday, 478 U.S. 385, 400 (1986) (‘‘Normally, failure to include variables will affect the analysis’ probativeness, not its admissibility.’’).

8 See Hans Zeisel & David Kaye, Prove It with Figures: Empirical Methods in Law and Litigation 93 & n.3 (1997) (criticizing the “notorious” case of Wells v. Ortho Pharmaceutical Corp., 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986), for its erroneous endorsement of conclusions based upon “statistically significant” studies that explored dozens of congenital malformation outcomes, without statistical adjustment). The authors do, however, give an encouraging example of an English trial judge who took seriously the multiplicity of hypotheses tested in the study relied upon by plaintiffs. Reay v. British Nuclear Fuels (Q.B. Oct. 8, 1993) (published in The Independent, Nov. 22, 1993) (“the fact that a number of hypotheses were considered in the study requires an increase in the P-value of the findings with consequent reduction in the confidence that can be placed in the study result … .”), quoted in Zeisel & Kaye at 93. Zeisel and Kaye emphasize that courts should not be overly impressed with claims of statistically significant findings, and should pay close attention to how expert witnesses developed their statistical models. Id. at 94.

9 See David B. Cohen, Michael G. Aamodt, and Eric M. Dunleavy, Technical Advisory Committee Report on Best Practices in Adverse Impact Analyses (Center for Corporate Equality 2010).

10 Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, Modern Epidemiology 273 (3d ed. 2008); see also Kenneth J. Rothman, “No Adjustments Are Needed for Multiple Comparisons,” 1 Epidemiology 43, 43 (1990).

Lawsuit Magic – Turning Talcum into Wampum

August 27th, 2017

Last week, a Los Angeles jury, with little prior experience in giving away other people’s money, awarded Eva Echeverria $417,000,000 in compensatory and punitive damages.1 Pundits in the media, and from both sides of the bar, including your humble blogger, jumped in to offer their speculation about the causes of the profligacy.2

In speaking to one reporter, I described the evidence against Johnson & Johnson in an earlier trial (Slemp) as showing that the company needed to engage more fully with the scientific evidence, and not reduce complex evidence to sound bites. Alas, no good deed goes unpunished; my comments were reduced to sound bites! The reporter quoted me in part as having said that the case was a tough one for the defense, but left out that I thought the case was tough because the defense will have a difficult time educating judges and juries in the scientific methods and judgment needed to reach a sound conclusion. The reporter suggested that I had opined that the evidence against J & J was “compelling,” when I had suggested the evidence was confounded and biased, and that J & J needed to take greater care in addressing study validity.3

Perhaps more interesting than my speculation is the guesswork of the plaintiffs’ counsel, who has had more experience with conjecture than I will ever enjoy. In an interview with an American Law Media reporter4, Allen Smith offered his view that three “new” pieces of evidence explain the Los Angeles hyper-verdict:

1. evidence that other companies selling consumer talcum powder have begun, within the last few months, to place ovarian cancer warnings on their packaging;

2. evidence that two persons involved in the Cosmetic Industry Review, which has concluded that talcum powder is safe, had received payments from Johnson & Johnson for speaking engagements; and

3. evidence that Douglas Weed, a former National Cancer Institute epidemiologist, who testified for Johnson & Johnson as an expert witness in the Echeverria case, had been sanctioned in another, non-talc case in North Carolina, for lying under oath about whether he had notes to his expert report in that other case.

Smith claimed that the new evidence was “very compelling,” especially the evidence that Johnson & Johnson had presented “unbelievable and non-credible witnesses on an issue so important like this.”

Now, Smith was trial counsel. He was intimately involved in presenting the evidence, and in watching the jurors’ reactions. Nonetheless, I am skeptical that these three “bits” explain the jury’s extravagance.

The first “bit” seems completely irrelevant. The fact of another company’s having warned within months of the trial, and years after the plaintiff was diagnosed with ovarian cancer, suggests that the evidence was inflammatory without having any probative value. Feasibility of warning was not an issue. State of the art was an issue. In the Slemp trial, Graham Colditz testified that he had had his epiphany that talc causes ovarian cancer only two years ago, when he was instructed by plaintiffs’ counsel to formulate an opinion on the causal claim. That another company recently placed a warning to ward off the lawsuit industry is hardly evidence of an industry or governmental standard. All that can really be said is that some companies have been bullied or scared into warnings by the Lawsuit Industry, in the hopes of avoiding litigation. Indeed, it is not at all clear how this bit of irrelevancy was admitted into evidence. All in all, this evidence of a recent warning, years after the plaintiff’s use of the defendant’s talcum powder, seems quite out of bounds.

The second bit was simply more of the same inflammatory, scurrilous attacks on Johnson & Johnson. Having watched much of the Slemp trial, I can say that this was Allen Smith’s stock in trade. From media reports, he seemed to have succeeded in injecting his personal attacks on the most peripheral of issues into the Echeverria trial. Not everything in Slemp was collateral attack, but a lot was, and much of it was embarrassing to the legal system for having tolerated it.

The third bit of evidence about Dr. Weed’s having been sanctioned was news to me. A search on Westlaw and Google Scholar failed to find the sanctions order referred to by plaintiffs’ counsel. If anyone is familiar with the North Carolina case that gave rise to the alleged court sanction, please send me a copy or a citation.


1 Daniel Siegal, “J&J Hit With $417M Verdict In 1st Calif. Talc Cancer Trial,” Law360 (Aug. 21, 2017). The case was Echeverria v. Johnson & Johnson, case no. BC628228, Los Angeles Cty. Superior Court, California.

2 See Daniel Siegal, “Science No Salve For J&J In Talc Cases, $417M Verdict Shows,” Law360, Los Angeles (Aug. 22, 2017). See also Margaret Cronin Fisk & Edvard Pettersson, “J&J Loses $417 Million Talc Verdict in First California Case,” Bloomberg News (Aug. 21, 2017).

3 Tina Bellon, “Massive California verdict expands J&J’s talc battlefield,” Reuters (Aug. 22, 2017); Tina Bellon, “Massive California verdict expands J&J’s talc battlefield,” CNBC (Aug. 22, 2017); Tina Bellon, “J&J’s talc woes expand with massive California verdict,” BNN Reuters (Aug. 22, 2017).

4 Amanda Bronstad, “New Evidence Seen as Key in LA Jury’s $417M Talc Verdict,” Law.com (Aug. 22, 2017).

WOE — Zoloft Escapes an MDL While Third Circuit Creates a Conceptual Muddle

July 31st, 2017

Multidistrict Litigations (MDLs) can be “muddles” that are easy to get into, but hard to get out of. Pfizer and subsidiary Greenstone fabulously escaped a muddle through persistent lawyering and the astute gatekeeping of a district judge in the Eastern District of Pennsylvania. That judge, the Hon. Cynthia Rufe, sustained objections to the admissibility of plaintiffs’ epidemiologic expert witness Anick Bérard. When the MDL’s plaintiffs’ steering committee (PSC) demanded, requested, and begged for a do-over, Judge Rufe granted them one more chance. The PSC put their litigation industry eggs in a single basket, carried by statistician Nicholas Jewell. Unfortunately for the PSC, Judge Rufe found Jewell’s basket to be as methodologically defective as Bérard’s, and Her Honor excluded Jewell’s proffered testimony. Motions, paper, and appeals followed, but on June 2, 2017, the Third Circuit declared that the PSC and its clients had had enough opportunities to get through the gate. Their baskets of methodological deplorables were not up to snuff. In re Zoloft Prod. Liab. Litig., No. 16-2247, __ F.3d __, 2017 WL 2385279, 2017 U.S. App. LEXIS 9832 (3d Cir. June 2, 2017) (affirming exclusion of Jewell’s dodgy opinions, which involved multiple methodological flaws and failures to follow any methodology faithfully) [Slip op. cited below as Zoloft].

Plaintiffs Attempt to Substitute WOE for Depressingly Bad Expert Witness Opinion

The ruse of conflating “weight of the evidence,” as used to describe the appellate standard of review for sustaining or reversing a trial court’s factual findings, with a purported scientific methodology for inferring causation was on full display in the PSC’s attack on Judge Rufe’s gatekeeping. In their appellate brief in the Court of Appeals for the Third Circuit, the PSC asserted that Jewell had used a “weight of the evidence method,” even though that phrase, “weight of the evidence” (WOE), was never used in Jewell’s litigation reports. The full context of the PSC’s argument and citations to Milward make clear a deliberate attempt to conflate WOE as an appellate judicial standard for reviewing jury fact finding with a purported scientific methodology. See Appellants’ Opening Brief at 54 (Aug. 10, 2016) [cited as PSC] (asserting that “[a]t all times, the ultimate evaluation of the weight of the evidence is a jury question”; citing Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 20 (1st Cir. 2011), cert. denied, 133 S. Ct. 63 (2012)).

Having staked the ground that WOE is akin to a jury’s factual finding, and thus immune to any but the most extraordinary trial court action or appellate intervention, the PSC then pivoted to claim that Jewell’s WOE-ful method was nothing much more than an assessment of “the totality of the available scientific evidence, guided by the well-accepted Bradford-Hill criteria.” PSC at 3, 4, 7. This maneuver allowed the PSC to argue, apparently with a straight face, that WOE methodology as used by Jewell, had been generally accepted in the scientific community, as well as by the Third Circuit, in previous cases in which the court accepted the use of Bradford Hill’s considerations as a reliable method for establishing general causation. See PSC at 4 (citing Gannon v. United States, 292 F. App’x 170, 173 n.1 (3d Cir. 2008)). Jewell then simply plugged in his expertise and “40 years of experience,” and the desired conclusion of causation popped out. Id. Quod erat demonstrandum.

In pressing its point, the PSC took full advantage of loose, inaccurate language from the American Law Institute’s Restatement’s notorious comment C:

No algorithm exists for applying the Hill guidelines to determine whether an association truly reflects a causal relationship or is spurious.”

PSC at 33-34, citing Restatement (Third) of Torts: Physical and Emotional Harm § 28 cmt. c(3) (2010). Well true, but the absence of a mathematical algorithm hardly means that causal judgments are devoid of principles and standards. The PSC was undeterred, by text or by shame, from equating an unarticulated use of WOE methodology with some vague invocation of Bradford Hill’s considerations for evaluating associations for causality. See PSC at 43 (citing cases that never mentioned WOE but only Bradford Hill’s 50-plus year old heuristic as somehow supporting the claimed identity of the two approaches)1.

Pfizer Rebuffs WOE

Pfizer filed a comprehensive brief that unraveled the PSC’s duplicity. For unknown reasons, tactical or otherwise, however, Pfizer did not challenge the specifics of PSC’s equation of WOE with an abridged, distorted application of Bradford Hill’s considerations. See generally Opposition Brief of Defendants-Appellees Pfizer Inc., Pfizer International LLC, and Greenstone LLC [cited as Pfizer]. Perhaps given page limits and limited judicial attention spans, and just how woefully bad Jewell’s opinions were, Pfizer may well have decided that a more directed approach of assuming arguendo WOE’s methodological appropriateness was a more economical, pragmatic approach. A close reading of Pfizer’s brief, however, makes clear that it never conceded the validity of WOE as a scientific methodology.

Pfizer did point to the recasting of Jewell’s aborted attempt to apply Bradford Hill considerations as an employment of WOE methodology. Pfizer at 46-47. The argument reminded me of Abraham Lincoln’s famous argument:

How many legs does a dog have if you call his tail a leg?

Four.

Saying that a tail is a leg doesn’t make it a leg.”

Allen Thorndike Rice, Reminiscences of Abraham Lincoln by Distinguished Men of His Time at 242 (1909). Calling Jewell’s supposed method WOE or Bradford Hill or WOE/Bradford Hill did not cure the “fatal methodological flaws in his opinions.” Pfizer at 47.

Pfizer understandably and properly objected to the PSC’s attempt to cast Jewell’s “methodology” at such a high level of generality that any consideration of the many instances of methodological infidelity would be relegated to mere jury questions. Acquiescence in the PSC’s rhetorical move would constitute a complete abandonment of the inquiry whether Jewell had used a proper method. Pfizer at 15-16.

Interestingly, none of the amici curiae addressed the slippery WOE arguments advanced by the PSC. See generally Brief of Amici Curiae American Tort Reform Ass’n & Pharmaceutical Research and Manufacturers of America (Oct. 18, 2016); Brief of Washington Legal Fdtn. as Amicus Curiae (Oct. 18, 2016). There was no meaningful discussion of WOE as a supposedly scientific methodology at oral argument. See Transcript of Oral Argument in In re Zoloft Prod. Liab. Litig., No. 16-2247 (Jan. 25, 2017).

The Third Circuit Acknowledges that Some Methodological Infelicities, Flaws, and Fallacies Are Properly the Subject of Judicial Gatekeeping

Fortunately, Jewell’s methodological infidelities were easily recognized by the Circuit judges. Jewell treated multiple studies, which were nested within one another, and thus involved overlapping and subsumed populations, as though they were independent verifications of the same hypothesis. When the population at issue (from the Danish cohort) was included in a more inclusive pan-Scandinavian study, the relied-upon association dissipated, and Jewell utterly failed to explain or account for these data. Zoloft at 5-6.

Jewell relied upon a study by Anick Bérard, even though he later had to concede that the study had serious flaws that invalidated its conclusions, flaws that caused him to lose confidence in the paper’s findings.2 In another instance, Jewell innocently relied upon a study that purported to report a statistically significant association, but the authors of this paper were later required by the journal, The New England Journal of Medicine, to correct the very confidence interval upon which Jewell had relied. Despite his substantial mathematical prowess, Jewell missed the miscalculation and relied (uncritically) upon a finding as statistically significant when in fact it was not.

Jewell rejected a meta-analysis of Zoloft studies for questionable methodological quibbles, even though he had relied upon the very same meta-analysis, with the same methodology, in his litigation efforts involving Prozac and birth defects. Not to be corralled by methodological punctilio, Jewell conducted his own meta-analysis with two studies, Huybrechts (2014) and Jimenez-Solem (2012), but failed to explain why he excluded other studies, the inclusion of which would have undone his claimed result. Zoloft at 9. Jewell also purported to reanalyze and recalculate point estimates in two studies, Jimenez-Solem (2012) and Huybrechts (2014), without any clear protocol or consistency in his approach to other studies. Zoloft at 9. The list goes on, but in sum, Jewell’s handling of these technical issues did not inspire confidence, either in the district court or in the appellate court.
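The courts do not reprint Jewell’s calculations, but the kind of meta-analysis at issue, an inverse-variance pooling of a handful of study results, is easy to sketch. The numbers below are hypothetical, not taken from the Zoloft record:

```python
import math

def fixed_effect_meta(studies):
    """Pool odds ratios by inverse-variance weighting on the log scale.
    studies: list of (odds_ratio, ci_lower, ci_upper) tuples with 95% CIs."""
    weights, weighted_logs = [], []
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE backed out of the CI
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_logs.append(w * math.log(or_))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Two hypothetical studies: one elevated, one essentially null.
pooled_or, ci_low, ci_high = fixed_effect_meta([(1.6, 1.0, 2.56),
                                                (0.9, 0.6, 1.35)])
```

Because each included study’s weight moves the pooled estimate, which studies go into the basket largely determines what comes out; that is why an unexplained protocol for inclusion and exclusion invites the charge of conclusion-driven selection.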

WOE to the Third Circuit

The Circuit gave the PSC every conceivable break. Because Pfizer had not engaged specifically on whether WOE was a proper, or any kind of, scientific method, the Circuit treated the issue as virtually conceded:

Pfizer does not seem to contest the reliability of the Bradford Hill criteria or weight of the evidence analysis generally; the dispute centers on whether the specific methodology implemented by Dr. Jewell is reliable. Flexible methodologies, such as the “weight of the evidence,” can be implemented in multiple ways; despite the fact that the methodology is generally reliable, each application is distinct and should be analyzed for reliability.”

Zoloft at 18. The Court acknowledged that WOE arose only in the PSC’s appellate brief, which would have made the entire dubious argument waived under general appellate jurisdictional principles, but the Court, in a footnote, indulged the assumption, “for the sake of argument,” that WOE was Jewell’s purported method from the inception. Zoloft at 18 n. 39. Without any real evidentiary support or analysis or concession from Pfizer, the Circuit accepted that WOE analyses were “generally reliable.” Zoloft at 21.

The Circuit accepted, rather uncritically, that Jewell used a combination of WOE analysis and Bradford Hill considerations. Zoloft at 17. Although Jewell had never described WOE in his litigation report, and WOE was not a feature of his hearing testimony, the Circuit impermissibly engrafted Carl Cranor’s description of WOE as involving inference to the best explanation. Zoloft at 17 & n.37, citing Milward v. Acuity Specialty Prods. Grp., Inc., 639 F.3d 11, 17 (1st Cir. 2011) (internal quotation marks and citation omitted).

There was, however, a limit to the Circuit’s credulousness and empathy. As the Court noted, there must be some assurance that the purported Bradford Hill/WOE method is something more than a “mere conclusion-oriented selection process.” Zoloft at 20. Ultimately, the Court put its markers down for Jewell’s putative WOE methodology:

there must be a scientific method of weighting that is used and explained.

Zoloft at 20. Calling the method WOE did not, in the final analysis, insulate Jewell from Rule 702 gatekeeping. Try as the PSC might, there was just no mistaking Jewell’s approach for anything other than a crazy patchwork quilt of numerical wizardry in aid of subjective, result-oriented conclusion mongering.

In the Court’s words:

we find that Dr. Jewell did not 1) reliably apply the ‘techniques’ to the body of evidence or 2) adequately explain how this analysis supports specified Bradford Hill criteria. Because ‘any step that renders the analysis unreliable under the Daubert factors renders the expert’s testimony inadmissible’, this is sufficient to show that the District Court did not abuse its discretion in excluding Dr. Jewell’s testimony.

Zoloft at 28. As heartening as the Circuit’s conclusion is, the Court’s couching its observation as a finding (“we find”) is disheartening, reflecting the Third Circuit’s apparent inability to distinguish abuse-of-discretion review from de novo appellate fact-finding. Equally distressing is the Court’s invocation of Daubert factors, which were dicta in a Supreme Court case superseded, over 17 years ago, by an amended statute, Federal Rule of Evidence 702.

On the crucial question whether Jewell had engaged in an unreliable application of methods or techniques that superficially, at a very high level of generality, could claim to be generally accepted, the Court stayed on course. The Court “found” that Jewell had applied techniques, analyses, and critiques so obviously inconsistently that no amount of judicial indulgence, assumptions arguendo, or careless glosses could save Jewell and his fatuous opinions from judicial banishment. Zoloft at 28-29. Returning to the correct standard of review (abuse of discretion), but the wrong governing law (Daubert instead of Rule 702), the Court announced that:

[b]ecause ‘any step that renders the analysis unreliable under the Daubert factors renders the expert’s testimony inadmissible’, this is sufficient to show that the District Court did not abuse its discretion in excluding Dr. Jewell’s testimony.

Zoloft at 21 n.50 (citation omitted). The Court found itself unable to say simply and directly that “the MDL trial court decided the case well within its discretion.”

The Zoloft case was not the Third Circuit’s first WOE rodeo. WOE had raised its unruly head in Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 602 (D.N.J. 2002), aff’d, 68 F. App’x 356 (3d Cir. 2003), where an expert witness, David Ozonoff, offered what purported to be a WOE opinion. The Magistrini trial court did not fuss with the assertion that WOE was generally reliable, but took issue with how Ozonoff tried to pass off his analysis as a comprehensive treatment of the totality of the evidence. In Magistrini, Judge Hochberg noted that regardless of the rubric of the methodology, the witness must show that in conducting a WOE analysis:

all of the relevant evidence must be gathered, and the assessment or weighing of that evidence must not be arbitrary, but must itself be based on methods of science.

Magistrini, 180 F. Supp. 2d at 602. The witness must show that the methodology is more than a “mere conclusion-oriented selection process,” and that it involves “a scientific method of weighting that is used and explained.” Id. at 607. Asserting the use of WOE was not an excuse or escape from judicial gatekeeping as specified by Rule 702.

Although the Third Circuit gave the Zoloft MDL trial court’s findings a searching review (certainly much tougher than the prescribed abuse-of-discretion review), the MDL court’s finding that Jewell “failed to consistently apply the scientific methods he articulates, has deviated from or downplayed certain well-established principles of his field, and has inconsistently applied methods and standards to the data so as to support his a priori opinion” was ultimately vindicated by the Court of Appeals. Zoloft at 10.

All’s well that ends well. Perhaps. It remains unfortunate, however, that a hypothetical method, WOE — which was never actually advocated by the challenged expert witnesses, which lacks serious support in the scientific community, and which was merely assumed arguendo to be valid — will be taken by careless readers to have been endorsed by the Third Circuit.


1 Among the cases cited in support of the PSC’s dubious contention, none of which actually provided that support, were Gannon v. United States, 292 F. App’x 170, 173 n.1 (3d Cir. 2008); Bitler v. A.O. Smith Corp., 391 F.3d 1114, 1124-25 (10th Cir. 2004); In re Joint E. & S. Dist. Asbestos Litig., 52 F.3d 1124, 1128 (2d Cir. 1995); In re Avandia Mktg., Sales Practices & Prods. Liab. Litig., No. 2007-MD-1871, 2011 WL 13576, at *3 (E.D. Pa. Jan. 4, 2011) (“Bradford-Hill criteria are used to assess whether an established association between two variables actually reflects a causal relationship.”).

2 Anick Bérard, Sertraline Use During Pregnancy and the Risk of Major Malformations, 212 Am. J. Obstet. Gynecol. 795 (2015).

Every Time a Bell Rings

July 1st, 2017

“Every time a bell rings, an angel gets his wings.”
Zuzu Bailey

And every time a court issues a non-citable opinion, a judge breaks fundamental law. Whether it wants to or not, a common law court, in deciding a case, creates precedent, and with it an expectation, and a right, that other, similarly situated litigants will be treated similarly. Deciding a case while prohibiting its citation deprives future litigants of due process and equal protection of the law. If that makes for more citable opinions, and more work for judges and litigants, so be it; that is what our constitution requires.

Back in 2015, Judge Bernstein issued a ruling in a birth defects case in which the mother claimed to have taken sertraline during pregnancy, and that this medication use caused her child to be born with congenital malformations. Applying what Pennsylvania courts insist is a Frye standard, Judge Bernstein excluded the proffered expert witness testimony that attempted to draw a causal connection between the plaintiff’s birth defect and the mother’s medication use. Porter v. SmithKline Beecham Corp., No. 03275, 2015 WL 5970639 (Phila. Cty. Pennsylvania, Ct. C.P. October 5, 2015) (Mark I. Bernstein, J.). Judge Bernstein has since left the bench, but he was and is a respected commentator on Pennsylvania evidence1, even if he was generally known for his pro-plaintiff views on many legal issues. Bernstein’s opinion in Porter was a capable demonstration of how Pennsylvania’s Frye rule can be interpreted to reach essentially the same outcome that is required by Federal Rule of Evidence 702. See “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015); In re Zoloft Prod. Liab. Litig., No. 16-2247, __ F.3d __, 2017 WL 2385279, 2017 U.S. App. LEXIS 9832 (3d Cir. June 2, 2017) (affirming exclusion of dodgy statistical analyses and opinions, and the trial court’s entry of summary judgment on claims that sertraline causes birth defects).

In May of this year, the Pennsylvania Superior Court affirmed Judge Bernstein’s judgment, and essentially approved and adopted his reasoning. Porter v. SmithKline Beecham Corp., No. 3516 EDA 2015, 2017 WL 1902905 (Pa. Super. May 8, 2017). What the Superior Court giveth, the Superior Court taketh away. The Porter decision is franked as a “Non-Precedential Decision – See Superior Court I.O.P. 65.37.”

What is this Internal Operating Procedure that makes the Superior Court think that it can act and decide cases without creating precedent? Here is the relevant text from the Pennsylvania Code:

  An unpublished memorandum decision shall not be relied upon or cited by a Court or a party in any other action or proceeding, except that such a memorandum decision may be relied upon or cited
    1. when it is relevant under the doctrine of law of the case, res judicata, or collateral estoppel, and
    2. when the memorandum is relevant to a criminal action or proceeding because it recites issues raised and reasons for a decision affecting the same defendant in a prior action or proceeding.

210 Pa. Code § 65.37. Unpublished Memoranda Decisions. So, in other words, it is secret law.

No-citation and no-precedent rules are deeply problematic, and have attracted a great deal of scholarly attention2. And still, courts engage in this problematic practice. Prohibiting citation of Superior Court decisions is especially problematic in a state in which the highest court hears relatively few cases, and in which the Justices involve themselves in internecine disputes. As other commentators have noted, prohibiting citation to prior decisions admitting or excluding expert witness testimony stunts the development of an area of evidence law in which judges and litigants are often confused and in need of guidance. William E. Padgett, “‘Non-Precedential’ Unpublished Decisions in Daubert and Frye Cases, Often Silenced,” Nat’l L. Rev. (2017). The abuse of judge-made secret law from uncitable decisions has been abolished in the federal appeals courts for over a decade3. It is time for the state courts to follow suit.


1 See, e.g., Mark I. Bernstein, Pennsylvania Rules of Evidence (2017).

2 See Erica Weisgerber, “Unpublished Opinions: A Convenient Means to an Unconstitutional End,” 97 Georgetown L.J. 621 (2009); Rafi Moghadam, “Judge Nullification: A Perception of Unpublished Opinions,” 62 Hastings L.J. 1397 (2011); Norman R. Williams, “The Failings of Originalism: The Federal Courts and the Power of Precedent,” 37 U.C. Davis L. Rev. 761 (2004); Dione C. Greene, “The Federal Courts of Appeals, Unpublished Decisions, and the ‘No-Citation Rule’,” 81 Indiana L.J. 1503 (2006); Vincent M. Cox, “Freeing Unpublished Opinions from Exile: Going Beyond the Citation Permitted by Proposed Federal Rule of Appellate Procedure 32.1,” 44 Washburn L.J. 105 (2004); Sarah E. Ricks, “The Perils of Unpublished Non-Precedential Federal Appellate Opinions: A Case Study of the Substantive Due Process State-Created Danger Doctrine in One Circuit,” 81 Wash. L. Rev. 217 (2006); Michael J. Woodruff, “State Supreme Court Opinion Publication in the Context of Ideology and Electoral Incentives,” New York University Department of Politics (March 2011); Michael B. W. Sinclair, “Anastasoff versus Hart: The Constitutionality and Wisdom of Denying Precedential Authority to Circuit Court Decisions”; Thomas Healy, “Stare Decisis as a Constitutional Requirement,” 104 W. Va. L. Rev. 43 (2001); David R. Cleveland & William D. Bader, “Precedent and Justice,” 49 Duq. L. Rev. 35 (2011); Johanna S. Schiavoni, “Who’s Afraid of Precedent,” 49 UCLA L. Rev. 1859 (2002); Salem M. Katsh and Alex V. Chachkes, “Constitutionality of ‘No-Citation’ Rules,” 3 J. App. Prac. & Process 287 (2001); David R. Cleveland, “Appellate Court Rules Governing Publication, Citation, and Precedent of Opinions: An Update,” 16 J. App. Prac. & Process 257 (2015). See generally The Committee for the Rule of Law (website) (collecting scholarship and news on the issue of unpublished and supposedly non-precedential opinions). The problem even has its own Wikipedia page. See “Non-publication of legal opinions in the United States.”

3 See Fed. R. App. Proc. 32.1 (prohibiting federal courts from barring or limiting citation to unpublished federal court opinions, effective after Jan. 1, 2007).