TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Some High-Value Targets for Sander Greenland in 2018

December 27th, 2017

A couple of years ago, Sander Greenland and I had an interesting exchange on Deborah Mayo's website. I tweaked Sander for his practice of calling out defense expert witnesses for statistical errors, while ignoring whoppers made by plaintiffs' expert witnesses. See "Significance Levels Made a Whipping Boy on Climate-Change Evidence: Is p < 0.05 Too Strict?" Error Statistics (Jan. 6, 2015).1 Sander acknowledged that he received a biased sample of expert reports through his service as a plaintiffs' expert witness, but protested that defense counsel avoided him like the plague. In an effort to be helpful, I directed Sander to an example of bad statistical analysis that had been proffered by Dr. Bennett Omalu, in a Dursban case, Pritchard v. Dow Agro Sciences, 705 F. Supp. 2d 471 (W.D. Pa. 2010), aff'd, 430 F. App'x 102, 104 (3d Cir. 2011).2

Sander was unimpressed with my example of Dr. Omalu; he found the example “a bit disappointing though because [Omalu] was merely a county medical examiner, and his junk analysis was duly struck. The expert I quoted in my citations was a full professor of biostatistics at a major public university, a Fellow of the American Statistical Association, a holder of large NIH grants, and his analysis (more subtle in its transgressions) was admitted” (emphasis added). Sander expressed an interest in finding “examples involving similarly well-credentialed, professionally accomplished plaintiff experts whose testimony was likewise admitted… .”

Although it was heartening to read Sander’s concurrence in the assessment of Omalu’s analysis as “junk,” Sander’s rejection of Dr. Omalu as merely a low-value target was disappointing, given that Omalu also has a master’s degree in public health, from the University of Pittsburgh, where he claims he studied with Professor Lew Kuller. Omalu has also gained some fame and notoriety for his claim to have identified the problem of chronic traumatic encephalopathy (CTE) among professional football players. After all, even Sander Greenland has not been the subject of a feature-length movie (Concussion), as has Omalu.

I lost track of our exchange after 2015, until I was recently reminded of it when reading an expert report by Professor Martin Wells. Unlike Omalu, Wells meets all the Greenland criteria for high-value targets. He is not only a full, chaired professor but also the statistics department chairman at an Ivy League school, Cornell University. Wells is a fellow of both the American Statistical Association and the Royal Statistical Society, but most important, Wells is a frequent plaintiffs' expert witness, who is well known to Sander Greenland. Both Wells and Greenland served, side by side, as plaintiffs' expert witnesses in the pain pump litigation.

So here is the passage in the Wells report that is worthy of Greenland's attention:

"If a 95% confidence interval is specified, the range encompasses the results we would expect 95% of the time if samples for new studies were repeatedly drawn from the same population."

In re Testosterone Replacement Therapy Prods. Liab. Litig., Declaration of Martin T. Wells, Ph.D., at 2-3 (N.D. Ill., Oct. 30, 2016). Unlike the Dursban litigation involving Bennett Omalu, where the "junk analysis" was excluded, in the litigation against AbbVie for its manufacture and sale of prescription testosterone supplementation, Wells' opinions were not excluded or limited. In re Testosterone Replacement Therapy Prods. Liab. Litig., No. 14 C 1748, MDL No. 2545, 2017 WL 1833173 (N.D. Ill. May 8, 2017) (denying Rule 702 motions).

Now this statement by Wells surely offends the guidance provided by Greenland and colleagues.3 And it was exactly the sort of misrepresentation that led to a confabulation of the American Statistical Association, and that Association’s consensus statement on statistical significance.4
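The error is easy to exhibit by simulation. Here is a minimal sketch in Python (the population parameters, study size, and seed are my own illustrative assumptions, not anything drawn from the Wells report), contrasting the correct coverage interpretation of a 95% confidence interval with the "results of new studies" reading in the quoted passage:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, sims = 100.0, 15.0, 30, 20_000
se = sigma / np.sqrt(n)
half = 1.96 * se   # half-width of a 95% CI, known-sigma case

# Correct reading: over repeated sampling, ~95% of intervals cover mu.
means = rng.normal(mu, se, sims)
print("CIs covering mu:", np.mean(np.abs(means - mu) <= half))

# The quoted reading: how often does a NEW study's estimate land inside
# one already-observed 95% CI? On average, noticeably less than 95%.
first = rng.normal(mu, se, sims)
new = rng.normal(mu, se, sims)
print("new estimates inside old CI:", np.mean(np.abs(new - first) <= half))

The first frequency comes out near 95%; the second comes out closer to 83%, because the sampling error of two independent estimates must be reckoned with. A confidence interval is a statement about the procedure's coverage of the parameter, not a prediction interval for the results of future studies.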

And here is another example, which occurs not in a distorting litigation forum, but on the pages of an occupational health journal, where the editor in chief, Anthony L. Kiorpes, ranted about the need for better statistical editing and writing in his own journal. See Anthony L. Kiorpes, "Lies, damned lies, and statistics," 33 Toxicol. & Indus. Health 885 (2017). Kiorpes decried the misuse of statistics:

"I am not implying that it is the intent of the scientists who publish in these pages to mislead readers by their use of statistics, but I submit that the misuse of statistics, whether intentional or otherwise, creates confusion and error."

Id. at 885. Kiorpes then proceeded to hold himself up as Exhibit A to his screed:

"Remember that p values are estimates of the probability that the null hypothesis (no difference) is true."

Id. Uggh; we seem to be backsliding after the American Statistical Association's consensus statement. Kiorpes continued:

"Almost all scientists have stated (or have been tempted to state) something like 'the mean of Group A was greater than that of Group B, but the difference was not statistically significant'. With very few exceptions (which I will mention below), this statement is nonsense."

* * * * *

"What the statistics are indicating when the p-value is greater than 0.05 is that there is 'no difference' between group A and group B."

Id. at 886.
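The "no difference" reading can be refuted with a few lines of simulation. In this sketch (Python; the 0.4 standard-deviation effect and the sample size are illustrative assumptions of mine), every simulated pair of groups differs by a real amount, yet the underpowered comparison returns p > 0.05 most of the time:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sims, n, delta = 5_000, 20, 0.4   # a real difference of 0.4 SD
over_05 = 0
for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(delta, 1.0, n)
    if stats.ttest_ind(a, b).pvalue > 0.05:
        over_05 += 1
print(f"real difference, yet p > 0.05 in {over_05 / sims:.0%} of studies")

With these numbers, roughly three quarters of the simulated studies are "not statistically significant," even though the null hypothesis is false in every one of them. A large p-value reflects low power at least as often as it reflects "no difference."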

Let’s hope that this gets Sander Greenland away from his biased sampling of expert witnesses, off the backs of defense expert witnesses, and on to some of the real culprits out there, in the new year.


1 See also "Sander Greenland on 'The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics'" (Feb. 8, 2015).

2 See also "Pritchard v. Dow Agro – Gatekeeping Exemplified" (Aug. 25, 2014); "Omalu and Science — A Bad Weld" (Oct. 22, 2016); Brian v. Association of Independent Oil Distributors, No. 2011-3413, Westmoreland Cty. Ct. Common Pleas, Order of July 18, 2016 (excluding Dr. Omalu's testimony on welding and solvents and Parkinson's disease).

3 See, e.g., Sander Greenland, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman, “Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations,” 31 Eur. J. Epidem. 337 (2016).

4 Ronald L. Wasserstein & Nicole A. Lazar, "American Statistical Association Statement on statistical significance and p values," 70 Am. Statistician 129 (2016).

Statistical Gobbledygook Goes to the Supreme Court

October 20th, 2017

Back in July, my summer slumber was rudely interrupted by an intemperate, ad hominem rant from statistician Sander Greenland. Greenland's rant concerned my views of the Supreme Court's decision in Matrixx Initiatives v. Siracusano, 563 U.S. 27 (2011).

Greenland held forth, unfiltered, on Deborah Mayo’s web blog, where he wrote:

"Glad to have finally flushed out Schachtman, whose blog did not allow my critical dissenting comments back when this case first hit. Nice to see him insult the intellect of the Court too, using standard legal obfuscation of the fact that the Court is entitled to consider science, ordinary logic, and common sense outside of that legal framework to form and justify its ruling – that reasoning is what composes the bulk of the opinion I linked. Go read it and see what you think without the smokescreen offered by Schachtman."

"A megateam of reproducibility-minded scientists look to lowering the p-value," Error Statistics (July 25, 2017).

Oh my! It is true that my blog does not have comments enabled, but as I have written on several occasions, I would gladly welcome requests to post opposing views, even those of Sander Greenland. On Deborah Mayo’s blog, I had the opportunity to explain carefully why Greenland has been giving a naïve, mistaken characterization of the holding of Matrixx Initiatives, in his expert witness reports for plaintiffs’ counsel, as well as in his professional publications. Ultimately, Greenland ran out of epithets, lost his enthusiasm for the discussion, and slunk away into cyber-silence.

I was a bit jarred, however, by Greenland's accusation that I had insulted the Court. Certainly, I did not use any of the pejorative adjectives that Greenland had hurled at me; rather, I simply have given legal analysis of the Court's opinions and a description of the legal, scientific, and statistical errors therein.1 And, to be sure, other knowledgeable writers and evidence scholars have critiqued the Court's decision and some of the pronouncements of the parties and the amici in Matrixx Initiatives.2

This week, John Pfaff, a professor at Fordham Law School, published an editorial in the New York Times, to argue that “The Supreme Court Justices Need Fact-Checkers,” N.Y. Times (Oct. 18, 2017). No doubt, Greenland would consider Pfaff’s editorial to be “insulting” to the Court, unless of course, Greenland thinks criticism can be insulting only if it challenges views he wants to see articulated by the Court.

In support of his criticism of the Court, Pfaff adverted to the Chief Justice’s recent comments in the oral argument of a gerrymandering case, Gill v. Whitford. In a question critical of the gerrymander challenge, Chief Justice Roberts described the supporting evidence:

"it may be simply my educational background, but I can only describe as sociological gobbledygook."

Oral Argument before the U.S. Supreme Court at p.40, in Gill v. Whitford, No. 16-1161 (Oct. 3, 2017). The Chief Justice's dismissive comments about gobbledygook may well have been provoked by an amicus brief filed on behalf of 44 election law, scientific evidence, and empirical legal scholars, who explored the legal and statistical basis for striking down the Wisconsin gerrymander. See Brief of Amici Curiae of 44 Election Law, Scientific Evidence, and Empirical Legal Scholars, filed in Gill v. Whitford, No. 16-1161 (Sept. 1, 2017).

As with Greenland’s obsequious respect for the Matrixx Initiatives opinion, no one is likely to have been misled by Chief Justice Roberts’ false modesty. John Roberts was graduated summa cum laude from Harvard College in three years, although with a major in a “soft” discipline, history. He went on to Harvard Law School, where he was the managing editor of the Harvard Law Review, and was graduated magna cum laude. As a lawyer, Roberts has had an extraordinarily successful career. And yet, the Chief Justice went out of his way to disparage the mathematical and statistical models used to show gerrymandering in the Gill case, as “gobbledygook.” Odds are that the Chief Justice was thus not deprecating his own education; yet, inquiring minds might wonder whether that education was deficient in mathematics, statistics, and science.

Policy is a major part of the Court's docket now, whether the Justices like it or not. The Justices cannot avoid adapting to the technical requirements of scientific and statistical issues, and they cannot simply dismiss evidence they do not understand as "gobbledygook." Referencing a recent ProPublica report, Professor Pfaff suggests that the Supreme Court might well employ independent advisors to fact-check its use of descriptive statistics.3

The problem identified by Pfaff, however, seems to implicate a fundamental divide between the "two cultures" of science and the humanities. See C.P. Snow, The Two Cultures and the Scientific Revolution (Rede Lecture, 1959). Perhaps Professor Pfaff might start with his own educational institution. The Fordham University School of Law does not offer a course in statistics and probability; nor does it require entering students to have satisfied a requirement of course work in mathematics, science, or statistics. The closest offerings at Fordham are a course on accounting for lawyers and the opportunity to take a one-credit course in "quantitative methods" at the graduate school.

Fordham School of Law, of course, is hardly alone. Despite cries for “relevancy” and experiential learning in legal education, some law schools eschew courses in statistics and probability for legal applications, sometimes on the explicit acknowledgement that such courses are too “hard,” or provoke too much student anxiety. The result, as C.P. Snow saw over a half century ago, is that lawyers and judges cannot tell gobbledygook from important data analysis, even when it smacks them in the face.


1 With David Venderbush of Alston & Bird LLP, I published my initial views of the Matrixx case, in the form of a Washington Legal Foundation Legal Backgrounder, available at the Foundation's website. See Schachtman & Venderbush, "Matrixx Unbounded: High Court's Ruling Needlessly Complicates Scientific Evidence Principles," 26 (14) Legal Backgrounder (June 17, 2011). I expanded on my critique in several blog posts. See, e.g., "Matrixx Unloaded" (Mar. 29, 2011); "The Matrixx Oversold" (Apr. 4, 2011); "The Matrixx – A Comedy of Errors" (Apr. 6, 2011); "De-Zincing the Matrixx" (Apr. 12, 2011); "Siracusano Dicta Infects Daubert Decisions" (Sept. 22, 2012).

2 See David Kaye, “The Transposition Fallacy in Matrixx Initiatives, Inc. v. Siracusano: Part I” (Aug. 19, 2011), and “The Transposition Fallacy in Matrixx Initiatives, Inc. v. Siracusano: Part II” (Aug. 26, 2011); David Kaye, “Trapped in the Matrixx: The U.S. Supreme Court and the Need for Statistical Significance,” BNA Product Safety & Liability Reporter 1007 (Sept. 12, 2011).

Multiplicity in the Third Circuit

September 21st, 2017

In Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283 (W. D. Pa.), plaintiffs claimed that their employer's reduction in force unlawfully targeted workers over 50 years of age. Plaintiffs lacked any evidence of employer animus against old folks, and thus attempted to make out a statistical disparate impact claim. The plaintiffs placed their chief reliance upon an expert witness, Michael A. Campion, to analyze a dataset of workers agreed to have been the subject of the R.I.F. For the last 30 years, Campion has been on the faculty at Purdue University. His academic training and graduate degrees are in industrial and organizational psychology. Campion has served as an editor of Personnel Psychology, and as a past president of the Society for Industrial and Organizational Psychology. Campion's academic website page notes that he manages a small consulting firm, Campion Consulting Services.1

The defense sought to characterize Campion as not qualified to offer his statistical analysis.2 Campion did, however, have some statistical training as part of his master's level training in psychology, and his professional publications did occasionally involve statistical analyses. To be sure, Campion's statistical acumen paled in comparison with that of the defense expert witness, James Rosenberger, a fellow and a former vice president of the American Statistical Association, as well as a full professor of statistics at Pennsylvania State University. The threshold for qualification, however, is low, and the defense's attack on Campion's qualifications failed to attract the court's serious attention.

On the merits, the defense subjected Campion to a strong challenge on whether he had misused data. The defense’s expert witness, Prof. Rosenberger, filed a report that questioned Campion’s data handling and statistical analyses. The defense claimed that Campion had engaged in questionable data manipulation by including, in his RIF analysis, workers who had been terminated when their plant was transferred to another company, as well as workers who retired voluntarily.

Using simple z-score tests, Campion compared the ages of terminated and non-terminated employees in four subgroups, ages 40+, 45+, 50+, and 55+. He did not conduct an analysis of the 60+ subgroup, on the claim that this group had too few members for the test to have sufficient power.3 Campion found a small z-score for the 40+ versus <40 age-group comparison (z = 1.51), which is not close to statistical significance at the 5% level. On the defense's legal theory, this was the crucial comparison to be made under the Age Discrimination in Employment Act (ADEA). The plaintiffs, however, maintained that they could make out a case of disparate impact by showing age discrimination at age subgroups that started above the minimum specified by the ADEA. Although age is a continuous variable, Campion decided to conduct z-score tests on subgroups that were based upon five-year increments. For the 45+, 50+, and 55+ age subgroups, he found z-scores that ranged from 2.15 to 2.46, and he concluded that there was evidence of disparate impact in the higher age subgroups.4 Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283, 2015 WL 4232600, at *11 (W.D. Pa. July 13, 2015) (McVerry, S.J.).
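The opinions do not reproduce Campion's computations, but the test described is the standard two-proportion z-test. A minimal sketch (Python), with hypothetical counts chosen only for illustration:

import math

def z_score(term_a, total_a, term_b, total_b):
    # Two-proportion z-test: difference in termination rates over its
    # standard error, pooled under the null hypothesis of equal rates.
    p1, p2 = term_a / total_a, term_b / total_b
    p = (term_a + term_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p1 - p2) / se

# Hypothetical counts for a 50+ vs. under-50 comparison (illustration only):
print(z_score(25, 120, 45, 380))   # ~2.47, nominally significant at the 5% level

A z-score beyond ±1.96 corresponds to a two-sided p-value below 0.05, which is the sense in which Campion's subgroup results were nominally "significant."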

The defense, and apparently the defense expert witnesses, branded Campion’s analysis as “data snooping,” which required correction for multiple comparisons. In the defense’s view, the multiple age subgroups required a Bonferroni correction that would have diminished the critical p-value for “significance” by a factor of four. The trial court agreed with the defense contention about data snooping and multiple comparisons, and excluded Campion’s opinion of disparate impact, which had been based upon finding statistically significant disparities in the 45+, 50+, and 55+ age subgroups. 2015 WL 4232600, at *13. The trial court noted that Campion, in finding significant disparities in terminations in the subgroups, but not in the 40+ versus <40 analysis:

"[did] not apply any of the generally accepted statistical procedures (i.e., the Bonferroni procedure) to correct his results for the likelihood of a false indication of significance. This sort of subgrouping 'analysis' is data-snooping, plain and simple."

Id. After excluding Campion’s opinions under Rule 702, as well as other evidence in support of plaintiffs’ disparate impact claim, the trial court granted summary judgment on the discrimination claims. Karlo v. Pittsburgh Glass Works, LLC, No. 2:10–cv–1283, 2015 WL 5156913 (W. D. Pa. Sept. 2, 2015).

On plaintiffs' appeal, the Third Circuit took the wind out of the attack on Campion by holding that the ADEA prohibits disparate impacts based upon age, which need not necessarily be on workers' being over 40 years old, as opposed to being at least 40 years old. Karlo v. Pittsburgh Glass Works, LLC, 849 F.3d 61, 66-68 (3d Cir. 2017). This holding took the legal significance out of the statistical insignificance of Campion's comparison of 40+ versus <40 age-group termination rates. Campion's subgroup analyses were back in play, but the Third Circuit still faced the question whether Campion's conclusions, based upon unadjusted z-scores and p-values, offended Rule 702.

The Third Circuit noted that the district court had identified three grounds for excluding Campion’s statistical analyses:

(1) Dr. Campion used facts or data that were not reliable;

(2) he failed to use a statistical adjustment called the Bonferroni procedure; and

(3) his testimony lacks "fit" to the case because subgroup claims are not cognizable.

849 F.3d at 81. The first issue was raised by the defense’s claims of Campion’s sloppy data handling, and inclusion of voluntarily retired workers and workers who were terminated when their plant was turned over to another company. The Circuit did not address these data handling issues, which it left for the trial court on remand. Id. at 82. The third ground went out of the case with the appellate court’s resolution of the scope of the ADEA. The Circuit did, however, engage on the issue whether adjustment for multiple comparisons was required by Rule 702.

On the “data-snooping” issue, the Circuit concluded that the trial court had applied “an incorrectly rigorous standard for reliability.” Id. The Circuit acknowledged that

"[i]n theory, a researcher who searches for statistical significance in multiple attempts raises the probability of discovering it purely by chance, committing Type I error (i.e., finding a false positive)."

849 F.3d at 82. The defense expert witness contended that applying the Bonferroni adjustment, which would have reduced the critical significance probability level from 5% to 1%, would have rendered Campion’s analyses not statistically significant, and thus not probative of disparate impact. Given that plaintiffs’ cases were entirely statistical, the adjustment would have been fatal to their cases. Id. at 82.
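The arithmetic of the contention is simple. With four subgroup comparisons, the Bonferroni-adjusted per-comparison threshold is 0.05/4 = 0.0125 (the opinions describe the reduction, in round numbers, as 5% to 1%), and the corresponding z cutoff rises to roughly 2.50, above every z-score Campion reported for the older subgroups. A sketch of the calculation (Python):

from scipy.stats import norm

alpha, k = 0.05, 4                     # four subgroup comparisons
print(alpha / k)                       # 0.0125 per-comparison threshold
print(norm.ppf(1 - alpha / 2))         # ~1.96, unadjusted z cutoff
print(norm.ppf(1 - alpha / (2 * k)))   # ~2.50, Bonferroni-adjusted cutoff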

At the trial level and on appeal, plaintiffs and Campion had objected to the data-snooping charge on grounds that

(1) he had engaged in only four subgroup analyses;

(2) virtually all subgroups were statistically significant;

(3) his methodology was “hypothesis driven” and involved logical increments in age to explore whether the strength of the evidence of age disparity in terminations continued in each, increasingly older subgroup;

(4) his method was analogous to replications with different samples; and

(5) his result was confirmed by a single, supplemental analysis.

Id. at 83. According to the plaintiffs, Campion’s approach was based upon the reality that age is a continuous, not a dichotomous variable, and he was exploring a single hypothesis. A.240-241; Brief of Appellants at 26. Campion’s explanations do mitigate somewhat the charge of “data snooping,” but they do not explain why Campion did not use a statistical analysis that treated age as a continuous variable, at the outset of his analysis. The single, supplemental analysis was never described or reported by the trial or appellate courts.

The Third Circuit concluded that the district court had applied a "merits standard of correctness," which is higher than what Rule 702 requires. Specifically, the district court, having identified a potential methodological flaw, did not further evaluate whether Campion's opinion relied upon good grounds. 849 F.3d at 83. The Circuit vacated the judgment below, and remanded the case to the district court for the opportunity to apply the correct standard.

The trial court’s acceptance that an adjustment was appropriate or required hardly seems a “merits standard.” The use of a proper adjustment for multiple comparisons is very much a methodological concern. If Campion could reach his conclusion only by way of an inappropriate methodology, then his conclusion surely would fail the requirements of Rule 702. The trial court did, however, appear to accept, without explicit evidence, that the failure to apply the Bonferroni correction made it impossible for Campion to present sound scientific argument for his conclusion that there had been disparate impact. The trial court’s opinion also suggests that the Bonferroni correction itself, as opposed to some more appropriate correction, was required.

Unfortunately, the reported opinions do not provide the reader with a clear account of what the analyses would have shown on the correct data set, without improper inclusions and exclusions, and with appropriate statistical adjustments. Presumably, the parties are left to make their cases on remand.

Based upon citations to sources that described the Bonferroni adjustment as "good statistical practice," but one that is "not widely or consistently adopted" in the behavioral and social sciences, the Third Circuit observed that in some cases, failure to adjust for multiple comparisons may "simply diminish the weight of an expert's finding."5 The observation is problematic given that Kumho Tire suggests that an expert witness must use "in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field." Kumho Tire Co. v. Carmichael, 526 U.S. 137, 150 (1999). One implication is that courts are prisoners to prevalent scientific malpractice and abuse of statistical methodology. Another implication is that courts need to look more closely at the assumptions and predicates for various statistical tests and adjustments, such as the Bonferroni correction.

These worrisome implications are exacerbated by the appellate court's insistence that the question whether a study's result was properly calculated or interpreted "goes to the weight of the evidence, not to its admissibility."6 Combined with citations to pre-Daubert statistics cases,7 judicial comments such as these can appear to reflect a general disregard for the statutory requirements of Rules 702 and 703. Claims of statistical significance, in studies with multiple exposures and multiple outcomes, are frequently not adjusted for multiple comparisons, without notation, explanation, or justification. The consequence is that study results are often over-interpreted and over-sold. Methodological errors related to multiple testing or over-claiming statistical significance are commonplace in tort litigation over "health-effects" studies of birth defects, cancer, and other chronic diseases that require epidemiologic evidence.8

In Karlo, the claimed methodological error is beset by its own methodological problems. As the court noted, adjustments for multiple comparisons are not free from methodological controversy.9 One noteworthy textbook10 labels the Bonferroni correction as an "awful response" to the problem of multiple comparisons. Aside from this strident criticism, there are alternative approaches to statistical adjustment for multiple comparisons. In the context of the Karlo case, the Bonferroni might well be awful because Campion's four subgroups are hardly independent tests. Because each subgroup is nested within the next higher age subgroup, the subgroup test results will be strongly correlated in a way that defeats the mathematical assumptions of the Bonferroni correction. On remand, the trial court in Karlo must still make its Rule 702 gatekeeping decision on the methodological appropriateness of Campion's treatment of multiple subgroups, and of multiple analyses run on different models.
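The correlation point can be shown directly. In the simulation sketch below (Python; the workforce size, age distribution, and termination rate are all hypothetical assumptions), terminations are assigned independently of age, and yet the z-scores for the four nested cutoffs move together; the family-wise false-positive rate accordingly falls short of what four independent tests would produce, which is why dividing alpha by a full four over-corrects:

import numpy as np

rng = np.random.default_rng(2)
sims, n, cuts = 10_000, 400, (40, 45, 50, 55)

def z(term, age, cut):
    # Two-proportion z-score for termination rates, age >= cut vs. younger.
    a, b = term[age >= cut], term[age < cut]
    p = term.mean()
    se = np.sqrt(p * (1 - p) * (1 / a.size + 1 / b.size))
    return (a.mean() - b.mean()) / se

zs = np.empty((sims, len(cuts)))
for i in range(sims):
    age = rng.integers(25, 65, n)        # hypothetical workforce ages
    term = rng.random(n) < 0.20          # RIF independent of age (the null)
    zs[i] = [z(term, age, c) for c in cuts]

print(np.corrcoef(zs.T).round(2))        # nested cutoffs yield highly correlated z's
fwer = np.mean((np.abs(zs) > 1.96).any(axis=1))
print("family-wise error rate, unadjusted:", fwer)
# Four independent tests would give about 1 - 0.95**4 = 0.185; the strong
# correlation pulls the observed rate lower, so a full Bonferroni divisor
# of four is conservative here.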


1 Although Campion describes his consulting business as small, he seems to turn up in quite a few employment discrimination cases. See, e.g., Chen-Oster v. Goldman, Sachs & Co., 10 Civ. 6950 (AT) (JCF) (S.D.N.Y. 2015); Brand v. Comcast Corp., Case No. 11 C 8471 (N.D. Ill. July 5, 2014); Powell v. Dallas Morning News L.P., 776 F. Supp. 2d 240, 247 (N.D. Tex. 2011) (excluding Campion’s opinions), aff’d, 486 F. App’x 469 (5th Cir. 2012).

2 See Defendant’s Motion to Bar Dr. Michael Campion’s Statistical Analysis, 2013 WL 11260556.

3 There was no mention of an effect size for the lower-age subgroups, nor of a power calculation for the 60+ subgroup's probability of showing a z-score greater than two. Similarly, there was no discussion or argument about why this subgroup could not have been evaluated with Fisher's exact test. In deciding the appeal, the Third Circuit observed that "Dr. Rosenberger test[ed] a subgroup of sixty-and-older employees, which Dr. Campion did not include in his analysis because '[t]here are only 14 terminations, which means the statistical power to detect a significant effect is very low'. A.244–45." Karlo v. Pittsburgh Glass Works, LLC, 849 F.3d 61, 82 n.15 (3d Cir. 2017).

4 In the trial court's words, the z-score converts the difference in termination rates into standard deviations. Karlo v. Pittsburgh Glass Works, LLC, C.A. No. 2:10-cv-01283, 2015 WL 4232600, at *11 n.13 (W.D. Pa. July 13, 2015). According to the trial court, Campion gave a rather dubious explanation of the meaning of the z-score: "[w]hen the number of standard deviations is less than −2 (actually −1.96), there is a 95% probability that the difference in termination rates of the subgroups is not due to chance alone" Id. (internal citation omitted).

5 See 849 F.3d 61, 83 (3d Cir. 2017) (citing and quoting from Paetzold & Willborn § 6:7, at 308 n.2) (describing the Bonferroni adjustment as "good statistical practice," but "not widely or consistently adopted" in the behavioral and social sciences); see also E.E.O.C. v. Autozone, Inc., No. 00-2923, 2006 WL 2524093, at *4 (W.D. Tenn. Aug. 29, 2006) ("[T]he Court does not have a sufficient basis to find that … the non-utilization [of the Bonferroni adjustment] makes [the expert's] results unreliable."). And of course, the Third Circuit invoked the Daubert chestnut: "Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence." Daubert, 509 U.S. 579, 596 (1993).

6 See 849 F.3d at 83 (citing Leonard v. Stemtech Internat'l Inc., 834 F.3d 376, 391 (3d Cir. 2016)).

7 See 849 F.3d 61, 83 (3d Cir. 2017), citing Bazemore v. Friday, 478 U.S. 385, 400 (1986) ("Normally, failure to include variables will affect the analysis' probativeness, not its admissibility.").

8 See Hans Zeisel & David Kaye, Prove It with Figures: Empirical Methods in Law and Litigation 93 & n.3 (1997) (criticizing the "notorious" case of Wells v. Ortho Pharmaceutical Corp., 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986), for its erroneous endorsement of conclusions based upon "statistically significant" studies that explored dozens of congenital malformation outcomes, without statistical adjustment). The authors do, however, give an encouraging example of an English trial judge who took seriously the multiplicity of hypotheses tested in the study relied upon by plaintiffs. Reay v. British Nuclear Fuels (Q.B. Oct. 8, 1993) (published in The Independent, Nov. 22, 1993) ("the fact that a number of hypotheses were considered in the study requires an increase in the P-value of the findings with consequent reduction in the confidence that can be placed in the study result … ."), quoted in Zeisel & Kaye at 93. Zeisel and Kaye emphasize that courts should not be overly impressed with claims of statistically significant findings, and should pay close attention to how expert witnesses developed their statistical models. Id. at 94.

9 See David B. Cohen, Michael G. Aamodt, and Eric M. Dunleavy, Technical Advisory Committee Report on Best Practices in Adverse Impact Analyses (Center for Corporate Equality 2010).

10 Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, Modern Epidemiology 273 (3d ed. 2008); see also Kenneth J. Rothman, "No Adjustments Are Needed for Multiple Comparisons," 1 Epidemiology 43, 43 (1990).

WOE — Zoloft Escapes an MDL While Third Circuit Creates a Conceptual Muddle

July 31st, 2017

Multidistrict Litigations (MDLs) can be "muddles" that are easy to get into, but hard to get out of. Pfizer and subsidiary Greenstone fabulously escaped a muddle through persistent lawyering and the astute gatekeeping of a district judge in the Eastern District of Pennsylvania. That judge, the Hon. Cynthia Rufe, sustained objections to the admissibility of the opinions of plaintiffs' epidemiologic expert witness Anick Bérard. When the MDL's plaintiffs' steering committee (PSC) demanded, requested, and begged for a do-over, Judge Rufe granted them one more chance. The PSC put their litigation industry eggs in a single basket, carried by statistician Nicholas Jewell. Unfortunately for the PSC, Judge Rufe found Jewell's basket to be as methodologically defective as Bérard's, and Her Honor excluded Jewell's proffered testimony. Motions, paper, and appeals followed, but on June 2, 2017, the Third Circuit declared that the PSC and its clients had had enough opportunities to get through the gate. Their baskets of methodological deplorables were not up to snuff. In re Zoloft Prod. Liab. Litig., No. 16-2247, __ F.3d __, 2017 WL 2385279, 2017 U.S. App. LEXIS 9832 (3d Cir. June 2, 2017) (affirming exclusion of Jewell's dodgy opinions, which involved multiple methodological flaws and failures to follow any methodology faithfully) [Slip op. cited below as Zoloft].

Plaintiffs Attempt to Substitute WOE for Depressingly Bad Expert Witness Opinion

The ruse of conflating "weight of the evidence," as used to describe the appellate standard of review for sustaining or reversing a trial court's factual finding, with a purported scientific methodology for inferring causation was on full display by the PSC in their attack on Judge Rufe's gatekeeping. In their appellate brief in the Court of Appeals for the Third Circuit, the PSC asserted that Jewell had used a "weight of the evidence method," even though that phrase, "weight of the evidence" (WOE), was never used in Jewell's litigation reports. The full context of the PSC's argument and citations to Milward make clear a deliberate attempt to conflate WOE as an appellate judicial standard for reviewing jury fact finding and a purported scientific methodology. See Appellants' Opening Brief at 54 (Aug. 10, 2016) [cited as PSC] (asserting that "[a]t all times, the ultimate evaluation of the weight of the evidence is a jury question"; citing Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 20 (1st Cir. 2011), cert. denied, 133 S. Ct. 63 (2012)).

Having staked the ground that WOE is akin to a jury's factual finding, and thus immune to any but the most extraordinary trial court action or appellate intervention, the PSC then pivoted to claim that Jewell's WOE-ful method was nothing much more than an assessment of "the totality of the available scientific evidence, guided by the well-accepted Bradford-Hill criteria." PSC at 3, 4, 7. This maneuver allowed the PSC to argue, apparently with a straight face, that WOE methodology, as used by Jewell, had been generally accepted in the scientific community, as well as by the Third Circuit, in previous cases in which the court accepted the use of Bradford Hill's considerations as a reliable method for establishing general causation. See PSC at 4 (citing Gannon v. United States, 292 F. App'x 170, 173 n.1 (3d Cir. 2008)). Jewell then simply plugged in his expertise and "40 years of experience," and the desired conclusion of causation popped out. Id. Quod erat demonstrandum.

In pressing its point, the PSC took full advantage of loose, inaccurate language from the American Law Institute’s Restatement’s notorious comment C:

"No algorithm exists for applying the Hill guidelines to determine whether an association truly reflects a causal relationship or is spurious."

PSC at 33-34, citing Restatement (Third) of Torts: Physical and Emotional Harm § 28 cmt. c(3) (2010). Well true, but the absence of a mathematical algorithm hardly means that causal judgments are devoid of principles and standards. The PSC was undeterred, by text or by shame, from equating an unarticulated use of WOE methodology with some vague invocation of Bradford Hill's considerations for evaluating associations for causality. See PSC at 43 (citing cases that never mentioned WOE but only Bradford Hill's 50-plus-year-old heuristic as somehow supporting the claimed identity of the two approaches).1

Pfizer Rebuffs WOE

Pfizer filed a comprehensive brief that unraveled the PSC's duplicity. For unknown reasons, tactical or otherwise, however, Pfizer did not challenge the specifics of the PSC's equation of WOE with an abridged, distorted application of Bradford Hill's considerations. See generally Opposition Brief of Defendants-Appellees Pfizer Inc., Pfizer International LLC, and Greenstone LLC [cited as Pfizer]. Perhaps given page limits, limited judicial attention spans, and just how woefully bad Jewell's opinions were, Pfizer may well have decided that assuming arguendo WOE's methodological appropriateness was the more economical, pragmatic approach. A close reading of Pfizer's brief, however, makes clear that it never conceded the validity of WOE as a scientific methodology.

Pfizer did point to the recasting of Jewell's aborted attempt to apply Bradford Hill considerations as an employment of WOE methodology. Pfizer at 46-47. The argument reminded me of Abraham Lincoln's famous riddle:

"How many legs does a dog have if you call his tail a leg?

Four.

Saying that a tail is a leg doesn’t make it a leg.”

Allen Thorndike Rice, Reminiscences of Abraham Lincoln by Distinguished Men of His Time at 242 (1909). Calling Jewell’s supposed method WOE or Bradford Hill or WOE/Bradford Hill did not cure the “fatal methodological flaws in his opinions.” Pfizer at 47.

Pfizer understandably and properly objected to the PSC’s attempt to cast Jewell’s “methodology” at such a high level of generality that any consideration of the many instances of methodological infidelity would be relegated to mere jury questions. Acquiescence in the PSC’s rhetorical move would constitute a complete abandonment of the inquiry whether Jewell had used a proper method. Pfizer at 15-16.

Interestingly, none of the amici curiae addressed the slippery WOE arguments advanced by the PSC. See generally Brief of Amici Curiae American Tort Reform Ass’n & Pharmaceutical Research and Manufacturers of America (Oct. 18, 2016); Brief of Washington Legal Fdtn. as Amicus Curiae (Oct. 18, 2016). There was no meaningful discussion of WOE as a supposedly scientific methodology at oral argument. See Transcript of Oral Argument in In re Zoloft Prod. Liab. Litig., No. 16-2247 (Jan. 25, 2017).

The Third Circuit Acknowledges that Some Methodological Infelicities, Flaws, and Fallacies Are Properly the Subject of Judicial Gatekeeping

Fortunately, Jewell's methodological infidelities were easily recognized by the Circuit judges. Jewell treated multiple studies, which were nested within one another, and thus involved overlapping and included populations, as though they were independent verifications of the same hypothesis. When the population at issue (from the Danish cohort) was included in a more inclusive pan-Scandinavian study, the relied-upon association dissipated, and Jewell utterly failed to explain or account for these data. Zoloft at 5-6.

Jewell relied upon a study by Anick Bérard, even though he later had to concede that the study had serious flaws that invalidated its conclusions and caused him to lack confidence in the paper's findings.2 In another instance, Jewell relied innocently upon a study that purported to report a statistically significant association, but the authors of this paper were later required by the journal, The New England Journal of Medicine, to correct the very confidence interval calculation upon which Jewell had relied. Despite his substantial mathematical prowess, Jewell missed the miscalculation and relied (uncritically) upon a finding as statistically significant when in fact it was not.

Jewell rejected a meta-analysis of Zoloft studies for questionable methodological quibbles, even though he had relied upon the very same meta-analysis, with the same methodology, in his litigation efforts involving Prozac and birth defects. Not to be corralled by methodological punctilio, Jewell conducted his own meta-analysis of two studies, Huybrechts (2014) and Jimenez-Solem (2012), but failed to explain why he excluded other studies, the inclusion of which would have undone his claimed result. Zoloft at 9. Jewell purported to reanalyze and recalculate point estimates in two studies, Jimenez-Solem (2012) and Huybrechts (2014), without any clear protocol or consistency in his approach to other studies. Zoloft at 9. The list goes on, but in sum, Jewell's handling of these technical issues did not inspire confidence, either in the district court or in the appellate court.
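For readers unfamiliar with the technique, a fixed-effect, inverse-variance meta-analysis is simple enough to sketch, and the sketch shows why the choice of which studies to include matters so much. The odds ratios and confidence intervals below are illustrative inventions of mine, not the actual results of the studies in the litigation:

import math

def pooled_or(studies):
    # Fixed-effect, inverse-variance pooling of odds ratios on the log scale.
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the CI
        w = 1 / se ** 2
        num += w * math.log(or_)
        den += w
    est = num / den
    half = 1.96 / math.sqrt(den)
    return tuple(round(math.exp(x), 2) for x in (est, est - half, est + half))

# Illustrative inputs only, not the litigation studies' actual numbers:
two_studies = [(1.6, 1.0, 2.6), (1.5, 0.9, 2.4)]
all_studies = two_studies + [(0.9, 0.7, 1.2), (1.0, 0.8, 1.3)]
print(pooled_or(two_studies))   # nominally elevated
print(pooled_or(all_studies))   # pulled back toward the null

With only the two elevated studies pooled, the summary odds ratio is nominally significant; adding two null studies pulls the estimate back to the vicinity of 1.0. Hence the importance of a pre-stated protocol for study inclusion.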

WOE to the Third Circuit

The Circuit gave the PSC every conceivable break. Because Pfizer had not engaged specifically on whether WOE was a proper, or any kind of, scientific method, the Circuit treated the issue as virtually conceded:

"Pfizer does not seem to contest the reliability of the Bradford Hill criteria or weight of the evidence analysis generally; the dispute centers on whether the specific methodology implemented by Dr. Jewell is reliable. Flexible methodologies, such as the 'weight of the evidence,' can be implemented in multiple ways; despite the fact that the methodology is generally reliable, each application is distinct and should be analyzed for reliability."

Zoloft at 18. The Court acknowledged that WOE arose only in the PSC's appellate brief, which would have made the entire dubious argument waived under general appellate jurisdictional principles, but the Court, in a footnote, indulged the assumption, "for the sake of argument," that WOE was Jewell's purported method from the inception. Zoloft at 18 n.39. Without any real evidentiary support, analysis, or concession from Pfizer, the Circuit accepted that WOE analyses were "generally reliable." Zoloft at 21.

The Circuit accepted, rather uncritically, that Jewell used a combination of WOE analysis and Bradford Hill considerations. Zoloft at 17. Although Jewell had never described WOE in his litigation report, and WOE was not a feature of his hearing testimony, the Circuit impermissibly engrafted Carl Cranor’s description of WOE as involving inference to the best explanation. Zoloft at 17 & n.37, citing Milward v. Acuity Specialty Prods. Grp., Inc., 639 F.3d 11, 17 (1st Cir. 2011) (internal quotation marks and citation omitted).

There was, however, a limit to the Circuit’s credulousness and empathy. As the Court noted, there must be some assurance that the purported Bradford Hill/WOE method is something more than a “mere conclusion-oriented selection process.” Zoloft at 20. Ultimately, the Court put its markers down for Jewell’s putative WOE methodology:

"there must be a scientific method of weighting that is used and explained."

Zoloft at 20. Calling the method WOE did not, in the final analysis, exempt Jewell from Rule 702 gatekeeping. Try as the PSC might, there was just no mistaking Jewell's approach as anything other than a crazy patchwork quilt of numerical wizardry in aid of subjective, result-oriented conclusion mongering.

In the Court’s words:

"we find that Dr. Jewell did not 1) reliably apply the 'techniques' to the body of evidence or 2) adequately explain how this analysis supports specified Bradford Hill criteria. Because 'any step that renders the analysis unreliable under the Daubert factors renders the expert's testimony inadmissible', this is sufficient to show that the District Court did not abuse its discretion in excluding Dr. Jewell's testimony."

Zoloft at 28. As heartening as the Circuit’s conclusion is, the Court’s couching its observation as a finding (“we find”) is disheartening with respect to the Third Circuit’s apparent inability to distinguish abuse-of-discretion review from de novo appellate findings. Equally distressing is the Court’s invocation of Daubert factors, which were dicta in a Supreme Court case that was superseded by an amended statute over 17 years ago, in Federal Rule of Evidence 702.

On the crucial question whether Jewell had engaged in an unreliable application of methods or techniques that superficially, at a very high level of generality, claim to be generally accepted, the Court stayed on course. The Court “found” that Jewell had applied techniques, analyses, and critiques so obviously inconsistently that no amount of judicial indulgence, assumptions arguendo, or careless glosses could save Jewell and his fatuous opinions from judicial banishment. Zoloft 28-29. Returning to the correct standard of review (abuse of discretion), but the wrong governing law (Daubert instead of Rule 702), the Court announced that:

"[b]ecause 'any step that renders the analysis unreliable under the Daubert factors renders the expert's testimony inadmissible', this is sufficient to show that the District Court did not abuse its discretion in excluding Dr. Jewell's testimony."

Zoloft at 21 n.50 (citation omitted). The Court found itself unable to say simply and directly that “the MDL trial court decided the case well within its discretion.”

The Zoloft case was not the Third Circuit’s first WOE rodeo. WOE had raised its unruly head in Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 602 (D.N.J. 2002), aff’d, 68 F. App’x 356 (3d Cir. 2003), where an expert witness, David Ozonoff, offered what purported to be a WOE opinion. The Magistrini trial court did not fuss with the assertion that WOE was generally reliable, but took issue with how Ozonoff tried to pass off his analysis as a comprehensive treatment of the totality of the evidence. In Magistrini, Judge Hochberg noted that regardless of the rubric of the methodology, the witness must show that in conducting a WOE analysis:

"all of the relevant evidence must be gathered, and the assessment or weighing of that evidence must not be arbitrary, but must itself be based on methods of science."

Magistrini, 180 F. Supp. 2d at 602. The witness must show that the methodology is more than a "mere conclusion-oriented selection process," and that it has "a scientific method of weighting that is used and explained." Id. at 607. Asserting the use of WOE was not an excuse or escape from judicial gatekeeping as specified by Rule 702.

Although the Third Circuit gave the Zoloft MDL trial court's findings a searching review (certainly much tougher than the prescribed abuse-of-discretion review), the MDL court's findings that Jewell "failed to consistently apply the scientific methods he articulates, has deviated from or downplayed certain well-established principles of his field, and has inconsistently applied methods and standards to the data so as to support his a priori opinion" were ultimately vindicated by the Court of Appeals. Zoloft at 10.

All's well that ends well. Perhaps. It remains unfortunate, however, that a hypothetical method, WOE — which was never actually advocated by the challenged expert witnesses, which lacks serious support in the scientific community, and which was merely assumed arguendo to be valid — will be taken by careless readers to have been endorsed by the Third Circuit.


1 Among the cases cited without any support for the PSC’s dubious contention were Gannon v. United States, 292 F. App’x 170, 173 n.1 (3d Cir. 2008); Bitler v. A.O. Smith Corp., 391 F.3d 1114, 1124-25 (10th Cir. 2004); In re Joint E. & S. Dist. Asbestos Litig., 52 F.3d 1124, 1128 (2d Cir. 1995); In re Avandia Mktg., Sales Practices & Prods. Liab. Litig., No. 2007-MD-1871, 2011 WL 13576, at *3 (E.D. Pa. Jan. 4, 2011) (“Bradford-Hill criteria are used to assess whether an established association between two variables actually reflects a causal relationship.”).

2 Anick Bérard, Sertraline Use During Pregnancy and the Risk of Major Malformations, 212 Am. J. Obstet. Gynecol. 795 (2015).

Welding Litigation – Another Positive Example of Litigation-Generated Science

July 11th, 2017

In a recent post1, I noted Samuel Tarry’s valuable article2 for its helpful, contrarian discussion of the importance of some scientific articles with litigation provenances. Public health debates can spill over to the courtroom, and developments in the courtroom can, on occasion, inform and even resolve those public health debates that gave rise to the litigation. Tarry provided an account of three such articles, and I provided a brief account of another article, a published meta-analysis, from the welding fume litigation.

The welding litigation actually accounted for several studies, but in this post, I detail the background of another published study, this one an epidemiologic study by a noted Harvard epidemiologist. Not every expert witness's report has the making of a published paper. In theory, if the expert witness has conducted a systematic review, and reached a conclusion not found among already published papers, we might well expect that the witness had achieved the "least publishable unit." The reality is that most causal claims are not based upon what could even remotely be called a systematic review. Given the lack of credibility of the causal claim, rebuttal reports are likely to be of little interest to serious scientists.

Martin Wells

In the welding fume cases, one of plaintiffs' hired expert witnesses, Martin Wells, a statistician, proffered an analysis of Parkinson's disease (PD) mortality among welders and welding tradesmen. Using the National Center for Health Statistics (NCHS) database, Wells aggregated data from 1993 to 1999 for PD mortality among welders, and compared it with PD mortality among non-welders. Wells claimed to find an increased risk of PD mortality among younger (under age 65 at death) welders and welding tradesmen in this dataset.

The defense sought discovery of Wells’s methods and materials, and obtained the underlying data from the NCHS. Wells had no protocol, no pre-stated commitment to which years in the dataset he would use, and no pre-stated statistical analysis plan. At a Rule 702 hearing, Wells was unable to state how many welders were included in his analysis, why he selected some years but not others, or why he had selected age 65 as the cut off. His analyses appeared to be pure data dredging.

As the defense discovered, the NCHS dataset contained mortality data for many more years than the limited range employed by Wells in his analysis. Working with an expert witness at the Harvard School of Public Health, the defense discovered that Wells had gerrymandered the years included (and excluded) in his analysis in a way that just happened to generate a marginally, nominally statistically significant association.

The defense was thus able to show that the data overall, and in each year, were very sparse. For most years, the value was either 0 or 1, for PD deaths under age 65. Because of the huge denominators, however, the calculated mortality odds ratios were nominally statistically significant. The value of four PD deaths in 1998 is clearly an outlier. If the value were three rather than four, the statistical significance of the calculated OR would have been lost. Alternatively, a simple sensitivity test suggests that if instead of overall n = 7, n were 6, statistical significance would have been lost. The chart below, prepared at the time with help from Dr. David Schwartz of Innovative Science Solutions, showed the actual number of "underlying cause" PD deaths that were in the dataset for each year, and how sparse and "granular" these data were:

[Chart: "NCHS Welder Age Distribution," showing annual counts of PD deaths among welders in the NCHS dataset; not reproduced here.]
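The sensitivity point can be reconstructed with a standard odds-ratio calculation. In the sketch below (Python), the denominators are hypothetical placeholders of my own choosing (the full NCHS cell counts are not reproduced in this post); the only point is how fragile nominal significance is when the result rests on seven deaths:

import math

def or_ci(a, b, c, d):
    # Odds ratio with a 95% Woolf (log-based) confidence interval.
    # a/b: exposed cases/non-cases; c/d: unexposed cases/non-cases.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return round(or_, 2), round(lo, 2), round(hi, 2)

# Hypothetical 2x2 counts (PD deaths vs. other deaths under age 65,
# welders vs. non-welders), chosen only to illustrate the fragility:
print(or_ci(7, 10_000, 350, 1_250_000))   # nominally significant
print(or_ci(6, 10_000, 350, 1_250_000))   # one fewer death: CI crosses 1.0

With seven deaths, the hypothetical interval excludes 1.0; take away a single death, and it no longer does. That is the entire distance between a "statistically significant" finding and a null one in data this sparse.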

A couple of years later, Wells's litigation analysis showed up, with only minor changes in its analyses, as a manuscript in the editorial offices of Neurology, with authors listed as Katherine W. Eisenberg, AB, and Martin T. Wells, Ph.D., "A Mortality Odds Ratio Study of Welders and Parkinson Disease." Wells disclosed that he had testified for plaintiffs in the welding fume litigation, but Eisenberg declared no conflicts. Having only an undergraduate degree, and attending medical school at the time of submission, Ms. Eisenberg would not seem to have had the opportunity to accumulate any conflicts of interest. Undisclosed to the editors of Neurology, however, was that Ms. Eisenberg was the daughter of Theodore (Ted) Eisenberg, a lawyer who taught at Cornell University and who represented plaintiffs in the same welding MDL as the one in which Wells testified. Inquiring minds might have wondered whether Ms. Eisenberg's tuition, room, and board were subsidized by Ted's earnings in the welding fume and other litigations. Ted Eisenberg and Martin Wells had collaborated on many other projects, but in the welding fume litigation, Ted worked as an attorney for MDL welding plaintiffs, and Martin Wells was compensated handsomely as an expert witness. The acknowledgment at the end of the manuscript thanked Theodore Eisenberg for his thoughtful comments and discussion, without noting that he had been a paid member of the plaintiffs' litigation team. Nor did Wells and Eisenberg tell the Neurology editors that the article had grown out of Wells's 2005 litigation report in the welding MDL.

The disclosure lapses and oversights by Wells and the younger Eisenberg proved harmless error because Neurology rejected the Wells and Eisenberg paper for publication, and it was never submitted elsewhere. The paper used the same restricted set of years of NCHS data, 1993-1999. The defense had already shown, through its own expert witness’s rebuttal report, that the manuscript’s analysis achieved statistical significance only because it omitted years from the analysis. For instance, if the authors had analyzed 1992 through 1999, their Parkinson’s disease mortality point estimate for younger welding tradesmen would no longer have been statistically significant.

Robert Park

One reason that Wells and Eisenberg may have abandoned their gerrymandered statistical analysis of the NCHS dataset was that an ostensibly independent group3 of investigators published a paper that presented a competing analysis. Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63 (2005) [cited as Park (2005)]. The authors accessed the same NCHS dataset, and looked at hundreds of different occupations, including welding tradesmen, and four neurodegenerative diseases.

Park, et al., claimed that they looked at occupations that had previously shown elevated proportional mortality ratios (PMRs) in an earlier NIOSH publication. A few other occupations were included; in all, there were hundreds of independent analyses, without any adjustment for multiple testing. Welding occupations4 were included "[b]ecause of reports of Parkinsonism in welders [Racette et al., 2001; Levy and Nassetta, 2003], possibly attributable to manganese exposure (from welding rods and steel alloys)… ."5 Racette was a consultant for the Lawsuit Industry, which had funded his research on parkinsonism among welders. Levy was a testifying expert witness for Lawsuit, Inc. A betting person would conclude that Park had consulted with Wells and Eisenberg, and their colleagues.

These authors looked at four neurological degenerative diseases (NDDs): Alzheimer's disease, Parkinson's disease, motor neuron disease, and pre-senile dementia. The authors looked at NCHS death certificate occupational information from 1992 to 1998, which was remarkable because Wells had insisted that 1992 somehow was not available for inclusion in his analyses. During 1992 to 1998, in 22 states, there were 2,614,346 deaths, with 33,678 from Parkinson's disease. (p. 65b). Then for each of the four disease outcomes, the authors conducted an analysis for deaths below age 65. For the welding tradesmen, none of the four NDDs showed any associations. Park went on to conduct subgroup analyses for each of the four NDDs for death below age 65. In these subgroup analyses for welding tradesmen, the authors purported to find an association only with Parkinson's disease:

"Of the four NDDs under study, only PD was associated with occupations where arc-welding of steel is performed, and only for the 20 PD deaths below age 65 (MOR=1.77, 95% CI=1.08-2.75) (Table V)."

Park (2005), at 70.

The exact nature of the subgroup was obscure, to say the least. Remarkably, Park and his colleagues had not calculated an odds ratio for welding tradesmen under age 65 at death compared with non-welding tradesmen under age 65 at death. The table’s legend attempts to explain the authors’ calculation:

"Adjusted for age, race, gender, region and SES. Model contains multiplicative terms for exposure and for exposure if age at death <65; thus MOR is estimate for deaths occurring age 65+, and MOR, age <65 is estimate of enhanced risk: age <65 versus age 65+"

In other words, Park looked to see whether welding tradesmen who died at a younger age (below age 65) were more likely to have a PD cause of death than welding tradesmen who died at an older age (over age 65). The meaning of this internal comparison is totally unclear, but it cannot represent a comparison of welders with non-welders. Indeed, every time Park and his colleagues calculated and reported this strange odds ratio for any occupational group in the published paper, the odds ratio was elevated. If the odds ratio means anything, it is that younger Parkinson's patients, regardless of occupation, are more likely to die of their neurological disease than older patients. Older men, regardless of occupation, are more likely to die of cancer, cardiovascular disease, and other chronic diseases. Furthermore, this age association within (not between) occupational groups may be nothing other than a reflection of the greater severity of early-onset Parkinson's disease in anyone, regardless of occupation.
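A toy example (hypothetical counts, constructed only for illustration) makes the defect plain: if early-onset PD is simply deadlier for everyone, the internal under-65-versus-65+ odds ratio comes out elevated in every occupation, even when welders' PD mortality is identical to everyone else's:

# Hypothetical counts: (PD deaths, other deaths) in each age band.
occupations = {
    "welders":     {"<65": (4, 2_000),     "65+": (40, 80_000)},
    "non-welders": {"<65": (400, 200_000), "65+": (4_000, 8_000_000)},
}
for occ, bands in occupations.items():
    (a, b), (c, d) = bands["<65"], bands["65+"]
    # Park-style internal contrast: odds of a PD death under 65 vs. 65+,
    # computed WITHIN the occupation.
    print(occ, "internal MOR:", round((a * d) / (b * c), 2))

Both occupations print an internal mortality odds ratio of 4.0, even though, in these made-up numbers, welders' PD share of deaths matches non-welders' exactly in each age band. The contrast measures the age patterning of PD deaths, not anything about welding.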

Like the manuscript by Eisenberg and Wells, the Park paper was an exercise in data dredging. The Park study reported increased odds ratios for Parkinson’s disease among the following groups on the primary analysis:

biological, medical scientists [MOR 2.04 (95% CI, 1.37-2.92)]

clergy [MOR 1.79 (95% CI, 1.58-2.02)]

religious workers [MOR 1.70 (95% CI, 1.27-2.21)]

college teachers [MOR 1.61 (95% CI, 1.39-1.85)]

social workers [MOR 1.44 (95% CI, 1.14-1.80)]

As noted above, all of the internal mortality odds ratios that the Park paper reported, for deaths below versus above age 65 within occupational groups, were nominally statistically significantly elevated. Nonetheless, the Park authors were on a mission, determined to make something out of nothing, at least when it came to welding and Parkinson’s disease among younger patients. The authors’ conclusion reflected stunningly poor scholarship:

Studies in the US, Europe, and Korea implicate manganese fumes from arc-welding of steel in the development of a Parkinson’s-like disorder, probably a manifestation of manganism [Sjogren et al., 1990; Kim et al., 1999; Luccini, et al., 1999; Moon et al., 1999]. The observation here that PD mortality is elevated among workers with likely manganese exposures from welding, below age 65 (based on 20 deaths), supports the welding-Parkinsonism connection.”

Park (2005) at 73.

Stunningly bad because the cited papers by Sjogren, Luccini, Kim, and Moon did not examine Parkinson’s disease as an outcome; indeed, they did not even examine a parkinsonian movement disorder. More egregious, however, was the authors’ assertion that their analysis, which compared the odds of Parkinson’s disease mortality for welders under age 65 with that for welders over age 65, supported an association between welding and “Parkinsonism.”

Every time the authors conducted this analysis internal to an occupational group, they found an elevation among under age 65 deaths compared with over age 65 deaths within the occupational group. They did not report comparisons of any age-defined subgroup of a single occupational group with similarly aged mortality in the remaining dataset.

Elan Louis

The plaintiffs’ lawyers used the Park paper as “evidence” of an association that they claimed was causal. They were aided by a cadre of expert witnesses who could cite to a paper’s conclusions, but could not understand its methods. Occasionally, one of the plaintiffs’ expert witnesses would confess ignorance about exactly what Robert Park had done in this paper. Elan Louis, one of the better qualified expert witnesses on the side of claimants, for instance, testified in the plaintiffs’ attempt to certify a national medical monitoring class action for welding tradesmen. His testimony about what to make of the Park paper was more honest than that of most of the plaintiffs’ expert witnesses:

Q. My question to you is, is it true that that 1.77 point estimate of risk, is not a comparison of this welder and allied tradesmen under this age 65 mortality, compared with non-welders and allied tradesmen who die under age 65?

A. I think it’s not clear that the footnote — I think that the footnote is not clearly written. When you read the footnote, you didn’t read the punctuation that there are semicolons and colons and commas in the same sentence. And it’s not a well constructed sentence. And I’ve gone through this sentence many times. And I’ve gone through this sentence with Ted Eisenberg many times. This is a topic of our discussion. One of the topics of our discussions. And it’s not clear from this sentence that that’s the appropriate interpretation. *  *  *  However, the footnote, because it’s so poorly written, it obscures what he actually did. And then I think it opens up alternative interpretations.

Q. And if we can pursue that for a moment. If you look at other tables for other occupational titles, or exposure related variables, is it true that every time that Mr. Park reports on that MOR age under 65, that the estimate is elevated and statistically significantly so?

A. Yes. And he uses the same footnote every time. He’s obviously cut and paste that footnote every single time, down to the punctuation is exactly the same. And I would agree that if you look for example at table 4, the mortality odds ratios are elevated in that manner for Parkinson’s Disease, with reference to farming, with reference to pesticides, and with reference to farmers excluding horticultural deaths.

Deposition testimony of Elan Louis, at p. 401-04, in Steele v. A. O. Smith Corp., no. 1:03 CV-17000, MDL 1535 (Jan. 18, 2007). Other less qualified, or less honest expert witnesses on the plaintiffs’ side were content to cite Park (2005) as support for their causal opinions.

Meir Stampfer

The empathetic MDL trial judge denied the plaintiffs’ request for class certification in Steele, but individual personal injury cases continued to be litigated. Steele v. A.O. Smith Corp., 245 F.R.D. 279 (N.D. Ohio 2007) (denying class certification); In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2008 WL 3166309 (N.D. Ohio Aug. 4, 2008) (striking pendent state-law class action claims).

Although Elan Louis was honest enough to acknowledge his own confusion about the Park paper, other expert witnesses continued to rely upon it, and plaintiffs’ counsel continued to cite the paper in their briefs and to use the apparently elevated point estimate for welders in their cross-examinations of defense expert witnesses. With the NCHS data in hand (on a DVD), defense counsel returned to Meir Stampfer, who had helped them unravel Martin Wells’ litigation analysis. The question for Professor Stampfer was whether Park’s reported point estimate for the PD mortality odds ratio was truly a comparison of welders versus non-welders, or whether it was some uninformative internal comparison of younger welders versus older welders.

The one certainty available to the defense was that it had the same dataset that had been used by Martin Wells in the earlier litigation analysis, and now by Robert Park and his colleagues in their published analysis. Using the NCHS dataset, and Park’s definition of a welder or a welding tradesman, Professor Stampfer calculated PD mortality odds ratios for each definition, as well as for each definition for deaths under age 65. None of these analyses yielded statistically significant associations. Park’s curious results could not be replicated from the NCHS dataset.

For welders, the overall PD mortality odds ratio (MOR) was 0.85 (95% CI, 0.77–0.94), for years 1985 through 1999, in the NCHS dataset. If the definition of welders was expanded to include welding tradesmen, as used by Robert Park, the MOR was 0.83 (95% CI, 0.78–0.88) for all years available in the NCHS dataset.

When Stampfer conducted an age-restricted analysis, which properly compared welders or welding tradesmen who died under age 65 with non-welding tradesmen who died under age 65, he similarly found no associations for the PD MOR. For the years 1985–1991, for deaths under 65 from PD, Stampfer found MORs of 0.99 (95% CI, 0.44–2.22) for welders only, and 0.83 (95% CI, 0.48–1.44) for all welding tradesmen.

And for 1992–1999, the years used by Park (2005), and similar to the date range used by Martin Wells, for PD deaths under age 65, Stampfer found a MOR of 1.44 (95% CI, 0.79–2.62) for welders only, and 1.20 (95% CI, 0.79–1.84) for all welding tradesmen.

None of Park’s slicing, dicing, and subgrouping of welding and PD results could be replicated. Although Dr. Stampfer submitted a report in Steele, there remained the problem that Park (2005) was a peer-reviewed paper, and that plaintiffs’ counsel, expert witnesses, and other published papers were citing it for its claimed results and errant discussion. The defense asked Dr. Stampfer whether the “least publishable unit” had been achieved, and Stampfer reluctantly agreed. He wrote up his analysis, and published it in 2009, with an appropriate disclosure6. Meir J. Stampfer, “Welding Occupations and Mortality from Parkinson’s Disease and Other Neurodegenerative Diseases Among United States Men, 1985–1999,” 6 J. Occup. & Envt’l Hygiene 267 (2009).

Professor Stampfer’s paper may not be the most important contribution to the epidemiology of Parkinson’s disease, but it corrected the distortions and misrepresentations of data in Robert Park’s paper. His paper has since been cited by well-known researchers in support of their conclusion that there is no association between welding and Parkinson’s disease7. Park’s paper has been criticized on PubPeer, with no rebuttal8.

Almost comically, Park has cited Stampfer’s study tendentiously for a claim that there is a healthy worker bias present in the available epidemiology of welding and PD, without noting, or responding to, the devastating criticism of his own Park (2005) work:

For a mortality study of neurodegenerative disease deaths in the United States during 1985 – 1999, Stampfer [61] used the Cause of Death database of the US National Center for Health Statistics and observed adjusted mortality odds ratios for PD of 0.85 (95% CI, 0.77 – 0.94) and 0.83 (95% CI, 0.78 – 0.88) in welders, using two definitions of welding occupations [61]. This supports the presence of a significant HWE [healthy worker effect] among welders. An even stronger effect was observed in welders for motor neuron disease (amyotrophic lateral sclerosis, OR 0.71, 95% CI, 0.56 – 0.89), a chronic condition that clearly would affect welders’ ability to work.”

Robert M. Park, “Neurobehavioral Deficits and Parkinsonism in Occupations with Manganese Exposure: A Review of Methodological Issues in the Epidemiological Literature,” 4 Safety & Health at Work 123, 126 (2013). Amyotrophic lateral sclerosis has a sudden onset, usually in middle age, without any real prodromal signs or symptoms that would keep a young man from entering welding as a trade. It just shows that you can get any opinion published in a peer-reviewed journal, somewhere. Stampfer’s paper, along with Mortimer’s meta-analysis, helped put the kibosh on welding fume litigation.

Addendum

A few weeks ago, the Sixth Circuit affirmed the dismissal of an attempted class action based upon claims of environmental manganese exposure. Abrams v. Nucor Steel Marion, Inc., Case No. 3:13 CV 137, 2015 WL 6872511 (N.D. Ohio Nov. 9, 2015) (finding testimony of neurologist Jonathan Rutchik to be nugatory, and excluding his proffered opinions), aff’d, 2017 U.S. App. LEXIS 9323 (6th Cir. May 25, 2017). The class plaintiffs had employed one of the regulars from the welding fume parkinsonism litigation, Jonathan Rutchik.


2 Samuel L. Tarry, Jr., “Can Litigation-Generated Science Promote Public Health?” 33 Am. J. Trial Advocacy 315 (2009).

3 Ostensibly, but not really. Robert M. Park was an employee of NIOSH, but he had spent most of his career working as an employee for the United Autoworkers labor union. The paper acknowledged help from Ed Baker, David Savitz, and Kyle Steenland. Baker is a colleague and associate of B.S. Levy, who was an expert witness for plaintiffs in the welding fume litigation, as well as many others. The article was published in the “red” journal, the American Journal of Industrial Medicine.

4 The welding tradesmen included in the analyses were welders and cutters, boilermakers, structural metal workers, millwrights, plumbers, pipefitters, and steamfitters. Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63, 65a, ¶2 (2005).

5 Id.

6 “The project was supported in part through a consulting agreement with a group of manufacturers of welding consumables who had no role in the analysis, or in preparing this report, did not see any draft of this manuscript prior to submission for publication, and had no control over any aspect of the work or its publication.” Stampfer, at 272.

7 Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 Eur. J. Epidemiol. S1 (2011).

8 The criticisms can be found at <https://pubpeer.com/publications/798F9D98B5D2E5A832136C0A4AD261>, last visited on July 10, 2017.

Slemp Trial Part 3 – The Defense Expert Witness – Huh

July 9th, 2017

On June 19, 2017, the U.S. Supreme Court curtailed the predatory jurisdictional practices of the lawsuit industry in seeking out favorable trial courts with no meaningful connection to their claims. See Bristol-Myers Squibb Co. v. Superior Court, No. 16-466, 582 U.S. ___ (June 19, 2017). The same day, the defendants in a pending talc cancer case in St. Louis filed a motion for a mistrial. Swann v. Johnson & Johnson, Case No. 1422-CC09326-01, Division 10, Circuit Court of St. Louis City, Missouri. Missouri law may protect St. Louis judges from having to get involved in gatekeeping scientific expert witness testimony, but when the Supreme Court speaks to the requirements of the federal constitution’s due process clause, even St. Louis judges must listen. Bristol-Myers held that the constitution limits the practice of suing defendants in jurisdictions unrelated to the asserted claims, and the St. Louis trial judge, Judge Rex Burlison, granted the requested mistrial in Swann. As a result, there will not be another test, for now, of plaintiffs’ claims that talc causes ovarian cancer, and the previous Slemp case will remain an important event to interpret.

The Sole Defense Expert Witness

Previous posts1 addressed some of the big-picture issues as well as the opening statements in Slemp. This post turns to the defense expert witness, Dr. Walter Huh, in an attempt to understand how and why the jury returned its egregious verdict. Juries can, of course, act out of sympathy, passion, or prejudice, but their verdicts are usually black boxes when it comes to discerning their motivations and analyses. A more interesting and fruitful exercise is to ask whether a reasonable jury could have reached the conclusion in the case. The value of this exercise is limited, however. A reasonable jury would need reasonable expertise in the subject matter, and in our civil litigation system, that premise is usually not satisfied.

Dr. Walter Huh, a gynecologic oncologist, was the only expert witness who testified for the defense. As the only defense witness, and as a clinician, Huh had a terrible burden. He had to meet and rebut testimony outside his fields of expertise, including pathology, toxicology, and most important, epidemiology. Huh was by all measures well-spoken, articulate, and well-qualified as a clinical gynecologic oncologist. Defense counsel and Huh, however, tried to make the case that Huh was qualified to speak to all issues in the case. The initial examination on qualifications was long and tedious, and seemed to overcompensate for the obvious gaps in Dr. Huh’s qualifications. In my view, the defense never presented much in the way of credible explanations about where Huh had obtained the training, experience, and expertise to weigh in on areas outside clinical medicine. Ultimately, the cross-examination is the crucial test of whether this strategy of one witness for all subjects can hold. The cross-examination of Dr. Huh, however, exposed the gaps in qualifications, and more important, Dr. Huh made substantive errors that were unnecessary and unhelpful to the defense of the case.

The defense pitched the notion that Dr. Huh somehow trumped all the expert witnesses called by plaintiff because Huh was the “only physician heard by the jury” in court. Somehow, I wonder whether the jury was so naïve. It seems like a poor strategic choice to hope that the biases of the jury in favor of the omniscience of physicians (over scientists) will carry the day.

There were, to be sure, some difficult clinical issues, which Dr. Huh could address within his competence. Cancer causation itself is a multi-disciplinary science, but for a disease such as ovarian cancer, with a substantial base rate in the general population and without any biomarker of a causal pathway between exposure and outcome, epidemiology will be a necessary tool. Huh was thus forced to “play” on the plaintiffs’ expert witnesses’ home court, much to his detriment.

General Causation

Don’t confuse causation with links, association, and risk factors

The defense’s strong point was that virtually no one, other than the plaintiffs’ expert witnesses themselves, and only in the context of litigation, has causally attributed ovarian cancer to talc exposure. There are, however, some ways that this point can be dulled in the rough and tumble of trial. Lawyers, like journalists, and even some imprecise scientists, use a variety of terms, such as “risk,” “risk factor,” “increased risk,” and “link,” for something less than causation. Sometimes these terms are used deliberately to try to pass off something less than causation as causation; sometimes the speaker is confused; and sometimes the speaker is simply being imprecise. It seems incumbent upon the defense to explain the differences among these terms, and to stick with a consistent, appropriate terminology.

One instance in which Dr. Huh took his eye off the “causation ball” arose when plaintiffs’ counsel showed him a study conclusion that talc use among African American women was statistically significantly associated with ovarian cancer. Huh answered, non-responsively, “I disagree with the concept that talc causes ovarian cancer.” The study, however, did not advance a causal conclusion, and there was no reason to suggest to the jury that he disagreed with anything in the paper; rather, the question was an opportunity to repeat that association is not causation, and that the article did not contradict anything he had said.

Similarly, Dr. Huh was confronted with several precautionary recommendations that women “may” benefit from avoiding talc. Remarkably, Huh simply disagreed, rather than making the obvious point that the recommendation was not stated as something that would in fact benefit women.

When witnesses answer long, involved questions with a simple “yes,” they may make every proposition implied in the question into a fact in the case. In one exchange, plaintiff’s counsel asked Huh whether a textbook listed talc as a risk factor.2 Huh struggled to disagree, which tended to impair his credibility, given that he had acknowledged using and relying upon the textbook. Disagreement, however, was not necessary; the text merely stated that “talc … may increase risk.” If “increased risk” had been defined and explained as something substantially short of causation, then Huh could have answered simply “yes, but that quotation does not support a causal claim.”

At another point, plaintiffs’ counsel, realizing that none of the individual studies reached a causal conclusion, asked whether it would be improper for a single study to give such a conclusion. It was a good question, with a solid premise, but Dr. Huh missed the opportunity to explain that the authors of the various individual studies had not conducted the systematic reviews that could support the causal conclusion that plaintiffs would need. Certainly, the authors of individual studies were not prohibited from taking the next step to advance a causal conclusion in a separate paper, with the appropriate analysis.

Bradford Hill’s Factors

Dr. Huh’s testimony provided the jury with some understanding of Sir Austin Bradford Hill’s nine factors, but Dr. Huh would have helped himself by acknowledging several important points. First, as Hill explained, the nine factors are invoked only after there is a clear-cut (valid) association beyond that which we care to attribute to chance. Second, establishing all nine factors is not necessary. Third, some of the nine factors are more important than others.

Study validity

In the epidemiology of talc and ovarian cancer, statistical power and significance are not the crucial issues; study validity is. It should have been the plaintiff’s burden to rule out bias and confounding, as well as chance. Hours had passed in the defense examination of Dr. Huh before study validity was raised, and it was never comprehensively explained. Dr. Huh explained recall bias as a particular problem of case-control studies, which made up the bulk of the evidence upon which plaintiffs’ expert witnesses relied. A more sophisticated witness on epidemiology might well have explained that the selection of controls can be a serious problem, without obvious solutions, in case-control studies.

On cross-examination, plaintiffs’ counsel, citing Kenneth Rothman, asked whether misclassification bias always yields a lower risk ratio. Dr. Huh resisted with “not necessarily,” but failed to dig into whether the conditions for rejecting plaintiffs’ generalization (such as polychotomous exposure classification) obtained in the relevant cohort studies. More important, Huh missed the opportunity to point out that the most recent, most sophisticated cohort study reported a risk ratio below 1.0, which on the plaintiffs’ theory about misclassification would mean that the true risk ratio was even lower than the one reported in the published paper. Again, a qualified epidemiologist would not have failed to make these points.
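For the statistically curious, the generalization at issue can be shown with a minimal sketch, using entirely hypothetical numbers (not from any talc study), of why non-differential misclassification of a binary exposure tends to bias a risk ratio toward the null:

```python
# A minimal sketch (hypothetical numbers) of bias toward the null from
# non-differential exposure misclassification: a true RR of 1.3 shrinks
# when cases and non-cases are misclassified with the same imperfect
# sensitivity and specificity.
def observed_counts(true_exposed, true_unexposed, sens, spec):
    """Reclassify true exposure counts with given sensitivity/specificity."""
    obs_exposed = true_exposed * sens + true_unexposed * (1 - spec)
    obs_unexposed = true_exposed * (1 - sens) + true_unexposed * spec
    return obs_exposed, obs_unexposed

# true cohort: 10,000 exposed with risk 0.013; 10,000 unexposed with risk 0.010
true_rr = 0.013 / 0.010  # 1.3

sens, spec = 0.8, 0.9  # the same for cases and non-cases (non-differential)
cases_e, cases_u = observed_counts(130, 100, sens, spec)
noncases_e, noncases_u = observed_counts(9870, 9900, sens, spec)

obs_risk_e = cases_e / (cases_e + noncases_e)
obs_risk_u = cases_u / (cases_u + noncases_u)
print(f"true RR = {true_rr:.2f}, observed RR = {obs_risk_e / obs_risk_u:.2f}")
# observed RR is about 1.20: biased toward, but not past, the null
```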

Dr. Huh never read the testimony of one of the plaintiffs’ expert witnesses on epidemiology, Graham Colditz, and offered no specific rebuttal of Colditz’s opinions. With respect to the other of plaintiffs’ epidemiology expert witnesses, Dr. Cramer, Huh criticized him for engaging in post-hoc secondary analyses and asserted that Cramer’s meta-analysis could not be validated. Huh never attempted to validate the meta-analysis himself; nor did Huh offer his own meta-analysis or explain why a meta-analysis of seriously biased studies would be unhelpful. These omissions substantially blunted Huh’s criticisms.

On the issue of study validity, Dr. Huh seemed to intimate that cohort studies were necessarily better than case-control studies, not only because of recall bias, but also because more women were involved in the cohort studies than in the case-control studies. The latter point, although arithmetically correct, is epidemiologically bogus. There are often fewer ovarian cancer cases in a cohort study, especially if the cohort is not followed for a very long time. The true test is the statistical precision of the point estimate, relative risk or odds ratio, in the different types of study. The case-control studies often generate much more precise point estimates, as seen from their narrower confidence intervals. Of course, the real issue here is not precision, but accuracy. Still, Dr. Huh appeared to have endorsed defense counsel’s misleading argument about study size, a consideration that will not help the defense when the contentions of the parties are heard in scientific fora.
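A minimal sketch, with made-up counts, illustrates the precision point. The width of the confidence interval around an odds ratio is driven largely by the number of cases, not by the total number of study subjects, which is why a case-control study with many cases can out-perform a far larger cohort with few cases:

```python
# A minimal sketch (made-up counts) of why "more women" does not mean more
# precision: the Woolf (log) confidence interval around an odds ratio is
# dominated by the smallest cells, usually the case counts.
import math

def or_with_ci(a, b, c, d):
    """Odds ratio with Woolf 95% CI for a 2x2 table (a,b / c,d)."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - 1.96 * se_log)
    hi = math.exp(math.log(or_hat) + 1.96 * se_log)
    return or_hat, lo, hi

# cohort of 200,000 women, but only 50 cases (25 exposed, 25 unexposed)
print(or_with_ci(25, 99975, 25, 99975))   # OR 1.0, CI roughly 0.57 to 1.74

# case-control study: 1,000 cases and 1,000 controls, 40% exposure in each
print(or_with_ci(400, 600, 400, 600))     # OR 1.0, CI roughly 0.84 to 1.20
```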

Statistical Significance

Huh appeared at times to stake out a position that if a study does not have statistical significance, then we must accept the null hypothesis. I believe that most careful scientists would reject this position. Null studies simply fail to reject the null hypothesis.

Although there seems to be no end to fallacious reasoning by plaintiffs, there is a particular defense fallacy seen in some cases that turn on epidemiology. What if we had 10 studies that each found an elevated risk ratio of 1.5, with two-tailed 95 percent confidence intervals of 0.92–2.18, or so? Can the defense claim victory because no study is statistically significant? Huh seemed to suggest so, but this is clearly wrong. Of course, we might ask why no one conducted the 11th study, with sufficient power to detect a risk ratio of 1.5, at the desired level of significance. But parties go to trial with the evidence they have, not what they might want to have. On the above 10-study hypothetical, a meta-analysis might well be done (assuming the studies could be appropriately included), and the summary risk ratio for all the studies would be 1.5, and highly statistically significant.
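For the arithmetically inclined, here is a minimal sketch of that hypothetical, pooling the ten studies with a standard fixed-effect (inverse-variance) meta-analysis; the common study standard error is backed out of the stated confidence interval:

```python
# A minimal sketch: ten studies, each RR = 1.5 with 95% CI of 0.92-2.18,
# pooled by fixed-effect (inverse-variance) meta-analysis. No single
# study is "significant," yet the summary estimate decidedly is.
import math
from scipy.stats import norm

log_rr = math.log(1.5)
# back out each study's standard error from its CI on the log scale
se = (math.log(2.18) - math.log(0.92)) / (2 * 1.96)

k = 10  # ten identical studies
weights = [1 / se**2] * k
pooled_log_rr = sum(w * log_rr for w in weights) / sum(weights)  # still log(1.5)
pooled_se = math.sqrt(1 / sum(weights))                          # se / sqrt(10)

z = pooled_log_rr / pooled_se
p = 2 * norm.sf(abs(z))
print(f"summary RR = {math.exp(pooled_log_rr):.2f}, z = {z:.2f}, p = {p:.1e}")
# roughly z = 5.8, p on the order of 1e-9
```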

On the question of talc and ovarian cancer, there were several meta-analyses at issue, and so the role of statistical significance of individual studies was less relevant. The real issue was study validity. This issue was muddled by assertions that risk ratios such as 2.05 (95% CI, 0.94–4.47) were “chance findings.” Chance may not have been ruled out, but the defense can hardly assert that chance and chance alone produced the findings; otherwise, it will be sunk by the available meta-analyses.

Strength of Association

The risk ratios involved in most of the talc ovarian cancer studies are small, and that is obviously an important factor to consider in evaluating the studies for causal conclusions. Still, it is also obvious that real causal associations can sometimes be small in magnitude. Dr. Huh could and should have conceded on direct examination that small associations can be causal, but explained that validity concerns become critical for studies that show small associations. Examples would have helped, such as the body of observational epidemiology that suggested that estrogen replacement therapy in post-menopausal women provided cardiovascular benefit, only to be contradicted by higher quality clinical trials. Similarly, observational studies suggested that lung cancer rates were reduced by Vitamin A intake, but again clinical trial data showed the opposite.

Consistency of Studies

Are studies that have statistically non-significant risk ratios above 1.0 inconsistent with studies that find statistically significant elevated risk ratios? At several points, Huh appeared to say that such a group of studies is inconsistent, but that is not necessarily so. Huh’s assertion provoked a good bit of harmful cross-examination, in which he seemed to resist the notion that meta-analysis could help answer whether a group of studies is statistically consistent. Huh could have conceded the point readily, but emphasized that a group of biased studies would give only a consistently biased estimate of association.
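A minimal sketch, with hypothetical study estimates, shows how meta-analysis addresses the consistency question: Cochran’s Q (and the derived I² statistic) asks whether the spread among study results exceeds what sampling error alone would produce. A mix of “significant” and “non-significant” studies with similar point estimates can be perfectly homogeneous:

```python
# A minimal sketch (hypothetical estimates) of a statistical consistency
# check: Cochran's Q compares the spread of study estimates with the
# spread expected from sampling error alone.
import math
from scipy.stats import chi2

# hypothetical (log RR, standard error) pairs: similar point estimates,
# some "significant," some not, depending only on each study's precision
studies = [(math.log(rr), se) for rr, se in
           [(1.4, 0.25), (1.6, 0.20), (1.3, 0.30), (1.5, 0.15), (1.45, 0.22)]]

w = [1 / se**2 for _, se in studies]
pooled = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
q = sum(wi * (y - pooled)**2 for wi, (y, _) in zip(w, studies))
df = len(studies) - 1
p_het = chi2.sf(q, df)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, p = {p_het:.2f}, I^2 = {i2:.0f}%")
# here Q is tiny and p is near 1: the studies are statistically consistent
```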

Authority

One of the cheapest tricks in the trial lawyers’ briefcase is the “learned treatise” exception to the rule against hearsay.3 The lawyer sets up witnesses in deposition by obtaining their agreement that a particular author or text is “authoritative.” Then at trial, the lawyer confronts the witnesses with a snippet of text, which appears to disagree with the expert witnesses’ testimony. Under the rule, in federal and in some state courts, the jury may accept the snippet or sound bite as true, and also accept that the witnesses do not know what they are talking about when they disagree with the “authoritative” text.

The rule is problematic and should have been retired long ago. Since 1663, the Royal Society has sported the motto: “Nullius in verba.” Disputes in science are resolved with data, from high-quality, reproducible experimental or observational studies, not with appeals to the prestige of the speaker. And yet, we lawyers will try, and sometimes succeed, with this greasy kid-stuff approach to cross-examination. Indeed, when there is an opportunity to use it, we may even have an obligation to use so-called learned treatises to advance our clients’ cause.

In the Slemp trial, the plaintiff’s counsel apparently had gotten a concession from Dr. Huh that plaintiff’s expert witness on epidemiology, Dr. Daniel Cramer, was “credible and authoritative.” Plaintiff’s counsel then used Huh’s disagreement with Cramer’s testimony as well as his published papers to undermine Huh’s credibility.

This attack on Huh was a self-inflicted wound. The proper response to a request for a concession that someone or some publication is “authoritative” is that the word really has no meaning in science. “Nullius in verba,” and all that. Sure, someone can be a respected researcher based upon past success, but past performance is no guarantee of future success. Look at Linus Pauling and Vitamin C. The truth of a conclusion rests on the data and the soundness of the inferences therefrom.

Collateral Attacks

The plaintiff’s lawyer in Slemp was particularly adept at another propaganda routine – attacking the witness on the stand for having cited another witness, whose credibility in turn was attacked by someone else, even if that someone else was a crackpot. Senator McCarthy (Joseph, not Eugene) would have been proud of plaintiff’s lawyer’s use of the scurrilous attack on Paolo Boffetta for his views on EMF and cancer, as set out in Microwave News, a fringe publication that advances EMF-cancer claims. Now, the claim that non-ionizing radiation causes cancer has not met with much, if any, acceptance, and Boffetta’s criticisms of the claims are hardly unique or unsupported. Yet plaintiff’s counsel used this throw-away publication’s characterization of Boffetta as “the devil’s advocate” to impugn Boffetta’s publications and opinions on EMF, as well as Huh’s opinions that relied upon some aspect of Boffetta’s work on talc. Not that “authority” counts, but Boffetta is the Associate Director for Population Sciences of the Tisch Cancer Institute and Chief of the Division of Cancer Prevention and Control of the Department of Oncological Sciences, at the Mt. Sinai School of Medicine in New York. He has published many epidemiologic studies, as well as a textbook on the epidemiology of cancer.4

The author of the Microwave News piece was never identified, but almost certainly lacked the training, experience, and expertise of Paolo Boffetta. The point, however, is that this cross-examination was extremely collateral, had nothing to do with Huh or the issues in the Slemp case, and warranted an objection and an admonition to plaintiff’s counsel for the scurrilous attack. An alert trial judge, who cared about substantial justice, might have shut down this frivolous, highly collateral attack sua sponte. When Huh was confronted with the “devil’s advocate” characterization, he responded “OK,” seemingly affirming the premise of the question.

Specific Causation

Dr. Huh and the talc defendants took the position that epidemiology never informs the assessment of individual causation. This opinion is hard to sustain. Elevated risk ratios reflect more individual cases than expected in a sample. Epidemiologic models are used to make individual predictions of risk for purposes of clinical monitoring and treatment. Population-based statistics are used to define the range of normal function and to assess whether individuals are impaired or disabled.

At one point in the cross-examination, plaintiffs’ counsel suggested the irrelevance of the size of a relative risk by asking whether Dr. Huh would agree that a 20% increased risk is not small if you are someone who has gotten the disease. Huh answered, “Well, if it is a real association.” This answer fails on several levels. First, it conflates “increased risk” and “real association” with causation. The point was for Huh to explain that an increased risk, if statistically significant, may be an association, but it is not necessarily causal.

Second, and equally important, Huh missed the opportunity to explain that even if the 20% increased risk were real and causal, it would still mean that an individual patient’s ovarian cancer was most likely not caused by the exposure. See David H. Schwartz, “The Importance of Attributable Risk in Toxic Tort Litigation,” (July 5, 2017).
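The arithmetic behind this point is straightforward. Under the usual attributable-fraction calculation, a relative risk of 1.2, even if real and causal, implies that only about one exposed case in six is attributable to the exposure:

```python
# The attributable-fraction arithmetic: on a relative risk of 1.2, the
# probability that a given exposed case is attributable to the exposure
# is (RR - 1) / RR, so any individual case is most likely not caused
# by the exposure.
rr = 1.2
attributable_fraction = (rr - 1) / rr
print(f"probability of causation: {attributable_fraction:.0%}")  # about 17%
```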

Conclusion

The defense strategy of eliciting all of its scientific and medical testimony from a single witness was dangerous at best. As good a clinician as Dr. Huh appears to be, the defense strategy did not bode well for a good outcome when many of the scientific issues were outside of Dr. Huh’s expertise.


2 Jonathan S. Berek & Neville F. Hacker, Gynecologic Oncology at 231 (6th ed. 2014).

3 See “Trust-Me Rules of Evidence” (Oct. 18, 2012).

4 See, e.g., Paolo Boffetta, Stefania Boccia, Carlo La Vecchia, A Quick Guide to Cancer Epidemiology (2014).

Traditional, Frequentist Statistics Still Hegemonic

March 25th, 2017

The Defense Fallacy

In civil actions, defendants and their legal counsel sometimes argue that the absence of statistical significance across multiple studies requires a verdict of “no cause” for the defense. This argument is fallacious, as can be seen where there are many studies, say eight or nine, which all consistently find elevated risk ratios, but with p-values slightly higher than 5%. The probability that eight studies, free of bias, would all find an elevated risk ratio, regardless of the individual studies’ p-values, is itself very small if there were truly nothing to find. If the studies were amenable to meta-analysis, the summary estimate of the risk ratio in this hypothetical would itself likely be highly statistically significant.
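The sign-test intuition can be made concrete in a couple of lines: if each unbiased study were equally likely to fall above or below the null when there is truly no effect, the chance that eight of eight would come out elevated is less than one-half of one percent:

```python
# A minimal sketch of the sign-test intuition: under a true null, each
# unbiased study is (roughly) a coin flip for falling above or below 1.0.
p_all_elevated = 0.5 ** 8
print(f"P(8 of 8 elevated | no effect) = {p_all_elevated:.4f}")  # 0.0039
```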

The Plaintiffs’ Fallacy

The plaintiffs’ fallacy derives from instances, such as the hypothetical one above, in which statistical significance, taken as a property of individual studies, is lacking. Even though we can hypothesize such instances, plaintiffs fallaciously extrapolate from them to the conclusion that statistical significance, or any other measure of sampling estimate precision, is unnecessary to support a conclusion of causation.

In courtroom proceedings, epidemiologist Kenneth Rothman is frequently cited by plaintiffs as having shown or argued that statistical significance is unimportant. For instance, in the Zoloft multi-district birth defects litigation, plaintiffs argued in a motion for reconsideration of the exclusion of their epidemiologic witness that the trial court had failed to give appropriate weight to the Supreme Court’s decision in Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27 (2011), as well as to the Third Circuit’s invocation of the so-called “Rothman” approach in a Bendectin birth defects case, DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941 (3d Cir. 1990). According to the plaintiffs’ argument, their excluded epidemiologic witness, Dr. Anick Bérard, had used this approach in arriving at her novel conclusion that sertraline causes virtually every kind of birth defect.

The Zoloft plaintiffs did not call Rothman as a witness; nor did they even present an expert witness to explain what Rothman’s arguments were. Instead, the plaintiffs’ counsel sneaked some references and vague conclusions into their cross-examinations of defense expert witnesses, and submitted snippets from Rothman’s textbook, Modern Epidemiology.

If plaintiffs had called Dr. Rothman to testify, he would probably have insisted that statistical significance is not a criterion for causation. Such insistence is not as helpful to plaintiffs in cases such as the Zoloft birth defects cases as their lawyers might have thought or hoped. Consider, for instance, the cases in which causal inferences are arrived at without formal statistical analysis. These instances are often not relevant to mass tort litigation that involves a prevalent exposure and a prevalent outcome.

Rothman also would likely have insisted that consideration of random variation and bias is essential to the assessment of causation, and that many apparently or nominally statistically significant associations do not and cannot support valid inferences of causation. Furthermore, he might have been given the opportunity to explain that his criticisms of significance testing are directed as much to the creation of false positives as to false negatives in observational epidemiology. In keeping with his publications, Rothman would have challenged strict significance testing with p-values, in favor of sample statistical estimates in conjunction with confidence intervals. The irony of the Zoloft case, and many other litigations, was that the defense was not using significance testing in the way that Rothman had criticized; rather, the plaintiffs were over-endorsing statistical significance that was nominal, plagued by multiple testing, and inconsistent.

Judge Rufe, who presided over the Zoloft MDL, pointed out that the Third Circuit in DeLuca had never affirmatively endorsed Professor Rothman’s “approach,” but had reversed and remanded the Bendectin case to the district court for a hearing under Rule 702:

by directing such an overall evaluation, however, we do not mean to reject at this point Merrell Dow’s contention that a showing of a .05 level of statistical significance should be a threshold requirement for any statistical analysis concluding that Bendectin is a teratogen regardless of the presence of other indicia of reliability. That contention will need to be addressed on remand. The root issue it poses is what risk of what type of error the judicial system is willing to tolerate. This is not an easy issue to resolve and one possible resolution is a conclusion that the system should not tolerate any expert opinion rooted in statistical analysis where the results of the underlying studies are not significant at a .05 level.”

2015 WL 314149, at *4 (quoting from DeLuca, 911 F.2d at 955). And in DeLuca, after remand, the district court excluded the DeLuca plaintiffs’ expert witnesses, and granted summary judgment, based upon the dubious methods employed by plaintiffs’ expert witnesses (including the infamous Dr. Done, and Shanna Swan), in cherry-picking data, recalculating risk ratios in published studies, and ignoring bias and confounding in studies. On subsequent appeal, the Third Circuit affirmed the judgment for Merrell Dow. DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042 (D.N.J. 1992), aff’d, 6 F.3d 778 (3d Cir. 1993).

Judge Rufe similarly rebuffed the plaintiffs’ use of the Rothman approach, their reliance upon Matrixx, and their attempt to banish consideration of random error in the interpretation of epidemiologic studies. In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2015 WL 314149 (E.D. Pa. Jan. 23, 2015) (Rufe, J.) (denying PSC’s motion for reconsideration). See “Zoloft MDL Relieves Matrixx Depression” (Feb. 4, 2015).

Some Statisticians’ Errors

Recently, Dr. Rothman and three other epidemiologists set out to track the change over time, from 1975 to 2014, in the use of various statistical methodologies. Andreas Stang, Markus Deckert, Charles Poole & Kenneth J. Rothman, “Statistical inference in abstracts of major medical and epidemiology journals 1975–2014: a systematic review,” 32 Eur. J. Epidem. 21 (2017) [cited below as Stang]. They made clear that their preferred methodological approach was to avoid strictly dichotomous null hypothesis significance testing (NHST), which evolved from Fisher’s significance testing (ST) and Neyman’s null hypothesis testing (NHT), in favor of estimation with confidence intervals (CI). The authors conducted a meta-study, that is, a study of studies, to track the trends in the use of NHST, ST, NHT, and CI reporting in the major bio-medical journals.

Unfortunately, the authors limited their data and analysis to abstracts, which makes their results very likely misleading and incomplete. Even when abstracts reported using so-called CI-only approaches, the study authors may well have reasoned, in the full text, that point estimates with CIs that spanned no association were “non-significant.” Similarly, authors who found elevated risk ratios with very wide confidence intervals may well have properly acknowledged that their studies did not provide credible evidence of an association. See W. Douglas Thompson, “Statistical criteria in the interpretation of epidemiologic data,” 77 Am. J. Public Health 191, 191 (1987) (discussing the over-interpretation of skimpy data).

Rothman and colleagues found that while a few epidemiologic journals had a rising prevalence of CI-only reports in abstracts, for many biomedical journals the NHST approach remained more common. Interestingly, at three of the major clinical medical journals, the Journal of the American Medical Association, the New England Journal of Medicine, and Lancet, the NHST has prevailed over the almost four decades of observation.

The clear implication of Rothman’s meta-study is that consideration of significance probability, whether or not treated as a dichotomous outcome, and whether or not treated as a p-value or a point estimate with a confidence interval, is absolutely critical to how biomedical research is conducted, analyzed, and reported. In Rothman’s words:

Despite the many cautions, NHST remains one of the most prevalent statistical procedures in the biomedical literature.”

Stang at 22. See also David Chavalarias, Joshua David Wallach, Alvin Ho Ting Li & John P. A. Ioannidis, “Evolution of Reporting P Values in the Biomedical Literature, 1990-2015,” 315 J. Am. Med. Ass’n 1141 (2016) (noting the absence of the use of Bayes’ factors, among other techniques).

There is one aspect of the Stang article that is almost Trump-like in its citing of an inappropriate, unknowledgeable source, and then treating its author as having meaningful knowledge of the subject. As part of their rhetorical goals, Stang and colleagues declare that:

there are some indications that it has begun to create a movement away from strict adherence to NHT, if not to ST as well. For instance, in the Matrixx decision in 2011, the U.S. Supreme Court unanimously ruled that admissible evidence of causality does not have to be statistically significant [12].”

Stang at 22. Whence comes this claim? Footnote 12 takes us to what could well be fake news of a legal holding, an article by a statistician about a legal case:

Joseph L. Gastwirth, “Statistical considerations support the Supreme Court’s decision in Matrixx Initiatives v. Siracusano,” 52 Jurimetrics J. 155 (2012).

Citing a secondary source when the primary source is readily available, and is the very thing at issue, seems like poor scholarship. Professor Gastwirth is a statistician, not a lawyer, and his exegesis of the Supreme Court’s decision is wildly off target. As any first-year law student could discern, the Matrixx case could not have been about the admissibility of evidence because the case had been dismissed on the pleadings, and no evidence had ever been admitted or excluded. The only issue on appeal was the adequacy of the allegations, not the admissibility of evidence.

Although the Court managed to muddle its analysis by wandering off into dicta about causation, the holding of the case is that causation need not be alleged to plead materiality in a securities fraud case. Having dispatched causality from the case, the Court had no serious business in setting out the considerations for alleging in pleadings, or proving at trial, the elements of causation. Indeed, the Court made it clear that its frolic and detour into causation could not be taken seriously:

We need not consider whether the expert testimony was properly admitted in those cases [cited earlier in the opinion], and we do not attempt to define here what constitutes reliable evidence of causation.”

Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27, 131 S.Ct. 1309, 1319 (2011).

The words “admissible” and “admissibility” never appear in the Court’s opinion, and the above quote explains that admissibility was not considered. Laughably, the Court went on to cite three cases as examples of supposed causation opinions in the absence of statistical significance. Two of the three were specific causation, differential etiology cases that involved known general causation. The third case involved a claim of birth defects from contraceptive jelly, in which the plaintiffs’ expert witnesses actually relied upon statistically significant (but thoroughly flawed and invalid) associations.1

When it comes to statistical testing, the legal world would be much improved if lawyers actually and carefully read statistics authors, and if statisticians and scientists actually read court opinions.

Washington Legal Foundation’s Paper on Statistical Significance in Rule 702 Proceedings

March 13th, 2017

The Washington Legal Foundation has released a Working Paper, No. 201, by Kirby Griffis, entitled “The Role of Statistical Significance in Daubert / Rule 702 Hearings,” in its Critical Legal Issues Working Paper Series (Mar. 2017) [cited below as Griffis]. I am a fan of many of the Foundation’s Working Papers (having written one some years ago), but this one gives me pause.

Griffis’s paper manages to avoid many of the common errors of lawyers writing about this topic, but adds little to the statistics chapter in the Reference Manual on Scientific Evidence (3d ed. 2011), and he propagates some new, unfortunate misunderstandings. On the positive side, Griffis studiously avoids the transposition fallacy in defining significance probability, and he notes that multiplicity from subgroups and multiple comparisons often undermines claims of statistical significance. Griffis gets both points right. These are woefully common errors, and they deserve the emphasis Griffis gives to them in this working paper.

On the negative side, however, Griffis falls into error on several points. Griffis helpfully narrates the Supreme Court’s evolution in Daubert and then in Joiner, but he fails to address the serious mischief and devolution introduced by the Court’s opinion in Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27, 131 S.Ct. 1309 (2011). See Schachtman, “The Matrixx – A Comedy of Errors” (April 6, 2011); David Kaye, “Trapped in the Matrixx: The U.S. Supreme Court and the Need for Statistical Significance,” BNA Product Safety & Liability Reporter 1007 (Sept. 12, 2011). With respect to statistical practice, this Working Paper is at times wide of the mark.

Non-Significance

Although avoiding the transposition fallacy, Griffis falls into another mistake in interpreting tests of significance; he states that a non-significant result tells us that an hypothesis is “perfectly consistent with mere chance”! Griffis at 9. This is, of course, wrong, or at least seriously misleading. A failure to reject the null hypothesis does not prove the null, such that we can say that the “null results” in a study were perfectly consistent with chance. The test may have lacked power to detect an “effect size” of interest. Furthermore, tests of significance cannot rule out systematic bias or confounding, and that limitation alone ensures that Griffis’s interpretation is mistaken. A null result may have resulted from bias or confounding that obscured a measurable association.
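The power point deserves emphasis, and a minimal sketch, with hypothetical numbers, shows how a perfectly real effect can hide inside a “null” study:

```python
# A minimal sketch (hypothetical numbers): with a 1% baseline risk and
# 1,000 subjects per arm, a two-proportion test has little power to
# detect a risk ratio of 1.5, so a "null" result proves very little.
import math
from scipy.stats import norm

p0, rr, n = 0.01, 1.5, 1000
p1 = p0 * rr
delta = p1 - p0
se = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
power = norm.sf(norm.ppf(0.975) - delta / se)  # normal approximation
print(f"power = {power:.0%}")  # roughly 17%
```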

Griffis states that p-values are expressed as percentages “usually 95% or 99%, corresponding to 0.05 or 0.01,” but this states things backwards. Griffis at 10. The p-value that is pre-specified as “significant” is a probability or percentage that is low; it is the coefficient of confidence used to construct a confidence interval that is the complement of the significance probability. An alpha, or pre-specified statistical significance level, of 5% thus corresponds to a coefficient of confidence of 95% (or 1.0 – 0.05).

The Mid-p Controversy

In discussing the emerging case law, Griffis rightly points to cases that chastise Dr. Nicholas Jewell for the many liberties he has taken, in various litigations, as an expert witness for the lawsuit industry. One instance cited by Griffis is the Lipitor diabetes litigation, where the MDL court suggested that Jewell switched improperly from a Fisher’s exact test to a mid-p test. Griffis at 18-19. Griffis seems to agree, but as I have explained elsewhere, Fisher’s exact test generates a one-tailed measure of significance probability, and the analyst is left to one of several ways of calculating a two-tailed test. See “Lipitor Diabetes MDL’s Inexact Analysis of Fisher’s Exact Test” (April 21, 2016). The mid-p is one legitimate approach for asymmetric distributions, and it is more favorable to the defense than passing off the one-tailed measure as the result of the test. The mere fact that a statistical software package does not automatically report the mid-p for a Fisher’s exact analysis does not make invoking this measure p-hacking or other misconduct. Doubling the attained significance probability of a particular Fisher’s exact test result is generally considered less accurate than a mid-p calculation, even though some software packages use doubling of the attained significance probability as a default. As much as we might dislike bailing Jewell out of Daubert limbo, on this one, limited point, he deserved a better hearing.
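A minimal sketch, with a hypothetical 2×2 table, shows the choice the analyst faces. Neither the doubled one-sided p nor the mid-p is dictated by the test itself; they are simply different two-sided conventions:

```python
# A minimal sketch (hypothetical 2x2 table) of two two-sided conventions
# for Fisher's exact test: doubling the one-sided p, and the mid-p, which
# subtracts half the probability of the observed table before doubling.
from scipy.stats import fisher_exact, hypergeom

a, b, c, d = 12, 488, 5, 495  # exposed cases/non-cases, unexposed cases/non-cases
n_total, n_exposed, n_cases = a + b + c + d, a + b, a + c

p_one_sided = fisher_exact([[a, b], [c, d]], alternative="greater")[1]
prob_observed = hypergeom.pmf(a, n_total, n_exposed, n_cases)

p_doubled = min(1.0, 2 * p_one_sided)
p_mid = 2 * (p_one_sided - 0.5 * prob_observed)  # two-sided mid-p
print(f"doubled one-sided p: {p_doubled:.4f}, mid-p: {p_mid:.4f}")
```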

Mis-Definitions

In recounting the Bendectin litigation, Griffis refers to the epidemiologic studies of birth defects and Bendectin as “experiments,” Griffis at 7, and then describes such studies as comparing “populations,” when he clearly meant “samples.” Griffis at 8.

Griffis conflates personal bias with bias as a scientific concept of systematic error in research, a confusion usually perpetuated by plaintiffs’ counsel. See Griffis at 9 (“Coins are not the only things that can be biased: scientists can be, too, as can their experimental subjects, their hypotheses, and their manipulations of the data.”) Of course, the term has multiple connotations, but too often an accusation of personal bias, such as conflict of interest, is used to avoid engaging with the merits of a study.

Relative Risks

Griffis correctly describes the measure known as “relative risk” as a determination of “the strength of a particular association.” Griffis at 10. The discussion then lapses into using a given relative risk as a measure of the likelihood that an individual with the studied exposure will develop the disease. Sometimes this general-to-specific inference is warranted, but without further analysis, it is impossible to tell whether Griffis lapsed from general to specific, deliberately or inadvertently, in describing the interpretation of relative risk.

Conclusion

Griffis is right in his chief contention that the proper planning, conduct, and interpretation of statistical tests is hugely important to judicial gatekeeping of some expert witness opinion testimony under Federal Rule of Evidence 702 (and under Rule 703, too). Judicial and lawyer aptitude in this area is low, and needs to be bolstered.

Statistical Analysis Requires an Expert Witness with Statistical Expertise

November 13th, 2016

Christina K. Connearney sued her employer, Main Line Hospitals, for age discrimination. Main Line charged Connearney with fabricating medical records, but Connearney replied that the charge was merely a pretext. Connearney v. Main Line Hospitals, Inc., Civ. Action No. 15-02730, 2016 WL 6569292 (E.D. Pa. Nov. 4, 2016) [cited as Connearney]. Connearney’s legal counsel engaged Christopher Wright, an expert witness on “human resources,” for a variety of opinions, most of which were not relevant to the action. Alas for Ms. Connearney, the few relevant opinions proffered by Wright were unreliable. On a Rule 702 motion, Judge Pappert excluded Wright from testifying at trial.

Although not a statistician, Wright sought to offer his statistical analysis in support of the age discrimination claim. Connearney at *4. According to Judge Pappert’s opinion, Wright had taken just two classes in statistics, but perhaps His Honor meant two courses. (Wright Dep., at 10:3–4.) If the latter, then Wright had more statistical training than most physicians who are often permitted to give bogus statistical opinions in health effects litigation. In 2015, the Medical College Admission Test apparently started to include some very basic questions on statistical concepts. Some medical schools now require an undergraduate course in statistics. See Harvard Medical School Requirements for Admission (2016). Most medical schools, however, still do not require statistical training for their entering students. See Veritas Prep, “How to Select Undergraduate Premed Coursework” (Dec. 5, 2011); “Georgetown College Course Requirements for Medical School” (2016).

Regardless of formal training, or lack thereof, Christopher Wright demonstrated a profound ignorance of, and disregard for, statistical concepts. (Wright Dep., at 10:15–12:10; 28:6–14.) Wright was shown to be the wrong expert witness for the job by his inability to define statistical significance. When asked what he understood to be a “statistically significant sample,” Wright gave a meaningless, incoherent answer:

I think it depends on the environment that you’re analyzing. If you look at things like political polls, you and I wouldn’t necessarily say that serving [sic] 1 percent of a population is a statistically significant sample, yet it is the methodology that’s used in the political polls. In the HR field, you tend to not limit yourself to statistical sampling because you then would miss outliers. So, most HR statistical work tends to be let’s look at the entire population of whatever it is we’re looking at and go from there.”

Connearney at *5 (Wright Dep., at 10:15–11:7). When questioned again, more specifically on the meaning of statistical significance, Wright demonstrated his complete ignorance of the subject:

Q: And do you recall the testimony it’s generally around 85 to 90 employees at any given time, the ER [emergency room]?

A: I don’t recall that specific number, no.

Q: And four employees out of 85 or 90 is about what, 5 or 6 percent?

A: I’m agreeing with your math, yes.

Q: Is that a statistically significant sample?

A: In the HR [human resources] field it sure is, yes.

Q: Based on what?

A: Well, if one employee had been hit, physically struck, by their boss, that’s less than 5 percent. That’s statistically significant.”

Connearney at *5 n.5 (Wright Dep., at 28:6–14).

In support of his opinion about “disparate treatment,” Wright’s report contained nothing more than a naked comparison of two raw percentages and a causal conclusion, without any statistical analysis. Even for this simplistic comparison of rates, Wright failed to explain how he had obtained the percentages, in a way that would have permitted the parties and the trial court to understand his computation and his comparisons. Without a statistical analysis, the trial court concluded that Wright had failed to show that the disparity in termination rates between younger and older employees was unlikely to be the result of random chance. See also Moultrie v. Martin, 690 F.2d 1078 (4th Cir. 1982) (rejecting writ of habeas corpus when petitioner failed to support claim of grand jury race discrimination with anything other than the numbers of white and black grand jurors).
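A minimal sketch, with hypothetical counts (the opinion does not give the underlying numbers), shows what even a rudimentary analysis of two raw termination percentages would have looked like:

```python
# A minimal sketch (hypothetical counts) of the analysis the report lacked:
# with small numbers, even a doubled termination rate can be entirely
# consistent with random chance.
from scipy.stats import fisher_exact

# hypothetical: 3 of 30 older employees terminated, 3 of 60 younger
table = [[3, 27], [3, 57]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.2f}")
# OR about 2.1, p far above 0.05: the raw disparity proves nothing
```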

Although Wright gave the wrong definition of statistical significance, the trial court relied upon judges of the Third Circuit who also did not get the definition quite right. The trial court cited a 2010 case in the Circuit, which conflated substantive and statistical significance and then gave a questionable definition of statistical significance:

The Supreme Court has not provided any definitive guidance about when statistical evidence is sufficiently substantial, but a leading treatise notes that ‘[t]he most widely used means of showing that an observed disparity in outcomes is sufficiently substantial to satisfy the plaintiff’s burden of proving adverse impact is to show that the disparity is sufficiently large that it is highly unlikely to have occurred at random.’ This is typically done by the use of tests of statistical significance, which determine the probability of the observed disparity obtaining by chance.”

See Connearney at *6 & n.7, citing and quoting from Stagi v. National RR Passenger Corp., 391 Fed. Appx. 133, 137 (3d Cir. 2010) (emphasis added) (internal citation omitted). Ultimately, however, this was all harmless error on the way to the right result.

Benhaim v. St. Germain – Supreme Court of Canada Wrestles With Probability

November 11th, 2016

On November 10, 2016, the Supreme Court of Canada handed down a divided (four-to-three) decision in a medical malpractice case, which involved statistical evidence, or rather probabilistic inference. Benhaim v. St-Germain, 2016 SCC 48 (Nov. 10, 2016). The case involved an appeal from a Quebec trial court, through the Quebec Court of Appeal, and some issues peculiar to Canadian law. For one thing, Canadian law does not appear to follow the lost-chance doctrine outlined in the American Law Institute’s Restatement. The consequence seems to be that negligent omissions in the professional liability context are assessed for their causal effect by the Canadian “balance of probabilities” standard.

The facts were reasonably clear, although their interpretation was disputed. In November 2005, Mr. Émond was 44 years old, a lifelong non-smoker, and in good health. At his annual physical with general practitioner Dr. Albert Benhaim, Émond had a chest X-ray (CXR). Benhaim at 11, ¶6. Remarkably, neither the majority nor the dissent commented upon the lack of reasonable medical necessity for a CXR in a healthy, non-smoking 40-something male. Few insurers in the United States would have paid for such a procedure. Maybe Canadian healthcare is more expansive than what we see in the United States.

The radiologist who reviewed Mr. Émond’s CXR reported a 1.5 to 2.0 cm solitary lesion, suggested comparison with previous CXRs, and recommended a CT scan of the thorax. Dr. Benhaim did not follow the radiologist’s suggestions, but Mr. Émond did have a repeat CXR two months later, on January 17, 2006, which was interpreted as unchanged. A recommendation for a follow-up third CXR in four months was not acted upon. Benhaim at 11, ¶7. The trial court found that the defendant physicians deviated from the professional standard of care, a finding from which there was no appeal.

Mr. Émond did have a follow-up CXR at the end of 2006, on December 4, 2006, which showed that the solitary lung nodule had grown. Follow-up CT and PET scans confirmed that Mr. Émond had Stage IV lung cancer. Id.

The issues in controversy turned on the staging of Mr. Émond’s lung cancer at the time of his first CXR, in November 2005, and on the medical consequences of the delay in diagnosis. Plaintiffs presented expert witness opinion testimony that Mr. Émond’s lung cancer was only Stage I (or at most IIA) at the initial radiographic discovery of a nodule, and that he was at Stage III or IV in December 2006, when CT and PET scans confirmed the actual diagnosis of lung cancer. In the view of plaintiffs’ expert witnesses, the delay in diagnosis, and the accompanying growth of the tumor and change from Stage I to IV, dramatically decreased Émond’s chance of survival. Id. at 13, ¶¶15-16. Indeed, plaintiffs’ expert witnesses opined that had Mr. Émond been timely diagnosed and treated in November 2005, he probably would have been cured.

The defense expert witness, Dr. Ferraro, testified that Mr. Émond’s lung cancer was Stage III or IV in November 2005, when the radiographic nodule was first seen, and his chances of survival at that time were already quite poor. According to Dr. Ferraro, earlier intervention and treatment would probably not have been successful in curing Mr. Émond, and the delay in diagnosis was not a cause of his death.

The trial court rejected plaintiffs’ expert witnesses’ opinions on factual grounds. These witnesses had argued that Mr. Émond’s lung cancer was at Stage I in November 2005 because the lung nodule was less than 3 cm, because Mr. Émond was asymptomatic, and because he was in good health. These three points of contention were clearly unreliable because they were all still present in January 2007, when Mr. Émond was diagnosed with Stage IV cancer, according to all the expert witnesses. Every point cited by plaintiffs’ expert witnesses in support of their staging thus failed to discriminate Stage I from Stage III. In Her Honor’s opinion, the lung cancer was probably Stage III in November 2005, and this staging implied a poor prognosis on all the expert witnesses’ opinions. The failure to diagnose until late 2006 was thus not, on the “balance of probabilities,” a cause of death. Id. at 15, ¶21.

The intermediate appellate court reversed on the ground of a presumption of causation, which arises when the defendant’s negligence interferes with the plaintiff’s ability to show causation, and there is some independent evidence of causation to support the case. I will leave this presumption, which the Supreme Court of Canada held inapplicable on the facts of this case, to Canadian lawyers to debate. What was more interesting was the independent evidence adduced by plaintiffs. This evidence consisted of a statistical generality: 78 percent of fortuitously discovered lung cancers are at Stage I, which in turn is associated with a cure rate of 70 percent. Id. at 18, ¶30.

The plaintiffs’ witnesses hoped to apply this generality to this case, notwithstanding that Émond’s nodule was close to 2 cm on CXR, that the general statistic was based upon more sensitive CT studies, and that Émond had been a non-smoker (which may have influenced tumor growth and staging). Furthermore, there was an additional, ominous finding in Mr. Émond’s first CXR, of hilar prominence, which supported the defense’s differentiation of his case from the generality of fortuitously discovered cancers (presumably small, solitary lung nodules without hilar involvement). Id. at 44, ¶83.

The trial court rejected the inference from the group statistic of 70% survival to the conclusion that Mr. Émond himself had a 70% probability of survival. Tellingly, there was no discussion of the variance for the 70% figure, nor any mention of relevant subgroups. The Court of Appeal, however, would have turned this statistic into a binding presumption, by accepting the 78 percent figure as strong evidence that the 70% survival figure pertained to Mr. Émond. The intermediate appellate court would then have taken the group survival rate as supporting a more-likely-than-not conclusion about Mr. Émond, while rejecting the defense expert witness’s statistics as mere speculation. Id. at 36, ¶67.
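The omission of any variance discussion is not a quibble. A minimal sketch, assuming hypothetical study sizes (the opinions report none), shows how wide a 95% confidence interval around a 70% survival figure can be:

```python
# Wilson score interval for a binomial proportion. The study sizes
# below are assumptions for illustration; the opinions never say how
# many patients stood behind the 70% survival figure.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# If the 70% figure rested on 35 of 50 patients, versus 700 of 1,000:
print(wilson_interval(35, 50))     # roughly (0.56, 0.81)
print(wilson_interval(700, 1000))  # roughly (0.67, 0.73)
```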

Adopting a skeptical stance with respect to probabilistic evidence, the Supreme Court reversed the Quebec Court of Appeal’s reversal of the trial court’s judgment. The Court cited Richard Wright’s and Jonathan Cohen’s criticisms of probabilistic evidence (including Cohen’s Gatecrasher’s Paradox), and urged caution in applying class or group statistics to generate probabilities that class members share the group characteristic.

Appellate courts should generally not interfere with a trial judge’s decision not to draw an inference from a general statistic to a particular case. Statistics themselves are silent about whether the particular parties before the court would have conformed to the trend or been an exception from it. Without an evidentiary bridge to the specific circumstances of the plaintiff, statistical evidence is of little assistance. For this reason, such general trends are not determinative in particular cases. What inferences follow from such evidence — whether the generalization that a statistic represents is instantiated in the particular case — is a matter for the trier of fact. This determination must be made with reference to the whole of the evidence.”

Benhaim at 39, ¶¶74-75 (internal citations omitted).

To some extent, the Supreme Court’s comments about statistical evidence were rather wide of the mark. The 78% statistic was pitched at a high level of generality, namely all cases, without regard for the size of the radiographically discovered lesion, the manner of discovery (CXR versus CT), the presence or absence of hilar pathology, or the group’s or the individual’s smoking status. In the context of the facts of the case, however, the trial court clearly had a factual basis for resisting the application of the group statistic (78% of fortuitously discovered tumors were Stage I, with 70% five-year survival).
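A toy computation makes the reference-class point concrete. Only the 78% and 70% rates below come from the opinion; the subgroup figures are invented for illustration:

```python
# Illustration of the reference-class problem the trial court sensed:
# the 78%/70% figures describe *all* fortuitously discovered lung
# cancers, and conditioning on features present in Émond's case
# (a ~2 cm lesion on CXR, hilar prominence) could change the answer.
p_stage1_overall = 0.78          # reported base rate, all fortuitous finds
p_stage1_given_features = 0.30   # hypothetical, given lesion size + hilar sign
p_survive_stage1 = 0.70          # reported five-year survival, Stage I
p_survive_late = 0.10            # hypothetical survival if Stage III/IV

def p_survival(p_stage1):
    # Law of total probability over the two staging hypotheses.
    return p_stage1 * p_survive_stage1 + (1 - p_stage1) * p_survive_late

print(f"using the overall base rate:  {p_survival(p_stage1_overall):.0%}")       # 57%
print(f"conditioning on case features: {p_survival(p_stage1_given_features):.0%}")  # 28%
# The same group statistic lands on opposite sides of the "balance of
# probabilities" line once the reference class changes.
```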

The Canadian Supreme Court seems to have navigated these probabilistic waters fairly adeptly, although the majority opinion contains broad-brush generalities and inaccuracies, which will, no doubt, show up in future lower court cases. For instance:

This is because the law requires proof of causation only on a balance of probabilities, whereas scientific or medical experts often require a higher degree of certainty before drawing conclusions on causation (p. 330). Simply put, scientific causation and factual causation for legal purposes are two different things.”

Benhaim at 24, ¶47. The Court cited legal precedent for its observation, not any scientific treatises. And then the Supreme Court suggested that all one needs to prevail in a tort case in Canada is a medical expert witness who speculates:

Trial judges are empowered to make legal determinations even where medical experts are not able to express an opinion with certainty.

Benhaim at 37, ¶72. Clearly dictum on the facts of Benhaim, but it seems that judges in Canada are like those in the United States. Black robes empower them to do what mere scientists could not do. If we were to ignore the holding of Benhaim, we might think that all one needs in Canada is a medical expert who speculates.