TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Scientific Evidence in Canadian Courts

February 20th, 2018

A couple of years ago, Deborah Mayo called my attention to the Canadian version of the Reference Manual on Scientific Evidence.1 In the course of discussion of mistaken definitions and uses of p-values, confidence intervals, and significance testing, Sander Greenland pointed to some dubious pronouncements in the Science Manual for Canadian Judges [Manual].

Unlike the United States federal court Reference Manual, which is published through a joint effort of the National Academies of Sciences, Engineering, and Medicine, the Canadian version is the product of the Canadian National Judicial Institute (NJI, or the Institut National de la Magistrature, if you live in Quebec), which claims to be an independent, not-for-profit group committed to educating Canadian judges. In addition to the Manual, the Institute publishes Model Jury Instructions and a guide, Problem Solving in Canada’s Courtrooms: A Guide to Therapeutic Justice (2d ed.), as well as conducting educational courses.

The NJI’s website describes the Institute’s Manual as follows:

Without the proper tools, the justice system can be vulnerable to unreliable expert scientific evidence.

         * * *

The goal of the Science Manual is to provide judges with tools to better understand expert evidence and to assess the validity of purportedly scientific evidence presented to them. …

The Chief Justice of Canada, Hon. Beverley M. McLachlin, contributed an introduction to the Manual, which was notable for its frank admission that:

[w]ithout the proper tools, the justice system is vulnerable to unreliable expert scientific evidence.

* * *

Within the increasingly science-rich culture of the courtroom, the judiciary needs to discern ‘good’ science from ‘bad’ science, in order to assess expert evidence effectively and establish a proper threshold for admissibility. Judicial education in science, the scientific method, and technology is essential to ensure that judges are capable of dealing with scientific evidence, and to counterbalance the discomfort of jurists confronted with this specific subject matter.

Manual at 14. These are laudable goals, indeed, but did the National Judicial Institute live up to its stated goals, or did it leave Canadian judges vulnerable to the Institute’s own “bad science”?

In his comments on Deborah Mayo’s blog, Greenland noted some rather cavalier statements in Chapter 2, which suggest that the conventional alpha of 5% corresponds to a “scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it.” And he pointed to other passages where the chapter seems to suggest that the coefficient of confidence corresponding to an alpha of 5% “constitutes a rather high standard of proof,” thus confusing and conflating the probability of random error with a posterior probability. Greenland is absolutely correct that the Manual does a rather miserable job of educating Canadian judges if our standard for its work product is accuracy and truth.

Some of the most egregious errors appear in what is perhaps the most important chapter of the Manual, Chapter 2, “Science and the Scientific Method.” The chapter has two authors: a scientist, Scott Findlay, and a lawyer, Nathalie Chalifour. Findlay is an Associate Professor in the Department of Biology at the University of Ottawa. Chalifour is an Associate Professor in the Faculty of Law, also at the University of Ottawa. Together, they produced some dubious pronouncements, such as:

Weight of the Evidence (WOE)

First, the concept of weight of evidence in science is similar in many respects to its legal counterpart. In both settings, the outcome of a weight-of-evidence assessment by the trier of fact is a binary decision.

Manual at 40. Findlay and Chalifour cite no support for their characterization of WOE in science. Most attempts to invoke WOE are woefully vague and amorphous, with no meaningful guidance or content.2 Sixty-five pages later, if anyone is still paying attention, the authors let us in on a dirty little secret:

at present, there exists no established prescriptive methodology for weight of evidence assessment in science.

Manual at 105. The authors omit, however, any mention that there are prescriptive methods for inferring causation in science; you just will not see them in discussions of weight of the evidence. The authors then compound the semantic and conceptual problems by stating that “in a civil proceeding, if the evidence adduced by the plaintiff is weightier than that brought forth by the defendant, a judge is obliged to find in favour of the plaintiff.” Manual at 41. This is a remarkable suggestion, which implies that if the plaintiff adduces the crummiest crumb of evidence, a mere peppercorn on the scales of justice, while the defendant has none to offer, then the plaintiff must win. The plaintiff wins notwithstanding that no reasonable person could believe that the plaintiff’s claims are more likely than not true. Even if this were the law of Canada, it is certainly not how scientists think about establishing the truth of empirical propositions.

Confusion of Hypothesis Testing with “Beyond a Reasonable Doubt”

The authors’ next assault comes in conflating significance probability with the probability connected with the burden of proof, a posterior probability. Legal proceedings have a defined burden of proof, with criminal cases requiring the state to prove guilt “beyond a reasonable doubt.” Findlay and Chalifour’s discussion then runs off the rails by likening hypothesis testing, with an alpha of 5% (or its complement, a 95% coefficient of confidence), to a “very high” burden of proof:

In statistical hypothesis-testing – one of the tools commonly employed by scientists – the predisposition is that there is a particular hypothesis (the null hypothesis) that is assumed to be true unless sufficient evidence is adduced to overturn it. But in statistical hypothesis-testing, the standard of proof has traditionally been set very high such that, in general, scientists will only (provisionally) reject the null hypothesis if they are at least 95% sure it is false. Third, in both scientific and legal proceedings, the setting of the predisposition and the associated standard of proof are purely normative decisions, based ultimately on the perceived consequences of an error in inference.

Manual at 41. This is, as Greenland and many others have pointed out, a totally bogus conception of hypothesis testing, and an utterly false description of the probabilities involved.

Later in the chapter, Findlay and Chalifour flirt with the truth, but then lapse into an unrecognizable parody of it:

Inferential statistics adopt the frequentist view of probability whereby a proposition is either true or false, and the task at hand is to estimate the probability of getting results as discrepant or more discrepant than those observed, given the null hypothesis. Thus, in statistical hypothesis testing, the usual inferred conclusion is either that the null is true (or rather, that we have insufficient evidence to reject it) or it is false (in which case we reject it).16 The decision to reject or not is based on the value of p: if the estimated value of p is below some threshold value α, we reject the null; otherwise we accept it.

Manual at 74. OK; so far so good, but here comes the train wreck:

By convention (and by convention only), scientists tend to set α = 0.05; this corresponds to the collective – and, one assumes, consensual – scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it. It is partly because of this that scientists have the reputation of being a notoriously conservative lot, given that a 95% threshold constitutes a rather high standard of proof.

Manual at 75. Ugh; so we are back to treating significance probability as a posterior probability. As if to atone for their sins, in the very next paragraph the authors remind the judicial readers that:

As noted above, p is the probability of obtaining results at least as discrepant as those observed if the null is true. This is not the same as the probability of the null hypothesis being true, given the results.

Manual at 75. True, true, and completely at odds with what the authors have stated previously. And to add to the reader’s now fully justified confusion, the authors describe the standard for rejecting the null hypothesis as “very high indeed.” Manual at 102, 109. Any reader who is following the discussion might wonder how and why there is such a problem of replication and reproducibility in contemporary science.
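The gap between a significance level and a posterior probability can be made concrete with a little arithmetic. A minimal Python sketch by way of illustration (the prior and power figures below are assumptions of mine, not anything taken from the Manual):

```python
# Positive predictive value of a "significant" (p < 0.05) result:
# P(real effect | p < alpha), by Bayes' theorem. The prior and power
# figures are illustrative assumptions, not values from the Manual.

def ppv(prior, power, alpha=0.05):
    """Probability the null is actually false, given a significant result."""
    true_positives = power * prior          # real effect, and detected
    false_positives = alpha * (1 - prior)   # no effect, but p < alpha anyway
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1):
    print(f"prior = {prior:.1f}, power = 0.8 -> PPV = {ppv(prior, 0.8):.2f}")
```

With a 50% prior and 80% power, a significant result yields a posterior of about 0.94; with a 10% prior, only 0.64. Neither number follows from alpha alone, which is precisely the point the Manual's authors miss.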

Conflating Bayesianism with Frequentist Modes of Inference

We have seen how Findlay and Chalifour conflate significance and posterior probabilities, at least some of the time. In a section of their chapter that deals explicitly with probability, the authors tell us that before any study is conducted, the prior probability of the truth of the tested hypothesis is 50%, sans evidence. This is an astonishing creation of certainty out of nothingness, and perhaps it explains the authors’ implied claim that the crummiest morsel of evidence on one side is sufficient to compel a verdict if the other side has no morsels at all. Here is how the authors put their claim to the Canadian judges:

Before each study is conducted (that is, a priori), the hypothesis is as likely to be true as it is to be false. Once the results are in, we can ask: How likely is it now that the hypothesis is true? In the first study, the low a priori inferential strength of the study design means that this probability will not be much different from the a priori value of 0.5 because any result will be rather equivocal owing to limitations in the experimental design.

Manual at 64. This implied Bayesian slant, with 50% priors, would lead anyone in the world of science to believe “as many as six impossible things before breakfast,” and many more throughout the day.
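The mischief done by the 50% prior is easy to exhibit with Bayes' theorem. A sketch, assuming a hypothetical likelihood ratio of 3 as a stand-in for a weak study (only the arithmetic is the point):

```python
# Bayesian updating: posterior odds = prior odds x likelihood ratio.
# The likelihood ratio of 3 below is a hypothetical stand-in for a
# weak, equivocal study; it is not a figure from the Manual.

def posterior(prior, likelihood_ratio):
    """P(hypothesis | data), given a prior and a Bayes factor for the data."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

for prior in (0.5, 0.05):
    print(f"prior = {prior:.2f} -> posterior = {posterior(prior, 3.0):.2f}")
```

Starting from the Manual's 50% prior, even weak evidence yields 75% belief; a skeptical 5% prior leaves the same evidence below 14%. The "certainty" comes from the assumed prior, not from the data.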

Lest you think that the Manual is all rubbish, there are occasional gems of advice to the Canadian judges. The authors admonish the judges to

be wary of individual ‘statistically significant’ results that are mined from comparatively large numbers of trials or experiments, as the results may be ‘cherry picked’ from a larger set of experiments or studies that yielded mostly negative results. The court might ask the expert how many other trials or experiments testing the same hypothesis he or she is aware of, and to describe the outcome of those studies.

Manual at 87. Good advice, but at odds with the authors’ characterization of statistical significance as establishing the rejection of the null hypothesis well-nigh beyond a reasonable doubt.
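The cherry-picking hazard the authors warn about is easily quantified. Assuming 20 independent tests of a true null hypothesis at the conventional 5% level:

```python
# With 20 independent tests of a true null hypothesis at alpha = 0.05,
# the chance of at least one nominally "significant" result by luck alone:
alpha, n_tests = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** n_tests
print(f"P(at least one p < 0.05 under the null) = {p_any_false_positive:.2f}")
```

Roughly a two-in-three chance of a spurious "discovery" before any real effect enters the picture.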

When Greenland first called attention to this Manual, I reached out to some people who had been involved in its peer review. One reviewer told me that it was a “living document,” which would likely be revised after he had the chance to call the NJI’s attention to the errors. But two years later, the errors remain, and so we have to infer that the authors meant to say all the contradictory and false statements that are still present in the downloadable version of the Manual.


2 See “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name” (May 1, 2012); “Weight of the Evidence in Science and in Law” (July 29, 2017); see also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).

High, Low and Right-Sided Colonics – Ridding the Courts of Junk Science

July 16th, 2016

Not surprisingly, many of Selikoff’s litigation- and regulatory-driven opinions have not fared well, such as the notions that asbestos causes gastrointestinal cancers and that all asbestos minerals have equal potential and strength to cause mesothelioma. Forty years after Selikoff testified in litigation that occupational asbestos exposure caused an insulator’s colorectal cancer, the Institute of Medicine reviewed the extant evidence and announced that the evidence was “suggestive but not sufficient to infer a causal relationship between asbestos exposure and pharyngeal, stomach, and colorectal cancers.” Jonathan Samet, et al., eds., Institute of Medicine Review of Asbestos: Selected Cancers (2006).[1] The Institute of Medicine’s monograph has fostered a more circumspect approach in some of the federal agencies. The National Cancer Institute’s website now proclaims that the evidence is insufficient to permit a conclusion that asbestos causes non-pulmonary cancers of the gastrointestinal tract and throat.[2]

As discussed elsewhere, Selikoff testified as early as 1966 that asbestos causes colorectal cancer, in advance of any meaningful evidence to support such an opinion, and then he, and his protégées, worked hard to lace the scientific literature with their pronouncements on the subject, without disclosing their financial, political, and positional conflicts of interest.[3]

Given the plaintiffs’ firm’s (Lanier’s) zealous pursuit of bias information from the University of Idaho in the LoGiudice case, what are we to make of Selikoff’s and his minions’ dubious ethics of failed disclosure? Do Selikoff and Mount Sinai receive a pass because their asbestos research predated the discovery of ethics? The “Lobby” (as the late Douglas Liddell called Selikoff and his associates)[4] has seriously distorted truth-finding in any number of litigations, but nowhere are the Lobby’s distortions more at work than in lawsuits for claimed asbestos injuries. Here the conflicts of interest truly have had a deleterious effect on the quality of civil justice. As we saw with the Selikoff exceptionalism displayed by the New York Supreme Court in reviewing third-party subpoenas,[5] some courts seem bent on ignoring evidence-based analyses in favor of Mount Sinai faith-based initiatives.

Current Asbestos Litigation Claims Involving Colorectal Cancer

Although Selikoff has passed from the litigation scene, his trainees and followers have lined up at the courthouse door to propagate his opinions. Even before the IOM’s 2006 monograph, more sophisticated epidemiologists consistently rejected the Selikoff conclusion on asbestos and colon cancer, which grew out of Selikoff’s litigation activities.[6] And yet, the minions keep coming.

In the pre-Daubert era, defendants lacked an evidentiary challenge to Selikoff’s opinion that asbestos caused colorectal cancer. Instead of contesting the legal validity or sufficiency of the plaintiffs’ general causation claims, defendants often focused on the unreliability of the causal attribution for the specific claimant’s disease. These early cases are often misunderstood to be challenges to expert witnesses’ opinions about whether asbestos causes colorectal cancer; they were not.[7]

Of course, after the IOM’s 2006 monograph, active expert witness gatekeeping should eliminate asbestos gastrointestinal cancer claims, but sadly they persist. Perhaps courts simply considered the issue “grandfathered” in from the era in which judicial scrutiny of expert witness opinion testimony was restricted. Perhaps defense counsel are failing to frame and support their challenges properly. Perhaps both.

Arthur Frank Jumps the Gate

Although ostensibly a “Frye” state, Pennsylvania judges have, when moved by the occasion, applied a fairly thorough analysis of proffered expert witness opinion.[8] On occasion, Pennsylvania judges have excluded unreliably or invalidly supported causation opinions under the Pennsylvania version of the Frye standard. A recent case, however, tried before a Workers’ Compensation Judge (WCJ) and appealed to the Commonwealth Court, shows how inconsistent the application of the standard can be, especially when Selikoff’s legacy views are at issue.

Michael Piatetsky, an architect, died of colorectal cancer. Before his death, he and his wife filed a workers’ compensation claim, in which they alleged that his disease was caused by his workplace exposure to asbestos. Garrison Architects v. Workers’ Comp. Appeal Bd. (Piatetsky), No. 1095 C.D. 2015, Pa. Cmwlth. Ct., 2016 Pa. Commw. Unpub. LEXIS 72 (Jan. 22, 2016) [cited as Piatetsky]. As an architect, Piatetsky was almost certainly knowledgeable about asbestos hazards generally. Despite his knowledge, Piatetsky eschewed personal protective equipment even when working at dusty work sites well marked with warnings. Although he had engaged in culpable conduct, the employer in workers’ compensation proceedings does not have ordinary negligence defenses, such as contributory negligence or assumption of risk.

In litigating the Piatetskys’ claim, the employer dragged its feet and failed to name an expert witness. Eventually, after many requests for continuances, the Workers’ Compensation Judge barred the employer from presenting an expert witness. With the record closed, and without an expert witness, the Judge understandably ruled in favor of the claimant.

The employer, sans expert witness, had to confront claimant’s expert witness, Arthur L. Frank, a minion of Selikoff and a frequent testifier in asbestos and many other litigations. Frank, of course, opined that asbestos causes colon cancer and that it caused Mr. Piatetsky’s cancer. Mr. Piatetsky’s colon cancer originated on the right side of his colon. Dr. Frank thus emphasized that asbestos causes colon cancer in all locations, but especially on the right side in view of one study’s having concluded “that colon cancer caused by asbestos is more likely to begin on the right side.” Piatetsky at *6.

On appeal, the employer sought relief on several issues, but the only one of interest here is the employer’s argument “that Claimant’s medical expert based his opinion on flimsy medical studies.” Piatetsky at *10. The employer’s appeal seemed to go off the rails with the insistence that the Claimant’s medical opinion was invalid because Dr. Frank relied upon studies not involving architects. Piatetsky at *14. The Commonwealth Court was able to point to testimony, although probably exaggerated, which suggested that Mr. Piatetsky had been heavily exposed, at least at times, and thus his exposure was similar to that in the studies cited by Frank.

With respect to Frank’s right-sided (non-sinister) opinion, the Commonwealth Court framed the employer’s issue as a contention that Dr. Frank’s opinion on the asbestos-relatedness of right-sided colon cancer was “not universally accepted.” But universal acceptance has never been the test or standard for the rejection or acceptance of expert witness opinion testimony in any state.  Either the employer badly framed its appeal, or the appellate court badly misstated the employer’s ground for relief. In any event, the Commonwealth Court never addressed the relevant legal standard in its discussion.

The Claimant argued that the hearing Judge had found that Frank’s opinion was based on “numerous studies.” Piatetsky at *15. None of these studies is cited to permit the public to assess the argument and the Court’s acceptance of it. The appellate court made inappropriately short work of this appellate issue by confusing general and specific causation, and invoking Mr. Piatetsky’s age, his lack of family history of colon cancer, Frank’s review of medical records, testimony, and work records, as warranting Frank’s causal inference. None of these factors is relevant to general causation, and none is probative of the specific causation claim.  Many if not most colon cancers have no identifiable risk factor, and Dr. Frank had no way to rule out baseline risk, even if there were an increased risk from asbestos exposure. Piatetsky at *16. With no defense expert witness, the employer certainly had a difficult appellate journey. It is hard for the reader of the Commonwealth Court’s opinion to determine whether the case was poorly defended, poorly briefed on appeal, or poorly described by the appellate judges.

In any event, the right-sided ruse of Arthur Frank went unreprimanded. Intellectual due process might have led the appellate court to cite the article at issue, but it failed to do so. It is interesting and curious to see how the appellate court gave a detailed recitation of the controverted facts of asbestos exposure, and yet how glib the court was when describing the scientific issues and evidence. Nonetheless, the article referenced vaguely, and uncited by the appellate court, was no doubt the paper: K. Jakobsson, M. Albin & L. Hagmar, “Asbestos, cement, and cancer in the right part of the colon,” 51 Occup. & Envt’l Med. 95 (1994).

These authors observed 24 right-sided colon cancers versus 9.63 expected, and they concluded that there was an increased rate of right-sided colon cancer in the asbestos cement plant workers. Notably, the authors’ reference population had a curiously low rate of right-sided colon cancer. For left-sided colon cancer, the authors expected 9.3 cases but observed only 5 in the asbestos-cement cohort. Contrary to Frank’s suggestion, the authors did not conclude that right-sided colon cancers had been caused by asbestos; indeed, the authors never reached any conclusion whether asbestos causes colorectal cancer under any circumstances. In their discussion, these authors noted that “[d]espite numerous epidemiological and experimental studies, there is no consensus concerning exposure to asbestos and risks of gastrointestinal cancer.” Jakobsson at 99; see also Dorsett D. Smith, “Does Asbestos Cause Additional Malignancies Other than Lung Cancer,” chap. 11, in Dorsett D. Smith, The Health Effects of Asbestos: An Evidence-based Approach 143, 154 (2015). Even this casual description of the Jakobsson study will alert the learned reader to the multiple comparisons that went on in this cohort study, with outcomes reported for left, right, rectum, and multiple sites, without any adjustment to the level of significance. Risk of right-sided colon cancer was not a pre-specified outcome of the study, and the results of subsequent studies have never corroborated this small cohort study.
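A rough check of the Jakobsson numbers, assuming simple Poisson variation in the observed count (an assumption of mine, not a calculation reported by the authors), shows both the nominal excess and why an unadjusted threshold overstates the evidence when several sites were examined:

```python
import math

def poisson_upper_tail(observed, expected):
    """Exact P(X >= observed) for X ~ Poisson(expected), by summation."""
    return 1 - sum(math.exp(-expected) * expected ** k / math.factorial(k)
                   for k in range(observed))

obs, expected = 24, 9.63            # right-sided colon cancers, Jakobsson et al.
sir = obs / expected                # standardized incidence ratio, about 2.5
p_one_sided = poisson_upper_tail(obs, expected)

# With at least four reported sites (left, right, rectum, multiple sites),
# a Bonferroni-adjusted threshold would be alpha / 4 rather than alpha.
print(f"SIR = {sir:.2f}; one-sided Poisson p = {p_one_sided:.2e}")
print(f"unadjusted alpha = 0.05; Bonferroni-adjusted = {0.05 / 4}")
```

The nominal excess looks large, but the unadjusted p-value ignores both the multiplicity of site-specific comparisons and the curiously low reference rate that the authors themselves flagged.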

A sane understanding of subgroup analyses is important to judicial gatekeeping. See “Sub-group Analyses in Epidemiologic Studies — Dangers of Statistical Significance as a Bright-Line Test” (May 17, 2011). The chapter on statistics in the Reference Manual on Scientific Evidence (3d ed. 2011) has some prudent caveats for multiple comparisons and testing, but neither the chapter on epidemiology, nor the chapter on clinical medicine[9], provides any sense of the dangers of over-interpreting subgroup analyses.

Some commentators have argued that we must not dissuade scientists from doing subgroup analyses, but the issue is not whether they should be done, but how they should be interpreted.[10] Certainly many authors have called for caution in how subgroup analyses are interpreted,[11] but apparently expert witness Arthur Frank did not receive the memo before testifying in the Piatetsky case, and the Commonwealth Court did not receive it before deciding the case.


[1] As good as the IOM process can be on occasion, even its reviews are sometimes less than thorough. The asbestos monograph gave no consideration to alcohol in the causation of laryngeal cancer, and no consideration to smoking in its analysis of asbestos and colorectal cancer. See, e.g., Peter S. Liang, Ting-Yi Chen & Edward Giovannucci, “Cigarette smoking and colorectal cancer incidence and mortality: Systematic review and meta-analysis,” 124 Internat’l J. Cancer 2406, 2410 (2009) (“Our results indicate that both past and current smokers have an increased risk of [colorectal cancer] incidence and mortality. Significantly increased risk was found for current smokers in terms of mortality (RR = 1.40), former smokers in terms of incidence (RR = 1.25)”); Lindsay M. Hannan, Eric J. Jacobs and Michael J. Thun, “The Association between Cigarette Smoking and Risk of Colorectal Cancer in a Large Prospective Cohort from the United States,” 18 Cancer Epidemiol., Biomarkers & Prevention 3362 (2009).

[2] National Cancer Institute, “Asbestos Exposure and Cancer Risk” (last visited July 10, 2016) (“In addition to lung cancer and mesothelioma, some studies have suggested an association between asbestos exposure and gastrointestinal and colorectal cancers, as well as an elevated risk for cancers of the throat, kidney, esophagus, and gallbladder (3, 4). However, the evidence is inconclusive.”).

[3] Compare “Health Hazard Progress Notes: Compensation Advance Made in New York State,” 16(5) Asbestos Worker 13 (May 1966) (thanking Selikoff for testifying in a colon cancer case) with, Irving J. Selikoff, “Epidemiology of gastrointestinal cancer,” 9 Envt’l Health Persp. 299 (1974) (arguing for his causal conclusion between asbestos and all gastrointestinal cancers, with no acknowledgment of his role in litigation or his funding from the asbestos insulators’ union).

[4] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997); see also “The Lobby Lives – Lobbyists Attack IARC for Conducting Scientific Research” (Feb. 19, 2013).

[5] See “The LoGiudice Inquisitorial Subpoena & Its Antecedents in N.Y. Law” (July 14, 2016).

[6] See, e.g., Richard Doll & Julian Peto, Asbestos: Effects on health of exposure to asbestos 8 (1985) (“In particular, there are no grounds for believing that gastrointestinal cancers in general are peculiarly likely to be caused by asbestos exposure.”).

[7] See “Landrigan v. The Celotex Corporation, Revisited” (June 4, 2013); Landrigan v. The Celotex Corp., 127 N.J. 404, 605 A.2d 1079 (1992); Caterinicchio v. Pittsburgh Corning Corp., 127 N.J. 428, 605 A.2d 1092 (1992). In both Landrigan and Caterinicchio, there had been no challenge to the reliability or validity of the plaintiffs’ expert witnesses’ general causation opinions. Instead, the trial courts entered judgments, assuming arguendo that asbestos can cause colorectal cancer (a dubious proposition), on the ground that the low relative risk cited by plaintiffs’ expert witnesses (about 1.5) was factually insufficient to support a verdict for plaintiffs on specific causation. Indeed, the relative risk suggested that the odds were about 2 to 1 in defendants’ favor that the plaintiffs’ colorectal cancers were not caused by asbestos.
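The 2-to-1 arithmetic follows from the standard attributable-fraction model, which (admittedly simplistically) converts a relative risk into a probability of specific causation:

```python
# Attributable fraction: under the standard (and simplistic) model, the
# probability that an exposed claimant's disease was caused by the
# exposure is (RR - 1) / RR.

def probability_of_causation(rr):
    return (rr - 1) / rr

rr = 1.5  # relative risk cited by the plaintiffs' expert witnesses
pc = probability_of_causation(rr)
print(f"RR = {rr} -> probability of causation = {pc:.3f} (about 1 in 3)")
```

A relative risk of 1.5 implies that only about one in three exposed cases is attributable to the exposure, i.e., odds of about 2 to 1 against specific causation.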

[8] See, e.g., Porter v. Smithkline Beecham Corp., Sept. Term 2007, No. 03275. 2016 WL 614572 (Phila. Cty. Com. Pleas, Oct. 5, 2015); “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015).

[9] John B. Wong, Lawrence O. Gostin & Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Reference Manual for Scientific Evidence 687 (3d ed. 2011).

[10] See, e.g., Phillip I. Good & James W. Hardin, Common Errors in Statistics (and How to Avoid Them) 13 (2003) (proclaiming a scientists’ Bill of Rights under which they should be allowed to conduct subgroup analyses); Ralph I. Horwitz, Burton H. Singer, Robert W. Makuch, Catherine M. Viscoli, “Clinical versus statistical considerations in the design and analysis of clinical research,” 51 J. Clin. Epidemiol. 305 (1998) (arguing for the value of subgroup analyses). In United States v. Harkonen, the federal government prosecuted a scientist for fraud in sending a telecopy that described a clinical trial as “demonstrating” a benefit in a subgroup of a secondary trial outcome.  Remarkably, in the Harkonen case, the author, and criminal defendant, was describing a result in a pre-specified outcome, in a plausible but post-hoc subgroup, which result accorded with prior clinical trials and experimental evidence. United States v. Harkonen (D. Calif. 2009); United States v. Harkonen (D. Calif. 2010) (post-trial motions), aff’d, 510 F. App’x 633 (9th Cir. 2013) (unpublished), cert. denied, 134 S. Ct. 824, ___ U.S. ___ (2014); Brief by Scientists And Academics as Amici Curiae In Support Of Petitioner, On Petition For Writ Of Certiorari in the Supreme Court of the United States, W. Scott Harkonen v. United States, No. 13-180 (filed Sept. 4, 2013).

[11] See “Sub-group Analyses in Epidemiologic Studies — Dangers of Statistical Significance as a Bright-Line Test” (May 17, 2011) (collecting commentary); see also Lemuel A. Moyé, Statistical Reasoning in Medicine: The Intuitive P-Value Primer 206, 225 (2d ed. 2006) (noting that subgroup analyses are often misleading: “Fishing expeditions for significance commonly catch only the junk of sampling error”); Victor M. Montori, Roman Jaeschke, Holger J. Schünemann, Mohit Bhandari, Jan L. Brozek, P. J. Devereaux & Gordon H. Guyatt, “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004) (“Beware subgroup analysis”); Susan F. Assmann, Stuart J. Pocock, Laura E. Enos & Linda E. Kasten, “Subgroup analysis and other (mis)uses of baseline data in clinical trials,” 355 Lancet 1064 (2000); George Davey Smith & Mathias Egger, “Commentary: Incommunicable knowledge? Interpreting and applying the results of clinical trials and meta-analyses,” 51 J. Clin. Epidemiol. 289 (1998) (arguing against post-hoc hypothesis testing); Douglas G. Altman, “Statistical reviewing for medical journals,” 17 Stat. Med. 2662 (1998); Douglas G. Altman, “Commentary: Within trial variation – A false trail?” 51 J. Clin. Epidemiol. 301 (1998) (noting that observed associations are expected to vary across subgroups because of random variability); Christopher Bulpitt, “Subgroup Analysis,” 2 Lancet 31 (1988).

Judicial Control of the Rate of Error in Expert Witness Testimony

May 28th, 2015

In Daubert, the Supreme Court set out several criteria or factors for evaluating the “reliability” of expert witness opinion testimony. The third factor in the Court’s enumeration was whether the trial court had considered “the known or potential rate of error” in assessing the scientific reliability of the proffered expert witness’s opinion. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993). The Court, speaking through Justice Blackmun, failed to provide much guidance on the nature of the errors subject to gatekeeping, on how to quantify the errors, or on how much error is too much. Rather than provide a taxonomy of error, the Court lumped “accuracy, validity, and reliability” together with a grand pronouncement that these measures were distinguished by no more than a “hen’s kick.” Id. at 590 n.9 (1993) (citing and quoting James E. Starrs, “Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702,” 26 Jurimetrics J. 249, 256 (1986)).

The Supreme Court’s failure to elucidate its “rate of error” factor has caused a great deal of mischief in the lower courts. In practice, trial courts have rejected engineering opinions on stated grounds of their lacking an error rate as a way of noting that the opinions were bereft of experimental and empirical evidential support[1]. For polygraph evidence, courts have used the error rate factor to obscure their policy prejudices against polygraphs, and to exclude test data even when the error rate is known, and rather low compared to what passes for expert witness opinion testimony in many other fields[2]. In the context of forensic evidence, the courts have rebuffed objections to random-match probabilities that would require that such probabilities be modified by the probability of laboratory or other error[3].

When it comes to epidemiologic and other studies that require statistical analyses, lawyers on both sides of the “v” frequently misunderstand p-values or confidence intervals to provide complete measures of error, and ignore the larger errors that result from bias, confounding, study validity (internal and external), inappropriate data synthesis, and the like[4]. Not surprisingly, parties fallaciously argue that the Daubert criterion of “rate of error” is satisfied by expert witness’s reliance upon studies that in turn use conventional 95% confidence intervals and measures of statistical significance in p-values below 0.05[5].

The lawyers who embrace confidence intervals and p-values as their sole measure of error rate fail to recognize that confidence intervals and p-values assess only one kind of error: random sampling error. Given the carelessness of the Supreme Court’s use of technical terms in Daubert, and its failure to engage with the actual evidence at issue in the case, it is difficult to know whether the Court intended to suggest that random error was the error rate it had in mind[6]. The statistics chapter in the Reference Manual on Scientific Evidence helpfully points out that the inferences that can be drawn from data turn not only on p-values and confidence intervals, but also on study design, data quality, and the presence or absence of systematic errors, such as bias or confounding. Reference Manual on Scientific Evidence at 240 (3d ed. 2011) [Manual]. Random errors are reflected in the size of p-values or the width of confidence intervals, but these measures of random sampling error ignore systematic errors such as confounding and study biases. Id. at 249 & n.96.
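The point can be made concrete with a toy simulation (all parameter values invented for illustration; this is not drawn from any of the cases or studies discussed). An exposure with no causal effect on disease, confounded by a factor that raises the probability of both exposure and disease, produces a crude risk ratio whose tight 95% confidence interval confidently excludes the true causal value of 1.0. The interval faithfully quantifies random sampling error while saying nothing about the confounding that generated the spurious association:

```python
import math
import random

def simulate(n=50000, seed=0):
    """Simulate a cohort in which exposure has NO causal effect on the
    outcome (true risk ratio = 1.0), but a confounder raises the
    probability of both exposure and outcome."""
    rng = random.Random(seed)
    counts = {"e1": [0, 0], "e0": [0, 0]}  # [cases, total] by exposure
    for _ in range(n):
        c = rng.random() < 0.5                          # confounder
        exposed = rng.random() < (0.8 if c else 0.2)    # depends on c
        diseased = rng.random() < (0.3 if c else 0.1)   # depends ONLY on c
        key = "e1" if exposed else "e0"
        counts[key][1] += 1
        counts[key][0] += diseased
    return counts

def crude_rr_ci(counts, z=1.96):
    """Crude risk ratio with a Wald-type 95% CI on the log scale."""
    a, n1 = counts["e1"]
    b, n0 = counts["e0"]
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)   # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

counts = simulate()
rr, lo, hi = crude_rr_ci(counts)
# The crude RR is biased upward (roughly 1.9 here), and the "95%
# confidence" interval excludes the true causal value of 1.0.
print(f"crude RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The narrow interval reflects only the play of chance in sampling; the systematic error built into the design survives untouched, which is precisely the Manual’s point about p-values and confidence intervals.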

The Manual’s chapter on epidemiology takes an even stronger stance: the p-value for a given study does not provide a rate of error or even a probability of error for an epidemiologic study:

“Epidemiology, however, unlike some other methodologies—fingerprint identification, for example—does not permit an assessment of its accuracy by testing with a known reference standard. A p-value provides information only about the plausibility of random error given the study result, but the true relationship between agent and outcome remains unknown. Moreover, a p-value provides no information about whether other sources of error – bias and confounding – exist and, if so, their magnitude. In short, for epidemiology, there is no way to determine a rate of error.”

Manual at 575. This stance seems not entirely justified given that there are Bayesian approaches that would produce credibility intervals accounting for sampling and systematic biases. To be sure, such approaches have their own problems and they have received little to no attention in courtroom proceedings to date.

The authors of the Manual’s epidemiology chapter, who are usually forgiving of judicial error in interpreting epidemiologic studies, point to one United States Court of Appeals case that fallaciously interpreted confidence intervals as magically quantifying bias and confounding in a Bendectin birth defects case. Id. at 575 n.96[7]. The Manual could have gone further to point out that, in the context of multiple studies, of different designs and analyses, the cognitive biases involved in evaluating, assessing, and synthesizing the studies are also ignored by statistical measures such as p-values and confidence intervals. Although the Manual notes that assessing the role of chance in producing a particular set of sample data is “often viewed as essential when making inferences from data,” the Manual never suggests that random sampling error is the only kind of error that must be assessed when interpreting data. The Daubert criterion would appear to encompass all varieties of error, not just random error.

The Manual’s suggestion that epidemiology does not permit an assessment of the accuracy of epidemiologic findings misrepresents the capabilities of modern epidemiologic methods. Courts can, and do, invoke gatekeeping approaches to weed out confounded study findings. See “Sorting Out Confounded Research – Required by Rule 702” (June 10, 2012). The “reverse Cornfield inequality” was an important analysis that helped establish the causal connection between tobacco smoke and lung cancer[8]. Olav Axelson studied and quantified the role of smoking as a confounder in epidemiologic analyses of other putative lung carcinogens.[9] Quantitative methods for identifying confounders have been widely deployed[10].
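Axelson’s indirect adjustment reduces to simple arithmetic. A minimal sketch, with invented parameter values (not taken from any particular study): the rate ratio that confounding alone would produce depends only on the confounder-disease rate ratio and on the confounder’s prevalence in the exposed and unexposed groups, and the observed rate ratio can be divided by that bias factor:

```python
def confounding_rate_ratio(rr_cd, p_exposed, p_unexposed):
    """Rate ratio expected from confounding alone (Axelson-style
    indirect adjustment): rr_cd is the confounder-disease rate ratio;
    p_* are the confounder prevalences in the compared groups."""
    numer = rr_cd * p_exposed + (1 - p_exposed)
    denom = rr_cd * p_unexposed + (1 - p_unexposed)
    return numer / denom

# Illustrative values only: smoking multiplies lung cancer rates ~10-fold,
# 70% of the exposed occupational group smokes vs. 50% of the comparison group.
bias = confounding_rate_ratio(rr_cd=10.0, p_exposed=0.7, p_unexposed=0.5)
observed_rr = 1.3          # hypothetical crude rate ratio from a study
adjusted_rr = observed_rr / bias
print(f"bias factor = {bias:.2f}; confounding-adjusted RR = {adjusted_rr:.2f}")
```

On these hypothetical numbers, a crude rate ratio of 1.3 is fully explained by the smoking differential between the groups: exactly the kind of quantification the Manual says epidemiology cannot supply.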

A recent study in birth defects epidemiology demonstrates the power of sibling cohorts in addressing the problem of residual confounding in observational population studies with limited information about confounding variables. Researchers looking at various birth defect outcomes among offspring of women who used certain antidepressants in early pregnancy generally found no associations in pooled data from Iceland, Norway, Sweden, Finland, and Denmark. One putative association did appear in the overall analysis: first trimester maternal exposure to selective serotonin reuptake inhibitors was associated with a specific kind of cardiac defect (right ventricular outflow tract obstruction, or RVOTO), with an adjusted odds ratio of 1.48 (95% C.I., 1.15, 1.89). When the analysis was limited to the sibling subcohort, however, the association reversed: the adjusted sibling analysis yielded an odds ratio of 0.56 (95% C.I., 0.21, 1.49)[11]. This study and many others show how creative analyses can elucidate and quantify the direction and magnitude of confounding effects in observational epidemiology.
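Intervals of the kind the study reports are conventionally computed on the log odds ratio scale. A minimal sketch of that arithmetic, with purely hypothetical 2×2 counts (not the study’s data), shows what such an interval does and does not capture: it quantifies sampling variability alone, and says nothing about the residual confounding that the sibling design was needed to expose:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Woolf (log-scale) confidence interval for an odds ratio from a
    2x2 table: exposed cases a, exposed controls b, unexposed cases c,
    unexposed controls d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Hypothetical counts chosen only for illustration:
or_, lo, hi = odds_ratio_ci(a=40, b=1000, c=270, d=10000)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An interval that excludes 1.0 by this computation licenses, at most, an inference about chance; whether the point estimate is itself an artifact of confounding is a separate question that no width of interval can answer.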

Systematic bias has also begun to succumb to more quantitative approaches. A recent guidance paper by well-known authors encourages the use of quantitative bias analysis to provide estimates of uncertainty due to systematic errors[12].

Although the courts have failed to articulate the nature and consequences of erroneous inference, some authors would reduce all of Rule 702 (and perhaps Rules 704 and 403 as well) to a requirement that proffered expert witnesses “account” for the known and potential errors in their opinions:

“If an expert can account for the measurement error, the random error, and the systematic error in his evidence, then he ought to be permitted to testify. On the other hand, if he should fail to account for any one or more of these three types of error, then his testimony ought not be admitted.”

Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 739 (2011).

Like most antic proposals to revise Rule 702, this reform vision shuts out the full range of Rule 702’s remedial scope. Scientists certainly try to identify potential sources of error, but they are not necessarily very good at it. See Richard Horton, “Offline: What is medicine’s 5 sigma?” 385 Lancet 1380 (2015) (“much of the scientific literature, perhaps half, may simply be untrue”). And as Holmes pointed out[13], certitude is not certainty, and expert witnesses are not likely to be good judges of their own inferential errors[14]. Courts continue to say and do wildly inconsistent things in the course of gatekeeping. Compare In re Zoloft (Sertraline Hydrochloride) Products, 26 F. Supp. 3d 449, 452 (E.D. Pa. 2014) (excluding expert witness) (“The experts must use good grounds to reach their conclusions, but not necessarily the best grounds or unflawed methods.”), with Gutierrez v. Johnson & Johnson, 2006 WL 3246605, at *2 (D.N.J. November 6, 2006) (denying motions to exclude expert witnesses) (“The Daubert inquiry was designed to shield the fact finder from flawed evidence.”).


[1] See, e.g., Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error); Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate); Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way to produce an accurate rate of error); Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error); Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

[2] See, e.g., United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that claimed error rate came from highly controlled situations, and that “real world” situations led to much higher error (10%) false positive error rates); Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action).

[3] See, e.g., United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting defendant’s objection to government’s failure to quantify laboratory error rate); United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting objection to government witness’s providing separate match and error probability rates).

[4] For a typical judicial misstatement, see In re Zoloft Products, 26 F. Supp. 3d 449, 454 (E.D. Pa. 2014) (“A 95% confidence interval means that there is a 95% chance that the ‘true’ ratio value falls within the confidence interval range.”).

[5] From my experience, this fallacious argument is advanced by both plaintiffs’ and defendants’ counsel and expert witnesses. See also Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 751 & n.72 (2011).

[6] See David L. Faigman, et al. eds., Modern Scientific Evidence: The Law and Science of Expert Testimony § 6:36, at 359 (2007–08) (“it is easy to mistake the p-value for the probability that there is no difference”).

[7] Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990). As with any error of this sort, there is always the question whether the judges were entrapped by the parties or their expert witnesses, or whether the judges came up with the fallacy on their own.

[8] See Joel B Greenhouse, “Commentary: Cornfield, Epidemiology and Causality,” 38 Internat’l J. Epidem. 1199 (2009).

[9] Olav Axelson & Kyle Steenland, “Indirect methods of assessing the effects of tobacco use in occupational studies,” 13 Am. J. Indus. Med. 105 (1988); Olav Axelson, “Confounding from smoking in occupational epidemiology,” 46 Brit. J. Indus. Med. 505 (1989); Olav Axelson, “Aspects on confounding in occupational health epidemiology,” 4 Scand. J. Work Envt’l Health 85 (1978).

[10] See, e.g., David Kriebel, Ariana Zeka, Ellen A. Eisen, and David H. Wegman, “Quantitative evaluation of the effects of uncontrolled confounding by alcohol and tobacco in occupational cancer studies,” 33 Internat’l J. Epidem. 1040 (2004).

[11] Kari Furu, Helle Kieler, Bengt Haglund, Anders Engeland, Randi Selmer, Olof Stephansson, Unnur Anna Valdimarsdottir, Helga Zoega, Miia Artama, Mika Gissler, Heli Malm, and Mette Nørgaard, “Selective serotonin reuptake inhibitors and venlafaxine in early pregnancy and risk of birth defects: population based cohort study and sibling design,” 350 Brit. Med. J. h1798 (2015).

[12] Timothy L. Lash, Matthew P. Fox, Richard F. MacLehose, George Maldonado, Lawrence C. McCandless, and Sander Greenland, “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).

[13] Oliver Wendell Holmes, Jr., Collected Legal Papers at 311 (1920) (“Certitude is not the test of certainty. We have been cock-sure of many things that were not so.”).

[14] See, e.g., Daniel Kahneman & Amos Tversky, “Judgment under Uncertainty:  Heuristics and Biases,” 185 Science 1124 (1974).

ALI Reporters Are Snookered by Racette Fallacy

April 27th, 2015

In the Reference Manual on Scientific Evidence, the authors of the epidemiology chapter advance instances of acceleration of onset of disease as an example of a situation in which reliance upon doubling of risk will not provide a reliable probability of causation calculation[1]. In a previous post, I suggested that the authors’ assertion may be unfounded. See “Reference Manual on Scientific Evidence on Relative Risk Greater Than Two For Specific Causation Inference” (April 25, 2015). Several epidemiologic methods would permit the calculation of relative risk within specific time windows from first exposure.

The American Law Institute (ALI) Reporters, for the Restatement of Torts, make similar claims.[2] First, the Reporters, citing the Manual’s second edition, repeat the Manual’s claim that:

“Epidemiologists, however, do not seek to understand causation at the individual level and do not use incidence rates in group studies to determine the cause of an individual’s disease.”

American Law Institute, Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28(a) cmt. c(4) & rptrs. notes (2010) [Comment c(4)]. In making this claim, the Reporters ignore an extensive body of epidemiologic studies on genetic associations and on biomarkers, which do address causation implicitly or explicitly, on an individual level.

The Reporters also repeat the Manual’s doubtful claim that acceleration of onset of disease prevents an assessment of attributable risk, although they acknowledge that an average earlier age of onset could support damages calculations for the acceleration itself, rather than damages for an injury that would not have occurred at all but for the tortious exposure. Comment c(4). The Reporters go a step further than the Manual, however, and provide an example of the acceleration-of-onset studies that they have in mind:

“For studies whose results suggest acceleration, see Brad A. Racette, Welding-Related Parkinsonism: Clinical Features, Treatments, and Pathophysiology,” 56 Neurology 8, 12 (2001) (stating that authors “believe that welding acts as an accelerant to cause [Parkinson’s Disease]… .”).

The citation to Racette’s 2001 paper[3] is curious, interesting, disturbing, and perhaps revealing. In this 2001 paper, Racette misrepresented the type of study he claimed to have done, and the inferences he drew from his case series are invalid. Anyone experienced in the field of epidemiology would have dismissed this study, its conclusions, and its suggested relation between welding and parkinsonism.

Dr. Brad A. Racette teaches and practices neurology at Washington University in St. Louis, across the river from a hotbed of mass tort litigation, Madison County, Illinois. In the 1990s, Racette received referrals from plaintiffs’ attorneys to evaluate their clients in litigation over exposure to welding fumes. Plaintiffs were claiming that their occupational exposures caused them to develop manganism, a distinctive parkinsonism that differs from Parkinson’s disease [PD], but has signs and symptoms that might be confused with PD by unsophisticated physicians unfamiliar with both manganism and PD.

After the publication of his 2001 paper, Racette became the darling of felon Dicky Scruggs and other plaintiffs’ lawyers. The litigation industrialists invited Racette and his team down to Alabama and Mississippi, to conduct screenings of welding tradesmen, recruited by Scruggs and his team, for potential lawsuits for PD and parkinsonism. The result was a paper that helped Scruggs propel a litigation assault against the welding industry.[4]

Racette’s 2001 paper, like many of his papers, was accompanied by a press release, in which he was quoted as stating that “[m]anganism is a very different disease” from PD. Gila Reckess, “Welding, Parkinson’s link suspected” (Feb. 9, 2001)[5].

Racette’s 2001 paper provoked a strongly worded letter that called Racette and his colleagues out for misrepresenting the nature of their work:

“The authors describe their work as a case–control study. Racette et al. ascertained welders with parkinsonism and compared their concurrent clinical features to those of subjects with PD. This is more consistent with a cross-sectional design, as the disease state and factors of interest were ascertained simultaneously. Cross-sectional studies are descriptive and therefore cannot be used to infer causation.”

*****

“The data reported by Racette et al. do not necessarily support any inference about welding as a risk factor in PD. A cohort study would be the best way to evaluate the role of welding in PD.”

Bernard Ravina, Andrew Siderowf, John Farrar, Howard Hurtig, “Welding-related parkinsonism: Clinical features, treatment, and pathophysiology,” 57 Neurology 936, 936 (2001).

As we will see, Dr. Ravina and his colleagues were charitable to suggest that the study was more compatible with a cross-sectional study. Racette had set out to determine “whether welding-related parkinsonism differs from idiopathic PD.” He claimed that he had “performed a case-control study,” with a case group of welders and two control groups. His inferences drawn from his “data” are, however, fallacious because he employed an invalid study design.

In reality, Racette’s paper was nothing more than a chart review, a case series of 15 “welders” in the context of a movement disorder clinic. After his clinical and radiographic evaluation, Racette found that these 15 cases were clinically indistinguishable from PD, and thus unlike manganism. Racette did not reveal whether any of these 15 welders had been referred by plaintiffs’ counsel; nor did he suggest that these welding tradesmen made up a disproportionate number of his patient base in St. Louis, Missouri.

Racette compared his selected 15 career welders with PD to his general movement disorders clinic patient population. From the patient population, Racette deployed two “control” groups, one matched for age and sex with the 15 welders, and the other unmatched. The American Law Institute Reporters are indeed correct that Racette suggested that the average age of onset for these 15 welders was lower than that for his non-welder patients, but their uncritical embrace overlooked the fact that Racette’s suggestion does not support his claimed inference that “welding exposure acts as an accelerant to cause PD.”

Racette’s claimed inference is remarkable because he did not perform an analytical epidemiologic study that was capable of generating causal inferences. His paper incongruously presents odds ratios, although the controls have PD, the disease of interest, which invalidates any analytical inference from his case series. Given the referral and selection biases inherent in tertiary-care specialty practices, this paper can provide no reliable inferences about associations or differences in ages of onset. Even within the confines of a case series misrepresented to be a case-control study, Racette acknowledged that “[s]ubsequent comparisons of the welders with age-matched controls showed no significant differences.”

NOT A CASE-CONTROL STUDY

That Racette wrongly identified his paper as a case-control study is beyond debate. How the journal Neurology accepted the paper for publication is a mystery. The acceptance of the inference by the ALI Reporters, lawyers, and judges is regrettable.

Structurally, Racette’s paper could never qualify as a case-control study, or any other analytical epidemiologic study. Here is how a leading textbook on case-control studies defines a case-control study:

“In a case-control study, individuals with a particular condition or disease (the cases) are selected for comparison with a series of individuals in whom the condition or disease is absent (the controls).”

James J. Schlesselman, Case-control Studies. Design, Conduct, Analysis at 14 (N.Y. 1982)[6].

Every patient in Racette’s paper, welders and non-welders alike, has the outcome of interest, PD. There is no epidemiologic study design that corresponds to what Racette did, and there is no way to draw any useful inference from Racette’s comparisons. Racette’s paper violates the key principle for a proper case-control study; namely, all subjects must be selected independently of the study exposure under investigation. Schlesselman stressed that identifying an eligible case or control must not depend upon that person’s exposure status for any factor under consideration. Id. Racette’s 2001 paper deliberately violated this basic principle.

Racette’s study design, with only cases with the outcome of interest appearing in the analysis, recklessly obscures the underlying association between the exposure (welding) and age in the population. We would, of course, expect self-identified welders to be younger than the average Parkinson’s disease patient because welding is physical work that requires good health. An equally fallacious study could be cobbled together to “show” that the age-of-onset of Parkinson’s disease for sitcom actors (such as Michael J. Fox) is lower than the age-of-onset of Parkinson’s disease for Popes (such as John Paul II). Sitcom actors are generally younger as a group than Popes. Comparing age of onset between disparate groups that have different age distributions generates a biased comparison and an erroneous inference.
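The arithmetic behind this fallacy is easy to reproduce. In the following toy calculation (all numbers invented for illustration), two groups share identical age-specific disease risks but have different age structures; the mean age at onset among cases differs by more than a decade even though no individual’s risk differs at any age:

```python
# Identical age-specific disease risk per person, in both groups:
risk = {30: 0.001, 40: 0.005, 50: 0.02, 60: 0.05, 70: 0.10}

# Hypothetical age distributions (head counts): one group skews young,
# the comparison group skews old.
welders = {30: 400, 40: 300, 50: 200, 60: 80, 70: 20}
clinic  = {30: 20, 40: 80, 50: 200, 60: 300, 70: 400}

def mean_onset_age(pop):
    """Expected mean age among cases, given shared age-specific risks."""
    cases = {age: n * risk[age] for age, n in pop.items()}
    total = sum(cases.values())
    return sum(age * c for age, c in cases.items()) / total

# Risks are identical at every age, yet mean onset differs sharply:
print(f"welders: {mean_onset_age(welders):.1f}, clinic: {mean_onset_age(clinic):.1f}")
```

The younger group’s cases are necessarily younger, which says nothing about whether membership in that group raises, lowers, or leaves unchanged the risk of disease: Racette’s comparison in a nutshell.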

The invalidity and fallaciousness of Racette’s approach to studying age of onset of PD in welders, and his uncritical inferences, have been extensively commented upon in the general epidemiologic literature. For instance, studies that compared the age at death for left-handed versus right-handed persons reported that left handers died, on average, nine years earlier, leading to (unfounded) speculation that the earlier mortality resulted from birth and life stressors and accidents for left handers living in a world designed to accommodate right-handed persons[7]. The inference has been shown to be fallacious, and the result of social pressure in the early twentieth century to push left handers to use their right hands, a prejudicial practice that abated over the decades of the last century. Left handers born later in the century were less likely to be “switched,” whereas persons born earlier, and now dying, were less likely to be classified as left-handed, as a result of this birth-cohort effect[8]. When proper prospective cohort studies were conducted, valid data showed that left-handers and right-handers have equivalent mortality rates[9].

Epidemiologist Ken Rothman addressed the fallacy of Racette’s paper at some length in one of his books:

“Suppose we study two groups of people and look at the average age at death among those who die. In group A, the average age of death is 4 years; in group B, it is 28 years. Can we say that being a member of group A is riskier than being a member of group B? We cannot… . Suppose that group A comprises nursery school students and group B comprises military commandos. It would be no surprise that the average age at death of people who are currently military commandos is 28 years or that the average age of people who are currently nursery students is 4 years. …

In a study of factory workers, an investigator inferred that the factory work was dangerous because the average age of onset of a particular kind of cancer was lower in these workers than among the general population. But just as for the nursery school students and military commandos, if these workers were young, the cancers that occurred among them would have to be occurring in young people. Furthermore, the age of onset of a disease does not take into account what proportion of people get the disease.

These examples reflect the fallacy of comparing the average age at which death or disease strikes rather than comparing the risk of death between groups of the same age.”

Kenneth J. Rothman, “Introduction to Epidemiologic Thinking,” in Epidemiology: An Introduction at 5-6 (N.Y. 2002).

And here is how another author of Modern Epidemiology[10] addressed the Racette fallacy in a different context involving PD:

“Valid studies of age-at-onset require no underlying association between the risk factor and aging or birth cohort in the source population. They must also consider whether a sufficient induction time has passed for the risk factor to have an effect. When these criteria and others cannot be satisfied, age-specific or standardized risks or rates, or a population-based case-control design, must be used to study the association between the risk factor and outcome. These designs allow the investigator to disaggregate the relation between aging and the prevalence of the risk factor, using familiar methods to control confounding in the design or analysis. When prior knowledge strongly suggests that the prevalence of the risk factor changes with age in the source population, case-only studies may support a relation between the risk factor and age-at-onset, regardless of whether the inference is justified.”

Jemma B. Wilk & Timothy L. Lash, “Risk factor studies of age-at-onset in a sample ascertained for Parkinson disease affected sibling pairs: a cautionary tale,” 4 Emerging Themes in Epidemiology 1 (2007) (internal citations omitted) (emphasis added).

A properly designed epidemiologic study would have avoided Racette’s fallacy. A relevant cohort study would have enrolled welders in the study at the outset of their careers, and would have continued to follow them even if they changed occupations. A case-control study would have enrolled cases with PD and controls without PD (or more broadly, parkinsonism), with cases and controls selected independently of their exposure to welding fumes. Either method would have determined the rate of PD in both groups, absolutely or relatively. Racette’s paper, which completely lacked non-PD cases, could not have possibly accomplished his stated objectives, and it did not support his claims.

Racette’s questionable work provoked a mass tort litigation and ultimately federal Multi-District Litigation 1535.[11] In time, analytical epidemiologic studies consistently showed no association between welding and PD. A meta-analysis published in 2012 ended the debate as a practical matter[12], and MDL 1535 is no more. How strange that the ALI Reporters chose the Racette work as an example of their claims about acceleration of onset!


[1] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549, 614 (Wash., DC 3d ed. 2011).

[2] Michael D. Green was an ALI Reporter, and of course, an author of the chapter in the Reference Manual.

[3] Brad A. Racette, L. McGee-Minnich, S. M. Moerlein, J. W. Mink, T. O. Videen, and Joel S. Perlmutter, “Welding-related parkinsonism: clinical features, treatment, and pathophysiology,” 56 Neurology 8 (2001).

[4] See Brad A. Racette, S.D. Tabbal, D. Jennings, L. Good, Joel S. Perlmutter, and Brad Evanoff, “Prevalence of parkinsonism and relationship to exposure in a large sample of Alabama welders,” 64 Neurology 230 (2005); Brad A. Racette, et al., “A rapid method for mass screening for parkinsonism,” 27 Neurotoxicology 357 (2006) (duplicate publication of the earlier, 2005, paper).

[5] Previously available at <http://record.wustl.edu/archive/2001/02-09-01/articles/welding.html>, last visited on June 27, 2005.

[6] See also Brian MacMahon & Dimitrios Trichopoulos, Epidemiology. Principles and Methods at 229 (2d ed. 1996) (“A case-control study is an inquiry in which groups of individuals are selected based on whether they do (the cases) or do not (the controls) have the disease of which the etiology is to be studied.”); Jennifer L. Kelsey, W.D. Thompson, A.S. Evans, Methods in Observational Epidemiology at 148 (N.Y. 1986) (“In a case-control study, persons with a given disease (the cases) and persons without the disease (the controls) are selected … .”).

[7] See, e.g., Diane F. Halpern & Stanley Coren, “Do right-handers live longer?” 333 Nature 213 (1988); Diane F. Halpern & Stanley Coren, “Handedness and life span,” 324 New Engl. J. Med. 998 (1991).

[8] Kenneth J. Rothman, “Left-handedness and life expectancy,” 325 New Engl. J. Med. 1041 (1991) (pointing out that, by the age-of-onset comparison method, nursery education would be found more dangerous than paratrooper training, given that the age at death of preschoolers who died would be much lower than that of paratroopers who died); see also Martin Bland & Doug Altman, “Do the left-handed die young?” Significance 166 (Dec. 2005).

[9] See Philip A. Wolf, Ralph B. D’Agostino, Janet L. Cobb, “Left-handedness and life expectancy,” 325 New Engl. J. Med. 1042 (1991); Marcel E. Salive, Jack M. Guralnik & Robert J. Glynn, “Left-handedness and mortality,” 83 Am. J. Public Health 265 (1993); Olga Basso, Jørn Olsen, Niels Holm, Axel Skytthe, James W. Vaupel, and Kaare Christensen, “Handedness and mortality: A follow-up study of Danish twins born between 1900 and 1910,” 11 Epidemiology 576 (2000). See also Martin Wolkewitz, Arthur Allignol, Martin Schumacher, and Jan Beyersmann, “Two Pitfalls in Survival Analyses of Time-Dependent Exposure: A Case Study in a Cohort of Oscar Nominees,” 64 Am. Statistician 205 (2010); Michael F. Picco, Steven Goodman, James Reed, and Theodore M. Bayless, “Methodologic pitfalls in the determination of genetic anticipation: the case of Crohn’s disease,” 134 Ann. Intern. Med. 1124 (2001).

[10] Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, eds., Modern Epidemiology (3d ed. 2008).

[11] Dicky Scruggs served on the Plaintiffs’ Steering Committee until his conviction on criminal charges.

[12] James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).

Reference Manual on Scientific Evidence on Relative Risk Greater Than Two For Specific Causation Inference

April 25th, 2015

The first edition of the Reference Manual on Scientific Evidence [Manual] was published in 1994, a year after the Supreme Court delivered its opinion in Daubert. The Federal Judicial Center organized and produced the Manual, in response to the kernel panic created by the Supreme Court’s mandate that federal trial judges serve as gatekeepers of the methodological propriety of testifying expert witnesses’ opinions. Considering the intellectual vacuum the Center had to fill, and the speed with which it had to work, the first edition was a stunning accomplishment.

In litigating specific causation in so-called toxic tort cases, defense counsel quickly embraced the Manual’s apparent endorsement of the doubling-of-the-risk argument, which would require relative risks in excess of two in order to draw inferences of specific causation in a given case. See Linda A. Bailey, Leon Gordis, and Michael D. Green, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 123, 150, 168 (Wash., DC 1st ed. 1994) (“The relative risk from an epidemiological study can be adapted to this 50% plus standard to yield a probability or likelihood that an agent caused an individual’s disease. The threshold for concluding that an agent was more likely than not the cause of a disease is a relative risk greater than 2.0.”) (internal citations omitted).

In the Second Edition of the Manual, the authorship of the epidemiology chapter shifted, and so did its treatment of doubling of the risk. By adopting a more nuanced analysis, the Second Edition deprived defense counsel of a readily citable source for the proposition that low relative risks do not support inferences of specific causation. The exact conditions for when and how the doubling argument should prevail were, however, left fuzzy and unspecified. See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 333, 348-49 (Wash., DC 2d ed. 2000).

The latest edition of the Manual attempts to correct the failings of the Second Edition by introducing an explanation and a discussion of some of the conditions that might undermine an inference, or opposition thereto, of specific causation from magnitude of relative risk. Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549, 612 (Wash., DC 3d ed., 2011).

The authors of the Manual now acknowledge that doubling of risk inference has “a certain logic as far as it goes,” but point out that there are some “significant assumptions and important caveats that require explication.” Id.

What are the assumptions, according to the Manual?

First, and foremost, there must be “[a] valid study and risk estimate.” Id. (emphasis in original). The identification of this predicate assumption is, of course, correct, but the authors overlook that the assumption is often trivially satisfied by the legal context in which the doubling argument arises. For instance, in the Landrigan and Caterinicchio cases, cited below, the doubling issue arose not as an admissibility question about expert witness opinion, but on motions for directed verdict. In both cases, plaintiffs’ expert witnesses committed to opinions about plaintiffs’ being at risk from asbestos exposure, based upon studies that they identified. Defense counsel in those cases did not concede the existence of risk, the size of the risk, or the validity of the studies, but rather stipulated such facts solely for purposes of their motions. In other words, even if the studies the plaintiffs relied upon were valid and the risk estimates accurate (with relative risks of 1.5), plaintiffs could not prevail because no reasonable jury could infer that plaintiffs’ colorectal cancers were caused by their occupational asbestos exposure. The procedural context of the doubling argument thus often pretermits questions of validity, bias, and confounding.

Second, the Manual identifies that there must be “[s]imilarity among study subjects and plaintiff.” Id. at 613. Again, this assumption is often either pretermitted for purposes of lodging a dispositive motion, conceded, or included as part of the challenge to an expert witness’s opinion’s admissibility. For example, in some litigations, plaintiffs will rely upon high-dose or high-exposure studies that are not comparable to the plaintiff’s actual exposure, and the defense may have shown that the only reliable evidence is that there is a small (relative risk less than two) or no risk at all from the plaintiff’s exposure. External validity objections may well play a role in a contest under Rule 702, but the resolution of a doubling of risk issue will require an appropriate measure of risk for the plaintiff whose injury is at issue.

In the course of identifying this second assumption, the Manual now points out that the doubling argument turns on applying “an average risk for the group” to each individual in the group. Id. This point again is correct, but the Manual does not come to terms with the challenge often made to what I call the assumption of stochastic risk. The Manual authors quote a leading textbook on epidemiology:

“We cannot measure the individual risk, and assigning the average value to everyone in the category reflects nothing more than our ignorance about the determinants of lung cancer that interact with cigarette smoke. It is apparent from epidemiological data that some people can engage in chain smoking for many decades without developing lung cancer. Others are or will become primed by unknown circumstances and need only to add cigarette smoke to the nearly sufficient constellation of causes to initiate lung cancer. In our ignorance of these hidden causal components, the best we can do in assessing risk is to classify people according to measured causal risk indicators and then assign the average observed within a class to persons within the class.”

Id. at n.198, quoting from Kenneth J. Rothman, Sander Greenland, and Tim L. Lash, Modern Epidemiology 9 (3d ed. 2008). Although the textbook is unimpeachable on this point, taken at face value it would introduce an evidentiary nihilism into judicial determinations of specific causation in cases in which epidemiologic measures of risk size are the only basis for drawing probabilistic inferences of specific causation. See also Manual at 614 n.198, citing Ofer Shpilberg, et al., The Next Stage: Molecular Epidemiology, 50 J. Clin. Epidem. 633, 637 (1997) (“A 1.5-fold relative risk may be composed of a 5-fold risk in 10% of the population, and a 1.1-fold risk in the remaining 90%, or a 2-fold risk in 25% and a 1.1-fold for 75%, or a 1.5-fold risk for the entire population.”). The assumption of stochastic risk is, as Judge Weinstein recognized in Agent Orange, often the only assumption on which plaintiffs will ever have a basis for claiming individual causation on typical datasets available to support health effects claims. Elsewhere, the authors of the Manual’s chapter suggest that statistical “frequentists” would resist the adaptation of relative risk to provide a probability of causation because, for the frequentist, the individual case either is or is not caused by the exposure at issue. Manual at 611 n.188. This suggestion appears to confuse the frequentist enterprise, which evaluates evidence on the basis of the probability of observing at least as great a departure from expectation in a sample, with the attempt to affix a probability to the population parameter. The doubling argument derives from the well-known “urn model” in probability theory, which is not really at issue in the frequentist-Bayesian wars.
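The Shpilberg point is easy to verify arithmetically. A minimal sketch (assuming, as a simplification of my own, that the overall relative risk is a straight population-weighted average of subgroup relative risks; the function name is mine):

```python
def overall_relative_risk(subgroups):
    """Overall relative risk as a population-weighted average of
    subgroup relative risks.  `subgroups` is a list of
    (population_fraction, relative_risk) pairs whose fractions sum to 1."""
    assert abs(sum(frac for frac, _ in subgroups) - 1.0) < 1e-9
    return sum(frac * rr for frac, rr in subgroups)

# Shpilberg's first decomposition: a 5-fold risk in 10% of the
# population and a 1.1-fold risk in the remaining 90%.
print(round(overall_relative_risk([(0.10, 5.0), (0.90, 1.1)]), 2))  # 1.49
```

On that assumption, the 10%/90% mixture averages to roughly the 1.5-fold overall risk, even though the small subgroup far exceeds the doubling threshold while most individuals fall well below it.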

Third, the Manual authors state that the doubling argument assumes the “[n]onacceleration of disease.” The statement is correct as far as it goes, but in many cases there is no evidence of acceleration, and because an acceleration-of-onset theory would diminish damages, defendants would typically have the burden of going forward with identifying the acceleration phenomenon. The authors go further, however, in stating that “for most of the chronic diseases of adulthood, it is not possible for epidemiologic studies to distinguish between acceleration of disease and causation of new disease.” Manual at 614. The inability to distinguish acceleration from causation of new cases would typically redound to the disadvantage of defendants making the doubling argument. In other words, the defendants would, by this supposed inability, be unable to mitigate damages by showing that the alleged harm would have occurred anyway, only later in time. See Manual at 615 n.199 (“If acceleration occurs, then the appropriate characterization of the harm for purposes of determining damages would have to be addressed. A defendant who only accelerates the occurrence of harm, say, chronic back pain, that would have occurred independently in the plaintiff at a later time is not liable for the same amount of damages as a defendant who causes a lifetime of chronic back pain.”). More important, however, the Manual appears to be wrong that acceleration of onset of a particular disease cannot be identified in an epidemiologic study or clinical trial. Many modern longitudinal epidemiologic studies and clinical trials use survival analysis and time windows to identify latency or time-lagged outcomes in association with identified exposures.

The fourth assumption identified in the Manual is that the exposure under study acts independently of other exposures. The authors give the time-worn example of multiplicative synergy between asbestos and smoking, what elsewhere has been referred to as “The Mt. Sinai Catechism” (June 7, 2013). The example was improvidently chosen given that the multiplicative nature was doubtful when first advanced, and now has effectively been retracted or modified by the researchers following the health outcomes of asbestos insulators in the United States. More important for our purposes here, interactions can be quantified and added to the analysis of attributable risk; interactions are not insuperable barriers to reasonable apportionment of risk.

Fifth, the Manual identifies two additional assumptions: (a) that the exposure at issue is not responsible for another outcome that competes with the morbidity or mortality under study, and (b) that the exposure does not have a protective “effect” in a subpopulation of those studied. Manual at 615. On the first of these assumptions, the authors suggest that it is required “because in the epidemiologic studies relied on, those deaths caused by the alternative disease process will mask the true magnitude of increased incidence of the studied disease when the study subjects die before developing the disease of interest.” Id. at 615 n.202. Competing causes, however, are frequently studied and can be treated as confounders in an appropriate regression or propensity-score analysis to yield a risk estimate for each putative effect at issue. The second of the two assumptions is a rehash of the speculative assertion that the epidemiologic study (and the population it samples) may not have a stochastic distribution of risk. Although the stochastic assumption may not be correct, it is often favorable to the party asserting the claim, who otherwise would not be able to show that he was not in a sub-population of people unaffected, or even benefitted, by the exposure. Again, modern epidemiology does not stop at identifying populations at risk, but continues to refine the assessment by trying to identify subpopulations that bear the risk exclusively. The existence of multi-modal distributions of risk within a population is, again, not a barrier to the doubling argument.

With sufficiently large samples, epidemiologic studies may be able to identify subgroups that have very large relative risks, even when the overall sample under study had a relative risk under two. The possibility of such subgroups, however, should not be an invitation to wholesale speculation that a given plaintiff is in a “vulnerable” subgroup without reliable, valid evidence of what the risks for the identified subgroup are. Too often, the vulnerable plaintiff or subgroup claim is merely hand-waving in an evidentiary vacuum. The Manual authors seem to adopt this hand-waving attitude when they give a speculative hypothetical example:

“For example, genetics might be known to be responsible for 50% of the incidence of a disease independent of exposure to the agent. If genetics can be ruled out in an individual’s case, then a relative risk greater than 1.5 might be sufficient to support an inference that the agent was more likely than not responsible for the plaintiff’s disease.”

Manual at 615-16 (internal citations omitted). The hypothetical does not make clear whether the “genetics” cases are part of the study that yielded a relative risk of 1.5, but of course if the “genetics” were uniformly distributed in the population, and also in the sample studied in the epidemiologic study, then the “genetics” would appear to drop out of playing any role in elevating risk. And as the authors pointed out in their caveats about interaction, there may well be interaction between the “genetics” and the exposure in the study, such that the “genetics” cases occurred earlier or did not add anything to the disease burden that would have been caused by the exposure that yielded the relative risk of 1.5. The bottom line is that plaintiff would need a study that accounted for the “genetics” within the epidemiologic study, to see what relative risks might be observed in people without the genes at issue.
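The arithmetic implicit in the Manual’s hypothetical can be sketched, on the assumption (mine, for illustration only) that the “genetic” cases occur at the same absolute rate in exposed and unexposed groups and do not interact with the exposure:

```python
def rr_excluding_background_cause(rr, background_share):
    """Relative risk among cases not attributable to an independent
    background cause (e.g., the Manual's 'genetics' hypothetical).

    Illustration only: assumes the background cause contributes the
    same absolute incidence in exposed and unexposed groups and does
    not interact with the exposure.  Unexposed incidence is
    normalized to 1.0, so the background cause contributes
    `background_share` of that baseline in both groups."""
    background = background_share          # incidence from the background cause
    other_unexposed = 1.0 - background     # remaining baseline incidence
    other_exposed = rr - background        # exposed incidence minus background cases
    return other_exposed / other_unexposed

# Manual's hypothetical: overall RR = 1.5; genetics accounts for 50%
# of incidence and is ruled out for the plaintiff.
adjusted = rr_excluding_background_cause(1.5, 0.5)
print(adjusted)                      # 2.0 -- exactly the doubling threshold
print((adjusted - 1.0) / adjusted)   # 0.5 -- the 50% line
```

On those assumptions, ruling out genetics moves the relevant relative risk from 1.5 to 2.0, which is presumably why the Manual says a relative risk “greater than 1.5” might then suffice; but the point stands that a real study, not this back-of-the-envelope adjustment, would be needed.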

The Third Edition of the Manual does add more nuance to the doubling of risk argument, but alas more nuance yet is needed. The chapter is an important source to include in any legal argument for or against inferences of specific causation, but it is hardly the final word.

Below, I have updated a reference list of cases addressing the doubling argument.


Radiation

Johnston v. United States, 597 F. Supp. 374, 412, 425-26 (D. Kan. 1984) (rejecting even a relative risk of greater than two as supporting an inference of specific causation)

Allen v. United States, 588 F. Supp. 247, 418 (D. Utah 1984) (rejecting mechanical application of doubling of risk), rev’d on other grounds, 816 F.2d 1417 (10th Cir. 1987), cert. denied, 484 U.S. 1004 (1988)

In re TMI Litig., 927 F. Supp. 834, 845, 864–66 (M.D. Pa. 1996), aff’d, 89 F.3d 1106 (3d Cir. 1996), aff’d in part, rev’d in part, 193 F.3d 613 (3d Cir. 1999) (rejecting “doubling dose” trial court’s analysis), modified 199 F.3d 158 (3d Cir. 2000) (stating that a dose below ten rems is insufficient to infer more likely than not the existence of a causal link)

In re Hanford Nuclear Reservation Litig., 1998 WL 775340, at *8 (E.D. Wash. Aug. 21, 1998) (“‘[d]oubling of the risk’ is the legal standard for evaluating the sufficiency of the plaintiffs’ evidence and for determining which claims should be heard by the jury,” citing Daubert II), rev’d, 292 F.3d 1124, 1136-37 (9th Cir. 2002) (general causation)

In re Berg Litig., 293 F.3d 1127 (9th Cir. 2002) (companion case to In re Hanford)

Cano v. Everest Minerals Corp., 362 F. Supp. 2d 814, 846 (W.D. Tex. 2005) (relative risk less than 3.0 represents only a weak association)

Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1083 n.8, 1084, 1088-89 (D. Colo. 2006) (citing Daubert II and “concerns” by Sander Greenland and David Egilman, plaintiffs’ expert witnesses in other cases), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ (May 24, 2012)

Cotroneo v. Shaw Envt’l & Infrastructure, Inc., No. H-05- 1250, 2007 WL 3145791, at *3 (S.D. Tex. Oct. 25, 2007) (citing Havner, 953 S.W.2d at 717) (radioactive material)


Swine Flu- GBS Cases

Cook v. United States, 545 F. Supp. 306, 308 (N.D. Cal. 1982) (“Whenever the relative risk to vaccinated persons is greater than two times the risk to unvaccinated persons, there is a greater than 50% chance that a given GBS case among vaccinees of that latency period is attributable to vaccination, thus sustaining plaintiff’s burden of proof on causation.”)

Robinson v. United States, 533 F. Supp. 320, 325-28 (E.D. Mich. 1982) (finding for the government and against claimant who developed acute signs and symptoms of GBS 17 weeks after inoculation, in part because of relative and attributable risks)

Padgett v. United States, 553 F. Supp. 794, 800-01 (W.D. Tex. 1982) (“From the relative risk, we can calculate the probability that a given case of GBS was caused by vaccination. . . . [A] relative risk of 2 or greater would indicate that it was more likely than not that vaccination caused a case of GBS.”)

Manko v. United States, 636 F. Supp. 1419, 1434 (W.D. Mo. 1986) (relative risk of 2, or less, means exposure not the probable cause of disease claimed) (incorrectly suggesting that relative risk of two means that there was a 50% chance the disease was caused by “chance alone”), aff’d in relevant part, 830 F.2d 831 (8th Cir. 1987)


IUD Cases – Pelvic Inflammatory Disease

Marder v. G.D. Searle & Co., 630 F. Supp. 1087, 1092 (D. Md. 1986) (“In epidemiological terms, a two-fold increased risk is an important showing for plaintiffs to make because it is the equivalent of the required legal burden of proof—a showing of causation by the preponderance of the evidence or, in other words, a probability of greater than 50%.”), aff’d mem. on other grounds sub nom. Wheelahan v. G.D. Searle & Co., 814 F.2d 655 (4th Cir. 1987) (per curiam)


Bendectin cases

Lynch v. Merrill-National Laboratories, 646 F. Supp. 856 (D. Mass. 1986) (granting summary judgment), aff’d, 830 F.2d 1190, 1197 (1st Cir. 1987) (distinguishing between chances that “somewhat favor” plaintiff and plaintiff’s burden of showing specific causation by “preponderant evidence”)

DeLuca v. Merrell Dow Pharm., Inc., 911 F.2d 941, 958-59 (3d Cir. 1990) (commenting that ‘‘[i]f New Jersey law requires the DeLucas to show that it is more likely than not that Bendectin caused Amy DeLuca’s birth defects, and they are forced to rely solely on Dr. Done’s epidemiological analysis in order to avoid summary judgment, the relative risk of limb reduction defects arising from the epidemiological data Done relies upon will, at a minimum, have to exceed ‘2’’’)

Daubert v. Merrell Dow Pharms., Inc., 43 F.3d 1311, 1321 (9th Cir.) (“Daubert II”) (holding that for epidemiological testimony to be admissible to prove specific causation, there must have been a relative risk for the plaintiff of greater than 2; testimony that the drug “increased somewhat the likelihood of birth defects” is insufficient) (“For an epidemiological study to show causation under a preponderance standard . . . the study must show that children whose mothers took Bendectin are more than twice as likely to develop limb reduction birth defects as children whose mothers did not.”), cert. denied, 516 U.S. 869 (1995)

DePyper v. Navarro, 1995 WL 788828 (Mich. Cir. Ct. Nov. 27, 1995)

Oxendine v. Merrell Dow Pharm., Inc., 1996 WL 680992 (D.C. Super. Ct. Oct. 24, 1996) (noting testimony by Dr. Michael Bracken, that had Bendectin doubled risk of birth defects, overall rate of that birth defect should have fallen 23% after manufacturer withdrew drug from market, when in fact the rate remained relatively steady)

Merrell Dow Pharms., Inc. v. Havner, 953 S.W.2d 706, 716 (Tex. 1997) (holding, in accord with the weight of judicial authority, “that the requirement of a more than 50% probability means that epidemiological evidence must show that the risk of an injury or condition in the exposed population was more than double the risk in the unexposed or control population”); id. at 719 (rejecting isolated statistically significant associations when not consistently found among studies)


Silicone Cases

Hall v. Baxter Healthcare, 947 F. Supp. 1387, 1392, 1397, 1403-04 (D. Ore. 1996) (discussing relative risk of 2.0)

Pick v. American Medical Systems, Inc., 958 F. Supp. 1151, 1160 (E.D. La. 1997) (noting, correctly but irrelevantly, in penile implant case, that “any” increased risk suggests that the exposure “may” have played some causal role)

In re Breast Implant Litigation, 11 F. Supp. 2d 1217, 1226 -27 (D. Colo. 1998) (relative risk of 2.0 or less shows that the background risk is at least as likely to have given rise to the alleged injury)

Barrow v. Bristol-Myers Squibb Co., 1998 WL 812318, at *23 (M.D. Fla. Oct. 29, 1998)

Minnesota Mining and Manufacturing v. Atterbury, 978 S.W.2d 183, 198 (Tex.App. – Texarkana 1998) (noting that Havner declined to set strict criteria and that “[t]here is no requirement in a toxic tort case that a party must have reliable evidence of a relative risk of 2.0 or greater”)

Allison v. McGhan Med. Corp., 184 F.3d 1300, 1315 n.16, 1316 (11th Cir. 1999) (affirming exclusion of expert testimony based upon a study with a risk ratio of 1.24; noting that statistically significant epidemiological study reporting an increased risk of marker of disease of 1.24 times in patients with breast implants was so close to 1.0 that it “was not worth serious consideration for proving causation”; threshold for concluding that an agent more likely than not caused a disease is 2.0, citing Federal Judicial Center, Reference Manual on Scientific Evidence 168-69 (1994))

Grant v. Bristol-Myers Squibb, 97 F. Supp. 2d 986, 992 (D. Ariz. 2000)

Pozefsky v. Baxter Healthcare Corp., No. 92-CV-0314, 2001 WL 967608, at *3 (N.D.N.Y. August 16, 2001) (excluding causation opinion testimony given contrary epidemiologic studies; noting that sufficient epidemiologic evidence requires relative risk greater than two)

In re Silicone Gel Breast Implant Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004) (“The relative risk is obtained by dividing the proportion of individuals in the exposed group who contract the disease by the proportion of individuals who contract the disease in the non-exposed group.”) (noting that relative risk must be more than doubled at a minimum to permit an inference that the risk was operating in plaintiff’s case)

Norris v. Baxter Healthcare Corp., 397 F.3d 878 (10th Cir. 2005) (discussing but not deciding specific causation and the need for relative risk greater than two; no reliable showing of general causation)


Asbestos

Lee v. Johns Manville Corp., slip op. at 3, Phila. Cty. Ct. C.P., Sept. Term 1978, No. 88 (123) (Oct. 26, 1983) (Forer, J.) (entering verdict in favor of defendants on grounds that plaintiff had failed to show that his colorectal cancer had been caused by asbestos exposure after adducing evidence of a relative risk less than two)

Washington v. Armstrong World Indus., Inc., 839 F.2d 1121 (5th Cir. 1988) (affirming grant of summary judgment on grounds that there was insufficient evidence that plaintiff’s colon cancer was caused by asbestos)

Primavera v. Celotex Corp., Phila. Cty. Ct. C.P., December Term, 1981, No. 1283 (bench op. of Hon. Berel Caesar, Nov. 2, 1988) (granting compulsory nonsuit on the plaintiff’s claim that his colorectal cancer was caused by his occupational exposure to asbestos)

In re Fibreboard Corp., 893 F.2d 706, 712 (5th Cir. 1990) (“It is evident that these statistical estimates deal only with general causation, for population-based probability estimates do not speak to a probability of causation in any one case; the estimate of relative risk is a property of the studied population, not of an individual’s case.” (internal quotation omitted) (emphasis in original))

Grassis v. Johns-Manville Corp., 248 N.J. Super. 446, 455-56, 591 A.2d 671, 676 (App. Div. 1991) (rejecting doubling of risk threshold in asbestos gastrointestinal cancer claim)

Landrigan v. Celotex Corp., 127 N.J. 404, 419, 605 A.2d 1079 (1992) (reversing judgment entered on directed verdict for defendant on specific causation of claim that asbestos caused decedent’s colon cancer)

Caterinicchio v. Pittsburgh Corning Corp., 127 N.J. 428, 605 A.2d 1092 (1992) (reversing judgment entered on directed verdict for defendant on specific causation of claim that asbestos caused plaintiff’s colon cancer)

In re Joint E. & S. Dist. Asbestos Litig., 758 F. Supp. 199 (S.D.N.Y. 1991), rev’d sub nom. Maiorano v. Owens Corning Corp., 964 F.2d 92, 97 (2d Cir. 1992)

Maiorana v. National Gypsum, 827 F. Supp. 1014, 1043 (S.D.N.Y. 1993), aff’d in part and rev’d in part, 52 F.3d 1124, 1134 (2d Cir. 1995) (stating a preference for the district court’s instructing the jury on the science and then letting the jury weigh the studies)

Keene Corp. v. Hall, 626 A.2d 997 (Md. Ct. Spec. App. 1993) (laryngeal cancer)

Jones v. Owens-Corning Fiberglas Corp., 288 N.J. Super. 258, 266, 672 A.2d 230, 235 (App. Div. 1996) (rejecting doubling of risk threshold in asbestos gastrointestinal cancer claim)

In re W.R. Grace & Co., 355 B.R. 462, 483 (Bankr. D. Del. 2006) (requiring showing of relative risk greater than two to support property damage claims based on unreasonable risks from asbestos insulation products)

Kwasnik v. A.C. & S., Inc. (El Paso Cty., Tex. 2002)

Sienkiewicz v. Greif (U.K.) Ltd., [2009] EWCA (Civ) 1159, at ¶23 (Lady Justice Smith) (“In my view, it must now be taken that, saving the expression of a different view by the Supreme Court, in a case of multiple potential causes, a claimant can demonstrate causation in a case by showing that the tortious exposure has at least doubled the risk arising from the non-tortious cause or causes.”)

Sienkiewicz v. Greif (U.K.) Ltd., [2011] UKSC 10.

“Where there are competing alternative, rather than cumulative, potential causes of a disease or injury, such as in Hotson, I can see no reason in principle why epidemiological evidence should not be used to show that one of the causes was more than twice as likely as all the others put together to have caused the disease or injury.” (Lord Phillips, at ¶ 93)

(arguing that statistical evidence should be considered without clearly identifying the nature and extent of its role) (Baroness Hale, ¶ 172-73)

(insisting upon the difference between the fact and the probability of causation, with statistical evidence not probative of the former) (Lord Rodger, at ¶¶ 143-59)

(stating that “the law is concerned with the rights and wrongs of an individual situation, and should not treat people and even companies as statistics,” although epidemiologic evidence can appropriately be used “in conjunction with specific evidence”) (Lord Mance, at ¶ 205)

(concluding that epidemiologic evidence can establish the probability, but not the fact of causation, and vaguely suggesting that whether epidemiologic evidence should be allowed was a matter of policy) (Lord Dyson, ¶218-19)

Dixon v. Ford Motor Co., 47 A.3d 1038, 1046-47 & n.11 (Md. Ct. Spec. App. 2012) (“we can explicitly derive the probability of causation from the statistical measure known as ‘relative risk’, as did the U.S. Court of Appeals for the Third Circuit in DeLuca v. Merrell Dow Pharmaceuticals, Inc., 911 F.2d 941, 958 (3d Cir. 1990), in a holding later adopted by several courts. For reasons we need not explore in detail, it is not prudent to set a singular minimum ‘relative risk’ value as a legal standard. But even if there were some legal threshold, Dr. Welch provided no information that could help the finder of fact to decide whether the elevated risk in this case was ‘substantial’.”) (internal citations omitted), rev’d, 433 Md. 137, 70 A.3d 328 (2013)


Pharmaceutical Cases

Ambrosini v. Upjohn, 1995 WL 637650, at *4 (D.D.C. Oct. 18, 1995) (excluding plaintiff’s expert witness, Dr. Brian Strom, who was unable to state that mother’s use of Depo-Provera to prevent miscarriage more than doubled her child’s risk of a birth defect)

Ambrosini v. Labarraque, 101 F.3d 129, 135 (D.C. Cir. 1996)(Depo-Provera, birth defects) (testimony “does not warrant exclusion simply because it fails to establish the causal link to a specified degree of probability”)

Siharath v. Sandoz Pharms. Corp., 131 F. Supp. 2d 1347, 1356 (N.D. Ga. 2001)

Cloud v. Pfizer Inc., 198 F. Supp. 2d 1118, 1134 (D. Ariz. 2001) (sertraline and suicide)

Miller v. Pfizer, 196 F. Supp. 2d 1062, 1079 (D. Kan. 2002) (acknowledging that most courts require a showing of RR > 2, but questioning their reasoning; “Court rejects Pfizer’s argument that unless Zoloft is shown to create a relative risk [of akathisia] greater than 2.0, [expert’s] testimony is inadmissible”), aff’d, 356 F. 3d 1326 (10th Cir.), cert. denied, 543 U.S. 917 (2004)

XYZ, et al. v. Schering Health Care Ltd., [2002] EWHC 1420, at ¶21, 70 BMLR 88 (QB 2002) (noting with approval that claimants had accepted the need to prove relative risk greater than two; finding that most likely relative risk was 1.7, which required finding against claimants even if general causation were established)

Smith v. Wyeth-Ayerst Laboratories Co., 278 F. Supp. 2d 684, 691 (W.D.N.C. 2003) (recognizing that risk and cause are distinct concepts) (“Epidemiologic data that shows a risk cannot support an inference of cause unless (1) the data are statistically significant according to scientific standards used for evaluating such associations; (2) the relative risk is sufficiently strong to support an inference of ‘more likely than not’; and (3) the epidemiologic data fits the plaintiff’s case in terms of exposure, latency, and other relevant variables.”) (citing FJC Reference Manual at 384-85 (2d ed. 2000))

Kelley v. Sec’y of Health & Human Servs., 68 Fed. Cl. 84, 92 (Fed. Cl. 2005) (quoting Kelley v. Sec’y of Health & Human Servs., No. 02-223V, 2005 WL 1125671, at *5 (Fed. Cl. Mar. 17, 2005) (opinion of Special Master explaining that epidemiology must show relative risk greater than two to provide evidence of causation), rev’d on other grounds, 68 Fed. Cl. 84 (2005))

Pafford v. Secretary of HHS, No. 01–0165V, 64 Fed. Cl. 19, 2005 WL 4575936 at *8 (2005) (expressing preference for “an epidemiologic study demonstrating a relative risk greater than two … or dispositive clinical or pathological markers evidencing a direct causal relationship”) (citing Stevens v. Secretary of HHS, No.2001 WL 387418 at *12), aff’d, 451 F.3d 1352 (Fed. Cir. 2006)

Burton v. Wyeth-Ayerst Labs., 513 F. Supp. 2d 719, 730 (N.D. Tex. 2007) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two, Merrell Dow Pharm., Inc. v. Havner, 953 S.W.2d 706, 717–18 (Tex. 1997))

In re Bextra and Celebrex Marketing Sales Practices and Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1172 (N.D. Cal. 2007) (observing that epidemiologic studies “can also be probative of specific causation, but only if the relative risk is greater than 2.0, that is, the product more than doubles the risk of getting the disease”)

In re Bextra & Celebrex, 2008 N.Y. Misc. LEXIS 720, *23-24, 239 N.Y.L.J. 27 (2008) (“Proof that a relative risk is greater than 2.0 is arguably relevant to the issue of specific, as opposed to general causation and is not required for plaintiffs to meet their burden in opposing defendants’ motion.”)

In re Viagra Products Liab. Litigat., 572 F. Supp. 2d 1071, 1078 (D. Minn. 2008) (noting that some but not all courts have concluded relative risks under two support finding expert witness’s opinion to be inadmissible)

Vanderwerf v. SmithKlineBeecham Corp., 529 F. Supp. 2d 1294, 1302 n.10 (D. Kan. 2008), appeal dism’d, 603 F.3d 842 (10th Cir. 2010) (“relative risk of 2.00 means that a particular event of suicidal behavior has a 50 per cent chance that is associated with the exposure to Paxil … .”)

Wright v. American Home Products Corp., 557 F. Supp. 2d 1032, 1035-36 (W.D. Mo. 2008) (fenfluramine case)

Beylin v. Wyeth, 738 F. Supp. 2d 887, 893 n.3 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (addressing relative risk of two argument in dictum; holding that defendants’ argument that for an opinion to be relevant it must show that the medication causes the relative risk to exceed two “was without merit”)

Merck & Co. v. Garza, 347 S.W.3d 256 (Tex. 2011), rev’g 2008 WL 2037350, at *2 (Tex. App. — San Antonio May 14, 2008, no pet. h.)

Scharff v. Wyeth, No. 2:10–CV–220–WKW, 2012 WL 3149248, *6 & n.9, 11 (M.D. Ala. Aug. 1, 2012) (post-menopausal hormone therapy case; “A relative risk of 2.0 implies a 50% likelihood that an exposed individual’s disease was caused by the agent. The lower relative risk in this study reveals that some number less than half of the additional cases could be attributed to [estrogen and progestin].”)

Cheek v. Wyeth, LLC (In re Diet Drugs), 890 F.Supp. 2d 552 (E.D. Pa. 2012)


Medical Malpractice – Failure to Prescribe; Delay in Treatment

Merriam v. Wanger, 757 A.2d 778, 2000 Me. 159 (2000) (reversing judgment on jury verdict for plaintiff on grounds that plaintiff failed to show that defendant’s failure to act was, more likely than not, a cause of harm)

Bonesmo v. The Nemours Foundation, 253 F. Supp. 2d 801, 809 (D. Del. 2003)

Theofanis v. Sarrafi, 791 N.E.2d 38, 48 (Ill. App. 2003) (reversing and granting new trial to plaintiff who received an award of no damages when experts testified that relative risk was between 2.0 and 3.0) (“where the risk with the negligent act is at least twice as great as the risk in the absence of negligence, the evidence supports a finding that, more likely than not, the negligence in fact caused the harm”)

Cottrelle v. Gerrard, 67 OR (3d) 737 (2003), 2003 CanLII 50091 (ONCA), at ¶ 25 (Sharpe, J.A.) (less than a probable chance that timely treatment would have made a difference for plaintiff is insufficient), leave to appeal den’d SCC (April 22, 2004)

Joshi v. Providence Health System of Oregon Corp., 342 Or. 152, 156, 149 P. 3d 1164, 1166 (2006) (affirming directed verdict for defendants when expert witness testified that he could not state, to a reasonable degree of medical probability, beyond 30%, that administering t-PA, or other anti-coagulant would have changed the outcome and prevented death)

Ensink v. Mecosta County Gen. Hosp., 262 Mich. App. 518, 687 N.W.2d 143 (Mich. App. 2004) (affirming summary judgment for hospital and physicians when patient could not show a greater than 50% probability of obtaining a better result had emergency physician administered t-PA within three hours of stroke symptoms)

Lake Cumberland, LLC v. Dishman, 2007 WL 1229432, *5 (Ky. Ct. App. 2007) (unpublished) (confusing 30% with a “reasonable probability”; citing without critical discussion an apparently innumerate opinion of expert witness Dr. Lawson Bernstein)

Mich. Comp. Laws § 600.2912a(2) (2009) (“In an action alleging medical malpractice, the plaintiff has the burden of proving that he or she suffered an injury that more probably than not was proximately caused by the negligence of the defendant or defendants. In an action alleging medical malpractice, the plaintiff cannot recover for loss of an opportunity to survive or an opportunity to achieve a better result unless the opportunity was greater than 50%.”)

O’Neal v. St. John Hosp. & Med. Ctr., 487 Mich. 485, 791 N.W.2d 853 (Mich. 2010) (affirming denial of summary judgment when failure to administer therapy (not t-PA) in a timely fashion supposedly more than doubled the risk of stroke)

Kava v. Peters, 450 Fed. Appx. 470, 478-79 (6th Cir. 2011) (affirming summary judgment for defendants when plaintiff’s expert witnesses failed to provide clear testimony that plaintiff’s specific condition would have been improved by timely administration of therapy)

Smith v. Bubak, 643 F.3d 1137, 1141–42 (8th Cir. 2011) (rejecting relative benefit testimony and suggesting in dictum that absolute benefit “is the measure of a drug’s overall effectiveness”)

Young v. Mem’l Hermann Hosp. Sys., 573 F.3d 233, 236 (5th Cir. 2009) (holding that Texas law requires a doubling of the relative risk of an adverse outcome to prove causation), cert. denied, ___ U.S. ___, 130 S.Ct. 1512 (2010)

Gyani v. Great Neck Medical Group, 2011 WL 1430037 (N.Y. S.Ct. for Nassau Cty., April 4, 2011) (denying summary judgment to medical malpractice defendant on stroke patient’s claim of failure to administer t-PA, based on naked assertions of proximate cause by plaintiff’s expert witness, and without considering the actual magnitude of the risk increased by the alleged failure to treat)

Samaan v. St. Joseph Hospital, 670 F.3d 21 (1st Cir. 2012)

Goodman v. Viljoen, 2011 ONSC 821 (CanLII) (treating a risk ratio of 1.7 for harm, or 0.6 for prevention, as satisfying the “balance of probabilities” when taken with additional unquantified, unvalidated speculation), aff’d, 2012 ONCA 896 (CanLII), leave to appeal den’d, Supreme Court of Canada No. 35230 (July 11, 2013)

Briante v. Vancouver Island Health Authority, 2014 Brit. Columbia S.Ct 1511, at ¶ 317 (plaintiff must show “on a balance of probabilities that the defendant caused the injury”)


Toxic Tort Cases

In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 785, 836 (E.D.N.Y. 1984) (“A government administrative agency may regulate or prohibit the use of toxic substances through rulemaking, despite a very low probability of any causal relationship.  A court, in contrast, must observe the tort law requirement that a plaintiff establish a probability of more than 50% that the defendant’s action injured him. … This means that at least a two-fold increase in incidence of the disease attributable to Agent Orange exposure is required to permit recovery if epidemiological studies alone are relied upon.”), aff’d 818 F.2d 145, 150-51 (2d Cir. 1987)(approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988)
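The arithmetic behind the two-fold requirement is worth making explicit. On the usual (and contestable) assumptions, the fraction of disease among the exposed that is attributable to the exposure is (RR - 1)/RR, which first exceeds 50% when the relative risk exceeds 2.0. A minimal sketch of the calculation (the function name is mine, not drawn from any cited opinion):

```python
def attributable_probability(relative_risk: float) -> float:
    """Excess fraction (RR - 1) / RR: on the standard assumptions, the
    probability that an exposed individual's disease is attributable to
    the exposure rather than to background causes."""
    if relative_risk <= 1.0:
        return 0.0  # no excess risk to attribute
    return (relative_risk - 1.0) / relative_risk

# RR = 2.0 is the break-even point for "more likely than not"
print(attributable_probability(2.0))  # 0.5 exactly
print(attributable_probability(1.7))  # ~0.41, short of the 50% threshold
print(attributable_probability(3.0))  # ~0.67
```

Note that this calculation treats a population-level estimate as an individual probability, a step that is itself contested; the Hannaford opinion collected below makes exactly that point.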

Wright v. Willamette Indus., Inc., 91 F.3d 1105 (8th Cir. 1996)(“Actions in tort for damages focus on the question of whether to transfer money from one individual to another, and under common-law principles (like the ones that Arkansas law recognizes) that transfer can take place only if one individual proves, among other things, that it is more likely than not that another individual has caused him or her harm.  It is therefore not enough for a plaintiff to show that a certain chemical agent sometimes causes the kind of harm that he or she is complaining of.  At a minimum, we think that there must be evidence from which the factfinder can conclude that the plaintiff was exposed to levels of that agent that are known to cause the kind of harm that the plaintiff claims to have suffered. See Abuan v. General Elec. Co., 3 F.3d at 333.  We do not require a mathematically precise table equating levels of exposure with levels of harm, but there must be evidence from which a reasonable person could conclude that a defendant’s emission has probably caused a particular plaintiff the kind of harm of which he or she complains before there can be a recovery.”)

Sanderson v. Internat’l Flavors & Fragrances, Inc., 950 F. Supp. 981, 998 n. 17,  999-1000, 1004 (C.D. Cal.1996) (more than a doubling of risk is required in case involving aldehyde exposure and claimed multiple chemical sensitivities)

McDaniel v. CSX Transp., Inc., 955 S.W.2d 257, 264 (1997) (doubling of risk is relevant but not required as a matter of law)

Schudel v. General Electric Co., 120 F.3d 991, 996 (9th Cir. 1997) (polychlorinated biphenyls)

Lofgren v. Motorola, 1998 WL 299925 *14 (Ariz. Super. June 1, 1998) (suggesting that relative risk requirement in trichloroethylene cancer medical monitoring case was arbitrary, but excluding plaintiffs’ expert witnesses on other grounds)

Berry v. CSX Transp., Inc., 709 So. 2d 552 (Fla. Dist. Ct. App. 1998) (reversing exclusion of plaintiff’s epidemiologist in case involving claims of toxic encephalopathy from solvent exposure, before Florida adopted the Daubert standard)

Bartley v. Euclid, Inc., 158 F.3d 261 (5th Cir. 1998) (evidence at trial more than satisfied the relative risk greater than two requirement), rev’d on rehearing en banc, 180 F.3d 175 (5th Cir. 1999)

Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 591-92, 605 n.27, 606–07 (D.N.J. 2002) (“When the relative risk reaches 2.0, the risk has doubled, indicating that the risk is twice as high among the exposed group as compared to the non-exposed group. Thus, ‘the threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0’.”) (quoting FJC Reference Manual at 384), aff’d, 68 F. App’x 356 (3d Cir. 2003)

Allison v. Fire Ins. Exchange, 98 S.W.3d 227, 239 (Tex. App. — Austin 2002, no pet. h.)

Ferguson v. Riverside School Dist. No. 416, 2002 WL 34355958 (E.D. Wash. Feb. 6, 2002) (No. CS-00-0097-FVS)

Daniels v. Lyondell-Citgo Refining Co., 99 S.W.3d 722, 727 (Tex. App. – Houston [1st Dist.] 2003) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two)

Exxon Corp. v. Makofski, 116 S.W.3d 176, 184-85 (Tex. App. — Houston 2003)

Frias v. Atlantic Richfield Co., 104 S.W.3d 925 (Tex. App. — Houston 2003)

Graham v. Lautrec, Ltd., 2003 WL 23512133, at *1 (Mich. Cir. Ct. 2003) (mold)

Mobil Oil Corp. v. Bailey, 187 S.W.3d 263, 268 (Tex. App. – Beaumont 2006) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two)

In re Lockheed Litig. Cases, 115 Cal. App. 4th 558 (2004)(alleging brain, liver, and kidney damage), rev’d in part, 23 Cal. Rptr. 3d 762, 765 (Cal. App. 2d Dist. 2005) (“[A] court cannot exclude an epidemiological study from consideration solely because the study shows a relative risk of less than 2.0.”), rev. dismissed, 192 P.3d 403 (Cal. 2007)

Novartis Grimsby Ltd. v. Cookson, [2007] EWCA (Civ) 1261, at para. 74 (causation was successfully established by risk ratio greater than two; per Lady Justice Smith: “Put in terms of risk, the occupational exposure had more than doubled the risk [of the bladder cancer complained of] due to smoking. . . . if the correct test for causation in a case such as this is the “but for” test and nothing less will do, that test is plainly satisfied on the facts as found. . . . In terms of risk, if the occupational exposure more than doubles the risk due to smoking, it must, as a matter of logic, be probable that the disease was caused by the former.”)

Watts v. Radiator Specialty Co., 990 So. 2d 143 (Miss. 2008) (“The threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0.”)

King v. Burlington Northern Santa Fe Ry, 762 N.W.2d 24, 36-37 (Neb. 2009) (reversing exclusion of proffered testimony of Arthur Frank on claim that diesel exposure caused multiple myeloma, and addressing in dicta the ability of expert witnesses to speculate reasons why specific causation exists even with relative risk less than two) (“If a study shows a relative risk of 2.0, ‘the agent is responsible for an equal number of cases of disease as all other background causes.’ This finding ‘implies a 50% likelihood that an exposed individual’s disease was caused by the agent.’ If the relative risk is greater than 2.0, the study shows a greater than 50–percent likelihood that the agent caused the disease.”)(internal citations to Reference Manual on Scientific Evidence (2d ed. 2000) omitted)

Henricksen v. Conocophillips Co., 605 F. Supp. 2d 1142, 1158 (E.D. Wash. 2009) (noting that under Circuit precedent, epidemiologic studies showing low-level risk may suffice to show general causation but are sufficient to show specific causation only if the relative risk exceeds two) (excluding plaintiff’s expert witness’s testimony because the epidemiologic evidence was “contradictory and inconsistent”)

City of San Antonio v. Pollock, 284 S.W.3d 809, 818 (Tex. 2009) (holding testimony admitted insufficient as matter of law)

George v. Vermont League of Cities and Towns, 2010 Vt. 1, 993 A.2d 367, 375 (2010)

Blanchard v. Goodyear Tire & Rubber Co., No. 837-12-07 Wrcv (Eaton, J., June 28, 2010) (excluding expert witness, David Goldsmith, and entering summary judgment), aff’d, 190 Vt. 577, 30 A.3d 1271 (2011)

Pritchard v. Dow Agro Sciences, 705 F. Supp. 2d 471, 486 (W.D. Pa. 2010) (excluding opinions of Dr. Omalu on Dursban, in part because of low relative risk) (“Therefore, a relative risk of 2.0 is not dispositive of the reliability of an expert’s opinion relying on an epidemiological study, but it is a factor, among others, which the Court is to consider in its evaluation.”), aff’d, 430 Fed. Appx. 102, 2011 WL 2160456 (3d Cir. 2011)

Faust v. BNSF Ry., 337 S.W.3d 325, 337 (Tex. Ct. App. 2d Dist. 2011) (“To be considered reliable scientific evidence of general causation, an epidemiological study must (1) have a relative risk of 2.0 and (2) be statistically significant at the 95% confidence level.”) (internal citations omitted)

Nonnon v. City of New York, 88 A.D.3d 384, 398-99, 932 N.Y.S.2d 428, 437-38 (1st Dep’t 2011) (holding that the strength of the epidemiologic evidence, with relative risks greater than 2.0, permitted an inference of causation)

Milward v. Acuity Specialty Products Group, Inc., 969 F. Supp. 2d 101, 112-13 & n.7 (D. Mass. 2013) (avoiding doubling of risk issue and holding that plaintiffs’ expert witnesses failed to rely upon a valid exposure estimate and lacked sufficient qualifications to evaluate and weigh the epidemiologic studies that provided estimates of relative risk) (generalities about the “core competencies” of physicians or specialty practices cannot overcome an expert witness’s explicit admission of lacking the epidemiologic expertise needed to evaluate and weigh the epidemiologic studies and methods at issue in the case. Without the requisite qualifications, an expert witness cannot show that the challenged opinion has a sufficiently reliable scientific foundation in epidemiologic studies and method.)

Berg v. Johnson & Johnson, 940 F.Supp.2d 983 (D.S.D. 2013) (talc and ovarian cancer)


Other

In re Hannaford Bros. Co. Customer Data Sec. Breach Litig., 293 F.R.D. 21, 2:08-MD-1954-DBH, 2013 WL 1182733, *1 (D. Me. Mar. 20, 2013) (Hornby, J.) (denying motion for class certification) (“population-based probability estimates do not speak to a probability of causation in any one case; the estimate of relative risk is a property of the studied population, not of an individual’s case.”)

Cherry Picking; Systematic Reviews; Weight of the Evidence

April 5th, 2015

In a paper prepared for one of Professor Margaret Berger’s symposia on law and science, Lisa Bero, a professor of clinical pharmacy in the University of California San Francisco’s School of Pharmacy, identified a major source of error in published reviews of putative health effects:

“The biased citation of studies in a review can be a major source of error in the results of the review. Authors of reviews can influence their conclusions by citing only studies that support their preconceived, desired outcome.”

Lisa Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J. L. & Policy 569, 576 (2006). Biased citation, consideration, and reliance are major sources of methodological error in courtroom proceedings as well. Sometimes astute judges recognize and bar expert witnesses who would pass off their opinions as well considered when they are propped up only by biased citation. Unfortunately, courts have been inconsistent, sometimes rewarding cherry picking of studies by admitting biased opinions[1], sometimes unhorsing the would-be expert witnesses by excluding their opinions[2].

Given that cherry picking or “biased citation” is recognized in the professional community as a rather serious methodological sin, judges may be astonished to learn that neither phrase, “cherry picking” nor “biased citation,” appears in the third edition of the Reference Manual on Scientific Evidence. Of course, the Manual could have dealt with the underlying issue of biased citation by affirmatively promoting the procedure of systematic reviews, but here again, the Manual falls short. There is no discussion of systematic review in the chapters on toxicology[3], epidemiology[4], or statistics[5]. Only the chapter on clinical medicine discusses the systematic review, briefly[6]. The absence of support for the procedures of systematic reviews, combined with the occasional cheerleading for “weight of the evidence,” in which expert witnesses subjectively include and weight studies to reach pre-ordained opinions, tends to undermine the reliability of the latest edition of the Manual[7].


[1] Spray-Rite Serv. Corp. v. Monsanto Co., 684 F.2d 1226, 1242 (7th Cir. 1982) (failure to consider factors identified by opposing side’s expert did not make testimony inadmissible).

[2] In re Zoloft, 26 F. Supp. 3d 449 (E.D. Pa. 2014) (excluding perinatal epidemiologist, Anick Bérard, for biased cherry picking of data points); In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding opinions of Drs. Arthur Kornbluth and David Madigan because of their unjustified dismissal of studies that contradicted or undermined their opinions); In re Bextra & Celebrex Mktg. Sales Practices & Prods. Liab. Litig., 524 F.Supp. 2d 1166, 1175–76, 1179 (N.D. Cal. 2007) (holding that expert witnesses may not “cherry-pick[ ]” observational studies to support a conclusion that is contradicted by randomized controlled trials, meta-analyses of such trials, and meta-analyses of observational studies; excluding expert witness who “ignores the vast majority of the evidence in favor of the few studies that support her conclusion”); Grant v. Pharmavite, LLC, 452 F. Supp. 2d 903, 908 (D. Neb. 2006) (excluding expert witness opinion testimony that plaintiff’s use of black cohosh caused her autoimmune hepatitis) (“Dr. Corbett’s failure to adequately address the body of contrary epidemiological evidence weighs heavily against admission of his testimony.”); Downs v. Perstorp Components, Inc., 126 F. Supp. 2d 1090, 1124-29 (E.D. Tenn. 1999) (expert’s opinion raised seven “red flags” indicating that his testimony was litigation-biased), aff’d, 2002 U.S. App. Lexis 382 (6th Cir. Jan. 4, 2002).

[3] Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” in Reference Manual on Scientific Evidence 633 (3d ed. 2011).

[4] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549 (3d ed. 2011).

[5] David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” in Reference Manual on Scientific Evidence 209 (3d ed. 2011).

[6] John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Federal Judicial Center and National Research Council, Reference Manual on Scientific Evidence 687 (3d ed. 2011).

[7] See Margaret A. Berger, “The Admissibility of Expert Testimony,” in Reference Manual on Scientific Evidence 11, 20 & n.51 (3d ed. 2011) (posthumously citing Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), with approval, for reversing exclusion of expert witnesses who advanced “weight of the evidence” opinions).

The Misbegotten Judicial Resistance to the Daubert Revolution

December 8th, 2013

David Bernstein is a Professor at the George Mason University School of Law.  Professor Bernstein has been writing about expert witness evidentiary issues for almost as long as I have been litigating them.  I have learned much from his academic writings on expert witness issues, which include his contributions to two important multi-authored texts: The New Wigmore: Expert Evidence (2d ed. 2010) and Phantom Risk: Scientific Inference and the Law (MIT Press 1993).

Bernstein’s draft article on the Daubert Counter-revolution, which some might call a surge by judicial reactionaries, has been available on the Social Science Research Network, and on his law school’s website. See “David Bernstein on the Daubert Counterrevolution” (April 19, 2013).  Professor Bernstein’s article has now been published in the current issue of the Notre Dame Law Review, and is available at its website. David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).  This article might well replace the outdated chapter by the late Professor Berger in the latest edition of the Reference Manual on Scientific Evidence.

Manganese Meta-Analysis Further Undermines Reference Manual’s Toxicology Chapter

October 15th, 2012

Last October, when the ink was still wet on the Reference Manual on Scientific Evidence (3d ed. 2011), I dipped into the toxicology chapter only to find the treatment of a number of key issues to be partial and biased.  See “Toxicology for Judges – The New Reference Manual on Scientific Evidence” (Oct. 5, 2011).

The chapter, “Reference Guide on Toxicology,” was written by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the law firm of Buchanan Ingersoll, P.C.  In particular, I noted the authors’ conflicts of interest, both financial and ideological, which may have resulted in an incomplete and tendentious presentation of important concepts in the chapter.  Important concepts in toxicology, such as hormesis, were omitted completely from the chapter.  See, e.g., Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (N.Y. 2009); Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”)(internal citations omitted); Philip Wexler, et al., eds., 2 Encyclopedia of Toxicology 96 (2005) (“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”).

The financial conflicts are perhaps more readily appreciated.  Goldstein has testified in any number of so-called toxic tort cases, including several in which courts had excluded his testimony as being methodologically unreliable.  These cases are not cited in the Manual.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005); Exxon Corp. v. Makofski, 116 S.W.3d 176 (Tex. App.–Houston [14th Dist.] 2003, pet. denied) (benzene and ALL claim).

One of the disappointments of the toxicology chapter was its failure to remain neutral in substantive disputes, or at least to document its positions against adversarial claims.  Table 1 in the chapter presents, without documentation or citation, a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.” Although many of the agent/disease outcome relationships in the table are well accepted, one was curiously unsupported at the time; namely, the claim that manganese causes Parkinson’s disease (PD).  Reference Manual at 653.  This tendentious claim undermines the Manual’s attempt to remain disinterested in what was then an ongoing litigation effort.  Last year, I noted that Goldstein’s scholarship was questionable at the time of publication because PD is generally accepted to have no known cause.  Claims that manganese can cause PD had been addressed in several reviews. See, e.g., Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ Health Perspect. 1071, 1078 (2010) (“The available evidence from human and nonhuman primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD.”).

More recently, three neuro-epidemiologists have published a systematic review and meta-analysis of the available analytical epidemiologic studies.  What they found was an inverse association between welding, a trade that involves manganese fume exposure, and Parkinson’s disease. James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).

Here are the summary figures from the published meta-analysis:

[Summary forest plots omitted; the pooled estimates showed an inverse association between welding and Parkinson’s disease.]

The Fourth Edition should aim at a better integration of toxicology into the evolving science of human health effects.

Pin the Tail on the Significance Test

July 14th, 2012

Statistical significance has proven a difficult concept for many judges and lawyers to understand and apply.  An adequate understanding of significance probability requires the recognition that the tail probability (the probability of a result at least as extreme as the result obtained, if the null hypothesis is true) could be the area under one or both tails of the probability distribution curve.  Specifying an attained significance probability thus requires us to specify whether the p-value is one-sided or two-sided; that is, whether we have counted the result obtained and more extreme results in one direction or in both.

Reference Manual on Scientific Evidence

As with many other essential statistical concepts, we can expect courts and counsel to look to the Reference Manual for guidance.  As with the notion of statistical significance itself, however, the Manual is not entirely consistent or accurate.

Statistics Chapter

The statistics chapter in the Reference Manual on Scientific Evidence provides a good example of one- versus two-tail statistical tests:

One tail or two?

In many cases, a statistical test can be done either one-tailed or two-tailed; the second method often produces a p-value twice as big as the first method. The methods are easily explained with a hypothetical example. Suppose we toss a coin 1000 times and get 532 heads. The null hypothesis to be tested asserts that the coin is fair. If the null is correct, the chance of getting 532 or more heads is 2.3%.

That is a one-tailed test, whose p-value is 2.3%. To make a two-tailed test, the statistician computes the chance of getting 532 or more heads—or 500 − 32 = 468 heads or fewer. This is 4.6%. In other words, the two-tailed p-value is 4.6%. Because small p-values are evidence against the null hypothesis, the one-tailed test seems to produce stronger evidence than its two-tailed counterpart. However, the advantage is largely illusory, as the example suggests. (The two-tailed test may seem artificial, but it offers some protection against possible artifacts resulting from multiple testing—the topic of the next section.)

Some courts and commentators have argued for one or the other type of test, but a rigid rule is not required if significance levels are used as guidelines rather than as mechanical rules for statistical proof.110 One-tailed tests often make it easier to reach a threshold such as 5%, at least in terms of appearance. However, if we recognize that 5% is not a magic line, then the choice between one tail and two is less important—as long as the choice and its effect on the p-value are made explicit.”

David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in RMSE3d 211, 255-56 (3d ed. 2011). This advice is pragmatic but a bit misleading.  The reason for the two-tailed test is not really tied to multiple testing.  If there were 20 independent tests, doubling the p-value would hardly be “some protection” against multiple-testing artifacts. Rather, in cases where the hypothesis test specifies an alternative hypothesis that is simply not equal to the null hypothesis, extreme values both above and below the null hypothesis count in favor of rejecting the null, and a two-tailed test results.  Multiple testing may be a reason for modifying our interpretation of the strength of a p-value, but it really should not drive our choice between one-tailed and two-tailed tests.
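The chapter’s coin-toss numbers can be verified exactly with the binomial distribution; because a fair coin yields a symmetric distribution, the two-tailed p-value here is exactly twice the one-tailed value. A short sketch using only the Python standard library (the helper names are mine):

```python
from math import comb

def upper_tail(n: int, k: int) -> float:
    """Exact P(X >= k) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

def lower_tail(n: int, k: int) -> float:
    """Exact P(X <= k) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, j) for j in range(0, k + 1)) / 2 ** n

n, heads = 1000, 532
p_one = upper_tail(n, heads)              # 532 or more heads
p_two = p_one + lower_tail(n, n - heads)  # plus 468 or fewer heads
print(f"one-tailed p = {p_one:.3f}")      # 0.023, the chapter's 2.3%
print(f"two-tailed p = {p_two:.3f}")      # 0.046, double by symmetry
```

Because math.comb works in exact integer arithmetic until the final division, no normal approximation is involved; the doubling relationship holds exactly here only because the null distribution is symmetric.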

The authors of the statistics chapter are certainly correct that 5% is not “a magic line,” but they might ask what the FDA does when deciding whether a clinical trial has established the efficacy of a new medication.  Does the agency license the medication if the sponsor’s trial comes close to 5%, or does it demand 5%, two-tailed, as a minimal showing?  There are times in science, industry, regulation, and law when a dichotomous test is needed.

Kaye and Freedman provide an important further observation, which is ignored in the subsequent epidemiology chapter’s discussion:

“One-tailed tests at the 5% level are viewed as weak evidence—no weaker standard is commonly used in the technical literature.  One-tailed tests are also called one-sided (with no pejorative intent); two-tailed tests are two-sided.”

Id. at 255 n.10. This statement is a helpful bulwark against the oft-repeated suggestion that any p-value cut-off for rejecting null hypotheses would be arbitrary.

Chapter on Multiple Regression

This chapter explains how the choice of the statistical tests, whether one- or two-sided, may be tied to prior beliefs and the selection of the alternative hypothesis in the hypothesis test.

“3. Should statistical tests be one-tailed or two-tailed?

When the expert evaluates the null hypothesis that a variable of interest has no linear association with a dependent variable against the alternative hypothesis that there is an association, a two-tailed test, which allows for the effect to be either positive or negative, is usually appropriate. A one-tailed test would usually be applied when the expert believes, perhaps on the basis of other direct evidence presented at trial, that the alternative hypothesis is either positive or negative, but not both. For example, an expert might use a one-tailed test in a patent infringement case if he or she strongly believes that the effect of the alleged infringement on the price of the infringed product was either zero or negative. (The sales of the infringing product competed with the sales of the infringed product, thereby lowering the price.) By using a one-tailed test, the expert is in effect stating that prior to looking at the data it would be very surprising if the data pointed in the direction opposite to the one posited by the expert.

Because using a one-tailed test produces p-values that are one-half the size of p-values using a two-tailed test, the choice of a one-tailed test makes it easier for the expert to reject a null hypothesis. Correspondingly, the choice of a two-tailed test makes null hypothesis rejection less likely. Because there is some arbitrariness involved in the choice of an alternative hypothesis, courts should avoid relying solely on sharply defined statistical tests.49 Reporting the p-value or a confidence interval should be encouraged because it conveys useful information to the court, whether or not a null hypothesis is rejected.”

Id. at 321.  This statement is not quite consistent with the chapter on statistics, and it introduces new problems.  The choice of the alternative hypothesis is not always arbitrary; there are times when the use of a one-tailed or a two-tailed test is clearly preferable, but the chapter withholds its guidance.  The statement that a “one-tailed test produces p-values that are one-half the size of p-values using a two-tailed test” is true for Gaussian distributions, which of necessity are symmetrical.  Doubling the one-tailed value, however, will not necessarily yield a correct two-tailed measure for asymmetrical binomial or hypergeometric distributions.  If great weight must be placed on the exactness of the p-value for legal purposes, and on whether the p-value is less than 0.05, then courts must realize that there are alternative approaches to calculating significance probability, such as the mid-p-value.  The author of the chapter on multiple regression goes on to note that most courts have shown a preference for two-tailed tests.  Id. at 321 n.49.  The legal citations, however, are limited, and given the lack of sophistication in many courts, it is not clear what prescriptive effect such a preference, if correct, should have.
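The asymmetry point can be made concrete. For a skewed binomial null, doubling the one-tailed p-value and computing an exact two-sided p-value (here, by one common convention, summing the probabilities of all outcomes no more likely than the one observed) can fall on opposite sides of 0.05; the mid-p-value is yet another variant. A sketch with an illustrative null of Binomial(10, 0.2) and an observed count of 5 (the example and function names are mine, not the Manual’s):

```python
from math import comb

def pmf(n: int, p: float, k: int) -> float:
    """Binomial point probability P(X = k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def one_tailed(n: int, p: float, k: int) -> float:
    """Exact upper-tail p-value, P(X >= k)."""
    return sum(pmf(n, p, j) for j in range(k, n + 1))

def two_sided_exact(n: int, p: float, k: int) -> float:
    """Exact two-sided p-value: total probability of every outcome
    no more likely than the observed one (one common convention)."""
    obs = pmf(n, p, k)
    return sum(pmf(n, p, j) for j in range(n + 1)
               if pmf(n, p, j) <= obs * (1 + 1e-9))

def mid_p(n: int, p: float, k: int) -> float:
    """One-sided mid-p-value: half the observed outcome's probability
    plus the probability of strictly more extreme outcomes."""
    return 0.5 * pmf(n, p, k) + one_tailed(n, p, k + 1)

n, p0, k = 10, 0.2, 5
print(one_tailed(n, p0, k))       # ~0.033
print(2 * one_tailed(n, p0, k))   # ~0.066: doubling crosses 0.05
print(two_sided_exact(n, p0, k))  # ~0.033: the lower tail adds nothing here
print(mid_p(n, p0, k))            # ~0.020
```

On these numbers the doubled one-tailed value exceeds 0.05 while the exact two-sided value does not; the choice of convention, not the data, drives “significance.”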

Chapter on Epidemiology

The chapter on epidemiology appears to be substantially at odds with the chapters on statistics and multiple regression.  Remarkably, the authors of the epidemiology chapter declare that because “most investigators of toxic substances are only interested in whether the agent increases the incidence of disease (as distinguished from providing protection from the disease), a one-tailed test is often viewed as appropriate.” Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 577 n.83 (3d ed. 2011).

The chapter cites no support for what “most investigators” are “only interested in,” and its authors fail to provide a comprehensive survey of the case law.  I believe that the authors’ suggestion about the interest of “most investigators” is incorrect.  The chapter authors cite a questionable case involving over-the-counter allergy and cold decongestant medications that contained phenylpropanolamine (PPA). Id., citing In re Phenylpropanolamine (PPA) Prods. Liab. Litig., 289 F. Supp. 2d 1230, 1241 (W.D. Wash. 2003) (accepting the propriety of a one-tailed test for statistical significance in a toxic substance case).  The PPA case cited another case, Good v. Fluor Daniel Corp., 222 F. Supp. 2d 1236, 1243 (E.D. Wash. 2002), which explicitly rejected the use of the one-tailed test.  More important, the preliminary report of the key study in the PPA litigation used one-tailed tests when submitted to the FDA, but was revised to use two-tailed tests when the authors prepared their manuscript for publication in the New England Journal of Medicine.  The PPA litigation thus presents a case in which, for regulatory purposes, the one-tailed test was used, but for a scientific and clinical audience, the two-tailed test was used.

The other case cited by the epidemiology chapter was the federal district court’s review, in the District of Columbia, of an EPA risk assessment of second-hand smoke.  United States v. Philip Morris USA, Inc., 449 F. Supp. 2d 1, 701 (D.D.C. 2006) (explaining the basis for EPA’s decision to use a one-tailed test in assessing whether second-hand smoke was a carcinogen).  The EPA is a federal agency in the “protection” business, not in the business of investigating scientific claims.  As widely acknowledged in many judicial decisions, regulatory action is often based upon precautionary-principle judgments, which are different from scientific causal claims.  See, e.g., In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one.”), aff’d in relevant part, 818 F.2d 145 (2d Cir. 1987), cert. denied sub nom. Pinkney v. Dow Chemical Co., 484 U.S. 1004 (1988).

 

Litigation

In the securities fraud class action against Pfizer over Celebrex, one of plaintiffs’ expert witnesses criticized a defense expert witness’s meta-analysis for not using a one-sided p-value.  According to Nicholas Jewell, Dr. Lee-Jen Wei should have used a one-sided test for his summary meta-analytic estimates of association.  In his deposition testimony, however, Jewell was unable to identify any published or unpublished studies of NSAIDs that used a one-sided test.  Another of plaintiffs’ expert witnesses, Prof. Madigan, rejected the use of one-sided p-values in this situation out of hand.  A third plaintiffs’ expert witness, Curt Furberg, referred to Jewell’s one-sided testing as “cheating” because it assumes an increased risk and artificially biases the analysis against Celebrex.  Pfizer’s Mem. of Law in Opp. to Plaintiffs’ Motion to Exclude Expert Testimony by Dr. Lee-Jen Wei at 2, filed Sept. 8, 2009, in In re Pfizer, Inc. Securities Litig., Nos. 04 Civ. 9866(LTS)(JLC), 05 md 1688(LTS), Doc. 153 (S.D.N.Y.) (citing Markel Decl., Ex. 18 at 223, 226, 229 (Jewell Dep., In re Bextra); Ex. 7, at 123 (Furberg Dep., Haslam v. Pfizer)).

 

Legal Commentary

One of the leading texts on statistical analyses in the law provides important insights into the choice between one-tailed and two-tailed statistical tests.  While scientific studies will almost always use two-tailed tests of significance probability, there are times, especially in discrimination cases, when a one-tailed test is appropriate:

“Many scientific researchers recommend two-tailed tests even if there are good reasons for assuming that the result will lie in one direction. The researcher who uses a one-tailed test is in a sense prejudging the result by ignoring the possibility that the experimental observation will not coincide with his prior views. The conservative investigator includes that possibility in reporting the rate of possible error. Thus routine calculation of significance levels, especially when there are many to report, is most often done with two-tailed tests. Large randomized clinical trials are always tested with two-tails.

In most litigated disputes, however, there is no difference between non-rejection of the null hypothesis because, e.g., blacks are represented in numbers not significantly less than their expected numbers, or because they are in fact overrepresented. In either case, the claim of underrepresentation must fail. Unless whites also sue, the only Type I error possible is that of rejecting the null hypothesis in cases of underrepresentation when in fact there is no discrimination: the rate of this error is controlled by a one-tailed test. As one statistician put it, a one-tailed test is appropriate when ‘the investigator is not interested in a difference in the reverse direction from the hypothesized’. Joseph Fleiss, Statistical Methods for Rates and Proportions 21 (2d ed. 1981).”

Michael Finkelstein & Bruce Levin, Statistics for Lawyers at 121-22 (2d ed. 2001).  These authors provide a useful corrective to the Reference Manual’s quirky suggestion that scientific investigators are not interested in two-tailed tests of significance.  As Finkelstein and Levin point out, however, discrimination cases may involve probability models for which we care only about random error in one direction.

Professor Finkelstein elaborates further in his basic text, with an illustration from a Supreme Court case, in which the choice of the two-tailed test was tied to the outcome of the adjudication:

“If intended as a rule for sufficiency of evidence in a lawsuit, the Court’s translation of social science requirements was imperfect. The mistranslation  relates to the issue of two-tailed vs. one-tailed tests. In most social science pursuits investigators recommend two-tailed tests. For example, in a sociological study of the wages of men and women the question may be whether their earnings are the same or different. Although we might have a priori reasons for thinking that men would earn more than women, a departure from equality in either direction would count as evidence against the null hypothesis; thus we should use a two-tailed test. Under a two-tailed test, 1.96 standard errors is associated with a 5% level of significance, which is the convention. Under a one-tailed test, the same level of significance is 1.64 standard errors. Hence if a one-tailed test is appropriate, the conventional cutoff would be 1.64 standard errors instead of 1.96. In the social science arena a one-tailed test would be justified only if we had very strong reasons for believing that men did not earn less than women. But in most settings such a prejudgment has seemed improper to investigators in scientific or academic pursuits; and so they generally recommend two-tailed tests. The setting of a discrimination lawsuit is different, however. There, unless the men also sue, we do not care whether women earn the same or more than men; in either case the lawsuit on their behalf is correctly dismissed. Errors occur only in rejecting the null hypothesis when men do not earn more than women; the rate of such errors is controlled by one-tailed test. Thus when women earn at least as much as men, a 5% one-tailed test in a discrimination case with the cutoff at 1.64 standard deviations has the same 5% rate of errors as the academic study with a cutoff at 1.96 standard errors. 
The advantage of the one-tailed test in the judicial dispute is that by making it easier to reject the null hypothesis one makes fewer errors of failing to reject it when it is false.

The difference between one-tailed and two-tailed tests was of some consequence in Hazelwood School District v. United States [433 U.S. 299 (1977)], a case involving charges of discrimination against blacks in the hiring of teachers for a suburban school district.  A majority of the Supreme Court found that the case turned on whether teachers in the city of St. Louis, who were predominantly black, had to be included in the hiring pool and remanded for a determination of that issue. The majority based that conclusion on the fact that, using a two-tailed test and a hiring pool that excluded St. Louis teachers, the underrepresentation of black hires was less than two standard errors from expectation, but if St. Louis teachers were included, the disparity was greater than five standard errors. Justice Stevens, in dissent, used a one-tailed test, found that the underrepresentation was statistically significant at the 5% level without including the St. Louis teachers, and concluded that a remand was unnecessary because discrimination was proved with either pool. From our point of view, Justice Stevens was right to use a one-tailed test and the remand was unnecessary.”

Michael Finkelstein, Basic Concepts of Probability and Statistics in the Law 57-58 (N.Y. 2009).  See also William R. Rice & Stephen D. Gaines, “Heads I Win, Tails You Lose: Testing Directional Alternative Hypotheses in Ecological and Evolutionary Research,” 9 Trends in Ecology & Evolution 235‐237, 235 (1994) (“The use of such one‐tailed test statistics, however, poses an ongoing philosophical dilemma. The problem is a conflict between two issues: the large gain in power when one‐tailed tests are used appropriately versus the possibility of ‘surprising’ experimental results, where there is strong evidence of non‐compliance with the null hypothesis (Ho) but in the unanticipated direction.”); Anthony McCluskey & Abdul Lalkhen, “Statistics IV: Interpreting the Results of Statistical Tests,” 7 Continuing Education in Anesthesia, Critical Care & Pain 221 (2007) (“It is almost always appropriate to conduct statistical analysis of data using two‐tailed tests and this should be specified in the study protocol before data collection. A one‐tailed test is usually inappropriate. It answers a similar question to the two‐tailed test but crucially it specifies in advance that we are only interested if the sample mean of one group is greater than the other. If analysis of the data reveals a result opposite to that expected, the difference between the sample means must be attributed to chance, even if this difference is large.”).
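The 1.64 versus 1.96 standard-error cutoffs that these commentators invoke are simply quantiles of the standard normal distribution. A short sketch using Python’s standard library (an illustration of the arithmetic only, not of any particular case’s data) shows the zone where the choice of test decides the outcome:

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, standard deviation 1

# Critical values for a 5% error rate, allocated one way or split two ways
one_tailed_cut = std_normal.inv_cdf(0.95)   # all 5% in the upper tail
two_tailed_cut = std_normal.inv_cdf(0.975)  # 2.5% in each tail

# A disparity of 1.8 standard errors clears the one-tailed hurdle but not
# the two-tailed one -- exactly the zone where the choice of test matters.
z = 1.8
significant_one_tailed = z > one_tailed_cut
significant_two_tailed = z > two_tailed_cut
```

On these numbers the one-tailed cutoff is about 1.645 and the two-tailed cutoff about 1.960, so any disparity falling between them is “significant” under one convention and not the other.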

The treatise, Modern Scientific Evidence, addresses some of the caselaw involving disputes over one- versus two-tailed tests.  David Faigman, Michael Saks, Joseph Sanders, and Edward Cheng, Modern Scientific Evidence: The Law and Science of Expert Testimony § 23:13, at 240.  In discussing a Texas case, Kelley, cited infra, these authors note that the court correctly rejected an expert witness’s attempt to claim statistical significance on the basis of a one-tailed test of data in a study of silicone and autoimmune disease.

The following is an incomplete review of cases that have addressed the choice between one- and two-tailed tests of statistical significance.

First Circuit

Chang v. University of Rhode Island, 606 F.Supp. 1161, 1205 (D.R.I.1985) (comparing one-tail and two-tail test results).

Second Circuit

Procter & Gamble Co. v. Chesebrough-Pond’s Inc., 747 F.2d 114 (2d Cir. 1984) (discussing one-tailed versus two-tailed tests in the context of a Lanham Act claim of product superiority)

Ottaviani v. State University of New York at New Paltz, 679 F.Supp. 288 (S.D.N.Y. 1988) (“Defendant’s criticism of a one-tail test is also compelling: since under a one-tail test 1.64 standard deviations equal the statistically significant probability level of .05 percent, while 1.96 standard deviations are required under the two-tailed test, the one-tail test favors the plaintiffs because it requires them to show a smaller difference in treatment between men and women.”) (“The small difference between a one-tail and two-tail test of probability is not relevant. The Court will not treat 1.96 standard deviation as the dividing point between valid and invalid claims. Rather, the Court will examine the statistical significance of the results under both one and two tails and from that infer what it can about the existence of discrimination against women at New Paltz.”)

Third Circuit

United States v. Delaware, 2004 U.S. Dist. LEXIS 4560, at *36 n.27 (D. Del. Mar. 22, 2004) (stating that for a one-tailed test to be appropriate, “one must assume … that there will only be one type of relationship between the variables”)

Fourth Circuit

Equal Employment Opportunity Comm’n v. Federal Reserve Bank of Richmond, 698 F.2d 633 (4th Cir. 1983)(“We repeat, however, that we are not persuaded that it is at all proper to use a test such as the “one-tail” test which all opinion finds to be skewed in favor of plaintiffs in discrimination cases, especially when the use of all other neutral analyses refutes any inference of discrimination, as in this case.”), rev’d on other grounds, sub nom. Cooper v. FRB of Richmond, 467 U.S. 867 (1984)

Hoops v. Elk Run Coal Co., Inc., 95 F.Supp.2d 612 (S.D.W.Va. 2000)(“Some, including our Court of Appeals, suggest a one-tail test favors a plaintiff’s point of view and might be inappropriate under some circumstances.”)

Fifth Circuit

Kelley v. American Heyer-Schulte Corp., 957 F. Supp. 873, 879, (W.D. Tex. 1997), appeal dismissed, 139 F.3d 899 (5th Cir. 1998)(rejecting Shanna Swan’s effort to reinterpret study data by using a one-tail test of significance; ‘‘Dr. Swan assumes a priori that the data tends to show that breast implants have negative health effects on women—an assumption that the authors of the Hennekens study did not feel comfortable making when they looked at the data.’’)

Brown v. Delta Air Lines, Inc., 522 F.Supp. 1218, 1229, n. 14 (S.D.Texas 1980)(discussing how one-tailed test favors plaintiff’s viewpoint)

Sixth Circuit

Dobbs-Weinstein v. Vanderbilt Univ., 1 F.Supp.2d 783 (M.D. Tenn. 1998) (rejecting one-tailed test in discrimination action)

Seventh Circuit

Mozee v. American Commercial Marine Service Co., 940 F.2d 1036, 1043 & n.7 (7th Cir. 1991)(noting that district court had applied one-tailed test and that plaintiff did not challenge that application on appeal), cert. denied, ___ U.S. ___, 113 S.Ct. 207 (1992)

Premium Plus Partners LLP v. Davis, 653 F.Supp. 2d 855 (N.D. Ill. 2009)(rejecting challenge based in part upon use of a one-tailed test), aff’d on other grounds, 648 F.3d 533 (7th Cir. 2011)

Ninth Circuit

In re Phenylpropanolamine (PPA) Prods. Liab. Litig., 289 F. Supp. 2d 1230, 1241 (W.D. Wash. 2003) (refusing to reject reliance upon a study of stroke and PPA use, which was statistically significant only with a one-tailed test)

Good v. Fluor Daniel Corp., 222 F. Supp. 2d 1236, 1242-43 (E.D. Wash. 2002) (rejecting use of one-tailed test when its use assumes fact in dispute)

Stender v. Lucky Stores, Inc., 803 F.Supp. 259, 323 (N.D.Cal. 1992)(“Statisticians can employ either one or two-tailed tests in measuring significance levels. The terms one-tailed and two-tailed indicate whether the significance levels are calculated from one or two tails of a sampling distribution. Two-tailed tests are appropriate when there is a possibility of both overselection and underselection in the populations that are being compared.  One-tailed tests are most appropriate when one population is consistently overselected over another.”)

District of Columbia Circuit

United States v. Philip Morris USA, Inc., 449 F. Supp. 2d 1, 701 (D.D.C. 2006) (explaining the basis for EPA’s decision to use one-tailed test in assessing whether second-hand smoke was a carcinogen)

Palmer v. Shultz, 815 F.2d 84, 95-96 (D.C.Cir.1987)(rejecting use of one-tailed test; “although we by no means intend entirely to foreclose the use of one-tailed tests, we think that generally two-tailed tests are more appropriate in Title VII cases. After all, the hypothesis to be tested in any disparate treatment claim should generally be that the selection process treated men and women equally, not that the selection process treated women at least as well as or better than men. Two-tailed tests are used where the hypothesis to be rejected is that certain proportions are equal and not that one proportion is equal to or greater than the other proportion.”)

Moore v. Summers, 113 F. Supp. 2d 5, 20 & n.2 (D.D.C. 2000)(stating preference for two-tailed test)

Hartman v. Duffey, 88 F.3d 1232, 1238 (D.C.Cir. 1996)(“one-tailed analysis tests whether a group is disfavored in hiring decisions while two-tailed analysis tests whether the group is preferred or disfavored.”)

Csicseri v. Bowsher, 862 F. Supp. 547, 565, 574 (D.D.C. 1994)(noting that a one-tailed test is “not without merit,” but a two-tailed test is preferable)

Berger v. Iron Workers Reinforced Rodmen Local 201, 843 F.2d 1395 (D.C. Cir. 1988)(describing but avoiding choice between one-tail and two-tail tests as “nettlesome”)

Segar v. Civiletti, 508 F.Supp. 690 (D.D.C. 1981)(“Plaintiffs analyses are one tailed. In discrimination cases of this kind, where only a positive disparity is of interest, the one tailed test is superior.”)

Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I

July 7th, 2012

The Viagra litigation over claimed vision loss vividly illustrates the difficulties that trial judges have in understanding and applying the concept of statistical significance.  In this MDL, plaintiffs sued for a specific form of vision loss, non-arteritic ischemic optic neuropathy (NAION), which they claimed was caused by their use of defendant’s medication, Viagra.  In re Viagra Products Liab. Litig., 572 F. Supp. 2d 1071 (D. Minn. 2008).  Plaintiffs’ key expert witness, Gerald McGwin, considered three epidemiologic studies; none found a statistically significant elevation of the risk of NAION after Viagra use.  Id. at 1076.  The defense filed a Rule 702 motion to exclude McGwin’s testimony, based in part upon the lack of statistical significance of the risk ratios he relied upon for his causal opinion.  The trial court held that this lack did not render McGwin’s testimony unreliable and inadmissible.  Id. at 1090.

One of the three studies considered by McGwin was his own published paper.  G. McGwin, Jr., M. Vaphiades, T. Hall, C. Owsley, ‘‘Non-arteritic anterior ischaemic optic neuropathy and the treatment of erectile dysfunction,’’ 90 Br. J. Ophthalmol. 154 (2006)[“McGwin 2006”].    The MDL court noted that McGwin had stated that his paper reported an odds ratio (OR) of 1.75, with a 95% confidence interval (CI), 0.48 to 6.30.  Id. at 1080.  The study also presented multiple subgroup analyses of men who had reported Viagra use after a history of heart attack (OR = 10.7) or hypertension (OR = 6.9), but the MDL court did not provide p-values or confidence intervals for the subgroup analysis results.

Curiously, Judge Magnuson eschewed the guidance of the Reference Manual on Scientific Evidence in dealing with sampling estimates of means or proportions.  The Reference Manual on Scientific Evidence (2d ed. 2000) urges that:

“[w]henever possible, an estimate should be accompanied by its standard error.”

RMSE 2d ed. at 117-18.  The third edition conveys the same basic message:

“What is the standard error? The confidence interval?

An estimate based on a sample is likely to be off the mark, at least by a small amount, because of random error. The standard error gives the likely magnitude of this random error, with smaller standard errors indicating better estimates.”

RMSE 3d ed. at 243.

The point of the RMSE’s guidance is, of course, that the standard error, or the confidence interval (CI) based upon a specified number of standard errors, is an important component of the sample statistic, without which the sample estimate is virtually meaningless.  Just as a narrative statement should not be truncated, a statistical or numerical expression should not be unduly abridged.

The statistical data on which McGwin based his opinion were readily available from McGwin 2006:

“Overall, males with NAION were no more likely to report a history of Viagra … use compared to similarly aged controls (odds ratio (OR) 1.75, 95% confidence interval (CI) 0.48 to 6.30).  However, for those with a history of myocardial infarction, a statistically significant association was observed (OR 10.7, 95% CI 1.3 to 95.8). A similar association was observed for those with a history of hypertension though it lacked statistical significance (OR 6.9, 95% CI 0.8 to 63.6).”

McGwin 2006, at 154.  Following the RMSE’s guidance would have assisted the MDL court in its gatekeeping responsibility in several distinct ways.  First, the court would have focused on how wide the 95% confidence intervals were.  The width of the intervals pointed to statistical imprecision and instability in the point estimates urged by McGwin.  Second, the MDL court would have confronted the extent to which there were multiple ad hoc subgroup analyses in McGwin’s paper.  See Newman v. Motorola, Inc., 218 F. Supp. 2d 769, 779 (D. Md. 2002) (“It is not good scientific methodology to highlight certain elevated subgroups as significant findings without having earlier enunciated a hypothesis to look for or explain particular patterns.”)  Third, the court would have confronted the extent to which the study’s validity was undermined by several potent biases.  Statistical significance was the least of the problems faced by McGwin 2006.
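The ad hoc subgroup problem is easy to quantify. If each of several independent subgroup analyses is tested at the 5% level, the chance of at least one nominally “significant” finding arising by chance alone grows rapidly (a generic illustration of multiple-testing inflation, not a reanalysis of McGwin 2006):

```python
# Family-wise error rate: probability of at least one false-positive
# "significant" result among k independent tests, each at the 0.05 level.
def family_wise_error(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {family_wise_error(k):.3f}")
```

With ten independent subgroup looks, the chance of at least one spurious “significant” association already exceeds 40%.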

The second study considered and relied upon by McGwin was referred to as Margo & French.  McGwin cited this paper for an “elevated OR of 1.10,” id. at 1081, but again, had the court engaged with the actual evidence, it would have found that McGwin had cherry-picked the data he chose to emphasize.  The Margo & French study was a retrospective cohort study using the National Veterans Health Administration’s pharmacy and clinical databases.  C. Margo & D. French, ‘‘Ischemic optic neuropathy in male veterans prescribed phosphodiesterase-5 inhibitors,’’ 143 Am. J. Ophthalmol. 538 (2007).  There were two outcomes ascertained: NAION and “possible” NAION.  The relative risk of NAION among men prescribed a PDE-5 inhibitor (the class to which Viagra belongs) was 1.02 (95% confidence interval [CI] 0.92 to 1.12).  In other words, the Margo & French paper had very high statistical precision, and it reported essentially no increased risk at all.  Judge Magnuson uncritically cited McGwin’s endorsement of a risk ratio that included “possible” NAION cases, which did not bode well for a gatekeeping process that is supposed to protect against speculative evidence and conclusions.
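The gap in precision between the two studies can be made concrete by back-calculating the standard error of the log risk ratio from each reported 95% interval (a standard reconstruction, on the assumption that the intervals were computed on the log scale with a 1.96 multiplier):

```python
from math import log

def se_from_ci(lower, upper, z=1.96):
    """Standard error of the log ratio implied by a 95% confidence
    interval that was computed on the log scale."""
    return (log(upper) - log(lower)) / (2 * z)

se_mcgwin = se_from_ci(0.48, 6.30)  # McGwin 2006: OR 1.75 (0.48-6.30)
se_margo = se_from_ci(0.92, 1.12)   # Margo & French: RR 1.02 (0.92-1.12)
```

The implied standard error for McGwin 2006 (about 0.66 on the log scale) is roughly thirteen times that of Margo & French (about 0.05): one estimate is consistent with anything from a halving of risk to a six-fold increase, while the other pins the risk ratio near 1.0.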

McGwin’s citation of Margo & French for the proposition that men who had taken the PDE-5 inhibitors had a 10% increased risk was wrong on several counts.  First, he relied upon an outcome measure that included ‘‘possible’’ cases of NAION.  Second, he completely ignored the sampling error that is captured in the confidence interval.  The MDL court failed to note or acknowledge the p-value or confidence interval for any result in Margo & French. The consideration of random error was not an optional exercise for the expert witness or the court; nor was ignoring it a methodological choice that simply went to the ‘‘disagreement among experts.’’

The Viagra MDL court not only lost its way by ignoring the guidance of the RMSE, it appeared to confuse the magnitude of the associations with the concept of statistical significance.  In the midst of the discussion of statistical significance, the court digressed to address the notion that the small relative risk in Margo & French might mean that no plaintiff could show specific causation, and then in the same paragraph returned to state that ‘‘persuasive authority’’ supported the notion that the lack of statistical significance did not detract from the reliability of a study.  Id. at 1081 (citing In re Phenylpropanolamine (PPA) Prods. Liab. Litig., MDL No. 1407, 289 F.Supp.2d 1230, 1241 (W.D.Wash. 2003)).  The magnitude of the observed odds ratio is an independent concept from that of whether an odds ratio as extreme or more so would have occurred by chance if there really was no elevation.

Citing one case, at odds with a great many others, however, did not create an epistemic warrant for ignoring the lack of statistical significance.  The entire notion of citing caselaw for the meaning and importance of statistical significance in drawing inferences is wrongheaded.  Even more to the point, the lack of statistical significance in the key study in the PPA litigation did not detract from the reliability of the study, although other features of that study certainly did.  The lack of statistical significance in the PPA study did, however, detract from the reliability of the inference from the study’s estimate of “effect size” to a conclusion of causal association.  Indeed, nowhere in the key PPA study did its authors draw a causal conclusion with respect to PPA ingestion and hemorrhagic stroke.  See Walter Kernan, Catherine Viscoli, Lawrence Brass, Joseph Broderick, Thomas Brott, Edward Feldmann, Lewis Morgenstern, Janet Lee Wilterdink, and Ralph Horwitz, ‘‘Phenylpropanolamine and the Risk of Hemorrhagic Stroke,’’ 343 New England J. Med. 1826 (2000).

The MDL court did attempt to distinguish the Eighth Circuit’s decision in Glastetter v. Novartis Pharms. Corp., 252 F.3d 986 (8th Cir. 2001), cited by the defense:

‘‘[I]n Glastetter … expert evidence was excluded because ‘rechallenge and dechallenge data’ presented statistically insignificant results and because the data involved conditions ‘quite distinct’ from the conditions at issue in the case. Here, epidemiologic data is at issue and the studies’ conditions are not distinct from the conditions present in the case. The Court does not find Glastetter to be controlling.’’

Id. at 1081 (internal citations omitted; emphasis in original).  This reading of Glastetter, however, misses important features of that case and the Parlodel litigation more generally.  First, the Eighth Circuit commented not only upon the rechallenge-dechallenge data, which involved arterial spasms, but upon an epidemiologic study of stroke, from which Ms. Glastetter suffered.  The Glastetter court did not review the epidemiologic evidence itself, but cited to another court, which did discuss and criticize the study for various ‘‘statistical and conceptual flaws.’’  See Glastetter, 252 F.3d at 992 (citing Siharath v. Sandoz Pharms.Corp., 131 F.Supp. 2d 1347, 1356-59 (N.D.Ga.2001)).  Glastetter was binding authority, and not so easily dismissed and distinguished.

The Viagra MDL court ultimately placed its holding upon the facts that:

‘‘the McGwin et al. and Margo et al. studies were peer-reviewed, published, contain known rates of error, and result from generally accepted epidemiologic research.’’

In re Viagra, 572 F. Supp. 2d at 1081 (citations omitted).  This holding was a judicial ipse dixit substituting for the expert witness’s ipse dixit.  There were no known rates of error for the systematic errors in the McGwin study, and the “known” rates of random error in McGwin 2006 were intolerably high.  The MDL court never considered any of the error rates, systematic or random, for the Margo & French study.  The court appeared to have abdicated its gatekeeping responsibility by delegating it to unknown peer reviewers, who never considered whether the studies at issue, in isolation or together, could support a causal health claim.

With respect to the last of the three studies considered, the Gorkin study, McGwin opined that it was too small and that its data were not suited to assessing temporal relationships.  Id.  The court did not appear inclined to go beyond McGwin’s ipse dixit.  The Gorkin study was hardly small: it was based upon more than 35,000 patient-years of observation in epidemiologic studies and clinical trials, and it provided an estimate of the incidence of NAION among users of Viagra that was not statistically different from that of the general U.S. population.  See L. Gorkin, K. Hvidsten, R. Sobel, and R. Siegel, ‘‘Sildenafil citrate use and the incidence of nonarteritic anterior ischemic optic neuropathy,’’ 60 Internat’l J. Clin. Pract. 500, 500 (2006).

Judge Magnuson did proceed, in his 2008 opinion, to exclude all the other expert witnesses put forward by the plaintiffs.  McGwin survived the defendant’s Rule 702 challenge largely because the court refused to consider the substantial random variability in the point estimates from the studies upon which he relied.  There was no consideration of the magnitude of random error or, for that matter, of the systematic error in McGwin’s study.  The MDL court found that the studies upon which McGwin relied had a known, and presumably acceptable, “rate of error.”  In fact, the court did not consider the random or sampling error in any of the three cited studies; it failed to consider multiple testing and interaction; and it failed to consider the actual and potential biases in the McGwin study.

Some legal commentators have argued that statistical significance should not be a litmus test.  David Faigman, Michael Saks, Joseph Sanders, and Edward Cheng, Modern Scientific Evidence: The Law and Science of Expert Testimony § 23:13, at 241 (‘‘Statistical significance should not be a litmus test. However, there are many situations where the lack of significance combined with other aspects of the research should be enough to exclude an expert’s testimony.’’)  While I agree that significance probability should not be evaluated in a mechanical fashion, without consideration of study validity, multiple testing, bias, confounding, and the like, hand waving about litmus tests does not license courts or commentators to ignore altogether the random variability in studies based upon population sampling.  The dataset in the Viagra litigation was not a close call.