This post has been updated here.
“Every time a bell rings an angel gets his wings”
… It’s a Wonderful Life (1946)

“Every time a plaintiff shows the smallest imaginable exposure, there is a full recovery.”
… The American tort system.
In 1984, Philadelphia County had a non-jury system for asbestos personal injury cases, with a right to “appeal” for a de novo trial with a jury. The non-jury trials were a wonderful training ground for a generation of trial lawyers, and for a generation or two of testifying expert witnesses. When I started to try asbestos cases as a young lawyer, the plaintiffs’ counsel had already taught their expert witnesses to include the “each and every exposure” talismanic language in their direct examination testimonies on the causation of the plaintiffs’ condition. The litigation industry had figured out that this expression would help avoid a compulsory non-suit on proximate causation.
Back in those wild, woolly frontier days, I encountered the slick Dr. Joseph Sokolowski (“Sok”), a pulmonary physician in private practice in New Jersey. Sok, like many other pulmonary physicians in the Delaware Valley area, had seen civilian workers referred by the Philadelphia Naval Shipyard to be evaluated for asbestosis. When the plaintiff-friendly physicians diagnosed asbestosis, a few preferred firms would then pursue their claims under the Federal Employees Compensation Act (FECA). The United States government would notify the workers of their occupational disease, and urge them to pursue claims against the government’s outside vendors of asbestos-containing materials, with a reminder that the government had a lien against any civil action recovery. The federal government thus made common cause with the niche law practices of workers’ compensation lawyers,1 and helped launch the tsunami of asbestos litigation.2
Sok was perfect for his role in the federal kick-back scheme. He could deliver the most implausible testimony, and weather brutal cross-examination without flinching. He had the face of a choir boy, and his service as an outside examiner for the Navy Yard employees gave his diagnoses the apparent imprimatur of the federal government. Although Sok had no real understanding of epidemiology, he could readily master the Selikoff litany of 5-10-50, for relative risks for lung cancer, from asbestos alone (supposedly), from smoking alone, and from asbestos and smoking combined, respectively. And he similarly mastered his lines that “each and every exposure” is substantial, when pressed on whether and how exposure to a minor vendor’s product was a substantial factor. Back in those days, before Johns-Manville (JM) Corporation went bankrupt, honest witnesses at the Navy Yard acknowledged that JM supplied the vast majority of asbestos products, but that testimony changed literally over the course of a trial day, when the plaintiffs’ bar learned of the JM bankruptcy.
It was into this topsy-turvy litigation world that I was thrown. I had the sense that there was no basis for the “each and every exposure” opinion, but my elders at the defense bar seemed to avoid the opinion studiously on cross-examination. I recall co-defendants’ counsels’ looks of horror and disapproval when I broached the topic in my first cross-examination. Sok had known to incorporate the “each and every exposure” opinion into his direct testimony, but he had no intelligible response to my question about what possible basis there was for the opinion. “Well, we have to blame each and every exposure because we have no way to distinguish among exposures.” I could not let it lie there, and so I asked: “So your opinion about each and every exposure is based upon your ignorance?” My question was quickly met with an objection, and just as quickly with a rather loud and disapproving, “Sustained!” When Sok finished his testimony, I moved to strike his substantial factor opinion as having no foundation, but my motion was met with judicial annoyance and apathy.
And so I learned that science and logic had nothing to do with asbestos litigation. Some determined defense counsel persevered, however, and in the face of over one hundred bankruptcies,3 a few courts started to take the evidence and arguments against the “every exposure” testimony seriously. Last week, the New York Court of Appeals, New York’s highest court, agreed to state out loud that the plaintiffs’ “every exposure” theory had no clothes, no foundation, and no science. Juni v. A.O. Smith Water Products Co., No. 123, N.Y. Court of Appeals (Nov. 27, 2018).4
In a short, concise opinion, with a single dissent, the Court held that plaintiffs’ evidence (any exposure, no matter how trivial) in a mesothelioma death case was “insufficient as a matter of law to establish that respondent Ford Motor Co.’s conduct was a proximate cause of the decedent’s injuries.” The ruling affirmed the First Department’s affirmance of a trial court’s judgment notwithstanding the $11 million jury verdict against Ford.5 Arguing for the proposition that every exposure is substantial, over three dozen scientists, physicians, and historians, most of whom regularly support and testify for the litigation industry, filed a brief in support of the plaintiffs.6 The Atlantic Legal Foundation filed an amicus brief on behalf of several scientists,7 and I had the privilege of filing an amicus brief on behalf of the Coalition for Litigation Justice and nine other organizations in support of Ford’s positions.8
It has been 34 years since I first encountered the “every exposure is substantial” dogma in a Philadelphia courtroom. Sometimes in litigation, it takes a long time for the truth to come out.
1 E.g., Shein and Brookman; Greitzer & Locks; both of Philadelphia.
2 Encouraging litigation against its suppliers, the federal government pulled off a coup of misdirection. First, it deflected public censure from the Navy and other governmental branches for its own carelessness in the use, installation, and removal of asbestos-containing insulations. Second, the government winnowed the ranks of older, better compensated workers. Third, and most diabolically, the government, which was self-insured for FECA claims, recovered most of its outlay when its former employees recovered judgments or settlements against the government’s outside asbestos product vendors. “The United States Government’s Role in the Asbestos Mess” (Jan. 31, 2012). See also Walter Olson, “Asbestos awareness pre-Selikoff,” Point of Law (Oct. 19, 2007); “The U.S. Navy and the asbestos calamity” Point of Law (Oct. 9, 2007).
3 Rand Corporation, “Bankruptcy Trusts Complicate the Outcomes of Asbestos Lawsuits” (2015).
4 The plaintiffs were represented by Alani Golanski of Weitz & Luxenberg LLP.
5 See also Oded Burger, “New York’s Highest Court Upholds Defense Judgment as a Matter of Law Based on Lack of Sufficient Scientific Evidence,” Asbestos Case Tracker (Nov. 27, 2018); Ryan Boysen, “Ford Not Liable In Asbestos Death Suit, NY High Court Says,” Law360 (Nov. 27, 2018).
6 Abby Lippman, Annie Thebaud Mony, Arthur L. Frank, Barry Castleman, Bruce P. Lanphear, Celeste Monforton, Colin L. Soskolne, Daniel Thau Teitelbaum, Dario Consonni, Dario Mirabelli, David Egilman, David F. Goldsmith, David Ozonoff, David Rosner, Fiorella Belpoggi, James Huff, John Heinzow, John M. Dement, John Coulter Maddox, Karl T. Kelsey, Kathleen Ruff, Kenneth D. Rosenman, L. Christine Oliver, Laura Welch, Leslie Thomas Stayner, Morris Greenberg, Nachman Brautbar, Philip J. Landrigan, Xaver Baur, Hans-Joachim Woitowitz, Bice Fubini, Richard Kradin, T.K. Joshi, Theresa S. Emory, Thomas H. Gassert, Tony Fletcher, and Yv Bonnier Viger.
7 John Henderson Duffus, Ronald E. Gots, Arthur M. Langer, Robert Nolan, Gordon L. Nord, Alan John Rogers, and Emanuel Rubin.
8 Amici Curiae Brief of Coalition for Litigation Justice, Inc., Business Council of New York State, Lawsuit Reform Alliance of New York, New York Insurance Association, Inc., Northeast Retail Lumber Association, National Association of Manufacturers, Chamber of Commerce of the U.S.A., American Tort Reform Association, American Insurance Association, and NFIB Small Business Legal Center Supporting Defendant-Respondent Ford Motor Company.
ABERRANT DECISIONS
The Daubert trilogy and the statutory revisions to Rule 702 have not brought universal enlightenment. Many decisions reflect a curmudgeonly and dismissive approach to gatekeeping.
The New Jersey Experience
Until recently, New Jersey law looked as though it favored vigorous gatekeeping of invalid expert witness opinion testimony. The law as applied, however, was another matter, with most New Jersey judges keen to find ways to escape the logical and scientific implications of the articulated standards, at least in civil cases.1 For example, in Grassis v. Johns-Manville Corp., 248 N.J. Super. 446, 591 A.2d 671, 675 (App. Div. 1991), the intermediate appellate court discussed the possibility that confounders may lead to an erroneous inference of a causal relationship. Plaintiffs’ counsel claimed that occupational asbestos exposure causes colorectal cancer, but the available studies, inconsistent as they were, failed to assess the role of smoking, family history, and dietary factors. The court essentially shrugged its judicial shoulders and let a plaintiffs’ verdict stand, even though it was supported by expert witness testimony that had relied upon seriously flawed and confounded studies. Not surprisingly, 15 years after the Grassis case, the scientific community acknowledged what should have been obvious in 1991: the studies did not support a conclusion that asbestos causes colorectal cancer.2
This year, however, saw the New Jersey Supreme Court step in to help extricate the lower courts from their gatekeeping doldrums. In a case that involved the dismissal of plaintiffs’ expert witnesses’ testimony in over 2,000 Accutane cases, the New Jersey Supreme Court demonstrated how to close the gate on testimony that is based upon flawed studies and involves tenuous and unreliable inferences.3 There were other remarkable aspects of the Supreme Court’s Accutane decision. For instance, the Court put its weight behind the common-sense and accurate interpretation of Sir Austin Bradford Hill’s famous articulation of factors for causal judgment, which requires that sampling error, bias, and confounding be eliminated before assessing whether the observed association is strong, consistent, plausible, and the like.4
Cook v. Rockwell International
The litigation over radioactive contamination from the Colorado Rocky Flats nuclear weapons plant is illustrative of the retrograde tendency in some federal courts. The defense objected to plaintiffs’ expert witness, Dr. Clapp, whose study failed to account for known confounders.5 Judge Kane denied the challenge, claiming that the defense could:
“cite no authority, scientific or legal, that compliance with all, or even one, of these factors is required for Dr. Clapp’s methodology and conclusions to be deemed sufficiently reliable to be admissible under Rule 702. The scientific consensus is, in fact, to the contrary. It identifies Defendants’ list of factors as some of the nine factors or lenses that guide epidemiologists in making judgments about causation. (Ref. Guide on Epidemiology at 375.)”6
In Cook, the trial court or the parties or both missed the obvious references in the Reference Manual to the need to control for confounding. Certainly many other scientific sources could be cited as well. Judge Kane apparently took a defense expert witness’s statement that ecological studies do not account for confounders to mean that the presence of confounding does not render such studies unscientific. Id. True but immaterial. Ecological studies may be “scientific,” but they do not warrant inferences of causation. Some so-called scientific studies are merely hypothesis generating, preliminary, tentative, or data-dredging exercises. Judge Kane employed the flaws-are-features approach, and opined that ecological studies are merely “less probative” than other studies, and that the relative weights of studies do not render them inadmissible.7 This approach is, of course, a complete abdication of gatekeeping responsibility. First, studies themselves are not admissible; it is the expert witness whose testimony is challenged. The witness’s reliance upon studies is relevant to the Rule 702 and 703 analyses, but admissibility is not the issue. Second, Rule 702 requires that the proffered opinion be “scientific knowledge,” and ecological studies simply lack the necessary epistemic warrant to support a causal conclusion. Third, the trial court in Cook had to ignore the federal judiciary’s own reference manual’s warnings about the inability of ecological studies to provide causal inferences.8 The Cook case is part of an unfortunate trend to regard all studies as “flawed,” and their relative weights simply a matter of argument and debate for the litigants.9
Abilify
Another example of sloppy reasoning about confounding can be found in a recent federal trial court decision, In re Abilify Products Liability Litigation,10 where the trial court advanced a futility analysis. All observational studies have potential confounding, and so confounding is not an error but a feature. Given this simplistic position, it follows that failure to control for every imaginable potential confounder does not invalidate an epidemiologic study.11 From its nihilistic starting point, the trial court readily found that an expert witness could reasonably dispense with controlling for confounding factors of psychiatric conditions in studies of a putative association between the antipsychotic medication Abilify and gambling disorders.12
Under this sort of “reasoning,” some criminal defense lawyers might argue that since all human beings are “flawed,” we have no basis to distinguish sinners from saints. We have a long way to go before our courts are part of the evidence-based world.
1 In the context of a “social justice” issue such as whether race disparities exist in death penalty cases, the New Jersey courts have carefully considered confounding in their analyses. See In re Proportionality Review Project (II), 165 N.J. 206, 757 A.2d 168 (2000) (noting that bivariate analyses of race and capital sentences were confounded by missing important variables). Unlike the New Jersey courts (until the recent decision in Accutane), the Texas courts were quick to adopt the principles and policies of gatekeeping expert witness opinion testimony. See Merrell Dow Pharms., Inc. v. Havner, 953 S.W.2d 706, 714, 724 (Tex. 1997) (reviewing court should consider whether the studies relied upon were scientifically reliable, including consideration of the presence of confounding variables). Even some so-called Frye jurisdictions “get it.” See, e.g., Porter v. SmithKline Beecham Corp., No. 3516 EDA 2015, 2017 WL 1902905, at *6 (Phila. Super., May 8, 2017) (unpublished) (affirming exclusion of plaintiffs’ expert witness on epidemiology, under Frye test, for relying upon an epidemiologic study that failed to exclude confounding as an explanation for a putative association), affirming, Mem. Op., No. 03275, 2015 WL 5970639 (Phila. Ct. Com. Pl. Oct. 5, 2015) (Bernstein, J.), and Op. sur Appellate Issues (Phila. Ct. Com. Pl., Feb. 10, 2016) (Bernstein, J.).
3 In re Accutane Litig., ___ N.J. ___, ___ A.3d ___, 2018 WL 3636867 (2018); see “N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses” (Aug. 8, 2018).
4 2018 WL 3636867, at *20 (citing the Reference Manual 3d ed., at 597-99).
5 Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants next claim that Dr. Clapp’s study and the conclusions he drew from it are unreliable because they failed to comply with four factors or criteria for drawing causal inferences from epidemiological studies: accounting for known confounders … .”), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ (May 24, 2012). For another example of a trial court refusing to see through important qualitative differences between and among epidemiologic studies, see In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, at *33 (N.D. Ohio 2006) (reducing all studies to one level, and treating all criticisms as though they rendered all studies invalid).
6 Id.
7 Id.
8 RMSE3d at 561-62 (“[ecological] studies may be useful for identifying associations, but they rarely provide definitive causal answers”) (internal citations omitted); see also David A. Freedman, “Ecological Inference and the Ecological Fallacy,” in Neil J. Smelser & Paul B. Baltes, eds., 6 Internat’l Encyclopedia of the Social and Behavioral Sciences 4027 (2001).
9 See also McDaniel v. CSX Transportation, Inc., 955 S.W.2d 257 (Tenn. 1997) (considering confounding but holding that it was a jury issue); Perkins v. Origin Medsystems Inc., 299 F. Supp. 2d 45 (D. Conn. 2004) (striking reliance upon a study with uncontrolled confounding, but allowing expert witness to testify anyway).
10 In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291 (N.D. Fla. 2018).
11 Id. at 1322-23 (citing Bazemore as a purported justification for the court’s nihilistic approach); see Bazemore v. Friday, 478 U.S. 385, 400 (1986) (“Normally, failure to include variables will affect the analysis’ probativeness, not its admissibility.”).
12 Id. at 1325.
Appendix – Some Federal Court Decisions on Confounding
1st Circuit
Bricklayers & Trowel Trades Internat’l Pension Fund v. Credit Suisse Sec. (USA) LLC, 752 F.3d 82, 85 (1st Cir. 2014) (affirming exclusion of expert witness whose event study and causal conclusion failed to consider relevant confounding variables and information that entered market on the event date)
2d Circuit
In re “Agent Orange” Prod. Liab. Litig., 597 F. Supp. 740, 783 (E.D.N.Y. 1984) (noting that confounding had not been sufficiently addressed in a study of U.S. servicemen exposed to Agent Orange), aff’d, 818 F.2d 145 (2d Cir. 1987) (approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 484 U.S. 1004 (1988)
3d Circuit
In re Zoloft Prods. Liab. Litig., 858 F.3d 787, 793, 799 (3d Cir. 2017) (acknowledging that statistically significant findings occur in the presence of inadequately controlled confounding or bias; affirming the exclusion of statistical expert witness, Nicholas Jewell, in part for using an admittedly non-rigorous approach to adjusting for confounding by indication)
4th Circuit
Gross v. King David Bistro, Inc., 83 F. Supp. 2d 597 (D. Md. 2000) (excluding expert witness who opined shigella infection caused fibromyalgia, given the existence of many confounding factors that muddled the putative association)
5th Circuit
Kelley v. American Heyer-Schulte Corp., 957 F. Supp. 873 (W.D. Tex. 1997) (noting that observed association may be causal or spurious, and that confounding factors must be considered to distinguish spurious from real associations)
Brock v. Merrell Dow Pharms., Inc., 874 F.2d 307, 311 (5th Cir. 1989) (noting that “[o]ne difficulty with epidemiologic studies is that often several factors can cause the same disease.”)
6th Circuit
Nelson v. Tennessee Gas Pipeline Co., 1998 WL 1297690, at *4 (W.D. Tenn. Aug. 31, 1998) (excluding an expert witness who failed to take into consideration confounding factors), aff’d, 243 F.3d 244, 252 (6th Cir. 2001), cert. denied, 534 U.S. 822 (2001)
Adams v. Cooper Indus. Inc., 2007 WL 2219212, 2007 U.S. Dist. LEXIS 55131 (E.D. Ky. 2007) (differential diagnosis includes ruling out confounding causes of plaintiffs’ disease).
7th Circuit
People Who Care v. Rockford Bd. of Educ., 111 F.3d 528, 537-38 (7th Cir. 1997) (Posner, J.) (“a statistical study that fails to correct for salient explanatory variables, or even to make the most elementary comparisons, has no value as causal explanation and is therefore inadmissible in a federal court”) (educational achievement in multiple regression);
Sheehan v. Daily Racing Form, Inc., 104 F.3d 940 (7th Cir. 1997) (holding that expert witness’s opinion, which failed to correct for any potential explanatory variables other than age, was inadmissible)
Allgood v. General Motors Corp., 2006 WL 2669337, at *11 (S.D. Ind. 2006) (noting that confounding factors must be carefully addressed; holding that selection bias rendered expert testimony inadmissible)
9th Circuit
In re Bextra & Celebrex Marketing Sales Practices & Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1178-79 (N.D. Cal. 2007) (noting plaintiffs’ expert witnesses’ inconsistent criticism of studies for failing to control for confounders; excluding opinions that Celebrex at 200 mg/day can cause heart attacks, as failing to satisfy Rule 702)
Avila v. Willits Envt’l Remediation Trust, 2009 WL 1813125, 2009 U.S. Dist. LEXIS 67981 (N.D. Cal. 2009) (excluding expert witness’s opinion in part because of his failure to rule out confounding exposures and risk factors for the outcomes of interest), aff’d in relevant part, 633 F.3d 828 (9th Cir.), cert. denied, 132 S.Ct. 120 (2011)
Hendricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1158 (E.D. Wash. 2009) (“In general, epidemiology studies are probative of general causation: a relative risk greater than 1.0 means the product has the capacity to cause the disease. ‘Where the study properly accounts for potential confounding factors and concludes that exposure to the agent is what increases the probability of contracting the disease, the study has demonstrated general causation – that exposure to the agent is capable of causing [the illness at issue] in the general population.’”) (internal quotation marks and citation omitted)
Valentine v. Pioneer Chlor Alkali Co., Inc., 921 F. Supp. 666, 677 (D. Nev. 1996) (‘‘In summary, Dr. Kilburn’s study suffers from very serious flaws. He took no steps to eliminate selection bias in the study group, he failed to identify the background rate for the observed disorders in the Henderson community, he failed to control for potential recall bias, he simply ignored the lack of reliable dosage data, he chose a tiny sample size, and he did not attempt to eliminate so-called confounding factors which might have been responsible for the incidence of neurological disorders in the subject group.’’)
Claar v. Burlington No. RR, 29 F.3d 499 (9th Cir. 1994) (affirming exclusion of plaintiffs’ expert witnesses, and grant of summary judgment, when plaintiffs’ witnesses concluded that the plaintiffs’ injuries were caused by exposure to toxic chemicals, without investigating any other possible causes).
10th Circuit
Hollander v. Sandoz Pharms. Corp., 289 F.3d 1193, 1213 (10th Cir. 2002) (affirming exclusion in Parlodel case involving stroke; confounding makes case reports inappropriate bases for causal inferences, and even observational epidemiologic studies must be evaluated carefully for confounding)
D.C. Circuit
American Farm Bureau Fed’n v. EPA, 559 F.3d 512 (D.C. Cir. 2009) (noting that in setting particulate matter standards addressing visibility, agency should avoid relying upon data that failed to control for the confounding effects of humidity)
CONFOUNDING1
Back in 2000, several law professors wrote an essay in which they detailed some of the problems courts experienced in expert witness gatekeeping. Their article noted that judges easily grasped the problem of generalizing from animal evidence to human experience, and thus simplistically emphasized human (epidemiologic) data. In emphasizing the problems in toxicological evidence, however, judges missed problems of internal validity, such as confounding, in epidemiologic studies:
“Why do courts have such a preference for human epidemiological studies over animal experiments? Probably because the problem of external validity (generalizability) is one of the most obvious aspects of research methodology, and therefore one that non-scientists (including judges) are able to discern with ease – and then give excessive weight to (because whether something generalizes or not is an empirical question; sometimes things do and other times they do not). But even very serious problems of internal validity are harder for the untrained to see and understand, so judges are slower to exclude inevitably confounded epidemiological studies (and give insufficient weight to that problem). Sophisticated students of empirical research see the varied weaknesses, want to see the varied data, and draw more nuanced conclusions.”2
I am not sure that the problems are dependent in the fashion suggested by the authors, but their assessment seems fair enough: judges may be reluctant to break the seal on the black box of epidemiology, and they frequently lack the ability to make nuanced evaluations of the studies on which expert witnesses rely. Judges continue to miss important validity issues, perhaps because the adversarial process levels all studies to debating points in litigation.3
The frequent existence of validity issues undermines the partisan suggestion that Rule 702 exclusions are merely about “sufficiency of the evidence.” Sometimes, there is just too much of nothing to rise even to a problem of insufficiency. Some studies are “not even wrong.”4 Similarly, validity issues are an embarrassment to those authors who argue that we must assemble all the evidence and consider the entirety under ethereal standards, such as “weight of the evidence,” or “inference to the best explanation.” Sometimes, some or much of the available evidence does not warrant inclusion in the data set at all, and any causal inference is unacceptable.
Threats to validity come in many forms, but confounding is a particularly dangerous one. In claims that substances such as diesel fume or crystalline silica cause lung cancer, confounding is a huge problem. The proponents of the claims suggest relative risks in the range of 1.1 to 1.6 for such substances, but tobacco smoking results in relative risks in excess of 20, and some claim that passive smoking at home or in the workplace results in relative risks of the same magnitude as the risk ratios claimed for diesel particulate or silica. Furthermore, the studies behind these claims frequently involve exposures to other known or suspected lung carcinogens, such as arsenic, radon, dietary factors, asbestos, and others.
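The arithmetic behind this concern can be made concrete. The following sketch uses entirely hypothetical numbers (assumed for illustration, not drawn from any actual study): if smokers face roughly twenty times the nonsmoker lung cancer risk, then even a modest imbalance in smoking prevalence between "exposed" and "unexposed" groups generates an apparent relative risk squarely within the 1.1 to 1.6 range, with no true effect of the exposure at all.

```python
# Hypothetical illustration: a spurious relative risk produced solely by
# differential smoking prevalence, when the exposure has no true effect.

def group_risk(p_smoker: float, risk_smoker: float, risk_nonsmoker: float) -> float:
    """Overall disease risk in a group, as a prevalence-weighted average."""
    return p_smoker * risk_smoker + (1 - p_smoker) * risk_nonsmoker

# Assumed baseline risks: smokers at 20x the nonsmoker risk, per the RR > 20 figure.
risk_nonsmoker = 0.005
risk_smoker = 20 * risk_nonsmoker  # 0.10

# The exposure (say, diesel fume) does NOTHING; only smoking prevalence differs:
# 60% of the exposed group smokes, versus 50% of the unexposed group.
risk_exposed = group_risk(0.60, risk_smoker, risk_nonsmoker)
risk_unexposed = group_risk(0.50, risk_smoker, risk_nonsmoker)

crude_rr = risk_exposed / risk_unexposed
print(f"Apparent RR from confounding alone: {crude_rr:.2f}")  # ≈ 1.18
```

A ten-percentage-point difference in smoking prevalence is all it takes; larger imbalances push the spurious ratio further into the range litigants attribute to the exposure itself.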
Definition of Confounding
Confounding results from the presence of a so-called confounding (or lurking) variable, helpfully defined in the chapter on statistics in the Reference Manual on Scientific Evidence:
“confounding variable; confounder. A confounder is correlated with the independent variable and the dependent variable. An association between the dependent and independent variables in an observational study may not be causal, but may instead be due to confounding. See controlled experiment; observational study.”5
This definition suggests that the confounder need not be known to cause the dependent variable or outcome; the confounder need only be correlated with the outcome and with an independent variable, such as exposure. Furthermore, the confounder may operate so as to increase or decrease the estimated relationship between dependent and independent variables. A confounder that is known to be present typically is referred to as an “actual” confounder, as opposed to one that merely may be at work, known as a “potential” confounder. Even after exhausting known and potential confounders, studies may be affected by “residual” confounding, especially when the total array of causes of the outcome of interest is not understood, and these unknown causes are not randomly distributed between exposed and unexposed groups in epidemiologic studies. Litigation frequently involves diseases or outcomes with unknown causes, and so the reality of unidentified residual confounders is unavoidable.
In some instances, especially in studies of pharmaceutical adverse outcomes, there is the danger that the hypothesized outcome is also a feature of the underlying disease being treated. This phenomenon is known as confounding by indication, or as indication bias.6
Kaye and Freedman’s statistics chapter notes that confounding is a particularly important consideration when evaluating observational studies. In randomized clinical trials, one goal of the randomization is the elimination of the role of bias and confounding by the random assignment of exposures:
“2. Randomized controlled experiments
In randomized controlled experiments, investigators assign subjects to treatment or control groups at random. The groups are therefore likely to be comparable, except for the treatment. This minimizes the role of confounding.”7
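The quoted point can be demonstrated with a small simulation (the setup is assumed, purely for illustration): when subjects are assigned to treatment and control at random, a confounder such as smoking ends up distributed nearly equally across the two arms, so it cannot masquerade as a treatment effect.

```python
# A minimal simulation of randomization balancing a confounder across arms.
import random

random.seed(1)  # fixed seed so the run is reproducible

# A hypothetical population in which roughly 55% of subjects smoke.
subjects = [{"smoker": random.random() < 0.55} for _ in range(10_000)]

# Random assignment: shuffle, then split into treatment and control arms.
random.shuffle(subjects)
treatment, control = subjects[:5_000], subjects[5_000:]

p_treat = sum(s["smoker"] for s in treatment) / len(treatment)
p_ctrl = sum(s["smoker"] for s in control) / len(control)
print(f"Smoking prevalence: treatment {p_treat:.3f}, control {p_ctrl:.3f}")
# The two prevalences agree to within sampling error, which is precisely why
# randomized trials minimize confounding while observational studies do not.
```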
In observational studies, confounding may completely invalidate an association. Kaye and Freedman give an example from the epidemiologic literature:
“Confounding remains a problem to reckon with, even for the best observational research. For example, women with herpes are more likely to develop cervical cancer than other women. Some investigators concluded that herpes caused cancer: In other words, they thought the association was causal. Later research showed that the primary cause of cervical cancer was human papilloma virus (HPV). Herpes was a marker of sexual activity. Women who had multiple sexual partners were more likely to be exposed not only to herpes but also to HPV. The association between herpes and cervical cancer was due to other variables.”8
The problem identified as confounding by Freedman and Kaye cannot be dismissed as an issue that goes merely to the “weight” of the study; the confounding goes to the heart of the ability of the herpes studies to show an association that can be interpreted to be causal. Invalidity from confounding renders the studies “weightless” in any “weight of the evidence” approach. There are, of course, many ways to address confounding in studies: stratification, multivariate analyses, multiple regression, propensity scores, etc. Consideration of the propriety and efficacy of these methods is a whole other level of analysis, which does not arise unless and until the threshold question of confounding is addressed.
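Stratification, the simplest of the remedies just listed, can be sketched with hypothetical counts (assumed for illustration): the crude risk ratio looks elevated, but within each level of the confounder the ratio is exactly 1.0, revealing the crude "association" as pure confounding.

```python
# Hypothetical counts, stratified by smoking status:
# (exposed cases, exposed total, unexposed cases, unexposed total)
strata = {
    "smokers":    (120, 600, 100, 500),  # risk 0.20 in both arms
    "nonsmokers": (4,   400, 5,   500),  # risk 0.01 in both arms
}

def risk_ratio(a: int, n1: int, b: int, n0: int) -> float:
    """Risk ratio: (cases/total) among exposed over (cases/total) among unexposed."""
    return (a / n1) / (b / n0)

# Crude analysis: collapse the table over smoking status.
a = sum(s[0] for s in strata.values())   # 124 exposed cases
n1 = sum(s[1] for s in strata.values())  # 1000 exposed subjects
b = sum(s[2] for s in strata.values())   # 105 unexposed cases
n0 = sum(s[3] for s in strata.values())  # 1000 unexposed subjects
print(f"Crude RR: {risk_ratio(a, n1, b, n0):.2f}")  # ≈ 1.18, apparently elevated

# Stratified analysis: within each stratum the association vanishes.
for name, (a_s, n1_s, b_s, n0_s) in strata.items():
    print(f"{name}: RR = {risk_ratio(a_s, n1_s, b_s, n0_s):.2f}")  # 1.00 each
```

Multiple regression and propensity-score methods pursue the same goal, comparing like with like, but their propriety in any given study is the further level of analysis the text describes.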
Reference Manual on Scientific Evidence
The epidemiology chapter of the Second Edition of the Manual stated that ruling out confounding is an obligation of the expert witness who chooses to rely upon the study.9 Although the same chapter in the Third Edition occasionally waffles, its authors come down on the side of describing confounding as a threat to validity, which must be ruled out before the study can be relied upon. In one place, the authors indicate “care” is required, and that analysis for random error, confounding, and bias “should be conducted”:
“Although relative risk is a straightforward concept, care must be taken in interpreting it. Whenever an association is uncovered, further analysis should be conducted to assess whether the association is real or a result of sampling error, confounding, or bias. These same sources of error may mask a true association, resulting in a study that erroneously finds no association.”10
Elsewhere in the same chapter, the authors note that “chance, bias, and confounding” must be looked at, but again, the authors stop short of noting that these threats to validity must be eliminated:
“Three general categories of phenomena can result in an association found in a study to be erroneous: chance, bias, and confounding. Before any inferences about causation are drawn from a study, the possibility of these phenomena must be examined.”11
* * * * * * * *
“To make a judgment about causation, a knowledgeable expert must consider the possibility of confounding factors.”12
Eventually, however, the epidemiology chapter takes a stand, and an important one:
“When researchers find an association between an agent and a disease, it is critical to determine whether the association is causal or the result of confounding.”13
Mandatory Not Precatory
The better reasoned cases decided under Federal Rule of Evidence 702, and state-court analogues, follow the Reference Manual in making clear that confounding factors must be carefully addressed and eliminated. Failure to rule out the role of confounding renders a conclusion of causation, reached in reliance upon confounded studies, invalid.14
The inescapable mandate of Rules 702 and 703 is to require judges to evaluate the bases of a challenged expert witness’s opinion. Threats to internal validity, such as confounding, in a study may make reliance upon any given study, or an entire set of studies, unreasonable, which thus implicates Rule 703. Importantly, stacking up more invalid studies does not overcome the problem by presenting a heap of evidence, incompetent to show anything.
Pre-Daubert
Before the Supreme Court decided Daubert, few federal or state courts were willing to roll up their sleeves to evaluate the internal validity of relied upon epidemiologic studies. Issues of bias and confounding were typically dismissed by courts as issues that went to “weight, not admissibility.”
Judge Weinstein’s handling of the Agent Orange litigation, in the mid-1980s, marked a milestone in judicial sophistication and willingness to think critically about the evidence that was being funneled into the courtroom.15 The Bendectin litigation also was an important proving ground in which the defendant pushed courts to keep their eyes and minds open to issues of random error, bias, and confounding, when evaluating scientific evidence, on both pre-trial and on post-trial motions.16
Post-Daubert
When the United States Supreme Court addressed the admissibility of plaintiffs’ expert witnesses in Daubert, its principal focus was on the continuing applicability of the so-called Frye rule after the enactment of the Federal Rules of Evidence. The Court left the details of applying the then newly clarified “Daubert” standard to the facts of the case on remand to the intermediate appellate court. The Ninth Circuit, upon reconsidering the case, re-affirmed the trial court’s previous grant of summary judgment, on grounds of the plaintiffs’ failure to show specific causation.
A few years later, the Supreme Court itself engaged with the actual evidentiary record on appeal, in a lung cancer claim, which had been dismissed by the district court. Confounding was one among several validity issues in the studies relied upon by plaintiffs’ expert witnesses. The Court concluded that the plaintiffs’ expert witnesses’ bases did not individually or collectively support their conclusions of causation in a reliable way. With respect to one particular epidemiologic study, the Supreme Court observed that a study that looked at workers who “had been exposed to numerous potential carcinogens” could not show that PCBs cause lung cancer. General Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997).17
1 An earlier version of this post can be found at “Sorting Out Confounded Research – Required by Rule 702” (June 10, 2012).
2 David Faigman, David Kaye, Michael Saks, and Joseph Sanders, “How Good is Good Enough? Expert Evidence Under Daubert and Kumho,” 50 Case Western Reserve L. Rev. 645, 661 n.55 (2000).
3 See, e.g., In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, *33 (N.D.Ohio 2006) (reducing all studies to one level, and treating all criticisms as though they rendered all studies invalid).
4 R. Peierls, “Wolfgang Ernst Pauli, 1900-1958,” 5 Biographical Memoirs of Fellows of the Royal Society 186 (1960) (quoting Wolfgang Pauli’s famous dismissal of a particularly bad physics paper).
5 David Kaye & David Freedman, “Reference Guide on Statistics,” in Reference Manual on Scientific Evidence 211, 285 (3d ed. 2011) [hereafter the RMSE3d].
6 See, e.g., R. Didham, et al., “Suicide and Self-Harm Following Prescription of SSRIs and Other Antidepressants: Confounding By Indication,” 60 Br. J. Clinical Pharmacol. 519 (2005).
7 RMSE3d at 220.
8 RMSE3d at 219 (internal citations omitted).
9 Reference Guide on Epidemiology at 369-70 (2d ed. 2000) (“Even if an association is present, epidemiologists must still determine whether the exposure causes the disease or if a confounding factor is wholly or partly responsible for the development of the outcome.”).
10 RMSE3d at 567-68 (internal citations omitted).
11 RMSE3d at 572.
12 RMSE3d at 591 (internal citations omitted).
13 RMSE3d at 591.
14 Similarly, an exonerative conclusion of no association might be vitiated by confounding with a protective factor, not accounted for in a multivariate analysis. Practically, such confounding seems less prevalent than confounding that generates a positive association.
15 In re “Agent Orange” Prod. Liab. Litig., 597 F. Supp. 740, 783 (E.D.N.Y. 1984) (noting that confounding had not been sufficiently addressed in a study of U.S. servicemen exposed to Agent Orange), aff’d, 818 F.2d 145 (2d Cir. 1987) (approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 484 U.S. 1004 (1988).
16 Brock v. Merrell Dow Pharms., Inc., 874 F.2d 307, 311, modified on reh’g, 884 F.2d 166 (5th Cir. 1989) (noting that “[o]ne difficulty with epidemiologic studies is that often several factors can cause the same disease.”).
17 The Court’s discussion related to the reliance of plaintiffs’ expert witnesses upon, among other studies, Kuratsune, Nakamura, Ikeda, & Hirohata, “Analysis of Deaths Seen Among Patients with Yusho – A Preliminary Report,” 16 Chemosphere 2085 (1987).
One of the challenges of epidemiologic research is selecting the right outcome of interest to study. What seems like a simple and obvious choice can often be the most complicated aspect of the design of clinical trials or studies.1 Lurking in this choice of end point is a particular threat to validity in the use of composite end points, when the real outcome of interest is one constituent among multiple end points aggregated into the composite. There may, for instance, be strong evidence in favor of one of the constituents of the composite, but using the composite end point results to support a causal claim for a different constituent begs the question that needs to be answered, whether in science or in law.
The dangers of extrapolating from one disease outcome to another are well recognized in the medical literature. Remarkably, however, the problem received no meaningful discussion in the Reference Manual on Scientific Evidence (3d ed. 2011). The handbook designed to help judges decide threshold issues of admissibility of expert witness opinion testimony discusses the extrapolation from sample to population, from in vitro to in vivo, from one species to another, from high to low dose, and from long to short duration of exposure. The Manual, however, has no discussion of “lumping,” or of the appropriate (and inappropriate) use of composite or combined end points.
Composite End Points
Composite end points are typically defined, perhaps circularly, as a single group of health outcomes, which group is made up of constituent or single end points. Curtis Meinert defined a composite outcome as “an event that is considered to have occurred if any of several different events or outcomes is observed.”2 Similarly, Montori defined composite end points as “outcomes that capture the number of patients experiencing one or more of several adverse events.”3 Composite end points are also sometimes referred to as combined or aggregate end points.
Many composite end points are clearly defined for a clinical trial, and the component end points are specified. In some instances, the composite nature of an outcome may be subtle or be glossed over by the study’s authors. In the realm of cardiovascular studies, for example, investigators may look at stroke as a single endpoint, without acknowledging that there are important clinical and pathophysiological differences between ischemic strokes and hemorrhagic strokes (intracerebral or subarachnoid). The Fletchers’ textbook4 on clinical epidemiology gives the example:
“In a study of cardiovascular disease, for example, the primary outcomes might be the occurrence of either fatal coronary heart disease or non-fatal myocardial infarction. Composite outcomes are often used when the individual elements share a common cause and treatment. Because they comprise more outcome events than the component outcomes alone, they are more likely to show a statistical effect.”
Utility of Composite End Points
The quest for statistical “power” is often cited as a basis for using composite end points. Reduction in the number of “events,” such as myocardial infarction (MI), through improvements in medical care has led to decreased rates of MI in studies and clinical trials. These low event rates have caused power problems for clinical trialists, who have responded by turning to composite end points to capture more events. Composite end points permit smaller sample sizes and shorter follow-up times without sacrificing power, that is, the ability to detect a statistically significant increased rate of a prespecified size at a given Type I error rate. Increasing study power, while reducing sample size or observation time, is perhaps the most frequently cited rationale for using composite end points.
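The power rationale can be illustrated with the standard normal-approximation sample-size formula for comparing two proportions. The event rates below are invented for illustration: a composite that quadruples the event rate, at the same relative effect, cuts the required enrollment dramatically:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Normal-approximation sample size per group for detecting p1 vs. p2
    # with a two-sided test at significance level alpha and the given power.
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Single end point (e.g., MI alone): 2% vs. 3% event rates.
print(n_per_group(0.02, 0.03))  # several thousand subjects per group
# Composite end point with the same relative effect: 8% vs. 12%.
print(n_per_group(0.08, 0.12))  # under a thousand subjects per group
```

The gain in power is real, but it says nothing about whether the effect is shared by each component of the composite.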
Competing Risks
Another reason sometimes offered in support of using composite end points is that composites provide a strategy to avoid the problem of competing risks.5 Death (any cause) is sometimes added to a distinct clinical morbidity because patients who are taken out of the trial by death are “unavailable” to experience the morbidity outcome.
Multiple Testing
By aggregating several individual end points into a single pre-specified outcome, trialists can avoid corrections for multiple testing. Trials that seek data on multiple outcomes, or on multiple subgroups, inevitably raise concerns about the appropriate choice of the significance level (alpha) for the statistical tests used to determine whether to reject the null hypothesis. According to some authors, “[c]omposite endpoints alleviate multiplicity concerns”:
“If designated a priori as the primary outcome, the composite obviates the multiple comparisons associated with testing of the separate components. Moreover, composite outcomes usually lead to high event rates thereby increasing power or reducing sample size requirements. Not surprisingly, investigators frequently use composite endpoints.”6
Other authors have similarly acknowledged that the need to avoid false positive results from multiple testing is an important rationale for composite end points:
“Because the likelihood of observing a statistically significant result by chance alone increases with the number of tests, it is important to restrict the number of tests undertaken and limit the type 1 error to preserve the overall error rate for the trial.”7
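The multiplicity concern quoted above is simple arithmetic: with several independent tests each run at alpha = 0.05, the chance of at least one false positive grows quickly. A minimal sketch:

```python
def familywise_error(k, alpha=0.05):
    # Probability of at least one false positive across k independent
    # tests, each conducted at significance level alpha.
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10):
    print(k, round(familywise_error(k), 3))
# 1 -> 0.05, 5 -> 0.226, 10 -> 0.401
# A single pre-specified composite keeps the trial at one test, whereas
# testing each component separately would require a correction
# (e.g., Bonferroni: test each component at alpha / k).
```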
Indecision about an Appropriate Single Outcome
The International Conference on Harmonization suggests that the inability to select a single outcome variable may lead to the adoption of a composite outcome:
“If a single primary variable cannot be selected …, another useful strategy is to integrate or combine the multiple measurements into a single or composite variable.”8
The “indecision” rationale has also been criticized as “generally not a good reason to use a composite end point.”9
Validity of Composite End Points
The validity of composite end points depends upon methodological assumptions, which will have to be made at the time of the study design and protocol creation. After the data are collected and analyzed, the assumptions may or may not be supported. Among the supporting assumptions about the validity of using composites are:10
similarity in patient importance for included component end points,
similarity of association size of the components, and
similarity in the number of events across the components.
The use of composite end points can sometimes be appropriate in the “first look” at a class of diseases or disorders, with the understanding that further research will sort out and refine the associated end point. Research into the causes of human birth defects, for instance, often starts out with a look at “all major malformations,” before focusing on specific organ and tissue systems. To some extent, the legal system, in its gatekeeping function, has recognized the dangers and invalidity of lumping in the epidemiology of birth defects.11 The Frischhertz decision, for instance, acknowledged that, given the clear evidence that different birth defects arise at different times, based upon interference with different embryological processes, “lumping” of end points was methodologically inappropriate. 2012 U.S. Dist. LEXIS 181507, at *8 (citing Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000), aff’d, 247 F.3d 240 (5th Cir. 2001) (unpublished)).
The Chambers decision involved a challenge to the causation opinion of a frequent litigation industry witness, Peter Infante,12 who attempted to defend his opinion about benzene and chronic myelogenous leukemia (CML) based upon epidemiology of benzene and acute myelogenous leukemia. Plaintiffs’ witnesses and counsel sought to evade the burden of producing evidence of a CML association by pointing to a study that reported “excess leukemias,” without specifying the relevant type. Chambers, 81 F. Supp. 2d at 664. The trial court, however, perspicaciously recognized the claimants’ failure to identify relevant evidence of the specific association needed to support the causal claim.
The Frischhertz and Chambers cases are hardly unique. Several state and federal courts have concurred in the context of cancer causation claims.13 In the context of birth defects litigation, the Public Affairs Committee of the Teratology Society has weighed in with strong guidance that counsels against extrapolation between different birth defects in litigation:
“Determination of a causal relationship between a chemical and an outcome is specific to the outcome at issue. If an expert witness believes that a chemical causes malformation A, this belief is not evidence that the chemical causes malformation B, unless malformation B can be shown to result from malformation A. In the same sense, causation of one kind of reproductive adverse effect, such as infertility or miscarriage, is not proof of causation of a different kind of adverse effect, such as malformation.”14
The threat to validity in attributing a suggested risk for a composite end point to all included component end points is not, unfortunately, recognized by all courts. The trial court, in Ruff v. Ensign-Bickford Industries, Inc.,15 permitted plaintiffs’ expert witness to reanalyze a study by grouping together two previously distinct cancer outcomes to generate a statistically significant result. The result in Ruff is disappointing, but not uncommon; it is also surprising, considering the guidance provided by the American Law Institute’s Restatement:
“Even when satisfactory evidence of general causation exists, such evidence generally supports proof of causation only for a specific disease. The vast majority of toxic agents cause a single disease or a series of biologically-related diseases. (Of course, many different toxic agents may be combined in a single product, such as cigarettes.) When biological-mechanism evidence is available, it may permit an inference that a toxic agent caused a related disease. Otherwise, proof that an agent causes one disease is generally not probative of its capacity to cause other unrelated diseases. Thus, while there is substantial scientific evidence that asbestos causes lung cancer and mesothelioma, whether asbestos causes other cancers would require independent proof. Courts refusing to permit use of scientific studies that support general causation for diseases other than the one from which the plaintiff suffers unless there is evidence showing a common biological mechanism include Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1115-1116 (5th Cir. 1991) (applying Texas law) (epidemiologic connection between heavy-metal agents and lung cancer cannot be used as evidence that same agents caused colon cancer); Cavallo v. Star Enters., 892 F. Supp. 756 (E.D. Va. 1995), aff’d in part and rev’d in part, 100 F.3d 1150 (4th Cir. 1996); Boyles v. Am. Cyanamid Co., 796 F. Supp. 704 (E.D.N.Y. 1992). In Austin v. Kerr-McGee Ref. Corp., 25 S.W.3d 280, 290 (Tex. Ct. App. 2000), the plaintiff sought to rely on studies showing that benzene caused one type of leukemia to prove that benzene caused a different type of leukemia in her decedent. Quite sensibly, the court insisted that before plaintiff could do so, she would have to submit evidence that both types of leukemia had a common biological mechanism of development.”
Restatement (Third) of Torts § 28 cmt. c, at 406 (2010). Notwithstanding some of the Restatement’s excesses on other issues, its guidance on composites seems sane and consonant with the scientific literature.
Role of Mechanism in Justifying Composite End Points
A composite end point may make sense when the individual end points are biologically related, and the investigators can reasonably expect that the individual end points would be affected in the same direction, and approximately to the same extent:16
“Confidence in a composite end point rests partly on a belief that similar reductions in relative risk apply to all the components. Investigators should therefore construct composite endpoints in which the biology would lead us to expect similar effects across components.”
The important point, missed by some investigators and many courts, is that the assumption of similar “effects” must be tested by examining the individual component end points, and especially the end point that is the harm claimed by plaintiffs in a given case.
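The point can be made numerically. The hypothetical trial counts below (invented for illustration, with components assumed non-overlapping) show a “significant” composite result driven entirely by one component, while the component a claimant might actually care about shows no effect at all:

```python
from math import log, exp, sqrt

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    # Risk ratio with a standard log-scale 95% confidence interval.
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

n = 1000  # subjects per arm (hypothetical)
revasc    = (60, n, 100, n)  # component A: fewer revascularizations on treatment
mi        = (30, n, 30, n)   # component B: myocardial infarction, no effect
composite = (90, n, 130, n)  # A + B combined (assumed non-overlapping)

for label, tbl in [("composite", composite), ("revascularization", revasc), ("MI", mi)]:
    rr, lo, hi = risk_ratio_ci(*tbl)
    print(f"{label}: RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The composite confidence interval excludes 1.0, yet the MI risk ratio is exactly 1.0; inferring an MI benefit (or risk) from the composite would be the bait-and-switch the text describes.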
Methodological Issues
The acceptability of composite end points is often a delicate balance between the statistical power and efficiency gained and the reliability concerns raised by using the composite. As with any statistical or interpretative tool, the key questions turn on how the tool is used, and for what purpose. The reliability issues raised by the use of composites are likely to be highly contextual.
For instance, there is an important asymmetry between justifying the use of a composite for measuring efficacy and the use of the same composite for safety outcomes. A biological improvement in type 2 diabetes might be expected to lead to a reduction in all the macrovascular complications of that disease, but a medication for type 2 diabetes might have a very specific toxicity or drug interaction, which affects only one constituent end point among all macrovascular complications, such as myocardial infarction. The asymmetry between efficacy and safety outcomes is specifically addressed by cardiovascular epidemiologists in an important methodological paper:17
“Varying definitions of composite end points, such as MACE, can lead to substantially different results and conclusions. Therefore, the term MACE, in particular, should not be used, and when composite study end points are desired, researchers should focus separately on safety and effectiveness outcomes, and construct separate composite end points to match these different clinical goals.”
There are many clear, published statements that caution consumers of medical studies against being misled by claims based upon composite end points. Several years ago, for example, the British Medical Journal published a paper with six methodological suggestions for consumers of studies, one of which deals explicitly with composite end points:18
“Guide to avoid being misled by biased presentation and interpretation of data
1. Read only the Methods and Results sections; bypass the Discussion section
2. Read the abstract reported in evidence based secondary publications
3. Beware faulty comparators
4. Beware composite endpoints
5. Beware small treatment effects
6. Beware subgroup analyses”
The paper elaborates on the problems that arise from the use of composite end points:19
“Problems in the interpretation of these trials arise when composite end points include component outcomes to which patients attribute very different importance… .”
“Problems may also arise when the most important end point occurs infrequently or when the apparent effect on component end points differs.”
“When the more important outcomes occur infrequently, clinicians should focus on individual outcomes rather than on composite end points. Under these circumstances, inferences about the end points (which because they occur infrequently will have very wide confidence intervals) will be weak.”
Authors generally acknowledge that “[w]hen large variations exist between components the composite end point should be abandoned.”20
Methodological Issues Concerning Causal Inferences from Composite End Points to Individual End Points
Several authors have criticized pharmaceutical companies for using composite end points to “game” their trials. Composites allow smaller sample size, but they lend themselves to broader claims for outcomes included within the composite. The same criticism applies to attempts to infer that there is risk of an individual endpoint based upon a showing of harm in the composite endpoint.
“If a trial report specifies a composite endpoint, the components of the composite should be in the well-known pathophysiology of the disease. The researchers should interpret the composite endpoint in aggregate rather than as showing efficacy of the individual components. However, the components should be specified as secondary outcomes and reported beside the results of the primary analysis.”21
Virtually the entire field of epidemiology and clinical trial study has urged caution in inferring risk for a component end point from suggested risk in a composite end point:
“In summary, evaluating trials that use composite outcome requires scrutiny in regard to the underlying reasons for combining endpoints and its implications and has impact on medical decision-making (see below in Sect. 47.8). Composite endpoints are credible only when the components are of similar importance and the relative effects of the intervention are similar across components (Guyatt et al. 2008a).”22
Not only do important methodologists urge caution in the interpretation of composite end points,23 they emphasize a basic point of scientific (and legal) relevancy:
“[A] positive result for a composite outcome applies only to the cluster of events included in the composite and not to the individual components.”24
Even regular testifying expert witnesses for the litigation industry insist upon the “principle of full disclosure”:
“The analysis of the effect of therapy on the combined end point should be accompanied by a tabulation of the effect of the therapy for each of the component end points.”25
Gatekeepers in our judicial system need to be more vigilant against bait-and-switch inferences based upon composite end points. The quest for statistical power hardly justifies larding up an end point with irrelevant data points.
1 See, e.g., Milton Packer, “Unbelievable! Electrophysiologists Embrace ‘Alternative Facts’,” MedPage (May 16, 2018) (describing clinical trialists’ abandoning pre-specified intention-to-treat analysis).
2 Curtis Meinert, Clinical Trials Dictionary (Johns Hopkins Center for Clinical Trials 1996).
3 Victor M. Montori, et al., “Validity of composite end points in clinical trials,” 330 Brit. Med. J. 594, 596 (2005).
4 R. Fletcher & S. Fletcher, Clinical Epidemiology: The Essentials at 109 (4th ed. 2005).
5 Neaton, et al., “Key issues in end point selection for heart failure trials: composite end points,” 11 J. Cardiac Failure 567, 569a (2005).
6 Schulz & Grimes, “Multiplicity in randomized trials I: endpoints and treatments,” 365 Lancet 1591, 1593a (2005).
7 Freemantle & Calvert, “Composite and surrogate outcomes in randomized controlled trials,” 334 Brit. Med. J. 756, 756a – b (2007).
8 International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, “ICH harmonized tripartite guideline: statistical principles for clinical trials,” 18 Stat. Med. 1905 (1999).
9 Neaton, et al., “Key issues in end point selection for heart failure trials: composite end points,” 11 J. Cardiac Failure 567, 569b (2005).
10 Montori, et al., “Validity of composite end points in clinical trials,” 330 Brit. Med. J. 594, 596, Summary Point No. 2 (2005).
11 See “Lumpenepidemiology” (Dec. 24, 2012), discussing Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507 (E.D. La. 2012). Frischhertz was decided in the same month that a New York City trial judge ruled Dr. Shira Kramer out of bounds in the commission of similarly invalid lumping, in Reeps v. BMW of North America, LLC, 2012 NY Slip Op 33030(U), N.Y.S.Ct., Index No. 100725/08 (New York Cty. Dec. 21, 2012) (York, J.), 2012 WL 6729899, aff’d on rearg., 2013 WL 2362566, aff’d, 115 A.D.3d 432, 981 N.Y.S.2d 514 (2013), aff’d sub nom. Sean R. v. BMW of North America, LLC, ___ N.E.3d ___, 2016 WL 527107 (2016). See also “New York Breathes Life Into Frye Standard – Reeps v. BMW” (Mar. 5, 2013).
12 “Infante-lizing the IARC” (May 13, 2018).
13 Knight v. Kirby Inland Marine, 363 F. Supp. 2d 859, 864 (N.D. Miss. 2005), aff’d, 482 F.3d 347 (5th Cir. 2007) (excluding opinion of B.S. Levy on Hodgkin’s disease based upon studies of other lymphomas and myelomas); Allen v. Pennsylvania Eng’g Corp., 102 F.3d 194, 198 (5th Cir. 1996) (noting that evidence suggesting a causal connection between ethylene oxide and human lymphatic cancers is not probative of a connection with brain cancer); Current v. Atochem North America, Inc., 2001 WL 36101283, at *3 (W.D. Tex. Nov. 30, 2001) (excluding expert witness opinion of Michael Gochfeld, who asserted that arsenic causes rectal cancer on the basis of studies that show association with lung and bladder cancer; Hill’s consistency factor in causal inference does not apply to cancers generally); Exxon Corp. v. Makofski, 116 S.W.3d 176, 184-85 (Tex. App. Houston 2003) (“While lumping distinct diseases together as ‘leukemia’ may yield a statistical increase as to the whole category, it does so only by ignoring proof that some types of disease have a much greater association with benzene than others.”).
14 The Public Affairs Committee of the Teratology Society, “Teratology Society Public Affairs Committee Position Paper: Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421, 423 (2005).
15 168 F. Supp. 2d 1271, 1284–87 (D. Utah 2001).
16 Montori, et al., “Validity of composite end points in clinical trials,” 330 Brit. Med. J. 594, 595b (2005).
17 Kevin Kip, et al., “The problem with composite end points in cardiovascular studies,” 51 J. Am. Coll. Cardiol. 701, 701 (2008) (Abstract – Conclusions) (emphasis in original).
18 Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004) (emphasis added).
19 Id. at 1094b, 1095a.
20 Montori, et al., “Validity of composite end points in clinical trials,” 330 Brit. Med. J. 594, 596 (2005).
21 Schulz & Grimes, “Multiplicity in randomized trials I: endpoints and treatments,” 365 Lancet 1591, 1595a (2005) (emphasis added). These authors acknowledge that composite end points often lack clinical relevancy, and that the gain in statistical efficiency comes at the high cost of interpretational difficulties. Id. at 1593.
22 Wolfgang Ahrens & Iris Pigeot, eds., Handbook of Epidemiology 1840 (2d ed. 2014) (47.5.8 Use of Composite Endpoints).
23 See, e.g., Stuart J. Pocock, John J.V. McMurray, and Tim J. Collier, “Statistical Controversies in Reporting of Clinical Trials: Part 2 of a 4-Part Series on Statistics for Clinical Trials,” 66 J. Am. Coll. Cardiol. 2648, 2650-51 (2015) (“Interpret composite endpoints carefully.”)(“COMPOSITE ENDPOINTS. These are commonly used in CV RCTs to combine evidence across 2 or more outcomes into a single primary endpoint. But, there is a danger of oversimplifying the evidence by putting too much emphasis on the composite, without adequate inspection of the contribution from each separate component.”); Eric Lim, Adam Brown, Adel Helmy, Shafi Mussa, and Douglas G. Altman, “Composite Outcomes in Cardiovascular Research: A Survey of Randomized Trials,” 149 Ann. Intern. Med. 612, 612, 615-16 (2008) (“Individual outcomes do not contribute equally to composite measures, so the overall estimate of effect for a composite measure cannot be assumed to apply equally to each of its individual outcomes.”) (“Therefore, readers are cautioned against assuming that the overall estimate of effect for the composite outcome can be interpreted to be the same for each individual outcome.”); Freemantle, et al., “Composite outcomes in randomized trials: Greater precision but with greater uncertainty.” 289 J. Am. Med. Ass’n 2554, 2559a (2003) (“To avoid the burying of important components of composite primary outcomes for which on their own no effect is concerned, . . . the components of a composite outcome should always be declared as secondary outcomes, and the results described alongside the result for the composite outcome.”).
24 Freemantle & Calvert, “Composite and surrogate outcomes in randomized controlled trials.” 334 Brit. Med. J. 757a (2007).
25 Lem Moyé, “Statistical Methods for Cardiovascular Researchers,” 118 Circulation Research 439, 451 (2016).
The real Daedalus (not the musician), as every school child knows, was the creator of the Cretan Labyrinth, where the Minotaur resided. The Labyrinth had been the undoing of many Greeks and barbarians, until an Athenian, Theseus, took up the challenge of slaying the Minotaur. With the help of Ariadne’s thread, Theseus solved the labyrinthine puzzle and slew the Minotaur.
Theseus and the Minotaur on 6th-century black-figure pottery (Wikimedia Commons 2005)
Dædalus is also the Journal of the American Academy of Arts and Sciences. The Academy has, for over 230 years, addressed issues in both the humanities and the sciences. In the fall 2018 issue of Dædalus (volume 147, No. 4), the Academy has published a dozen essays by noted scholars, who report on the murky interface of science and law in the courtrooms of the United States. Several of the essays focus on the sorry state of forensic “science” in the criminal justice system, which has been the subject of several critical official investigations, only to be dismissed and downplayed by both the Obama and Trump administrations. Other essays address the equally sorry state of judicial gatekeeping in civil actions, with some limited suggestions on how the process of scientific fact finding might be improved. In any event, this issue, “Science & the Legal System,” is worth reading even if you do not agree with the diagnoses or the proposed therapies. There is still room for a collaboration between a modern-day Daedalus and Ariadne to help us find the way out of this labyrinth.
Introduction
Shari Seidman Diamond & Richard O. Lempert, “Introduction” (pp. 5–14)
Connecting Science and Law
Sheila Jasanoff, “Science, Common Sense & Judicial Power in U.S. Courts” (pp. 15-27)
Linda Greenhouse, “The Supreme Court & Science: A Case in Point,” (pp. 28–40)
Shari Seidman Diamond & Richard O. Lempert, “When Law Calls, Does Science Answer? A Survey of Distinguished Scientists & Engineers,” (pp. 41–60)
Accommodation or Collision: When Science and Law Meet
Jules Lobel & Huda Akil, “Law & Neuroscience: The Case of Solitary Confinement,” (pp. 61–75)
Rebecca S. Eisenberg & Robert Cook-Deegan, “Universities: The Fallen Angels of Bayh-Dole?” (pp. 76–89)
Jed S. Rakoff & Elizabeth F. Loftus, “The Intractability of Inaccurate Eyewitness Identification” (pp. 90–98)
Jennifer L. Mnookin, “The Uncertain Future of Forensic Science” (pp. 99–118)
Joseph B. Kadane and Jonathan J. Koehler, “Certainty & Uncertainty in Reporting Fingerprint Evidence” (pp. 119–134)
Communicating Science in Court
Nancy Gertner & Joseph Sanders, “Alternatives to Traditional Adversary Methods of Presenting Scientific Expertise in the Legal System” (pp. 135–151)
Daniel L. Rubinfeld & Joe S. Cecil, “Scientists as Experts Serving the Court” (pp. 152–163)
Valerie P. Hans & Michael J. Saks, “Improving Judge & Jury Evaluation of Scientific Evidence” (pp. 164–180)
Continuing the Dialogue
David Baltimore, David S. Tatel & Anne-Marie Mazza, “Bridging the Science-Law Divide” (pp. 181–194)
Differential etiology is a highfalutin term for a simple disjunctive syllogism, in which all disjuncts in the premise but one are eliminated. The syllogism is a persuasive argument for the one remaining disjunct, but only if all the other disjuncts are effectively eliminated. Otherwise, we are left with competing disjuncts, and no warrant for embracing the one for which someone is contending.
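The structure can be made explicit. As a sketch of my own (not language from any opinion), the differential-etiology argument is just the generalized disjunctive syllogism, which is valid only when every competing disjunct is actually negated:

```latex
% Valid only if each alternative cause C_2, ..., C_n is eliminated:
\[
C_1 \lor C_2 \lor \cdots \lor C_n,\qquad
\lnot C_2,\ \ldots,\ \lnot C_n
\;\;\therefore\;\; C_1
\]
```

Leave even one negation unproven, say the idiopathic disjunct, and the conclusion C_1 simply does not follow.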
Over 100 years ago, the United States Supreme Court recognized the need to eliminate all but the claimed cause in a simple negligence action brought by an injured railroad worker. In a unanimous decision, the Court declared:
“And where the testimony leaves the matter uncertain and shows that any one of half a dozen things may have brought about the injury, for some of which the employer is responsible and for some of which he is not, it is not for the jury to guess between these half a dozen causes and find that the negligence of the employer was the real cause, when there is no satisfactory foundation in the testimony for that conclusion. If the employe is unable to adduce sufficient evidence to show negligence on the part of the employer, it is only one of the many cases in which the plaintiff fails in his testimony, and no mere sympathy for the unfortunate victim of an accident justifies any departure from settled rules of proof resting upon all plaintiffs.”
Patton v. Texas & Pacific RR, 179 U.S. 658, 663-64 (1901).
Recently, the United States Court of Appeals for the Ninth Circuit recognized the need to rule out alternative factual explanations before a court could enter judgment on a claim of copyright infringement.1 Cobbler Nevada, LLC v. Thomas Gonzales, No. 17-35041 (9th Cir., Aug. 27, 2018). The facts of Cobbler Nevada are illustrative.
Someone with access to an IP address registered to Thomas Gonzales used BitTorrent to download a copy of “The Cobbler,” an Adam Sandler movie. Cobbler Nevada LLC sued Mr. Gonzales, not for bad taste, but for infringing on its copyright to the movie. Mr. Gonzales, however, was the owner of an adult foster home, in which several other people had access to Gonzales’ IP address. Cobbler Nevada had no evidence that eliminated the possibility of downloading by other people in the home.
An amended complaint accused Mr. Gonzales of directly infringing the copyright and, alternatively, of contributing to the infringement by not policing his own internet connection.
The panel affirmed the rejection of the infringement claim because the claimant had failed to rule out downloading by someone other than Gonzales:
“The direct infringement claim fails because Gonzales’ status as the registered subscriber of an infringing IP address, standing alone, does not create a reasonable inference that he is also the infringer… .”
Id. The panel reasoned that others in the household could have accessed Gonzales’ internet connection, and that the law did not impose a duty to secure the connection from a “frugal” neighbor.
In personal injury cases, the Ninth Circuit takes a very different, and thoroughly illogical, approach from its astute reasoning in Cobbler Nevada. In one Ninth Circuit case, the plaintiff claimed, without much of any supporting evidence, that he had sustained a drug-induced disease, when over 70 percent of cases of that disease were idiopathic. The trial court accurately diagnosed the situation as an impossible proof problem for the plaintiff because the differential etiology method could not eliminate idiopathic causes in the case before the court. Rule 702 led to the exclusion of the plaintiffs’ proffered opinions, and the trial court entered summary judgment for the defendants. The Ninth Circuit reversed in an ipse dixit judgment that threw logic to the wind. Wendell v. Johnson & Johnson, No. 09-cv-04124, 2014 WL 2943572, at *5 (N.D. Cal. June 30, 2014), rev’d sub nom. Wendell v. GlaxoSmithKline LLC, 858 F.3d 1227 (9th Cir. 2017).2
The two cases, Wendell and Cobbler Nevada, cannot be reconciled. The aberrant and costive reasoning of Wendell will give rise to unflattering speculation about the Circuit’s motivation. Perhaps the next edition of the Reference Manual on Scientific Evidence should have a chapter on elementary logic, to help avoid such embarrassing situations.
1 Jason Tashea, “9th Circuit rules that sharing IP address is insufficient for copyright infringement,” Am. Bar. Ass’n J. (Sept. 4, 2018).
2 For a lively vivisection of the Ninth Circuit’s decision in Wendell, see David L. Faigman & Jennifer Mnookin, “The Curious Case of Wendell v. GlaxoSmithKline LLC,” 48 Seton Hall L. Rev. 607 (2018).
“And you never ask questions
When God’s on your side”
Bob Dylan, “With God on Our Side” 1963.
Cases involving claims of personal injury have inspired some of the most dubious scientific studies in the so-called medical literature, but the flights of fancy in published papers are nothing compared with what is recorded in the annals of expert witness testimony. The weaker the medical claims, the more outlandish is the expert testimony proffered. Claims for personal injury supposedly resulting from mold exposure are no exception to the general rule. The expert witness opinion testimony in mold litigation has resulted in several commentaries1 and professional position papers,2 offered to curb the apparent excesses.
Ritchie Shoemaker, M.D., has been a regular expert witness for the mold lawsuit industry. Professional criticism has not deterred Shoemaker, although discerning courts have put the kibosh on some of Shoemaker’s testimonial adventures.3
Shoemaker cannot be everywhere, and so, in conjunction with the mold lawsuit industry, he has taken to certifying new expert witnesses. But how will Shoemaker and his protégés overcome the critical judicial reception?
Enter Divine Intervention
“Make thee an ark of gopher wood; rooms shalt thou make in the ark, and shalt pitch it within and without with pitch.”4
Some say the age of prophets, burning bushes, and the like is over, but perhaps not so. Maybe God speaks to expert witnesses to fill in the voids left by missing evidence. Consider the testimony of Dr. Scott W. McMahon, who recently testified that he was Shoemaker trained, and divinely inspired:
Q. Jumping around a little bit, Doctor, how did your interest in indoor environmental quality in general, and mold in particular, how did that come about?
A. I had — in 2009, I had been asked to give a talk at a medical society at the end of October and the people who were involved in it were harassing me almost on a weekly basis asking me what the title of my talk was going to be. I had spoken to the same society the previous four years. I had no idea what I was going to speak about. I am a man of faith, I’ve been a pastor and a missionary and other things, so I prayed about it and what I heard in my head verbatim was pediatric mold exposure colon the next great epidemic question mark. That’s what I heard in my head. And so because I try to live by faith, I typed that up as an email and said this is the name of my topic. And then I said, okay, God, you have ten weeks to teach me about this, and he did. Within three, four weeks maybe five, he had connected me to Dr. Shoemaker who was the leading person in the world at that time and the discoverer of this chronic inflammatory response.
*****
I am a man of faith, I’ve been a pastor and everything. And I realized that this was a real entity.
*****
Q. And do you attribute your decision or the decision for you to start Whole World Health Care also to be a divine intervention?
A. Well, that certainly started the process but I used my brain, too. Like I said, I went and I investigated Dr. Shoemaker, I wanted to make sure that his methods were real, that he wasn’t doing, you know, some sort of voodoo medicine and I saw that he wasn’t, that his scientific practice was standard. I mean, he changes one variable at a time in tests. He tested every step of the way. And I found that his conclusions were realistic. And then, you know, over the last few years, I’ve gathered my own data and I see that they confirm almost every one of his conclusions.
Q. Doctor, was there anything in your past or anything dealing with your family in terms of exposure to mold or other indoor health issues?
A. No, it was totally off my radar.
Q. *** I’m not going to go into great detail with respect to Dr. Shoemaker, but are you Shoemaker certified?
A. I am.
Deposition transcript of Dr. Scott W. McMahon, at pp.46-49, in Courcelle v. C.W. Nola Properties LLC, Orleans Parish, Louisiana No. 15-3870, Sec. 7, Div. F. (May 18, 2018).
You may be surprised that the examining lawyer did not ask about the voice in which God spoke. The examining lawyer seems to have accepted without further question that the voice was that of an adult male. Still, did the God entity speak in English, or in tongues? Was it a deep, resonant voice like Morgan Freeman’s in Bruce Almighty (2003)? Or was it a Yiddische voice like George Burns’s, in Oh God (1977)? Were there bushes burning when God spoke to McMahon? Or did the toast burn darker than expected?
Some might think that McMahon was impudent if not outright blasphemous for telling God that “He” had 10 weeks in which to instruct McMahon in the nuances of how mold causes human illness. Apparently, God was not bothered by this presumptuousness and complied with McMahon, which makes McMahon a special sort of prophet.
Of course, McMahon says he used his “brain,” in addition to following God’s instructions. But really why bother? Were there evidentiary or inferential gaps filled in by the Lord? The deposition does not address this issue.
In federal court, and in many state courts, an expert witness may base opinions on facts or data that are not admissible if, and only if, expert witnesses “in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject.”5
Have other expert witnesses claimed divine inspiration for opinion testimony? A quick PubMed search does not reveal any papers by God, or papers with God as a co-author. It is only a matter of time, however, before a judge, somewhere, takes judicial notice of divinely inspired expert witness testimony.
1 See, e.g., Howard M. Weiner, Ronald E. Gots, and Robert P. Hein, “Medical Causation and Expert Testimony: Allergists at this Intersection of Medicine and Law,” 12 Curr. Allergy Asthma Rep. 590 (2012).
2 See, e.g., Bryan D. Hardin, Bruce J. Kelman, and Andrew Saxon, “ACOEM Evidence-Based Statement: Adverse Human Health Effects Associated with Molds in the Indoor Environment,” 45 J. Occup. & Envt’l Med. 470 (2003).
3 See, e.g., Chesson v. Montgomery Mutual Insur. Co., 434 Md. 346, 75 A.3d 932, 2013 WL 5311126 (2013) (“Dr. Shoemaker’s technique, which reflects a dearth of scientific methodology, as well as his causal theory, therefore, are not shown to be generally accepted in the relevant scientific community.”); Young v. Burton, 567 F. Supp. 2d 121, 130-31 (D.D.C. 2008) (excluding Dr. Shoemaker’s theories as lacking general acceptance and reliability; listing Virginia, Florida, and Alabama as states in which courts have rejected Shoemaker’s theory).
4 Genesis 6:14 (King James translation).
5 Federal Rule of Evidence 703 (Bases of an Expert’s Opinion Testimony).
In many states, the so-called “learned treatise” doctrine creates a pseudo-exception to the rule against hearsay. The contents of such a treatise can be read to the jury, not for their truth, but for the jury to consider against the credibility of an expert witness who denies the truth of the treatise. Supposedly, some lawyers can understand the distinction between admitting the treatise’s contents for their truth and using them only to assess the credibility of an expert witness who denies that truth. Under the Federal Rules of Evidence, and in some states, the language of the treatise may be considered for its truth as well, but the physical treatise may not be entered into evidence. There are several serious problems with both the state and the federal versions of the doctrine.1
Legal on-line media recently reported about an appeal in the Pennsylvania Superior Court, which heard arguments in a case that apparently turned on allegations of trial court error in refusing to allow learned treatise cross-examination of a plaintiff’s expert witness in Pledger v. Janssen Pharms., Inc., Phila. Cty. Ct. C.P., April Term 2012, No. 1997. See Matt Fair, “J&J Urges Pa. Appeals Court To Undo $2.5M Risperdal Verdict,” Law360 (Aug. 8, 2018) (reporting on defendants’ appeal in Pledger, Pa. Super. Ct. nos. 2088 EDA 2016 and 2187 EDA 2016).
In Pledger, the plaintiff claimed that he developed gynecomastia after taking the defendants’ antipsychotic medication Risperdal. The defendants warned about gynecomastia, but the plaintiff claimed that they had not accurately quantified the rate of gynecomastia in the package insert.
From Mr. Fair’s reporting, readers can discern only one ground for appeal, namely whether the “trial judge improperly barred it from using a scientific article to challenge an expert’s opinion that the antipsychotic drug Risperdal caused an adolescent boy to grow breasts.” Without having heard the full oral argument, or having read the briefs, the reader cannot tell whether there were other grounds. According to Mr. Fair, defense counsel contended that the trial court’s refusal to allow the learned treatise “had allowed the [plaintiff’s] expert’s opinion to go uncountered during cross-examination.” The argument, according to Mr. Fair, continued:
“Instead of being able to confront the medical causation expert with an article that absolutely contradicted and undermined his opinion, the court instead admonished counsel in front of the jury and said, ‘In Pennsylvania, we don’t try cases by books, we try them by live witnesses’.”
The cross-examination at issue, on the other hand, related to whether gynecomastia could occur naturally in pre-pubertal boys. Plaintiffs’ expert witness, Dr. Mark Solomon, a plastic surgeon, opined that gynecomastia did not occur naturally, and defense counsel attempted to confront him with a “learned treatise,” an article from the Journal of Endocrinology, which apparently stated the contrary. Solomon, following the usual expert witness playbook, testified that he had not read the article (and why would a surgeon have read this endocrinology journal?). Defense counsel pressed, and according to Mr. Fair, the trial judge disallowed further inquiry on cross-examination. On appeal, the defendants argued that the trial judge violated the learned treatise rule that allows “scholarly articles to be used as evidence.” The plaintiffs contended, in defense of their judgment below, that the “learned treatise rule” does not allow “scholarly articles to simply be read verbatim into the record,” and that the defense had the chance to raise the article in the direct examination of its own expert witnesses.
The Law360 reporting is curious on several fronts. The assigned error would have only been in support of a challenge to the denial of a new trial, and in a Risperdal case, the defense would likely have made a motion for judgment notwithstanding the verdict, as well as for new trial. Although the appellate briefs are not posted online, the defense’s post-trial motions in Pledger v. Janssen Pharms., Inc., Phila. Cty. Ct. C.P., April Term 2012, No. 1997, are available. See Defendants’ Motions for Post-Trial Relief Pursuant to Pa.R.C.P. 227.1 (Mar. 6, 2015).
At least at the post-trial motion stage, the defendants clearly made both motions for judgment and for a new trial, as expected.
As for the preservation of the “learned treatise” issue, the entire assignment of error is described in a single paragraph (out of 116 paragraphs) in the post-trial motion, as follows:
27. Moreover, appearing to rely on Aldridge v. Edmunds, 750 A.2d 292 (Pa. 2000), the Court prevented Janssen from cross-examining Dr. Solomon with scientific authority that would undermine his position. See, e.g., Tr. 60:9-63:2 (p.m.). Aldridge, however, addresses the use of learned treatises in the direct examination, and it cites with approval the case of Cummings v. Borough of Nazareth, 242 A.2d 460, 466 (Pa. 1968) (plurality op.), which stated that “[i]t is entirely proper in examination and cross-examination for counsel to call the witness’s attention to published works on the matter which is the subject of the witness’s testimony.” Janssen should not have been so limited in its cross examination of Dr. Solomon.
In Cummings, the issue revolved around using manuals that contained industry standards for swimming pool construction, not the appropriateness of a learned scientific treatise. Cummings v. Nazareth Borough, 430 Pa. 255, 266-67 (Pa. 1968). The defense motion did not contend that defense counsel had laid the appropriate foundation for the learned treatise to be used. In any event, the trial judge wrote an opinion on the post-trial motions, in which he did not appear to address the learned treatise issue at all. Pledger v. Janssen Pharms., Inc., Phila. Ct. C.P., Op. sur post-trial motions (Aug. 10, 2017) (Djerassi, J.).
The Pennsylvania Supreme Court has addressed the learned treatise exception to the rule against hearsay on several occasions. Perhaps the leading case described the law as:
“well-settled that an expert witness may be cross-examined on the contents of a publication upon which he or she has relied in forming an opinion, and also with respect to any other publication which the expert acknowledges to be a standard work in the field. * * * In such cases, the publication or literature is not admitted for the truth of the matter asserted, but only to challenge the credibility of the witness’ opinion and the weight to be accorded thereto. * * * Learned writings which are offered to prove the truth of the matters therein are hearsay and may not properly be admitted into evidence for consideration by the jury.”
Majdic v. Cincinnati Mach. Co., 537 A.2d 334, 621-22 (Pa. 1988) (internal citations omitted).
The Law360 report is difficult to assess. Perhaps the reporting by Mr. Fair was non-eponymously unfair? There is no discussion of how the defense had laid its foundation. Perhaps the defense had promised “to connect up” by establishing the foundation of the treatise through a defense expert witness. If there had been a foundation established, or promised to be established, the post-trial motion would have, in the normal course of events, cited the transcript for the proffer of a foundation. And why did Mr. Fair report on the oral argument as though the learned treatise issue was the only issue before the court? Inquiring minds want to know.
Judge Djerassi’s opinion on post-trial motions was perhaps more notable for embracing some testimony on statistical significance from Dr. David Kessler, former Commissioner of the FDA, and now a frequent testifier for the lawsuit industry on regulatory matters. Judge Djerassi, in his opinion, stated:
“This statistically significant measure is shown in Table 21 and was within a chi-square rate of .02, meaning within a 98% chance of certainty. In Dr. Kessler’s opinion this is a statistically significant finding. (N.T. 1/29/15, afternoon, p. 27, lns. 10-11, p. 28, lns. 7-12).”
Post-trial opinion at p.11.2 Surely, the defense’s expert witnesses explained that the chi-square test did not yield a measure of certainty that the measured statistic was the correct value.
The trial court’s whopper was enough of a teaser to force me to track down Kessler’s testimony, which was posted to the internet by the plaintiffs’ law firm. Judge Djerassi’s erroneous interpretation of the p-value can indeed be traced to Kessler’s improvident testimony:
Q. And since 2003, what have you been doing at University of California San Francisco, sir?
A. Among other things, I am currently a professor of pediatrics, professor of epidemiology, professor of biostatistics.
Pledger Transcript, Thurs., Jan. 28, 2015, Vol. 3, Morning Session at 111:3-7.
A. What statistical significance means is it’s mathematical and scientific calculations, but when we say something is statistically significant, it’s unlikely to happen by chance. So that association is very likely to be real. If you redid this, general statistically significant says if I redid this and redid the analysis a hundred times, I would get the same result 95 of those times.
Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Morning Session at 80:18 – 81:2.
Q. So, sir, if we see on a study — and by the way, do the investigators of a study decided in their own criteria what is statistically significant? Do they assign what’s called a P value?
A. Exactly. So you can set it at 95, you can set it at 98, you can set it at 90. Generally, 95 significance level, for those of you who are mathematicians or scientifically inclined, it’s a P less than .05.
Q. As a general rule?
A. Yes.
Q. So if I see a number that is .0158, next to a dataset, that would mean that it occurs by chance less than two in 100. Correct?
A. Yes, that’s what the P value is saying.
Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Morning Session at 81:5-20
Q. … If someone — if something has a p-value of less than .02, the converse of it is that your 98 — .98, that would be 98 percent certain that the result is not by chance?
A. Yes. That’s a fair way of saying it.
Q. And if you have a p-value of .10, that means the converse of it is 90 percent, or 90 percent that it’s not by chance, correct?
A. Yes.
Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Afternoon Session at 7:14-22.
Q. Okay. And the last thing I’d like to ask about — sorry to keep going back and forth — is so if the jury saw a .0158, that’s of course less than .02, which means that it is 90 — almost 99 percent not by chance.
A. Yes. It’s statistically significant, as I would call it.
Pledger Transcript, Fri., Jan. 29, 2015, Vol. 4, Afternoon Session at 8:7-13.
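Dr. Kessler’s glosses invert the logic of significance testing. A p-value of .02 is the probability, computed on the assumption that there is no real effect, of obtaining data at least as extreme as those observed; it is not a 98 percent certainty that the result is real, and it does not mean that 95 of 100 replications would yield the same result. A minimal simulation (my own sketch, with illustrative names; nothing from the Pledger record) makes the correct reading concrete: even when chance alone is operating, p-values below .05 still turn up about five percent of the time.

```python
import math
import random

def two_sided_p(successes, n, p0=0.5):
    """Two-sided p-value for a binomial count, by normal approximation."""
    z = (successes - n * p0) / math.sqrt(n * p0 * (1 - p0))
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))) is the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(1)
n, trials = 1000, 2000
false_alarms = 0
for _ in range(trials):
    # The null hypothesis is true by construction: a fair coin, no real effect
    heads = sum(random.random() < 0.5 for _ in range(n))
    if two_sided_p(heads, n) < 0.05:
        false_alarms += 1

# Roughly 5% of these null "studies" come out statistically significant
print(false_alarms / trials)
```

Running the sketch prints a rate near 0.05: the significance threshold controls how often chance alone produces a “significant” result, which is a far cry from certifying any particular finding as “98 percent certain” to be real.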
1 See “Further Unraveling of the Learned Treatise Exception” (Sept. 29, 2010); “Unlearning The Learned Treatise Exception” (Aug. 21, 2010); “The New Wigmore on Learned Treatises” (Sept. 12, 2011); “‘Trust Me’ Rules of Evidence” (Oct. 18, 2012).
2 See also Djerassi opinion at p.13 n. 13 (“P<0.02 is the chi-square rate reflecting a data outcome within a 98% chance of certainty.”).