TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Reference Manual on Scientific Evidence on Relative Risk Greater Than Two For Specific Causation Inference

April 25th, 2015

The first edition of the Reference Manual on Scientific Evidence [Manual] was published in 1994, a year after the Supreme Court delivered its opinion in Daubert. The Federal Judicial Center organized and produced the Manual, in response to the consternation created by the Supreme Court’s mandate that federal trial judges serve as gatekeepers of the methodological propriety of testifying expert witnesses’ opinions. Considering the intellectual vacuum the Center had to fill, and the speed with which it had to work, the first edition was a stunning accomplishment.

In litigating specific causation in so-called toxic tort cases, defense counsel quickly embraced the Manual’s apparent endorsement of the doubling-of-the-risk argument, which would require relative risks in excess of two in order to draw inferences of specific causation in a given case. See Linda A. Bailey, Leon Gordis, and Michael D. Green, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 123, 150, 168 (Washington, DC, 1st ed. 1994) (“The relative risk from an epidemiological study can be adapted to this 50% plus standard to yield a probability or likelihood that an agent caused an individual’s disease. The threshold for concluding that an agent was more likely than not the cause of a disease is a relative risk greater than 2.0.”) (internal citations omitted).
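
The arithmetic behind the doubling argument is worth setting out explicitly. On the usual (and, as discussed below, contestable) assumption that the excess risk is spread evenly across the exposed group, the probability that a given exposed case is attributable to the exposure equals the attributable fraction among the exposed:

\[
P(\text{causation} \mid \text{exposed case}) \;=\; \frac{RR - 1}{RR},
\]

which exceeds 50% if and only if RR > 2. A relative risk of 1.5, for example, yields only (1.5 − 1)/1.5 ≈ 33%, while a relative risk of 3 yields about 67%.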

In the Second Edition of the Manual, the authorship of the epidemiology chapter shifted, and so did its treatment of doubling of the risk. By adopting a more nuanced analysis, the Second Edition deprived defense counsel of a readily citable source for the proposition that low relative risks do not support inferences of specific causation. The exact conditions for when and how the doubling argument should prevail were, however, left fuzzy and unspecified. See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 333, 348-49 (Washington, DC, 2d ed. 2000).

The latest edition of the Manual attempts to correct the failings of the Second Edition by introducing an explanation and a discussion of some of the conditions that might undermine an inference, or opposition thereto, of specific causation from the magnitude of relative risk. Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 549, 612 (Washington, DC, 3d ed. 2011).

The authors of the Manual now acknowledge that the doubling-of-risk inference has “a certain logic as far as it goes,” but point out that there are some “significant assumptions and important caveats that require explication.” Id.

What are the assumptions, according to the Manual?

First, and foremost, there must be “[a] valid study and risk estimate.” Id. (emphasis in original). The identification of this predicate assumption is, of course, correct, but the authors overlook that the assumption is often trivially satisfied by the legal context in which the doubling argument arises. For instance, in the Landrigan and Caterinicchio cases, cited below, the doubling issue arose not as an admissibility question of expert witness opinion, but on motions for directed verdict. In both cases, plaintiffs’ expert witnesses committed to opinions about plaintiffs’ being at risk from asbestos exposure, based upon studies that they identified. Defense counsel in those cases did not concede the existence of risk, the size of the risk, or the validity of the study, but rather stipulated such facts solely for purposes of their motions. In other words, even if the studies plaintiffs relied upon were valid and the risk estimates accurate (with relative risks of 1.5), plaintiffs could not prevail because no reasonable jury could infer that plaintiffs’ colorectal cancers were caused by their occupational asbestos exposure. The procedural context of the doubling argument thus often pretermits questions of validity, bias, and confounding.

Second, the Manual identifies that there must be “[s]imilarity among study subjects and plaintiff.” Id. at 613. Again, this assumption is often either pretermitted for purposes of lodging a dispositive motion, conceded, or included as part of the challenge to the admissibility of an expert witness’s opinion. For example, in some litigations, plaintiffs will rely upon high-dose or high-exposure studies that are not comparable to the plaintiff’s actual exposure, and the defense may have shown that the only reliable evidence is that there is a small (relative risk less than two) or no risk at all from the plaintiff’s exposure. External validity objections may well play a role in a contest under Rule 702, but the resolution of a doubling of risk issue will require an appropriate measure of risk for the plaintiff whose injury is at issue.

In the course of identifying this second assumption, the Manual now points out that the doubling argument turns on applying “an average risk for the group” to each individual in the group. Id. This point again is correct, but the Manual does not come to terms with the challenge often made to what I call the assumption of stochastic risk. The Manual authors quote a leading textbook on epidemiology:

“We cannot measure the individual risk, and assigning the average value to everyone in the category reflects nothing more than our ignorance about the determinants of lung cancer that interact with cigarette smoke. It is apparent from epidemiological data that some people can engage in chain smoking for many decades without developing lung cancer. Others are or will become primed by unknown circumstances and need only to add cigarette smoke to the nearly sufficient constellation of causes to initiate lung cancer. In our ignorance of these hidden causal components, the best we can do in assessing risk is to classify people according to measured causal risk indicators and then assign the average observed within a class to persons within the class.”

Id. at n.198, quoting from Kenneth J. Rothman, Sander Greenland, and Tim L. Lash, Modern Epidemiology 9 (3d ed. 2008). Although the textbook on this point is unimpeachable, taken at face value, it would introduce an evidentiary nihilism for judicial determinations of specific causation in cases in which epidemiologic measures of risk size are the only basis for drawing probabilistic inferences of specific causation. See also Manual at 614 n.198, citing Ofer Shpilberg, et al., “The Next Stage: Molecular Epidemiology,” 50 J. Clin. Epidem. 633, 637 (1997) (“A 1.5-fold relative risk may be composed of a 5-fold risk in 10% of the population, and a 1.1-fold risk in the remaining 90%, or a 2-fold risk in 25% and a 1.1-fold for 75%, or a 1.5-fold risk for the entire population.”). The assumption of stochastic risk is, as Judge Weinstein recognized in Agent Orange, often the only assumption on which plaintiffs will ever have a basis for claiming individual causation on the typical datasets available to support health effects claims. Elsewhere, the authors of the Manual’s chapter suggest that statistical “frequentists” would resist the adaptation of relative risk to provide a probability of causation because, for the frequentist, the individual case either is or is not caused by the exposure at issue. Manual at 611 n.188. This suggestion appears to confuse the frequentist enterprise, which evaluates evidence on the basis of the probability of observing at least as great a departure from expectation in a sample, with the project of affixing a probability to the population parameter itself. The doubling argument derives from the well-known “urn model” in probability theory, which is not really at issue in the frequentist-Bayesian wars.
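
The Shpilberg point can be made concrete with a little arithmetic. Assuming, for illustration only, that the subgroups share the same baseline risk, the overall relative risk is simply the weighted average of the subgroup relative risks:

\[
RR_{\text{overall}} \;=\; (0.10 \times 5.0) + (0.90 \times 1.1) \;=\; 0.50 + 0.99 \;\approx\; 1.5,
\]

and yet, on the attributable-fraction logic sketched above, a plaintiff drawn from the 10% subgroup would have a probability of causation of (5 − 1)/5 = 80%, while a plaintiff drawn from the remaining 90% would have only (1.1 − 1)/1.1 ≈ 9%. The group average of 1.5 tells us little about which description fits a given plaintiff.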

Third, the Manual authors state that the doubling argument assumes the “[n]onacceleration of disease.” The statement is correct as far as it goes, but in many cases there is no evidence of acceleration, and because an acceleration-of-onset theory would diminish damages, defendants would typically have the burden of going forward with identifying the acceleration phenomenon. The authors go further, however, in stating that “for most of the chronic diseases of adulthood, it is not possible for epidemiologic studies to distinguish between acceleration of disease and causation of new disease.” Manual at 614. The inability to distinguish acceleration from causation of new cases would typically redound to the disadvantage of defendants making the doubling argument. In other words, the defendants would, by this supposed inability, be unable to mitigate damages by showing that the alleged harm would have occurred anyway, but only later in time. See Manual at 615 n.199 (“If acceleration occurs, then the appropriate characterization of the harm for purposes of determining damages would have to be addressed. A defendant who only accelerates the occurrence of harm, say, chronic back pain, that would have occurred independently in the plaintiff at a later time is not liable for the same amount of damages as a defendant who causes a lifetime of chronic back pain.”). More important, however, the Manual appears to be wrong in asserting that epidemiologic studies and clinical trials cannot identify acceleration of onset of a particular disease. Many modern longitudinal epidemiologic studies and clinical trials use survival analysis and time windows to identify latency or time-lagged outcomes in association with identified exposures.

The fourth assumption identified in the Manual is that the exposure under study acts independently of other exposures. The authors give the time-worn example of multiplicative synergy between asbestos and smoking, what elsewhere has been referred to as “The Mt. Sinai Catechism” (June 7, 2013). The example was improvidently chosen given that the multiplicative nature was doubtful when first advanced, and now has effectively been retracted or modified by the researchers following the health outcomes of asbestos insulators in the United States. More important for our purposes here, interactions can be quantified and added to the analysis of attributable risk; interactions are not insuperable barriers to reasonable apportionment of risk.

Fifth, the Manual identifies two additional assumptions: (a) that the exposure at issue is not responsible for another outcome that competes with morbidity or mortality, and (b) that the exposure does not provide a protective “effect” in a subpopulation of those studied. Manual at 615. On the first of these assumptions, the authors suggest that the assumption is required “because in the epidemiologic studies relied on, those deaths caused by the alternative disease process will mask the true magnitude of increased incidence of the studied disease when the study subjects die before developing the disease of interest.” Id. at 615 n.202. Competing causes, however, are frequently studied and can be treated as confounders in an appropriate regression or propensity score analysis to yield a risk estimate for each individual putative effect at issue. The second of the two assumptions is a rehash of the speculative assertion that the epidemiologic study (and the population it samples) may not have a stochastic distribution of risk. Although the stochastic assumption may not be correct, it is often favorable to the party asserting the claim, who otherwise would be unable to show that he was not in a subpopulation of people unaffected, or even benefited, by the exposure. Again, modern epidemiology does not stop at identifying populations at risk, but continues to refine the assessment by trying to identify subpopulations that bear the risk exclusively. The existence of multi-modal distributions of risk within a population is, again, not a barrier to the doubling argument.

With sufficiently large samples, epidemiologic studies may be able to identify subgroups that have very large relative risks, even when the overall sample under study had a relative risk under two. The possibility of such subgroups, however, should not be an invitation to wholesale speculation that a given plaintiff is in a “vulnerable” subgroup without reliable, valid evidence of what the risks for the identified subgroup are. Too often, the vulnerable plaintiff or subgroup claim is merely hand waving in an evidentiary vacuum. The Manual authors seem to adopt this hand-waving attitude when they give a speculative hypothetical example:

“For example, genetics might be known to be responsible for 50% of the incidence of a disease independent of exposure to the agent. If genetics can be ruled out in an individual’s case, then a relative risk greater than 1.5 might be sufficient to support an inference that the agent was more likely than not responsible for the plaintiff’s disease.”

Manual at 615-16 (internal citations omitted). The hypothetical does not make clear whether the “genetics” cases are part of the study that yielded a relative risk of 1.5, but of course if the “genetics” were uniformly distributed in the population, and also in the sample studied in the epidemiologic study, then the “genetics” would appear to drop out of playing any role in elevating risk. But as the authors pointed out in their caveats about interaction, there may well be interaction between the “genetics” and the exposure under study, such that the “genetics” cases occurred earlier, or did not add anything to the disease burden that would have been caused by the exposure that reported out a relative risk of 1.5. The bottom line is that the plaintiff would need a study that stratified on the “genetics” to see what relative risks might be observed in people without the genes at issue.
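
One way to reconstruct the Manual’s arithmetic, on the charitable assumptions that the “genetics” cases are distributed uniformly in the study population and that genetics and the exposure do not interact (precisely the caveat raised above), runs as follows. If the background incidence is I₀, an observed relative risk of 1.5 implies an excess incidence of 0.5 I₀ attributable to the agent. Ruling out genetics for a particular plaintiff removes half of the competing background, so that

\[
P(\text{causation}) \;=\; \frac{0.5\,I_0}{0.5\,I_0 + 0.5\,I_0} \;=\; 50\%,
\]

and any relative risk above 1.5 would then tip past the preponderance threshold. Whether those assumptions hold in any real dataset is exactly what the hypothetical leaves unexamined.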

The Third Edition of the Manual does add more nuance to the doubling of risk argument, but alas more nuance yet is needed. The chapter is an important source to include in any legal argument for or against inferences of specific causation, but it is hardly the final word.

Below is an updated reference list of cases that address the doubling argument.


Radiation

Johnston v. United States, 597 F. Supp. 374, 412, 425-26 (D. Kan. 1984) (rejecting even a relative risk of greater than two as supporting an inference of specific causation)

Allen v. United States, 588 F. Supp. 247, 418 (D. Utah 1984) (rejecting mechanical application of doubling of risk), rev’d on other grounds, 816 F.2d 1417 (10th Cir. 1987), cert. denied, 484 U.S. 1004 (1988)

In re TMI Litig., 927 F. Supp. 834, 845, 864–66 (M.D. Pa. 1996), aff’d, 89 F.3d 1106 (3d Cir. 1996), aff’d in part, rev’d in part, 193 F.3d 613 (3d Cir. 1999) (rejecting the trial court’s “doubling dose” analysis), modified, 199 F.3d 158 (3d Cir. 2000) (stating that a dose below ten rems is insufficient to infer more likely than not the existence of a causal link)

In re Hanford Nuclear Reservation Litig., 1998 WL 775340, at *8 (E.D. Wash. Aug. 21, 1998) (“‘[d]oubling of the risk’ is the legal standard for evaluating the sufficiency of the plaintiffs’ evidence and for determining which claims should be heard by the jury,” citing Daubert II), rev’d, 292 F.3d 1124, 1136-37 (9th Cir. 2002) (general causation)

In re Berg Litig., 293 F.3d 1127 (9th Cir. 2002) (companion case to In re Hanford)

Cano v. Everest Minerals Corp., 362 F. Supp. 2d 814, 846 (W.D. Tex. 2005) (relative risk less than 3.0 represents only a weak association)

Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1083 n.8, 1084, 1088-89 (D. Colo. 2006) (citing Daubert II and “concerns” by Sander Greenland and David Egilman, plaintiffs’ expert witnesses in other cases), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ (May 24, 2012)

Cotroneo v. Shaw Envt’l & Infrastructure, Inc., No. H-05-1250, 2007 WL 3145791, at *3 (S.D. Tex. Oct. 25, 2007) (citing Havner, 953 S.W.2d at 717) (radioactive material)


Swine Flu- GBS Cases

Cook v. United States, 545 F. Supp. 306, 308 (N.D. Cal. 1982)(“Whenever the relative risk to vaccinated persons is greater than two times the risk to unvaccinated persons, there is a greater than 50% chance that a given GBS case among vaccinees of that latency period is attributable to vaccination, thus sustaining plaintiff’s burden of proof on causation.”)

Robinson v. United States, 533 F. Supp. 320, 325-28 (E.D. Mich. 1982) (finding for the government and against claimant who developed acute signs and symptoms of GBS 17 weeks after inoculation, in part because of relative and attributable risks)

Padgett v. United States, 553 F. Supp. 794, 800 – 01 (W.D. Tex. 1982) (“From the relative risk, we can calculate the probability that a given case of GBS was caused by vaccination. . . . [A] relative risk of 2 or greater would indicate that it was more likely than not that vaccination caused a case of GBS.”)

Manko v. United States, 636 F. Supp. 1419, 1434 (W.D. Mo. 1986) (relative risk of 2, or less, means exposure not the probable cause of disease claimed) (incorrectly suggesting that relative risk of two means that there was a 50% chance the disease was caused by “chance alone”), aff’d in relevant part, 830 F.2d 831 (8th Cir. 1987)


IUD Cases – Pelvic Inflammatory Disease

Marder v. G.D. Searle & Co., 630 F. Supp. 1087, 1092 (D. Md. 1986) (“In epidemiological terms, a two-fold increased risk is an important showing for plaintiffs to make because it is the equivalent of the required legal burden of proof—a showing of causation by the preponderance of the evidence or, in other words, a probability of greater than 50%.”), aff’d mem. on other grounds sub nom. Wheelahan v. G.D. Searle & Co., 814 F.2d 655 (4th Cir. 1987) (per curiam)


Bendectin cases

Lynch v. Merrell-National Laboratories, 646 F. Supp. 856 (D. Mass. 1986) (granting summary judgment), aff’d, 830 F.2d 1190, 1197 (1st Cir. 1987) (distinguishing between chances that “somewhat favor” plaintiff and plaintiff’s burden of showing specific causation by “preponderant evidence”)

DeLuca v. Merrell Dow Pharm., Inc., 911 F.2d 941, 958-59 (3d Cir. 1990) (commenting that ‘‘[i]f New Jersey law requires the DeLucas to show that it is more likely than not that Bendectin caused Amy DeLuca’s birth defects, and they are forced to rely solely on Dr. Done’s epidemiological analysis in order to avoid summary judgment, the relative risk of limb reduction defects arising from the epidemiological data Done relies upon will, at a minimum, have to exceed ‘2’’’)

Daubert v. Merrell Dow Pharms., Inc., 43 F.3d 1311, 1321 (9th Cir.) (“Daubert II”) (holding that for epidemiological testimony to be admissible to prove specific causation, there must have been a relative risk for the plaintiff of greater than 2; testimony that the drug “increased somewhat the likelihood of birth defects” is insufficient) (“For an epidemiological study to show causation under a preponderance standard . . . the study must show that children whose mothers took Bendectin are more than twice as likely to develop limb reduction birth defects as children whose mothers did not.”), cert. denied, 516 U.S. 869 (1995)

DePyper v. Navarro, 1995 WL 788828 (Mich. Cir. Ct. Nov. 27, 1995)

Oxendine v. Merrell Dow Pharm., Inc., 1996 WL 680992 (D.C. Super. Ct. Oct. 24, 1996) (noting testimony by Dr. Michael Bracken, that had Bendectin doubled risk of birth defects, overall rate of that birth defect should have fallen 23% after manufacturer withdrew drug from market, when in fact the rate remained relatively steady)

Merrell Dow Pharms., Inc. v. Havner, 953 S.W.2d 706, 716 (Tex. 1997) (holding, in accord with the weight of judicial authority, “that the requirement of a more than 50% probability means that epidemiological evidence must show that the risk of an injury or condition in the exposed population was more than double the risk in the unexposed or control population”); id. at 719 (rejecting isolated statistically significant associations when not consistently found among studies)


Silicone Cases

Hall v. Baxter Healthcare, 947 F. Supp. 1387, 1392, 1397, 1403-04 (D. Or. 1996) (discussing relative risk of 2.0)

Pick v. American Medical Systems, Inc., 958 F. Supp. 1151, 1160 (E.D. La. 1997) (noting, correctly but irrelevantly, in penile implant case, that “any” increased risk suggests that the exposure “may” have played some causal role)

In re Breast Implant Litigation, 11 F. Supp. 2d 1217, 1226-27 (D. Colo. 1998) (relative risk of 2.0 or less shows that the background risk is at least as likely to have given rise to the alleged injury)

Barrow v. Bristol-Myers Squibb Co., 1998 WL 812318, at *23 (M.D. Fla. Oct. 29, 1998)

Minnesota Mining and Manufacturing v. Atterbury, 978 S.W.2d 183, 198 (Tex.App. – Texarkana 1998) (noting that Havner declined to set strict criteria and that “[t]here is no requirement in a toxic tort case that a party must have reliable evidence of a relative risk of 2.0 or greater”)

Allison v. McGhan Med. Corp., 184 F.3d 1300, 1315 n.16, 1316 (11th Cir. 1999) (affirming exclusion of expert testimony based upon a study with a risk ratio of 1.24; noting that statistically significant epidemiological study reporting an increased risk of marker of disease of 1.24 times in patients with breast implants was so close to 1.0 that it “was not worth serious consideration for proving causation”; threshold for concluding that an agent more likely than not caused a disease is 2.0, citing Federal Judicial Center, Reference Manual on Scientific Evidence 168-69 (1994))

Grant v. Bristol-Myers Squibb, 97 F. Supp. 2d 986, 992 (D. Ariz. 2000)

Pozefsky v. Baxter Healthcare Corp., No. 92-CV-0314, 2001 WL 967608, at *3 (N.D.N.Y. August 16, 2001) (excluding causation opinion testimony given contrary epidemiologic studies; noting that sufficient epidemiologic evidence requires relative risk greater than two)

In re Silicone Gel Breast Implant Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004) (“The relative risk is obtained by dividing the proportion of individuals in the exposed group who contract the disease by the proportion of individuals who contract the disease in the non-exposed group.”) (noting that relative risk must be more than doubled at a minimum to permit an inference that the risk was operating in plaintiff’s case)

Norris v. Baxter Healthcare Corp., 397 F.3d 878 (10th Cir. 2005) (discussing but not deciding specific causation and the need for relative risk greater than two; no reliable showing of general causation)



Asbestos

Lee v. Johns Manville Corp., slip op. at 3, Phila. Cty. Ct. C.P., Sept. Term 1978, No. 88 (123) (Oct. 26, 1983) (Forer, J.) (entering verdict in favor of defendants on grounds that plaintiff had failed to show that his colorectal cancer had been caused by asbestos exposure after adducing evidence of a relative risk less than two)

Washington v. Armstrong World Indus., Inc., 839 F.2d 1121 (5th Cir. 1988) (affirming grant of summary judgment on grounds that there was insufficient evidence that plaintiff’s colon cancer was caused by asbestos)

Primavera v. Celotex Corp., Phila. Cty. Ct. C.P., December Term, 1981, No. 1283 (Bench Op. of Hon. Berel Caesar, Nov. 2, 1988) (granting compulsory nonsuit on the plaintiff’s claim that his colorectal cancer was caused by his occupational exposure to asbestos)

In re Fibreboard Corp., 893 F.2d 706, 712 (5th Cir. 1990) (“It is evident that these statistical estimates deal only with general causation, for population-based probability estimates do not speak to a probability of causation in any one case; the estimate of relative risk is a property of the studied population, not of an individual’s case.” (internal quotation omitted) (emphasis in original))

Grassis v. Johns-Manville Corp., 248 N.J. Super. 446, 455-56, 591 A.2d 671, 676 (App. Div. 1991) (rejecting doubling of risk threshold in asbestos gastrointestinal cancer claim)

Landrigan v. Celotex Corp., 127 N.J. 404, 419, 605 A.2d 1079 (1992) (reversing judgment entered on directed verdict for defendant on specific causation of claim that asbestos caused decedent’s colon cancer)

Caterinicchio v. Pittsburgh Corning Corp., 127 N.J. 428, 605 A.2d 1092 (1992) (reversing judgment entered on directed verdict for defendant on specific causation of claim that asbestos caused plaintiff’s colon cancer)

In re Joint E. & S. Dist. Asbestos Litig., 758 F. Supp. 199 (S.D.N.Y. 1991), rev’d sub nom. Maiorano v. Owens Corning Corp., 964 F.2d 92, 97 (2d Cir. 1992)

Maiorana v. National Gypsum, 827 F. Supp. 1014, 1043 (S.D.N.Y. 1993), aff’d in part and rev’d in part, 52 F.3d 1124, 1134 (2d Cir. 1995) (stating a preference for the district court’s instructing the jury on the science and then letting the jury weigh the studies)

Keene Corp. v. Hall, 626 A.2d 997 (Md. Ct. Spec. App. 1993) (laryngeal cancer)

Jones v. Owens-Corning Fiberglas Corp., 288 N.J. Super. 258, 266, 672 A.2d 230, 235 (App. Div. 1996) (rejecting doubling of risk threshold in asbestos gastrointestinal cancer claim)

In re W.R. Grace & Co., 355 B.R. 462, 483 (Bankr. D. Del. 2006) (requiring showing of relative risk greater than two to support property damage claims based on unreasonable risks from asbestos insulation products)

Kwasnik v. A.C. & S., Inc. (El Paso Cty., Tex. 2002)

Sienkiewicz v. Greif (U.K.) Ltd., [2009] EWCA (Civ) 1159, at ¶23 (Lady Justice Smith) (“In my view, it must now be taken that, saving the expression of a different view by the Supreme Court, in a case of multiple potential causes, a claimant can demonstrate causation in a case by showing that the tortious exposure has at least doubled the risk arising from the non-tortious cause or causes.”)

Sienkiewicz v. Greif (U.K.) Ltd., [2011] UKSC 10.

“Where there are competing alternative, rather than cumulative, potential causes of a disease or injury, such as in Hotson, I can see no reason in principle why epidemiological evidence should not be used to show that one of the causes was more than twice as likely as all the others put together to have caused the disease or injury.” (Lord Phillips, at ¶ 93)

(arguing that statistical evidence should be considered without clearly identifying the nature and extent of its role) (Baroness Hale, ¶ 172-73)

(insisting upon difference between fact and probability of causation, with statistical evidence not probative of the former) (Lord Rodger, at ¶143-59)

(“the law is concerned with the rights and wrongs of an individual situation, and should not treat people and even companies as statistics,” although he allowed that epidemiologic evidence can appropriately be used “in conjunction with specific evidence”) (Lord Mance, at ¶205)

(concluding that epidemiologic evidence can establish the probability, but not the fact of causation, and vaguely suggesting that whether epidemiologic evidence should be allowed was a matter of policy) (Lord Dyson, ¶218-19)

Dixon v. Ford Motor Co., 47 A.3d 1038, 1046-47 & n.11 (Md. Ct. Spec. App. 2012) (“we can explicitly derive the probability of causation from the statistical measure known as ‘relative risk’, as did the U.S. Court of Appeals for the Third Circuit in DeLuca v. Merrell Dow Pharmaceuticals, Inc., 911 F.2d 941, 958 (3d Cir. 1990), in a holding later adopted by several courts. For reasons we need not explore in detail, it is not prudent to set a singular minimum ‘relative risk’ value as a legal standard. But even if there were some legal threshold, Dr. Welch provided no information that could help the finder of fact to decide whether the elevated risk in this case was ‘substantial’.”) (internal citations omitted), rev’d, 433 Md. 137, 70 A.3d 328 (2013)


Pharmaceutical Cases

Ambrosini v. Upjohn, 1995 WL 637650, at *4 (D.D.C. Oct. 18, 1995) (excluding plaintiff’s expert witness, Dr. Brian Strom, who was unable to state that mother’s use of Depo-Provera to prevent miscarriage more than doubled her child’s risk of a birth defect)

Ambrosini v. Labarraque, 101 F.3d 129, 135 (D.C. Cir. 1996)(Depo-Provera, birth defects) (testimony “does not warrant exclusion simply because it fails to establish the causal link to a specified degree of probability”)

Siharath v. Sandoz Pharms. Corp., 131 F. Supp. 2d 1347, 1356 (N.D. Ga. 2001)

Cloud v. Pfizer Inc., 198 F. Supp. 2d 1118, 1134 (D. Ariz. 2001) (sertraline and suicide)

Miller v. Pfizer, 196 F. Supp. 2d 1062, 1079 (D. Kan. 2002) (acknowledging that most courts require a showing of RR > 2, but questioning their reasoning; “Court rejects Pfizer’s argument that unless Zoloft is shown to create a relative risk [of akathisia] greater than 2.0, [expert’s] testimony is inadmissible”), aff’d, 356 F.3d 1326 (10th Cir.), cert. denied, 543 U.S. 917 (2004)

XYZ, et al. v. Schering Health Care Ltd., [2002] EWHC 1420, at ¶21, 70 BMLR 88 (QB 2002) (noting with approval that claimants had accepted the need to prove relative risk greater than two; finding that most likely relative risk was 1.7, which required finding against claimants even if general causation were established)

Smith v. Wyeth-Ayerst Laboratories Co., 278 F. Supp. 2d 684, 691 (W.D.N.C. 2003) (recognizing that risk and cause are distinct concepts) (“Epidemiologic data that shows a risk cannot support an inference of cause unless (1) the data are statistically significant according to scientific standards used for evaluating such associations; (2) the relative risk is sufficiently strong to support an inference of ‘more likely than not’; and (3) the epidemiologic data fits the plaintiff’s case in terms of exposure, latency, and other relevant variables.”) (citing FJC Reference Manual at 384-85 (2d ed. 2000))

Kelley v. Sec’y of Health & Human Servs., 68 Fed. Cl. 84, 92 (Fed. Cl. 2005) (quoting Kelley v. Sec’y of Health & Human Servs., No. 02-223V, 2005 WL 1125671, at *5 (Fed. Cl. Mar. 17, 2005) (opinion of Special Master explaining that epidemiology must show relative risk greater than two to provide evidence of causation), rev’d on other grounds, 68 Fed. Cl. 84 (2005))

Pafford v. Secretary of HHS, No. 01–0165V, 64 Fed. Cl. 19, 2005 WL 4575936 at *8 (2005) (expressing preference for “an epidemiologic study demonstrating a relative risk greater than two … or dispositive clinical or pathological markers evidencing a direct causal relationship”) (citing Stevens v. Secretary of HHS, No.2001 WL 387418 at *12), aff’d, 451 F.3d 1352 (Fed. Cir. 2006)

Burton v. Wyeth-Ayerst Labs., 513 F. Supp. 2d 719, 730 (N.D. Tex. 2007) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two, Merrell Dow Pharm., Inc. v. Havner, 953 S.W.2d 706, 717–18 (Tex. 1997))

In re Bextra and Celebrex Marketing Sales Practices and Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1172 (N.D. Cal. 2007) (observing that epidemiologic studies “can also be probative of specific causation, but only if the relative risk is greater than 2.0, that is, the product more than doubles the risk of getting the disease”)

In re Bextra & Celebrex, 2008 N.Y. Misc. LEXIS 720, *23-24, 239 N.Y.L.J. 27 (2008) (“Proof that a relative risk is greater than 2.0 is arguably relevant to the issue of specific, as opposed to general causation and is not required for plaintiffs to meet their burden in opposing defendants’ motion.”)

In re Viagra Products Liab. Litig., 572 F. Supp. 2d 1071, 1078 (D. Minn. 2008) (noting that some but not all courts have concluded relative risks under two support finding expert witness’s opinion to be inadmissible)

Vanderwerf v. SmithKlineBeecham Corp., 529 F. Supp. 2d 1294, 1302 n.10 (D. Kan. 2008), appeal dism’d, 603 F.3d 842 (10th Cir. 2010) (“relative risk of 2.00 means that a particular event of suicidal behavior has a 50 per cent chance that is associated with the exposure to Paxil … .”)

Wright v. American Home Products Corp., 557 F. Supp. 2d 1032, 1035-36 (W.D. Mo. 2008) (fenfluramine case)

Beylin v. Wyeth, 738 F. Supp. 2d 887, 893 n.3 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (addressing relative risk of two argument in dictum; holding that defendants’ argument that for an opinion to be relevant it must show that the medication causes the relative risk to exceed two “was without merit”)

Merck & Co. v. Garza, 347 S.W.3d 256 (Tex. 2011), rev’g 2008 WL 2037350, at *2 (Tex. App. — San Antonio May 14, 2008, no pet. h.)

Scharff v. Wyeth, No. 2:10–CV–220–WKW, 2012 WL 3149248, *6 & n.9, 11 (M.D. Ala. Aug. 1, 2012) (post-menopausal hormone therapy case; “A relative risk of 2.0 implies a 50% likelihood that an exposed individual’s disease was caused by the agent. The lower relative risk in this study reveals that some number less than half of the additional cases could be attributed to [estrogen and progestin].”)

Cheek v. Wyeth, LLC (In re Diet Drugs), 890 F. Supp. 2d 552 (E.D. Pa. 2012)


Medical Malpractice – Failure to Prescribe; Delay in Treatment

Merriam v. Wanger, 757 A.2d 778, 2000 Me. 159 (2000) (reversing judgment on jury verdict for plaintiff on grounds that plaintiff failed to show that defendant’s failure to act was, more likely than not, a cause of harm)

Bonesmo v. The Nemours Foundation, 253 F. Supp. 2d 801, 809 (D. Del. 2003)

Theofanis v. Sarrafi, 791 N.E.2d 38, 48 (Ill. App. 2003) (reversing and granting new trial to plaintiff who received an award of no damages when experts testified that relative risk was between 2.0 and 3.0) (“where the risk with the negligent act is at least twice as great as the risk in the absence of negligence, the evidence supports a finding that, more likely than not, the negligence in fact caused the harm”)

Cottrelle v. Gerrard, 67 OR (3d) 737 (2003), 2003 CanLII 50091 (ONCA), at ¶ 25 (Sharpe, J.A.) (less than a probable chance that timely treatment would have made a difference for plaintiff is insufficient), leave to appeal den’d SCC (April 22, 2004)

Joshi v. Providence Health System of Oregon Corp., 342 Or. 152, 156, 149 P. 3d 1164, 1166 (2006) (affirming directed verdict for defendants when expert witness testified that he could not state, to a reasonable degree of medical probability, beyond 30%, that administering t-PA, or other anti-coagulant would have changed the outcome and prevented death)

Ensink v. Mecosta County Gen. Hosp., 262 Mich. App. 518, 687 N.W.2d 143 (Mich. App. 2004) (affirming summary judgment for hospital and physicians when patient could not show a greater than 50% probability of obtaining a better result had emergency physician administered t-PA within three hours of stroke symptoms)

Lake Cumberland, LLC v. Dishman, 2007 WL 1229432, *5 (Ky. Ct. App. 2007) (unpublished) (confusing 30% with a “reasonable probability”; citing without critical discussion an apparently innumerate opinion of expert witness Dr. Lawson Bernstein)

Mich. Comp. Laws § 600.2912a(2) (2009) (“In an action alleging medical malpractice, the plaintiff has the burden of proving that he or she suffered an injury that more probably than not was proximately caused by the negligence of the defendant or defendants. In an action alleging medical malpractice, the plaintiff cannot recover for loss of an opportunity to survive or an opportunity to achieve a better result unless the opportunity was greater than 50%.”)

O’Neal v. St. John Hosp. & Med. Ctr., 487 Mich. 485, 791 N.W.2d 853 (Mich. 2010) (affirming denial of summary judgment when failure to administer therapy (not t-PA) in a timely fashion supposedly more than doubled the risk of stroke)

Kava v. Peters, 450 Fed. Appx. 470, 478-79 (6th Cir. 2011) (affirming summary judgment for defendants when plaintiff’s expert witnesses failed to provide clear testimony that plaintiff’s specific condition would have been improved by timely administration of therapy)

Smith v. Bubak, 643 F.3d 1137, 1141–42 (8th Cir. 2011) (rejecting relative benefit testimony and suggesting in dictum that absolute benefit “is the measure of a drug’s overall effectiveness”)

Young v. Mem’l Hermann Hosp. Sys., 573 F.3d 233, 236 (5th Cir. 2009) (holding that Texas law requires a doubling of the relative risk of an adverse outcome to prove causation), cert. denied, ___ U.S. ___, 130 S.Ct. 1512 (2010)

Gyani v. Great Neck Medical Group, 2011 WL 1430037 (N.Y. Sup. Ct., Nassau Cty., April 4, 2011) (denying summary judgment to medical malpractice defendant on stroke patient’s claim of failure to administer t-PA, based upon naked assertions of proximate cause by plaintiff’s expert witness, and without considering the actual magnitude of risk increased by the alleged failure to treat)

Samaan v. St. Joseph Hospital, 670 F.3d 21 (1st Cir. 2012)

Goodman v. Viljoen, 2011 ONSC 821 (CanLII)(treating a risk ratio of 1.7 for harm, or 0.6 for prevention, as satisfying the “balance of probabilities” when taken with additional unquantified, unvalidated speculation), aff’d, 2012 ONCA 896 (CanLII), leave appeal den’d, Supreme Court of Canada No. 35230 (July 11, 2013)

Briante v. Vancouver Island Health Authority, 2014 BCSC 1511, at ¶ 317 (plaintiff must show “on a balance of probabilities that the defendant caused the injury”)


Toxic Tort Cases

In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 785, 836 (E.D.N.Y. 1984) (“A government administrative agency may regulate or prohibit the use of toxic substances through rulemaking, despite a very low probability of any causal relationship.  A court, in contrast, must observe the tort law requirement that a plaintiff establish a probability of more than 50% that the defendant’s action injured him. … This means that at least a two-fold increase in incidence of the disease attributable to Agent Orange exposure is required to permit recovery if epidemiological studies alone are relied upon.”), aff’d 818 F.2d 145, 150-51 (2d Cir. 1987)(approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988)

Wright v. Willamette Indus., Inc., 91 F.3d 1105 (8th Cir. 1996)(“Actions in tort for damages focus on the question of whether to transfer money from one individual to another, and under common-law principles (like the ones that Arkansas law recognizes) that transfer can take place only if one individual proves, among other things, that it is more likely than not that another individual has caused him or her harm.  It is therefore not enough for a plaintiff to show that a certain chemical agent sometimes causes the kind of harm that he or she is complaining of.  At a minimum, we think that there must be evidence from which the factfinder can conclude that the plaintiff was exposed to levels of that agent that are known to cause the kind of harm that the plaintiff claims to have suffered. See Abuan v. General Elec. Co., 3 F.3d at 333.  We do not require a mathematically precise table equating levels of exposure with levels of harm, but there must be evidence from which a reasonable person could conclude that a defendant’s emission has probably caused a particular plaintiff the kind of harm of which he or she complains before there can be a recovery.”)

Sanderson v. Internat’l Flavors & Fragrances, Inc., 950 F. Supp. 981, 998 n. 17,  999-1000, 1004 (C.D. Cal.1996) (more than a doubling of risk is required in case involving aldehyde exposure and claimed multiple chemical sensitivities)

McDaniel v. CSX Transp., Inc., 955 S.W.2d 257, 264 (Tenn. 1997) (doubling of risk is relevant but not required as a matter of law)

Schudel v. General Electric Co., 120 F.3d 991, 996 (9th Cir. 1997) (polychlorinated biphenyls)

Lofgren v. Motorola, 1998 WL 299925, at *14 (Ariz. Super. June 1, 1998) (suggesting that relative risk requirement in trichloroethylene cancer medical monitoring case was arbitrary, but excluding plaintiffs’ expert witnesses on other grounds)

Berry v. CSX Transp., Inc., 709 So. 2d 552 (Fla. Dist. Ct. App. 1998) (reversing exclusion of plaintiff’s epidemiologist in case involving claims of toxic encephalopathy from solvent exposure, before Florida adopted Daubert standard)

Bartley v. Euclid, Inc., 158 F.3d 261 (5th Cir. 1998) (evidence at trial more than satisfied the relative risk greater than two requirement), rev’d on rehearing en banc, 180 F.3d 175 (5th Cir. 1999)

Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 591-92, 605 n.27, 606–07 (D.N.J. 2002) (“When the relative risk reaches 2.0, the risk has doubled, indicating that the risk is twice as high among the exposed group as compared to the non-exposed group. Thus, ‘the threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0’.”) (quoting FJC Reference Manual at 384), aff’d, 68 F. App’x 356 (3d Cir. 2003)

Allison v. Fire Ins. Exchange, 98 S.W.3d 227, 239 (Tex. App. — Austin 2002, no pet. h.)

Ferguson v. Riverside School Dist. No. 416, 2002 WL 34355958 (E.D. Wash. Feb. 6, 2002) (No. CS-00-0097-FVS)

Daniels v. Lyondell-Citgo Refining Co., 99 S.W.3d 722, 727 (Tex. App. – Houston [1st Dist.] 2003) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two)

Exxon Corp. v. Makofski, 116 S.W.3d 176, 184-85 (Tex. App. — Houston 2003)

Frias v. Atlantic Richfield Co., 104 S.W.3d 925 (Tex. App. — Houston 2003)

Graham v. Lautrec, Ltd., 2003 WL 23512133, at *1 (Mich. Cir. Ct. 2003) (mold)

Mobil Oil Corp. v. Bailey, 187 S.W.3d 263, 268 (Tex. App. – Beaumont 2006) (affirming exclusion of expert witness testimony that did not meet Havner’s requirement of relative risks greater than two)

In re Lockheed Litig. Cases, 115 Cal. App. 4th 558 (2004)(alleging brain, liver, and kidney damage), rev’d in part, 23 Cal. Rptr. 3d 762, 765 (Cal. App. 2d Dist. 2005) (“[A] court cannot exclude an epidemiological study from consideration solely because the study shows a relative risk of less than 2.0.”), rev. dismissed, 192 P.3d 403 (Cal. 2007)

Novartis Grimsby Ltd. v. Cookson, [2007] EWCA (Civ) 1261, at para. 74 (causation was successfully established by risk ratio greater than two; per Lady Justice Smith: “Put in terms of risk, the occupational exposure had more than doubled the risk [of the bladder cancer complained of] due to smoking. . . . if the correct test for causation in a case such as this is the “but for” test and nothing less will do, that test is plainly satisfied on the facts as found. . . . In terms of risk, if the occupational exposure more than doubles the risk due to smoking, it must, as a matter of logic, be probable that the disease was caused by the former.”)

Watts v. Radiator Specialty Co., 990 So. 2d 143 (Miss. 2008) (“The threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0.”)

King v. Burlington Northern Santa Fe Ry, 762 N.W.2d 24, 36-37 (Neb. 2009) (reversing exclusion of proffered testimony of Arthur Frank on claim that diesel exposure caused multiple myeloma, and addressing in dicta the ability of expert witnesses to speculate reasons why specific causation exists even with relative risk less than two) (“If a study shows a relative risk of 2.0, ‘the agent is responsible for an equal number of cases of disease as all other background causes.’ This finding ‘implies a 50% likelihood that an exposed individual’s disease was caused by the agent.’ If the relative risk is greater than 2.0, the study shows a greater than 50–percent likelihood that the agent caused the disease.”)(internal citations to Reference Manual on Scientific Evidence (2d ed. 2000) omitted)

Henricksen v. Conocophillips Co., 605 F. Supp. 2d 1142, 1158 (E.D. Wash. 2009) (noting that under Circuit precedent, epidemiologic studies showing low-level risk may be sufficient to show general causation but are sufficient to show specific causation only if relative risk exceeds two) (excluding plaintiff’s expert witness’s testimony because epidemiologic evidence is “contradictory and inconsistent”)

City of San Antonio v. Pollock, 284 S.W.3d 809, 818 (Tex. 2009) (holding testimony admitted insufficient as matter of law)

George v. Vermont League of Cities and Towns, 2010 Vt. 1, 993 A.2d 367, 375 (2010)

Blanchard v. Goodyear Tire & Rubber Co., No. 837-12-07 Wrcv (Eaton, J., June 28, 2010) (excluding expert witness, David Goldsmith, and entering summary judgment), aff’d, 190 Vt. 577, 30 A.3d 1271 (2011)

Pritchard v. Dow Agro Sciences, 705 F. Supp. 2d 471, 486 (W.D. Pa. 2010) (excluding opinions of Dr. Omalu on Dursban, in part because of low relative risk) (“Therefore, a relative risk of 2.0 is not dispositive of the reliability of an expert’s opinion relying on an epidemiological study, but it is a factor, among others, which the Court is to consider in its evaluation.”), aff’d, 430 Fed. Appx. 102, 2011 WL 2160456 (3d Cir. 2011)

Faust v. BNSF Ry., 337 S.W.3d 325, 337 (Tex. Ct. App. 2d Dist. 2011) (“To be considered reliable scientific evidence of general causation, an epidemiological study must (1) have a relative risk of 2.0 and (2) be statistically significant at the 95% confidence level.”) (internal citations omitted)

Nonnon v. City of New York, 88 A.D.3d 384, 398-99, 932 N.Y.S.2d 428, 437-38 (1st Dep’t 2011) (holding that the strength of the epidemiologic evidence, with relative risks greater than 2.0, permitted an inference of causation)

Milward v. Acuity Specialty Products Group, Inc., 969 F. Supp. 2d 101, 112-13 & n.7 (D. Mass. 2013) (avoiding doubling of risk issue and holding that plaintiffs’ expert witnesses failed to rely upon a valid exposure estimate and lacked sufficient qualifications to evaluate and weigh the epidemiologic studies that provided estimates of relative risk) (generalities about the “core competencies” of physicians or specialty practices cannot overcome an expert witness’s explicit admission of lacking the epidemiologic expertise needed to evaluate and weigh the epidemiologic studies and methods at issue in the case. Without the requisite qualifications, an expert witness cannot show that the challenged opinion has a sufficiently reliable scientific foundation in epidemiologic studies and method.)

Berg v. Johnson & Johnson, 940 F. Supp. 2d 983 (D.S.D. 2013) (talc and ovarian cancer)


Other

In re Hannaford Bros. Co. Customer Data Sec. Breach Litig., 293 F.R.D. 21, 2:08-MD-1954-DBH, 2013 WL 1182733, *1 (D. Me. Mar. 20, 2013) (Hornby, J.) (denying motion for class certification) (“population-based probability estimates do not speak to a probability of causation in any one case; the estimate of relative risk is a property of the studied population, not of an individual’s case.”)

Adverse Liver Events and Causal Claims Against Black Cohosh

April 6th, 2015

Liver toxicity in pharmaceutical products liability cases is one of the more difficult categories of cases for judicial gatekeeping because of the possibility of idiosyncratic liver toxicity. Sometimes a plaintiff will exploit this difficulty and try to recover for an acute liver reaction.

Susan Grant began to take a black cohosh herbal remedy in 2002, and within a year developed autoimmune hepatitis, which required her to undergo a liver transplant. She and her husband sued the seller of black cohosh for substantial damages. Grant v. Pharmavite, LLC, 452 F. Supp. 2d 903 (D. Neb. 2006). Grant enlisted two expert witnesses, Michael Corbett, Ph.D., a toxicologist, and her treating gastroenterologist, Michael Sorrell, M.D. The defense, relying upon liver expert Phillip Guzelian, M.D., challenged the admissibility of plaintiffs’ expert witnesses’ opinions under the federal rules.

Struggling with the law, Senior Judge Strom observed that Nebraska law requires expert witness opinion testimony on causation. Id. at 906. Of course, in this diversity action, federal law controlled the scope and requirements of expert witness opinion testimony.

And in a similarly offbeat way, Judge Strom suggested that plaintiffs’ expert witnesses need not have opinions supported by evidence:

“While it is not necessary that an opinion be backed by scientific research, it is necessary that an expert’s testimony, which contradicts all of the research, at minimum address and distinguish the contradictory research in order to support the expressed opinion.”

Id. at 907 (emphasis added). Senior Judge Strom thus suggested that, had there been no published research at all, Dr. Corbett could simply have made up an opinion, not backed by scientific research. This is, of course, seriously wrong, but fortunately it amounts only to obiter detritus, because Judge Strom believed that, given the available studies, the testifying expert witnesses had to do more than simply criticize the studies that disagreed with their subjective opinions.

Michael Corbett, Ph.D., a consultant in “chemical toxicology,” from Omaha, Nebraska, criticized the existing studies, which generally failed to identify liver toxicity, but he failed to conduct his own studies. Id. at 907. And Corbett also failed to explain why he rejected the great weight of medical publications that found that black cohosh was not hepatotoxic. Id. Michael Sorrell, M.D., started out as Ms. Grant’s treating gastroenterologist, but became a litigation expert witness. He was generally unaware of the randomized clinical trials of black cohosh, or of any study that, or group of scientists who, supported his opinion. Id. at 909.

To Dr. Sorrell’s credit, he did attempt to write up a case report, which was published after the termination of the case. Unfortunately for Dr. Sorrell and his colleagues, Ms. Grant and her lawyers were less than forthcoming about her medical history, which included medications and lifestyle variables that were apparently not shared with Dr. Sorrell. Id. at 909.

You know that the quality of gatekeeping due process is strained when judges fail to cite key studies sufficiently to permit their readers to find the scientific evidence. Between Google Scholar and PubMed, however, you can find Dr. Sorrell’s case report, which was published in 2005, before Judge Strom issued his Rule 702 opinion. Josh Levitsky, Tyron A. Alli, James Wisecarver, and Michael F. Sorrell, “Fulminant liver failure associated with the use of black cohosh,” 50 Digestive Dis. Sci. 538 (2005). If nothing else, Judge Strom provoked an erratum from Dr. Sorrell and colleagues:

“After the article was published, it was brought to the authors’ attention through legal documentation and testimony that the patient admitted to consuming alcohol and had been taking other medications at the time of her initial presentation of liver failure. From these records, she reported drinking no more than six glasses of wine per week. In addition, up until presentation, she was taking valacyclovir 500 mg daily for herpes prophylaxis for 2 years, an occasional pseudoephedrine tablet, calcium carbonate 500 mg three times daily, iron sulfate 325 mg daily and ibuprofen up to three times weekly. She had been taking erythromycin tablets but discontinued those 3 months prior to presentation.

The authors regret the omission of this information from the original case report. While this new information is important to include as a correction to the history, it does not change the authors’ clinical opinion … .”

The erratum omits that Ms. Grant was taking Advil (ibuprofen) at the time of her transplantation, and that she had been taking erythromycin for 2.5 years, stopping just a few months before her acute liver illness. The Valtrex use shows that Ms. Grant had a chronic herpes infection. In the past, plaintiff took such excessive doses of ibuprofen that she developed anemia. Grant v. Pharmavite, LLC, 452 F. Supp. 2d at 909 n.1. This was hardly an uncomplicated case report to interpret for causality, and it makes for an interesting case history of confirmation bias. Remarkably, the journal charges $39.95 to download the erratum, as much as the case report itself!

And how has the plaintiff’s claim fared in the face of the evolving scientific record since Judge Strom’s opinion?

Not well.

See, e.g., Peter W Whiting, Andrew Clouston and Paul Kerlin, “Black cohosh and other herbal remedies associated with acute hepatitis,” 177 Med. J. Australia 432 (2002); S.M. Cohen, A.M. O’Connor, J. Hart, et al., “Autoimmune hepatitis associated with the use of black cohosh: a case study,” 11 Menopause 575 (2004); Christopher R. Lynch, Milan E. Folkers, and William R. Hutson, “Fulminant hepatic failure associated with the use of black cohosh: A case report,” 12 Liver Transplantation 989 (2006); Elizabeth C-Y Chow, Marcus Teo, John A Ring and John W Chen, “Liver failure associated with the use of black cohosh for menopausal symptoms,” 188 Med. J. Australia 420 (2008); Gail B. Mahady, Tieraona Low Dog, Marilyn L. Barrett, Mary L. Chavez, Paula Gardiner, Richard Ko, Robin J. Marles, Linda S. Pellicore, Gabriel I. Giancaspro, and Dandapantula N. Sarma, “United States Pharmacopeia review of the black cohosh case reports of hepatotoxicity,” 15 Menopause 628 (2008) (toxicity only possible on available evidence); D. Joy, J. Joy, and P. Duane, “Black cohosh: a cause of abnormal postmenopausal liver function tests,” 11 Climacteric 84 (2008); Lily Dara, Jennifer Hewett, and Joseph Kartaik Lim, “Hydroxycut hepatotoxicity: A case series and review of liver toxicity from herbal weight loss supplements,” 14 World J. Gastroenterol. 6999 (2008); F. Borrelli & E. Ernst, “Black cohosh (Cimicifuga racemosa): a systematic review of adverse events,” Am. J. Obstet. & Gyn. 455 (2008); Rolf Teschke & A. Schwarzenboeck, “Suspected hepatotoxicity by Cimicifugae racemosae rhizoma (black cohosh, root): critical analysis and structured causality assessment,” 16 Phytomedicine 72 (2009); Stacie E. Geller, Lee P. Shulman, Richard B. van Breemen, Suzanne Banuvar, Ying Zhou, Geena Epstein, Samad Hedayat, Dejan Nikolic, Elizabeth C. Krause, Colleen E. Piersen, Judy L. Bolton, Guido F. Pauli, and Norman R. Farnsworth, “Safety and Efficacy of Black Cohosh and Red Clover for the Management of Vasomotor Symptoms: A Randomized Controlled Trial,” 16 Menopause 1156 (2009) (89 women randomized to four groups; no hepatic events in trial not powered to detect them); Rolf Teschke, “Black cohosh and suspected hepatotoxicity: inconsistencies, confounding variables, and prospective use of a diagnostic causality algorithm. A critical review,” 17 Menopause 426 (2010) (“The presented data do not support the concept of hepatotoxicity in a primarily suspected causal relationship to the use of BC and failure to provide a signal of safety concern, but further efforts have to be undertaken to dismiss or to substantiate the existence of BC hepatotoxicity as a special disease entity. The future strategy should be focused on prospective causality evaluations in patients diagnosed with suspected BC hepatotoxicity, using a structured, quantitative, and hepatotoxicity-specific causality assessment method.”); Fabio Firenzuoli, Luigi Gori, and Paolo Roberti di Sarsina, “Black Cohosh Hepatic Safety: Follow-Up of 107 Patients Consuming a Special Cimicifuga racemosa rhizome Herbal Extract and Review of Literature,” 2011 Evidence-Based Complementary & Alternative Med. 1 (2011); Rolf Teschke, Wolfgang Schmidt-Taenzer and Albrecht Wolff, “Spontaneous reports of assumed herbal hepatotoxicity by black cohosh: is the liver-unspecific Naranjo scale precise enough to ascertain causality?” 20 Pharmacoepidemiol. & Drug Safety 567 (2011) (causation unlikely or excluded); Rolf Teschke, Alexander Schwarzenboeck, Wolfgang Schmidt-Taenzer, Albrecht Wolff, and Karl-Heinz Hennermann, “Herb induced liver injury presumably caused by black cohosh: A survey of initially purported cases and herbal quality specifications,” 11 Ann. Hepatology 249 (2011).

Johnson of Accutane – Keeping the Gate in the Garden State

March 28th, 2015

Nelson Johnson is the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a rattling good yarn, which formed the basis for a thinly fictionalized story of Atlantic City under the control of mob boss (and Republican politician) Enoch “Nucky” Johnson. HBO transformed Johnson’s book into a multi-season series, with Steve Buscemi playing Nucky Johnson (Thompson in the series). Robert Strauss, “Judge Nelson Johnson: Atlantic City’s Godfather — A Q&A with Judge Nelson Johnson,” New Jersey Monthly (Aug. 16, 2010).

Nelson Johnson is also known as the Honorable Nelson Johnson, a trial court judge in Atlantic County, New Jersey, where he inherited some of the mass tort docket of Judge Carol Higbee. Judge Higbee has since ascended to the Appellate Division of the New Jersey Superior Court. One of the litigations Judge Johnson presides over is the mosh pit of isotretinoin (Accutane) cases, involving claims that the acne medication causes inflammatory bowel disease (IBD) and Crohn’s disease (CD). Judge Johnson is not only an accomplished writer of historical fiction, but he is also an astute evaluator of the facts and data, and the accompanying lawyers’ rhetoric, thrown about in pharmaceutical products liability litigation.

Perhaps more than his predecessor ever displayed, Judge Johnson recently demonstrated his aptitude for facts and data in serving as a gatekeeper of scientific evidence, as required by the New Jersey Supreme Court in Kemp v. The State of New Jersey, 174 NJ 412 (2002). Faced with a complex evidentiary display on the validity and reliability of the scientific evidence, Judge Johnson entertained extensive briefings, testimony, and oral argument. When the dust settled, the court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in settling the dust, Judge Johnson dispatched several bogus and misleading “lines of evidence,” which have become standard ploys to clog New Jersey and other courthouses.

Case Reports

As so often is the case when there is no serious scientific evidence of harm in pharmaceutical cases, plaintiffs in the Accutane litigation relied heavily upon case and adverse event reports. Id. at *11. Judge Johnson was duly unimpressed, and noted that:

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16.

Bootstrapped, Manufactured Evidence

With respect to case reports that are submitted to the FDA’s Adverse Event Reporting System (FAERS), Judge Johnson acknowledged the “serious limitations” of the hearsay anecdotes that make up such reports. Despite the value of AERs in generating signals for future investigation, Judge Johnson, citing FDA’s own description of the reporting system, concluded that the system’s anecdotal data are “not evidentiary in a court of law.” Id. at *14 (quoting FDA’s description of FAERS).

Judge Johnson took notice of another fact; namely, that the litigation industry creates evidence that it then uses to claim causal connections in the courtroom. Plaintiffs’ lawyers in pharmaceutical cases routinely file MedWatch adverse event reports, which inflate the very “signal” of harm from medication use that they then invoke. This evidentiary bootstrapping machine was hard at work in the isotretinoin litigation. See Derrick J. Stobaugh, Parakkal Deepak, and Eli D. Ehrenpreis, “Alleged Isotretinoin-Associated Inflammatory Bowel Disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 398 (2013) (“Attorney-initiated reports inflate the pharmacovigilance signal of isotretinoin-associated IBD in the FAERS.”). Judge Johnson gave a wry hat tip to plaintiffs’ counsel’s industry, by acknowledging that the litigation industry itself had inflated this signal-generating process:

“The legal profession is a bulwark of our society, yet the courts should never underestimate the resourcefulness of some attorneys.”

In re Accutane, 2015 WL 753674, at *15.
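
The Stobaugh study cited above rests on a disproportionality analysis of FAERS-style data. A minimal sketch of one such measure, the proportional reporting ratio (PRR), appears below; all of the counts and the function name are hypothetical, chosen only to show how a burst of attorney-filed reports can move the metric:

# Minimal sketch of a proportional reporting ratio (PRR), a common
# disproportionality screen applied to FAERS-style spontaneous reports.
# All counts are invented for illustration.

def prr(a, b, c, d):
    # a: reports of the event of interest with the drug of interest
    # b: reports of all other events with the drug of interest
    # c: reports of the event of interest with all other drugs
    # d: reports of all other events with all other drugs
    return (a / (a + b)) / (c / (c + d))

baseline = prr(a=20, b=9_980, c=2_000, d=998_000)   # PRR = 1.00
# The same database after 30 additional attorney-filed reports of the event
inflated = prr(a=50, b=9_980, c=2_000, d=998_000)   # PRR of roughly 2.49

print(f"PRR before attorney filings: {baseline:.2f}")
print(f"PRR after attorney filings:  {inflated:.2f}")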

Bias and Confounding

The epidemiologic studies referenced by the parties had identified a fairly wide range of “risk factors” for inflammatory bowel disease, including many factors prevalent in Westernized countries, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that there were four medications known to be risk factors for IBD: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics.

In reviewing the plaintiffs’ expert witnesses’ methodology, Judge Johnson found that they had been inordinately and inappropriately selective in the studies chosen for reliance. The challenged witnesses had discounted and discarded most of the available studies in favor of two studies that were small, biased, and not population based. Indeed, one of the studies evidenced substantial selection bias by using referrals to obtain study participants, a process deprecated by the trial court as “cherry picking the subjects.” Id. at *18. “The scientific literature does not support reliance upon such insignificant studies to arrive at conclusions.” Id.

Animal Studies

Both sides in the isotretinoin cases seemed to concede the relative unimportance of animal studies. The trial court discussed the limitations on animal studies, especially the absence of a compelling animal model of human inflammatory bowel disease. Id. at *18.

Cherry Picking and Other Crafty Stratagems

With respect to the complete scientific evidentiary display, plaintiffs asserted that their expert witnesses had considered everything, but then failed to account for most of the evidence. Judge Johnson found this approach deceptive and further evidence of a cherry-picking, pathological methodology:

‘‘Finally, coursing through Plaintiffs’ presentation is a refrain that is a ruse. Repeatedly, counsel for the Plaintiffs and their witnesses spoke of ‛lines of evidence”, emphasizing that their experts examined ‛the same lines of evidence’ as did the experts for the Defense. Counsels’ sophistry is belied by the fact that the examination of the ‘lines of evidence’ by Plaintiffs’ experts was highly selective, looking no further than they wanted to—cherry picking the evidence—in order to find support for their conclusion-driven testimony in support of a hypothesis made of disparate pieces, all at the bottom of the medical evidence hierarchy.’’

Id. at *21.

New Jersey Rule of Evidence 703

The New Jersey rules of evidence, like the Federal Rules, impose a reasonableness limit on what sorts of otherwise inadmissible evidence an expert witness may rely upon. See “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 9, 2011). Although Judge Johnson did not invoke Rule 703 specifically, he was clearly troubled by plaintiffs’ expert witnesses’ reliance upon an unadjusted odds ratio from an abstract, which did not address substantial confounding from a known causal risk factor – antibiotics use. Judge Johnson concluded that the reliance upon the higher, unadjusted risk figure, contrary to the authors’ own methods and conclusions, and without a cogent explanation for doing so, was “pure advocacy” on the part of the witnesses. In re Accutane, 2015 WL 753674, at *17; see also id. at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404, 417 (1992), for the proposition that “when an expert relies on such data as epidemiological studies, the trial court should review the studies, as well as other information proffered by the parties, to determine if they are of a kind on which such experts ordinarily rely.”).

Discordance Between Courtroom and Professional Opinions

One of plaintiffs’ expert witnesses, Dr. Arthur Kornbluth, had actually studied the putative association between isotretinoin and CD before he became intensively involved in litigation as an expert witness. In re Accutane, 2015 WL 753674, at *7. Having an expert witness who is a real world expert can be a plus, but not when that expert witness maintains a double standard for assessing causal connections. Back in 2009, Kornbluth published an article, “Ulcerative Colitis Practice Guidelines in Adults,” in The American Journal of Gastroenterology. Id. at *10. This positive achievement became a large demerit when cross-examination at the Kemp hearing revealed that Kornbluth had considered but rejected the urgings of a colleague, Dr. David Sachar, to comment on isotretinoin as a cause of inflammatory bowel disease. In front of Judge Johnson, Dr. Kornbluth felt no such scruples. Id. at *11. Dr. Kornbluth’s stature in the field of gastroenterology, along with his silence on the issue in his own field, created a striking contrast with his stridency about causation in the courtroom. The contrast raised the trial court’s level of scrutiny and skepticism about his causal opinions in the New Jersey litigation. Id. (citing and quoting Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 528 (W.D. Pa. 2003) (“Expert opinions generated as the result of litigation have less credibility than opinions generated as the result of academic research or other forms of ‘pure’ research.”) (“The expert’s motivation for his/her study and research is important. … We may not ignore the fact that a scientist’s normal work place is the lab or field, not the courtroom or the lawyer’s office.”)).

Meta-Analysis

Meta-analysis has become an important facet of pharmaceutical and other products liability litigation[1]. Fortunately for Judge Johnson, he had before him an extremely capable expert witness, Dr. Stephen Goodman, to explain meta-analysis generally, and two meta-analyses performed on isotretinoin and inflammatory bowel disease outcomes. In re Accutane, 2015 WL 753674, at *8. Dr. Goodman explained that:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

Id. Dr. Goodman further explained that plaintiffs’ expert witnesses’ failure to perform a meta-analysis was telling, because meta-analysis “can get us closer to the truth.” Id.
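
For readers unfamiliar with the technique Dr. Goodman described, the sketch below shows the arithmetic core of a fixed-effect, inverse-variance meta-analysis: each study’s log relative risk is weighted by the inverse of its variance, and the weighted results are pooled. The study figures are hypothetical, chosen only to illustrate the calculation:

import math

# Minimal sketch of an inverse-variance (fixed-effect) meta-analysis of
# relative risks. The three study results below are invented for illustration.
studies = [
    # (relative risk, lower 95% limit, upper 95% limit)
    (1.20, 0.80, 1.80),
    (0.95, 0.70, 1.29),
    (1.10, 0.85, 1.42),
]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # standard error recovered from the CI
    w = 1 / se ** 2                                  # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled RR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96 * pooled_se):.2f})")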

Some Nitpicking

Specific Causation

After such a commanding judicial performance by Judge Johnson, nitpicking on specific causation might strike some as ungrateful. For some reason, however, Judge Johnson cited several cases on the appropriateness of expert witnesses’ reliance upon epidemiologic studies for assessing specific causation or for causal apportionment between two or more causes. In re Accutane, 2015 WL 753674, at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404 (1992), Caterinicchio v. Pittsburgh Corning, 127 N.J. 428 (1992), and Dafler v. Raymark Inc., 259 N.J. Super. 17, 36 (App. Div. 1992), aff’d. o.b. 132 N.J. 96 (1993)). Fair enough, but specific causation was not at issue in the Accutane Kemp hearing, and the Landrigan and Caterinicchio cases are irrelevant to general causation.

In both Landrigan and Caterinicchio, the defendants moved for directed verdicts by arguing that, assuming arguendo that asbestos causes colon cancer, the plaintiffs’ expert witnesses had not presented sufficient evidence to support findings that Landrigan’s and Caterinicchio’s colon cancers were caused by asbestos. See “Landrigan v. The Celotex Corporation, Revisited” (June 4, 2013). General causation was thus never at issue, and the holdings never addressed the admissibility of the expert witnesses’ causation opinions. Only the sufficiency of the opinions that equated increased risks of less than 2.0 with specific causation was at issue in the directed verdicts, and in the appeals taken from the judgments entered on those verdicts.
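
The arithmetic behind the doubling argument is simple enough to state. On the usual simplifying assumptions (a valid, unconfounded risk estimate and no individualized evidence), the probability that the exposure caused a particular plaintiff’s disease is approximated by the attributable fraction among the exposed; the notation below is illustrative, not drawn from the cited opinions:

\[
\Pr(\text{specific causation}) \approx \mathrm{AF}_{\text{exposed}} = \frac{RR - 1}{RR},
\qquad
RR = 1.5 \;\Rightarrow\; \mathrm{AF} = \frac{0.5}{1.5} \approx 0.33 < 0.5,
\qquad
\frac{RR - 1}{RR} > 0.5 \iff RR > 2 .
\]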

Judge Johnson, in discussing previous case law, suggests that the New Jersey Supreme Court reversed and remanded the Landrigan case for trial, holding that “epidemiologists could help juries determine causation in toxic tort cases and rejected the proposition that epidemiological studies must show a relative risk factor of 2.0 before gaining acceptance by a court.” In re Accutane, 2015 WL 753674, at *5, citing Landrigan, 127 N.J. at 419. A close and fair reading of Landrigan, however, shows that it was about a directed verdict, 127 N.J. at 412, and not a challenge to the use of epidemiologic studies generally, or to their use to show general causation.

Necessity of Precise Biological Mechanism

In the Accutane hearings, the plaintiffs’ counsel and their expert witnesses failed to provide a precise biological mechanism of the cause of IBD. Judge Johnson implied that any study that asserted that Accutane caused IBD ‘‘would, of necessity, require an explication of a precise biological mechanism of the cause of IBD and no one has yet to venture more than alternate and speculative hypotheses on that question.’’ In re Accutane, 2015 WL 753674, at *8. Conclusions of causality, however, do not always come accompanied by understood biological mechanisms, and Judge Johnson demonstrated that the methods and evidence relied upon by plaintiffs’ expert witnesses could not, in any event, allow them to draw causal conclusions.

Interpreting Results Contrary to Publication Authors’ Interpretations

There is good authority, no less than the United States Supreme Court in Joiner, that there is something suspect in expert witnesses’ interpreting a published study’s results contrary to the authors’ own interpretation. Judge Johnson found that the plaintiffs’ expert witnesses in the Accutane litigation had inferred that two studies showed increased risk when the authors of those studies had concluded that their studies did not appear to show an increased risk. Id. at *17. There will be times, however, when a published study has incorrectly interpreted its own data, and when “real” expert witnesses can, and should, interpret the data appropriately. Accutane was not such a case. In In re Accutane, Judge Johnson carefully documented and explained how the plaintiffs’ expert witnesses’ supposed reinterpretation was little more than attempted obfuscation. His Honor concluded that the witnesses’ distortion of, and ‘‘reliance upon these two studies is fatal and reveals the lengths to which legal counsel and their experts are willing to contort the facts and torture the logic associated with Plaintiffs’ hypothesis.’’ Id. at *18.


[1] “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (The Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).

The Joiner Finale

March 23rd, 2015

“This is the end
Beautiful friend
This is the end
My only friend, the end”

Jim Morrison, “The End” (c. 1966)

 *          *          *          *           *          *          *          *          *          *  

The General Electric Co. v. Joiner, 522 U.S. 136 (1997), case was based only in part upon polychlorinated biphenyl (PCB) exposures. The PCB part did not hold up well legally in the Supreme Court; nor was the PCB lung cancer claim vindicated by later scientific evidence. See “How Have Important Rule 702 Holdings Held Up With Time?” (Mar. 20, 2015).

The Supreme Court in Joiner reversed and remanded the case to the 11th Circuit, which then remanded the case back to the district court to address claims that Mr. Joiner had been exposed to furans and dioxins, and that these other chemicals had caused, or contributed to, his lung cancer, as well. Joiner v. General Electric Co., 134 F.3d 1457 (11th Cir. 1998) (per curiam). Thus the dioxins were left in the case even after the Supreme Court ruled.

After the Supreme Court’s decision, Anthony Roisman argued that the Court had addressed an artificial question when asked about PCBs alone because the case was really about an alleged mixture of exposures, and he held out hope that the Joiners would do better on remand. Anthony Z. Roisman, “The Implications of G.E. v. Joiner for Admissibility of Expert Testimony,” 1 Res Communes 65 (1999).

Many Daubert observers (including me) have been unaware of the legal fate of the Joiners’ claims on remand. In the only reference I could find, the commentator simply noted that the case resolved before trial.[1] I am indebted to Michael Risinger and Joseph Cecil for pointing me to documents from PACER, which shed some light upon the Joiner “endgame.”

In February 1998, Judge Orinda Evans, who had been the original trial judge, and who had sustained defendants’ Rule 702 challenges and granted their motions for summary judgment, received and reopened the case upon remand from the 11th Circuit. In March, Judge Evans directed the parties to submit a new pre-trial order by April 17, 1998. At a status conference in April 1998, Judge Evans permitted the plaintiffs additional discovery, to be completed by June 17, 1998. Five days before the expiration of their additional discovery period, the plaintiffs moved for additional time; defendants opposed the request. In July, Judge Evans granted the requested extension, and gave defendants until November 1, 1998, to file for summary judgment.

Meanwhile, in June 1998, new counsel entered their appearances for plaintiffs – William Sims Stone, Kevin R. Dean, Thomas Craig Earnest, and Stanley L. Merritt. The docket does not reflect much of anything about the new discovery other than a request for a protective order for an unpublished study. But by October 6, 1998, the new counsel, Earnest, Dean, and Stone (but not Merritt) withdrew as attorneys for the Joiners, and by the end of October 1998, Judge Evans entered an order to dismiss the case, without prejudice.

A few months later, in February 1999, the parties filed a stipulation, approved by the Clerk, dismissing the action with prejudice, and with each party to bear its own costs. Given the flight of plaintiffs’ counsel, and the dismissals without and then with prejudice, a settlement seems never to have been involved in the resolution of the Joiner case. In the end, the Joiners’ case fizzled, perhaps to avoid being Frye’d.

And what has happened since to the science of dioxins and lung cancer?

Not much.

In 2006, the National Research Council published a monograph on dioxin, which took the controversial approach of focusing on all cancer mortality rather than specific cancers that had been suggested as likely outcomes of interest. See David L. Eaton (Chairperson), Health Risks from Dioxin and Related Compounds – Evaluation of the EPA Reassessment (2006). The validity of this approach, and the committee’s conclusions, were challenged vigorously in subsequent publications. Paolo Boffetta, Kenneth A. Mundt, Hans-Olov Adami, Philip Cole, and Jack S. Mandel, “TCDD and cancer: A critical review of epidemiologic studies,” 41 Critical Rev. Toxicol. 622 (2011) (“In conclusion, recent epidemiological evidence falls far short of conclusively demonstrating a causal link between TCDD exposure and cancer risk in humans.”).

In 2013, the Industrial Injuries Advisory Council (IIAC), an independent scientific advisory body in the United Kingdom, published a review of lung cancer and dioxin. The Council found the epidemiologic studies mixed, and declined to endorse the compensability of lung cancer for dioxin-exposed industrial workers. Industrial Injuries Advisory Council – Information Note on Lung cancer and Dioxin (December 2013). See also Mann v. CSX Transp., Inc., 2009 WL 3766056, 2009 U.S. Dist. LEXIS 106433 (N.D. Ohio 2009) (Polster, J.) (dioxin exposure case) (“Plaintiffs’ medical expert, Dr. James Kornberg, has opined that numerous organizations have classified dioxins as a known human carcinogen. However, it is not appropriate for one set of experts to bring the conclusions of another set of experts into the courtroom and then testify merely that they ‘agree’ with that conclusion.”), citing Thorndike v. DaimlerChrysler Corp., 266 F. Supp. 2d 172 (D. Me. 2003) (court excluded expert who was “parroting” other experts’ conclusions).


[1] Morris S. Zedeck, Expert Witness in the Legal System: A Scientist’s Search for Justice 49 (2010) (noting that, after remand from the Supreme Court, Joiner v. General Electric resolved before trial)

How Have Important Rule 702 Holdings Held Up With Time?

March 20th, 2015

The Daubert case arose from claims of teratogenicity of Bendectin. The history of the evolving scientific record has not been kind to those claims. See “Bendectin, Diclegis & The Philosophy of Science” (Oct. 26, 2013); Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013). Twenty years later, the decisions in the Daubert appeals look sound, even if the reasoning was at times shaky. How have other notable Rule 702 exclusions stood up to evolving scientific records?

A recent publication of an epidemiologic study on lung cancer among workers exposed to polychlorinated biphenyls (PCBs) raised an interesting question about a gap in so-called Daubert scholarship. Clearly, there are some cases, like General Electric v. Joiner[1], in which plaintiffs lack sufficient, valid evidence to make out their causal claims. But are there cases of Type II injustices, for which, in the fullness of time, the insufficiency or invalidity of the available evidentiary display is “cured” by subsequently published studies?

In Joiner, Chief Justice Rehnquist noted that the district court had carefully analyzed the four epidemiologic studies claimed by plaintiff to support the association between PCB exposure and lung cancer. The first such study[2] involved workers at an Italian capacitor plant who had been exposed to PCBs.

The Chief Justice reported that the authors of the Italian capacitor study had noted that lung cancer deaths among former employees were more numerous than expected (without reporting whether there was any assessment of random error), but that they concluded that “there were apparently no grounds for associating lung cancer deaths (although increased above expectations) and exposure in the plant.”[3] The court frowned at the hired expert witnesses’ willingness to draw a causal inference when the authors of the Bertazzi study would not. As others have noted, this disapproval was beside the point of the Rule 702 inquiry. It might well be the case that Bertazzi and his co-authors could not or did not conduct a causal analysis, but that does not mean that the study’s evidence could not be part of a larger effort to synthesize the available evidence. In any event, the Bertazzi study was small and uninformative. Although all cancer mortality was increased (14 observed vs. 5.5 expected, based upon national rates; SMR = 253; 95% CI 144-415), the study was too small to be meaningful for lung cancer outcomes.
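
The quoted figures can be checked with a short calculation. The sketch below computes the standardized mortality ratio and an exact (Garwood) Poisson confidence interval from the observed and expected counts; rounding and choice of method mean the limits will not exactly match the published 144-415, and the code is mine, not the study authors’:

from scipy.stats import chi2

# Minimal sketch: SMR with an exact Poisson confidence interval, using the
# all-cancer figures quoted from the Bertazzi study (14 observed, 5.5 expected).
observed, expected = 14, 5.5

smr = 100 * observed / expected                               # about 254

# Exact (Garwood) limits for the Poisson mean, scaled to SMR units
lower = 100 * chi2.ppf(0.025, 2 * observed) / (2 * expected)
upper = 100 * chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)

print(f"SMR = {smr:.0f} (95% CI {lower:.0f}-{upper:.0f})")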

The second cited study[4], from an unpublished report, followed workers at a Monsanto PCB production facility. The authors of the Monsanto study reported that the lung cancer mortality rate among exposed workers was “somewhat” higher than expected, but that the “increase, however, was not statistically significant and the authors of the study did not suggest a link between the increase in lung cancer deaths and the exposure to PCBs.” Again, the Court’s emphasis on what the authors stated is unfortunate. What is important, the data themselves, remains obscured because the Court never reproduced the data from this unpublished study.

The third study[5] cited by plaintiff’s hired expert witnesses was of “no help,” in that the study followed workers exposed to mineral oil, without any known exposure to PCBs. Although the workers exposed to this particular mineral oil had a statistically significantly elevated lung cancer mortality, the study made no reference to PCBs.

The fourth study[6] cited by plaintiffs’ expert witnesses followed a Japanese PCB-exposed group, which had a “statistically significant increase in lung cancer deaths.” The Court, however, was properly concerned that the cohort was exposed to numerous other potential carcinogens, including toxic rice oil by ingestion.

The paucity of this evidence led the Court to observe:

“Trained experts commonly extrapolate from existing data. But nothing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence which is connected to existing data only by the ipse dixit of the expert. A court may conclude that there is simply too great an analytical gap between the data and the opinion proffered. … That is what the District Court did here, and we hold that it did not abuse its discretion in so doing.”

Joiner, 522 U.S. at 146 (1997).

Interestingly omitted from the Supreme Court’s discussion was why the plaintiffs’ expert witnesses failed to rely upon all the available epidemiology. The excluded witnesses relied upon an unpublished Monsanto study, but apparently ignored an unpublished investigation by NIOSH researchers, who found that there were “no excess deaths from cancers of the … the lung,” among PCB-exposed workers at a Westinghouse Electric manufacturing facility[7]. Actually, NIOSH reported a statistically non-significant decrease in the lung cancer rate, with a fairly narrow confidence interval.

Two Swedish studies[8] were perhaps too small to add much to the mix of evidence, but lung cancer rates were not apparently increased in a North American study[9].

Joiner thus represents not only an analytical gap case, but a cherry-picking case as well. The Supreme Court was eminently correct to affirm the exclusion of the shoddy evidence proffered in the Joiner case.

But has the District Judge’s exclusion of Joiner’s expert witnesses (Dr. Arnold Schecter and Dr. (Rabbi) Daniel Teitelbaum) stood up to the evolving scientific record?

A couple of weeks ago, researchers published a large, updated cohort study, funded by General Electric, on the mortality experience of workers in a plant that manufactured capacitors with PCBs[10]. Although the Lobby and the Occupational Medicine Zealots will whine about the funding source, the study is a much stronger study than anything relied upon by Mr. Joiner’s expert witnesses, and its results are consistent with the NIOSH study available to, but ignored by, Joiner’s expert witnesses. And the results are not uniformly good for General Electric, but on the end point of lung cancer for men, the standardized mortality ratio was 81 (95% C.I., 68 – 96), nominally statistically significantly below the expected SMR of 100.


[1] General Electric v. Joiner, 522 U.S. 136 (1997).

[2] Bertazzi, Riboldi, Pesatori, Radice, & Zocchetti, “Cancer Mortality of Capacitor Manufacturing Workers,” 11 Am. J. Indus. Med. 165 (1987).

[3] Id. at 172.

[4] J. Zack & D. Munsch, Mortality of PCB Workers at the Monsanto Plant in Sauget, Illinois (Dec. 14, 1979) (unpublished report), 3 Rec., Doc. No. 11.

[5] Ronneberg, Andersen, Skyberg, “Mortality and Incidence of Cancer Among Oil-Exposed Workers in a Norwegian Cable Manufacturing Company,” 45 Br. J. Indus. Med. 595 (1988).

[6] Kuratsune, Nakamura, Ikeda, & Hirohata, “Analysis of Deaths Seen Among Patients with Yusho – A Preliminary Report,” 16 Chemosphere 2085 (1987).

[7] Thomas Sinks, Alexander B. Smith, Robert Rinsky, M. Kathy Watkins, and Ruth Shults, Health Hazard Evaluation Report, HETA 89-116-209 (Jan. 1991) (reporting lung cancer SMR = 0.7 (95% CI, 0.4 – 1.2)). The study was published by the time the Joiner case was litigated. Thomas Sinks, G. Steele, Alexander B. Smith, and Ruth Shults, “Mortality among workers exposed to polychlorinated biphenyls,” 136 Am. J. Epidemiol. 389 (1992). A follow-up on this study confirmed the paucity of lung cancer in the cohort. See Avima M. Ruder, Misty J. Hein, Nancy Nilsen, Martha A. Waters, Patricia Laber, Karen Davis-King, Mary M. Prince, and Elizabeth Whelan, “Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update,” 114 Environmental Health Perspect. 18 (2006).

[8] P. Gustavsson, C. Hogstedt, and C. Rappe, “Short-term mortality and cancer incidence in capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 10 Am. J. Indus. Med. 341 (1986); P. Gustavsson & C. Hogstedt, “A cohort study of Swedish capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 32 Am. J. Indus. Med. 234 (1997) (cancer incidence for entire cohort, SIR = 86; 95% CI 51-137).

[9] David P. Brown, “Mortality of workers exposed to polychlorinated biphenyls–an update,” 42 Arch. Envt’l Health 333 (1987)

[10] See Renate D. Kimbrough, Constantine A. Krouskas, Wenjing Xu, and Peter G. Shields, “Mortality among capacitor workers exposed to polychlorinated biphenyls (PCBs), a long-term update,” 88 Internat’l Arch. Occup. & Envt’l Health 85 (2015).

The Mythology of Linear No-Threshold Cancer Causation

March 13th, 2015

“For the great enemy of the truth is very often not the lie—deliberate, contrived, and dishonest—but the myth—persistent, persuasive, and unrealistic. Too often we hold fast to the clichés of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought.”

John F. Kennedy, Yale University Commencement (June 11, 1962)

         *        *        *        *        *        *        *        *        *

The linear no-threshold model for risk assessment has its origins in a dubious attempt by scientists playing at policy making[1]. The model has survived as a political strategy to inject the precautionary principle into regulatory decision making, but it has turned into a malignant myth in litigation over low-dose exposures to putative carcinogens. Ignorance or uncertainty about low-dose exposures is turned into an affirmative opinion that the low-dose exposures are actually causative. Call it contrived, or dishonest, or call it a myth; the LNT model is an intellectual cliché.
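
In schematic terms (the notation is illustrative, not taken from any of the sources cited here), the dispute is between a dose-response function with no threshold and one with a threshold dose below which no excess risk accrues:

\[
\text{LNT:}\quad R(d) = R_0 + \beta\, d \quad (d \ge 0),
\qquad
\text{threshold:}\quad R(d) = R_0 + \beta \,\max(0,\; d - d_0),
\]

where \(R_0\) is the background risk, \(\beta\) a potency slope, and \(d_0\) a threshold dose. The litigation mischief arises from treating the first form as established fact at doses far below those at which any data exist.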

The LNT cliché pervades American media as well as courtrooms. Earlier this week, the New York Times provided a lovely example of the myth taking center stage, without explanation or justification. Lumber Liquidators is under regulatory and litigation attack for having sold Chinese laminate wood flooring made with formaldehyde-containing materials. According to a “60 Minutes” investigation, the flooring off-gases formaldehyde at concentrations in excess of regulatory permissible levels. See Aaron M. Kessler & Rachel Abrams, “Homeowners Try to Assess Risks From Chemical in Floors,” New York Times (Mar. 10, 2015).

The Times reporters, in discussing whether a risk exists to people who live in houses and apartments with the Lumber Liquidators flooring, sought out and quoted the opinion of Marilyn Howarth:

“Any exposure to a carcinogen can increase your risk of cancer,” said Marilyn Howarth, a toxicologist at the University of Pennsylvania’s Perelman School of Medicine.

Id. Dr. Howarth, however, is not a toxicologist; she is an occupational and environmental physician, and serves as the Director of Occupational and Environmental Consultation Services at the Hospital of the University of Pennsylvania. She is also an adjunct associate professor of emergency medicine, and the Director of the Community Outreach and Engagement Core, Center of Excellence in Environmental Toxicology, at the University of Pennsylvania Perelman School of Medicine. Without detracting from Dr. Howarth’s fine credentials, the New York Times reporters might have noticed that Dr. Howarth’s publications are primarily on latex allergies, and not on the issue of the effect of low-dose exposure to carcinogens.

The point is not to diminish Dr. Howarth’s accomplishments, but to criticize the Times reporters for seeking out an opinion of a physician whose expertise is not well matched to the question they raise about risks, and then to publish that opinion even though it is demonstrably wrong. Clearly, some carcinogens, and perhaps all, do not increase risk at “any exposure.” Consider ethanol, which is known to cause cancer of the larynx, liver, female breast, and perhaps other organs[2]. Despite known causation, no one would assert that “any exposure” to alcohol-containing food and drink increases the risk of these cancers. And the same could be said for most, if not all, carcinogens. The human body has defense mechanisms against carcinogens, including DNA repair mechanisms and programmed cell suicide, which work to prevent carcinogenesis from low-dose exposures.

The no-threshold hypothesis is, at best, a hypothesis, and there is affirmative evidence that it should be rejected for some cancers[3]. Treated as established fact, LNT is a myth; it is an opinion, and a poorly supported opinion at that.

         *        *        *        *        *        *        *        *        *

“There are, in fact, two things: science and opinion. The former brings knowledge, the latter ignorance.”

Hippocrates of Cos


[1] See Edward J. Calabrese, “Cancer risk assessment foundation unraveling: New historical evidence reveals that the US National Academy of Sciences (US NAS), Biological Effects of Atomic Radiation (BEAR) Committee Genetics Panel falsified the research record to promote acceptance of the LNT,” 89 Arch. Toxicol. 649 (2015); Edward J. Calabrese & Michael K. O’Connor, “Estimating Risk of Low Radiation Doses – A Critical Review of the BEIR VII Report and its Use of the Linear No-Threshold (LNT) Hypothesis,” 182 Radiation Research 463 (2014); Edward J. Calabrese, “Origin of the linearity no threshold (LNT) dose–response concept,” 87 Arch. Toxicol. 1621 (2013); Edward J. Calabrese, “The road to linearity at low doses became the basis for carcinogen risk assessment,” 83 Arch. Toxicol. 203 (2009).

[2] See, e.g., IARC Monographs on the Evaluation of Carcinogenic Risks to Humans – Alcohol Consumption and Ethyl Carbamate; volume 96 (2010).

[3] See, e.g., Jerry M. Cuttler, “Commentary on Fukushima and Beneficial Effects of Low Radiation,” 11 Dose-Response 432 (2013); Jerry M. Cuttler, “Remedy for Radiation Fear – Discard the Politicized Science,” 12 Dose Response 170 (2014).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics who have served as expert witnesses and who have written post-mortems of their involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts of the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 (the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP,” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inconclusive studies as evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in humans. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity directly contradicts California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.”

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may readily be seen as an advocate coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggests that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

These foregoing points are rather severe limitations to Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party of affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.
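
A small simulation, using entirely invented data, makes the distinction concrete: the five percent significance level (the flip side of the 95% confidence coefficient) is a long-run error rate over repeated studies, not a 95% assurance that any particular “significant” finding is real, and certainly not proof beyond a reasonable doubt:

import numpy as np
from scipy import stats

# Minimal sketch: when the null hypothesis is true, roughly 5% of studies
# will still be "statistically significant" at the 0.05 level.
rng = np.random.default_rng(0)
n_studies, n_per_arm = 10_000, 200

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_arm)   # both arms drawn from the same
    b = rng.normal(0.0, 1.0, n_per_arm)   # distribution: no true effect
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

# Prints a proportion close to 0.05: a long-run error rate, not a statement
# about the probability that any one significant result reflects a real effect.
print(false_positives / n_studies)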

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new. Many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility to making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then the testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts”

39 Wake Forest Law Rev. at 293-4. This recommendation would be helpful in assuring courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inferences. Greenland emphasizes that validity considerations often are much stronger, and more important considerations than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302 – 03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
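
For readers who want to see what even a simple quantitative bias analysis looks like, the sketch below applies the standard bias-factor formula for a single unmeasured confounder; the inputs and the function name are hypothetical, not taken from Lash et al.:

# Minimal sketch of a simple quantitative bias analysis for one unmeasured
# confounder. All inputs are invented for illustration.

def adjusted_rr(rr_observed, rr_confounder_disease, p_exposed, p_unexposed):
    # Divide the observed RR by the bias factor implied by a confounder with
    # the stated confounder-disease RR and prevalences among exposed/unexposed.
    bias = ((rr_confounder_disease * p_exposed + (1 - p_exposed)) /
            (rr_confounder_disease * p_unexposed + (1 - p_unexposed)))
    return rr_observed / bias

# An observed RR of 1.5 is fully explained away by a confounder that triples
# disease risk and is five times more prevalent among the exposed.
print(adjusted_rr(1.5, rr_confounder_disease=3.0,
                  p_exposed=0.50, p_unexposed=0.10))   # about 0.90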


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished physician-scientist, who served as a defense expert witness, in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra, n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004)

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics, who has served as an expert witness, who has written post-mortems of his involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts to the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 ( the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inclusive studies with evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in human. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity is directly contradictory of California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may be readily be seen as an advocate for coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggest that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

These foregoing points are rather severe limitations to Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party of affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new. Many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility in making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then the testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts.”

39 Wake Forest Law Rev. at 293-4. This recommendation would be helpful in assuring courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inference. Greenland emphasizes that validity considerations often are stronger, and more important, considerations than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
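To give a flavor of what such bias analysis involves, consider the textbook external adjustment of a risk ratio for a single unmeasured, binary confounder. The sketch below is a minimal illustration only; the prevalence figures and relative risks are hypothetical, and are not drawn from any study or from the Lash paper.

```python
# Minimal sketch of simple quantitative bias analysis: external adjustment of
# an observed risk ratio for one unmeasured, binary confounder.
# All numbers are hypothetical.

def bias_factor(p_conf_exposed, p_conf_unexposed, rr_conf_disease):
    """Multiplicative bias from an unmeasured binary confounder.

    p_conf_exposed / p_conf_unexposed: prevalence of the confounder among
    the exposed and the unexposed; rr_conf_disease: confounder-disease
    relative risk (assumed constant across exposure groups).
    """
    return ((p_conf_exposed * (rr_conf_disease - 1) + 1) /
            (p_conf_unexposed * (rr_conf_disease - 1) + 1))

observed_rr = 1.5                      # hypothetical observed association
bf = bias_factor(0.40, 0.20, 2.0)      # hypothetical confounder scenario
adjusted_rr = observed_rr / bf         # association net of the confounder

print(f"bias factor = {bf:.2f}")          # about 1.17
print(f"adjusted RR = {adjusted_rr:.2f}")  # about 1.29
```

Even this toy calculation shows how a plausible, unmeasured confounder can erode much of a modest association without any help from random error, which is the point of Greenland’s admonition that validity can matter more than p-values.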


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished physician-scientist who served as a defense expert witness in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’ – or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Playing Dumb on Statistical Significance

January 4th, 2015

For the last decade, at least, researchers have written to document, explain, and correct a high rate of false-positive research findings in biomedical research[1]. And yet, some authors complain that the traditional standard of statistical significance is too stringent. The best explanation for this paradox appears to lie in these authors’ rhetorical strategy of protecting their “scientific conclusions,” based upon weak and uncertain research findings, from criticism. The strategy includes mischaracterizing significance probability as a burden of proof, and then speciously claiming that the conventional level of significance is too high a threshold for posterior probabilities of scientific claims. See “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

Naomi Oreskes is a professor of the history of science at Harvard University. Her writings on the history of geology are well respected; her writings on climate change tend to be more adversarial, rhetorical, and ad hominem. See, e.g., Naomi Oreskes, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (N.Y. 2010). Oreskes’ abuse of the meaning of significance probability for her own rhetorical ends is on display in today’s New York Times. Naomi Oreskes, “Playing Dumb on Climate Change,” N.Y. Times Sunday Rev. at 2 (Jan. 4, 2015).

Oreskes wants her readers to believe that those who are resisting her conclusions about climate change are hiding behind an unreasonably high burden of proof, which follows from the conventional standard of significance in significance probability. In presenting her argument, Oreskes consistently misrepresents the meaning of statistical significance and confidence intervals to be about the overall burden of proof for a scientific claim:

“Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20. But it also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim. It’s like not gambling in Las Vegas even though you had a nearly 95 percent chance of winning.”

Although the confidence interval is related to the pre-specified Type I error rate, alpha, and so a conventional alpha of 5% does lead to a coefficient of confidence of 95%, Oreskes has misstated the confidence interval to be a burden of proof consisting of a 95% posterior probability. The “relationship” is either true or not; the p-value or confidence interval provides a probability for the sample statistic, or one more extreme, on the assumption that the null hypothesis is correct. The 95% probability of confidence intervals derives from the long-term frequency that 95% of all confidence intervals, based upon samples of the same size, will contain the true parameter of interest.
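The frequentist point can be made concrete with a short simulation (the data are hypothetical, not climate data): about 95% of the intervals generated by the procedure cover the true parameter, which is not the same thing as a 95% probability that any particular causal claim is true.

```python
# Minimal sketch: the "95%" describes long-run coverage of the interval-
# generating procedure, not the probability that a given hypothesis is true.
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 10.0, 2.0, 50, 10_000

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half_width) <= true_mean <= (sample.mean() + half_width)

print(f"coverage over {trials:,} repeated samples: {covered / trials:.3f}")  # about 0.95
```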

Oreskes is an historian, but her history of statistical significance appears equally ill considered. Here is how she describes the “severe” standard of the 95% confidence interval:

“Where does this severe standard come from? The 95 percent confidence level is generally credited to the British statistician R. A. Fisher, who was interested in the problem of how to be sure an observed effect of an experiment was not just the result of chance. While there have been enormous arguments among statisticians about what a 95 percent confidence level really means, working scientists routinely use it.”

First, Oreskes, the historian, gets the history wrong. The confidence interval is due to Jerzy Neyman, not to Sir Ronald A. Fisher. Jerzy Neyman, “Outline of a theory of statistical estimation based on the classical theory of probability,” 236 Philos. Trans. Royal Soc’y Lond. Ser. A 333 (1937). Second, although statisticians have debated the meaning of the confidence interval, they have not wandered from its essential use as an estimation of the parameter (based upon the use of an unbiased, consistent sample statistic) and a measure of random error (not systematic error) about the sample statistic. Oreskes provides a fallacious history, with a false and misleading statistics tutorial.

Oreskes, however, goes on to misidentify the 95% coefficient of confidence with the legal standard known as “beyond a reasonable doubt”:

“But the 95 percent level has no actual basis in nature. It is a convention, a value judgment. The value it reflects is one that says that the worst mistake a scientist can make is to think an effect is real when it is not. This is the familiar “Type 1 error.” You can think of it as being gullible, fooling yourself, or having undue faith in your own ideas. To avoid it, scientists place the burden of proof on the person making an affirmative claim. But this means that science is prone to ‘Type 2 errors’: being too conservative and missing causes and effects that are really there.

Is a Type 1 error worse than a Type 2? It depends on your point of view, and on the risks inherent in getting the answer wrong. The fear of the Type 1 error asks us to play dumb; in effect, to start from scratch and act as if we know nothing. That makes sense when we really don’t know what’s going on, as in the early stages of a scientific investigation. It also makes sense in a court of law, where we presume innocence to protect ourselves from government tyranny and overzealous prosecutors — but there are no doubt prosecutors who would argue for a lower standard to protect society from crime.

When applied to evaluating environmental hazards, the fear of gullibility can lead us to understate threats. It places the burden of proof on the victim rather than, for example, on the manufacturer of a harmful product. The consequence is that we may fail to protect people who are really getting hurt.”

The truth of climate change opinions does not turn on sampling error, but rather on the desire to draw inferences from messy, incomplete, non-random, and inaccurate measurements, fed into models of uncertain validity. Oreskes suggests that significance probability is keeping us from acknowledging a scientific fact, but the climate change data sets are amply large to rule out sampling error, if sampling error were the problem. And Oreskes’ suggestion that statistical significance somehow places a burden upon the “victim” simply assumes what she hopes to prove; namely, that there is a victim (and a perpetrator).
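The arithmetic behind that point is elementary: the width of a conventional 95% interval shrinks in proportion to the square root of the sample size, so for very large data sets the live dispute is measurement and model validity, not the play of chance. The figures below are illustrative only.

```python
# Minimal sketch: sampling error becomes negligible as n grows; validity does not.
import math

sigma = 1.0  # hypothetical unit standard deviation of the measurements
for n in (100, 10_000, 1_000_000):
    half_width = 1.96 * sigma / math.sqrt(n)
    print(f"n = {n:>9,}: 95% CI half-width = +/-{half_width:.4f}")
```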

Oreskes’ solution seems to have a Bayesian ring to it. She urges that we should start with our a priori beliefs, intuitions, and pre-existing studies, and allow them to lower our threshold for significance probability:

“And what if we aren’t dumb? What if we have evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful; for example, that it has been shown to interfere with cell function in laboratory mice. Then it might be reasonable to accept a lower statistical threshold when examining effects in people, because you already have reason to believe that the observed effect is not just chance.

This is what the United States government argued in the case of secondhand smoke. Since bystanders inhaled the same chemicals as smokers, and those chemicals were known to be carcinogenic, it stood to reason that secondhand smoke would be carcinogenic, too. That is why the Environmental Protection Agency accepted a (slightly) lower burden of proof: 90 percent instead of 95 percent.”

Oreskes’ rhetoric misstates key aspects of scientific method. The demonstration of causality in mice, or only some perturbation of cell function in non-human animals, does not warrant lowering our standard for studies in human beings. Mice and rats are, for many purposes, poor predictors of human health effects. All medications developed for human use are tested in animals first, for safety and efficacy. A large majority of such medications, efficacious in rodents, fail to satisfy the conventional standards of significance probability in randomized clinical trials. And that standard is not lowered because the drug sponsor had previously demonstrated efficacy in mice, or some other furry rodent.

The EPA meta-analysis of passive smoking and lung cancer is a good example of how not to conduct science. The protocol for the EPA meta-analysis called for a 95% confidence interval, but the agency scientists manipulated their results by altering the pre-specified coefficient of confidence in their final report. Perhaps even more disgraceful was the selectivity of included studies for the meta-analysis, which biased the agency’s result in a way not reflected in p-values or confidence intervals. See “EPA Cherry Picking (WOE) – EPA 1992 Meta-Analysis of ETA & Lung Cancer – Part 1” (Dec. 2, 2012); “EPA Post Hoc Statistical Tests – One Tail vs Two” (Dec. 2, 2012).

Of course, the scientists preparing for and conducting a meta-analysis on environmental tobacco smoke began with a well-justified belief that active smoking causes lung cancer. Passive smoking, however, involves very different exposure levels and raises serious issues about the human body’s defensive mechanisms against low-level exposure. Insisting on a reasonable quality meta-analysis for passive smoking and lung cancer was not a matter of “playing dumb”; it was a recognition of our actual ignorance and uncertainty about the claim being made for low-exposure effects. The shifty confidence intervals and slippery methodology exemplify how agency scientists assume their probandum to be true, and then manipulate or adjust their methods to provide the result they had assumed all along.
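The arithmetic of the shift is simple, and worth seeing once. With a hypothetical pooled relative risk and standard error (the numbers below are invented for illustration and are not the EPA’s), the same point estimate can have a 95% interval that includes 1.0 and a 90% interval that excludes it:

```python
# Minimal sketch (hypothetical numbers): relaxing the coefficient of confidence
# from 95% to 90% can move a borderline result across the line of "significance."
import math

rr = 1.19           # hypothetical pooled relative risk
se_log_rr = 0.095   # hypothetical standard error of log(RR)

for label, z in (("95%", 1.960), ("90%", 1.645)):
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    print(f"{label} CI: {lo:.2f} - {hi:.2f}")

# 95% CI: 0.99 - 1.43  (includes 1.0)
# 90% CI: 1.02 - 1.39  (excludes 1.0)
```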

Oreskes then analogizes not playing dumb on environmental tobacco smoke to not playing dumb on climate change:

“In the case of climate change, we are not dumb at all. We know that carbon dioxide is a greenhouse gas, we know that its concentration in the atmosphere has increased by about 40 percent since the industrial revolution, and we know the mechanism by which it warms the planet.

WHY don’t scientists pick the standard that is appropriate to the case at hand, instead of adhering to an absolutist one? The answer can be found in a surprising place: the history of science in relation to religion. The 95 percent confidence limit reflects a long tradition in the history of science that valorizes skepticism as an antidote to religious faith.”

I will leave the substance of the climate change issue to others, but Oreskes’ methodological misidentification of the 95% coefficient of confidence with a burden of proof is wrong. Regardless of motive, the error obscures the real debate, which is about data quality. More disturbing is that Oreskes’ error confuses significance and posterior probabilities, and distorts the meaning of burden of proof. To be sure, the article by Oreskes is labeled opinion, and Oreskes is entitled to her opinions about climate change and whatever else. To the extent that her opinions, however, are based upon obvious factual errors about statistical methodology, they are entitled to no weight at all.



[1] See, e.g., John P. A. Ioannidis, “How to Make More Published Research True,” 11 PLoS Medicine e1001747 (2014); John P. A. Ioannidis, “Why Most Published Research Findings Are False” 2 PLoS Medicine e124 (2005); John P. A. Ioannidis, Anna-Bettina Haidich, and Joseph Lau, “Any casualties in the clash of randomised and observational evidence?” 322 Brit. Med. J. 879 (2001).


Showing Causation in the Absence of Controlled Studies

December 17th, 2014

The Federal Judicial Center’s Reference Manual on Scientific Evidence has avoided any clear, consistent guidance on the issue of case reports. The Second Edition waffled:

“Case reports lack controls and thus do not provide as much information as controlled epidemiological studies do. However, case reports are often all that is available on a particular subject because they usually do not require substantial, if any, funding to accomplish, and human exposure may be rare and difficult to study. Causal attribution based on case studies must be regarded with caution. However, such studies may be carefully considered in light of other information available, including toxicological data.”

F.J.C. Reference Manual on Scientific Evidence at 474-75 (2d ed. 2000). Note the complete lack of discussion of base-line risk, prevalence of exposure, and external validity of the “toxicological data.”

The second edition’s more analytically acute and rigorous chapter on statistics generally acknowledged the unreliability of anecdotal evidence of causation. See David Kaye & David Freedman, “Reference Guide on Statistics,” in F.J.C. Reference Manual on Scientific Evidence 91 – 92 (2d ed. 2000).

The Third Edition of the Reference Manual is even less coherent. Professor Berger’s introductory chapter[1] begrudgingly acknowledges, without approval, that:

“[s]ome courts have explicitly stated that certain types of evidence proffered to prove causation have no probative value and therefore cannot be reliable.”59

The chapter on statistical evidence, which had been relatively clear in the second edition, now states that controlled studies may be better but case reports can be helpful:

“When causation is the issue, anecdotal evidence can be brought to bear. So can observational studies or controlled experiments. Anecdotal reports may be of value, but they are ordinarily more helpful in generating lines of inquiry than in proving causation.”14

Reference Manual at 217 (3d ed. 2011). The “ordinarily” is given no context or contour for readers. These authors fail to provide any guidance on what will come from anecdotal evidence, or on when and why anecdotal reports may do more than merely generate “lines of inquiry.”

In Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011), the Supreme Court went out of its way, way out of its way, to suggest that statistical significance was not always necessary to support conclusions of causation in medicine. Id. at 1319. The Court cited three Circuit court decisions to support its suggestion, but two of the three involved specific causation inferences from so-called differential etiologies. General causation was assumed in those two cases, and not at issue[2]. The third case, the notorious Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986), was also cited in support of the suggestion that statistical significance was not necessary, but in Wells, the plaintiffs’ expert witnesses actually relied upon studies that claimed at least nominal statistical significance. Wells was and remains representative of what results when trial judges ignore the constraints of study validity. The Supreme Court, in any event, abjured any intent to specify “whether the expert testimony was properly admitted in those cases [Wells and others],” and the Court made no “attempt to define here what constitutes reliable evidence of causation.” 131 S. Ct. at 1319.

The causal claim in Siracusano involved anosmia, loss of the sense of smell, from the use of Zicam, zinc gluconate. The case arose from a motion to dismiss the complaint; no evidence was ever presented or admitted. No baseline risk of anosmia was pleaded; nor did plaintiffs allege that any controlled study demonstrated an increased risk of anosmia from nasal instillation of zinc gluconate. There were, however, clinical trials conducted in the 1930s, with zinc sulfate for poliomyelitis prophylaxis, which showed a substantial incidence of anosmia in the treated children[3]. Matrixx tried to argue that this evidence was unreliable, in part because it involved a different compound, but this argument (1) in turn demonstrated a factual issue that required discovery and perhaps a trial, and (2) traded on a clear error in asserting that the zinc in zinc sulfate and zinc gluconate were different, when in fact they are both ionic compounds that result in zinc ion exposure, as the active constituent.

The position stridently staked out in Matrixx Initiatives is not uncommon among defense counsel in tort cases. Certainly, similar, unqualified statements, rejecting the use of case reports for supporting causal conclusions, can be found in the medical literature[4].

When the disease outcome has an expected value, a baseline rate, in the exposed population, then case reports simply confirm what we already know: cases of the disease happen in people regardless of their exposure status. For this reason, medical societies, such as the Teratology Society, have issued guidances that generally downplay or dismiss the role that case reports may have in the assessment and determination of causality for birth defects:

“5. A single case report by itself is not evidence of a causal relationship between an exposure and an outcome.  Combinations of both exposures and adverse developmental outcomes frequently occur by chance. Common exposures and developmental abnormalities often occur together when there is no causal link at all. Multiple case reports may be appropriate as evidence of causation if the exposures and outcomes are both well-defined and low in incidence in the general population. The use of multiple case reports as evidence of causation is analogous to the use of historical population controls: the co-occurrence of thalidomide ingestion in pregnancy and phocomelia in the offspring was evidence of causation because both thalidomide use and phocomelia were highly unusual in the population prior to the period of interest. Given how common exposures may be, and how common adverse pregnancy outcome is, reliance on multiple case reports as the sole evidence for causation is unsatisfactory.”

The Public Affairs Committee of the Teratology Society, “Teratology Society Public Affairs Committee Position Paper Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421, 423 (2005).
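The Teratology Society’s point about chance co-occurrence is easy to quantify. The back-of-the-envelope sketch below uses round, hypothetical prevalence figures (on the order of four million U.S. births a year and a roughly three percent background rate of major birth defects) simply to show the scale of expected co-occurrences in the absence of any causal link at all.

```python
# Minimal sketch (hypothetical, rounded figures): when exposure and outcome are
# both common, thousands of co-occurrences are expected by chance alone.
births_per_year = 4_000_000     # rough order of annual U.S. births
exposure_prevalence = 0.10      # hypothetical: 10% of pregnancies exposed
major_defect_rate = 0.03        # rough background rate of major birth defects

expected_cooccurrences = births_per_year * exposure_prevalence * major_defect_rate
print(f"expected exposed cases per year with no causal link: "
      f"{expected_cooccurrences:,.0f}")   # 12,000
```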

When the base rate for the outcome is near zero, and other circumstantial evidence is present, some commentators insist that causality may be inferred from well-documented case reports:

“However, we propose that some adverse drug reactions are so convincing, even without traditional chronological causal criteria such as challenge tests, that a well documented anecdotal report can provide convincing evidence of a causal association and further verification is not needed.”

Jeffrey K. Aronson & Manfred Hauben, “Drug safety: Anecdotes that provide definitive evidence,” 333 Brit. Med. J. 1267, 1267 (2006) (Dr. Hauben was medical director of risk management strategy for Pfizer, in New York, at the time of publication). But which ones are convincing, and why?

        *        *        *        *        *        *        *        *        *

Dr. David Schwartz, in a recent blog post, picked up on some of my discussion of the gadolinium case reports (see here and there), and posited the ultimate question: when are case reports sufficient to show causation? David Schwartz, “8 Examples of Causal Inference Without Data from Controlled Studies” (Dec. 14, 2014).

Dr. Schwartz discusses several causal claims, all of which gave rise to litigation at some point, in which case reports or case series played an important, if not dispositive, role:

  1.      Gadolinium-based contrast agents and NSF
  2.      Amphibole asbestos and malignant mesothelioma
  3.      Ionizing radiation and multiple cancers
  4.      Thalidomide and teratogenicity
  5.      Rezulin and acute liver failure
  6.      DES and clear cell vaginal adenocarcinoma
  7.      Vinyl chloride and angiosarcoma
  8.      Manganese exposure and manganism

Dr. Schwartz’s discussion is well worth reading in its entirety, but I wanted to emphasize some of his caveats. Most of the exposures are rare, as are the outcomes. In some cases, the outcomes occur almost exclusively with the identified exposures. All eight examples pose some danger of misinterpretation. Gadolinium-based contrast agents appear to create a risk of NSF only in the presence of chronic renal failure. Amphibole asbestos, most importantly crocidolite, causes malignant mesothelioma after a very lengthy latency period. Ionizing radiation causes some cancers that are all-too common, but the presence of multiple cancers in the same person, after a suitable latency period, is distinctly uncommon, as is the level of radiation needed to overwhelm bodily defenses and induce cancers. Thalidomide was associated by case reports fairly quickly with phocomelia, which has an extremely low baseline risk. Other birth defects were not convincingly demonstrated by the case series. Rezulin, an oral antidiabetic medication, was undoubtedly causally responsible for rare cases of acute liver failure. Chronic liver disease, however, which is common among type 2 diabetic patients, required epidemiologic evidence, which never materialized[5].

Manganese, by definition, is the cause of manganism, but extremely high levels of manganese exposure, and the specific speciation of the manganese, are essential to the causal connection. Manganism raises another issue often seen in so-called signature diseases: diagnostic accuracy. Unless the diagnostic criteria have perfect (100%) specificity, with no false-positive diagnoses, then once again, we expect false-positive cases to appear when the criteria are applied to large numbers of people. In the welding fume litigation, where plaintiffs’ counsel and physicians engaged in widespread, if not wanton, medico-legal screenings, it was not surprising that they might find occasional cases that appeared to satisfy their criteria. Of course, the more the criteria are diluted to accommodate litigation goals, the more likely there will be false-positive cases.[6]
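The false-positive arithmetic is equally simple. Assuming, purely for illustration, a mass screening of workers with a diagnostic test of 99% specificity and no true cases at all, the screening will still generate a steady supply of “cases”:

```python
# Minimal sketch (hypothetical numbers): imperfect specificity plus mass
# screening yields false-positive diagnoses even when no true cases exist.
screened = 10_000        # hypothetical number of workers screened
true_prevalence = 0.0    # assume, for illustration, no true cases at all
specificity = 0.99       # hypothetical: 1% false-positive rate per person

false_positives = screened * (1 - true_prevalence) * (1 - specificity)
print(f"expected false-positive diagnoses: {false_positives:.0f}")  # 100
```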

Dr. Schwartz identifies some common themes and important factors in identifying the bases for inferring causality from uncontrolled evidence:

“(a) low or no background rate of the disease condition;

(b) low background rate of the exposure;

(c) a clear understanding of the mechanism of action.”

These factors, and perhaps others, should not be mistaken for strict criteria. The exemplar cases suggest a family resemblance of overlapping factors that help support the inference, even against the most robust skepticism.

In litigation, defense counsel typically argue that analytical epidemiology is always necessary, and plaintiffs’ counsel claim epidemiology is never needed. The truth is more nuanced and conditional, but the great majority of litigated cases do require epidemiology for health effects because the claimed harms are outcomes that have an expected incidence or prevalence in the exposed population irrespective of exposure.


[1] Reference Manual on Scientific Evidence at 23 (3d ed. 2011) (citing “Cloud v. Pfizer Inc., 198 F. Supp. 2d 1118, 1133 (D. Ariz. 2001) (stating that case reports were merely compilations of occurrences and have been rejected as reliable scientific evidence supporting an expert opinion that Daubert requires); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“scientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies”); Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441, 1454 (D.V.I. 1994), aff’d, 46 F.3d 1120 (3d Cir. 1994) (stating there is a need for consistent epidemiological studies showing statistically significant increased risks).”)

[2] Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999).

[3] There may have been a better argument for Matrixx in distinguishing the method and place of delivery of the zinc sulfate in the polio trials of the 1930s, but when Matrixx’s counsel was challenged at oral argument, he asserted simply, and wrongly, that the two compounds were different.

[4] Johnston & Hauser, “The value of a case report,” 62 Ann. Neurology A11 (2007) (“No matter how compelling a vignette may seem, one must always be concerned about the reliability of inference from an “n of one.” No statistics are possible in case reports. Inference is entirely dependent, then, on subjective judgment. For a case meant to suggest that agent A leads to event B, the association of these two occurrences in the case must be compared to the likelihood that the two conditions could co-occur by chance alone …. Such a subjective judgment is further complicated by the fact that case reports are selected from a vast universe of cases.”); David A. Grimes & Kenneth F. Schulz, “Descriptive studies: what they can and cannot do,” 359 Lancet 145, 145, 148 (2002) (“A frequent error in reports of descriptive studies is overstepping the data: studies without a comparison group allow no inferences to be drawn about associations, causal or otherwise.”) (“Common pitfalls of descriptive reports include an absence of a clear, specific, and reproducible case definition, and interpretations that overstep the data. Studies without a comparison group do not allow conclusions about cause and disease.”); Troyen A. Brennan, “Untangling Causation Issues in Law and Medicine: Hazardous Substance Litigation,” 107 Ann. Intern. Med. 741, 746 (1987) (recommending that testifying physicians “[a]void anecdotal evidence; clearly state the opposing side is relying on anecdotal evidence and why that is not good science.”).

[5] See In re Rezulin, 2004 WL 2884327, at *3 (S.D.N.Y. 2004).

[6] This gaming of diagnostic criteria has been a major invitation to diagnostic invalidity in litigation over asbestosis and silicosis in the United States.