For your delectation and delight, desultory dicta on the law of delicts.

Slemp Trial Part 5 – Daniel W. Cramer

July 24th, 2017

The case of talc and ovarian cancer is a difficult and close case on general causation. Although I do not believe that the plaintiffs have made their case, their causal claims do not have the usual earmarks of “junk science,” so readily visible in many other litigations, such as the Zoloft birth defects cases.

Dr. Daniel Cramer is a physician and an epidemiologist. He holds the title of professor of epidemiology at the Harvard T.H. Chan School of Public Health, as well as professor of obstetrics, gynecology, and reproductive biology at the Harvard Medical School. The plaintiffs called Cramer to testify on causation issues.

Cramer could have been purely duplicative as a witness, but he was used primarily on specific causation, with a big boost on general causation because of his many publications on talc and ovarian cancer (a subject generally missing from Graham Colditz’s C.V.). The planned testimony for Cramer was to present the causal attribution of Slemp’s tumor to talc, with the understanding that, since specific causation implies general causation, the plaintiff would obtain corroborating testimony on general causation as well.

With respect to Slemp’s known risk factors, such as her massive obesity and heavy smoking history, Cramer attempted to quantify her ex ante risks based upon her medical chart and from using risk ratios from available epidemiologic studies. Predictably, Cramer tried to diminish these ex ante risks by a highly selective reading of Slemp’s charts, but he ably deflected cross-examination criticisms by characterizing questions as quibbles and volunteering that he was not trying to ascribe plaintiff’s ovarian cancer exclusively to talc. Similarly, Cramer attempted to present the highest ex ante risk ratio for Slemp’s talc exposure, through his characterizations of her case as involving bilateral tumors and other features. Cramer tried to diminish the risk factor of obesity by claiming that fat women use talc more and that there was “synergy” between obesity and talc use. Cramer never described the evidentiary basis for this claimed synergy, or whether it was multiplicative or something less dramatic.
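For readers unfamiliar with the epidemiologic jargon, a brief numerical sketch (using hypothetical relative risks, not figures from the Slemp record) shows why the unanswered question about the claimed “synergy” matters — additive and multiplicative joint effects yield quite different numbers:

```python
# Hypothetical relative risks, for illustration only (not from the record):
rr_obesity = 1.3   # risk ratio for obesity alone
rr_talc = 1.4      # risk ratio for talc use alone

# Under a purely additive model, the excess risks (RR - 1) simply sum:
rr_additive = 1 + (rr_obesity - 1) + (rr_talc - 1)

# Under a multiplicative model, the risk ratios multiply:
rr_multiplicative = rr_obesity * rr_talc

print(f"additive joint RR:       {rr_additive:.2f}")        # 1.70
print(f"multiplicative joint RR: {rr_multiplicative:.2f}")  # 1.82

# "Synergy" in the epidemiologic sense usually means a joint risk ratio
# exceeding the additive prediction; a claim of multiplicative or
# supra-additive interaction requires evidence of its own.
```

The gap between the two models grows rapidly with larger risk ratios, which is why an expert witness’s failure to say which model he means is more than a quibble.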

Interestingly, risk ratios from groups (epidemiologic studies) were used to describe her individual risks. The defense did not actively challenge this procedure. The premise of Cramer’s approach was that if an individual patient had a previous exposure or lifestyle variable that has been causally associated with ovarian cancer, then those exposures and lifestyle variables all participated in actually causing the patient’s cancer. As noted in the summary of Graham Colditz’s testimony, this assumption by Cramer is disputed. Cramer never attempted to justify the assumption by reference to any body of scientific evidence or texts. For Mrs. Slemp, Cramer opined that talc (as well as obesity and smoking) caused her serous borderline ovarian tumors. This conclusion was driven by his assumption that if Slemp had an exposure to a known cause of ovarian cancer, then it must have played a “substantial” role in causing the cancer.


The defense vigorously challenged Cramer for having failed to discuss causation in his publications. Most of these publications were epidemiologic studies, which did not necessarily provide an opportunity for full-ranging discussions of causal conclusions. Cramer effectively parried by noting that causation is not established by a single study, and single-study reports were not an appropriate vehicle for a full review and analysis of causation. As for his reviews and opinion pieces, Cramer defended his failure to state a clear causal conclusion on grounds that he had urged warning labels for personal talc products, and that a causal conclusion was not needed to justify such a warning because even a potential risk of ovarian cancer outweighed the negligible benefit of using talc in personal hygiene.

The defense plowed on with its claim that many studies lacked statistical significance, but Cramer generally lost defense counsel in technical details. For Cramer’s estimation of Slemp’s ex ante risk ratio from talc exposure, the defense challenged Cramer’s use of a one-tailed test of significance1. Cramer offered a half-hearted defense of a one-sided test in this context, and used the questions as an opportunity to repeat how low the p-value was with respect to the general association between talc and ovarian cancer. Cramer muddied the water by claiming that this calculation was superseded by further refinement of his estimate, which took into account the bilaterality of Slemp’s tumors, which obviated his one-sided confidence interval calculation. Although the details were not entirely forthcoming, the jury would not likely have seen this exchange as anything other than a quibble. The defense’s claim that Cramer had violated the “rules of epidemiology” never got off the ground, and given that the defense never presented an epidemiologist, the claim of counsel never was grounded in actual evidence.
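The one-tailed versus two-tailed dispute is easy to state arithmetically. A minimal sketch (with a hypothetical test statistic, not Cramer’s actual calculation) shows how the choice of tails can convert a non-significant result into a “significant” one:

```python
from statistics import NormalDist

# Hypothetical z-statistic from a risk-ratio estimate (illustrative only).
z = 1.80

# A two-sided test asks whether the risk differs in either direction;
# a one-sided test asks only whether the risk is increased.
two_sided_p = 2 * (1 - NormalDist().cdf(abs(z)))
one_sided_p = 1 - NormalDist().cdf(z)

print(f"two-sided p: {two_sided_p:.4f}")  # ~0.0719 -> not significant at 0.05
print(f"one-sided p: {one_sided_p:.4f}")  # ~0.0359 -> "significant" at 0.05
```

Halving the p-value by fiat is why a one-sided test, offered after the fact, invites exactly the cross-examination the defense attempted.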

Counterfactual Causation

The most important cross-examination of Dr. Cramer came from both J & J’s and Imerys’ counsel on the issue of counterfactual causation. Defense counsel asked Cramer, in several different ways, whether Ms. Slemp would have avoided having ovarian cancer if she had not used talc. Cramer stridently and belligerently refused to answer the question. The trial judge showed no interest in obtaining an answer to these questions. In the last effort to obtain a response from Cramer on “but for” causation, Cramer simply refused:

“I am not going to opine on the topic because it is not the task I was charged with.”

In other words, plaintiffs’ counsel and Cramer had discussed his inability to answer the counterfactual question, and decided it was simply better not to respond to the question at all. Since Mr. Smith, plaintiffs’ counsel, did not “task” him with counterfactual causation, Cramer was not going to answer it. Cramer’s intransigence was remarkable because the counterfactual question is an important component of causal inference in epidemiologic science. See, e.g., Michael Höfler, “Causal inference based on counterfactuals,” 5 BMC Med. Research Methodology 28 (2005).

In law, as in science, the counterfactual questions put to Cramer are essential. Conduct or a product cannot be a legal cause of harm unless that cause, alone or acting in concert with other causes, was enough to result in the injury. Although legal treatises speak of “substantial factor,” the American Law Institute (ALI) defined that phrase (outside the context of overdetermined effects) negatively to make clear that “the actor’s negligent conduct is not a substantial factor in bringing about harm to another if the harm would have been sustained even if the actor had not been negligent.” Restatement (Second) of Torts § 432 (1965).

Given the mischief generated by some courts and commentators2 with respect to “substantial factor,” the ALI abandoned the phrase altogether in its most recent Restatement of the law of torts. In the current Restatement, the ALI has emphasized that the imposition of liability requires that the harm claimed is one that would not have occurred in the absence of (“but for”) the defendant’s negligent conduct. Restatement (Third) of Torts: Physical and Emotional Harm § 26 cmt. j (2010); see also June v. Union Carbide Corp., 577 F.3d 1234, 1244 (10th Cir. 2009) (no material difference between Second and Third Restatements; holding that ‘‘a defendant cannot be liable to the plaintiff unless its conduct is either (a) a but-for cause of the plaintiff’s injury or (b) a necessary component of a causal set that (probably) would have caused the injury in the absence of other causes.’’).

Dr. Cramer’s refusal to answer the key counterfactual question about talc and Ms. Slemp’s ovarian cancer points to a lawlessness, both scientific and legal, in the proceedings in St. Louis, Missouri.

1 See “FAQ: What are the differences between one-tailed and two-tailed tests?” Institute for Digital Research and Education.

2 See David A. Fischer, “Insufficient Causes,” 94 Kent. L. J. 277, 277 (2005-06) (criticizing judicial obtuseness in misinterpreting the earlier Restatement’s use of “substantial factor”).

Omalu and Science — A Bad Weld

October 22nd, 2016

Bennet Omalu is a star of the silver screen and a hero in the minds of conspiratorial thinkers everywhere. Actually, Will Smith[1] stood in for Omalu in the movie Concussion (2015), but Smith’s skills as an actor bring out the imaginary best in Omalu’s persona.

Chronic Traumatic Encephalopathy (CTE) is the name that Bennet Omalu, a pathologist, gave to the traumatic brain injuries resulting from repeated concussions experienced by football players.[2]  The concept is not particularly new; the condition of dementia pugilistica had been described previously in boxers. What was new with Omalu was his fervid imagination and his conspiratorial view of the world.[3] The movie Concussion  actually gives an intimation of some of the problems in Omalu’s scientific work.  See, e.g., Daniel Engber, “Concussion Lies: The film about the NFL’s apparent CTE epidemic feeds the pervasive national myths about head trauma,” Slate (Dec. 21 2015); Bob Hohler, “BU rescinds award to ‘Concussion’ trailblazer,” Boston Globe (June 16, 2016).

Omalu has more dubious claims to fame. He has not cabined his unique, stylized approach to science to the subject of head trauma. Although Omalu is a pathologist, not a clinician, he recently weighed in with observations that Hillary Clinton was definitely unwell. Indeed, Bennet Omalu has now made a public nuisance of himself by floating conspiratorial theories that Hillary Clinton has been poisoned. Cindy Boren, “The man who discovered CTE thinks Hillary Clinton may have been poisoned,” Wash. Post (Sept. 12, 2016); Christine Rushton, “‘Concussion’ doctor suggests without evidence that poison a factor in Clinton’s illness,” Los Angeles Times (Sept. 13, 2016).

In the courtroom, in civil cases, Omalu has a poor track record for scientific rigor. The United States Court of Appeals for the Third Circuit, which can be tough and skeptical of Rule 702 expert witness exclusions, readily affirmed an exclusion of Omalu’s testimony in Pritchard v. Dow Agro Sciences, 705 F. Supp. 2d 471 (W.D. Pa. 2010), aff’d, 430 F. App’x 102, 104 (3d Cir. 2011). In Pritchard, Omalu was caught misrepresenting the statistical data from published studies in a so-called toxic tort case. Fortunately, robust gatekeeping was able to detoxify the proffered testimony.[4]

More recently, Omalu was at it again in a case in which a welder claimed that exposure to welding and solvent fumes caused him to develop Parkinson’s disease. Brian v. Association of Independent Oil Distributors, No. 2011-3413, Westmoreland Cty. Ct. Common Pleas, Order of July 18, 2016. [cited here as Order].

James G. Brian developed Parkinson disease (PD), after 30 years of claimed exposure to welding and solvent fumes. It is America, so Brian sued Lincoln Electric and various chemical companies on his theory that his PD was caused by his welding and solvent exposures, either alone or together. Now although manganese in very high exposures can cause a distinctive movement disorder, manganism, manganese in welding fume does not cause PD in humans.[5] Omalu was undeterred, however, and proceeded by conjecturing that welding fume interacted with solvent fumes to cause Brian’s PD.

At the outset of the case, Brian intended to present testimony of expert witnesses, Bennet Omalu, Richard A. Parent, a toxicologist, and Jordan Loyal Holtzman, a pharmacologist.  Parent commenced giving a deposition, but became so uncomfortable with his own opinion that he put up a white flag at the deposition, and withdrew from the case.  On sober reflection, Holtzman also withdrew from the case.

Omalu was left alone, to make the case on general and specific causation. Defendant Lincoln Electric and others moved to exclude Omalu, under Pennsylvania’s standard for admissibility of expert witness opinion testimony, which is based upon a patch-work version of Frye v. United States, 293 F. 1013 (D. C. Cir. 1923).

Invoking a quirky differential diagnosis, and an idiosyncratic reading of Sir Austin Bradford Hill’s work, Omalu defended his general and specific causation opinions. After briefing and a viva voce hearing, President Judge Richard E. McCormick ruled that Omalu had misapplied both methodologies in reaching his singular opinion. Order at 8.

Omalu did not make the matter easy for Judge McCormick. There was no question that Brian had PD.  Every clinician who had examined him made the diagnosis. Knowing that PD is generally regarded as idiopathic, with no known cause, Omalu thought up a new diagnosis: chronic toxic encephalopathy.

When confronted with the other clinicians’ diagnoses, Omalu did not dispute the diagnosis of PD. Instead, he attempted to evade the logical implications of the diagnosis of idiopathic PD by continually trying to change the terminology to suit his goals. Judge McCormick saw through Omalu’s semantic evasions, which bolstered the case for excluding him at trial.

Madness to His Method

In scrutinizing Omalu’s opinions, Judge McCormick found more madness than method. Omalu claimed that he randomly selected studies to rely upon, and he failed to explain the strengths and weaknesses of the cited studies when he formed his opinion.

Despite his claim to have randomly selected studies, Omalu remarkably managed to ignore epidemiologic studies that were contrary to his causal conclusions. Order at 9.  Indeed, Omalu missed more than half the published studies on welding and PD.  Not surprisingly, Omalu did not record his literature search; nor could he explain, in deposition or at the court hearing, his inclusionary or exclusionary criteria for pertinent studies. Id. at 10. When confronted about his “interaction” opinions concerning welding and solvent fumes, Omalu cited several studies, none of which measured or assessed combined exposures.  Some of the papers flatly contradicted Omalu’s naked assertions. Id. at 9.

Judge McCormick rejected Omalu’s distorted invocation of the Bradford Hill factors to support a causal association when no association had yet been found. The court quoted from the explanation provided by Prof. James A. Mortimer, the defense neuroepidemiologist, at the Frye hearing:

“First, the Bradford Hill criteria should not be applied until you have ruled out a chance association, which [Omalu] did not do. In fact, as I will point out, carefully done epidemiologic studies will show there is no increased risk of Parkinson’s disease with exposure to welding fume and/or solvents, therefore the application of these criteria is inappropriate.”

Order at 11, citing to and quoting from Frye Hearing at 318 (Oct. 14, 2015).

When cornered, Omalu asserted that he never claimed that Mr. Brian’s PD was caused by welding or solvents; rather his contention was simply that occupational exposures had created a “substantial increased risk” of PD. Id. at 14. Risk creation, however, is not causation; and Omalu had not shown even unquantified evidence of increased risk before Brian developed PD. The court found that Omalu had not used any appropriate methodology with respect to general causation. Id. at 14.

Specific Causation

Undaunted, Omalu further compromised his credibility by claiming that Bradford Hill’s factors allowed him to establish specific causation, even in the absence of general causation. Id. at 12. Omalu suggested that he had performed a differential diagnosis, even though he is not a clinician, and as a pathologist had not evaluated any brain tissue. Id. at 10. The court deftly saw through these ruses. Id. at 11.

Judge McCormick’s conclusion should be a precautionary lesson to future courts that must gatekeep Omalu’s opinions, or Omalu-like opinions:

“In conclusion, we agree with the Defendants that while Dr. Omalu’s stated methodology in this case is generally accepted in the medical and scientific community, Dr. Omalu failed to properly apply it. He misused and demonstrated a lack of understanding of the Bradford Hill criteria and the Schaumburg criteria when he attempted to employ these methodologies to conduct a differential diagnosis or differential etiology analysis.”

Id. at 16. Gatekeeping is sometimes viewed as more difficult in Frye jurisdictions, but the exclusion of Omalu shows that it can be achieved when expert witnesses deviate materially from scientifically standard methodology.

[1] For other performances by Will Smith in this vein, see Six Degrees of Separation (1993); Focus (2015).

[2] See Bennet I. Omalu, Steven DeKosky, Ryan Minster, M. Ilyas Kamboh, Ronald Hamilton, Cyril H. Wecht, “Chronic Traumatic Encephalopathy in a National Football League Player, Part I,” 57 Neurosurgery 128 (2005); Bennet I. Omalu, Steven DeKosky, Ronald Hamilton, Ryan Minster, M. Ilyas Kamboh, Abdulrezak Shakir, and Cyril H. Wecht, “Chronic Traumatic Encephalopathy in a National Football League Player, Part II,” 59 Neurosurgery 1086 (2006).

[3] See Jeanne Marie Laskas, “The Doctor the NFL Tried to Silence,” Wall St. J. (Nov. 24, 2015).

[4] See “Pritchard v. Dow Agro – Gatekeeping Exemplified” (Aug. 25, 2014).

[5] See, e.g., Marianne van der Mark, Roel Vermeulen, Peter C.G. Nijssen, Wim M. Mulleners, Antonetta M.G. Sas, Teus van Laar, Anke Huss, and Hans Kromhout, “Occupational exposure to solvents, metals and welding fumes and risk of Parkinson’s disease,” 21 Parkinsonism Relat Disord. 635 (2015); James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012); Joseph Jankovic, “Searching for a relationship between manganese and welding and Parkinson’s disease,” 64 Neurology 2012 (2005).

Lipitor MDL Cuts the Fat Out of Specific Causation

March 25th, 2016

Ms. Juanita Hempstead was diagnosed with hyperlipidemia in March 1998. Over a year later, in June 1999, with her blood lipids still elevated, her primary care physician prescribed 20 milligrams of atorvastatin per day. Ms. Hempstead did not start taking the statin regularly until July 2000. In September 2002, her lipids were under control, her blood glucose was abnormally high, and she had gained 13 pounds since she was first prescribed a statin medication. Hempstead v. Pfizer, Inc., 2:14–cv–1879, MDL No. 2:14–mn–02502–RMG, 2015 WL 9165589, at *2-3 (D.S.C. Dec. 11, 2015) (C.M.O. No. 55 in In re Lipitor Marketing, Sales Practices and Products Liability Litigation) [cited as Hempstead]. In the fall of 2003, Hempstead experienced abdominal pain, and she stopped taking the statin for a few weeks, presumably because of a concern over potential liver toxicity. Her cessation of the statin led to an increase in her blood fat, but her blood sugar remained elevated, although not in the range that would have been diagnostic of diabetes. In May 2004, about five years after starting on statin medication, having gained 15 pounds since 1999, Ms. Hempstead was diagnosed with type II diabetes mellitus. Id.

Living in a litigious society, and being bombarded with messages from the litigation industry, Ms. Hempstead sued the manufacturer of atorvastatin, Pfizer, Inc. In support of her litigation claim, Hempstead’s lawyers enlisted the support of Elizabeth Murphy, M.D., D.Phil., a Professor of Clinical Medicine, and Chief of Endocrinology and Metabolism at San Francisco General Hospital. Id. at *6. Dr. Murphy received her doctorate in biochemistry from Oxford University, and her medical degree from the Harvard Medical School. Despite her degrees from elite educational institutions, Dr. Murphy never learned the distinction between ex ante risk and assignment of causality in an individual patient.

Dr. Murphy claimed that atorvastatin causes diabetes, and that the medication caused Ms. Hempstead’s diabetes in 2004. Murphy pointed to a five-part test for her assessment of specific causation:

(1) reports or reliable studies of diabetes in patients taking atorvastatin;

(2) causation is biologically plausible;

(3) diabetes appeared in the patient after starting atorvastatin;

(4) the existence of other possible causes of the patient’s diabetes; and

(5) whether the newly diagnosed diabetes was likely caused by the atorvastatin.

Id. In response to this proffered testimony, the defendant, Pfizer, Inc., challenged the admissibility of Dr. Murphy’s opinion under Federal Rule of Evidence 702.

The trial court, in reviewing Pfizer’s challenge, saw that Murphy’s opinion essentially was determined by (1), (2), and (3), above. In other words, once Murphy had become convinced of general causation, she was willing to causally attribute diabetes to atorvastatin in every patient who developed diabetes after starting to take the medication. Id. at *6-7.

Dr. Murphy relied upon some epidemiologic studies that suggested a relative risk of diabetes to be about 1.5 in patients who had taken atorvastatin. Id. at *5, *8. Unfortunately, the trial court, as is all too common among judges writing Rule 702 opinions, failed to provide citations to the materials upon which plaintiff’s expert witness relied. A safe bet, however, is that those studies, if they had any internal and external validity at all, involved multivariate analyses of risk ratios for diabetes at time t1, in patients who had no diabetes before starting use of atorvastatin at time t0, compared with patients who did not have diabetes at t0 but never took the statin. If so, then Dr. Murphy’s use of a temporal relationship between starting atorvastatin and developing diabetes is quite irrelevant because the relative risk (1.5) relied upon is generated in studies in which the temporality is present. Ms. Hempstead’s development of diabetes five years after starting atorvastatin does not make her part of a group with a relative risk any higher than the risk ratio of 1.5, cited by Dr. Murphy. Similarly, the absence or presence of putative risk factors other than the accused statin is irrelevant because the risk ratio of 1.5 was most likely arrived at in studies that controlled or adjusted for the other risk factors in the epidemiologic study by a multivariate analysis. Id. at *5 & n. 8.

Dr. Murphy acknowledged that there are known risk factors for diabetes, and that plaintiff Ms. Hempstead had a few. Plaintiff was 55 years old at the time of diagnosis, and advancing age is a risk factor. Plaintiff’s body mass index (BMI) was elevated and it had increased over the five years since beginning to take atorvastatin. Even though not obese, Ms. Hempstead’s BMI was sufficiently high to confer a five-fold increase in risk for diabetes. Id. at *9. Plaintiff also had hypertension and metabolic syndrome, both of which are risk factors (with the latter adding to the level of risk of the former). Id. at *10. Perhaps hoping to avoid the intractable problem of identifying which risk factors were actually at work in Ms. Hempstead to produce her diabetes, Dr. Murphy claimed that all risk factors were causes of plaintiff’s diabetes. Her analysis was thus not so much a differential etiology as a non-differential, non-discriminating assertion that any and all risk factors were probably involved in producing the individual case. Not surprisingly, Dr. Murphy, when pressed, could not identify any professional organizations or peer-reviewed publications that employed such a methodology of attribution. Id. at *6. Dr. Murphy had never used such a method of attribution in her clinical practice; instead she attempted to justify and explain her methodology by adverting to its widespread use by expert witnesses in litigation. Id.

Relative Risk and the Inference of Specific Causation

The main thrust of Dr. Murphy’s and the plaintiff’s specific causation claim seems to have been a simplistic identification of ex ante risk with causation. The MDL court recognized, however, that in science and in law, risk is not the same as causation.[1]

The existence of general causation, with elevated relative risks not likely the result of bias, chance, or confounding, does not necessarily support the inference that every person exposed to the substance or drug and who develops the outcome of interest, had his or her outcome caused by the exposure.

The law requires each plaintiff to show, by a preponderance of the evidence, that his or her alleged injury, the outcome in the relied-upon epidemiologic studies, was actually caused by the alleged exposure. Id. at *4 (citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1249 n. 1 (11th Cir. 2010)).

The disconnect between risk and causation is especially strong when the nature of the causation involved results from the modification of the incidence rate of a disease as a function of exposure. Although the MDL court did not explicitly note the importance of a base rate, which gives rise to an “expected value” or “expected outcome” in an epidemiologic sample, the court’s insistence upon a relative risk greater than two, from studies of sample groups that are sufficiently similar to the plaintiff, implicitly affirms the principle. The MDL court did, however, call out as logically flawed Dr. Murphy’s reasoning that specific causation exists for every drug-exposed patient, in the face of studies that show general causation with risk ratios less than two. Id. at *8 (citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1255 (11th Cir. 2010) (“The fact that exposure to [a substance] may be a risk factor for [a disease] does not make it an actual cause simply because [the disease] developed.”)).

The MDL court acknowledged the obvious, that some causal relationships may be based upon risk ratios of two or less (but greater than 1.0). Id. at *4. A risk ratio greater than 1.0, but not greater than two, can result only when some of the cases with the outcome of interest, here diabetes, would have occurred anyway in the population that has been sampled. And with increased risk ratios at two or less, a majority of the exposed cases would have developed the outcome even in the absence of the exposure of interest. With this in mind, the MDL court asked how plaintiff could show specific causation, even assuming that general causation were established with the use of epidemiologic methods.

The court in Hempstead reasoned that if the risk ratio were greater than 2.0, a majority of the cases of the outcome of interest among the exposed would have resulted from the exposure being studied. Id. at *5. If the sampled population has had the same level of exposure as the plaintiff, then a case-specific inference of specific causation is supported.[2] Of course, this inferential strategy presupposes that general causation has been established, by ruling out bias, confounding, and chance, with high-quality, statistically significant findings of risk ratios in excess of 2.0. Id. at *5.
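The doubling-of-risk logic can be stated as simple arithmetic. On the standard (though, as the Greenland criticism noted below suggests, contested) translation, the probability that an exposed case was caused by the exposure is the attributable fraction among the exposed, (RR − 1)/RR. A minimal sketch:

```python
def probability_of_causation(rr: float) -> float:
    """Attributable fraction among the exposed: (RR - 1) / RR.

    The conventional (and contested) translation of a group-level
    risk ratio into a probability that an individual exposed case
    was caused by the exposure."""
    if rr <= 1:
        return 0.0
    return (rr - 1) / rr

# The 1.5 figure echoes the relative risk discussed above; 2.0 is the
# legal threshold; 3.0 is hypothetical.
for rr in (1.5, 2.0, 3.0):
    print(f"RR = {rr}: P(causation) = {probability_of_causation(rr):.0%}")
# RR = 1.5 -> 33%; RR = 2.0 -> 50%; RR = 3.0 -> 67%
```

A relative risk of 1.5 thus implies that only about one exposed case in three is attributable to the exposure, which is why the court balked at treating every drug-exposed diabetic as a drug-caused diabetic.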

To be sure, there are some statisticians, such as Sander Greenland, who have criticized this use of a sample metric to assess the probability of individual causation, in part because the sample metric is an average level of risk, based upon the whole sample. Greenland is fond of speculating that the risk may not be stochastically distributed, but as the Supreme Court has recently acknowledged, there are times when the use of an average is appropriate to describe individuals within a sampled population. Tyson Foods, Inc. v. Bouaphakeo, No. 14-1146, 2016 WL 1092414 (U.S. S. Ct. Mar. 22, 2016).

The Whole Tsumish

Dr. Murphy, recognizing that there are other known and unknown causes and risk factors for diabetes, made a virtue of foolish consistency by opining that all risk factors present in Ms. Hempstead were involved in producing her diabetes. Dr. Murphy did not, and could not, explain, however, how or why she believed that every risk factor (age, BMI, hypertension, recent weight gain, metabolic syndrome, etc.), rather than some subset of factors, or some idiopathic factors, were involved in producing the specific plaintiff’s disease. The MDL court concluded that Dr. Murphy’s opinion was an ipse dixit of the sort that qualified her opinion for exclusion from trial. Id. at *10.

Biological Fingerprints

Plaintiffs posited typical arguments about “fingerprints” or biological markers that would support inferences of specific causation in the absence of high relative risks, but as is often the case with such arguments, they had no factual foundation for their claims that atorvastatin causes diabetes. Neither Dr. Murphy nor anyone else had ever identified a biological marker that allowed drug-exposed patients with diabetes to be identified as having had their diabetes actually caused by the drug of interest, as opposed to other known or unknown causes.

With Dr. Murphy’s testimony failing to satisfy common sense and Rule 702, plaintiff relied upon cases in which circumstances permitted inferences of specific causation from temporal relationships between exposure and outcome. In one such case, the plaintiff developed throat irritation from very high levels of airborne industrial talc exposure, which abated upon cessation of exposure, and returned with renewed exposure. Given that general causation was conceded, and given the natural experimental nature of challenge, dechallenge, and rechallenge, the Fourth Circuit in this instance held that the temporal relationship of an acute insult and onset was an adequate basis for expert witness opinion testimony on specific causation. Id. at *11 (citing Westberry v. Gislaved Gummi AB, 178 F.3d 257, 265 (4th Cir. 1999) (“depending on the circumstances, a temporal relationship between exposure to a substance and the onset of a disease or a worsening of symptoms can provide compelling evidence of causation”); Cavallo v. Star Enter., 892 F. Supp. 756, 774 (E.D. Va. 1995) (discussing unique, acute onset of symptoms caused by chemicals)). In the Hempstead case, however, the very nature of the causal relationship claimed did not involve an acute reaction. The claimed injury, diabetes, emerged five years after statin use commenced, and the epidemiologic studies relied upon were all based upon this chronic use, with a non-acute, latent outcome. The trial judge thus would not credit the mere temporality between drug use and new onset of diabetes as probative of anything.

[1] Id. at *8, citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1255 (11th Cir.2010) (“The fact that exposure to [a substance] may be a risk factor for [a disease] does not make it an actual cause simply because [the disease] developed.”); id. at *11, citing McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1243 (11th Cir.2005) (“[S]imply because a person takes drugs and then suffers an injury does not show causation. Drawing such a conclusion from temporal relationships leads to the blunder of the post hoc ergo propter hoc fallacy.”); see also Roche v. Lincoln Prop. Co., 278 F.Supp. 2d 744, 752 (E.D. Va.2003) (“Dr. Bernstein’s reliance on temporal causation as the determinative factor in his analysis is suspect because it is well settled that a causation opinion based solely on a temporal relationship is not derived from the scientific method and is therefore insufficient to satisfy the requirements of Rule 702.”) (internal quotes omitted).

[2] See Reference Manual on Scientific Evidence at 612 (3d ed. 2011) (noting “the logic of the effect of doubling of the risk”); see also Marder v.G.D. Searle & Co., 630 F. Supp. 1087, 1092 (D. Md.1986) (“In epidemiological terms, a two-fold increased risk is an important showing for plaintiffs to make because it is the equivalent of the required legal burden of proof-a showing of causation by the preponderance of the evidence or, in other words, a probability of greater than 50%.”).

Beecher-Monas Proposes to Abandon Common Sense, Science, and Expert Witnesses for Specific Causation

September 11th, 2015

Law reviews are not peer reviewed, not that peer review is a strong guarantor of credibility, accuracy, and truth. Most law reviews have no regular provision for letters to the editor; nor is there a PubPeer that permits readers to point out errors for the benefit of the legal community. Nonetheless, law review articles are cited by lawyers and judges, often at face value, for claims and statements made by article authors. Law review articles are thus a potent source of misleading, erroneous, and mischievous ideas and claims.

Erica Beecher-Monas is a law professor at Wayne State University Law School, or Wayne Law, which considers itself “the premier public-interest law school in the Midwest.” Beware of anyone or any institution that describes itself as working for the public interest. That claim alone should put us on our guard against whose interests are being included and excluded as legitimate “public” interest.

Back in 2006, Professor Beecher-Monas published a book on evaluating scientific evidence in court, which had a few good points in a sea of error and nonsense. See Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process (2006)[1]. More recently, Beecher-Monas has published a law review article, which from its abstract suggests that she might have something to say about this difficult area of the law:

“Scientists and jurists may appear to speak the same language, but they often mean very different things. The use of statistics is basic to scientific endeavors. But judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony. The way scientists understand causal inference in their writings and practice, for example, differs radically from the testimony jurists require to prove causation in court. The result is a disconnect between science as it is practiced and understood by scientists, and its legal use in the courtroom. Nowhere is this more evident than in the language of statistical reasoning.

Unacknowledged difficulties in reasoning from group data to the individual case (in civil cases) and the absence of group data in making assertions about the individual (in criminal cases) beset the courts. Although nominally speaking the same language, scientists and jurists often appear to be in dire need of translators. Since expert testimony has become a mainstay of both civil and criminal litigation, this failure to communicate creates a conundrum in which jurists insist on testimony that experts are not capable of giving, and scientists attempt to conform their testimony to what the courts demand, often well beyond the limits of their expertise.”

Beecher-Monas, “Lost in Translation: Statistical Inference in Court,” 46 Arizona St. L.J. 1057, 1057 (2014) [cited as BM].

A close read of the article shows, however, that Beecher-Monas continues to promulgate misunderstanding, error, and misdirection on statistical and scientific evidence.

Individual or Specific Causation

The key thesis of this law review is that expert witnesses have no scientific or epistemic warrant upon which to opine about individual or specific causation.

“But what statistics cannot do—nor can the fields employing statistics, like epidemiology and toxicology, and DNA identification, to name a few—is to ascribe individual causation.”

BM at 1057-58.

Beecher-Monas tells us that expert witnesses are quite willing to opine on specific causation, but that they have no scientific or statistical warrant for doing so:

“Statistics is the law of large numbers. It can tell us much about populations. It can tell us, for example, that so-and-so is a member of a group that has a particular chance of developing cancer. It can tell us that exposure to a chemical or drug increases the risk to that group by a certain percentage. What statistics cannot do is tell which exposed person with cancer developed it because of exposure. This creates a conundrum for the courts, because nearly always the legal question is about the individual rather than the group to which the individual belongs.”

BM at 1057. Clinical medicine and science come in for particular chastisement by Beecher-Monas, who acknowledges the medical profession’s legitimate role in diagnosing and treating disease. Physicians use a process of differential diagnosis to arrive at the most likely diagnosis of disease, but the etiology of the disease is not part of their normal practice. Beecher-Monas leaps beyond the generalization that physicians infrequently ascertain specific causation to the sweeping claim that ascertaining the cause of a patient’s disease is beyond the clinician’s competence and scientific justification. Beecher-Monas thus tells us, in apodictic terms, that science has nothing to say about individual or specific causation. BM at 1064, 1075.

In a variety of contexts, but especially in the toxic tort arena, expert witness testimony is not reliable with respect to the inference of specific causation, which, Beecher-Monas writes, usually without qualification, is “unsupported by science.” BM at 1061. The solution for Beecher-Monas is clear. Admitting baseless expert witness testimony is “pernicious” because the whole purpose of having expert witnesses is to help the fact finder, jury or judge, who lack the background understanding and knowledge to assess the data, interpret all the evidence, and evaluate the epistemic warrant for the claims in the case. BM at 1061-62. Beecher-Monas would thus allow the expert witnesses to testify about what they legitimately know, and let the jury draw the inference about which expert witnesses in the field cannot and should not opine. BM at 1101. In other words, Beecher-Monas is perfectly fine with juries and judges guessing their way to a verdict on an issue that science cannot answer. If her book danced around this recommendation, now her law review article has come out into the open, declaring an open season to permit juries and judges to be unfettered in their specific causation judgments. What is touching is that Beecher-Monas is sufficiently committed to gatekeeping of expert witness opinion testimony that she proposes a solution to take a complex area away from expert witnesses altogether rather than confront the reality that there is often simply no good way to connect general and specific causation in a given person.

Causal Pies

Beecher-Monas relies heavily upon Professor Rothman’s notion of causal pies or sets to describe the factors that may combine to bring about a particular outcome. In doing so, she commits a non-sequitur:

“Indeed, epidemiologists speak in terms of causal pies rather than a single cause. It is simply not possible to infer logically whether a specific factor caused a particular illness.”[2]

BM at 1063. But the question on her adopted model of causation is not whether any specific factor was the cause, but whether it was one of the multiple slices in the pie. Her citation to Rothman’s statement that “it is not possible to infer logically whether a specific factor was the cause of an observed event,” is not the problem that faces factfinders in court cases.

With respect to differential etiology, Beecher-Monas claims that “‘ruling in’ all potential causes cannot be done.” BM at 1075. But why not? While it is true that disease diagnosis is often made upon signs and symptoms, BM at 1076, sometimes physicians are involved in trying to identify causes in individuals. Psychiatrists of course are frequently involved in trying to identify sources of anxiety and depression in their patients. It is not all about putting a DSM-V diagnosis on the chart, and prescribing medication. And there are times when physicians can say quite confidently that a disease has a particular genetic cause, as in a man with a BRCA1 or BRCA2 mutation and breast cancer, or certain forms of neurodegenerative diseases, or an infant with a clearly genetically determined birth defect.

Beecher-Monas confuses “the” cause with “a” cause, and wanders away from both law and science into her own twilight zone. Here is an example of how Beecher-Monas’ confusion plays out. She asserts that:

“For any individual case of lung cancer, however, smoking is no more important than any of the other component causes, some of which may be unknown.”

BM at 1078. This ignores the magnitude of the risk factor and its likely contribution to a given case. Putting aside synergistic co-exposures, for most lung cancers, smoking is the “but for” cause of individual smokers’ lung cancers. Beecher-Monas sets up a strawman argument by telling us that it is logically impossible to infer “whether a specific factor in a causal pie was the cause of an observed event.” BM at 1079. But we are usually interested in whether a specific factor was “a substantial contributing factor,” without which the disease would not have occurred. This is hardly illogical or impracticable for a given case of mesothelioma in a patient who worked for years in a crocidolite asbestos factory, or for a case of lung cancer in a patient who smoked heavily for many years right up to the time of his lung cancer diagnosis. I doubt that many people would hesitate, on either logical or scientific grounds, to attribute a child’s phocomelia birth defects to his mother’s ingestion of thalidomide during an appropriate gestational window in her pregnancy.

Unhelpfully, Beecher-Monas insists upon playing this word game by telling us that:

“Looking backward from an individual case of lung cancer, in a person exposed to both asbestos and smoking, to try to determine the cause, we cannot separate which factor was primarily responsible.”

BM at 1080. And yet that issue, of “primary responsibility” is not in any jury instruction for causation in any state of the Union, to my knowledge.

From her extreme skepticism, Beecher-Monas swings to the other extreme that asserts that anything that could have been in the causal set or pie was in the causal set:

“Nothing in relative risk analysis, in statistical analysis, nor anything in medical training, permits an inference of specific causation in the individual case. No expert can tell whether a particular exposed individual’s cancer was caused by unknown factors (was idiopathic), linked to a particular gene, or caused by the individual’s chemical exposure. If all three are present, and general causation has been established for the chemical exposure, one can only infer that they all caused the disease.115 Courts demanding that experts make a contrary inference, that one of the factors was the primary cause, are asking to be misled. Experts who have tried to point that out, however, have had a difficult time getting their testimony admitted.”

BM at 1080. There is no support for Beecher-Monas’ extreme statement. She cites, in footnote 115, to Kenneth Rothman’s introductory book on epidemiology, but what he says at the cited page is quite different. Rothman explains that “every component cause that played a role was necessary to the occurrence of that case.” In other words, for every component cause that actually participated in bringing about this case, its presence was necessary to the occurrence of the case. What Rothman clearly does not say is that for a given individual’s case, the fact that a factor can cause a person’s disease means that it must have caused it. In Beecher-Monas’ hypothetical of three factors – idiopathic, particular gene, and chemical exposure – all three, or any two, or only one of the three may have made up a given individual’s causal set. Beecher-Monas has carelessly or intentionally misrepresented Rothman’s actual discussion.

Physicians and epidemiologists do apply group risk figures to individuals, through the lens of predictive regression equations. The Gail Model for 5 Year Risk of Breast Cancer, for instance, is a predictive equation that comes up with a prediction for an individual patient by refining the subgroup within which the patient fits. Similarly, there are prediction models for heart attack, such as the Risk Assessment Tool for Estimating Your 10-year Risk of Having a Heart Attack. Beecher-Monas might complain that these regression equations still turn on subgroup average risk, but the point is that they can be made increasingly precise as knowledge accumulates. And the regression equations can generate confidence intervals and prediction intervals for the individual’s constellation of risk factors.
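The mechanics of such predictive equations can be sketched in a few lines. The coefficients and covariates below are invented for illustration only; they are not the actual Gail model parameters, but they show how group-level data yield an individualized predicted risk:

```python
# Hypothetical sketch of how a predictive risk equation individualizes
# group-level data. The coefficients and covariate names below are
# invented for illustration; they are NOT the actual Gail model.
from math import exp

def predicted_risk(intercept: float, coefs: dict, profile: dict) -> float:
    """Logistic-style predicted probability for one patient's covariates."""
    linear = intercept + sum(coefs[k] * profile[k] for k in coefs)
    return 1 / (1 + exp(-linear))

coefs = {"age_over_50": 0.9, "family_history": 1.1, "prior_biopsies": 0.5}
patient = {"age_over_50": 1, "family_history": 0, "prior_biopsies": 2}
print(f"Predicted risk for this patient: {predicted_risk(-4.0, coefs, patient):.1%}")
```

The output is a single probability for this patient’s particular constellation of risk factors, which is precisely the sort of individualized application of group data that Beecher-Monas declares impossible.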

Significance Probability and Statistical Significance

The discussion of significance probability and significance testing in Beecher-Monas’ book was frequently in error,[3] and this new law review article is not much improved. Beecher-Monas tells us that “judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony,” BM at 1057, which is true enough, but this article does little to ameliorate the situation. Beecher-Monas offers the following definition of the p-value:

“The P- value is the probability, assuming the null hypothesis (of no effect) is true (and the study is free of bias) of observing as strong an association as was observed.”

BM at 1064-65. This definition misses that the p-value is a cumulative tail probability, and can be one-sided or two-sided. More seriously in error, however, is the suggestion that the null hypothesis is one of no effect, when it is merely a pre-specified expected value that is the subject of the test. Of course, the null hypothesis is often one of no disparity between the observed and the expected, but the definition should not mislead on this crucial point.
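The tail-probability point is easy to make concrete. A minimal sketch, using only the Python standard library, of one-sided and two-sided p-values for a z-statistic (for illustration only):

```python
# A p-value is a cumulative tail probability, not a point probability,
# and it may be one-sided or two-sided. Sketch for a z-statistic under
# a standard normal null distribution.
from statistics import NormalDist

def p_values(z: float) -> tuple:
    """Return (one-sided upper-tail, two-sided) p-values for z."""
    upper_tail = 1 - NormalDist().cdf(z)
    two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    return upper_tail, two_sided

one_sided, two_sided = p_values(1.96)
print(f"one-sided p = {one_sided:.4f}; two-sided p = {two_sided:.4f}")
```

At z = 1.96, the upper tail is about 0.025 and the two-sided p-value about 0.05, which is why 1.96 is the familiar critical value for two-sided significance at the conventional level.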

For some reason, Beecher-Monas persists in describing the conventional level of statistical significance as 95%, which substitutes the coefficient of confidence for the complement of the frequently pre-specified p-value for significance. Annoying but decipherable. See, e.g., BM at 1062, 1064, 1065. She misleadingly states that:

“The investigator will thus choose the significance level based on the size of the study, the size of the effect, and the trade-off between Type I (incorrect rejection of the null hypothesis) and Type II (incorrect failure to reject the null hypothesis) errors.”

BM at 1066. While this statement is true on rare occasions, it mostly is not. A quick review of the last several years of the New England Journal of Medicine will document the error. Invariably, researchers use the conventional level of alpha, at 5%, unless there is multiple testing, such as in a genetic association study.

Beecher-Monas admonishes us that “[u]sing statistical significance as a screening device is thus mistaken on many levels,” citing cases that do not provide support for this proposition.[4] BM at 1066. The Food and Drug Administration’s scientists, who review clinical trials for efficacy and safety, will no doubt be astonished to hear this admonition.

Beecher-Monas argues that courts should not factor statistical significance or confidence intervals into their gatekeeping of expert witnesses, but that they should “admit studies,” and leave it to the lawyers and expert witnesses to explain the strengths and weaknesses of the studies relied upon. BM at 1071. Of course, studies themselves are rarely admitted because they represent many levels of hearsay by unknown declarants. Given Beecher-Monas’ acknowledgment of how poorly judges and lawyers understand statistical significance, this argument is cynical indeed.

Remarkably, Beecher-Monas declares, without citation, that

“the purpose of epidemiologists’ use of statistical concepts like relative risk, confidence intervals, and statistical significance are intended to describe studies, not to weed out the invalid from the valid.”

BM at 1095. She thus excludes by ipse dixit any inferential purposes these statistical tools have. She goes further and gives us a concrete example:

“If the methodology is otherwise sound, small studies that fail to meet a P-level of 5 [sic], say, or have a relative risk of 1.3 for example, or a confidence level that includes 1 at 95% confidence, but relative risk greater than 1 at 90% confidence ought to be admissible. And understanding that statistics in context means that data from many sources need to be considered in the causation assessment means courts should not dismiss non-epidemiological evidence out of hand.”

BM at 1095. Well, again, studies are not admissible; the issue is whether they may be reasonably relied upon, and whether reliance upon them may support an opinion claiming causality. And a “P-level” of 5 is, well, let us hope a serious typographical error. Beecher-Monas’ advice is especially misleading when there is only one study, or only one positive study in a constellation of exonerative studies. See, e.g., In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding Professor David Madigan for cherry picking studies to rely upon).

Confidence Intervals

Beecher-Monas’ book provided a good deal of erroneous information on confidence intervals.[5] The current article improves on the definitions, but still manages to go astray:

“The rationale courts often give for the categorical exclusion of studies with confidence intervals including the relative risk of one is that such studies lack statistical significance.62 Well, yes and no. The problem here is the courts’ use of a dichotomous meaning for statistical significance (significant or not).63 This is not a correct understanding of statistical significance.”

BM at 1069. Well yes and no; this interpretation of a confidence interval, say with a coefficient of confidence of 95%, is a reasonable interpretation of whether the point estimate is statistically significant at an alpha of 5%. If Beecher-Monas does not like strict significance testing, that is fine, but she cannot mandate its abandonment by scientists or the courts. Certainly the cited interpretation is one proper interpretation among several.
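The duality between confidence intervals and two-sided tests is not mysterious, and it can be sketched numerically. The point estimate and standard error below are hypothetical, chosen to mirror the pattern discussed later in footnote 7, where an interval includes 1.0 at 95% confidence but excludes it at 90%:

```python
# Sketch of the CI/significance duality for a relative risk: a 95% CI
# excludes RR = 1.0 exactly when a two-sided test rejects the null at
# alpha = 0.05. Normal approximation on the log scale; the inputs
# (RR = 1.5, SE = 0.22) are hypothetical.
from math import exp, log
from statistics import NormalDist

def rr_ci(rr: float, se_log_rr: float, level: float = 0.95):
    """Confidence interval for a relative risk, given the SE of log(RR)."""
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)

lo95, hi95 = rr_ci(1.5, 0.22)              # includes 1.0
lo90, hi90 = rr_ci(1.5, 0.22, level=0.90)  # narrower; excludes 1.0
print(f"95% CI: ({lo95:.2f}, {hi95:.2f}); 90% CI: ({lo90:.2f}, {hi90:.2f})")
```

The same data are “significant” at 90% confidence but not at 95%, which is why shopping for a narrower confidence coefficient is just significance testing with a laxer alpha, not an escape from it.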


There were several misleading references to statistical power in Beecher-Monas’ book, but the new law review tops them by giving a new, bogus definition:

“Power, the probability that the study in which the hypothesis is being tested will reject the alterative [sic] hypothesis when it is false, increases with the size of the study.”

BM at 1065. For this definition, Beecher-Monas cites to the Reference Manual on Scientific Evidence, but butchers the correct definition given by the late David Freedman and David Kaye.[6] All of which is very disturbing.
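The correct definition — power is the probability of rejecting the null hypothesis when the alternative is true — and its dependence on sample size can be sketched for a simple two-sided z-test of a mean (an illustrative calculation, not anything in the article under review):

```python
# Power is the probability that a test rejects the NULL hypothesis when
# the alternative is true (not "rejecting the alternative"), and it
# grows with sample size. Sketch for a two-sided z-test of a mean with
# known sigma and a true shift of delta.
from math import sqrt
from statistics import NormalDist

def power(n: int, delta: float, sigma: float = 1.0, alpha: float = 0.05) -> float:
    """Power of a two-sided z-test to detect a true mean shift of delta."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    shift = delta * sqrt(n) / sigma
    norm = NormalDist()
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

for n in (25, 100, 400):
    print(f"n = {n:3d}: power = {power(n, delta=0.2):.2f}")
```

Power against a modest effect climbs from roughly 17% at n = 25 to nearly certain rejection at n = 400 — the sample-size dependence that the Kaye and Freedman chapter actually describes.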

Relative Risks and Other Risk Measures

Beecher-Monas begins badly by misdefining the concept of relative risk:

“as the percentage of risk in the exposed population attributable to the agent under investigation.”

BM at 1068. Perhaps this percentage can be derived from the relative risk, if we know it to be the true measure with some certainty, through a calculation of attributable risk, but a law review article that takes the entire medical profession to task, and most of the judiciary to boot, should not confuse and conflate attributable risk with relative risk; it should be written more carefully.

Then Beecher-Monas tells us that the “[r]elative risk is a statistical test that (like statistical significance) depends on the size of the population being tested.” BM at 1068. Well, actually not; the calculation of the RR is unaffected by the sample size. The variance of course will vary with the sample size, but Beecher-Monas seems intent on ignoring random variability.
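The point is easily shown with hypothetical numbers: two cohort studies with identical risks, one ten times the size of the other, yield the same relative risk point estimate but very different precision. A sketch using the usual large-sample variance of log(RR):

```python
# The relative risk point estimate does not depend on sample size, but
# its sampling variability does. Sketch: RR from a 2x2 cohort table,
# with the standard large-sample SE of log(RR). The counts are
# hypothetical.
from math import sqrt

def rr_and_se(a: int, n1: int, b: int, n0: int):
    """RR and SE of log(RR): a of n1 exposed, b of n0 unexposed are cases."""
    rr = (a / n1) / (b / n0)
    se_log_rr = sqrt(1/a - 1/n1 + 1/b - 1/n0)
    return rr, se_log_rr

small = rr_and_se(10, 100, 5, 100)       # 10% vs 5% risk, total n = 200
large = rr_and_se(100, 1000, 50, 1000)   # same risks, total n = 2000
print(f"small study: RR = {small[0]:.1f}, SE(log RR) = {small[1]:.2f}")
print(f"large study: RR = {large[0]:.1f}, SE(log RR) = {large[1]:.2f}")
```

Both studies estimate RR = 2.0; only the standard error (and hence the confidence interval) shrinks with the larger sample — exactly the random variability that Beecher-Monas elides.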

Perhaps most egregious is Beecher-Monas’ assertion that:

“Any increase above a relative risk of one indicates that there is some effect.”

BM at 1067. So much for ruling out chance, bias, and confounding! Or looking at an entire body of epidemiologic research for strength, consistency, coherence, exposure-response, etc. Beecher-Monas has thus moved beyond a liberal, to a libertine, position. In case the reader has any doubts of the idiosyncrasy of her views, she repeats herself:

“As long as there is a relative risk greater than 1.0, there is some association, and experts should be permitted to base their causal explanations on such studies.”

BM at 1067-68. This is evidentiary nihilism in full glory. Beecher-Monas has endorsed relying upon studies irrespective of their study design or validity, their individual confidence intervals, their aggregate summary point estimates and confidence intervals, or the absence of important Bradford Hill considerations, such as consistency, strength, and dose-response. So an expert witness may opine about general causation from reliance upon a single study with a relative risk of 1.05, say with a 95% confidence interval of 0.8 – 1.4?[7] For this startling proposition, Beecher-Monas cites the work of Sander Greenland, a wild and woolly plaintiffs’ expert witness in various toxic tort litigations, including vaccine autism and silicone autoimmune cases.

RR > 2

Beecher-Monas’ discussion of inferring specific causation from relative risks greater than two devolves into a muddle by her failure to distinguish general from specific causation. BM at 1067. There are different relevancies for general and specific causation, depending upon context, such as clinical trials or epidemiologic studies for general causation, number of studies available, and the like. Ultimately, she adds little to the discussion and debate about this issue, or any other.

[1] See previous comments on the book at “Beecher-Monas and the Attempt to Eviscerate Daubert from Within”; “Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping”; and “Confidence in Intervals and Diffidence in the Courts.”

[2] Kenneth J. Rothman, Epidemiology: An Introduction 250 (2d ed. 2012).

[3] Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n. 30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[4] See BM at 1066 & n. 44, citing “See, e.g., In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1226–27 (D. Colo. 1998); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“[S]cientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies.”).”


[5] See, e.g., Erica Beecher-Monas, Evaluating Scientific Evidence 58, 67 (N.Y. 2007) (“No matter how persuasive epidemiological or toxicological studies may be, they could not show individual causation, although they might enable a (probabilistic) judgment about the association of a particular chemical exposure to human disease in general.”) (“While significance testing characterizes the probability that the relative risk would be the same as found in the study as if the results were due to chance, a relative risk of 2 is the threshold for a greater than 50 percent chance that the effect was caused by the agent in question.”)(incorrectly describing significance probability as a point probability as opposed to tail probabilities).

[6] David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Federal Jud. Ctr., Reference Manual on Scientific Evidence 211, 253–54 (3d ed. 2011) (discussing the statistical concept of power).

[7] BM at 1070 (pointing to a passage in the FJC’s Reference Manual on Scientific Evidence that provides an example of one 95% confidence interval that includes 1.0, but which shrinks when calculated as a 90% interval to 1.1 to 2.2, which values “demonstrate some effect with confidence interval set at 90%”). This is nonsense in the context of observational studies.