TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Confounded by Confounding in Unexpected Places

December 12th, 2021

In assessing an association for causality, the starting point is “an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance.”[1] In other words, before we even embark on consideration of Bradford Hill’s nine considerations, we should have ruled out chance, bias, and confounding as an explanation for the claimed association.[2]

Although confounding is sometimes considered a type of systematic bias, its importance warrants its own category. Historically, courts have been rather careless in addressing confounding. The Supreme Court, in a case decided before Daubert and the statutory modifications to Rule 702, ignored the role of confounding in a multiple regression model used to support racial discrimination claims. In language that would be reprised many times to avoid and evade the epistemic demands of Rule 702, the Court held, in Bazemore, that the omission of variables in multiple regression models raises an issue that affects “the analysis’ probativeness, not its admissibility.”[3]
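
The statistical point is easy to demonstrate. What follows is a minimal simulation sketch in Python (the data, numbers, and helper function are invented for illustration and have nothing to do with the Bazemore record), showing that when a variable correlated with both the predictor of interest and the outcome is omitted, the regression does not merely become less probative; it attributes the omitted variable’s effect to the predictor that remains.

```python
import numpy as np

# Hypothetical data for illustration only (not the Bazemore record): salaries are
# driven by years of experience, not by group membership, but group membership is
# correlated with experience.
rng = np.random.default_rng(0)
n = 5_000
group = rng.binomial(1, 0.5, n)                      # predictor of interest (0/1)
experience = 10 + 5 * group + rng.normal(0, 3, n)    # omitted variable, correlated with group
salary = 30_000 + 2_000 * experience + rng.normal(0, 5_000, n)  # no true group effect

def ols_coefficients(predictors, y):
    """Ordinary least squares fit (with intercept) via numpy's least-squares solver."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

short_model = ols_coefficients([group], salary)              # omits experience
full_model = ols_coefficients([group, experience], salary)   # includes experience

print(f"group coefficient, experience omitted:  {short_model[1]:,.0f}")  # roughly +10,000 (spurious)
print(f"group coefficient, experience included: {full_model[1]:,.0f}")   # roughly 0
```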

When courts have not ignored confounding,[4] they have sidestepped its consideration by imparting magical abilities to confidence intervals to take care of the problem posed by lurking variables.[5]

The advent of the Reference Manual on Scientific Evidence allowed a ray of hope to shine on health effects litigation. Several important cases have been decided by judges who have taken note of the importance of assessing studies for confounding.[6] As a new, fourth edition of the Manual is being prepared, its editors and authors should not lose sight of the work that remains to be done.

The Third Edition of the Federal Judicial Center’s and the National Academies of Sciences, Engineering & Medicine’s Reference Manual on Scientific Evidence (RMSE3d 2011) addressed confounding in several chapters, not always consistently. The chapter on statistics defined “confounder” in terms of correlation between both the independent and dependent variables:

“[a] confounder is correlated with the independent variable and the dependent variable. An association between the dependent and independent variables in an observational study may not be causal, but may instead be due to confounding”[7]

The chapter on epidemiology, on the other hand, defined a confounder as a risk factor for both the exposure and disease outcome of interest:

“A factor that is both a risk factor for the disease and a factor associated with the exposure of interest. Confounding refers to a situation in which an association between an exposure and outcome is all or partly the result of a factor that affects the outcome but is unaffected by the exposure.”[8]

Unfortunately, the epidemiology chapter never defined “risk factor.” The term certainly seems much less neutral than a “correlated” variable, which lacks any suggestion of causality. Perhaps there is some implied help from the authors of the epidemiology chapter when they described a case of confounding by “known causal risk factors,” which suggests that some risk factors may not be causal.[9] To muck up the analysis, however, the epidemiology chapter went on to define “risk” as “[a] probability that an event will occur (e.g., that an individual will become ill or die within a stated period of time or by a certain age).”[10]

Both the statistics and the epidemiology chapters provide helpful examples of confounding and speak to the need for excluding confounding as the basis for an observed association. The statistics chapter, for instance, described confounding as a threat to “internal validity,”[11] and noted the need to inquire whether the adjustments in multivariate studies were “sensible and sufficient.”[12]

The epidemiology chapter in one passage instructed that when “an association is uncovered, further analysis should be conducted to assess whether the association is real or a result of sampling error, confounding, or bias.”[13] Elsewhere in the same chapter, the precatory becomes mandatory.[14]

Legally Unexplored Source of Substantial Confounding

As the Reference Manual implies, merely attempting to control for confounding is not adequate; the control must be done carefully and sufficiently. Under the heading of sufficiency and due care, there are epidemiologic studies that purport to control for confounding, but fail rather dramatically. The use of administrative databases, whether based upon national healthcare or insurance claims, has become commonplace in chronic disease epidemiology. Their large size obviates many concerns about power to detect rare disease outcomes. Unfortunately, there is often a significant threat to the validity of such studies, which are based upon data sets that characterize patients as diabetic, hypertensive, obese, or smokers vel non. By dichotomizing what are continuous variables, these studies extract a significant price in the multivariate models used in epidemiology.

Of course, physicians frequently create guidelines for normal versus abnormal, and these divisions or categories show up in medical records, in databases, and ultimately in epidemiologic studies. The actual measurements are not always available, and the use of a categorical variable may appear to simplify the statistical analysis of the dataset. Unfortunately, the results can be quite misleading. Consider the measurements of blood pressure in a study that is evaluating whether an exposure variable (such as medication use or an environmental contaminant) is associated with an outcome such as cardiovascular or renal disease. Hypertension, if present, would clearly be a confounder, but the use of a categorical variable for hypertension would greatly undermine the validity of the study. If many of the study participants with hypertension had their condition well controlled by medication, then the categorical variable will dilute the adjustment for the role of hypertension in driving the association between the exposure and outcome variables of interest. Even if none of the hypertensive patients had good control, the reduction of all hypertension to a category, rather than a continuous measurement, is a path to the loss of information and the creation of bias.

Almost 40 years ago, Jacob Cohen showed that dichotomization of continuous variables results in a loss of power.[15] Twenty years later, Peter Austin showed in a Monte Carlo simulation that categorizing a continuous confounding variable in a logistic regression inflates the rate of false-positive associations.[16] The type I (false-positive) error rate increases with sample size, with increasing correlation between the confounding variable and the outcome of interest, and with the number of categories used for the continuous variables. Of course, the national databases often have huge sample sizes, which only serves to increase the bias from the use of categorical versions of confounding variables.
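
A minimal Monte Carlo sketch, loosely modeled on the design Austin described, illustrates the problem; the scenario, parameter values, and use of Python with statsmodels are my own assumptions for illustration. Under a true null of no exposure effect, adjusting for a dichotomized version of the confounder leaves residual confounding and an inflated false-positive rate, while adjusting for the continuous confounder keeps the error rate near the nominal five percent.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def false_positive_rate(n, n_sim=500):
    """Share of simulated studies that 'find' an exposure effect that does not exist."""
    rejections = {"continuous adjustment": 0, "dichotomized adjustment": 0}
    for _ in range(n_sim):
        confounder = rng.normal(0.0, 1.0, n)           # e.g., standardized blood pressure
        # exposure is more likely at higher confounder values; the outcome depends only
        # on the confounder, never on the exposure (the null hypothesis is true)
        exposure = rng.binomial(1, 1.0 / (1.0 + np.exp(-confounder)))
        outcome = rng.binomial(1, 1.0 / (1.0 + np.exp(1.0 - confounder)))
        dichotomized = (confounder > 0).astype(float)  # "hypertensive, yes or no"
        for label, adjustment in (("continuous adjustment", confounder),
                                  ("dichotomized adjustment", dichotomized)):
            X = sm.add_constant(np.column_stack([exposure, adjustment]))
            p_value = sm.Logit(outcome, X).fit(disp=0).pvalues[1]  # p-value for exposure
            rejections[label] += p_value < 0.05
    return {label: count / n_sim for label, count in rejections.items()}

for n in (500, 2_000):
    print(n, false_positive_rate(n))
# expected pattern: roughly 0.05 with the continuous adjustment, well above 0.05 with
# the dichotomized adjustment, and worse as the sample size grows
```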

The late Douglas Altman, who did so much to steer the medical literature toward greater validity, warned that dichotomizing continuous variables was known to cause loss of information, statistical power, and reliability in medical research.[17]

In the field of pharmaco-epidemiology, the bias created by dichotomization of a continuous variable is harmful from both the perspective of statistical estimation and hypothesis testing.[18] While readers are misled into believing that the study adjusts for important covariates, the study will have lost information and power, with the result of presenting false-positive findings that carry the false allure of a fully adjusted model. Indeed, this bias from inadequate control of confounding infects several pending pharmaceutical multi-district litigations.


Supreme Court

General Electric Co. v. Joiner, 522 U.S. 136, 145-46 (1997) (holding that an expert witness’s reliance on a study was misplaced when the subjects of the study “had been exposed to numerous potential carcinogens”)

First Circuit

Bricklayers & Trowel Trades Internat’l Pension Fund v. Credit Suisse Securities (USA) LLC, 752 F.3d 82, 89 (1st Cir. 2014) (affirming exclusion of expert witness who failed to account for confounding in event studies), aff’g 853 F. Supp. 2d 181, 188 (D. Mass. 2012)

Second Circuit

Wills v. Amerada Hess Corp., 379 F.3d 32, 50 (2d Cir. 2004) (holding expert witness’s specific causation opinion that plaintiff’s squamous cell carcinoma had been caused by polycyclic aromatic hydrocarbons was unreliable, when plaintiff had smoked and drunk alcohol)

Deutsch v. Novartis Pharms. Corp., 768 F. Supp. 2d 420, 432 (E.D.N.Y. 2011) (“When assessing the reliability of an epidemiologic study, a court must consider whether the study adequately accounted for ‘confounding factors.’”)

Schwab v. Philip Morris USA, Inc., 449 F. Supp. 2d 992, 1199–1200 (E.D.N.Y. 2006), rev’d on other grounds, 522 F.3d 215 (2d Cir. 2008) (describing confounding in studies of low-tar cigarettes, whose authors failed to account for confounding from the healthier lifestyles of low-tar users)

Third Circuit

In re Zoloft Prods. Liab. Litig., 858 F.3d 787, 793 (3d Cir. 2017) (affirming exclusion of causation expert witness)

Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 591 (D.N.J. 2002), aff’d, 68 Fed. Appx. 356 (3d Cir. 2003) (bias, confounding, and chance must be ruled out before an association may be accepted as showing a causal relationship)

Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434 (W.D.Pa. 2003) (excluding expert witnesses in Parlodel case; noting that causality assessments and case reports fail to account for confounding)

Wade-Greaux v. Whitehall Labs., Inc., 874 F. Supp. 1441 (D.V.I. 1994) (unanswered questions about confounding required summary judgment against plaintiff in Primatene Mist birth defects case)

Fifth Circuit

Knight v. Kirby Inland Marine, Inc., 482 F.3d 347, 353 (5th Cir. 2007) (affirming exclusion of expert witnesses) (“Of all the organic solvents the study controlled for, it could not determine which led to an increased risk of cancer …. The study does not provide a reliable basis for the opinion that the types of chemicals appellants were exposed to could cause their particular injuries in the general population.”)

Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953, *7 (E.D. La. June 16, 2015) (excluding expert witness causation opinion that failed to account for other confounding exposures that could have accounted for the putative association), aff’d, 650 F. App’x 170 (5th Cir. 2016)

LeBlanc v. Chevron USA, Inc., 513 F. Supp. 2d 641, 648-50 (E.D. La. 2007) (excluding expert witness testimony that purported to show causality between plaintiff’s benzene exposure and myelofibrosis), vacated, 275 Fed. App’x 319 (5th Cir. 2008) (remanding case for consideration of new government report on health effects of benzene)

Castellow v. Chevron USA, 97 F. Supp. 2d 780 (S.D. Tex. 2000) (discussing confounding in passing; excluding expert witness causation opinion in gasoline exposure AML case)

Kelley v. American Heyer-Schulte Corp., 957 F. Supp. 873 (W.D. Tex. 1997) (confounding in breast implant studies)

Sixth Circuit

Pluck v. BP Oil Pipeline Co., 640 F.3d 671 (6th Cir. 2011) (affirming exclusion of specific causation opinion that failed to rule out confounding factors)

Nelson v. Tennessee Gas Pipeline Co., 243 F.3d 244, 252-54 (6th Cir. 2001) (expert witness’s failure to account for confounding factors in cohort study of alleged PCB exposures rendered his opinion unreliable)

Turpin v. Merrell Dow Pharms., Inc., 959 F.2d 1349, 1355-57 (6th Cir. 1992) (discussing failure of some studies to evaluate confounding)

Adams v. Cooper Indus. Inc., 2007 WL 2219212, 2007 U.S. Dist. LEXIS 55131 (E.D. Ky. 2007) (differential diagnosis includes ruling out confounding causes of plaintiffs’ disease).

Seventh Circuit

People Who Care v. Rockford Bd. of Educ., 111 F.3d 528, 537–38 (7th Cir. 1997) (noting importance of considering role of confounding variables in educational achievement);

Caraker v. Sandoz Pharms. Corp., 188 F. Supp. 2d 1026, 1032, 1036 (S.D. Ill 2001) (noting that “the number of dechallenge/rechallenge reports is too scant to reliably screen out other causes or confounders”)

Eighth Circuit

Penney v. Praxair, Inc., 116 F.3d 330, 333-334 (8th Cir. 1997) (affirming exclusion of expert witness who failed to account for the confounding effects of age, medications, and medical history in interpreting PET scans)

Marmo v. Tyson Fresh Meats, Inc., 457 F.3d 748, 758 (8th Cir. 2006) (affirming exclusion of specific causation expert witness opinion)

Ninth Circuit

Coleman v. Quaker Oats Co., 232 F.3d 1271, 1283 (9th Cir. 2000) (p-value of “3 in 100 billion” was not probative of age discrimination when “Quaker never contend[ed] that the disparity occurred by chance, just that it did not occur for discriminatory reasons. When other pertinent variables were factored in, the statistical disparity diminished and finally disappeared.”)

In re Viagra & Cialis Prods. Liab. Litig., 424 F.Supp. 3d 781 (N.D. Cal. 2020) (excluding causation opinion on grounds including failure to account properly for confounding)

Avila v. Willits Envt’l Remediation Trust, 2009 WL 1813125, 2009 U.S. Dist. LEXIS 67981 (N.D. Cal. 2009) (excluding expert witness opinion that failed to rule out confounding factors of other sources of exposure or other causes of disease), aff’d in relevant part, 633 F.3d 828 (9th Cir. 2011)

In re Phenylpropanolamine Prods. Liab. Litig., 289 F.Supp.2d 1230 (W.D.Wash. 2003) (ignoring study validity in a litigation arising almost exclusively from a single observational study that had multiple internal and external validity problems; relegating assessment of confounding to cross-examination)

In re Bextra and Celebrex Marketing Sales Practice, 524 F. Supp. 2d 1166, 1172 – 73 (N.D. Calif. 2007) (discussing invalidity caused by confounding in epidemiologic studies)

In re Silicone Gel Breast Implants Products Liab. Lit., 318 F.Supp. 2d 879, 893 (C.D.Cal. 2004) (observing that controlling for potential confounding variables is required, among other findings, before accepting epidemiologic studies as demonstrating causation).

Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142 (E.D. Wash. 2009) (noting that confounding must be ruled out)

Valentine v. Pioneer Chlor Alkali Co., Inc., 921 F. Supp. 666 (D. Nev. 1996) (excluding plaintiffs’ expert witnesses, including Dr. Kilburn, for reliance upon study that failed to control for confounding)

Tenth Circuit

Hollander v. Sandoz Pharms. Corp., 289 F.3d 1193, 1213 (10th Cir. 2002) (noting importance of accounting for confounding variables in causation of stroke)

In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1233 (D. Colo. 1998) (alternative explanations, such as confounding, should be ruled out before accepting causal claims).

Eleventh Circuit

In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291 (N.D. Fla. 2018) (discussing confounding in studies but credulously accepting challenged explanations from David Madigan) (citing Bazemore, a pre-Daubert decision that did not address a Rule 702 challenge to opinion testimony)

District of Columbia Circuit

American Farm Bureau Fed’n v. EPA, 559 F.3d 512 (D.C. Cir. 2009) (noting that data relied upon in setting particulate matter standards addressing visibility should avoid the confounding effects of humidity)

STATES

Delaware

In re Asbestos Litig., 911 A.2d 1176 (New Castle Cty., Del. Super. 2006) (discussing confounding; denying motion to exclude plaintiffs’ expert witnesses’ chrysotile causation opinions)

Minnesota

Goeb v. Tharaldson, 615 N.W.2d 800, 808, 815 (Minn. 2000) (affirming exclusion of Drs. Janette Sherman and Kaye Kilburn, in Dursban case, in part because of expert witnesses’ failures to consider confounding adequately).

New Jersey

In re Accutane Litig., 234 N.J. 340, 191 A.3d 560 (2018) (affirming exclusion of plaintiffs’ expert witnesses’ causation opinions; deprecating reliance upon studies not controlled for confounding)

In re Proportionality Review Project (II), 757 A.2d 168 (N.J. 2000) (noting the importance of assessing the role of confounders in capital sentences)

Grassis v. Johns-Manville Corp., 591 A.2d 671, 675 (N.J. Super. Ct. App. Div. 1991) (discussing the possibility that confounders may lead to an erroneous inference of a causal relationship)

Pennsylvania

Porter v. SmithKline Beecham Corp., No. 3516 EDA 2015, 2017 WL 1902905 (Pa. Super. May 8, 2017) (affirming exclusion of expert witness causation opinions in Zoloft birth defects case; discussing the importance of excluding confounding)

Tennessee

McDaniel v. CSX Transportation, Inc., 955 S.W.2d 257 (Tenn. 1997) (affirming trial court’s refusal to exclude expert witness opinion that failed to account for confounding)


[1] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965) (emphasis added).

[2] See, e.g., David A. Grimes & Kenneth F. Schulz, “Bias and Causal Associations in Observational Research,” 359 The Lancet 248 (2002).

[3] Bazemore v. Friday, 478 U.S. 385, 400 (1986) (reversing the Court of Appeals’ decision that would have disallowed a multiple regression analysis that omitted important variables). Buried in a footnote, the Court did note, however, that “[t]here may, of course, be some regressions so incomplete as to be inadmissible as irrelevant; but such was clearly not the case here.” Id. at 400 n.10. What the Court missed, of course, is that a regression may be so incomplete as to be unreliable or invalid. The invalidity of the regression in Bazemore does not appear to have been raised as an evidentiary issue under Rule 702. None of the briefs in the Supreme Court or the judicial opinions cited or discussed Rule 702.

[4] “Confounding in the Courts” (Nov. 2, 2018).

[5] See, e.g., Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989) (“Fortunately, we do not have to resolve any of the above questions [as to bias and confounding], since the studies presented to us incorporate the possibility of these factors by the use of a confidence interval.”). This howler has been widely acknowledged in the scholarly literature. See David Kaye, David Bernstein, and Jennifer Mnookin, The New Wigmore – A Treatise on Evidence: Expert Evidence § 12.6.4, at 546 (2d ed. 2011); Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 86-87 (2009) (criticizing the blatantly incorrect interpretation of confidence intervals by the Brock court).

[6] “On Praising Judicial Decisions – In re Viagra” (Feb. 8, 2021); see “Ruling Out Bias and Confounding Is Necessary to Evaluate Expert Witness Causation Opinions” (Oct. 28, 2018); “Rule 702 Requires Courts to Sort Out Confounding” (Oct. 31, 2018).

[7] David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in RMSE3d 211, 285 (3d ed. 2011).

[8] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in RMSE3d 549, 621.

[9] Id. at 592.

[10] Id. at 627.

[11] Id. at 221.

[12] Id. at 222.

[13] Id. at 567-68 (emphasis added).

[14] Id. at 572 (describing chance, bias, and confounding, and noting that “[b]efore any inferences about causation are drawn from a study, the possibility of these phenomena must be examined”); id. at 511 n.22 (observing that “[c]onfounding factors must be carefully addressed”).

[15] Jacob Cohen, “The cost of dichotomization,” 7 Applied Psychol. Measurement 249 (1983).

[16] Peter C. Austin & Lawrence J. Brunner, “Inflation of the type I error rate when a continuous confounding variable is categorized in logistic regression analyses,” 23 Statist. Med. 1159 (2004).

[17] See, e.g., Douglas G. Altman & Patrick Royston, “The cost of dichotomising continuous variables,” 332 Brit. Med. J. 1080 (2006); Patrick Royston, Douglas G. Altman, and Willi Sauerbrei, “Dichotomizing continuous predictors in multiple regression: a bad idea,” 25 Stat. Med. 127 (2006). See also Robert C. MacCallum, Shaobo Zhang, Kristopher J. Preacher, and Derek D. Rucker, “On the Practice of Dichotomization of Quantitative Variables,” 7 Psychological Methods 19 (2002); David L. Streiner, “Breaking Up is Hard to Do: The Heartbreak of Dichotomizing Continuous Data,” 47 Can. J. Psychiatry 262 (2002); Henian Chen, Patricia Cohen, and Sophie Chen, “Biased odds ratios from dichotomization of age,” 26 Statist. Med. 3487 (2007); Carl van Walraven & Robert G. Hart, “Leave ‘em Alone – Why Continuous Variables Should Be Analyzed as Such,” 30 Neuroepidemiology 138 (2008); O. Naggara, J. Raymond, F. Guilbert, D. Roy, A. Weill, and Douglas G. Altman, “Analysis by Categorizing or Dichotomizing Continuous Variables Is Inadvisable,” 32 Am. J. Neuroradiol. 437 (Mar 2011); Neal V. Dawson & Robert Weiss, “Dichotomizing Continuous Variables in Statistical Analysis: A Practice to Avoid,” Med. Decision Making 225 (2012); Phillippa M Cumberland, Gabriela Czanner, Catey Bunce, Caroline J Doré, Nick Freemantle, and Marta García-Fiñana, “Ophthalmic statistics note: the perils of dichotomising continuous variables,” 98 Brit. J. Ophthalmol. 841 (2014).

[18] Valerii Fedorov, Frank Mannino, and Rongmei Zhang, “Consequences of dichotomization,” 8 Pharmaceut. Statist. 50 (2009).

State-of-the-Art Legal Defenses and Shifty Paradigms

October 16th, 2021

The essence of a failure-to-warn claim is that (1) a manufacturer knows, or should know, about a harmful aspect of its product, (2) which knowledge is not appreciated by customers, (3) the manufacturer fails to warn adequately of this known harm, and (4) the manufacturer’s failure to warn causes the plaintiff to sustain the particular harm of which the manufacturer had knowledge, actual or constructive.

There are myriad problems with assessing the knowledge component in failure-to-warn claims. Some formulations impute to manufacturers the knowledge of an expert in the field. First, which expert’s claim to knowledge counts for or against the existence of a duty? The typical formulation begs the question of which expert’s understanding will control when experts in the field disagree. Second, and equally problematic, knowledge has a temporal aspect. There are causal relationships we “know” today, which we did not know in times past. This temporal component becomes even more refractory for failure-to-warn claims when the epistemic criteria for claims of knowledge change over time.

In the early 20th century, infectious disease epidemiology, with its reliance upon Koch’s postulates, dominated the model of causation used in public and scientific discourse. The very nature of Koch’s postulates made the identification of a specific pathogen necessary to the causation of a specific disease. Later in the first half of the 20th century, epidemiologists and clinicians came to realize that the specific pathogen may be necessary but not sufficient for inducing a particular infectious disease. Still, there was some comfort in having causal associations predicated upon necessary relationships. Clinicians and clinical scientists did not have to worry too much about probability theory or statistics.

The development of causal models in which the putative cause was neither necessary nor sufficient for bringing about the outcome of interest was a substantial shock to the system. In the absence of one-to-one specificity, scientists had to account for confounding variables in ways that they had not done previously. The implications for legal state-of-the-art defenses could not be more profound. In the first half of the 20th century, case reports and series were frequently seen as adequate for suggesting and establishing causal relationships. By the end of the 1940s, scientists were well aware of the methodological inappropriateness of relying upon case reports and series, and of the need for analytical epidemiologic studies to support causal claims.

Several historians of science have addressed the changing causal paradigm, which ultimately would permit and even encourage scientists to identify causal associations, even when the exposures studied were neither necessary nor sufficient to bring about the end point of interest. In 2011, Mark Parascandola, while he was an epidemiologist in the National Cancer Institute’s Tobacco Control Research Branch, wrote an important history of this paradigm shift and its implications in epidemiology.[1] His paper should be required reading for all lawyers who work on “long-tail” litigation, involving claims that risks were known to manufacturers even before World War II.

In Parascandola’s history, epidemiology and clinical science focused largely on infectious diseases in the early 20th century, and as a result, causal association was seen through the lens of Koch’s postulates with its implied model of necessary and sufficient conditions for causal attribution. Not until after World War II did “risk factor” epidemiology emerge to address the causal role of exposures – such as tobacco smoking – that were neither necessary nor sufficient for causing an outcome of interest.[2]

The shift from infectious to chronic diseases, such as cancer and cardiovascular disease, occurred in the 1950s, and brought with it acceptance of different concepts of causation, which involved stochastic events, indeterminism, multi-factorial contributions, and confounding of observations by independent but correlated causes. The causal criteria for infectious disease were generally unhelpful in supporting causal claims about chronic diseases.

Parascandola characterizes the paradigm shift as a “radical change,” influenced by developments in statistics, quantum mechanics, and causal theory.[3] Edward Cuyler Hammond, an epidemiologist with the American Cancer Society, for example, wrote in 1955 that:

“[t]he cause of an effect has sometimes been defined as a single factor which antecedes, which is necessary, and which is sufficient to produce the effect. Clearly this definition is inadequate for the study of biologic phenomena, which are produced by a complex pattern of environmental conditions interacting with the highly complex and variable make-up of living organisms.”[4]

The shift in causal models within epidemiologic thinking and research introduced new complexity with important practical implications. Gone was the one-to-one connection between pathogens (or pathogenic exposures) and specific diseases. Specificity was an important victim of the new model of causation. Causal models had to account for multi-factorial contributions to disease.[5] Confounding, the correlation between exposures of interest and other exposures that were truly driving the observations, became a substantial threat to validity. The discerning lens of analytical epidemiology was able to identify tobacco smoking as a cause of lung cancer only because of the large increased risks, ten-fold and greater, observed in multiple studies. There were no competing but independent risks of that magnitude at hand that could eliminate or reverse the observed tobacco risks.

Parascandola notes that in the 1950s, the criteria for causal assessment were in flux and the subject of debate:

“Previous informal rules or guides for inference, such as Koch’s postulates, were not adequate to identify partial causes of chronic disease based on a combination of epidemiologic and laboratory evidence.”[6]

As noted above, the legal implications of Parascandola’s historical analysis are hugely important. Scientists and statisticians were scrambling to develop appropriate methodologies to accommodate the changed causal models and causal criteria. Mistakes were made along the way as the models and criteria changed. In Sir Richard Doll’s famous 1955 study of lung cancer among asbestos factory workers, the statistical methods were surprisingly primitive by the standards of modern epidemiology. Even more stunning was that Sir Richard failed to incorporate smoking histories or to account for confounding from smoking before reaching a conclusion that lung cancer was associated with long-term asbestos factory work that had induced asbestosis.[7]

Not until the late 1950s and early 1960s did statisticians develop multivariate models to help assess potential confounding.[8] Perhaps the most cited paper in epidemiology was published by Nathan Mantel (the pride of the Brooklyn Hebrew Orphan Asylum) and William Haenszel in 1959. Its approach to stratified analysis was further elaborated upon by the authors and others all through the 1960s and into the 1970s.[9]
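
The core of the Mantel-Haenszel approach fits in a few lines. The sketch below, in Python with made-up counts chosen only for illustration, computes a crude odds ratio and the stratum-adjusted Mantel-Haenszel odds ratio for hypothetical data in which smoking is more common among the exposed; stratification removes the spurious crude association.

```python
# Hypothetical counts, invented for illustration: within each smoking stratum the
# exposure has no effect (each stratum-specific odds ratio is 1.0), but smoking is
# far more common among the exposed, so the crude (unstratified) odds ratio is inflated.
strata = {
    # stratum: (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases)
    "smokers":     (80, 120, 20, 30),
    "non-smokers": (10, 90, 40, 360),
}

def crude_odds_ratio(strata):
    a, b, c, d = (sum(table[i] for table in strata.values()) for i in range(4))
    return (a * d) / (b * c)

def mantel_haenszel_odds_ratio(strata):
    numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
    denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
    return numerator / denominator

print(f"crude odds ratio:           {crude_odds_ratio(strata):.2f}")            # ~2.79, driven by smoking
print(f"Mantel-Haenszel odds ratio: {mantel_haenszel_odds_ratio(strata):.2f}")  # 1.00 after stratification
```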

Similarly, the evolution of criteria for causal attribution based upon risk factor epidemiology required decades of discussion and debate. Reasonably well defined criteria did not emerge until the mid-1960s, with the famous Public Health Service report on smoking and lung cancer,[10] and Sir Austin Bradford Hill’s famous after-dinner talk to the Royal Society of Medicine.[11]

Several years before Parascandola published his historical analysis, three historians of science published a paper with a very similar thesis.[12] These authors noted that there was, indeed, a legitimate controversy over whether tobacco smoking caused lung cancer in the 1950s and early 1960s, as the mechanistic Koch’s postulates gave way to the statistical methods of risk-factor epidemiology. The historians’ paper observed that by the 1950s, infectious diseases such as tuberculosis were in retreat, and the public health community’s focus was on chronic diseases such as lung cancer. The lung cancer controversy of the 1950s pushed scientists to revise their conceptions of causation,[13] and ultimately led to the strengthening and legitimizing of the field of epidemiology.[14] The growing acceptance of epidemiologic methods for identifying causes, neither necessary nor sufficient, pushed aside the attachment to Koch’s postulates and the skepticism over statistical reasoning.

Interestingly, this historians’ paper was funded completely by the Rollins School of Public Health of Emory University. Two of the authors had been sought out by a recruiting agency for the tobacco industry, but fell out with the agency and the tobacco companies when they realized that they could not support the litigation goals. In a footnote, the authors emphasized that their factual analysis and argument contradicted the industry’s desired defense.[15]

Reaching back even farther in time, there is the redoubtable Irving John Selikoff, who wrote in 1991:

“We are inevitably bound by the knowledge of the time in which we live. An example may be given. During the 1930s and 1940s, random instances of lung cancer occurring among workers exposed to asbestos were reported and attention was called to these by the collection of cases both in registers and in review papers. With the continued growth of the asbestos industry, it was deemed wise to epidemiologically examine the proposed association. This was done in an elegant, innovative, well-considered study by Richard Doll, a study which any one of us would have been proud to report in 1955.”[16]

What is ironic is that Dr. Selikoff had testified for plaintiffs’ counsel as an expert witness specifically on state of the art, or the question of when defendants should have known and warned that asbestos caused lung cancer.[17] Dr. Selikoff ultimately withdrew from testifying, in large part because his views on this matter were not particularly helpful to plaintiffs.

The shift in causal criteria, and the rejection of case reports and case series, can be seen in the suggestion, made in the 1930s by a few pathologists, that silicosis caused lung cancer. The few scientists who made this causal claim relied heavily upon anecdotal and uncontrolled necropsy series.[18]

After World War II, these causal claims fell into disrepute as not properly supported by valid scientific methodology. Dr. Madge Thurlow Macklin, a female pioneer in clinical medicine and epidemiology,[19] and one of the early adopters of statistical methodology in her work, debunked the causal claims:

“If silicosis is being considered as a causative agent in lung cancer, the control group should be as nearly like the experimental or observed group as possible in sex, age distribution, race, facilities for diagnosis, other possible carcinogenic factors, etc. The only point in which the control group should differ in an ideal study would be that they were not exposed to free silica, whereas the experimental group was. The incidence of lung cancer could then be compared in the two groups of patients.

This necessity is often ignored; and a ‘random’ control group is obtained for comparison on the assumption that any group taken at random is a good group for comparison. Fallacious results based on such studies are discussed briefly.”[20]

Macklin’s advice sounds like standard-operating procedure today, but in the 1940s, it was viewed as radical and wrong by many physicians and clinical scientists.

Of course, the change over time in the knowledge of, and techniques for, diagnostic methods, quantitative measurements, and disease definitions also affect litigated issues. The change in epistemic standards and causal criteria, however, fundamentally changed legal standards for tort liability. The shift from deterministic models of necessary and sufficient causation to risk factor causation had, and continues to have, enormous ramifications for the legal adjudication of questions concerning when companies, held to the knowledge of an expert in the field, should have started to warn about the risks created by their products. Mind the gap!


[1] Mark Parascandola, “The epidemiologic transition and changing concepts of causation and causal inference,” 64 Revue d’histoire des sciences 243 (2011).

[2] Id. at 245.

[3] Id. at 248.

[4] Id. at 252, citing Edward Cuyler Hammond, “Cause and Effect,” in Ernest L. Wynder, ed., The Biologic Effects of Tobacco (1955).

[5] Id. at 257.

[6] Id.

[7] Richard Doll, “Mortality from Lung Cancer in Asbestos Workers,” 12 Brit. J. Indus. Med. 81 (1955).

[8] See Parascandola at 258.

[9] Nathan Mantel & William Haenszel, “Statistical aspects of the analysis of data from retrospective studies of disease,” 22 J. Nat’l Cancer Instit. 19 (1959). See Mervyn Susser, “Epidemiology in the United States after World War II: The Evolution of Technique,” 7 Epidemiology Reviews 147 (1985).

[10] Surgeon General, Smoking and Health: Report of the Advisory Committee to the Surgeon General of the Public Health Service, PHS Publication No. 1103 (1964).

[11] Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965).

[12] Colin Talley, Howard I. Kushner & Claire E. Sterk, “Lung Cancer, Chronic Disease Epidemiology, and Medicine, 1948-1964,” 59 J. History Med. & Allied Sciences 329 (2004) [Talley]. Parascandola appeared not to have been aware of this article; at least he did not cite it.

[13] Id. at 374.

[14] Id. at 334.

[15] Id. at 329.

[16] Irving John Selikoff, “Statistical Compassion,” 44 J. Clin. Epidemiol. 141S, 142S (1991) (internal citations omitted) (emphasis added).

[17] “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010). See also Peter W.J. Bartrip, “Irving John Selikoff and the Strange Case of the Missing Medical Degrees,” 58 J. History Med. 3, 27 & n.88-92 (2003) (quoting insulator union President Andrew Haas, as saying “[w]e all owe a great debt of thanks for often and expert testimony on behalf of our members … .” Andrew Haas, Comments from the General President, 18 Asbestos Worker (Nov. 1972)).

[18] See, e.g., Max O. Klotz, “The Association Silicosis & Carcinoma of Lung 1939,” 35 Cancer Research 38 (1939); C.S. Anderson & J. Heney Dible, “Silicosis and carcinoma of the lung,” 38 J. Hygiene 185 (1938).

[19] Barry Mehler, “Madge Thurlow Macklin,” in Barbara Sicherman and Carol Hurd Green, eds., Notable American Women: The Modern Period 451-52 (1980); Laura Lynn Windsor, Women in Medicine: An Encyclopedia 134 (2002).

[20] Madge Thurlow Macklin, “Pitfalls in Dealing with Cancer Statistics, Especially as Related to Cancer of the Lung,” 14 Diseases Chest 525, 532-33, 529-30 (1948). See also “History of Silica Litigation – the Lung Cancer Angle” (Feb. 3, 2019); “The Unreasonable Success of Asbestos Litigation” (July 25, 2015); “Careless Scholarship about Silica History” (July 21, 2014) (discussing David Egilman); “Silicosis, Lung Cancer, and Evidence-Based Medicine in North America” (July 4, 2014).

Finding Big Blue

July 26th, 2021

The Washington Supreme Court recently upheld an $81.5 million verdict, against GPC and NAPA, in an asbestos peritoneal mesothelioma case. The award included $30 million for loss of consortium. Coogan v. Borg-Warner Morse Tec Inc., 12 Wash. App. 2d 1021, 2020 WL 824192 (2020), rev’d in part, No. 98296-1, 2021 Wash. LEXIS 383 *, 2021 WL 2835358 (Wash. July 8, 2021).[1] The main points of contention on appeal were plaintiffs’ counsel’s misconduct and the excessiveness of the verdict, which was for only compensatory damages. Twelve defendants settled before trial for a total of $4.4 million. Of the settling defendants, Defendant Manville paid $1.5 million.

Plaintiffs’ proofs against GPC and NAPA were for chrysotile exposure from their brake and clutch parts used by Coogan. Not surprisingly, given that Coogan died of peritoneal mesothelioma, there was a strong suspicion of crocidolite exposure from Manville’s transite product over the course of two years.  Apparently, GPC and NAPA failed to show that Coogan was exposed to crocidolite, even though the workplace was small and other workers had succumbed to asbestos disease.

While the court’s opinion on misconduct and the excessiveness of the verdict are of interest, the most interesting part of the story is what was not told. It is hard to imagine that defense counsel did not try hard to establish the workplace exposures to Manville’s transite. What is not clear is why they failed. Obviously, Manville took the threat seriously enough to pay a significant sum to settle the case before trial. Why could GPC and NAPA not prove at trial what Manville knew?  Were GPC and NAPA the victims of budgetary pressures or limited resources, or were they misled or stonewalled by plaintiffs’ counsel or co-workers?

Given the propensity for crocidolite, such as was used in Manville’s transite, to cause mesothelioma, and especially peritoneal mesothelioma, the trial defendants certainly had an adequate motivation to investigate and to document the crocidolite exposure. 

A recent, large, long-term cohort study in Denmark showed that vehicle mechanics, who use brake linings and clutch parts, as did Coogan, have no increased risk of mesothelioma. Compared with other workers, automobile mechanics actually had a lower than expected risk of mesothelioma or pleural cancer, with an age-adjusted hazard ratio of 0.74 (95% CI 0.55 to 0.99), based upon 47 cases.[2]

The Danish study is in accord with previous studies and meta-analyses,[3] and stands in stark contrast with the epidemiology of mesothelioma among men and women exposed to crocidolite. By way of example, in a cohort of British workers who assembled gas masks during World War II, close to 9% of all deaths were due to mesothelioma.[4] In a published cohort study of workers at Hollingsworth & Vose, a company that made the filters for the Kent cigarette, close to 18 percent of all deaths were due to mesothelioma.[5]

Dr. Irving Selikoff and his colleagues worked assiduously to obscure the vast potency difference between chrysotile and crocidolite, by arguing falsely that crocidolite was not used in the United States,[6] and by suppressing their own research into disease at the Johns-Manville plant that manufactured transite and other products. What is interesting about the Coogan case is what has not been reported. Crocidolite is clearly the most potent cause of mesothelioma.[7] Even if chrysotile were to have posed a risk to someone such as Mr. Coogan, crocidolite exposure, even for just two years, likely represented multiple orders of magnitude greater risk for peritoneal mesothelioma. Without evidence that Coogan was exposed to crocidolite from Manville’s transite, the manufacturers of brake and clutch parts were unable to seek an apportionment between exposures from their chrysotile and Manville’s crocidolite. Trying the so-called chrysotile defense is more difficult without being able to show substantial amphibole asbestos exposure. The bar, both plaintiffs’ and defendants’, could learn a great deal from what efforts were made to establish the crocidolite exposure, why they were unsuccessful, and how the efforts might go better in the future.


[1] Kirk Hartley kindly called my attention to this interesting case.

[2] Reimar Wernich Thomsen, Anders Hammerich Riis, Esben Meulengracht Flachs, David H Garabrant, Jens Peter Ellekilde Bonde, and Henrik Toft Sørensen, “Risk of asbestosis, mesothelioma, other lung disease or death among motor vehicle mechanics: a 45-year Danish cohort study,” Thorax (July 8, 2021), online ahead of print at <doi: 10.1136/thoraxjnl-2020-215041>.

[3] David H. Garabrant, Dominik D. Alexander, Paula E. Miller, Jon P. Fryzek, Paolo Boffetta, M. Jane Teta, Patrick A. Hessel, Valerie A. Craven, Michael A. Kelsh, and Michael Goodman, “Mesothelioma among Motor Vehicle Mechanics: An Updated Review and Meta-analysis,” 60 Ann. Occup. Hyg. 8 (2016); Michael Goodman, M. Jane Teta, Patrick A. Hessel, David H. Garabrant, Valerie A. Craven, Carolyn G. Scrafford, and Michael A. Kelsh, “Mesothelioma and lung cancer among motor vehicle mechanics: a meta-analysis,” 48 Ann. Occup. Hyg. 309 (2004).

[4] See J. Corbett McDonald, J. M. Harris, and Geoffry Berry, “Sixty years on: the price of assembling military gas masks in 1940,” 63 Occup. & Envt’l Med. 852 (2006). 

[5] James A. Talcott, Wendy A. Thurber, Arlene F. Kantor, Edward A. Gaensler, Jane F. Danahy, Karen H. Antman, and Frederick P. Li, “Asbestos-Associated Diseases in a Cohort of Cigarette-Filter Workers,” 321 New Engl. J. Med. 1220 (1989).

[6]Selikoff and the Mystery of the Disappearing Amphiboles” (Dec. 10, 2010); “Playing Hide the Substantial Factors in Asbestos Litigation” (Sept. 27, 2011).

[7] See, e.g., John T. Hodgson & Andrew A. Darnton, “The quantitative risks of mesothelioma and lung cancer in relation to asbestos exposure,” 44 Ann. Occup. Hygiene 565 (2000); Misty J Hein, Leslie T Stayner, Everett Lehman & John M Dement, “Follow-up study of chrysotile textile workers: cohort mortality and exposure-response,” 64 Occup. & Envt’l Med. 616 (2007); David H. Garabrant & Susan T. Pastula, “A comparison of asbestos fiber potency and elongate mineral particle (EMP) potency for mesothelioma in humans,” 361 Toxicology & Applied Pharmacol. 127 (2018) (“relative potency of chrysotile:amosite:crocidolite was 1:83:376”). See also D. Wayne Berman & Kenny S. Crump, “Update of Potency Factors for Asbestos-Related Lung Cancer and Mesothelioma,” 38(S1) Critical Reviews in Toxicology 1 (2008).

Judge Jack B. Weinstein – A Remembrance

June 17th, 2021

There is one less force of nature in the universe. Judge Jack Bertrand Weinstein died earlier this week, about two months shy of a century.[1] His passing has been noticed by the media, lawyers, and legal scholars.[2] In its obituary, the New York Times noted that Weinstein was known for his “bold jurisprudence and his outsize personality,” and that he was “revered, feared, and disparaged.” The obituary quoted Professor Peter H. Schuck, who observed that Weinstein was “something of a benevolent despot.”

As an advocate, I found Judge Weinstein to be anything but fearsome. His jurisprudence was often driven by intellectual humility rather than boldness or despotism. One area in which Judge Weinstein was diffident and restrained was in his exercise of gatekeeping of expert witness opinion. He, and his friend, the late Professor Margaret Berger, were opponents of giving trial judges discretion to exclude expert witness opinions on grounds of validity and reliability. Their antagonism to gatekeeping was, no doubt, partly due to their sympathies for injured plaintiffs and their realization that plaintiffs’ expert witnesses often come up with dodgy scientific opinions to advance plaintiffs’ claims. In part, however, Judge Weinstein’s antagonism was due to his skepticism about judicial competence and his own intellectual humility.

Although epistemically humble, Judge Weinstein was not incurious. His interest in scientific issues occasionally got him into trouble, as when he was beguiled by Dr. Irving Selikoff and colleagues, who misled him on aspects of the occupational medicine of asbestos exposure. In 1990, Judge Weinstein issued a curious mea culpa. Because of a trial in progress, Judge Weinstein, along with a state judge (Justice Helen Freedman), attended an ex parte private luncheon meeting with Dr. Selikoff. Here is how Judge Weinstein described the event:

“But what I did may have been even worse [than Judge Kelly’s conduct that led to his disqualification]. A state judge and I were attempting to settle large numbers of asbestos cases. We had a private meeting with Dr. Irwin [sic] J. Selikoff at his hospital office to discuss the nature of his research. He had never testified and would never testify. Nevertheless, I now think that it was a mistake not to have informed all counsel in advance and, perhaps, to have had a court reporter present and to have put that meeting on the record.”[3]

Judge Weinstein’s point about Selikoff’s having never testified was demonstrably false, but I impute no scienter for false statements to the judge. The misrepresentation almost certainly originated with Selikoff. Dr. Selikoff had testified frequently up to the point at which he and plaintiffs’ counsel realized that his shaky credentials and his pronouncements on “state of the art,” were hurtful to the plaintiffs’ cause. Even if Selikoff had not been an accomplished testifier, any disinterested observer should, by 1990, have known that Selikoff was himself not a disinterested actor in medical asbestos controversies.[4] The meeting with Selikoff apparently weighed on Judge Weinstein’s conscience. He repeated his mea culpa almost verbatim, along with the false statement about Selikoff’s never having testified, in a law review article in 1994, and then incorporated the misrepresentation into a full-length book.[5]

In his famous handling of the Agent Orange class action, Judge Weinstein manipulated the defendants into settling, and only then applied his considerable analytical ability in dissecting the inadequacies of the plaintiffs’ causation case. Rather than place the weight of his decision on Rule 702, Judge Weinstein dismembered the causation claim by finding that the bulk of what the plaintiffs’ expert witnesses relied upon under Rule 703 was unreasonable. He then found that what remained, if anything, could not reasonably support a verdict for plaintiffs, and he entered summary judgment for the defense in the opt-out cases.[6]

In 1993, the U.S. Supreme Court breathed fresh life into the trial court’s power and obligation to review expert witness opinions and to exclude unsound opinions.[7] Several months before the Supreme Court charted this new direction on expert witness testimony, the silicone breast implant litigation, fueled by iffy science and iffier scientists, erupted.[8] In October 1994, the Judicial Panel on Multi-District Litigation created MDL 926, which consolidated the federal breast implant cases before Judge Sam Pointer, in the Northern District of Alabama. Unlike most contemporary MDL judges, however, Judge Pointer did not believe that Rule 702 and 703 objections should be addressed by the MDL judge. Pointer believed strongly that the trial judges, in the individual, remanded cases, should rule on objections to the validity of proffered expert witness opinion testimony. As a result, so-called Daubert hearings began taking place in district courts around the country, in parallel with other centralized proceedings in MDL 926.

By the summer of 1996, Judge Robert E. Jones had a full-blown Rule 702 attack on the plaintiffs’ expert witnesses before him, in a case remanded from MDL 926. In the face of the plaintiffs’ MDL leadership committee’s determined opposition, Judge Jones appointed four independent scientists to serve as scientific advisors. With their help, in December 1996, Judge Jones issued one of the seminal rulings in the breast implant litigation, and excluded the plaintiffs’ expert witnesses.[9]

While Judge Jones was studying the record, and writing his opinion in the Hall case, Judge Weinstein, with a judge from the Southern District of New York, conducted a two-week Rule 702 hearing, in his Brooklyn courtroom. Judge Weinstein announced at the outset that he had studied the record from the Hall case, and that he would incorporate it into his record for the cases remanded to the Southern and Eastern Districts of New York.

I had one of the first witnesses, Dr. Donnard Dwyer, before Judge Weinstein during that chilly autumn of 1996. Dwyer was a very earnest immunologist, who appeared on direct examination to endorse the methodological findings of the plaintiffs’ expert witnesses, including a very dodgy study by Dr. Douglas Shanklin. On cross-examination, I elicited Dwyer’s view that the Shanklin study involved fraudulent methodology and that he, Dwyer, would never use such a method or allow a graduate student to use it. This examination, of course, was great fun, and as I dug deeper with relish, Judge Weinstein stopped me, and asked rhetorically to the plaintiffs’ counsel, whether any of them intended to rely upon the discredited Shanklin study. My main adversary Mike Williams did not miss a beat; he jumped to his feet to say no, and that he did not know why I was belaboring this study. But then Denise Dunleavy, of Weitz & Luxenberg, knowing that Shanklin was her listed expert witness in many cases, rose to say that her expert witnesses would rely upon the Shanklin study. Incredulous, Weinstein looked at me, rolled his eyes, paused dramatically, and then waved his hand at me to continue.

Later in my cross-examination, I was inquiring about another study that reported a statistic from a small sample. The authors reported a confidence interval that included negative values for a quantity that could not have had any result less than zero. The sample was obviously skewed, and the authors had probably used an inappropriate parametric test, but Dwyer was about to commit to the invalidity of the study when Judge Weinstein stopped me. He was well aware that the normal approximation had created the aberrant result, and that perhaps the authors’ only sin was in failing to use a non-parametric test. I have not had many trial judges interfere so knowledgeably.
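
The statistical point Judge Weinstein grasped is easy to reproduce. Here is a minimal sketch in Python, with invented numbers rather than the study at issue, showing how the normal-approximation interval for the mean of a small, skewed, non-negative sample can dip below zero, while a simple resampling alternative, a bootstrap percentile interval, stays within the achievable range.

```python
import numpy as np

# Invented numbers for illustration: a small, right-skewed sample of a measurement
# that cannot be negative (several small values and one large one).
sample = np.array([0.1, 0.1, 0.1, 0.2, 0.2, 0.3, 0.5, 6.0])

mean = sample.mean()                             # about 0.94
se = sample.std(ddof=1) / np.sqrt(len(sample))   # about 0.72

# Normal-approximation ("Wald") 95% interval: mean +/- 1.96 * SE
wald = (mean - 1.96 * se, mean + 1.96 * se)      # lower bound about -0.48, an impossible value

# Bootstrap percentile interval: resample with replacement, take the 2.5th/97.5th percentiles
rng = np.random.default_rng(0)
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(10_000)]
bootstrap = (np.percentile(boot_means, 2.5), np.percentile(boot_means, 97.5))

print(f"Wald 95% CI:      ({wald[0]:.2f}, {wald[1]:.2f})")            # dips below zero
print(f"bootstrap 95% CI: ({bootstrap[0]:.2f}, {bootstrap[1]:.2f})")  # stays in the achievable range
```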

In short order, on October 23, 1996, Judge Weinstein issued a short, published opinion, in which he ducked the pending Rule 702 motions, and he granted partial summary judgment on the claims of systemic disease.[10] Only the lawyers involved in the matters would have known that there was no pending motion for summary judgment!

Following up on his grant of partial summary judgment, Judge Weinstein appointed a group of scientists and a legal scholar to help him assemble a panel of Rule 706 expert witnesses for future remanded cases. Law Professor Margaret Berger, along with Drs. Joel Cohen and Alan Wolff, began meeting with the lawyers to identify areas of expertise needed by the court, and what the process of court-appointment of neutral expert witnesses would look like.

The plaintiffs’ counsel were apoplectic. They argued to Judge Weinstein that Judge Pointer, in the MDL, should be supervising the process of assembling court-appointed experts. Of course, the plaintiffs’ lawyers knew that Judge Pointer, unlike Judges Jones and Weinstein, believed that both sides’ expert witnesses were extreme, and mistakenly believed that the truth lay in between. Judge Pointer was an even bigger foe of gatekeeping, and he was generally blind to the invalid evidence put forward by plaintiffs. In response to the plaintiffs’ counsel’s objections, Judge Weinstein sardonically observed that if there were a real MDL judge, he should take it over.

Within a month or so, Judge Pointer did, in fact, take over the court-appointed expert witness process, and incorporated Judge Weinstein’s selection panel. The process did not go very smoothly in front of the MDL judge, who allowed the plaintiffs’ lawyers to slow down the process by throwing in irrelevant documents and deploying rhetorical tricks. The court-appointed expert witnesses did not take kindly to the shenanigans, or to the bogus evidence. The expert panel’s unanimous rejection of the plaintiffs’ claims of connective tissue disease causation was an expensive, but long overdue, judgment from which there was no appeal. Not many commentators, however, know that the panel would never have happened but for Judge Weinstein’s clever judicial politics.

In April 1997, while Judge Pointer was getting started with the neutral expert selection panel,[11] the parties met with Judge Weinstein one last time to argue the defense motions to exclude the plaintiffs’ expert witnesses. Invoking the pendency of the Rule 706 court-appointed expert witness process in the MDL, Judge Weinstein quickly made his view clear that he would not rule on the motions. His Honor also made clear that if we pressed for a ruling, he would deny our motions, even though he had also ruled that plaintiffs could not make out a submissible case on causation.

I recall still the frustration that we, the defense counsel, felt that April day, when Judge Weinstein tried to explain why he would grant partial summary judgment but not rule on our motions contra plaintiffs’ expert witnesses. It would be many years before he let his judicial assessment see the light of day. Two decades and then some later, in a law review article, Judge Weinstein made clear that “[t]he breast implant litigation was largely based on a litigation fraud. … Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”[12] Indeed.

Judge Weinstein was incredibly smart and diligent, but he was human with human biases and human fallibilities. If he was a despot, he was at least kind and benevolent. In my experience, he was always polite to counsel and accommodating. Appearing before Judge Weinstein was a pleasure and an education.


[1] Laura Mansnerus, “Jack B. Weinstein, U.S. Judge With an Activist Streak, Is Dead at 99,” N.Y. Times (June 15, 2021).

[2] Christopher J. Robinette, “Judge Jack Weinstein 1921-2021,” TortsProf Blog (June 15, 2021).

[3] Jack B. Weinstein, “Learning, Speaking, and Acting: What Are the Limits for Judges?” 77 Judicature 322, 326 (May-June 1994).

[4] “Selikoff Timeline & Asbestos Litigation History” (Dec. 20, 2018).

[5] See Jack B. Weinstein, “Limits on Judges’ Learning, Speaking and Acting – Part I- Tentative First Thoughts: How May Judges Learn?” 36 Ariz. L. Rev. 539, 560 (1994) (“He [Selikoff] had never testified and would never testify.”); Jack B. Weinstein, Individual Justice in Mass Tort Litigation: The Effect of Class Actions, Consolidations, and other Multi-Party Devices 117 (1995) (“A court should not coerce independent eminent scientists, such as the late Dr. Irving Selikoff, to testify if, like he, they prefer to publish their results only in scientific journals.”)

[6] In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 785 (E.D.N.Y. 1984), aff’d 818 F.2d 145, 150-51 (2d Cir. 1987)(approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988);  In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

[7] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[8] Reuters, “Record $25 Million Awarded In Silicone-Gel Implants Case,” N.Y. Times (Dec. 24, 1992).

[9] See Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Ore. 1996).

[10] In re Breast Implant Cases, 942 F. Supp. 958 (E.& S.D.N.Y. 1996).

[11] MDL 926 Order 31 (May 31, 1996) (order to show cause why a national Science Panel should not be appointed under Federal Rule of Evidence 706); MDL 926 Order No. 31C (Aug. 23, 1996) (appointing Drs. Barbara S. Hulka, Peter Tugwell, and Betty A. Diamond); Order No. 31D (Sept. 17, 1996) (appointing Dr. Nancy I. Kerkvliet).

[12] Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (emphasis added).

NJ Appellate Division Calls for Do Over in Baby Powder Dust Up

May 22nd, 2021

There was quite a bit of popular media reporting of the $117 million (compensatory and punitive damages) awarded by a Middlesex County, New Jersey, jury to a man who claimed his mesothelioma had been caused by his use of baby powder. There was much less media coverage last month of the New Jersey Appellate Division’s reversal of the underlying verdicts, on grounds that the trial judge, Ana C. Viscomi, had abused her discretion on several key issues.[1] In a carefully reasoned decision, the appellate court reversed the trial court’s judgment and remanded the Lanzo case for a new trial.[2]

Johnson & Johnson Consumer Inc. (JJCI) and Imerys Talc America, Inc. (Imerys) appealed from the judgment entered by Judge Viscomi, on April 23, 2018. The appellants lodged several points of error, but the most erroneous of the erroneous trial court decisions seemed to involve a laissez-faire attitude to weak and unreliable proffered expert witness opinions.

Judge Viscomi conducted a Rule 104 hearing on the admissibility of testing by plaintiffs’ expert witness, William Longo, of crowd-sourced samples of baby powder that lacked chain-of-custody or provenance evidence. Judge Viscomi denied the challenge to Longo’s test results.

The defense had also filed Rule 702 challenges to plaintiffs’ expert witnesses, James S. Webber, Ph.D., and Jacqueline Moline, M.D., and their opinion that non-asbestiform amphibole cleavage fragments can cause mesothelioma. Judge Viscomi denied these pre-trial motions, and refused to conduct a pre-trial Rule 104 hearing on the proffered opinions. Her Honor’s denial of the Rule 702 motions was accompanied by little to no reasoning, which proved determinative of her abuse of discretion, and of her deviation from the standard of judicial care.

At trial, the defense re-asserted its objections to Moline’s opinion on cleavage fragments, but Judge Viscomi permitted Moline to testify about “non-asbestiform cleavage fragments from a medical point of view.” In other words, the trial judge gave Dr. Moline carte blanche to address causation.

Understandably, on appeal, JJCI and Imerys assigned various errors. With respect to the scientific evidence, the defendants alleged that plaintiffs’ expert witnesses (Webber and Moline) failed to:

“(1) explain what causes the human body to respond in the same way to the different mineral forms;

(2) acknowledge the contrary opinions of scientists and government agencies;

(3) provide evidentiary support for their opinion that non-asbestiform minerals can cause mesothelioma; and

(4) produce evidence that their theory that non-asbestiform minerals are harmful had been subject to peer-review and publication or was generally accepted in the scientific community.”

The Federal Fiber

The genesis of the scientific dispute lay in the evolution of the definition of asbestos itself. Historically, asbestos was an industrial term for any of six different minerals: the serpentine mineral chrysotile, and the amphibole minerals amosite, crocidolite, tremolite, anthophyllite, and actinolite. Chrysotile is, by mineralogical definition, a serpentine mineral in fibrous form. If not fibrous, the mineral is typically called antigorite.

For the five amphiboles, the definitional morass deepens. Amosite is, again, an industrial term, an acronym for “asbestos mines of South Africa,” although South Africa once mined chrysotile and crocidolite as well.  Amosite is an iron-rich amphibole in the cummingtonite-grunerite family, with a fibrous habit.  Cummingtonite-grunerite can be either fibrous or non-fibrous in mineral habit.

Crocidolite is an amphibole that by definition is fibrous. The same mineral, if not fibrous, is known as riebeckite. Crocidolite is, by far, the most potent cause of mesothelioma.

The remaining amphiboles, tremolite, anthophyllite, and actinolite, have the same mineralogical designation, regardless whether they occur as fibers or in non-fibrous forms.

The designation of a mineral as “asbestiform” is also rather vague, apparently conveying an industrial functionality from its fibrosity. Medically, the term asbestiform became associated with minerals that have sufficiently high aspect ratio, and small cross-sectional diameter, to be considered potentially capable of inducing pulmonary fibrosis or mesothelioma.

In 1992, the federal OSHA regulations removed non-asbestiform actinolite, tremolite, and anthophyllite from the safety standard, based upon substantial evidence that the non-asbestiform occurrences of these minerals did not present the health risks associated with asbestiform amphiboles. Because nothing is ever simple, the National Institute for Occupational Safety and Health (NIOSH) persisted in its recommendation that OSHA continue to regulate non-asbestiform amphiboles under asbestos regulatory standards. This NIOSH pronouncement, however, was extremely controversial among the ranks of NIOSH scientists. In any event, NIOSH recommendations are just that, suggestions and not binding regulations.

The mineralogical, medical, and regulatory definitions of asbestos and asbestiform minerals vary greatly, and require a great deal of discipline and precision in discussing what causes mesothelioma. The health effects of non-asbestiform minerals have been studied, however, and those minerals have generally been shown not to cause mesothelioma.[3]

Judge Viscomi Abused Her Discretion

The Appellate Division panel applied Accutane’s abuse of discretion standard, which permits judges to screw up to some extent, but requires reversal for their mistakes when “so wide off the mark that a manifest denial of justice resulted.” The appellate court had little difficulty in saying that the trial court was “so wide off the mark” in addressing expert witness opinion admissibility.

James Webber

In the Lanzo case, plaintiffs’ expert witnesses, James Webber and Jacqueline Moline, both opined that non-asbestiform minerals can cause mesothelioma. The gravamen of the defense’s appeal was that these expert witnesses had failed to support their opinions and that the trial judge had misapplied the established judicial gatekeeping procedures required by the New Jersey Supreme Court, in In re Accutane Litigation, 234 N.J. 340 (2018).

The Appellate Division then set out to do what Judge Viscomi had failed to do – look at the proffered opinions and assess whether they followed reasonably and reliably from the expert witnesses’ stated grounds. Although Webber opined that cleavage fragments could cause mesothelioma, he had never studied the issue himself; nor was he aware of any studies showing that non-asbestiform cleavage fragments can cause mesothelioma. Webber had never expressed his opinion in scientific publications, and he failed to cite any support for his opinion in his report.

At trial, Judge Viscomi permitted Webber to go beyond his anemic report and to cite reliance upon four sources for his opinion. The Appellate Division carefully reviewed each of the four sources, and found that they either did not support Webber’s opinions or were themselves equally without evidentiary support. “Webber did not identify any data underlying his opinion. Further, he did not demonstrate that any of the authorities he relied on would be reasonably relied on by other experts in his field to reach an opinion regarding causation.”

Webber cited an article by Victor Roggli, who opined that he had found asbestiform and non-asbestiform fibers in the lungs of mesothelioma patients, but who went on to conclude that fibers were the likely cause. Webber also cited an article by NIOSH scientist Martin Harper, who stated the opinion, without evidentiary support, that NIOSH did not believe, in 2008, that there was “sufficient evidence for a different toxicity for non-asbestiform amphibole particles that meet the morphological criteria for a fiber.”[4]

Although Harper and company appeared to be speaking on behalf of NIOSH, in 2011, the agency clarified its position to state that its previous inclusion of non-asbestiform minerals in the definition of respirable asbestos fibers had been based upon “inconclusive science”:

“Epidemiological evidence clearly indicates a causal relationship between exposure to fibers from the asbestos minerals and various adverse health outcomes, including asbestosis, lung cancer, and mesothelioma. However, NIOSH has viewed as inconclusive the results from epidemiological studies of workers exposed to EMPs[9] [elongate mineral particles] from the non[-]asbestiform analogs of the asbestos minerals.”[5]

The Appellate Division was equally unimpressed with Webber’s citation of a geologist who stated an opinion in 2009, that “using the term ‘asbestiform’ to differentiate a hazardous from a non-hazardous substance has no foundational basis in the medical sciences.” Not only was the geologist, Gregory P. Meeker, lacking in medical expertise, but his article was non-peer-reviewed (for what little good that would have done) and his opinion did not cite any foundational evidence or data in an appropriate scientific study.

Webber cited to an Environmental Protection Agency (EPA) document,[6] which stated that

“[f]or the purposes of public health assessment and protection, [the] EPA makes no distinction between fibers and cleavage fragments of comparable chemical composition, size, and shape.”

The Appellate Division observed that the EPA did not provide any scientific support for its assessment. Furthermore, the language cited by Webber clearly suggests that the EPA was issuing a precautionary view, not a scientific one.

Considering the Daubert factors, and New Jersey precedent, the Appellate Division readily found that Webber’s opinion was inadmissible. His opinion about non-asbestiform minerals was unsupported by data and analysis in published, peer-reviewed studies; the opinion was clearly not generally accepted; and the opinion had never been published by Webber himself. Plaintiffs had failed to show that Webber’s “methodology involv[ed] data and information of the type reasonably relied on by experts in the scientific field.”[7] The trial court’s observation that the issue of cleavage fragments was “contested” could not substitute for the required assessment of methodology and of the underlying data relied upon by Webber. Judge Viscomi abused her discretion in admitting Webber’s testimony.

Jacqueline Moline

Moline’s expert testimony that non-asbestiform minerals can cause mesothelioma suffered from many of the same defects as Webber’s opinion on this topic. The trial court once again did not conduct a pre-trial or in-trial hearing to assess Moline’s opinion, and it did not perform the rigorous assessment required by Rule 702 and the Accutane case to determine whether Moline’s opinions met the applicable (so-called Daubert) standards. The Appellate Division emphatically held that the trial court erred in permitting Moline to testify, over objection.

Moline vacuously opined that non-asbestiform amphiboles cause mesothelioma, but failed to identify any specific studies that actually supported this proposition. Like Webber, she pointed to an EPA document, from 2006, which also failed to support her asseverations. Moline also claimed support from the CDC, the American Thoracic Society, and other EPA pronouncements, but never cited anything specifically. In her pre-trial report, Moline claimed that New York state talc miners experienced mesotheliomas from exposure to the mining and milling of talc that contained about “50% non-asbestiform anthophyllite and tremolite.” Moline’s report, however, was devoid of any reference for this remarkable claim.

Moline was embarrassed on cross-examination when the defense confronted her with prior testimony she gave in another case, in which she testified that she lacked “information … one way or the other” to say whether non-asbestiform minerals were carcinogenic. Moline shrugged off the impeachment with a claim that she had since come to learn of mesothelioma occurrences among patients with non-asbestiform mineral exposures. Nonetheless, Moline still could not identify the studies she relied upon to answer the question whether “asbestos-related diseases can be caused by the non-asbestiform varieties of the six regulated forms of asbestos.”

Reversal and Remand

Having concluded that the trial court erred and abused its discretion in denying the defense motions contra Webber and Moline, and having found that the error was harmful to the defense’s right to a fair trial, the appellate court reversed and remanded for new (separate) trials against JJCI and Imerys. There will be, no doubt, attempts to persuade the New Jersey Supreme Court to consider the issues further. The state Supreme Court’s jurisdiction is discretionary, and assuming that the high Court rejects petitions for certification, the case will return to the Middlesex County trial court. The intended nature of further trial court proceedings is, at best, a muddle. The Appellate Division has already done what Judge Viscomi failed to do. The three-judge panel carefully reviewed the plaintiffs’ proffered opinion testimony on causation and found it inadmissible. It would thus seem that the order of business would be for the defense to file motions for summary judgment for lack of admissible causation opinions, and for the trial court to enter judgment for the defense.

————————————————————————————————————

[1] To be fair, there was some coverage in local, and in financial and legal media. See, e.g., Jef Feeley, “J&J Gets Banker’s $117 Million Talc Verdict Tossed on Appeal,” (April 28, 2021); Mike Deak, “Appeals court overturns $117 million Johnson & Johnson baby powder verdict,” My Central Jersey (April 28, 2021); “J&J, Imerys Beat $117M Talc Verdicts Over Flawed Testimony,” Law360 (April 28, 2021); Irvin Jackson, “$117M J&J Talc Cancer Verdict Overturned By New Jersey Appeals Court,” About Lawsuits (April 30, 2021).

[2] See Lanzo v. Cyprus Amax Minerals Co., Docket Nos. A-5711-17, A-5717-17, New Jersey Superior Court, App. Div. (April 28, 2021).

[3] See “Ingham v. Johnson & Johnson – A Case of Meretricious Mensuration?” (July 3, 2020); “Tremolitic Tergiversation or Ex-PIRG-Gation?” (Aug. 11, 2018).

[4] “Differentiating Non-Asbestiform Amphibole and Amphibole Asbestos by Size Characteristics,” 5 J. Occup. & Envt’l Hygiene 761 (2008).

[5] NIOSH, “Asbestos Fibers and Other Elongate Mineral Particles: State of the Science and Roadmap for Research,” Current Intelligence Bulletin 62 (April 2011).

[6] The document in question was issued in 2006, by EPA Region 9, in response to a report prepared by R.J. Lee Group, Inc. The regional office of the EPA criticized the R.J. Lee report for applying “a [g]eologic [d]efinition rather than a [p]ublic [h]ealth [d]efinition to [c]haracterize [m]icroscopic [s]tructures,” noting that the EPA made “no distinction between fibers and cleavage fragments of comparable chemical composition, size, and shape.” This document thus did not address, with credible evidence, the key issue in the Lanzo case.

[7] Lanzo (quoting Rubanick, 125 N.J. at 449).

Cancel Causation

March 9th, 2021

The Subversion of Causation into Normative Feelings

The late Professor Margaret Berger argued for the abandonment of general causation, or cause-in-fact, as an element of tort claims under the law.[1] Her antipathy to the requirement of showing causation ultimately led her to deprecate efforts to inject due scientific care into the gatekeeping of causation opinions. After a long, distinguished career as a law professor, Berger died in November 2010. Her animus against causation and Rule 702, however, was so strong that her chapter in the third edition of the Reference Manual on Scientific Evidence, which came out almost one year after her death, embraced the First Circuit’s notorious anti-Daubert decision in Milward, which also post-dated her passing.[2]

Despite this posthumous writing and publication by Professor Berger, there have been no further instances of Zombie scholarship or ghost authorship.  Nonetheless, the assault on causation has been picked up by Professor Alexandra D. Lahav, of the University of Connecticut School of Law, in a recent essay posted online.[3] Lahav’s essay is an extension of her work, “The Knowledge Remedy,” published last year.[4]

This second essay, entitled “Chancy Causation in Tort Law,” is the plaintiffs’ brief against counterfactual causation, which Lahav acknowledges is the dominant test for factual causation.[5] Lahav begins with a reasonable, and reasonably understandable, distinction between deterministic (necessary and sufficient) and probabilistic (or “chancy,” in her parlance) causation.

The putative victim of a toxic exposure (such as glyphosate and Non-Hodgkin’s lymphoma) cannot show that his exposure was a necessary and sufficient determinant of his developing NHL. Not everyone similarly exposed develops NHL; and not everyone with NHL has been exposed to glyphosate. In Lahav’s terminology, specific causation in such a case is “chancy.” Lahav asserts, but never proves, that the putative victim “could never prove that he would not have developed cancer if he had not been exposed to that herbicide.”[6]

Lahav’s hypothetical presents a causal claim that involves both general and specific causation, easily distinguishable from a claim that a death was caused by being run over by a high-speed train. Despite this difference, Lahav never marshals any evidence to show why the putative glyphosate victim cannot show the probability that his case is causally related to the exposure by adverting to the magnitude of the relative risk created by that prior exposure.

Repeatedly, Lahav asserts that when causation is chancy – probabilistic – it can never be shown by counterfactual causal reasoning, which she claims “assumes deterministic causation.” And she further asserts that because probabilistic causation cannot fit the counterfactual model, it can never “meet the law’s demand for a binary determination of cause.”[7]

Contrary to these ipse dixits, probabilistic causation can, at both the general and the specific, or individual, levels be described in terms of counterfactuals. The modification requires us, of course, to address the baseline situation as a rate or frequency of events, and the post-exposure world as one with a modified rate or frequency. The exposure is the cause of the change in event rates. Modern physics addresses whether we must be content with probability statements, rather than precise deterministic “billiard ball” physics, which is so useful in a game of snooker, but less so in describing quarks. In the first half of the 20th century, the biological sciences learned with some difficulty that they must embrace probabilistic models, in genetic science as well as in epidemiology. Many biological causation models are completely stated in terms of probabilities that are modified by specified conditions.

When Lahav gets to providing an example of where chancy causation fails in reasoning about individual causation, she gives a meaningless hypothetical of a woman, Mary, a smoker who develops lung cancer. To remove any resemblance to real-world cases, Lahav postulates that Mary had a 20% increased risk of lung cancer from smoking (a relative risk of 1.2). Thus, Lahav suggests that:

“[i]f Mary is a smoker and develops lung cancer, even after she has developed lung cancer it would still be the case that the cause of her cancer could only be described as a likelihood of 20 percent greater than what it would have been otherwise. Her doctor would not be able to say to her ‘Mary, if you had not smoked, you would not have developed this cancer’ because she might have developed it in any event.”

A more pertinent, less counterfactual hypothetical is that Mary had a 2,000% increase in risk from her tobacco smoking. This corresponds to a relative risk in the range of 20, seen in many, if not most, epidemiologic studies of smoking and lung cancer. For such a risk, the individual probability of causation would be well over 0.9.
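For readers who want the arithmetic spelled out, here is a minimal sketch, in Python with purely illustrative numbers, of the standard attributable-fraction calculation that converts a relative risk into a probability of causation for an exposed individual. As the next paragraph notes, the calculation assumes that the excess risk is distributed stochastically across the exposed group.

# A minimal sketch of the attributable-fraction arithmetic that converts a
# relative risk (RR) into a probability of causation (PC) for an exposed
# individual, assuming the excess risk is spread evenly (stochastically)
# across the exposed group. All numbers are illustrative.

def probability_of_causation(relative_risk: float) -> float:
    """PC = (RR - 1) / RR, the share of exposed cases attributable to exposure."""
    if relative_risk <= 1.0:
        return 0.0  # no excess risk, nothing to attribute
    return (relative_risk - 1.0) / relative_risk

print(probability_of_causation(1.2))   # Lahav's RR of 1.2 yields about 0.17
print(probability_of_causation(20.0))  # an RR of 20 yields 0.95, well over 0.9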

To be sure, there are critics of using the probability of causation because it assumes that the risk is distributed stochastically, which may not be correct. Claimants are, of course, free to try to show that more of the risk fell on them for some reason, but that showing requires evidence!

Lahav attempts to answer this point, but her argument runs off its rails.  She notes that:

“[i]f there is an 80% chance that a given smoker’s cancer is caused by smoking, and Mary smoked, some might like to say that she has met her burden of proof.

This approach confuses the strength of the evidence with its content. Assume that it is more likely than not, based on recognized scientific methodology, that for 80% of smokers who contract lung cancer their cancer is attributable to smoking. That fact does not answer the question of whether we ought to infer that Mary’s cancer was caused by smoking. I use the word ought advisedly here. Suppose Mary and the cigarette company stipulate that 80% of people like Mary will contract lung cancer, the burden of proof has been met. The strength of the evidence is established. The next question regards the legal permissibility of an inference that bridges the gap between the run of cases and Mary. The burden of proof cannot dictate the answer. It is a normative question of whether to impose liability on the cigarette company for Mary’s harm.”[8]

Lahav is correct that an 80% probability of causation might be based upon very flimsy evidence, and so that probability alone cannot establish that the plaintiff has a “submissible” case. If the 80% probability of causation is stipulated, and not subject to challenge, then Lahav’s claim is remarkable and contrary to most of the scholarship that has followed the infamous Blue Bus hypothetical. Indeed, she is making the very argument that tobacco companies made in opposition to the use of epidemiologic evidence in tobacco cases, in the 1950s and 1960s.

Lahav advances a perverse skepticism about whether any inferences about individuals can be drawn from information about rates or frequencies in groups of similar individuals. Yes, there may always be some debate about what is “similar,” but successive studies may well draw the net tighter around the appropriate class. Lahav’s skepticism, and her outright denialism, are common among some in the legal academy, but they ignore that group-to-individual inferences are drawn in epidemiology in multiple contexts. Regressions for disease prediction are based upon individual data within groups, and the regression equations are then applied to future individuals to help predict those individuals’ probability of future disease (such as heart attack or breast cancer), or their probability of cancer-free survival after a specific therapy. Group-to-individual inferences are, of course, also the basis for prescribing decisions in clinical medicine. These are not normative inferences; they are based upon evidence-based causal thinking.
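A toy example may make the group-to-individual point concrete. The sketch below, in Python, applies a logistic regression equation of the kind estimated from group data to a single individual’s covariates, producing that individual’s predicted probability of disease. The risk factors and coefficients are hypothetical, and not any validated clinical score.

# A toy illustration of applying a regression equation, estimated from group
# data, to one individual to predict that individual's probability of disease.
# The risk factors and coefficients are hypothetical, for illustration only.
import math

def predicted_risk(age: float, smoker: bool, systolic_bp: float) -> float:
    """Hypothetical logistic risk equation; returns a probability between 0 and 1."""
    intercept = -9.0
    b_age, b_smoker, b_bp = 0.06, 0.7, 0.02
    linear_score = intercept + b_age * age + b_smoker * int(smoker) + b_bp * systolic_bp
    return 1.0 / (1.0 + math.exp(-linear_score))  # logistic link

# Predicted risk for a hypothetical 60-year-old smoker with systolic blood pressure of 140
print(round(predicted_risk(60, True, 140), 3))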

Lahav suggests that the metaphor of a “link” between exposure and outcome implies “something is determined and knowable, which is not possible in chancy causation cases.”[9] Not only is the link metaphor used all the time by sloppy journalists and some scientists, but when they use it, they mostly use it in the context of what Lahav would characterize as “chancy causation.” Even when speaking more carefully, and eschewing the link metaphor, scientists speak of probabilistic causation as something that is real, based upon evidence and valid inferences, not normative judgments or emotive reactions.

The probabilistic nature of the probability of causation does not affect its epistemic status.

The law does not assume that binary deterministic causality, as Lahav describes, is required to apply “but for” or counterfactual analysis. Juries are instructed to determine whether the party with the burden of proof has prevailed on each element of the claim, by a preponderance of the evidence. This civil jury instruction is almost always explained in terms of a posterior probability greater than 0.5, whether the claimed tort is a car crash or a case of Non-Hodgkin’s lymphoma.

Elsewhere, Lahav struggles with the concept of probability. Her essay suggests that

“[p]robability follows certain rules, or tendencies, but these regular laws do not abolish chance. There is a chance that the exposure caused his cancer, and a chance that it did not.”[10]

The use of chance here, in contradistinction to probability, is so idiosyncratic, and unexplained, that it is impossible to know what is meant.

Manufactured Doubt

Lahav’s essay twice touches upon a strawperson argument that stretches to claim that “manufacturing doubt” does not undermine her arguments about the nature of chancy causation. To Lahav, the likes of David Michaels have “demonstrated” that manufactured uncertainty is a genuine problem, but not one that affects her main claims. Nevertheless, Lahav remarkably sees no problem with manufactured certainty in the advocacy science of many authors.[11]

Lahav swallows the Michaels’ line, lure and all, and goes so far as to describe Rule 702 challenges to causal claims as having the “negative effect” of producing “incentives to sow doubt about epidemiologic studies using methodological battles, a strategy pioneered by the tobacco companies … .”[12] There is no corresponding concern about the negative effect of producing incentives to overstate the findings, or the validity of inferences, in order to get to a verdict for claimants.

Post-Modern Causation

What we have then is the ultimate post-modern program, which asserts that cause is “irreducibly chancy,” and thus indeterminate, and rightfully in the realm of “normative decisions.”[13] Lahav maintains there is an extreme plasticity to the very concept of causation:

“Causation in tort law can be whatever judges want it to be… .”[14]

I for one sincerely doubt it. And if judges make up some Lahav-inspired concept of normative causation, the scientific community would rightfully scoff.

Taking Lahav’s earlier paper, “The Knowledge Remedy,” along with this paper, the reader will see that Lahav is arguing for a rather extreme, radical precautionary principle approach to causation. There is a germ of truth that gatekeeping is affected by the moral quality of the defendant or its product. In the early days of the silicone gel breast implant litigation, some judges were influenced by suggestions that breast implants were frivolous products, made and sold to cater to male fantasies. Later, upon more mature reflection, judges recognized that roughly one third of breast implant surgeries were post-mastectomy, and that silicone was an essential biomaterial.  The recognition brought a sea change in critical thinking about the evidence proffered by claimants, and ultimately brought a recognition that the claimants were relying upon bogus and fraudulent evidence.[15]

—————————————————————————————–

[1]  Margaret A. Berger, “Eliminating General Causation: Notes towards a New Theory of Justice and Toxic Torts,” 97 Colum. L. Rev. 2117 (1997).

[2] Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom. U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[3]  Alexandra D. Lahav, “Chancy Causation in Tort,” (May 15, 2020) [cited as Chancy], available at https://ssrn.com/abstract=3633923 or http://dx.doi.org/10.2139/ssrn.3633923.

[4]  Alexandra D. Lahav, “The Knowledge Remedy,” 98 Texas L. Rev. 1361 (2020). See “The Knowledge Remedy Proposal” (Nov. 14, 2020).

[5]  Chancy at 2 (citing American Law Institute, Restatement (Third) of Torts: Physical & Emotional Harm § 26 & com. a (2010) (describing legal history of causal tests)).

[6]  Id. at 2-3.

[7]  Id.

[8]  Id. at 10.

[9]  Id. at 12.

[10]  Id. at 2.

[11]  Id. at 8 (citing David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020), among others).

[12]  Id. at 18.

[13]  Id. at 6.

[14]  Id. at 3.

[15]  Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Carl Cranor’s Inference to the Best Explanation

February 12th, 2021

Carl Cranor pays me the dubious honor of quoting my assessment of weight of the evidence (WOE) pseudo-methodology as used by lawsuit industry expert witnesses, in one of his recent publications:

“Take all the evidence, throw it into the hopper, close your eyes, open your heart, and guess the weight. You could be a lucky winner! The weight of the evidence suggests that the weight-of-the-evidence (WOE) method is little more than subjective opinion, but why care if it helps you to get to a verdict!”[1]

Cranor’s intent was to deride my comments, but they hold up fairly well. I have always maintained that if I were wrong, I would eat my words, but that they would be quite digestible. Nothing to eat here, though.

In his essay in the Public Affairs Quarterly, Cranor attempts to explain and support his advocacy of WOE in the notorious case, Milward, in which Cranor, along with his friend and business partner, Martyn Smith, served as partisan, paid expert witnesses.[2] Not disclosed in this article is that after the trial court excluded the opinions of Cranor and Smith under Federal Rule of Evidence 702, and plaintiff appealed, the lawsuit industry, acting through The Council for Education and Research on Toxics (CERT), filed an amicus brief to persuade the Court of Appeals to reverse the exclusion. The plaintiffs’ counsel, Cranor and Smith, and CERT failed to disclose that CERT was founded by the two witnesses, Cranor and Smith, whose exclusion was at issue.[3] Many of the lawsuit industry’s regular testifiers were signatories, and none raised any ethical qualms about the obvious conflict of interest, or the conspiracy to pervert the course of justice.[4]

Cranor equates WOE to “inference to the best explanation,” which reductively strips science of its predictive and reproducible nature. Readers may get the sense he is operating in the realm of narrative, not science, and they would be correct. Cranor goes on to conflate WOE methodology with “diagnostic induction,” and “differential diagnosis.”[5] The latter term is well understood in both medicine and in law to involve the assessment of an individual patient’s condition, based upon what is already known upon good and sufficient bases. The term has no accepted or justifiable meaning for assessing general causation. Cranor’s approach would pretermit the determination of general causation by making the disputed cause a differential.

Cranor offers several considerations in support of his WOE-ful methodology. First, he notes that the arguments for causal claims are not deductive. True, but that observation does nothing to support his advocacy for WOE and inference to the best explanation.

Second, Cranor describes a search for relevant evidence once the scientific issue (hypothesis?) is formulated. Again, there is nothing unique about this described step, but Cranor intentionally leaves out considerations of validity, as in extrapolations between high and low dose, or between species. Similarly, he leaves out considerations of validity of study designs (such as whether any weight would be given to case studies, cross-sectional, or ecological studies) or of validity of individual studies.

Cranor’s third step is the formulation of a “sufficiently complete range of reasonable and plausible explanations to account for the evidence.” Again, nothing unique here about WOE, except that Cranor’s WOE abridges the process by ignoring the very real possibility that we do not have the correct plausible explanation available.

Fourth, according to Cranor, scientists rank, explicitly or implicitly, the putative “explanations” by plausibility and persuasiveness, based upon the evidence at hand, in view of general toxicological and background knowledge.[6] Note the absence of consideration of the predictive abilities of the competing explanations, or any felt need to assess the quality of evidence or the validity of study design.

For Cranor, the fifth consideration is to use the initial plausibility assessments, made on incomplete understanding of the phenomena, and on incomplete evidence, to direct “additionally relevant/available evidence to separate founded explanations from less well-founded ones.” Obviously missing from Cranor’s scheme is the idea of trying to challenge or test hypotheses severely to see whether they withstand such challenges.

Sixth, Cranor suggests that “all scientifically relevant information” should be considered in moving to the “best supported” explanation. Because “best” is determined based upon what is available, regardless of the quality of the data, or the validity of the inference, Cranor rigs his WOE-ful methodology in favor of eliminating “indeterminate” as a possible conclusion.

In a seventh step, Cranor points to the need to “integrate, synthesize, and assess or evaluate,” all lines of “available relevant evidence.” There is nothing truly remarkable about this step, which clearly requires judgment. Cranor notes that there can be convergence of disparate lines of evidence, or divergence, and that some selection of “lines” of evidence may be endorsed as supporting the “more persuasive conclusion” of causality.[7] In other words, a grand gemish.

Cranor’s WOE-ful approach leaves out any consideration of random error, or systematic bias, or data quality, or study design. The words “bias” and “confounding” do not appear in Cranor’s essay, and he erroneously discusses “error” and “error rates,” only to disparage them as the machinations of defense lawyers in litigation. Similarly, Cranor omits any serious mention of reproducibility, or of the need to formulate predictions that have the ability to falsify tentative conclusions.

Quite stridently, Cranor insists that there is no room for any actual weighting of study types or designs. In apparent earnest, Cranor writes that:

“this conclusion is in accordance with a National Cancer Institute (NCI) recommendation that ‘there should be no hierarchy [among different types of scientific methods to determine cancer causation]. Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.’”[8]

There is much whining and special pleading about the difficulty, expense, and lack of statistical power of epidemiologic studies, even though the last point is a curious backdoor endorsement of statistical significance. The first two points ignore the availability of large administrative databases from which large cohorts can be identified and studied, with tremendous statistical power. Case-control studies can in some instances be assembled quickly as studies nested in existing cohorts.
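To see why large administrative cohorts carry so much statistical power, consider the rough, back-of-the-envelope calculation sketched below in Python, using the familiar normal approximation for comparing two proportions. The cohort size, baseline risk, and target relative risk are all hypothetical.

# A back-of-the-envelope power calculation for a cohort study, using the normal
# approximation for a two-sided test comparing two proportions. The cohort size,
# baseline risk, and target relative risk are hypothetical.
from scipy.stats import norm

def approximate_power(n_per_group: int, baseline_risk: float, rr: float, alpha: float = 0.05) -> float:
    p0, p1 = baseline_risk, baseline_risk * rr
    pbar = (p0 + p1) / 2.0
    se_null = (2.0 * pbar * (1.0 - pbar) / n_per_group) ** 0.5
    se_alt = (p0 * (1.0 - p0) / n_per_group + p1 * (1.0 - p1) / n_per_group) ** 0.5
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return float(norm.cdf((abs(p1 - p0) - z_crit * se_null) / se_alt))

# 200,000 exposed and 200,000 unexposed, baseline risk of 0.1%, relative risk of 1.5
print(round(approximate_power(200_000, 0.001, 1.5), 3))  # roughly 0.99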

As I have noted elsewhere,[9] Cranor’s attempt to level all types of evidence starkly misrepresents the cited “NCI” source, which is not at all an NCI recommendation, but rather a “meeting report” of a workshop of non-epidemiologists.[10] The cited source is not an official pronouncement of the NCI, the authors were not NCI scientists, and the NCI did not sponsor the meeting. The meeting report appeared in the journal Cancer Research as a paid advertisement, not in the NCI’s Journal of the National Cancer Institute as a scholarly article:

“The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.”[11]

Tellingly, Cranor’s deception was relied upon and cited by the First Circuit, in its Milward decision.[12] The scholarly fraud hit its mark. As a result of Cranor’s own dubious actions, the Milward decision has both ethical and scholarly black clouds hovering over it. The First Circuit should withdraw the decision as improvidently decided.

The article ends with Cranor’s triumphant view of Milward,[13] which he published previously, along with the plaintiffs’ lawyer who hired him.[14] What Cranor leaves out is that the First Circuit’s holding is now suspect because of the court’s uncritical acceptance of Cranor’s own misrepresentations and CERT’s omissions of conflict-of-interest disclosures, as well as the subsequent procedural history of the case. After the Circuit reversed the Rule 702 exclusions, and the Supreme Court denied the petition for a writ of certiorari, the case returned to the federal district court, where the defense lodged a Rule 702 challenge to expert witness opinion that attributed plaintiff’s acute promyelocytic leukemia to benzene exposure. This specific causation issue was not previously addressed in the earlier proceedings. The trial court sustained the challenge, which left the plaintiff unable to show specific causation. The result was summary judgment for the defense, which the First Circuit affirmed on appeal.[15] The upshot of the subsequent proceedings, with their dispositive ruling in favor of the defense on specific causation, is that the earlier ruling on general causation is no longer necessary to the final judgment, and not the holding of the case when all the proceedings are considered.

In the end, Cranor’s WOE leaves us with a misdirected search for an “explanation of causation,” rather than a testable, tested, reproducible, and valid “inference of causation.” Cranor’s attempt to invoke the liberalization of the Federal Rules of Evidence ignores the true meaning of “liberal” in being free from dogma and authority. Evidence does not equal eminence, and expert witnesses in court must show their data and defend their inferences, whatever their explanations may be.

——————————————————————————————————–

[1]  Carl F. Cranor, “How Courts’ Reviews of Science in Toxic Tort Cases Have Changed and Why That’s a Good Thing,” 31 Public Affairs Q. 280 (2017), quoting from Schachtman, “WOE-fully Inadequate Methodology – An Ipse Dixit by Another Name” (May 1, 2012).

[2]  Milward v. Acuity Specialty Products Group, Inc., 639 F. 3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).

[3]  SeeThe Council for Education and Research on Toxics” (July 9, 2013).

[4] Among the signatures were Nachman Brautbar, David C. Christiani, Richard W. Clapp, James Dahlgren, Arthur L. Frank, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, David Ozonoff, David Rosner, Allan H. Smith, and Daniel Thau Teitelbaum.

[5]  Cranor at 286-87.

[6]  Cranor at 287.

[7]  Cranor at 287-88.

[8]  Cranor at 290.

[9]  “Cranor’s Defense of Milward at the CPR’s Celebration” (May 12, 2013).

[10]  Michele Carbone, Jack Gruber, and May Wong, “Modern criteria to establish human cancer etiology,” 14 Semin. Cancer Biol. 397 (2004).

[11]  Michele Carbone, George Klein, Jack Gruber and May Wong, “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Research 5518 (2004).

[12]  Milward v. Acuity Specialty Products Group, Inc., 639 F. 3d 11, 17 (1st Cir. 2011) (“when a group from the National Cancer Institute was asked to rank the different types of evidence, it concluded that ‘[t]here should be no such hierarchy’.”), cert. denied, 132 S.Ct. 1002 (2012).

[13]  Cranor at 292.

[14]  SeeWake Forest Publishes the Litigation Industry’s Views on Milward” (April 20, 2013).

[15]  Milward v. Acuity Specialty Products Group, Inc., 969 F. Supp. 2d 101 (D. Mass. 2013), aff’d sub nom. Milward v. Rust-Oleum Corp., 820 F.3d 469 (1st Cir. 2016).

Lawsuit Industry Advertising Indirectly Stimulates Adverse Event Reporting

February 4th, 2021

The lawsuit industry spends many millions of dollars each year to persuade people that they are ill from the medications they use, and that lawsuit industry lawyers will enrich them for their woes. But does the lawyer advertising stimulate the reporting of adverse events by consumers’ filing of MedWatch reports in the Federal Adverse Event Reporting System (FAERS)?

The question is of some significance. Adverse event reporting is a recognized, important component of pharmacovigilence. Regulatory agencies around the world look to an increased rate of reporting of a specific adverse event as a potential signal that there may be an underlying association between medication use and the reported harm. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication – adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]
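For readers unfamiliar with these techniques, here is a minimal sketch, in Python with hypothetical counts, of one common disproportionality measure, the proportional reporting ratio, computed from a two-by-two table of spontaneous report counts. As the guidance quoted below emphasizes, such a statistical “signal” is exploratory and hypothesis-generating only; it says nothing about causation.

# A minimal sketch of one common disproportionality measure, the proportional
# reporting ratio (PRR), computed from a 2x2 table of spontaneous report counts.
# The counts are hypothetical, and a PRR is hypothesis-generating only.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    # a: reports of the event of interest for the drug of interest
    # b: reports of all other events for the drug of interest
    # c: reports of the event of interest for all other drugs
    # d: reports of all other events for all other drugs
    share_drug = a / (a + b)      # share of the drug's reports naming the event
    share_others = c / (c + d)    # same share among reports for all other drugs
    return share_drug / share_others

# Hypothetical counts: 40 of 2,000 reports for the drug of interest name the event,
# versus 500 of 100,000 reports for all other drugs.
print(proportional_reporting_ratio(40, 1960, 500, 99500))  # 4.0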

One of the vexing misuses of pharmacovigilance techniques in pharmaceutical products litigation is the use of adverse event reports, either as individual case reports or in the form of disproportionality analyses, to support claims of causation. In some litigations, lawsuit industry lawyers have argued that case reports in the FAERS, standing alone, support their claims of causation.[2] Desperate to make their case through anecdotes, plaintiffs’ counsel will sometimes retreat to the claim that they want to introduce the MedWatch reports in support of a lesser claim that the reports put the defendant on “notice.” Typically, the notice argument leaves open exactly what the content of the notice is, but the clear intent is to argue notice that (1) there is an increased risk, and (2) the defendant was aware of the increased risk.[3]

Standard textbooks on pharmacovigilance and pharmacoepidemiology, as well as regulatory agency guidance, emphatically reject the use of FAERS anecdotes or their transmogrification into disproportionality analyses (DPAs) to support causal claims. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, elaborates on DPAs as an example of data mining, and instructs us that:

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[4]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[5] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[6] Because of the uncertainties in DPAs, the FDA urges “extreme caution” in comparing reporting rates, and generally considers DPA and similar analyses as “exploratory or hypothesis-generating.”[7]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[8]

Because the lawsuit industry frequently relies upon and over-endorses DPAs in its pharmaceutical litigations, inquiring minds may want to know whether the industry itself is stimulating reporting of adverse events through its media advertising.

Recently, two investigators published a study that attempted to look at whether lawsuit industry advertising was associated with stimulation of adverse event reporting in the FAERS.[9] Tippett and Chen conducted a multivariate regression analysis of FAERS reporting with independent variables of Google searches, attorney advertising, and FDA actions that would affect reporting, over the course of a single year (mid-2015 to mid-2016). The authors analyzed 412,901 adverse event reports to FAERS, involving 28 groups of drugs that were the subject of attorney solicitation advertising.

The authors reported that they found associations (statistically significant, p < 0.05) for regression coefficients for FDA safety actions and Google searches, but not for attorney advertising. Using lag periods of one, two, three, and four weeks, or one or two months, between FAERS reporting and the variables did not result in statistically significant coefficients for lawyer advertising.
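For those curious about the mechanics, a rough sketch of this kind of lagged regression appears below, in Python using pandas and statsmodels. It is not the authors’ actual model; the data file and column names are hypothetical placeholders.

# A rough sketch (not the authors' actual model) of a weekly time-series
# regression of FAERS report counts on lagged predictors. The data file and
# column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("weekly_counts.csv")  # hypothetical: one row per drug-week

lag_weeks = 4  # the authors tried lags of one to four weeks
predictors = ["attorney_ads", "google_searches", "fda_actions"]
for col in predictors:
    df[col + "_lag"] = df.groupby("drug")[col].shift(lag_weeks)

lagged = [col + "_lag" for col in predictors]
df = df.dropna(subset=lagged)
X = sm.add_constant(df[lagged])
model = sm.OLS(df["faers_reports"], X).fit()

print(model.params)      # regression coefficients
print(model.conf_int())  # confidence intervals, not just significance verdicts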

The authors variously described their finding as “preliminarily” supporting a claim that FAERS reporting is not stimulated by “direct attorney submission or drug injury advertising,” or as failing to find “a statistically significant relationship between drug injury advertising and adverse event reports.”[10] The authors claim that their analyses show that litigation advertisements “do not appear to have spurred patients, providers, attorneys, or other individuals to file a FAERS report, as shown in our regression and graphical results.”[11]

There are substantial problems with this study. For most of the 28 drugs and drug groups studied, attorneys made up a very small proportion of all submitters of adverse event reports. The authors present no pre-study power analysis for this aspect of their study. The authors do not tell us how many analyses they have done before the one presented in this journal article, but they do acknowledge having done “exploratory analyses.” Contrary to the 2016 guidance of the American Statistical Association,[12] they present no actual p-values, and they provide no confidence or prediction intervals for their coefficients. The study did not include local television advertising, and so the reported statistical non-significance of attorney advertising must be qualified to show the limitations of the authors’ data.

Perhaps the most serious problem with this observational study of attorney advertising and stimulated reporting is the way in which the authors framed their hypothesis. Advertising stimulates people to call the toll-free number to learn how they too may hit the litigation jackpot. Attorney advertising is designed to persuade people to become legal clients, not to file MedWatch forms. In the weeks and months that follow, paralegals interview the callers and collect information, and only then are FAERS reports filed. Lag times of one to four weeks are generally irrelevant, as is the hypothesis studied and reported upon in this article.

After decades of working in this area, I have never seen an advertisement that encourages filing a MedWatch report, and the authors do not suggest otherwise. Advertising is only the initial part of a client intake process that results in the viewer’s telephone call, a subsequent interview by law firm personnel, a review of the putative claim, and the viewer’s signing of retainer agreements and authorizations to obtain medical records. The study, which looked for associations between FAERS filings and attorney advertisements over short lag periods, could not detect an association, given how long client recruitment takes.

The authors speculate, without evidence, that the lawsuit industry may discourage their clients from filing MedWatch reports and that the industry lawyers may hesitate to file the reports to avoid serving as a fact witness in their client’s case.[13] Indeed, the authors themselves adduce compelling evidence to the contrary, in the context of the multidistrict litigation over claimed harms from the use of testosterone therapies.

In their aggregate analysis of the 28 drugs and drug groups, the authors found that the lawsuit industry submitted only six percent of MedWatch reports. This low percentage would have been much lower yet but for the very high proportion (68%) of lawyer-submitted reports concerning the use of testosterone. The litigation-driven filings lagged the relevant attorney advertising by about six months, which should have caused the authors to re-evaluate their conclusions and their observational design that looked for correlations within one or two months. The testosterone data shows rather clearly that attorney advertising leads to recruitment of clients, which in turn leads to the filing of litigation-driven adverse event reports.

As the authors explain, attorney advertising and trolling for clients occurred in the summer of 2015, but FAERS reporting did not increase until an extreme burst of filings took place several months later. The authors’ graph tells the story even better.

So the correct conclusion is that attorney advertising stimulates client recruitment, which results in mass filings of MedWatch reports.

___________________________________________________________________

[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004). See alsoDisproportionality Analyses Misused by Lawsuit Industry” (Apr. 20, 2020).

[2]  See, e.g., Fred S. Longer, “The Federal Judiciary’s Super Magnet,” 45 Trial 18, 18 (July 2009) (arguing that “adverse events . . . established a causal association between Piccolomal and liver disease at statistically significant levels”).

[3]  See, e.g., Paul D. Rheingold, “Drug Products Liability and Malpractice Cases,” 17 Am. Jur. Trials 1 (1970 & Cum. Supp. 2019) (“Adverse event reports (AERs) created by manufacturers when users of their over-the-counter pain reliever experienced adverse events or problems, were admissible to show notice” of the elevated risk.).

[4]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[5]  Id. at 9.

[6]  Id.

[7]  Id. at 11.

[8] EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added). See also Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom, Stephen E. Kimmel & Sean Hennessy, eds., Pharmacoepidemiology at 185 (6th ed. 2020).

[9]  Elizabeth C. Tippett & Brian K. Chen, “Does Attorney Advertising Stimulate Adverse Event Reporting?” 74 Food & Drug Law J. 501 (2020) [Tippett].

[10]  Id. at 502.

[11]  Id.

[12]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016).

[13]  Tippett at 591.

Susan Haack on Judging Expert Testimony

December 19th, 2020

Susan Haack has written frequently about expert witness testimony in the United States legal system. At times, Haack’s observations are interesting and astute, perhaps more so because she has no training in the law or legal scholarship. She trained in philosophy, and her works no doubt are taken seriously because of her academic seniority; she is the Distinguished Professor in the Humanities, Cooper Senior Scholar in Arts and Sciences, Professor of Philosophy and Professor of Law at the University of Miami.

On occasion, Haack has used her background and experience from teaching about epistemology to good effect in elucidating how epistemological issues are handled in the law. For instance, her exploration of the vice of credulity, as voiced by W.K. Clifford,[1] is a useful counterweight to the shrill agnotologists, Robert Proctor, Naomi Oreskes, and David Michaels.

Professor Haack has also been a source of confused, fuzzy, and errant advice when it comes to the issue of Rule 702 gatekeeping. Haack’s most recent article on “Judging Expert Testimony” is an example of some unfocused thinking about one of the most important aspects of modern litigation practice, admissibility challenges to expert witness opinion testimony.[2]

Uncontroversially, Haack finds the case law on expert witness gatekeeping lacking in “effective practical guidance,” and she seeks to offer courts, and presumably litigants, “operational help.” Haack sets out to explain “why the legal formulae” are not of practical use. Haack notes that terms such as “reliable” and “sufficient” are qualitative, and vague,[3] much like “obscene” and other adjectives that gave the courts such a difficult time. Rules with vague terms such as these give judges very little guidance. As a philosopher, Haack might have noted that the various judicial formulations of gatekeeping standards are couched as conclusions, devoid of explanatory force.[4] And she might have pointed out that the judicial tendency to confuse reliability with validity has muddled many court opinions and lawyers’ briefs.

Focusing specifically on the field of epidemiology, Haack attempts to help courts by offering questions that judges and lawyers should be asking. She tells us that the Reference Manual on Scientific Evidence is of little practical help, which is a bit unfair.[5] The Manual in its present form has problems, but ultimately the performance of gatekeepers can be improved only if the gatekeepers develop some aptitude and knowledge in the subject matter of the expert witnesses who undergo Rule 702 challenges. Haack seems unduly reluctant to acknowledge that gatekeeping will require subject matter expertise. The chapter on statistics in the current edition of the Manual, by David Kaye and the late David Freedman, is a rich resource for judges and lawyers in evaluating statistical evidence, including statistical analyses that appear in epidemiologic studies.

Why do judges struggle with epidemiologic testimony? Haack unwittingly shows the way by suggesting that “[e]pidemiological testimony will be to the effect that a correlation, an increased relative risk, has, or hasn’t, been found, between exposure to some substance (the alleged toxin at issue in the case) and some disease or disorder (the alleged disease or disorder the plaintiff claims to have suffered)… .”[6] Some philosophical parsing of the difference between “correlation” and “increased risk” as two very different things might have been in order. Haack suggests an incorrect identity between correlation and increased risk, an error that has confused courts as well as some epidemiologists.
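The difference is easy to show with numbers. The sketch below, in Python with hypothetical counts, computes both a relative risk and a correlation coefficient (phi) from the same two-by-two table; the two quantities answer different questions and live on different scales.

# An illustration that "correlation" and "increased relative risk" are not the
# same quantity: both are computed from the same hypothetical 2x2 table of
# exposed/unexposed by diseased/not diseased.

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    # a, b = exposed diseased / not diseased; c, d = unexposed diseased / not diseased
    return (a / (a + b)) / (c / (c + d))

def phi_coefficient(a: int, b: int, c: int, d: int) -> float:
    # Pearson correlation for two binary variables (the phi coefficient)
    numerator = a * d - b * c
    denominator = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return numerator / denominator

# Hypothetical cohort: 30 of 1,000 exposed and 10 of 1,000 unexposed fall ill
a, b, c, d = 30, 970, 10, 990
print(relative_risk(a, b, c, d))              # 3.0, a tripled risk
print(round(phi_coefficient(a, b, c, d), 3))  # about 0.071, a weak correlation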

Haack suggests asking various questions that are fairly obvious, such as questions about the soundness of the data, measurements, study design, and data interpretation. Haack gives the example of failing to ascertain exposure to an alleged teratogen during the first trimester of pregnancy as a failure of study design that could obscure a real association. Curiously, she claims that some of Merrell Dow’s studies of Bendectin did such a thing, citing not any publication but the second-hand account of a trial judge.[7] Beyond the objectionable lack of scholarship, the example comes from a medication exposure that has been exculpated as much as possible from the dubious litigation claims made about its teratogenicity. The misleading example raises the question: why choose a Bendectin case, from a litigation that was punctuated by fraud and perjury from plaintiffs’ expert witnesses, involving a medication that has been shown to be safe and effective in pregnancy?[8]

Haack balks when it comes to statistical significance, which she tells us is merely based upon a convention, and set “high” to avoid false alarms.[9] Haack’s dismissive attitude cannot be squared with the absolute need to address random error and to assess whether the research claim has been meaningfully tested.[10] Haack would reduce the assessment of random error to the uncertainties of eyeballing sample size. She tells us that:

“But of course, the larger the sample is, then, other things being equal, the better the study. Andrew Wakefield’s dreadful work supposedly finding a correlation between MMR vaccination, bowel disorders, and autism—based on a sample of only 12 children — is a paradigm example of a bad study.”[11]

Sample size was the least of Wakefield’s problems, but more to the point, in some study designs for some hypotheses, a sample of 12 may be quite adequate to the task, and capable of generating robust and even statistically significant findings.
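A quick, hypothetical illustration: an exact binomial test on a sample of 12 can yield a vanishingly small p-value when the observed response rate far exceeds a modest background rate. The sketch below uses SciPy; the response counts and the assumed background rate are made up for illustration.

# A hypothetical illustration that a sample of 12 can yield a statistically
# significant result: an exact binomial test of 9 responders out of 12 against
# an assumed background response rate of 10%.
from scipy.stats import binomtest

result = binomtest(k=9, n=12, p=0.10, alternative="greater")
print(result.pvalue)  # far below conventional significance thresholds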

Inevitably, Haack alights upon personal bias or conflicts of interest, as a subject of inquiry.[12] Of course, this is one of the few areas that judges and lawyers understand all too well, and do not need encouragement to pursue. Haack dives in, regardless, to advise asking:

“Do those who paid for or conducted a study have an interest in reaching a given conclusion (were they, for example, scientists working for manufacturers hoping to establish that their medication is effective and safe, or were they scientists working, like Wakefield, with attorneys for one party or another)?”[13]

Speaking of bias, we can detect some in how Haack frames the inquiry. Did the scientists work for manufacturers (Boo!), or were they, “like Wakefield,” working for attorneys for a party? Haack cannot seem to bring herself to say that Wakefield, and many other expert witnesses, worked for plaintiffs and plaintiffs’ counsel, a.k.a. the lawsuit industry. Perhaps Haack counted such expert witnesses as working for those who manufacture lawsuits. Similarly, in her discussion of journal quality, she notes that some journals carry advertisements from manufacturers, or receive financial support from them. There is a distinct asymmetry in Haack’s lack of curiosity about journals run by scientists or physicians who belong to advocacy groups, or who regularly testify for plaintiffs’ counsel.

There are many other quirky opinions here, but I will conclude with the obvious point that, in the epidemiologic literature, there is a huge gulf between reporting associations and drawing causal conclusions. Haack asks her readers to remember “that epidemiological studies can only show correlations, not causation.”[14] This suggestion ignores her own article’s discussion of certain clinical trial results, which do “show” causal relationships. And epidemiologic studies can show strong, robust, consistent associations, with exposure-response gradients, unlikely to be explained by random variation; collectively, such findings can show causation in appropriate cases.

My recommendation is to ignore Haack’s suggestions and to pay closer attention to the subject matter of the expert witness who is under challenge. If the subject matter is epidemiology, open a few good textbooks on the subject. On the legal side, a good treatise such as The New Wigmore will provide much more illumination and guidance for judges and lawyers than vague, general suggestions.[15]


[1] William Kingdon Clifford, “The Ethics of Belief,” in L. Stephen & F. Pollock, eds., The Ethics of Belief 70-96 (1877) (“In order that we may have the right to accept [someone’s] testimony as ground for believing what he says, we must have reasonable grounds for trusting his veracity, that he is really trying to speak the truth so far as he knows it; his knowledge, that he has had opportunities of knowing the truth about this matter; and his judgement, that he has made proper use of those opportunities in coming to the conclusion which he affirms.”), quoted in Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020).

[2]  Susan Haack, “Judging Expert Testimony: From Verbal Formalism to Practical Advice,” 1 Quaestio facti. Internat’l J. Evidential Legal Reasoning 13, 13 (2020) [cited as Haack].

[3]  Haack at 21.

[4]  See, e.g., “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions”; “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702”; “Judicial Dodgers – Weight not Admissibility”; “Judicial Dodgers – Rule 702 Tie Does Not Go to Proponent.”

[5]  Haack at 21.

[6]  Haack at 22.

[7]  Haack at 24, citing Blum v. Merrell Dow Pharms., Inc., 33 Phila. Cty. Rep. 193, 214-17 (1996).

[8]  See, e.g., “Bendectin, Diclegis & The Philosophy of Science” (Oct. 23, 2013).

[9]  Haack at 23.

[10]  See generally Deborah Mayo, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018).

[11]  Haack at 23-24 (emphasis added).

[12]  Haack at 24.

[13]  Haack at 24.

[14]  Haack at 25.

[15]  David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence: Expert Evidence (2nd ed. 2011). A new edition is due out presently.

Is Your Daubert Motion Racist?

July 17th, 2020

In this week’s New York Magazine, Jonathan Chait points out that there is now a vibrant anti-racism consulting industry that exists to help white (or White?) people recognize the extent to which their race has enabled their success, in the face of systematic inequalities that burden people of color. Chait acknowledges that some of what this industry does is salutary and timely, but he also notes that there are disturbing elements in this industry’s messaging, which is nothing short of an attack on individualism as a racist myth, one that ignores how individuals are subsumed completely into their respective racial groups. Chait argues that many of the West’s most cherished values – individualism, due process, free speech and inquiry, and the rule of law – are imperiled by so-called “radical progressivism” and “identity politics.”[1]

It is hard to fathom how anti-racism can collapse all identity into racial categories, even if some inarticulate progressives say so. Chait’s claim, however, seems to be supported by the Smithsonian National Museum of African American History & Culture, and its webpages on “Talking about Race,” which provide an extended analysis of “whiteness,” “white privilege,” and the like.

On May 31, 2020, the Museum’s website published a graphic that presented its view of the “Aspects & Assumptions of Whiteness and White Culture in the United States,” making many startling claims about what is “white” and, by implication, what is “non-white.” [The chart is set out below.] I will leave it to the sociologists, psychologists, and anthropologists to parse the discussion of “white-dominant culture” and white “racial identity” provided in the Museum’s webpages. In my view, the characterizations of “whiteness” were overtly racist and insulting to all races and ethnicities. As Chait points out, with an abundance of irony, Donald Trump would seem to be the epitome of the non-white, given his disavowal of what the Museum identifies as white culture’s insistence that “hard work is the key to success.”

The aspect of the graphic summary of whiteness that I found most curious, most racist, and most insulting to people of all colors and ethnicities is the chart’s assertion that white culture places “Emphasis on the Scientific Method,” with its valuation of “[o]bjective, rational linear thinking,” “[c]ause and effect relationships,” and “[q]uantitative emphasis.” The implication is that non-whites do not emphasize or care about the scientific method. So the scientific method, with its concern for validity of inference and for ruling out random and systematic errors, is just white privilege, and a microaggression against non-white people.

Really? Can the Smithsonian National Museum of African American History & Culture really mean that scientific punctilio is just another manifestation of racism and cultural imperialism? Chait seems to think so, quoting Glenn Singleton, president of Courageous Conversation, a racial-sensitivity training firm, who asserts that valuing “written communication over other forms” is “a hallmark of whiteness,” as is “scientific, linear thinking. Cause and effect.”

The Museum has apparently removed the graphic from its website, in response to a blitz of criticism from right-wing media and pundits.[2]  According to the Washington Post, the graphic has its origins in a 1978 book on White Awareness.[3] In response to the criticism, museum director Spencer Crew apologized and removed the graphic, agreeing that “it did not contribute to the discussion as planned.”[4]

The removal of the graphic is not really the point. Many people will now simply be bitter that they cannot publicly display their racist tropes. More important still, many people will continue to believe that causal, rational, linear thinking is white, exclusionary, and even racist. Something to remember when you make your next Rule 702 motion.

[Chart: “Aspects & Assumptions of Whiteness and White Culture in the United States,” since removed from the Museum’s website.]


[1]  Jonathan Chait, “Is the Anti-Racism Training Industry Just Peddling White Supremacy?” New York Magazine (July 16, 2020).

[2]  Laura Gesualdi-Gilmore, “‘DEEPLY INSULTING’ African American museum accused of ‘racism’ over whiteness chart linking hard work and nuclear family to white culture,” The Sun (July 16, 2020); “DC museum criticized for saying ‘delayed gratification’ and ‘decision-making’ are aspects of ‘whiteness’,” Fox News (July 16, 2020) (noting that the National Museum of African American History and Culture received a tremendous outcry after equating the nuclear family and self-reliance to whiteness); Sam Dorman, “African-American museum removes controversial chart linking ‘whiteness’ to self-reliance, decision-making; the chart didn’t contribute to the ‘productive conversation’ they wanted to see,” Fox News (July 16, 2020); Mairead McArdle, “African American History Museum Publishes Graphic Linking ‘Rational Linear Thinking,’ ‘Nuclear Family’ to White Culture,” Nat’l Rev. (July 15, 2020).

[3]  Judy H. Katz, White Awareness: Handbook for Anti-Racism Training (1978).

[4]  Peggy McGlone, “African American Museum site removes ‘whiteness’ chart after criticism from Trump Jr. and conservative media,” Wash. Post (July 17, 2020).