Carl Cranor’s Conflicted Jeremiad Against Daubert
It seems that authors with the most intense and refractory conflicts of interest (COI) often fail to see their own conflicts and are the most vociferous critics of others for failing to identify COIs. Consider the spectacle of anti-tobacco activists and tobacco plaintiffs’ expert witnesses asserting that the American Law Institute had an ethical problem because the Institute’s members included some tobacco defense lawyers. Somehow these authors overlooked their own positional and financial conflicts, as well as the obvious fact that the Institute’s members included some tobacco plaintiffs’ lawyers as well. Still, the complaint was instructive because it typifies the abuse of asymmetrical ethical standards, as well as ethical blind spots.
Recently, Raymond Richard Neutra, Carl F. Cranor, and David Gee published a paper on the litigation use of Sir Austin Bradford Hill’s considerations for evaluating whether an association is causal or not. See Raymond Richard Neutra, Carl F. Cranor, and David Gee, “The Use and Misuse of Bradford Hill in U.S. Tort Law,” 58 Jurimetrics 127 (2018) [cited here as Cranor]. Their paper provides a startling example of hypocritical and asymmetrical assertions of conflicts of interests.
Neutra is a self-styled public health advocate and the Chief of the Division of Environmental and Occupational Disease Control (DEODC) of the California Department of Health Services (CDHS). David Gee, not to be confused with the English artist or the Australian coin forger, is with the European Environment Agency, in Copenhagen, Denmark. He is perhaps best known for his precautionary principle advocacy and his work with trade unions.
Carl Cranor is with the Center for Progressive Reform, and he teaches philosophy at one of the University of California campuses. Although he is neither a lawyer nor a scientist, he participates with some frequency as a consultant, and as an expert witness, in lawsuits on behalf of claimants. Perhaps Cranor’s most notorious appearance as an expert witness resulted in the decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom. U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012). Probably less generally known is that Cranor was one of the founders of the Council for Education and Research on Toxics (CERT), which recently was the complaining party in a California case in which CERT sought money damages for Starbucks’ failure to label each cup of coffee sold as “known to the State of California” to cause cancer. Having a so-called not-for-profit corporation can be pretty handy, especially when it holds itself out as a scientific organization and files amicus briefs in support of reversing Daubert exclusions of the corporation’s own founders, as CERT did on behalf of its founding member in the Milward case. The conflict of interest in such an amicus brief, however, is no longer potential or subtle, and it violates the duty of candor to the court.
In this recent article on Hill’s considerations for judging causality, Cranor followed CERT’s lead from Milward. Cranor failed to disclose that he has been a party expert witness for plaintiffs, in cases in which he was advocating many of the same positions put forward in the Jurimetrics article, including the Milward case, in which he was excluded from testifying by the trial court. Cranor’s lack of candor with the readers of the Jurimetrics article is all the more remarkable in that Cranor and his co-authors give conflicts of interest outsize importance in substantive interpretations of scholarship:
“the desired reliability for evidence evaluation requires that biases that derive from the financial interests and ideological commitments of the investigators and editors that control the gateways to publication be considered in a way that Hill did not address.”
Cranor at 137 & n.59. Well, we could add that Cranor’s financial interests and ideological commitments might well be considered in evaluating the reliability of the opinions and positions advanced in this most recent work by Cranor and colleagues. If you believe that COIs disqualify a speaker from addressing important issues, then you have all the reason you need to avoid reading Cranor’s recent article.
Dubious Scholarship
The more serious problem with Cranor’s article is not his ethically strained pronouncements about financial interests, but the dubious scholarship he and his colleagues advance to thwart judicial gatekeeping of even more dubious expert witness opinion testimony. To begin with, the authors disparage the training and abilities of federal judges to assess the epistemic warrant and reliability of proffered causation opinions:
“With their enhanced duties to review scientific and technical testimony federal judges, typically not well prepared by legal education for these tasks, have struggled to assess the scientific support for—and the reliability and relevance of—expert testimony.”
Cranor at 147. The assessment is fair enough, but it hides the authors’ cynical agenda: to remove gatekeeping and leave the assessment to lay juries, who are even less well prepared for the task, and whose verdicts are subject to no institutional accountability, review, or public evaluation.
Similarly, the authors note the temporal context and limitations of Bradford Hill’s 1965 paper, advice now more than 50 years old, offered in a discipline that has since changed dramatically with advances in biological, epidemiologic, and genetic science. Even at the time of its original publication in 1965, Bradford Hill’s paper, which was based upon an informal lecture, was not designed or intended to be a definitive treatment of causal inference. Yet Cranor and his colleagues make no effort to review Bradford Hill’s many other publications, both before and after his 1965 dinner speech, for evidence of his views on the factors for causal inference, including the role of statistical testing and inference.
Nonetheless, Bradford Hill’s 1965 paper has become a landmark, even if dated, because of its author’s iconic status in the world of public health, earned for his showing that tobacco smoking causes lung cancer, and for advancing the role of double-blind randomized clinical trials. Cranor and his colleagues made no serious effort to engage with the large body of Bradford Hill’s writings, including his immensely important textbook, The Principles of Medical Statistics, which started as a series of articles in The Lancet, and went through 12 editions in print. Hill’s reputation will no doubt survive Cranor’s bowdlerized version of Sir Austin’s views.
Epidemiology is Dispensable When It Fails to Support Causal Claims
The egregious aspect of Cranor’s article is its bill of particulars against the federal judiciary for allegedly errant gatekeeping, which for these authors really means any gatekeeping at all. Cranor at 144-45. Indeed, the authors provide not a single example of a “proper” exclusion of an expert witness who contended for some doubtful causal claim. Perhaps they have never seen a proper exclusion, but doesn’t that speak volumes about their agenda and their biases?
High on the authors’ list of claimed gatekeeping errors is the requirement that a causal claim be supported with epidemiologic evidence. Although some causal claims may be supported by strong mechanistic evidence of a biological process, such claims are uncommon in United States tort litigation.
In support of the claim that epidemiology is dispensable, Cranor suggests that:
“Some courts have recognized this, and distinguished scientific committees often do not require epidemiological studies to infer harm to humans. For example, the International Agency for Research on Cancer (IRAC) [sic], the National Toxicology Program, and California’s Proposition 65 Scientific Advisory Panel, among others, do not require epidemiological data to support findings that a substance is a probable or—in some cases—a known human carcinogen, but it is welcomed if available.”
Cranor at 149. California’s Proposition 65!??? Even IARC is hard to take seriously these days with its capture by consultants for the litigation industry, but if we were to accept IARC as an honest broker of causal inferences, what substance “known” to IARC to cause cancer in humans (IARC Group 1) was branded a “known carcinogen” without the support of epidemiologic studies? Inquiring minds might want to know, but they will not learn the answer from Cranor and his co-authors.
When it comes to adverting to legal decisions that supposedly support the authors’ claim that epidemiology is unnecessary, their scholarship is equally wanting. The paper cites the notorious Wells case, which was so roundly condemned in scientific circles that it probably helped ensure that a decision such as Daubert would ultimately be handed down by the Supreme Court. The authors seemingly cannot read, understand, and interpret even the most straightforward legal decisions. Here is how they cite Wells as support for their views:
“Wells v. Ortho Pharm. Corp., 788 F.2d 741, 745 (11th Cir. 1986) (reviewing a district court’s decision deciding not to require the use of epidemiological evidence and instead allowing expert testimony).”
Cranor at 149-50 n.122. The trial judge in Wells never made any such decision; indeed, the case was tried to the bench, before the Supreme Court decided Daubert, and no gatekeeping was involved at all. More important, and contrary to Cranor’s explanatory parenthetical, both sides presented epidemiologic evidence in support of their positions.
Cranor and his co-authors similarly misread and misrepresent the trial court’s decision in the litigation over maternal sertraline use and infant birth defects. Twice they cite the Multi-District Litigation trial court’s decision that excluded plaintiffs’ expert witnesses:
“In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., 26 F. Supp. 3d 449, 455 (E.D. Pa. 2014) (expert may not rely on nonstatistically significant studies to which to apply the [Bradford Hill] factors).”
Cranor at 144 n.85; 158 n.179. The MDL judge, Judge Rufe, decidedly never held that an expert witness may not rely upon a statistically non-significant study in a “Bradford Hill” analysis, and the Third Circuit, which affirmed the exclusion of the plaintiffs’ expert witnesses’ testimony, was equally careful to avoid making any such pronouncement.
Who Needs Statistical Significance
Part of Cranor’s post-science agenda is to intimidate judges into believing that statistical significance is unnecessary and a wrong-headed criterion for judging the validity of relied-upon research. In their article, Cranor and friends suggest that Hill agreed with their radical approach, but nothing could be further from the truth. Although these authors parse almost every word of Hill’s 1965 article, they conveniently omit Hill’s views about the necessary predicates for applying his nine considerations for causal inference:
“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”
Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). Cranor’s radicalism leaves no room for assessing whether a putative association is “beyond what we would care to attribute to the play of chance,” and his poor scholarship ignores Hill’s insistence that this statistical analysis be carried out.
Hill’s work certainly acknowledged the limitations of statistical method, which could not compensate for poorly designed research:
“It is a serious mistake to rely upon the statistical method to eliminate disturbing factors at the completion of the work. No statistical method can compensate for a badly planned experiment.”
Austin Bradford Hill, Principles of Medical Statistics at 4 (4th ed. 1948). Hill was equally clear, however, that the limits of the statistical method did not imply that statistical analysis is unnecessary for interpreting a properly planned experiment or study. In the summary section of his textbook’s first chapter, Hill removed any doubt about his view of the importance, and the necessity, of statistical methods:
“The statistical method is required in the interpretation of figures which are at the mercy of numerous influences, and its object is to determine whether individual influences can be isolated and their effects measured.”
Id. at 10 (emphasis added).
In his efforts to eliminate judicial gatekeeping of expert witness testimony, Cranor has struggled to understand statistical inference and testing. In an early writing, a 1993 book, Cranor suggested that we “can think of type I and II error rates as ‘standards of proof,’” which begs the question whether those error rates are appropriately used to assess significance probabilities or posterior probabilities. Indeed, Cranor went further in confusing significance and posterior probabilities when he described the usual level of alpha (5%) as the “95%” rule, and claimed that regulatory agencies require something akin to proof “beyond a reasonable doubt” when they require two “statistically significant” studies.
Cranor has persisted in this fallacious analysis in his later writings. In a 2006 book, he erroneously equated the 95% coefficient of statistical confidence with 95% certainty of knowledge. Later in the same text, Cranor again asserted the nonsense that agency regulations are adopted when supported by proof “beyond a reasonable doubt.” Given that Cranor has consistently confused significance and posterior probability, he really should not be giving advice to anyone about statistical or scientific inference. Cranor’s persistent misunderstandings of basic statistical concepts do, however, explain his motivation for advocating the elimination of statistical significance testing, even if those misunderstandings render his enterprise intellectually unacceptable.
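A brief illustration (mine, not drawn from Cranor or his critics) shows why the two concepts differ. Suppose, purely for the sake of arithmetic, that one in ten hypotheses tested is true, and that studies of true hypotheses have 80% power at a 5% significance level. By Bayes’ theorem, the probability that a “statistically significant” result reflects a true hypothesis is (0.8 × 0.1) / (0.8 × 0.1 + 0.05 × 0.9) ≈ 0.64, not 0.95. The posterior probability depends upon the prior probability and the power of the test, neither of which is captured by the 95% confidence coefficient; equating the two is precisely the error at issue.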
Cranor and company fall into a similar muddle when they offer advice on post-hoc power calculations, advice that ignores standard statistical teaching on the interpretation of completed studies. Another measure of the authors’ failed scholarship is their omission of any discussion of recent efforts by many in the scientific community to lower the threshold for statistical significance, based upon the belief that the customary 5% significance level is an order of magnitude too high.
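To make the standard objection concrete (an illustration of my own, not an argument advanced in the Jurimetrics article): for a two-sided z-test conducted at α = 0.05, the “observed” power obtained by plugging the study’s own estimate back into the power formula is approximately Φ(|z| − 1.96), a monotone function of the observed z-statistic, and hence of the p-value itself. A study with p equal to exactly 0.05 has an observed power of roughly 50%. Post-hoc power of this sort merely restates the p-value; it supplies no new information about what a completed study did or did not show.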
Relative Risks Greater Than Two
There are other tendentious arguments and treatments in Cranor’s brief against gatekeeping, but I will stop with one last example. The inference of specific causation from study risk ratios has provoked a torrent of verbiage from Sander Greenland (who is cited copiously by Cranor). Cranor, however, does not even scratch the surface of the issue, and he fails to cite the work of epidemiologists, such as Duncan C. Thomas, who have defended the use of probabilities of (specific) causation. More important, Cranor fails to speak out against the abuse of using any relative risk greater than 1.0 to support an inference of specific causation when the causal relationship at issue is neither necessary nor sufficient. In this context, Kenneth Rothman has reminded us that someone can be exposed to, or have, a risk and then develop the related outcome without there being any specific causation:
“An elementary but essential principle to keep in mind is that a person may be exposed to an agent and then develop disease without there being any causal connection between the exposure and the disease. For this reason, we cannot consider the incidence proportion or the incidence rate among exposed people to measure a causal effect.”
Kenneth J. Rothman, Epidemiology: An Introduction at 57 (2d ed. 2012).
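For readers who wonder what the heading’s threshold of two has to do with the matter, the familiar argument, a standard result not spelled out in Cranor’s article, runs as follows. If bias and confounding are assumed away, and the causal relationship is neither necessary nor sufficient, the probability that an exposed case’s disease was caused by the exposure is conventionally estimated as PC = (RR − 1) / RR, which exceeds 50% only when the relative risk exceeds 2.0. Greenland has criticized the assumptions behind this formula, and Thomas and others have defended versions of it, but on this reasoning a relative risk barely above 1.0 cannot make specific causation more probable than not.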
The danger in Cranor’s article in Jurimetrics is that some readers will not realize the extreme partisanship in its ipse dixit and erroneous pronouncements. Caveat lector!