Dr. Frank Woodside and Allison Davis have published an article on the so-called Bradford Hill criteria. Frank C. Woodside, III & Allison G. Davis, “The Bradford Hill Criteria: The Forgotten Predicate,” 35 Thomas Jefferson L. Rev. 103 (2013).
Their short paper may be of interest to Rule 702 geeks, and students of how the law parses causal factors in litigation.
The authors argue that a “predicate” to applying the Hill criteria consists of:
- ascertaining a clear-cut association,
- determining the studies establishing the association are valid, and
- satisfying the Daubert [sic][1] requirements.
Id. at 107. Parties contending for a causal association often try to flyblow the need for statistical significance at any level, and argue that Bradford Hill did not insist upon statistical testing. Woodside and Davis remind us that Bradford Hill was quite firm in insisting upon the need to rule out random variability as an explanation for an association:
“Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance.”
Id. at 105; see Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965). The authors correctly note that the need for study validity is fairly implied by Bradford Hill’s casual expression about “perfectly clear-cut.”
Woodside and Davis appear to acquiesce in the plaintiffs’ tortured interpretation of Bradford Hill’s speech, on which statistical significance supposedly is unimportant. Woodside & Davis at 105 & n.7 (suggesting that Bradford Hill “seemingly negates the second [the requirement of statistical significance] when he discounts the value of significance testing,” citing Bradford Hill at 299).
Woodside and Davis, however, miss the heavy emphasis that Bradford Hill actually placed upon “tests of significance”:
“No formal tests of significance can answer those questions. Such tests can, and should, remind us of the effects that the play of chance can create, and they will instruct us in the likely magnitude of those effects. Beyond that they contribute nothing to the ‘proof’ of our hypothesis.”
Bradford Hill at 299. Bradford Hill never says that statistical tests contribute nothing to proving a hypothesis; his point is that such tests, standing alone, are insufficient to establish causality. Bradford Hill’s “beyond that” language clearly marks ruling out the play of chance as a preliminary, but necessary, step before proceeding to consider the causal factors.
Passing beyond their exegetical fumble, Woodside and Davis proceed to discuss the individual Bradford Hill considerations and how they have fared in the crucible of Rule 702. Their discussion may be helpful to lawyers who want to track the individual considerations and how they have been treated, or dismissed, by trial courts charged with gatekeeping expert witness opinion testimony.
There is another serious problem in the Woodside and Davis paper. The authors describe risk ratios and the notion of “confidence intervals”:
“A confidence interval provides both the relative risk found in the study and a range (interval) within which the risk would likely fall if the study were repeated numerous times. … As such, risk measures used in conjunction with confidence intervals are critical in establishing a perfectly clear-cut association when it comes to examining the results of a single study.”
Woodside & Davis at 110. The authors cite the Reference Manual on Scientific Evidence (3d ed. 2011), but they fail to catch important nuances of the definition of a confidence interval. The obtained interval from a given study is not the interval within which the “risk would likely fall if the study were repeated… .” Rather, 95% of the many intervals, from many repeated studies done on the same population with the same sample size, would capture the true risk. As for the obtained interval itself, the true risk either lies within it or does not, and no probability attaches to the claim that the true value lies within that particular interval.
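The coverage interpretation is easy to see in a short simulation. The sketch below is not from the article or the Reference Manual; it assumes a hypothetical cohort study with a true risk ratio of 2.0 and a baseline risk of 10%, and it simply counts how often the usual (Katz) 95% interval for a risk ratio captures that true value over many repeated studies.

```python
# Minimal sketch (hypothetical numbers): repeated cohort studies with a
# true risk ratio of 2.0; count how often the 95% interval covers it.
import math
import random

random.seed(1)

TRUE_RR = 2.0
P_UNEXPOSED = 0.10               # assumed baseline risk
P_EXPOSED = TRUE_RR * P_UNEXPOSED
N = 1000                         # subjects per arm in each simulated study
N_STUDIES = 10_000

covered = 0
for _ in range(N_STUDIES):
    cases_exp = sum(random.random() < P_EXPOSED for _ in range(N))
    cases_unexp = sum(random.random() < P_UNEXPOSED for _ in range(N))
    if cases_exp == 0 or cases_unexp == 0:
        continue                 # skip the (vanishingly rare) degenerate study
    rr_hat = (cases_exp / N) / (cases_unexp / N)
    # Katz log-scale standard error for the risk ratio
    se_log_rr = math.sqrt(1 / cases_exp - 1 / N + 1 / cases_unexp - 1 / N)
    lower = math.exp(math.log(rr_hat) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr_hat) + 1.96 * se_log_rr)
    covered += lower <= TRUE_RR <= upper

print(f"Intervals covering the true risk ratio: {covered / N_STUDIES:.1%}")  # ~95%
```

Each simulated interval either contains 2.0 or it does not; the “95%” describes the long-run performance of the interval-generating procedure, not the probability that any particular realized interval is correct.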
It is a mystery why lawyers would bother to define something like the confidence interval, and then do it incorrectly. Here is how Professors Finkelstein and Levin define the confidence interval in their textbook on statistics:
“A confidence interval for a population proportion P is a range of values around the proportion observed in a sample with the property that no value in the interval would be considered unacceptable as a possible value for P in light of the sample data.”
Michael Finkelstein & Bruce Levin, Statistics for Lawyers 166-67 (2d ed. 2001). This text explains why and where Woodside and Davis went astray:
“It is the confidence limits PL and PU that are random variables based on the sample data. Thus, a confidence interval (PL, PU) is a random interval, which may or may not contain the population parameter P. The term “confidence” derives from the fundamental property that, whatever the true value of P, the 95% confidence interval will contain P within its limits 95% of the time, or with 95% probability. This statement is made only with reference to the general property of confidence intervals and not to a probabilistic evaluation of its truth in any particular instance with realized values of PL and PU.”
Id. at 167-71.
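To make the quoted point concrete, the limits of the familiar large-sample interval for a proportion can be written out. The Wald form below is a standard textbook formula offered only as an illustration, not as Finkelstein and Levin’s own derivation; it shows why the limits, rather than the population proportion, are the random quantities.

```latex
% Standard large-sample (Wald) 95% interval for a sample proportion
% \hat{p} based on n observations -- an illustration only.
\[
  P_L = \hat{p} - 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}},
  \qquad
  P_U = \hat{p} + 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\]
% Both limits are functions of the observed \hat{p} and so vary from
% sample to sample; the population proportion P is fixed. Over repeated
% samples, about 95% of the realized intervals (P_L, P_U) will contain P.
```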
[1] Surely the time has come to stop referring to the Daubert factors and acknowledge that the Daubert case was just one small step in the maturation of evidence law. That maturation consisted of three additional Supreme Court cases, many lower court cases, and the 2000 statutory revision to Federal Rule of Evidence 702. The Daubert factors hardly give due consideration to the depth and breadth of the law in this area.