TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Expert Witness – Ghost Busters

March 29th, 2016

Andrew Funkhouser was tried and convicted of selling cocaine. On appeal, the Missouri Court of Appeals affirmed his conviction and his 30-year prison sentence. State v. Funkhouser, 729 S.W.2d 43 (Mo. App. 1987). On a petition for post-conviction relief, Funkhouser asserted that he was deprived of his Sixth Amendment right to effective counsel. Funkhouser v. State, 779 S.W.2d 30 (Mo. App. 1989).

One of the alleged grounds of ineffectiveness was his lawyer’s failure to object to the prosecutor’s cross-examination of a defense expert witness, clinical psychologist Frederick Nolen, on Nolen’s belief in ghosts. Id. at 32. On direct examination, Nolen testified that he had published or presented on multiple personalities, hypnosis, and ghosts.

On cross-examination, the prosecution inquired of Nolen about his theory of ghosts:

“Q. Doctor, I believe that you’ve done some work in the theory of ghosts, is that right?

A. Yes.

Q. I believe you told me that some of that work you’d based on your own experiences, is that correct?

A. Yes.

Q. You also told me you have lived in a haunted house for 13 years, is that right?

A. Yes.

Q. You have seen the ghost, is that correct?

A. Yes.”

Id. at 32-33. Funkhouser asserted that the cross-examination was improper because his expert witness was examined on his religious beliefs, and his counsel was ineffective for failing to object. Id. at 33.  The Missouri Court of Appeals disagreed. Counsel are permitted to cross-examine an adversary’s expert witness

“in any reasonable respect that will test his qualifications, credibility, skill or knowledge and the value and accuracy of his opinions.”

The court held that any failure to object could not be incompetence because the examination was proper. Id.

So there you have it: wacky belief systems are fair game for cross-examination of expert witnesses, at least in the “Show-Me” state.

And this broad scope of cross-examination is probably a good thing, because almost anything seems to go in Missouri. The Show-Me state has been bringing up the rear in the law of expert witness admissibility. The Missouri Revised Statutes contain a version of Federal Rule of Evidence 702 that preserves the federal rule’s language as it stood before the federal statutory revision in 2000:

Expert witness, opinion testimony admissible–hypothetical question not required, when.

490.065. 1. In any civil action, if scientific, technical or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education may testify thereto in the form of an opinion or otherwise.

In January 2016, the Missouri state senate passed a bill that would bring the Missouri standard in line with the current federal rule of evidence. Most of the Republican senators voted for the bill; none of the Democrats voted in favor of the reform. Chris Semones, “Missouri: One Step Closer to Daubert,” in Expert Witness Network (Jan. 26, 2016).

Lipitor MDL Cuts the Fat Out of Specific Causation

March 25th, 2016

Ms. Juanita Hempstead was diagnosed with hyperlipidemia in March 1998. Over a year later, in June 1999, with her blood lipids still elevated, her primary care physician prescribed 20 milligrams of atorvastatin per day. Ms. Hempstead did not start taking the statin regularly until July 2000. In September 2002, her lipids were under control, her blood glucose was abnormally high, and she had gained 13 pounds since she was first prescribed a statin medication. Hempstead v. Pfizer, Inc., 2:14–cv–1879, MDL No. 2:14–mn–02502–RMG, 2015 WL 9165589, at *2-3 (D.S.C. Dec. 11, 2015) (C.M.O. No. 55 in In re Lipitor Marketing, Sales Practices and Products Liability Litigation) [cited as Hempstead]. In the fall of 2003, Hempstead experienced abdominal pain, and she stopped taking the statin for a few weeks, presumably because of a concern over potential liver toxicity. Her cessation of the statin led to an increase in her blood lipids, but her blood sugar remained elevated, although not in the range that would have been diagnostic of diabetes. In May 2004, about five years after starting on statin medication, having gained 15 pounds since 1999, Ms. Hempstead was diagnosed with type II diabetes mellitus. Id.

Living in a litigious society, and being bombarded with messages from the litigation industry, Ms. Hempstead sued the manufacturer of atorvastatin, Pfizer, Inc. In support of her litigation claim, Hempstead’s lawyers enlisted the support of Elizabeth Murphy, M.D., D.Phil., a Professor of Clinical Medicine, and Chief of Endocrinology and Metabolism at San Francisco General Hospital. Id. at *6. Dr. Murphy received her doctorate in biochemistry from Oxford University, and her medical degree from Harvard Medical School. Despite her degrees from elite institutions, Dr. Murphy never learned the distinction between ex ante risk and the assignment of causality in an individual patient.

Dr. Murphy claimed that atorvastatin causes diabetes, and that the medication caused Ms. Hempstead’s diabetes in 2004. Murphy pointed to a five-part test for her assessment of specific causation:

(1) reports or reliable studies of diabetes in patients taking atorvastatin;

(2) causation is biologically plausible;

(3) diabetes appeared in the patient after starting atorvastatin;

(4) the existence of other possible causes of the patient’s diabetes; and

(5) whether the newly diagnosed diabetes was likely caused by the atorvastatin.

Id. In response to this proffered testimony, the defendant, Pfizer, Inc., challenged the admissibility of Dr. Murphy’s opinion under Federal Rule of Evidence 702.

The trial court, in reviewing Pfizer’s challenge, saw that Murphy’s opinion essentially was determined by (1), (2), and (3), above. In other words, once Murphy had become convinced of general causation, she was willing to causally attribute diabetes to atorvastatin in every patient who developed diabetes after starting to take the medication. Id. at *6-7.

Dr. Murphy relied upon some epidemiologic studies that suggested a relative risk of diabetes of about 1.5 in patients who had taken atorvastatin. Id. at *5, *8. Unfortunately, the trial court, as is all too common among judges writing Rule 702 opinions, failed to provide citations to the materials upon which plaintiff’s expert witness relied. A safe bet, however, is that those studies, if they had any internal and external validity at all, involved multivariate analyses of risk ratios for diabetes at time t1, in patients who had no diabetes before starting atorvastatin at time t0, compared with patients who did not have diabetes at t0 and never took the statin. If so, then Dr. Murphy’s reliance upon a temporal relationship between starting atorvastatin and developing diabetes is quite irrelevant, because the relative risk (1.5) relied upon was generated in studies in which that temporality was already present. Ms. Hempstead’s development of diabetes five years after starting atorvastatin does not make her part of a group with a relative risk any higher than the risk ratio of 1.5 cited by Dr. Murphy. Similarly, the absence or presence of putative risk factors other than the accused statin is irrelevant, because the risk ratio of 1.5 was most likely arrived at in studies that controlled or adjusted for the other risk factors by multivariate analysis. Id. at *5 & n. 8.

Dr. Murphy acknowledged that there are known risk factors for diabetes, and that plaintiff Ms. Hempstead had a few. Plaintiff was 55 years old at the time of diagnosis, and advancing age is a risk factor. Plaintiff’s body mass index (BMI) was elevated, and it had increased over the five years since she began taking atorvastatin. Even though Ms. Hempstead was not obese, her BMI was sufficiently high to confer a five-fold increase in risk for diabetes. Id. at *9. Plaintiff also had hypertension and metabolic syndrome, both of which are risk factors (with the latter adding to the level of risk of the former). Id. at *10. Perhaps hoping to avoid the intractable problem of identifying which risk factors were actually at work in Ms. Hempstead to produce her diabetes, Dr. Murphy claimed that all risk factors were causes of plaintiff’s diabetes. Her analysis was thus not so much a differential etiology as a non-differential, non-discriminating assertion that any and all risk factors were probably involved in producing the individual case. Not surprisingly, Dr. Murphy, when pressed, could not identify any professional organizations or peer-reviewed publications that employed such a methodology of attribution. Id. at *6. Dr. Murphy had never used such a method of attribution in her clinical practice; instead she attempted to justify and explain her methodology by adverting to its widespread use by expert witnesses in litigation. Id.

Relative Risk and the Inference of Specific Causation

The main thrust of Dr. Murphy’s and the plaintiff’s specific causation claim seems to have been a simplistic identification of ex ante risk with causation. The MDL court recognized, however, that in science and in law, risk is not the same as causation.[1]

The existence of general causation, with elevated relative risks not likely the result of bias, chance, or confounding, does not necessarily support the inference that every person who was exposed to the substance or drug, and who developed the outcome of interest, had his or her outcome caused by the exposure.

The law requires each plaintiff to show that his or her alleged injury, the outcome in the relied-upon epidemiologic studies, was actually caused by the alleged exposure, by a preponderance of the evidence. Id. at *4 (citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1249 n. 1 (11th Cir. 2010)).

The disconnect between risk and causation is especially strong when the claimed causation consists of a modification of the incidence rate of a disease as a function of exposure. Although the MDL court did not explicitly note the importance of a base rate, which gives rise to an “expected value” or “expected outcome” in an epidemiologic sample, the court’s insistence upon a relative risk greater than two, from studies of sample groups sufficiently similar to the plaintiff, implicitly affirms the principle. The MDL court did, however, call out as logically flawed Dr. Murphy’s reasoning that specific causation exists for every drug-exposed patient, in the face of studies that show general causation with associations of magnitude less than risk ratios of two. Id. at *8 (citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1255 (11th Cir. 2010) (“The fact that exposure to [a substance] may be a risk factor for [a disease] does not make it an actual cause simply because [the disease] developed.”)).

The MDL court acknowledged the obvious: some causal relationships may be based upon risk ratios of two or less (but greater than 1.0). Id. at *4. A risk ratio greater than 1.0, but not greater than two, can result only when some of the cases with the outcome of interest, here diabetes, would have occurred anyway in the sampled population. And with risk ratios at two or less, a majority of the exposed cases would have developed the outcome even in the absence of the exposure of interest. With this in mind, the MDL court asked how plaintiff could show specific causation, even assuming that general causation were established with the use of epidemiologic methods.

The court in Hempstead reasoned that if the risk ratio were greater than 2.0, a majority of the exposed sample would have developed the outcome of interest because of the exposure being studied. Id. at *5. If the sampled population has had the same level of exposure as the plaintiff, then a case-specific inference of specific causation is supported.[2] Of course, this inferential strategy presupposes that general causation has been established, by ruling out bias, confounding, and chance, with high-quality, statistically significant findings of risk ratios in excess of 2.0. Id. at *5.
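The “doubling of the risk” logic can be made concrete with a bit of arithmetic. The sketch below is illustrative only (the function name and example numbers are not from the Hempstead opinion): on the assumption of a valid, causal relative risk applicable to the plaintiff, the probability that an exposed case was caused by the exposure is the attributable fraction among the exposed, (RR − 1)/RR, which exceeds 50% only when RR exceeds 2.

```python
# Hedged sketch of the doubling-of-the-risk arithmetic; numbers and
# function name are illustrative, not taken from the opinion.

def attributable_fraction(rr: float) -> float:
    """Probability that an exposed case was caused by the exposure,
    given a valid, causal relative risk rr: (rr - 1) / rr."""
    if rr <= 1.0:
        return 0.0  # no excess risk, no probability of causation
    return (rr - 1.0) / rr

# A relative risk of 1.5, as relied upon in Hempstead, implies only a
# one-in-three probability that a given exposed case was caused by the
# exposure; only above rr = 2.0 does the probability pass 50%.
print(round(attributable_fraction(1.5), 3))  # 0.333
print(round(attributable_fraction(2.0), 3))  # 0.5
print(round(attributable_fraction(3.0), 3))  # 0.667
```

This is why the relied-upon risk ratio of 1.5, even if perfectly valid, cannot by itself carry a preponderance-of-the-evidence burden on specific causation.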

To be sure, there are some statisticians, such as Sander Greenland, who have criticized this use of a sample metric to assess the probability of individual causation, in part because the sample metric is an average level of risk, based upon the whole sample. Greenland is fond of speculating that the risk may not be stochastically distributed, but as the Supreme Court has recently acknowledged, there are times when the use of an average is appropriate to describe individuals within a sampled population. Tyson Foods, Inc. v. Bouaphakeo, No. 14-1146, 2016 WL 1092414 (U.S. S. Ct. Mar. 22, 2016).

The Whole Tsumish

Dr. Murphy, recognizing that there are other known and unknown causes and risk factors for diabetes, made a virtue of foolish consistency by opining that all risk factors present in Ms. Hempstead were involved in producing her diabetes. Dr. Murphy did not, and could not, explain, however, how or why she believed that every risk factor (age, BMI, hypertension, recent weight gain, metabolic syndrome, etc.), rather than some subset of factors, or some idiopathic factors, were involved in producing the specific plaintiff’s disease. The MDL court concluded that Dr. Murphy’s opinion was an ipse dixit of the sort that qualified her opinion for exclusion from trial. Id. at *10.

Biological Fingerprints

Plaintiffs posited typical arguments about “fingerprints” or biological markers that would support inferences of specific causation in the absence of high relative risks, but as is often the case with such arguments, they had no factual foundation for their claims that atorvastatin causes diabetes. Neither Dr. Murphy nor anyone else had ever identified a biological marker that allowed drug-exposed patients with diabetes to be identified as having had their diabetes actually caused by the drug of interest, as opposed to other known or unknown causes.

With Dr. Murphy’s testimony failing to satisfy common sense and Rule 702, plaintiff relied upon cases in which circumstances permitted inferences of specific causation from temporal relationships between exposure and outcome. In one such case, the plaintiff developed throat irritation from very high levels of airborne industrial talc exposure, which abated upon cessation of exposure, and returned with renewed exposure. Given that general causation was conceded, and given the natural experimental nature of challenge, dechallenge, and rechallenge, the Fourth Circuit held that the temporal relationship between an acute insult and onset was an adequate basis for expert witness opinion testimony on specific causation. Id. at *11 (citing Westberry v. Gislaved Gummi AB, 178 F.3d 257, 265 (4th Cir. 1999) (“depending on the circumstances, a temporal relationship between exposure to a substance and the onset of a disease or a worsening of symptoms can provide compelling evidence of causation”); Cavallo v. Star Enter., 892 F. Supp. 756, 774 (E.D. Va. 1995) (discussing unique, acute onset of symptoms caused by chemicals)). In the Hempstead case, however, the very nature of the causal relationship claimed did not involve an acute reaction. The claimed injury, diabetes, emerged five years after statin use commenced, and the epidemiologic studies relied upon were all based upon chronic use, with a non-acute, latent outcome. The trial judge thus would not credit the mere temporality between drug use and new-onset diabetes as probative of anything.


[1] Id. at *8, citing Guinn v. AstraZeneca Pharm. LP, 602 F.3d 1245, 1255 (11th Cir.2010) (“The fact that exposure to [a substance] may be a risk factor for [a disease] does not make it an actual cause simply because [the disease] developed.”); id. at *11, citing McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1243 (11th Cir.2005) (“[S]imply because a person takes drugs and then suffers an injury does not show causation. Drawing such a conclusion from temporal relationships leads to the blunder of the post hoc ergo propter hoc fallacy.”); see also Roche v. Lincoln Prop. Co., 278 F.Supp. 2d 744, 752 (E.D. Va.2003) (“Dr. Bernstein’s reliance on temporal causation as the determinative factor in his analysis is suspect because it is well settled that a causation opinion based solely on a temporal relationship is not derived from the scientific method and is therefore insufficient to satisfy the requirements of Rule 702.”) (internal quotes omitted).

[2] See Reference Manual on Scientific Evidence at 612 (3d ed. 2011) (noting “the logic of the effect of doubling of the risk”); see also Marder v.G.D. Searle & Co., 630 F. Supp. 1087, 1092 (D. Md.1986) (“In epidemiological terms, a two-fold increased risk is an important showing for plaintiffs to make because it is the equivalent of the required legal burden of proof-a showing of causation by the preponderance of the evidence or, in other words, a probability of greater than 50%.”).

The ASA’s Statement on Statistical Significance – Buzzing from the Huckabees

March 19th, 2016

People say crazy things. In a radio interview, Evangelical Michael Huckabee argued that the Kentucky county clerk who refused to issue a marriage license to a same-sex couple was as justified in defying an unjust court decision as people are justified in disregarding Dred Scott v. Sandford, 60 U.S. 393 (1857), which Huckabee described as still the “law of the land.”1 Chief Justice Roger B. Taney would be proud of Huckabee’s use of faux history, precedent, and legal process to argue his cause. Definition of “huckabee”: a bogus factoid.

Consider the case of Sander Greenland, who attempted to settle a score with an adversary’s expert witness, who had opined in 2002 that Bayesian analyses were rarely used at the FDA for reviewing new drug applications. The adversary’s expert witness obviously got Greenland’s knickers in a knot, because Greenland wrote an article, in a law review of all places, in which he attempted to “correct the record” and show how the statement of the opposing expert witness was “ludicrous.”2 To support his indictment on charges of ludicrousness, Greenland ignored the FDA’s actual behavior in reviewing new drug applications,3 and looked instead at the practice of the Journal of Clinical Oncology, a clinical journal that publishes 24 issues a year, with occasional supplements. Greenland found the word “Bayesian” 50 times in over 40,000 journal pages, and declared victory. According to Greenland, “several” (unquantified) articles had used Bayesian methods to explore, post hoc, statistically nonsignificant results.4

Given Greenland’s own evidence, the posterior odds that Greenland was correct in his charges seem to be disturbingly low, but he might have looked at the published papers that conducted more serious, careful surveys of the issue.5 This week, the Journal of the American Medical Association published yet another study by John Ioannidis and colleagues, which documented actual practice in the biomedical literature. And no surprise, Bayesian methods barely register in a systematic survey of the last 25 years of published studies. See David Chavalarias, Joshua David Wallach, Alvin Ho Ting Li, John P. A. Ioannidis, “Evolution of reporting P values in the biomedical literature, 1990-2015,” 315 J. Am. Med. Ass’n 1141 (2016). See also Demetrios N. Kyriacou, “The Enduring Evolution of the P Value,” 315 J. Am. Med. Ass’n 1113 (2016) (“Bayesian methods are not frequently used in most biomedical research analyses.”).

So what are we to make of Greenland’s animadversions in a law review article? It was a huckabee moment.

Recently, the American Statistical Association (ASA) issued a statement on the use of statistical significance and p-values. In general, the statement was quite moderate, and declined to move in the radical directions urged by some statisticians who attended the ASA’s meeting on the subject. Despite the ASA’s moderation, the ASA’s statement has been met with huckabee-like nonsense and hyperbole. One author, a pharmacologist trained at the University of Washington, with post-doctoral training at the University of California, Berkeley, and an editor of PLoS Biology, was moved to write:

“However, the ASA notes, the importance of the p-value has been greatly overstated and the scientific community has become over-reliant on this one – flawed – measure.”

Lauren Richardson, “Is the p-value pointless?” (Mar. 16, 2016). And yet, nowhere in the ASA’s statement does the group suggest that the p-value is a “flawed” measure. Richardson suffered a lapse and wrote a huckabee.

Not surprisingly, lawyers attempting to spin the ASA’s statement have unleashed entire hives of huckabees in an attempt to deflate the methodological points made by the ASA. Here is one example of a litigation-industry lawyer who argues that the American Statistical Association Statement shows the irrelevance of statistical significance for judicial gatekeeping of expert witnesses:

“To put it into the language of Daubert, debates over ‘p-values’ might be useful when talking about the weight of an expert’s conclusions, but they say nothing about an expert’s methodology.”

Max Kennerly, “Statistical Significance Has No Place In A Daubert Analysis” (Mar. 13, 2016) [cited as Kennerly].

But wait: the expert witness must be able to rule out chance, bias, and confounding when evaluating a putative association for causality. As Austin Bradford Hill explained, even before assessing a putative association for causality, scientists need first to have observations that

“reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance.”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965) (emphasis added).

The analysis of random error is an essential step in the methodological process. That a proper methodology requires consideration of non-statistical factors does not remove the statistical from the methodology. Ruling out chance as a likely explanation is a crucial first step in the methodology for reaching a causal conclusion when there is an “expected value” or base rate for the outcome of interest in the population being sampled.
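The role of a base rate in generating an “expected value” can be sketched numerically. The following is a hypothetical illustration, not drawn from any case discussed here: given an expected count implied by a known base rate, a Poisson tail probability measures how readily the play of chance alone would produce a count at least as large as the one observed.

```python
import math

# Hedged illustration; all numbers are hypothetical.
def poisson_upper_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected): how readily chance
    alone produces a count at least as large as the one observed."""
    lower = sum(math.exp(-expected) * expected**k / math.factorial(k)
                for k in range(observed))  # P(X <= observed - 1)
    return 1.0 - lower

# Suppose 1,000 sampled persons and a 1% base rate, so 10 "expected"
# cases. Fourteen observed cases are unremarkable; 25 would be hard
# to attribute to the play of chance.
print(poisson_upper_tail(14, 10.0))
print(poisson_upper_tail(25, 10.0))
```

Ruling out chance in this way is only the first step; bias and confounding remain to be addressed before any causal conclusion.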

Kennerly shakes his hive of huckabees:

“The erroneous belief in an ‘importance of statistical significance’ is exactly what the American Statistical Association was trying to get rid of when they said, ‘The widespread use of “statistical significance” (generally interpreted as p ≤ 0.05) as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.’”

And yet, the ASA never urged that scientists “get rid of” statistical analyses and assessments of attained levels of significance probability. To be sure, the ASA cautioned against overinterpreting p-values, especially in the context of multiple comparisons, non-prespecified outcomes, and the like. The ASA criticized bright-line rules, which are often used by litigation-industry expert witnesses to over-endorse the results of studies with p-values less than 5%, often in the face of multiple comparisons, cherry-picked outcomes, and poorly and incompletely described methods and results. What the ASA described as a “considerable distortion of the scientific process” was claiming scientific truth on the basis of “p < 0.05.” As Bradford Hill pointed out in 1965, a clear-cut association, beyond that which we would care to attribute to chance, is the beginning of the analysis of an association for causality, not the end of it. Kennerly ignores who is claiming “truth” in the litigation context. Defense expert witnesses frequently are opining no more than “not proven.” The litigation-industry expert witnesses must opine that there is causation, or else they are out of a job.

The ASA explained that the distortion of the scientific process comes from making a claim of a scientific conclusion of causality, or its absence, when the appropriate claim is “we don’t know.” The ASA did not say, suggest, or imply that a claim of causality can be made in the absence of statistical significance, validation of the statistical model on which it is based, and other factors as well. The ASA certainly did not say that the scientific process would be served well by reaching conclusions of causation without statistical significance. What is clear is that statistical significance should not be a shorthand that forecloses a much more expansive process. Reviewing the annals of the International Agency for Research on Cancer (even in its currently politicized state), or of the Institute of Medicine, an honest observer would be hard pressed to come up with examples of associations, for outcomes that have known base rates, which were determined to be causal in the absence of studies that exhibited statistical significance, along with many other indicia of causality.

Some other choice huckabees from Kennerly:

“It’s time for courts to start seeing the phrase ‘statistically significant’ in a brief the same way they see words like ‘very,’ ‘clearly,’ and ‘plainly’. It’s an opinion that suggests the speaker has strong feelings about a subject. It’s not a scientific principle.”

Of course, this ignores the central limit theorems, the importance of random sampling, the pre-specification of hypotheses and level of Type I error, and the like. Stuff and nonsense.

And then in a similar vein, from Kennerly:

“The problem is that many courts have been led astray by defendants who claim that ‘statistical significance’ is a threshold that scientific evidence must pass before it can be admitted into court.”

In my experience, it is litigation-industry lawyers who oversell statistical significance, rather than defense counsel, who may merely question reliance upon studies that lack it. Kennerly’s statement is not even wrong, however, because defense counsel knowledgeable of the rules of evidence would know that statistical studies themselves are rarely admitted into evidence. What is admitted, or not, is the opinion of expert witnesses, who offer opinions about whether associations are causal, or not causal, or inconclusive.


1 Ben Mathis-Lilley, “Huckabee Claims Black People Aren’t Technically Citizens During Critique of Unjust Laws,” The Slatest (Sept. 11 2015) (“[T]he Dred Scott decision of 1857 still remains to this day the law of the land, which says that black people aren’t fully human… .”).

2 Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014).

3 To be sure, eight years after Greenland published this diatribe, the agency promulgated a guidance that set recommended practices for Bayesian analyses in medical device trials. FDA Guidance for the Use of Bayesian Statistics in Medical Device Clinical Trials (February 5, 2010); 75 Fed. Reg. 6209 (February 8, 2010); see also Laura A. Thompson, “Bayesian Methods for Making Inferences about Rare Diseases in Pediatric Populations” (2010); Greg Campbell, “Bayesian Statistics at the FDA: The Trailblazing Experience with Medical Devices” (Presentation given by the Director, Division of Biostatistics, Center for Devices and Radiological Health, at Rutgers Biostatistics Day, April 3, 2009). Even today, Bayesian analysis remains uncommon at the U.S. FDA.

4 39 Wake Forest Law Rev. at 306-07 & n.61 (citing only one paper, Lisa Licitra et al., Primary Chemotherapy in Resectable Oral Cavity Squamous Cell Cancer: A Randomized Controlled Trial, 21 J. Clin. Oncol. 327 (2003)).

5 See, e.g., J. Martin Bland & Douglas G. Altman, “Bayesians and frequentists,” 317 Brit. Med. J. 1151, 1151 (1998) (“almost all the statistical analyses which appear in the British Medical Journal are frequentist”); David S. Moore, “Bayes for Beginners? Some Reasons to Hesitate,” 51 The Am. Statistician 254, 254 (“Bayesian methods are relatively rarely used in practice”); J.D. Emerson & Graham Colditz, “Use of statistical analysis in the New England Journal of Medicine,” in John Bailar & Frederick Mosteller, eds., Medical Uses of Statistics 45 (1992) (surveying 115 original research studies for statistical methods used; no instances of Bayesian approaches counted); Douglas Altman, “Statistics in Medical Journals: Developments in the 1980s,” 10 Statistics in Medicine 1897 (1991); B.S. Everitt, “Statistics in Psychiatry,” 2 Statistical Science 107 (1987) (finding only one use of Bayesian methods in 441 papers with statistical methodology).

The American Statistical Association’s Statement on and of Significance

March 17th, 2016

In scientific circles, some commentators have so zealously criticized the use of p-values that they have left uninformed observers with the impression that random error is not an interesting or important consideration in evaluating the results of a scientific study. In legal circles, counsel for the litigation industry and their expert witnesses have argued duplicitously that statistical significance is at once unimportant, except when statistical significance is observed, in which case causation is conclusive. The recently published statement of the American Statistical Association (“ASA”) restores some sanity to the scientific and legal discussions of statistical significance and p-values. Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” The American Statistician (Mar. 7, 2016) (in press), DOI:10.1080/00031305.2016.1154108.

Recognizing that sound statistical practice and communication affect research and public policy decisions, the ASA has published a statement of interpretative principles for statistical significance and p-values. The ASA’s statement first, and foremost, points out that the soundness of scientific conclusions turns on more than statistical methods alone. Study design, conduct, and evaluation often involve more than a statistical test result. And the ASA goes on to note, contrary to the contrarians, that “the p-value can be a useful statistical measure,” although this measure of attained significance probability “is commonly misused and misinterpreted.” ASA at 7. No news there.

The ASA’s statement puts forth six principles, all of which have substantial implications for how statistical evidence is received and interpreted in courtrooms. All are worthy of consideration by legal actors – legislatures, regulators, courts, lawyers, and juries.

1. “P-values can indicate how incompatible the data are with a specified statistical model.”

The ASA notes that a p-value shows the “incompatibility between a particular set of data and a proposed model for the data.” Although there are some in the statistical world who rail against null hypotheses of no association, the ASA reports that “[t]he most common context” for p-values consists of a statistical model that includes a set of assumptions, including a “null hypothesis,” which often postulates the absence of association between exposure and outcome under study. The ASA statement explains:

“The smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis, if the underlying assumptions used to calculate the p-value hold. This incompatibility can be interpreted as casting doubt on or providing evidence against the null hypothesis or the underlying assumptions.”

Some lawyers want to overemphasize statistical significance when present, but to minimize the importance of statistical significance when it is absent.  They will find no support in the ASA’s statement.
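The first principle can be illustrated with the simplest of null models. The example below is a hypothetical sketch, not drawn from the ASA statement itself: an exact binomial test of a fair-coin null shows how the p-value quantifies the incompatibility of observed data with a proposed model for the data.

```python
import math

# Hedged sketch of ASA principle 1; the fair-coin example is
# illustrative, not taken from the ASA statement.
def binom_two_sided_p(successes: int, n: int) -> float:
    """Exact two-sided p-value against a fair (p = 0.5) binomial null,
    doubling the upper tail (valid by symmetry at p = 0.5)."""
    upper = sum(math.comb(n, k) * 0.5**n for k in range(successes, n + 1))
    return min(1.0, 2.0 * upper)

# 60 heads in 100 flips: modest incompatibility with the fair-coin null.
print(round(binom_two_sided_p(60, 100), 4))  # ~0.0569
# 70 heads in 100 flips: data highly incompatible with the null model.
print(binom_two_sided_p(70, 100))
```

Note that the p-value speaks only to the data's compatibility with the null model and its assumptions; it says nothing, by itself, about bias, confounding, or the truth of any alternative hypothesis.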

2. “P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.”

Of course, there are those who would misinterpret the meaning of p-values, but the flaw lies in the interpreters, not in the statistical concept.
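A short simulation makes the distinction vivid: when the null hypothesis is true by construction, about five percent of p-values still fall below 0.05. The p-value measures how surprising the data are under the null, not the probability that the null is true. A sketch, assuming normally distributed data with known unit variance:

```python
import math
import random

random.seed(1)

def z_p_value(sample, mu0=0.0):
    """Two-sided p-value for a sample mean, with known sigma = 1."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# 2,000 simulated studies in which the null (mean = 0) is true:
ps = [z_p_value([random.gauss(0, 1) for _ in range(30)]) for _ in range(2000)]
print(sum(p < 0.05 for p in ps) / len(ps))  # close to 0.05, by design
```

The five percent of “significant” results here are all false positives; the p-value behaved exactly as advertised, and any contrary interpretation lies with the interpreter.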

3. “Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.”

Note that the ASA did not say that statistical significance is irrelevant to scientific conclusions. Of course, statistical significance is but one factor, which does not begin to account for study validity, data integrity, or model accuracy. The ASA similarly criticizes the use of statistical significance as a “bright line” mode of inference, without consideration of the contextual considerations of “the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis.” Criticizing the use of “statistical significance” as singularly assuring the correctness of scientific judgment does not, however, mean that “statistical significance” is irrelevant or unimportant as a consideration in a much more complex decision process.

4. “Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”

ASA Statement. See also “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).
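The arithmetic behind the multiple-comparisons problem is elementary: run k independent tests of true null hypotheses at the 0.05 level, and the chance of at least one nominally “significant” result is 1 − (0.95)^k. A brief illustration:

```python
# Family-wise false-positive probability for k independent tests,
# each at alpha = 0.05, when every null hypothesis is true:
alpha = 0.05
for k in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>3} tests: P(at least one p < 0.05) = {fwer:.2f}")
```

With twenty analyses, the odds are better than even that something “significant” turns up by chance alone — which is why the number of analyses performed, and not merely the ones reported, matters to valid inference.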

5. “A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.”

The ASA notes the commonplace distinction between statistical and practical significance. The independence between statistical and practical significance does not, however, make statistical significance irrelevant, especially in legal and regulatory contexts, in which parties claim that a risk, however small, is relevant. Of course, we want the claimed magnitude of association to be relevant, but we also need the measured association to be accurate and precise.
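A hypothetical numerical example of the distinction: the same practically trivial departure from a null proportion of 50% — half a percentage point — is statistically insignificant in a small sample and overwhelmingly “significant” in a large one (normal-approximation z-test, illustrative numbers):

```python
import math

def two_sided_p(successes: int, trials: int, p0: float) -> float:
    """Two-sided normal-approximation p-value for a binomial proportion."""
    p_hat = successes / trials
    se = math.sqrt(p0 * (1 - p0) / trials)
    return math.erfc(abs((p_hat - p0) / se) / math.sqrt(2))

# 50.5% successes against a null of 50.0%, at two sample sizes:
for n in (1_000, 1_000_000):
    successes = n // 2 + n // 200  # exactly 50.5% of n
    print(f"n = {n:>9,}: p = {two_sided_p(successes, n, 0.5):.4g}")
```

The effect size is identical in both runs; only the precision changes — which is why neither statistical nor practical significance can substitute for the other.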

6. “By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.”

Of course, a p-value cannot validate the model, which is assumed to generate the p-value. Contrary to the hyperbolic claims one sees in litigation, the ASA notes that “a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis.” And so the ASA counsels that “data analysis should not end with the calculation of a p-value when other approaches are appropriate and feasible.” 

What is important, however, is that the ASA never suggests that significance testing or measurement of significance probability is not an important and relevant part of the process. To be sure, the ASA notes that because of “the prevalent misuses of and misconceptions concerning p-values, some statisticians prefer to supplement or even replace p-values with other approaches.”

The first of these other methods, unsurprisingly, is estimation with assessment of confidence intervals, although the ASA includes Bayesian and other methods as well. There are some who express irrational exuberance about the potential of Bayesian methods to restore confidence in scientific process and conclusions. Bayesian approaches are less manipulated than frequentist ones, largely because very few people use Bayesian methods, and even fewer people really understand them.
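For illustration, a minimal sketch of both approaches on the same hypothetical data: a frequentist 95% Wald confidence interval for a proportion, alongside a conjugate beta-binomial Bayesian analysis under a uniform prior:

```python
import math

successes, trials = 60, 100  # hypothetical data
p_hat = successes / trials

# Frequentist: 95% Wald confidence interval for the proportion
se = math.sqrt(p_hat * (1 - p_hat) / trials)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI: ({lo:.3f}, {hi:.3f})")

# Bayesian: a uniform Beta(1, 1) prior updates to a Beta(61, 41)
# posterior; report the posterior mean of the proportion
a, b = 1 + successes, 1 + trials - successes
print(f"posterior mean: {a / (a + b):.3f}")
```

The estimation-first framing shifts attention from a significance threshold to the magnitude and precision of the effect — though, as noted above, such methods are only as trustworthy as the people applying them.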

In some ways, Bayesian statistical approaches are like Apple computers. The Mac OS is less vulnerable to viruses, compared with Windows, because its lower market share makes it less attractive to virus code writers. As Apple’s OS has gained market share, its vulnerability has increased. (My Linux computer, on the other hand, is truly less vulnerable to viruses because of system architecture, but also because Linux personal computers have almost no market share.) If Bayesian methods become more prevalent, my prediction is that they will be subject to as much abuse as frequentist methods. The ASA wisely recognized that the “reproducibility crisis” and loss of confidence in scientific research were mostly due to bias, both systematic and cognitive, in how studies are done, interpreted, and evaluated.