TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The New Wigmore on Learned Treatises

September 12th, 2011

I am indebted to Professor David Bernstein for calling my attention to the treatment of learned treatises in the new edition of his treatise on expert evidence:  David H. Kaye, David E. Bernstein, and Jennifer L. Mnookin, The New Wigmore: A Treatise on Evidence – Expert Evidence (2d ed. 2011).  Professor Bernstein suggested that I might find the treatment of learned treatises consistent with some of my concerns about the outdated rationale for allowing such works to be admissible for their truth.  See “Unlearning The Learned Treatise Exception” and “Further Unraveling of the Learned Treatise Exception.”

Having used the first edition of the New Wigmore, I purchased a copy of the second edition of the volume on expert evidence.  The second edition appears to be a valuable addition to the scholarly literature on expert witness opinion evidence, and I recommend it strongly to students and practitioners who wrestle with expert witness issues.

Chapter 5, a treatment of “Treatises and Other Learned Writings,” is a good descriptive account of the historical development of the common law hearsay exception and its modification by various statutes and codes.  Unlike many discussions of the learned treatise exception, The New Wigmore delves into the overlap between 803(18), which specifies “reliable authority,” and the reliability factors set out in the most recent version of Rule 702.  Although the case law on the relationship between the two rules is sparse and inconsistent, the authors make a strong case for a reliability criterion for learned treatises when such treatises are offered for the truth of the matters asserted.

The New Wigmore acknowledges that many courts and scholars have assumed that juries and most normal people have a difficult time following a limiting instruction to consider a learned treatise for assessing credibility but not for the truth. Refreshingly, the New Wigmore rejects the notion that difficulty in following a limiting instruction (if real) equates to meaninglessness for the distinction.  In the context of Rule 702 or 703 motions to exclude, and accompanying motions for summary judgment, the issue whether a learned treatise statement is admissible for its truth may be outcome determinative of the motions.

The sad truth, touched on but not directly confronted by the New Wigmore, is that so much of the biomedical literature is carelessly written, with only cursory “peer review.”  See “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense” (Aug. 14, 2011). Professor Wigmore was impressed by the desire of treatise authors to offer trustworthy opinions to avoid ridicule by their peers; in our era, scientists are not so impressed by publication as a guarantor of trustworthiness.  See, e.g., Douglas G. Altman, “Poor Quality Medical Research:  What Can Journals Do?” 287 J. Am. Med. Ass’n 2765 (2002).  There is a good deal of rubbish out there in the published literature, and most courts have not considered how to stem the flood of this rubbish into the courtroom through the 803(18) loophole.

There are yet other problems with Rule 803(18), discussed in The New Wigmore.  The language of the rule is ambiguous. Does the requirement of “reliable authority” apply to the author, the text or journal, or the statement itself?  If it applies only to the author or the publication, then there is no real assurance that the work is reliable in the way required by Rule 702.  If the status of the text, the journal, or the author is the sole criterion under 803(18), then we have a Ferebee-like rule that countenances the opinion of any willing, available, qualified author.  And the bar to publication these days is probably lower than the bar to being selected as a suitable testifying expert witness.

Authority is not a concept that is much at home in scientific discourse.  Nullius in verba, and all that.  If a statement in a publication is truly “authoritative,” it is because it is well supported by the facts and data on which it is based.

The New Wigmore goes beyond the coincidence of the word “reliable” in Rules 702 and 803(18), and argues that the logic of using a hearsay “learned treatise” for the truth of the matter asserted requires that the statement itself is reliably based. Here is how the second edition states its case for importing the requirements of Rule 702 into Rule 803(18):

“It would be not so difficult to conclude that assertions in a treatise that are not ‘the product of reliable principles or methods’ under Rule 702(2), for example, also are not ‘a reliable authority’ under Rule 803(18).”

Id. at 228, § 5.4.2. The triple negative may obscure the gist of the authors’ meaning, but I think their point is clear.  Let me attempt to restate their point without the negatives:

It is easy to conclude that treatise opinions that fail 702 would also fail to qualify for the 803(18) exception.

Of course, even if a treatise statement satisfies 702, that statement would not necessarily qualify for the 803(18) exception.  The learned treatise exception also has a “recognition” requirement; one of the testifying expert witnesses must recognize the treatise as “authoritative,” “learned,” or whatnot, or the court must take judicial notice of its status.  The treatise could have the most detailed discussion and documentation of its opinions, with flawless reasoning and evidential assessment, but if it were just translated from Georgian, and unknown to the expert witnesses and the court, it would not qualify as a learned treatise.  More than epistemic reliability seems to be required: the status of the publication, as reflected in the renown of the author and/or text. That status creates a normative obligation upon the expert witnesses to be aware of the publication’s pronouncements and to reconcile or to incorporate its statements into their courtroom opinions.

The New Wigmore’s rejection of “authoritarianism” for Rule 803(18) is commendable, but difficult to achieve in practice.  Rule 702 has evolved into an important tool to ensure that opinions offered in court are “evidence based,” rather than predicated solely on the professional status of their authors.  Along with the epistemic requirements of Rule 702, the procedural requirements of Federal Rule of Civil Procedure 26 ensure that the opinion’s author has stated all opinions, and all bases, as well as everything considered along the way in forming the opinion.  The reality is that most textbooks and treatises offer only short, conclusory consideration of issues that are likely important to the resolution of a lawsuit.  Frequently, a textbook cites a few studies that support the author’s opinion, without a sustained discussion of conflicting evidence, study validity, and the like.  An opinion that might be the subject of a 50-page Rule 26-compliant report may be reduced to a sentence or two in a textbook, which was published several years before the close of discovery in the case.  These are hardly propitious conditions for a truly learned treatise, or for a 702-sufficient opinion.

Perhaps more promising is the development of the “systematic review,” which sets out to provide an evidence-based foundation for causal claims. See, e.g., Michael B. Bracken, “Commentary: Toward systematic reviews in epidemiology,” 30 Internat’l J. Epidem. 954 (2001).  Such reviews identify a research question, pre-specify the methodological approach to varying study designs and validity questions, search for all the data available that can contribute to answering the question, and provide a disciplined attempt to answer the research question.  Systematic reviews come very close to satisfying the needs of the courtroom, and the requirements of both Rules 702 and 803(18).  The trouble is, of course, that most traditional textbooks and narrative reviews, and “learned treatises,” are far off course from the epistemic path taken by systematic reviews.

The New Wigmore also raises the interesting question whether individual published studies are “learned treatises.” If they were, then an expert witness could rely upon them, per Rule 703, and the sponsoring party could actually offer them into evidence (or at least as an exhibit, with some right to show the jury their results).  An individual study, however, would seem to fall well short of the comprehensiveness required for a Rule 702 opinion, at least in situations where there are other studies.

An irreducible problem in this area is that Rule 702 separates the “authority” of the speaker, in the form of qualifications to give an expert opinion, from the “reliability” of the opinion itself.  This separation, when followed, has been a huge achievement for the improvement of science in the courtroom.  Qualifications are a rather minimal necessary requirement, and even at best are a weak proxy for the reliability of the opinion given in court.  Many key 702 decisions involved expert witnesses with substantial, impressive qualifications. Despite these qualifications, courts excluded the witnesses’ proffered opinions because they were inadequately or unreliably supported.  Reliability under Rule 702 is thus an “evidence-based” requirement. The New Wigmore authors are correct that it is time to abandon “authority” as the guarantor of reliability in favor of “evidence-based principles.”

Milward — Unhinging the Courthouse Door to Dubious Scientific Evidence

September 2nd, 2011

It has been an interesting year in the world of expert witnesses.  We have seen David Egilman attempt a personal appeal of a district court’s order excluding him as an expert.  Stephen Ziliak has prattled on about how he steered the Supreme Court from the brink of disaster by helping them to avoid the horrors of statistical significance.  And then we had a philosophy professor turned expert witness, Carl Cranor, publicly touting an appellate court’s decision that held his testimony admissible.  Cranor, under the banner of the Center for Progressive Reform (CPR), hails the First Circuit’s opinion as the greatest thing since Sir Isaac Newton.   Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).

Philosophy Professor Carl Cranor has been trying for decades to dilute the scientific approach to causal conclusions to permit the precautionary principle to find its way into toxic tort cases.  Cranor, along with others, has also criticized federal court expert witness gatekeeping for deconstructing individual studies, showing that the individual studies are weak, and ignoring the overall pattern of evidence from different disciplines.  This criticism has some theoretical merit, but the criticism is typically advanced as an excuse for “manufacturing certainty” from weak, inconsistent, and incoherent scientific evidence.  The criticism also ignores the actual text of the relevant rule – Rule 702, which does not limit the gatekeeping court to assessing individual “pieces” of evidence.  The scientific community acknowledges that there are times when a weaker epidemiologic dataset may be supplemented by strong experimental evidence that leads appropriately to a conclusion of causation.  See, e.g., Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sci. 223 (2011) (noting the lack of a systematic, transparent way to integrate toxicologic and epidemiologic data to support conclusions of causality; proposing a “grid” to permit disparate lines of evidence to be integrated into more straightforward conclusions).

For the most part, Cranor’s publications have been ignored in the Rule 702 gatekeeping process.  Perhaps that is why he shrugged off his academic regalia and took on the mantle of the expert witness, in Milward v. Acuity Specialty Products, a case involving a claim that benzene exposure caused plaintiff’s acute promyelocytic leukemia (APL), one of several types of acute myeloid leukemia.  Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp. 2d 137 (D. Mass. 2009) (O’Toole, J.).

Philosophy might seem like the wrong discipline to help a court or a jury decide general and specific causation of a rare cancer, with an incidence of less than 8 cases per million per year.  (A PubMed search on leukemia and Cranor yielded no hits.)  Cranor supplemented the other, more traditional testimony from a toxicologist, by attempting to show that the toxicologist’s testimony was based upon sound scientific method.  Cranor was particularly intent on showing that the toxicologist, Dr. Martyn Smith, had used sound method to reach a scientific conclusion, even though he lacked strong epidemiologic studies to support his opinion.

The district court excluded Cranor’s testimony, along with plaintiff’s scientific expert witnesses.  The Court of Appeals, however, reversed, and remanded with instructions that plaintiff’s scientific expert witnesses’ opinions were admissible.  639 F.3d 11 (1st Cir. 2011).  Hence Cranor’s and the CPR’s hyperbole about the opening of the courthouse doors.

The district court was appropriately skeptical about plaintiff’s expert witnesses’ reliance upon epidemiologic studies, the results of which were not statistically significant.  Before reaching the issue of statistical significance, however, the district court found that Dr. Smith had relied upon studies that did not properly support his opinion.  664 F.Supp. 2d at 148.  The defense presented Dr. David Garabrant, an expert witness with substantial qualifications and accomplishments in epidemiologic science.  Dr. Garabrant persuaded the Court that Dr. Smith had relied upon some studies that tended to show no association, and others that presented faulty statistical analyses.  Other studies, relied upon by Dr. Smith, presented data on AML, but Dr. Smith speculated that these AML cases could have been APL cases.  Id.

None of the studies relied upon by plaintiffs’ Dr. Smith had a statistically significant result for APL.  Id. at 144. The district court pointed out that scientists typically take care to rely only upon data that show “statistical significance,” and Dr. Smith (plaintiffs’ expert witness) deviated from sound scientific method in attempting to support his conclusion with studies that had not ruled out chance as an explanation for their increased risk ratios.  Id.  The district court did not summarize the studies’ results, and so the unsoundness of plaintiffs’ method is difficult to evaluate.  Rather than engaging in hand waving and speculating about “trends” and suggestions, those witnesses could have performed a meta-analysis to increase the statistical precision of a summary point estimate beyond what was achieved in any single, small study.  Neither the plaintiffs nor the district court addressed the issue of aggregating study results to address the role of chance in producing the observed results.
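
To make the omission concrete, here is a minimal sketch, in Python, of the sort of fixed-effect (inverse-variance) meta-analysis the plaintiffs’ witnesses could have offered.  The study results in the sketch are hypothetical placeholders, not the data in the Milward record; the point is only that several individually non-significant risk ratios, if reasonably consistent, can pool into a statistically precise summary estimate.

import math

# Hypothetical study results (risk ratio, lower 95% CI, upper 95% CI);
# illustrative placeholders only, not the studies cited in Milward.
studies = [(1.8, 0.7, 4.6), (1.4, 0.6, 3.3), (2.1, 0.8, 5.5)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # standard error recovered from the CI
    w = 1.0 / se ** 2                                # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled_log / pooled_se
p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print("pooled RR = %.2f (95%% CI %.2f-%.2f), p = %.3f" % (
    math.exp(pooled_log),
    math.exp(pooled_log - 1.96 * pooled_se),
    math.exp(pooled_log + 1.96 * pooled_se),
    p_two_sided))

With these made-up inputs, no single study is statistically significant, yet the pooled risk ratio is about 1.7, with a 95% confidence interval of roughly 1.0 to 2.9 and a p-value just under 0.05, which is precisely the sort of analysis that would have allowed the role of chance to be assessed directly.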

The inability to show a statistically significant result was not surprising given how rare the APL subtype of AML is.  Sample size might legitimately interfere with the ability of epidemiologic studies to detect a statistically significant association that really existed.  If this were truly the case, the lack of a statistically significant association could not be interpreted to mean the absence of an association without potentially committing a type II error. In any event, the district court in Milward was willing to credit the plaintiffs’ claim that epidemiologic evidence may not always be essential for establishing causality.  If causality does exist, however, epidemiologic studies are usually required to confirm the existence of the causal relationship.  Id. at 148.
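
The sample-size point can be made quantitative.  Below is a rough, back-of-the-envelope power calculation in Python, using the standard normal approximation for the log risk ratio in a cohort study.  The cohort sizes, follow-up period, and assumed true risk ratio are my own hypothetical inputs, chosen only to illustrate how little power a realistic study of so rare a disease would have.

import math

def approx_power(n_exposed, n_unexposed, baseline_risk, true_rr, z_alpha=1.96):
    """Approximate power of a cohort study to detect a true risk ratio,
    using the normal approximation for log(RR) with expected cell counts."""
    exposed_cases = n_exposed * baseline_risk * true_rr    # expected cases, exposed group
    unexposed_cases = n_unexposed * baseline_risk          # expected cases, unexposed group
    se_log_rr = math.sqrt(1 / exposed_cases - 1 / n_exposed
                          + 1 / unexposed_cases - 1 / n_unexposed)
    z = math.log(true_rr) / se_log_rr
    # probability that the study yields a statistically significant (two-sided) result
    return 0.5 * (1 + math.erf((z - z_alpha) / math.sqrt(2)))

# Hypothetical inputs: APL incidence of about 8 per million per year, ten years
# of follow-up (cumulative baseline risk 8e-5), 10,000 exposed and 10,000
# unexposed workers, and a true risk ratio of 2.
print(approx_power(10_000, 10_000, 8e-5, 2.0))   # roughly 0.07

On those assumptions, a cohort of 20,000 workers followed for a decade would have only about a seven percent chance of producing a statistically significant result even if benzene truly doubled the risk, which is the sense in which a non-significant finding cannot simply be equated with the absence of an association.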

The district court also took a close look at Smith’s mechanistic biological evidence, and found it equally speculative.  Although plausibility is a desirable feature of a causal hypothesis, it only sets the stage for actual data:

“Dr. Smith’s opinion is that ‘[s]ince benzene is clastogenic and has the capability of breaking and rearranging chromosomes, it is biologically plausible for benzene to cause’ the t(15;17) translocation. (Smith Decl. ¶ 28.b.) This is a kind of ‘bull in the china shop’ generalization: since the bull smashes the teacups, it must also smash the crystal. Whether that is so, of course, would depend on the bull having equal access to both teacups and crystal.”

Id. at 146.

“Since general extrapolation is not justified and since there is no direct observational evidence that benzene causes the t(15;17) translocation, Dr. Smith’s opinion — that because benzene is an agent that can cause some chromosomal mutations, it is ‘plausible’ that it causes the one critical to APL—is simply an hypothesis, not a reliable scientific conclusion.”

Id. at 147.

Judge O’Toole’s opinion is a careful, detailed consideration of the facts and data upon which Dr. Smith relied, but the First Circuit found an abuse of discretion, and reversed. 639 F.3d 11 (1st Cir. 2011).

The Circuit incorrectly suggested that Smith’s opinion was based upon a “weight of the evidence” methodology described by “the world-renowned epidemiologist Sir Arthur Bradford Hill in his seminal methodological article on inferences of causality. See Arthur Bradford Hill, The Environment and Disease: Association or Causation?, 58 Proc. Royal Soc’y Med. 295 (1965).” Id. at 17.  This suggestion is remarkable because everyone knows that it was Arthur’s much smarter brother, Austin, who wrote the seminal article and gave the Bradford Hill name to the famous presidential address published by the Royal Society of Medicine.  Arthur Bradford Hill was not even a knight if he existed at all.

The Circuit’s suggestion is also remarkable for confusing a vague “weight of the evidence” methodology with the statistical and epidemiologic approach of one of the 20th century’s great methodologists.  Sir Austin is known for having conducted the first modern randomized clinical trial, as well as having shown, with fellow knight Sir Richard Doll, the causal relationship between smoking and lung cancer.  Sir Austin wrote one of the first texts on medical statistics, Principles of Medical Statistics (London 1937).  Sir Austin no doubt was turning in his grave when he was associated with Cranor’s loosey-goosey “weight of the evidence” methodology.  See, e.g., Douglas L. Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of “weight of evidence” review).

The Circuit adopted a dismissive attitude towards epidemiology in general, citing to an opinion piece by several cancer tumor biologists, whom the court described as a group from the National Cancer Institute (NCI).  The group was actually a workshop sponsored by the NCI, with participants from many institutions.  Id. at 17 (citing Michele Carbon[e] et al., “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Res. 5518, 5522 (2004)).  The cited article did report some suggestions for modifying Bradford Hill’s criteria in the light of modern molecular biology, as well as a sense of the group that there was no “hierarchy” in which epidemiology was at the top.  (The group definitely did not address the established concept that some types of epidemiologic studies are analytically more powerful to support inferences of causality than others — the hierarchy of epidemiologic evidence.)

The Circuit then proceeded to evaluate Dr. Smith’s consideration of the available epidemiologic studies.  The Circuit mistakenly defined an “odds ratio” as “the difference in the incidence of a disease between a population that has been exposed to benzene and one that has not.”  Id. at 24. Having failed to engage with the evidence sufficiently to learn what an odds ratio was, the Circuit Court then proceeded to state that the difference between Dr. Garabrant and Dr. Smith, as to how to calculate the odds ratio in some of the studies, was a mere difference in opinion between experts, and Dr. Garabrant’s criticisms of Dr. Smith’s approach went to the weight, not the admissibility, of the evidence.  These sparse words are, of course, a legal conclusion, not an explanation, and the Circuit leaves us without any real understanding of how Dr. Smith may have gone astray, but still have been advancing a legitimate opinion within epidemiology, which was not his discipline.  Id. at 22. If Dr. Smith’s idea of an odds ratio was as incorrect as the Circuit’s, his calculation may have had no validity whatsoever, and thus his opinions derived from his flawed ideas may well have failed the requirements of Rule 702.  The Circuit’s opinion is not terribly helpful in understanding anything other than its summary rejection of the district court’s more detailed analysis.
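
For readers who want to see exactly what the Circuit garbled, here is a minimal worked example, in Python, of how an odds ratio is computed from a 2×2 table, alongside the risk ratio and the risk difference (the “difference in the incidence” that the Circuit actually described).  The counts are invented for illustration and do not come from any study in the Milward record.

# Hypothetical 2x2 table (illustrative counts only):
#                 cases     non-cases
# exposed         a = 20    b = 980
# unexposed       c = 10    d = 990
a, b, c, d = 20, 980, 10, 990

odds_ratio = (a / b) / (c / d)                    # (a*d)/(b*c): ratio of the odds of disease
risk_exposed = a / (a + b)                        # incidence among the exposed
risk_unexposed = c / (c + d)                      # incidence among the unexposed
risk_ratio = risk_exposed / risk_unexposed        # ratio of the incidences
risk_difference = risk_exposed - risk_unexposed   # what the Circuit described

print(odds_ratio, risk_ratio, risk_difference)    # about 2.02, 2.0, and 0.01

The odds ratio and the risk ratio are both ratios, and they are nearly equal when the disease is rare; neither is a “difference in the incidence” between the exposed and unexposed groups.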

The Circuit also advanced the “impossibility” defense for Dr. Smith’s failure to rely upon epidemiologic studies with statistically significant results.  Id. at 24. As noted above, such studies fail to rule out chance for their finding of risk ratios above or below 1.0 (the measure of no association).  Because the likelihood of obtaining a risk ratio of exactly 1.0 is vanishingly small, epidemiologic science must and does consider the role of chance in explaining data that diverges from a measure of no association.  Dr. Smith’s hand waving about the large size of the studies needed to show an increased risk may have some validity in the context of benzene exposure and APL, but it does not explain or justify the failure to use aggregative techniques such as meta-analysis.  The hand waving also does nothing to rule out the role of chance in producing the results he relied upon in court.

The Circuit Court appeared to misunderstand the very nature of the need for statistical evaluation of stochastic biological events, such as APL incidence in a population.  According to the Circuit, Dr. Smith’s reliance upon epidemiologic data was merely

“meant to challenge the theory that benzene exposure could not cause APL, and to highlight that the limited data available was consistent with the conclusions that he had reached on the basis of other bodies of evidence. He stated that ‘[i]f epidemiologic studies of benzene-exposed workers were devoid of workers who developed APL, one could hypothesize that benzene does not cause this particular subtype of AML.’ The fact that, on the  contrary, ‘APL is seen in studies of workers exposed to benzene where the subtypes of AML have been separately analyzed and has been found at higher levels than expected’ suggested to him that the limited epidemiological evidence was at the very least consistent with, and suggestive of, the conclusion that benzene can cause APL.

* * *

Dr. Smith did not infer causality from this suggestion alone, but rather from the accumulation of multiple scientifically acceptable inferences from different bodies of evidence.”

Id. at 25

But challenging the theory that benzene exposure does not cause APL does not help show the validity of the studies relied upon, or the inferences drawn from them.  This was plaintiffs’ and Dr. Smith’s burden under Rule 702, and the Circuit seemed to lose sight of the law and the science with Professor Cranor’s and Dr. Smith’s sleight of hand.  As for the Circuit’s suggestion that scraps of evidence from different kinds of scientific studies can establish scientific knowledge, this approach was rejected by the great mathematician, physicist, and philosopher of science, Henri Poincaré:

“[O]n fait la science avec des faits comme une maison avec des pierres; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.”

Henri Poincaré, La Science et l’Hypothèse (1905) (chapter 9, Les Hypothèses en Physique) (“Science is built of facts, as a house is built of stones; but an accumulation of facts is no more a science than a heap of stones is a house.”).  Litigants, either plaintiff or defendant, should not be allowed to pick out isolated findings in a variety of studies, and throw them together as if that were science.

As unclear and dubious as the Circuit’s opinion is, the court did not throw out the last 18 years of Rule 702 law.  The Court distinguished the Milward case, with its sparse epidemiologic studies, from those cases “in which the available epidemiological studies found that there is no causal link.”  Id. at 24 (citing Norris v. Baxter Healthcare Corp., 397 F.3d 878, 882 (10th Cir. 2005), and Allen v. Pa. Eng’g Corp., 102 F.3d 194, 197 (5th Cir. 1996)).  The Court, however, provided no insight into why the epidemiologic studies must rise to the level of showing no causal link before an expert can torture weak, inconsistent, and contradictory data to claim such a link.  This legal sleight of hand is simply a shifting of the burden of proof, which should have been on plaintiffs and Dr. Smith.  Desperation is not a substitute for adequate scientific evidence to support a scientific conclusion.

The Court’s failure to engage more directly with the actual data, facts, and inferences, however, is likely to cause mischief in federal cases around the country.

Ziliak Gives Legal Advice — Puts His Posterior On the Line

August 31st, 2011

I have posted before about the curious saga of two university professors of economics who tried to befriend the United States Supreme Court.  Professors Ziliak and McCloskey submitted an amicus brief to the Court, in connection with Matrixx Initiatives, Inc. v. Siracusano, ___ U.S. ___, 131 S.Ct. 1309 (2011).  Nothing unusual there, other than the Professors’ labeling themselves “Statistics Experts,” and then proceeding to commit a statistical howler of deriving a posterior probability from only a p-value.  See “The Matrixx Oversold” (April 4, 2011).
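
The howler is what statisticians call the transposition fallacy: treating the p-value, which is the probability of observing data at least as extreme as those seen assuming the null hypothesis is true, as though it were the probability that the null hypothesis is true given the data.  Bayes’ theorem shows why the two are not interchangeable; the display below is my own schematic restatement of the point, not language from the amicus brief:

p = \Pr(\text{data at least this extreme} \mid H_0)

\Pr(H_0 \mid \text{data}) = \frac{\Pr(\text{data} \mid H_0)\,\Pr(H_0)}{\Pr(\text{data} \mid H_0)\,\Pr(H_0) + \Pr(\text{data} \mid H_1)\,\Pr(H_1)}

The posterior probability on the left of the second expression depends upon the prior probabilities of the null and alternative hypotheses, and upon the likelihood of the data under the alternative, none of which can be read off a p-value alone.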

I seemed to be alone in my dismay over this situation, but recently Professor David Kaye, an author of the chapter on statistics in the Reference Manual on Scientific Evidence, weighed in with his rebuttal to Ziliak and McCloskey’s erroneous statistical contentions.  See “The Transposition Fallacy in Matrixx Initiatives, Inc. v. Siracusano: Part I” (August 19, 2011), and “The Transposition Fallacy in Matrixx Initiatives, Inc. v. Siracusano: Part II” (August 26, 2011).  Kaye’s analysis is well worth reading.

Having attempted to bamboozle the Justices on statistics, Stephen Ziliak has now turned his attention to an audience of statisticians and students of statistical science, with a short article in Significance on the Court’s decision in Matrixx.  Stephen Ziliak, “Matrixx v. Siracusano and Student v. Fisher:  Statistical Significance on Trial,” Significance 131 (September 2011).  Tellingly, Ziliak did not advance his novel, erroneous views of how to derive posterior odds or probabilities from p-values in the pages of a magazine published by the Royal Statistical Society.  Such gems were reserved for the audience of Justices and law clerks in Washington, D.C.  Instead of holding forth on statistical issues, Ziliak has used the pages of a statistical journal to advance equally bizarre, inexpert views about the legal meaning of a Supreme Court case.

The Matrixx decision involved the appeal from a dismissal of a complaint for failure to plead sufficient allegations in a securities fraud action.  No evidence was ever offered or refused; no expert witness opinion was held reliable or unreliable.  The defendant, Matrixx Initiatives, Inc., won the dismissal at the district court, only to have the complaint reinstated by the Court of Appeals for the Ninth Circuit.  The Supreme Court affirmed the reinstatement, and in doing so, did not, and could not, create a holding about the sufficiency of evidence to show causation in a legal proceeding.  Indeed, Justice Sotomayor, in writing for a unanimous Court, specifically stated that causation was not at issue, especially given that evidentiary displays far below what is necessary to show causation between a medication and an adverse event might come to the attention of the FDA, which agency in turn might find the evidence sufficient to order a withdrawal of the medication.

Ziliak, having given dubious statistical advice to the U.S. Supreme Court, now sets himself up to give equally questionable legal advice to the statistical community.  He asserts that Matrixx claimed that anosmia (the loss of the sense of smell) was unimportant because not “statistically significant.”  Id. at 132.  Matrixx Initiatives no doubt made several errors, but it never made this erroneous claim.  Ziliak gives no citation to the parties’ briefs; nor could one be given.  Matrixx never contended that anosmia was unimportant; its claim was that the plaintiffs had not sufficiently alleged facts showing that Matrixx had knowledge of a causal relationship such that its failure to disclose adverse event reports became a “material” omission under the securities laws.  The word “unimportant” does not occur in Matrixx’s briefs; nor was it uttered at oral argument.

Ziliak’s suggestion that “[t]he district court dismissed the case on the basis that investors did not prove ‘materiality’, by which that court meant ‘statistical significance’,” is nonsense.  Id. at 132.  The issue was never the sufficiency of evidence.  Matrixx did attempt to equate materiality with causation, and then argued that allegations of causation required, in turn, allegations of statistical significance.  In arguing the necessity of statistical significance, Matrixx was implicitly suggesting that an evidentiary display that fell short of supporting causation could not be material, when withheld from investors.  The Supreme Court had an easy time of disposing of Matrixx’s argument because causation was never at issue.  Everything that the Court did say about causation is readily discernible as dictum.

Ziliak erroneously reads into the Court’s opinion a requirement that a pharmaceutical company, reporting to the Securities and Exchange Commission “can no longer hide adverse effect [sic] reports from investors on the basis that reports are not statistically significant.”   Id. at 133.  Ziliak incorrectly refers to adverse event reports as “adverse effect reports,” which is a petitio principii.  Furthermore, this was not the holding of the Court.  The potentially fraudulent aspect of Matrixx’s conduct was not that it had “hidden” adverse event reports, but rather that it had adverse event reports and a good deal of additional information, none of which it had disclosed to investors, when at the same time, the company chose to give the investment community particularly bullish projections of future sales.  The medication involved, Zicam, was an over-the-counter formulation that never had the rigorous testing required for a prescription medication’s new drug application.

Curiously, Ziliak, the self-described statistics expert, fails to point out that adverse event reports could not achieve, or fail to achieve, statistical significance on the basis of the facts alleged in the plaintiffs’ complaint.  Matrixx, and its legal counsel, might be forgiven this oversight, but surely Ziliak the statistical expert should have noted this.  Indeed, if the parties and the courts had recognized that there never was an issue of statistical significance involved in the case, the entire premiss of Matrixx’s appeal would have been taken away.

To be a little fair to Ziliak, the Supreme Court, having disclaimed any effort to require proof of causation or to define the requisites of reliable evidence of causation, went ahead and offered its own dubious dictum on how statistical significance might not be necessary for causation:

“Matrixx’s argument rests on the premise that statistical significance is the only reliable indication of causation. This premise is flawed: As the SEC points out, “medical researchers … consider multiple factors in assessing causation.” Brief for United States as Amicus Curiae 12. Statistically significant data are not always available. For example, when an adverse event is subtle or rare, “an inability to obtain a data set of appropriate quality or quantity may preclude a finding of statistical significance.” Id., at 15; see also Brief for Medical Researchers as Amici Curiae 11. Moreover, ethical considerations may prohibit researchers from conducting randomized clinical trials to confirm a suspected causal link for the purpose of obtaining statistically significant data. See id., at 10-11.

A lack of statistically significant data does not mean that medical experts have no reliable basis for inferring a causal link between a drug and adverse events. As Matrixx itself concedes, medical experts rely on other evidence to establish an inference of causation. See Brief for Petitioners 44-45, n. 22. We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F.3d 171, 178 (C.A.6 2009); Westberry v. Gislaved Gummi AB, 178 F.3d 257, 263-264 (C.A.4 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F.2d 741, 744-745 (C.A.11 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”

What is problematic about this passage is that Justice Sotomayor was addressing situations that were not before the Court, and about which she had no appropriate briefing.  Her suggestion that randomized clinical trials are not always ethically appropriate is, of course, true, but that does not prevent an expert witness from relying upon observational epidemiologic studies – with statistically significant results – to support their causal claims.  Justice Sotomayor’s citation to the Best and the Westberry cases, again in dictum, is equally off the mark.  Both cases involve the application of differential etiological reasoning about specific causation, which presupposes that general causation has been previously and sufficiently shown.  Finally, Justice Sotomayor’s citation to the Wells case, which involved both general and specific causation issues, was inapposite because plaintiff’s expert witness in Wells did rely upon at least one study with a statistically significant result.  As I have pointed out before, the Wells case went on to become an example of one trial judge’s abject failure to understand and evaluate scientific evidence.

Postscript:

The Supreme Court’s statistical acumen may have been lacking, but the Justices seemed to have a good sense of what was really going on in the case.  In December 2010, Matrixx settled over 2,000 Zicam injury claims. On February 24, 2011, a month before the Supreme Court decided the Matrixx case, the federal district judge responsible for the Zicam multi-district litigation denied Matrixx’s motion to exclude plaintiffs’ expert witnesses’ causation opinions.  “First Zicam Experts Admitted by MDL Judge for Causation, Labeling Opinions” 15 Mealey’s Daubert Reporter (February 2011); In re Zicam Cold Remedy Marketing, Sales Practices and Products Liab. Litig., MDL Docket No. 2:09-md-02096, Document 1360 (D. Ariz. 2011).

After the Supreme Court affirmed the reinstatement of the securities fraud complaint, Charles Hensley, the inventor of Zicam, was arrested on federal charges of illegally marketing another drug, Vira 38, which he claimed was therapeutic and preventive for bird flu.  Stuart Pfeifer, “Zicam inventor arrested, accused of illegal marketing of flu drug,” Los Angeles Times (June 2, 2011).  Earlier this month, Mr. Hensley pleaded guilty to the charges of unlawful distribution.

Confusion Over Causation in Texas

August 27th, 2011

As I have previously discussed, a risk ratio (RR) ≤ 2 is a strong practical argument against specific causation. See “Courts and Commentators on Relative Risks to Infer Specific Causation”; “Relative Risks and Individual Causal Attribution”; and “Risk and Causation in the Law.”   But the RR > 2 threshold has little to do with general causation.  There are any number of well-established causal relationships where the magnitude of the ex ante relative risk in an exposed population is > 1, but ≤ 2.  The magnitude of risk for cardiovascular disease from smoking is one such well-known example.
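
The arithmetic behind the RR > 2 rule of thumb for specific causation is straightforward.  Under standard simplifying assumptions (an unbiased, unconfounded risk ratio that applies uniformly to everyone in the exposed group), the fraction of cases among the exposed that is attributable to the exposure is:

\text{attributable fraction among the exposed} = \frac{RR - 1}{RR}

That fraction exceeds 50 percent, making it more likely than not that a given exposed case was caused by the exposure, only when RR > 2; a RR of exactly 2 yields 50 percent, and a RR of 1.5 yields only about 33 percent.  None of this arithmetic speaks to whether the exposure can cause the disease at all, which is why the threshold does its work for specific, rather than general, causation.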

When assessing general causation from only observational epidemiologic studies, where residual confounding and bias may be lurking, it is prudent to require a RR > 2, as a measure of strength of the association that can help us rule out the role of systematic error.  As the cardiovascular disease/smoking example illustrates, however, there is clearly no scientific requirement that the RR be greater than 2 to establish general causation.  Much will depend upon the entire body of evidence.  If the other important Bradford Hill factors are present – dose-response, consistency, coherence, etc. – then risk ratios ≤ 2, from observational studies, may suffice to show general causation.  So the requirement of a RR > 2, for the showing of general causation, is a much weaker consideration than it is for specific causation.

Randomization and double blinding are major steps in controlling confounding and bias, but they are not complete guarantees that systematic bias has been eliminated.  A double-blinded, placebo-controlled, randomized clinical trial (RCT) will usually have less opportunity for bias and confounding to play a role.  Imposing a RR > 2 requirement for general causation thus makes less sense in the context of trying to infer general causation from the results of RCTs.

Somehow the Texas Supreme Court managed to confuse these concepts in an important decision this week, Merck & Co. v. Garza (August 26, 2011).

Mr. Garza had a long history of heart disease, spanning at least two decades, including a heart attack, and quadruple bypass and stent surgeries.  Garza’s physician prescribed 25 mg Vioxx for pain relief.  Garza died less than a month later, at the age of 71, of an acute myocardial infarction.  The plaintiffs (Mr. Garza’s survivors) were thus faced with the problem of showing the magnitude of the risk experienced by Mr. Garza, which risk would allow them to infer that his fatal heart attack was caused by his having taken Vioxx.  The studies relied upon by plaintiffs did show increased risk, consistently, for larger doses (50 mg) taken over longer periods of time.  The trial court entered judgment upon a jury verdict in favor of the plaintiffs.

The Texas Supreme Court reversed, and rendered the judgment for Merck.  The Court’s judgment was based largely upon its view that the studies relied upon did not apply to the plaintiff.  Here the Court was on pretty solid ground.  The plaintiffs also argued that Mr. Garza had a higher pre-medication, baseline risk, and that he therefore would have sustained a greater increased risk from short-term, low-dose use of Vioxx.  The Court saw through this speculative argument, and cautioned that the “absence of evidence cannot substitute for evidence.” Slip op. at 17.  The greater baseline does not mean that the medication imposed a greater relative risk on people like Mr. Garza, although it would mean that we would expect to see more cases from any subgroup that looked like him.  The attributable fraction and the difficulty in using risk to infer individual attribution, however, would remain the same.

The problematic aspect of the Garza case arises from the Texas Supreme Court’s conflating and confusing general with specific causation.  There was no real doubt that Vioxx at high-doses, for prolonged use, can cause heart attacks.  General causation was not at issue.  The attribution of Mr. Garza’s heart attack to his short-term, low-dose use of Vioxx, however, was at issue, and was a rather dubious claim.

The Texas Supreme Court proceeded to rely heavily upon its holding and language in Merrell Dow Pharmaceuticals, Inc. v. Havner, 953 S.W.2d 706 (Tex. 1997).  Havner was a Bendectin case, in which plaintiffs claimed that the medication caused specific birth defects.  Both general and specific causation were contested by the parties. The epidemiologic evidence in Havner came from observational studies, either case-control or cohort studies, and not RCTs.

The Havner decision insightfully recognized that risk does not equal causation, but that a RR > 2 is a practical compromise for allowing courts and juries to make the plaintiff-specific attribution in the face of uncertainty.  Havner, 953 S.W.2d at 717.  Merck latched on to this and other language, arguing that “Havner requires a plaintiff who claims injury from taking a drug to produce two independent epidemiological studies showing a statistically significant doubling of the relative risk of the injury for patients taking the drug under conditions substantially similar to the plaintiff’s (dose and duration, for example) as compared to patients taking a placebo.” Slip op. at 7.

The plaintiffs in Garza responded by arguing that their reliance upon RCTs relieved them of Havner’s requirement of showing a RR > 2.

The Texas Supreme Court correctly rejected the plaintiffs’ argument and followed its earlier decision in Havner on specific causation:

“But while the controlled, experimental, and prospective nature of clinical trials undoubtedly make them more reliable than retroactive, observational studies, both must show a statistically significant doubling of the risk in order to be some evidence that a drug more likely than not caused a particular injury.”

Slip op. at 10.

The Garza Court, however, went a dictum too far by expressing some of the Havner requirements as applying to general causation:

Havner holds, and we reiterate, that when parties attempt to prove general causation using epidemiological evidence, a threshold requirement of reliability is that the evidence demonstrate a statistically significant doubling of the risk. In addition, Havner requires that a plaintiff show ‘that he or she is similar to [the subjects] in the studies’ and that ‘other plausible causes of the injury or condition that could be negated [are excluded] with reasonable certainty’.40

Slip op. at 13-14 (quoting from Havner at 953 S.W.2d at 720).

General causation was not the dispositive issue in Garza, and so this language must be treated as dictum.  The sloppiness in confusing the requisites of general and specific causation is regrettable.

The plaintiffs also advanced another argument, which is becoming a commonplace in health-effects litigation.  They threw all their evidence into a pile, and claimed that the “totality of the evidence” supported their claims.  This argument is somehow supposed to supplant a reasoned approach to the issue of what specific inferences can be drawn from what kind of evidence.  The Texas Supreme Court saw through the pile, and dismissed the hand waving:

“The totality of the evidence cannot prove general causation if it does not meet the standards for scientific reliability established by Havner. A plaintiff cannot prove causation by presenting different types of unreliable evidence.”

Slip op. at 17.

All in all, the Garza Court did better than many federal courts that have consistently confused risk with cause, as well as general with specific causation.

Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense

August 14th, 2011

A recent editorial in the Annals of Occupational Hygiene is a poignant reminder of how oversold peer review is in the context of expert witness judicial gatekeeping.  Editor Trevor Ogden offers some cautionary suggestions:

“1. Papers that have been published after proper peer review are more likely to be generally right than ones that have not.

2. However, a single study is very unlikely to take everything into account, and peer review is a very fallible process, and it is very unwise to rely on just one paper.

3. The question should be asked, has any published correspondence dealt with these papers, and what do other papers that cite them say about them?

4. Correspondence will legitimately give a point of view and not consider alternative explanations in the way a paper should, so peer review does not necessarily validate the views expressed.”

Trevor Ogden, “Lawyers Beware! The Scientific Process, Peer Review, and the Use of Papers in Evidence,” 55 Ann. Occup. Hyg. 689, 691 (2011).

Ogden’s conclusions, however, are misleading.  For instance, he suggests that peer-reviewed papers are better than non-peer reviewed papers, but by how much?  What is the empirical evidence for Ogden’s assertion?  In his editorial, Ogden gives an anecdote of a scientific report submitted to a political body, and comments that this report would not have survived peer review.   But an anecdote is not a datum.  What’s worse is that the paper that is rejected by peer review at Ogden’s journal will show up in another publication, eventually.  Courts make little distinction between and among journals for purposes of rating the value of peer review.

Of course it is unwise, and perhaps scientifically unsound, as Ogden points out, to rely upon just one paper, but the legal process permits it.  Worse yet, litigants, either plaintiff or defendant, are often allowed to pick out isolated findings in a variety of studies, and throw them together as if that were science. “[O]n fait la science avec des faits comme une maison avec des pierres; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.” Henri Poincaré, La Science et l’Hypothèse (1905) (chapter 9, Les Hypothèses en Physique) (“Science is built of facts, as a house is built of stones; but an accumulation of facts is no more a science than a heap of stones is a house.”).

As for letters to the editor, sure, courts and litigants should pay attention to them, but as Ogden notes, these writings are themselves not peer reviewed, or not peer reviewed with very much analytical rigor.  The editing of letters raises additional concerns of imperious editors who silence some points of view to the benefit of others. Most journals have space only for a few letters, and unpopular but salient points of view can go unreported. Furthermore, many scientists will not write letters to the editors, even when the published article is terribly wrong in its methods, data analyses, conclusions, or discussion, because in most journals the authors typically have the last word in the form of a reply, which often is self-serving and misleading, with immunity from further criticism.

Ogden describes the limitations of peer review in some detail, but he misses the significance of how these limitations play out in the legal arena.

Limitations and Failures of Peer Review

For instance, Ogden acknowledges that peer review fails to remove important errors from published articles. Here he does provide empirical evidence.  S. Schroter, N. Black, S. Evans, et al., “What errors do peer reviewers detect, and does training improve their ability to detect them?” 101 J. Royal Soc’y Med. 507 (2008) (describing an experiment in which manuscripts were seeded with known statistical errors (9 major and 5 minor) and sent to 600 reviewers; each reviewer missed, on average, more than 6 of the 9 major errors).  Ogden tells us that the empirical evidence suggests that “peer review is a coarse and fallible filter.”

This is hardly a ringing endorsement.

Surveys of the medical literature have found that the prevalence of statistical errors ranges from 30% to 90% of papers.  See, e.g., Douglas Altman, “Statistics in medical journals: developments in the 1980s,” 10 Stat. Med. 1897 (1991); Stuart J. Pocock, M.D. Hughes, R.J. Lee, “Statistical problems in the reporting of clinical trials. A survey of three medical journals,” 317 New Engl. J. Med. 426 (1987); S.M. Gore, I.G. Jones, E.C. Rytter, “Misuse of statistical methods: critical assessment of articles in the BMJ from January to March 1976,” 1 Brit. Med. J. 85 (1977).

Without citing any empirical evidence, Ogden notes that peer review is not well designed to detect fraud, especially when the data are presented to look plausible.  Despite the lack of empirical evidence, the continuing saga of fraudulent publications coming to light supports Ogden’s evaluation. Peer reviewers rarely have access to underlying data.  In the silicone gel breast implant litigation, for instance, plaintiffs relied upon a collection of studies that looked very plausible from their peer-reviewed publications.  Only after the defense discovered misrepresentations and spoliation of data did the patent unreliability and invalidity of the studies become clear to reviewing courts.  The rate of retractions of published scientific articles appears to have increased, although the secular trend may have resulted from increased surveillance and scrutiny of the published literature for fraud.  Daniel S. Levine, “Fraud and Errors Fuel Research Journal Retractions,” (August 10, 2011); Murat Cokol, Fatih Ozbay, and Raul Rodriguez-Esteban, “Retraction rates are on the rise,” 9 European Molecular Biol. Reports 2 (2008);  Orac, “Scientific fraud and journal article retractions” (Aug. 12, 2011).

The fact is that peer review is not very good at detecting fraud or error in scientific work.  Ultimately, the scientific community must judge the value of the work, but in some niche areas, only “the acolytes” are paying attention.  These acolytes cite one another, applaud each other’s work, and often serve as peer reviewers of the work in the field because editors see them as the most knowledgeable investigators in the narrow field. This phenomenon seems especially prevalent in occupational and environmental medicine.  See Cordelia Fine, “Biased But Brilliant,” New York Times (July 30, 2011) (describing confirmation bias and irrational loyalty of scientists to their hobby-horse hypotheses).

Peer review and correspondence to the editors are not the end of the story.  Discussion and debate may continue in the scientific community, but the pace of this debate may be glacial.  In areas of research where litigation or public policy does not fuel further research to address aberrant findings or to reconcile discordant results, science may take decades to ferret out the error. Litigation cannot proceed at this deliberative speed.  Furthermore, post-publication review is hardly a cure-all for the defects of peer review; post-publication commentary can be, and often is, spotty and inconsistent.  David Schriger and Douglas Altman, “Inadequate post-publication review of medical research:  A sign of an unhealthy research environment in clinical medicine,” 341 Brit. Med. J. 356 (2010)(identifying reasons for the absence of post-publication peer review).

The Evolution of Peer Review as a Criterion for Judicial Gatekeeping of Expert Witness Opinion

The story of how peer review came to be held in such high esteem in legal circles is sad, but deserves to be told.  In the Bendectin litigation, the medication sponsor, Richardson-Merrell, was confronted with the testimony of an epidemiologist, Shanna Swan, who propounded her own, unpublished re-analysis of the published epidemiologic studies, which had failed to find an association between Bendectin use and birth defects.  Merrell challenged Swan’s unpublished, non-peer-reviewed re-analyses as not “generally accepted” under the Frye test.  The lack of peer review seemed like good evidence of the novelty of Swan’s reanalyses, as well as their lack of general acceptance.

In the briefings, the Supreme Court received radically different views of peer review in the Daubert case.  One group of amici modestly explained that “peer review referees and editors limit their assessment of submitted articles to such matters as style, plausibility, and defensibility; they do not duplicate experiments from scratch or plow through reams of computer-generated data in order to guarantee accuracy or veracity or certainty.” Brief for Amici Curiae Daryl E. Chubin, et al. at 10, Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993).  See also Daryl E. Chubin & Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (1990).

Other amici, such as the New England Journal of Medicine, Journal of the American Medical Association, and Annals of Internal Medicine, proposed that peer-reviewed publication should be the principal criterion for admitting scientific opinion testimony.  Brief for Amici Curiae New England Journal of Medicine, Journal of the American Medical Association, and Annals of Internal Medicine in Support of Respondent, Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993). But see Arnold S. Relman & Marcia Angell, “How Good Is Peer Review?” 321 New Eng. J. Med. 827, 828 (1989) (“peer review is not and cannot be an objective scientific process, nor can it be relied on to guarantee the validity or honesty of scientific research”).

Justice Blackmun, speaking for the majority in Daubert, steered a moderate course:

“Another pertinent consideration is whether the theory or technique has been subjected to peer review and publication. Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, see S. Jasanoff, The Fifth Branch: Science Advisors as Policymakers 61-76 (1990), and in some instances well-grounded but innovative theories will not have been published, see Horrobin, “The Philosophical Basis of Peer Review and the Suppression of Innovation,” 263 JAMA 1438 (1990). Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of “good science,” in part because it increases the likelihood that substantive flaws in methodology will be detected. See J. Ziman, Reliable Knowledge: An Exploration of the Grounds for Belief in Science 130-133 (1978); Relman & Angell, “How Good Is Peer Review?” 321 New Eng. J. Med. 827 (1989). The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”

Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 593-94, 590 n.9 (1993).

This lukewarm endorsement from Justice Blackmun, in Daubert, sent a mixed message to lower federal courts, which tended to make peer review into something of a mechanical test in their gatekeeping decisions.  Many federal judges (and state court judges in states that followed the Daubert precedent) were too busy, too indolent, or too lacking in analytical acumen to look past the fact of publication and peer review.  These judges avoided the labor of independent thought by taking the fact of peer-reviewed publication as dispositive of the validity of the science in the paper.  Some commentators encouraged this low level of scrutiny and mechanical test by suggesting that peer review could be taken as an indication of good science.  See, e.g., Margaret A. Berger, “The Supreme Court’s Trilogy on the Admissibility of Expert Testimony,” in Federal Judicial Center, Reference Manual on Scientific Evidence 9, 17 (2d ed. 2000) (describing Daubert as endorsing peer review as one of the “indicators of good science”) (hereafter cited as Reference Manual).  Elevating peer review to an indicator of good science, however, obscures its lack of epistemic warrant, misrepresents how the scientific community actually regards it, and enables judges to fall back into their pre-Daubert mindset of finding quick, easy, and invalid proxies for scientific reliability.

In a similar vein, other commentators spoke in superlatives about peer review, and thus managed further to mislead judges and decision makers into regarding anything published as valid scientific data, data interpretation, and data analysis. For instance, Professor David Goodstein, writing in the Reference Manual, advises the federal judiciary that peer review is the test that separates valid science from rubbish:

“In the competition among ideas, the institution of peer review plays a central role. Scientific articles submitted for publication and proposals for funding are often sent to anonymous experts in the field, in other words, peers of the author, for review. Peer review works superbly to separate valid science from nonsense, or, in Kuhnian terms, to ensure that the current paradigm has been respected.11 It works less well as a means of choosing between competing valid ideas, in part because the peer doing the reviewing is often a competitor for the same resources (pages in prestigious journals, funds from government agencies) being sought by the authors. It works very poorly in catching cheating or fraud, because all scientists are socialized to believe that even their bitterest competitor is rigorously honest in the reporting of scientific results, making it easy to fool a referee with purposeful dishonesty if one wants to. Despite all of this, peer review is one of the sacred pillars of the scientific edifice.”

David Goodstein, “How Science Works,” Reference Manual 67, at 74-75, 82 (emphasis added).

Criticisms of Reliance Upon Peer Review as a Proxy for Reliability and Validity

Other commentators have put forward a more balanced and realistic, if not jaundiced, view of peer review. Professor Susan Haack, a philosopher of science at the University of Miami, who writes frequently about epistemic claims of expert witnesses and judicial approaches to gatekeeping, described the disconnect in meaning of peer review to scientists and to lawyers:

“For example, though peer-reviewed publication is now standard practice at scientific and medical journals, I doubt that many working scientists imagine that the fact that a work has been accepted for publication after peer review is any guarantee that it is good stuff, or that it’s not having been published necessarily undermines its value.92 The legal system, however, has come to invest considerable epistemic confidence in peer-reviewed publication93 — perhaps for no better reason than that the law reviews are not peer-reviewed!”

Susan Haack, “Irreconcilable Differences?  The Troubled Marriage of Science and Law,” 72 Law & Contemporary Problems 1, 19 (2009).   Haack’s assessment of the motivation of actors in the legal system is, for a philosopher, curiously ad hominem, and her shameless dig at law reviews is ironic, considering that she publishes extensively in them.  Still, her assessment that peer review is no guarantee of an article’s being “good stuff” is one of her more coherent contributions to this discussion.

The absence of peer review hardly supports the inference that a study or an evaluation of studies is not reliable, unless of course we also know that the authors have failed, after repeated attempts, to find a publisher.  In today’s world of vanity presses, a researcher would be hard pressed not to find some journal in which to publish a paper.  As Drummond Rennie, a former editor of the Journal of the American Medical Association (the same journal that, acting as an amicus curiae to the Supreme Court, oversold peer review), has remarked:

“There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.”

Drummond Rennie, “Guarding the Guardians: A Conference on Editorial Peer Review,” 256 J. Am. Med. Ass’n 2391 (1986); D. Rennie, A. Flanagin, R. Smith, and J. Smith, “Fifth International Congress on Peer Review and Biomedical Publication: Call for Research,” 289 J. Am. Med. Ass’n 1438 (2003).

Other editors at leading medical journals seem to agree with Rennie.  Richard Horton, an editor of The Lancet, rejects the Goodstein view (from the Reference Manual) of peer review as the “sacred pillar of the scientific edifice”:

“The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability — not the validity — of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.”

Richard Horton, “Genetically modified food: consternation, confusion, and crack-up,” 172 Med. J. Australia 148 (2000).

In the prestigious 2010 Sense About Science lecture, Fiona Godlee, the editor of the British Medical Journal, characterized peer review as deficient in at least seven different ways:

  • Slow
  • Expensive
  • Biased
  • Unaccountable
  • Stifles innovation
  • Bad at detecting error
  • Hopeless at detecting fraud

Godlee, “It’s time to stand up for science once more” (June 21, 2010).

Important research often goes unpublished and never sees the light of day.  Anti-industry zealots are fond of pointing fingers at the pharmaceutical industry, although many firms, such as GlaxoSmithKline, have adopted a practice of posting study results on a website.  The anti-industry zealots overlook how many apparently neutral investigators suppress research results that do not fit with their pet theories.  One of my favorite examples is the failure of the late Dr. Irving Selikoff to publish his study of Johns-Manville factory workers:  William J. Nicholson, Ph.D. and Irving J. Selikoff, M.D., “Mortality experience of asbestos factory workers; effect of differing intensities of asbestos exposure,” Unpublished Manuscript.  This study investigated cancer and other mortality at a factory in New Jersey, where crocidolite was used in the manufacture of insulation products.  Selikoff and Nicholson apparently had no desire to publish a paper that would undermine their unfounded claim that crocidolite asbestos was not used by American workers.  But this desire does not necessarily mean that Nicholson and Selikoff’s unpublished paper was of any lesser quality than their study of North American insulators, the results of which they published, and republished, with abandon.

Examples of Failed Peer Review from the Litigation Front

Phenylpropanolamine and Stroke

Then there are many examples from the litigation arena of studies that passed peer review at the most demanding journals, but which did not hold up under the more intense scrutiny of review by experts in the cauldron of litigation.

In In re Phenylpropanolamine Products Liability Litigation, Judge Rothstein conducted hearings and entertained extensive briefings on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon one epidemiologic study, known as the “Yale Hemorrhagic Stroke Project” (HSP).  The Project was undertaken by manufacturers, which created a Scientific Advisory Group to oversee the study protocol.  The study was submitted as a report to the FDA, which reviewed the study and convened an advisory committee to review the study further.  “The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.” In re Phenylpropanolamine Prod. Liab. Litig., 289 F. Supp. 2d 1230, 1239 (W.D. Wash. 2003) (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science).  There were thus many layers of peer review for the HSP study.

The HSP study was subjected to much greater analysis in litigation.  Peer review, even in the New England Journal of Medicine, did not and could not carry this weight. The defendants fought to obtain the underlying data for the HSP, and that underlying data unraveled the HSP paper.  Despite the plaintiffs’ initial enthusiasm for a litigation built on the back of a peer-reviewed paper in one of the leading clinical journals of internal medicine, the litigation resulted in a string of notable defense verdicts.  After one of the early defense verdicts, plaintiffs challenged the defendant’s reliance upon underlying data that went behind the peer-reviewed publication.  The trial court rejected the request for a new trial, and spoke to the importance of looking behind the superficial imprimatur of peer review of the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers.”

“Yale gets — Yale gets a big black eye on this.”

O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46-47 (March 18, 2004) (Hon. Anthony J. Mohr).

Viagra and Ophthalmic Events

The litigation over ophthalmic adverse events after the use of Viagra provides another example of challenging peer review.  In re Viagra Products Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009).  In this litigation, the court, after viewing litigation discovery materials, recognized that the authors of a key paper failed to use the methodologies that were described in their published paper.  The court gave the sober assessment that “[p]eer review and publication mean little if a study is not based on accurate underlying data.” Id.

MMR Vaccine and Autism

Plaintiffs’ expert witness in the MMR vaccine/autism litigation, Andrew Wakefield, published a paper in The Lancet in which he purported to find an association between the measles-mumps-rubella vaccine and autism.  A.J. Wakefield, et al., “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children,” 351 Lancet 637 (1998).  This published paper, in a well-regarded journal, opened a decade-long controversy, with litigation, over the safety of the MMR vaccine.  The study was plagued, however, not only by the failure to disclose payments from plaintiffs’ attorneys and by ethical lapses in failing to obtain ethics board approvals, but also by substantially misleading reports of data and data analyses.  In 2010, Wakefield was sanctioned by the UK General Medical Council’s Fitness to Practise Panel.  That same year, over a decade after initial publication, the Lancet “fully retract[ed] this paper from the published record.”  Editors of the Lancet, “Retraction—Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children,” 375 Lancet 445 (2010).

Accutane and Suicide

In the New Jersey litigation over claimed health effects of Accutane, one of the plaintiffs’ expert witnesses was the author of a key paper that “linked” Accutane to depression.  Palazzolo v. Hoffman La Roche, Inc., 2010 WL 363834 (N.J. App. Div.).  Discovery revealed that the author, James Bremner, did not follow the methodology described in the paper.  Furthermore, Bremner could not document the data used in the paper’s analysis, and conceded that the statistical analyses were incorrect.  The New Jersey Appellate Division held that reliance upon Bremner’s study should be excluded because the study was not soundly and reliably generated.  Id. at *5.

Silicone and Connective Tissue Disease

It is heartening that the scientific and medical communities decisively renounced the pathological science that underlay the silicone gel breast implant litigation.  The fact remains, however, that plaintiffs relied upon a large body of published papers, each more invalid than the last, to support their claims.  For many years, judges around the country blinked and let expert witnesses offer their causation opinions, in large part based upon papers by Smalley, Shanklin, Lappe, Kossovsky, Gershwin, Garrido, and others.  Peer review did little to stop the enthusiasm of editors for this “sexy” topic until a panel of court-appointed expert witnesses and the Institute of Medicine put an end to the judicial gullibility.

Concluding Comments

One district court distinguished between pre-publication peer review and the important peer review that takes place after publication, as other researchers quietly go about replicating or reproducing a study’s findings, or attempting to build on those findings.  “[J]ust because an article is published in a prestigious journal, or any journal at all, does not mean per se that it is scientifically valid.”  Pick v. Amer. Med. Sys., 958 F. Supp. 1151, 1178 n.19 (E.D. La. 1997), aff’d, 198 F.3d 241 (5th Cir. 1999).  With hindsight, we can say that Merrell Dow’s strategy of emphasizing peer review has had some unfortunate, unintended consequences.  The Supreme Court elevated peer review into a factor for reliable science, and lower courts have elevated peer review into a criterion of validity.  The upshot is that many courts will now not go beyond statements in a peer-reviewed paper to determine whether they are based upon sufficient facts and data, or whether the statements rest upon sound inferences from the available facts and data.  These courts violate the letter and spirit of Rule 702 of the Federal Rules of Evidence.

Bad and Good Statistical Advice from the New England Journal of Medicine

July 2nd, 2011

Many people consider The New England Journal of Medicine (NEJM) a prestigious journal.  It is certainly widely read.  Judging from its “impact factor,” we know the journal is frequently cited.  So when the NEJM weighs in on an issue at the intersection of law and science, I pay attention.

Unfortunately, this week’s issue contains an editorial “Perspective” piece that is filled with incoherent, inconsistent, and incorrect assertions, both on the law and the science.  Mark A. Pfeffer and Marianne Bowler, “Access to Safety Data – Stockholders versus Prescribers,” 364 New Engl. J. Med. ___ (2011).

Dr. Mark Pfeffer and the Hon. Marianne Bowler used the recent United States Supreme Court decision in Matrixx Initiatives, Inc. v. Siracusano, __ U.S. __, 131 S. Ct. 1309 (2011), to advance views not supported by the law or the science.   Remarkably, Dr. Pfeffer is the Victor J. Dzau Professor of Medicine at the Harvard Medical School.  He is both a physician and the holder of a Ph.D. in physiology and biophysics.  Ms. Bowler is both a lawyer and a federal judge.  Between them, they should have provided better, more accurate, and more consistent advice.

1. The Authors Erroneously Characterize Statistical Significance in Inappropriate Bayesian Terms

The article begins with a relatively straightforward characterization of various legal burdens of proof.  The authors then try to collapse one of those burdens of proof, “beyond a reasonable doubt,” which has no accepted quantitative meaning, into the significance probability used to reject a pre-specified null hypothesis in scientific studies:

“To reject the null hypothesis (that a result occurred by chance) and deem an intervention effective in a clinical trial, the level of proof analogous to law’s ‘beyond a reasonable doubt’ standard would require an extremely stringent alpha level to permit researchers to claim a statistically significant effect, with the offsetting risk that a truly effective intervention would sometimes be deemed ineffective.  Instead, most randomized clinical trials are designed to achieve a lower level of evidence that in legal jargon might be called ‘clear and convincing’, making conclusions drawn from it highly probable or reasonably certain.”

Now this is both scientific and legal nonsense.  It is distressing that a federal judge characterizes the burden of proof that she must apply, or direct juries to apply, as “legal jargon.”  More important, these authors, scientist and judge, give questionable quantitative meanings to burdens of proof, and they misstate the meaning of statistical significance.  When judges or juries must determine guilt “beyond a reasonable doubt,” they are assessing the prosecution’s claim that the defendant is guilty, given the evidence at trial.  This posterior probability can be represented as:

Probability (Guilt | Evidence Adduced)

This is what is known as a posterior probability, and it is fundamentally different from significance probability.

The significance probability is the transposed conditional of the posterior probability that is used to assess guilt in a criminal trial, or contentions in a civil trial.  As law professor David Kaye and his statistician coauthor, the late David Freedman, described the p-value and significance probability:

“The p-value is the probability of getting data as extreme as, or more extreme than, the actual data, given that the null hypothesis is true:

p = Probability (extreme data | null hypothesis in model)

* * *

Conversely, large p-values indicate that the data are compatible with the null hypothesis: the observed difference is easy to explain by chance. In this context, small p-values argue for the plaintiffs, while large p-values argue for the defense.131 Since p is calculated by assuming that the null hypothesis is correct (no real difference in pass rates), the p-value cannot give the chance that this hypothesis is true. The p-value merely gives the chance of getting evidence against the null hypothesis as strong or stronger than the evidence at hand—assuming the null hypothesis to be correct. No matter how many samples are obtained, the null hypothesis is either always right or always wrong. Chance affects the data, not the hypothesis. With the frequency interpretation of chance, there is no meaningful way to assign a numerical probability to the null hypothesis.132”

David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in Federal Judicial Center, Reference Manual on Scientific Evidence 122 (2d ed. 2000).  Kaye and Freedman explained over a decade ago, for the benefit of federal judges:

“As noted above, it is easy to mistake the p-value for the probability that there is no difference. Likewise, if results are significant at the .05 level, it is tempting to conclude that the null hypothesis has only a 5% chance of being correct.142

This temptation should be resisted. From the frequentist perspective, statistical hypotheses are either true or false; probabilities govern the samples, not the models and hypotheses. The significance level tells us what is likely to happen when the null hypothesis is correct; it cannot tell us the probability that the hypothesis is true. Significance comes no closer to expressing the probability that the null hypothesis is true than does the underlying p-value.143”

Id. at 124-25.

As we can see, our scientist from the Harvard Medical School and our federal judge have committed the transpositional fallacy by likening “beyond a reasonable doubt” to the alpha used to test for a statistically significant outcome in a clinical trial.  They are not the same; nor are they analogous.

This fallacy has been repeatedly described.  Not only has the Reference Manual on Scientific Evidence (which is written specifically for federal judges) described the fallacy in detail, but legal and scientific writers have urged care to avoid this basic mistake in probabilistic reasoning.  Here is a recent admonition from one of the leading writers on the use (and misuse) of statistics in legal procedures:

“Some commentators, however, would go much further; they argue that [5%] is an arbitrary statistical convention and since preponderance of the evidence means 51% probability, lawyers should not use 5% as the level of statistical significance but 49% – thus rejecting the null hypothesis when there is up to a 49% chance that it is true. In their view, to use a 5% standard of significance would impermissibly raise the preponderance of evidence standard in civil trials. Of course the 5% figure is arbitrary (although widely accepted in statistics) but the argument is fallacious. It assumes that 5% (or 49% for that matter) is the probability that the null hypothesis is true. The 5% level of significance is not that, but the probability of the sample evidence if the null hypothesis were true. This is a very different matter. As I pointed out in Chapter 1, the probability of the sample given the null hypothesis is not generally the same as the probability of the null hypothesis given the sample. To relate the level of significance to the probability of the null hypothesis would require an application of Bayes’s theorem and the assumption of a prior probability distribution. However, the courts have usually accepted the statistical standard, although with some justifiable reservations when the P-value is only slightly above the 5% cutoff.”

Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 54 (N.Y. 2009) (emphasis added).
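To make the transposition concrete, consider a minimal numerical sketch of my own, not anything offered by Pfeffer, Bowler, Kaye, Freedman, or Finkelstein.  The prior probability of the null hypothesis, the point alternative, and the observed test statistic below are all hypothetical, chosen only to show that a small p-value is not the probability that the null hypothesis is true:

from scipy.stats import norm

# Hypothetical study result: a standardized test statistic of z = 2.0
z_observed = 2.0

# Significance probability: P(data at least this extreme | null hypothesis true)
p_value = 2 * norm.sf(abs(z_observed))            # two-sided p-value

# A posterior probability requires more ingredients: a prior on the null and a
# specified alternative.  Assume (purely for illustration) a 50/50 prior and a
# point alternative under which the test statistic is centered at 2.8.
prior_null = 0.5
likelihood_null = norm.pdf(z_observed, loc=0.0)   # density of z under the null
likelihood_alt = norm.pdf(z_observed, loc=2.8)    # density of z under the alternative

# Bayes' theorem: P(null hypothesis | data)
posterior_null = (prior_null * likelihood_null) / (
    prior_null * likelihood_null + (1 - prior_null) * likelihood_alt)

print(f"p-value: {p_value:.3f}")                  # about 0.046
print(f"P(null | data): {posterior_null:.3f}")    # about 0.16 on these assumptions

On these assumed numbers the result is “statistically significant” at the conventional 5 percent level, yet the null hypothesis still carries roughly a one-in-six posterior probability; with a different prior or a different alternative, the gap can be far larger.  The p-value and the posterior probability answer different questions, which is precisely the point of the passages quoted above.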

2.  The Authors, Having Mischaracterized Burden-of-Proof and Significance Probabilities, Incorrectly Assess the Meaning of the Supreme Court’s Decision in Matrixx Initiatives.

I have written a good bit about the Court’s decision in Matrixx Initiatives, most recently with David Venderbush, for the Washington Legal Foundation.  See Schachtman & Venderbush, “Matrixx Unbounded: High Court’s Ruling Needlessly Complicates Scientific Evidence Principles,” W.L.F. Legal Backgrounder (June 17, 2011).

I was thus startled to see the claim of a federal judge that the Supreme Court, in Matrixx, had “applied the ‘fair preponderance of the evidence’ standard of proof used for civil matters.”  Matrixx was a case about the sufficiency of the pleadings, and thus there really could have been no such application of a burden of proof to an evidentiary display.  The very claim is incoherent, and at odds with the Supreme Court’s holding.

The NEJM authors went on to detail how the defendant in Matrixx had persuaded the trial court that the evidence against its product, Zicam, did not reach statistical significance, and therefore should not be considered “material.”  As I have pointed out before, Matrixx focused on adverse event reports, as raw numbers of reported events, which did not, and could not, be analyzed for statistical significance.  The very essence of Matrixx’s argument was nonsense, which perhaps explains the company’s nine-nothing loss in the Supreme Court.  The authors of the opinion piece in the NEJM, however, missed that it is not the evidence of adverse event reports, with or without a statistical analysis, that is material.  What was at issue was whether the company’s failure to disclose this information, along with a good deal more, was material in the face of the company’s very aggressive, optimistic projections of future sales and profits.

The NEJM authors proceed to tell us, correctly, that adverse events do not prove causality, but then they tell us, incorrectly, that the Matrixx case shows that “such a high level of proof did not have to be achieved.”  While the authors are correct about the sufficiency of adverse event reports for causal assessments, they miss the legal significance of there being no burden of proof at play in Matrixx; it was a case on the pleadings.  The issue was the sufficiency of those pleadings, and what the Supreme Court made clear was that in the context of a product subject to FDA regulation, causation was never the test for materiality because the FDA could withdraw the product on a showing far less than scientific causation of harm.  So the plaintiffs could allege less than causation, and still have pleaded a sufficient case of securities fraud.  The Supreme Court did not, and could not, address the issue that the NEJM authors discuss.  The authors’ assessment that the Matrixx case freed legal causation of any requirement of statistical significance is a tortured reading of obiter dictum, not the holding of the case.  This editorializing is troubling.

The NEJM authors similarly hold forth on what clinicians consider material, and they announce that “[c]linicians are well aware that to be considered material, information regarding drug safety does not have to reach the same level of certainty that we demand for demonstrating efficacy.”  This is true, but clinicians are ethically bound to err on the side of safety:  Primum non nocere. See, e.g., Tamraz v. Lincoln Elec. Co., 620 F.3d 665, 673 (6th Cir. 2010) (noting that treating physicians have more training in diagnosis than in etiologic assessments), cert. denied, ___ U.S.____ (2011).  Again, the authors’ statements have nothing to do with the Matrixx case, or with the standards for legal or scientific causation.

3.  The Authors, Inconsistently with Their Characterization of Various Probabilities, Proceed Correctly To Describe Statistical Significance Testing for Adverse Outcomes in Trials.

Having incorrectly described proof beyond a reasonable doubt as akin to p < 0.05, the NEJM authors then correctly point out that standard statistical testing cannot be used for “evaluating unplanned and uncommon adverse events.”  The authors also note that the flood of data in the assessment of causation of adverse events is filled with “biologic noise.”  Physicians and regulators may take the noise signals and claim that they hear a concert.  This is exactly why we should not confuse precautionary judgments with scientific assessments of causation.

Ninth Circuit Affirms Rule 702 Exclusion of Dr David Egilman in Diacetyl Case

June 20th, 2011

On June 17, 2011, the United States Court of Appeals for the Ninth Circuit affirmed a district judge’s decision to exclude Dr David S. Egilman from testifying in a consumer-exposure diacetyl case.  Newkirk v. Conagra Foods Inc. (9th Cir. 2011).

Plaintiff claimed to have developed bronchiolitis obliterans from having popped and eaten a Homeric quantity of microwavable popcorn.  The case was thus a key test of “consumer” diacetyl exposure.  Another case, also involving Egilman, just finished a Daubert hearing in Colorado last week.

To get the full “flavor” of this diacetyl case, you may have to read the district court’s opinion, which excluded Egilman and other witnesses, and entered summary judgment for the defense. Newkirk v. Conagra Foods, Inc., No. CV-08-273, 2010 WL 2680184 (E.D. Wash. July 2, 2010).

Plaintiff appealed, and so did Egilman.  (See attached Egilman Motion Appeal Diacetyl Exclusion 2011 and Egilman Declaration Newkirk Diacetyl Appeal 2011.)  In what some may consider scurrilous pleading, Egilman attacked the district judge for having excluded him from testifying.  If Egilman’s challenge to the trial judge was not bizarre enough, Egilman also claimed a right to intervene in the appeal by advancing the claim that the Rule 702 exclusion hurt his livelihood.  The following language is from paragraph 11 of Dr. Egilman’s declaration in support of his motion:

“The Daubert ruling eliminates my ability to testify in this case and in others. I will lose the opportunity to bill for services in this case and in others (although I generally donate most fees related to courtroom testimony to charitable organizations, the lack of opportunity to do so is an injury to me). Based on my experience, it is virtually certain that some lawyers will choose not to attempt to retain me as a result of this ruling. Some lawyers will be dissuaded from retaining my services because the ruling is replete with unsubstantiated pejorative attacks on my qualifications as a scientist and expert. The judge’s rejection of my opinion is primarily an ad hominem attack and not based on an actual analysis of what I said – in an effort to deflect the ad hominem nature of the attack the judge creates ‘straw man’ arguments and then knocks the straw men down, without ever addressing the substance of my positions.”

Egilman Declaration at Paragraph 11.

Egilman tempers his opinion about the prejudice he will suffer in front of judges in future cases.  Only judges who have not seen him before would likely be persuaded by Judge Peterson’s decision in Newkirk.  Those judges who have heard him testify before would, no doubt, see him for the brilliant crusading avenger that he is:

“This will generally not occur in cases heard before Judges where I have already appeared as a witness. For example a New York state trial judge has praised plaintiffs’ molecular-biology and public-health expert Dr. David Egilman as follows: ‘Dr. Egilman is a brilliant fellow and I always enjoy seeing him and I enjoy listening to his testimony . . . . He is brilliant, he really is.’ [Lopez v. Ford Motor Co., et al. (120954/2000; In re New York City Asbestos Litigation, Index No. 40000/88).]”

Egilman Declaration at p. 9 n. 2.

It does not appear as though Egilman’s attempt to intervene helped plaintiff before the Ninth Circuit, which may not have thought that he was as brilliant as the unidentified trial judge in Lopez.

The Newkirk case is interesting for several reasons.

First, the Circuit correctly saw that general causation must be shown before the plaintiff can invoke a differential etiology analysis.

Second, the Circuit saw that it is not sufficient that the substance in question can cause the outcome claimed; the substance must do so at the levels of exposure that were experienced by the plaintiff.  In Newkirk, even by consuming massive quantities of microwave popcorn, plaintiff had not shown exposure levels to diacetyl equivalent to the exposures among factory workers at risk for bronchiolitis obliterans.  The affirmance of the district court is a strong statement that exposure matters in the context of the current understanding of diacetyl causation.

Third, the Circuit was not intimidated or persuaded by the tactics of Dr David Egilman, expert witness.

Fourth, having dealt with the issues deftly, the Ninth Circuit issued a judgment from which there will be no appeal.

WLF Legal Backgrounder on Matrixx Initiatives

June 20th, 2011

In Matrixx Initiatives, Inc. v. Siracusano, ___ U.S. ___, ___ , 2011 WL 977060 (Mar. 22, 2011), the Supreme Court addressed a securities fraud case against an over-the-counter pharmaceutical company for speaking to the market about its rosy financial projections, but failing to provide information received about the hazards of the product.

Much or most of the holding of the case is an unexceptional application of settled principles of securities fraud litigation in the context of claims against a pharmaceutical company with products liability cases pending.  The defendant company, however, attempted to import Rule 702 principles of scientific evidence into a motion to dismiss on the pleadings, with much confusion resulting among the litigants, the amici, and the Court.  The Supreme Court ruled unanimously to affirm the reinstatement of the complaint against the defendant.

I have written about this case previously: “The Matrixx – A Comedy of Errors,” and “Matrixx Unloaded,” and “The Matrixx Oversold,” and “De-Zincing the Matrixx.”

Now, with the collaboration of David Venderbush from Alston & Bird LLP, we have collected our thoughts to share in the form of a Washington Legal Foundation Legal Backgrounder, which is available for download at the WLF’s website.  Schachtman & Venderbush, “Matrixx Unbounded: High Court’s Ruling Needlessly Complicates Scientific Evidence Principles,” 26 (14) Legal Backgrounder (June 17, 2011).

National Academies Press Publications Are Now Free

June 3rd, 2011

Publications of the National Research Council, as well as those of its constituent organizations, the National Academy of Sciences, the Institute of Medicine, and the National Academy of Engineering, are often important resources for lawyers who litigate scientific and technical issues.  Right or wrong, these publications become forces in their own right in the courtroom, where they command serious attention from trial and appellate judges.

According to the National Academies Press’s website, electronic versions of all of its books, in portable document format (pdf), will be available there for free:

“As of June 2, 2011, all PDF versions of books published by the National Academies Press (NAP) will be downloadable to anyone free of charge.

That’s more than 4,000 books plus future reports produced by NAP – publisher for the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council.”

Important works on forensic evidence, asbestos, dioxin, beryllium, research ethics, and data sharing published by the NAP, for the IOM or NRC, are now available for free.  The NAP previously charged upwards of $40 or $50 for some of these books.

This summer, the NRC’s Committee on Science, Technology and Law will release the Third Edition of the Reference Manual on Scientific Evidence, previously prepared by the Federal Judicial Center.  See http://sites.nationalacademies.org/PGA/stl/development_manual/index.htm

Statistical Power in the Academy

June 1st, 2011

Previously I have written about the concept of statistical power and how it is used and abused in the courts.  See here and here.

Statistical power was discussed in both the statistics and the epidemiology chapters of the Second Edition of The Reference Manual on Scientific Evidence. In my earlier posts, I pointed out that the chapter on epidemiology provided some misleading, outdated guidance on the use of power.  See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, The Reference Manual on Scientific Evidence 333, 362-63 (2d ed. 2000) (recommending use of power curves to assess whether failure to achieve statistical significance is exonerative of the exposure in question).  This chapter suggests that “[t]he concept of power can be helpful in evaluating whether a study’s outcome is exonerative or inconclusive.” Id.; see also David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in Federal Judicial Center, Reference Manual on Scientific Evidence 83, 125-26 (2d ed. 2000).

The fact of the matter is that power curves are rarely, if ever, used in contemporary epidemiology, and post-hoc power calculations have been discouraged and severely criticized for a long time. After the data are collected, the appropriate way to evaluate the “resolving power” of a study is to examine the confidence interval around the study’s estimate of risk.  The confidence interval allows a concerned reader to evaluate what can reasonably be ruled out (on the basis of random variation only) by the data in a given study. Post-hoc power calculations fail to provide meaningful information because they require a specified alternative hypothesis and take no account of the results actually obtained.
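A short numerical sketch may help to show what the confidence interval conveys directly from the data.  The cell counts below are hypothetical, and the interval is computed with the standard large-sample (log-transform) approximation for a risk ratio:

import math

# Hypothetical cohort study: 5 cases among 1,000 exposed; 3 cases among 1,000 unexposed
exposed_cases, exposed_total = 5, 1000
unexposed_cases, unexposed_total = 3, 1000

risk_ratio = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Large-sample standard error of the log risk ratio
se_log_rr = math.sqrt(1 / exposed_cases - 1 / exposed_total
                      + 1 / unexposed_cases - 1 / unexposed_total)

# 95% confidence interval on the risk-ratio scale
lower = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"RR = {risk_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# roughly RR = 1.67, 95% CI 0.40 to 6.96

On these made-up numbers the result is not statistically significant, but the interval, running from well below 1.0 to nearly 7, shows at a glance that the study cannot reasonably rule out even a large effect.  That is the “resolving power” a confidence interval reports from the results actually obtained, and it is what a post-hoc power figure cannot supply.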

Twenty-five years ago, the use of post-hoc power was thoughtfully put in the dust bin of statistical techniques in the leading clinical medical journal:

“Although power is a useful concept for initially planning the size of a medical study, it is less relevant for interpreting studies at the end.  This is because power takes no account of the actual results obtained.”

***

“[I]n general, confidence intervals are more appropriate than power figures for interpreting results.”

Richard Simon, “Confidence intervals for reporting results of clinical trials,” 105 Ann. Intern. Med. 429, 433 (1986) (internal citation omitted).

An accompanying editorial by Ken Rothman reinforced the guidance given by Simon:

“[Simon] rightly dismisses calculations of power as a weak substitute for confidence intervals, because power calculations address only the qualitative issue of statistical significance and do not take account of the results already in hand.”

Kenneth J. Rothman, “Significance Questing,” 105 Ann. Intern. Med. 445, 446 (1986).

These two papers must be added to the 20 consensus statements, textbooks, and articles I previously cited.  See Schachtman, Power in the Courts, Part Two (2011).

The danger of the Reference Manual’s misleading advice is illustrated in a recent law review article by Professor Gold, of the Rutgers Law School, who asks “[w]hat if, as is frequently the case, such study is possible but of limited statistical power?”  Steve C. Gold, “The ‘Reshapement’ of the False Negative Asymmetry in Toxic Tort Causation,” 37 William Mitchell L. Rev. 101, 117 (2011) (available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1797826).

Never mind for the moment that Professor Gold offers no empirical evidence to support his assertion that studies of limited statistical power are “frequently” used in litigation.  Gold critically points to Dunn v. Sandoz Pharmaceuticals Corp., 275 F. Supp. 2d 672, 677–81, 684 (M.D.N.C. 2003), a Parlodel case in which the plaintiff relied upon a single case-control study that found an elevated odds ratio (8.4), which was not statistically significant.  Gold at 117.  Gold complains that “a study’s limited statistical power, rather than the absence of a genuine association, may lead to statistically insignificant results that courts treat as disproof of causation, particularly in situations without the large study samples that result from mass exposures.” Id.  Gold goes on to applaud two cases for emphasizing consideration of post-hoc power.  Id. at 117 & nn. 80–81 (citing Smith v. Wyeth-Ayerst Labs. Co., 278 F. Supp. 2d 684, 692–93 (W.D.N.C. 2003) (“[T]he concept of power is key because it’s helpful in evaluating whether the study‘s outcome . . . is exonerative or inconclusive.”), and Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767, 774 (N.D. Ohio 2010) (prohibiting an expert witness from opining that epidemiologic studies are evidence of no association unless the witness “has performed a methodologically reliable analysis of the studies’ statistical power to support that conclusion”)).

What of Professor Gold’s suggestion that power should be considered in evaluating studies that do not have statistically significant outcomes of interest?  See id. at 117. Not only is Gold’s endorsement at odds with sound scientific and statistical advice, but his approach reveals a potential hypocrisy when considered in the light of his criticisms of significance testing.  Post-hoc power tests ignore the results obtained, including the variance of the actual study results, and they are calculated based upon a predetermined arbitrary measure of Type I error (alpha) that is the focus of so much of Gold’s discomfort with statistical evidence.  Of course, power calculations also are made on the basis of arbitrarily selected alternative hypotheses, but this level of arbitrariness seems not to disturb Gold so much.
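The point can be illustrated with a minimal sketch of a conventional power calculation, using a normal-approximation formula with entirely hypothetical inputs.  Notice that the observed result of the study never enters the computation:

import math
from scipy.stats import norm

def approximate_power(assumed_effect, sigma, n, alpha=0.05):
    # Two-sided power to detect an *assumed* effect size under a normal model
    # with known standard deviation.  The inputs are alpha, the sample size,
    # and an effect size the analyst posits in advance; the data actually
    # observed play no role.
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = assumed_effect * math.sqrt(n) / sigma
    return norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

# The same completed study yields whatever "post-hoc power" the analyst's chosen
# alternative dictates (hypothetical numbers):
print(approximate_power(assumed_effect=0.2, sigma=1.0, n=100))   # about 0.52
print(approximate_power(assumed_effect=0.5, sigma=1.0, n=100))   # about 0.999

Because the answer turns entirely on the alternative the analyst elects to posit, post-hoc power can be made to look high or low at will, whereas a confidence interval reflects the variability of the estimate the study actually produced.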

Where does the Third Edition of the Reference Manual on Scientific Evidence come out on this issue?  The Third Edition is not yet published, but Professor David Kaye has posted his chapter on statistics on the internet.  David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” chapter 5.  http://www.personal.psu.edu/dhk3/pubs/11-FJC-Ch5-Stat.pdf (David Freedman died in 2008, after the chapter was submitted to the National Academy of Sciences for review; only Professor Kaye responded to the Academy’s reviews).

The chapter essentially continues the Second Edition’s advice:

“When a study with low power fails to show a significant effect, the results may therefore be more fairly described as inconclusive than negative. The proof is weak because power is low. On the other hand, when studies have a good chance of detecting a meaningful association, failure to obtain significance can be persuasive evidence that there is nothing much to be found.”

Chapter 5, at 44-46 (citations and footnotes omitted).

The chapter’s advice is not, of course, limited to epidemiologic studies, where a risk ratio or a risk difference is typically reported with an appropriate confidence interval.  Across the broad run of statistical tests, some of which report neither a measure of “effect size” nor the variability of the sample statistic, the chapter’s advice is fine.  But, as we can see from Professor Gold’s discussion and case review, the advice runs into trouble when measured against the methodological standards for evaluating an epidemiologic study’s results when confidence intervals are available.  Gold’s assessment of the cases is considerably skewed by his failure to recognize the inappropriateness of post-hoc power assessments of epidemiologic studies.