FOLLOW THE DATA, NOT THE DISCUSSION

The Supreme Court’s decision in Joiner was an important qualification to its earlier decision in Daubert.  Joiner correctly adjusted the dicta in Daubert that suggested that conclusions could not be evaluated for their reliability, a correction that is now embodied in Federal Rule of Evidence 702.  Joiner correctly determined that plaintiffs’ expert witnesses in that case were relying upon pathologically deficient and unreliable evidence.  (Some of the expert witnesses in Joiner are known repeat offenders against Rule 702.)  Furthermore, in reversing and rendering judgment against the 11th Circuit’s decision, Joiner corrected the asymmetric standard of review for Rule 702 witness exclusions that the 11th and other Circuits had been applying.

In reaching the right result, and in advancing the jurisprudence of the reliability of expert witness opinion testimony, Joiner nevertheless stumbled on one important analysis.  In his opinion in Joiner, Chief Justice Rehnquist gave considerable weight to the fact that the plaintiffs’ expert witnesses relied upon studies whose authors had explicitly declined to interpret their findings as supporting a conclusion of human disease causation.  See General Electric Co. v. Joiner, 522 U.S. 136, 145-46 (1997) (noting that the PCB studies at issue did not support the expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

Although the PCB study authors were well justified in their respective papers in refraining from over-interpreting their data and analyses, this consideration is of doubtful general value in evaluating the reliability of an expert witness’s proposed testimony.  First, as some plaintiffs’ counsel have argued, the testifying expert witness may be relying upon a more extensive and supportive evidentiary display than that considered by the study authors.  The study, standing alone, might not support causation, but when considered with other evidence, the study could take on some importance in supporting a causal conclusion.  (This consideration would not save the sadly deficient opinions challenged in Joiner.)  Second, there are important methodological considerations that render the Discussion sections of published papers of little value.  They are almost never comprehensive reviews of the subject matter, and they are often little more than the personal opinions of the study authors.  Sometimes, the Introduction and Discussion sections are shaped by the need to get the paper published and to satisfy the whims of peer reviewers and editors.  Thus, these sections, in addition to being uncross-examined statements of the authors, may also reflect second-level hearsay: the opinions of anonymous reviewers, whose expertise, biases, and perceptions cannot be challenged.

The use of a paper’s Discussion section to measure the reliability of proffered expert testimony runs contrary to how scientists generally read and interpret papers.  Chief Justice Rehnquist’s emphasis upon the study authors’ Discussion of their own studies ignores the first important principle of interpreting medical studies in an evidence-based world view:  In critically reading and evaluating a study, one should ignore everything in the paper other than the Methods and Results sections.

There are many clear statements in the medical literature that caution the consumers of medical studies against misleading claims.  Several years ago, the British Medical Journal published a paper by Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Br. Med. J. 1093 (2004).  The authors distill their advice into six suggestions in a “[g]uide to avoid being misled by biased presentation and interpretation of data,” the first of which is to:  “Read only the Methods and Results sections; bypass the Discussion section.”  Id. at 1093 (emphasis added).

Perhaps the Discussion section, in the context of a Rule 104(a) proceeding, has some role in evaluating the challenged expert witness’s opinion, but surely it is a weak factor at best.  And clearly, disagreement with the study authors’ conclusions or opinions, as reflected in speculative Discussion sections, can cut both ways.  Study authors may downplay their findings, appropriately or inappropriately, but they often overplay their findings and distort or misinterpret how those findings fit into the full picture of other studies and other evidence.  The quality of peer-reviewed publications is simply too irregular and unpredictable to make the subjective, evaluative comments in hearsay papers the touchstone for admissibility or inadmissibility.

Furthermore, courts should be asking why a testifying expert witness, or the witnesses who are countering the challenged witness, should advert to the Discussion section of a published article.  If an expert witness cannot interpret the Methods and Results sections, then in all likelihood he or she lacks the requisite expertise to offer a reliable opinion.

Joiner’s misplaced emphasis upon study authors’ Discussion sections has gained a foothold in the case law interpreting Rule 702.  In Huss v. Gayden, 571 F.3d 442 (5th Cir. 2009), for example, the court declared:

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

Id. (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia), and McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions)).

This aspect of Joiner perpetuates an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions.  The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ Discussion section is that study authors sometimes grossly over-interpret their data.  When studies are written by “political scientists” (scientists who see their work as advancing a political cause or agenda), the Discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible at trial.

There have been, and will continue to be, occasions in which published studies contain data relevant and important to the causation issue, while also containing speculative, personal opinions in their Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither side’s expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay Discussion section may be considered under Rule 104(a), under which the court is not bound by the Rules of Evidence, but it should hardly be an important or dispositive factor, other than raising questions for the reviewing court.

Expert witnesses should not be constrained or excluded for relying upon study data, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should be required to have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.