TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Discovery of Retained, Testifying Statistician Expert Witnesses (Part 1)

June 30th, 2015

At times, the judiciary’s resistance to delving into the factual underpinnings of expert witness opinions is extraordinary. In one case, the Second Circuit affirmed a judgment for a plaintiff in a breach of contract action, based in large part upon expert witness testimony that presented the results of a computer simulation. Perma Research & Development v. Singer Co.[1] Although the trial court had promised to permit inquiry into the plaintiff’s computer expert witness’s source of data, programmed mathematical formulae, and computer programs, when the defendant asked the plaintiff’s expert witness to disclose his underlying data and algorithms, the district judge sustained the witness’s refusal on grounds that the requested materials were his “private work product” and “proprietary information.”[2] Despite the trial court’s failure to articulate any legally recognized basis for permitting the expert witness to stonewall in this fashion, a panel of the Circuit, in an opinion by superannuated Justice Tom Clark, affirmed, on an argument that the defendant “had not shown that it did not have an adequate basis on which to cross-examine plaintiff’s experts.” Judge Van Graafeiland dissented, indelicately pointing out that the majority had charged the defendant with failing to show that it had been deprived of a fair opportunity to cross-examine plaintiff’s expert witnesses while depriving the defendant of access to the secret underlying evidence and materials that were needed to demonstrate what could have been done on cross-examination[3]. The dissent traced the trial court’s error to its misconception that a computer is just a giant calculator, and pointed out that the majority contravened Circuit precedent[4] and evolving standards[5] for handling underlying data that was analyzed or otherwise incorporated into computer models and simulations.

Although Perma Research has largely been ignored, has fallen into disrepute, and has been superseded by statutory amendments[6], its retrograde approach continues to find occasional expression in reported decisions. The refinement of Federal Rule of Evidence 702 to require sound support for expert witnesses’ opinions has opened the flow of discovery of underlying facts and data considered by expert witnesses before generating their reports. The most recent edition of the Federal Judicial Center’s Manual for Complex Litigation treats both computer-generated evidence and expert witnesses’ underlying data as subject to pre-trial discovery when necessary to provide for full and fair litigation of the issues in the case[7].

The discovery of expert witnesses who have conducted statistical analyses poses difficult problems for lawyers. Unlike some other expert witnesses, who passively review data and arrive at an opinion that synthesizes published research, statisticians actually create evidence with new arrangements and analyses of data in the case. In this respect, statisticians are like material scientists who may test and record experimental observations on a product or its constituents. Inquiring minds will want to know whether the statistical analyses in the witness’s report were the results of pre-planned analysis protocols, or whether they were the second, third, or fifteenth alternative analysis. Earlier statistical analyses conducted but not produced may reveal what the expert witness believed would have been the preferred analysis if only the data had cooperated more fully. Statistical analyses conducted by expert witnesses provide plenty of opportunity for data-dredging, which can then be covered up by disclosing only selected analyses in the expert witness’s report.
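
The multiple-testing concern is easy to quantify. A minimal sketch in Python (with purely illustrative numbers, not data from any case) shows how quickly the chance of a spurious “positive” finding grows as alternative analyses pile up:

    # Chance of at least one nominally "significant" result when k independent
    # analyses are run on data in which nothing is actually going on.
    ALPHA = 0.05  # conventional significance level

    for k in (1, 5, 15):
        p_any = 1 - (1 - ALPHA) ** k
        print(f"{k:2d} analyses -> P(at least one p < 0.05) = {p_any:.2f}")

    # Prints roughly 0.05, 0.23, and 0.54: by the fifteenth alternative
    # analysis, a nominally significant finding is more likely than not even
    # under the null hypothesis, which is why undisclosed analyses matter.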

The output of statisticians’ analyses will take the form of a “point estimate” of “effect size,” a significance or posterior probability, a set of regression coefficients, a summary estimate of association, or a similar measure that did not exist before the statistician used the underlying data to produce the analytical outcome, which then becomes the subject of further inference and opinion. Frequentist analyses must identify the probability model and other assumptions employed. Bayesian analyses must also identify the prior probabilities that serve as the starting point and that are combined with further evidence to arrive at posterior probabilities. The science, creativity, and judgment involved in statistical methods challenge courts and counsel to discover, understand, reproduce, present, and cross-examine statistician expert witness testimony. And occasionally, there is duplicity and deviousness to uncover as well.
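
A toy example makes the point concrete (a sketch only; the counts below are invented, not drawn from any litigation): a frequentist summary reports a point estimate and confidence interval, while a Bayesian summary of the same data cannot even be stated without disclosing the prior.

    import math

    # Hypothetical 2x2 table: 20 of 100 exposed subjects and 10 of 100
    # unexposed subjects developed the outcome (invented numbers).
    a, n1 = 20, 100   # exposed: cases, total
    b, n0 = 10, 100   # unexposed: cases, total

    # Frequentist output: point estimate of the risk ratio and a 95%
    # confidence interval computed on the log scale.
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)
    low, high = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    print(f"Risk ratio {rr:.2f}, 95% CI ({low:.2f}, {high:.2f})")  # 2.00 (0.99, 4.05)

    # A Bayesian analysis of the same table must also state its priors: with
    # uniform Beta(1, 1) priors on each group's risk, the posteriors are
    # Beta(21, 81) and Beta(11, 91); a different prior yields a different
    # posterior, which is why the starting point has to be disclosed.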

The discovery obligations with respect to statistician expert witnesses vary considerably among state and federal courts. The 1993 amendments to the Federal Rules of Civil Procedure created an automatic right to conduct depositions of expert witnesses[8]. Previously, parties in federal court had to show the inadequacy of other methods of discovery. Rule 26(a)(2)(B)(ii) requires the automatic production of “the facts or data considered by the [expert] witness in forming” his or her opinions. The literal wording of this provision would appear to restrict automatic, mandatory disclosure to those facts and data that are specifically considered in forming the opinions contained in the prescribed report. Several courts, however, have interpreted the term “considered” to include any information that expert witnesses review or generate, “regardless of whether the experts actually rely on those materials as a basis for their opinions.”[9]

Among the changes introduced by the 2010 amendments to the Federal Rules of Civil Procedure were a narrowing of the disclosure requirement for “facts and data” considered by expert witnesses in arriving at their opinions, to exclude some attorney work product, and the protection of draft expert witness reports from discovery. The implications of the Federal Rules for statistician expert witnesses are not entirely clear, but these changes should not be used as an excuse to deprive litigants of access to the data and materials underlying statisticians’ analyses. Since the 2010 amendments, courts have enforced discovery requests for testifying expert witnesses’ notes because the notes were neither draft reports nor specific communications between counsel and expert witnesses[10].

The Requirements Associated With Producing A Report

Rule 26 is the key rule that governs disclosure and discovery of expert witnesses and their opinions. Under the current version of Rule 26(a)(2)(B), the scope of required disclosure in the expert report has been narrowed in some respects. Rule 26(a)(2)(B) now requires service of expert witness reports that contain, among other things:

(i) a complete statement of all opinions the witness will express and the basis and reasons for them;

(ii) the facts or data considered by the witness in forming them;

(iii) any exhibits that will be used to summarize or support them.

The Rule’s use of “them” seems clearly to refer back to “opinions,” which creates a problem for materials considered generally in connection with the case or the issues, but not in forming the specific opinions advanced in the report.

The previous language of the rule required that the expert report disclose “the data or other information considered by the witness.[11]” The older rule’s use of “other information,” rather than the current “facts or data,” was generally interpreted to authorize discovery of all oral and written communications between counsel and expert witnesses. The trimming of Rule 26(a)(2)(B)(ii) was thus designed to place these attorney-expert witness communications beyond the reach of disclosure or discovery.

The federal rules specify that the required report “is intended to set forth the substance of the direct examination[12].” Several courts have thus interpreted the current rule in a way that does not result in automatic production of all statistical analyses performed, but only of those data and analyses the witness has decided to present at trial. The report requirement, as it now stands, is thus not necessarily designed to help adverse counsel fully challenge and cross-examine the expert witness on analyses attempted, discarded, or abandoned. If a statistician expert witness conducted multiple statistical tests before arriving at a “preferred” analysis, that expert witness, and instructing counsel, will obviously be all too happy to eliminate the unhelpful analyses from the direct examination, and from the purview of disclosure.

Some of the caselaw in this area makes clear that it is up to the requesting party to discover what it wants beyond the materials that must automatically be disclosed in, or with, the report. A party will not be heard to complain, or attack its adversary, about failure to produce materials never requested.[13] Citing Rule 26(a) and its subsections, which deal with the report, and not discovery beyond the report, several cases take a narrow view of disclosure as embodied in the report requirement.[14] In one case, McCoy v. Whirlpool Corp., the trial court did, however, permit the plaintiff to conduct a supplemental deposition of the defense expert witness to question him about his calculations[15].

A narrow view of automatic disclosure in some cases appears to protect statistician and other expert witnesses from being required to produce calculations, statistical analyses, and data outputs even for opinions that are identified in their reports, and intended to be the subject of direct examination at trial[16]. The trial court’s handling of the issues in Cook v. Rockwell International Corporation is illustrative of this questionable approach. The issue of the inadequacy of expert witnesses’ reports, for failing to disclose notes, calculations, and preliminary analyses, arose in the context of a Rule 702 motion challenging the admissibility of the witnesses’ opinion testimony. The trial court rejected “[a]ny suggestion that an opposing expert must be able to verify the correctness of an expert’s work before it can be admitted…”[17]; any such suggestion “misstates the standard for admission of expert evidence under [Fed. R. Evid.] 702.[18]” The Cook court further rejected any “suggestion in Rule 26(a)(2) that an expert report is incomplete unless it contains sufficient information and detail for an opposing expert to replicate and verify in all respects both the method and results described in the report.[19]” Similarly, the court rejected the defense’s complaint that one of plaintiffs’ expert witness’s report and disclosures violated Rule 26(a)(2) by failing to provide “detailed working notes, intermediate results and computer records,” to allow a rebuttal expert witness to test the methodology and replicate the results[20]. The court observed that

“Defendants’ argument also confuses the expert reporting requirements of Rule 26(a)(2) with the considerations for assessing the admissibility of an expert’s opinions under Rule 702 of the Federal Rules of Evidence. Whether an expert’s method or theory can or has been tested is one of the factors that can be relevant to determining whether an expert’s testimony is reliable enough to be admissible. See Fed. R. Evid. 702 2000 advisory committee’s note; Daubert, 509 U.S. at 593, 113 S.Ct. 2786. It is not a factor for assessing compliance with Rule 26(a)(2)’s expert disclosure requirements.”[21]

The Rule 702 motion to exclude an expert witness comes too late in the pre-trial process for complaints about failure to disclose underlying data and analyses. The Cook case never explicitly addressed Rule 26(b), or other discovery procedures, as a basis for the defense request for underlying documents, data, and materials.  In any event, the limited scope accorded to Rule 26 disclosure mechanisms by Cook emphasizes the importance of deploying ancillary discovery tools early in the pre-trial process.

The Format Of Documents and Data Files To Be Produced

The dispute in Helmert v. Butterball, LLC, is typical of what may be expected in a case involving statistician expert witness testimony. The parties exchanged reports of their statistical expert witnesses, as well as the data output files. The parties chose, however, to produce the data files in ways that were singularly unhelpful to the other side. One party produced data files in the “portable document format” (pdf) rather than in the native format of the statistical software package used (STATA). The other party produced data in a spreadsheet without any information about how the data were processed. The parties then filed cross-motions to compel production of the data in its “electronic, native format.” In addition, plaintiffs pressed for all the underlying data, formulae, and calculations. The court denied both motions on the theory that both sides had received copies of the data considered, and neither was denied facts or data considered by the expert witnesses in reaching their opinions[22]. The court refused plaintiffs’ request for formulae and calculations as well. The court’s discussion of its rationale for denying the cross-motions is framed entirely in terms of what parties may expect and be entitled to in the form of a report, without any mention of additional discovery mechanisms to obtain the sought-after materials. The court noted that the parties would have the opportunity to explore calculations at deposition.

The decision in Helmert seems typical of judicial indifference to, and misunderstanding of, the need for datasets, especially large datasets, in the form uploaded to, and used in, statistical software programs. What is missing from the Helmert opinion is a recognition that an effective deposition would require production of the requested materials in advance of the oral examination, so that the examining counsel can confer and consult with a statistical expert for help in formulating and structuring the deposition questions. There are at least two remedial considerations for future discovery motions of the sort seen in Helmert. First, the moving party should support its application with an affidavit of a statistical expert to explain the specific need for identification of the actual formulae used, the programming used within specific software programs to run analyses, and the interim and final outputs. Second, the moving party should press a strong analogy to document discovery of parties, in which courts routinely order production of “native format” versions of PowerPoint, Excel, and Word documents in response to document requests. Rule 34 of the Federal Rules of Civil Procedure requires that “[a] party must produce documents as they are kept in the usual course of business[23]” and that, “[i]f a request does not specify a form for producing electronically stored information, a party must produce it in a form or forms in which it is ordinarily maintained or in a reasonably usable form or forms.[24]” The Advisory Committee notes to Rule 34[25] make clear that:

“[T]he option to produce in a reasonably usable form does not mean that a responding party is free to convert electronically stored information from the form in which it is ordinarily maintained to a different form that makes it more difficult or burdensome for the requesting party to use the information efficiently in the litigation. If the responding party ordinarily maintains the information it is producing in a way that makes it searchable by electronic means, the information should not be produced in a form that removes or significantly degrades this feature.”

Under the Federal Rules, a requesting party’s obligation to specify a particular format for document production is superseded by the responding party’s obligation to refrain from manipulating or converting “any of its electronically stored information to a different format that would make it more difficult or burdensome for [the requesting party] to use.[26]” In Helmert, the STATA files should have been delivered as STATA native format files, and the requesting party should have requested, and received, all STATA input and output files, which would have permitted the requestor to replicate all analyses conducted.
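
The practical difference between a pdf printout and a native-format file is easy to demonstrate. A short sketch (the file name is hypothetical; pandas, a Python data-analysis library, is used here only because it can read STATA .dta files directly) shows why native data files, together with the input and output files, permit replication in a way a pdf never can:

    import pandas as pd

    # A native-format STATA dataset can be loaded and re-analyzed directly;
    # the same numbers flattened into a pdf would have to be re-keyed by hand,
    # with all the attendant risk of transcription error.
    # "produced_dataset.dta" is a hypothetical name for the produced file.
    df = pd.read_stata("produced_dataset.dta")

    print(df.shape)        # number of observations and variables produced
    print(df.dtypes)       # variable names and types, as coded by the expert
    print(df.describe())   # summary statistics to check against the report

    # With the expert's do-files (analysis scripts) and log files produced as
    # well, the receiving party's consultant could re-run every reported
    # model, and any unreported alternatives, against the very same dataset.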

Some of the decided cases on expert witness reports are troubling because they do not explicitly state whether they are addressing the adequacy of automatic disclosure and reports, or a response to propounded discovery. For example, in Etherton v. Owners Ins. Co.[27], the plaintiff sought to preclude a defense accident reconstruction expert witness on grounds that the witness failed to produce several pages of calculations[28]. The defense argued that “[w]hile [the witness’s] notes regarding these calculations were not included in his expert report, the report does specifically identify the methods he employed in his analysis, and the static data used in his calculations,” and that “Rule 26 does not require the disclosure of draft expert reports, and it certainly does not require disclosure of calculations, as Plaintiff contends.[29]” The court in Etherton agreed that “Fed. R. Civ. P. 26(a)(2)(B) does not require the production of every scrap of paper with potential relevance to an expert’s opinion.[30]” The court laid the discovery default here upon the plaintiff, as the requesting party: “Although Plaintiff should have known that Mr. Ogden’s engineering analysis would likely involve calculations, Plaintiff never requested that documentation of those calculations be produced at any time prior to the date of [Ogden’s] deposition.”[31]

The Etherton court’s assessment that the defense expert witness’s calculations were “working notes,” which Rule 26(a)(2) does not require to be included in or produced with a report, seems a complete answer, except for the court’s musings about the new provisions of Rule 26(b)(4)(B), which protect draft reports.  Because of the court’s emphasis that the plaintiff never requested the documentation of the relevant calculations, the court’s musings about what was discoverable were clearly dicta.  The calculations, which would reveal data and inferential processes considered, appear to be core materials, subject to and important for discovery[32].

[This post is a substantial revision and update to an earlier post, “Discovery of Statistician Expert Witnesses” (July 19, 2012).]


[1] 542 F.2d 111 (2d Cir. 1976), cert. denied, 429 U.S. 987 (1976)

[2] Id. at 124.

[3] Id. at 126 & n.17.

[4] United States v. Dioguardi, 428 F.2d 1033, 1038 (2d Cir.), cert. denied, 400 U.S. 825 (1970) (holding that prosecution’s failure to produce computer program was error but harmless on the particular facts of the case).

[5] See, e.g., Roberts, “A Practitioner’s Primer on Computer-Generated Evidence,” 41 U. Chi. L. Rev. 254, 255-56 (1974); Freed, “Computer Records and the Law — Retrospect and Prospect,” 15 Jurimetrics J. 207, 208 (1975); ABA Sub-Committee on Data Processing, “Principles of Introduction of Machine Prepared Studies” (1964).

[6] Aldous, Note, “Disclosure of Expert Computer Simulations,” 8 Computer L.J. 51 (1987); Betsy S. Fiedler, “Are Your Eyes Deceiving You?: The Evidentiary Crisis Regarding the Admissibility of Computer Generated Evidence,” 48 N.Y.L. Sch. L. Rev. 295, 295–96 (2004); Fred Galves, “Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules of Evidence, and the Need for Institutional Reform and More Judicial Acceptance,” 13 Harv. J.L. & Tech. 161 (2000); Leslie C. O’Toole, “Admitting that We’re Litigating in the Digital Age: A Practical Overview of Issues of Admissibility in the Technological Courtroom,” Fed. Def. Corp. Csl. Quart. 3 (2008); Carole E. Powell, “Computer Generated Visual Evidence: Does Daubert Make a Difference?” 12 Georgia State Univ. L. Rev. 577 (1995).

[7] Federal Judicial Center, Manual for Complex Litigation § 11.447, at 82 (4th ed. 2004) (“The judge should therefore consider the accuracy and reliability of computerized evidence, including any necessary discovery during pretrial proceedings, so that challenges to the evidence are not made for the first time at trial.”); id. at § 11.482, at 99 (“Early and full disclosure of expert evidence can help define and narrow issues. Although experts often seem hopelessly at odds, revealing the assumptions and underlying data on which they have relied in reaching their opinions often makes the bases for their differences clearer and enables substantial simplification of the issues.”)

[8] Fed. R. Civ. P. 26(b)(4)(A) (1993).

[9] United States v. Dish Network, L.L.C., No. 09-3073, 2013 WL 5575864, at *2, *5 (C.D. Ill. Oct. 9, 2013) (noting that the 2010 amendments did not change the meaning of the term “considered,” as including “anything received, reviewed, read, or authored by the expert, before or in connection with the forming of his opinion, if the subject matter relates to the facts or opinions expressed.”); S.E.C. v. Reyes, 2007 WL 963422, at *1 (N.D. Cal. Mar. 30, 2007). See also South Yuba River Citizens’ League v. National Marine Fisheries Service, 257 F.R.D. 607, 610 (E.D. Cal. 2009) (majority rule requires production of materials considered even when work product); Trigon Insur. Co. v. United States, 204 F.R.D. 277, 282 (E.D. Va. 2001).

[10] Dongguk Univ. v. Yale Univ., No. 3:08–CV–00441 (TLM), 2011 WL 1935865 (D. Conn. May 19, 2011) (ordering production of a testifying expert witness’s notes, reasoning that they were neither draft reports nor communications between the party’s attorney and the expert witness, and they were not the mental impressions, conclusions, opinions, or legal theories of the party’s attorney); In re Application of the Republic of Ecuador, 280 F.R.D. 506, 513 (N.D. Cal. 2012) (holding that Rule 26(b) does not protect an expert witness’s own work product other than draft reports). But see Internat’l Aloe Science Council, Inc. v. Fruit of the Earth, Inc., No. 11-2255, 2012 WL 1900536, at *2 (D. Md. May 23, 2012) (holding that expert witness’s notes created to help counsel prepare for deposition of adversary’s expert witness were protected as attorney work product and protected from disclosure under Rule 26(b)(4)(C) because they did not contain opinions that the expert would provide at trial).

[11] Fed. R. Civ. P. 26(a)(2)(B)(ii) (1993) (emphasis added).

[12] Notes of Advisory Committee on Rules for Rule 26(a)(2)(B). See, e.g., Lithuanian Commerce Corp., Ltd. v. Sara Lee Hosiery, 177 F.R.D. 245, 253 (D.N.J. 1997) (expert witness’s written report should state completely all opinions to be given at trial, the data, facts, and information considered in arriving at those opinions, as well as any exhibits to be used), vacated on other grounds, 179 F.R.D. 450 (D.N.J. 1998).

[13] See, e.g., Gillepsie v. Sears, Roebuck & Co., 386 F.3d 21, 35 (1st Cir. 2004) (holding that trial court erred in allowing cross-examination and final argument on expert witness’s supposed failure to produce all working notes and videotaped recordings while conducting tests, when objecting party never made such document requests).

[14] See, e.g., McCoy v. Whirlpool Corp., 214 F.R.D. 646, 652 (D. Kan. 2003) (Rule  26(a)(2) “does not require that a report recite each minute fact or piece of scientific information that might be elicited on direct examination to establish the admissibility of the expert opinion … Nor does it require the expert to anticipate every criticism and articulate every nano-detail that might be involved in defending the opinion[.]”).

[15] Id. (without distinguishing between the provisions of Rule 26(a) concerning reports and Rule 26(b) concerning depositions); see also Scott v. City of New York, 591 F.Supp. 2d 554, 559 (S.D.N.Y. 2008) (“failure to record the panoply of descriptive figures displayed automatically by his statistics program does not constitute best practices for preparation of an expert report,” but holding that the report contained “the data or other information” he considered in forming his opinion, as required by Rule 26); McDonald v. Sun Oil Co., 423 F.Supp. 2d 1114, 1122 (D. Or. 2006) (holding that Rule 26(a)(2)(B) does not require the production of an expert witness’s working notes; a party may not be sanctioned for spoliation based upon expert witness’s failure to retain notes, absent a showing of relevancy and bad faith), rev’d on other grounds, 548 F.3d 774 (9th Cir. 2008).

[16] In re Xerox Corp Securities Litig., 746 F. Supp. 2d 402, 414-15 (D. Conn. 2010) (“The court concludes that it was not necessary for the [expert witness’s] initial regression analysis to be contained in the [expert] report” that was disclosed pursuant to Rule 26(a)(2)), aff’d on other grds. sub. nom., Dalberth v. Xerox Corp., 766 F. 3d 172 (2d Cir. 2014). See also Cook v. Rockwell Int’l Corp., 580 F.Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ , No. 10-1377, 2012 WL 2368857 (June 25, 2012), on remand, 13 F.Supp.3d 1153 (D. Colo. 2014), vacated 2015 WL 3853593, No. 14–1112 (10th Cir. June 23, 2015); Flebotte v. Dow Jones & Co., No. Civ. A. 97–30117–FHF, 2000 WL 35539238, at *7 (D. Mass. Dec. 6, 2000) (“Therefore, neither the plain language of the rule nor its purpose compels disclosure of every calculation or test conducted by the expert during formation of the report.”).

[17] Cook, 580 F. Supp. 2d at 1121–22.

[18] Id.

[19] Id. & n. 55 (Rule 26(a)(2) does not “require that an expert report contain all the information that a scientific journal might require an author of a published paper to retain.”).

[20] Id. at 1121-22.

[21] Id.

[22] Helmert v.  Butterball, LLC, No. 4:08-CV-00342, 2011 WL 3157180, at *2 (E.D. Ark. July 27, 2011).

[23] Fed. R. Civ. P. 34(b)(2)(E)(i).

[24] Fed. R. Civ. P. 34(b)(2)(E)(ii).

[25] Fed. R. Civ. P. 34, Advisory Comm. Notes (2006 Amendments).

[26] Crissen v. Gupta, 2013 U.S. Dist. LEXIS 159534, at *22 (S.D. Ind. Nov. 7, 2013), citing Craig & Landreth, Inc. v. Mazda Motor of America, Inc., 2009 U.S. Dist. LEXIS 66069, at *3 (S.D. Ind. July 27, 2009). See also Saliga v. Chemtura Corp., 2013 U.S. Dist. LEXIS 167019, *3-7 (D. Conn. Nov. 25, 2013).

[27] No. 10-cv-00892-MSKKLM, 2011 WL 684592 (D. Colo. Feb. 18, 2011)

[28] Id. at *1.

[29] Id.

[30] Id. at *2.

[31] Id.

[32] See Barnes v. Dist. of Columbia, 289 F.R.D. 1, 19–24 (D.D.C. 2012) (ordering production of underlying data and information because, “[i]n order for the [requesting party] to understand fully the . . . [r]eports, they need to have all the underlying data and information on how” the reports were prepared).

Don’t Double Dip Data

March 9th, 2015

Meta-analyses have become commonplace in epidemiology and in other sciences. When well conducted and transparently reported, meta-analyses can be extremely helpful. In several litigations, meta-analyses determined the outcome of the medical causation issues. In the silicone gel breast implant litigation, after defense expert witnesses proffered meta-analyses[1], court-appointed expert witnesses adopted the approach and featured meta-analyses in their reports to the MDL court[2].

In the welding fume litigation, plaintiffs’ expert witness offered a crude, non-quantified, “vote counting” exercise to argue that welding causes Parkinson’s disease[3]. In rebuttal, one of the defense expert witnesses offered a quantitative meta-analysis, which provided strong evidence against plaintiffs’ claim.[4] Although the welding fume MDL court excluded the defense expert’s meta-analysis from the pre-trial Rule 702 hearing as untimely, plaintiffs’ counsel soon thereafter initiated settlement discussions of the entire set of MDL cases. Subsequently, the defense expert witness, with his professional colleagues, published an expanded version of the meta-analysis.[5]

And last month, a meta-analysis proffered by a defense expert witness helped dispatch a long-festering litigation in New Jersey’s multi-county isotretinoin (Accutane) litigation. In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan).

Of course, when a meta-analysis is done improperly, the resulting analysis may be worse than none at all. Some methodological flaws involve arcane statistical concepts and procedures, and may be easily missed. Other flaws are flagrant and call for a gatekeeping bucket brigade.

When a merchant puts his hand on the scale at the check-out counter, we call that fraud. When George Costanza double-dipped his chip in the chip dip, he was properly called out for his boorish and unsanitary practice. When a statistician or epidemiologist produces a meta-analysis that double counts crucial data to inflate a summary estimate of association, or to create spurious precision in the estimate, we don’t need to crack open Modern Epidemiology or the Reference Manual on Scientific Evidence to know that something fishy has taken place.

In litigation involving claims that selective serotonin reuptake inhibitors cause birth defects, plaintiffs’ expert witness, a perinatal epidemiologist, relied upon two published meta-analyses[6]. In an examination before trial, this epidemiologist was confronted with the double counting (and other data entry errors) in the relied-upon meta-analyses, and she readily agreed that the meta-analyses were improperly done and that she had to abandon her reliance upon them.[7] The result of the expert witness’s deposition epiphany, however, was that she no longer had the illusory benefit of an aggregation of data, with an outcome supporting her opinion. The further consequence was that her opinion succumbed to a Rule 702 challenge. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Double counting of studies, or of subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advance training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies. Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009). Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Id. Senn urges analysts to “[b]e vigilant about double counting,” id. at *4, and recommends that journals withdraw meta-analyses promptly when mistakes are found, id. at *1.
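
The effect Senn warns about can be shown in a few lines of code. The sketch below (with invented study results, not the data from any meta-analysis discussed here) pools three studies by the standard inverse-variance, fixed-effect method, first correctly and then with the first study counted twice:

    import math

    # Invented inputs: three studies, each as (log odds ratio, standard error).
    studies = [(0.10, 0.20), (-0.05, 0.25), (0.30, 0.30)]

    def fixed_effect(results):
        """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
        weights = [1 / se ** 2 for _, se in results]
        pooled = sum(w * est for (est, _), w in zip(results, weights)) / sum(weights)
        se_pooled = math.sqrt(1 / sum(weights))
        return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    print("Correct pooling:    %.3f (%.3f, %.3f)" % fixed_effect(studies))
    print("First study twice:  %.3f (%.3f, %.3f)" % fixed_effect([studies[0]] + studies))

    # Counting one study twice pulls the pooled estimate toward that study
    # and, worse, narrows the confidence interval: spurious precision
    # manufactured from data that were simply entered twice.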

Similar advice abounds in books and journals[8]. Professor Sander Greenland addresses the issue in his chapter on meta-analysis in Modern Epidemiology:

“Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap. * * * * *”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added).

Just remember George Costanza; don’t double dip that chip, and don’t double dip in the data.


[1] See, e.g., Otto Wong, “A Critical Assessment of the Relationship between Silicone Breast Implants and Connective Tissue Diseases,” 23 Regulatory Toxicol. & Pharmacol. 74 (1996).

[2] See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction:  A Report by a National Science Panel to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998)”; Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000).

[3] Deposition of Dr. Juan Sanchez-Ramos, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

[4] Deposition of Dr. James Mortimer, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).

[5] James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012).

[6] Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfara, Mohammad Abdollahic, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006).

[7] “Q So the question was: Have you read it carefully and do you understand everything that was done in the Nikfar meta-analysis?

A Yes, I think so.

* * *

Q And Nikfar stated that she included studies, correct, in the cardiac malformation meta-analysis?

A That’s what she says.

* * *

Q So if you look at the STATA output, the demonstrative, the — the forest plot, the second study is Kornum 2010. Do you see that?

A Am I —

Q You’re looking at figure four, the cardiac malformations.

A Okay.

Q And Kornum 2010, —

A Yes.

Q — that’s a study you relied upon.

A Mm-hmm.

Q Is that right?

A Yes.

Q And it’s on this forest plot, along with its odds ratio and confidence interval, correct?

A Yeah.

Q And if you look at the last study on the forest plot, it’s the same study, Kornum 2010, same odds ratio and same confidence interval, true?

A You’re right.

Q And to paraphrase My Cousin Vinny, no self-respecting epidemiologist would do a meta-analysis by including the same study twice, correct?

A Well, that was an error. Yeah, you’re right.

***

Q Instead of putting 2 out of 98, they extracted the data and put 9 out of 28.

A Yeah. You’re right.

Q So there’s a numerical transposition that generated a 25-fold increased risk; is that right?

A You’re correct.

Q And, again, to quote My Cousin Vinny, this is no way to do a meta-analysis, is it?

A You’re right.”

Testimony of Anick Bérard, Kuykendall v. Forest Labs, at 223:14-17; 238:17-20; 239:11-240:10; 245:5-12 (Cole County, Missouri; Nov. 15, 2013). According to a Google Scholar search, the Rahimi 2005 meta-analysis had been cited 90 times; the Nikfar 2012 meta-analysis, 11 times, as recently as this month. See, e.g., Etienne Weisskopf, Celine J. Fischer, Myriam Bickle Graz, Mathilde Morisod Harari, Jean-Francois Tolsa, Olivier Claris, Yvan Vial, Chin B. Eap, Chantal Csajka & Alice Panchaud, “Risk-benefit balance assessment of SSRI antidepressant use during pregnancy and lactation based on best available evidence,” 14 Expert Op. Drug Safety 413 (2015); Kimberly A. Yonkers, Katherine A. Blackwell & Ariadna Forray, “Antidepressant Use in Pregnant and Postpartum Women,” 10 Ann. Rev. Clin. Psychol. 369 (2014); Abbie D. Leino & Vicki L. Ellingrod, “SSRIs in pregnancy: What should you tell your depressed patient?” 12 Current Psychiatry 41 (2013).

[8] Julian Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008) (“7.2.2 Identifying multiple reports from the same study. Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramèr 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”); see also id. at 298 (Table 10.1.a “Definitions of some types of reporting biases”); id. at 304-05 (10.2.2.1 Duplicate (multiple) publication bias … “The inclusion of duplicated data may therefore lead to overestimation of intervention effects.”); Julian P.T. Higgins, Peter W. Lane, Betsy Anagnostelis, Judith Anzures-Cabrera, Nigel F. Baker, Joseph C. Cappelleri, Scott Haughie, Sally Hollis, Steff C. Lewis, Patrick Moneuse & Anne Whitehead, “A tool to assess the quality of a meta-analysis,” 4 Research Synthesis Methods 351, 363 (2013) (“A common error is to double-count individuals in a meta-analysis.”); Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, P.J. Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann. Intern. Med. W-65, W-75 (2009) (“Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes).”) (internal citations omitted); Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R. Tramèr, “Different patterns of duplicate publication: an analysis of articles used in systematic reviews,” 291 J. Am. Med. Ass’n 974 (2004); John Andy Wood, “Methodology for Dealing With Duplicate Study Effects in a Meta-Analysis,” 11 Organizational Research Methods 79, 79 (2008) (“Dependent studies, duplicate study effects, nonindependent studies, and even covert duplicate publications are all terms that have been used to describe a threat to the validity of the meta-analytic process.”) (internal citations omitted); Martin R. Tramèr, D. John M. Reynolds, R. Andrew Moore, Henry J. McQuay, “Impact of covert duplicate publication on meta-analysis: a case study,” 315 Brit. Med. J. 635 (1997); Beverley J. Shea, Jeremy M. Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter, “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews,” 7 BMC Medical Research Methodology 10 (2007) (systematic reviews must inquire whether there was “duplicate study selection and data extraction”).

Bentham’s Legacy – Quantification of Fact Finding

March 1st, 2015

Jeremy Bentham, radical philosopher, was a source of many antic proposals. Perhaps his most antic proposal was to have himself stuffed, mounted, and displayed in the halls of University College London, where he may still be observed during normal school hours. In ethical theory, Bentham advocated for an extreme ethical reductionism, known as utilitarianism. Bentham shared Edmund Burke’s opposition to the invocation of natural rights, but unlike Burke, Bentham was an ardent foe of the American Revolution.

Bentham was also a non-practicing lawyer who had an inexhaustible capacity for rationalistic revisions of legal practice. Among his revisionary schemes, Bentham proposed to reduce or translate qualitative beliefs to a numerical scale, like a thermometer. Jeremy Bentham, 6 The Works of Jeremy Bentham; Rationale of Evidence, Rationale of Judicial Evidence at 225 (1843); 1 Rationale of Judicial Evidence Specially Applied to Judicial Practice at 76 (1827). The legal profession, that is, lawyers who actually tried or judged cases, did not think much of Bentham’s proposal:

“The notions of those who have proposed that mere moral probabilities or relations could ever be represented by numbers or space, and thus be subjected to arithmetical analysis, cannot but be regarded as visionary and chimerical.”

Thomas Starkie, A Practical Treatise of the Law of Evidence 225 (2d ed. 1833). Having graduated from St. John’s College, Cambridge University, as senior wrangler, Starkie was no slouch in mathematics, and he was an accomplished lawyer and judge later in life.

Starkie’s pronouncement upon Bentham’s proposal was, in the legal profession, a final judgment. The idea of having witnesses provide a decigrade or centigrade scale of belief in facts never caught on in the law. No evidentiary code or set of rules allows for, or requires, such quantification, but on the fringes, Bentham’s ideas still resonate with some observers who would require juries or judges to quantify their findings of fact:

“Consequently statistical ideas should be used in court and have already been used in the analysis of forensic data. But there are other areas to explore. Thus I do not think a jury should be required to decide guilty or innocent; they should provide their probability of guilt. The judge can then apply MEU [maximised expected utility] by incorporating society’s utility. Hutton could usefully have used some probability. A lawyer and I wrote a paper on the evidential worth of failure to produce evidence.”

Lindley, “Bayesian Thoughts,” Significance 73, 74-75 (June 2004). Some might say that Lindley was trash picking in the dustbin of legal history.
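
For readers curious what Lindley’s MEU proposal would look like in operation, a minimal sketch (with wholly invented utility values) shows how a judge might combine a jury’s stated probability of guilt with society’s utilities:

    # Sketch of the maximised-expected-utility (MEU) rule Lindley alludes to.
    # The utility values below are invented for illustration only.
    utilities = {
        ("convict", "guilty"): 1.0,
        ("convict", "innocent"): -10.0,  # wrongful conviction weighted heavily
        ("acquit", "guilty"): -1.0,
        ("acquit", "innocent"): 0.0,
    }

    def expected_utility(action, p_guilt):
        return (p_guilt * utilities[(action, "guilty")]
                + (1 - p_guilt) * utilities[(action, "innocent")])

    for p in (0.50, 0.90, 0.95):
        best = max(("convict", "acquit"), key=lambda a: expected_utility(a, p))
        print(f"P(guilt) = {p:.2f} -> {best}")

    # With these utilities, conviction maximizes expected utility only when
    # the jury's probability of guilt exceeds 10/12, about 0.83; change the
    # utilities and the threshold moves, which is exactly the quantification
    # Starkie found visionary and chimerical.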

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics who have served as expert witnesses and who have written post-mortems of their involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts of the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, but some valuable and thoughtful considerations about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 (the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP,” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inconclusive studies as evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in humans. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity directly contradicts California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.”

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may readily be seen as an advocate for California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggests that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

The foregoing points are rather severe limitations of Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new. Many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony” is part of the purpose of this blog. There would be great utility to making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then the testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts.”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in assuring courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inferences. Greenland emphasizes that validity considerations often are stronger and more important than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302 – 03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
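
For readers unfamiliar with what the Lash paper calls for, the simplest form of quantitative bias analysis adjusts an observed risk ratio for a single unmeasured binary confounder. The sketch below uses one textbook form of the external-adjustment calculation; every input value is hypothetical:

    # Simple quantitative bias analysis: external adjustment of an observed
    # risk ratio for one unmeasured binary confounder. All inputs are
    # hypothetical illustration values, not data from any study.

    rr_observed = 1.50    # association reported in the study
    rr_cd = 3.0           # assumed confounder-disease risk ratio
    p_exposed = 0.40      # assumed confounder prevalence among the exposed
    p_unexposed = 0.20    # assumed confounder prevalence among the unexposed

    # Bias factor for confounding by a binary covariate.
    bias = (p_exposed * (rr_cd - 1) + 1) / (p_unexposed * (rr_cd - 1) + 1)
    rr_adjusted = rr_observed / bias

    print(f"Bias factor: {bias:.2f}")         # about 1.29
    print(f"Adjusted RR: {rr_adjusted:.2f}")  # about 1.17

    # Under these assumptions, confounding alone explains much of the
    # observed association; repeating the calculation over plausible ranges
    # of the assumed inputs is the core of a quantitative bias analysis.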


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished physician scientist, who served as a defense expert witness, in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary, “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra, n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004)

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‘beyond a reasonable doubt’ – or be ‘compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‘play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”)

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Sander Greenland on “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics”

February 8th, 2015

Sander Greenland is one of the few academics who have served as expert witnesses and then written post-mortems of their involvement in various litigations[1]. Although settling scores with opposing expert witnesses can be a risky business[2], the practice can provide important insights for judges and lawyers who want to avoid the errors of the past. Greenland correctly senses that many errors seem endlessly recycled, and that courts could benefit from disinterested commentary on cases. And so, there should be a resounding affirmation from federal and state courts of the proclaimed “need for critical appraisal of expert witnesses in epidemiology and statistics,” as well as in many other disciplines.

A recent exchange[3] with Professor Greenland led me to revisit his Wake Forest Law Review article. His article raises some interesting points, some mistaken, others valuable and thoughtful, about how to improve the state of statistical expert witness testimony. For better and worse[4], lawyers who litigate health effects issues should read it.

Other Misunderstandings

Greenland posits criticisms of defense expert witnesses[5], who he believes have misinterpreted or misstated the appropriate inferences to be drawn from null studies. In one instance, Greenland revisits one of his own cases, without any clear acknowledgment that his views were largely rejected.[6] The State of California had declared, pursuant to Proposition 65 (the Safe Drinking Water and Toxic Enforcement Act of 1986, Health and Safety Code sections 25249.5, et seq.), that the State “knew” that di(2-ethylhexyl)phthalate, or “DEHP,” caused cancer. Baxter Healthcare challenged the classification, and according to Greenland, the defense experts erroneously interpreted inconclusive studies as evidence supporting a conclusion that DEHP does not cause cancer.

Greenland argues that the Baxter expert’s reference[7] to an IARC working group’s classification of DEHP as “not classifiable as to its carcinogenicity to humans” did not support the expert’s conclusion that DEHP does not cause cancer in humans. If Baxter’s expert invoked the IARC working group’s classification for complete exoneration of DEHP, then Greenland’s point is fair enough. In his single-minded attack on Baxter’s expert’s testimony, however, Greenland missed a more important point, which is that the IARC’s determination that DEHP is not classifiable as to carcinogenicity directly contradicts California’s epistemic claim to “know” that DEHP causes cancer. And Greenland conveniently omits any discussion that the IARC working group had reclassified DEHP from “possibly carcinogenic” to “not classifiable,” in the light of its conclusion that mechanistic evidence of carcinogenesis in rodents did not pertain to humans.[8] Greenland maintains that Baxter’s experts misrepresented the IARC working group’s conclusion[9], but that conclusion, at the very least, demonstrates that California was on very shaky ground when it declared that it “knew” that DEHP was a carcinogen. California’s semantic gamesmanship over its epistemic claims is at the root of the problem, not a misstep by defense experts in describing inconclusive evidence as exonerative.

Greenland goes on to complain that in litigation over health claims:

“A verdict of ‛uncertain’ is not allowed, yet it is the scientific verdict most often warranted. Elimination of this verdict from an expert’s options leads to the rather perverse practice (illustrated in the DEHP testimony cited above) of applying criminal law standards to risk assessments, as if chemicals were citizens to be presumed innocent until proven guilty.”

39 Wake Forest Law Rev. at 303. Despite Greenland’s alignment with California in the Denton case, the fact of the matter is that a verdict of “uncertain” was allowed, and he was free to criticize California for making a grossly exaggerated epistemic claim on inconclusive evidence.

Perhaps recognizing that he may readily be seen as an advocate coming to the defense of California on the DEHP issue, Greenland protests that:

“I am not suggesting that judgments for plaintiffs or actions against chemicals should be taken when evidence is inconclusive.”

39 Wake Forest Law Rev. at 305. And yet, his involvement in the Denton case (as well as other cases, such as silicone gel breast implant cases, thimerosal cases, etc.) suggests that he is willing to lend aid and support to judgments for plaintiffs when the evidence is inconclusive.

Important Advice and Recommendations

The foregoing points are rather severe limitations of Greenland’s article, but lawyers and judges should also look to what is good and helpful here. Greenland is correct to call out expert witnesses, regardless of party affiliation, who opine that inconclusive studies are “proof” of the null hypothesis. Although some of Greenland’s arguments against the use of significance probability may be overstated, his corrections to the misstatements and misunderstandings of significance probability should command greater attention in the legal community. In one strained passage, however, Greenland uses a disjunction to juxtapose null hypothesis testing with proof beyond a reasonable doubt[10]. Greenland of course understands the difference, but the context would lead some untutored readers to think he has equated the two probabilistic assessments. Writing in a law review for lawyers and judges might have led him to be more careful. Given the prevalence of plaintiffs’ counsel’s confusing the 95% confidence coefficient with a burden of proof akin to beyond a reasonable doubt, great care in this area is, indeed, required.

Despite his appearing for plaintiffs’ counsel in health effects litigation, some of Greenland’s suggestions are balanced and perhaps more truth-promoting than many plaintiffs’ counsel would abide. His article provides an important argument in favor of raising the legal criteria for witnesses who purport to have expertise to address and interpret epidemiologic and experimental evidence[11]. And beyond raising qualification requirements above mere “reasonable pretense at expertise,” Professor Greenland offers some thoughtful, helpful recommendations for improving expert witness testimony in the courts:

  • “Begin publishing projects in which controversial testimony (a matter of public record) is submitted, and as space allows, published on a regular basis in scientific or law journals, perhaps with commentary. An online version could provide extended excerpts, with additional context.
  • Give courts the resources and encouragement to hire neutral experts to peer-review expert testimony.
  • Encourage universities and established scholarly societies (such as AAAS, ASA, APHA, and SER) to conduct workshops on basic epidemiologic and statistical inference for judges and other legal professionals.”

39 Wake Forest Law Rev. at 308.

Each of these three suggestions is valuable and constructive, and worthy of an independent paper. The recommendation of neutral expert witnesses and scholarly tutorials for judges is hardly new. Many defense counsel and judges have argued for them in litigation and in commentary. The first recommendation, of publishing “controversial testimony,” is part of the purpose of this blog. There would be great utility in making expert witness testimony, and analysis thereof, more available for didactic purposes. Perhaps the more egregious testimonial adventures should be republished in professional journals, as Greenland suggests. Greenland qualifies his recommendation with “as space allows,” but space is hardly the limiting consideration in the digital age.

Causation

Professor Greenland correctly points out that causal concepts and conclusions are often essentially contested[12], but his argument might well be incorrectly taken for “anything goes.” More helpfully, Greenland argues that various academic ideals should infuse expert witness testimony. He suggests that greater scholarship, with acknowledgment of all viewpoints, and all evidence, is needed in expert witnessing. 39 Wake Forest Law Rev. at 293.

Greenland’s argument provides an important corrective to the rhetoric of Oreskes, Cranor, Michaels, Egilman, and others on “manufacturing doubt”:

“Never force a choice among competing theories; always maintain the option of concluding that more research is needed before a defensible choice can be made.”

Id. Despite his position in the Denton case, and others, Greenland and all expert witnesses are free to maintain that more research is needed before a causal claim can be supported. Greenland also maintains that expert witnesses should “look past” the conclusions drawn by authors, and base their opinions on the “actual data” on which the statistical analyses are based, and from which conclusions have been drawn. Courts have generally rejected this view, but if courts were to insist upon real expertise in epidemiology and statistics, then the testifying expert witnesses should not be constrained by the hearsay opinions in the discussion sections of published studies – sections which by nature are incomplete and tendentious. See “Follow the Data, Not the Discussion” (May 2, 2010).

Greenland urges expert witnesses and legal counsel to be forthcoming about their assumptions and their uncertainty about conclusions:

“Acknowledgment of controversy and uncertainty is a hallmark of good science as well as good policy, but clashes with the very time limited tasks faced by attorneys and courts”

39 Wake Forest Law Rev. at 293-94. This recommendation would be helpful in assuring courts that the data may simply not support conclusions sufficiently certain to be submitted to lay judges and jurors. Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319, 320 (7th Cir. 1996) (“But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”) (internal citations omitted).

Threats to Validity

One of the serious mistakes counsel often make in health effects litigation is to invite courts to believe that statistical significance is sufficient for causal inferences. Greenland emphasizes that validity considerations often are much stronger and more important than the play of random error[13]:

“For very imperfect data (e.g., epidemiologic data), the limited conclusions offered by statistics must be further tempered by validity considerations.”

*   *   *   *   *   *

“Examples of validity problems include non-random distribution of the exposure in question, non-random selection or cooperation of subjects, and errors in assessment of exposure or disease.”

39 Wake Forest Law Rev. at 302-03. Greenland’s abbreviated list of threats to validity should remind courts that they cannot sniff a p-value below five percent and then safely kick the can to the jury. The literature on evaluating bias and confounding is huge, but Greenland was a co-author on an important recent paper, which needs to be added to the required reading lists of judges charged with gatekeeping expert witness opinion testimony about health effects. See Timothy L. Lash, et al., “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).
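
For readers who want a concrete sense of what a quantitative bias analysis involves, here is a minimal sketch, in Python and with purely hypothetical numbers, of one of its simplest forms: external adjustment of an observed risk ratio for a single unmeasured, binary confounder. It illustrates the general idea only; it is not a substitute for the methods detailed by Lash and his co-authors.

```python
# A minimal sketch of a simple quantitative bias analysis: "external adjustment"
# of an observed risk ratio for one unmeasured binary confounder. All inputs
# below are hypothetical, chosen only to illustrate the arithmetic.

def bias_factor(rr_cd, prev_exposed, prev_unexposed):
    """Confounding bias factor for an unmeasured binary confounder.

    rr_cd          -- risk ratio relating the confounder to the disease
    prev_exposed   -- prevalence of the confounder among the exposed
    prev_unexposed -- prevalence of the confounder among the unexposed
    """
    return (prev_exposed * (rr_cd - 1) + 1) / (prev_unexposed * (rr_cd - 1) + 1)


def externally_adjusted_rr(rr_observed, rr_cd, prev_exposed, prev_unexposed):
    """Observed risk ratio divided by the bias attributable to the confounder."""
    return rr_observed / bias_factor(rr_cd, prev_exposed, prev_unexposed)


# Hypothetical example: an observed RR of 1.5 shrinks to about 1.17 if a
# confounder that triples risk is twice as common among the exposed.
print(round(externally_adjusted_rr(1.5, rr_cd=3.0, prev_exposed=0.4, prev_unexposed=0.2), 2))
```

Even this toy calculation shows how a modest imbalance in an unmeasured risk factor can account for much of a weak association, which is the thrust of Greenland’s admonition that validity can matter more than p-values.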


[1] For an influential example of this sparse genre, see James T. Rosenbaum, “Lessons from litigation over silicone breast implants: A call for activism by scientists,” 276 Science 1524 (1997) (describing the exaggerations, distortions, and misrepresentations of plaintiffs’ expert witnesses in silicone gel breast implant litigation, from the perspective of a highly accomplished scientist-physician, who served as a defense expert witness, in proceedings before Judge Robert Jones, in Hall v. Baxter Healthcare Corp., 947 F. Supp. 1387 (D. Or. 1996)). In one attempt to “correct the record” in the aftermath of a case, Greenland excoriated a defense expert witness, Professor Robert Makuch, for stating that Bayesian methods are rarely used in medicine or in the regulation of medicines. Sander Greenland, “The Need for Critical Appraisal of Expert Witnesses in Epidemiology and Statistics,” 39 Wake Forest Law Rev. 291, 306 (2004). Greenland heaped adjectives upon his adversary: “ludicrous claim,” “disturbing,” “misleading expert testimony,” and “demonstrably quite false.” See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014) (debunking Prof. Greenland’s claims).

[2] One almost comical example of trying too hard to settle a score occurs in a footnote, where Greenland cites a breast implant case as having been reversed in part by another case in the same appellate court. See 39 Wake Forest Law Rev. at 309 n.68, citing Allison v. McGhan Med. Corp., 184 F.3d 1300, 1310 (11th Cir. 1999), aff’d in part & rev’d in part, United States v. Baxter Int’l, Inc., 345 F.3d 866 (11th Cir. 2003). The subsequent case was not by any stretch of the imagination a reversal of the earlier Allison case; the egregious citation is a legal fantasy. Furthermore, Allison had no connection with the procedures for court-appointed expert witnesses or technical advisors. Perhaps the most charitable interpretation of this footnote is that it was injected by the law review editors or supervisors.

[3] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[4] In addition to the unfair attack on Professor Makuch, see supra, n.1, there is much that some will find “disturbing,” “misleading,” and even “ludicrous” (some of Greenland’s favorite pejorative adjectives) in the article. Greenland repeats in brief his arguments against the legal system’s use of probabilities of causation, which I have addressed elsewhere.

[5] One of Baxter’s expert witnesses appeared to be the late Professor Patricia Buffler.

[6] See 39 Wake Forest Law Rev. at 294-95, citing Baxter Healthcare Corp. v. Denton, No. 99CS00868, 2002 WL 31600035, at *1 (Cal. App. Dep’t Super. Ct. Oct. 3, 2002) (unpublished); Baxter Healthcare Corp. v. Denton, 120 Cal. App. 4th 333 (2004).

[7] Although Greenland cites to a transcript, the citation is to a judicial opinion, and the actual transcript of testimony is not available at the citation given.

[8] See Denton, supra.

[9] 39 Wake Forest L. Rev. at 297.

[10] 39 Wake Forest L. Rev. at 305 (“If it is necessary to prove causation ‛beyond a reasonable doubt’–or be ‛compelled to give up the null’ – then action can be forestalled forever by focusing on any aspect of available evidence that fails to conform neatly with the causal (alternative) hypothesis. And in medical and social science there is almost always such evidence available, not only because of the ‛play of chance’ (the focus of ordinary statistical theory), but also because of the numerous validity problems in human research.”).

[11] See Peter Green, “Letter from the President to the Lord Chancellor regarding the use of statistical evidence in court cases” (Jan. 23, 2002) (writing on behalf of The Royal Statistical Society; “Although many scientists have some familiarity with statistical methods, statistics remains a specialised area. The Society urges you to take steps to ensure that statistical evidence is presented only by appropriately qualified statistical experts, as would be the case for any other form of expert evidence.”).

[12] 39 Wake Forest Law Rev. at 291 (“In reality, there is no universally accepted method for inferring presence or absence of causation from human observational data, nor is there any universally accepted method for inferring probabilities of causation (as courts often desire); there is not even a universally accepted definition of cause or effect.”).

[13] 39 Wake Forest Law Rev. at 302-03 (“If one is more concerned with explaining associations scientifically, rather than with mechanical statistical analysis, evidence about validity can be more important than statistical results.”).

Fixodent Study Causes Lockjaw in Plaintiffs’ Counsel

February 4th, 2015

Litigation Drives Science

Back in 2011, the Fixodent MDL Court sustained Rule 702 challenges to plaintiffs’ expert witnesses. “Hypotheses are verified by testing, not by submitting them to lay juries for a vote.” In re Denture Cream Prods. Liab. Litig., 795 F. Supp. 2d 1345, 1367 (S.D.Fla.2011), aff’d, Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296 (11th Cir. 2014). The Court found that the plaintiffs had raised a superficially plausible hypothesis, but that they had not verified the hypothesis by appropriate testing[1].

Like dentures to Fixodent, the plaintiffs stuck to their claims, and set out to create the missing evidence. Plaintiffs’ counsel contracted with Dr. Salim Shah and his companies Sarfez Pharmaceuticals, Inc. and Sarfez USA, Inc. (“Sarfez”) to conduct human research in India, to support their claims that zinc in denture cream causes neurological damage.[2] In re Denture Cream Prods. Liab. Litig., Misc. Action 13-384 (RBW), 2013 U.S. Dist. LEXIS 93456, *2 (D.D.C. July 3, 2013). When the defense learned of this study, and of plaintiffs’ counsel’s payments of over $300,000 to support it, they sought discovery of the raw data, study protocol, statistical analyses, and other materials from plaintiffs’ counsel. Plaintiffs’ counsel protested that they did not have all the materials, and directed defense counsel to Sarfez. Although other courts have made counsel produce similar materials from the scientists and independent contractors they engaged, in this case, defense counsel followed the trail of documents to the contractor, Sarfez, with subpoenas in hand. Id. at *3-4.

The defense served a Rule 45 subpoena on Sarfez, which produced some, but not all, responsive documents. Procter & Gamble pressed for the missing materials, including study protocols, analytical reports, and raw data. Id. at *12-13. Judge Reggie Walton held that the subpoena, which sought underlying data and non-privileged correspondence, was within the scope of Rules 26(b) and 45, and was not unduly burdensome. Id. at *9-10, *20. Sarfez attempted to argue that the requested materials, listed as email attachments, might not exist, but Judge Walton branded the suggestion “disingenuous.” Attachments to emails should be produced along with the emails. Id. at *12 (citing and collecting cases). Although Judge Walton did not grant a request for forensic recovery of hard-drive data or for sanctions, His Honor warned Sarfez that it might be required to bear the cost of forensic data recovery if it did not comply with the court’s order. Id. at *15, *22.

Plaintiffs Put Their Study Into Play

The study at issue in the subpoena was designed by Frederick K. Askari, M.D., Ph.D., an associate professor of hepatology in the University of Michigan Health System. In re Denture Cream Prods. Liab. Litig., No. 09–2051–MD, 2015 WL 392021, at *7 (S.D. Fla. Jan. 28, 2015). At the instruction of plaintiffs’ counsel, Dr. Askari sought to study the short-term effects of Fixodent on copper absorption in humans. Working in India, Askari conducted the study on 24 participants, who were given a controlled diet for 36 days. Of the 24 participants, 12, randomly selected, received 12 grams of Fixodent per day (containing 204 mg of zinc). Another six participants, randomly selected, were given zinc acetate three times per day (150 mg of zinc), and the remaining six participants received placebo three times per day.

A study protocol was approved by an independent group[3], id. at *9, and the study was supposed to be conducted with a double blind. Id. at *7. Not surprisingly, those participants who received doses of Fixodent or zinc acetate had higher urinary levels of zinc (pee < 0.05). The important issue, however, was whether the dietary zinc levels affected copper excretion in a way that would support plaintiffs’ claims that copper levels were lowered sufficiently by Fixodent to cause a syndromic neurological disorder. The MDL Court ultimately concluded that plaintiffs’ expert witnesses’ opinions on general causation claims were not sufficiently supported to satisfy the requirements of Rule 702, and upheld defense challenges to those expert witnesses. In doing so, the MDL Court had much of interest to say about case reports, weight of the evidence, and other important issues. This post, however, concentrates on the deviations of one study, commissioned by plaintiffs’ counsel, from the scientific standard of care. The Askari “research” makes for a fascinating case study of how not to conduct a study in a litigation caldron.

Non-Standard Deviations

The First Deviation – Changing the Ascertainment Period After the Data Are Collected

The protocol apparently identified a primary endpoint to be:

“the mean increase in [copper 65] excretion in fecal matter above the baseline (mg/day) averaged over the study period … to test the hypothesis that the release of [zinc] either from Fixodent or Zinc Acetate impairs [copper 65] absorption as measured in feces.”

The study outcome, on the primary end point, was clear. The plaintiffs’ testifying statistician, Hongkun Wang, stated in her deposition that the fecal copper (whether isotope Cu63 or Cu65) was not different across the three groups (Fixodent, zinc acetate, and placebo). Id. at *9[4]. Even Dr. Askari himself admitted that the total fecal copper levels were not increased in the Fixodent group compared with the placebo control group. Id. at *9.[5]

Apparently after obtaining the data, and finding no difference in the pre-specified end point of average fecal copper levels between Fixodent and placebo groups, Askari turned to a new end point, measured in a different way, not described in the protocol as the primary end point.

The Second Deviation – Changing Primary End Point After the Data Are Collected

In the early (days 3, 4, and 5) and late (days 31, 32, and 33) parts of the Study, participants received a dose of purified copper 65[6] to help detect the “blockade of copper.” Id. at *8. The participants’ fecal copper 65 levels were compared to their naturally occurring copper 63 levels. According to Dr. Askari:

“if copper is being blocked in the Fixodent and zinc acetate test subjects from exposure to the zinc in the test product (Fixodent) and positive control (zinc acetate), the ratio of their fecal output of copper 65 as compared to their fecal output of copper 63 would increase relative to the control subjects, who were not dosed with zinc. In short, a higher ratio of copper 65 to copper 63 reflects blocking of copper.”

Id.

Askari analyzed the ratio of the two copper isotopes (Cu65/Cu63), limiting the period of observation to study days 31 to 33. Id. at *9. Askari thus changed the outcome to be measured, the timing of the measurement, and the manner of measurement (an average over the entire study period versus the amount on days 31 to 33). On this post hoc, non-prespecified end point, Askari claimed to have found “significant” differences.

The MDL Court expressed its skepticism and concern over the difference between the protocol’s specified end point, and one that came into the study only after the data were obtained and analyzed. The plaintiffs claimed that it was their (and Askari’s) intention from the initial stages of designing the Fixodent Blockade Study to use the Cu65/Cu63 ratio as the primary end point. According to the plaintiffs, the isotope ratio was simply better articulated and “clarified” as the primary end point in the final report than it was in the protocol. The Court was not amused or assuaged by the plaintiffs’ assurances. The study sponsor, Dr. Salim Shah, could not point to a draft protocol that indicated the isotope ratio as the end point; nor could Dr. Shah identify a request for this analysis by Wang until after the study was concluded. Id. at *9.[7]

Ultimately, the Court declared that whether the protocol was changed post hoc, after the primary end point yielded a disappointing analysis, or the isotope ratio was carelessly omitted from the protocol, the design or conduct of the study was “incompatible with reliable scientific methodology.”

The Third Deviation – Changing the Standard of “Significance” After the Data Are Collected and P-Values Are Computed

The protocol for the Blockade Study called for a pre-determined Type I error rate of no more than 5 percent, that is, a threshold p-value of 0.05 for declaring “significance.”[8] Id. at *10. The difference in the isotope ratio showed an attained significance probability of 5.7 percent, and thus even the post hoc end point missed the prespecified level of significance. The final protocol changed the level of “significance” to 10 percent, to permit the plaintiffs to declare a “statistically significant” result. Dr. Wang admitted in deposition that she doubled the acceptable level of Type I error only after she obtained the data and calculated the p-value of 0.057. Id. at *10.[9]
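
The arithmetic of the goal-post move can be laid out in a few lines. The sketch below, in Python, uses only the figures reported in the opinion (a p-value of 0.057, a pre-specified alpha of 0.05, and the after-the-fact alpha of 0.10); the multiple-end-point calculation at the end is an illustration of my own, assuming independent tests, of why end points, like alpha, must be fixed before the data are seen.

```python
# A minimal sketch of the "moving goal post." The p-value and alpha levels are
# those reported in the opinion; the multiple-testing arithmetic is illustrative
# and assumes independent tests.

p_value = 0.057  # attained p-value for the post hoc isotope-ratio end point

for alpha in (0.05, 0.10):  # pre-specified alpha versus the doubled, post hoc alpha
    verdict = "statistically significant" if p_value < alpha else "not significant"
    print(f"alpha = {alpha:.2f}: p = {p_value:.3f} is {verdict}")

# Shopping among several candidate end points inflates the real chance of a
# false-positive finding well beyond the nominal 5 percent:
alpha = 0.05
for k in (1, 3, 5):
    family_wise_error = 1 - (1 - alpha) ** k
    print(f"{k} independent end point(s): P(at least one false positive) = {family_wise_error:.2f}")
```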

The Court found that this deliberate moving of the statistical goal post reflected a “lack of objectivity and reliability,” which smacked of contrivance[10].

The Court found that the study’s deviations from the protocol demonstrated a lack of objectivity. The inadequacy of the Study’s statistical analysis plan supported the Court’s conclusion that Dr. Askari’s supposed finding of a “statistically significant” difference in fecal copper isotope ratio between Fixodent and placebo group participants was “not based on sufficiently reliable and objective scientific methodology” and thus could not support plaintiffs’ expert witnesses’ general causation claims.

The Fourth Deviation – Failing to Take Steps to Preserve the Blind

The protocol called for a double-blinded study, with neither the participants nor the clinical investigators knowing which participant was in which group. Rather than delivering similar-looking capsules to the three different groups, however, the investigators gave each group starkly different-looking capsules. Id. at *11. The capsules for one set were apparently so large that the investigators worried whether the participants would comply with the dosing regimen.

The Fifth Deviation – Failing to Take Steps to Keep Biological Samples From Becoming Contaminated

Documents and emails from Dr. Shah acknowledged that there had been “difficulties in storing samples at appropriate temperature.” Id. at *11. Fecal samples were “exposed to unfrozen and undesirable temperature conditions.” Dr. Shah called for remedial steps from the Study manager, but there was no documentation that such steps were taken to correct the problem. Id.

The Consequences of Discrediting the Study

Dr. Askari opined that the Study, along with other evidence, shows that Fixodent can cause copper deficiency myeloneuropathy (“CDM”). The plaintiffs, of course, argued that the Defendants’ criticisms of the Fixodent Study’s methodology went merely to the “weight rather than admissibility.” Id. at *9. Askari’s study was but one leg of the stool, but the defense’s thorough discrediting of the study was an important step in collapsing the support for the plaintiffs’ claims. As the MDL Court explained:

“The Court cannot turn a blind eye to the myriad, serious methodological flaws in the Fixodent Blockade Study and conclude they go to weight rather than admissibility. While some of these flaws, on their own, may not be serious enough to justify exclusion of the Fixodent Blockade Study; taken together, the Court finds Fixodent Blockade Study is not “good science,” and is not admissible. Daubert, 509 U.S. at 593 (internal quotation marks and citation omitted).”

Id. at *11.

A study, such as the Fixodent Blockade Study, is not itself admissible, but the deconstruction of the study upon which plaintiffs’ expert witnesses relied led directly to the Court’s decision to exclude those witnesses. The Court omitted any reference to Federal Rule of Evidence 703, which addresses the requirements for facts and data, otherwise inadmissible, upon which expert witnesses may rely in reaching their opinions.


 

[1] See “Philadelphia Plaintiff’s Claims Against Fixodent Prove Toothless” (May 2, 2012); Jacoby v. Rite Aid Corp., 2012 Phila. Ct. Com. Pl. LEXIS 208 (2012), aff’d, 93 A.3d 503 (Pa. Super. 2013); “Pennsylvania Superior Court Takes The Bite Out of Fixodent Claims” (Dec. 12, 2013).

[2] See “Using the Rule 45 Subpoena to Obtain Research Data” (July 24, 2013).

[3] The group was identified as the Ethica Norma Ethical Committee.

[4] Citing Wang Dep. at 56:7–25, Aug. 13, 2013, and Wang Analysis of Fixodent Blockade Study [ECF No. 2197–56] (noting “no clear treatment effect on Cu63 or Cu65”).

[5] Askari Dep. at 69:21–24, June 20, 2013.

[6] Copper 65 is not a typical tracer; it is not radioactive. Naturally occurring copper consists almost exclusively of two stable (non-radioactive) isotopes: Cu65, about 31 percent, and Cu63, about 69 percent. See, e.g., Manuel Olivares, Bo Lönnerdal, Steve A Abrams, Fernando Pizarro, and Ricardo Uauy, “Age and copper intake do not affect copper absorption, measured with the use of 65Cu as a tracer, in young infants,” 76 Am. J. Clin. Nutr. 641 (2002); T.D. Lyon, et al., “Use of a stable copper isotope (65Cu) in the differential diagnosis of Wilson’s disease,” 88 Clin. Sci. 727 (1995).

[7] Shah Dep. at 87:12–25; 476:2–536:12, 138:6–142:12, June 5, 2013.

[8] The reported decision leaves unclear how the analysis would proceed, whether by ANOVA for the three groups, or t-tests, and whether there was multiple testing.

[9] Wang Dep. at 151:13–152:7; 153:15–18.

[10] 2015 WL 392021, at *10, citing Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (“A scientist who has a formed opinion as to the answer he is going to find before he even begins his research may be less objective than he needs to be in order to produce reliable scientific results.”); Rink v. Cheminova, Inc., 400 F.3d 1286, 1293 n. 7 (11th Cir.2005) (“In evaluating the reliability of an expert’s method … a district court may properly consider whether the expert’s methodology has been contrived to reach a particular result.” (alteration added)).

 

The Rhetoric of Playing Dumb on Statistical Significance – Further Comments on Oreskes

January 17th, 2015

As a matter of policy, I leave the comment field turned off on this blog. I don’t have the time or patience to moderate discussions, but that is not to say that I don’t value feedback. Many readers have written, with compliments, concurrences, criticisms, and corrections. Some correspondents have given me valuable suggestions and materials. I believe I can say that aside from a few scurrilous emails, the feedback generally has been constructive, and welcomed.

My last post was on Naomi Oreskes’ opinion piece in the Sunday New York Times[1]. Professor Deborah Mayo asked me for permission to re-post the substance of this post, and to link to the original[2]. Mayo’s blog does allow for comments, and much to my surprise, the posts drew a great deal of attention, links, comment, and twittering. The number and intensity of the comments, as well as the other blog posts and tweets, seemed out of proportion to the point I was trying to make about misinterpreting confidence intervals and other statistical concepts. I suspect that some climate skeptics received my criticisms of Oreskes with a degree of schadenfreude, and that some who criticized me did so because they fear any challenge to Oreskes as a climate-change advocate. So be it. As I made clear in my post, I was not seeking to engage Oreskes on climate change or her judgments on that issue. What I saw in Oreskes’ article was the same rhetorical move made in the courtroom, and in scientific publications, in which plaintiffs and environmentalists attempt to claim a scientific imprimatur for their conclusions without adhering to the rigor required for scientific judgments[3].

Some of the comments about Professor Oreskes caused me to take a look at her recent book, Naomi Oreskes & Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (N.Y. 2010). Interestingly, much of the substance of Oreskes’ newspaper article comes directly from this book. In the context of reporting on the dispute over the EPA’s meta-analysis of studies on passive smoking and lung cancer, Oreskes addressed the 95 percent issue:

“There’s nothing magic about 95 percent. It could be 80 percent. It could be 51 percent. In Vegas if you play a game with 51 percent odds in your favor, you’ll still come out ahead if you play long enough. The 95 percent confidence level is a social convention, a value judgment. And the value it reflects is one that says that the worst mistake a scientist can make is to fool herself: to think an effect is real when it is not. Statisticians call this a type I error. You can think of it as being gullible, naive, or having undue faith in your own ideas.89 To avoid it, scientists place the burden of proof on the person claiming a cause and effect. But there’s another kind of error-type 2-where you miss effects that are really there. You can think of that as being excessively skeptical or overly cautious. Conventional statistics is set up to be skeptical and avoid type I errors. The 95 percent confidence standard means that there is only 1 chance in 20 that you believe something that isn’t true. That is a very high bar. It reflects a scientific worldview in which skepticism is a virtue, credulity is not.90 As one Web site puts it, ‘A type I error is often considered to be more serious, and therefore more important to avoid, than a type II error’.91 In fact, some statisticians claim that type 2 errors aren’t really errors at all,  just missed opportunities.92

Id. at 156-57 (emphasis added). Oreskes’ statement of the confidence interval, from her book, advances more ambiguity by not specifying what the “something” is that you might believe that isn’t true. Of course, if it is the assumed parameter, then she has made the same error as she did in the Times. Oreskes’ further discussion of the EPA environmental tobacco smoke meta-analysis issue makes her meaning clearer, and her interpretation of statistical significance, less defensible:

“Even if 90 percent is less stringent than 95 percent, it still means that there is a 9 in 10 chance that the observed results did not occur by chance. Think of it this way. If you were nine-tenths sure about a crossword puzzle answer, wouldn’t you write it in?94”

Id.  Throughout her discussion, Oreskes fails to acknowledge that the p-value assumes the correctness of the null hypothesis in order to assess the strength of the specific data as evidence against the null. As I have pointed out elsewhere, this misinterpretation of significance testing is a rhetorical strategy to evade significance testing, as well as to obscure the role of bias and confounding in accounting for data that differs from an expected value.
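
The distinction can be stated compactly. In the notation below, which is mine and not Oreskes’ or Greenland’s, T is the test statistic and t_obs its observed value; the p-value conditions on the null hypothesis, and converting it into a probability that the null is true requires a prior, by way of Bayes’ theorem:

```latex
% The p-value conditions on the null hypothesis H_0; it is not the probability
% that H_0 is true (or false). Moving from one to the other requires priors.
p = \Pr\bigl(T \ge t_{\mathrm{obs}} \mid H_0\bigr)
\qquad \text{whereas} \qquad
\Pr\bigl(H_0 \mid T \ge t_{\mathrm{obs}}\bigr)
  = \frac{\Pr\bigl(T \ge t_{\mathrm{obs}} \mid H_0\bigr)\,\Pr(H_0)}
         {\Pr\bigl(T \ge t_{\mathrm{obs}} \mid H_0\bigr)\,\Pr(H_0)
          + \Pr\bigl(T \ge t_{\mathrm{obs}} \mid H_1\bigr)\,\Pr(H_1)}
```

Oreskes’ “9 in 10 chance that the observed results did not occur by chance” is a claim about the second, posterior quantity; the 90 percent confidence coefficient she invokes speaks only to the first.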

Oreskes also continues to maintain that a failure to reject the null is playing “dumb” and placing:

“the burden of proof on the victim, rather than, for example, the manufacturer of a harmful product-and we may fail to protect some people who are really getting hurt.”

Id. So again, the same petitio principii as we saw in the Times. Victimhood is exactly what remains to be established. Oreskes cannot assume it, and then criticize time-tested methods that fail to deliver a confirmatory judgment.

There are endnotes in her book, but the authors fail to cite any serious statistics text. The only reference of dubious relevance is another University of Michigan book, Stephen T. Ziliak & Deirdre N. McCloskey, The Cult of Statistical Significance (2008). Enough said[4].

With a little digging, I learned that Oreskes and Conway are science fiction writers, and perhaps we should judge them by literary rather than scientific standards. See Naomi Oreskes & Erik M. Conway, “The Collapse of Western Civilization: A View from the Future,” 142 Dædalus 41 (2013). I do not imply any pejorative judgment of Oreskes for advancing her apocalyptic vision of the future of Earth’s environment as a work of fiction. Her literary work is a worthy thought experiment that has the potential to lead us to accept her precautionary judgments; and at least her publication, in Dædalus, is clearly labeled science fiction.

Oreskes’ future fantasy is, not surprisingly, exactly what Oreskes, the historian of science, now predicts in terms of catastrophic environmental change. Looking back from the future, the science fiction authors attempt to explore the historical origins of the catastrophe, only to discover that it is the fault of everyone who disagreed with Naomi Oreskes in the early 21st century. Heavy blame is laid at the feet of the ancestor scientists (Oreskes’ contemporaries) who insisted upon scientific and statistical standards for inferring conclusions from observational data. Implicit in the science fiction tale is the welcome acknowledgment that science should make accurate predictions.

In Oreskes’ science fiction, these scientists of yesteryear, today’s adversaries of climate-change advocates, were “almost childlike” in their felt need to adopt “strict” standards, and in their adherence to severe tests derived from their ancestors’ religious asceticism. In other words, significance testing is a form of self-flagellation. Lest you think I exaggerate, consider the actual words of Oreskes and Conway:

“In an almost childlike attempt to demarcate their practices from those of older explanatory traditions, scientists felt it necessary to prove to themselves and the world how strict they were in their intellectual standards. Thus, they placed the burden of proof on novel claims, including those about climate. Some scientists in the early twenty-first century, for example, had recognized that hurricanes were intensifying, but they backed down from this conclusion under pressure from their scientific colleagues. Much of the argument surrounded the concept of statistical significance. Given what we now know about the dominance of nonlinear systems and the distribution of stochastic processes, the then-dominant notion of a 95 percent confidence limit is hard to fathom. Yet overwhelming evidence suggests that twentieth-century scientists believed that a claim could be accepted only if, by the standards of Fisherian statistics, the possibility that an observed event could have happened by chance was less than 1 in 20. Many phenomena whose causal mechanisms were physically, chemically, or biologically linked to warmer temperatures were dismissed as “unproven” because they did not adhere to this standard of demonstration.

Historians have long argued about why this standard was accepted, given that it had no substantive mathematical basis. We have come to understand the 95 percent confidence limit as a social convention rooted in scientists’ desire to demonstrate their disciplinary severity. Just as religious orders of prior centuries had demonstrated moral rigor through extreme practices of asceticism in dress, lodging, behavior, and food–in essence, practices of physical self-denial–so, too, did natural scientists of the twentieth century attempt to demonstrate their intellectual rigor through intellectual self-denial.14 This practice led scientists to demand an excessively stringent standard for accepting claims of any kind, even those involving imminent threats.”

142 Dædalus at 44.

The science fiction piece in Dædalus has now morphed into a short book, which is billed within as a “haunting, provocative work of science-based fiction.” Naomi Oreskes & Erik M. Conway, The Collapse of Western Civilization: A View from the Future (N.Y. 2014). Under the cover of fiction, Oreskes and Conway provide their idiosyncratic, fictional definition of statistical significance, in a “Lexicon of Archaic Terms,” at the back of the book:

statistical significance  The archaic concept that an observed phenomenon could only be accepted as true if the odds of it happening by chance were very small, typically taken to be no more than 1 in 20.”

Id. at 61-62. Of course, in writing fiction, you can make up anything you like. Caveat lector.


 

[1] See “Playing Dumb on Statistical Significance” (Jan. 4, 2015).

[2] See “Significance Levels are Made a Whipping Boy on Climate Change Evidence: Is .05 Too Strict? (Schachtman on Oreskes)” (Jan. 4, 2015).

[3] See “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

[4] See “The Will to Ummph” (Jan. 10, 2012).

Playing Dumb on Statistical Significance

January 4th, 2015

For the last decade, at least, researchers have written to document, explain, and correct a high rate of false-positive research findings in biomedical research[1]. And yet, there are some authors who complain that the traditional standard of statistical significance is too stringent. The best explanation for this paradox appears to lie in these authors’ rhetorical strategy of protecting their “scientific conclusions,” based upon weak and uncertain research findings, from criticism. The strategy includes mischaracterizing significance probability as a burden of proof, and then speciously claiming that the conventional level of significance is too high a threshold for the posterior probabilities of scientific claims. See “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

Naomi Oreskes is a professor of the history of science at Harvard University. Her writings on the history of geology are well respected; her writings on climate change tend to be more adversarial, rhetorical, and ad hominem. See, e.g., Naomi Oreskes, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (N.Y. 2010). Oreskes’ abuse of the meaning of significance probability for her own rhetorical ends is on display in today’s New York Times. Naomi Oreskes, “Playing Dumb on Climate Change,” N.Y. Times Sunday Rev. at 2 (Jan. 4, 2015).

Oreskes wants her readers to believe that those who are resisting her conclusions about climate change are hiding behind an unreasonably high burden of proof, which supposedly follows from the conventional standard of statistical significance. In presenting her argument, Oreskes consistently misrepresents the meaning of statistical significance and confidence intervals as though they stated the overall burden of proof for a scientific claim:

“Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20. But it also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim. It’s like not gambling in Las Vegas even though you had a nearly 95 percent chance of winning.”

Although the confidence interval is related to the pre-specified Type I error rate, alpha, and so a conventional alpha of 5% does lead to a coefficient of confidence of 95%, Oreskes has misstated the confidence interval to be a burden of proof consisting of a 95% posterior probability. The “relationship” is either true or not; the p-value or confidence interval provides a probability for the sample statistic, or one more extreme, on the assumption that the null hypothesis is correct. The 95% probability of confidence intervals derives from the long-term frequency that 95% of all confidence intervals, based upon samples of the same size, will contain the true parameter of interest.
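
The long-run interpretation is easy to demonstrate by simulation. The short Python sketch below, with a true mean, sample size, and number of replications chosen arbitrarily for illustration, builds a normal-approximation 95% interval from each of many fresh samples and counts how often the interval covers the true parameter; the proportion hovers around 95 percent, which is all that the “confidence” refers to.

```python
# A minimal simulation of the long-run coverage property described above.
# The true mean, standard deviation, sample size, and number of replications
# are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(seed=0)
true_mean, sigma, n, replications = 10.0, 2.0, 50, 10_000
z = 1.96  # two-sided 95% critical value from the standard normal distribution

covered = 0
for _ in range(replications):
    sample = rng.normal(true_mean, sigma, n)
    half_width = z * sample.std(ddof=1) / np.sqrt(n)
    if sample.mean() - half_width <= true_mean <= sample.mean() + half_width:
        covered += 1

print(f"Proportion of 95% intervals covering the true mean: {covered / replications:.3f}")
```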

Oreskes is an historian, but her history of statistical significance appears equally ill considered. Here is how she describes the “severe” standard of the 95% confidence interval:

“Where does this severe standard come from? The 95 percent confidence level is generally credited to the British statistician R. A. Fisher, who was interested in the problem of how to be sure an observed effect of an experiment was not just the result of chance. While there have been enormous arguments among statisticians about what a 95 percent confidence level really means, working scientists routinely use it.”

First, Oreskes, the historian, gets the history wrong. The confidence interval is due to Jerzy Neyman, not to Sir Ronald A. Fisher. Jerzy Neyman, “Outline of a theory of statistical estimation based on the classical theory of probability,” 236 Philos. Trans. Royal Soc’y Lond. Ser. A 333 (1937). Second, although statisticians have debated the meaning of the confidence interval, they have not wandered from its essential use as an estimation of the parameter (based upon the use of an unbiased, consistent sample statistic) and a measure of random error (not systematic error) about the sample statistic. Oreskes provides a fallacious history, with a false and misleading statistics tutorial.

Oreskes, however, goes on to misidentify the 95% coefficient of confidence with the legal standard known as “beyond a reasonable doubt”:

“But the 95 percent level has no actual basis in nature. It is a convention, a value judgment. The value it reflects is one that says that the worst mistake a scientist can make is to think an effect is real when it is not. This is the familiar “Type 1 error.” You can think of it as being gullible, fooling yourself, or having undue faith in your own ideas. To avoid it, scientists place the burden of proof on the person making an affirmative claim. But this means that science is prone to ‘Type 2 errors’: being too conservative and missing causes and effects that are really there.

Is a Type 1 error worse than a Type 2? It depends on your point of view, and on the risks inherent in getting the answer wrong. The fear of the Type 1 error asks us to play dumb; in effect, to start from scratch and act as if we know nothing. That makes sense when we really don’t know what’s going on, as in the early stages of a scientific investigation. It also makes sense in a court of law, where we presume innocence to protect ourselves from government tyranny and overzealous prosecutors — but there are no doubt prosecutors who would argue for a lower standard to protect society from crime.

When applied to evaluating environmental hazards, the fear of gullibility can lead us to understate threats. It places the burden of proof on the victim rather than, for example, on the manufacturer of a harmful product. The consequence is that we may fail to protect people who are really getting hurt.”

The truth of climate change opinions does not turn on sampling error, but rather on the desire to draw an inference from messy, incomplete, non-random, and inaccurate measurements, fed into models of uncertain validity. Oreskes suggests that significance probability is keeping us from acknowledging a scientific fact, but the climate change data sets are sufficiently large to rule out sampling error if that were a problem. And Oreskes’ suggestion that somehow statistical significance places a burden upon the “victim” simply assumes what she hopes to prove; namely, that there is a victim (and a perpetrator).

Oreskes’ solution seems to have a Bayesian ring to it. She urges that we should start with our a priori beliefs, intuitions, and pre-existing studies, and allow them to lower our threshold for significance probability:

“And what if we aren’t dumb? What if we have evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful; for example, that it has been shown to interfere with cell function in laboratory mice. Then it might be reasonable to accept a lower statistical threshold when examining effects in people, because you already have reason to believe that the observed effect is not just chance.

This is what the United States government argued in the case of secondhand smoke. Since bystanders inhaled the same chemicals as smokers, and those chemicals were known to be carcinogenic, it stood to reason that secondhand smoke would be carcinogenic, too. That is why the Environmental Protection Agency accepted a (slightly) lower burden of proof: 90 percent instead of 95 percent.”

Oreskes’ rhetoric misstates key aspects of scientific method. The demonstration of causality in mice, or only some perturbation of cell function in non-human animals, does not warrant lowering our standard for studies in human beings. Mice and rats are, for many purposes, poor predictors of human health effects. All medications developed for human use are tested in animals first, for safety and efficacy. A large majority of such medications, efficacious in rodents, fail to satisfy the conventional standards of significance probability in randomized clinical trials. And that standard is not lowered because the drug sponsor had previously demonstrated efficacy in mice, or some other furry rodent.

The EPA meta-analysis of passive smoking and lung cancer is a good example of how not to conduct science. The protocol for the EPA meta-analysis called for a 95% confidence interval, but the agency scientists manipulated their results by altering the pre-specified coefficient of confidence in their final report. Perhaps even more disgraceful was the selectivity of included studies for the meta-analysis, which biased the agency’s result in a way not reflected in p-values or confidence intervals. See “EPA Cherry Picking (WOE) – EPA 1992 Meta-Analysis of ETA & Lung Cancer – Part 1” (Dec. 2, 2012); “EPA Post Hoc Statistical Tests – One Tail vs Two” (Dec. 2, 2012).
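
The relationship between the shifted confidence coefficient and one-tailed testing, which the posts cited just above discuss, can be put formally (the notation and the normal approximation are mine):

```latex
% For a normal-approximation interval around an estimate \hat{\theta} with
% standard error \widehat{\mathrm{SE}}, "the interval excludes the null value"
% is equivalent to a two-sided p < \alpha. Dropping from a 95% to a 90%
% interval therefore relaxes alpha from 0.05 to 0.10; for detecting only
% elevated risks, it amounts to a one-sided test at the 0.05 level.
\text{95\% CI: } \hat{\theta} \pm 1.96\,\widehat{\mathrm{SE}}
\qquad
\text{90\% CI: } \hat{\theta} \pm 1.645\,\widehat{\mathrm{SE}}
\qquad
\Pr(Z > 1.645) = 0.05
```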

Of course, the scientists preparing for and conducting a meta-analysis on environmental tobacco smoke began with a well-justified belief that active smoking causes lung cancer. Passive smoking, however, involves very different exposure levels and raises serious issues about the human body’s defensive mechanisms against low-level exposure. Insisting on a reasonable-quality meta-analysis for passive smoking and lung cancer was not a matter of “playing dumb”; it was a recognition of our actual ignorance and uncertainty about the claim being made for low-exposure effects. The shifty confidence intervals and slippery methodology exemplify how agency scientists assume their probandum to be true, and then manipulate or adjust their methods to provide the result they had assumed all along.

Oreskes then analogizes not playing dumb on environmental tobacco smoke to not playing dumb on climate change:

“In the case of climate change, we are not dumb at all. We know that carbon dioxide is a greenhouse gas, we know that its concentration in the atmosphere has increased by about 40 percent since the industrial revolution, and we know the mechanism by which it warms the planet.

WHY don’t scientists pick the standard that is appropriate to the case at hand, instead of adhering to an absolutist one? The answer can be found in a surprising place: the history of science in relation to religion. The 95 percent confidence limit reflects a long tradition in the history of science that valorizes skepticism as an antidote to religious faith.”

I will leave the substance of the climate change issue to others, but Oreskes’ methodological misidentification of the 95% coefficient of confidence with a burden of proof is wrong. Regardless of motive, the error obscures the real debate, which is about data quality. More disturbing is that Oreskes’ error confuses significance and posterior probabilities, and distorts the meaning of burden of proof. To be sure, the article by Oreskes is labeled opinion, and Oreskes is entitled to her opinions about climate change and whatever else. To the extent that her opinions, however, are based upon obvious factual errors about statistical methodology, they are entitled to no weight at all.


 

[1] See, e.g., John P. A. Ioannidis, “How to Make More Published Research True,” 11 PLoS Medicine e1001747 (2014); John P. A. Ioannidis, “Why Most Published Research Findings Are False” 2 PLoS Medicine e124 (2005); John P. A. Ioannidis, Anna-Bettina Haidich, and Joseph Lau, “Any casualties in the clash of randomised and observational evidence?” 322 Brit. Med. J. 879 (2001).

 

Power at the FDA

December 11th, 2014

For six years, the Food and Drug Administration (FDA) has been pondering a proposed rule to abandon the current system of pregnancy warning categories for prescription drugs. Last week, the agency finally published its final rule for pregnancy and lactation labeling[1]. The rule, effective in June 2015, will require removal of the current category labeling, A, B, C, D, or X, in favor of risk statements and narrative summaries of the human, animal, and pharmacologic data for adverse maternal and embryo/fetal outcomes.

The labeling system, which will be phased out, discouraged or prohibited inclusion of actual epidemiologic data results for teratogenicity. With sponsors required to present actual data, the agency voiced a concern about whether prescribing physicians, who are the intended readers of the labeling, would interpret a statistically non-significant result as showing a lack of association:

“We note that it is difficult to be certain that a lack of findings equates to a lack of risk because the failure of a study to detect an association between a drug exposure and an adverse outcome may be related to many factors, including a true lack of an association between exposure and outcome, a study of the wrong population, failure to collect or analyze the right data endpoints, and/or inadequate power. The intent of this final rule is to require accurate descriptions of available data and facilitate the determination of whether the data demonstrate potential associations between drug exposure and an increased risk for developmental toxicities.[2]

When human epidemiologic data are available, the agency had proposed the following for inclusion in drug labeling[3]:

Narrative description of risk(s) based on human data. FDA proposed that when there are human data, the risk conclusion must be followed by a brief description of the risks of developmental abnormalities as well as other relevant risks associated with the drug. To the extent possible, this description must include the specific developmental abnormality (e.g., neural tube defects); the incidence, seriousness, reversibility, and correctability of the abnormality; and the effect on the risk of dose, duration of exposure, and gestational timing of exposure. When appropriate, the description must include the risk above the background risk attributed to drug exposure and confidence limits and power calculations to establish the statistical power of the study to identify or rule out a specified level of risk (proposed [21 C.F.R.] § 201.57(c)(9)(i)(C)(4)).”

The agency rebuffed comments that physicians would be unable to interpret confidence intervals, and would be confused by actual data and the need to interpret study results. The agency’s responses to comments on the proposed rule note that the final rule requires a description of the data, and its limitations, in approved labeling[4]:

‘‘Confidence intervals and power calculations are important for the review and interpretation of the data. As noted in the draft guidance on pregnancy and lactation labeling, which is being published concurrently with the final rule, the confidence intervals and power calculation, when available, should be part of that description of limitations.’’

The agency’s insistence upon power calculations is surprising. The proposed rule talked about requiring ‘‘confidence limits and power calculations to establish the statistical power of the study to identify or rule out a specified level of risk (proposed § 201.57(c)(9)(i)(C)(4)).” The agency’s failure to retain the qualification of power, at some specified level of risk, makes the requirement meaningless. A study with ample power to find a doubling of risk may have low power to find a 20% increase in risk. Power is dependent upon the specified alternative to the null hypothesis, as well as the level of alpha, or statistical significance.
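
The point is easy to quantify. The Python sketch below uses a standard normal-approximation power formula for comparing two proportions; the baseline risk, group size, and alpha are purely illustrative assumptions of mine. The same hypothetical cohort has about 90 percent power to detect a doubling of risk, but only about 11 percent power to detect a 20 percent increase.

```python
# A minimal sketch, with assumed inputs, of how statistical power depends on the
# alternative hypothesis that is specified. The formula is the usual
# normal-approximation power calculation for a two-proportion z-test.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p0, rr, n_per_group, alpha=0.05):
    """Approximate power to detect a relative risk `rr` against baseline risk `p0`."""
    p1 = min(rr * p0, 1.0)
    p_bar = (p0 + p1) / 2
    z_crit = norm.ppf(1 - alpha / 2)
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
    se_alt = sqrt(p0 * (1 - p0) / n_per_group + p1 * (1 - p1) / n_per_group)
    return norm.cdf((abs(p1 - p0) - z_crit * se_null) / se_alt)

# Hypothetical study: baseline risk of 3 percent, 1,000 subjects per group.
for rr in (2.0, 1.2):
    print(f"RR = {rr}: power = {power_two_proportions(p0=0.03, rr=rr, n_per_group=1000):.2f}")
```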

The final rule omits all references to power and power calculations, with or without the qualifier of a specified level of risk, from the revised sections of part 201. Indeed, the statistical concepts of power and confidence interval do not show up at all, other than in a vague requirement that the limitations of data from epidemiologic studies be described[5]:

‘‘(3) Description of human data. For human data, the labeling must describe adverse developmental outcomes, adverse reactions, and other adverse effects. To the extent applicable, the labeling must describe the types of studies or reports, number of subjects and the duration of each study, exposure information, and limitations of the data. Both positive and negative study findings must be included.”

Presumably, the proposed rule’s requirement of providing power calculations and confidence intervals survives as part of the final rule’s requirement to describe the limitations of the data. The agency, however, omitted this level of detail from the revised regulation itself.

The same day that the FDA issued the final rule, it also issued a draft guidance on pregnancy and lactation labeling, for public comment[6].

The guidance recommends what the regulation, in its final form, does not specifically require. First, the guidance recommends the omission of individual case reports from the human data section, because:

‘‘Individual case reports are rarely sufficient to characterize risk and therefore ordinarily should not be included in this section.’’[7]

And for actual controlled epidemiologic studies, the guidance suggests that:

‘‘If available, data from the comparator or control group, and data confidence intervals and power calculations should also be included.’’[8]

Statistically, this guidance is no guidance at all. Power calculations can never be presented without a specified alternative hypothesis to the null hypothesis of no increased risk of birth defects. Furthermore, virtually no study provides power calculations for data already acquired and analyzed for point estimates and confidence intervals. The guidance is unclear as to whether sponsors should attempt to calculate power from the data in a study, and anticipate what specified level of risk is of interest to the agency and to prescribing physicians. More disturbing still is the agency’s failure to explain why it recommends both confidence intervals and power calculations, in the face of many leading groups’ recommendations to abandon power calculations when confidence intervals are available for the analyzed data.[9]
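By contrast, a confidence interval computed from the observed data conveys a study’s precision directly, without any need to posit an alternative hypothesis after the fact. A minimal sketch, again in Python, using hypothetical counts (30 affected pregnancies among 1,000 exposed, 25 among 1,000 unexposed) that are not drawn from any study cited here:

```python
# Hypothetical illustration only: a risk ratio and its Wald-type 95%
# confidence interval computed on the log scale, for made-up cohort counts.
from math import exp, log, sqrt
from scipy.stats import norm

def risk_ratio_ci(a, n1, b, n0, alpha=0.05):
    """Risk ratio and approximate CI for exposed (a of n1) vs. unexposed (b of n0)."""
    rr = (a / n1) / (b / n0)
    se_log_rr = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    z = norm.ppf(1 - alpha / 2)
    return rr, exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)

rr, lo, hi = risk_ratio_ci(a=30, n1=1000, b=25, n0=1000)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# prints approximately: RR = 1.20, 95% CI 0.71 to 2.03
```

A reader can see at once that these hypothetical data are compatible with anything from a modest decrease in risk to roughly a doubling; a retrospective power calculation would add nothing to that information, which is the thrust of the STROBE and CONSORT recommendations noted above.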


[1] Dep’t of Health & Human Services, Food & Drug Admin., 21 CFR Part 201, Content and Format of Labeling for Human Prescription Drug and Biological Products; Requirements for Pregnancy and Lactation Labeling; Pregnancy, Lactation, and Reproductive Potential: Labeling for Human Prescription Drug and Biological Products—Content and Format; Draft Guidance for Industry; Availability; Final Rule and Notice, 79 Fed. Reg. 72064 (Dec. 4, 2014) [Docket No. FDA–2006–N–0515 (formerly Docket No. 2006N–0467)]

[2] Id. at 72082a.

[3] Id. at 72082c-083a.

[4] Id. at 72083c.

[5] Id. at 72102a (§ 201.57(c)(9)(i)(D)(3)).

[6] U.S. Department of Health and Human Services, Food and Drug Administration, Pregnancy, Lactation, and Reproductive Potential: Labeling for Human Prescription Drug and Biological Products — Content and Format DRAFT GUIDANCE (Dec. 2014).

[7] Id. at 12.

[8] Id.

[9] See, e.g., Vandenbroucke, et al., “Strengthening the reporting of observational studies in epidemiology (STROBE):  Explanation and elaboration,” 18 Epidemiology 805, 815 (2007) (Section 10, sample size) (“Do not bother readers with post hoc justifications for study size or retrospective power calculations. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained. It should be realized that confidence intervals reflect statistical uncertainty only, and not all uncertainty that may be present in a study (see item 20).”); Douglas Altman, et al., “The Revised CONSORT Statement for Reporting Randomized Trials:  Explanation and Elaboration,” 134 Ann. Intern. Med. 663, 670 (2001) (“There is little merit in calculating the statistical power once the results of the trial are known, the power is then appropriately indicated by confidence intervals.”).

Teaching Statistics in Law Schools

November 12th, 2014

Back in 2011, I came across a blog post about a rumored trend in law school education to train law students in quantitative methods. Sasha Romanosky, “Two Law School Rumors,” Concurring Opinions (Jan. 20, 2011). Of course, the notion that quantitative methods and statistics would become essential to a liberal and a professional education reaches back to the 19th century. Holmes famously wrote that:

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.”

Oliver Wendell Holmes, Jr., “The Path of the Law,” 10 Harvard Law Rev. 457 (1897). A few years later, H.G. Wells expanded the prerequisite from lawyering to citizenship generally:

“The great body of physical science, a great deal of the essential fact of financial science, and endless social and political problems are only accessible and only thinkable to those who have had a sound training in mathematical analysis, and the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex worldwide States that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write.”

Herbert George Wells, Mankind in the Making 204 (1903).

Certainly, there have been arguments made that statistics and quantitative analyses more generally should be part of the law school curriculum. See, e.g., Yair Listokin, “Why Statistics Should be Mandatory for Law Students” Prawfsblawg (May 22, 2006); Steven B. Dow, “There’s Madness in the Method: A Commentary on Law, Statistics, and the Nature of Legal Education,” 57 Okla. L. Rev. 579 (2004).

Judge Richard Posner has described the problem in dramatic Kierkegaardian terms of “fear and loathing.” Jackson v. Pollion, 733 F.3d 786, 790 (7th Cir. 2013). Stopping short of sickness unto death, Judge Posner catalogued the “lapse,” at the expense of others, in the words of judges and commentators:

“This lapse is worth noting because it is indicative of a widespread, and increasingly troublesome, discomfort among lawyers and judges confronted by a scientific or other technological issue. “As a general matter, lawyers and science don’t mix.” Peter Lee, “Patent Law and the Two Cultures,” 120 Yale L.J. 2, 4 (2010); see also Association for Molecular Pathology v. Myriad Genetics, Inc., ___ U.S. ___, 133 S.Ct. 2107, 2120, (2013) (Scalia, J., concurring in part and concurring in the judgment) (“I join the judgment of the Court, and all of its opinion except Part I–A and some portions of the rest of the opinion going into fine details of molecular biology. I am unable to affirm those details on my own knowledge or even my own belief”); Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 599 (1993) (Rehnquist, C.J., concurring in part and dissenting in part) (‘‘the various briefs filed in this case … deal with definitions of scientific knowledge, scientific method, scientific validity, and peer review—in short, matters far afield from the expertise of judges’’); Marconi Wireless Telegraph Co. of America v. United States, 320 U.S. 1, 60–61 (1943) (Frankfurter, J., dissenting in part) (‘‘it is an old observation that the training of Anglo–American judges ill fits them to discharge the duties cast upon them by patent legislation’’); Parke–Davis & Co. v. H.K. Mulford Co., 189 F. 95, 115 (S.D.N.Y. 1911) (Hand, J.) (‘‘I cannot stop without calling attention to the extraordinary condition of the law which makes it possible for a man without any knowledge of even the rudiments of chemistry to pass upon such questions as these … . How long we shall continue to blunder along without the aid of unpartisan and authoritative scientific assistance in the administration of justice, no one knows; but all fair persons not conventionalized by provincial legal habits of mind ought, I should think, unite to effect some such advance’’); Henry J. Friendly, Federal Jurisdiction: A General View 157 (1973) (‘‘I am unable to perceive why we should not insist on the same level of scientific understanding on the patent bench that clients demand of the patent bar, or why lack of such understanding by the judge should be deemed a precious asset’’); David L. Faigman, Legal Alchemy: The Use and Misuse of Science in Law xi (1999) (‘‘the average lawyer is not merely ignorant of science, he or she has an affirmative aversion to it’’).

Of course, ignorance of the law is no excuse for the ordinary citizen[1]. Ignorance of science and math should be no excuse for the ordinary judge or lawyer.

In the 1960s, Michael Finkelstein introduced a course on statistics and probability into the curriculum of the Columbia Law School. The class has had an unfortunate reputation for being “difficult.” One year, when Prof. Finkelstein taught the class at Yale Law School, the students petitioned him not to give a final examination. Apparently, the students were traumatized by facing problems that actually have right and wrong answers! Michael O. Finkelstein, “Teaching Statistics to Law Students,” in L. Pereira-Mendoza, L.S. Kea, T.W. Kee, & W.K. Wong, eds., I Proceedings of the Fifth International Conference on Teaching Statistics at 505 (1998).

Law school is academia’s “last clear chance” to avoid having statistically illiterate lawyers running amok. Do law schools take advantage of the opportunity? For the most part, understanding statistical concepts is not required for admission to, or graduation from, law school. Some law schools helpfully offer courses to address the prevalent gap in statistics education at the university level. I have gathered some of the available offerings from law school websites and collected them below. If you know of any omissions, please let me know.

Law School Courses

Columbia Law School: Statistics for Lawyers (Schachtman)

Emory Law:  Analytical Methods for Lawyers; Statistics for Lawyers (Joanna M. Shepherd)

Florida State College of Law:  Analytical Methods for Lawyers (Murat C. Mungan)

Fordham University School of Law:  Legal Process & Quantitative Methods

George Mason University School of Law:  Quantitative Forensics (Kobayashi); Statistics for Lawyers and Policy Analysts (Dick Ippolito)

George Washington University Law School:  Quantitative Analysis for Lawyers; The Law and Regulation of Science

Georgetown Law School:  Analytical Methods (Joshua Teitelbaum); Analyzing Empirical Research for Lawyers (Juliet Aiken); Epidemiology for Lawyers (Kraemer)

Santa Clara University, School of Law:  Analytical Methods for Lawyers (David Friedman)

Harvard Law School:  Analytical Methods for Lawyers (Kathryn Spier); Analytical Methods for Lawyers; Fundamentals of Statistical Analysis (David Cope)

Loyola Law School:  Statistics (Doug Stenstrom)

Marquette University School of Law:  Quantitative Methods

Michigan State:  Analytical Methods for Lawyers (Statistics) (Gia Barboza); Quantitative Analysis for Lawyers (Daniel Martin Katz)

New York Law School:  Statistical Literacy

New York University Law School:  Quantitative Methods in Law Seminar (Daniel Rubinfeld)

Northwestern Law School:  Quantitative Reasoning in the Law (Jonathan Koehler); Statistics & Probability (Jonathan Koehler)

Notre Dame Law School: Analytical Methods for Lawyers (M. Barrett)

Ohio Northern University Claude W. Pettit College of Law:  Analytical Methods for Lawyers

Stanford Law School:  Statistical Inference in the Law; Bayesian Statistics and Econometrics (Daniel E. Ho); Quantitative Methods – Statistical Inference (Jeff Strnad)

University of Arizona James E. Rogers College of Law:  Law, Statistics & Economics (Katherine Y. Barnes)

University of California at Berkeley:  Quantitative Methods (Kevin Quinn); Introductory Statistics (Justin McCrary)

University of California, Hastings College of Law:  Scientific Method for Lawyers (David Faigman)

University of California at Irvine:  Statistics for Lawyers

University of California at Los Angeles:  Quantitative Methods in the Law (Richard H. Sander)

University of Colorado: Quantitative Methods in the Law (Paul Ohm)

University of Connecticut School of Law:  Statistical Reasoning in the Law

University of Michigan:  Statistics for Lawyers

University of Minnesota:  Analytical Methods for Lawyers: An Introduction (Parisi)

University of Pennsylvania Law School:  Analytical Methods (David S. Abrams); Statistics for Lawyers (Jon Klick)

University of Texas at Austin:  Analytical Methods (Farnworth)

University of Washington:  Quantitative Methods In The Law (Mike Townsend)

Vanderbilt Law School: Statistical Concepts for Lawyers (Edward Cheng)

Wake Forest: Analytical Methods for Lawyers

Washington University St. Louis School of Law: Social Scientific Research for Lawyers (Andrew D. Martin)

Washington & Lee Law School: The Role of Social Science in the Law (John Keyser)

William & Mary Law School: Statistics for Lawyers

William Mitchell College of Law:  Statistics Workshop (Herbert M. Kritzer)

Yale Law School:  Probability Modeling and Statistics LAW 26403


[1] See Ignorantia juris non excusat.