TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Stanford Professor Smokes Out Tobacco Defense Expert Witnesses

July 18th, 2015

“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.  Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”

Harry Frankfurt, On Bullshit 63 (Princeton 2005)

*****************************************************

An in-press paper that attacks the ethics and motives of tobacco defense expert witnesses is making some ripples in the science and legal popular media. See, e.g., Tracie White, “Physicians testified for tobacco companies against plaintiffs with head, neck cancers, study finds,” ScienceDaily (July 17, 2015); Joyce E. Cutler, “Motives of Tobacco’s Experts Questioned by Study,” BNA Snapshot - Product Safety & Liability Reporter (July 20, 2015) [cited as Cutler]. The paper itself is available in an in-press version. Robert K. Jackler, “Testimony by Otolaryngologists in Defense of Tobacco Companies 2009-2014,” The Laryngoscope (July 17, 2015) (in press) (doi: 10.1002/lary.25432). The paper is worth close study.

Dr. Robert Jackler, the Edward C. and Amy H. Sewall Professor in Otorhinolaryngology at the Stanford Medical School, has previously published his views of the tobacco industry’s role in obscuring medical research on the causal role of tobacco in human cancer.[1] In his most recent contribution, Jackler has reviewed tobacco companies’ expert witnesses’ testimony to assess its consistency with what Jackler takes to be well-warranted scientific conclusions about known human cancer risks. From his one-sided review (of only defense witnesses) of a limited number of cases, Jackler concludes that tobacco defense expert witnesses systematically under-estimated the magnitude of tobacco-related risks, and consistently inflated non-tobacco risks, in terms of their magnitude, prevalence, and scientific warrant of causality. Although one-sided in his review, Jackler does cite to transcripts that are available on-line from the University of California San Francisco Legacy Tobacco Documents Library. Disinterested observers can explore the allegations and their merits further by reading the primary documents.

The cases reviewed by Jackler included nine cases of upper aerodigestive-tract cancers (mostly larynx and esophagus), in which six defense otolaryngologists testified that tobacco played no role, or that there was no scientific basis for attributing individual cancers to tobacco. Jackler’s review interestingly identifies a tobacco defense strategy of “leveling” known causes with suspected causes and risk factors, to create a laundry list of differentials in a differential etiology for specific causation. The risk factors that defense expert witnesses postulated included human papillomavirus (HPV), alcohol, mouthwash, heredity, asbestos, diesel fumes, gasoline fumes, salted fish, and urban living.

This epistemic dilution is indeed objectionable, but of course, it is the same epistemic bootstrapping that plaintiffs’ expert witnesses routinely use to place questionable, weakly supported differentials in their specific causation assessments. Consider the recent Tennessee appellate decision in Russell v. Illinois Central RR, in which the court rejected a defense challenge to the dodgy testimony of plaintiffs’ expert witness, Dr. Arthur Frank, that plaintiff’s throat cancer was caused by an etiological soup of asbestos, diesel fumes, and environmental tobacco smoke. Russell v. Illinois Central RR, No. W2013-02453-COA-R3-CV, 2015 Tenn. App. LEXIS 520 (Tenn. App. June 30, 2015) (affirming judgment for plaintiff in excess of $3 million).

Jackler notes that tobacco witnesses frequently raised the specter of “etiological soup.” Jackler at 3. For instance, some tobacco defense expert witnesses raised even possible asbestos exposure as a cause of laryngeal cancer, but Jackler is appropriately skeptical: “Although some studies suggest an additive effect with smoking, a meta-analysis concluded that the weight of the evidence does not support a causal association.” Jackler at 6 (citing H. Griffiths & N.C. Molony, “Does asbestos cause laryngeal cancer?” 28 Clin. Otolaryngol. & Allied Sci. 177 (2003); and Kevin Browne & J. Bernard L. Gee, “Asbestos exposure and laryngeal cancer,” 44 Ann. Occup. Hyg. 239 (2000)). Jackler might be surprised by the stridency and overreaching with which a large segment of the occupational medicine community disagrees with his assessment.

Tobacco defense expert witnesses frequently raised diesel fume exposure as an alternative cause, but Jackler finds this testimony disingenuous. “The relationship of diesel fume exposure to laryngeal cancer has been discounted.” Jackler at 6 (citing J.E. Muscat & Ernst L. Wynder, “Diesel exhaust, diesel fumes, and laryngeal cancer,” 112 Otolaryngol. Head Neck Surg. 437 (1995)). So where is Jackler’s outrage against plaintiffs’ expert witness excesses, and the judicial acquiescence in accepting testimony such as that given by Dr. Arthur Frank in the Russell case?

Some of the risks, risk factors, and alternative causes invoked by the tobacco defense expert witnesses, as related by Dr. Jackler, did appear fantastical or false. Jackler, however, does not explore how the plaintiffs’ counsel addressed such over-reaching, or how the court responded to objections and challenges to the defense expert witness testimony. Plaintiffs’ counsel may have strategically allowed tobacco defense expert witnesses to overreach, in order to have a “harder” target on cross-examination.

Improper to Criticize Expert Witnesses

A spokesperson for Philip Morris, Bill Phelps, responded to Jackler’s critique by telling the BNA reporter that:

“We believe that out-of-court attempts to criticize experts for testifying on behalf of defendants in these cases have no place in our judicial system.”

Cutler. Phelps’s criticism is ambiguous: it may suggest that any criticism of expert witnesses outside court is improper, or it may suggest only that it is improper to criticize expert witnesses simply for testifying for the tobacco companies.

If Phelps’s point was the latter, then it seems unexceptionable. Surely, the companies have a right to defend themselves, as long as they sponsor expert witness testimony in a responsible way. And certainly, any number of anti-tobacco scientists and physicians have resorted to bullying and name calling in efforts to chill scientists from speaking or testifying for tobacco companies, not by criticizing the substantive merits of testimony, but by asserting a contagious moral leprosy from merely having associated with tobacco companies.

Jackler comes very close to saying that physicians and scientists should not testify on behalf of tobacco companies. He casts aspersions on all tobacco defense expert witnesses by quoting others who state that “[t]he tobacco industry pays generously and gets its money worth.” Jackler at 7 (quoting L. Maggi, “Bearing Witness for Tobacco,” 21 J. Pub. Health Pol. 296 (2000)). Jackler notes that “[u]nethical experts bias their testimony to bolster the position of the side who hired them.” Jackler at 7. But Jackler’s review is itself biased by his failure to examine the contentions, and degrees of warrant, of plaintiffs’ expert witnesses on any issue. Jackler does not articulate a view in his own voice, but states that “[s]ome medical ethicists question whether it could ever be ethical for a physician to testify on behalf of the tobacco industry.” Jackler at 7. Jackler’s implication begs the question whether plaintiffs, and their expert witnesses, are correct in all their medical claims, in every case.

The former point, that any out-of-court criticism of courtroom testimony is improper, is wrong. Trenchant criticism of expert witness testimony is very much needed, and it is hard to see how it would not be helpful to public debate and to improvement of the judicial process. (Again, assuming that the critical discussions are fair and evidence-based.)

In an interview with the BNA, Jackler speculates:

“I think that these [tobacco defense] physicians testifying did so with the belief that their behavior would not become public. And this is an area where shedding light and creating dialogue will help to encourage people to behave ethically when giving their testimony.”

Cutler (quoting Jackler). If expert witnesses on either side think that they can escape critical scrutiny by advocating fabulous fictions in the courtroom, then we would all be better off if we could disabuse them of their notions. In his article, Jackler notes that his professional organization, the American Academy of Otolaryngology – Head and Neck Surgery (AAO–HNS) has, since 2003, had a policy that states:

“Physician expert witnesses should not adopt a position as an advocate or partisan in the legal proceedings”; and that

“the physician expert witness should be aware that transcripts of their deposition and courtroom testimony are public records, subject to independent peer review.”

Jackler at 7 (citing American Academy of Otolaryngology–Head and Neck Surgery official policy on expert witnesses (revised October 2012)). So the tobacco defense expert witnesses should certainly have been prepared for the post-trial “peer review” that Jackler provides. What is curious about Jackler’s article, and his obvious sense of outrage, is that he does not state whether he has filed an ethics complaint with the American Academy or with any other reviewing organization.

Philip Morris Claims that Jackler Has Undisclosed Conflicts of Interests

The Philip Morris spokesperson also noted that “[Dr. Jackler had] failed to disclose that he has previously worked with counsel for plaintiffs in the Engle cases.” Cutler. Jackler responded to the BNA, stating, “[c]ategorically, I have never testified in tobacco litigation. Specifically, I have never worked for lawyers on either side in any capacity in Engle cases.” What about other cases, outside Florida? Jackler’s in-press article states that it was “[s]upported by Stanford Research into the Impact of Tobacco Advertising, Stanford University School of Medicine, Stanford, CA. The authors have no other funding, financial relationships, or conflicts of interest to disclose.”

One Defense Expert Witness Responds

One defense expert witness, otolaryngologist Dr. Michael Bertino, responded to Cutler in an interview. Dr. Bertino noted that Jackler was out of touch with the risk-factor epidemiology of laryngeal cancer, and that strong evidence had emerged that tobacco is not the only substantial risk factor for this cancer. Many older studies, for instance, did not look at the role of human papilloma virus (HPV), which has been identified as a prevalent cause of oral and esophageal cancers. Cutler (citing oral interview with Dr. Bertino). Jackler’s article acknowledges that HPV has been identified in substantial percentages of oropharyngeal cancers, but disputes that the virus is a substantial independent cause of these cancers. I will leave it to others to determine whether Jackler’s review of the HPV studies is fair and balanced, and to what extent it “falsifies” Dr. Bertino’s testimony.


[1] Robert K. Jackler & H. Samji, “The price paid: Manipulation of otolaryngologists by the tobacco industry to obfuscate the emerging truth that smoking causes cancer,” 122 The Laryngoscope 75 (2011).

Discovery of Retained, Testifying Statistician Expert Witnesses (Part 2)

July 1st, 2015

Discovery Beyond the Report and the Deposition

The lesson of the cases interpreting Rule 26 is that counsel cannot count exclusively upon the report and automatic disclosure requirements to obtain the materials necessary or helpful for cross-examination of statisticians who have created their own analyses. Sometimes just asking nicely suffices[1]. Other avenues of discovery are available, however, for reluctant disclosers. In particular, Rule 26(b) authorizes discovery substantially broader than what is required for inclusion in an expert witness’s report.

Occasionally, counsel cite caselaw that has been superseded by the steady expansion of Rule 26[2]. The 1993 amendments made clear, however, that Rule 26 sets out mandatory minimum requirements that do not define or exhaust the available discovery tools to obtain information from expert witnesses[3]. Some courts continue to insist that a party make a showing of necessity to go beyond the minimal requirements of Rule 26[4], although the better reasoned cases take a more expansive view of the proper scope of expert witness discovery[5].

Although the federal rules may not require the expert witness report to include, or to attach, all “working notes or recordings,” or calculations, alternative analyses, and data output files, these materials may be the subject of proper document requests to the adverse party or perhaps subpoenas to the expert witness.  The Advisory Committee Notes explain that the various techniques of discovery kick in by virtue of Rule 26(b), where automatic disclosure and report requirements of Rule 26(a) leave off:

“Rules 26(b)(4)(B) and (C) do not impede discovery about the opinions to be offered by the expert or the development, foundation, or basis of those opinions. For example, the expert’s testing of material involved in litigation, and notes of any such testing, would not be exempted from discovery by this rule. Similarly, inquiry about communications the expert had with anyone other than the party’s counsel about the opinions expressed is unaffected by the rule. Counsel are also free to question expert witnesses about alternative analyses, testing methods, or approaches to the issues on which they are testifying, whether or not the expert considered them in forming the opinions expressed. These discovery changes therefore do not affect the gatekeeping functions called for by Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993), and related cases.”[6]

The court in Ladd Furniture v. Ernst & Young explained the structure of Rule 26 with respect to underlying documents, calculations, and data[7].  In particular, the requirements of the Rule 26(a) report do not create a limitation on Rule 26(b) discovery:

“As a basis for withholding the above information, Ladd argues that Ernst & Young is not entitled to discover any expert witness information which is not specifically mentioned in Rule 26(a)(2)(B). However, as explained below, Ladd’s position on this point is not supported by the text of Rule 26 or by the Advisory Committee’s commentary to Rule 26(a). In the text, Rule 26(a)(2)(B) provides for the mandatory disclosure of certain expert witness information, even without a request from the opposing party. However, there is no indication on the face of the rule to suggest that a party is absolutely prohibited from seeking any additional information about an opponent’s expert witnesses. In fact, Rule 26(b)(1) describes the scope of allowable discovery as follows: ‛Parties may obtain discovery regarding any matter, not privileged, which is relevant to the subject matter involved in the pending action… .’ Fed. R. Civ. P. 26(b)(1).”[8]

Expert witness discovery for materials that go beyond what is required in an adequate Rule 26(a) report can have serious consequences for the expert witness who fails to produce the requested materials. Opinion exclusion is an appropriate remedy against an expert witness who failed to keep data samples and statistical packages because the adversary party “could not attempt to validate [the expert witness’s] methods even if [the witness] could specifically say what he considered.”[9]

No doubt expert witnesses and parties will attempt to resist the call for working notes and underlying materials on the theory that the requested documents and materials are “draft reports,” which are now protected by the revisions to Rule 26.  For the most part, these evasions have been rejected[10].  In one case, for instance, in which an expert witness’s assistants compiled and summarized information from individual case files, the court rejected the characterization of the information as part of a “draft report,” and ordered their production.[11]

Choice of Discovery Method Beyond Rule 26 Automatic Disclosure

In addition to the mandatory expert report and disclosure of data and facts, and the optional deposition by oral examination, parties have other avenues to pursue discovery of information, facts, and data, from expert witnesses. Under Rule 33(a)(2), parties may propound contention interrogatories that address expert witnesses’ opinions and conclusions. As for methods of discovery beyond what is discussed specifically in Rule 26, courts are confronted with a threshold question whether Rule 34 requests to produce, Rule 30(b)(2) depositions by oral examination, or Rule 45 subpoenas are the appropriate discovery method for obtaining documents from a retained, testifying expert witness. In the view of some courts, the resolution to this threshold question turns on whether expert witnesses are within the control of parties such that parties must respond to discovery for information, documents, and things within the custody, possession, and control of their expert witnesses.

Subpoenas Are Improper

Some federal district courts view Rule 45 subpoenas as inappropriate discovery tools for parties[12] and persons under the control of parties. In Alper v. United States[13], the district court refused to enforce plaintiff’s Rule 45 subpoena that sought documents from defendant’s expert witness. Although acknowledging that Rule 45’s language was unclear, the Alper court insisted that since a party proffers an expert witness, that witness should be considered under the party’s control[14]. And because the expert witness was “within defendant’s control,” the court noted that Rule 34 rather than Rule 45 governed the requested discovery[15]. Alper seems to be a minority view, but its approach is attractive in streamlining discovery, eliminating subpoena service issues for expert witnesses who may live outside the district, and forcing the sponsoring party to respond and to obtain compliance with its retained expert witness.

Subpoenas Are Proper

The “control” rationale of the Alper case is questionable. Rule 45 contains no statement of limitation to non-parties[16]. Parties “proffer” fact witnesses, but their proffers do not restrict the availability of Rule 45 subpoenas. More important, expert witnesses are not truly under the control of the retaining parties. Expert witnesses have independent duties to the court, and under their own professional standards, to give their own independent opinions[17].

Many courts allow discovery of expert witness documents and information by Rule 45 subpoena, on either the theory that Rule 45 subpoenas are available against both parties and non-parties, or the theory that expert witnesses are sufficiently independent of the sponsoring party that they are non-parties clearly subject to Rule 45. If expert witnesses are not parties, and Rule 26’s confidentiality provisions do not constrain the available discovery tools for expert witnesses, then subpoenas would appear to be a proper tool to discover documents in the witnesses’ possession, custody, and control[18]. When used as a discovery tool in this way, subpoenas are subject to discovery deadlines[19].

Particular Concerns for Discovery of Statistician Expert Witnesses

Statistician expert witnesses require additional care and discovery investigation in complex products liability cases[20]. The caselaw sometimes takes a crabbed approach that refuses to provide parties access to their adversaries’ statistical analyses, calculations, data input and output files, and graphical files.

Statistician expert testimony will usually involve complex statistical evidence, models, assumptions, and calculations. This complexity makes it difficult to discern which tests the statistician chose from among those available, and whether the statistician exploited the opportunity to run multiple tests serially, with varying assumptions, until a propitious result was obtained. Given these typical circumstances, statistical expert witness testimony will almost always require full disclosure to allow the adversary a fair opportunity to cross-examine at trial, or to challenge the validity of the proffered analyses under Rules 702 and 703[21].
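
The multiple-testing concern described above can be made concrete. The short simulation below is purely illustrative (the sample sizes, the 0.05 significance threshold, and the choice of twenty variant analyses are my assumptions, not drawn from any case record): even when there is no true effect at all, running many analyses and keeping the “best” one yields a nominally significant result most of the time.

```python
import random

random.seed(42)

def fake_study(n=50):
    """Simulate one 'study' in which the true effect is zero:
    both groups are drawn from the same unit-normal distribution."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(group_a) / n - sum(group_b) / n
    se = (2 / n) ** 0.5  # standard error of the difference (known unit variance)
    return abs(mean_diff / se)

TRIALS = 2000

# One pre-specified test: a "significant" result arises about 5% of the time
single = sum(fake_study() > 1.96 for _ in range(TRIALS)) / TRIALS

# Twenty serial analyses with varying "assumptions," reporting only the best:
# the chance of at least one nominally significant finding balloons toward
# 1 - 0.95**20, roughly 64%
best_of_20 = sum(
    any(fake_study() > 1.96 for _ in range(20)) for _ in range(TRIALS)
) / TRIALS

print(f"false-positive rate, one pre-specified test: {single:.2f}")   # near 0.05
print(f"false-positive rate, best of 20 analyses:    {best_of_20:.2f}")  # near 0.64
```

The discarded nineteen runs are exactly the material that never appears in the final report, which is why discovery of all analyses generated, and not merely the reported one, matters.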

Statisticians create and use a variety of materials that are clearly relevant to their opinions:

  • programs and programming code run to generate all specified analyses on specified data,
  • statistical packages,
  • all data available,
  • all data “cleaning” or data selection processes,
  • selection of variables from those available,
  • data frames that show what data were included (and excluded) in the analyses,
  • data input files,
  • all specified tests run on all data,
  • all data and analysis output files that show all analyses generated,
  • all statistical test diagnostics and tests of underlying assumptions, and
  • graphical output files.
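
A minimal sketch shows why the data “cleaning” and selection items on this list matter (the data values and the exclusion threshold below are invented solely for illustration): the same raw data can yield materially different summaries depending on an undisclosed exclusion choice, a choice that cannot be detected from the report’s bottom-line figure alone.

```python
# Hypothetical raw measurements (values invented for illustration)
raw = [2.1, 2.4, 1.9, 2.2, 9.8, 2.0, 2.3, 10.2]

def mean(xs):
    return sum(xs) / len(xs)

# The analysis as it might appear in a report: one summary number,
# computed after an undisclosed "outlier" exclusion rule
reported = mean([x for x in raw if x < 5.0])

# The same data with no exclusion rule tell a different story
unfiltered = mean(raw)

print(f"reported mean (after undisclosed cleaning): {reported:.2f}")   # 2.15
print(f"mean of all data:                           {unfiltered:.2f}")  # 4.11
```

Without the data-selection code and the data frames showing what was included and excluded, a reviewing statistician sees only the reported figure and has no way to reconstruct, or to challenge, the exclusion decision.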

The statistician may have made any number of decisions or judgments in selecting which statistical test results to incorporate into his or her final report. The report will in all likelihood not include important materials that would allow another statistician fully to understand, test, replicate, and criticize the more conclusory analyses and statements in the report. In addition, lurking in the witness’s files, or in the electronic “trash bin,” may be alternative analyses that were run and discarded, and not included in the final report. Why and how those alternative analyses were run but discarded may raise important credibility or validity questions, as well as provide insight into the statistician’s analytical process, all important considerations in preparing for cross-examination and rebuttal. The lesson of Rule 26, and the caselaw interpreting its provisions, is that lawyers must make specific requests for the materials described above. Only with these materials firmly in hand can a deposition fully explore the results obtained, the methods used, the assumptions made, the assumptions violated, the alternative methods rejected, the data used, the data available, the data not used, the data-dredging and manipulation potential, analytical problems, and the potential failure to reconcile inconsistent results. Waiting for trial, or even for the deposition, may well be too late[22].

The warrant for examining the integrity of data relied upon by expert witnesses appears to be securely embedded in the Federal Rules of Civil Procedure, and in the Federal Rules of Evidence. Evidence Rule 703 has particular relevance to statistical or epidemiologic testimony. Lawyers facing studies of dubious quality may need to press for discovery of underlying data and materials. In the Viagra vision loss multi-district litigation (MDL), the defendant sought and obtained discovery of underlying data from plaintiffs’ expert witness’s epidemiologic study of vision loss among patients using Viagra and similar medications[23]. Although the Viagra MDL court had struggled with inferential statistics in its first approach to defendant’s Rule 702 motion, the court understood the challenge based upon lack of data integrity, and reconsidered and granted defendant’s motion to exclude the challenged expert witness[24].

The lawyering implications for discovery of statistician expert witnesses are important. Statistical evidence requires counsel’s special scrutiny to ensure compliance with the disclosure requirements of Federal Rule of Civil Procedure 26. Given the restrictive reading of Rule 26 by some courts, counsel will need to anticipate the use of other discovery tools. Lawyers should request, by Rule 34 or Rule 45, all computer runs, programming routines, and outputs, and they should zealously pursue any witness’s failure to maintain and produce data. Given the uncertainty in some districts whether expert witnesses are subject to subpoenas, counsel may consider both propounding Rule 34 requests and serving Rule 45 subpoenas.

Lawyers in data-intensive cases should give early consideration to appropriate discovery plans that contemplate data production in advance of depositions, to allow full exploration of analyses at deposition[25]. Lawyers should also be alert to the potential need to show particularized need for the requested data and analyses. In instructing expert witnesses on their preparation of their reports, lawyers should consider directing their expert witnesses to express whether they need further access to the adversary’s expert witnesses’ underlying data and materials to fully evaluate the proffered opinions. Discovery of statisticians and their data and their analyses requires careful planning, as well as patient efforts to educate the court about the need for full exploration of all data and all analyses conducted, whether or not incorporated into the Rule 26 report.


[1] Randall v. Rolls-Royce Corp., 2010 U.S. Dist. LEXIS 23421, *4-5 (S.D. Ind. March 12, 2010) (“Dr. Harnett who began his evaluation of the analysis contained in the report … soon concluded that he needed the underlying studies and statistical programs created or used by Dr. Drogin. In response to the Defendants’ request for such materials, Plaintiffs produced four discs containing more than 1,000 separate electronic files”).

[2] Marsh v. Jackson, 141 F.R.D. 431, 432–33 (W.D. Va. 1992) (holding that Rule 45 could not be used to obtain an opposing expert’s files because Rule 26(b)(4), as a policy matter, limits expert discovery to depositions and interrogatories).

[3] See Advisory Comm. Notes for 1993 Amendments, to Fed. R. Civ. P. 26(a) (“The enumeration in Rule 26(a) of items to be disclosed does not prevent a court from requiring by order or local rule that the parties disclose additional information without a discovery request. Nor are parties precluded from using traditional discovery methods to obtain further information regarding these matters, … .”); United States v. Bazaarvoice, Inc., C 13-00133 WHO (LB), 2013 WL 3784240 (N.D. Cal. July 18, 2013) (“Rule 26(a)(2)(B) . . . does not preclude parties from obtaining further information through ordinary discovery tools”) (internal citations omitted).

[4] Morriss v. BNSF Ry. Co., No. 8:13CV24, 2014 WL 128393, at *4–6, 2014 U.S. Dist. LEXIS 3757, at *17 (D. Neb. Jan. 13, 2014) (holding that “absent some threshold showing of ‘compelling reason,’ the broad discovery provisions of Rules 34 and 45 cannot be used to undermine the specific expert witness discovery rules in Rule 26(a)(2)”).

[5] Modjeska v. United Parcel Service Inc., No. 12–C–1020, 2014 WL 2807531 (E.D. Wis. June 19, 2014) (holding that Rule 26(a)(2)(B) governs only disclosure in expert witness reports and does not limit or preclude further discovery using ordinary discovery such as requests to produce); Expeditors Int’l of Wash., Inc. v. Vastera, Inc., No. 04 C 0321, 2004 WL 406999, at *3 (N.D. Ill. Feb.26, 2004). See also Wright & Miller, 9A Federal Practice & Procedure Civ. § 2452 (3d ed. 2013).

[6] Adv. Comm. Note for Rule 26(b)(4)(B) (2010). See, e.g., Ladd Furniture v. Ernst & Young, 1998 U.S. Dist. LEXIS 17345, at *34-37 (M.D.N.C. Aug. 27, 1998).

[7] Id.

[8] Id. at *36-37.

[9] Innis Arden Golf Club v. Pitney Bowes, Inc., 629 F. Supp. 2d 175, 190 (D. Conn. 2009) (excluding expert opinion because his samples and data packages no longer existed and thus “[d]efendants could not attempt to validate [his] methods even if he could specifically say what he considered”). See also Jung v. Neschis, No. 01–Civ. 6993(RMB)(THK), 2007 WL 5256966, at *8–15 (S.D.N.Y. Oct. 23, 2007) (finding that a party’s failure to produce tape recordings that its medical expert witness relied upon for his opinion was “disturbing”; precluding expert witness’s testimony).

[10] See, e.g., Dongguk Univ. v. Yale Univ., No. 3:08-CV-00441, 2011 WL 1935865, at *1 (D. Conn. May 19, 2011) (holding that “an expert’s handwritten notes are not protected from disclosure because they are neither drafts of an expert report nor communications between the party’s attorney and the expert witness”).

[11] D.G. ex rel. G. v. Henry, No. 08-CV-74-GKF-FHM, 2011 WL 1344200, at *1 (N.D. Okla. Apr. 8, 2011) (ordering production of the assistants’ notes because the expert witness had relied upon them in forming his opinion, which brought them within the scope of “facts or data” under the rule).

[12] Mortgage Info. Servs, Inc. v. Kitchens, 210 F.R.D. 562, 564-68 (W.D.N.C. 2002) (holding that nothing in Rule 45 precludes its use on a party); see also Mezu v. Morgan State Univ., 269 F.R.D. 565, 581 (D. Md. 2010) (“courts are divided as to whether Rule 45 subpoenas should be served on parties”); Peyton v. Burdick, 2008 U.S. Dist. LEXIS 106910 (E.D. Cal. 2008) (discussing the split among courts on the issue).

[13] 190 F.R.D. 281 (D. Mass. 2000).

[14] Id. at 283.

[15] Id. See Ambrose v. Southworth Products Corp., No. CIV.A. 95–0048–H, 1997 WL 470359, at *1 (W.D. Va. June 24, 1997) (holding a “naked” subpoena duces tecum directed to a non-party expert retained by a party is not within the ambit of a Rule 45 document production subpoena, and not permitted by Fed. R. Civ. P. 26(b)(4)); see also Hartford Fire Ins. v. Pure Air on the Lake Ltd., 154 F.R.D. 202, 208 (N.D. Ind. 1993) (holding a party cannot use Rule 45 to circumvent Rule 26(b)(4) as a method to obtain an expert witness’s files); Marsh v. Jackson, 141 F.R.D. 431, 432 (W.D. Va. 1992) (noting that subpoena for production of documents directed to non-party expert retained by a party is not within ambit of Rule 45(c)(3)(B)(ii)).

[16] See James Wm. Moore, 9 Moore’s Federal Practice § 45.03[1] (noting that “[s]ubpoenas under Rule 45 may be issued to parties or non-parties”).

[17] See Glendale Fed. Bank, FSB v. United States, 39 Fed. Cl. 422, 424 (Fed. Cl. 1997) (“The expert witness, testifying under oath, is expected to give his own honest, independent opinion… He is not the sponsoring party’s agent at any time merely because he is retained as its expert witness”). See also National Justice Compania Naviera S.A. v. Prudential Assurance Co. Ltd. (“The Ikarian Reefer”), [1993] 2 Lloyd’s Rep. 68 at 81-82 (Q.B.D.), rev’d on other grounds [1995] 1 Lloyd’s Rep. 455 at 496 (C.A.) (embracing the enumeration of duties, including a duty to “provide independent assistance to the Court by way of objective unbiased opinion in relation to matters within his expertise,” and a duty to eschew “the role of an advocate”).

[18] Western Res., Inc. v. Union Pac. RR, No. 00-2043-CM, 2002 WL 1822428, at *3 (D. Kan. July 23, 2002) (ordering expert witness to produce prior testimony under Rule 45); All W. Supply Co. v. Hill’s Pet Prods. Div., Colgate-Palmolive Co., 152 F.R.D. 634, 639 (D. Kan. 1993) (“With regard to nonparties such as plaintiff’s expert witness, a request for documents may be made by subpoena duces tecum pursuant to Rule 45”); Smith v. Transducer Technology, Inc., No. Civ. 1995/28, 2000 WL 1717332, 2 (D.V.I. Nov. 16, 2000) (holding that Rule 30(b)(5) deposition notice, served upon opposing party, is not an appropriate discovery tool to compel expert witness to produce documents from at his deposition) (noting that a “Rule 45 subpoena duces tecum in conjunction with a properly noticed deposition may do so (subject however to any Rule 26 limitations)”); Thomas v. Marina Assocs., 202 F.R.D. 433, 434 (E.D. Pa. 2001) (denying motion to quash subpoenas issued to party’s expert witness); Quaile v. Carol Cable Co., Civ. A. No. 90-7415, 1992 WL 277981, at *2 (E.D. Pa. Oct. 5, 1992) (granting motion to compel discovery concerning expert witness’s opinions pursuant to a Rule 45 subpoena); Lawrence E. Jaffe Pension Plan v. Household Int’l, Inc., No. 02 C 5893, 2008 WL 687220, at *2 (N.D. Ill Mar. 10, 2008) (“It is clear . . . that a subpoena duces tecum . . . is an appropriate discovery mechanism against . . . a party’s expert witness”) (internal citation omitted); Expeditors Internat’l of Wash., Inc. v. Vastera, Inc., No. 04 C 0321, 2004 WL 406999, at *2-3 (N.D. Ill. Feb. 26, 2004) (holding Rule 45, not Rule 34, governs discovery from retained experts) (“Subpoena duces tecum is . . . an appropriate discovery mechanism against nonparties such as a party’s expert witness”); Reit v. Post Prop., Inc., No. 09 Civ. 5455(RMB)(KNF), 2010 WL 4537044, at *9 (S.D.N.Y. Nov. 4, 2010) (“Subpoena duces tecum … is an appropriate discovery mechanism against a nonparty expert”).

[19] See, e.g., Williamson v. Horizon Lines LLC, 248 F.R.D. 79, 83 (D. Me. 2008) (“[C]ontrary to Horizon Lines’ contention, there is a relationship between Rule 26 and Rule 45 and parties should not be allowed to employ a subpoena after a discovery deadline to obtain materials from third parties that could have been produced before discovery.”).

[20] Bartley v. Isuzu Motors Ltd., 151 F.R.D. 659, 660-61 (D. Colo. 1993) (ordering party to create and preserve “the input and output data for each variable in the program, for each iteration, or each simulation,” as well as a record of all simulations performed, even those that do not conform to the plaintiff’s claims and theories in the case).

[21] See City of Cleveland v. Cleveland Elec. Illuminating Co., 538 F. Supp. 1257 (N.D. Ohio 1980) (“Certainly, where, as here, the expert reports are predicated upon complex data, calculations and computer simulations which are neither discernible nor deducible from the written reports themselves, disclosure thereof is essential to the facilitation of effective and efficient examination of these experts at trial.”); Shu-Tao Lin v. McDonnell Douglas Corp., 574 F. Supp. 1407, 1412-13 (S.D.N.Y. 1983) (granting new trial, and holding that expert witness’s failure to disclose the “nature of [the plaintiff’s testifying expert’s] computer program or the underlying data, the inputs and outputs employed in the program” deprived adversary of an “adequate basis on which to cross-examine plaintiff’s experts”), rev’d on other grounds, 742 F.2d 45 (2d Cir. 1984).

[22] Manual for Complex Litigation § 11.482, at 99 (4th ed. 2004) (“Early and full disclosure of expert evidence can help define and narrow issues. Although experts often seem hopelessly at odds, revealing the assumptions and underlying data on which they have relied in reaching their opinions often makes the bases for their differences clearer and enables substantial simplification of the issues. In addition, disclosure can facilitate rulings well in advance of trial on objections to the qualifications of an expert, the relevance and reliability of opinions to be offered, and the reasonableness of reliance on particular data.”). See also ABA Section of Antitrust Law, Econometrics: Legal, Practical, and Technical Issues at 75-76 (2005) (advising of the necessity of obtaining all data, all analyses, and all supporting materials, in advance of deposition, to ensure efficient and effective discovery).

[23] In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071, 1090 (D. Minn. 2008).

[24] In re Viagra Prods. Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009).

[25] See Fed. R. Civ. P. 16(b), 26(f).

Discovery of Retained, Testifying Statistician Expert Witnesses (Part 1)

June 30th, 2015

At times, the judiciary’s resistance to delving into the factual underpinnings of expert witness opinions is extraordinary. In one case, Perma Research & Development v. Singer Co.,[1] the Second Circuit affirmed a judgment for a plaintiff in a breach of contract action, based in large part upon expert witness testimony that presented the results of a computer simulation. The trial court had promised to permit inquiry into the plaintiff’s computer expert witness’s sources of data, programmed mathematical formulae, and computer programs. Yet when the defendant asked the plaintiff’s expert witness to disclose his underlying data and algorithms, the district judge sustained the witness’s refusal on the grounds that the requested materials were his “private work product” and “proprietary information.”[2] Despite the trial court’s failure to articulate any legally recognized basis for permitting the expert witness to stonewall in this fashion, a panel of the Circuit, in an opinion by superannuated Justice Tom Clark, affirmed, reasoning that the defendant “had not shown that it did not have an adequate basis on which to cross-examine plaintiff’s experts.” Judge Van Graafeiland dissented, indelicately pointing out that the majority had charged the defendant with failing to show that it had been deprived of a fair opportunity to cross-examine plaintiff’s expert witnesses, while depriving the defendant of access to the secret underlying evidence and materials needed to demonstrate what could have been done on cross-examination[3]. The dissent traced the trial court’s error to its misconception that a computer is just a giant calculator, and pointed out that the majority had contravened Circuit precedent[4] and evolving standards[5] for handling underlying data analyzed or otherwise incorporated into computer models and simulations.

Although Perma Research has largely been ignored, has fallen into disrepute, and has been superseded by statutory amendments[6], its retrograde approach continues to find occasional expression in reported decisions. The refinement of Federal Rule of Evidence 702 to require sound support for expert witnesses’ opinions has opened the flow of discovery of underlying facts and data considered by expert witnesses before generating their reports. The most recent edition of the Federal Judicial Center’s Manual for Complex Litigation treats computer-generated evidence and expert witnesses’ underlying data as both subject to pre-trial discovery as necessary to provide for full and fair litigation of the issues in the case[7].

The discovery of expert witnesses who have conducted statistical analyses poses difficult problems for lawyers. Unlike some other expert witnesses, who passively review data and arrive at an opinion that synthesizes published research, statisticians actually create evidence with new arrangements and analyses of the data in the case. In this respect, statisticians are like materials scientists who may test and record experimental observations on a product or its constituents. Inquiring minds will want to know whether the statistical analyses in the witness’s report were the results of pre-planned analysis protocols, or whether they were the second, third, or fifteenth alternative analysis. Earlier analyses conducted but not produced may reveal what the expert witness believed would have been the preferred analysis, if only the data had cooperated more fully. Statistical analyses conducted by expert witnesses provide plenty of opportunity for data-dredging, which can then be covered up by disclosing only selected analyses in the expert witness’s report.
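The inflation that data-dredging produces can be shown with a back-of-the-envelope simulation — a hypothetical Python sketch, not drawn from any case record. For simplicity it treats each alternative analysis as an independent look at null data; real re-analyses of the same dataset are correlated, but the inflation persists:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def one_analysis(n=30):
    """One 'study': two groups drawn from the SAME distribution,
    so any nominally 'significant' difference is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Rough two-sample z statistic (illustrative only)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # nominal alpha = 0.05

def dredge(k):
    """Report 'significant' if ANY of k alternative analyses crosses the line."""
    return any(one_analysis() for _ in range(k))

trials = 2000
single = sum(one_analysis() for _ in range(trials)) / trials
fifteen = sum(dredge(15) for _ in range(trials)) / trials
print(f"false-positive rate, 1 pre-planned analysis: {single:.2f}")   # close to 0.05
print(f"false-positive rate, best of 15 analyses:   {fifteen:.2f}")  # several times higher
```

With fifteen independent looks, the chance of at least one nominally significant result is roughly 1 − 0.95¹⁵, or about 54 percent — the arithmetic behind the “fifteenth alternative analysis” worry.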

The output of statisticians’ analyses will take the form of a “point estimate” of an “effect size,” a significance or posterior probability, a set of regression coefficients, a summary estimate of association, or a similar measure that did not exist before the statistician used the underlying data to produce the analytical outcome, which is then the subject of further inference and opinion. Frequentist analyses must identify the probability model and other assumptions employed. Bayesian analyses must also identify the prior probabilities used as the starting point from which, with further evidence, posterior probabilities are computed. The science, creativity, and judgment involved in statistical methods challenge courts and counsel to discover, understand, reproduce, present, and cross-examine statistician expert witness testimony. And occasionally, there is duplicity and deviousness to uncover as well.
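The dependence of a Bayesian result on its declared starting point can be made concrete with a minimal worked example — the numbers below are invented purely for illustration. Bayes’ theorem reweights the prior by how well each hypothesis predicts the new evidence:

```python
from fractions import Fraction  # exact arithmetic keeps the example transparent

# Hypothetical inputs a Bayesian analysis must disclose up front:
prior = Fraction(1, 10)                # prior probability the hypothesis is true
p_evidence_if_true = Fraction(8, 10)   # likelihood of the evidence if it is true
p_evidence_if_false = Fraction(2, 10)  # likelihood of the evidence if it is false

# Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
numerator = prior * p_evidence_if_true
denominator = numerator + (1 - prior) * p_evidence_if_false
posterior = numerator / denominator
print(posterior)  # prints 4/13, roughly 0.31
```

Change the declared prior and the posterior changes with it: starting from 1/2 rather than 1/10, the same evidence yields a posterior of 4/5. That sensitivity is why the text insists that a Bayesian analysis identify its prior.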

The discovery obligations with respect to statistician expert witnesses vary considerably among state and federal courts.  The 1993 amendments to the Federal Rules of Civil Procedure created an automatic right to conduct depositions of expert witnesses[8].  Previously, parties in federal court had to show the inadequacy of other methods of discovery.  Rule 26(a)(2)(B)(ii) requires the automatic production of “the facts or data considered by the [expert] witness in forming” his or her opinions. The literal wording of this provision would appear to restrict automatic, mandatory disclosure to those facts and data specifically considered in forming the opinions contained in the prescribed report. Several courts, however, have interpreted the term “considered” to include any information that expert witnesses review or generate, “regardless of whether the experts actually rely on those materials as a basis for their opinions.”[9]

Among the changes introduced by the 2010 amendments to the Federal Rules of Civil Procedure was a narrowing of the required disclosure of “facts or data” considered by expert witnesses in arriving at their opinions, so as to exclude some attorney work product, along with new protection of draft expert witness reports from discovery.  The implications of these changes for statistician expert witnesses are not entirely clear, but they should not become an excuse to deprive litigants of access to the data and materials underlying statisticians’ analyses. Since the 2010 amendments, courts have enforced discovery requests for testifying expert witnesses’ notes because they were neither draft reports nor specific communications between counsel and expert witnesses[10].

The Requirements Associated With Producing A Report

Rule 26 is the key rule that governs disclosure and discovery of expert witnesses and their opinions. Under the current version of Rule 26(a)(2)(B), the scope of required disclosure in the expert report has been narrowed in some respects. Rule 26(a)(2)(B) now requires service of expert witness reports that contain, among other things:

(i) a complete statement of all opinions the witness will express and the basis and reasons for them;

(ii) the facts or data considered by the witness in forming them;

(iii) any exhibits that will be used to summarize or support them.

The Rule’s use of “them” seems clearly to refer back to “opinions,” which creates a problem with respect to materials considered generally with respect to the case or the issues, but not for the specific opinions advanced in the report.

The previous language of the rule required that the expert report disclose “the data or other information considered by the witness.[11]” The older version’s reference to “other information,” which the current rule’s “facts or data” replaced, was generally interpreted to authorize discovery of all oral and written communications between counsel and expert witnesses.  The trimming of Rule 26(a)(2)(B)(ii) was thus designed to shield these attorney-expert witness communications from disclosure and discovery.

The federal rules specify that the required report “is intended to set forth the substance of the direct examination[12].” Several courts have thus interpreted the current rule in a way that does not result in automatic production of all statistical analyses performed, but only of those data and analyses the witness has decided to present at trial.  The report requirement, as it now stands, is thus not necessarily designed to help adverse counsel fully challenge and cross-examine the expert witness on analyses attempted, discarded, or abandoned. If a statistician expert witness conducted multiple statistical tests before arriving at a “preferred” analysis, that expert witness, and instructing counsel, will obviously be all too happy to eliminate the unhelpful analyses from the direct examination, and from the purview of disclosure.

Some of the caselaw in this area makes clear that it is up to the requesting party to discover what it wants beyond the materials that must automatically be disclosed in, or with, the report. A party will not be heard to complain, or to attack its adversary, about a failure to produce materials never requested.[13] Citing Rule 26(a) and its subsections, which deal with the report, and not discovery beyond the report, several cases take a narrow view of disclosure as embodied in the report requirement.[14] In one case, McCoy v. Whirlpool Corp., the trial court did, however, permit the plaintiff to conduct a supplemental deposition of the defense expert witness to question him about his calculations[15].

A narrow view of automatic disclosure in some cases appears to protect statistician and other expert witnesses from being required to produce calculations, statistical analyses, and data outputs even for opinions that are identified in their reports, and intended to be the subject of direct examination at trial[16].  The trial court’s handling of the issues in Cook v. Rockwell International Corporation is illustrative of this questionable approach.  The inadequacy of the expert witnesses’ reports, for failing to disclose notes, calculations, and preliminary analyses, arose in the context of a Rule 702 motion challenging the admissibility of the witnesses’ opinion testimony.  The trial court rejected “[a]ny suggestion that an opposing expert must be able to verify the correctness of an expert’s work before it can be admitted… ”[17]; any such suggestion “misstates the standard for admission of expert evidence under [Fed. R. Evid.] 702.[18]”  The Cook court further rejected any “suggestion in Rule 26(a)(2) that an expert report is incomplete unless it contains sufficient information and detail for an opposing expert to replicate and verify in all respects both the method and results described in the report.[19]”   Similarly, the court rejected the defense’s complaints that one of the plaintiffs’ expert witnesses’ reports and disclosures violated Rule 26(a)(2) by failing to provide “detailed working notes, intermediate results and computer records” that would allow a rebuttal expert witness to test the methodology and replicate the results[20]. The court observed that

“Defendants’ argument also confuses the expert reporting requirements of Rule 26(a)(2) with the considerations for assessing the admissibility of an expert’s opinions under Rule 702 of the Federal Rules of Evidence. Whether an expert’s method or theory can or has been tested is one of the factors that can be relevant to determining whether an expert’s testimony is reliable enough to be admissible. See Fed. R. Evid. 702 2000 advisory committee’s note; Daubert, 509 U.S. at 593, 113 S.Ct. 2786. It is not a factor for assessing compliance with Rule 26(a)(2)’s expert disclosure requirements.[21]

The Rule 702 motion to exclude an expert witness comes too late in the pre-trial process for complaints about failure to disclose underlying data and analyses. The Cook case never explicitly addressed Rule 26(b), or other discovery procedures, as a basis for the defense request for underlying documents, data, and materials.  In any event, the limited scope accorded to Rule 26 disclosure mechanisms by Cook emphasizes the importance of deploying ancillary discovery tools early in the pre-trial process.

The Format Of Documents and Data Files To Be Produced

The dispute in Helmert v. Butterball, LLC, is typical of what may be expected in a case involving statistician expert witness testimony.  The parties exchanged reports of their statistical expert witnesses, as well as the data output files.  The parties chose, however, to produce the data files in ways that were singularly unhelpful to the other side.  One party produced data files in the “portable document format” (pdf) rather than in the native format of the statistical software package used (STATA).  The other party produced data in a spreadsheet without any information about how the data were processed.  The parties then filed cross-motions to compel production of the data in its “electronic, native format.” In addition, plaintiffs pressed for all the underlying data, formulae, and calculations. The court denied both motions on the theory that both sides had received copies of the data considered, and neither was denied facts or data considered by the expert witnesses in reaching their opinions[22]. The court refused plaintiffs’ request for formulae and calculations as well. The court’s discussion of its rationale for denying the cross-motions is framed entirely in terms of what parties may expect and be entitled to in the form of a report, without any mention of additional discovery mechanisms to obtain the sought-after materials. The court noted that the parties would have the opportunity to explore calculations at deposition.

The decision in Helmert seems typical of judicial indifference to, and misunderstanding of, the need for datasets, especially large ones, in the form in which they were uploaded to, and used in, statistical software programs. What is missing from the Helmert opinion is a recognition that an effective deposition would require production of the requested materials in advance of the oral examination, so that the examining counsel can confer and consult with a statistical expert for help in formulating and structuring the deposition questions. There are at least two remedial considerations for future discovery motions of the sort seen in Helmert. First, the moving party should support its application with an affidavit of a statistical expert to explain the specific need for identification of the actual formulae used, the programming used within specific software programs to run analyses, and the interim and final outputs. Second, the moving party should press a strong analogy with document discovery from parties, in which courts routinely order production of “native format” versions of PowerPoint, Excel, and Word documents in response to document requests. Rule 34 of the Federal Rules of Civil Procedure requires that “[a] party must produce documents as they are kept in the usual course of business[23]” and that, “[i]f a request does not specify a form for producing electronically stored information, a party must produce it in a form or forms in which it is ordinarily maintained or in a reasonably usable form or forms.[24]” The Advisory Committee notes to Rule 34[25] make clear that:

“[T]he option to produce in a reasonably usable form does not mean that a responding party is free to convert electronically stored information from the form in which it is ordinarily maintained to a different form that makes it more difficult or burdensome for the requesting party to use the information efficiently in the litigation. If the responding party ordinarily maintains the information it is producing in a way that makes it searchable by electronic means, the information should not be produced in a form that removes or significantly degrades this feature.”

Under the Federal Rules, a requesting party’s obligation to specify a particular format for document production is superseded by the responding party’s obligation to refrain from manipulating or converting “any of its electronically stored information to a different format that would make it more difficult or burdensome for [the requesting party] to use.[26]” In Helmert, the STATA files should have been delivered as STATA native format files, and the requesting party should have requested, and received, all STATA input and output files, which would have permitted the requestor to replicate all analyses conducted.

Some of the decided cases on expert witness reports are troubling because they do not explicitly state whether they are addressing the adequacy of automatic disclosure and reports, or a response to propounded discovery.  For example, in Etherton v. Owners Ins. Co.[27], the plaintiff sought to preclude a defense accident reconstruction expert witness on grounds that the witness failed to produce several pages of calculations[28]. The defense argued that “[w]hile [the witness’s] notes regarding these calculations were not included in his expert report, the report does specifically identify the methods he employed in his analysis, and the static data used in his calculations,” and asserted that “Rule 26 does not require the disclosure of draft expert reports, and it certainly does not require disclosure of calculations, as Plaintiff contends.[29]”  The court in Etherton agreed that “Fed. R. Civ. P. 26(a)(2)(B) does not require the production of every scrap of paper with potential relevance to an expert’s opinion.[30]” The court laid the discovery default here upon the plaintiff, as the requesting party:  “Although Plaintiff should have known that Mr. Ogden’s engineering analysis would likely involve calculations, Plaintiff never requested that documentation of those calculations be produced at any time prior to the date of [Ogden’s] deposition.”[31]

The Etherton court’s assessment that the defense expert witness’s calculations were “working notes,” which Rule 26(a)(2) does not require to be included in or produced with a report, seems a complete answer, except for the court’s musings about the new provisions of Rule 26(b)(4)(B), which protect draft reports.  Because of the court’s emphasis that the plaintiff never requested the documentation of the relevant calculations, the court’s musings about what was discoverable were clearly dicta.  The calculations, which would reveal data and inferential processes considered, appear to be core materials, subject to and important for discovery[32].

[This post is a substantial revision and update to an earlier post, “Discovery of Statistician Expert Witnesses” (July 19, 2012).]


[1] 542 F.2d 111 (2d Cir. 1976), cert. denied, 429 U.S. 987 (1976).

[2] Id. at 124.

[3] Id. at 126 & n.17.

[4] United States v. Dioguardi, 428 F.2d 1033, 1038 (2d Cir.), cert. denied, 400 U.S. 825 (1970) (holding that prosecution’s failure to produce computer program was error but harmless on the particular facts of the case).

[5] See, e.g., Roberts, “A Practitioner’s Primer on Computer-Generated Evidence,” 41 U. Chi. L. Rev. 254, 255-56 (1974); Freed, “Computer Records and the Law — Retrospect and Prospect,” 15 Jurimetrics J. 207, 208 (1975); ABA Sub-Committee on Data Processing, “Principles of Introduction of Machine Prepared Studies” (1964).

[6] Aldous, Note, “Disclosure of Expert Computer Simulations,” 8 Computer L.J. 51 (1987); Betsy S. Fiedler, “Are Your Eyes Deceiving You?: The Evidentiary Crisis Regarding the Admissibility of Computer Generated Evidence,” 48 N.Y.L. Sch. L. Rev. 295, 295–96 (2004); Fred Galves, “Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules of Evidence, and the Need for Institutional Reform and More Judicial Acceptance,” 13 Harv. J.L. & Tech. 161 (2000); Leslie C. O’Toole, “Admitting that We’re Litigating in the Digital Age: A Practical Overview of Issues of Admissibility in the Technological Courtroom,” Fed. Def. Corp. Csl. Quart. 3 (2008); Carole E. Powell, “Computer Generated Visual Evidence: Does Daubert Make a Difference?” 12 Georgia State Univ. L. Rev. 577 (1995).

[7] Federal Judicial Center, Manual for Complex Litigation § 11.447, at 82 (4th ed. 2004) (“The judge should therefore consider the accuracy and reliability of computerized evidence, including any necessary discovery during pretrial proceedings, so that challenges to the evidence are not made for the first time at trial.”); id. at § 11.482, at 99 (“Early and full disclosure of expert evidence can help define and narrow issues. Although experts often seem hopelessly at odds, revealing the assumptions and underlying data on which they have relied in reaching their opinions often makes the bases for their differences clearer and enables substantial simplification of the issues.”)

[8] Fed. R. Civ. P. 26(b)(4)(A) (1993).

[9] United States v. Dish Network, L.L.C., No. 09-3073, 2013 WL 5575864, at *2, *5 (C.D. Ill. Oct. 9, 2013) (noting that the 2010 amendments did not change the meaning of the term “considered,” which includes “anything received, reviewed, read, or authored by the expert, before or in connection with the forming of his opinion, if the subject matter relates to the facts or opinions expressed.”); S.E.C. v. Reyes, 2007 WL 963422, at *1 (N.D. Cal. Mar. 30, 2007). See also South Yuba River Citizens’ League v. National Marine Fisheries Service, 257 F.R.D. 607, 610 (E.D. Cal. 2009) (majority rule requires production of materials considered even when work product); Trigon Insur. Co. v. United States, 204 F.R.D. 277, 282 (E.D. Va. 2001).

[10] Dongguk Univ. v. Yale Univ., No. 3:08–CV–00441 (TLM), 2011 WL 1935865 (D. Conn. May 19, 2011) (ordering production of a testifying expert witness’s notes, reasoning that they were neither draft reports nor communications between the party’s attorney and the expert witness, and they were not the mental impressions, conclusions, opinions, or legal theories of the party’s attorney); In re Application of the Republic of Ecuador, 280 F.R.D. 506, 513 (N.D. Cal. 2012) (holding that Rule 26(b) does not protect an expert witness’s own work product other than draft reports). But see Internat’l Aloe Science Council, Inc. v. Fruit of the Earth, Inc., No. 11-2255, 2012 WL 1900536, at *2 (D. Md. May 23, 2012) (holding that expert witness’s notes created to help counsel prepare for deposition of adversary’s expert witness were attorney work product and protected from disclosure under Rule 26(b)(4)(C) because they did not contain opinions that the expert would provide at trial).

[11] Fed. R. Civ. P. 26(a)(2)(B)(ii) (1993) (emphasis added).

[12] Notes of Advisory Committee on Rules for Rule 26(a)(2)(B). See, e.g., Lithuanian Commerce Corp., Ltd. v. Sara Lee Hosiery, 177 F.R.D. 245, 253 (D.N.J. 1997) (expert witness’s written report should state completely all opinions to be given at trial, the data, facts, and information considered in arriving at those opinions, as well as any exhibits to be used), vacated on other grounds, 179 F.R.D. 450 (D.N.J. 1998).

[13] See, e.g., Gillespie v. Sears, Roebuck & Co., 386 F.3d 21, 35 (1st Cir. 2004) (holding that trial court erred in allowing cross-examination and final argument on expert witness’s supposed failure to produce all working notes and videotaped recordings while conducting tests, when objecting party never made such document requests).

[14] See, e.g., McCoy v. Whirlpool Corp., 214 F.R.D. 646, 652 (D. Kan. 2003) (Rule 26(a)(2) “does not require that a report recite each minute fact or piece of scientific information that might be elicited on direct examination to establish the admissibility of the expert opinion … Nor does it require the expert to anticipate every criticism and articulate every nano-detail that might be involved in defending the opinion[.]”).

[15] Id. (without distinguishing between the provisions of Rule 26(a) concerning reports and Rule 26(b) concerning depositions); see also Scott v. City of New York, 591 F. Supp. 2d 554, 559 (S.D.N.Y. 2008) (“failure to record the panoply of descriptive figures displayed automatically by his statistics program does not constitute best practices for preparation of an expert report,” but holding that the report contained “the data or other information” he considered in forming his opinion, as required by Rule 26); McDonald v. Sun Oil Co., 423 F. Supp. 2d 1114, 1122 (D. Or. 2006) (holding that Rule 26(a)(2)(B) does not require the production of an expert witness’s working notes; a party may not be sanctioned for spoliation based upon an expert witness’s failure to retain notes, absent a showing of relevancy and bad faith), rev’d on other grounds, 548 F.3d 774 (9th Cir. 2008).

[16] In re Xerox Corp. Securities Litig., 746 F. Supp. 2d 402, 414-15 (D. Conn. 2010) (“The court concludes that it was not necessary for the [expert witness’s] initial regression analysis to be contained in the [expert] report” that was disclosed pursuant to Rule 26(a)(2)), aff’d on other grounds sub nom. Dalberth v. Xerox Corp., 766 F.3d 172 (2d Cir. 2014). See also Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___, No. 10-1377, 2012 WL 2368857 (June 25, 2012), on remand, 13 F. Supp. 3d 1153 (D. Colo. 2014), vacated, 2015 WL 3853593, No. 14–1112 (10th Cir. June 23, 2015); Flebotte v. Dow Jones & Co., No. Civ. A. 97–30117–FHF, 2000 WL 35539238, at *7 (D. Mass. Dec. 6, 2000) (“Therefore, neither the plain language of the rule nor its purpose compels disclosure of every calculation or test conducted by the expert during formation of the report.”).

[17] Cook, 580 F. Supp. 2d at 1121–22.

[18] Id.

[19] Id. & n. 55 (Rule 26(a)(2) does not “require that an expert report contain all the information that a scientific journal might require an author of a published paper to retain.”).

[20] Id. at 1121-22.

[21] Id.

[22] Helmert v. Butterball, LLC, No. 4:08-CV-00342, 2011 WL 3157180, at *2 (E.D. Ark. July 27, 2011).

[23] Fed. R. Civ. P. 34(b)(2)(E)(i).

[24] Fed. R. Civ. P. 34(b)(2)(E)(ii).

[25] Fed. R. Civ. P. 34, Advisory Comm. Notes (2006 Amendments).

[26] Crissen v. Gupta, 2013 U.S. Dist. LEXIS 159534, at *22 (S.D. Ind. Nov. 7, 2013), citing Craig & Landreth, Inc. v. Mazda Motor of America, Inc., 2009 U.S. Dist. LEXIS 66069, at *3 (S.D. Ind. July 27, 2009). See also Saliga v. Chemtura Corp., 2013 U.S. Dist. LEXIS 167019, *3-7 (D. Conn. Nov. 25, 2013).

[27] No. 10-cv-00892-MSK-KLM, 2011 WL 684592 (D. Colo. Feb. 18, 2011).

[28] Id. at *1.

[29] Id.

[30] Id. at *2.

[31] Id.

[32] See Barnes v. Dist. of Columbia, 289 F.R.D. 1, 19–24 (D.D.C. 2012) (ordering production of underlying data and information because, “[i]n order for the [requesting party] to understand fully the . . . [r]eports, they need to have all the underlying data and information on how” the reports were prepared).

Earthquake-Induced Data Loss – We’re All Shook Up

June 26th, 2015

Adam Marcus and Ivan Oransky are medical journalists who publish the Retraction Watch blog. Their blog’s coverage of error, fraud, plagiarism, and other publishing disasters is often first-rate, and a valuable curative for the belief that peer review publication, as it is now practiced, ensures trustworthiness.

Yesterday, Retraction Watch posted an article on earthquake-induced data loss. Shannon Palus, “Lost your data? Blame an earthquake” (June 25, 2015). A commenter on PubPeer raised concerns about a key figure in a paper[1]. The authors acknowledged a problem, which they traced to their loss of data in an earthquake. The journal retracted the paper.

This is not the first instance of earthquake-induced loss of data.

When John O’Quinn and his colleagues in the litigation industry created the pseudo-science of silicone-induced autoimmunity, they recruited Nir Kossovsky, a pathologist at UCLA Medical Center. Although Kossovsky looked a bit like Pee-Wee Herman, he was a graduate of the University of Chicago Pritzker School of Medicine, and the U.S. Naval War College, and a consultant to the FDA. In his dress whites, Kossovsky helped O’Quinn sell his silicone immunogenicity theories to juries and judges around the country. For a while, the theories sold well.

In testifying and dodging discovery for the underlying data in his silicone studies, Kossovsky was as slick as silicone itself. Ultimately, when defense counsel subpoenaed the underlying data from Kossovsky’s silicone study, Kossovsky shrugged and replied that the Northridge Earthquake destroyed his data. Apparently coffee cups and other containers of questionable fluids spilled on his silicone data in the quake, and Kossovsky’s emergency response was to obtain garbage cans and throw out the data. For the gory details, see Gary Taubes, “Silicone in the System: Has Nir Kossovsky really shown anything about the dangers of breast implants?” Discover Magazine (Dec. 1995).

As Mr. Taubes points out, Kossovsky’s paper was rejected by several journals before being published in the Journal of Applied Biomaterials, on whose editorial board Kossovsky sat. The lack of data did not, however, keep Kossovsky from continuing to testify, or from trying to commercialize, along with his wife, Beth Brandegee, and his father, Ram Kossowsky[2], an ELISA-based silicone “antibody” biomarker diagnostic test, Detecsil. Although Rule 702 had been energized by the Daubert decision in 1993, many judges were still unwilling to take a hard look at Kossovsky’s study or his test, or to demand the supposedly supporting data. The Food and Drug Administration, however, eventually caught up with Kossovsky, and the Detecsil marketing ceased. Lillian J. Gill, FDA Acting Director, Office of Compliance, Letter to Beth S. Brandegee, President, Structured Biologicals (SBI) Laboratories: Detecsil Silicone Sensitivity Test (July 15, 1994); see Taubes, Discover Magazine.

After defense counsel learned of the FDA’s enforcement action against Kossovsky and his company, the litigation industry lost interest in Kossovsky, and his name dropped off trial witness lists. His name also dropped off the rolls of tenured UCLA faculty, and he apparently left medicine altogether to become a business consultant. Dr. Kossovsky became “an authority on business process risk and reputational value.” Kossovsky is now the CEO and Director of Steel City Re, which specializes in strategies for maintaining and enhancing reputational value. Ironic, eh?

A review of PubMed’s entries for Nir Kossovsky shows that his run in silicone started in 1983, and ended in 1996. He testified for plaintiffs in Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir.1994) (tried in 1991), and in the infamous case of Johnson v. Bristol-Myers Squibb, CN 91-21770, Tx Dist. Ct., 125th Jud. Dist., Harris Cty., 1992.

A bibliography of Kossovsky’s silicone oeuvre is listed below.


[1] Federico S. Rodríguez, Katterine A. Salazar, Nery A. Jara, María A García-Robles, Fernando Pérez, Luciano E. Ferrada, Fernando Martínez, and Francisco J. Nualart, “Superoxide-dependent uptake of vitamin C in human glioma cells,” 127 J. Neurochemistry 793 (2013).

[2] Father and son apparently did not agree on how to spell their last name.


Nir Kossovsky, D. Conway, Ram Kossowsky & D. Petrovich, “Novel anti-silicone surface-associated antigen antibodies (anti-SSAA(x)) may help differentiate symptomatic patients with silicone breast implants from patients with classical rheumatological disease,” 210 Curr. Topics Microbiol. Immunol. 327 (1996)

Nir Kossovsky, et al., “Preservation of surface-dependent properties of viral antigens following immobilization on particulate ceramic delivery vehicles,” 29 J. Biomed. Mater. Res. 561 (1995)

E.A. Mena, Nir Kossovsky, C. Chu, and C. Hu, “Inflammatory intermediates produced by tissues encasing silicone breast prostheses,” 8 J. Invest. Surg. 31 (1995)

Nir Kossovsky, “Can the silicone controversy be resolved with rational certainty?” 7 J. Biomater. Sci. Polymer Ed. 97 (1995)

Nir Kossovsky & C.J. Freiman, “Physicochemical and immunological basis of silicone pathophysiology,” 7 J. Biomater. Sci. Polym. Ed. 101 (1995)

Nir Kossovsky, et al., “Self-reported signs and symptoms in breast implant patients with novel antibodies to silicone surface associated antigens [anti-SSAA(x)],” 6 J. Appl. Biomater. 153 (1995), and “Erratum,” 6 J. Appl. Biomater. 305 (1995)

Nir Kossovsky & J. Stassi, “A pathophysiological examination of the biophysics and bioreactivity of silicone breast implants,” 24s1 Seminars Arthritis & Rheum. 18 (1994)

Nir Kossovsky & C.J. Freiman, “Silicone breast implant pathology. Clinical data and immunologic consequences,” 118 Arch. Pathol. Lab. Med. 686 (1994)

Nir Kossovsky & C.J. Freiman, “Immunology of silicone breast implants,” 8 J. Biomaterials Appl. 237 (1994)

Nir Kossovsky & N. Papasian, “Mammary implants,” 3 J. Appl. Biomater. 239 (1992)

Nir Kossovsky, P. Cole, D.A. Zackson, “Giant cell myocarditis associated with silicone: An unusual case of biomaterials pathology discovered at autopsy using X-ray energy spectroscopic techniques,” 93 Am. J. Clin. Pathol. 148 (1990)

Nir Kossovsky & R.B. Snow, “Clinical-pathological analysis of failed central nervous system fluid shunts,” 23 J. Biomed. Mater. Res. 73 (1989)

R.B. Snow & Nir Kossovsky, “Hypersensitivity reaction associated with sterile ventriculoperitoneal shunt malfunction,” 31 Surg. Neurol. 209 (1989)

Nir Kossovsky & Ram Kossowsky, “Medical devices and biomaterials pathology: Primary data for health care technology assessment,” 4 Internat’l J. Technol. Assess. Health Care 319 (1988)

Nir Kossovsky, John P. Heggers, and M.C. Robson, “Experimental demonstration of the immunogenicity of silicone-protein complexes,” 21 J. Biomed. Mater. Res. 1125 (1987)

Nir Kossovsky, John P. Heggers, R.W. Parsons, and M.C. Robson, “Acceleration of capsule formation around silicone implants by infection in a guinea pig model,” 73 Plastic & Reconstr. Surg. 91 (1984)

John Heggers, Nir Kossovsky, et al., “Biocompatibility of silicone implants,” 11 Ann. Plastic Surg. 38 (1983)

Nir Kossovsky, John P. Heggers, et al., “Analysis of the surface morphology of recovered silicone mammary prostheses,” 71 Plast. Reconstr. Surg. 795 (1983)

The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case

June 25th, 2015

Most epidemiologic studies are not admissible. Such studies involve many layers of hearsay evidence, measurements of exposures, diagnoses, records, and the like, which cannot be “cross-examined.” Our legal system allows expert witnesses to rely upon such studies, although clearly inadmissible, when “experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject.” Federal Rule of Evidence 703. One of the problems that judges face in carrying out their gatekeeping duties is to evaluate whether challenged expert witnesses have reasonably relied upon particular studies and data. Judges, unlike juries, have an obligation to explain their decisions, and many expert witness gatekeeping decisions by judges fall short by failing to provide citations to the contested studies at issue in the challenge. Sometimes the parties may be able to discern what is being referenced, but the judicial decision has a public function that goes beyond speaking to the litigants before the court. Without full citations to the studies that underlie an expert witness’s opinion, the communities of judges, lawyers, scientists, and others cannot evaluate the judge’s gatekeeping. Imagine a judicial opinion that vaguely referred to a decision by another judge but failed to provide a citation. We would think such an opinion to be a miserable failure of the judge’s obligation to explain and justify the resolution of the matter, as well as a case of poor legal scholarship. The same considerations should apply to the scientific studies relied upon by an expert witness whose opinion is being discussed in a judicial opinion.

Judge Sarah Vance’s opinion in Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953 (E.D. La. June 16, 2015) [cited as Burst], is a good example of opinion writing, in the context of deciding an evidentiary challenge to an expert witness’s opinion, that satisfies the requirements of both judicial explanation and basic scholarship. The key studies relied upon by the challenged expert witness are identified, and cited, in a way that permits both litigants and non-litigants to review Her Honor’s opinion, and to evaluate both the challenged expert witness’s opinion and the trial judge’s gatekeeping performance. Citations to the underlying studies create the delicious possibility that the trial judge might actually have read the papers to decide the admissibility question. On the merits, Judge Vance’s opinion in Burst also serves as a good example of judicial scrutiny that cuts through an expert witness’s hand waving and misdirection in the face of inadequate, inconsistent, and insufficient evidence for a causal conclusion.

Burst is yet another case in which plaintiff claimed that exposure to gasoline caused acute myeloid leukemia (AML), one of several different types of leukemia[1]. The claim is fraught with uncertainty and speculation in the form of extrapolations between substances, from high to low exposures, and between diseases.

Everyone has a background exposure to benzene from both natural and anthropogenic sources. Smoking results in approximately a ten-fold elevation of benzene exposure. Agency for Toxic Substances and Disease Registry (ATSDR) Public Health Statement – Benzene CAS#: 71-43-2 (August 2007). Gasoline contains small amounts of benzene, on the order of 1 percent or less. U.S. Environmental Protection Agency (EPA), Summary and Analysis of the 2011 Gasoline Benzene Pre-Compliance Report (2012).

Although gasoline has always contained benzene, the quantitative difference in levels of benzene exposure involved in working with concentrated benzene and with gasoline has led virtually all scientists and regulatory agencies to treat the two exposures differently. Benzene exposure is a known cause of AML; gasoline exposure, even in occupational contexts, is not taken to be a known cause of AML. Dose matters.

Although the reviews of the International Agency for Research on Cancer (IARC) are sometimes partisan, incomplete, and biased towards finding carcinogenicity, the IARC categorizes benzene as a known human carcinogen, in large part because of its known ability to cause AML, but regards the evidence for gasoline as inadequate for making causal conclusions. IARC, Monographs on the Evaluation of Carcinogenic Risks to Humans, Vol. 45, Occupational Exposures in Petroleum Refining; Crude Oil and Major Petroleum Fuels (1989) (“There is inadequate evidence for the carcinogenicity in humans of gasoline.”) (emphasis in original)[2].

To transmogrify a gasoline case into a benzene case, plaintiff called upon Peter F. Infante, a fellow of the white-hat conspiracy, Collegium Ramazzini, and an adjunct professor at George Washington University School of Public Health and Health Services. Previously, Dr. Infante was Director of the Office of Standards Review at the Occupational Safety and Health Administration (OSHA). More recently, Infante is known as the president and registered agent of Peter F. Infante Consulting, LLC, in Falls Church, Virginia, and a go-to expert witness for plaintiffs in toxic tort litigation[3].

In the Burst case, Infante started out in trouble, by claiming that he had “followed the methodology of the International Agency for Research on Cancer (IARC) and of the Occupational Safety and Health Administration (OSHA) in evaluating epidemiological studies, case reports and toxicological studies of benzene exposure and its effect on the hematopoietic system.” Burst at *4. Relying upon the IARC’s methodology might satisfy some uncritical courts, but here the IARC itself sharply distinguished its characterizations of benzene and gasoline in separate reviews. Infante’s opinion ignored this divide, although it ultimately had to connect gasoline exposure to the claimed injury[4].

Judge Vance found that Infante’s proffered opinions ransacked the catalogue of expert witness errors. Infante:

  • relied upon studies of benzene exposure and diseases other than the outcome of interest, AML. Burst at *4, *10, *13.
  • relied upon studies of benzene exposure rather than gasoline exposure. Burst at *9.
  • relied upon studies that assessed outcomes in groups with multiple exposures, which studies were hopelessly confounded. Burst at *7.
  • failed to acknowledge the inconsistency of outcomes in the studies of the relevant exposure, gasoline. Burst at *9.
  • relied upon studies that lacked adequate exposure measurements and characterizations, which lack was among the reasons that the ATSDR declined to label gasoline a carcinogen. Burst at *12.
  • relied upon studies that did not report statistically significant associations between gasoline exposure and AML. Burst at *10, *12.
  • cherry picked studies and failed to explain contrary results. Burst at *10.
  • cherry picked data from within studies that did not otherwise support his conclusion. Burst at *10.
  • interpreted studies at odds with how the authors of published papers interpreted their own studies. Burst at *10.
  • failed to reconcile conflicting studies. Burst at *10.
  • manipulated data without sufficient explanation or justification. Burst at *14.
  • failed to conduct an appropriate analysis of the entire dataset, along the lines of Sir Austin Bradford Hill’s nine factors. Burst at *10.

The manipulation charge is worth further discussion because it reflects upon the trial court’s acumen and the challenged witness’s deviousness. Infante combined the data from two exposure subgroups from one study[5] to claim that the study actually had a statistically significant association. The trial court found that Dr. Infante failed to explain or justify the recalculation. Burst at *14. At the pre-trial hearing, Dr. Infante offered that he performed the re-calculation on a “sticky note,” but failed to provide his calculations. The court might also have been concerned about the misuse of claiming statistical significance in a post-hoc, non-prespecified analysis that would have clearly raised a multiple comparisons issue. Infante also combined two separate datasets from an unpublished study (the Spivey study for Union Oil), which the court found problematic for his failure to explain and justify the aggregation of data. Id. This recalculation raises the issue whether the two separate datasets could be appropriately combined.
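
The multiple-comparisons worry is easy to demonstrate with a simulation on invented data. In the sketch below (hypothetical numbers, not the Rushton & Romaniuk data), exposure has no effect at all, yet an analyst who tests every subgroup, and every post-hoc pooling of subgroups, against the control will declare “statistical significance” far more often than the nominal five percent of the time:

```python
import math
import random

random.seed(12345)

def two_sample_z(x1, n1, x2, n2):
    """Absolute z statistic for a two-proportion test (pooled variance)."""
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return abs(x1 / n1 - x2 / n2) / se if se else 0.0

def binom(n, p):
    """Number of 'cases' among n subjects, each with case probability p."""
    return sum(random.random() < p for _ in range(n))

CRIT = 1.96            # two-sided 5% critical value
N, P = 100, 0.10       # subgroup size; the same null case rate everywhere
trials = 2000
hits_single = hits_any = 0

for _ in range(trials):
    control = binom(N, P)
    subs = [binom(N, P) for _ in range(3)]       # three exposure subgroups
    tests = [two_sample_z(s, N, control, N) for s in subs]
    # post-hoc poolings: every pair of subgroups, then all three combined
    tests += [two_sample_z(subs[i] + subs[j], 2 * N, control, N)
              for i in range(3) for j in range(i + 1, 3)]
    tests.append(two_sample_z(sum(subs), 3 * N, control, N))
    hits_single += tests[0] > CRIT    # one pre-specified comparison
    hits_any += max(tests) > CRIT     # declare success if anything "works"

print(f"single pre-specified test: {hits_single / trials:.3f}")
print(f"test everything and pool:  {hits_any / trials:.3f}")
```

With these invented parameters, the pre-specified test comes in near its nominal five percent false-positive rate, while the test-everything-and-pool strategy runs well above it. An unexplained “sticky note” recalculation is the litigation version of the second strategy.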

For another study[6], Infante adjusted the results based upon his assessment that the study was biased by a “healthy worker effect[7].” Burst at *15. Infante failed to provide any explanation of how he adjusted for the healthy worker effect, thus giving the court no basis for evaluating the reliability of his methodology. Perhaps more telling, the authors of this study acknowledged the hypothetical potential for healthy worker bias, but chose not to adjust for it because their primary analyses were conducted internally within the working study population, which fully accounted for the potential bias[8].
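
The methodological point about internal comparisons can be illustrated with back-of-the-envelope numbers (invented for this sketch, not taken from the Wong study): a workforce selected for fitness shows a spurious deficit of disease when compared externally against the general population, even when the exposure under study does nothing, while an exposed-versus-unexposed comparison inside the same cohort is untouched by that selection effect.

```python
# Hypothetical disease rates per 1,000 person-years, chosen for illustration.
general_population_rate = 10.0   # includes the severely ill and disabled
worker_rate = 7.0                # healthier-than-average hires; exposure inert

# External comparison (an SMR-style ratio against the general population):
smr = worker_rate / general_population_rate
print(f"workers vs. general population: {smr:.2f}")   # 0.70 -- a spurious 'deficit'

# Internal comparison within the cohort, where the selection effect cancels:
exposed_rate, unexposed_rate = 7.0, 7.0   # exposure truly has no effect
print(f"exposed vs. unexposed workers:  {exposed_rate / unexposed_rate:.2f}")   # 1.00
```

An adjustment for the healthy worker effect, layered on top of an already internal analysis, would thus correct a bias that the design had already removed.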

The court emphasized that it did not question whether combining datasets or adjusting for bias was accepted or proper methodology; rather it focused its critical scrutiny on Infante’s refusal or failure to explain and justify his post-hoc “manipulations of published data.” Burst at *15. Without a showing that AML is more common among non-working, disabled men, the healthy worker adjustment could well be questioned.

In the final analysis, Infante’s sloppy narrative review could not stand in the face of obviously inconsistent epidemiologic data. Burst at *16. The trial court found that Dr. Infante’s methodology of claiming reliance upon multiple studies, which did not reliably (validly) support his claims or “fit” his conclusions, failed to satisfy the requirements of Federal Rule of Evidence 702. The analytical gap between the data and the opinion was too great. Id. at *8. Infante’s opinion fell into the abyss[9].


[1] See, e.g., Castellow v. Chevron USA, 97 F. Supp. 2d 780, 796 (S.D.Tex.2000) (“Plaintiffs here have not shown that the relevant scientific or medical literature supports the conclusion that workers exposed to benzene, as a component of gasoline, face a statistically significant risk of an increase in the rate of AML.”); Henricksen v. Conoco Phillips Co., 605 F.Supp.2d 1142, 1175 (E.D.Wa. 2009) (“None of the studies relied upon have concluded that gasoline has the same toxic effect as benzene, and none have concluded that the benzene component of gasoline is capable of causing AML.”); Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 450 (N.Y.2006) (“[N]o significant association has been found between gasoline exposure and AML. Plaintiff’s experts were unable to identify a single epidemiologic study finding an increased risk of AML as a result of exposure to gasoline.”).

[2] See also ATSDR Toxicological Profile for Gasoline (1995) (concluding “there is no conclusive evidence to support or refute the carcinogenic potential of gasoline in humans or animals based on the carcinogenicity of one of its components, benzene”); ATSDR, Public Health Statement for Automotive Gasoline (June 1995) (“However, there is no evidence that exposure to gasoline causes cancer in humans. There is not enough information available to determine if gasoline causes birth defects or affects reproduction.”).

[3] See, e.g., Harris v. CSX Transp., Inc., 753 SE 2d 275, 232 W. Va. 617 (2013); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142 (E.D. Wash. 2009); Roney v. GENCORP, Civil Action No. 3: 05-0788 (S.D.W. Va. Sept. 18, 2009); Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000).

[4] Judge Vance did acknowledge that benzene studies were relevant to Infante’s causation opinion, but emphasized that such studies could not suffice to show that all gasoline exposures could cause AML. Burst at *10 (citing Dickson v. Nat’l Maint. & Repair of Ky., Inc., No. 5:08–CV–00008, 2011 WL 12538613, at *6 (W.D. Ky. April 28, 2011) (“Benzene may be considered a causative agent despite only being a component of the alleged harm.”)).

[5] L. Rushton & H. Romaniuk, “A Case-Control Study to Investigate the Risk of Leukaemia Associated with Exposure to Benzene in Petroleum Marketing and Distribution Workers in the United Kingdom,” 54 Occup. & Envt’l Med. 152 (1997).

[6] Otto Wong, et al., “Health Effects of Gasoline Exposure. II. Mortality Patterns of Distribution Workers in the United States,” 101 Envt’l Health Persp. 6 (1993).

[7] Burst at *15, citing and quoting from John Last, A Dictionary of Epidemiology (3d ed.1995) (“Workers usually exhibit lower overall death rates than the general population because the severely ill and chronically disabled are ordinarily excluded from employment.”).

[8] Wong, supra.

[9] In a separate opinion, Judge Vance excluded a physician, Dr. Robert Harrison, who similarly opined that gasoline causes AML, and Mr. Burst’s AML, without the benefit of sound science to support his opinion. Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3620111 (E.D. La. June 9, 2015).

Daubert’s Error Rate

June 16th, 2015

In Daubert, the Supreme Court came to the realization that expert witness opinion testimony was allowed under the governing statute, Federal Rule of Evidence 702, only when that witness’s “scientific, technical, or other specialized knowledge” would help the fact finder. Knowledge clearly connotes epistemic warrant, and some of the Court’s “factors” speak directly to this warrant, such as whether the claim has been tested, and whether the opinion has an acceptable rate of error. The Court, however, continued to allow some proxies for that warrant, in the form of “general acceptance,” or “peer review.”

The “rate of error” factor has befuddled some courts in their attempt to apply the statutory requirements of Rule 702, especially when statistical evidence is involved. Some litigants have tried to suggest that a statistically significant result suffices alone to meet the demands of Rule 702, but this argument is clearly wrong. See, e.g., United States v. Vitek Supply Corp., 144 F.3d 476, 480, 485–86 (7th Cir. 1998) (stating that the purpose of the inquiry into rate of error is to determine whether tests are “accurate and reliable”) (emphasis added). See also “Judicial Control of the Rate of Error in Expert Witness Testimony” (May 28, 2015). The magnitude of tolerable actual or potential error rate remains, however, a judicial mystery[1].

Sir Austin Bradford Hill described ruling out bias, confounding, and chance (or random error) as essential prerequisites to considering his nine factors used to assess whether an association is causal:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). The better reasoned cases agree. See, e.g., Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507, *6 (E.D.La. 2012) (“The Bradford-Hill criteria can only be applied after a statistically significant association has been identified.”) (citing and quoting among other sources, Federal Judicial Center, Reference Manual on Scientific Evidence, 599 & n.141 (3d. ed. 2011)).

Citing the dictum in Matrixx Initiatives[2] as though it were a holding is not only ethically dubious, but also ignores the legal and judicial context of the Court’s statements[3]. There are, after all, some circumstances such as cases of death by blunt-force trauma, or bullet wounds, when epidemiological and statistical evidence is not needed. The Court did not purport to speak to all causation assessments; nor did it claim that it was addressing only instances in which there were “expected cases,” and “base-line risks,” in diseases that have an accepted occurrence and incidence among unexposed persons. It is, of course, in exactly those cases that statistical consideration of bias, confounding, and chance are essential before Bradford Hill’s factors can be parsed.

Lord Rutherford[4] is often quoted as having said that “[i]f your experiment needs statistics, you ought to have done a better experiment.” Today, physics and chemistry have dropped their haughty disdain for statistics in the face of their recognition that some processes can be understood only as stochastic and rate driven. In biology, we are a long way from being able to describe the most common disease outcomes as mechanistic genetic or epigenetic events. Statistical analyses, with considerations of random and systematic error, will be with us for a long time, whether the federal judiciary acknowledges this fact or not.

*        *        *        *        *        *        *        *        *        *        *         *        *       

Cases Discussing Error Rates in Rule 702 Decisions

SCOTUS

Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993) (specifying the “the known or potential rate of error” as one of several factors in assessing the scientific reliability or validity of proffered expert witness’s opinion)

Kumho Tire Co. v. Carmichael, 526 U.S. 137, 151 (1999) (suggesting that reliability in the form of a known and an acceptable error rate is an important consideration for admissibility)

US Court of Appeals

FIRST CIRCUIT

United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting criminal defendant’s objection to government witness’s providing separate match and error probability rates)

SECOND CIRCUIT

Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error)

Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate)

In re Ephedra Products Liability Litigation, 393 F. Supp. 2d 181, 184 (S.D.N.Y. 2005) (confusing assessment of random error with probability that statistical estimate of true risk ratio was correct)

Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error)

Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

United States v. Towns, 19 F. Supp. 2d 67, 70–72 (W.D.N.Y. 1998) (permitting clinical psychologist to opine about defendant’s mens rea and claimed mental illness causing his attempted bank robbery, in part because the proffer of opinion maintained that the psychologist would provide an error rate)  

Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action in part because of error rate)

THIRD CIRCUIT

United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting criminal defendant’s objection to government’s failure to quantify laboratory error rate)

Soldo v. Sandoz Pharmaceuticals Corp., 244 F. Supp. 2d 434, 568 (W.D. Pa. 2003) (excluding plaintiffs’ expert witnesses in part because court, and court-appointed expert witnesses, were unable to determine error rate).

Pharmacia Corp. v. Alcon Labs., Inc., 201 F. Supp. 2d 335, 360 (D.N.J. 2002) (excluding expert witness testimony in part because the rate of error was too high).

FOURTH CIRCUIT

United States v. Moreland, 437 F.3d 424, 427–28, 430–31 (4th Cir. 2006) (affirming district court’s allowance of forensic chemist’s testimony that could not provide error rate because reviews of witness’s work found it to be free of error)

Buckman v. Bombardier Corp., 893 F. Supp. 547, 556–57 (E.D.N.C. 1995) (ruling that an expert witness may opine about comparisons between boat engines in rough water but only as a lay witness, because the comparison tests were unreliable, with a high estimated rate of error)

FIFTH CIRCUIT

Albert v. Jordan, Nos. 05CV516, 05CV517, 05CV518, 05CV519, 2007 U.S. Dist. LEXIS 92025, at *2–3 (W.D. La. Dec. 14, 2007) (allowing testimony of vocational rehabilitation expert witness, over objection, because witness provided “reliable” information, with known rate of error)

SIXTH CIRCUIT

United States v. Leblanc, 45 F. App’x 393, 398, 400 (6th Cir. 2002) (affirming exclusion of child psychologist, whose testimony about children’s susceptibility to coercive interrogation was based upon “‘soft science’ . . . in which ‘error is . . . rampant’.” (quoting the district court))

United States v. Sullivan, 246 F. Supp. 2d 696, 698–99 (E.D. Ky. 2003) (admitting expert witness’s opinion on the unreliability of eyewitness identification; confusing error rate of witness’s opinion with accuracy of observations made based upon order of presentation of photographs of suspect)

SEVENTH CIRCUIT

United States v. Vitek Supply Corp., 144 F.3d 476, 480, 485–86 (7th Cir. 1998) (affirming denial of defendant’s Rule 702 challenge based in part upon error rates; the purpose of the inquiry into rate of error is to determine whether tests are “accurate and reliable”; here the government’s expert witnesses used adequate controls and replication to ensure an acceptably low rate of error)

Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33, 740-41 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way to produce an accurate rate of error)

EIGHTH CIRCUIT

Bone Shirt v. Hazeltine, 461 F.3d 1011, 1020 (8th Cir. 2006) (affirming district court’s ruling to admit testimony of expert witness’s regression analysis in vote redistricting case); see id. at 1026 (Gruender, J., concurring) (expressing concern with the questioned testimony’s potential rate of error because it is “difficult to weigh this factor in Daubert’s analysis if ‘the effect of that error is unknown’.” (quoting court below, Bone Shirt v. Hazeltine, 336 F. Supp. 2d 976, 1002 (D.S.D. 2004)))

United States v. Beasley, 102 F.3d 1440, 1444, 1446–48 (8th Cir. 1996) (confusing random error with general error rate) (affirming admissibility of expert witness testimony based upon DNA testing, because such testing followed acceptable standards in testing for contamination and “double reading”)

NINTH CIRCUIT

United States v. Chischilly, 30 F.3d 1144, 1148, 1152, 1154–55 (9th Cir. 1994) (affirming admissibility of testimony based upon DNA match in sex crime, noting that although the rate of error was unquantified, the government had made a sufficient showing of rarity of false positives to support an inference of low error rate)

Cascade Yarns, Inc. v. Knitting Fever, Inc., No. C10–861RSM, 2012 WL 5194085, at *7 (W.D. Wash. Oct. 18. 2012) (excluding expert witness opinion because error rate was too high)

United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that claimed error rate came from highly controlled situations, and that “real world” situations led to much higher error (10%) false positive error rates)

TENTH CIRCUIT

Miller v. Pfizer, Inc., 356 F.3d 1326, 1330, 1334 (10th Cir. 2004) (affirming exclusion of plaintiffs’ expert witness, Dr. David Healy, based upon district court’s findings, made with the assistance of court-appointed expert witnesses, that Healy’s opinion was based upon studies that lacked sufficient sample size, adequate controls, and freedom from study bias, and thus prone to unacceptable error rate)

ELEVENTH CIRCUIT

Quiet Tech. DC-8, Inc. v. Hurel-Dubois U.K., Ltd., 326 F.3d 1333, 1343–45 (11th Cir. 2003) (affirming trial court’s admission of defendant’s aerospace engineer’s testimony, when the lower court had found that the error rate involved was “relatively low”; rejecting plaintiff’s argument that the witness had entered data incorrectly on ground that the asserted error would not affect the validity of the witness’s opinions)

Wright v. Case Corp., No. 1:03-CV-1618-JEC, 2006 U.S. Dist. LEXIS 7683, at *14 (N.D. Ga. Feb. 1, 2006) (granting defendant’s motion to exclude plaintiff’s mechanical engineering expert, because the expert’s alternative designs for the seat safety bar were not reliable due to potential feasibility issues, and because the associated error rate was therefore unquantifiable but potentially very high)

Benkwith v. Matrixx Initiatives, Inc., 467 F. Supp. 2d 1316, 1326, 1330, 1332 (M.D. Ala. 2006) (granting defendant’s motion to exclude testimony of an expert in the field of epidemiology regarding Zicam nasal spray’s causing plaintiff’s anosmia, because the opinions had not been tested and a rate of error could not be provided).

D.C. CIRCUIT

Ambrosini v. Upjohn Co., No. 84-3483 (NHJ), 1995 U.S. Dist. LEXIS 21318, at *16, *22–24 (D.D.C. Oct. 18, 1995) (finding that plaintiff’s teratology expert was not permitted to testify, because the methodology used was found to be unreliable and could not yield an accurate error rate)


[1] Jed S. Rakoff, “Science and the Law: Uncomfortable Bedfellows,” 38 Seton Hall L. Rev. 1379, 1382–83 (2008) (observing that an error rate of 13 percent in polygraph interpretation would likely be insufficiently reliable to support admissibility of testimony based upon polygraph results).

[2] Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309, 1319 (2011) (suggesting that courts “frequently permit expert testimony on causation based on evidence other than statistical significance”).

[3] See, e.g., WLF Legal Backgrounder on Matrixx Initiatives (June 20, 2011); “The Matrixx – A Comedy of Errors”; “Matrixx Unloaded” (Mar. 29, 2011); “The Matrixx Oversold” (April 4, 2011); “De-Zincing the Matrixx.”

[4] Ernest Rutherford, the New Zealand-born British physicist who investigated radioactivity, won the Nobel Prize in Chemistry in 1908.

Judicial Control of the Rate of Error in Expert Witness Testimony

May 28th, 2015

In Daubert, the Supreme Court set out several criteria or factors for evaluating the “reliability” of expert witness opinion testimony. The third factor in the Court’s enumeration was whether the trial court had considered “the known or potential rate of error” in assessing the scientific reliability of the proffered expert witness’s opinion. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993). The Court, speaking through Justice Blackmun, failed to provide much guidance on the nature of the errors subject to gatekeeping, on how to quantify the errors, and on how much error was too much. Rather than provide a taxonomy of error, the Court lumped “accuracy, validity, and reliability” together with a grand pronouncement that these measures were distinguished by no more than a “hen’s kick.” Id. at 590 n.9 (1993) (citing and quoting James E. Starrs, “Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702,” 26 Jurimetrics J. 249, 256 (1986)).

The Supreme Court’s failure to elucidate its “rate of error” factor has caused a great deal of mischief in the lower courts. In practice, trial courts have rejected engineering opinions on stated grounds of their lacking an error rate as a way of noting that the opinions were bereft of experimental and empirical evidential support[1]. For polygraph evidence, courts have used the error rate factor to obscure their policy prejudices against polygraphs, and to exclude test data even when the error rate is known, and rather low compared to what passes for expert witness opinion testimony in many other fields[2]. In the context of forensic evidence, the courts have rebuffed objections to random-match probabilities that would require that such probabilities be modified by the probability of laboratory or other error[3].

When it comes to epidemiologic and other studies that require statistical analyses, lawyers on both sides of the “v” frequently misunderstand p-values or confidence intervals as providing complete measures of error, and ignore the larger errors that result from bias, confounding, threats to study validity (internal and external), inappropriate data synthesis, and the like[4]. Not surprisingly, parties fallaciously argue that the Daubert criterion of “rate of error” is satisfied by expert witnesses’ reliance upon studies that in turn use conventional 95% confidence intervals and measures of statistical significance in p-values below 0.05[5].

The lawyers who embrace confidence intervals and p-values as their sole measure of error rate fail to recognize that confidence intervals and p-values are means of assessing only one kind of error: random sampling error. Given the carelessness of the Supreme Court’s use of technical terms in Daubert, and its failure to engage with the actual evidence at issue in the case, it is difficult to know whether random error was the error rate the Court had in mind[6]. The statistics chapter in the Reference Manual on Scientific Evidence helpfully points out that the inferences that can be drawn from data turn on p-values and confidence intervals, as well as on study design, data quality, and the presence or absence of systematic errors, such as bias or confounding. Reference Manual on Scientific Evidence at 240 (3d ed. 2011) [Manual]. Random errors are reflected in the size of p-values or the width of confidence intervals, but these measures of random sampling error ignore systematic errors such as confounding and study biases. Id. at 249 & n.96.
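
The point can be made concrete with a small simulation (all numbers below are hypothetical and of my own choosing): when an unmeasured confounder drives both exposure and outcome, even a large study will produce a tight confidence interval around a badly biased estimate.

```python
import math
import random

random.seed(0)

# Hypothetical simulation: exposure has NO causal effect on the outcome,
# but an unmeasured confounder raises both the chance of exposure and the
# baseline risk of the outcome.
n = 50_000
table = {"a": 0, "b": 0, "c": 0, "d": 0}  # 2x2 cells: exposed/unexposed x case/non-case
for _ in range(n):
    conf = random.random() < 0.5                    # unmeasured confounder
    exposed = random.random() < (0.7 if conf else 0.2)
    case = random.random() < (0.20 if conf else 0.05)
    key = ("a" if case else "b") if exposed else ("c" if case else "d")
    table[key] += 1

a, b, c, d = table["a"], table["b"], table["c"], table["d"]
rr = (a / (a + b)) / (c / (c + d))                  # crude risk ratio
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))   # standard error of log(rr)
lo, hi = math.exp(math.log(rr) - 1.96 * se), math.exp(math.log(rr) + 1.96 * se)
print(f"crude RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The interval is narrow (little random sampling error), yet it excludes the
# true causal risk ratio of 1.0, because it says nothing about confounding.
```

The narrow interval certifies only that chance is an unlikely explanation of the sample result; it is silent about the systematic error that actually produced the spurious association.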

The Manual’s chapter on epidemiology takes an even stronger stance: the p-value for a given study does not provide a rate of error or even a probability of error for an epidemiologic study:

“Epidemiology, however, unlike some other methodologies—fingerprint identification, for example—does not permit an assessment of its accuracy by testing with a known reference standard. A p-value provides information only about the plausibility of random error given the study result, but the true relationship between agent and outcome remains unknown. Moreover, a p-value provides no information about whether other sources of error – bias and confounding – exist and, if so, their magnitude. In short, for epidemiology, there is no way to determine a rate of error.”

Manual at 575. This stance seems not entirely justified given that there are Bayesian approaches that would produce credibility intervals accounting for sampling and systematic biases. To be sure, such approaches have their own problems and they have received little to no attention in courtroom proceedings to date.

The authors of the Manual’s epidemiology chapter, who are usually forgiving of judicial error in interpreting epidemiologic studies, point to one United States Court of Appeals case that fallaciously interpreted confidence intervals as magically quantifying bias and confounding in a Bendectin birth defects case. Id. at 575 n. 96[7]. The Manual could have gone further to point out that, in the context of multiple studies, of different designs and analyses, cognitive biases involved in evaluating, assessing, and synthesizing the studies are also ignored by statistical measures such as p-values and confidence intervals. Although the Manual notes that assessing the role of chance in producing a particular set of sample data is “often viewed as essential when making inferences from data,” the Manual never suggests that random sampling error is the only kind of error that must be assessed when interpreting data. The Daubert criterion would appear to encompass all varieties of error, not just random error.

The Manual’s suggestion that epidemiology does not permit an assessment of the accuracy of epidemiologic findings misrepresents the capabilities of modern epidemiologic methods. Courts can, and do, invoke gatekeeping approaches to weed out confounded study findings. See “Sorting Out Confounded Research – Required by Rule 702” (June 10, 2012). The “reverse Cornfield inequality” was an important analysis that helped establish the causal connection between tobacco smoke and lung cancer[8]. Olav Axelson studied and quantified the role of smoking as a confounder in epidemiologic analyses of other putative lung carcinogens.[9] Quantitative methods for identifying confounders have been widely deployed[10].
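
The arithmetic behind Cornfield’s inequality is simple enough to sketch in a few lines (a minimal illustration; the function name is mine, and the example figures are drawn loosely from the early smoking and lung cancer cohort studies):

```python
def required_confounder_prevalence(observed_rr: float, prev_unexposed: float) -> float:
    """Cornfield's inequality: for an unmeasured binary confounder to explain
    away an observed relative risk entirely, the confounder's prevalence among
    the exposed must be at least observed_rr times its prevalence among the
    unexposed. A result above 1.0 is a logical impossibility, refuting the
    claim that confounding alone produced the association."""
    return observed_rr * prev_unexposed

# With a smoking-lung cancer relative risk of roughly 9, a fully explanatory
# confounder present in just 15% of nonsmokers would need a prevalence of
# 135% among smokers, which no confounder can have.
print(round(required_confounder_prevalence(9.0, 0.15), 2))  # -> 1.35
```

This was the logic Cornfield deployed against the “constitutional hypothesis”: the larger the observed relative risk, the more implausibly prevalent any fully explanatory confounder would have to be.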

A recent study in birth defects epidemiology demonstrates the power of sibling cohorts in addressing the problem of residual confounding in observational population studies with limited information about confounding variables. Researchers looking at various birth defect outcomes among offspring of women who used certain antidepressants in early pregnancy generally found no associations in pooled data from Iceland, Norway, Sweden, Finland, and Denmark. One putative association did emerge in the overall analysis: first-trimester maternal exposure to selective serotonin reuptake inhibitors was associated with a specific cardiac defect (right ventricular outflow tract obstruction, or RVOTO), with an adjusted odds ratio of 1.48 (95% C.I., 1.15, 1.89). When the analysis was limited to the sibling subcohort, however, the association reversed: the adjusted sibling analysis yielded an odds ratio of 0.56 (95% C.I., 0.21, 1.49)[11]. This study and many others show how creative analyses can elucidate and quantify the direction and magnitude of confounding effects in observational epidemiology.

Systematic bias has also begun to succumb to more quantitative approaches. A recent guidance paper by well-known authors encourages the use of quantitative bias analysis to provide estimates of uncertainty due to systematic errors[12].
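
A simple, non-probabilistic version of such a bias analysis can be sketched as follows (the formula is the standard adjustment for a single unmeasured binary confounder; the inputs are hypothetical):

```python
def confounding_adjusted_rr(rr_obs: float, rr_cd: float, p1: float, p0: float) -> float:
    """Divide an observed risk ratio by the bias factor implied by an assumed
    unmeasured binary confounder: rr_cd is the confounder-disease risk ratio,
    and p1 and p0 are the confounder's prevalence among the exposed and the
    unexposed, respectively."""
    bias = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    return rr_obs / bias

# Hypothetical example: an observed RR of 1.8, with an assumed confounder that
# triples risk and is present in 60% of the exposed but only 20% of the unexposed.
print(round(confounding_adjusted_rr(1.8, 3.0, 0.6, 0.2), 2))  # -> 1.15
```

When the assumed confounder is equally prevalent in both groups (p1 equal to p0), the bias factor is 1 and the observed estimate passes through unchanged; varying the three assumed parameters over plausible ranges shows how much of an observed association confounding could account for.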

Although the courts have failed to articulate the nature and consequences of erroneous inference, some authors would reduce all of Rule 702 (and perhaps 704, 403 as well) to a requirement that proffered expert witnesses “account” for the known and potential errors in their opinions:

“If an expert can account for the measurement error, the random error, and the systematic error in his evidence, then he ought to be permitted to testify. On the other hand, if he should fail to account for any one or more of these three types of error, then his testimony ought not be admitted.”

Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 739 (2011).

Like most antic proposals to revise Rule 702, this reform vision overlooks the full range of Rule 702’s remedial scope. Scientists certainly try to identify potential sources of error, but they are not necessarily very good at it. See Richard Horton, “Offline: What is medicine’s 5 sigma?” 385 Lancet 1380 (2015) (“much of the scientific literature, perhaps half, may simply be untrue”). And as Holmes pointed out[13], certitude is not certainty, and expert witnesses are not likely to be good judges of their own inferential errors[14]. Courts continue to say and do wildly inconsistent things in the course of gatekeeping. Compare In re Zoloft (Sertraline Hydrochloride) Products, 26 F. Supp. 3d 449, 452 (E.D. Pa. 2014) (excluding expert witness) (“The experts must use good grounds to reach their conclusions, but not necessarily the best grounds or unflawed methods.”), with Gutierrez v. Johnson & Johnson, 2006 WL 3246605, at *2 (D.N.J. November 6, 2006) (denying motions to exclude expert witnesses) (“The Daubert inquiry was designed to shield the fact finder from flawed evidence.”).


[1] See, e.g., Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error); Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate); Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way to produce an accurate rate of error); Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error); Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

[2] See, e.g., United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that claimed error rate came from highly controlled situations, and that “real world” situations led to much higher error (10%) false positive error rates); Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action).

[3] See, e.g., United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting defendant’s objection to government’s failure to quantify laboratory error rate); United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting objection to government witness’s providing separate match and error probability rates).

[4] For a typical judicial misstatement, see In re Zoloft Products, 26 F. Supp. 3d 449, 454 (E.D. Pa. 2014) (“A 95% confidence interval means that there is a 95% chance that the ‘true’ ratio value falls within the confidence interval range.”).

[5] From my experience, this fallacious argument is advanced by both plaintiffs’ and defendants’ counsel and expert witnesses. See also Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 751 & n.72 (2011).

[6] See David L. Faigman, et al. eds., Modern Scientific Evidence: The Law and Science of Expert Testimony § 6:36, at 359 (2007–08) (“it is easy to mistake the p-value for the probability that there is no difference”).

[7] Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990). As with any error of this sort, there is always the question whether the judges were entrapped by the parties or their expert witnesses, or whether the judges came up with the fallacy on their own.

[8] See Joel B Greenhouse, “Commentary: Cornfield, Epidemiology and Causality,” 38 Internat’l J. Epidem. 1199 (2009).

[9] Olav Axelson & Kyle Steenland, “Indirect methods of assessing the effects of tobacco use in occupational studies,” 13 Am. J. Indus. Med. 105 (1988); Olav Axelson, “Confounding from smoking in occupational epidemiology,” 46 Brit. J. Indus. Med. 505 (1989); Olav Axelson, “Aspects on confounding in occupational health epidemiology,” 4 Scand. J. Work Envt’l Health 85 (1978).

[10] See, e.g., David Kriebel, Ariana Zeka, Ellen A. Eisen, and David H. Wegman, “Quantitative evaluation of the effects of uncontrolled confounding by alcohol and tobacco in occupational cancer studies,” 33 Internat’l J. Epidem. 1040 (2004).

[11] Kari Furu, Helle Kieler, Bengt Haglund, Anders Engeland, Randi Selmer, Olof Stephansson, Unnur Anna Valdimarsdottir, Helga Zoega, Miia Artama, Mika Gissler, Heli Malm, and Mette Nørgaard, “Selective serotonin reuptake inhibitors and venlafaxine in early pregnancy and risk of birth defects: population based cohort study and sibling design,” 350 Brit. Med. J. 1798 (2015).

[12] Timothy L. Lash, Matthew P. Fox, Richard F. MacLehose, George Maldonado, Lawrence C. McCandless, and Sander Greenland, “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).

[13] Oliver Wendell Holmes, Jr., Collected Legal Papers at 311 (1920) (“Certitude is not the test of certainty. We have been cock-sure of many things that were not so.”).

[14] See, e.g., Daniel Kahneman & Amos Tversky, “Judgment under Uncertainty:  Heuristics and Biases,” 185 Science 1124 (1974).

Can an Expert Witness Be Too Biased to Be Allowed to Testify?

May 20th, 2015

The Case of Barry Castleman

Barry Castleman has been a fixture in asbestos litigation for over three decades. By all appearances, he was the creation of the litigation industry. Castleman received a bachelor of science degree in chemical engineering in 1968, and a master’s degree in environmental engineering in 1972. In 1975, he started as a research assistant to plaintiffs’ counsel in asbestos litigation, and in 1979, he commenced his testimonial adventures as a putative expert witness for plaintiffs’ counsel. Enrolled in a doctoral program, Castleman sent chapters of his thesis to litigation industry mentors for review and edits. In 1985, Castleman received a doctorate degree, with the assistance of a Ron Motley fellowship. See John M. Fitzpatrick, “Digging Deep to Attack Bias of Plaintiff Experts,” DRI Products Liability Seminar (2013).

Castleman candidly testified, on many occasions, that he was not an epidemiologist, a biostatistician, a toxicologist, a physician, a pathologist, or any other kind of healthcare professional. He is not a trained historian. Understandably, courts puzzled over exactly what someone like Castleman should be allowed to testify about. Many courts limited or excluded Castleman from remunerative testimonial roles[1]. Still, in the face of his remarkably inadequate training, education, and experience, Castleman persisted, and often prevailed, in making a living at testifying about the historical “state of the art” of medical knowledge about asbestos over time.

The result was often not pretty. Castleman worked not just as an expert witness, but also as an agent of plaintiffs’ counsel to suppress evidence. “The Selikoff – Castleman Conspiracy” (May 13, 2011). As a would-be historian, Castleman was controlled and directed by the litigation industry to avoid inconvenient evidence. “Discovery into the Origin of Historian Expert Witnesses’ Opinions” (Jan. 30, 2012). Despite his covert operations, and his exploitation of defendants’ internal documents, Castleman complained more than anyone about the scrutiny created by his self-chosen litigation roles. In 1985, pressed for materials he had considered in formulating his “opinions,” Castleman wrote a personal letter to the judge, the Hon. Hugh Gibson of Galveston, Texas, to object to lawful discovery into his activities:

“1. It threatens me ethically through demands that I divulge material submitted in confidence, endangering my good name and reputation.
2. It exposes me to potential liability arising from the release of correspondence and other materials provided to me by others who assumed I would honor their confidence.
3. It jeopardizes my livelihood in that material requested reveals strategies of parties with whom I consult, as well as other materials of a confidential nature.
4. It is far beyond the scope of relevant material to my qualifications and the area of expert testimony offered.
5. It is unprecedented in 49 prior trials and depositions where I have testified, in federal and state courts all over the United States, including many cases in Texas. Never before have I had to produce such voluminous and sensitive material in order to be permitted to testify.
6. It is excessively and unjustifiably intrusive into my personal and business life.
7. I have referenced most of the information I have in my 593-page book, “Asbestos: Medical and Legal Aspects.” The great majority of the information I have on actual knowledge of specific defendants has come from the defendants themselves.
8. All information that I have which is relevant to my testimony and qualifications has been the subject of numerous trials and depositions since 1979.”

Castleman Letter to Hon. Hugh Gibson (Nov. 5, 1985).

Forty years later, Castleman is still working for the litigation industry, and courts are still struggling to figure out what role he should be allowed as a testifying expert witness.

Last year, the Delaware Supreme Court had to order a new trial for R. T. Vanderbilt, in part because Castleman had blurted out non-responsive, scurrilous hearsay statements that:

(1) employees of Johns-Manville (a competitor of R.T. Vanderbilt) had called employees of Vanderbilt “liars;”

(2) R.T. Vanderbilt spent a great amount of money on studies and activities to undermine federal regulatory action on talc; and

(3) R.T. Vanderbilt was “buying senators and lobbying the government.”

The Delaware court held that Castleman’s gratuitous, unsolicited testimony on cross-examination was inadmissible, and that his conduct required a new trial.  R.T. Vanderbilt Co. v. Galliher, No. 510, 2013, 2014 WL 3674180 (Del. July 24, 2014).

Late last year, a federal court ruled, pre-trial, that Castleman may testify over Rule 702 objections because he “possesses ‘specialized knowledge’ regarding the literature relating to asbestos available during the relevant time periods,” and that his testimony “could be useful to the jury as a ‘sort of anthology’ of the copious available literature.” Krik v. Crane Co., No. 10-cv-7435, – F. Supp. 2d -, 2014 WL 5350463, *3 (N.D. Ill. Oct. 21, 2014). Because Castleman was little more than a sounding board for citing and reading sections of the historical medical literature, the district court prohibited him from testifying as to the accuracy of any conclusions in the medical literature. Id.

Last week, another federal court took a different approach to keeping Castleman in business. In ruling on defendant’s Rule 702 objections to Castleman, the court held:

“I agree with defendant that plaintiffs have made no showing that Castleman is qualified to explain the meaning and significance of medical literature. Further, there is no suggestion in Krik that Castleman is qualified as an expert in that respect. To the extent that plaintiffs want Castleman simply to read excerpts from medical articles, they do not explain how doing so could be helpful to the jury. Accordingly, I am granting defendant’s motion as it relates to Castleman’s discussion of the medical literature.

***

However, Castleman’s report also includes discussions of articles in trade journals and government publications, which, presumably, would not require medical expertise to understand or summarize.”

Suoja v. Owens-Illinois, Inc., 2015 U.S. Dist. LEXIS 63170, at *3 (W.D.Wisc. May 14, 2015). Judge Barbara Crabb thus disallowed medical state of the art testimony from Castleman, but permitted him to resume his sounding board role for non-medical and other historical documents referenced in his Rule 26 report.

The strange persistence of Barry Castleman, and the inconsistent holdings of dozens of opinions strewn across the asbestos litigation landscape, raise the question whether someone so biased, so entrenched in a litigation role, so lacking in the requisite expertise, should simply be expunged from the judicial process. Rather than struggling to find some benign, acceptable role for Barry Castleman, perhaps courts should just say no. “How Testifying Historians Are Like Lawn-Mowing Dogs” (May 24, 2010).


[1] See, e.g., Van Harville v. Johns-Manville Sales Corp., CV-78-642-H (S.D. Ala 1979); In re Related Asbestos Cases, 543 F. Supp. 1142, 1149 (N.D. Cal. 1982) (rejecting Castleman’s bid to be called an “expert”) (holding that the court was “not persuaded that Mr. Castleman, as a layperson, possesses the expertise necessary to read complex, technical medical articles and discern which portions of the articles would best summarize the authors’ conclusions”); Kendrick v. Owens-Corning Fiberglas Corp., No. C-85-178-AAm (E.D. Wash. 1986); In re King County Asbestos Cases of Levinson, Friedman, Vhugen, Duggan, Bland and Horowitz, No. 81-2-08702-7, (Washington St. Super. Ct. for King Cty.1987); Franze v. Celotex Corp., C.A. No. 84-1316 (W.D. Pa.); Dunn v. Hess Oil Virgin Islands Corp., C.A. No. 1987-238 (D.V.I. May 16, 1989) (excluding testimony of Barry Castleman); Rutkowski v. Occidental Chem. Corp., No. 83 C 2339, 1989 WL 32030, at *1 (N.D. Ill. Feb. 16, 1989) (“Castleman lacks the medical background and experience to evaluate and analyze the articles in order to identify which parts of the articles best summarize the authors’ conclusions.”); In re Guam Asbestos Litigation, 61 F.3d 910, 1995 WL 411876 (9th Cir. 1995) (Kozinski, J., dissenting) (“I would also reverse because Barry Castleman was not qualified to testify as an expert witness on the subject of medical state of the art or anything else; he appears to have read a number of articles for the sole purpose of turning himself into an expert witness. Reductio ad absurdum.”); McClure v. Owens Corning Fiberglas Corp. 188 Ill. 2d 102, 720 N.E.2d 242 (1999) (rejecting probativeness of Castleman’s testimony about company conduct).

Adverse Liver Events and Causal Claims Against Black Cohosh

April 6th, 2015

Liver toxicity in pharmaceutical products liability cases is one of the more difficult categories of cases for judicial gatekeeping because of the possibility of idiosyncratic liver toxicity. Sometimes a plaintiff will exploit this difficulty and try to recover for an acute liver reaction.

Susan Grant began to take a black cohosh herbal remedy in 2002, and within a year, developed autoimmune hepatitis, which required her to undergo a liver transplant. She and her husband sued the seller of black cohosh for substantial damages. Grant v. Pharmavite, LLC, 452 F. Supp. 2d 903 (D. Neb. 2006). Grant enlisted two expert witnesses: Michael Corbett, Ph.D., a toxicologist, and her treating gastroenterologist, Michael Sorrell, M.D. The defense, relying upon liver expert Phillip Guzelian, M.D., challenged the admissibility of plaintiffs’ expert witnesses’ opinions under the federal rules.

Struggling with the law, Senior Judge Strom observed that Nebraska law requires expert witness opinion testimony on causation. Id. at 906. Of course, in this diversity action, federal law controlled the scope and requirements of expert witness opinion testimony.

And in a similarly offbeat way, Judge Strom suggested that plaintiffs’ expert witnesses need not have opinion supported by evidence:

“While it is not necessary that an opinion be backed by scientific research, it is necessary that an expert’s testimony, which contradicts all of the research, at minimum address and distinguish the contradictory research in order to support the expressed opinion.”

Id. at 907 (emphasis added). Senior Judge Strom thus suggested that had there been no published research at all, Dr. Corbett could just have made up an opinion, unsupported by scientific research. This is, of course, seriously wrong, but fortunately it amounts only to obiter detritus, because Judge Strom believed that, given the available studies, the testifying expert witnesses had to do more than simply criticize the studies that disagreed with their subjective opinions.

Michael Corbett, Ph.D., a consultant in “chemical toxicology,” from Omaha, Nebraska, criticized the existing studies, which generally failed to identify liver toxicity, but he failed to conduct his own studies. Id. at 907. Corbett also failed to explain why he rejected the great weight of medical publications finding that black cohosh was not hepatotoxic. Id. Michael Sorrell, M.D., started out as Ms. Grant’s treating gastroenterologist, but became a litigation expert witness. He was generally unaware of the randomized clinical trials of black cohosh, or of any study, or group of scientists, that supported his opinion. Id. at 909.

To Dr. Sorrell’s credit, he did attempt to write up a case report, which was published after the termination of the case. Unfortunately for Dr. Sorrell and his colleagues, Ms. Grant and her lawyers were less than forthcoming about her medical history, which included medications and lifestyle variables that were apparently not shared with Dr. Sorrell. Id. at 909.

You know that the quality of gatekeeping due process is strained when judges fail to cite key studies sufficiently to permit their readers to find the scientific evidence. Between Google Scholar and PubMed, however, you can find Dr. Sorrell’s case report, which was published in 2005, before Judge Strom issued his Rule 702 opinion. Josh Levitsky, Tyron A. Alli, James Wisecarver, and Michael F. Sorrell, “Fulminant liver failure associated with the use of black cohosh,” 50 Digestive Dis. Sci. 538 (2005). If nothing else, Judge Strom provoked an erratum from Dr. Sorrell and colleagues:

“After the article was published, it was brought to the authors’ attention through legal documentation and testimony that the patient admitted to consuming alcohol and had been taking other medications at the time of her initial presentation of liver failure. From these records, she reported drinking no more than six glasses of wine per week. In addition, up until presentation, she was taking valacyclovir 500 mg daily for herpes prophylaxis for 2 years, an occasional pseudoephedrine tablet, calcium carbonate 500 mg three times daily, iron sulfate 325 mg daily and ibuprofen up to three times weekly. She had been taking erythromycin tablets but discontinued those 3 months prior to presentation.

The authors regret the omission of this information from the original case report. While this new information is important to include as a correction to the history, it does not change the authors’ clinical opinion … .”

The erratum omits that Ms. Grant was taking Advil (ibuprofen) at the time of her transplantation, and that she had been taking erythromycin for two and a half years, stopping just a few months before her acute liver illness. The Valtrex use shows that Ms. Grant had a chronic herpes infection. In the past, plaintiff had taken such excessive doses of ibuprofen that she developed anemia. Grant v. Pharmavite, LLC, 452 F. Supp. 2d at 909 n.1. Hardly an uncomplicated case report to interpret for causality, and an interesting case history of confirmation bias. Remarkably, the journal charges $39.95 to download the erratum, as much as the case report itself!

And how has the plaintiff’s claim fared in the face of the evolving scientific record since Judge Strom’s opinion?

Not well.

See, e.g., Peter W. Whiting, Andrew Clouston and Paul Kerlin, “Black cohosh and other herbal remedies associated with acute hepatitis,” 177 Med. J. Australia 432 (2002); S.M. Cohen, A.M. O’Connor, J. Hart, et al., “Autoimmune hepatitis associated with the use of black cohosh: a case study,” 11 Menopause 575 (2004); Christopher R. Lynch, Milan E. Folkers, and William R. Hutson, “Fulminant hepatic failure associated with the use of black cohosh: A case report,” 12 Liver Transplantation 989 (2006); Elizabeth C-Y Chow, Marcus Teo, John A. Ring and John W. Chen, “Liver failure associated with the use of black cohosh for menopausal symptoms,” 188 Med. J. Australia 420 (2008); Gail B. Mahady, Tieraona Low Dog, Marilyn L. Barrett, Mary L. Chavez, Paula Gardiner, Richard Ko, Robin J. Marles, Linda S. Pellicore, Gabriel I. Giancaspro, and Dandapantula N. Sarma, “United States Pharmacopeia review of the black cohosh case reports of hepatotoxicity,” 15 Menopause 628 (2008) (toxicity only possible on available evidence); D. Joy, J. Joy, and P. Duane, “Black cohosh: a cause of abnormal postmenopausal liver function tests,” 11 Climacteric 84 (2008); Lily Dara, Jennifer Hewett, and Joseph Kartaik Lim, “Hydroxycut hepatotoxicity: A case series and review of liver toxicity from herbal weight loss supplements,” 14 World J. Gastroenterol. 6999 (2008); F. Borrelli & E. Ernst, “Black cohosh (Cimicifuga racemosa): a systematic review of adverse events,” Am. J. Obstet. & Gyn. 455 (2008); Rolf Teschke & A. Schwarzenboeck, “Suspected hepatotoxicity by Cimicifugae racemosae rhizoma (black cohosh, root): critical analysis and structured causality assessment,” 16 Phytomedicine 72 (2009); Stacie E. Geller, Lee P. Shulman, Richard B. van Breemen, Suzanne Banuvar, Ying Zhou, Geena Epstein, Samad Hedayat, Dejan Nikolic, Elizabeth C. Krause, Colleen E. Piersen, Judy L. Bolton, Guido F. Pauli, and Norman R. Farnsworth, “Safety and Efficacy of Black Cohosh and Red Clover for the Management of Vasomotor Symptoms: A Randomized Controlled Trial,” 16 Menopause 1156 (2009) (89 women randomized to four groups; no hepatic events in trial not powered to detect them); Rolf Teschke, “Black cohosh and suspected hepatotoxicity: inconsistencies, confounding variables, and prospective use of a diagnostic causality algorithm. A critical review,” 17 Menopause 426 (2010) (“The presented data do not support the concept of hepatotoxicity in a primarily suspected causal relationship to the use of BC and failure to provide a signal of safety concern, but further efforts have to be undertaken to dismiss or to substantiate the existence of BC hepatotoxicity as a special disease entity. The future strategy should be focused on prospective causality evaluations in patients diagnosed with suspected BC hepatotoxicity, using a structured, quantitative, and hepatotoxicity-specific causality assessment method.”); Fabio Firenzuoli, Luigi Gori, and Paolo Roberti di Sarsina, “Black Cohosh Hepatic Safety: Follow-Up of 107 Patients Consuming a Special Cimicifuga racemosa rhizome Herbal Extract and Review of Literature,” 2011 Evidence-Based Complementary & Alternative Med. 1 (2011); Rolf Teschke, Wolfgang Schmidt-Taenzer and Albrecht Wolff, “Spontaneous reports of assumed herbal hepatotoxicity by black cohosh: is the liver-unspecific Naranjo scale precise enough to ascertain causality?” 20 Pharmacoepidemiol. & Drug Safety 567 (2011) (causation unlikely or excluded); Rolf Teschke, Alexander Schwarzenboeck, Wolfgang Schmidt-Taenzer, Albrecht Wolff, and Karl-Heinz Hennermann, “Herb induced liver injury presumably caused by black cohosh: A survey of initially purported cases and herbal quality specifications,” 11 Ann. Hepatology 249 (2011).

Cherry Picking; Systematic Reviews; Weight of the Evidence

April 5th, 2015

In a paper prepared for one of Professor Margaret Berger’s symposia on law and science, Lisa Bero, a professor of clinical pharmacy in the University of California San Francisco’s School of Pharmacy, identified a major source of error in published reviews of putative health effects:

“The biased citation of studies in a review can be a major source of error in the results of the review. Authors of reviews can influence their conclusions by citing only studies that support their preconceived, desired outcome.”

Lisa Bero, “Evaluating Systematic Reviews and Meta-Analyses,” 14 J. L. & Policy 569, 576 (2006). Biased citation, consideration, and reliance are major sources of methodological error in courtroom proceedings as well. Sometimes astute judges recognize the problem and bar expert witnesses who would pass off their opinions as well considered when they are propped up only by biased citation. Unfortunately, courts have been inconsistent, sometimes rewarding the cherry picking of studies by admitting biased opinions[1], sometimes unhorsing would-be expert witnesses by excluding their opinions[2].

Given that cherry picking or “biased citation” is recognized in the professional community as a rather serious methodological sin, judges may be astonished to learn that neither phrase, “cherry picking” nor “biased citation,” appears in the third edition of the Reference Manual on Scientific Evidence. Of course, the Manual could have dealt with the underlying issue of biased citation by affirmatively promoting the procedure of systematic review, but here again, the Manual falls short. There is no discussion of systematic review in the chapters on toxicology[3], epidemiology[4], or statistics[5]. Only the chapter on clinical medicine discusses the systematic review, briefly[6]. The absence of support for the procedures of systematic review, combined with the occasional cheerleading for “weight of the evidence,” in which expert witnesses subjectively include and weight studies to reach pre-ordained opinions, tends to undermine the reliability of the latest edition of the Manual[7].


[1] Spray-Rite Serv. Corp. v. Monsanto Co., 684 F.2d 1226, 1242 (7th Cir. 1982) (failure to consider factors identified by opposing side’s expert did not make testimony inadmissible).

[2] In re Zoloft, 26 F. Supp. 3d 449 (E.D. Pa. 2014) (excluding perinatal epidemiologist, Anick Bérard, for biased cherry picking of data points); In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding opinions of Drs. Arthur Kornbluth and David Madigan because of their unjustified dismissal of studies that contradicted or undermined their opinions); In re Bextra & Celebrex Mktg. Sales Practices & Prods. Liab. Litig., 524 F. Supp. 2d 1166, 1175–76, 1179 (N.D. Cal. 2007) (holding that expert witnesses may not “cherry-pick[ ]” observational studies to support a conclusion that is contradicted by randomized controlled trials, meta-analyses of such trials, and meta-analyses of observational studies; excluding expert witness who “ignores the vast majority of the evidence in favor of the few studies that support her conclusion”); Grant v. Pharmavite, LLC, 452 F. Supp. 2d 903, 908 (D. Neb. 2006) (excluding expert witness opinion testimony that plaintiff’s use of black cohosh caused her autoimmune hepatitis) (“Dr. Corbett’s failure to adequately address the body of contrary epidemiological evidence weighs heavily against admission of his testimony.”); Downs v. Perstorp Components, Inc., 126 F. Supp. 2d 1090, 1124-29 (E.D. Tenn. 1999) (expert’s opinion raised seven “red flags” indicating that his testimony was litigation biased), aff’d, 2002 U.S. App. Lexis 382 (6th Cir. Jan. 4, 2002).

[3] Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” in Reference Manual on Scientific Evidence 633 (3d ed. 2011).

[4] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549 (3d ed. 2011).

[5] David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” in Reference Manual on Scientific Evidence 209 (3d ed. 2011).

[6] John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Reference Manual on Scientific Evidence 687 (3d ed. 2011).

[7] See Margaret A. Berger, “The Admissibility of Expert Testimony,” in Reference Manual on Scientific Evidence 11, 20 & n.51 (3d ed. 2011) (posthumously citing Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), with approval, for reversing exclusion of expert witnesses who advanced “weight of the evidence” opinions).