TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Discovery of Retained, Testifying Statistician Expert Witnesses (Part 1)

June 30th, 2015

At times, the judiciary’s resistance to delving into the factual underpinnings of expert witness opinions is extraordinary. In one case, the Second Circuit affirmed a judgment for a plaintiff in a breach of contract action, based in large part upon expert witness testimony that presented the results of a computer simulation. Perma Research & Development v. Singer Co.[1] Although the trial court had promised to permit inquiry into the plaintiff’s computer expert witness’s source of data, programmed mathematical formulae, and computer programs, when the defendant asked the plaintiff’s expert witness to disclose his underlying data and algorithms, the district judge sustained the witness’s refusal on grounds that the requested materials were his “private work product” and “proprietary information.”[2] Despite the trial court’s failure to articulate any legally recognized basis for permitting the expert witness to stonewall in this fashion, a panel of the Circuit, in an opinion by superannuated Justice Tom Clark, affirmed, on an argument that the defendant “had not shown that it did not have an adequate basis on which to cross-examine plaintiff’s experts.” Judge Van Graafeiland dissented, indelicately pointing out that the majority had charged the defendant with failing to show that it had been deprived of a fair opportunity to cross-examine plaintiff’s expert witnesses while depriving the defendant of access to the secret underlying evidence and materials that were needed to demonstrate what could have been done on cross-examination[3]. The dissent traced the trial court’s error to its misconception that a computer is just a giant calculator, and pointed out that the majority contravened Circuit precedent[4] and evolving standards[5] for handling underlying data that was analyzed or otherwise incorporated into computer models and simulations.

Although the approach of Perma Research has largely been ignored, has fallen into disrepute, and has been superseded by statutory amendments[6], its retrograde approach continues to find occasional expression in reported decisions. The refinement of Federal Rule of Evidence 702 to require sound support for expert witnesses’ opinions has opened the flow of discovery of underlying facts and data considered by expert witnesses before generating their reports. The most recent edition of the Federal Judicial Center’s Manual for Complex Litigation treats both computer-generated evidence and expert witnesses’ underlying data as subject to pre-trial discovery as necessary to provide for full and fair litigation of the issues in the case[7].

The discovery of expert witnesses who have conducted statistical analyses poses difficult problems for lawyers. Unlike some other expert witnesses, who passively review data and arrive at an opinion that synthesizes published research, statisticians actually create evidence with new arrangements and analyses of data in the case. In this respect, statisticians are like material scientists who may test and record experimental observations on a product or its constituents. Inquiring minds will want to know whether the statistical analyses in the witness’s report were the results of pre-planned analysis protocols, or whether they were the second, third, or fifteenth alternative analysis. Earlier statistical analyses conducted but not produced may reveal what the expert witness believed would have been the preferred analysis if only the data had cooperated more fully. Statistical analyses conducted by expert witnesses provide plenty of opportunity for data-dredging, which can then be covered up by disclosing only selected analyses in the expert witness’s report.
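The data-dredging concern can be put in concrete numbers. The short Python sketch below, which is purely illustrative and drawn from no particular case, computes the familywise error rate: the chance of obtaining at least one spuriously “significant” result when an analyst quietly runs several independent tests, each at the conventional 0.05 level.

```python
# Illustrative sketch: familywise error rate for k independent
# hypothesis tests, each conducted at significance level alpha.
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false-positive result by chance alone."""
    return 1.0 - (1.0 - alpha) ** k

for k in (1, 5, 15):
    print(f"{k} tests: {familywise_error_rate(k):.3f}")
```

On these assumptions, an expert who tried fifteen analyses before settling on a “preferred” one had better than even odds (about 0.537) of finding “significance” somewhere in pure noise, which is precisely why the undisclosed second through fourteenth analyses matter on cross-examination.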

The output of statisticians’ analyses will take the form of a “point estimate” of “effect size,” a significance or posterior probability, a set of regression coefficients, a summary estimate of association, or a similar measure that did not exist before the statistician used the underlying data to produce the analytical outcome, which is then the subject of further inference and opinion. Frequentist analyses must identify the probability model and other assumptions employed. Bayesian analyses must also identify the prior probabilities used as the starting point for combining with further evidence to arrive at posterior probabilities. The science, creativity, and judgment involved in statistical methods challenge courts and counsel to discover, understand, reproduce, present, and cross-examine statistician expert witness testimony. And occasionally, there is duplicity and deviousness to uncover as well.
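The point about priors can be illustrated with a minimal Bayes’-theorem sketch, using hypothetical numbers chosen only to show the prior’s influence: the same evidence, carrying the same likelihood ratio, yields very different posterior probabilities depending on the prior the statistician selected, which is why discovery should reach the prior and not merely the bottom-line posterior.

```python
# Minimal Bayes' theorem sketch for a simple hypothesis H and evidence E.
# All numbers are hypothetical, chosen only to show the prior's influence.
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) from the prior P(H) and the two conditional likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1.0 - prior) * p_e_given_not_h)

# Identical evidence (likelihood ratio of 4), two different priors:
print(round(posterior(0.50, 0.8, 0.2), 3))  # prior 0.50 -> posterior 0.8
print(round(posterior(0.10, 0.8, 0.2), 3))  # prior 0.10 -> posterior 0.308
```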

The discovery obligations with respect to statistician expert witnesses vary considerably among state and federal courts. The 1993 amendments to the Federal Rules of Civil Procedure created an automatic right to conduct depositions of expert witnesses[8]. Previously, parties in federal court had to show the inadequacy of other methods of discovery. Rule 26(a)(2)(B)(ii) requires the automatic production of “the facts or data considered by the [expert] witness in forming” his or her opinions. The literal wording of this provision would appear to restrict automatic, mandatory disclosure to those facts and data that are specifically considered in forming the opinions contained in the prescribed report. Several courts, however, have interpreted the term “considered” to include any information that expert witnesses review or generate, “regardless of whether the experts actually rely on those materials as a basis for their opinions.”[9]

Among the changes introduced by the 2010 amendments to the Federal Rules of Civil Procedure was a narrowing of the disclosure requirement of “facts or data” considered by expert witnesses in arriving at their opinions to exclude some attorney work product, as well as protecting drafts of expert witness reports from discovery. The implications of the Federal Rules for statistician expert witnesses are not entirely clear, but these changes should not be used as an excuse to deprive litigants of access to the data and materials underlying statisticians’ analyses. Since the 2010 amendments, courts have enforced discovery requests for testifying expert witnesses’ notes because they were not draft reports or specific communications between counsel and expert witnesses[10].

The Requirements Associated With Producing A Report

Rule 26 is the key rule that governs disclosure and discovery of expert witnesses and their opinions. Under the current version of Rule 26(a)(2)(B), the scope of required disclosure in the expert report has been narrowed in some respects. Rule 26(a)(2)(B) now requires service of expert witness reports that contain, among other things:

(i) a complete statement of all opinions the witness will express and the basis and reasons for them;

(ii) the facts or data considered by the witness in forming them;

(iii) any exhibits that will be used to summarize or support them.

The Rule’s use of “them” seems clearly to refer back to “opinions,” which creates a problem for materials considered generally in connection with the case or the issues, but not in forming the specific opinions advanced in the report.

The previous language of the rule required that the expert report disclose “the data or other information considered by the witness.[11]” The use of “other information” in the older version of the rule, rather than the new “data,” was generally interpreted to authorize discovery of all oral and written communications between counsel and expert witnesses. The trimming of Rule 26(a)(2)(B)(ii) was thus designed to place these attorney-expert witness communications off limits from disclosure or discovery.

The federal rules specify that the required report “is intended to set forth the substance of the direct examination[12].” Several courts have thus interpreted the current rule to require automatic production not of all statistical analyses performed, but only of those data and analyses the witness has decided to present at trial. The report requirement, as it now stands, is thus not necessarily designed to help adverse counsel fully challenge and cross-examine the expert witness on analyses attempted, discarded, or abandoned. If a statistician expert witness conducted multiple statistical testing before arriving at a “preferred” analysis, that expert witness, and instructing counsel, will obviously be all too happy to eliminate the unhelpful analyses from the direct examination, and from the purview of disclosure.

Some of the caselaw in this area makes clear that it is up to the requesting party to discover what it wants beyond the materials that must automatically be disclosed in, or with, the report. A party will not be heard to complain, or attack its adversary, about a failure to produce materials never requested.[13] Citing Rule 26(a) and its subsections, which deal with the report, and not discovery beyond the report, several cases take a narrow view of disclosure as embodied in the report requirement.[14] In one case, McCoy v. Whirlpool Corp., the trial court did, however, permit the plaintiff to conduct a supplemental deposition of the defense expert witness to question him about his calculations[15].

A narrow view of automatic disclosure in some cases appears to protect statistician and other expert witnesses from being required to produce calculations, statistical analyses, and data outputs even for opinions that are identified in their reports, and intended to be the subject of direct examination at trial[16]. The trial court’s handling of the issues in Cook v. Rockwell International Corporation is illustrative of this questionable approach. The issue of the inadequacy of expert witnesses’ reports, for failing to disclose notes, calculations, and preliminary analyses, arose in the context of a Rule 702 motion challenging the admissibility of the witnesses’ opinion testimony. The trial court rejected “[a]ny suggestion that an opposing expert must be able to verify the correctness of an expert’s work before it can be admitted… ”[17]; any such suggestion “misstates the standard for admission of expert evidence under [Fed. R. Evid.] 702.[18]” The Cook court further rejected any “suggestion in Rule 26(a)(2) that an expert report is incomplete unless it contains sufficient information and detail for an opposing expert to replicate and verify in all respects both the method and results described in the report.[19]” Similarly, the court rejected the defense’s complaints that the report and disclosures of one of plaintiffs’ expert witnesses violated Rule 26(a)(2), by failing to provide “detailed working notes, intermediate results and computer records,” to allow a rebuttal expert witness to test the methodology and replicate the results[20]. The court observed that

“Defendants’ argument also confuses the expert reporting requirements of Rule 26(a)(2) with the considerations for assessing the admissibility of an expert’s opinions under Rule 702 of the Federal Rules of Evidence. Whether an expert’s method or theory can or has been tested is one of the factors that can be relevant to determining whether an expert’s testimony is reliable enough to be admissible. See Fed. R. Evid. 702 2000 advisory committee’s note; Daubert, 509 U.S. at 593, 113 S.Ct. 2786. It is not a factor for assessing compliance with Rule 26(a)(2)’s expert disclosure requirements.[21]”

The Rule 702 motion to exclude an expert witness comes too late in the pre-trial process for complaints about failure to disclose underlying data and analyses. The Cook case never explicitly addressed Rule 26(b), or other discovery procedures, as a basis for the defense request for underlying documents, data, and materials.  In any event, the limited scope accorded to Rule 26 disclosure mechanisms by Cook emphasizes the importance of deploying ancillary discovery tools early in the pre-trial process.

The Format Of Documents and Data Files To Be Produced

The dispute in Helmert v. Butterball, LLC, is typical of what may be expected in a case involving statistician expert witness testimony. The parties exchanged reports of their statistical expert witnesses, as well as the data output files. The parties chose, however, to produce the data files in ways that were singularly unhelpful to the other side. One party produced data files in the “portable document format” (pdf) rather than in the native format of the statistical software package used (STATA). The other party produced data in a spreadsheet without any information about how the data were processed. The parties then filed cross-motions to compel the data in its “electronic, native format.” In addition, plaintiffs pressed for all the underlying data, formulae, and calculations. The court denied both motions on the theory that both sides had received copies of the data considered, and neither was denied facts or data considered by the expert witnesses in reaching their opinions[22]. The court refused plaintiffs’ request for formulae and calculations as well. The court’s discussion of its rationale for denying the cross-motions is framed entirely in terms of what parties may expect and be entitled to in the form of a report, without any mention of additional discovery mechanisms to obtain the sought-after materials. The court noted that the parties would have the opportunity to explore calculations at deposition.

The decision in Helmert seems typical of judicial indifference to, and misunderstanding of, the need for datasets, especially large datasets, in the form uploaded to, and used in, statistical software programs. What is missing from the Helmert opinion is a recognition that an effective deposition would require production of the requested materials in advance of the oral examination, so that the examining counsel can confer and consult with a statistical expert for help in formulating and structuring the deposition questions. There are at least two remedial considerations for future discovery motions of the sort seen in Helmert. First, the moving party should support its application with an affidavit of a statistical expert to explain the specific need for identification of the actual formulae used, programming used within specific software programs to run analyses, and interim and final outputs. Second, the moving party can draw a strong analogy to document discovery of parties, in which courts routinely order “native format” versions of PowerPoint, Excel, and Word documents produced in response to document requests. Rule 34 of the Federal Rules of Civil Procedure requires that “[a] party must produce documents as they are kept in the usual course of business[23]” and that, “[i]f a request does not specify a form for producing electronically stored information, a party must produce it in a form or forms in which it is ordinarily maintained or in a reasonably usable form or forms.[24]” The Advisory Committee notes to Rule 34[25] make clear that:

“[T]he option to produce in a reasonably usable form does not mean that a responding party is free to convert electronically stored information from the form in which it is ordinarily maintained to a different form that makes it more difficult or burdensome for the requesting party to use the information efficiently in the litigation. If the responding party ordinarily maintains the information it is producing in a way that makes it searchable by electronic means, the information should not be produced in a form that removes or significantly degrades this feature.”

Under the Federal Rules, a requesting party’s obligation to specify a particular format for document production is superseded by the responding party’s obligation to refrain from manipulating or converting “any of its electronically stored information to a different format that would make it more difficult or burdensome for [the requesting party] to use.[26]” In Helmert, the STATA files should have been delivered as STATA native format files, and the requesting party should have requested, and received, all STATA input and output files, which would have permitted the requestor to replicate all analyses conducted.
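The practical stakes of native-format production can be shown with a toy example, using hypothetical data and a hypothetical analysis invented for illustration. Given the produced dataset in a machine-readable form and the analysis steps actually disclosed, an opposing statistician can re-run the analysis and check the reported figure; a pdf printout of the same numbers permits no such replication.

```python
# Illustrative only: replicating a reported estimate from data produced
# in a machine-readable, native-style format (a tiny CSV stand-in).
import csv
import io
import statistics

# Stand-in for a produced dataset; the values are hypothetical.
produced_data = """group,wage
a,10.0
a,11.0
b,12.5
b,13.5
"""

rows = list(csv.DictReader(io.StringIO(produced_data)))

def mean_wage(rows, group):
    """Mean wage for one group, per the disclosed analysis steps."""
    return statistics.mean(float(r["wage"]) for r in rows if r["group"] == group)

# Re-run the disclosed analysis: the difference in group mean wages.
estimate = mean_wage(rows, "b") - mean_wage(rows, "a")
print(estimate)  # 2.5
```

The design point is the one the Advisory Committee notes make about searchability: the same four observations flattened into a pdf would force the requesting party to re-key the data by hand before any check of the expert’s arithmetic could even begin.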

Some of the decided cases on expert witness reports are troubling because they do not explicitly state whether they are addressing the adequacy of automatic disclosure and reports, or a response to propounded discovery. For example, in Etherton v. Owners Ins. Co.[27], the plaintiff sought to preclude a defense accident reconstruction expert witness on grounds that the witness failed to produce several pages of calculations[28]. The defense argued that “[w]hile [the witness’s] notes regarding these calculations were not included in his expert report, the report does specifically identify the methods he employed in his analysis, and the static data used in his calculations,” and asserted that “Rule 26 does not require the disclosure of draft expert reports, and it certainly does not require disclosure of calculations, as Plaintiff contends.[29]” The court in Etherton agreed that “Fed. R. Civ. P. 26(a)(2)(B) does not require the production of every scrap of paper with potential relevance to an expert’s opinion.[30]” The court laid the discovery default here upon the plaintiff, as the requesting party: “Although Plaintiff should have known that Mr. Ogden’s engineering analysis would likely involve calculations, Plaintiff never requested that documentation of those calculations be produced at any time prior to the date of [Ogden’s] deposition.[31]”

The Etherton court’s assessment that the defense expert witness’s calculations were “working notes,” which Rule 26(a)(2) does not require to be included in or produced with a report, seems a complete answer, except for the court’s musings about the new provisions of Rule 26(b)(4)(B), which protect draft reports.  Because of the court’s emphasis that the plaintiff never requested the documentation of the relevant calculations, the court’s musings about what was discoverable were clearly dicta.  The calculations, which would reveal data and inferential processes considered, appear to be core materials, subject to and important for discovery[32].

[This post is a substantial revision and update to an earlier post, “Discovery of Statistician Expert Witnesses” (July 19, 2012).]


[1] 542 F.2d 111 (2d Cir. 1976), cert. denied, 429 U.S. 987 (1976)

[2] Id. at 124.

[3] Id. at 126 & n.17.

[4] United States v. Dioguardi, 428 F.2d 1033, 1038 (2d Cir.), cert. denied, 400 U.S. 825 (1970) (holding that prosecution’s failure to produce computer program was error but harmless on the particular facts of the case).

[5] See, e.g., Roberts, “A Practitioner’s Primer on Computer-Generated Evidence,” 41 U. Chi. L. Rev. 254, 255-56 (1974); Freed, “Computer Records and the Law — Retrospect and Prospect,” 15 Jurimetrics J. 207, 208 (1975); ABA Sub-Committee on Data Processing, “Principles of Introduction of Machine Prepared Studies” (1964).

[6] Aldous, Note, “Disclosure of Expert Computer Simulations,” 8 Computer L.J. 51 (1987); Betsy S. Fiedler, “Are Your Eyes Deceiving You?: The Evidentiary Crisis Regarding the Admissibility of Computer Generated Evidence,” 48 N.Y.L. Sch. L. Rev. 295, 295–96 (2004); Fred Galves, “Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules of Evidence, and the Need for Institutional Reform and More Judicial Acceptance,” 13 Harv. J.L. & Tech. 161 (2000); Leslie C. O’Toole, “Admitting that We’re Litigating in the Digital Age: A Practical Overview of Issues of Admissibility in the Technological Courtroom,” Fed. Def. Corp. Csl. Quart. 3 (2008); Carole E. Powell, “Computer Generated Visual Evidence: Does Daubert Make a Difference?” 12 Georgia State Univ. L. Rev. 577 (1995).

[7] Federal Judicial Center, Manual for Complex Litigation § 11.447, at 82 (4th ed. 2004) (“The judge should therefore consider the accuracy and reliability of computerized evidence, including any necessary discovery during pretrial proceedings, so that challenges to the evidence are not made for the first time at trial.”); id. at § 11.482, at 99 (“Early and full disclosure of expert evidence can help define and narrow issues. Although experts often seem hopelessly at odds, revealing the assumptions and underlying data on which they have relied in reaching their opinions often makes the bases for their differences clearer and enables substantial simplification of the issues.”)

[8] Fed. R. Civ. P. 26(b)(4)(A) (1993).

[9] United States v. Dish Network, L.L.C., No. 09-3073, 2013 WL 5575864, at *2, *5 (C.D. Ill. Oct. 9, 2013) (noting that the 2010 amendments did not change the meaning of the term “considered,” as including “anything received, reviewed, read, or authored by the expert, before or in connection with the forming of his opinion, if the subject matter relates to the facts or opinions expressed.”); S.E.C. v. Reyes, 2007 WL 963422, at *1 (N.D. Cal. Mar. 30, 2007). See also South Yuba River Citizens’ League v. National Marine Fisheries Service, 257 F.R.D. 607, 610 (E.D. Cal. 2009) (majority rule requires production of materials considered even when work product); Trigon Insur. Co. v. United States, 204 F.R.D. 277, 282 (E.D. Va. 2001).

[10] Dongguk Univ. v. Yale Univ., No. 3:08–CV–00441 (TLM), 2011 WL 1935865 (D. Conn. May 19, 2011) (ordering production of a testifying expert witness’s notes, reasoning that they were neither draft reports nor communications between the party’s attorney and the expert witness, and they were not the mental impressions, conclusions, opinions, or legal theories of the party’s attorney); In re Application of the Republic of Ecuador, 280 F.R.D. 506, 513 (N.D. Cal. 2012) (holding that Rule 26(b) does not protect an expert witness’s own work product other than draft reports). But see Internat’l Aloe Science Council, Inc. v. Fruit of the Earth, Inc., No. 11-2255, 2012 WL 1900536, at *2 (D. Md. May 23, 2012) (holding that expert witness’s notes created to help counsel prepare for deposition of adversary’s expert witness were protected as attorney work product and protected from disclosure under Rule 26(b)(4)(C) because they did not contain opinions that the expert would provide at trial).

[11] Fed. R. Civ. P. 26(a)(2)(B)(ii) (1993) (emphasis added).

[12] Notes of Advisory Committee on Rules for Rule 26(a)(2)(B). See, e.g., Lithuanian Commerce Corp., Ltd. v. Sara Lee Hosiery, 177 F.R.D. 245, 253 (D.N.J. 1997) (expert witness’s written report should state completely all opinions to be given at trial, the data, facts, and information considered in arriving at those opinions, as well as any exhibits to be used), vacated on other grounds, 179 F.R.D. 450 (D.N.J. 1998).

[13] See, e.g., Gillespie v. Sears, Roebuck & Co., 386 F.3d 21, 35 (1st Cir. 2004) (holding that trial court erred in allowing cross-examination and final argument on expert witness’s supposed failure to produce all working notes and videotaped recordings while conducting tests, when objecting party never made such document requests).

[14] See, e.g., McCoy v. Whirlpool Corp., 214 F.R.D. 646, 652 (D. Kan. 2003) (Rule  26(a)(2) “does not require that a report recite each minute fact or piece of scientific information that might be elicited on direct examination to establish the admissibility of the expert opinion … Nor does it require the expert to anticipate every criticism and articulate every nano-detail that might be involved in defending the opinion[.]”).

[15] Id. (without distinguishing between the provisions of Rule 26(a) concerning reports and Rule 26(b) concerning depositions); see also Scott v. City of New York, 591 F.Supp. 2d 554, 559 (S.D.N.Y. 2008) (“failure to record the panoply of descriptive figures displayed automatically by his statistics program does not constitute best practices for preparation of an expert report,” but holding that the report contained “the data or other information” he considered in forming his opinion, as required by Rule 26); McDonald v. Sun Oil Co., 423 F.Supp. 2d 1114, 1122 (D. Or. 2006) (holding that Rule 26(a)(2)(B) does not require the production of an expert witness’s working notes; a party may not be sanctioned for spoliation based upon expert witness’s failure to retain notes, absent a showing of relevancy and bad faith), rev’d on other grounds, 548 F.3d 774 (9th Cir. 2008).

[16] In re Xerox Corp Securities Litig., 746 F. Supp. 2d 402, 414-15 (D. Conn. 2010) (“The court concludes that it was not necessary for the [expert witness’s] initial regression analysis to be contained in the [expert] report” that was disclosed pursuant to Rule 26(a)(2)), aff’d on other grds. sub. nom., Dalberth v. Xerox Corp., 766 F. 3d 172 (2d Cir. 2014). See also Cook v. Rockwell Int’l Corp., 580 F.Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___ , No. 10-1377, 2012 WL 2368857 (June 25, 2012), on remand, 13 F.Supp.3d 1153 (D. Colo. 2014), vacated 2015 WL 3853593, No. 14–1112 (10th Cir. June 23, 2015); Flebotte v. Dow Jones & Co., No. Civ. A. 97–30117–FHF, 2000 WL 35539238, at *7 (D. Mass. Dec. 6, 2000) (“Therefore, neither the plain language of the rule nor its purpose compels disclosure of every calculation or test conducted by the expert during formation of the report.”).

[17] Cook, 580 F. Supp. 2d at 1121–22.

[18] Id.

[19] Id. & n. 55 (Rule 26(a)(2) does not “require that an expert report contain all the information that a scientific journal might require an author of a published paper to retain.”).

[20] Id. at 1121-22.

[21] Id.

[22] Helmert v.  Butterball, LLC, No. 4:08-CV-00342, 2011 WL 3157180, at *2 (E.D. Ark. July 27, 2011).

[23] Fed. R. Civ. P. 34(b)(2)(E)(i).

[24] Fed. R. Civ. P. 34(b)(2)(E)(ii).

[25] Fed. R. Civ. P. 34, Advisory Comm. Notes (2006 Amendments).

[26] Crissen v. Gupta, 2013 U.S. Dist. LEXIS 159534, at *22 (S.D. Ind. Nov. 7, 2013), citing Craig & Landreth, Inc. v. Mazda Motor of America, Inc., 2009 U.S. Dist. LEXIS 66069, at *3 (S.D. Ind. July 27, 2009). See also Saliga v. Chemtura Corp., 2013 U.S. Dist. LEXIS 167019, *3-7 (D. Conn. Nov. 25, 2013).

[27] No. 10-cv-00892-MSKKLM, 2011 WL 684592 (D. Colo. Feb. 18, 2011)

[28] Id. at *1.

[29] Id.

[30] Id. at *2.

[31] Id.

[32] See Barnes v. Dist. of Columbia, 289 F.R.D. 1, 19–24 (D.D.C. 2012) (ordering production of underlying data and information because, “[i]n order for the [requesting party] to understand fully the . . . [r]eports, they need to have all the underlying data and information on how” the reports were prepared).

Forensic Science Conference Papers Published by Royal Society

June 27th, 2015

In February of this year, the Royal Society sponsored a two-day conference, “The paradigm shift for UK forensic science,” at The Royal Society, London. The meeting was organized by Professors Sue Black and Niamh Nic Daeid, of Dundee University, to discuss developments in the scientific reliability of the forensic sciences. The meeting program featured scientists, judges, and lawyers, covering a broad range of topics on science in the courtroom.

The presentations are now available as open-access papers in the Philosophical Transactions of the Royal Society B: Biological Sciences:

Sue Black, Niamh Nic Daeid, Introduction: Time to think differently: catalysing a paradigm shift in forensic science

The Rt Hon. the Lord Thomas of Cwmgiedd, The legal framework for more robust forensic science evidence

Éadaoin O’Brien, Niamh Nic Daeid, Sue Black, Science in the court: pitfalls, challenges and solutions

Paul Roberts, Paradigms of forensic science and legal process: a critical diagnosis

Stephan A. Bolliger, Michael J. Thali, Imaging and virtual autopsy: looking back and forward

Anil K. Jain, Arun Ross, Bridging the gap: from biometrics to forensics

Christophe Champod, Fingerprint identification: advances since the 2009 National Research Council report

John M. Butler, The future of forensic DNA analysis

Claude Roux, Benjamin Talbot-Wright, James Robertson, Frank Crispino, Olivier Ribaux, The end of the (forensic science) world as we know it? The example of trace evidence

Kenneth G. Furton, Norma Iris Caraballo, Michelle M. Cerreta, Howard K. Holness, Advances in the use of odour as forensic evidence through optimizing and standardizing instruments and canines

Justice Tettey, Conor Crean, New psychoactive substances: catalysing a shift in forensic science practice?

Ian Evett, The logical foundations of forensic science: towards reliable knowledge

Ate Kloosterman, Anna Mapes, Zeno Geradts, Erwin van Eijk, Carola Koper, Jorrit van den Berg, Saskia Verheij, Marcel van der Steen, Arian van Asten, The interface between forensic science and technology: how technology could cause a paradigm shift in the role of forensic institutes in the criminal justice system

Alastair Ross, Integrating research into operational practice

Itiel E. Dror, Review article: Cognitive neuroscience in forensic science: understanding and utilizing the human element

Earthquake-Induced Data Loss – We’re All Shook Up

June 26th, 2015

Adam Marcus and Ivan Oransky are medical journalists who publish the Retraction Watch blog. Their blog’s coverage of error, fraud, plagiarism, and other publishing disasters is often first-rate, and a valuable curative for the belief that peer review publication, as it is now practiced, ensures trustworthiness.

Yesterday, Retraction Watch posted an article on earthquake-induced data loss. Shannon Palus, “Lost your data? Blame an earthquake” (June 25, 2015). A commenter on PubPeer raised concerns about a key figure in a paper[1]. The authors acknowledged a problem, which they traced to their loss of data in an earthquake. The journal retracted the paper.

This is not the first instance of earthquake-induced loss of data.

When John O’Quinn and his colleagues in the litigation industry created the pseudo-science of silicone-induced autoimmunity, they recruited Nir Kossovsky, a pathologist at UCLA Medical Center. Although Kossovsky looked a bit like Pee-Wee Herman, he was a graduate of the University of Chicago Pritzker School of Medicine, and the U.S. Naval War College, and a consultant to the FDA. In his dress whites, Kossovsky helped O’Quinn sell his silicone immunogenicity theories to juries and judges around the country. For a while, the theories sold well.

In testifying and dodging discovery for the underlying data in his silicone studies, Kossovsky was as slick as silicone itself. Ultimately, when defense counsel subpoenaed the underlying data from Kossovsky’s silicone study, Kossovsky shrugged and replied that the Northridge Earthquake destroyed his data. Apparently coffee cups and other containers of questionable fluids spilled on his silicone data in the quake, and Kossovsky’s emergency response was to obtain garbage cans and throw out the data. For the gory details, see Gary Taubes, “Silicone in the System: Has Nir Kossovsky really shown anything about the dangers of breast implants?” Discover Magazine (Dec. 1995).

As Mr. Taubes points out, Kossovsky’s paper was rejected by several journals before being published in the Journal of Applied Biomaterials, of which Kossovsky was a member of the editorial board. The lack of data did not, however, keep Kossovsky from continuing to testify, and from trying to commercialize, along with his wife, Beth Brandegee, and his father, Ram Kossowsky[2], an ELISA-based silicone “antibody” biomarker diagnostic test, Detecsil. Although Rule 702 had been energized by the Daubert decision in 1993, many judges were still not willing to take a hard look at Kossovsky’s study, his test, or to demand the supposedly supporting data. The Food and Drug Administration, however, eventually caught up with Kossovsky, and the Detecsil marketing ceased. Lillian J. Gill, FDA Acting Director, Office of Compliance, Letter to Beth S. Brandegee, President, Structured Biologicals (SBI) Laboratories: Detecsil Silicone Sensitivity Test (July 15, 1994); see Taubes, Discover Magazine.

After defense counsel learned of the FDA’s enforcement action against Kossovsky and his company, the litigation industry lost interest in Kossovsky, and his name dropped off trial witness lists. His name also dropped off the rolls of tenured UCLA faculty, and he apparently left medicine altogether to become a business consultant. Dr. Kossovsky became “an authority on business process risk and reputational value.” Kossovsky is now the CEO and Director of Steel City Re, which specializes in strategies for maintaining and enhancing reputational value. Ironic, eh?

A review of PubMed’s entries for Nir Kossovsky shows that his run in silicone started in 1983, and ended in 1996. He testified for plaintiffs in Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir.1994) (tried in 1991), and in the infamous case of Johnson v. Bristol-Myers Squibb, CN 91-21770, Tx Dist. Ct., 125th Jud. Dist., Harris Cty., 1992.

A bibliography of Kossovsky’s silicone oeuvre is listed below.


[1] Federico S. Rodríguez, Katterine A. Salazar, Nery A. Jara, María A García-Robles, Fernando Pérez, Luciano E. Ferrada, Fernando Martínez, and Francisco J. Nualart, “Superoxide-dependent uptake of vitamin C in human glioma cells,” 127 J. Neurochemistry 793 (2013).

[2] Father and son apparently did not agree on how to spell their last name.


Nir Kossovsky, D. Conway, Ram Kossowsky & D. Petrovich, “Novel anti-silicone surface-associated antigen antibodies (anti-SSAA(x)) may help differentiate symptomatic patients with silicone breast implants from patients with classical rheumatological disease,” 210 Curr. Topics Microbiol. Immunol. 327 (1996)

Nir Kossovsky, et al., “Preservation of surface-dependent properties of viral antigens following immobilization on particulate ceramic delivery vehicles,” 29 J. Biomed. Mater. Res. 561 (1995)

E.A. Mena, Nir Kossovsky, C. Chu, and C. Hu, “Inflammatory intermediates produced by tissues encasing silicone breast prostheses,” 8 J. Invest. Surg. 31 (1995)

Nir Kossovsky, “Can the silicone controversy be resolved with rational certainty?” 7 J. Biomater. Sci. Polymer Ed. 97 (1995)

Nir Kossovsky & C.J. Freiman, “Physicochemical and immunological basis of silicone pathophysiology,” 7 J. Biomater. Sci. Polym. Ed. 101 (1995)

Nir Kossovsky, et al., “Self-reported signs and symptoms in breast implant patients with novel antibodies to silicone surface associated antigens [anti-SSAA(x)],” 6 J. Appl. Biomater. 153 (1995), and “Erratum,” 6 J. Appl. Biomater. 305 (1995)

Nir Kossovsky & J. Stassi, “A pathophysiological examination of the biophysics and bioreactivity of silicone breast implants,” 24s1 Seminars Arthritis & Rheum. 18 (1994)

Nir Kossovsky & C.J. Freiman, “Silicone breast implant pathology. Clinical data and immunologic consequences,” 118 Arch. Pathol. Lab. Med. 686 (1994)

Nir Kossovsky & C.J. Freiman, “Immunology of silicone breast implants,” 8 J. Biomaterials Appl. 237 (1994)

Nir Kossovsky & N. Papasian, “Mammary implants,” 3 J. Appl. Biomater. 239 (1992)

Nir Kossovsky, P. Cole, D.A. Zackson, “Giant cell myocarditis associated with silicone: An unusual case of biomaterials pathology discovered at autopsy using X-ray energy spectroscopic techniques,” 93 Am. J. Clin. Pathol. 148 (1990)

Nir Kossovsky & R.B. Snow, “Clinical-pathological analysis of failed central nervous system fluid shunts,” 23 J. Biomed. Mater. Res. 73 (1989)

R.B. Snow & Nir Kossovsky, “Hypersensitivity reaction associated with sterile ventriculoperitoneal shunt malfunction,” 31 Surg. Neurol. 209 (1989)

Nir Kossovsky & Ram Kossowsky, “Medical devices and biomaterials pathology: Primary data for health care technology assessment,” 4 Internat’l J. Technol. Assess. Health Care 319 (1988)

Nir Kossovsky, John P. Heggers, and M.C. Robson, “Experimental demonstration of the immunogenicity of silicone-protein complexes,” 21 J. Biomed. Mater. Res. 1125 (1987)

Nir Kossovsky, John P. Heggers, R.W. Parsons, and M.C. Robson, “Acceleration of capsule formation around silicone implants by infection in a guinea pig model,” 73 Plastic & Reconstr. Surg. 91 (1984)

John Heggers, Nir Kossovsky, et al., “Biocompatibility of silicone implants,” 11 Ann. Plastic Surg. 38 (1983)

Nir Kossovsky, John P. Heggers, et al., “Analysis of the surface morphology of recovered silicone mammary prostheses,” 71 Plast. Reconstr. Surg. 795 (1983)

The One Percent Non-solution – Infante Fuels His Own Exclusion in Gasoline Leukemia Case

June 25th, 2015

Most epidemiologic studies are not themselves admissible. Such studies rest upon many layers of hearsay evidence, measurements of exposures, diagnoses, records, and the like, which cannot be “cross-examined.” Our legal system nonetheless allows expert witnesses to rely upon such studies, although clearly inadmissible, when “experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject.” Federal Rule of Evidence 703. One of the problems judges face in carrying out their gatekeeping duties is evaluating whether challenged expert witnesses have reasonably relied upon particular studies and data. Judges, unlike juries, have an obligation to explain their decisions, and many expert witness gatekeeping decisions fall short by failing to provide citations to the contested studies at issue in the challenge. Sometimes the parties may be able to discern what is being referenced, but a judicial decision has a public function that goes beyond speaking to the litigants before the court. Without full citations to the studies that underlie an expert witness’s opinion, the communities of judges, lawyers, scientists, and others cannot evaluate the judge’s gatekeeping. Imagine a judicial opinion that vaguely referred to a decision by another judge but failed to provide a citation. We would think such an opinion a miserable failure of the judge’s obligation to explain and justify the resolution of the matter, as well as a case of poor legal scholarship. The same considerations should apply to the scientific studies relied upon by an expert witness whose opinion is discussed in a judicial opinion.

Judge Sarah Vance’s opinion in Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3755953 (E.D. La. June 16, 2015) [cited as Burst], is a good example of judicial opinion writing, in the context of deciding an evidentiary challenge to an expert witness’s opinion, that satisfies the demands of both judicial explanation and basic scholarship. The key studies relied upon by the challenged expert witness are identified, and cited, in a way that permits both litigants and non-litigants to review Her Honor’s opinion, and to evaluate both the challenged expert witness’s opinion and the trial judge’s gatekeeping performance. Citations to the underlying studies create the delicious possibility that the trial judge might actually have read the papers to decide the admissibility question. On the merits, Judge Vance’s opinion in Burst also serves as a good example of judicial scrutiny that cuts through an expert witness’s hand waving and misdirection in the face of inadequate, inconsistent, and insufficient evidence for a causal conclusion.

Burst is yet another case in which plaintiff claimed that exposure to gasoline caused acute myeloid leukemia (AML), one of several different types of leukemia[1]. The claim is fraught with uncertainty and speculation in the form of extrapolations between substances, from high to low exposures, and between diseases.

Everyone has a background exposure to benzene from both natural and anthropogenic sources. Smoking results in approximately a ten-fold elevation of benzene exposure. Agency for Toxic Substances and Disease Registry (ATSDR) Public Health Statement – Benzene CAS#: 71-43-2 (August 2007). Gasoline contains small amounts of benzene, on the order of 1 percent or less. U.S. Environmental Protection Agency (EPA), Summary and Analysis of the 2011 Gasoline Benzene Pre-Compliance Report (2012).

Although gasoline has always contained benzene, the quantitative difference in levels of benzene exposure involved in working with concentrated benzene and with gasoline has led virtually all scientists and regulatory agencies to treat the two exposures differently. Benzene exposure is a known cause of AML; gasoline exposure, even in occupational contexts, is not taken to be a known cause of AML. Dose matters.

Although the reviews of the International Agency for Research on Cancer (IARC) are sometimes partisan, incomplete, and biased towards finding carcinogenicity, the IARC categorizes benzene as a known human carcinogen, in large part because of its known ability to cause AML, but regards the evidence for gasoline as inadequate for making causal conclusions. IARC, Monographs on the Evaluation of Carcinogenic Risks to Humans, Vol. 45, Occupational Exposures in Petroleum Refining; Crude Oil and Major Petroleum Fuels (1989) (“There is inadequate evidence for the carcinogenicity in humans of gasoline.”) (emphasis in original)[2].

To transmogrify a gasoline case into a benzene case, plaintiff called upon Peter F. Infante, a fellow of the white-hat conspiracy, Collegium Ramazzini, and an adjunct professor at George Washington University School of Public Health and Health Services. Previously, Dr. Infante was Director of the Office of Standards Review at the Occupational Safety and Health Administration (OSHA). More recently, Infante is known as the president and registered agent of Peter F. Infante Consulting, LLC, in Falls Church, Virginia, and a go-to expert witness for plaintiffs in toxic tort litigation[3].

In the Burst case, Infante started out in trouble, by claiming that he had “followed the methodology of the International Agency for Research on Cancer (IARC) and of the Occupational Safety and Health Administration (OSHA) in evaluating epidemiological studies, case reports and toxicological studies of benzene exposure and its effect on the hematopoietic system.” Burst at *4. Relying upon the IARC’s methodology might satisfy some uncritical courts, but here the IARC itself sharply distinguished its characterizations of benzene and gasoline in separate reviews. Infante’s opinion ignored this divide, although it ultimately had to connect gasoline exposure to the claimed injury[4].

Judge Vance found that Infante’s proffered opinions ransacked the catalogue of expert witness errors. Infante:

  • relied upon studies of benzene exposure and diseases other than the outcome of interest, AML. Burst at *4, *10, *13.
  • relied upon studies of benzene exposure rather than gasoline exposure. Burst at *9.
  • relied upon studies that assessed outcomes in groups with multiple exposures, which studies were hopelessly confounded. Burst at *7.
  • failed to acknowledge the inconsistency of outcomes in the studies of the relevant exposure, gasoline. Burst at *9.
  • relied upon studies that lacked adequate exposure measurements and characterizations, which lack was among the reasons that the ATSDR declined to label gasoline a carcinogen. Burst at *12.
  • relied upon studies that did not report statistically significant associations between gasoline exposure and AML. Burst at *10, *12.
  • cherry picked studies and failed to explain contrary results. Burst at *10.
  • cherry picked data from within studies that did not otherwise support his conclusion. Burst at *10.
  • interpreted studies at odds with how the authors of published papers interpreted their own studies. Burst at *10.
  • failed to reconcile conflicting studies. Burst at *10.
  • manipulated data without sufficient explanation or justification. Burst at *14.
  • failed to conduct an appropriate analysis of the entire dataset, along the lines of Sir Austin Bradford Hill’s nine factors. Burst at *10.

The manipulation charge is worth further discussion because it reflects upon the trial court’s acumen and the challenged witness’s deviousness. Infante combined the data from two exposure subgroups from one study[5] to claim that the study actually had a statistically significant association. The trial court found that Dr. Infante failed to explain or justify the recalculation. Burst at *14. At the pre-trial hearing, Dr. Infante offered that he had performed the recalculation on a “sticky note,” but he failed to provide his calculations. The court might also have been concerned about the misuse of claiming statistical significance in a post-hoc, non-prespecified analysis, which would clearly have raised a multiple-comparisons problem. Infante also combined two separate datasets from an unpublished study (the Spivey study for Union Oil), which the court found problematic for his failure to explain and justify the aggregation of data. Id. That aggregation raises the question whether the two separate datasets could appropriately be combined at all.
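The statistical mischief of post-hoc pooling can be sketched with a toy calculation. The counts below are purely hypothetical (not drawn from the Rushton study or any other): two subgroups whose crude odds ratios are each compatible with chance can, when their cells are simply summed, yield a confidence interval that excludes 1.0, which is exactly why unexplained aggregation demands scrutiny.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% confidence interval for a 2x2 table,
    using the usual normal approximation on the log odds ratio.
    a, b = exposed cases, exposed controls; c, d = unexposed cases, controls."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

# Purely hypothetical subgroup counts, for illustration only:
sub1 = (18, 10, 82, 90)
sub2 = (18, 10, 82, 90)
pooled = tuple(s1 + s2 for s1, s2 in zip(sub1, sub2))

for label, cells in (("subgroup 1", sub1), ("subgroup 2", sub2), ("pooled", pooled)):
    or_, lo, hi = odds_ratio_ci(*cells)
    flag = "significant" if lo > 1.0 else "not significant"
    print(f"{label:10s}: OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}) -> {flag}")
```

With these made-up counts, each subgroup’s interval spans 1.0 while the pooled interval’s lower bound edges above it; nothing about the arithmetic makes the pooled estimate more trustworthy, which is why a court may fairly demand that the aggregation be explained and justified rather than scribbled on a sticky note.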

For another study[6], Infante adjusted the results based upon his assessment that the study was biased by a “healthy worker effect[7].” Burst at *15. Infante failed to provide any explanation of how he adjusted for the healthy worker effect, thus giving the court no basis for evaluating the reliability of his methodology. Perhaps more telling, the authors of this study acknowledged the hypothetical potential for healthy worker bias, but chose not to adjust for it because their primary analyses were conducted internally within the working study population, which fully accounted for the potential bias[8].
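The difference between external and internal comparisons can be illustrated with a minimal sketch, using entirely hypothetical person-years and death counts (not the Wong data): an externally referenced standardized mortality ratio (SMR) shows a spurious “deficit” of deaths simply because workers are healthier than the general population, while an internal comparison of exposed with unexposed workers is untouched by that selection bias.

```python
# Entirely hypothetical person-years and deaths, for illustration only.
general_rate = 10 / 1000          # general-population death rate per person-year

# Workers die at a lower rate simply because the severely ill are screened
# out of employment; exposure, in this toy example, adds nothing.
exposed_deaths, exposed_py = 35, 5_000        # 7 per 1,000 person-years
unexposed_deaths, unexposed_py = 70, 10_000   # 7 per 1,000 person-years

# External comparison: SMR = observed deaths / deaths expected at
# general-population rates. The healthy worker effect drags it below 1.0.
expected = general_rate * (exposed_py + unexposed_py)
smr = (exposed_deaths + unexposed_deaths) / expected

# Internal comparison: exposed vs. unexposed workers. The selection bias
# affects both groups alike and cancels out of the ratio.
rate_ratio = (exposed_deaths / exposed_py) / (unexposed_deaths / unexposed_py)

print(f"SMR vs. general population: {smr:.2f}")        # deficit: 0.70
print(f"Internal rate ratio:        {rate_ratio:.2f}")  # no effect: 1.00
```

The sketch shows why an unexplained upward “adjustment” of an internally analyzed study is suspect: where the primary analyses are internal to the working population, the healthy worker bias has already been accounted for, and inflating the results corrects a bias that is not there.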

The court emphasized that it did not question whether combining datasets or adjusting for bias was accepted or proper methodology; rather, it focused its critical scrutiny on Infante’s refusal or failure to explain and justify his post-hoc “manipulations of published data.” Burst at *15. Without a showing that AML is more common among non-working, disabled men, the healthy worker adjustment could well be questioned.

In the final analysis, Infante’s sloppy narrative review could not stand in the face of obviously inconsistent epidemiologic data. Burst at *16. The trial court found that Dr. Infante’s methodology of claiming reliance upon multiple studies, which did not reliably (validly) support his claims or “fit” his conclusions, failed to satisfy the requirements of Federal Rule of Evidence 702. The analytical gap between the data and the opinion was too great. Id. at *8. Infante’s opinion fell into the abyss[9].


[1] See, e.g., Castellow v. Chevron USA, 97 F. Supp. 2d 780, 796 (S.D.Tex.2000) (“Plaintiffs here have not shown that the relevant scientific or medical literature supports the conclusion that workers exposed to benzene, as a component of gasoline, face a statistically significant risk of an increase in the rate of AML.”); Henricksen v. Conoco Phillips Co., 605 F.Supp.2d 1142, 1175 (E.D.Wa. 2009) (“None of the studies relied upon have concluded that gasoline has the same toxic effect as benzene, and none have concluded that the benzene component of gasoline is capable of causing AML.”); Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 450 (N.Y.2006) (“[N]o significant association has been found between gasoline exposure and AML. Plaintiff’s experts were unable to identify a single epidemiologic study finding an increased risk of AML as a result of exposure to gasoline.”).

[2] See also ATSDR Toxicological Profile for Gasoline (1995) (concluding “there is no conclusive evidence to support or refute the carcinogenic potential of gasoline in humans or animals based on the carcinogenicity of one of its components, benzene”); ATSDR, Public Health Statement for Automotive Gasoline (June 1995) (“However, there is no evidence that exposure to gasoline causes cancer in humans. There is not enough information available to determine if gasoline causes birth defects or affects reproduction.”).

[3] See, e.g., Harris v. CSX Transp., Inc., 753 SE 2d 275, 232 W. Va. 617 (2013); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142 (E.D. Wash. 2009); Roney v. GENCORP, Civil Action No. 3: 05-0788 (S.D.W. Va. Sept. 18, 2009); Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000).

[4] Judge Vance did acknowledge that benzene studies were relevant to Infante’s causation opinion, but emphasized that such studies could not suffice to show that all gasoline exposures could cause AML. Burst at *10 (citing Dickson v. Nat’l Maint. & Repair of Ky., Inc., No. 5:08–CV–00008, 2011 WL 12538613, at *6 (W.D. Ky. April 28, 2011) (“Benzene may be considered a causative agent despite only being a component of the alleged harm.”)).

[5] L. Rushton & H. Romaniuk, “A Case-Control Study to Investigate the Risk of Leukaemia Associated with Exposure to Benzene in Petroleum Marketing and Distribution Workers in the United Kingdom,” 54 Occup. & Envt’l Med. 152 (1997).

[6] Otto Wong, et al., “Health Effects of Gasoline Exposure. II. Mortality Patterns of Distribution Workers in the United States,” 101 Envt’l Health Persp. 6 (1993).

[7] Burst at *15, citing and quoting from John Last, A Dictionary of Epidemiology (3d ed. 1995) (“Workers usually exhibit lower overall death rates than the general population because the severely ill and chronically disabled are ordinarily excluded from employment.”).

[8] Wong, supra.

[9] In a separate opinion, Judge Vance excluded a physician, Dr. Robert Harrison, who similarly opined that gasoline causes AML, and Mr. Burst’s AML, without the benefit of sound science to support his opinion. Burst v. Shell Oil Co., C. A. No. 14–109, 2015 WL 3620111 (E.D. La. June 9, 2015).

Diclegis and Vacuous Philosophy of Science

June 24th, 2015

Just when you thought that nothing more could be written intelligently about the Bendectin litigation, you find out you are right. Years ago, Michael Green and Joseph Sanders each wrote thoughtful, full-length books[1] about the litigation assault on the morning-sickness (nausea and vomiting of pregnancy) medication, which was voluntarily withdrawn by its manufacturer from the United States market. Dozens, if not hundreds, of law review articles discuss the scientific issues, the legal tactics, and the judicial decisions in the U.S. Bendectin litigation, including the Daubert case in the Supreme Court and in the Ninth Circuit. But perhaps fresh eyes might find something new to say.

Boaz Miller teaches social epistemology and philosophy of science, and he recently weighed in on the role that scientific consensus played in resolving the Bendectin litigation. Miller, “Scientific Consensus and Expert Testimony in Courts: Lessons from the Bendectin Litigation,” Foundations of Science (Oct. 17, 2014) (in press) [cited as Miller]. Miller astutely points out that scientific consensus may or may not be epistemic, that is, based upon robust, valid, sufficient scientific evidence of causality. Scientists are people, and sometimes they come to conclusions based upon invalid evidence, or because of cognitive biases, or social pressures, and the like. Sometimes scientists get the right result for the wrong reasons. From this position he argues that adverting to scientific consensus is fraught with the danger of being misled, and that the Bendectin litigation specifically is an example of courts led astray by a “non-epistemic” scientific consensus. Miller at 1.

Miller is correct that the scientific consensus on Bendectin’s safety, which emerged after the initiation of litigation, played a role in resolving the litigation, id. at 8, but he badly misunderstands how the consensus actually operated to bring closure to the birth defects litigation. Remarkably, he pays no attention to the consolidated trial of over 800 cases before the Hon. Carl B. Rubin, in the Southern District of Ohio. This trial resulted in a defense verdict in March 1985, and a judgment that withstood appellate review. Assoc. Press, “U.S. Jury Clears a Nausea Drug in Birth Defects,” N.Y. Times (Mar. 13, 1985). The subsequent litigation turned into guerilla warfare over the relatively few remaining individual cases in state and federal courts. In one of the state court cases, the trial court appointed neutral expert witnesses, who opined that plaintiffs had failed to make out their causal claims of teratogenicity in human beings. DePyper v. Navarro, No. 83–303467-NM, 1995 WL 788828 (Mich. Cir. Ct. Nov. 27, 1995).

To be sure, plaintiffs’ expert witnesses and plaintiffs’ counsel continued in their campaign to manufacture “reasonable medical certainty” of Bendectin’s teratogenicity, well after a scientific consensus emerged. Boaz Miller makes the stunning claim that this consensus was not a “knowledge-based” consensus because:

(1) the research was controlled by parties to the dispute (Miller at 10);

(2) the consensus ignored or diminished the “value” of in vitro toxicology (Miller at 13);

(3) the consensus relied most heavily upon the epidemiologic evidence (Miller at 14);

(4) the animal toxicology research was “prematurely” abandoned when the U.S. withdrew its product from the market (Miller at 15); and

(5) the withdrawal ended the “threat” to public health, and the concerns about teratogenicity (Miller at 15).

Miller’s asserted reasons are demonstrably incorrect. Although Richardson-Merrell funded some studies early on, by the time the scientific consensus emerged, many studies funded by neutral sources, and conducted by researchers of respected integrity, were widely available. The consensus did not diminish the value of in vivo toxicology; rather, scientists evaluated the available evidence through their understanding of epidemiology’s superiority in assessing actual risks in human populations. Animal studies were not prematurely abandoned; more accurately, the animal studies gave way to more revealing, valid studies in humans about human outcomes. The sponsor’s withdrawal of Bendectin in the United States was not the cause of any abandonment of research. The drug remained available outside the United States, in countries with less rapacious tort systems, and researchers would have, in any event, continued animal studies if there were something left open by previous research. A casual browse through PubMed’s collection of articles on thalidomide shows that animal research continued well after that medication had been universally withdrawn for use in pregnant women. Given that thalidomide was unquestionably a human teratogen, there was a continued interest in understanding its teratogenicity. No such continued interest existed for Bendectin after the avalanche of exculpatory human data.

What sort of inquiry permitted Miller to reach his conclusions? His article cites no studies, no whole-animal toxicology, no in vitro research, no epidemiologic studies, no systematic reviews, no regulatory agency reviews, and no meta-analysis. All exist in abundance. The full extent of his engagement with the actual scientific data and issues is a reference to an editorial and two letters to the editor[2]! From the exchange of views in one set of letters in 1985, Miller infers that there was “clear dissent within the community of toxicologists.” Miller at 13. The letters in question, however, were written in a journal of teratology, which was not limited to toxicology, and the interlocutors were well aware of the hierarchy of evidence that placed human observational studies at the top of the evidential pyramid.

Miller argues that it was possible that the consensus was not knowledge-based because it might have reflected the dominance of epidemiology over the discipline of toxicology. Again, he ignores the known dubious validity of inferring human teratogenicity from high dose whole animal or in vitro toxicology. By the time the scientific consensus emerged with respect to Bendectin’s safety, this validity point was widely appreciated by all but the most hardened rat killers, and plaintiffs’ counsel. In less litigious countries, the drug never left the market. No regulatory agency ever called for its withdrawal.

Miller might have tested whether the scientific community’s consensus on Bendectin, circa 1992 (when Daubert was being briefed in the Supreme Court), was robust by looking to how well it stood up to further testing. He did not, but he could easily have found the following. The U.S. sponsor of Diclegis, Duchesnay USA, sought and obtained the indication for its medication in pregnancy. Under U.S. law, Duchesnay’s new drug application had to establish safety and efficacy for this indication. In 2013, the U.S. FDA approved Bendectin, under the tradename Diclegis[3], as a combination of doxylamine succinate and pyridoxine hydrochloride for sale in the United States. Under the FDA’s pregnancy labeling system, Diclegis is a category A drug, with a specific indication for use in pregnancy. The FDA’s review of the actual data is largely available for all to see. See, e.g., Center for Drug Evaluation and Research, Other Reviews (Aug. 2012); Summary Review (April 2013); Pharmacology Review (March 2013); Medical Review (March 2013); Statistical Review (March 2013); Cross Discipline Team Leader Review (April 2013). Given the current scientific record, the consensus that emerged in the early 1990s looks strong. Indeed, the consensus was epistemically strong when reached two decades ago.

Miller is certainly correct that reliance upon consensus entails epistemic risks. Sometimes the consensus has not looked very hard or critically at all the evidence. Political, financial, and cognitive biases can be prevalent. Miller fails to show that any such biases were prevalent in the early 1990s, or that they infected judicial assessments of the plaintiffs’ causal claims in Bendectin litigation. Miller is also wrong to suggest that courts did not look beyond the consensus to the actual evidential base for plaintiffs’ claims. Through the lens of testimony from both party and court-appointed expert witnesses, courts and juries had a better view of the causation issues than Miller appreciates. Miller’s philosophy of science might be improved by rolling up his sleeves and actually looking at the data[4].


[1] See Joseph Sanders, Bendectin on Trial: A Study of Mass Tort Litigation (1998); Michael D. Green, Bendectin and Birth Defects: The Challenges of Mass Toxic Substances Litigation (1996).

[2] Robert Brent, “Editorial comment on comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 429 (1985); Kenneth S. Brown, John M. Desesso, John Hassell, Norman W. Klein, Jon M. Rowland, A. J. Steffek, Betsy D. Carlton, Cas. Grabowski, William Slikker Jr. and David Walsh, “Comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 431 (1985); Lewis B. Holmes, “Response to comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 432 (1985).

[3] See FDA News Release, “FDA approves Diclegis for pregnant women experiencing nausea and vomiting,” (April 8, 2013). The return of this drug to the United States market was held up as a triumph of science over the will of the industry litigation. See Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013).

[4] See “Bendectin, Diclegis & The Philosophy of Science” (Oct 26, 2013).

Government Secrecy That Denies Defendant A Fair Trial – Because of Reasons

June 20th, 2015

In Davis v. Ayala, defendant Hector Ayala challenged the prosecutor’s use of peremptory challenges in an apparently racially motivated fashion. The trial judge allowed the prosecutor to disclose his reasons in an ex parte session, without the defense present. Under the Supreme Court’s decision in Batson, the defendant should have had the opportunity to inquire into the bona fides of the prosecutor’s claimed motivations. Based upon the prosecutor’s one-sided presentation, the trial judge ruled that the prosecutor had valid, race-neutral grounds for the contested strikes. After a trial, the empanelled jury convicted Ayala of murder, and sentenced him to death. In a 5-4 decision, the Supreme Court held that the trial court’s error was harmless. Davis v. Ayala, Supreme Court, No. 13–1428 (June 18, 2015). Justice Kennedy issued a concurrence. His conscience was curiously troubled not by the Star Chamber proceedings, but by the facts of Ayala’s post-conviction incarceration, which has taken place largely in solitary confinement.

Remarkably, the New York Times weighed in on the Ayala case, but not to castigate the Court for rubber-stamping Kafkaesque rules of procedure that permit the defense to be excluded and prevented from exercising its constitutionally protected role. The Times chose to spill ink instead on Justice Kennedy’s concurrence on the length of solitary confinement. Editorial, “Justice Kennedy on Solitary Confinement,” N.Y. Times (June 19, 2015).

What is curious about Justice Kennedy’s focus, and the Times’ cheerleading, is that they run roughshod over a procedural error that excused prosecutorial secrecy and that affected the adjudication of guilt or innocence, only to obsess about whether a man, taken to be guilty, has been treated inhumanely by the California prison system. Even more curious is the willingness of the Times to castigate, on bogus legal grounds, Justice Thomas for responding to Justice Kennedy:

“In a brief, sour retort that read more like a comment to a blog post, Justice Clarence Thomas quipped that however small Mr. Ayala’s current accommodations may be, they are ‘a far sight more spacious than those in which his victims, Ernesto Dominguez Mendez, Marcos Antonio Zamora, and Jose Luis Rositas, now rest’. It was a bizarre and unseemly objection. The Eighth Amendment does not operate on a sliding scale depending on the gravity of a prisoner’s crime.”

Id. (emphasis added). Except, of course, the Eighth Amendment’s requirement of proportionality does operate on a sliding scale[1]. In Kennedy v. Louisiana, 554 U.S. 407 (2008), for instance, the Court held that the Eighth Amendment’s Cruel and Unusual Punishments Clause prohibited a state from imposing the death penalty to punish a child rapist because of the sanction’s disproportionality[2].

Perhaps the New York Times could hire a struggling young lawyer to fact check its legal pronouncements? Both Justice Kennedy and Justice Thomas were in the same majority that tolerated denying the defendant his constitutional right to examine the prosecutor’s motivation for striking black and Hispanic jurors. What a “sour note” for the Times to sound over Justice Thomas’s empathy for the victims of the defendant’s crimes.


[1] William W. Berry III, “Eighth Amendment Differentness,” 78 Missouri L. Rev. 1053 (2013); Charles Walter Schwartz, “Eighth Amendment Proportionality Analysis and the Compelling Case of William Rummel,” 71 J. Crim. L. & Criminology 378 (1980); John F. Stinneford, “Rethinking Proportionality Under the Cruel and Unusual Punishments Clause,” 97 Va. L. Rev. 899 (2011).

[2] Also curious was that then Senator Barack Obama criticized the Supreme Court for its decision in the Kennedy case. See Sara Kugler “Obama Disagrees With High Court on Child Rape Case,” ABC News (June 25, 2008) (archived from the original).

Daubert’s Error Rate

June 16th, 2015

In Daubert, the Supreme Court came to the realization that expert witness opinion testimony was allowed under the governing statute, Federal Rule of Evidence 702, only when that witness’s “scientific, technical, or other specialized knowledge” would help the fact finder. Knowledge clearly connotes epistemic warrant, and some of the Court’s “factors” speak directly to this warrant, such as whether the claim has been tested, and whether the opinion has an acceptable rate of error. The Court, however, continued to allow some proxies for that warrant, in the form of “general acceptance,” or “peer review.”

The “rate of error” factor has befuddled some courts in their attempts to apply the statutory requirements of Rule 702, especially when statistical evidence is involved. Some litigants have tried to suggest that a statistically significant result alone suffices to meet the demands of Rule 702, but this argument is clearly wrong. See, e.g., United States v. Vitek Supply Corp., 144 F.3d 476, 480, 485–86 (7th Cir. 1998) (stating that the purpose of the inquiry into rate of error is to determine whether tests are “accurate and reliable”) (emphasis added). See also “Judicial Control of the Rate of Error in Expert Witness Testimony” (May 28, 2015). The magnitude of the tolerable actual or potential error rate remains, however, a judicial mystery[1].
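One coherent reading of the “rate of error” factor treats it as the frequency with which a method declares an effect when none exists. The conventional 5-percent significance level is itself such an error rate, as a minimal simulation (hypothetical data, standard library only) can show: when the null hypothesis is true by construction, roughly five percent of tests nonetheless come out “statistically significant.”

```python
import math
import random

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test on a difference in means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # Standard normal two-sided tail probability via the error function.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(1)  # fixed seed so the illustration is reproducible
trials, false_positives = 2_000, 0
for _ in range(trials):
    # Both samples come from the SAME distribution: the null is true.
    x = [random.gauss(0.0, 1.0) for _ in range(50)]
    y = [random.gauss(0.0, 1.0) for _ in range(50)]
    if two_sample_p(x, y) < 0.05:
        false_positives += 1

print(f"False-positive rate under the null: {false_positives / trials:.3f}")
```

The printed rate hovers near 0.05: the significance level simply is the testing procedure’s false-positive error rate. That is a property of the statistical method in repeated use, and it is a different question from whether a particular expert witness’s overall chain of inference has an acceptable rate of error.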

Sir Austin Bradford Hill described ruling out bias, confounding, and chance (or random error) as essential prerequisites to considering his nine factors used to assess whether an association is causal:

“Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation.”

Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965). The better reasoned cases agree. See, e.g., Frischhertz v. SmithKline Beecham Corp., 2012 U.S. Dist. LEXIS 181507, at *6 (E.D. La. 2012) (“The Bradford-Hill criteria can only be applied after a statistically significant association has been identified.”) (citing and quoting, among other sources, Federal Judicial Center, Reference Manual on Scientific Evidence 599 & n.141 (3d ed. 2011)).
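Bradford Hill’s precondition, ruling out chance before weighing his nine factors, is exactly what a significance test addresses, and it addresses nothing else. A minimal sketch in Python may make the point concrete; the cohort counts and the two-proportion z-test below are illustrative assumptions, not drawn from any case or study:

```python
import math

# Hypothetical cohort counts (illustration only, not from any case or study):
a, n1 = 30, 1000   # cases among the exposed, exposed total
b, n2 = 15, 1000   # cases among the unexposed, unexposed total

p1, p2 = a / n1, b / n2
pooled = (a + b) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"risk ratio = {p1 / p2:.2f}, z = {z:.2f}, p = {p_value:.3f}")
# A small p-value speaks only to random error (chance). It says nothing
# about bias or confounding, which must be assessed separately before
# any of Bradford Hill's factors can even be reached.
```

In this hypothetical, the association is “statistically significant” at the conventional five percent level, yet that result, standing alone, cannot satisfy Rule 702; the same numbers could arise entirely from a biased or confounded study.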

Citing the dictum in Matrixx Initiatives[2] as though it were a holding is not only ethically dubious, but also ignores the legal and judicial context of the Court’s statements[3]. There are, after all, some circumstances, such as death by blunt-force trauma or bullet wounds, in which epidemiological and statistical evidence is not needed. The Court did not purport to speak to all causation assessments; nor did it claim to be addressing only those instances in which there are “expected cases” and “base-line risks” for diseases that have an accepted occurrence and incidence among unexposed persons. It is, of course, in exactly those cases that statistical consideration of bias, confounding, and chance is essential before Bradford Hill’s factors can be parsed.

Lord Rutherford[4] is often quoted as having said that “[i]f your experiment needs statistics, you ought to have done a better experiment.” Today, physics and chemistry have dropped their haughty disdain for statistics in the face of their recognition that some processes can be understood only as stochastic and rate driven. In biology, we are a long way from being able to describe the most common disease outcomes as mechanistic genetic or epigenetic events. Statistical analyses, with considerations of random and systematic error, will be with us for a long time, whether the federal judiciary acknowledges this fact or not.

*        *        *        *        *        *        *        *        *        *        *         *        *       

Cases Discussing Error Rates in Rule 702 Decisions

SCOTUS

Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993) (specifying “the known or potential rate of error” as one of several factors in assessing the scientific reliability or validity of a proffered expert witness’s opinion)

Kumho Tire Co. v. Carmichael, 526 U.S. 137, 151 (1999) (suggesting that reliability in the form of a known and an acceptable error rate is an important consideration for admissibility)

US Courts of Appeals and District Courts

FIRST CIRCUIT

United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting criminal defendant’s objection to government witness’s providing separate match and error probability rates)

SECOND CIRCUIT

Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error)

Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate)

In re Ephedra Products Liability Litigation, 393 F. Supp. 2d 181, 184 (S.D.N.Y. 2005) (confusing assessment of random error with probability that statistical estimate of true risk ratio was correct)

Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error)

Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

United States v. Towns, 19 F. Supp. 2d 67, 70–72 (W.D.N.Y. 1998) (permitting clinical psychologist to opine about defendant’s mens rea and claimed mental illness causing his attempted bank robbery, in part because the proffer of opinion maintained that the psychologist would provide an error rate)  

Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action in part because of error rate)

THIRD CIRCUIT

United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting criminal defendant’s objection to government’s failure to quantify laboratory error rate)

Soldo v. Sandoz Pharmaceuticals Corp., 244 F. Supp. 2d 434, 568 (W.D. Pa. 2003) (excluding plaintiffs’ expert witnesses in part because court, and court-appointed expert witnesses, were unable to determine error rate).

Pharmacia Corp. v. Alcon Labs., Inc., 201 F. Supp. 2d 335, 360 (D.N.J. 2002) (excluding expert witness testimony in part because the rate of error was too high).

FOURTH CIRCUIT

United States v. Moreland, 437 F.3d 424, 427–28, 430–31 (4th Cir. 2006) (affirming district court’s allowance of forensic chemist’s testimony that could not provide error rate because reviews of witness’s work found it to be free of error)

Buckman v. Bombardier Corp., 893 F. Supp. 547, 556–57 (E.D.N.C. 1995) (ruling that an expert witness may opine about comparisons between boat engines in rough water but only as a lay witness, because the comparison tests were unreliable, with a high estimated rate of error)

FIFTH CIRCUIT

Albert v. Jordan, Nos. 05CV516, 05CV517, 05CV518, 05CV519, 2007 U.S. Dist. LEXIS 92025, at *2–3 (W.D. La. Dec. 14, 2007) (allowing testimony of vocational rehabilitation expert witness, over objection, because witness provided “reliable” information, with known rate of error)

SIXTH CIRCUIT

United States v. Leblanc, 45 F. App’x 393, 398, 400 (6th Cir. 2002) (affirming exclusion of child psychologist, whose testimony about children’s susceptibility to coercive interrogation was based upon “‘soft science’ . . . in which ‘error is . . . rampant’.” (quoting the district court))

United States v. Sullivan, 246 F. Supp. 2d 696, 698–99 (E.D. Ky. 2003) (admitting expert witness’s opinion on the unreliability of eyewitness identification; confusing error rate of witness’s opinion with accuracy of observations made based upon order of presentation of photographs of suspect)

SEVENTH CIRCUIT

United States v. Vitek Supply Corp., 144 F.3d 476, 480, 485–86 (7th Cir. 1998) (affirming denial of defendant’s Rule 702 challenge based in part upon error rates; the purpose of the inquiry into rate of error is to determine whether tests are “accurate and reliable”; here the government’s expert witnesses used adequate controls and replication to ensure an acceptably low rate of error)

Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33, 740–41 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way that would produce an accurate rate of error)

EIGHTH CIRCUIT

Bone Shirt v. Hazeltine, 461 F.3d 1011, 1020 (8th Cir. 2006) (affirming district court’s ruling to admit testimony of expert witness’s regression analysis in vote redistricting case); see id. at 1026 (Gruender, J., concurring) (expressing concern with the questioned testimony’s potential rate of error because it is “difficult to weigh this factor in Daubert’s analysis if ‘the effect of that error is unknown’.” (quoting the court below, Bone Shirt v. Hazeltine, 336 F. Supp. 2d 976, 1002 (D.S.D. 2004)))

United States v. Beasley, 102 F.3d 1440, 1444, 1446–48 (8th Cir. 1996) (confusing random error with general error rate) (affirming admissibility of expert witness testimony based upon DNA testing, because such testing followed acceptable standards in testing for contamination and “double reading”)

NINTH CIRCUIT

United States v. Chischilly, 30 F.3d 1144, 1148, 1152, 1154–55 (9th Cir. 1994) (affirming admissibility of testimony based upon DNA match in sex crime, noting that although the rate of error was unquantified, the government had made a sufficient showing of rarity of false positives to support an inference of low error rate)

Cascade Yarns, Inc. v. Knitting Fever, Inc., No. C10–861RSM, 2012 WL 5194085, at *7 (W.D. Wash. Oct. 18, 2012) (excluding expert witness opinion because error rate was too high)

United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that the claimed error rate came from highly controlled situations, and that “real world” situations led to much higher (10%) false-positive error rates)

TENTH CIRCUIT

Miller v. Pfizer, Inc., 356 F.3d 1326, 1330, 1334 (10th Cir. 2004) (affirming exclusion of plaintiffs’ expert witness, Dr. David Healy, based upon district court’s findings, made with the assistance of court-appointed expert witnesses, that Healy’s opinion was based upon studies that lacked sufficient sample size, adequate controls, and freedom from study bias, and thus prone to unacceptable error rate)

ELEVENTH CIRCUIT

Quiet Tech. DC-8, Inc. v. Hurel-Dubois UK Ltd., 326 F.3d 1333, 1343–45 (11th Cir. 2003) (affirming trial court’s admission of defendant’s aerospace engineer’s testimony, when the lower court had found that the error rate involved was “relatively low”; rejecting plaintiff’s argument that the witness had entered data incorrectly on the ground that the asserted error would not affect the validity of the witness’s opinions)

Wright v. Case Corp., No. 1:03-CV-1618-JEC, 2006 U.S. Dist. LEXIS 7683, at *14 (N.D. Ga. Feb. 1, 2006) (granting defendant’s motion to exclude plaintiff’s mechanical engineering expert, because the expert’s alternative designs for the seat safety bar were not reliable due to potential feasibility issues, and because the associated error rate was therefore unquantifiable but potentially very high)

Benkwith v. Matrixx Initiatives, Inc., 467 F. Supp. 2d 1316, 1326, 1330, 1332 (M.D. Ala. 2006) (granting defendant’s motion to exclude testimony of an expert in the field of epidemiology regarding Zicam nasal spray’s causing plaintiff’s anosmia, because the opinions had not been tested and a rate of error could not be provided).

D.C. CIRCUIT

Ambrosini v. Upjohn Co., No. 84-3483 (NHJ), 1995 U.S. Dist. LEXIS 21318, at *16, *22–24 (D.D.C. Oct. 18, 1995) (excluding plaintiff’s teratology expert witness because the methodology used was found to be unreliable and could not yield an accurate error rate)


[1] Jed S. Rakoff, “Science and the Law: Uncomfortable Bedfellows,” 38 Seton Hall L. Rev. 1379, 1382–83 (2008) (observing that an error rate of 13 percent in polygraph interpretation would likely be insufficiently reliable to support admissibility of testimony based upon polygraph results).

[2] Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309, 1319 (2011) (suggesting that courts “frequently permit expert testimony on causation based on evidence other than statistical significance”).

[3] See, e.g., WLF Legal Backgrounder on Matrixx Initiatives (June 20, 2011); “The Matrixx – A Comedy of Errors”; “Matrixx Unloaded” (Mar. 29, 2011); “The Matrixx Oversold” (April 4, 2011); “De-Zincing the Matrixx.”

[4] Ernest Rutherford, the New Zealand-born British physicist who pioneered the investigation of radioactivity, won the Nobel Prize in Chemistry in 1908.

How to Fake a Moon Landing

June 8th, 2015

“The meaning of the world is the separation of wish and fact.”
Kurt Gödel

Everyone loves science except when science defeats wishes for a world not known. Coming to accept the world based upon evidence requires separating wish from fact. And when the evidence is lacking in quality or quantity, then science requires us to have the discipline to live with uncertainty rather than wallow in potentially incorrect beliefs.

Darryl Cunningham has written an engaging comic graphics book about science and the scientific worldview. Darryl Cunningham, How to Fake a Moon Landing: Exposing the Myths of Science Denial (2013). Through pictorial vignettes, taken from current controversies, Cunningham has created a delightful introduction to scientific methodology and thinking. Cunningham has provided chapters on several modern scandalous deviations from the evidence-based understanding of the world, including:

  • The Moon Hoax
  • Homeopathy
  • Chiropractic
  • The MMR Vaccination Scandal
  • Evolution
  • Fracking
  • Climate Change, and
  • Science Denial

Most people will love this book. Lawyers will love the easy-to-understand captions. Physicians will love the debunking of chiropractic. Republicans will love the book’s poking fun at (former Dr.) Andrew Wakefield and his contrived MMR vaccination-autism scare, and the liberal media’s unthinking complicity in his fraud. Democrats will love the unraveling of the glib, evasive assertions of the fracking industry. New Agers will love the book because of its neat pictures, and they probably won’t read the words anyway, and so they will likely miss the wonderful deconstruction of homeopathy and other fashionable hokum. Religious people, however, will probably hate the fun poked at all attempts to replace evidence with superstitions.

Without rancor, Cunningham pillories all true believers who think that they can wish the facts of the world. At $16.95, the book is therapeutic and a bargain.

The Eleventh Circuit Confuses Adversarial and Methodological Bias, Manifestly Erroneously

June 6th, 2015

The Eleventh Circuit’s decision in Adams v. Laboratory Corporation of America is disturbing on many levels. Adams v. Lab. Corp. of Am., 760 F.3d 1322 (11th Cir. 2014). Professor David Bernstein has already taken the Circuit judges to task for their failure to heed the statutory requirements of Federal Rule of Evidence 702. See David Bernstein, “A regrettable Eleventh Circuit expert testimony ruling, Adams v. Lab. Corp. of America,” Wash. Post (May 26, 2015). Sadly, the courts’ strident refusal to acknowledge the statutory nature of Rule 702, and the Congressional ratification of the 2000 amendments to Rule 702, has become commonplace in the federal courts. Ironically, the holding of the Supreme Court’s decision in Daubert itself was that the lower courts were not free to follow common law that had not been incorporated into the first version of Rule 702.

There is much more wrong with the Adams case than just a recalcitrant disregard for the law: the Circuit displayed an equally distressing disregard for science. The case started as a negligent failure-to-diagnose cervical cancer claim against defendant Laboratory Corporation of America. Plaintiffs claimed that the failure to diagnose cancer led to delays in treatment, which eroded Mrs. Adams’s chance for a cure.

Before the Adams case arose, two professional organizations, the College of American Pathologists (CAP) and the American Society of Cytopathology (ASC), issued guidelines about how an appropriate retrospective review should be conducted. Both organizations were motivated by two concerns: protecting their members from exaggerated, over-extended, and bogus litigation claims, and a scientific understanding that a false-negative finding by a cytopathologist does not necessarily reflect a negligent interpretation of a Pap smear[1]. Both organizations called for a standard of blinded review in litigation to protect against hindsight bias. The Adamses retained a highly qualified pathologist, Dr. Dorothy Rosenthal, who, with full knowledge of the later diagnosis and the professional guidelines, reviewed the earlier Pap smears that were allegedly misdiagnosed as non-malignant. 760 F.3d at 1326. Rosenthal’s approach violated the CAP and ASC guidelines, as well as common sense.

The district judge ruled that Rosenthal’s approach was little more than an ipse dixit, and a subjective method that could not be reviewed objectively. Adams v. Lab. Corp. of Am., No. 1:10-CV-3309-WSD, 2012 WL 370262, at *15 (N.D. Ga. Feb. 3, 2012). In a published per curiam opinion, the Eleventh Circuit reversed, holding that the district judge’s analysis of Rosenthal’s opinion was “manifestly erroneous.” 760 F.3d at 1328. Judge Garza, of the Fifth Circuit, sitting by designation, concurred to emphasize his opinion that Rosenthal did not need a methodology, as long as she showed up with her qualifications and experience to review the contested Pap smears.

The Circuit opinion is a model of conceptual confusion. The judges refer to the professional society guidelines, but never provide citations. (See note 1, infra.) The Circuit judges were obviously concerned that the professional societies were promulgating standards to be used in judging claims against their own members for negligent false-negative interpretations of cytology or pathology. What the appellate judges failed to recognize, however, is that the professional societies had a strong methodological basis for insisting upon “blinded” review of the slides in controversy. Knowledge of the outcome must of necessity bias any subsequent review, such as the one conducted by plaintiffs’ expert witness, Rosenthal. Even a cursory reading of the two guidelines would have made clear that they were based on more than a desire to protect members against bogus claims; they cited data in support of their position[2]. Subsequent to the guidelines, several publications have corroborated the evidence-based need for blinded review[3].

The concepts of sensitivity, specificity, and positive predictive value are inherent in any screening procedure; they are very much part of the methodology of screening. These quantities, along with statistical analyses of concordance and discordance among experienced cytopathologists, can be estimated and assessed for accuracy and reliability. The Circuit judges in Adams, however, were blinded (in a bad way) to the scientific scruples that govern screening. The per curiam opinion suggests that:

“[t]he only arguably appreciable differences between Dr. Rosenthal’s method and the review method for LabCorp’s cytotechnologists is that Dr. Rosenthal (1) already knew that the patient whose slides she was reviewing had developed cancer and (2) reviewed slides from just one patient. Those differences relate to the lack of blinded review, which we address later.”

760 F.3d at 1329 n.10. And when the judges addressed the lack of blinded review, they treated hindsight bias, a cognitive bias and methodological flaw, in the same way they would have trial courts and litigants treat Dr. Rosenthal’s “philosophical bent” in favor of cancer patients: as “a credibility issue for the jury.” Id. at 1326–27, 1332. This conflation of methodological bias with adversarial bias, however, is a prescription for eviscerating judicial gatekeeping of flawed opinion testimony. Judge Garza, in his concurrence, would have gone further, declaring that Rosenthal had no methodology and thus was free to opine ad libitum.
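The screening measures discussed above are simple arithmetic on a two-by-two table. A short sketch, with hypothetical counts chosen only for illustration (nothing below comes from the Adams record):

```python
# Hypothetical screening results against a gold-standard diagnosis
# (illustrative numbers only, not from the Adams case):
tp, fn = 90, 10    # true positives, false negatives (disease present)
fp, tn = 45, 855   # false positives, true negatives (disease absent)

sensitivity = tp / (tp + fn)          # P(screen positive | disease)
specificity = tn / (tn + fp)          # P(screen negative | no disease)
ppv = tp / (tp + fp)                  # P(disease | screen positive)
false_negative_rate = fn / (tp + fn)  # the kind of error rate at issue

print(sensitivity, specificity, ppv, false_negative_rate)
# Even a highly competent screener with 90% sensitivity will miss 10% of
# true cases; a false-negative reading, standing alone, is therefore not
# proof of a negligent interpretation.
```

Blinded re-review exists precisely to estimate such rates fairly; a reviewer who already knows the later diagnosis cannot generate an honest false-negative rate.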

Although Rosenthal’s “philosophical bent” might perhaps be left to the crucible of cross-examination, hindsight bias could and should have been eliminated by insisting that Rosenthal wear the same “veil of ignorance” about Mrs. Adams’s future clinical course that the defendant wore when it originally evaluated the plaintiff’s Pap smears. Here Rosenthal’s adversarial bias was very probably exacerbated by her hindsight bias, and the Circuit missed a valuable opportunity to rein in both kinds of bias.

Certainly in other areas of medicine, such as radiology, physicians are blinded to the correct interpretation and evaluated on their ability to line up with a gold standard. The NIOSH B-reader examination, for all its problems, at least tries to qualify physicians in the use of the International Labor Organization’s pneumoconiosis scales for interpreting plain-film radiographs for pulmonary dust diseases, by having them read and interpret films blinded to the NIOSH/ILO consensus interpretation.


[1] See Patrick L. Fitzgibbons & R. Marshall Austin, “Expert review of histologic slides and Papanicolaou tests in the context of litigation or potential litigation — Surgical Pathology Committee and Cytopathology Committee of the College of American Pathologists,” 124 Arch. Pathol. Lab. Med. 1717 (2000); American Society of Cytopathology, “Guidelines for Review of Gyn Cytology Samples in the Context of Litigation or Potential Litigation” (2000).

[2] The CAP guideline, for instance, cited R. Marshall Austin, “Results of blinded rescreening of Papanicolaou smears versus biased retrospective review,” 121 Arch. Pathol. Lab. Med. 311 (1997).

[3] Andrew A. Renshaw, K.M Lezon, and D.C. Wilbur, “The human false-negative rate of rescreening Pap tests: Measured in a two-arm prospective clinical trial,” 93 Cancer (Cancer Cytopathol.) 106 (2001); Andrew A. Renshaw, Mary L. Young, and E. Blair Holladay, “Blinded review of Papanicolaou smears in the context of litigation: Using statistical analysis to define appropriate thresholds,” 102 Cancer Cytopathology 136 (2004) (showing that data from blinded reviews can be interpreted in a statistically appropriate way, and defining standards to improve the accuracy and utility of blinded reviews); D. V. Coleman & J. J. R. Poznansky, “Review of cervical smears from 76 women with invasive cervical cancer: cytological findings and medicolegal implications,” 17 Cytopathology 127 (2006); Andrew A. Renshaw, “Comparing Methods to Measure Error in Gynecologic Cytology and Surgical Pathology,” 130 Arch. Path. & Lab. Med. 626 (2009).