TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Silicone Data Slippery and Hard to Find (Part 2)

July 5th, 2015

What Does a Scientist “Gain,” When His Signal Is Only Noise?

When the silicone litigation erupted in the early 1990s, Leoncio Garrido was a research professor at Harvard. In 1995, he was promoted from Assistant to Associate Professor of Radiology, and the Associate Director of NMR Core, at the Harvard Medical School. Along with Bettina Pfleiderer, Garrido published a series of articles on the use of silicon 29 nuclear magnetic resonance (NMR) spectroscopy, in which he claimed to detect and quantify silicon that migrated from the silicone in gel implants to the blood, livers, and brains of implanted women[1].

Plaintiffs touted Garrido’s work on NMR silicone as their “Harvard” study, to offset the prestige that the Harvard Nurses epidemiologic study[2] had in diminishing the plaintiffs’ claims that silicone caused autoimmune disease. Even though Garrido’s work was soundly criticized in the scientific literature[3], Garrido’s apparent independence of the litigation industry, his Harvard affiliation, and the difficulty in understanding the technical details of NMR spectroscopic work, combined to enhance the credibility of the plaintiffs’ claims.

Professor Peter Macdonald, who had consulted with defense counsel, was quite skeptical of Garrido’s work on silicone. In sum, Macdonald’s analysis showed that Garrido’s conclusions were not supported by the NMR spectra presented in Garrido’s papers. The spectra shown had signal-to-noise ratios too low to allow a determination of putative silicon biodegradation products (let alone to quantify such products), in either in vivo or ex vivo analyses. The existence of Garrido’s papers in peer-reviewed journals, however, allowed credulous scientists and members of the media to press unsupported theories about degradation of silicone into supposedly bioreactive silica.

A Milli-Mole Spills the Beans on the Silicone NMR Data

As the silicone litigation plodded on, a confidential informant dropped the dime on Garrido. The informant was a Harvard graduate student, who was quite concerned about the repercussions of pointing the finger at the senior scientist in charge of his laboratory work. Fortunately, and honorably, this young scientist was more concerned yet that Garrido was manipulating the NMR spectra to create his experimental results. Over the course of 1997, the informant, who was dubbed “Milli-Mole,” reported serious questions about the validity of the silicon NMR spectra reported by Garrido and colleagues, who had created the appearance of a signal by turning up the gain. Turning up the gain, of course, amplifies noise along with any true signal, and cannot improve the signal-to-noise ratio. Milli-Mole also confirmed Macdonald’s suspicions that Garrido had created noise artifacts (either intentionally or carelessly) that could be misrepresented as silicon-containing materials in silicon 29 NMR spectra.
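Macdonald’s point about gain is simple arithmetic, and a toy numerical sketch (hypothetical numbers, not Garrido’s actual spectra) makes it concrete: amplifier gain scales a weak peak and the surrounding noise by the same factor, so the signal-to-noise ratio is unchanged no matter how far the gain is turned up.

```python
import random
import statistics

random.seed(42)

# Hypothetical noise trace and a weak "peak" -- illustrative values only.
noise = [random.gauss(0, 1.0) for _ in range(10_000)]
peak_height = 0.5

def snr(peak, noise_values):
    # Signal-to-noise ratio: peak amplitude over RMS noise.
    rms = statistics.pstdev(noise_values)
    return peak / rms

base = snr(peak_height, noise)

# "Turning up the gain" multiplies everything -- the peak AND the noise.
gain = 100.0
amplified = snr(peak_height * gain, [gain * x for x in noise])

print(f"SNR before gain: {base:.3f}")
print(f"SNR after 100x gain: {amplified:.3f}")  # same ratio: gain cannot create signal
```

A larger trace on the screen is not a better measurement; only more scans, a stronger sample, or a lower noise floor improves the ratio.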

In late winter 1997, “Milli-Mole” reported that Harvard had empanelled an internal review board to investigate Garrido’s work on silicon detection in the blood of women with silicone gel breast implants. The board involved an associate dean of the medical school, along with an independent reviewer knowledgeable about NMR. Milli-Mole was relieved that he would not be put into the position of becoming a whistleblower, and he believed that once the board understood the issues, Garrido’s deviation from the scientific standard of care would become clear. Apparently, concern at Harvard was reaching a crescendo, as Garrido was about to present yet another abstract, on brain silicon levels, at an upcoming meeting of the International Society of Magnetic Resonance in Medicine, in Vancouver, BC. Milli-Mole reported that one of the co-authors strongly disagreed with Garrido’s interpretation of the data, but was anxious about withdrawing from the publication.

Science Means Never Having to Say You’re Sorry

By 1997, Judge Pointer had appointed a panel of neutral expert witnesses, but the process had become mired in procedural diversions. Late in 1997, Bristol-Myers Squibb sought and obtained a commission in its state court (New Jersey) cases for a Massachusetts subpoena for Garrido’s underlying data. Before BMS or the other defendants could act on this subpoena, however, Garrido published a rather weak, non-apologetic corrigendum to one of his papers[4].

Although Garrido’s “Erratum” concealed more than it disclosed, the publication of the erratum triggered an avalanche of critical scrutiny. One of the members of the editorial board of Magnetic Resonance in Medicine undertook a critical review of Garrido’s papers, as a result of the erratum and its fallout. This scientist concluded that:

“From my viewpoint as an analytical spectroscopist, the result of this exercise was disturbing and disappointing. In my judgement as a referee, none of the Garrido group’s papers (1–6) should have been published in their current form.”

William E. Hull, “A Critical Review of MR Studies Concerning Silicone Breast Implants,” 42 Magnetic Resonance in Medicine 984, 984 (1999).

Another scientist, Professor Christopher T.G. Knight, of the University of Illinois at Urbana-Champaign, commented in a letter in response to the Garrido erratum:

“A series of papers has appeared in this Journal from research groups at Harvard Medical School and Massachusetts General Hospital. These papers describe magnetic resonance studies that purport to show significant concentrations of silicone and chemically related species in the blood and internal organs of silicone breast implant recipients. One paper in particular details 29Si NMR spectroscopic results of experiments conducted on the blood of volunteers with and without implants. In the spectrum of the implant recipients’ blood there appear to be several broad signals, whereas no signals are apparent in the spectrum of the blood of a volunteer with no implant. On these grounds, the authors claim that silicone and its degradation products occur in significant quantities in the blood of some implant recipients. Although this conclusion has been challenged, it has been widely quoted.

* * * *

The erratum, in my opinion, deserves considerably more visibility, because it in effect greatly reduces the strength of the authors’ original claims. Indeed, it appears to be tantamount to a retraction of these.”

Christopher T.G. Knight, “Migration and Chemical Modification of Silicone in Women With Breast Prostheses,” 42 Magnetic Resonance in Med. 979 (1999) (internal citations omitted). Professor Knight went on to critique the original Garrido work, and the unsigned, unattributed erratum, as failing to show a difference between the spectra developed from blood of women with and without silicone implants. Garrido’s erratum suggested that his “error” was simply showing a spectrum with the wrong scale, but Professor Knight showed rather conclusively that other manipulations had taken place to alter the spectrum. Id.

In a brief response[5], Garrido and co-authors acknowledged that their silicon quantification was invalid, but still maintained that they had qualitatively determined the presence of silicon entities. Despite Garrido’s response, the scientific community soon became incredulous about his silicone NMR work.

Garrido’s fall-back claim that he had detected unquantified levels of silicon using 29Si NMR was definitively refuted, in short order[6]. Ultimately, Peter Macdonald’s critique of Garrido was vindicated, and Garrido’s work became yet another weight that helped sink the plaintiffs’ case. Garrido last published on silicone in 1999, and left Harvard soon thereafter, to become the Director of the Instituto de Ciencia y Tecnología de Polímeros, in Madrid, Spain. He is now a scientific investigator at the Institute’s Physical Chemistry of Polymers Department. The Institute’s website lists Garrido as Dr. Leoncio Garrido Fernández. Garrido’s silicone publications were never retracted, and Harvard never publicly explained Garrido’s departure.


[1] See, e.g., Bettina Pfleiderer & Leoncio Garrido, “Migration and accumulation of silicone in the liver of women with silicone gel-filled breast implants,” 33 Magnetic Resonance in Med. 8 (1995); Leoncio Garrido, Bettina Pfleiderer, B.G. Jenkins, Carol A. Hulka, D.B. Kopans, “Migration and chemical modification of silicone in women with breast prostheses,” 31 Magnetic Resonance in Med. 328 (1994). Dr. Carol Hulka is the daughter of Dr. Barbara Hulka, who later served as a neutral expert witness, appointed by Judge Pointer in MDL 926.

[2] Jorge Sanchez-Guerrero, Graham A. Colditz, Elizabeth W. Karlson, David J. Hunter, Frank E. Speizer, Matthew H. Liang, “Silicone Breast Implants and the Risk of Connective-Tissue Diseases and Symptoms,” 332 New Engl. J. Med. 1666 (1995).

[3] See R.B. Taylor, J.J. Kennan, “29Si NMR and blood silicon levels in silicone gel breast implant recipients,” 36 Magnetic Resonance in Med. 498 (1996); Peter Macdonald, N. Plavac, W. Peters, Stanley Lugowski, D. Smith, “Failure of 29Si NMR to detect increased blood silicon levels in silicone gel breast implant recipients,” 67 Analytical Chem. 3799 (1995).

[4] Leoncio Garrido, Bettina Pfleiderer, G. Jenkins, Carol A. Hulka, Daniel B. Kopans, “Erratum,” 40 Magnetic Resonance in Med. 689 (1998).

[5] Leoncio Garrido, Bettina Pfleiderer, G. Jenkins, Carol A. Hulka, Daniel B. Kopans, “Response,” 40 Magnetic Resonance in Med. 995 (1998).

[6] See Darlene J. Semchyschyn & Peter M. Macdonald, “Limits of Detection of Polydimethylsiloxane in 29Si NMR Spectroscopy,” 43 Magnetic Resonance in Med. 607 (2000) (Garrido’s erratum acknowledges that his group’s spectra contain no quantifiable silicon resonances, but their 29Si spectra fail to show evidence of silicone or breakdown products); Christopher T. G. Knight & Stephen D. Kinrade, “Silicon-29 Nuclear Magnetic Resonance Spectroscopy Detection Limits,” 71 Anal. Chem. 265 (1999).

Silicone Data Slippery and Hard to Find (Part 1)

July 4th, 2015

In the silicone gel breast implant litigation, plaintiffs’ counsel loved to wave around early Dow Corning experiments with silicone as an insecticide. As the roach crawls, it turned out that silicone was much better at attracting and dispatching dubious expert witnesses and their testimony. On this point, it is hard to dispute the judgment of Judge Jack Weinstein[1].

The silicone wars saw a bioethics expert appear as an expert witness to testify about a silicone study in which his co-authors refused to share their data with him; embarrassing, to say the least. “Where Are They Now? Marc Lappé and the Missing Data” (May 19, 2013). And another litigation expert witness lost his cachet when the Northridge earthquake destroyed his data. “Earthquake-Induced Data Loss – We’re All Shook Up” (June 26, 2015). But other expert witnesses rose to the challenge with the most creative and clever excuses for not producing their underlying data.

Rhapsody in Goo – My Data Are Traveling; Come Back Later

Testifying expert witness Dr. Eric Gershwin was the author of several research papers that claimed or suggested immunogenicity of silicone[2]. His results were criticized and seemed to elude replication, but he enjoyed a strong reputation as an NIH-funded researcher. Although several of his co-authors were from Specialty Labs, Inc. (Santa Monica, CA)[3], defense requests for Gershwin’s underlying data were routinely met with the glib response that the data were in Israel, where some of his other co-authors resided.

Gershwin testified in several trials, and the plaintiffs’ counsel placed great emphasis on his publications and on his testimony given before Judge Jones’ technical advisors in August 1996, before Judge Pointer’s panel of Rule 706 experts, in July 1997, and before the Institute of Medicine (IOM) in 1998.

Ultimately, this peer review of Gershwin’s work and claims was withering. The immunologist on Judge Jones’ panel (Dr. Stenzel-Poore) found Gershwin’s claims “not well substantiated.” Hall v. Baxter Healthcare Corp., 947 F.Supp. 1387 (D. Ore. 1996). The immunologist on Judge Pointer’s panel, Dr. Betty A. Diamond, was unshakeable in her criticisms of Gershwin’s work and his conclusions. Testimony of Dr. Betty A. Diamond, in MDL 926 (April 23, 1999). And the IOM found Gershwin’s work inadequate and insufficient to justify the extravagant claims that plaintiffs were making for immunogenicity and for causation of autoimmune disease. Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Institute of Medicine) (Wash. D.C. 1999).

Unlike Kossovsky, who left medical practice and his university position, Gershwin has continued to teach, research, and write after the collapse of the silicone litigation industry. And he has continued to testify, albeit in other kinds of tort cases.

In 2011, in testimony in a botox case, Dr. Gershwin attempted to distance himself from his prior silicone testimony. Gershwin testified that he was “an expert for silicone implants in the late 90s.” Testimony of M.E. Gershwin, at 18:17-25, in Ray v. Allergan, Inc., Civ. No. 3:10CV00136 (E.D. Va. Jan. 17, 2011). An expert witness for implants; how curious? Here is how Gershwin described the fate of his strident testimony in the silicone litigation:

“Q. And has a court ever limited or excluded your opinions?

A. So a long time ago, probably more than ten years ago or so, twice. I had many cases involving silicone implants. The court restricted some but not all of my testimony. Although, my understanding is that, when the FDA finally did reapprove the use of silicone implants, the papers I published and evidence I gave was actually part of the basis by which they developed their regulations. And there’s not been a single example in the literature of anyone that’s ever refuted or questioned any of my work. But I think that’s all, as far as I know.

* * * *
Q. Okay. So it’s not — you made it sound like it was some published work that you had. Was it your opinions that you expressed in the cases that you believe the FDA adopted as part of their guidelines, or do you —

A. So I’ll tell you, I haven’t visited this subject in a long time, and I certainly took quite a beating from a number of people over — I was very proud in the past that I did it. Women’s rights groups all over the United States applauded what I did. I haven’t looked at these documents in over ten years, so beyond that, you’d have to do your own research.”

Id. at 20:19 – 21:25. Actually, several courts excluded Gershwin, as well as other expert witnesses who relied upon his published papers. Proud to be beaten.

Some of Gershwin’s coauthors have stayed the course on silicone. Yehuda Shoenfeld continues to publish on sick-building syndrome and so-called silicone “adjuvant disease,” which Shoenfeld immodestly refers to as “Shoenfeld’s syndrome.[4]” Gershwin and Shoenfeld parted company in the late 1990s on silicone, although they continue to publish together on other topics[5].


[1] Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in the silicone gel breast implant litigation as “charlatans” and the litigation as largely based upon fraud).

[2] E. Bar-Meir, S.S. Teuber, H.C. Lin, I. Alosacie, G. Goddard, J. Terybery, N. Barka, B. Shen, J.B. Peter, M. Blank, M.E. Gershwin, Y. Shoenfeld, “Multiple Autoantibodies in Patients with Silicone Breast Implants,” 8 J. Autoimmunity 267 (1995); Merrill J. Rowley, Andrew D. Cook, Suzanne S. Teuber, M. Eric Gershwin, “Antibodies to Collagen: Comparative Epitope Mapping in Women with Silicon Breast Implants, Systemic Lupus Erythematosus and Rheumatoid Arthritis,” 6 J. Autoimmunity 775 (1994); Suzanne S. Teuber, Merrill J. Rowley, Steven H. Yoshida, Aftab A. Ansari, M.Eric Gershwin, “Anti-collagen Autoantibodies are Found in Women with Silicone Breast Implants,” 6 J. Autoimmunity 367 (1993).

[3] J. Teryberyd, J.B. Peter, H.C. Lin, and B. Shen.

[4] A partial sampler of Shoenfeld’s continued output on silicone:

Goren, G. Segal, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvant (ASIA) evolution after silicone implants: Who is at risk?” 34 Clin. Rheumatol. (2015) [in press]

Nesher, A. Soriano, G. Shlomai, Y. Iadgarov, T.R. Shulimzon, E. Borella, D. Dicker, Y. Shoenfeld, “Severe ASIA syndrome associated with lymph node, thoracic, and pulmonary silicone infiltration following breast implant rupture: experience with four cases,” 24 Lupus 463 (2015)

Dagan, M. Kogan, Y. Shoenfeld, G. Segal, “When uncommon and common coalesce: adult onset Still’s disease associated with breast augmentation as part of autoimmune syndrome induced by adjuvants (ASIA),” 34 Clin Rheumatol. 2015 [in press]

Soriano, D. Butnaru, Y. Shoenfeld, “Long-term inflammatory conditions following silicone exposure: the expanding spectrum of the autoimmune/ inflammatory syndrome induced by adjuvants (ASIA),” 32 Clin. Experim. Rheumatol. 151 (2014)

Perricone, S. Colafrancesco, R. Mazor, A. Soriano, N. Agmon-Levin, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvants (ASIA) 2013: Unveiling the pathogenic, clinical and diagnostic aspects,” 47 J. Autoimmun. 1 (2013)

Vera-Lastra, G. Medina, P. Cruz-Dominguez Mdel, L.J. Jara, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvants (Shoenfeld’s syndrome): clinical and immunological spectrum,” 9 Expert Rev Clin Immunol. 361 (2013)

Lidar, N. Agmon-Levin, P. Langevitz, Y. Shoenfeld, “Silicone and scleroderma revisited,” 21 Lupus 121 (2012)

S.D. Hajdu, N. Agmon-Levin, Y. Shoenfeld, “Silicone and autoimmunity,” 41 Eur. J. Clin. Invest. 203 (2011)

Levy, P. Rotman-Pikielny, M. Ehrenfeld, Y. Shoenfeld, “Silicone breast implantation-induced scleroderma: description of four patients and a critical review of the literature,” 18 Lupus 1226 (2009)

A.L. Nancy & Y. Shoenfeld, “Chronic fatigue syndrome with autoantibodies – the result of an augmented adjuvant effect of hepatitis-B vaccine and silicone implant,” 8 Autoimmunity Rev. 52 (2008)

Molina & Y. Shoenfeld, “Infection, vaccines and other environmental triggers of autoimmunity,” 38 Autoimmunity 235 (2005)

R.A. Asherson, Y. Shoenfeld, P. Jacobs, C. Bosman, “An unusually complicated case of primary Sjögren’s syndrome: development of transient ‘lupus-type’ autoantibodies following silicone implant rejection,” 31 J. Rheumatol. 196 (2004), and Erratum in 31 J. Rheumatol. 405 (2004)

Bar-Meir, M. Eherenfeld, Y. Shoenfeld, “Silicone gel breast implants and connective tissue disease–a comprehensive review,” 36 Autoimmunity 193 (2003)

Zandman-Goddard, M. Blank, M. Ehrenfeld, B. Gilburd, J. Peter, Y. Shoenfeld, “A comparison of autoantibody production in asymptomatic and symptomatic women with silicone breast implants,” 26 J. Rheumatol. 73 (1999)

[5] See, e.g., N. Agmon-Levin, R. Kopilov, C. Selmi, U. Nussinovitch, M. Sánchez-Castañón, M. López-Hoyos, H. Amital, S. Kivity, M.E. Gershwin, Y. Shoenfeld, “Vitamin D in primary biliary cirrhosis, a plausible marker of advanced disease,” 61 Immunol. Research 141 (2015).

Discovery of Retained, Testifying Statistician Expert Witnesses (Part 1)

June 30th, 2015

At times, the judiciary’s resistance to delving into the factual underpinnings of expert witness opinions is extraordinary. In one case, the Second Circuit affirmed a judgment for a plaintiff in a breach of contract action, based in large part upon expert witness testimony that presented the results of a computer simulation. Perma Research & Development v. Singer Co.[1] Although the trial court had promised to permit inquiry into the plaintiff’s computer expert witness’s source of data, programmed mathematical formulae, and computer programs, when the defendant asked the plaintiff’s expert witness to disclose his underlying data and algorithms, the district judge sustained the witness’s refusal on grounds that the requested materials were his “private work product” and “proprietary information.”[2] Despite the trial court’s failure to articulate any legally recognized basis for permitting the expert witness to stonewall in this fashion, a panel of the Circuit, in an opinion by superannuated Justice Tom Clark, affirmed, on an argument that the defendant “had not shown that it did not have an adequate basis on which to cross-examine plaintiff’s experts.” Judge Van Graafeiland dissented, indelicately pointing out that the majority had charged the defendant with failing to show that it had been deprived of a fair opportunity to cross-examine plaintiff’s expert witnesses while depriving the defendant of access to the secret underlying evidence and materials that were needed to demonstrate what could have been done on cross-examination[3]. The dissent traced the trial court’s error to its misconception that a computer is just a giant calculator, and pointed out that the majority contravened Circuit precedent[4] and evolving standards[5] for handling underlying data that was analyzed or otherwise incorporated into computer models and simulations.

Although the approach of Perma Research has largely been ignored, fallen into disrepute, and been superseded by statutory amendments[6], its retrograde reasoning continues to find occasional expression in reported decisions. The refinement of Federal Rule of Evidence 702 to require sound support for expert witnesses’ opinions has opened the flow of discovery of underlying facts and data considered by expert witnesses before generating their reports. The most recent edition of the Federal Judicial Center’s Manual for Complex Litigation treats computer-generated evidence and expert witnesses’ underlying data as both subject to pre-trial discovery when necessary to provide for full and fair litigation of the issues in the case[7].

The discovery of expert witnesses who have conducted statistical analyses poses difficult problems for lawyers. Unlike some other expert witnesses, who passively review data and arrive at an opinion that synthesizes published research, statisticians actually create evidence with new arrangements and analyses of data in the case. In this respect, statisticians are like material scientists who may test and record experimental observations on a product or its constituents. Inquiring minds will want to know whether the statistical analyses in the witness’s report were the results of pre-planned analysis protocols, or whether they were the second, third, or fifteenth alternative analysis. Earlier analyses conducted but not produced may reveal what the expert witness believed would have been the preferred analysis, if only the data had cooperated more fully. Statistical analyses conducted by expert witnesses provide plenty of opportunity for data-dredging, which can then be covered up by disclosing only selected analyses in the expert witness’s report.
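The data-dredging concern is easy to quantify. A short simulation (illustrative only; the number of analyses is made up) shows how an analyst who quietly tries many alternative analyses of pure noise will, with surprisingly high probability, find at least one nominally “significant” result to put in the report:

```python
import random

random.seed(7)

# Under a true null hypothesis, a valid test's p-value is uniform on [0, 1].
# Model an analyst who tries k alternative analyses and reports only the best.
def chance_of_false_positive(k, trials=100_000, alpha=0.05):
    hits = 0
    for _ in range(trials):
        # k independent looks at noise; keep only the smallest p-value
        best_p = min(random.random() for _ in range(k))
        if best_p < alpha:
            hits += 1
    return hits / trials

for k in (1, 5, 15):
    print(f"{k:>2} analyses -> ~{chance_of_false_positive(k):.2f} "
          "chance of reporting a 'significant' result")
    # theory: 1 - 0.95**k, i.e., roughly 0.05, 0.23, 0.54
```

A fifteenth alternative analysis of noise “succeeds” more often than not, which is precisely why the analyses that were attempted and discarded matter to cross-examination.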

The output of statisticians’ analyses will take the form of a “point estimate” of “effect size,” a significance or posterior probability, a set of regression coefficients, a summary estimate of association, or a similar measure that did not exist before the statistician used the underlying data to produce the analytical outcome, which is then the subject of further inference and opinion. Frequentist analyses must identify the probability model and other assumptions employed. Bayesian analyses must also identify the prior probabilities used as the starting point for combining further evidence to arrive at posterior probabilities. The science, creativity, and judgment involved in statistical methods challenge courts and counsel to discover, understand, reproduce, present, and cross-examine statistician expert witness testimony. And occasionally, there is duplicity and deviousness to uncover as well.
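Why the prior must be disclosed can be shown with a toy Bayesian update (hypothetical numbers throughout): the very same evidence yields sharply different posterior probabilities depending on the prior the analyst chose as a starting point.

```python
# Toy Bayes' rule update -- all numbers are hypothetical, for illustration.
# "sensitivity" is P(positive finding | hypothesis true);
# "false_positive" is P(positive finding | hypothesis false).
def posterior(prior, sensitivity=0.8, false_positive=0.1):
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

# Identical evidence, three different priors, three different conclusions.
for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior {posterior(prior):.3f}")
```

An expert who reports only the posterior, without the prior, has concealed the assumption doing much of the work, which is exactly the kind of input a cross-examiner needs produced.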

The discovery obligations with respect to statistician expert witnesses vary considerably among state and federal courts. The 1993 amendments to the Federal Rules of Civil Procedure created an automatic right to conduct depositions of expert witnesses[8]. Previously, parties in federal court had to show the inadequacy of other methods of discovery. Rule 26(a)(2)(B)(ii) requires the automatic production of “the facts or data considered by the [expert] witness in forming” his or her opinions. The literal wording of this provision would appear to restrict automatic, mandatory disclosure to those facts and data that are specifically considered in forming the opinions contained in the prescribed report. Several courts, however, have interpreted the term “considered” to include any information that expert witnesses review or generate, “regardless of whether the experts actually rely on those materials as a basis for their opinions.”[9]

Among the changes introduced by the 2010 amendments to the Federal Rules of Civil Procedure was a narrowing of the disclosure requirement of “facts and data” considered by expert witnesses in arriving at their opinions to exclude some attorney work product, as well as protecting drafts of expert witness reports from discovery.  The implications of the Federal Rules for statistician expert witnesses are not entirely clear, but these changes should not be used as an excuse to deprive litigants of access to the data and materials underlying statisticians’ analyses. Since the 2010 amendments, courts have enforced discovery requests for testifying expert witnesses’ notes because they were not draft reports or specific communications between counsel and expert witnesses[10].

The Requirements Associated With Producing A Report

Rule 26 is the key rule that governs disclosure and discovery of expert witnesses and their opinions. Under the current version of Rule 26(a)(2)(B), the scope of required disclosure in the expert report has been narrowed in some respects. Rule 26(a)(2)(B) now requires service of expert witness reports that contain, among other things:

(i) a complete statement of all opinions the witness will express and the basis and reasons for them;

(ii) the facts or data considered by the witness in forming them;

(iii) any exhibits that will be used to summarize or support them.

The Rule’s use of “them” seems clearly to refer back to “opinions,” which creates a problem with respect to materials considered generally with respect to the case or the issues, but not for the specific opinions advanced in the report.

The previous language of the rule required that the expert report disclose “the data or other information considered by the witness.[11]” The use of “other information” in the older version of the rule, rather than the new “data” was generally interpreted to authorize discovery of all oral and written communications between counsel and expert witnesses.  The trimming of Rule 26(a)(2)(B)(ii) was thus designed to place these attorney-expert witness communications off limits from disclosure or discovery.

The federal rules specify that the required report “is intended to set forth the substance of the direct examination[12].” Several courts have thus interpreted the current rule in a way that does not result in automatic production of all statistical analyses performed, but only those data and analyses the witness has decided to present at trial. The report requirement, as it now stands, is thus not necessarily designed to help adverse counsel fully challenge and cross-examine the expert witness on analyses attempted, discarded, or abandoned. If a statistician expert witness conducted multiple statistical tests before arriving at a “preferred” analysis, that expert witness, and instructing counsel, will obviously be all too happy to eliminate the unhelpful analyses from the direct examination, and from the purview of disclosure.

Some of the caselaw in this area makes clear that it is up to the requesting party to discover what it wants beyond the materials that must automatically be disclosed in, or with, the report. A party will not be heard to complain, or attack its adversary, about failure to produce materials never requested.[13] Citing Rule 26(a) and its subsections, which deal with the report, and not discovery beyond the report, several cases take a narrow view of disclosure as embodied in the report requirement.[14] In one case, McCoy v. Whirlpool Corp., the trial court did, however, permit the plaintiff to conduct a supplemental deposition of the defense expert witness to question him about his calculations[15].

A narrow view of automatic disclosure in some cases appears to protect statistician and other expert witnesses from being required to produce calculations, statistical analyses, and data outputs even for opinions that are identified in their reports, and intended to be the subject of direct examination at trial[16]. The trial court’s handling of the issues in Cook v. Rockwell International Corporation is illustrative of this questionable approach. The issue of the inadequacy of expert witnesses’ reports, for failing to disclose notes, calculations, and preliminary analyses, arose in the context of a Rule 702 motion challenging the admissibility of the witnesses’ opinion testimony. The trial court rejected “[a]ny suggestion that an opposing expert must be able to verify the correctness of an expert’s work before it can be admitted… ”[17]; any such suggestion “misstates the standard for admission of expert evidence under [Fed. R. Evid.] 702.[18]” The Cook court further rejected any “suggestion in Rule 26(a)(2) that an expert report is incomplete unless it contains sufficient information and detail for an opposing expert to replicate and verify in all respects both the method and results described in the report.[19]” Similarly, the court rejected the defense’s complaints that one of plaintiffs’ expert witness’s expert report and disclosures violated Rule 26(a)(2), by failing to provide “detailed working notes, intermediate results and computer records,” to allow a rebuttal expert witness to test the methodology and replicate the results[20]. The court observed that

“Defendants’ argument also confuses the expert reporting requirements of Rule 26(a)(2) with the considerations for assessing the admissibility of an expert’s opinions under Rule 702 of the Federal Rules of Evidence. Whether an expert’s method or theory can or has been tested is one of the factors that can be relevant to determining whether an expert’s testimony is reliable enough to be admissible. See Fed. R. Evid. 702 2000 advisory committee’s note; Daubert, 509 U.S. at 593, 113 S.Ct. 2786. It is not a factor for assessing compliance with Rule 26(a)(2)’s expert disclosure requirements.[21]

The Rule 702 motion to exclude an expert witness comes too late in the pre-trial process for complaints about failure to disclose underlying data and analyses. The Cook case never explicitly addressed Rule 26(b), or other discovery procedures, as a basis for the defense request for underlying documents, data, and materials.  In any event, the limited scope accorded to Rule 26 disclosure mechanisms by Cook emphasizes the importance of deploying ancillary discovery tools early in the pre-trial process.

The Format Of Documents and Data Files To Be Produced

The dispute in Helmert v. Butterball, LLC, is typical of what may be expected in a case involving statistician expert witness testimony. The parties exchanged reports of their statistical expert witnesses, as well as the data output files. The parties chose, however, to produce the data files in ways that were singularly unhelpful to the other side. One party produced data files in the “portable document format” (pdf) rather than in the native format of the statistical software package used (STATA). The other party produced data in a spreadsheet without any information about how the data were processed. The parties then filed cross-motions to compel the data in its “electronic, native format.” In addition, plaintiffs pressed for all the underlying data, formulae, and calculations. The court denied both motions on the theory that both sides had received copies of the data considered, and neither was denied facts or data considered by the expert witnesses in reaching their opinions[22]. The court refused plaintiffs’ request for formulae and calculations as well. The court’s discussion of its rationale for denying the cross-motions is framed entirely in terms of what parties may expect and be entitled to in the form of a report, without any mention of additional discovery mechanisms to obtain the sought-after materials. The court noted that the parties would have the opportunity to explore calculations at deposition.

The decision in Helmert seems typical of judicial indifference to, and misunderstanding of, the need for datasets, especially large datasets, in the form uploaded to, and used in, statistical software programs. What is missing from the Helmert opinion is a recognition that an effective deposition would require production of the requested materials in advance of the oral examination, so that the examining counsel can confer and consult with a statistical expert for help in formulating and structuring the deposition questions. There are at least two remedial considerations for future discovery motions of the sort seen in Helmert. First, the moving party should support its application with an affidavit of a statistical expert to explain the specific need for identification of the actual formulae used, the programming used within specific software programs to run analyses, and the interim and final outputs. Second, the moving party should press a strong analogy with document discovery of parties, in which courts routinely order production of “native format” versions of PowerPoint, Excel, and Word documents in response to document requests. Rule 34 of the Federal Rules of Civil Procedure requires that “[a] party must produce documents as they are kept in the usual course of business[23]” and that, “[i]f a request does not specify a form for producing electronically stored information, a party must produce it in a form or forms in which it is ordinarily maintained or in a reasonably usable form or forms.[24]” The Advisory Committee notes to Rule 34[25] make clear that:

“[T]he option to produce in a reasonably usable form does not mean that a responding party is free to convert electronically stored information from the form in which it is ordinarily maintained to a different form that makes it more difficult or burdensome for the requesting party to use the information efficiently in the litigation. If the responding party ordinarily maintains the information it is producing in a way that makes it searchable by electronic means, the information should not be produced in a form that removes or significantly degrades this feature.”

Under the Federal Rules, a requesting party’s obligation to specify a particular format for document production is superseded by the responding party’s obligation to refrain from manipulating or converting “any of its electronically stored information to a different format that would make it more difficult or burdensome for [the requesting party] to use.[26]” In Helmert, the STATA files should have been delivered as STATA native format files, and the requesting party should have requested, and received, all STATA input and output files, which would have permitted the requestor to replicate all analyses conducted.
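The point about native formats can be made concrete with a small sketch. The following Python fragment (using pandas; the dataset, file name, and variable labels are invented for illustration, and nothing here comes from the Helmert record) shows why a native STATA (.dta) file, unlike a pdf printout, lets the receiving party recover both the data and its machine-readable metadata:

```python
import os
import tempfile

import pandas as pd

# Hypothetical dataset with invented names -- not the actual Helmert data.
df = pd.DataFrame({
    "worker_id": [101, 102, 103],
    "donning_minutes": [4.5, 6.0, 5.25],
})

# A native STATA (.dta) file carries machine-readable metadata --
# variable labels, storage types -- that a pdf printout discards.
path = os.path.join(tempfile.mkdtemp(), "produced.dta")
df.to_stata(
    path,
    write_index=False,
    variable_labels={"donning_minutes": "Time spent donning gear (min)"},
)

# The receiving expert can reload both the data and the metadata,
# which is what makes replication of the analyses possible.
reader = pd.read_stata(path, iterator=True)
recovered = reader.read()          # the data, with original column order
labels = reader.variable_labels()  # the labels survive the round trip
reader.close()
```

A pdf of the same table would preserve only the printed digits; the variable labels, storage types, and any value labels would have to be re-keyed by hand, which is precisely the burden the native-format cases refuse to shift onto the requesting party.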

Some of the decided cases on expert witness reports are troubling because they do not explicitly state whether they are addressing the adequacy of automatic disclosure and reports, or a response to propounded discovery. For example, in Etherton v. Owners Ins. Co.[27], the plaintiff sought to preclude a defense accident reconstruction expert witness on grounds that the witness failed to produce several pages of calculations[28]. The defense argued that “[w]hile [the witness’s] notes regarding these calculations were not included in his expert report, the report does specifically identify the methods he employed in his analysis, and the static data used in his calculations,” and that “Rule 26 does not require the disclosure of draft expert reports, and it certainly does not require disclosure of calculations, as Plaintiff contends.[29]” The court in Etherton agreed that “Fed. R. Civ. P. 26(a)(2)(B) does not require the production of every scrap of paper with potential relevance to an expert’s opinion.[30]” The court laid the discovery default here upon the plaintiff, as the requesting party: “Although Plaintiff should have known that Mr. Ogden’s engineering analysis would likely involve calculations, Plaintiff never requested that documentation of those calculations be produced at any time prior to the date of [Ogden’s] deposition.[31]”

The Etherton court’s assessment that the defense expert witness’s calculations were “working notes,” which Rule 26(a)(2) does not require to be included in or produced with a report, seems a complete answer, except for the court’s musings about the new provisions of Rule 26(b)(4)(B), which protect draft reports.  Because of the court’s emphasis that the plaintiff never requested the documentation of the relevant calculations, the court’s musings about what was discoverable were clearly dicta.  The calculations, which would reveal data and inferential processes considered, appear to be core materials, subject to and important for discovery[32].

[This post is a substantial revision and update to an earlier post, “Discovery of Statistician Expert Witnesses” (July 19, 2012).]


[1] 542 F.2d 111 (2d Cir. 1976), cert. denied, 429 U.S. 987 (1976)

[2] Id. at 124.

[3] Id. at 126 & n.17.

[4] United States v. Dioguardi, 428 F.2d 1033, 1038 (2d Cir.), cert. denied, 400 U.S. 825 (1970) (holding that prosecution’s failure to produce computer program was error but harmless on the particular facts of the case).

[5] See, e.g., Roberts, “A Practitioner’s Primer on Computer-Generated Evidence,” 41 U. Chi. L. Rev. 254, 255-56 (1974); Freed, “Computer Records and the Law — Retrospect and Prospect,” 15 Jurimetrics J. 207, 208 (1975); ABA Sub-Committee on Data Processing, “Principles of Introduction of Machine Prepared Studies” (1964).

[6] Aldous, Note, “Disclosure of Expert Computer Simulations,” 8 Computer L.J. 51 (1987); Betsy S. Fiedler, “Are Your Eyes Deceiving You?: The Evidentiary Crisis Regarding the Admissibility of Computer Generated Evidence,” 48 N.Y.L. Sch. L. Rev. 295, 295–96 (2004); Fred Galves, “Where the Not-So-Wild Things Are: Computers in the Courtroom, the Federal Rules of Evidence, and the Need for Institutional Reform and More Judicial Acceptance,” 13 Harv. J.L. & Tech. 161 (2000); Leslie C. O’Toole, “Admitting that We’re Litigating in the Digital Age: A Practical Overview of Issues of Admissibility in the Technological Courtroom,” Fed. Def. Corp. Csl. Quart. 3 (2008); Carole E. Powell, “Computer Generated Visual Evidence: Does Daubert Make a Difference?” 12 Georgia State Univ. L. Rev. 577 (1995).

[7] Federal Judicial Center, Manual for Complex Litigation § 11.447, at 82 (4th ed. 2004) (“The judge should therefore consider the accuracy and reliability of computerized evidence, including any necessary discovery during pretrial proceedings, so that challenges to the evidence are not made for the first time at trial.”); id. at § 11.482, at 99 (“Early and full disclosure of expert evidence can help define and narrow issues. Although experts often seem hopelessly at odds, revealing the assumptions and underlying data on which they have relied in reaching their opinions often makes the bases for their differences clearer and enables substantial simplification of the issues.”)

[8] Fed. R. Civ. P. 26(b)(4)(A) (1993).

[9] United States v. Dish Network, L.L.C., No. 09-3073, 2013 WL 5575864, at *2, *5 (C.D. Ill. Oct. 9, 2013) (noting that the 2010 amendments did not change the meaning of the term “considered,” which includes “anything received, reviewed, read, or authored by the expert, before or in connection with the forming of his opinion, if the subject matter relates to the facts or opinions expressed.”); S.E.C. v. Reyes, 2007 WL 963422, at *1 (N.D. Cal. Mar. 30, 2007). See also South Yuba River Citizens’ League v. National Marine Fisheries Service, 257 F.R.D. 607, 610 (E.D. Cal. 2009) (majority rule requires production of materials considered even when work product); Trigon Insur. Co. v. United States, 204 F.R.D. 277, 282 (E.D. Va. 2001).

[10] Dongguk Univ. v. Yale Univ., No. 3:08–CV–00441 (TLM), 2011 WL 1935865 (D. Conn. May 19, 2011) (ordering production of a testifying expert witness’s notes, reasoning that they were neither draft reports nor communications between the party’s attorney and the expert witness, and they were not the mental impressions, conclusions, opinions, or legal theories of the party’s attorney); In re Application of the Republic of Ecuador, 280 F.R.D. 506, 513 (N.D. Cal. 2012) (holding that Rule 26(b) does not protect an expert witness’s own work product other than draft reports). But see Internat’l Aloe Science Council, Inc. v. Fruit of the Earth, Inc., No. 11-2255, 2012 WL 1900536, at *2 (D. Md. May 23, 2012) (holding that expert witness’s notes created to help counsel prepare for deposition of adversary’s expert witness were protected as attorney work product and protected from disclosure under Rule 26(b)(4)(C) because they did not contain opinions that the expert would provide at trial).

[11] Fed. R. Civ. P. 26(a)(2)(B)(ii) (1993) (emphasis added).

[12] Notes of Advisory Committee on Rules for Rule 26(a)(2)(B). See, e.g., Lithuanian Commerce Corp., Ltd. v. Sara Lee Hosiery, 177 F.R.D. 245, 253 (D.N.J. 1997) (expert witness’s written report should state completely all opinions to be given at trial, the data, facts, and information considered in arriving at those opinions, as well as any exhibits to be used), vacated on other grounds, 179 F.R.D. 450 (D.N.J. 1998).

[13] See, e.g., Gillespie v. Sears, Roebuck & Co., 386 F.3d 21, 35 (1st Cir. 2004) (holding that trial court erred in allowing cross-examination and final argument on expert witness’s supposed failure to produce all working notes and videotaped recordings while conducting tests, when objecting party never made such document requests).

[14] See, e.g., McCoy v. Whirlpool Corp., 214 F.R.D. 646, 652 (D. Kan. 2003) (Rule  26(a)(2) “does not require that a report recite each minute fact or piece of scientific information that might be elicited on direct examination to establish the admissibility of the expert opinion … Nor does it require the expert to anticipate every criticism and articulate every nano-detail that might be involved in defending the opinion[.]”).

[15] Id. (without distinguishing between the provisions of Rule 26(a) concerning reports and Rule 26(b) concerning depositions); see also Scott v. City of New York, 591 F.Supp. 2d 554, 559 (S.D.N.Y. 2008) (“failure to record the panoply of descriptive figures displayed automatically by his statistics program does not constitute best practices for preparation of an expert report,” but holding that the report contained “the data or other information” he considered in forming his opinion, as required by Rule 26); McDonald v. Sun Oil Co., 423 F.Supp. 2d 1114, 1122 (D. Or. 2006) (holding that Rule 26(a)(2)(B) does not require the production of an expert witness’s working notes; a party may not be sanctioned for spoliation based upon expert witness’s failure to retain notes, absent a showing of relevancy and bad faith), rev’d on other grounds, 548 F.3d 774 (9th Cir. 2008).

[16] In re Xerox Corp Securities Litig., 746 F. Supp. 2d 402, 414-15 (D. Conn. 2010) (“The court concludes that it was not necessary for the [expert witness’s] initial regression analysis to be contained in the [expert] report” that was disclosed pursuant to Rule 26(a)(2)), aff’d on other grounds sub nom. Dalberth v. Xerox Corp., 766 F.3d 172 (2d Cir. 2014). See also Cook v. Rockwell Int’l Corp., 580 F.Supp. 2d 1071, 1122 (D. Colo. 2006), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___, No. 10-1377, 2012 WL 2368857 (June 25, 2012), on remand, 13 F.Supp.3d 1153 (D. Colo. 2014), vacated, 2015 WL 3853593, No. 14–1112 (10th Cir. June 23, 2015); Flebotte v. Dow Jones & Co., No. Civ. A. 97–30117–FHF, 2000 WL 35539238, at *7 (D. Mass. Dec. 6, 2000) (“Therefore, neither the plain language of the rule nor its purpose compels disclosure of every calculation or test conducted by the expert during formation of the report.”).

[17] Cook, 580 F. Supp. 2d at 1121–22.

[18] Id.

[19] Id. & n. 55 (Rule 26(a)(2) does not “require that an expert report contain all the information that a scientific journal might require an author of a published paper to retain.”).

[20] Id. at 1121-22.

[21] Id.

[22] Helmert v. Butterball, LLC, No. 4:08-CV-00342, 2011 WL 3157180, at *2 (E.D. Ark. July 27, 2011).

[23] Fed. R. Civ. P. 34(b)(2)(E)(i).

[24] Fed. R. Civ. P. 34(b)(2)(E)(ii).

[25] Fed. R. Civ. P. 34, Advisory Comm. Notes (2006 Amendments).

[26] Crissen v. Gupta, 2013 U.S. Dist. LEXIS 159534, at *22 (S.D. Ind. Nov. 7, 2013), citing Craig & Landreth, Inc. v. Mazda Motor of America, Inc., 2009 U.S. Dist. LEXIS 66069, at *3 (S.D. Ind. July 27, 2009). See also Saliga v. Chemtura Corp., 2013 U.S. Dist. LEXIS 167019, *3-7 (D. Conn. Nov. 25, 2013).

[27] No. 10-cv-00892-MSKKLM, 2011 WL 684592 (D. Colo. Feb. 18, 2011)

[28] Id. at *1.

[29] Id.

[30] Id. at *2.

[31] Id.

[32] See Barnes v. Dist. of Columbia, 289 F.R.D. 1, 19–24 (D.D.C. 2012) (ordering production of underlying data and information because, “[i]n order for the [requesting party] to understand fully the . . . [r]eports, they need to have all the underlying data and information on how” the reports were prepared).

Forensic Science Conference Papers Published by Royal Society

June 27th, 2015

In February of this year, the Royal Society sponsored a two-day conference on “The paradigm shift for UK forensic science,” at The Royal Society, London. The meeting was organized by Professors Sue Black and Niamh Nic Daeid, of Dundee University, to discuss developments in the scientific reliability of the forensic sciences. The meeting program covered a broad range of topics, with presentations by scientists, judges, and lawyers on science in the courtroom.

The presentations are now available as open-access papers in the Philosophical Transactions of the Royal Society B: Biological Sciences:

Sue Black, Niamh Nic Daeid, Introduction: Time to think differently: catalysing a paradigm shift in forensic science

The Rt Hon. the Lord Thomas of Cwmgiedd, The legal framework for more robust forensic science evidence

Éadaoin O’Brien, Niamh Nic Daeid, Sue Black, Science in the court: pitfalls, challenges and solutions

Paul Roberts, Paradigms of forensic science and legal process: a critical diagnosis

Stephan A. Bolliger, Michael J. Thali, Imaging and virtual autopsy: looking back and forward

Anil K. Jain, Arun Ross, Bridging the gap: from biometrics to forensics

Christophe Champod, Fingerprint identification: advances since the 2009 National Research Council report

John M. Butler, The future of forensic DNA analysis

Claude Roux, Benjamin Talbot-Wright, James Robertson, Frank Crispino, Olivier Ribaux, The end of the (forensic science) world as we know it? The example of trace evidence

Kenneth G. Furton, Norma Iris Caraballo, Michelle M. Cerreta, Howard K. Holness, Advances in the use of odour as forensic evidence through optimizing and standardizing instruments and canines

Justice Tettey, Conor Crean, New psychoactive substances: catalysing a shift in forensic science practice?

Ian Evett, The logical foundations of forensic science: towards reliable knowledge

Ate Kloosterman, Anna Mapes, Zeno Geradts, Erwin van Eijk, Carola Koper, Jorrit van den Berg, Saskia Verheij, Marcel van der Steen, Arian van Asten, The interface between forensic science and technology: how technology could cause a paradigm shift in the role of forensic institutes in the criminal justice system

Alastair Ross, Integrating research into operational practice

Itiel E. Dror, Cognitive neuroscience in forensic science: understanding and utilizing the human element

Earthquake-Induced Data Loss – We’re All Shook Up

June 26th, 2015

Adam Marcus and Ivan Oransky are medical journalists who publish the Retraction Watch blog. Their blog’s coverage of error, fraud, plagiarism, and other publishing disasters is often first-rate, and a valuable curative for the belief that peer review publication, as it is now practiced, ensures trustworthiness.

Yesterday, Retraction Watch posted an article on earthquake-induced data loss. Shannon Palus, “Lost your data? Blame an earthquake” (June 25, 2015). A commenter on PubPeer raised concerns about a key figure in a paper[1]. The authors acknowledged a problem, which they traced to their loss of data in an earthquake. The journal retracted the paper.

This is not the first instance of earthquake-induced loss of data.

When John O’Quinn and his colleagues in the litigation industry created the pseudo-science of silicone-induced autoimmunity, they recruited Nir Kossovsky, a pathologist at UCLA Medical Center. Although Kossovsky looked a bit like Pee-Wee Herman, he was a graduate of the University of Chicago Pritzker School of Medicine, and the U.S. Naval War College, and a consultant to the FDA. In his dress whites, Kossovsky helped O’Quinn sell his silicone immunogenicity theories to juries and judges around the country. For a while, the theories sold well.

In testifying and dodging discovery for the underlying data in his silicone studies, Kossovsky was as slick as silicone itself. Ultimately, when defense counsel subpoenaed the underlying data from Kossovsky’s silicone study, Kossovsky shrugged and replied that the Northridge Earthquake destroyed his data. Apparently coffee cups and other containers of questionable fluids spilled on his silicone data in the quake, and Kossovsky’s emergency response was to obtain garbage cans and throw out the data. For the gory details, see Gary Taubes, “Silicone in the System: Has Nir Kossovsky really shown anything about the dangers of breast implants?” Discover Magazine (Dec. 1995).

As Mr. Taubes points out, Kossovsky’s paper was rejected by several journals before being published in the Journal of Applied Biomaterials, of which Kossovsky was a member of the editorial board. The lack of data did not, however, keep Kossovsky from continuing to testify, and from trying to commercialize, along with his wife, Beth Brandegee, and his father, Ram Kossowsky[2], an ELISA-based silicone “antibody” biomarker diagnostic test, Detecsil. Although Rule 702 had been energized by the Daubert decision in 1993, many judges were still not willing to take a hard look at Kossovsky’s study or his test, or to demand the supposedly supporting data. The Food and Drug Administration, however, eventually caught up with Kossovsky, and the Detecsil marketing ceased. Lillian J. Gill, FDA Acting Director, Office of Compliance, Letter to Beth S. Brandegee, President, Structured Biologicals (SBI) Laboratories: Detecsil Silicone Sensitivity Test (July 15, 1994); see Taubes, Discover Magazine.

After defense counsel learned of the FDA’s enforcement action against Kossovsky and his company, the litigation industry lost interest in Kossovsky, and his name dropped off trial witness lists. His name also dropped off the rolls of tenured UCLA faculty, and he apparently left medicine altogether to become a business consultant. Dr. Kossovsky became “an authority on business process risk and reputational value.” Kossovsky is now the CEO and Director of Steel City Re, which specializes in strategies for maintaining and enhancing reputational value. Ironic, eh?

A review of PubMed’s entries for Nir Kossovsky shows that his run in silicone started in 1983, and ended in 1996. He testified for plaintiffs in Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir.1994) (tried in 1991), and in the infamous case of Johnson v. Bristol-Myers Squibb, CN 91-21770, Tx Dist. Ct., 125th Jud. Dist., Harris Cty., 1992.

A bibliography of Kossovsky’s silicone oeuvre is listed below.


[1] Federico S. Rodríguez, Katterine A. Salazar, Nery A. Jara, María A García-Robles, Fernando Pérez, Luciano E. Ferrada, Fernando Martínez, and Francisco J. Nualart, “Superoxide-dependent uptake of vitamin C in human glioma cells,” 127 J. Neurochemistry 793 (2013).

[2] Father and son apparently did not agree on how to spell their last name.


Nir Kossovsky, D. Conway, Ram Kossowsky & D. Petrovich, “Novel anti-silicone surface-associated antigen antibodies (anti-SSAA(x)) may help differentiate symptomatic patients with silicone breast implants from patients with classical rheumatological disease,” 210 Curr. Topics Microbiol. Immunol. 327 (1996)

Nir Kossovsky, et al., “Preservation of surface-dependent properties of viral antigens following immobilization on particulate ceramic delivery vehicles,” 29 J. Biomed. Mater. Res. 561 (1995)

E.A. Mena, Nir Kossovsky, C. Chu, and C. Hu, “Inflammatory intermediates produced by tissues encasing silicone breast prostheses,” 8 J. Invest. Surg. 31 (1995)

Nir Kossovsky, “Can the silicone controversy be resolved with rational certainty?” 7 J. Biomater. Sci. Polymer Ed. 97 (1995)

Nir Kossovsky & C.J. Freiman, “Physicochemical and immunological basis of silicone pathophysiology,” 7 J. Biomater. Sci. Polym. Ed. 101 (1995)

Nir Kossovsky, et al., “Self-reported signs and symptoms in breast implant patients with novel antibodies to silicone surface associated antigens [anti-SSAA(x)],” 6 J. Appl. Biomater. 153 (1995), and “Erratum,” 6 J. Appl. Biomater. 305 (1995)

Nir Kossovsky & J. Stassi, “A pathophysiological examination of the biophysics and bioreactivity of silicone breast implants,” 24s1 Seminars Arthritis & Rheum. 18 (1994)

Nir Kossovsky & C.J. Freiman, “Silicone breast implant pathology. Clinical data and immunologic consequences,” 118 Arch. Pathol. Lab. Med. 686 (1994)

Nir Kossovsky & C.J. Freiman, “Immunology of silicone breast implants,” 8 J. Biomaterials Appl. 237 (1994)

Nir Kossovsky & N. Papasian, “Mammary implants,” 3 J. Appl. Biomater. 239 (1992)

Nir Kossovsky, P. Cole, D.A. Zackson, “Giant cell myocarditis associated with silicone: An unusual case of biomaterials pathology discovered at autopsy using X-ray energy spectroscopic techniques,” 93 Am. J. Clin. Pathol. 148 (1990)

Nir Kossovsky & R.B. Snow, “Clinical-pathological analysis of failed central nervous system fluid shunts,” 23 J. Biomed. Mater. Res. 73 (1989)

R.B. Snow & Nir Kossovsky, “Hypersensitivity reaction associated with sterile ventriculoperitoneal shunt malfunction,” 31 Surg. Neurol. 209 (1989)

Nir Kossovsky & Ram Kossowsky, “Medical devices and biomaterials pathology: Primary data for health care technology assessment,” 4 Internat’l J. Technol. Assess. Health Care 319 (1988)

Nir Kossovsky, John P. Heggers, and M.C. Robson, “Experimental demonstration of the immunogenicity of silicone-protein complexes,” 21 J. Biomed. Mater. Res. 1125 (1987)

Nir Kossovsky, John P. Heggers, R.W. Parsons, and M.C. Robson, “Acceleration of capsule formation around silicone implants by infection in a guinea pig model,” 73 Plastic & Reconstr. Surg. 91 (1984)

John Heggers, Nir Kossovsky, et al., “Biocompatibility of silicone implants,” 11 Ann. Plastic Surg. 38 (1983)

Nir Kossovsky, John P. Heggers, et al., “Analysis of the surface morphology of recovered silicone mammary prostheses,” 71 Plast. Reconstr. Surg. 795 (1983)

Diclegis and Vacuous Philosophy of Science

June 24th, 2015

Just when you thought that nothing more could be written intelligently about the Bendectin litigation, you find out you are right. Years ago, Michael Green and Joseph Sanders each wrote thoughtful, full-length books[1] about the litigation assault on the morning-sickness (nausea and vomiting of pregnancy) medication, which was voluntarily withdrawn by its manufacturer from the United States market. Dozens, if not hundreds, of law review articles discuss the scientific issues, the legal tactics, and the judicial decisions in the U.S. Bendectin litigation, including the Daubert case in the Supreme Court and in the Ninth Circuit. But perhaps fresh eyes might find something new and fresh to say.

Boaz Miller teaches social epistemology and philosophy of science, and he recently weighed in on the role that scientific consensus played in resolving the Bendectin litigation. Miller, “Scientific Consensus and Expert Testimony in Courts: Lessons from the Bendectin Litigation,” Foundations of Science (2014) (Oct. 17, 2014) (in press) [cited as Miller]. Miller astutely points out that scientific consensus may or may not be epistemic, that is, based upon robust, valid, sufficient scientific evidence of causality. Scientists are people, and sometimes they come to conclusions based upon invalid evidence, or because of cognitive biases, or social pressures, and the like. Sometimes scientists get the right result for the wrong reasons. From this position he argues that adverting to scientific consensus is fraught with danger of being misled, and that the Bendectin litigation specifically is an example of courts led astray by a “non-epistemic” scientific consensus. Miller at 1.

Miller is correct that the scientific consensus on Bendectin’s safety, which emerged after the initiation of litigation, played a role in resolving the litigation, id. at 8, but he badly misunderstands how the consensus actually operated to bring closure to the birth defects litigation. Remarkably, he pays no attention to the consolidated trial of over 800 cases before the Hon. Carl B. Rubin, in the Southern District of Ohio. This trial resulted in a defense verdict in March 1985, and a judgment that withstood appellate review. Assoc. Press, “U.S. Jury Clears a Nausea Drug in Birth Defects,” N.Y. Times (Mar. 13, 1985). The subsequent litigation turned into guerilla warfare based upon relatively few remaining individual cases in state and federal courts. In one of the state court cases, the trial court appointed neutral expert witnesses, who opined that plaintiffs had failed to make out their causal claims of teratogenicity in human beings. DePyper v. Navarro, No. 83–303467-NM, 1995 WL 788828 (Mich. Cir. Ct. Nov. 27, 1995).

To be sure, plaintiffs’ expert witnesses and plaintiffs’ counsel continued in their campaign to manufacture “reasonable medical certainty” of Bendectin’s teratogenicity, well after a scientific consensus emerged. Boaz Miller makes the stunning claim that this consensus was not a “knowledge-based” consensus because:

(1) the research was controlled by parties to the dispute (Miller at 10);

(2) the consensus ignored or diminished the “value” of in vitro toxicology (Miller at 13);

(3) the consensus relied most heavily upon the epidemiologic evidence (Miller at 14);

(4) the animal toxicology research was “prematurely” abandoned when the U.S. withdrew its product from the market (Miller at 15); and

(5) the withdrawal ended the “threat” to public health, and the concerns about teratogenicity (Miller at 15).

Miller’s asserted reasons are demonstrably incorrect. Although Richardson-Merrell funded some studies early on, by the time the scientific consensus emerged, many studies funded by neutral sources, and conducted by researchers of respected integrity, were widely available. The consensus did not diminish the value of in vitro toxicology; rather, scientists evaluated the available evidence through their understanding of epidemiology’s superiority in assessing actual risks in human populations. Animal studies were not prematurely abandoned; more accurately, the animal studies gave way to more revealing, valid studies in humans about human outcomes. The sponsor’s withdrawal of Bendectin in the United States was not the cause of any abandonment of research. The drug remained available outside the United States, in countries with less rapacious tort systems, and researchers would have, in any event, continued animal studies if there were something left open by previous research. A casual browse through PubMed’s collection of articles on thalidomide shows that animal research continued well after that medication had been universally withdrawn for use in pregnant women. Given that thalidomide was unquestionably a human teratogen, there was a continued interest in understanding its teratogenicity. No such continued interest existed for Bendectin after the avalanche of exculpatory human data.

What sort of inquiry permitted Miller to reach his conclusions? His article cites no studies, no whole-animal toxicology, no in vitro research, no epidemiologic studies, no systematic reviews, no regulatory agency reviews, and no meta-analysis. All exist in abundance. The full extent of his engagement with the actual scientific data and issues is a reference to an editorial and two letters to the editor[2]! From the exchange of views in one set of letters in 1985, Miller infers that there was “clear dissent within the community of toxicologists.” Miller at 13. The letters in question, however, were written in a journal of teratology, which was not limited to toxicology, and the interlocutors were well aware of the hierarchy of evidence that placed human observational studies at the top of the evidential pyramid.

Miller argues that it was possible that the consensus was not knowledge-based because it might have reflected the dominance of epidemiology over the discipline of toxicology. Again, he ignores the known dubious validity of inferring human teratogenicity from high dose whole animal or in vitro toxicology. By the time the scientific consensus emerged with respect to Bendectin’s safety, this validity point was widely appreciated by all but the most hardened rat killers, and plaintiffs’ counsel. In less litigious countries, the drug never left the market. No regulatory agency ever called for its withdrawal.

Miller might have tested whether the scientific community’s consensus on Bendectin, circa 1992 (when Daubert was being briefed in the Supreme Court), was robust by looking to how well it stood up to further testing. He did not, but he could easily have found the following. The U.S. sponsor of Diclegis, Duchesnay USA, sought and obtained the indication for its medication in pregnancy. Under U.S. law, Duchesnay’s new drug application had to establish safety and efficacy for this indication. In 2013, the U.S. FDA approved Bendectin, under the trade name Diclegis[3], as a combination of doxylamine succinate and pyridoxine hydrochloride for sale in the United States. Under the FDA’s pregnancy labeling system, Diclegis is a category A, with a specific indication for use in pregnancy. The FDA’s review of the actual data is largely available for all to see. See, e.g., Center for Drug Evaluation and Research, Other Reviews (Aug. 2012); Summary Review (April 2013); Pharmacology Review (March 2013); Medical Review (March 2013); Statistical Review (March 2013); Cross Discipline Team Leader Review (April 2013). Given the current scientific record, the consensus that emerged in the early 1990s looks strong. Indeed, the consensus was epistemically strong when reached two decades ago.

Miller is certainly correct that reliance upon consensus entails epistemic risks. Sometimes the consensus has not looked very hard or critically at all the evidence. Political, financial, and cognitive biases can be prevalent. Miller fails to show that any such biases were prevalent in the early 1990s, or that they infected judicial assessments of the plaintiffs’ causal claims in Bendectin litigation. Miller is also wrong to suggest that courts did not look beyond the consensus to the actual evidential base for plaintiffs’ claims. Through the lens of expert witness testimony, both party and court-appointed expert witnesses, courts and juries had a better view of the causation issues than Miller appreciates. Miller’s philosophy of science might be improved by his rolling up his sleeves and actually looking at the data[4].


[1] See Joseph Sanders, Bendectin on Trial: A Study of Mass Tort Litigation (1998); Michael D. Green, Bendectin and Birth Defects: The Challenges of Mass Toxic Substances Litigation (1996).

[2] Robert Brent, “Editorial comment on comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 429 (1985); Kenneth S. Brown, John M. Desesso, John Hassell, Norman W. Klein, Jon M. Rowland, A. J. Steffek, Betsy D. Carlton, Cas. Grabowski, William Slikker Jr. and David Walsh, “Comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 431 (1985); Lewis B. Holmes, “Response to comments on ‘Teratogen Update: Bendectin’,” 31 Teratology 432 (1985).

[3] See FDA News Release, “FDA approves Diclegis for pregnant women experiencing nausea and vomiting,” (April 8, 2013). The return of this drug to the United States market was held up as a triumph of science over the will of the litigation industry. See Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013).

[4] See “Bendectin, Diclegis & The Philosophy of Science” (Oct 26, 2013).

How to Fake a Moon Landing

June 8th, 2015

“The meaning of the world is the separation of wish and fact.”
Kurt Gödel

Everyone loves science except when science defeats wishes for a world not known. Coming to accept the world based upon evidence requires separating wish from fact. And when the evidence is lacking in quality or quantity, then science requires us to have the discipline to live with uncertainty rather than wallow in potentially incorrect beliefs.

Darryl Cunningham has written an engaging comic graphics book about science and the scientific worldview. Darryl Cunningham, How to Fake a Moon Landing: Exposing the Myths of Science Denial (2013). Through pictorial vignettes, taken from current controversies, Cunningham has created a delightful introduction to scientific methodology and thinking. Cunningham has provided chapters on several modern scandalous deviations from the evidence-based understanding of the world, including:

  • The Moon Hoax
  • Homeopathy
  • Chiropractic
  • The MMR Vaccination Scandal
  • Evolution
  • Fracking
  • Climate Change, and
  • Science Denial

Most people will love this book. Lawyers will love the easy-to-understand captions. Physicians will love the debunking of chiropractic. Republicans will love the book’s poking fun at (former Dr.) Andrew Wakefield and his contrived MMR vaccination-autism scare, and the liberal media’s unthinking complicity in his fraud. Democrats will love the unraveling of the glib, evasive assertions of the fracking industry. New Agers will love the book because of its neat pictures, and they probably won’t read the words anyway, and so they will likely miss the wonderful deconstruction of homeopathy and other fashionable hokum. Religious people, however, will probably hate the fun poked at all attempts to replace evidence with superstitions.

Without rancor, Cunningham pillories all true believers who think that they can wish away the facts of the world. At $16.95, the book is therapeutic and a bargain.

Judicial Control of the Rate of Error in Expert Witness Testimony

May 28th, 2015

In Daubert, the Supreme Court set out several criteria or factors for evaluating the “reliability” of expert witness opinion testimony. The third factor in the Court’s enumeration was whether the trial court had considered “the known or potential rate of error” in assessing the scientific reliability of the proffered expert witness’s opinion. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 593 (1993). The Court, speaking through Justice Blackmun, failed to provide much guidance on the nature of the errors subject to gatekeeping, on how to quantify the errors, or on how much error was too much. Rather than provide a taxonomy of error, the Court lumped “accuracy, validity, and reliability” together with a grand pronouncement that these measures were distinguished by no more than a “hen’s kick.” Id. at 590 n.9 (citing and quoting James E. Starrs, “Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702,” 26 Jurimetrics J. 249, 256 (1986)).

The Supreme Court’s failure to elucidate its “rate of error” factor has caused a great deal of mischief in the lower courts. In practice, trial courts have rejected engineering opinions on stated grounds of their lacking an error rate as a way of noting that the opinions were bereft of experimental and empirical evidential support[1]. For polygraph evidence, courts have used the error rate factor to obscure their policy prejudices against polygraphs, and to exclude test data even when the error rate is known, and rather low compared to what passes for expert witness opinion testimony in many other fields[2]. In the context of forensic evidence, the courts have rebuffed objections to random-match probabilities that would require that such probabilities be modified by the probability of laboratory or other error[3].

When it comes to epidemiologic and other studies that require statistical analyses, lawyers on both sides of the “v” frequently misunderstand p-values or confidence intervals to provide complete measures of error, and ignore the larger errors that result from bias, confounding, lack of study validity (internal and external), inappropriate data synthesis, and the like[4]. Not surprisingly, parties fallaciously argue that the Daubert criterion of “rate of error” is satisfied by an expert witness’s reliance upon studies that in turn use conventional 95% confidence intervals and measures of statistical significance in p-values below 0.05[5].

The lawyers who embrace confidence intervals and p-values as their sole measure of error rate fail to recognize that confidence intervals and p-values assess only one kind of error: random sampling error. Given the carelessness of the Supreme Court’s use of technical terms in Daubert, and its failure to engage with the actual evidence at issue in the case, it is difficult to know whether the Court intended random error as the error rate it had in mind[6]. The statistics chapter in the Reference Manual on Scientific Evidence helpfully points out that the inferences that can be drawn from data turn on p-values and confidence intervals, as well as on study design, data quality, and the presence or absence of systematic errors, such as bias or confounding. Reference Manual on Scientific Evidence at 240 (3d ed. 2011) [Manual]. Random errors are reflected in the size of p-values or the width of confidence intervals, but these measures of random sampling error ignore systematic errors such as confounding and study biases. Id. at 249 & n.96.
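The point is easy to demonstrate numerically. The following Python sketch, using hypothetical numbers of my own choosing rather than any actual study data, simulates repeated binomial sampling: when only random error is at work, roughly 95% of conventional Wald confidence intervals cover the true value, but a modest systematic error (here, a 10% overcounting of cases, standing in for misclassification or confounding) destroys that coverage even though each interval looks just as precise.

```python
import math
import random

random.seed(1)  # fixed seed so the simulation is reproducible

def wald_ci(k, n, z=1.96):
    """Conventional Wald 95% confidence interval for a binomial proportion."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

true_p, n, trials = 0.3, 2000, 1000  # hypothetical true rate and sample sizes
cover_clean = cover_biased = 0
for _ in range(trials):
    k = sum(random.random() < true_p for _ in range(n))
    lo, hi = wald_ci(k, n)
    cover_clean += lo <= true_p <= hi
    # hypothetical systematic error: cases overcounted by 10 percent
    lo_b, hi_b = wald_ci(min(n, round(k * 1.1)), n)
    cover_biased += lo_b <= true_p <= hi_b

print(f"coverage, random error only: {cover_clean / trials:.1%}")     # about 95%
print(f"coverage, with systematic bias: {cover_biased / trials:.1%}")  # far below 95%
```

The biased intervals are no wider than the unbiased ones; the confidence interval simply has no way of registering the systematic error, which is the Manual’s point about p-values and confidence intervals measuring random sampling error alone.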

The Manual’s chapter on epidemiology takes an even stronger stance: the p-value for a given study does not provide a rate of error or even a probability of error for an epidemiologic study:

“Epidemiology, however, unlike some other methodologies—fingerprint identification, for example—does not permit an assessment of its accuracy by testing with a known reference standard. A p-value provides information only about the plausibility of random error given the study result, but the true relationship between agent and outcome remains unknown. Moreover, a p-value provides no information about whether other sources of error – bias and confounding – exist and, if so, their magnitude. In short, for epidemiology, there is no way to determine a rate of error.”

Manual at 575. This stance seems not entirely justified, given that there are Bayesian approaches that would produce credibility intervals accounting for both sampling error and systematic biases. To be sure, such approaches have their own problems, and they have received little to no attention in courtroom proceedings to date.

The authors of the Manual’s epidemiology chapter, who are usually forgiving of judicial error in interpreting epidemiologic studies, point to one United States Court of Appeals case that fallaciously interpreted confidence intervals as magically quantifying bias and confounding in a Bendectin birth defects case. Id. at 575 n.96[7]. The Manual could have gone further to point out that, in the context of multiple studies of different designs and analyses, the cognitive biases involved in evaluating, assessing, and synthesizing the studies are also ignored by statistical measures such as p-values and confidence intervals. Although the Manual notes that assessing the role of chance in producing a particular set of sample data is “often viewed as essential when making inferences from data,” the Manual never suggests that random sampling error is the only kind of error that must be assessed when interpreting data. The Daubert criterion would appear to encompass all varieties of error, not just random error.

The Manual’s suggestion that epidemiology does not permit an assessment of the accuracy of epidemiologic findings misrepresents the capabilities of modern epidemiologic methods. Courts can, and do, invoke gatekeeping approaches to weed out confounded study findings. See “Sorting Out Confounded Research – Required by Rule 702” (June 10, 2012). The “reverse Cornfield inequality” was an important analysis that helped establish the causal connection between tobacco smoke and lung cancer[8]. Olav Axelson studied and quantified the role of smoking as a confounder in epidemiologic analyses of other putative lung carcinogens.[9] Quantitative methods for identifying confounders have been widely deployed[10].

A recent study in birth defects epidemiology demonstrates the power of sibling cohorts in addressing the problem of residual confounding in observational population studies with limited information about confounding variables. Researchers looking at various birth defect outcomes among offspring of women who used certain antidepressants in early pregnancy generally found no associations in pooled data from Iceland, Norway, Sweden, Finland, and Denmark. A putative association between first trimester maternal exposure to selective serotonin reuptake inhibitors and a specific kind of cardiac defect (right ventricular outflow tract obstruction, or RVOTO) did appear in the overall analysis, with an adjusted odds ratio of 1.48 (95% C.I., 1.15, 1.89), but the association was reversed when the analysis was limited to the sibling subcohort, which yielded an adjusted OR of 0.56 (95% C.I., 0.21, 1.49)[11]. This study and many others show how creative analyses can elucidate and quantify the direction and magnitude of confounding effects in observational epidemiology.

Systematic bias has also begun to succumb to more quantitative approaches. A recent guidance paper by well-known authors encourages the use of quantitative bias analysis to provide estimates of uncertainty due to systematic errors[12].
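One standard tool in the quantitative bias-analysis literature is the external-adjustment formula for a single unmeasured binary confounder. The Python sketch below uses hypothetical inputs of my own choosing, not figures from any cited study, to show how an observed risk ratio of 1.5 can be explained entirely by a strong confounder that is more prevalent among the exposed:

```python
def bias_factor(rr_cd, p_exposed, p_unexposed):
    """External-adjustment bias factor for one unmeasured binary confounder.
    rr_cd: confounder-disease risk ratio; p_exposed and p_unexposed: prevalence
    of the confounder among the exposed and unexposed groups, respectively."""
    return (p_exposed * (rr_cd - 1) + 1) / (p_unexposed * (rr_cd - 1) + 1)

# hypothetical scenario: a confounder (say, smoking) carrying a five-fold
# disease risk, twice as common among the exposed as among the unexposed
observed_rr = 1.5
bf = bias_factor(rr_cd=5.0, p_exposed=0.6, p_unexposed=0.3)
adjusted_rr = observed_rr / bf  # observed association with confounding removed

print(f"bias factor: {bf:.2f}; bias-adjusted RR: {adjusted_rr:.2f}")
```

On these assumed inputs the bias factor is about 1.55, so the apparent elevation in risk vanishes (adjusted RR of roughly 0.97). Reporting such bias-adjusted estimates alongside conventional confidence intervals is the kind of practice the guidance paper encourages.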

Although the courts have failed to articulate the nature and consequences of erroneous inference, some authors would reduce all of Rule 702 (and perhaps Rules 704 and 403 as well) to a requirement that proffered expert witnesses “account” for the known and potential errors in their opinions:

“If an expert can account for the measurement error, the random error, and the systematic error in his evidence, then he ought to be permitted to testify. On the other hand, if he should fail to account for any one or more of these three types of error, then his testimony ought not be admitted.”

Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 739 (2011).

Like most antic proposals to revise Rule 702, this reform vision shuts out the full range of Rule 702’s remedial scope. Scientists certainly try to identify potential sources of error, but they are not necessarily very good at it. See Richard Horton, “Offline: What is medicine’s 5 sigma?” 385 Lancet 1380 (2015) (“much of the scientific literature, perhaps half, may simply be untrue”). And as Holmes pointed out[13], certitude is not certainty, and expert witnesses are not likely to be good judges of their own inferential errors[14]. Courts continue to say and do wildly inconsistent things in the course of gatekeeping. Compare In re Zoloft (Sertraline Hydrochloride) Products, 26 F. Supp. 3d 449, 452 (E.D. Pa. 2014) (excluding expert witness) (“The experts must use good grounds to reach their conclusions, but not necessarily the best grounds or unflawed methods.”), with Gutierrez v. Johnson & Johnson, 2006 WL 3246605, at *2 (D.N.J. November 6, 2006) (denying motions to exclude expert witnesses) (“The Daubert inquiry was designed to shield the fact finder from flawed evidence.”).


[1] See, e.g., Rabozzi v. Bombardier, Inc., No. 5:03-CV-1397 (NAM/DEP), 2007 U.S. Dist. LEXIS 21724, at *7, *8, *20 (N.D.N.Y. Mar. 27, 2007) (excluding testimony from civil engineer about boat design, in part because witness failed to provide rate of error); Sorto-Romero v. Delta Int’l Mach. Corp., No. 05-CV-5172 (SJF) (AKT), 2007 U.S. Dist. LEXIS 71588, at *22–23 (E.D.N.Y. Sept. 24, 2007) (excluding engineering opinion that defective wood-carving tool caused injury because of lack of error rate); Phillips v. Raymond Corp., 364 F. Supp. 2d 730, 732–33 (N.D. Ill. 2005) (excluding biomechanics expert witness who had not reliably tested his claims in a way to produce an accurate rate of error); Roane v. Greenwich Swim Comm., 330 F. Supp. 2d 306, 309, 319 (S.D.N.Y. 2004) (excluding mechanical engineer, in part because witness failed to provide rate of error); Nook v. Long Island R.R., 190 F. Supp. 2d 639, 641–42 (S.D.N.Y. 2002) (excluding industrial hygienist’s opinion in part because witness was unable to provide a known rate of error).

[2] See, e.g., United States v. Microtek Int’l Dev. Sys. Div., Inc., No. 99-298-KI, 2000 U.S. Dist. LEXIS 2771, at *2, *10–13, *15 (D. Or. Mar. 10, 2000) (excluding polygraph data based upon showing that claimed error rate came from highly controlled situations, and that “real world” situations led to much higher error (10%) false positive error rates); Meyers v. Arcudi, 947 F. Supp. 581 (D. Conn. 1996) (excluding polygraph in civil action).

[3] See, e.g., United States v. Ewell, 252 F. Supp. 2d 104, 113–14 (D.N.J. 2003) (rejecting defendant’s objection to government’s failure to quantify laboratory error rate); United States v. Shea, 957 F. Supp. 331, 334–45 (D.N.H. 1997) (rejecting objection to government witness’s providing separate match and error probability rates).

[4] For a typical judicial misstatement, see In re Zoloft Products, 26 F. Supp.3d 449, 454 (E.D. Pa. 2014) (“A 95% confidence interval means that there is a 95% chance that the ‘‘true’’ ratio value falls within the confidence interval range.”).

[5] From my experience, this fallacious argument is advanced by both plaintiffs’ and defendants’ counsel and expert witnesses. See also Mark Haug & Emily Baird, “Finding the Error in Daubert,” 62 Hastings L.J. 737, 751 & n.72 (2011).

[6] See David L. Faigman, et al. eds., Modern Scientific Evidence: The Law and Science of Expert Testimony § 6:36, at 359 (2007–08) (“it is easy to mistake the p-value for the probability that there is no difference”).

[7] Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990). As with any error of this sort, there is always the question whether the judges were entrapped by the parties or their expert witnesses, or whether the judges came up with the fallacy on their own.

[8] See Joel B Greenhouse, “Commentary: Cornfield, Epidemiology and Causality,” 38 Internat’l J. Epidem. 1199 (2009).

[9] Olav Axelson & Kyle Steenland, “Indirect methods of assessing the effects of tobacco use in occupational studies,” 13 Am. J. Indus. Med. 105 (1988); Olav Axelson, “Confounding from smoking in occupational epidemiology,” 46 Brit. J. Indus. Med. 505 (1989); Olav Axelson, “Aspects on confounding in occupational health epidemiology,” 4 Scand. J. Work Envt’l Health 85 (1978).

[10] See, e.g., David Kriebel, Ariana Zeka, Ellen A. Eisen, and David H. Wegman, “Quantitative evaluation of the effects of uncontrolled confounding by alcohol and tobacco in occupational cancer studies,” 33 Internat’l J. Epidem. 1040 (2004).

[11] Kari Furu, Helle Kieler, Bengt Haglund, Anders Engeland, Randi Selmer, Olof Stephansson, Unnur Anna Valdimarsdottir, Helga Zoega, Miia Artama, Mika Gissler, Heli Malm, and Mette Nørgaard, “Selective serotonin reuptake inhibitors and venlafaxine in early pregnancy and risk of birth defects: population based cohort study and sibling design,” 350 Brit. Med. J. 1798 (2015).

[12] Timothy L. Lash, Matthew P. Fox, Richard F. MacLehose, George Maldonado, Lawrence C. McCandless, and Sander Greenland, “Good practices for quantitative bias analysis,” 43 Internat’l J. Epidem. 1969 (2014).

[13] Oliver Wendell Holmes, Jr., Collected Legal Papers at 311 (1920) (“Certitude is not the test of certainty. We have been cock-sure of many things that were not so.”).

[14] See, e.g., Daniel Kahneman & Amos Tversky, “Judgment under Uncertainty:  Heuristics and Biases,” 185 Science 1124 (1974).

Professor Bernstein’s Critique of Regulatory Daubert

May 15th, 2015

In the law of expert witness gatekeeping, the distinction between scientific claims made in support of litigation positions and claims made in support of regulations is fundamental. In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one”), aff’d 818 F.2d 145 (2d Cir. 1987), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988). Although scientists proffer opinions in both litigation and regulatory proceedings, their opinions are usually evaluated by substantially different standards. In federal litigation, civil and criminal, expert witnesses must be qualified and have an epistemic basis for their opinions, to satisfy the statutory requirements of Federal Rule of Evidence 702, and they must have reasonably relied upon otherwise inadmissible evidence (such as the multiple layers of hearsay involved in an epidemiologic study) under Rule 703. In regulatory proceedings, scientists are not subject to admissibility requirements, and the sufficiency requirements set by the Administrative Procedure Act are extremely low[1].

Some industry stakeholders are aggrieved by the low standards for scientific decision making in certain federal agencies, and they have urged that the more stringent litigation evidentiary rules be imported into regulatory proceedings. There are several potential problems with such reform proposals. First, the epistemic requirements of science generally, or of Rules 702 and 703 in particular, are not especially stringent. Scientific method leads to plenty of false positive and false negative conclusions, which are subject to daily challenge and revision. It is not that scientific inference is so strict, but that ordinary reasoning is so flawed, inexact, and careless. Second, the call for “regulatory Daubert” ignores the mandates of some federal agencies’ enabling statutes and guiding regulations, which call for precautionary judgments, and which allow agencies to decide issues on evidentiary displays that fall short of epistemic warrants for claims of knowledge.

Many lawyers who represent industry stakeholders have pressed for extension of Daubert-type gatekeeping to federal agency decision making. The arguments for constraining agency action find support in the over-extended claims that agencies and so-called public interest science advocates make in support of agency measures. Advocates and agency personnel seem to believe that worst-case scenarios and overstated safety claims are required as “bargaining” positions to achieve the most restrictive and possibly the most protective regulation that can be gotten from the administrative procedure, while trumping industry’s concerns about costs and feasibility. Still, extending Daubert to regulatory proceedings could have the untoward result of lowering the epistemic bar for both regulators and litigation fact finders.

In a recent article, Professor David Bernstein questions the expansion of Daubert into some regulatory realms. David E. Bernstein, “What to Do About Federal Agency Science: Some Doubts About Regulatory Daubert,” 22 Geo. Mason L. Rev. 549 (2015) [cited as Bernstein]. His arguments are an important counterweight to those who insist on changing agency rulemaking and actions at every turn. Because Bernstein is an acolyte and a defender of scientific scruples and reasoning in the courts, his arguments are worth taking seriously.

Bernstein reminds us that bad policy, as seen in regulatory agency rulemaking or decisions, is not always a scientific issue. In any event, regulatory actions, unlike jury decisions, are not, or at least should not be, “black boxes.” The agency’s rationale and reasoning are publicly stated, subject to criticism, and open to revision. Jury decisions are opaque, non-transparent, potentially unreasoned, not carefully articulated, and not subject to revision absent remarkable failures of proof.

One line of argument[2] pursued by Professor Bernstein follows from his observation that Daubert procedures are required to curtail litigation expert witness “adversarial bias.” Id. at 555. Bernstein traces adversarial bias to three sources:

(1) conscious bias;

(2) unconscious bias; and

(3) selection bias.

Id. Conscious bias stems from deliberate attempts by “hired guns” to deliver opinions that satisfy the lawyers who retained them. Unconscious biases are the more subtle, but no less potent, determinants of expert witness behavior, created by financial dependence upon, and allegiance to, the witness’s paymaster. Selection bias results from lawyers’ ability to choose expert witnesses to support their claims, regardless of whether those witnesses’ opinions are representative of the scientific community. Id.

Professor Bernstein’s taxonomy of bias is important, but incomplete. First, the biases he identifies operate fully in regulatory settings. Although direct financial remuneration is usually not a significant motivation for a scientist to testify before an agency, or to submit a whitepaper, professional advancement and cause advocacy are often powerful incentives at work. These incentives for self-styled public interest zealots may well create more powerful distortions of scientific judgment than any monetary factors in private litigation settings. As for selection bias, lawyers are ethically responsible for screening their expert witnesses, and there can be little doubt that once expert witnesses are disclosed, their opinions will align with their sponsoring parties’ interests. This systematic bias, however, does not mean that both sides’ expert witnesses will necessarily be unrepresentative or unscientific. In the silicone gel breast implant litigation (MDL 926), Judge Pointer, the presiding judge, insisted that both sides’ witnesses were “too extreme,” and he was stunned when his court-appointed expert witnesses filed reports that vindicated the defendants’ expert witnesses’ positions[3]. The defendants had selected expert witnesses who analyzed the data on sound scientific principles; the plaintiffs had selected expert witnesses who overreached in their interpretation of the evidence. Furthermore, many scientific disputes that find their way into the courtroom will not have the public profile of silicone gel breast implants, and there may be no body of scientific community opinion from which lawyers could select “outliers,” even if they wished to do so.

Professor Bernstein’s offered taxonomy of bias is incomplete because it does not include the most important biases that jurors (and many judges) struggle to evaluate:

random errors;

systematic biases;

confounding; and

cognitive biases.

These errors and biases, along with their consequential fallacies of reasoning, apply with equal force to agency and litigation science. Bernstein does point out, however, an important institutional difference between jury or judge trials and agency review and decisions based upon scientific evidence: agencies often have extensive in-house expertise. Although agency expertise may sometimes be blinded by its policy agenda, agency procedures usually afford the public and the scientific community the opportunity to understand what the agency decided, and why, and to respond critically when necessary. In the case of the Food and Drug Administration, agency decisions, whether pro- or contra-industry, are dissected and critiqued by the scientific and statistical community with great care and relish. Nothing of the same sort is possible in response to a jury verdict.

Professor Bernstein is not a science nihilist, and he would not have reviewing courts give a pass to whatever nonsense federal agencies espouse. He calls for enforcement of available statutory requirements that agency action be based upon the “best available science,” and for requiring agencies to explicitly separate and state their policy and scientific judgments. Bernstein also urges greater use of agency peer review, such as occasionally seen from the Institute of Medicine (soon to be the National Academy of Medicine), and the use of Daubert-like criteria for testimony at agency hearings. Bernstein at 554.

Proponents of regulatory Daubert should take Professor Bernstein’s essay to heart, with a daily dose of atorvastatin. Importing Rule 702 into agency proceedings may well undermine the rule’s import in litigation, civil and criminal, while achieving little in the regulatory arena. Consider the pending OSHA rulemaking for lowering the permissible exposure limit (PEL) of crystalline silica in the workplace. OSHA, along with some public health organizations, has tried to justify this rulemaking with many overwrought claims about the hazards of crystalline silica exposure at current levels. Clearly, some workers continue to work in unacceptably hazardous conditions, but the harms sustained by these workers can be tied to violations of the current PEL; they are hardly an argument for lowering that PEL. Contrary to OSHA’s parade of horribles, silicosis mortality in the United States has steadily declined over the last several decades. The following chart draws upon NIOSH and other federal governmental data:

 

Silicosis Deaths by Year

Silicosis deaths, crude and age-adjusted death rates, for U.S. residents age 15 and over, 1968–2007

from Susan E. Dudley & Andrew P. Morriss, “Will the Occupational Safety and Health Administration’s Proposed Standards for Occupational Exposure to Respirable Crystalline Silica Reduce Workplace Risk?” 35 Risk Analysis (2015), in press, doi: 10.1111/risa.12341 (NIOSH reference number: 2012F03–01, based upon multiple cause-of-death data from National Center for Health Statistics, National Vital Statistics System, with population estimates from U.S. Census Bureau).

The decline in silicosis mortality is all the more remarkable because it occurred in the presence of stimulated reporting from silicosis litigation, and misclassification of coal workers’ pneumoconiosis in coal-mining states.

The decline in silicosis mortality may be helpfully compared with the steady rise in mortality from accidental falls among men and women 65 years old, or older:

CDC MMWR Death Rates from Unintentional Falls 2015

Yahtyng Sheu, Li-Hui Chen, and Holly Hedegaard, “QuickStats: Death Rates* from Unintentional Falls† Among Adults Aged ≥ 65 Years, by Sex — United States, 2000–2013,” 64 CDC MMWR 450 (May 1, 2015). Over the observation period, these death rates roughly doubled in both men and women.

Is there a problem with OSHA rulemaking? Of course. The agency has gone off on a regulatory frolic and detour, trying to justify an onerous new PEL without any commitment to enforcing its current silica PEL. OSHA has invoked the prospect of medical risks, many of which are unproven, speculative, or remote, such as lung cancer, autoimmune disease, and kidney disease. The agency, however, is awash with PhDs, and I fear that Professor Bernstein is correct that the distortions of the science are not likely to be corrected by applying Rule 702 to agency factfinding. Courts, faced with complex prediction models and disputed medical claims made by agency and industry scientists, will do what they usually do: shrug and defer. And the blowback of “judicially approved” agency science in litigation contexts will be a cure worse than the disease. At bottom, the agency’s twisting of science is driven by policy goals and considerations, which require public debate and scrutiny, sound executive judgment, and careful legislative oversight and guidance.


[1] Even under the very low evidentiary and procedural hurdles, federal agencies still manage to outrun their headlights on occasion. See, e.g., Industrial Union Department v. American Petroleum Institute, 448 U.S. 607 (1980) (The Benzene Case); Gulf South Insulation v. U.S. Consumer Product Safety Comm’n, 701 F.2d 1137 (5th Cir. 1983); Corrosion Proof Fittings v. EPA, 947 F.2d 1201 (5th Cir. 1991).

[2] See also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27, 31 (2013); David E. Bernstein, “Expert Witnesses, Adversarial Bias, and the (Partial) Failure of the Daubert Revolution,” 93 Iowa L. Rev. 451, 456–57 (2008).

[3] Judge Pointer was less than enthusiastic about performing any gatekeeping role. Unlike most of today’s MDL judges, he was content to allow trial judges in the transferor districts to decide Rule 702 and other pre-trial issues. See Note, “District Judge Takes Issue With Circuit Courts’ Application of Gatekeeping Role” 3 Federal Discovery News (Aug. 1997) (noting that Chief Judge Pointer had criticized appellate courts for requiring district judges to serve as gatekeepers of expert witness testimony).

Science as Adversarial Process versus Group Think

May 7th, 2015

Climate scientists, at least those scientists who believe that climate change is both real and an existential threat to human civilization, have invoked their consensus as an evidentiary ground for political action. These same scientists have also used their claim of a consensus to shame opposing points of view (climate change skeptics) as coming from “climate change deniers.”

Consensus, or “general acceptance” as it is sometimes cast in legal discussions, is rarely more than nose counting. At best, consensus is a proxy for data quality and inferential validity. At worst, consensus is a manifestation of group think and herd mentality. Debates about climate change, as well as most scientific issues, would progress more dependably if there were more data, and less harrumphing about consensus.

Olah’s Nobel Speech

One Nobel laureate, Professor George Olah, explicitly rejected the kumbaya view of science and its misplaced emphasis on consensus and collaboration. In accepting his Nobel Prize in Chemistry, Olah emphasized the value of adversarial challenges in refining and establishing scientific discovery:

“Intensive, critical studies of a controversial topic always help to eliminate the possibility of any errors. One of my favorite quotation is that by George von Bekessy (Nobel Prize in Medicine, 1961).

‘[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, needs a few good enemies!’”

George A. Olah, “My Search for Carbocations and Their Role in Chemistry,” Nobel Lecture (Dec. 8, 1994), quoting George von Békésy, Experiments in Hearing 8 (N.Y. 1960); see also McMillan v. Togus Reg’l Office, Dep’t of Veterans Affairs, 294 F. Supp. 2d 305, 317 (E.D.N.Y. 2003) (“As in political controversy, ‘science is, above all, an adversary process.’”) (internal citation omitted).

Carl Sagan expressed similar views about the importance of skepticism in science:

“At the heart of science is an essential balance between two seemingly contradictory attitudes — an openness to new ideas, no matter how bizarre or counterintuitive they may be, and the most ruthless skeptical scrutiny of all ideas, old and new. This is how deep truths are winnowed from deep nonsense.”

Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark (1995); see also Cary Coglianese, “The Limits of Consensus,” 41 Environment 28 (April 1999).

Michael Crichton, no fan of Sagan, agreed at least on the principle:

“I want to . . . talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is [sic] reproducible results. The greatest scientists in history are great precisely because they broke with the consensus. There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”

Michael Crichton, “Lecture at California Institute of Technology: Aliens Cause Global Warming” (Jan. 17, 2003) (describing many examples of how “consensus” science historically has frustrated scientific progress).

Crystalline Silica, Carcinogenesis, and Faux Consensus

Clearly, there are times when consensus in science works against knowledge and data-driven inferences. Consider the saga of crystalline silica and lung cancer. Suggestions that silica causes lung cancer date back to the 1930s, but the suggestions were dispelled by data. The available data were evaluated by the likes of Wilhelm Hueper[1], Cuyler Hammond[2] (Selikoff’s go-to-epidemiologist), Gerrit Schepers[3], and Hans Weill[4]. Even etiologic fabulists, such as Kaye Kilburn, disclaimed any connection between silica or silicosis and lung cancer[5]. As recently as 1988, august international committees, writing for the National Institute for Occupational Safety and Health, acknowledged the evidentiary insufficiency of any claim that silica caused lung cancer[6].

IARC (1987)

So what happened to the “consensus”? A group of activist scientists, who disagreed with the consensus, sought to establish their own, new consensus. Working through the International Agency for Research on Cancer (IARC), these scientists were able to inject themselves into the IARC working group process, and gradually raise the IARC ranking of crystalline silica. In 1987, the advocate scientists were able to move the IARC to adopt a “limited evidence” classification for crystalline silica.

The term “limited evidence” is defined incoherently by the IARC as evidence that provides for a “credible” causal explanation, even though chance, bias, and confounding have not been adequately excluded. Despite the incoherent definition that giveth and taketh away, the 1987 IARC reclassification[7] into Group 2A had regulatory consequences that saw silica classified as a “regulatory carcinogen,” or a substance that was “reasonably anticipated to be a carcinogen.”

The advocates’ prophecy was self-fulfilling. In 1996, another working group of the IARC met in Lyon, France, to deliberate on the classification of crystalline silica. The 1996 working group agreed, by a close vote, to reclassify crystalline silica as a “known human carcinogen,” or a Group 1 carcinogen. The decision was accepted and reported officially in volume 68 of the IARC monographs, in 1997.

According to participants, the debate was intense and the vote close. Here is the description from one of the combatants:

“When the IARC Working Group met in Lyon in October 1996 to assess the carcinogenicity of crystalline silica, a seemingly interminable debate ensued, only curtailed by a reminder from the Secretariat that the IARC was concerned with the identification of carcinogenic hazards and not the evaluation of risks. The important distinction between the potential to cause disease in certain circumstances, and in what circumstances, is not always appreciated.

*   *   *   *   *

Even so, the debate in Lyon continued for some time, finally ending in a narrow vote, reflecting the majority view of the experts present at that particular time.”

See Corbett McDonald, “Silica and Lung Cancer: Hazard or Risk,” 44 Ann. Occup. Hyg. 1, 1 (2000); see also Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer: The Problem of Conflicting Evidence,” 8 Indoor Built Env’t 8 (1999).

Although the IARC reclassification hardly put the silica lung cancer debate to rest, it did push the regulatory agencies to walk in lockstep with the IARC and declare crystalline silica to be a “known human carcinogen.” More important, it gave regulators and scientists an excuse to avoid the hard business of evaluating complicated data, and of thinking for themselves.

Post IARC

From a sociology of science perspective, the aftermath of the 1997 IARC monograph is a fascinating natural experiment to view the creation of a sudden, thinly supported, new orthodoxy. To be sure, there were scientists who looked carefully at the IARC’s stated bases and found them inadequate, inconsistent, and incoherent[8]. One well-regarded pulmonary text in particular gives the IARC and regulatory agencies little deference:

“Silica-induced lung cancer

A series of studies suggesting that there might be a link between silica inhalation and lung cancer was reviewed by the International Agency for Research on Cancer in 1987, leading to the conclusion that the evidence for carcinogenicity of crystalline silica in experimental animals was sufficient, while in humans it was limited.112 Subsequent epidemiological publications were reviewed in 1996, when it was concluded that the epidemiological evidence linking exposure to silica to the risk of lung cancer had become somewhat stronger, but that in the absence of lung fibrosis it remained scanty.113 The pathological evidence in humans is also weak in that premalignant changes around silicotic nodules are seldom evident.114 Nevertheless, on this rather insubstantial evidence, lung cancer in the presence of silicosis (but not coal or mixed-dust pneumoconiosis) has been accepted as a prescribed industrial disease in the UK since 1992.115 Some subsequent studies have provided support for this decision.116 In contrast to the sparse data on classic silicosis, the evidence linking carcinoma of the lung to the rare diffuse pattern of fibrosis attributed to silica and mixed dusts is much stronger and appears incontrovertible.33,92”

Bryan Corrin[9] & Andrew Nicholson, Pathology of the Lungs (3d ed. 2011).

=======================================================

Cognitive biases cause some people to see a glass half full, while others see it half empty. Add a “scientific consensus” to the mix, and many people will see a glass filled 5% as 95% full.

Consider a paper by Centers for Disease Control and NIOSH authors on silica exposure and mortality from various diseases. Geoffrey M. Calvert, Faye L. Rice, James M. Boiano, J. W. Sheehy, and Wayne T. Sanderson, “Occupational silica exposure and risk of various diseases: an analysis using death certificates from 27 states of the United States,” 60 Occup. Envt’l Med. 122 (2003). The paper was nominated for the Charles Shepard Award for Best Scientific Publication by a CDC employee, and was published in the British Medical Journal’s specialty journal on occupational medicine. The study analyzed death certificate data from the U.S. National Occupational Mortality Surveillance (NOMS) system, which is based upon the collaboration of NIOSH, the National Center for Health Statistics, the National Cancer Institute, and some state health departments. Id. at 122.

From about 4.8 million death certificates included in their analysis, the authors found a statistically significantly decreased mortality odds ratio (MOR) for lung cancer among those who had silicosis (MOR = 0.70, 95% C.I., 0.55 to 0.89). Of course, with silicosis on the death certificates along with lung cancer, the investigators could be reasonably certain about silica exposure. Given the group-think in occupational medicine about silica and lung cancer, the authors struggled to explain away their finding:

“Although many studies observed that silicotics have an increased risk for lung cancer, a few studies, including ours, found evidence suggesting the lack of such an association. Although this lack of consistency across studies may be related to differences in study design, it suggests that silicosis is not necessary for an increased risk of lung cancer among silica exposed workers.”

Well, this statement is at best disingenuous. The authors did not merely find a lack of an association; they found a statistically significant inverse or “negative” association between silicosis and lung cancer. So it is not the case that silicosis is not necessary for an increased risk; silicosis is antithetical to an increased risk.

Looking only at death certificate information, without any data on known or suspected confounders (“diet, hobbies, tobacco use, alcohol use, or medication,” id. at 126, or comorbid diseases or pulmonary impairment, or other occupational or environmental exposures), the authors inferred low, medium, high, and “super high” silica exposure from job categories. Comparing the ever-exposed categories with low exposure yielded absolutely no association between exposure and lung cancer, and subgroup analyses (without any correction for multiple comparisons) found little association, although two subgroups were nominally statistically significantly increased, and one was nominally statistically significantly decreased, at very small deviations from expected:

Lung Cancer Mortality Odds Ratios (p-value for trend < 0.001)

ever vs. low/no exposure:            0.99 (0.98 to 1.00)

medium vs. low/no exposure:          0.88 (0.87 to 0.90)

high vs. low/no exposure:            1.13 (1.11 to 1.15)

super high vs. low/no exposure:      1.13 (1.06 to 1.21)

Id. at Table 4, and 124.

On this weak evidentiary display, the authors declare that their “study corroborates the association between crystalline silica exposure and silicosis, lung cancer.” Id. at 123. In their conclusions, they elaborate:

“Our findings support an association between high level crystalline silica exposure and lung cancer. The statistically significant MORs for high and super high exposures compared with low/no exposure (MORs = 1.13) are consistent with the relative risk of 1.3 reported in a meta-analysis of 16 cohort and case-control studies of lung cancer in crystalline silica exposed workers without silicosis.”

Id. at 126. Actually not: the 95% confidence intervals for Calvert’s reported MORs (1.11 to 1.15, and 1.06 to 1.21) exclude an OR of 1.3.
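The arithmetic behind that observation is simple enough to check mechanically. A minimal sketch (not from the paper; the interval endpoints are simply the figures reported in Table 4, as quoted above) asks, for each reported interval, whether the meta-analytic value of 1.3 falls inside it:

```python
# Reported lung cancer MORs (point estimate, 95% CI) from Calvert et al., Table 4
mors = {
    "ever vs. low/no": (0.99, 0.98, 1.00),
    "medium vs. low/no": (0.88, 0.87, 0.90),
    "high vs. low/no": (1.13, 1.11, 1.15),
    "super high vs. low/no": (1.13, 1.06, 1.21),
}

META_RR = 1.3  # relative risk from the meta-analysis the authors cite

# A confidence interval "excludes" the meta-analytic value
# when 1.3 lies outside [lower, upper].
for label, (point, low, high) in mors.items():
    excludes = not (low <= META_RR <= high)
    print(f"{label}: MOR {point} ({low} to {high}) excludes 1.3: {excludes}")
```

Every upper bound is below 1.3, so each reported interval excludes the meta-analytic relative risk the authors invoke as corroboration.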

The Calvert study thus is a stunning example of authors, prominent in the field of public health, looking at largely exculpatory data and declaring that they have confirmed an important finding of silica carcinogenesis. And to think that United States taxpayers paid for this paper, and that the authors almost received an honorific award for this thing!


[1] Wilhelm Hueper, “Environmental Lung Cancer,” 20 Industrial Medicine & Surgery 49, 55-56 (1951) (“However, the great majority of investigators have come to the conclusion that there does not exist any causal relation between silicosis and pulmonary or laryngeal malignancy”).

[2] Cuyler Hammond & W. Machle, “Environmental and Occupational Factors in the Development of Lung Cancer,” Ch. 3, pp. 41, 50, in E. Mayer & H. Maier, Pulmonary Carcinoma: Pathogenesis, Diagnosis, and Treatment (N.Y. 1956) (“Studies by Vorwald (41) and others agree in the conclusion that pneumoconiosis in general, and silicosis in particular, do not involve any predisposition of lung cancer.”).

[3] Gerrit Schepers, “Occupational Chest Diseases,” Chap. 33, in A. Fleming, et al., eds., Modern Occupational Medicine at 455 (Philadelphia 2d ed. 1960) (“Lung cancer, of course, occurs in silicotics and is on the increase. Thus far, however, statistical studies have failed to reveal a relatively enhanced incidence of pulmonary neoplasia in silicotic subjects.”).

[4] Ziskind, Jones, and Weill, “State of the Art: Silicosis,” 113 Am. Rev. Respir. Dis. 643, 653 (1976) (“There is no indication that silicosis is associated with increased risk for the development of cancer of the respiratory or other systems.”); Weill, Jones, and Parkes, “Silicosis and Related Diseases,” Chap. 12, in Occupational Lung Disorders (3d ed. 1994) (“It may be reasonably concluded that the evidence to date that occupational exposure to silica results in excess lung cancer risk is not yet persuasive.”).

[5] Kaye Kilburn, Ruth Lilis, Edwin Holstein, “Silicosis,” in Maxcy-Rosenau, Public Health and Preventive Medicine, 11th ed., at 606 (N.Y. 1980) (“Lung cancer is apparently not a complication of silicosis”).

[6] NIOSH Silicosis and Silicate Disease Committee, “Diseases Associated With Exposure to Silica and Nonfibrous Silicate Minerals,” 112 Arch. Path. & Lab. Med. 673, 711b, ¶ 2 (1988) (“The epidemiological evidence at present is insufficient to permit conclusions regarding the role of silica in the pathogenesis of bronchogenic carcinoma.”).

[7] 42 IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans at 22, 111, § 4.4 (1987).

[8] See, e.g., Patrick A. Hessel, John F. Gamble, J. Bernard L. Gee, Graham Gibbs, Francis H. Y. Green, W. Keith C. Morgan, and Brooke T. Mossman, “Silica, Silicosis, and Lung Cancer: A Response to A Recent Working Group,” 42 J. Occup. Envt’l Med. 704, 718 (2000) (“The data demonstrate a lack of association between lung cancer and exposure to crystalline silica in human studies. Furthermore, silica is not directly genotoxic and has been shown to be a pulmonary carcinogen in only one animal species, the rat, which seems to be an inappropriate model for carcinogenesis in humans.”).

[9] Professor of Thoracic Pathology, National Heart and Lung Institute, Imperial College School of Medicine; Honorary Consultant Pathologist, Brompton Hospital, London, UK.