TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Clinical Trials and Epidemiologic Studies Biased by False and Misleading Data From Research Participants

October 2nd, 2015

Many legal commentators erroneously refer to epidemiologic studies as “admitted” into evidence.[1] These expressions are sloppy and unfortunate because they obscure the tenuousness of study validity, and the many levels of hearsay that an epidemiologic study represents. Rule 702 permits expert witness opinion that has an epistemic basis, and Rule 703 allows expert witnesses to rely upon otherwise inadmissible facts and data, as long as real experts in the field would reasonably rely upon such facts and data. Nothing in Rule 702 or 703 makes an epidemiologic study itself admissible. And the general inadmissibility of the studies themselves is a good thing, given that they would be meaningless to the trier of fact without the endorsements, qualifications, and explanations of an expert witness, and given that many studies are inaccurate, invalid, and lack data integrity to boot.

Dr. Frank Woodside was kind enough to call my attention to an interesting editorial piece in the current issue of the New England Journal of Medicine, which reinforced the importance of recognizing that epidemiologic studies and clinical trials are inadmissible in themselves. The editorial, by scientists from the National Institute of Environmental Health Sciences and the National Institute on Drug Abuse, calls out the problem of study participants who lie, falsify, fail to disclose, and exaggerate important aspects of their medical histories as well as their data. See David B. Resnik & David J. McCann, “Deception by Research Participants,” 373 New Engl. J. Med. 1192 (2015). The editorial is an important caveat for those who would glibly describe epidemiologic studies and clinical trials as “admissible.”

As a reminder of the autonomy of those who participate in clinical trials and studies, we now refer to individuals in a study as “participants,” and not “subjects.” Resnik and McCann remind us, however, that notwithstanding this respect for their autonomy, study participants can bias a study in important ways. Citing other recent papers,[2] the editorialists note that clinical trials offer financial incentives to participants, which may lead to exaggeration of symptoms to ensure enrollment, to failure to disclose exclusionary medical conditions and information, and to withholding of embarrassing or inculpatory information. Although fabrication or falsification of medical history and data by research participants is not research misconduct by the investigators, the participants’ misconduct can seriously bias and undermine the validity and integrity of a study.

Resnik and McCann’s concerns about the accuracy and truthfulness of clinical trial participant medical data and information can mushroom exponentially in the context of observational studies that involve high-stakes claims for compensation and vindication on medical causation issues. Here are a couple of high-stakes examples.

The Brinton Study in Silicone Gel Breast Implant Litigation

In the silicone gel breast implant litigation, claimants looked forward to a study by one of their champions, Dr. Louise Brinton, of the National Cancer Institute (NCI). Brinton had obtained intramural funding to conduct a study of women who had had silicone gel breast implants and their health outcomes. To their consternation, the defendants in that litigation learned of Dr. Brinton’s close ties with plaintiffs’ counsel, plaintiffs’ support groups, and other advocates. Further investigation, including Freedom of Information Act requests to the NCI, led to some disturbing and startling revelations.

In October 1996, a leading epidemiologist wrote a “concerned citizen” letter to Dr. Joseph Fraumeni, who was then the director of Epidemiology and Genetics at the NCI. The correspondent wrote to call Dr. Fraumeni’s attention to severe bias problems in Dr. Brinton’s pending study of disease and symptom outcomes among women who had had silicone breast implants. Dr. Brinton had written to an Oregon attorney (Michael Williams) to enlist him to encourage his clients to participate in Brinton’s NCI study. Dr. Brinton had also written to a Philadelphia attorney (Steven Sheller) to seek permission to link potential study subjects to the global settlement database of information on women participating in the settlement. Perhaps most egregiously, Dr. Brinton and others had prepared a study Question & Answer sheet, from the National Institutes of Health, which ended with a ringing solicitation: “The study provides an opportunity for women who may be suffering as a result of implants to be heard. Now is your chance to make a major contribution to women’s health by supporting this essential research.” Dr. Brinton apparently had not thought of appealing to women with implants who did not have health problems.

Dr. Brinton’s methodology doomed her study from the start. Without access to the background materials, such as the principal investigator’s correspondence file or the recruitment documents used to solicit the participation of ill women in the study, the scientific community and the silicone litigation defendants would not have had these important insights into the serious bias and flaws of Brinton’s study.

The Racette-Scruggs Study in Welding Fume Litigation

The welding fume litigation saw its own version of a study corrupted by the participation of litigants and potential litigants. Richard (Dickie) Scruggs and colleagues funded neurological researchers to travel to Alabama and Mississippi to “screen” plaintiffs and potential plaintiffs in litigation over claims of neurological injury and disease from welding fume exposure. The plaintiffs’ lawyers rounded up the research subjects (a.k.a. clients and potential clients), talked to them before the medical evaluations, and administered the study questionnaires. Clearly the study subjects were aware of Scruggs’ “research” hypothesis. The plaintiffs’ lawyers then brought in the researchers, who examined the welding tradesmen and used a novel videotaping methodology to evaluate them for parkinsonism.

After their sojourn to Alabama and Mississippi, at Scruggs’ expense, the researchers wrote up their results, with little or no detail about the circumstances of how they had acquired their research “participants,” or about those participants’ motives to give accurate or inaccurate medical and employment history information. See Brad A. Racette, S.D. Tabbal, D. Jennings, L. Good, J.S. Perlmutter, and Brad Evanoff, “Prevalence of parkinsonism and relationship to exposure in a large sample of Alabama welders,” 64 Neurology 230 (2005); Brad A. Racette, et al., “A rapid method for mass screening for parkinsonism,” 27 Neurotoxicology 357 (2006) (a largely duplicative report of the Alabama welders study).

Defense counsel directed subpoenas to both Dr. Racette and his institution, Washington University St. Louis, for the study protocol, underlying data, data codes, and statistical analyses.  After a long discovery fight, the MDL court largely enforced the subpoenas.  See, e.g., In re Welding Fume Prods. Liab. Litig., MDL 1535, 2005 WL 5417815 (N.D. Ohio Oct. 18, 2005) (upholding defendants’ subpoena for protocol, data, data codes, statistical analyses, and other things from Dr. Racette’s Alabama study on welding and parkinsonism). After the defense had the opportunity to obtain and analyze the underlying data in the Scruggs-Racette study, the welding plaintiffs largely retreated from their epidemiologic case. The Racette Alabama study faded into the background of the trials.

Both the Brinton and the Racette studies are painful reminders of the importance of assessing the motives of the study participants in observational epidemiologic studies, and the participants’ ability to undermine data integrity. If the financial motives identified by Resnik and McCann are sufficient to lead participants to give false information, or to fail to disclose correct information, we can only imagine how powerful are the motives created by the American tort litigation system among actual and potential claimants when they participate in epidemiologic studies. Resnik and McCann may be correct that fabrication or falsification of medical history and data by research participants is not research misconduct by the investigators themselves, but investigators who turn a blind eye to the knowledge, intent, and motives of their research participants may be conducting studies that are doomed from the outset.


[1] Michael D. Green, D. Michal Freedman & Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549, 551 (3d ed. 2011) (“Epidemiologic studies have been well received by courts deciding cases involving toxic substances. *** Well-conducted studies are uniformly admitted.”) (citing 3 David L. Faigman et al., eds., Modern Scientific Evidence: The Law and Science of Expert Testimony § 23.1, at 187 (2007–08)).

[2] Eric Devine, Megan Waters, Megan Putnam, et al., “Concealment and fabrication by experienced research subjects,” 10 Clin. Trials 935 (2013); Rebecca Dresser, “Subversive subjects: rule-breaking and deception in clinical trials,” 41 J. Law Med. Ethics 829 (2013).

The C-8 (Perfluorooctanoic Acid) Litigation Against DuPont, part 1

September 27th, 2015

The first plaintiff has begun her trial against E.I. Du Pont De Nemours & Company (DuPont), for alleged harm from environmental exposure to perfluorooctanoic acid or its salts (PFOA). Ms. Carla Bartlett is claiming that she developed kidney cancer as a result of drinking water allegedly contaminated with PFOA by DuPont. Nicole Hong, “Chemical-Discharge Case Against DuPont Goes to Trial: Outcome could affect thousands of claims filed by other U.S. residents,” Wall St. J. (Sept. 13, 2015). The case is pending before Chief Judge Edmund A. Sargus, Jr., in the Southern District of Ohio.

PFOA is not classified as a carcinogen in the Integrated Risk Information System (IRIS) of the U.S. Environmental Protection Agency (EPA). In 2005, the EPA Office of Pollution Prevention and Toxics submitted a “Draft Risk Assessment of the Potential Human Health Effects Associated With Exposure to Perfluorooctanoic Acid and Its Salts (PFOA),” which is available at the EPA’s website. The draft report, which is based upon some epidemiology and mostly animal toxicology studies, stated that there was “suggestive evidence of carcinogenicity, but not sufficient to assess human carcinogenic potential.”

In 2013, the Health Council of the Netherlands evaluated the PFOA cancer issue, and found the data unsupportive of a causal conclusion. The Health Council of the Netherlands, “Perfluorooctanoic acid and its salts: Evaluation of the carcinogenicity and genotoxicity” (2013) (“The Committee is of the opinion that the available data on perfluorooctanoic acid and its salts are insufficient to evaluate the carcinogenic properties (category 3)”).

Last year, the World Health Organization (WHO), through its International Agency for Research on Cancer (IARC), reviewed the evidence on the alleged carcinogenicity of PFOA. The IARC, which has fostered much inflation with respect to carcinogenicity evaluations, classified PFOA as only possibly carcinogenic. See News, “Carcinogenicity of perfluorooctanoic acid, tetrafluoroethylene, dichloromethane, 1,2-dichloropropane, and 1,3-propane sultone,” 15 The Lancet Oncology 924 (2014).

Most independent reviews also find the animal and epidemiologic data unsupportive of a causal connection between PFOA and any human cancer. See, e.g., Thorsten Stahl, Daniela Mattern, and Hubertus Brunn, “Toxicology of perfluorinated compounds,” 23 Environmental Sciences Europe 38 (2011).

So you might wonder how DuPont lost its Rule 702 challenges in such a case, which it surely did. In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., Civil Action 2:13-md-2433, 2015 U.S. Dist. LEXIS 98788 (S.D. Ohio July 21, 2015). That is a story for another day.

David Faigman’s Critique of G2i Inferences at Weinstein Symposium

September 25th, 2015

The DePaul Law Review’s 20th Annual Clifford Symposium on Tort Law and Social Policy is an 800-plus page tribute in honor of Judge Jack Weinstein. 64 DePaul L. Rev. (Winter 2015). There are many notable, thought-provoking articles, but my attention was commanded by the contribution on Judge Weinstein’s approach to expert witness opinion evidence. David L. Faigman & Claire Lesikar, “Organized Common Sense: Some Lessons from Judge Jack Weinstein’s Uncommonly Sensible Approach to Expert Evidence,” 64 DePaul L. Rev. 421 (2015) [cited as Faigman].

Professor Faigman praises Judge Jack Weinstein for his substantial contributions to expert witness jurisprudence, while acknowledging that Judge Weinstein has been a sometimes reluctant participant in, and supporter of, judicial gatekeeping of expert witness testimony. Professor Faigman also uses the occasion to restate his own views about the so-called “G2i” problem, the problem of translating general knowledge that pertains to groups to individual cases. In the law of torts, the G2i problem arises from the law’s requirement that plaintiffs show that they were harmed by defendants’ products or environmental exposures. In the context of modern biological “sufficient” causal set principles, this “proof” requirement entails showing both that the product or exposure can cause the specified harms in human beings generally (“general causation”), and that the product or exposure actually played a causal role in bringing about the plaintiff’s specific harm (“specific causation”).

Faigman makes the helpful point that courts initially and incorrectly invoked “differential diagnosis” as the generally accepted methodology for attributing causation. In doing so, the courts extrapolated from the general acceptance of differential diagnosis in the medical community to courtroom testimony about etiology. The extrapolation often glossed over the methodological weaknesses of the differential approach to etiology. Not until 1995 did a court wake to the realization that what was being proffered was a “differential etiology,” and not a differential diagnosis. McCullock v. H.B. Fuller Co., 61 F.3d 1038, 1043 (2d Cir. 1995). This realization, however, did not necessarily stimulate the courts’ analytical faculties, and for the most part, they treated the methodology of specific causal attribution as generally accepted and uncontroversial. Faigman’s point that the courts need to pay attention to the methodological challenges to differential etiological analysis is well taken.

Faigman also claims, however, that in advancing “differential etiologies,” expert witnesses were inventing wholesale an approach that had no foundation or acceptance in their scientific disciplines:

 “Differential etiology is ostensibly a scientific methodology, but one not developed by, or even recognized by, physicians or scientists. As described, it is entirely logical, but has no scientific methods or principles underlying it. It is a legal invention and, as such, has analytical heft, but it is entirely bereft of empirical grounding. Courts and commentators have so far merely described the logic of differential etiology; they have yet to define what that methodology is.”

Faigman at 444.[1] Faigman is correct that courts often have left unarticulated exactly what the methodology is, but he does not quite make sense when he writes that the method of differential etiology is “entirely logical,” but has no “scientific methods or principles underlying it.” After all, Faigman starts off his essay with a quotation from Thomas Huxley that “science is nothing but trained and organized common sense.”[2] As I have written elsewhere, the form of reasoning involved in differential diagnosis is nothing other than the iterative disjunctive syllogism.[3] Either-or reasoning occurs throughout the physical and biological sciences; it is not clear why Faigman declares it un- or extra-scientific.
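For readers who want the logical form spelled out, here is the iterative disjunctive syllogism in schematic terms (a sketch of the reasoning pattern, not any particular expert witness’s testimony), where A, B, and C are the candidate causes that have been “ruled in” on general causation:

```latex
(A \lor B \lor C),\ \neg C \;\vdash\; (A \lor B)
\qquad\text{then}\qquad
(A \lor B),\ \neg B \;\vdash\; A
```

Each iteration eliminates one disjunct on record evidence; the soundness of the conclusion depends entirely on the completeness of the initial disjunction and on the warrant for each elimination.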

The strength of Faigman’s claim about the made-up nature of differential etiology appears to be undermined and contradicted by an example that he provides from clinical allergy and immunology:

“Allergists, for example, attempt to identify the etiology of allergic reactions in order to treat them (or to advise the patient to avoid what caused them), though it might still be possible to treat the allergic reactions without knowing their etiology.”

Faigman at 437. Of course, not only allergists try to determine the cause of an individual patient’s disease. Psychiatrists, in the psychoanalytic tradition, certainly do so as well. Physicians who use predictive regression models use group data, in multivariate analyses, to predict outcomes, risk, and mortality in individual patients. Faigman’s claim is similarly undermined by the existence of a few diseases (other than infectious diseases) that are defined by the causative exposure. Silicosis and manganism have played a large role in often bogus litigation, but they represent instances in which the differential diagnostic puzzle may also be an etiological puzzle. Of course, to the extent that a disease is defined in terms of causative exposures, there may be serious and even intractable problems caused by the lack of specificity and accuracy in the diagnostic criteria for the supposedly pathognomonic disease.

As for whether the concept of “differential etiology” is ever used in the sciences themselves, a few citations for consideration follow.

H. Kløve & D. Doehring, “MMPI in epileptic groups with differential etiology,” 18 J. Clin. Psychol. 149 (1962)

H. Kløve & C. Matthews, “Psychometric and adaptive abilities in epilepsy with differential etiology,” 7 Epilepsia 330 (1966)

Teuber & K. Usadel, “Immunosuppression in juvenile diabetes mellitus? Critical viewpoint on the treatment with cyclosporin A with consideration of the differential etiology,” 103 Fortschr. Med. 707 (1985)

G. May & W. May, “Detection of serum IgA antibodies to varicella zoster virus (VZV)–differential etiology of peripheral facial paralysis. A case report,” 74 Laryngorhinootologie 553 (1995)

Alan Roberts, “Psychiatric Comorbidity in White and African-American Illicit Substance Abusers: Evidence for Differential Etiology,” 20 Clinical Psych. Rev. 667 (2000)

Mark E. Mullins, Michael H. Lev, Dawid Schellingerhout, Gilberto Gonzalez, and Pamela W. Schaefer, “Intracranial Hemorrhage Complicating Acute Stroke: How Common Is Hemorrhagic Stroke on Initial Head CT Scan and How Often Is Initial Clinical Diagnosis of Acute Stroke Eventually Confirmed?” 26 Am. J. Neuroradiology 2207 (2005)

Qiang Fu, et al., “Differential Etiology of Posttraumatic Stress Disorder with Conduct Disorder and Major Depression in Male Veterans,” 62 Biological Psychiatry 1088 (2007)

Jesse L. Hawke, et al., “Etiology of reading difficulties as a function of gender and severity,” 20 Reading and Writing 13 (2007)

Mastrangelo, “A rare occupation causing mesothelioma: mechanisms and differential etiology,” 105 Med. Lav. 337 (2014)


[1] See also Faigman at 448 (“courts have invented a methodology – differential etiology – that purports to resolve the G2i problem. Unfortunately, this method has only so far been described; it has not been defined with any precision. For now, it remains a highly ambiguous idea, sound in principle, but profoundly underdefined.”).

[2] Thomas H. Huxley, “On the Educational Value of the Natural History Sciences” (1854), in Lay Sermons, Addresses and Reviews 77 (1915).

[3] See, e.g., “Differential Etiology and Other Courtroom Magic” (June 23, 2014) (collecting cases); “Differential Diagnosis in Milward v. Acuity Specialty Products Group” (Sept. 26, 2013).

Beecher-Monas Proposes to Abandon Common Sense, Science, and Expert Witnesses for Specific Causation

September 11th, 2015

Law reviews are not peer reviewed, not that peer review is a strong guarantor of credibility, accuracy, and truth. Most law reviews have no regular provision for letters to the editor; nor is there a PubPeer that permits readers to point out errors for the benefit of the legal community. Nonetheless, law review articles are cited by lawyers and judges, often at face value, for claims and statements made by article authors. Law review articles are thus a potent source of misleading, erroneous, and mischievous ideas and claims.

Erica Beecher-Monas is a law professor at Wayne State University Law School, or Wayne Law, which considers itself “the premier public-interest law school in the Midwest.” Beware of anyone or any institution that describes itself as working for the public interest. That claim alone should put us on our guard about whose interests are being included, and whose excluded, as legitimately “public.”

Back in 2007, Professor Beecher-Monas published a book on evaluating scientific evidence in court, which had a few good points in a sea of error and nonsense. See Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process (2007)[1]. More recently, Beecher-Monas has published a law review article, which from its abstract suggests that she might have something to say about this difficult area of the law:

“Scientists and jurists may appear to speak the same language, but they often mean very different things. The use of statistics is basic to scientific endeavors. But judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony. The way scientists understand causal inference in their writings and practice, for example, differs radically from the testimony jurists require to prove causation in court. The result is a disconnect between science as it is practiced and understood by scientists, and its legal use in the courtroom. Nowhere is this more evident than in the language of statistical reasoning.

Unacknowledged difficulties in reasoning from group data to the individual case (in civil cases) and the absence of group data in making assertions about the individual (in criminal cases) beset the courts. Although nominally speaking the same language, scientists and jurists often appear to be in dire need of translators. Since expert testimony has become a mainstay of both civil and criminal litigation, this failure to communicate creates a conundrum in which jurists insist on testimony that experts are not capable of giving, and scientists attempt to conform their testimony to what the courts demand, often well beyond the limits of their expertise.”

Beecher-Monas, “Lost in Translation: Statistical Inference in Court,” 46 Arizona St. L.J. 1057, 1057 (2014) [cited as BM].

A close read of the article shows, however, that Beecher-Monas continues to promulgate misunderstanding, error, and misdirection on statistical and scientific evidence.

Individual or Specific Causation

The key thesis of this law review is that expert witnesses have no scientific or epistemic warrant upon which to opine about individual or specific causation.

“But what statistics cannot do—nor can the fields employing statistics, like epidemiology and toxicology, and DNA identification, to name a few—is to ascribe individual causation.”

BM at 1057-58.

Beecher-Monas tells us that expert witnesses are quite willing to opine on specific causation, but that they have no scientific or statistical warrant for doing so:

“Statistics is the law of large numbers. It can tell us much about populations. It can tell us, for example, that so-and-so is a member of a group that has a particular chance of developing cancer. It can tell us that exposure to a chemical or drug increases the risk to that group by a certain percentage. What statistics cannot do is tell which exposed person with cancer developed it because of exposure. This creates a conundrum for the courts, because nearly always the legal question is about the individual rather than the group to which the individual belongs.”

BM at 1057. Clinical medicine and science come in for particular chastisement by Beecher-Monas, who acknowledges the medical profession’s legitimate role in diagnosing and treating disease. Physicians use a process of differential diagnosis to arrive at the most likely diagnosis of disease, but the etiology of the disease is not part of their normal practice. Beecher-Monas leaps beyond the generalization that physicians infrequently ascertain specific causation to the sweeping claim that ascertaining the cause of a patient’s disease is beyond the clinician’s competence and scientific justification. Beecher-Monas thus tells us, in apodictic terms, that science has nothing to say about individual or specific causation. BM at 1064, 1075.

In a variety of contexts, but especially in the toxic tort arena, expert witness testimony is not reliable with respect to the inference of specific causation, which, Beecher-Monas writes, usually without qualification, is “unsupported by science.” BM at 1061. The solution for Beecher-Monas is clear. Admitting baseless expert witness testimony is “pernicious” because the whole purpose of having expert witnesses is to help the fact finder, jury or judge, who lack the background understanding and knowledge to assess the data, interpret all the evidence, and evaluate the epistemic warrant for the claims in the case. BM at 1061-62. Beecher-Monas would thus allow the expert witnesses to testify about what they legitimately know, and let the jury draw the inference about which expert witnesses in the field cannot and should not opine. BM at 1101. In other words, Beecher-Monas is perfectly fine with juries and judges guessing their way to a verdict on an issue that science cannot answer. If her book danced around this recommendation, her law review article now comes out into the open, declaring open season for juries and judges to make unfettered specific causation judgments. What is touching is that Beecher-Monas is sufficiently committed to gatekeeping of expert witness opinion testimony that she proposes to take a complex area away from expert witnesses altogether rather than confront the reality that there is often simply no good way to connect general and specific causation in a given person.

Causal Pies

Beecher-Monas relies heavily upon Professor Rothman’s notion of causal pies or sets to describe the factors that may combine to bring about a particular outcome. In doing so, she commits a non-sequitur:

“Indeed, epidemiologists speak in terms of causal pies rather than a single cause. It is simply not possible to infer logically whether a specific factor caused a particular illness.”[2]

BM at 1063. But the question on her adopted model of causation is not whether any specific factor was the sole cause, but whether it was one of the multiple slices in the pie. Rothman’s statement that “it is not possible to infer logically whether a specific factor was the cause of an observed event” thus addresses a different question from the one that faces factfinders in court cases.

With respect to differential etiology, Beecher-Monas claims that “‘ruling in’ all potential causes cannot be done.” BM at 1075. But why not? While it is true that disease diagnosis is often made upon signs and symptoms, BM at 1076, sometimes physicians are involved in trying to identify causes in individuals. Psychiatrists, of course, are frequently involved in trying to identify sources of anxiety and depression in their patients. It is not all about putting a DSM-V diagnosis on the chart and prescribing medication. And there are times when physicians can say quite confidently that a disease has a particular genetic cause, as in a man with a BRCA1 or BRCA2 mutation and breast cancer, or certain forms of neurodegenerative disease, or an infant with a clearly genetically determined birth defect.

Beecher-Monas confuses “the” cause with “a” cause, and wanders away from both law and science into her own twilight zone. Here is an example of how Beecher-Monas’ confusion plays out. She asserts that:

“For any individual case of lung cancer, however, smoking is no more important than any of the other component causes, some of which may be unknown.”

BM at 1078. This ignores the magnitude of the risk factor and its likely contribution to a given case. Putting aside synergistic co-exposures, for most lung cancers, smoking is the “but for” cause of individual smokers’ lung cancers. Beecher-Monas sets up a strawman argument by telling us that it is logically impossible to infer “whether a specific factor in a causal pie was the cause of an observed event.” BM at 1079. But we are usually interested in whether a specific factor was “a substantial contributing factor,” without which the disease would not have occurred. This is hardly illogical or impracticable for a given case of mesothelioma in a patient who worked for years in a crocidolite asbestos factory, or for a case of lung cancer in a patient who smoked heavily for many years right up to the time of his lung cancer diagnosis. I doubt that many people would hesitate, on either logical or scientific grounds, to attribute a child’s phocomelia birth defects to his mother’s ingestion of thalidomide during an appropriate gestational window in her pregnancy.

Unhelpfully, Beecher-Monas insists upon playing this word game by telling us that:

“Looking backward from an individual case of lung cancer, in a person exposed to both asbestos and smoking, to try to determine the cause, we cannot separate which factor was primarily responsible.”

BM at 1080. And yet that issue, of “primary responsibility” is not in any jury instruction for causation in any state of the Union, to my knowledge.

From her extreme skepticism, Beecher-Monas swings to the other extreme that asserts that anything that could have been in the causal set or pie was in the causal set:

“Nothing in relative risk analysis, in statistical analysis, nor anything in medical training, permits an inference of specific causation in the individual case. No expert can tell whether a particular exposed individual’s cancer was caused by unknown factors (was idiopathic), linked to a particular gene, or caused by the individual’s chemical exposure. If all three are present, and general causation has been established for the chemical exposure, one can only infer that they all caused the disease.115 Courts demanding that experts make a contrary inference, that one of the factors was the primary cause, are asking to be misled. Experts who have tried to point that out, however, have had a difficult time getting their testimony admitted.”

BM at 1080. There is no support for Beecher-Monas’ extreme statement. She cites, in footnote 115, to Kenneth Rothman’s introductory book on epidemiology, but what he says at the cited page is quite different. Rothman explains that “every component cause that played a role was necessary to the occurrence of that case.” In other words, for every component cause that actually participated in bringing about the case, its presence was necessary to the occurrence of the case. What Rothman clearly does not say is that, for a given individual’s case, the fact that a factor can cause the person’s disease means that it must have caused it. In Beecher-Monas’ hypothetical of three factors – idiopathic, particular gene, and chemical exposure – all three, or any two, or only one of the three may have been in a given individual’s causal set. Beecher-Monas has carelessly or intentionally misrepresented Rothman’s actual discussion.

Physicians and epidemiologists do apply group risk figures to individuals, through the lens of predictive regression equations. The Gail Model for 5 Year Risk of Breast Cancer, for instance, is a predictive equation that comes up with a prediction for an individual patient by refining the subgroup within which the patient fits. Similarly, there are prediction models for heart attack, such as the Risk Assessment Tool for Estimating Your 10-year Risk of Having a Heart Attack. Beecher-Monas might complain that these regression equations still turn on subgroup average risk, but the point is that they can be made increasingly precise as knowledge accumulates. And the regression equations can generate confidence intervals and prediction intervals for the individual’s constellation of risk factors.
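To make the point concrete, here is a minimal sketch of how such a prediction model applies group-derived coefficients to an individual patient. The intercept and coefficients below are invented for illustration; they are not the Gail model’s actual parameters:

```python
import math

# Hypothetical logistic-style risk model. The intercept and coefficients
# are made up for illustration; a real model (such as the Gail model)
# derives them from cohort data by multivariate regression.
INTERCEPT = -5.0
COEFFS = {"age_over_50": 0.9, "first_degree_relative": 0.7, "prior_biopsy": 0.5}

def predicted_risk(profile):
    """Map an individual's risk-factor profile to an absolute risk estimate."""
    logit = INTERCEPT + sum(COEFFS[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-logit))  # logistic transform

patient = {"age_over_50": 1, "first_degree_relative": 1, "prior_biopsy": 0}
print(f"Predicted risk: {predicted_risk(patient):.1%}")
```

The group data supply the coefficients; the individual supplies the covariate values; and the output is a risk estimate tailored to the subgroup into which the individual falls.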

Significance Probability and Statistical Significance

The discussion of significance probability and significance testing in Beecher-Monas’ book was frequently in error,[3] and this new law review article is not much improved. Beecher-Monas tells us that “judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony,” BM at 1057, which is true enough, but this article does little to ameliorate the situation. Beecher-Monas offers the following definition of the p-value:

“The P-value is the probability, assuming the null hypothesis (of no effect) is true (and the study is free of bias) of observing as strong an association as was observed.”

BM at 1064-65. This definition misses that the p-value is a cumulative tail probability, and can be one-sided or two-sided. More seriously in error, however, is the suggestion that the null hypothesis is one of no effect, when it is merely a pre-specified expected value that is the subject of the test. Of course, the null hypothesis is often one of no disparity between the observed and the expected, but the definition should not mislead on this crucial point.
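In conventional notation (with T the test statistic and t its observed value), the tail-probability point can be stated compactly:

```latex
p_{\text{one-sided}} = \Pr(T \ge t \mid H_0),
\qquad
p_{\text{two-sided}} = \Pr(\lvert T \rvert \ge \lvert t \rvert \mid H_0)
```

Both are cumulative probabilities of results at least as extreme as the one observed, computed on the assumption that the null hypothesis (whatever expected value it specifies) is true; neither is the probability of the observed result alone, nor the probability that the null hypothesis is true.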

For some reason, Beecher-Monas persists in describing the conventional level of statistical significance as 95%, which substitutes the coefficient of confidence for the complement of the frequently pre-specified p-value for significance. Annoying but decipherable. See, e.g., BM at 1062, 1064, 1065. She misleadingly states that:

“The investigator will thus choose the significance level based on the size of the study, the size of the effect, and the trade-off between Type I (incorrect rejection of the null hypothesis) and Type II (incorrect failure to reject the null hypothesis) errors.”

BM at 1066. While this statement is occasionally true, it mostly is not. A quick review of the last several years of the New England Journal of Medicine will document the error. Invariably, researchers use the conventional level of alpha, at 5%, unless there is multiple testing, such as in a genetic association study.
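The genetic association example illustrates the exception: with m independent tests, the familiar Bonferroni adjustment divides the conventional alpha by m:

```latex
\alpha^{*} = \frac{\alpha}{m}, \qquad \text{e.g.,}\quad \frac{0.05}{10^{6}} = 5 \times 10^{-8}
```

which is how genome-wide association studies, with their roughly one million independent comparisons across the genome, arrive at their customary significance threshold of 5 × 10⁻⁸.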

Beecher-Monas admonishes us that “[u]sing statistical significance as a screening device is thus mistaken on many levels,” citing cases that do not provide support for this proposition.[4] BM at 1066. The Food and Drug Administration’s scientists, who review clinical trials for efficacy and safety, will no doubt be astonished to hear this admonition.

Beecher-Monas argues that courts should not factor statistical significance or confidence intervals into their gatekeeping of expert witnesses, but that they should “admit studies,” and leave it to the lawyers and expert witnesses to explain the strengths and weaknesses of the studies relied upon. BM at 1071. Of course, studies themselves are rarely admitted because they represent many levels of hearsay by unknown declarants. Given Beecher-Monas’ acknowledgment of how poorly judges and lawyers understand statistical significance, this argument is cynical indeed.

Remarkably, Beecher-Monas declares, without citation, that

“the purpose of epidemiologists’ use of statistical concepts like relative risk, confidence intervals, and statistical significance are intended to describe studies, not to weed out the invalid from the valid.”

BM at 1095. She thus excludes by ipse dixit any inferential purposes these statistical tools have. She goes further and gives us a concrete example:

“If the methodology is otherwise sound, small studies that fail to meet a P-level of 5 [sic], say, or have a relative risk of 1.3 for example, or a confidence level that includes 1 at 95% confidence, but relative risk greater than 1 at 90% confidence ought to be admissible. And understanding that statistics in context means that data from many sources need to be considered in the causation assessment means courts should not dismiss non-epidemiological evidence out of hand.”

BM at 1095. Well, again, studies are not admissible; the issue is whether they may be reasonably relied upon, and whether reliance upon them may support an opinion claiming causality. And a “P-level” of 5 is, well, let us hope, a serious typographical error. Beecher-Monas’ advice is especially misleading when there is only one study, or only one study amid a constellation of exonerative studies. See, e.g., In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding Professor David Madigan for cherry picking studies to rely upon).

Confidence Intervals

Beecher-Monas’ book provided a good deal of erroneous information on confidence intervals.[5] The current article improves on the definitions, but still manages to go astray:

“The rationale courts often give for the categorical exclusion of studies with confidence intervals including the relative risk of one is that such studies lack statistical significance.62 Well, yes and no. The problem here is the courts’ use of a dichotomous meaning for statistical significance (significant or not).63 This is not a correct understanding of statistical significance.”

BM at 1069. Well, yes and no; this interpretation of a confidence interval, say with a coefficient of confidence of 95%, is a reasonable interpretation of whether the point estimate is statistically significant at an alpha of 5%. If Beecher-Monas does not like strict significance testing, that is fine, but she cannot mandate its abandonment by scientists or the courts. Certainly the cited interpretation is one proper interpretation among several.
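The duality between confidence intervals and significance tests is easy to demonstrate. In the sketch below (the 2×2 counts are invented for illustration), the same point estimate is “not significant” at the 5% level but “significant” at the 10% level, which is the dichotomy working exactly as designed, at two different pre-specified error rates:

```python
import math

# Hypothetical cohort 2x2 table (counts invented for illustration):
#              cases   non-cases
# exposed        40       960
# unexposed      25       975
a, b, c, d = 40, 960, 25, 975

rr = (a / (a + b)) / (c / (c + d))
# Standard error of log(RR), the usual large-sample approximation:
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))

for label, z in (("95%", 1.96), ("90%", 1.645)):
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    verdict = "excludes" if lower > 1.0 or upper < 1.0 else "includes"
    print(f"RR = {rr:.2f}; {label} CI = ({lower:.2f}, {upper:.2f}); {verdict} 1.0")
```

Run as written, the 95% interval includes 1.0 while the 90% interval excludes it, the same phenomenon discussed in footnote 7 below; nothing about the data changes, only the pre-specified tolerance for a false-positive error.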

Power

There were several misleading references to statistical power in Beecher-Monas’ book, but the new law review tops them by giving a new, bogus definition:

“Power, the probability that the study in which the hypothesis is being tested will reject the alterative [sic] hypothesis when it is false, increases with the size of the study.”

BM at 1065. For this definition, Beecher-Monas cites to the Reference Manual on Scientific Evidence, but butchers the correct definition given by the late David Freedman and David Kaye.[6] All of which is very disturbing.
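The correct definition can be restated, and checked numerically, in a few lines: power is the probability that the test rejects the null hypothesis when the alternative is true, and it grows with sample size. A minimal sketch, using a normal approximation for a two-sided, two-sample comparison of proportions (the baseline and alternative rates are invented for illustration):

```python
from statistics import NormalDist

def approx_power(n, p0, p1, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for proportions,
    with n subjects per arm: Pr(reject H0 | true rates are p0 and p1)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n) ** 0.5
    return 1 - NormalDist().cdf(z_crit - abs(p1 - p0) / se)

# Power to detect a rate increase from 2% to 3% grows with study size:
for n in (500, 2000, 8000):
    print(f"n = {n} per arm: power = {approx_power(n, 0.02, 0.03):.2f}")
```

On these assumed rates, the power climbs from roughly 17% at 500 per arm to roughly 98% at 8,000 per arm, which is the sense in which power “increases with the size of the study”; nothing in the calculation involves rejecting the alternative hypothesis.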

Relative Risks and Other Risk Measures

Beecher-Monas begins badly by misdefining the concept of relative risk:

“as the percentage of risk in the exposed population attributable to the agent under investigation.”

BM at 1068. Perhaps this percentage can be derived from the relative risk, if we know the relative risk to be the true measure with some certainty, through a calculation of attributable risk. But an article that takes the entire medical profession to task, and most of the judiciary to boot, should be more careful than to confuse and conflate attributable risk with relative risk.
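The two measures are related but distinct, and the relation is simple enough to state exactly. On the usual definitions, the attributable fraction among the exposed is derived from the relative risk:

```latex
\text{AF}_{\text{exposed}} = \frac{RR - 1}{RR},
\qquad\text{e.g.,}\quad RR = 4 \;\Rightarrow\; \text{AF} = \tfrac{3}{4} = 75\%
```

The relative risk is a ratio of risks; the attributable fraction is the percentage of risk among the exposed attributable to the exposure, on the assumptions that the relative risk is valid and causal. Defining the one as the other is precisely the conflation the article makes.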

Then Beecher-Monas tells us that the “[r]elative risk is a statistical test that (like statistical significance) depends on the size of the population being tested.” BM at 1068. Well, actually not; the calculation of the RR is unaffected by the sample size. The variance of course will vary with the sample size, but Beecher-Monas seems intent on ignoring random variability.
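The distinction is visible in the standard formulas for a 2×2 table (a and b being the exposed cases and non-cases; c and d the unexposed):

```latex
RR = \frac{a/(a+b)}{c/(c+d)},
\qquad
\widehat{SE}(\ln RR) = \sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}}
```

Multiply every cell by the same factor k and the RR is unchanged, while the standard error shrinks by a factor of √k; sample size affects the precision of the estimate, not the estimate itself.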

Perhaps most egregious is Beecher-Monas’ assertion that:

“Any increase above a relative risk of one indicates that there is some effect.”

BM at 1067. So much for ruling out chance, bias, and confounding! Or looking at an entire body of epidemiologic research for strength, consistency, coherence, exposure-response, etc. Beecher-Monas has thus moved beyond a liberal, to a libertine, position. In case the reader has any doubts of the idiosyncrasy of her views, she repeats herself:

“As long as there is a relative risk greater than 1.0, there is some association, and experts should be permitted to base their causal explanations on such studies.”

BM at 1067-68. This is evidentiary nihilism in full glory. Beecher-Monas has endorsed relying upon studies irrespective of their study design or validity, their individual confidence intervals, their aggregate summary point estimates and confidence intervals, or the absence of important Bradford Hill considerations, such as consistency, strength, and dose-response. So an expert witness may opine about general causation from reliance upon a single study with a relative risk of 1.05, say with a 95% confidence interval of 0.8 – 1.4?[7] For this startling proposition, Beecher-Monas cites the work of Sander Greenland, a wild and wooly plaintiffs’ expert witness in various toxic tort litigations, including vaccine autism and silicone autoimmune cases.

RR > 2

Beecher-Monas’ discussion of inferring specific causation from relative risks greater than two devolves into a muddle by her failure to distinguish general from specific causation. BM at 1067. There are different relevancies for general and specific causation, depending upon context, such as clinical trials or epidemiologic studies for general causation, number of studies available, and the like. Ultimately, she adds little to the discussion and debate about this issue, or any other.
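For completeness, the logic of the RR > 2 argument is a one-line corollary of the attributable fraction given earlier, under strong assumptions: that the relative risk is valid, causal, unconfounded, and applicable uniformly to the plaintiff:

```latex
\Pr(\text{exposure caused this case} \mid \text{exposed case})
\;\approx\; \frac{RR - 1}{RR} \;>\; \tfrac{1}{2}
\;\iff\; RR > 2
```

The doctrinal fights are over whether and when those assumptions hold, and over what evidence may stand in for them; on that score, the article is silent.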


[1] See previous comments on the book at “Beecher-Monas and the Attempt to Eviscerate Daubert from Within”; “Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping”; and “Confidence in Intervals and Diffidence in the Courts.”

[2] Kenneth J. Rothman, Epidemiology: An Introduction 250 (2d ed. 2012).

[3] Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n. 30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[4] See BM at 1066 & n. 44, citing “See, e.g., In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1226–27 (D. Colo. 1998); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“[S]cientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies.”).”


[5] See, e.g., Erica Beecher-Monas, Evaluating Scientific Evidence 58, 67 (N.Y. 2007) (“No matter how persuasive epidemiological or toxicological studies may be, they could not show individual causation, although they might enable a (probabilistic) judgment about the association of a particular chemical exposure to human disease in general.”) (“While significance testing characterizes the probability that the relative risk would be the same as found in the study as if the results were due to chance, a relative risk of 2 is the threshold for a greater than 50 percent chance that the effect was caused by the agent in question.”)(incorrectly describing significance probability as a point probability as opposed to tail probabilities).

[6] David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Federal Jud. Ctr., Reference Manual on Scientific Evidence 211, 253–54 (3d ed. 2011) (discussing the statistical concept of power).

[7] BM at 1070 (pointing to a passage in the FJC’s Reference Manual on Scientific Evidence that provides an example of one 95% confidence interval that includes 1.0, but which shrinks when calculated as a 90% interval to 1.1 to 2.2, which values “demonstrate some effect with confidence interval set at 90%”). This is nonsense in the context of observational studies.

Seventh Circuit Affirms Exclusion of Expert Witnesses in Vinyl Chloride Case

August 30th, 2015

Last week, the Seventh Circuit affirmed a federal district court’s exclusion of plaintiffs’ expert witnesses in an environmental vinyl chloride exposure case. Wood v. Textron, Inc., No. 3:10 CV 87, 2014 U.S. Dist. LEXIS 34938 (N.D. Ind. Mar. 17, 2014); 2014 U.S. Dist. LEXIS 141593, at *11 (N.D. Ind. Oct. 3, 2014), aff’d, Slip op., No. 14-3448, 2015 U.S. App. LEXIS 15076 (7th Cir. Aug. 26, 2015). Plaintiffs, children C.W. and E.W., claimed exposure from Textron’s manufacturing facility in Rochester, Indiana, which released vinyl chloride as a gas that seeped into ground water, and into neighborhood residential water wells. Slip op. at 2-3. Plaintiffs claimed present injuries in the form of “gastrointestinal issues (vomiting, bloody stools), immunological issues, and neurological issues,” as well as a future increased risk of cancer. Importantly, the appellate court explicitly approved the trial court’s careful reading of the relied-upon studies to determine whether they really did support the scientific causal claims made by the expert witnesses. Given the reluctance of some federal district judges to engage with the studies actually cited, this holding is noteworthy.

To support their claims, plaintiffs offered the testimony from three familiar expert witnesses:

(1) Dr. James G. Dahlgren;

(2) Dr. Vera S. Byers; and

(3) Dr. Jill E. Ryer-Powder.

Slip op. at 5. This gaggle offered well-rehearsed but scientifically unsound arguments in place of actual evidence that the children were hurt, or would be afflicted, as a result of their claimed exposures:

(a) extrapolation from high dose animal and human studies;

(b) assertions of children’s heightened vulnerability;

(c) differential etiology;

(d) temporality; and

(e) regulatory exposure limits.

On appeal, a panel of the Seventh Circuit held that the district court had properly conducted “an in-depth review of the relevant studies that the experts relied upon to generate their differential etiology,” and their general causation opinions. Slip op. at 13-14 (distinguishing other Seventh Circuit decisions that reversed district court Rule 702 rulings, and noting that the court below followed Joiner’s lead by analyzing the relied-upon studies to assess analytical gaps and extrapolations). The plaintiffs’ expert witnesses simply failed in analytical gap bridging, and dot connecting.

Extrapolation

The Circuit agreed with the district court that the extrapolations asserted were extreme, and that they represented “analytical gaps” too wide to be permitted in a courtroom. Slip op. at 15. The challenged expert witnesses extrapolated between species, between exposure levels, between exposure duration, between exposure circumstances, and between disease outcomes.

The district court faulted Dahlgren for relying upon articles that “fail to establish that [vinyl chloride] at the dose and duration present in this case could cause the problems that the [p]laintiffs have experienced or claim that they are likely to experience.” C.W. v. Textron, 2014 U.S. Dist. LEXIS 34938, at *53, *45 (N.D. Ind. Mar. 17, 2014) (finding that the analytical gap between the cited studies and Dahlgren’s purpose in citing the studies was an unbridged gap, which Dahlgren had failed to explain). Slip op. at 8.

Byers, for instance, cited one study[1] that involved exposure for five years, at an average level over 1,000 times higher than the children’s alleged exposure levels, which lasted less than 17 months and 7 months, respectively. Perhaps even more extreme were the plaintiffs’ expert witnesses’ attempted extrapolations from animal studies, which the district court recognized as “too attenuated” from plaintiffs’ case. Slip op. at 14. The Seventh Circuit rejected plaintiffs’ argument that the district court had imposed a requirement of “absolute precision,” holding instead that the plaintiffs’ expert witnesses’ analytical gaps (and slips) were too wide to be bridged. The Circuit provided a colorful example of a study of laboratory rodents, pressed into service for a long-term carcinogenicity assay, which found no statistically significant increase in tumors in rodents fed 0.03 milligrams of vinyl chloride per kilogram of bodyweight (0.03 mg/kg), for 4 to 5 days each week, for 59 weeks, compared with control rodents fed olive oil.[2] Slip op. at 14-15. The 0.03 mg/kg exposure level in this study was over 10 times the children’s exposure, as estimated by Ryer-Powder. And the 59 weeks of study exposure represented the great majority of the rodents’ adult lives, far exceeding the children’s exposure, which took place over several months of their lives. Slip op. at 15.

The Circuit held that the district court was within its discretion in evaluating the analytical gaps, and that the district court was correct to look at the study details in exercising its role as gatekeeper under Rule 702. Slip op. at 15-17. The plaintiffs’ expert witnesses failed to explain their extrapolations, which made their opinions suspect. As the Circuit court noted, there is a methodology by which scientists sometimes attempt to model human risks from animal evidence. Slip op. at 16-17, citing Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” in Reference Manual on Scientific Evidence 646 (3d ed. 2011) (“The mathematical depiction of the process by which an external dose moves through various compartments in the body until it reaches the target organ is often called physiologically based pharmacokinetics or toxicokinetics.”). Given the abject failures of plaintiffs’ expert witnesses to explain their leaps of faith, the appellate court had no occasion to explore the limits of risk assessment outside regulatory contexts.

Children’s Vulnerability

Plaintiffs’ expert witnesses asserted that children are much more susceptible than adult workers, and even than laboratory rats. As is typical in such cases, these expert witnesses had no evidence to support their assertions, and they made no effort even to invoke models that might have supported a reasonable assessment of children’s risk.

Differential Etiology

Dahlgren and Byers both claimed that they reached individual or specific causation conclusions based upon their conduct of a “differential etiology.” The trial and appellate court both faulted them for failing to “rule in” vinyl chloride for plaintiffs’ specific ailments before going about the business of ruling out competing or alternative causes. Slip op. at 6-7; 9-10; 20-21.

The courts also rejected Dahlgren’s claim that he could rule out all potential alternative causes by noting that the children’s treating physicians had failed to identify any cause for their ailments. So after postulating a limited universe of alternative causes of “inheritance, allergy, infection or another poison,” Dahlgren ruled all of them out of the case, because these putative causes “would have been detected by [the appellants’] doctors and treated accordingly.” Slip op. at 7, 18. As the Circuit court saw the matter:

“[T]his approach is not the stuff of science. It is based on faith in his fellow physicians—nothing more. The district court did not abuse its discretion in rejecting it.”

Slip op. at 18. Of course, the court might well have noted that physicians are often concerned exclusively with identifying effective therapy, and have little or nothing to offer on actual causation.

The Seventh Circuit panel did fuss with dicta in the trial court’s opinion suggesting that differential etiology “cannot be used to support general causation.” C.W. v. Textron, 2014 U.S. Dist. LEXIS 141593, at *11 (N.D. Ind. Oct. 3, 2014). Elsewhere, the trial court wrote, in a footnote, that “[d]ifferential [etiology] is admissible only insofar as it supports specific causation, which is secondary to general causation … .” Id. at *12 n.3. Curiously, the appellate court characterized these statements as “holdings” of the trial court, but belied its own characterization by affirming the judgment below. The Circuit court countered with its own dicta that

“there may be a case where a rigorous differential etiology is sufficient to help prove, if not prove altogether, both general and specific causation.”

Slip op. at 20 (citing, in turn, improvident dicta from the Second Circuit, in Ruggiero v. Warner-Lambert Co., 424 F.3d 249, 254 (2d Cir. 2005) (“There may be instances where, because of the rigor of differential diagnosis performed, the expert’s training and experience, the type of illness or injury at issue, or some other … circumstance, a differential diagnosis is sufficient to support an expert’s opinion in support of both general and specific causation.”)).

Regulatory Pronouncements

Dahlgren based his opinions upon the children’s water supply containing vinyl chloride in excess of regulatory levels set by state and federal agencies, including the U.S. Environmental Protection Agency (E.P.A.). Slip op. at 6. Similarly, Ryer-Powder relied upon exposure levels’ exceeding regulatory permissible limits for her causation opinions. Slip op. at 10.

The district court, now with the approval of the Seventh Circuit, would have none of this nonsense. Exceeding governmental regulatory exposure limits does not prove causation. The non-compliance does not help the fact finder without knowing “the specific dangers” that led the agency to set the permissible level, and thus the regulations are not relevant at all without this information. Even with respect to specific causation, a regulatory infraction may be weak or null evidence of causation. Slip op. at 18-19 (citing Cunningham v. Masterwear Corp., 569 F.3d 673, 674–75 (7th Cir. 2009)).

Temporality

Byers and Dahlgren also emphasized that the children’s symptoms began after exposure and abated after removal from exposure. Slip op. at 9, 6-7. Both the trial and appellate courts were duly unimpressed by the post hoc ergo propter hoc argument. Slip op. at 19, citing Ervin v. Johnson & Johnson, 492 F.3d 901, 904-05 (7th Cir. 2007) (“The mere existence of a temporal relationship between taking a medication and the onset of symptoms does not show a sufficient causal relationship.”).

Increased Risk of Cancer

The plaintiffs’ expert witnesses offered opinions about the children’s future risk of cancer that were truly over the top. Dahlgren testified that the children were “highly likely” to develop cancer in the future. Slip op. at 6. Ryer-Powder claimed that the children’s exposures were “sufficient to present an unacceptable risk of cancer in the future.” Slip op. at 10. With no competent evidence to support their claims of present or past injury, these opinions about future cancer were no longer relevant. The Circuit thus missed an opportunity to comment on how meaningless these opinions were. Most people will develop a cancer at some point in their lifetime, and we might all agree that any risk is unacceptable, which is why medical research continues into the causes, prevention, and cure of cancer. An unquantified risk of cancer, however, cannot support an award of damages even if it were a proper item of damages. See, e.g., Sutcliffe v. G.A.F. Corp., 15 Phila. 339, 1986 Phila. Cty. Rptr. LEXIS 22, 1986 WL 501554 (1986). See also “Back to Baselines – Litigating Increased Risks” (Dec. 21, 2010).


[1] Steven J. Smith, et al., “Molecular Epidemiology of p53 Protein Mutations in Workers Exposed to Vinyl Chloride,” 147 Am. J. Epidemiology 302 (1998) (average level of workers’ exposure was 3,735 parts per million; children were supposedly exposed at 3 ppb). This study looked only at a putative biomarker for angiosarcoma of the liver, not at cancer risk.

[2] Cesare Maltoni, et al., “Carcinogenicity Bioassays of Vinyl Chloride Monomer: A Model of Risk Assessment on an Experimental Basis,” 41 Envt’l Health Persp. 3 (1981).

Publication of Two Symposia on Scientific Evidence in the Law

August 2nd, 2015

The Journal of Philosophy, Science & Law bills itself as an on-line journal for the interdisciplinary exploration of philosophy, science, and law. This journal has just made its “Daubert Special” issue available at its website:

Jason Borenstein and Carol Henderson, “Reflections on Daubert: A Look Back at the Supreme Court’s Decision,” 15 J. Philos., Sci. & Law 1 (2015)

Mark Amadeus Notturno, “Falsifiability Revisited: Popper, Daubert, and Kuhn,” 15 J. Philos., Sci. & Law 5 (2015)

Tony Ward, “An English Daubert? Law, Forensic Science and Epistemic Deference,” 15 J. Philos., Sci. & Law 26 (2015)

Daniella McCahey & Simon A. Cole, “Human(e) Science? Demarcation, Law, and ‘Scientific Whaling’ in Whaling in the Antarctic,” 15 J. Philos., Sci. & Law 37 (2015)

On January 30 – 31, 2015, the Texas Law Review convened a Conference on Science Challenges for Law and Policy, focused on issues arising at the intersection of science and law, with particular attention to criminal justice, bioethics, and the environment. The Conference schedule is still available here. Conference papers addressed the nature of scientific disputes, the role of expertise in resolving such disputes, and the legal implementation and management of scientific knowledge. Some of the Conference papers are now available in the symposium issue of the 2015 Texas Law Review:

Rebecca Dresser, “The ‘Right to Try’ Investigational Drugs: Science and Stories in the Access Debate,” 93 Tex. L. Rev. 1631 (2015)

David L. Faigman, “Where Law and Science (and Religion?) Meet,” 93 Tex. L. Rev. 1659 (2015)

Jennifer E. Laurin, “Criminal Law’s Science Lag: How Criminal Justice Meets Changed Scientific Understanding,” 93 Tex. L. Rev. 1751 (2015)

Elizabeth Fisher, Pasky Pascual & Wendy Wagner, “Rethinking Judicial Review of Expert Agencies,” 93 Tex. L. Rev. 1681 (2015)

Sheila Jasanoff, “Serviceable Truths: Science for Action in Law and Policy,” 93 Tex. L. Rev. 1723 (2015)

Thomas O. McGarity, “Science and Policy in Setting National Ambient Air Quality Standards: Resolving the Ozone Enigma,” 93 Tex. L. Rev. 1783 (2015)

Jennifer L. Mnookin, “Constructing Evidence and Educating Juries: The Case for Modular, Made-In-Advance Expert Evidence About Eyewitness Identifications and False Confessions,” 93 Tex. L. Rev. 1811 (2015)

California Actos Decision Embraces Relative-Risk-Greater-Than-Two Argument

July 28th, 2015

A recent decision of the California Court of Appeal, Second District, Division Three, continues the dubious state and federal practice of deciding important issues under cover of unpublished opinions. Cooper v. Takeda Pharms. America, Inc., No. B250163, 2015 Cal. App. Unpub. LEXIS 4965 (Calif. App., 2nd Dist., Div. 3; July 16, 2015). In Cooper, plaintiff claimed that her late husband’s bladder cancer was caused by defendant’s anti-diabetic medication, Actos (pioglitazone). The defendant moved to strike the expert witness testimony in support of specific causation. The trial judge expressed serious concerns about the admissibility of plaintiff’s expert witnesses on specific causation, but permitted the trial to go forward. After a jury returned its verdict in favor of plaintiff, the trial court entered judgment for the defendants, on grounds that the plaintiff lacked admissible expert witness testimony.

Although a recent, large, well-conducted study[1] failed to find any meaningful association between pioglitazone and bladder cancer, there were, at the time of trial, several studies that suggested an association. Plaintiff’s expert witnesses, epidemiologist Dr. Alfred Neugut and bladder oncologist Dr. Norm Smith, interpreted the evidence to claim a causal association, but both conceded that there were no biomarkers that allowed them to attribute Cooper’s cancer to pioglitazone. The plaintiff also properly conceded that identifying a cause of the bladder cancer was irrelevant to treating the disease. Cooper, 2015 Cal. App. Unpub. LEXIS 4965, at *13. Specific causation was thus determined by the so-called process of differential etiology, with the ex ante existence of risk substituting for cause, and with risk exposure used in the differential analysis.

The trial court was apparently soured on Dr. Smith’s specific causation assessment because of his poor performance at deposition, in which he demonstrated a lack of understanding of Cooper’s other potential exposures. Smith’s spotty understanding of Cooper’s actual and potential exposures and other risks made any specific causation assessment little more than guesswork. By the time of trial, Dr. Smith and plaintiff’s counsel had backfilled the gaps, and Smith presented a more confident analysis of Cooper’s exposures and potentially competing risks.

Cooper had no family history of bladder cancer, no alcohol consumption, and no obvious exposure to occupational bladder carcinogens. His smoking history would account for exposure to a known bladder carcinogen, cigarette smoke, but Cooper’s documented history was of minor tobacco use, remote in time. Factually, Cooper’s history was suspect and at odds with his known emphysema. Based upon this history, along with their causal interpretation of the Actos–bladder cancer association, and their quantitative assessment that the risk ratio for bladder cancer from Actos was 7.0 or higher for Mr. Cooper (controlled for covariates and potential confounders), the plaintiff’s expert witnesses opined that Actos was probably a substantial factor in causing Mr. Cooper’s bladder cancer. The court did not examine the reasonableness of Dr. Smith’s risk ratios, which seem exorbitant in view of several available meta-analyses.[2]

The court stated that under the applicable California law of “substantial factor,” the plaintiff’s expert witness, in conducting a differential diagnosis, need not exclude every other possible cause of plaintiff’s disease “with absolute certainty.” Cooper, at *41-42. This statement leaves unclear whether the plaintiff’s expert witness must (and did in this case) rule out other possible causes with some level of certitude less than “absolute certainty,” such as reasonable medical certainty, or perhaps reasonable probability. Dr. Smith’s testimony, as described, did not go so far as to rule out smoking as “a cause” of Cooper’s bladder cancer; he opined only that the risk from smoking was of a lower order of magnitude than the risk from Actos. In Dr. Smith’s opinion, the discrepancy in magnitude between the risk ratios for smoking and Actos allowed him to state confidently that Actos was the most substantial risk.

Having estimated the smoking-related increased risk at somewhere between 0 and 100%, and the Actos increased risk at 600% or greater, Dr. Smith was able to present an admissible opinion that Actos was a substantial factor. Of course, this all turns on the appellate court’s acceptance of risk, of some sufficiently large magnitude, as evidence of specific causation. In the Cooper court’s words:

“The epidemiological studies relied on by Dr. Smith indicated exposure to Actos® resulted in hazard ratios for developing bladder cancer ranging from 2.54 to 6.97. By demonstrating a relative risk greater than 2.0 that a product causes a disease, epidemiological studies thereby become admissible to prove that the product at issue was more likely than not responsible for causing a particular person’s disease. “When statistical analyses or probabilistic results of epidemiological studies are offered to prove specific causation . . . under California law those analyses must show a relative risk greater than 2.0 to be ‘useful’ to the jury. Daubert v. Merrell Dow Pharmaceuticals Inc., 43 F.3d 1311, 1320 (9th Cir.), cert. denied 516 U.S. 869 (1995) [Daubert II]. This is so, because a relative risk greater than 2.0 is needed to extrapolate from generic population-based studies to conclusions about what caused a specific person’s disease. When the relative risk is 2.0, the alleged cause is responsible for an equal number of cases of the disease as all other background causes present in the control group. Thus, a relative risk of 2.0 implies a 50% probability that the agent at issue was responsible for a particular individual’s disease. This means that a relative risk that is greater than 2.0 permits the conclusion that the agent was more likely than not responsible for a particular individuals disease. [Reference Manual on Scientific Evidence (Federal Judicial Center 2d ed. 2000) (“Ref. Manual”),] Ref. Manual at 384, n. 140 (citing Daubert II).” (In re Silicone Gel Breast Implant Prod. Liab. Lit. (C.D. Cal. 2004) 318 F.Supp.2d 879, 893; italics added.) Thus, having considered and ruled out other background causes of bladder cancer based on his medical records, Dr. Smith could conclude based on the studies that it was more likely than not that Cooper’s exposure to Actos® caused his bladder cancer. In other words, because the studies, to varying degrees, adjusted for race, age, sex, and smoking, as well as other known causes of bladder cancer, Dr. Smith could rely upon those studies to make his differential diagnosis ruling in Actos®—as well as smoking—and concluding that Actos® was the most probable cause of Cooper’s disease.”

Cooper, at *78-80 (emphasis in the original).

Of course, the epidemiologic studies themselves are not admissible, regardless of the size of the relative risk, but the court was, no doubt, speaking loosely about the expert witness opinion testimony that was based upon the studies with risk ratios greater than two. Although the Cooper case does not change California law’s facile acceptance of risk as a substitute for cause, the case does make its approval of the plaintiff’s expert witness’s attribution turn on the magnitude of the risk ratio, adjusted for confounders, as having exceeded two. The Cooper case leaves open what happens when the risk that is being substituted for cause is a ratio ≤ 2.0. Some critics of the risk ratio > 2.0 inference have suggested that risk ratios greater than two would lead to directed verdicts for plaintiffs in all cases, but this suggestion requires demonstrations of both the internal and external validity of the studies that measure the risk ratio, validity that in many cases is in doubt. In Cooper, the plaintiff’s expert witnesses’ embrace of a high, outlier risk ratio for Actos, while simultaneously downplaying competing risks, allowed them to make out their specific causation case.
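The arithmetic behind the doubling inference is worth making explicit, if only to show how much work the validity assumptions are doing. Taking a risk ratio at face value, the probability that the exposure caused an exposed person’s disease is commonly equated with the attributable fraction among the exposed:

    \[ \mathrm{AF}_{\text{exposed}} \;=\; \frac{RR - 1}{RR} \]

On this arithmetic, RR = 2.0 yields AF = 0.5, the break-even point; Dr. Smith’s claimed risk ratio of 7.0 (a 600% increased risk) yields AF = 6/7 ≈ 0.86; and the meta-analytic risk ratios for pioglitazone collected in note 2, roughly 1.2 to 1.7, yield attributable fractions of only about 0.17 to 0.41, well short of “more likely than not.”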


[1] James D. Lewis, Laurel A. Habel, Charles P. Quesenberry, Brian L. Strom, Tiffany Peng, Monique M. Hedderson, Samantha F. Ehrlich, Ronac Mamtani, Warren Bilker, David J. Vaughn, Lisa Nessel, Stephen K. Van Den Eeden, and Assiamira Ferrara, “Pioglitazone Use and Risk of Bladder Cancer and Other Common Cancers in Persons With Diabetes,” 314 J. Am. Med. Ass’n 265 (2015) (adjusted hazard ratio 1.06, 95% CI, 0.89-1.26).

[2] See, e.g., R.M. Turner, et al., “Thiazolidinediones and associated risk of bladder cancer: a systematic review and meta-analysis,” 78 Brit. J. Clin. Pharmacol. 258 (2014) (OR = 1.51, 95% CI 1.26-1.81, for longest cumulative duration of pioglitazone use); M. Ferwana, et al., “Pioglitazone and risk of bladder cancer: a meta-analysis of controlled studies,” 30 Diabet. Med. 1026 (2013) (based upon 6 studies, with median follow-up of 44 months, risk ratio = 1.23; 95% CI 1.09-1.39); Cristina Bosetti, et al., “Cancer Risk for Patients Using Thiazolidinediones for Type 2 Diabetes: A Meta-Analysis,” 18 The Oncologist 148 (2013) (RR = 1.64 for longest exposure); Shiyao He, et al., “Pioglitazone prescription increases risk of bladder cancer in patients with type 2 diabetes: an updated meta-analysis,” 35 Tumor Biology 2095 (2014) (pooled hazard ratio = 1.67; 95% C.I., 1.31 – 2.12).

Canadian Judges’ Reference Manual on Scientific Evidence

July 24th, 2015

I had some notion that there was a Canadian version of the Reference Manual on Scientific Evidence in the works, but Professor Greenland’s comments in a discussion over at Deborah Mayo’s blog drew my attention to the publication of the Science Manual for Canadian Judges [Manual]. See “‘Statistical Significance’ According to the U.S. Dept. of Health and Human Services (ii),” Error Statistics Philosophy (July 17, 2015).

The Manual is the product of the Canadian National Judicial Institute (NJI), which is an independent, not-for-profit group that is committed to educating Canadian judges. The NJI’s website describes the Manual:

“Without the proper tools, the justice system can be vulnerable to unreliable expert scientific evidence.

* * *

The goal of the Science Manual is to provide judges with tools to better understand expert evidence and to assess the validity of purportedly scientific evidence presented to them. …”

The Chief Justice of Canada, Hon. Beverley M. McLachlin, contributed an introduction to the Manual, which was notable for its frank admission that:

“[w]ithout the proper tools, the justice system is vulnerable to unreliable expert scientific evidence.

* * * *

Within the increasingly science-rich culture of the courtroom, the judiciary needs to discern ‘good’ science from ‘bad’ science, in order to assess expert evidence effectively and establish a proper threshold for admissibility. Judicial education in science, the scientific method, and technology is essential to ensure that judges are capable of dealing with scientific evidence, and to counterbalance the discomfort of jurists confronted with this specific subject matter.”

Manual at 14. These are laudable goals, indeed.

The first chapter of the Manual is an overview of Canadian law of scientific evidence, “The Legal Framework for Scientific Evidence,” by Canadian law professors Hamish Stewart (University of Toronto) and Catherine Piché (University of Montreal). Several judges served as peer reviewers.

The second chapter, “Science and the Scientific Method,” contains the heart of what judges supposedly should know about scientific and statistical matters to serve as effective “gatekeepers.” Like the chapters in the Reference Manual on Scientific Evidence, this chapter was prepared by a scientist author (Scott Findlay, Ph.D., Associate Professor of Biology, University of Ottawa) and a lawyer author (Nathalie Chalifour, Associate Professor of Law, University of Ottawa). Several judges and Professor Brian Baigrie (University of Toronto, Victoria College, and the Institute for the History and Philosophy of Science and Technology) provided peer review. The chapter attempts to cover the demarcation between science and non-science, and between scientific and other expert witness opinion. The authors describe “the” scientific method, hypotheses, experiments, predictions, inference, probability, statistics and statistical hypothesis testing, data reliability, and related topics. A subsection of chapter two is entitled “Normative Issues in Science – The Myth of Scientific Objectivity,” which suggests a Feyerabendian, post-modernist influence at odds with the Chief Justice’s aspirational statement of goals in her introduction to the Manual.

Greenland noted some rather cavalier statements in chapter two, which suggest that the conventional alpha of 5% corresponds to a “scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it.” And he pointed to other passages where the chapter seems to suggest that the coefficient of confidence that corresponds to an alpha of 5% “constitutes a rather high standard of proof,” thus confusing and conflating the probability of random error with posterior probabilities. Some have argued that these errors are simply an effort to make statistical concepts easier to grasp for lay people, but the statistics chapter in the FJC’s Reference Manual shows that accurate exposition of statistical concepts can be made understandable. The Canadian Manual seems in need of some trimming with Einstein’s razor, usually paraphrased as “Everything should be made as simple as possible, but no simpler.”[1] The razor should certainly be applied to statistical concepts, with the understanding that pushing to simplify too aggressively can sometimes result in simplistic, and simply wrong, exposition.
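Greenland’s point can be illustrated with a short simulation, offered here as a minimal sketch with invented numbers rather than anything drawn from the Manual. The “95%” in a 95% confidence interval describes how often the interval-generating procedure covers the true value over many repetitions; it is not the probability that any hypothesis is true or false in a single study, and so it cannot serve as a “standard of proof”:

    import random
    import statistics

    # Minimal sketch: long-run coverage of a 95% confidence interval.
    # All numbers are invented for illustration.
    random.seed(42)
    TRUE_MEAN = 10.0        # assumed population mean
    SIGMA = 2.0             # assumed population standard deviation
    N, TRIALS = 30, 10_000  # sample size; number of simulated studies
    Z = 1.96                # two-sided critical value for alpha = 0.05

    covered = 0
    for _ in range(TRIALS):
        sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
        mean = statistics.fmean(sample)
        se = statistics.stdev(sample) / N ** 0.5
        # Does this study's 95% interval cover the true mean?
        if mean - Z * se <= TRUE_MEAN <= mean + Z * se:
            covered += 1

    print(f"Coverage over {TRIALS} studies: {covered / TRIALS:.3f}")

With these assumed parameters, roughly 95% of the simulated intervals cover the true mean. That long-run coverage is a property of the procedure; it assigns no posterior probability to any null hypothesis in a particular case, “rather high standard of proof” or otherwise.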

Chapter 3 returns to more lawyerly matters, “Managing and Evaluating Expert Evidence in the Courtroom,” prepared and peer-reviewed by prominent Canadian lawyers and judges. The final chapter, “Ethics of the Expert Witness,” should be of interest to lawyers and judges in the United States, where the topic is largely ignored. The chapter was prepared by Professor Adam Dodek (University of Ottawa), along with several writers from the National Judicial Institute, the Canadian Judicial Council, American College of Trial Lawyers, Environment Canada, and notably, Joe Cecil & the Federal Judicial Center.

Weighing in at 228 pages, the Science Manual for Canadian Judges is much shorter than the Federal Judicial Center’s Reference Manual on Scientific Evidence. Unlike the FJC’s Reference Manual, which is now in its third edition, the Canadian Manual has no separate chapters on regression, DNA testing and forensic evidence, clinical medicine, or epidemiology. The coverage of statistical inference is concentrated in chapter two, but that chapter has no discussion of meta-analysis, systematic review, evidence-based medicine, confounding, and the like. Perhaps there will soon be a second edition of the Science Manual for Canadian Judges.


[1] See Albert Einstein, “On the Method of Theoretical Physics; The Herbert Spencer Lecture,” delivered at Oxford (10 June 1933), published in 1 Philosophy of Science 163 (1934) (“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”).

Silicone Data Slippery and Hard to Find (Part 1)

July 4th, 2015

In the silicone gel breast implant litigation, plaintiffs’ counsel loved to wave around early Dow Corning experiments with silicone as an insecticide. As the roach crawls, it turned out that silicone was much better at attracting and dispatching dubious expert witnesses and their testimony. On this point, it is hard to dispute the judgment of Judge Jack Weinstein[1].

The silicone wars saw a bioethics expert appear as an expert witness to testify about a silicone study in which his co-authors refused to share their data with him, embarrassing to say the least. “Where Are They Now? Marc Lappé and the Missing Data” (May 19, 2013). And another litigation expert witness lost his cachet when the Northridge earthquake ate his data. “Earthquake-Induced Data Loss – We’re All Shook Up” (June 26, 2015). But other expert witnesses were up to the challenge of offering the most creative and clever excuses for not producing their underlying data.

Rhapsody in Goo – My Data Are Traveling; Come Back Later

Testifying expert witness Dr. Eric Gershwin was the author of several research papers that claimed or suggested immunogenicity of silicone[2]. His results were criticized and seemed to elude replication, but he enjoyed a strong reputation as an NIH-funded researcher. Although several of his co-authors were from Specialty Labs, Inc. (Santa Monica, CA)[3], defense requests for Gershwin’s underlying data were routinely met with the glib response that the data were in Israel, where some of his other co-authors resided.

Gershwin testified in several trials, and the plaintiffs’ counsel placed great emphasis on his publications and on his testimony given before Judge Jones’ technical advisors in August 1996, before Judge Pointer’s panel of Rule 706 experts in July 1997, and before the Institute of Medicine (IOM) in 1998.

Ultimately, this peer review of Gershwin’s work and claims was withering. The immunologist on Judge Jones’ panel (Dr. Stenzel-Poore) found Gershwin’s claims “not well substantiated.” Hall v. Baxter Healthcare Corp., 947 F.Supp. 1387 (D. Ore. 1996). The immunologist on Judge Pointer’s panel, Dr. Betty A. Diamond, was unshakeable in her criticisms of Gershwin’s work and his conclusions. Testimony of Dr. Betty A. Diamond, in MDL 926 (April 23, 1999). And the IOM found Gershwin’s work inadequate and insufficient to justify the extravagant claims that plaintiffs were making for immunogenicity and for causation of autoimmune disease. Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Institute of Medicine) (Wash. D.C. 1999).

Unlike Kossovsky, who left medical practice and his university position, Gershwin has continued to teach, research, and write after the collapse of the silicone litigation industry. And he has continued to testify, albeit in other kinds of tort cases.

In 2011, in testimony in a Botox case, Dr. Gershwin attempted to distance himself from his prior silicone testimony. Gershwin testified that he was “an expert for silicone implants in the late 90s.” Testimony of M.E. Gershwin, at 18:17-25, in Ray v. Allergan, Inc., Civ. No. 3:10CV00136 (E.D. Va. Jan. 17, 2011). An expert witness for implants; how curious? Here is how Gershwin described the fate of his strident testimony in the silicone litigation:

“Q. And has a court ever limited or excluded your opinions?

A. So a long time ago, probably more than ten years ago or so, twice. I had many cases involving silicone implants. The court restricted some but not all of my testimony. Although, my understanding is that, when the FDA finally did reapprove the use of silicone implants, the papers I published and evidence I gave was actually part of the basis by which they developed their regulations. And there’s not been a single example in the literature of anyone that’s ever refuted or questioned any of my work. But I think that’s all, as far as I know.

* * * *
Q. Okay. So it’s not — you made it sound like it was some published work that you had. Was it your opinions that you expressed in the cases that you believe the FDA adopted as part of their guidelines, or do you —

A. So I’ll tell you, I haven’t visited this subject in a long time, and I certainly took quite a beating from a number of people over — I was very proud in the past that I did it. Women’s rights groups all over the United States applauded what I did. I haven’t looked at these documents in over ten years, so beyond that, you’d have to do your own research.”

Id. at 20:19 – 21:25. Actually, several courts excluded Gershwin, as well as other expert witnesses who relied upon his published papers. Proud to be beaten.

Some of Gershwin’s coauthors have stayed the course on silicone. Yehuda Shoenfeld continues to publish on sick-building syndrome and so-called silicone “adjuvant disease,” which Shoenfeld immodestly refers to as “Shoenfeld’s syndrome.”[4] Gershwin and Shoenfeld parted company on silicone in the late 1990s, although they continue to publish together on other topics[5].


[1] Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in the silicone gel breast implant litigation as “charlatans” and the litigation as largely based upon fraud).

[2] E. Bar-Meir, S.S. Teuber, H.C. Lin, I. Alosacie, G. Goddard, J. Terybery, N. Barka, B. Shen, J.B. Peter, M. Blank, M.E. Gershwin, Y. Shoenfeld, “Multiple Autoantibodies in Patients with Silicone Breast Implants,” 8 J. Autoimmunity 267 (1995); Merrill J. Rowley, Andrew D. Cook, Suzanne S. Teuber, M. Eric Gershwin, “Antibodies to Collagen: Comparative Epitope Mapping in Women with Silicone Breast Implants, Systemic Lupus Erythematosus and Rheumatoid Arthritis,” 6 J. Autoimmunity 775 (1994); Suzanne S. Teuber, Merrill J. Rowley, Steven H. Yoshida, Aftab A. Ansari, M. Eric Gershwin, “Anti-collagen Autoantibodies are Found in Women with Silicone Breast Implants,” 6 J. Autoimmunity 367 (1993).

[3] J. Teryberyd, J.B. Peter, H.C. Lin, and B. Shen.

[4] A partial sampler of Shoenfeld’s continued output on silicone:

Goren, G. Segal, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvant (ASIA) evolution after silicone implants: Who is at risk?” 34 Clin. Rheumatol. (2015) [in press]

Nesher, A. Soriano, G. Shlomai, Y. Iadgarov, T.R. Shulimzon, E. Borella, D. Dicker, Y. Shoenfeld, “Severe ASIA syndrome associated with lymph node, thoracic, and pulmonary silicone infiltration following breast implant rupture: experience with four cases,” 24 Lupus 463 (2015)

Dagan, M. Kogan, Y. Shoenfeld, G. Segal, “When uncommon and common coalesce: adult onset Still’s disease associated with breast augmentation as part of autoimmune syndrome induced by adjuvants (ASIA),” 34 Clin. Rheumatol. (2015) [in press]

Soriano, D. Butnaru, Y. Shoenfeld, “Long-term inflammatory conditions following silicone exposure: the expanding spectrum of the autoimmune/ inflammatory syndrome induced by adjuvants (ASIA),” 32 Clin. Experim. Rheumatol. 151 (2014)

Perricone, S. Colafrancesco, R. Mazor, A. Soriano, N. Agmon-Levin, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvants (ASIA) 2013: Unveiling the pathogenic, clinical and diagnostic aspects,” 47 J. Autoimmun. 1 (2013)

Vera-Lastra, G. Medina, P. Cruz-Dominguez Mdel, L.J. Jara, Y. Shoenfeld, “Autoimmune/inflammatory syndrome induced by adjuvants (Shoenfeld’s syndrome): clinical and immunological spectrum,” 9 Expert Rev Clin Immunol. 361 (2013)

Lidar, N. Agmon-Levin, P. Langevitz, Y. Shoenfeld, “Silicone and scleroderma revisited,” 21 Lupus 121 (2012)

S.D. Hajdu, N. Agmon-Levin, Y. Shoenfeld, “Silicone and autoimmunity,” 41 Eur. J. Clin. Invest. 203 (2011)

Levy, P. Rotman-Pikielny, M. Ehrenfeld, Y. Shoenfeld, “Silicone breast implantation-induced scleroderma: description of four patients and a critical review of the literature,” 18 Lupus 1226 (2009)

A.L. Nancy & Y. Shoenfeld, “Chronic fatigue syndrome with autoantibodies – the result of an augmented adjuvant effect of hepatitis-B vaccine and silicone implant,” 8 Autoimmunity Rev. 52 (2008)

Molina & Y. Shoenfeld, “Infection, vaccines and other environmental triggers of autoimmunity,” 38 Autoimmunity 235 (2005)

R.A. Asherson, Y. Shoenfeld, P. Jacobs, C. Bosman, “An unusually complicated case of primary Sjögren’s syndrome: development of transient ‘lupus-type’ autoantibodies following silicone implant rejection,” 31 J. Rheumatol. 196 (2004), and Erratum in 31 J. Rheumatol. 405 (2004)

Bar-Meir, M. Eherenfeld, Y. Shoenfeld, “Silicone gel breast implants and connective tissue disease–a comprehensive review,” 36 Autoimmunity 193 (2003)

Zandman-Goddard, M. Blank, M. Ehrenfeld, B. Gilburd, J. Peter, Y. Shoenfeld, “A comparison of autoantibody production in asymptomatic and symptomatic women with silicone breast implants,” 26 J. Rheumatol. 73 (1999)

[5] See, e.g., N. Agmon-Levin, R. Kopilov, C. Selmi, U. Nussinovitch, M. Sánchez-Castañón, M. López-Hoyos, H. Amital, S. Kivity, M.E. Gershwin, Y. Shoenfeld, “Vitamin D in primary biliary cirrhosis, a plausible marker of advanced disease,” 61 Immunol. Research 141 (2015).

Earthquake-Induced Data Loss – We’re All Shook Up

June 26th, 2015

Adam Marcus and Ivan Oransky are medical journalists who publish the Retraction Watch blog. Their blog’s coverage of error, fraud, plagiarism, and other publishing disasters is often first-rate, and a valuable curative for the belief that peer review publication, as it is now practiced, ensures trustworthiness.

Yesterday, Retraction Watch posted an article on earthquake-induced data loss. Shannon Palus, “Lost your data? Blame an earthquake” (June 25, 2015). A commenter on PubPeer raised concerns about a key figure in a paper[1]. The authors acknowledged a problem, which they traced to their loss of data in an earthquake. The journal retracted the paper.

This is not the first instance of earthquake-induced loss of data.

When John O’Quinn and his colleagues in the litigation industry created the pseudo-science of silicone-induced autoimmunity, they recruited Nir Kossovsky, a pathologist at UCLA Medical Center. Although Kossovsky looked a bit like Pee-Wee Herman, he was a graduate of the University of Chicago Pritzker School of Medicine, and the U.S. Naval War College, and a consultant to the FDA. In his dress whites, Kossovsky helped O’Quinn sell his silicone immunogenicity theories to juries and judges around the country. For a while, the theories sold well.

In testifying and dodging discovery for the underlying data in his silicone studies, Kossovsky was as slick as silicone itself. Ultimately, when defense counsel subpoenaed the underlying data from Kossovsky’s silicone study, Kossovsky shrugged and replied that the Northridge Earthquake destroyed his data. Apparently coffee cups and other containers of questionable fluids spilled on his silicone data in the quake, and Kossovsky’s emergency response was to obtain garbage cans and throw out the data. For the gory details, see Gary Taubes, “Silicone in the System: Has Nir Kossovsky really shown anything about the dangers of breast implants?” Discover Magazine (Dec. 1995).

As Mr. Taubes points out, Kossovsky’s paper was rejected by several journals before being published in the Journal of Applied Biomaterials, of which Kossovsky was a member of the editorial board. The lack of data did not, however, keep Kossovsky from continuing to testify, and from trying to commercialize, along with his wife, Beth Brandegee, and his father, Ram Kossowsky[2], an ELISA-based silicone “antibody” biomarker diagnostic test, Detecsil. Although Rule 702 had been energized by the Daubert decision in 1993, many judges were still not willing to take a hard look at Kossovsky’s study or his test, or to demand the supposedly supporting data. The Food and Drug Administration, however, eventually caught up with Kossovsky, and the Detecsil marketing ceased. Lillian J. Gill, FDA Acting Director, Office of Compliance, Letter to Beth S. Brandegee, President, Structured Biologicals (SBI) Laboratories: Detecsil Silicone Sensitivity Test (July 15, 1994); see Taubes, Discover Magazine.

After defense counsel learned of the FDA’s enforcement action against Kossovsky and his company, the litigation industry lost interest in Kossovsky, and his name dropped off trial witness lists. His name also dropped off the rolls of tenured UCLA faculty, and he apparently left medicine altogether to become a business consultant. Dr. Kossovsky became “an authority on business process risk and reputational value.” Kossovsky is now the CEO and Director of Steel City Re, which specializes in strategies for maintaining and enhancing reputational value. Ironic; eh?

A review of PubMed’s entries for Nir Kossovsky shows that his run in silicone started in 1983, and ended in 1996. He testified for plaintiffs in Hopkins v. Dow Corning Corp., 33 F.3d 1116 (9th Cir.1994) (tried in 1991), and in the infamous case of Johnson v. Bristol-Myers Squibb, CN 91-21770, Tx Dist. Ct., 125th Jud. Dist., Harris Cty., 1992.

A bibliography of Kossovsky’s silicone oeuvre is listed below.


[1] Federico S. Rodríguez, Katterine A. Salazar, Nery A. Jara, María A García-Robles, Fernando Pérez, Luciano E. Ferrada, Fernando Martínez, and Francisco J. Nualart, “Superoxide-dependent uptake of vitamin C in human glioma cells,” 127 J. Neurochemistry 793 (2013).

[2] Father and son apparently did not agree on how to spell their last name.


Nir Kossovsky, D. Conway, Ram Kossowsky & D. Petrovich, “Novel anti-silicone surface-associated antigen antibodies (anti-SSAA(x)) may help differentiate symptomatic patients with silicone breast implants from patients with classical rheumatological disease,” 210 Curr. Topics Microbiol. Immunol. 327 (1996)

Nir Kossovsky, et al., “Preservation of surface-dependent properties of viral antigens following immobilization on particulate ceramic delivery vehicles,” 29 J. Biomed. Mater. Res. 561 (1995)

E.A. Mena, Nir Kossovsky, C. Chu, and C. Hu, “Inflammatory intermediates produced by tissues encasing silicone breast prostheses,” 8 J. Invest. Surg. 31 (1995)

Nir Kossovsky, “Can the silicone controversy be resolved with rational certainty?” 7 J. Biomater. Sci. Polymer Ed. 97 (1995)

Nir Kossovsky & C.J. Freiman, “Physicochemical and immunological basis of silicone pathophysiology,” 7 J. Biomater. Sci. Polym. Ed. 101 (1995)

Nir Kossovsky, et al., “Self-reported signs and symptoms in breast implant patients with novel antibodies to silicone surface associated antigens [anti-SSAA(x)],” 6 J. Appl. Biomater. 153 (1995), and “Erratum,” 6 J. Appl. Biomater. 305 (1995)

Nir Kossovsky & J. Stassi, “A pathophysiological examination of the biophysics and bioreactivity of silicone breast implants,” 24 (Suppl. 1) Seminars Arthritis & Rheum. 18 (1994)

Nir Kossovsky & C.J. Freiman, “Silicone breast implant pathology. Clinical data and immunologic consequences,” 118 Arch. Pathol. Lab. Med. 686 (1994)

Nir Kossovsky & C.J. Freiman, “Immunology of silicone breast implants,” 8 J. Biomaterials Appl. 237 (1994)

Nir Kossovsky & N. Papasian, “Mammary implants,” 3 J. Appl. Biomater. 239 (1992)

Nir Kossovsky, P. Cole, D.A. Zackson, “Giant cell myocarditis associated with silicone: An unusual case of biomaterials pathology discovered at autopsy using X-ray energy spectroscopic techniques,” 93 Am. J. Clin. Pathol. 148 (1990)

Nir Kossovsky & R.B. Snow, “Clinical-pathological analysis of failed central nervous system fluid shunts,” 23 J. Biomed. Mater. Res. 73 (1989)

R.B. Snow & Nir Kossovsky, “Hypersensitivity reaction associated with sterile ventriculoperitoneal shunt malfunction,” 31 Surg. Neurol. 209 (1989)

Nir Kossovsky & Ram Kossowsky, “Medical devices and biomaterials pathology: Primary data for health care technology assessment,” 4 Internat’l J. Technol. Assess. Health Care 319 (1988)

Nir Kossovsky, John P. Heggers, and M.C. Robson, “Experimental demonstration of the immunogenicity of silicone-protein complexes,” 21 J. Biomed. Mater. Res. 1125 (1987)

Nir Kossovsky, John P. Heggers, R.W. Parsons, and M.C. Robson, “Acceleration of capsule formation around silicone implants by infection in a guinea pig model,” 73 Plastic & Reconstr. Surg. 91 (1984)

John Heggers, Nir Kossovsky, et al., “Biocompatibility of silicone implants,” 11 Ann. Plastic Surg. 38 (1983)

Nir Kossovsky, John P. Heggers, et al., “Analysis of the surface morphology of recovered silicone mammary prostheses,” 71 Plast. Reconstr. Surg. 795 (1983)