For your delectation and delight, desultory dicta on the law of delicts.


September 5th, 2017

“With half-damp eyes I stared to the room
Where my friends and I spent many an afternoon
Where we together weathered many a storm
Laughin’ and singin’ till the early hours of the morn”
Bob Dylan, “Bob Dylan’s Dream” (1963)

* * * * * * * * * * * *

Well, not really singing so much as analyzing, calculating, discussing, debating, and occasionally laughing, too.

A few weeks ago, two good friends, Dr. David Schwartz and Dr. Judi Steinman, came to visit me in New York. I seem to have known David and Judi forever. David went to work for McCarter & English shortly after finishing his post-doctoral training in neuropharmacology and neurophysiology, and his doctorate from Princeton in neuroscience. At McCarter, David worked initially on the Prozac cases, but after the 1992 Pamela Jean Johnson Christmas Eve verdict in the silicone gel breast implant litigation, David jumped in to help McCarter and other lawyers understand the sketchy scientific evidence that was being proffered in support of claims by the “silicone sisters.” Judi, whose doctorate was in psychobiology and neuroscience from Rutgers, joined us on the McCarter science team a couple of years later. Together, we had the challenge and thrill of putting an end to a rather disreputable chapter in American tort litigation history, MDL 926, a.k.a. In re Silicone Gel Breast Implant Product Liability Litigation.

Ultimately, we all moved on from the McCarter firm. David went on to start a first-rate scientific consulting firm, Innovative Science Solutions (ISS) which serves the pharmaceutical, biotechnology, and medical device industries. As a principal in ISS, David worked with me in welding fume and other litigations, and we continue to collaborate on various projects. A few years ago, we co-produced a short film, “The Daubert Will Set Your Client Free.”

Judi moved to Hawaii, where, in 2003, she started BioTechnoLegal Services LLC, which provides scientific and medico-legal advice to lawyers in complex health-effects litigation. Judi joined the faculty of the University of Hawaii’s Department of Pharmaceutical Sciences, and for some years, she was the Program Coordinator for the University’s Master of Science program in Clinical Psychopharmacology. A couple of years ago, I gave a lecture by Skype to one of Judi’s classes at the University on meta-analysis in pharmacoepidemiology.

What a treat to have David and Judi in my living room, to talk and reminisce. David had planned to conduct an interview of me, but we might as well have interviewed each other about the varied roads we have traveled. David persisted in his plan to make me the interviewee, and he has now graced me twice by posting the interview to his firm’s website. David Schwartz, “Effective Use of Scientific Principles in the Courtroom: From Silicone to Talc and Beyond,” ISS Blog (Aug. 30, 2017).

Our discussion on a warm July afternoon made me nostalgic, but also pushed me into reflecting on how I came to live in the interdisciplinary world of law and science. Science had always been a part of my life. As a young boy, I lost myself in my grandfather’s Medical Clinics of North America, and my uncle’s college and medical school textbooks. There were several physicians in my family, and one of my favorites was my great uncle Sam, who was an orthopedic surgeon. Uncle Sam delighted my cousins and me with visits to the skeleton that dangled from a hook in his office. When I got my first microscope at age 11, Uncle Sam gave me a collection of tissue slides and taught me the difference between a sarcoma and a carcinoma. This was much more fun than trading baseball cards.

Another childhood treat was visiting my cousin Nan, whose parents had given her a subscription to “Things of Science.” Every month, she received a magical blue box with stuff – scientific stuff, with suggestions for experiments and observations. Whenever I had a chance, I would press Nan to get out the most recent box, and we would become engrossed in the latest scientific marvel. Nan’s sister, Elena, a few years younger, recently reminded me how jealous she was when she was excluded from our scientific play.

In high school, I had the good fortune to attend a National Science Foundation summer program to study physics. In college, I studied biology, and worked in the laboratory of a professor who was studying tubulin mutations and nuclear migration.

Watching the scientific process unfold through experiments and analysis was a huge thrill, but also, in some ways, a disappointment. Science is a long game, with lots of dead ends and missteps. After finishing university training in biological sciences, I stayed another year to complete a second major in philosophy, and entered graduate school to study philosophy. My experience in the laboratory ultimately made me more interested in the epistemology of scientific evidence and knowledge, as well as the implementation of scientific knowledge in policy decisions. Studying philosophy gave me plenty of opportunity to understand “meta-science,” but in the late 1970s, there were few opportunities for gainful employment. The tenure-track market was saturated by recent doctorates who had swelled the university departments during the Vietnam War. The department chairman, Arthur Smullyan, would send out regular memoranda to remind us that we were not likely going to find university-level teaching jobs. I recall sitting in Patty’s restaurant, on Sicard Street, New Brunswick, where some of my fellow graduate students and I, after finishing our qualifying exams, were drowning our sorrows in cheap beer and pizza. We all bemoaned our lack of job opportunities, and in a fit of exasperation, I suggested that we might form a consulting company. Having polished our skills in argumentation, I thought that there could be a way to eke out a living, much like Monty Python’s “Argument Clinic.” To my surprise, my colleagues pointed out that there already was such a profession. Naively, I asked which one, only to be confused why I had never before thought of law as a career. I took the LSAT, and the rest is history. When I started law school, I thought that my studies in biology and philosophy had been dead ends in my education, which shows how wrong I can be.

The C-8 (Perfluorooctanoic Acid) Litigation Against DuPont, part 1

September 27th, 2015

The first plaintiff has begun her trial against E.I. Du Pont De Nemours & Company (DuPont), for alleged harm from environmental exposure to perfluorooctanoic acid or its salts (PFOA). Ms. Carla Bartlett is claiming that she developed kidney cancer as a result of drinking water allegedly contaminated with PFOA by DuPont. Nicole Hong, “Chemical-Discharge Case Against DuPont Goes to Trial: Outcome could affect thousands of claims filed by other U.S. residents,” Wall St. J. (Sept. 13, 2015). The case is pending before Chief Judge Edmund A. Sargus, Jr., in the Southern District of Ohio.

PFOA is not classified as a carcinogen in the Integrated Risk Information System (IRIS), of the U.S. Environmental Protection Agency (EPA). In 2005, the EPA Office of Pollution Prevention and Toxics submitted a “Draft Risk Assessment of the Potential Human Health Effects Associated With Exposure to Perfluorooctanoic Acid and Its Salts (PFOA),” which is available at the EPA’s website. The draft report, which is based upon some epidemiology and mostly animal toxicology studies, stated that there was “suggestive evidence of carcinogenicity, but not sufficient to assess human carcinogenic potential.”

In 2013, the Health Council of the Netherlands evaluated the PFOA cancer issue, and found the data unsupportive of a causal conclusion. The Health Council of the Netherlands, “Perfluorooctanoic acid and its salts: Evaluation of the carcinogenicity and genotoxicity” (2013) (“The Committee is of the opinion that the available data on perfluorooctanoic acid and its salts are insufficient to evaluate the carcinogenic properties (category 3)”).

Last year, the World Health Organization (WHO), through its International Agency for Research on Cancer (IARC), reviewed the evidence on the alleged carcinogenicity of PFOA. The IARC, which has fostered much inflation with respect to carcinogenicity evaluations, classified PFOA as only possibly carcinogenic. See News, “Carcinogenicity of perfluorooctanoic acid, tetrafluoroethylene, dichloromethane, 1,2-dichloropropane, and 1,3-propane sultone,” 15 The Lancet Oncology 924 (2014).

Most independent reviews also find the animal and epidemiologic evidence unsupportive of a causal connection between PFOA and any human cancer. See, e.g., Thorsten Stahl, Daniela Mattern, and Hubertus Brunn, “Toxicology of perfluorinated compounds,” 23 Environmental Sciences Europe 38 (2011).

So you might wonder how DuPont lost its Rule 702 challenges in such a case, which it surely did. In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., Civil Action 2:13-md-2433, 2015 U.S. Dist. LEXIS 98788 (S.D. Ohio July 21, 2015). That is a story for another day.

Beecher-Monas Proposes to Abandon Common Sense, Science, and Expert Witnesses for Specific Causation

September 11th, 2015

Law reviews are not peer reviewed, not that peer review is a strong guarantor of credibility, accuracy, and truth. Most law reviews have no regular provision for letters to the editor; nor is there a PubPeer that permits readers to point out errors for the benefit of the legal community. Nonetheless, law review articles are cited by lawyers and judges, often at face value, for claims and statements made by article authors. Law review articles are thus a potent source of misleading, erroneous, and mischievous ideas and claims.

Erica Beecher-Monas is a law professor at Wayne State University Law School, or Wayne Law, which considers itself “the premier public-interest law school in the Midwest.” Beware of anyone or any institution that describes itself as working for the public interest. That claim alone should put us on our guard against whose interests are being included and excluded as legitimate “public” interest.

Back in 2006, Professor Beecher-Monas published a book on evaluating scientific evidence in court, which had a few good points in a sea of error and nonsense. See Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process (2006)[1]. More recently, Beecher-Monas has published a law review article, which from its abstract suggests that she might have something to say about this difficult area of the law:

“Scientists and jurists may appear to speak the same language, but they often mean very different things. The use of statistics is basic to scientific endeavors. But judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony. The way scientists understand causal inference in their writings and practice, for example, differs radically from the testimony jurists require to prove causation in court. The result is a disconnect between science as it is practiced and understood by scientists, and its legal use in the courtroom. Nowhere is this more evident than in the language of statistical reasoning.

Unacknowledged difficulties in reasoning from group data to the individual case (in civil cases) and the absence of group data in making assertions about the individual (in criminal cases) beset the courts. Although nominally speaking the same language, scientists and jurists often appear to be in dire need of translators. Since expert testimony has become a mainstay of both civil and criminal litigation, this failure to communicate creates a conundrum in which jurists insist on testimony that experts are not capable of giving, and scientists attempt to conform their testimony to what the courts demand, often well beyond the limits of their expertise.”

Beecher-Monas, “Lost in Translation: Statistical Inference in Court,” 46 Arizona St. L.J. 1057, 1057 (2014) [cited as BM].

A close read of the article shows, however, that Beecher-Monas continues to promulgate misunderstanding, error, and misdirection on statistical and scientific evidence.

Individual or Specific Causation

The key thesis of this law review article is that expert witnesses have no scientific or epistemic warrant upon which to opine about individual or specific causation.

“But what statistics cannot do—nor can the fields employing statistics, like epidemiology and toxicology, and DNA identification, to name a few—is to ascribe individual causation.”

BM at 1057-58.

Beecher-Monas tells us that expert witnesses are quite willing to opine on specific causation, but that they have no scientific or statistical warrant for doing so:

“Statistics is the law of large numbers. It can tell us much about populations. It can tell us, for example, that so-and-so is a member of a group that has a particular chance of developing cancer. It can tell us that exposure to a chemical or drug increases the risk to that group by a certain percentage. What statistics cannot do is tell which exposed person with cancer developed it because of exposure. This creates a conundrum for the courts, because nearly always the legal question is about the individual rather than the group to which the individual belongs.”

BM at 1057. Clinical medicine and science come in for particular chastisement by Beecher-Monas, who acknowledges the medical profession’s legitimate role in diagnosing and treating disease. Physicians use a process of differential diagnosis to arrive at the most likely diagnosis of disease, but the etiology of the disease is not part of their normal practice. Beecher-Monas leaps beyond the generalization that physicians infrequently ascertain specific causation to the sweeping claim that ascertaining the cause of a patient’s disease is beyond the clinician’s competence and scientific justification. Beecher-Monas thus tells us, in apodictic terms, that science has nothing to say about individual or specific causation. BM at 1064, 1075.

In a variety of contexts, but especially in the toxic tort arena, expert witness testimony is not reliable with respect to the inference of specific causation, which, Beecher-Monas writes, usually without qualification, is “unsupported by science.” BM at 1061. The solution for Beecher-Monas is clear. Admitting baseless expert witness testimony is “pernicious” because the whole purpose of having expert witnesses is to help the fact finder, jury or judge, who lack the background understanding and knowledge to assess the data, interpret all the evidence, and evaluate the epistemic warrant for the claims in the case. BM at 1061-62. Beecher-Monas would thus allow the expert witnesses to testify about what they legitimately know, and let the jury draw the inference about which expert witnesses in the field cannot and should not opine. BM at 1101. In other words, Beecher-Monas is perfectly fine with juries and judges guessing their way to a verdict on an issue that science cannot answer. If her book danced around this recommendation, now her law review article has come out into the open, declaring an open season to permit juries and judges to be unfettered in their specific causation judgments. What is touching is that Beecher-Monas is sufficiently committed to gatekeeping of expert witness opinion testimony that she proposes a solution to take a complex area away from expert witnesses altogether rather than confront the reality that there is often simply no good way to connect general and specific causation in a given person.

Causal Pies

Beecher-Monas relies heavily upon Professor Rothman’s notion of causal pies or sets to describe the factors that may combine to bring about a particular outcome. In doing so, she commits a non-sequitur:

“Indeed, epidemiologists speak in terms of causal pies rather than a single cause. It is simply not possible to infer logically whether a specific factor caused a particular illness.”[2]

BM at 1063. But the question on her adopted model of causation is not whether any specific factor was the cause, but whether it was one of the multiple slices in the pie. Her citation to Rothman’s statement that “it is not possible to infer logically whether a specific factor was the cause of an observed event,” is not the problem that faces factfinders in court cases.

With respect to differential etiology, Beecher-Monas claims that “‘ruling in’ all potential causes cannot be done.” BM at 1075. But why not? While it is true that disease diagnosis is often made upon signs and symptoms, BM at 1076, sometimes physicians are involved in trying to identify causes in individuals. Psychiatrists of course are frequently involved in trying to identify sources of anxiety and depression in their patients. It is not all about putting a DSM-V diagnosis on the chart, and prescribing medication. And there are times when physicians can say quite confidently that a disease has a particular genetic cause, as in a man with a BRCA1 or BRCA2 mutation and breast cancer, or certain forms of neurodegenerative diseases, or an infant with a clearly genetically determined birth defect.

Beecher-Monas confuses “the” cause with “a” cause, and wanders away from both law and science into her own twilight zone. Here is an example of how Beecher-Monas’ confusion plays out. She asserts that:

“For any individual case of lung cancer, however, smoking is no more important than any of the other component causes, some of which may be unknown.”

BM at 1078. This ignores the magnitude of the risk factor and its likely contribution to a given case. Putting aside synergistic co-exposures, for most lung cancers, smoking is the “but for” cause of individual smokers’ lung cancers. Beecher-Monas sets up a strawman argument by telling us that it is logically impossible to infer “whether a specific factor in a causal pie was the cause of an observed event.” BM at 1079. But we are usually interested in whether a specific factor was “a substantial contributing factor,” without which the disease would not have occurred. This is hardly illogical or impracticable for a given case of mesothelioma in a patient who worked for years in a crocidolite asbestos factory, or for a case of lung cancer in a patient who smoked heavily for many years right up to the time of his lung cancer diagnosis. I doubt that many people would hesitate, on either logical or scientific grounds, to attribute a child’s phocomelia birth defects to his mother’s ingestion of thalidomide during an appropriate gestational window in her pregnancy.

Unhelpfully, Beecher-Monas insists upon playing this word game by telling us that:

“Looking backward from an individual case of lung cancer, in a person exposed to both asbestos and smoking, to try to determine the cause, we cannot separate which factor was primarily responsible.”

BM at 1080. And yet that issue, of “primary responsibility” is not in any jury instruction for causation in any state of the Union, to my knowledge.

From her extreme skepticism, Beecher-Monas swings to the other extreme that asserts that anything that could have been in the causal set or pie was in the causal set:

“Nothing in relative risk analysis, in statistical analysis, nor anything in medical training, permits an inference of specific causation in the individual case. No expert can tell whether a particular exposed individual’s cancer was caused by unknown factors (was idiopathic), linked to a particular gene, or caused by the individual’s chemical exposure. If all three are present, and general causation has been established for the chemical exposure, one can only infer that they all caused the disease.115 Courts demanding that experts make a contrary inference, that one of the factors was the primary cause, are asking to be misled. Experts who have tried to point that out, however, have had a difficult time getting their testimony admitted.”

BM at 1080. There is no support for Beecher-Monas’ extreme statement. She cites, in footnote 115, to Kenneth Rothman’s introductory book on epidemiology, but what he says at the cited page is quite different. Rothman explains that “every component cause that played a role was necessary to the occurrence of that case.” In other words, for every component cause that actually participated in bringing about this case, its presence was necessary to the occurrence of the case. What Rothman clearly does not say is that for a given individual’s case, the fact that a factor can cause a person’s disease means that it must have caused it. In Beecher-Monas’ hypothetical of three factors – idiopathic, particular gene, and chemical exposure, all three, or any two, or only one of the three may have been part of a given individual’s causal set. Beecher-Monas has carelessly or intentionally misrepresented Rothman’s actual discussion.

Physicians and epidemiologists do apply group risk figures to individuals, through the lens of predictive regression equations. The Gail Model for 5 Year Risk of Breast Cancer, for instance, is a predictive equation that comes up with a prediction for an individual patient by refining the subgroup within which the patient fits. Similarly, there are prediction models for heart attack, such as the Risk Assessment Tool for Estimating Your 10-year Risk of Having a Heart Attack. Beecher-Monas might complain that these regression equations still turn on subgroup average risk, but the point is that they can be made increasingly precise as knowledge accumulates. And the regression equations can generate confidence intervals and prediction intervals for the individual’s constellation of risk factors.
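The mechanics can be illustrated with a toy logistic risk equation. The coefficients and covariates below are invented for illustration (they are not the Gail model's actual terms), but they show how a fitted regression turns group-level data into an individualized predicted risk:

```python
import math

# Toy logistic risk model (illustrative only; NOT the Gail model's
# actual coefficients). An individual's predicted risk comes from
# plugging her covariates into the fitted equation.
def predicted_risk(age, relatives_affected, prior_biopsies):
    # Hypothetical log-odds: intercept plus covariate effects
    log_odds = (-6.0 + 0.04 * age
                + 0.9 * relatives_affected
                + 0.5 * prior_biopsies)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Two individuals in different covariate subgroups receive different
# predicted risks, even though each estimate derives from group data.
low = predicted_risk(age=45, relatives_affected=0, prior_biopsies=0)
high = predicted_risk(age=65, relatives_affected=2, prior_biopsies=1)
```

The point is not the particular numbers, but that refining the covariates refines the subgroup, and with it the individualized estimate.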

Significance Probability and Statistical Significance

The discussion of significance probability and significance testing in Beecher-Monas’ book was frequently in error,[3] and this new law review article is not much improved. Beecher-Monas tells us that “judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony,” BM at 1057, which is true enough, but this article does little to ameliorate the situation. Beecher-Monas offers the following definition of the p-value:

“The P- value is the probability, assuming the null hypothesis (of no effect) is true (and the study is free of bias) of observing as strong an association as was observed.”

BM at 1064-65. This definition misses that the p-value is a cumulative tail probability, and can be one-sided or two-sided. More seriously in error, however, is the suggestion that the null hypothesis is one of no effect, when it is merely a pre-specified expected value that is the subject of the test. Of course, the null hypothesis is often one of no disparity between the observed and the expected, but the definition should not mislead on this crucial point.
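The tail-probability point can be made concrete with a minimal binomial sketch. The counts are hypothetical, and the null value p0 is simply the pre-specified value under test (here 0.5, but it need not represent "no effect"):

```python
from math import comb

# The p-value is a cumulative tail probability computed under a
# pre-specified null hypothesis, here that the event probability
# is p0 = 0.5.
def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def upper_tail_p(k_obs, n, p0):
    # One-sided p-value: probability, under the null, of a result
    # at least as extreme as the one observed
    return sum(binom_pmf(k, n, p0) for k in range(k_obs, n + 1))

n, k_obs, p0 = 20, 15, 0.5
p_one_sided = upper_tail_p(k_obs, n, p0)
# By the symmetry of this null, a two-sided p-value doubles the tail
p_two_sided = 2 * p_one_sided
```

Note that the p-value sums over the whole tail (15, 16, ..., 20 successes), rather than taking the point probability of the observed count alone, and that the one-sided and two-sided versions differ by a factor of two here.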

For some reason, Beecher-Monas persists in describing the conventional level of statistical significance as 95%, which substitutes the coefficient of confidence for the complement of the frequently pre-specified p-value for significance. Annoying but decipherable. See, e.g., BM at 1062, 1064, 1065. She misleadingly states that:

“The investigator will thus choose the significance level based on the size of the study, the size of the effect, and the trade-off between Type I (incorrect rejection of the null hypothesis) and Type II (incorrect failure to reject the null hypothesis) errors.”

BM at 1066. While this statement is occasionally true, it mostly is not. A quick review of the last several years of the New England Journal of Medicine will document the error. Invariably, researchers use the conventional level of alpha, at 5%, unless there is multiple testing, such as in a genetic association study.

Beecher-Monas admonishes us that “[u]sing statistical significance as a screening device is thus mistaken on many levels,” citing cases that do not provide support for this proposition.[4] BM at 1066. The Food and Drug Administration’s scientists, who review clinical trials for efficacy and safety, will no doubt be astonished to hear this admonition.

Beecher-Monas argues that courts should not factor statistical significance or confidence intervals into their gatekeeping of expert witnesses, but that they should “admit studies,” and leave it to the lawyers and expert witnesses to explain the strengths and weaknesses of the studies relied upon. BM at 1071. Of course, studies themselves are rarely admitted because they represent many levels of hearsay by unknown declarants. Given Beecher-Monas’ acknowledgment of how poorly judges and lawyers understand statistical significance, this argument is cynical indeed.

Remarkably, Beecher-Monas declares, without citation, that

“the purpose of epidemiologists’ use of statistical concepts like relative risk, confidence intervals, and statistical significance are intended to describe studies, not to weed out the invalid from the valid.”

BM at 1095. She thus excludes by ipse dixit any inferential purposes these statistical tools have. She goes further and gives us a concrete example:

“If the methodology is otherwise sound, small studies that fail to meet a P-level of 5 [sic], say, or have a relative risk of 1.3 for example, or a confidence level that includes 1 at 95% confidence, but relative risk greater than 1 at 90% confidence ought to be admissible. And understanding that statistics in context means that data from many sources need to be considered in the causation assessment means courts should not dismiss non-epidemiological evidence out of hand.”

BM at 1095. Well, again, studies are not admissible; the issue is whether they may be reasonably relied upon, and whether reliance upon them may support an opinion claiming causality. And a “P-level” of 5 is, well, let us hope a serious typographical error. Beecher-Monas’ advice is especially misleading when there is only one study, or only one study in a constellation of exonerative studies. See, e.g., In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding Professor David Madigan for cherry picking studies to rely upon).

Confidence Intervals

Beecher-Monas’ book provided a good deal of erroneous information on confidence intervals.[5] The current article improves on the definitions, but still manages to go astray:

“The rationale courts often give for the categorical exclusion of studies with confidence intervals including the relative risk of one is that such studies lack statistical significance.62 Well, yes and no. The problem here is the courts’ use of a dichotomous meaning for statistical significance (significant or not).63 This is not a correct understanding of statistical significance.”

BM at 1069. Well, yes and no; this interpretation of a confidence interval, say with a coefficient of confidence of 95%, is a reasonable interpretation of whether the point estimate is statistically significant at an alpha of 5%. If Beecher-Monas does not like strict significance testing, that is fine, but she cannot mandate its abandonment by scientists or the courts. Certainly the cited interpretation is one proper interpretation among several.
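The duality between confidence intervals and significance tests can be sketched numerically. The relative risk point estimate and standard error below are hypothetical, chosen to show how the same study can be "significant" at a 90% coefficient of confidence but not at 95%:

```python
import math

# Wald confidence interval for a relative risk, computed on the log
# scale. A 95% interval that excludes 1.0 corresponds to a two-sided
# p-value below 0.05; changing the coefficient of confidence changes
# the interval's width, not the underlying data.
def rr_ci(rr, se_log_rr, z):
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return lo, hi

rr, se = 1.5, 0.22           # hypothetical point estimate and std. error
ci95 = rr_ci(rr, se, 1.96)   # 95% interval
ci90 = rr_ci(rr, se, 1.645)  # 90% interval
# Here the 95% interval includes 1.0, while the narrower 90% interval
# excludes it; the same data are "significant" at one level and not at
# the other, which is the dichotomy the courts wrestle with.
```

This is the same phenomenon discussed in footnote 7, where a 95% interval including 1.0 shrinks to exclude it when recalculated at 90%.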


There were several misleading references to statistical power in Beecher-Monas’ book, but the new law review tops them by giving a new, bogus definition:

“Power, the probability that the study in which the hypothesis is being tested will reject the alterative [sic] hypothesis when it is false, increases with the size of the study.”

BM at 1065. For this definition, Beecher-Monas cites to the Reference Manual on Scientific Evidence, but butchers the correct definition given by the late David Freedman and David Kaye.[6] All of which is very disturbing.
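For contrast, the correct concept, the probability that the test rejects the null hypothesis when the alternative is true, can be computed directly. A minimal normal-approximation sketch for a test of a proportion (the null and alternative proportions and the alpha level are hypothetical):

```python
import math

# Power: the probability of (correctly) rejecting the null hypothesis
# when the alternative is true. All else equal, power grows with the
# size of the study.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(n, p0=0.5, p1=0.6):
    z_crit = 1.96  # two-sided alpha = 0.05
    se0 = math.sqrt(p0 * (1 - p0) / n)  # std. error under the null
    se1 = math.sqrt(p1 * (1 - p1) / n)  # std. error under the alternative
    # Probability the observed proportion lands beyond the upper
    # rejection boundary when p1 is true (the lower rejection region
    # contributes negligibly here)
    return 1.0 - norm_cdf((p0 + z_crit * se0 - p1) / se1)

# Larger studies have more power to detect the same true effect
powers = [power(n) for n in (100, 400, 1600)]
```

Note the definition: the test rejects the *null* hypothesis, not the alternative; a study "rejecting the alternative hypothesis when it is false" describes nothing coherent.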

Relative Risks and Other Risk Measures

Beecher-Monas begins badly by misdefining the concept of relative risk:

“as the percentage of risk in the exposed population attributable to the agent under investigation.”

BM at 1068. Perhaps this percentage can be derived from the relative risk, if we know it to be the true measure with some certainty, through a calculation of attributable risk, but a law review article that takes the entire medical profession to task, and most of the judiciary to boot, should not confuse and conflate attributable risk with relative risk.
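The derivation the paragraph alludes to is a one-line calculation. Assuming the relative risk is the true causal measure, the attributable fraction among the exposed is (RR - 1)/RR, which is plainly not the same quantity as the relative risk itself:

```python
# Attributable fraction among the exposed, derived from the relative
# risk (valid only if the RR reflects a true causal effect):
#   AF = (RR - 1) / RR
def attributable_fraction(rr):
    return (rr - 1.0) / rr

# A relative risk of 2.0 implies that half the exposed cases are
# attributable to the exposure
af_two = attributable_fraction(2.0)
# A relative risk of 1.3 implies roughly 23%, not "a 30% risk"
af_small = attributable_fraction(1.3)
```

The RR = 2.0 case is also why a doubling of risk is so often invoked as the threshold for "more likely than not" specific causation.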

Then Beecher-Monas tells us that the “[r]elative risk is a statistical test that (like statistical significance) depends on the size of the population being tested.” BM at 1068. Well, actually not; the calculation of the RR is unaffected by the sample size. The variance of course will vary with the sample size, but Beecher-Monas seems intent on ignoring random variability.
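A quick computation confirms the point: the relative risk point estimate depends only on the observed proportions, while the variance, and hence the confidence interval, narrows as the sample grows. The counts below are hypothetical:

```python
import math

# Relative risk with a Wald 95% CI on the log scale. The point
# estimate is a ratio of proportions; only the standard error of
# log(RR) depends on the sample size.
def rr_and_ci(a, n1, b, n0, z=1.96):
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

small = rr_and_ci(15, 100, 10, 100)      # 15% vs 10% risk, n = 200
large = rr_and_ci(150, 1000, 100, 1000)  # same proportions, 10x the size
# Identical point estimates (1.5); only the interval narrows
```

The tenfold larger study yields exactly the same relative risk, with a much tighter interval, which is precisely the random variability the article brushes aside.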

Perhaps most egregious is Beecher-Monas’ assertion that:

“Any increase above a relative risk of one indicates that there is some effect.”

BM at 1067. So much for ruling out chance, bias, and confounding! Or looking at an entire body of epidemiologic research for strength, consistency, coherence, exposure-response, etc. Beecher-Monas has thus moved beyond a liberal, to a libertine, position. In case the reader has any doubts of the idiosyncrasy of her views, she repeats herself:

“As long as there is a relative risk greater than 1.0, there is some association, and experts should be permitted to base their causal explanations on such studies.”

BM at 1067-68. This is evidentiary nihilism in full glory. Beecher-Monas has endorsed relying upon studies irrespective of their study design or validity, their individual confidence intervals, their aggregate summary point estimates and confidence intervals, or the absence of important Bradford Hill considerations, such as consistency, strength, and dose-response. So an expert witness may opine about general causation from reliance upon a single study with a relative risk of 1.05, say with a 95% confidence interval of 0.8 – 1.4?[7] For this startling proposition, Beecher-Monas cites the work of Sander Greenland, a wild and wooly plaintiffs’ expert witness in various toxic tort litigations, including vaccine autism and silicone autoimmune cases.

RR > 2

Beecher-Monas’ discussion of inferring specific causation from relative risks greater than two devolves into a muddle by her failure to distinguish general from specific causation. BM at 1067. There are different relevancies for general and specific causation, depending upon context, such as clinical trials or epidemiologic studies for general causation, number of studies available, and the like. Ultimately, she adds little to the discussion and debate about this issue, or any other.

[1] See previous comments on the book at “Beecher-Monas and the Attempt to Eviscerate Daubert from Within”; “Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping”; and “Confidence in Intervals and Diffidence in the Courts.”

[2] Kenneth J. Rothman, Epidemiology: An Introduction 250 (2d ed. 2012).

[3] Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n. 30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[4] See BM at 1066 & n. 44, citing “See, e.g., In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1226–27 (D. Colo. 1998); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“[S]cientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies.”).”


[5] See, e.g., Erica Beecher-Monas, Evaluating Scientific Evidence 58, 67 (N.Y. 2007) (“No matter how persuasive epidemiological or toxicological studies may be, they could not show individual causation, although they might enable a (probabilistic) judgment about the association of a particular chemical exposure to human disease in general.”) (“While significance testing characterizes the probability that the relative risk would be the same as found in the study as if the results were due to chance, a relative risk of 2 is the threshold for a greater than 50 percent chance that the effect was caused by the agent in question.”)(incorrectly describing significance probability as a point probability as opposed to tail probabilities).

[6] David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Federal Jud. Ctr., Reference Manual on Scientific Evidence 211, 253–54 (3d ed. 2011) (discussing the statistical concept of power).

[7] BM at 1070 (pointing to a passage in the FJC’s Reference Manual on Scientific Evidence that provides an example of one 95% confidence interval that includes 1.0, but which shrinks, when recalculated as a 90% interval, to 1.1 to 2.2, which values “demonstrate some effect with confidence interval set at 90%”). This is nonsense in the context of observational studies.
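The maneuver criticized in footnote 7 is purely mechanical: a confidence interval narrows as the demanded confidence level drops, so a lower bound can be pushed above 1.0 simply by asking for 90% rather than 95% coverage, with no change in the underlying data. A sketch using hypothetical numbers (a relative risk of 1.6 with a log-scale standard error of 0.25, not the Reference Manual’s figures) and the usual Wald approximation:

```python
import math

Z = {0.90: 1.645, 0.95: 1.960}  # standard normal critical values

def rr_confidence_interval(rr: float, se_log_rr: float, level: float):
    """Wald confidence interval for a relative risk, computed on the
    log scale, the customary approximation in epidemiology."""
    z = Z[level]
    log_rr = math.log(rr)
    return (math.exp(log_rr - z * se_log_rr),
            math.exp(log_rr + z * se_log_rr))

# Same data, two confidence levels: the 95% interval straddles 1.0,
# while the narrower 90% interval sits entirely above it.
print(rr_confidence_interval(1.6, 0.25, 0.95))  # ~ (0.98, 2.61): includes 1.0
print(rr_confidence_interval(1.6, 0.25, 0.90))  # ~ (1.06, 2.41): excludes 1.0
```

Nothing about the study improves between the two lines; only the analyst’s tolerance for error changes.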

Silicone Data Slippery and Hard to Find (Part 2)

July 5th, 2015

What Does a Scientist “Gain” When His Signal Is Only Noise?

When the silicone litigation erupted in the early 1990s, Leoncio Garrido was a research professor at Harvard. In 1995, he was promoted from Assistant to Associate Professor of Radiology at the Harvard Medical School, where he also served as Associate Director of the NMR Core. Along with Bettina Pfleiderer, Garrido published a series of articles on the use of silicon 29 nuclear magnetic resonance (NMR) spectroscopy, in which he claimed to detect and quantify silicon that had migrated from silicone gel implants to the blood, livers, and brains of implanted women[1].

Plaintiffs touted Garrido’s work on NMR silicone as their “Harvard” study, to offset the prestige that the Harvard Nurses epidemiologic study[2] had in diminishing the plaintiffs’ claims that silicone caused autoimmune disease. Even though Garrido’s work was soundly criticized in the scientific literature[3], Garrido’s apparent independence of the litigation industry, his Harvard affiliation, and the difficulty in understanding the technical details of NMR spectroscopic work, combined to enhance the credibility of the plaintiffs’ claims.

Professor Peter Macdonald, who had consulted with defense counsel, was quite skeptical of Garrido’s work on silicone. In sum, Macdonald’s analysis showed that Garrido’s conclusions were not supported by the NMR spectra presented in Garrido’s papers. The spectra shown had signal-to-noise ratios too low to allow a determination of putative silicon biodegradation products (let alone to quantify such products), in either in vivo or ex vivo analyses. The existence of Garrido’s papers in peer-reviewed journals, however, allowed credulous scientists and members of the media to press unsupported theories about degradation of silicone into supposedly bioreactive silica.

A Milli-Mole Spills the Beans on the Silicone NMR Data

As the silicone litigation plodded on, a confidential informant dropped the dime on Garrido. The informant was a Harvard graduate student, who was quite concerned about the repercussions of pointing the finger at the senior scientist in charge of his laboratory work. Fortunately, and honorably, this young scientist was more concerned still that Garrido was manipulating the NMR spectra to create his experimental results. Over the course of 1997, the informant, who was dubbed “Milli-Mole,” reported serious questions about the validity of the silicon NMR spectra reported by Garrido and colleagues, who had created the appearance of a signal by turning up the receiver gain, which amplifies noise along with any signal. Milli-Mole also confirmed Macdonald’s suspicions that Garrido had created noise artifacts (either intentionally or carelessly) that could be misrepresented as silicon-containing materials in silicon 29 NMR spectra.
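Milli-Mole’s point about the gain is elementary signal processing: receiver gain multiplies the entire trace, noise and all, so it can make a plotted peak look taller without improving the signal-to-noise ratio at all. A toy illustration (the “spectrum” below is synthetic, not Garrido’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrum": one weak Lorentzian-shaped peak buried in baseline noise.
x = np.linspace(-50.0, 50.0, 2001)
signal = 0.2 / (1.0 + (x / 2.0) ** 2)        # weak peak, amplitude 0.2
noise = rng.normal(scale=0.1, size=x.size)   # baseline noise, sigma 0.1
spectrum = signal + noise

def snr(trace):
    """Peak height over the noise standard deviation, with the noise
    estimated from a signal-free stretch of the baseline."""
    peak = trace[np.abs(x) < 1.0].max()
    sigma = trace[np.abs(x) > 25.0].std()
    return peak / sigma

# Turning up the gain scales the whole trace: the peak looks ten times
# taller, but the signal-to-noise ratio is unchanged.
print(snr(spectrum), snr(10.0 * spectrum))
```

A trace whose signal-to-noise ratio is too low to support detection stays that way at any gain setting, which is why the published spectra could not bear the weight of Garrido’s conclusions.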

In late winter 1997, “Milli-Mole” reported that Harvard had empanelled an internal review board to investigate Garrido’s work on silicon detection in the blood of women with silicone gel breast implants. The board included an associate dean of the medical school, along with an independent reviewer knowledgeable about NMR. Milli-Mole was relieved that he would not be put into the position of becoming a whistleblower, and he believed that once the board understood the issues, Garrido’s deviation from the scientific standard of care would become clear. Apparently, concern at Harvard was reaching a crescendo, as Garrido was about to present yet another abstract, on brain silicon levels, at an upcoming meeting of the International Society for Magnetic Resonance in Medicine, in Vancouver, BC. Milli-Mole reported that one of the co-authors strongly disagreed with Garrido’s interpretation of the data, but was anxious about withdrawing from the publication.

Science Means Never Having to Say You’re Sorry

By 1997, Judge Pointer had appointed a panel of neutral expert witnesses, but the process had become mired in procedural diversions. Late in 1997, Bristol-Myers Squibb sought and obtained a commission in the New Jersey state court cases for a Massachusetts subpoena for Garrido’s underlying data. Before BMS or the other defendants could act on the subpoena, however, Garrido published a rather weak, unapologetic corrigendum to one of his papers[4].

Although Garrido’s “Erratum” concealed more than it disclosed, the publication of the erratum triggered an avalanche of critical scrutiny. One of the members of the editorial board of Magnetic Resonance in Medicine undertook a critical review of Garrido’s papers, as a result of the erratum and its fallout. This scientist concluded that:

“From my viewpoint as an analytical spectroscopist, the result of this exercise was disturbing and disappointing. In my judgement as a referee, none of the Garrido group’s papers (1–6) should have been published in their current form.”

William E. Hull, “A Critical Review of MR Studies Concerning Silicone Breast Implants,” 42 Magnetic Resonance in Medicine 984, 984 (1999).

Another scientist, Professor Christopher T.G. Knight, of the University of Illinois at Urbana-Champaign, commented in a letter in response to the Garrido erratum:

“A series of papers has appeared in this Journal from research groups at Harvard Medical School and Massachusetts General Hospital. These papers describe magnetic resonance studies that purport to show significant concentrations of silicone and chemically related species in the blood and internal organs of silicone breast implant recipients. One paper in particular details 29Si NMR spectroscopic results of experiments conducted on the blood of volunteers with and without implants. In the spectrum of the implant recipients’ blood there appear to be several broad signals, whereas no signals are apparent in the spectrum of the blood of a volunteer with no implant. On these grounds, the authors claim that silicone and its degradation products occur in significant quantities in the blood of some implant recipients. Although this conclusion has been challenged, it has been widely quoted.


The erratum, in my opinion, deserves considerably more visibility, because it in effect greatly reduces the strength of the authors’ original claims. Indeed, it appears to be tantamount to a retraction of these.”

Christopher T.G. Knight, “Migration and Chemical Modification of Silicone in Women With Breast Prostheses,” 42 Magnetic Resonance in Med. 979 (1999) (internal citations omitted). Professor Knight went on to critique the original Garrido work, and the unsigned, unattributed erratum, as failing to show a difference between the spectra developed from blood of women with and without silicone implants. Garrido’s erratum suggested that his “error” was simply showing a spectrum with the wrong scale, but Professor Knight showed rather conclusively that other manipulations had taken place to alter the spectrum. Id.

In a brief response[5], Garrido and co-authors acknowledged that their silicon quantification was invalid, but still maintained that they had qualitatively determined the presence of silicon species. Despite this response, the scientific community soon lost confidence in Garrido’s silicone NMR work.

Garrido’s fall-back claim that he had detected unquantified levels of silicon using silicon 29 NMR was definitively refuted, in short order[6]. Ultimately, Peter Macdonald’s critique of Garrido was vindicated, and Garrido’s work became yet another weight that helped sink the plaintiffs’ case. Garrido last published on silicone in 1999, and left Harvard soon thereafter to become the Director of the Instituto de Ciencia y Tecnología de Polímeros, in Madrid, Spain. He is now a scientific investigator in the Institute’s Physical Chemistry of Polymers Department. The Institute’s website lists Garrido as Dr. Leoncio Garrido Fernández. Garrido’s silicone publications were never retracted, and Harvard never publicly explained Garrido’s departure.

[1] See, e.g., Bettina Pfleiderer & Leoncio Garrido, “Migration and accumulation of silicone in the liver of women with silicone gel-filled breast implants,” 33 Magnetic Resonance in Med. 8 (1995); Leoncio Garrido, Bettina Pfleiderer, B.G. Jenkins, Carol A. Hulka, D.B. Kopans, “Migration and chemical modification of silicone in women with breast prostheses,” 31 Magnetic Resonance in Med. 328 (1994). Dr. Carol Hulka is the daughter of Dr. Barbara Hulka, who later served as a neutral expert witness, appointed by Judge Pointer in MDL 926.

[2] Jorge Sanchez-Guerrero, Graham A. Colditz, Elizabeth W. Karlson, David J. Hunter, Frank E. Speizer, Matthew H. Liang, “Silicone Breast Implants and the Risk of Connective-Tissue Diseases and Symptoms,” 332 New Engl. J. Med. 1666 (1995).

[3] See R.B. Taylor, J.J. Kennan, “29Si NMR and blood silicon levels in silicone gel breast implant recipients,” 36 Magnetic Resonance in Med. 498 (1996); Peter Macdonald, N. Plavac, W. Peters, Stanley Lugowski, D. Smith, “Failure of 29Si NMR to detect increased blood silicon levels in silicone gel breast implant recipients,” 67 Analytical Chem. 3799 (1995).

[4] Leoncio Garrido, Bettina Pfleiderer, G. Jenkins, Carol A. Hulka, Daniel B. Kopans, “Erratum,” 40 Magnetic Resonance in Med. 689 (1998).

[5] Leoncio Garrido, Bettina Pfleiderer, G. Jenkins, Carol A. Hulka, Daniel B. Kopans, “Response,” 40 Magnetic Resonance in Med. 995 (1998).

[6] See Darlene J. Semchyschyn & Peter M. Macdonald, “Limits of Detection of Polydimethylsiloxane in 29Si NMR Spectroscopy,” 43 Magnetic Resonance in Med. 607 (2000) (Garrido’s erratum acknowledges that his group’s spectra contain no quantifiable silicon resonances, but their 29Si spectra fail to show evidence of silicone or breakdown products); Christopher T. G. Knight & Stephen D. Kinrade, “Silicon-29 Nuclear Magnetic Resonance Spectroscopy Detection Limits,” 71 Anal. Chem. 265 (1999).