For your delectation and delight, desultory dicta on the law of delicts.

Infante-lizing the IARC

May 13th, 2018

Peter Infante, a frequently partisan, paid expert witness for the Lawsuit Industry, recently published a “commentary” in the red journal, the American Journal of Industrial Medicine, about the evils of scientists with economic interests commenting upon the cancer causation pronouncements of the International Agency for Research on Cancer (IARC). Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am. J. Indus. Med. 277 (2018). Infante’s rant goes beyond calling out scientists with economic interests on IARC working groups; Infante would silence all criticism of IARC pronouncements by anyone, including scientists, who has an economic interest in the outcome of a scientific debate. Infante points to manufacturing industry’s efforts to “discredit” recent IARC pronouncements on glyphosate and red meat, by which he means that there were scientists who had the temerity to question IARC’s processes and conclusions.

Apparently, Infante did not think his bias was showing or would be detected. He and his co-authors invoke militaristic metaphors to claim that the IARC’s monograph program, and indeed all of public health, is “under siege by corporate interests.” A farcical commentary at that, coming from such stalwarts of the Lawsuit Industry. Infante lists his contact information as “Peter F. Infante Consulting, LLC, Falls Church, Virginia,” and co-author Ronald Melnick can be found at “Ronald Melnick Consulting, LLC, North Logan, Utah.” A search on Peter Infante in Westlaw yields 141 hits, all on the plaintiffs’ side of health effects disputes; he is clearly no stranger to the world of litigation. Melnick is, to be sure, harder to find, but he does show up as a signatory on Raphael Metzger’s supposed amicus briefs, filed by Metzger’s litigation front organization, Council for Education and Research on Toxics.1

Of the commentary’s authors, only James Huff, of “James Huff Consulting, Durham, North Carolina,” disclosed a connection with litigation, as a consultant to plaintiffs on animal toxicology of glyphosate. Huff’s apparent transparency clouded up when it came to disclosing how much he has been compensated for his consulting activities for claimants in glyphosate litigation. In the very next breath, in unison, the authors announce unabashedly that “[a]ll authors report no conflicts of interest.” Infante at 280.

Of course, reporting “no conflicts of interest” does not mean that the authors have no conflicts of interest, financial, positional, and ideological. Their statement simply means that they have not reported any conflicts, whether through inadvertence, willfulness, or blindness. The authors, and the journal, are obviously content to mislead their readership with these not-so-clever dodges.

The authors’ clumsy inability to appreciate their very own conflicts infects their substantive claims in this commentary. These “consultants” tell us solemnly that IARC “[m]eetings are openly transparent and members are vetted for conflicts of interest.” Infante at 277. Working group members, however, are vetted only for manufacturing industry conflicts, not for litigation industry conflicts or for motivational conflicts, such as advancing their own research agendas. Not many scientists have a research agenda to show that chemicals do not cause cancer.

At the end of this charade, the journal provides additional disclosures [sic]. As for “Ethics and Approval Consent,” we are met with a bold “Not Applicable.” Indeed; ethics need not apply. Perhaps, the American Journal of Industrial Medicine is beyond good and evil. The journal’s “Editor of Record,” Steven B. Markowitz “declares that he has no conflict of interest in the review and publication decision regarding this article.” This is, of course, the same Markowitz who testifies frequently for the Lawsuit Industry, without typically disclosing this conflict on his own publications.

This commentary is yet another brushback pitch, which tries to chill manufacturing industry and scientists from criticizing the work of agencies, such as IARC, captured by lawsuit industry consultants. No one should be fooled other than Mother Jones.

1See, e.g., Ramos v. Brenntag Specialties, Inc., 372 P.3d 200 (Calif. 2016) (where plaintiff was represented by Metzger, and where CERT filed an amicus brief by the usual suspects, plaintiffs’ expert witnesses, including Melnick).

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statisticians’ principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including in litigation and regulatory rule making. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalts” and “thou shalt nots,” but I will focus on the issues most likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they would probably do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on p-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness ‘‘employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.’’ Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless whether they used p-values or confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

“The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

The statistical analyst “openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data when appropriate, without revealing the identities of study participants. Statistical analysts should publicly correct any disseminated data and analyses in their own work, and should work to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that is a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.


The ASA Guidelines address the impropriety of substantive statistical errors, such as:

“[r]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one “significant” result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.
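The inflation the Guidelines warn about is easy to demonstrate. The following is a minimal simulation sketch (hypothetical, not drawn from the Guidelines), assuming each test is independent and run at the conventional 0.05 level:

```python
import random

random.seed(12345)  # fixed seed so the simulation is reproducible

def any_false_positive(k: int, alpha: float = 0.05) -> bool:
    """Simulate k tests of true null hypotheses; each has probability
    alpha of yielding a spuriously 'significant' result."""
    return any(random.random() < alpha for _ in range(k))

def familywise_error_rate(k: int, trials: int = 20_000) -> float:
    """Estimate the chance of at least one spurious 'significant'
    finding among k independent null tests."""
    return sum(any_false_positive(k) for _ in range(trials)) / trials

for k in (1, 5, 20):
    # Analytic familywise error rate: 1 - (1 - alpha)^k
    analytic = 1 - 0.95 ** k
    simulated = familywise_error_rate(k)
    print(f"{k:>2} tests: simulated {simulated:.3f}, analytic {analytic:.3f}")
```

With twenty parallel tests, the chance of at least one false positive climbs from 5% to roughly 64%, which is why selective reporting of the one “significant” result is so misleading.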

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or to the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, or the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.

2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

Wrong Words Beget Causal Confusion

February 12th, 2018

In clinical medical and epidemiologic journals, most articles that report about associations will conclude with a discussion section in which the authors hold forth about

(1) how they have found that exposure to X “increases the risk” of Y, and

(2) how their finding makes sense because of some plausible (even if unproven) mechanism.

In an opinion piece in Significance,1 Dalmeet Singh Chawla cites a study suggesting that such “because” language frequently confuses readers into believing that a causal claim is being made. The study abstract explains:

“Most researchers do not deliberately claim causal results in an observational study. But do we lead our readers to draw a causal conclusion unintentionally by explaining why significant correlations and relationships may exist? Here we perform a randomized study in a data analysis massive online open course to test the hypothesis that explaining an analysis will lead readers to interpret an inferential analysis as causal. We show that adding an explanation to the description of an inferential analysis leads to a 15.2% increase in readers interpreting the analysis as causal (95% CI 12.8% – 17.5%). We then replicate this finding in a second large scale massive online open course. Nearly every scientific study, regardless of the study design, includes explanation for observed effects. Our results suggest that these explanations may be misleading to the audience of these data analyses.”

Leslie Myint, Jeffrey T. Leek, and Leah R. Jager, “Explanation implies causation?” (Nov. 2017) (online manuscript).
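For readers curious how an interval such as the abstract’s “95% CI 12.8% – 17.5%” is built, a standard Wald interval for a difference of two proportions can be sketched as follows. The counts below are hypothetical, chosen only to produce a difference of similar magnitude; the abstract does not report its underlying numbers:

```python
import math

def wald_ci_diff(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """95% Wald confidence interval for the difference of two
    independent proportions, p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Standard error of the difference of two independent proportions
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical, purely illustrative counts: 1,200 of 2,000 readers shown
# an "explained" analysis called it causal, versus 900 of 2,000 readers
# shown the unexplained version.
diff, lo, hi = wald_ci_diff(1200, 2000, 900, 2000)
print(f"difference {diff:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The interval quantifies only sampling error; it says nothing about whether the observed difference reflects a causal effect of the explanation, which is the study’s own point about readers over-interpreting analyses.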

Invoking the principle of charity, these authors suggest that most researchers are not deliberately claiming causal results. Indeed, the language of biomedical science itself is biased in favor of causal interpretation. The term “statistical significance” suggests causality to naive readers, as does stats talk about “effect sizes” and “fixed effect models” for data sets that come nowhere near establishing causality.

Common epidemiologic publication practice tolerates, if not encourages, authors’ stating that their study shows (finds, demonstrates, etc.) that exposure to X “increases the risk” of Y in the studies’ samples. This language is deliberately causal, even if the study cannot support a causal conclusion alone or even with other studies. After all, a risk is the antecedent of a cause, and in the stochastic model of causation involved in much of biomedical research, causation will manifest in a change of a base rate to a higher or lower post-exposure rate. Given that mechanism is often unknown and not required, showing an increased risk becomes the whole point. The need to eliminate chance, bias, and confounding, and to scrutinize study design, is often lost in the irrational exuberance of declaring an “increased risk.”
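The “change of a base rate” idea reduces to simple arithmetic. A minimal sketch with hypothetical cohort counts (nothing here comes from any actual study) shows what a reported risk ratio actually measures:

```python
def risk_ratio(exposed_cases: int, exposed_total: int,
               unexposed_cases: int, unexposed_total: int) -> float:
    """Risk ratio: the rate among the exposed divided by the
    base rate among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    base_rate = unexposed_cases / unexposed_total
    return risk_exposed / base_rate

# Hypothetical cohort: 30 cases among 1,000 exposed persons versus
# 10 cases among 1,000 unexposed persons.
rr = risk_ratio(30, 1000, 10, 1000)
print(f"risk ratio = {rr:.1f}")  # 0.030 / 0.010 = 3.0
```

A ratio above 1.0 is, standing alone, only an observed association; chance, bias, and confounding must still be excluded before anyone may sensibly speak of an “increased risk” in a causal sense.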

Tighter editorial control might have researchers qualify their findings by explaining that they found a higher rate in association with exposure, under the circumstances of the study, followed by an explanation that much more is needed to establish causation. But where is the fun and profit in that?

Journalists, lawyers, and advocacy scientists often use the word “link” to avoid having to endorse associations that they know, or should know, have not been shown to be causal.2 Using “link” as a noun or a verb clearly invokes a causal-chain metaphor, probably often deliberately. Perhaps publishers would defend the use of “link” by noting that it is so much shorter than “association,” and thus saves typesetting costs.

More attention is needed to word choice, even and especially when statisticians and scientists are using their technical terms and jargon.3 If, for the sake of argument, we accept the sincerity of scientists who work as expert witnesses in litigation in which causal claims are overstated, we can see that poor word choices confuse scientists as well as lay people. Or you can just read the materials and methods and the results of published study papers; skip the introduction and discussion sections, as well as the newspaper headlines.

1 Dalmeet Singh Chawla, “Mind your language,” Significance 6 (Feb. 2018).

2 See, e.g., Perri Klass, M.D., “,” N.Y. Times (Dec. 4, 2017); Nicholas Bakalar, “Body Chemistry: Lower Testosterone Linked to Higher Death Risk,” N.Y. Times (Aug. 15, 2006).

3 Fang Xuelan & Graeme Kennedy, “Expressing Causation in Written English,” 23 RELC J. 62 (1992); Bengt Altenberg, “Causal Linking in Spoken and Written English,” 38 Studia Linguistica 20 (1984).

Stuck in Silicone

December 12th, 2017

There was a time when silicone chemistry, biocompatibility, toxicity, and litigation weighed upon my mind. What started as a flurry of scientific interest led to a media free-for-all, then to FDA Commissioner David Kessler’s moratorium on silicone breast implants, and then to a feeding frenzy for the lawsuit industry. Ultimately, the federal court system engaged four non-party expert witnesses, who cut through the thousands of irrelevant documents that plaintiffs’ counsel used to obfuscate the lack of causation evidence. The court-appointed experts in MDL 926 were unanimous in their rejection of the plaintiffs’ claims.1 Not long after, the Institute of Medicine (now the National Academy of Medicine) issued its voluminous review of the scientific evidence, again with the conclusion that the evidence, when viewed scientifically and critically, showed a lack of association between silicone and autoimmune disease.2

Along the way to this definitive end of the lawsuit industry’s assault on the medical device industry, the parties assembled in the courtroom of the Hon. Jack B. Weinstein, for Rule 702 hearings on the opinions proffered by the plaintiffs’ expert witnesses. Judge Weinstein, along with the late Judge Harold Baer, of the Southern District of New York, and Justice Lobis, of the New York Supreme Court, held hearings that lasted two weeks, and entertained virtually unlimited argument. In characteristic style, Judge Weinstein did not grant the defendants’ Rule 702 motions; rather he cut right to the heart of the matter, and granted summary judgment in favor of the defense on plaintiffs’ claims of systemic diseases.3

Over a dozen years later, in reflecting upon a long judicial career that involved many so-called mass torts, Judge Weinstein described the plaintiffs’ expert witnesses more plainly as “charlatans” and the silicone litigation as largely based upon fraud.4


Last week, I received an email from Arthur E. Brawer, who represented himself to be an Associate Clinical Professor of Medicine.5 Dr. Brawer kindly forwarded some of his publications on the subject of silicone toxicity.6 Along with the holiday gift, Dr Brawer also gave me a piece of his mind:

“I recommend you rethink your prior opinions on the intersection of science and the law as it relates to this issue, as you clearly have no idea what you are talking about regarding the matter of silicone gel-filled breast implants. Perhaps refresher courses in biochemistry and biophysics at a major university might wake you up.”

Wow, that woke me up! Who was this Dr Brawer? His name seemed vaguely familiar. I thought he might have been a lawsuit industry expert witness I encountered in the silicone litigation, but none of his articles had a disclosure of having been a retained expert witness. Perhaps that was a mere oversight on his part. Still, I went to my archives, where I found the same Dr Brawer engaged in testifying for plaintiffs all around the country. In one early testimonial adventure, Brawer described how he came up with his list of signs and symptoms to use to define “silicone toxicity”:

Q. Doctor, if a patient presented to you with green hair and claimed that her green hair was attributable to her silicone breast implants, unless you could find another explanation for that green hair, you’d put that on your list of signs and symptoms; right?

A. The answer is yes.

Notes of Testimony of Arthur E. Brawer, at 465:7-12, in Merlin v. 3M Co., No. CV-N-95-696-HDM (D. Nev. Dec. 11, 1995) (Transcript of Rule 702 hearing).

A year later, Brawer’s opinions were unceremoniously excluded in a case set for trial in Dallas, Texas.7 Surely this outcome, along with Judge Weinstein’s rulings, the findings of the court-appointed witnesses in MDL 926, and the conclusions of the Institute of Medicine would have discouraged this Brawer fellow from testifying ever again?

Apparently not. Brawer, like the Black Knight in Monty Python and the Holy Grail, still lives and breathes, but only to be cut again and again. A quick Westlaw search turned up another, recent Brawer testimonial misadventure in Laux v. Mentor Worldwide, LLC, case no. 2:16-cv-01026, 2017 WL 5235619 (C.D. Calif., Nov. 8, 2017).8 Plaintiff Anita Laux claimed that she developed debilitating “biotoxin” disease from her saline-filled silicone breast implants. In support, she proffered the opinions of three would-be expert witnesses, a plastic surgeon (Dr Susan Kolb), a chemist (Pierre Blais), and a rheumatologist (Arthur Brawer).

Plaintiffs’ theory of biotoxin disease causation started with Blais’ claim to have found mold debris in the plaintiff’s explanted implants. The court, however, found Blais unqualified to offer an opinion on microbiology or product defects, and found his opinions in the case unreliable. Id. at *4-6. Dr Kolb, the author of The Naked Truth about Breast Implants, attempted to build upon Blais’ opinions, a rather weak foundation, to construct a “differential diagnosis.” In reasoning that Ms. Laux’s medical complaints arose from a mold infection, Kolb asserted that she had ruled out all other sources of exposure to mold. Unfortunately, Kolb either forgot or chose to hide correspondence in which Ms. Laux had directly provided her with information about prior environmental mold exposure on multiple occasions. Id. at *3. The trial court severely deprecated Kolb’s selective and false use of facts in attributing Ms. Laux’s claimed medical problems to her implants.

Dr Brawer, the author of Holistic Harmony: A Guide To Choosing A Competent Alternative Medicine Provider (1999), and my recent email correspondent, also succumbed to Judge Wright’s gatekeeping in Laux. The court found that Brawer had given a toxicology opinion with no supporting data. His report was thus both procedurally deficient under Federal Rule of Civil Procedure 26, and substantively deficient under Federal Rule of Evidence 702. Finding Brawer’s report “so lacking of scientific principles and methods,” and thus unhelpful and unreliable, the trial court excluded his report and precluded his testimony at trial. Id. at *7.

Thankfully, the ghost of litigations past, communicating now by email, can be safely disregarded. And I do not have to dig my silicone polymer chemistry and biochemistry textbooks out of storage.

1 See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction: A Report by a National Science Panel to the Hon. Sam Pointer Jr.,” MDL 926 (Nov. 30, 1998). The experts appointed by the late Judge Pointer all committed extensive time and expertise to evaluating the plaintiffs’ claims and the entire evidence. After delivering their reports, the court-appointed experts all published their litigation work in leading journals. See Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000); Esther C. Janowsky, Lawrence L. Kupper, and Barbara S. Hulka, “Meta-Analyses of the Relation between Silicone Breast Implants and the Risk of Connective-Tissue Diseases,” 342 New Engl. J. Med. 781 (2000); Peter Tugwell, George Wells, Joan Peterson, Vivian Welch, Jacqueline Page, Carolyn Davison, Jessie McGowan, David Ramroth, and Beverley Shea, “Do Silicone Breast Implants Cause Rheumatologic Disorders? A Systematic Review for a Court-Appointed National Science Panel,” 44 Arthritis & Rheumatism 2477 (2001).

2 Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Institute of Medicine) (Wash. D.C. 1999).

3 See In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996) (granting summary judgment because of insufficiency of plaintiffs’ evidence, but specifically declining to rule on defendants’ Rule 702 and Rule 703 motions).

5 At the Drexel University School of Medicine, in Philadelphia, as well as the Director of Rheumatology at Monmouth Medical Center, in Long Branch, New Jersey.

6 Included among the holiday gift package were Arthur E. Brawer, “Is Silicone Breast Implant Toxicity an Extreme Form of a More Generalized Toxicity Adversely Affecting the Population as a Whole?,” 1 Internat’l Ann. Med. (2017); Arthur E. Brawer, “Mechanisms of Breast Implant Toxicity: Will the Real Ringmaster Please Stand Up,” 1 Internat’l Ann. Med. (2017); Arthur E. Brawer, “Destiny rides again: the reappearance of silicone gel-filled breast implant toxicity,” 26 Lupus 1060 (2017); Arthur E. Brawer, “Silicon and matrix macromolecules: new research opportunities for old diseases from analysis of potential mechanisms of breast implant toxicity,” 51 Medical Hypotheses 27 (1998).

7 Bailey v. Dow Corning Corp., c.a. 94-1199-A (Dallas Cty. Texas Dist. Ct., Sept. 15, 1996).

8 I later found that another blog had reviewed the Laux decision. Stephen McConnell, “C.D. Cal. Excludes Three Plaintiff Experts in Breast Implant Case,” Drug & Device Law (Nov. 16, 2017).