For your delectation and delight, desultory dicta on the law of delicts.

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statistician’s principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including in litigation and regulatory rule making. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalts” and “thou shalt nots,” but I will focus on the issues that are more likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they probably would do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on P-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless whether they used p-values or confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

“The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

“The statistical analyst openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data when appropriate, without revealing the identities of study participants. Statistical analysts should publicly correct any disseminated data and analyses in their own work, as well as work to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that’s a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.


The ASA Guidelines address the impropriety of substantive statistical errors, such as:

“[r]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one “significant” result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.
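The arithmetic behind the Guidelines’ warning is easy to verify. If each test is run at a 0.05 significance level, the chance of at least one false positive across m independent tests of true null hypotheses is 1 − 0.95^m. A minimal Python sketch (my own illustration, not the ASA’s) computes that family-wise error rate, along with the familiar Bonferroni adjustment:

```python
# Family-wise error rate for m independent tests, each run at alpha = 0.05,
# and the Bonferroni-adjusted per-test threshold that restores the error rate.

def familywise_error_rate(m, alpha=0.05):
    """P(at least one false positive) across m independent tests of true nulls."""
    return 1 - (1 - alpha) ** m

def bonferroni_threshold(m, alpha=0.05):
    """Per-test threshold keeping the family-wise error rate at or below alpha."""
    return alpha / m

for m in (1, 10, 20):
    print(m, round(familywise_error_rate(m), 3), bonferroni_threshold(m))
```

With twenty parallel tests, the chance of at least one spurious “significant” result is roughly 64 percent, even when every null hypothesis is true, which is why selective disclosure of the one “significant” finding is so misleading.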

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, or the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.

2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

Wrong Words Beget Causal Confusion

February 12th, 2018

In clinical medical and epidemiologic journals, most articles that report about associations will conclude with a discussion section in which the authors hold forth about

(1) how they have found that exposure to X “increases the risk” of Y, and

(2) how their finding makes sense because of some plausible (even if unproven) mechanism.

In an opinion piece in Significance,1 Dalmeet Singh Chawla cites to a study that suggests the “because” language frequently confuses readers into believing that a causal claim is being made. The study abstract explains:

“Most researchers do not deliberately claim causal results in an observational study. But do we lead our readers to draw a causal conclusion unintentionally by explaining why significant correlations and relationships may exist? Here we perform a randomized study in a data analysis massive online open course to test the hypothesis that explaining an analysis will lead readers to interpret an inferential analysis as causal. We show that adding an explanation to the description of an inferential analysis leads to a 15.2% increase in readers interpreting the analysis as causal (95% CI 12.8% – 17.5%). We then replicate this finding in a second large scale massive online open course. Nearly every scientific study, regardless of the study design, includes explanation for observed effects. Our results suggest that these explanations may be misleading to the audience of these data analyses.”

Leslie Myint, Jeffrey T. Leek, and Leah R. Jager, “Explanation implies causation?” (Nov. 2017) (online manuscript).

Invoking the principle of charity, these authors suggest that most researchers are not deliberately claiming causal results. Indeed, the language of biomedical science itself is biased in favor of causal interpretation. The term “statistical significance” suggests causality to naive readers, as does stats talk about “effect size” and “fixed effect models,” for data sets that come nowhere near establishing causality.

Common epidemiologic publication practice tolerates if not encourages authors to state that their study shows (finds, demonstrates, etc.) that exposure to X “increases the risk” of Y in the studies’ samples. This language is deliberately causal, even if the study cannot support a causal conclusion alone or even with other studies. After all, a risk is the antecedent of a cause, and in the stochastic model of causation involved in much of biomedical research, causation will manifest in a change of a base rate to a higher or lower post-exposure rate. Given that mechanism is often unknown and not required, showing an increased risk is the whole point. The work of eliminating chance, bias, and confounding, and of evaluating study design, often is lost in the irrational exuberance of declaring the “increased risk.”

Tighter editorial control might have researchers qualify their findings by explaining that they found a higher rate in association with exposure, under the circumstances of the study, followed by an explanation that much more is needed to establish causation. But where is the fun and profit in that?

Journalists, lawyers, and advocacy scientists often use the word “link” to avoid having to endorse associations that they know, or should know, have not been shown to be causal.2 Using “link” as a noun or a verb clearly invokes a causal chain metaphor, and the implication probably is often deliberate. Perhaps publishers would defend the use of “link” by noting that it is so much shorter than “association,” and thus saves typesetting costs.

More attention is needed to word choice, even and especially when statisticians and scientists are using their technical terms and jargon.3 If, for the sake of argument, we accept the sincerity of scientists who work as expert witnesses in litigation in which causal claims are overstated, we can see that poor word choices confuse scientists as well as lay people. Or you can just read the materials and methods and the results of published study papers; skip the introduction and discussion sections, as well as the newspaper headlines.

1 Dalmeet Singh Chawla, “Mind your language,” Significance 6 (Feb. 2018).

2 See, e.g., Perri Klass, M.D., “,” N.Y. Times (Dec. 4, 2017); Nicholas Bakalar, “Body Chemistry: Lower Testosterone Linked to Higher Death Risk,” N.Y. Times (Aug. 15, 2006).

3 Fang Xuelan & Graeme Kennedy, “Expressing Causation in Written English,” 23 RELC J. 62 (1992); Bengt Altenberg, “Causal Linking in Spoken and Written English,” 38 Studia Linguistica 20 (1984).

Stuck in Silicone

December 12th, 2017

There was a time when silicone chemistry, biocompatibility, toxicity, and litigation weighed upon my mind. What started with a flurry of scientific interest led to a media free-for-all, then FDA Commissioner David Kessler’s moratorium on silicone breast implants, and then to a feeding frenzy for the lawsuit industry. Ultimately, the federal court system found its way to engage four non-party expert witnesses, who cut through the thousands of irrelevant documents that plaintiffs’ counsel used to obfuscate the lack of causation evidence. The court-appointed experts in MDL 926 were unanimous in their rejection of the plaintiffs’ claims.1 Not long after, the Institute of Medicine (now the National Academy of Medicine) issued its voluminous review of the scientific evidence, again with the conclusion that the evidence, when viewed scientifically and critically, showed a lack of association between silicone and autoimmune disease.2

Along the way to this definitive end of the lawsuit industry’s assault on the medical device industry, the parties assembled in the courtroom of the Hon. Jack B. Weinstein, for Rule 702 hearings on the opinions proffered by the plaintiffs’ expert witnesses. Judge Weinstein, along with the late Judge Harold Baer, of the Southern District of New York, and Justice Lobis, of the New York Supreme Court, held hearings that lasted two weeks, and entertained virtually unlimited argument. In characteristic style, Judge Weinstein did not grant the defendants’ Rule 702 motions; rather he cut right to the heart of the matter, and granted summary judgment in favor of the defense on plaintiffs’ claims of systemic diseases.3

Over a dozen years later, in reflecting upon a long judicial career that involved many so-called mass torts, Judge Weinstein described the plaintiffs’ expert witnesses more plainly as “charlatans” and the silicone litigation as largely based upon fraud.4


Last week, I received an email from Arthur E. Brawer, who represented himself to be an Associate Clinical Professor of Medicine.5 Dr. Brawer kindly forwarded some of his publications on the subject of silicone toxicity.6 Along with the holiday gift, Dr Brawer also gave me a piece of his mind:

“I recommend you rethink your prior opinions on the intersection of science and the law as it relates to this issue, as you clearly have no idea what you are talking about regarding the matter of silicone gel-filled breast implants. Perhaps refresher courses in biochemistry and biophysics at a major university might wake you up.”

Wow, that woke me up! Who was this Dr Brawer? His name seemed vaguely familiar. I thought he might have been a lawsuit industry expert witness I encountered in the silicone litigation, but none of his articles had a disclosure of having been a retained expert witness. Perhaps that was a mere oversight on his part. Still, I went to my archives, where I found the same Dr Brawer engaged in testifying for plaintiffs all around the country. In one early testimonial adventure, Brawer described how he came up with his list of signs and symptoms to use to define “silicone toxicity”:

Q. Doctor, if a patient presented to you with green hair and claimed that her green hair was attributable to her silicone breast implants, unless you could find another explanation for that green hair, you’d put that on your list of signs and symptoms; right?

A. The answer is yes.

Notes of Testimony of Arthur E. Brawer, at 465:7-12, in Merlin v. 3M Co., No. CV-N-95-696-HDM (D. Nev. Dec. 11, 1995) (Transcript of Rule 702 hearing).

A year later, Brawer’s opinions were unceremoniously excluded in a case set for trial in Dallas, Texas.7 Surely this outcome, along with Judge Weinstein’s rulings, the findings of the court-appointed witnesses in MDL 926, and the conclusions of the Institute of Medicine would have discouraged this Brawer fellow from testifying ever again?

Apparently not. Brawer, like the Black Knight in Monty Python and the Holy Grail, still lives and breathes, but only to be cut again and again. A quick Westlaw search turned up another, recent Brawer testimonial misadventure in Laux v. Mentor Worldwide, LLC, case no. 2:16-cv-01026, 2017 WL 5235619 (C.D. Cal. Nov. 8, 2017).8 Plaintiff Anita Laux claimed that she developed debilitating “biotoxin” disease from her saline-filled silicone breast implants. In support, she proffered the opinions of three would-be expert witnesses, a plastic surgeon (Dr Susan Kolb), a chemist (Pierre Blais), and a rheumatologist (Arthur Brawer).

Plaintiffs’ theory of biotoxin disease causation started with Blais’ claim to have found mold debris in the plaintiff’s explanted implants. The court found Blais unqualified, however, to offer an opinion on microbiology or product defects, and his opinions in the case, unreliable. Id. at *4-6. Dr Kolb, the author of The Naked Truth about Breast Implants, attempted to build upon Blais’ opinions, a rather weak foundation, to construct a “differential diagnosis.” In reasoning that Ms. Laux’s medical complaints arose from a mold infection, Kolb asserted that she had ruled out all other sources of exposure to mold. Unfortunately, Kolb either forgot or chose to hide correspondence with Ms. Laux, in which the plaintiff directly provided Kolb with information about prior environmental mold exposure on multiple occasions. Id. at *3. The trial court severely deprecated Kolb’s selective and false use of facts in making the attribution of Ms. Laux’s claimed medical problems.

Dr Brawer, the author of Holistic Harmony: A Guide To Choosing A Competent Alternative Medicine Provider (1999), and my recent email correspondent, also succumbed to Judge Wright’s gatekeeping in Laux. The court found that Brawer had given a toxicology opinion with no supporting data. His report was thus both procedurally deficient under Federal Rule of Civil Procedure 26, and substantively deficient under Federal Rule of Evidence 702. Finding Brawer’s report “so lacking of scientific principles and methods,” and thus unhelpful and unreliable, the trial court excluded his report and precluded his testimony at trial. Id. at *7.

Thankfully, the ghost of litigations past, communicating now by email, can be safely disregarded. And I do not have to dig my silicone polymer chemistry and biochemistry textbooks out of storage.

1 See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction: A Report by a National Science Panel,” to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998). The experts appointed by the late Judge Pointer all committed extensive time and expertise to evaluating the plaintiffs’ claims and the entire evidence. After delivering their reports, the court-appointed experts all published their litigation work in leading journals. See Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000); Esther C. Janowsky, Lawrence L. Kupper, and Barbara S. Hulka, “Meta-Analyses of the Relation between Silicone Breast Implants and the Risk of Connective-Tissue Diseases,” 342 New Engl. J. Med. 781 (2000); Peter Tugwell, George Wells, Joan Peterson, Vivian Welch, Jacqueline Page, Carolyn Davison, Jessie McGowan, David Ramroth, and Beverley Shea, “Do Silicone Breast Implants Cause Rheumatologic Disorders? A Systematic Review for a Court-Appointed National Science Panel,” 44 Arthritis & Rheumatism 2477 (2001).

2 Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Institute of Medicine) (Wash. D.C. 1999).

3 See In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996) (granting summary judgment because of insufficiency of plaintiffs’ evidence, but specifically declining to rule on defendants’ Rule 702 and Rule 703 motions).

5 At the Drexel University School of Medicine, in Philadelphia, as well as the Director of Rheumatology at Monmouth Medical Center, in Long Branch, New Jersey.

6 Included among the holiday gift package was Arthur E. Brawer, “Is Silicone Breast Implant Toxicity an Extreme Form of a More Generalized Toxicity Adversely Affecting the Population as a Whole?,” 1 Internat’l Ann. Med. (2017); Arthur E. Brawer, “Mechanisms of Breast Implant Toxicity: Will the Real Ringmaster Please Stand Up,” 1 Internat’l Ann. Med. (2017); Arthur E. Brawer, “Destiny rides again: the reappearance of silicone gel-filled breast implant toxicity,” 26 Lupus 1060 (2017); Arthur E. Brawer, “Silicon and matrix macromolecules: new research opportunities for old diseases from analysis of potential mechanisms of breast implant toxicity,” 51 Medical Hypotheses 27 (1998).

7 Bailey v. Dow Corning Corp., c.a. 94-1199-A (Dallas Cty. Texas Dist. Ct., Sept. 15, 1996).

8 I later found that another blog had reviewed the Laux decision. Stephen McConnell, “C.D. Cal. Excludes Three Plaintiff Experts in Breast Implant Case,” Drug & Device Law (Nov. 16, 2017).

Samuel Tarry’s Protreptic for Litigation-Sponsored Publications

July 9th, 2017

Litigation-related research has been the punching bag of self-appointed public health advocates for some time. Remarkably, and perhaps not surprising to readers of this blog, many of the most strident critics have deep ties to the lawsuit industry, and have served the plaintiffs’ bar loyally and zealously for many years.1,2,3,4 And many of these critics have ignored or feigned ignorance of the litigation provenance of much research that they hold dear, such as Irving Selikoff’s asbestos research undertaken for the asbestos workers’ union and its legal advocates. These critics’ campaign is an exquisite study in hypocrisy.

For some time, I have argued that the standards for conflict-of-interest disclosures should be applied symmetrically and comprehensively to include positional conflicts, public health and environmental advocacy, as well as litigation consulting or testifying for any party. Conflicts should be disclosed, but they should not become a facile excuse or false justification for dismissing research, regardless of the party that sponsored it.5 Scientific studies should be interpreted scientifically – that is carefully, thoroughly, and rigorously – regardless whether they are conducted and published by industry-sponsored, union-sponsored, or Lord help us, even lawyer-sponsored scientists.

Several years ago, a defense lawyer, Samuel Tarry, published a case series of industry-sponsored research or analysis, which grew out of litigation, but made substantial contributions to the scientific understanding of claimed health risks. See Samuel L. Tarry, Jr., “Can Litigation-Generated Science Promote Public Health?” 33 Am. J. Trial Advocacy 315 (2009). Tarry’s paper is a helpful corrective to the biased (and often conflicted) criticisms of industry-sponsored research and analysis by the lawsuit industry and its scientific allies and consultants. In an ocean of uninformative papers about “Daubert,” Tarry’s paper stands out and should be required reading for all lawyers who practice in the area of “health effects litigation.”

Tarry presented a brief summary of the litigation context for three publications that deserve to be remembered and used as exemplars of important, sound, scientific publications that helped change the course of litigations, as well as the scientific community’s appreciation of prior misleading contentions and publications. His three case studies grew out of the silicone-gel breast implant litigation, the latex allergy litigation, and the never-ending asbestos litigation.

1. Silicone

There are some glib characterizations of the silicone gel breast implant litigation as having had no evidentiary basis. A more careful assessment would allow that there was some evidence, much of it fraudulent and irrelevant. See, e.g., Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in the silicone gel breast implant litigation as “charlatans” and the litigation as largely based upon fraud). The lawsuit industry worked primarily through so-called support groups, which in turn funded friendly, advocate physicians, who in turn testified for plaintiffs and their lawyers in personal injury cases.

When the defendants, such as Dow Corning, reacted by sponsoring serious epidemiologic analyses of the issue whether exposure to silicone gel was associated with specific autoimmune or connective tissue diseases, the plaintiffs’ bar mounted a conflict-of-interest witch hunt over industry funding.6 Ultimately, the source of funding became obviously irrelevant; the concordance between industry-funded and all other high quality research on the litigation claims was undeniable. Obvious, that is, to court-appointed expert witnesses,7 and to a blue-ribbon panel of experts at the Institute of Medicine.8

2. Latex Hypersensitivity

Tarry’s second example comes from the latex hypersensitivity litigation. Whatever evidentiary basis may have existed for isolated cases of latex allergy, the plaintiffs’ bar had taken it and expanded it into a full-scale mass tort. A defense expert witness, Dr. David Garabrant, a physician and an epidemiologist, published a meta-analysis and systematic review of the extant scientific evidence. David H. Garabrant & Sarah Schweitzer, “Epidemiology of latex sensitization and allergies in health care workers,” 110 J. Allergy & Clin. Immunol. S82 (2002). Garabrant’s formal, systematic review documented his litigation opinions that the risk of latex hypersensitivity was much lower than claimed and not the widespread hazard asserted by plaintiffs and their retained expert witnesses. Although Garabrant’s review did not totally end the litigation and public health debate about latex, it went a long way toward ending both.

3. Fraudulent Asbestos-Induced Radiography

I still recall, sitting at my desk, my secretary coming into my office to tell me excitedly that a recent crop of silicosis claimants had had previous asbestosis claims. When I asked how she knew, she showed me the computer printout of closed files for another client. Some of the names were so distinctive that the probability that there were two men with the same name was minuscule. When we obtained the closed files from storage, sure enough, the social security numbers matched, as did all other pertinent data, except that what had been called asbestosis previously was now called silicosis.

My secretary’s astute observation was mirrored in the judicial proceedings of Judge Janis Graham Jack, who presided over MDL 1553. Judge Jack, however, discovered something even more egregious: in some cases, a single physician interpreted a single chest radiograph as showing either asbestosis or silicosis, but not both. The two, alternative diagnoses were recorded in two, separate reports, for two different litigation cases against different defendants. This fraudulent practice, as well as others, is documented in Judge Jack’s extraordinarily thorough opinion. See In re Silica Prods. Liab. Litig., 398 F. Supp. 2d 563 (S.D. Tex. 2005).9

The revelations of fraud in Judge Jack’s opinion were not entirely surprising. As everyone involved in asbestos litigation has always known, there is a disturbing degree of subjectivity in the interpretation of chest radiographs for pneumoconiosis. The federal government has long been aware of this problem, and through the Centers for Disease Control and the National Institute of Occupational Safety and Health, has tried to subdue extreme subjectivity by creating a pneumoconiosis classification scheme for chest radiographs known as the “B-reader” system. Unfortunately, B-reader certification meant only that physicians could achieve inter-observer and intra-observer reproducibility of interpretations on the examination, but they were free to peddle extreme interpretations for litigation. Indeed, the B-reader certification system exacerbated the problem by creating a credential that was marketed to advance the credibility of some of the most biased, over-reading physicians in asbestos, silica, and coal pneumoconiosis litigation.

Tarry’s third example is a study conducted under the leadership of the late Joseph Gitlin, at Johns Hopkins Medical School. With funding from defendants and insurers, Gitlin conducted a concordance study of films that had been read by predatory radiologists and physicians as showing pneumoconiosis. The readers in his study found a very low level of positive films (less than 5%), despite their having been interpreted as showing pneumoconiosis by the litigation physicians. See Joseph N. Gitlin, Leroy L. Cook, Otha W. Linton, and Elizabeth Garrett-Mayer, “Comparison of ‘B’ Readers’ Interpretations of Chest Radiographs for Asbestos Related Changes,” 11 Acad. Radiol. 843 (2004); Marjorie Centofanti, “With thousands of asbestos workers demanding compensation for lung disease, a radiology researcher here finds that most cases lack merit,” Hopkins Medicine (2006). As with the Sokal hoax, the practitioners of post-modern medicine cried “foul,” and decried industry sponsorship, but the disparity between courtroom and hospital medicine was sufficient proof for most disinterested observers that there was a need to fix the litigation process.
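Concordance studies of this kind are conventionally summarized with a chance-corrected agreement statistic such as Cohen’s kappa, which distinguishes genuine reader agreement from the agreement expected by luck alone. The sketch below is my own illustration; the 2×2 counts are hypothetical and are not data from the Gitlin study:

```python
# Cohen's kappa for agreement between two chest-film readers.
# The counts below are illustrative only, not data from the Gitlin study.

def cohens_kappa(both_pos, r1_only, r2_only, both_neg):
    """Chance-corrected agreement between two binary readers."""
    n = both_pos + r1_only + r2_only + both_neg
    observed = (both_pos + both_neg) / n
    # Expected agreement if each reader called films positive at his own rate.
    p1_pos = (both_pos + r1_only) / n
    p2_pos = (both_pos + r2_only) / n
    expected = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)
    return (observed - expected) / (1 - expected)

# A litigation reader calling 90% of films positive against a blinded
# reader calling 5% positive yields kappa of zero -- no better than chance.
print(round(cohens_kappa(both_pos=45, r1_only=405, r2_only=5, both_neg=45), 3))
```

A kappa near zero, as in this contrived example, is what one would expect when one reader’s interpretations are driven by something other than the films.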

Meretricious Mensuration10 – Manganese Litigation Example

Tarry’s examples are important reminders that corporate sponsorship, whether from the plaintiffs’ lawsuit industry or from manufacturing industry, does not necessarily render research tainted or unreliable. Although lawyers often confront exaggerated or false claims, and witness important, helpful correctives in the form of litigation-sponsored studies, the demands of legal practice and “the next case” typically prevent lawyers from documenting the scientific depredations and their rebuttals. Sadly, unlike litigations such as those involving Bendectin and silicone, the chronicles of fraud and exaggeration are mostly closed books in closed files in closed offices. These examples need the light of day and a fresh breeze to disseminate them widely in both the scientific and legal communities, so that all may have a healthy appreciation for the value of appropriately conducted studies generated in litigation contexts.

As I have intimated elsewhere, the welding fume litigation is a great example of specious claiming, which ultimately was unhorsed by publications inspired or funded by the defense. In the typical welding fume case, plaintiff claimed that exposure to manganese in welding fume caused Parkinson’s disease or manganism. Although manganism sounds as though it must be a disease that can be caused only by manganese, in the hands of plaintiffs’ expert witnesses, manganism became whatever ailment plaintiffs claimed to have suffered. Circularity and perfect definitional precision were achieved by semantic fiat.

The Sanchez-Ramos Meta-Analysis

Manganese Madness was largely the creation of the Litigation Industry, under the dubious leadership of Dickie Scruggs & Company. Although the plaintiffs enjoyed a strong tail wind in the courtroom of an empathetic judge, they had difficulties in persuading juries and ultimately decamped from MDL 1535, in favor of more lucrative targets. In their last hurrah, however, plaintiffs retained a neurologist, Juan Sanchez-Ramos, who proffered a biased, invalid synthesis, which he billed as a meta-analysis11.

Sanchez-Ramos’s meta-analysis, such as it was, provoked professional disapproval and criticism from the defense expert witness, Dr. James Mortimer. Because the work product of Sanchez-Ramos was first disclosed in deposition, and not in his Rule 26 report, Dr. Mortimer belatedly undertook a proper meta-analysis.12 Even though Dr. Mortimer’s meta-analysis was done in response to Sanchez-Ramos’s improper, tardy disclosure, the MDL judge ruled that Mortimer’s meta-analysis was too late. The effect of Mortimer’s meta-analysis, however, was clear in showing that welding had no positive association with Parkinson’s disease outcomes. MDL 1535 resolved quickly thereafter, and with only slight encouragement, Dr. Mortimer published a further refined meta-analysis with two other leading neuro-epidemiologists. See James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012). See also Manganese Meta-Analysis Further Undermines Reference Manual’s Toxicology Chapter (Oct. 15, 2012).

1 See, e.g., David Michaels & Celeste Monforton, “Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment,” 95 Am. J. Pub. Health S39, S40 (2005); David Michaels & Celeste Monforton, “How Litigation Shapes the Scientific Literature: Asbestos and Disease Among Automobile Mechanics,” 15 J. L. & Policy 1137, 1165 (2007). Michaels had served as a plaintiffs’ paid expert witness in chemical exposure litigation, and Monforton had been employed by labor unions before these papers were published, without disclosure of conflicts.

2 Leslie Boden & David Ozonoff, “Litigation-Generated Science: Why Should We Care?” 116 Envt’l Health Persp. 121, 121 (2008) (arguing that systematic distortion of the scientific record will result from litigation-sponsored papers even with disclosure of conflicts of interest). Ozonoff had served as a hired plaintiffs’ expert witness on multiple occasions before the publication of this article, which was unadorned by any disclosure.

3 Lennart Hardell, Martin J. Walker, Bo Walhjalt, Lee S. Friedman, and Elihu D. Richter, “Secret Ties to Industry and Conflicting Interests in Cancer Research,” 50 Am. J. Indus. Med. 227, 233 (2007) (criticizing “powerful industrial interests” for “undermining independent research on hazard and risk,” in a “red” journal that is controlled by allies of the lawsuit industry). Hardell was an expert witness for plaintiffs in mobile phone litigation in which plaintiffs claimed that non-ionizing radiation caused brain cancer. In federal litigation, Hardell was excluded as an expert witness when his proffered opinions were found to be scientifically unreliable. Newman v. Motorola, Inc., 218 F. Supp. 2d 769, 777 (D. Md. 2002), aff’d, 78 Fed. Appx. 292 (4th Cir. 2003).

4 See David Egilman & Susanna Bohme, “IJOEH and the Critique of Bias,” 14 Internat’l J. Occup. & Envt’l Health 147, 148 (2008) (urging a Marxist critique that industry-sponsored research is necessarily motivated by profit considerations, and biased in favor of industry funders). Although Egilman usually gives a disclosure of his litigation activities, he typically characterizes those activities as having been for both plaintiffs and defendants, even though his testimonial work for defendants is minuscule.

5 Kenneth J. Rothman, “Conflict of Interest: The New McCarthyism in Science,” 269 J. Am. Med. Ass’n 2782 (1993).

6 See Charles H. Hennekens, I-Min Lee, Nancy R. Cook, Patricia R. Hebert, Elizabeth W. Karlson, Fran LaMotte, JoAnn E. Manson, and Julie E. Buring, “Self-reported Breast Implants and Connective-Tissue Diseases in Female Health Professionals: A Retrospective Cohort Study,” 275 J. Am. Med. Ass’n 616-19 (1996) (analyzing an established cohort for claimed associations, with funding from the National Institutes of Health and Dow Corning Corporation).

7 See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction: A Report by a National Science Panel to the Hon. Sam Pointer Jr.,” MDL 926 (Nov. 30, 1998). The court-appointed expert witnesses dedicated a great deal of their professional time to their task of evaluating the plaintiffs’ claims and the evidence. At the end of the process, they all published their litigation work in leading journals. See Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000); Esther C. Janowsky, Lawrence L. Kupper, and Barbara S. Hulka, “Meta-Analyses of the Relation between Silicone Breast Implants and the Risk of Connective-Tissue Diseases,” 342 New Engl. J. Med. 781 (2000); Peter Tugwell, George Wells, Joan Peterson, Vivian Welch, Jacqueline Page, Carolyn Davison, Jessie McGowan, David Ramroth, and Beverley Shea, “Do Silicone Breast Implants Cause Rheumatologic Disorders? A Systematic Review for a Court-Appointed National Science Panel,” 44 Arthritis & Rheumatism 2477 (2001).

8 Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (Institute of Medicine) (Wash. D.C. 1999).

9 See also Lester Brickman, “On the Applicability of the Silica MDL Proceeding to Asbestos Litigation,” 12 Conn. Insur. L. J. 289 (2006); Lester Brickman, “Disparities Between Asbestosis and Silicosis Claims Generated By Litigation Screenings and Clinical Studies,” 29 Cardozo L. Rev. 513 (2007).

10 This apt phraseology is due to the late Keith Morgan, whose wit, wisdom, and scientific acumen are greatly missed. See W. Keith C. Morgan, “Meretricious Mensuration,” 6 J. Eval. Clin. Practice 1 (2000).

11 See Deposition of Dr. Juan Sanchez-Ramos, in Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

12 See Deposition of Dr. James Mortimer, in Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).