TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Litigation Science – In re Zambelli-Weiner

April 8th, 2019

Back in 2001, in the aftermath of the silicone gel breast implant litigation, I participated in a Federal Judicial Center (FJC) television production of “Science in the Courtroom, program 6” (2001). Program six was a round-table discussion among the directors (past, present, and future) of the FJC, all of whom were sitting federal judges, with two lawyers in private practice, Elizabeth Cabraser and me.1 One of the more exasperating moments in our conversation came when Ms. Cabraser, who represented plaintiffs in the silicone litigation, complained that Daubert was unfair because corporate defendants were able to order up supporting scientific studies, whereas poor plaintiffs counsel did not have the means to gin up studies that confirmed what they knew to be true.2 Refraining from talking over her required all the self-restraint I could muster, but I did eventually respond by denying her glib generalization and offering the silicone litigation as one in which plaintiffs, plaintiffs’ counsel, and plaintiffs’ support groups were all involved in funding and directing some of the sketchiest studies, most of which managed to find homes in so-called peer-reviewed journals of some sort, even if not the best.

The litigation connections of the plaintiff-sponsored studies in the silicone litigation were not apparent on the face of the published articles. The partisan funding and provenance of the studies were mostly undisclosed and required persistent discovery and subpoenas. Cabraser’s propaganda reinforced the recognition of what so-called mass tort litigation had taught me about all scientific studies: “trust but verify.” Verification is especially important for studies that are sponsored by litigation-industry actors who have no reputation at stake in the world of healthcare.

Verification is not a straightforward task, however. Peer-review publication usually provides some basic information about “methods and materials,” but rarely if ever do published articles provide sufficient data and detail about methodology to replicate the reported analysis. In legal proceedings, verification of studies conducted and relied upon by testifying expert witnesses is facilitated by the rules of expert witness discovery. In federal court, expert witnesses must specify all opinions and all bases for their opinions. When such witnesses rely upon their own studies, and thus have had privileged access to the complete data and all analyses, courts have generally permitted full inquiry into the underlying materials of relied-upon studies. On the other hand, when the author of a relied-upon study is a “stranger to the litigation,” neither a party nor a retained expert witness, courts have generally permitted more limited discovery of the study’s full data set and analyses. Regardless of the author’s status, the question remains how litigants are to challenge an adversary’s expert witness’s trusted reliance upon a study, which cannot be “verified.”

Most lawyers would prefer, of course, to call an expert witness who has actually conducted studies pertinent to the issues in the case. The price, however, of allowing the other side to discover the underlying data and materials of the author expert witness’s studies may be too high. The relied-upon studies may well end up discredited, along with the professional reputation of the expert witness. The litigation industry has adapted to these rules of discovery by avoiding, in most instances, calling testifying expert witnesses who have published studies that might be vulnerable.3

One work-around to the discovery rules lies in the use of “consulting, non-testifying expert witnesses.” The law shields such consultations, to some extent, to facilitate candid exchanges with expert witnesses, usually without concern that communications will be shared with the adversary party and witnesses. The hope is that such candid communications will permit realistic assessment of partisan positions, while allowing scientists and scholars to participate in an advisory capacity without the burden of depositions, formal report writing, and appearances at judicial hearings and trials. The confidentiality of consulting expert witnesses is, however, open to abuse by counsel who engage the consultants to conduct and publish studies, which can then be relied upon by the testifying expert witnesses. The upshot is that legal counsel can manipulate the published literature in a favorable way, without having to disclose their financial sponsorship of, or influence over, the published studies used by their testifying expert witnesses.

This game of hiding study data and sponsorship through the litigation industry’s use of confidential consulting expert witnesses pervades so-called mass tort litigation, which provides ample financial incentives for study sponsorship and control. Defendants will almost always be unable to play the game without detection. A simple interrogatory or other discovery request about funding of studies will reveal the attempt to pass off a party-sponsored study as having been conducted by disinterested scientists. Furthermore, most scientists will feel obligated to reveal corporate funding as a potential conflict of interest when submitting manuscripts for publication.

Revealing litigation-industry (plaintiffs’) funding of studies is more complicated. First, the funding may be through one firm, which is not the legal counsel in the case for which discovery is being conducted. In such instances, the plaintiff’s lawyers can truthfully declare that they lack personal knowledge of any financial support for studies relied upon by their testifying expert witnesses. Second, the plaintiffs’ law firm, not being a party, is not itself subject to discovery. Even if the plaintiffs’ lawyers funded a study, they can claim, with plausible deniability, that they funded the study in connection with another client’s case, not the client who is plaintiff in the case in which discovery is sought. Third, the plaintiffs’ firm may take the position, however dubious it might be, that the funding of the relied-upon study was simply a confidential consultation with the authors of that study, and not subject to discovery.

The now pending litigation against ondansetron (Zofran) provides the most recent example of the dubious use of consulting expert witnesses to hide party sponsorship of an epidemiologic study. The plaintiffs, who are claiming that Zofran causes birth defects in this multi-district litigation assigned to Judge F. Dennis Saylor, have designated Dr. Carol Luik as their sole testifying expert witness on epidemiology. Dr. Luik, in turn, has relied substantially upon a study conducted by Dr. April Zambelli-Weiner.4

According to motion papers filed by defendants,5 the plaintiffs’ counsel initially claimed that they had no knowledge of any financial support or conflicts for Dr. Zambelli-Weiner. The conflict-of-interest disclosure in Zambelli-Weiner’s paper was, to say the least, suspicious:

“The authors declare that there was no outside involvement in study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; and in the decision to submit the manuscript for publication.”

“As an organization TTi reports receiving funds from plaintiff law firms involved in ondansetron litigation and a manufacturer of ondansetron.”

According to its website, TTi

“is an economically disadvantaged woman-owned small business headquartered in Westminster, Maryland. We are focused on the development, evaluation, and implementation of technologies and solutions that advance the transformation of data into actionable knowledge. TTi serves a diverse clientele, including all stakeholders in the health space (governments, payors, providers, pharmaceutical and device companies, and foundations) who have a vested interest in advancing research to improve patient outcomes, population health, and access to care while reducing costs and eliminating health disparities.”

According to defendants’ briefing, and contrary to plaintiffs’ initial claims and Zambelli-Weiner’s anemic conflicts disclosure, plaintiffs’ counsel eventually admitted that “Plaintiffs’ Leadership Attorneys paid $210,000 as financial support relating to” Zambelli-Weiner’s epidemiologic study. The women at TTi are apparently less economically disadvantaged than advertised.

The Zofran defendants served subpoenas duces tecum and ad testificandum on two of the study authors, Drs. April Zambelli-Weiner and Russell Kirby. Curiously, the plaintiffs (who would seem to have no interest in defending the third-party subpoenas) sought a protective order by arguing that defendants were harassing “third-party scientists.” Their motion for protection conveniently and disingenuously omitted that Zambelli-Weiner had been a paid consultant to the Zofran plaintiffs.

Judge Saylor refused to quash the subpoenas, and Zambelli-Weiner appeared herself, through counsel, to seek a protective order. Her supporting affidavit averred that she had not been retained as an expert witness, and that she had no documents “concerning any data analyses or results that were not reported in the [published study].” Zambelli-Weiner’s attempt to evade discovery was embarrassed by her having presented a “Zofran Litigation Update” with Plaintiffs’ counsel Robert Jenner and Elizabeth Graham at a national conference for plaintiffs’ attorneys. Judge Saylor was not persuaded, and the MDL court denied Dr. Zambelli-Weiner’s motion. The law and the public have a right to every man’s, and every woman’s (even if economically disadvantaged), evidence.6

Tellingly, in the aftermath of the motions to quash, Zambelli-Weiner’s counsel, Scott Marder, abandoned his client by filing an emergency motion to withdraw, because “certain of the factual assertions in Dr. Zambelli-Weiner’s Motion for Protective Order and Affidavit were inaccurate.” Mr. Marder also honorably notified defense counsel that he could no longer represent that Zambelli-Weiner’s document production was complete.

Early this year, on January 29, 2019, Zambelli-Weiner submitted, through new counsel, a “Supplemental Affidavit,” wherein she admitted she had been a “consulting expert” witness for the law firm of Grant & Eisenhofer on the claimed teratogenicity of Zofran.7 Zambelli-Weiner also produced a few extensively redacted documents. On February 1, 2019, Zambelli-Weiner testified at deposition that the moneys she received from Grant & Eisenhofer were not to fund her Zofran study, but for other, “unrelated work.” Her testimony was at odds with the plaintiffs’ counsel’s confession that the $210,000 related to her Zofran study.

Zambelli-Weiner’s etiolated document production was confounded by the several hundred pages of documents produced by her fellow author, Dr. Russell Kirby. When confronted with documents from Kirby’s production, Zambelli-Weiner’s lawyer unilaterally suspended the deposition.

Déjà Vu All Over Again

Federal courts have seen the Zambelli-Weiner maneuver before. In litigation over claimed welding fume health effects, plaintiffs’ counsel Richard (Dickie) Scruggs and colleagues funded some neurological researchers to travel to Alabama and Mississippi to “screen” plaintiffs and potential plaintiffs for claims of neurological injury and disease from welding fume exposure, using a novel videotaping methodology. The plaintiffs’ lawyers rounded up the research subjects (a.k.a. clients and potential clients), talked to them before the medical evaluations, and administered the study questionnaires. The study subjects were clearly aware of Mr. Scruggs’ “research” hypothesis, and had already promised him 40% of any recovery.8

After their sojourn to Alabama and Mississippi, at Scruggs’ expense, the researchers wrote up their results, with little or no detail of the circumstances of how they had acquired their research “participants,” or of those participants’ motives to give accurate or inaccurate medical and employment history information.9

Defense counsel served subpoenas upon both Dr. Brad Racette and his institution, Washington University in St. Louis, for the study protocol, underlying data, data codes, and all statistical analyses. Racette and Washington University resisted sharing their data and materials with every page in the Directory of Non-Transparent Research. They claimed that the subpoenas sought production of testimony, information and documents in violation of:

“(1) the Federal Regulations set forth in the Department of Health and Human Services Policy for Protection of Human Research Subjects,

(2) the Federal regulations set forth in the HIPPA Regulations,

(3) the physician/patient privilege,

(4) the research scholar’s privilege,

(5) the trade secret/confidential research privilege and

(6) the scope of discovery as codified by the Federal Rules of Civil Procedure and the Missouri Rules of Civil Procedure.”

After a long discovery fight, the MDL court largely enforced the subpoenas.10 The welding MDL court ordered Racette to produce

“a ‘limited data set’ which links the specific categories requested by defendants: diagnosis, occupation, and age. This information may be produced as a ‘deidentified’ data set, such that the categories would be linked to each particular patient, without using any individual patient identifiers. This data set should: (1) allow matching of each study participant’s occupational status and age with his or her neurological condition, as diagnosed by the study’s researchers; and (2) to the greatest extent possible (except for necessary de-identification), show original coding and any code-keys.”

After the defense had the opportunity to obtain and analyze the underlying data in the Scruggs-Racette study, the welding plaintiffs retreated from their epidemiologic case. Various defense expert witnesses analyzed the underlying data produced by Racette, and prepared devastating rebuttal reports. These reports were served upon plaintiffs’ counsel, whose expert witnesses never attempted any response. Reliance upon Racette’s study was withdrawn or abandoned. After the underlying data were shared with the parties to MDL 1535, no scientist appeared to defend the results in the published papers.11 The Racette Alabama study faded into the background of the subsequent welding-fume cases and trials.

The motion battle in the welding MDL revealed interesting contradictions, similar to those seen in the Zambelli-Weiner affair. For example, Racette claimed he had no relationship whatsoever with plaintiffs’ counsel, other than showing up by happenstance in Alabama at places where Scruggs’ clients also just happened to show up. Racette claimed that the men and women he screened were his patients, but he had no license to practice in Alabama, where the screenings took place. Plaintiffs’ counsel disclaimed that Racette was a treating physician, which acknowledgment would have made the individuals’ screening results discoverable in their individual cases. And more interestingly, plaintiffs’ counsel claimed that both Dr. Racette and Washington University were “non-testifying, consulting experts utilized to advise and assist Plaintiffs’ counsel with respect to evaluating and assessing each of their client’s potential lawsuit or claim (or not).”12

Over the last decade or so, best practices and codes of conduct for the relationship between pharmacoepidemiologists and study funders have been published.13 These standards apply with equal force to public agencies, private industry, and regulatory authorities. Perhaps it is time for them to specify that they apply to the litigation industry as well.


1 See Smith v. Wyeth-Ayerst Labs. Co., 278 F. Supp. 2d 684, 710 & n. 56 (W.D.N.C. 2003).

2 Ironically, Ms. Cabraser has published her opinion that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions, a view which would have simplified and greatly shortened the silicone gel breast implant litigation. See Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002).

3 Litigation concerning Viagra is one notable example where plaintiffs’ counsel called an expert witness who was the author of the very study that supposedly supported their causal claim. It did not go well for the plaintiffs or the expert witness. See Lori B. Leskin & Bert L. Slonim, “A Primer on Challenging Peer-Reviewed Scientific Literature in Mass Tort and Product Liability Actions,” 25 Toxics L. Rptr. 651 (Jul. 1, 2010).

4 April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

5 Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

6 See Branzburg v. Hayes, 408 U.S. 665, 674 (1972).

7 Affidavit of April Zambelli-Weiner, dated January 9, 2019 (Doc. No. 1272).

8 The plaintiffs’ lawyers’ motive and opportunity to poison the study by coaching their “clients” was palpable. See David B. Resnik & David J. McCann, “Deception by Research Participants,” 373 New Engl. J. Med. 1192 (2015).

9 See Brad A. Racette, S.D. Tabbal, D. Jennings, L. Good, J.S. Perlmutter, and Brad Evanoff, “Prevalence of parkinsonism and relationship to exposure in a large sample of Alabama welders,” 64 Neurology 230 (2005); Brad A. Racette, et al., “A rapid method for mass screening for parkinsonism,” 27 Neurotoxicology 357 (2006) (a largely duplicative report of the Alabama welders study).

10 See, e.g., In re Welding Fume Prods. Liab. Litig., MDL 1535, 2005 WL 5417815 (N.D. Ohio Oct. 18, 2005) (upholding defendants’ subpoena for protocol, data, data codes, statistical analyses, and other things from Dr. Racette’s Alabama study on welding and parkinsonism).

11 Racette sought and obtained a protective order for the data produced, and thus I still cannot share the materials he provided without asking that any reviewer sign the court-mandated protective order. Revealingly, Racette was concerned about who had seen his underlying data, and he obtained a requirement in the court’s non-disclosure affidavit that anyone who reviews the underlying data will not sit on peer review of his publications or his grant applications. See Motion to Compel List of Defendants’ Reviewers of Data Produced by Brad A. Racette, M.D., and Washington University Pursuant to Protective Order, in In re Welding Fume Products Liab. Litig., MDL No. 1535, Case 1:03-cv-17000-KMO, Document 1642-1 (N.D. Ohio Feb. 14, 2006). Curiously, Racette never moved to compel a list of Plaintiffs’ Reviewers!

12 Plaintiffs’ Motion for Protective Order, Motion to Reconsider Order Requiring Discovery from Dr. Racette, and Request for In Camera Inspection as to Any Responses or Information Provided by Dr. Racette, filed in Solis v. Lincoln Elec. Co., case No. 1:03-CV-17000, MDL 1535 (N.D. Ohio May 8, 2006).

13 See, e.g., Xavier Kurz, Susana Perez‐Gutthann, and the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidem. & Drug Safety 245 (2018).

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statisticians’ principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including litigation and regulatory rulemaking. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalts” and “thou shalt nots,” but I will focus on the issues that are more likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they would probably do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on P-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless of whether they used p-values, confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

“The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

“The statistical analyst openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data, when appropriate, without revealing the identities of study participants. Statistical analysts should publicly correct errors in their own disseminated data and analyses, as well as work to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that’s a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.

Multiplicity

The ASA Guidelines address the impropriety of substantive statistical errors, such as:

“[r]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one “significant” result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or to the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, or the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.
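The arithmetic behind the Guidelines’ warning about multiplicity is simple: if each of k independent tests is run at significance level α, the chance of at least one false-positive “significant” result is 1 − (1 − α)^k. A minimal sketch of that calculation (illustrative only; the numbers are hypothetical and not drawn from the Guidelines or any case discussed here):

```python
# Family-wise error rate for k independent tests, each at alpha = 0.05.
# With no true effects anywhere, each test still has a 5% chance of a
# false positive; the chance of at least one grows quickly with k.
alpha = 0.05

for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {fwer:.3f}")
```

At twenty comparisons, the probability of at least one spurious “significant” finding is roughly 64%, which is why selective reporting of the one significant test among many is so misleading.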


2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

Gatekeeping of Expert Witnesses Needs a Bair Hug

December 20th, 2017

For every Rule 702 (“Daubert”) success story, there are multiple gatekeeping failures. See David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).1 Exemplars of inadequate expert witness gatekeeping in state or federal court abound, and overwhelm the bar. The only solace one might find is that the abuse-of-discretion appellate standard of review keeps the bad decisions from precedentially outlawing the good ones.

Judge Joan Ericksen recently provided another Berenstain Bears’ example of how not to keep the expert witness gate, in litigation claims that the Bair Hugger forced air warming devices (“Bair Huggers”) cause infections. In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017). Although Her Honor properly cited and quoted Rule 702 (2000), a new standard is announced in a bold heading:

“Under Federal Rule of Evidence 702, the Court need only exclude expert testimony that is so fundamentally unsupported that it can offer no assistance to the jury.”

Id. at *1. This new standard thus permits largely unsupported opinion that can offer bad assistance to the jury. As Judge Ericksen demonstrates, this new standard, which has no warrant in the statutory text of Rule 702 or its advisory committee notes, allows expert witnesses to rely upon studies that have serious internal and external validity flaws.

Jonathan Samet, a specialist in pulmonary medicine, not infectious disease or statistics, is one of the plaintiffs’ principal expert witnesses. Samet relies in large measure upon an observational study,2 which purports to find an increased odds ratio for use of the Bair Hugger among infection cases in one particular hospital. The defense epidemiologist, Jonathan B. Borak, criticized the McGovern observational study on several grounds, including that the study was highly confounded by the presence of other known infection risks. Id. at *6. Judge Ericksen characterized Borak’s opinion as an assertion that the McGovern study was an “insufficient basis” for the plaintiffs’ claims. A fair reading of even Judge Ericksen’s précis of Borak’s proffered testimony requires the conclusion that Borak’s opinion was that the McGovern study was invalid because of data collection errors and confounding. Id.

Judge Ericksen’s judicial assessment, taken from the disagreement between Samet and Borak, is that there are issues with the McGovern study, which go to “weight of the evidence.” This finding obscures, however, that there were strong challenges to the internal and external validity of the study. Drawing causal inferences from an invalid observational study is a methodological issue, not a weight-of-the-evidence problem for the jury to resolve. This MDL opinion never addresses the Rule 703 issue, whether an epidemiologic expert would reasonably rely upon such a confounded study.
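The methodological point about confounding can be made concrete with a toy calculation. The counts below are entirely hypothetical (they are not the McGovern data): when a third factor, such as surgical risk, raises both the odds of device use and the odds of infection, the crude odds ratio can be sharply elevated even though the device has no effect within either stratum.

```python
# Hypothetical 2x2 tables illustrating confounding; not real data.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a = exposed cases, b = exposed
    non-cases, c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# Stratum 1: high-risk surgeries (device used often, infections common)
s1 = (24, 56, 6, 14)    # within-stratum OR = 1.0 (no device effect)
# Stratum 2: low-risk surgeries (device used rarely, infections rare)
s2 = (1, 19, 4, 76)     # within-stratum OR = 1.0 (no device effect)

# Crude table: pool the strata cell by cell, ignoring the confounder
crude = tuple(x + y for x, y in zip(s1, s2))   # (25, 75, 10, 90)

print(odds_ratio(*s1), odds_ratio(*s2), odds_ratio(*crude))  # 1.0 1.0 3.0
```

A crude odds ratio of 3.0 emerges from two strata in which the exposure does nothing, which is why drawing causal inferences from a confounded observational study is a validity problem, not merely a matter of “weight.”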

The defense proffered the opinion of Theodore R. Holford, who criticized Dr. Samet for drawing causal inferences from the McGovern observational study. Holford, a professor of biostatistics at Yale University’s School of Public Health, analyzed the raw data behind the McGovern study. Id. at *8. The plaintiffs challenged Holford’s opinions on the ground that he relied on data in “non-final” form, from a temporally expanded dataset. Even more intriguingly, given that the plaintiffs did not present a statistician expert witness, plaintiffs argued that Holford’s opinions should be excluded because

(1) he insufficiently justified his use of a statistical test, and

(2) he “emphasizes statistical significance more than he would in his professional work.”

Id.

The MDL court dismissed the plaintiffs’ challenge on the mistaken conclusion that the alleged contradictions between Holford’s practice and his testimony “impugn his credibility at most.” If there were truly such a deviation from the statistical standard of care, the issue is methodological, not a credibility issue of whether Holford was telling the truth. And as for the alleged over-emphasis on statistical significance, the MDL court again falls back on the glib conclusions that the allegation goes to the weight, not the admissibility, of expert witness opinion testimony, and that plaintiffs can elicit testimony from Dr. Samet as to how and why Professor Holford over-emphasized statistical significance. Id. Inquiring minds, at the bar and in the academy, are left with no information about what the real issues are in the case.

Generally, both sides’ challenges to expert witnesses were denied.3 The real losers, however, were the scientific and medical communities, bench, bar, and general public. The MDL court glibly and incorrectly treated methodological issues as “credibility” issues, confused sufficiency with validity, and banished methodological failures to consideration by the trier of fact for “weight.” Confounding was mistreated as simply a debating point between the parties’ expert witnesses. The reader of Judge Ericksen’s opinion never learns what statistical test was used by Professor Holford, what justification was needed but allegedly absent for the test, why the justification was contested, and what other test was alleged by plaintiffs to have been a “better” statistical test. As for the emphasis given statistical significance, the reader is left in the dark about exactly what that emphasis was, and how it led to Holford’s conclusions and opinions, and what the proper emphasis should have been.

Eventually appellate review of the Bair Hugger MDL decision must turn on whether the district court abused its discretion. Although appellate courts give trial judges discretion to resolve Rule 702 issues, the appellate courts cannot reach reasoned decisions when the inferior courts fail to give even a cursory description of what the issues were, and how and why they were resolved as they were.


2 P. D. McGovern, M. Albrecht, K. G. Belani, C. Nachtsheim, P. F. Partington, I. Carluke, and M. R. Reed, “Forced-Air Warming and Ultra-Clean Ventilation Do Not Mix: An Investigation of Theatre Ventilation, Patient Warming and Joint Replacement Infection in Orthopaedics,” 93 J. Bone & Joint Surg. 1537 (2011). The article as published contains no disclosures of potential or actual conflicts of interest. A persistent rumor has it that the investigators were funded by a commercial rival to the manufacturer of the Bair Hugger at issue in Judge Ericksen’s MDL. See generally, Melissa D. Kellam, Loraine S. Dieckmann, and Paul N. Austin, “Forced-Air Warming Devices and the Risk of Surgical Site Infections,” 98 Ass’n periOperative Registered Nurses (AORN) J. 354 (2013).

3 A challenge to plaintiffs’ expert witness Yadin David was sustained to the extent he sought to offer opinions about the defendant’s state of mind. Id. at *5.

Welding Litigation – Another Positive Example of Litigation-Generated Science

July 11th, 2017

In a recent post1, I noted Samuel Tarry’s valuable article2 for its helpful, contrarian discussion of the importance of some scientific articles with litigation provenances. Public health debates can spill over to the courtroom, and developments in the courtroom can, on occasion, inform and even resolve those public health debates that gave rise to the litigation. Tarry provided an account of three such articles, and I provided a brief account of another article, a published meta-analysis, from the welding fume litigation.

The welding litigation actually generated several studies, but in this post, I detail the background of another published study, this one an epidemiologic study by a noted Harvard epidemiologist. Not every expert witness’s report has the makings of a published paper. In theory, if an expert witness has conducted a systematic review and reached a conclusion not already represented in the published literature, we might well say that the witness has achieved the “least publishable unit.” The reality is that most causal claims in litigation are not based upon anything that could even remotely be called a systematic review. Given the lack of credibility of such causal claims, rebuttal reports are likely to hold little interest for serious scientists.

Martin Wells

In the welding fume cases, one of plaintiffs’ hired expert witnesses, Martin Wells, a statistician, proffered an analysis of Parkinson’s disease (PD) mortality among welders and welding tradesmen. Using the National Center for Health Statistics (NCHS) database, Wells aggregated data from 1993 to 1999 for PD mortality among welders and compared it with PD mortality among non-welders. Wells claimed to find an increased risk of PD mortality among younger (under age 65 at death) welders and welding tradesmen in this dataset.

The defense sought discovery of Wells’s methods and materials, and obtained the underlying data from the NCHS. Wells had no protocol, no pre-stated commitment to which years in the dataset he would use, and no pre-stated statistical analysis plan. At a Rule 702 hearing, Wells was unable to state how many welders were included in his analysis, why he selected some years but not others, or why he had selected age 65 as the cutoff. His analyses appeared to be pure data dredging.

As the defense discovered, the NCHS dataset contained mortality data for many more years than the limited range employed by Wells in his analysis. Working with an expert witness at the Harvard School of Public Health, the defense discovered that Wells had gerrymandered the years included (and excluded) in his analysis in a way that just happened to generate a marginally, nominally statistically significant association.

NCHS Welder Age Distribution

The defense was thus able to show that the data, overall and in each year, were very sparse. For most years, the number of PD deaths under age 65 was either 0 or 1. Because of the huge denominators, however, the calculated mortality odds ratios were nominally statistically significant. The value of four PD deaths in 1998 is clearly an outlier; if that value were three rather than four, the statistical significance of the calculated OR would have been lost. Alternatively, a simple sensitivity test suggests that if the overall count were n = 6 instead of n = 7, statistical significance would have been lost. The chart below, prepared at the time with help from Dr. David Schwartz of Innovative Science Solutions, shows the actual number of “underlying cause” PD deaths in the NCHS dataset for each year, and how sparse and “granular” these data were:
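The arithmetic behind such a sensitivity check can be sketched in a few lines of Python. The counts below are purely hypothetical stand-ins, not the actual NCHS figures; the point is only that when the exposed cell holds a handful of deaths, removing a single death can push the Wald 95% confidence interval for the mortality odds ratio across 1.0.

```python
import math

def mor_with_ci(a, b, c, d):
    """Mortality odds ratio (a*d)/(b*c) with a Wald 95% CI on the log scale.

    a = PD deaths under 65 among welders;     b = other welder deaths under 65
    c = PD deaths under 65 among non-welders; d = other non-welder deaths under 65
    """
    mor = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(mor) - 1.96 * se)
    hi = math.exp(math.log(mor) + 1.96 * se)
    return mor, lo, hi

# Hypothetical counts, chosen only to mimic the sparse-cell problem:
mor7, lo7, hi7 = mor_with_ci(7, 40_000, 2_400, 30_000_000)
mor6, lo6, hi6 = mor_with_ci(6, 40_000, 2_400, 30_000_000)

print(f"n=7: MOR {mor7:.2f}, 95% CI ({lo7:.2f}, {hi7:.2f})")  # lower bound just above 1.0
print(f"n=6: MOR {mor6:.2f}, 95% CI ({lo6:.2f}, {hi6:.2f})")  # lower bound falls below 1.0
```

With these made-up counts, dropping the exposed cell from seven deaths to six moves the lower confidence bound from just above 1.0 to well below it, which is exactly the fragility the defense demonstrated in Wells’s analysis.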

A couple of years later, Wells’s litigation analysis showed up, with only minor changes in its analyses, as a manuscript in the editorial offices of Neurology: Katherine W. Eisenberg, AB and Martin T. Wells, Ph.D., “A Mortality Odds Ratio Study of Welders and Parkinson Disease.” Wells disclosed that he had testified for plaintiffs in the welding fume litigation, but Eisenberg declared no conflicts. Having only an undergraduate degree, and attending medical school at the time of submission, Ms. Eisenberg would not seem to have had the opportunity to accumulate any conflicts of interest. Undisclosed to the editors of Neurology, however, was that Ms. Eisenberg was the daughter of Theodore (Ted) Eisenberg, a lawyer who taught at Cornell University and who represented plaintiffs in the same welding MDL as the one in which Wells testified. Inquiring minds might have wondered whether Ms. Eisenberg’s tuition, room, and board were subsidized by Ted’s earnings in the welding fume and other litigations. Ted Eisenberg and Martin Wells had collaborated on many other projects, but in the welding fume litigation, Ted worked as an attorney for the MDL welding plaintiffs, and Martin Wells was compensated handsomely as an expert witness. The acknowledgment at the end of the manuscript thanked Theodore Eisenberg for his thoughtful comments and discussion, without noting that he had been a paid member of the plaintiffs’ litigation team. Nor did Wells and Eisenberg tell the Neurology editors that the article had grown out of Wells’s 2005 litigation report in the welding MDL.

The disclosure lapses and oversights by Wells and the younger Eisenberg proved harmless error because Neurology rejected the Wells and Eisenberg paper for publication, and it was never submitted elsewhere. The paper used the same restricted set of years of NCHS data, 1993-1999. The defense had already shown, through its own expert witness’s rebuttal report, that the manuscript’s analysis achieved statistical significance only because it omitted years from the analysis. For instance, if the authors had analyzed 1992 through 1999, their Parkinson’s disease mortality point estimate for younger welding tradesmen would no longer have been statistically significant.

Robert Park

One reason that Wells and Eisenberg may have abandoned their gerrymandered statistical analysis of the NCHS dataset was that an ostensibly independent group3 of investigators published a paper that presented a competing analysis. Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63 (2005) [cited as Park (2005)]. The authors accessed the same NCHS dataset, and looked at hundreds of different occupations, including welding tradesmen, and four neurodegenerative diseases.

Park, et al., claimed that they looked at occupations that had shown elevated proportional mortality ratios (PMRs) in a previous NIOSH publication. A few other occupations were included; in all, there were hundreds of independent analyses, without any adjustment for multiple testing. Welding occupations4 were included “[b]ecause of reports of Parkinsonism in welders [Racette et al., 2001; Levy and Nassetta, 2003], possibly attributable to manganese exposure (from welding rods and steel alloys)… .”5 Racette was a consultant for the Lawsuit Industry, which had funded his research on parkinsonism among welders. Levy was a testifying expert witness for Lawsuit, Inc. A betting person would conclude that Park had consulted with Wells and Eisenberg, and their colleagues.

These authors looked at four neurodegenerative diseases (NDDs): Alzheimer’s disease, Parkinson’s disease, motor neuron disease, and pre-senile dementia. The authors used NCHS death certificate occupational information from 1992 to 1998, which was remarkable because Wells had insisted that 1992 somehow was not available for inclusion in his analyses. During 1992 to 1998, in 22 states, there were 2,614,346 deaths, with 33,678 from Parkinson’s disease. (p. 65b). For the welding tradesmen, none of the four NDDs showed any association in the main analyses. Park then went on to conduct subgroup analyses for each of the four NDDs for deaths below age 65. In these subgroup analyses for welding tradesmen, the authors purported to find an association only with Parkinson’s disease:

“Of the four NDDs under study, only PD was associated with occupations where arc-welding of steel is performed, and only for the 20 PD deaths below age 65 (MOR=1.77, 95% CI=1.08-2.75) (Table V).”

Park (2005), at 70.

The exact nature of the subgroup was obscure, to say the least. Remarkably, Park and his colleagues had not calculated an odds ratio for welding tradesmen under age 65 at death compared with non-welding tradesmen under age 65 at death. The table’s legend attempts to explain the authors’ calculation:

“Adjusted for age, race, gender, region and SES. Model contains multiplicative terms for exposure and for exposure if age at death <65; thus MOR is estimate for deaths occurring age 65+, and MOR, age <65 is estimate of enhanced risk: age <65 versus age 65+”

In other words, Park looked to see whether welding tradesmen who died at a younger age (below age 65) were more likely to have a PD cause of death than welding tradesmen who died at an older age (65 or over). The meaning of this internal comparison is totally unclear, but it cannot represent a comparison of welders with non-welders. Indeed, every time Park and his colleagues calculated and reported this strange odds ratio for any occupational group in the published paper, the odds ratio was elevated. If the odds ratio means anything, it is that younger Parkinson’s patients, regardless of occupation, are more likely to die of their neurological disease than older patients. Older men, regardless of occupation, are more likely to die of cancer, cardiovascular disease, and other chronic diseases. Furthermore, this age association within (not between) occupational groups may be nothing other than a reflection of the greater severity of early-onset Parkinson’s disease in anyone, regardless of occupation.
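The difference between the two contrasts can be made concrete with a toy calculation. The counts below are hypothetical, not Park’s data; they are chosen so that the internal young-versus-old comparison within welders looks elevated even though welders and non-welders of the same age have identical PD mortality odds.

```python
def odds_ratio(a, b, c, d):
    """Simple cross-product odds ratio for a 2x2 table: (a*d)/(b*c)."""
    return (a * d) / (b * c)

# Hypothetical death counts (not Park's data):
#                          PD deaths   other deaths
# welders, under 65:             20         50_000
# welders, 65 and over:         300      1_500_000
# non-welders, under 65:      2_000      5_000_000

# Internal contrast (what the table legend describes): among welders only,
# PD-versus-other mortality odds for deaths under 65 relative to deaths 65+.
internal = odds_ratio(20, 50_000, 300, 1_500_000)

# External contrast (what a reader would expect): PD-versus-other mortality
# odds for welders under 65 relative to non-welders under 65.
external = odds_ratio(20, 50_000, 2_000, 5_000_000)

print(internal)  # 2.0 -- looks like an "elevated MOR, age <65"
print(external)  # 1.0 -- no welder/non-welder association at all
```

With these numbers the internal ratio doubles while the proper between-group comparison is exactly null, which is why an elevated “MOR, age <65” of the Park variety says nothing about welding itself.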

Like the manuscript by Eisenberg and Wells, the Park paper was an exercise in data dredging. The Park study reported increased odds ratios for Parkinson’s disease among the following groups on the primary analysis:

biological, medical scientists [MOR 2.04 (95% CI, 1.37-2.92)]

clergy [MOR 1.79 (95% CI, 1.58-2.02)]

religious workers [MOR 1.70 (95% CI, 1.27-2.21)]

college teachers [MOR 1.61 (95% CI, 1.39-1.85)]

social workers [MOR 1.44 (95% CI, 1.14-1.80)]

As noted above, the Park paper reported that all of the internal mortality odds ratios for below versus above age 65, within occupational groups, were nominally statistically significantly elevated. Nonetheless, the Park authors were on a mission, determined to make something out of nothing, at least when it came to welding and Parkinson’s disease among younger patients. The authors’ conclusion reflected stunningly poor scholarship:

“Studies in the US, Europe, and Korea implicate manganese fumes from arc-welding of steel in the development of a Parkinson’s-like disorder, probably a manifestation of manganism [Sjogren et al., 1990; Kim et al., 1999; Luccini, et al., 1999; Moon et al., 1999]. The observation here that PD mortality is elevated among workers with likely manganese exposures from welding, below age 65 (based on 20 deaths), supports the welding-Parkinsonism connection.”

Park (2005) at 73.

Stunningly bad, because the cited papers by Sjogren, Luccini, Kim, and Moon did not examine Parkinson’s disease as an outcome; indeed, they did not even examine a parkinsonian movement disorder. More egregious, however, was the authors’ assertion that their analysis, which compared the odds of Parkinson’s disease mortality among welders under age 65 with those among welders over age 65, supported an association between welding and “Parkinsonism.”

Every time the authors conducted this analysis internal to an occupational group, they found an elevation among under age 65 deaths compared with over age 65 deaths within the occupational group. They did not report comparisons of any age-defined subgroup of a single occupational group with similarly aged mortality in the remaining dataset.

Elan Louis

The plaintiffs’ lawyers used the Park paper as “evidence” of an association that they claimed was causal. They were aided by a cadre of expert witnesses who could cite to a paper’s conclusions, but could not understand its methods. Occasionally, one of the plaintiffs’ expert witnesses would confess ignorance about exactly what Robert Park had done in this paper. Elan Louis, one of the better qualified expert witnesses on the side of claimants, for instance, testified in the plaintiffs’ attempt to certify a national medical monitoring class action for welding tradesmen. His testimony about what to make of the Park paper was more honest than that of most of the plaintiffs’ expert witnesses:

Q. My question to you is, is it true that that 1.77 point estimate of risk, is not a comparison of this welder and allied tradesmen under this age 65 mortality, compared with non-welders and allied tradesmen who die under age 65?

A. I think it’s not clear that the footnote — I think that the footnote is not clearly written. When you read the footnote, you didn’t read the punctuation that there are semicolons and colons and commas in the same sentence. And it’s not a well constructed sentence. And I’ve gone through this sentence many times. And I’ve gone through this sentence with Ted Eisenberg many times. This is a topic of our discussion. One of the topics of our discussions. And it’s not clear from this sentence that that’s the appropriate interpretation. *  *  *  However, the footnote, because it’s so poorly written, it obscures what he actually did. And then I think it opens up alternative interpretations.

Q. And if we can pursue that for a moment. If you look at other tables for other occupational titles, or exposure related variables, is it true that every time that Mr. Park reports on that MOR age under 65, that the estimate is elevated and statistically significantly so?

A. Yes. And he uses the same footnote every time. He’s obviously cut and paste that footnote every single time, down to the punctuation is exactly the same. And I would agree that if you look for example at table 4, the mortality odds ratios are elevated in that manner for Parkinson’s Disease, with reference to farming, with reference to pesticides, and with reference to farmers excluding horticultural deaths.

Deposition testimony of Elan Louis, at pp. 401-04, in Steele v. A.O. Smith Corp., No. 1:03-CV-17000, MDL 1535 (Jan. 18, 2007). Other, less qualified or less honest, expert witnesses on the plaintiffs’ side were content to cite Park (2005) as support for their causal opinions.

Meir Stampfer

The empathetic MDL trial judge denied the plaintiffs’ request for class certification in Steele, but individual personal injury cases continued to be litigated. Steele v. A.O. Smith Corp., 245 F.R.D. 279 (N.D. Ohio 2007) (denying class certification); In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2008 WL 3166309 (N.D. Ohio Aug. 4, 2008) (striking pendent state-law class action claims).

Although Elan Louis was honest enough to acknowledge his own confusion about the Park paper, other expert witnesses continued to rely upon it, and plaintiffs’ counsel continued to cite the paper in their briefs and to use the apparently elevated point estimate for welders in their cross-examinations of defense expert witnesses. With the NCHS data in hand (on a DVD), defense counsel returned to Meir Stampfer, who had helped them unravel Martin Wells’s litigation analysis. The question for Professor Stampfer was whether Park’s reported point estimate for the PD mortality odds ratio was truly a comparison of welders versus non-welders, or whether it was some uninformative internal comparison of younger welders versus older welders.

The one certainty available to the defense was that it had the same dataset that Martin Wells had used in the earlier litigation analysis, and that Robert Park and his colleagues had now used in their published analysis. Using the NCHS dataset, and Park’s definition of a welder or a welding tradesman, Professor Stampfer calculated PD mortality odds ratios for each definition, as well as for each definition for deaths under age 65. None of these analyses yielded statistically significant associations. Park’s curious results could not be replicated from the NCHS dataset.

For welders, the overall PD mortality odds ratio (MOR) was 0.85 (95% CI, 0.77–0.94), for years 1985 through 1999, in the NCHS dataset. If the definition of welders was expanded to include welding tradesmen, as Robert Park had done, the MOR was 0.83 (95% CI, 0.78–0.88) for all years available in the NCHS dataset.

When Stampfer conducted an age-restricted analysis, which properly compared welders or welding tradesmen who died under age 65 with non-welding tradesmen who died under age 65, he similarly obtained no associations for PD MOR. For the years 1985-1991, for PD deaths under age 65, Stampfer found MORs of 0.99 (95% CI, 0.44–2.22) for welders only, and 0.83 (95% CI, 0.48–1.44) for all welding tradesmen.

And for 1992-1999, the years used by Park (2005), and similar to the date range used by Martin Wells, for PD deaths under age 65, Stampfer found a MOR of 1.44 (95% CI, 0.79–2.62) for welders only, and 1.20 (95% CI, 0.79–1.84) for all welding tradesmen.

None of Park’s slicing, dicing, and subgrouping of welding and PD results could be replicated. Although Dr. Stampfer submitted a report in Steele, there remained the problem that Park (2005) was a peer-reviewed paper, and that plaintiffs’ counsel, expert witnesses, and other published papers were citing it for its claimed results and errant discussion. The defense asked Dr. Stampfer whether the “least publishable unit” had been achieved, and Stampfer reluctantly agreed. He wrote up his analysis, and published it in 2009, with an appropriate disclosure6. Meir J. Stampfer, “Welding Occupations and Mortality from Parkinson’s Disease and Other Neurodegenerative Diseases Among United States Men, 1985–1999,” 6 J. Occup. & Envt’l Hygiene 267 (2009).

Professor Stampfer’s paper may not be the most important contribution to the epidemiology of Parkinson’s disease, but it corrected the distortions and misrepresentations of data in Robert Park’s paper. His paper has since been cited by well-known researchers in support of their conclusion that there is no association between welding and Parkinson’s disease7. Park’s paper has been criticized on PubPeer, with no rebuttal8.

Almost comically, Park has cited Stampfer’s study tendentiously for a claim that there is a healthy worker bias present in the available epidemiology of welding and PD, without noting, or responding to, the devastating criticism of his own Park (2005) work:

“For a mortality study of neurodegenerative disease deaths in the United States during 1985 – 1999, Stampfer [61] used the Cause of Death database of the US National Center for Health Statistics and observed adjusted mortality odds ratios for PD of 0.85 (95% CI, 0.77 – 0.94) and 0.83 (95% CI, 0.78 – 0.88) in welders, using two definitions of welding occupations [61]. This supports the presence of a significant HWE [healthy worker effect] among welders. An even stronger effect was observed in welders for motor neuron disease (amyotrophic lateral sclerosis, OR 0.71, 95% CI, 0.56 – 0.89), a chronic condition that clearly would affect welders’ ability to work.”

Robert M. Park, “Neurobehavioral Deficits and Parkinsonism in Occupations with Manganese Exposure: A Review of Methodological Issues in the Epidemiological Literature,” 4 Safety & Health at Work 123, 126 (2013). Amyotrophic lateral sclerosis has a sudden onset, usually in middle age, without any real prodromal signs or symptoms that would keep a young man from entering welding as a trade. It just shows that you can get any opinion published in a peer-reviewed journal, somewhere. Stampfer’s paper, along with Mortimer’s meta-analysis, helped put the kibosh on welding fume litigation.

Addendum

A few weeks ago, the Sixth Circuit affirmed the dismissal of an attempted class action based upon claims of environmental manganese exposure. Abrams v. Nucor Steel Marion, Inc., Case No. 3:13 CV 137, 2015 WL 6872511 (N.D. Ohio Nov. 9, 2015) (finding the testimony of neurologist Jonathan Rutchik to be nugatory, and excluding his proffered opinions), aff’d, 2017 U.S. App. LEXIS 9323 (6th Cir. May 25, 2017). Class plaintiffs had employed Rutchik, one of the regulars from the welding fume parkinsonism litigation.


2 Samuel L. Tarry, Jr., “Can Litigation-Generated Science Promote Public Health?” 33 Am. J. Trial Advocacy 315 (2009).

3 Ostensibly, but not really. Robert M. Park was an employee of NIOSH, but he had spent most of his career working as an employee for the United Autoworkers labor union. The paper acknowledged help from Ed Baker, David Savitz, and Kyle Steenland. Baker is a colleague and associate of B.S. Levy, who was an expert witness for plaintiffs in the welding fume litigation, as well as many others. The article was published in the “red” journal, the American Journal of Industrial Medicine.

4 The welding tradesmen included in the analyses were welders and cutters, boilermakers, structural metal workers, millwrights, plumbers, pipefitters, and steamfitters. Robert M. Park, Paul A. Schulte, Joseph D. Bowman, James T. Walker, Stephen C. Bondy, Michael G. Yost, Jennifer A. Touchstone, and Mustafa Dosemeci, “Potential Occupational Risks for Neurodegenerative Diseases,” 48 Am. J. Ind. Med. 63, 65a, ¶2 (2005).

5 Id.

6 “The project was supported in part through a consulting agreement with a group of manufacturers of welding consumables who had no role in the analysis, or in preparing this report, did not see any draft of this manuscript prior to submission for publication, and had no control over any aspect of the work or its publication.” Stampfer, at 272.

7 Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 Eur. J. Epidemiol. S1 (2011).

8 The criticisms can be found at <https://pubpeer.com/publications/798F9D98B5D2E5A832136C0A4AD261>, last visited on July 10, 2017.