TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Everything She Just Said Was Bullshit

September 26th, 2019

At this point, most products liability lawyers have read about the New Jersey verdicts returned earlier this month against Johnson & Johnson in four mesothelioma cases.[1] The Middlesex County jury found that the defendant’s talc, through its supposed asbestos impurities, was a cause of all four plaintiffs’ mesotheliomas, and awarded compensatory damages of $37.3 million.[2]

Johnson & Johnson was prejudiced by having to try four questionably consolidated cases together, hobbled by having its affirmative defense evidence stricken, and finally crucified when the trial judge, in effect, instructed the jury at the end of the defense lawyer’s closing argument: “everything she just said was bullshit.”

Judge Ana C. Viscomi, who presided over the trial, struck the entire summation of defense lawyer Diane Sullivan. The ruling effectively deprived Johnson & Johnson of a defense, as the verdicts attest. Judge Viscomi issued her egregious ruling without explaining which parts of Sullivan’s closing were objectionable, and without giving Sullivan an opportunity to argue against the sanction.

During Sullivan’s closing argument, Judge Viscomi criticized Sullivan for calling the plaintiffs’ lawyers “sinister,” and suggested that her argument defamed the legal profession in violation of the Rules of Professional Conduct.[3] Sullivan did use the word “sinister” several times, but in each instance she referred to the plaintiffs’ arguments, allegations, and innuendo about Johnson & Johnson’s actions. Judge Viscomi also, curiously, imputed unprofessional conduct to Sullivan for referring to plaintiffs’ counsel’s “shows and props,” construing the reference as a suggestion that plaintiffs’ counsel had fabricated evidence.

Striking an entire closing argument is, as far as anyone has determined, unprecedented. Of course, Judge Haller is fondly remembered for having stricken the entirety of Vinny Gambini’s opening statement, but the good judge did allow Vinny’s “thank you” to stand:

Vinny Gambini: “Yeah, everything that guy just said is bullshit… Thank you.”

D.A. Jim Trotter: “Objection. Counsel’s entire opening statement is argument.”

Judge Chamberlain Haller: “Sustained. Counselor’s entire opening statement, with the exception of ‘Thank you’ will be stricken from the record.”

My Cousin Vinny (1992).

In the real world of a New Jersey courtroom, even Ms. Sullivan’s expression of gratitude for the jury’s attention and service succumbed to Judge Viscomi’s unprecedented ruling,[4] as did almost 40 pages of argument in which Sullivan carefully debunked and challenged the opinion testimony of plaintiffs’ highly paid expert witnesses. The trial court’s ruling undermined the defense’s detailed rebuttal of plaintiffs’ evidence, as well as the defense’s comment upon the plaintiffs’ witnesses’ lack of credibility.

Judge Viscomi’s sua sponte ruling appears even more curious given what took place in its aftermath. First, the trial court treated plaintiffs’ counsel very differently. The lawyers for the plaintiffs gave extensive closing arguments replete with assertions that Johnson & Johnson and Ms. Sullivan were liars, predators, manipulators, poisoners, baby killers, and then some. Sullivan’s objections were perfunctorily overruled. Second, Judge Viscomi permitted plaintiffs’ counsel to comment extensively upon Ms. Sullivan’s closing, even though it had been stricken. Third, despite the judicial admonition about the Rules of Professional Conduct, neither the trial judge nor plaintiffs’ counsel appears to have filed a disciplinary complaint against Ms. Sullivan. Of course, if Judge Viscomi or plaintiffs’ counsel believed that Ms. Sullivan had violated the Rules, they would have been obligated to report her for misconduct.

Bottom line: these verdicts are unsafe.


[1]  The cases were tried in a questionable consolidation in the New Jersey Superior Court, for Middlesex County, before Judge Viscomi. Barden v. Brenntag North America, No. L-1809-17; Etheridge v. Brenntag North America, No. L-932-17; McNeill-George v. Brenntag North America, No. L-7049-16; and Ronning v. Brenntag North America, No. L-6040-17.

[2]  Bill Wichert, “J&J Hit With $37.3M Verdict In NJ Talc Case,” Law360 (Sept. 11, 2019).

[3]  Amanda Bronstad, “J&J Moves for Talc Mistrial After Judge Strikes Entire Closing Argument,” N.J.L.J. (Sept. 10, 2019) (describing Judge Viscomi as having admonished Ms. Sullivan to “[s]top denigrating the lawyers”; J&J’s motion for mistrial was made before the case was submitted to the jury).

[4]  See Peder B. Hong, “Summation at the Border: Serious Misconduct in Final Argument in Civil Trials,” 19 Hamline L. Rev. 179 (1995); Ty Tasker, “Sticks and Stones: Judicial Handling of Invective in Advocacy,” 42 Judges J. 17 (2003); Janelle L. Davis, “Sticks and Stones May Break My Bones, But Names Could Get Me a Mistrial: An Examination of Name-Calling in Closing Argument in Civil Cases,” 42 Gonzaga L. Rev. 133 (2011).

Palavering About P-Values

August 17th, 2019

The American Statistical Association’s most recent confused and confusing communication about statistical significance testing has given rise to great mischief in the world of science and science publishing.[1] Take, for instance, last week’s opinion piece, “Is It Time to Ban the P Value?” Please.

Helena Chmura Kraemer is an accomplished professor of statistics at Stanford University. This week, the JAMA network flagged Professor Kraemer’s opinion piece on p-values as one of its most-read articles. Kraemer’s eye-catching title creates the impression that the p-value is unnecessary and inimical to valid inference.[2]

Remarkably, Kraemer’s article commits the very mistake that the ASA set out to correct back in 2016,[3] by conflating the probability of the data under a hypothesis of no association with the probability of a hypothesis given the data:

“If P value is less than .05, that indicates that the study evidence was good enough to support that hypothesis beyond reasonable doubt, in cases in which the P value .05 reflects the current consensus standard for what is reasonable.”

The ASA tried to break scientists’ bad habit of interpreting p-values as license to assign posterior probabilities, such as “beyond a reasonable doubt,” to hypotheses, but obviously to no avail.
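The difference between the two probabilities is not pedantry, and a back-of-the-envelope Bayesian calculation shows why. The sketch below is mine, not Kraemer’s or the ASA’s, and its inputs (the statistical power of the studies, and the prevalence of true hypotheses among those tested) are illustrative assumptions only:

    # A minimal sketch: a p-value below 0.05 does not give the probability
    # that the tested hypothesis is true. Illustrative, assumed inputs:
    alpha = 0.05   # significance threshold: P(p < .05 | null is true)
    power = 0.80   # P(p < .05 | alternative is true)
    prior = 0.10   # assumed share of tested hypotheses that are actually true

    # By Bayes' theorem, the probability that the hypothesis is true,
    # given a "significant" result:
    ppv = (power * prior) / (power * prior + alpha * (1 - prior))
    print(f"P(hypothesis true | p < 0.05) = {ppv:.2f}")   # prints 0.64

Under these assumptions, a “significant” finding leaves roughly a one-in-three chance that the null hypothesis is nonetheless true; nothing like proof beyond a reasonable doubt.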

Kraemer also ignores the ASA 2016 Statement’s teaching about what the p-value is not and cannot do, by claiming that p-values are determined by sources of non-random error, such as:

“the reliability and sensitivity of the measures used, the quality of the design and analytic procedures, the fidelity to the research protocol, and in general, the quality of the research.”

Kraemer provides errant advice and counsel by insisting that “[a] non-significant result indicates that the study has failed, not that the hypothesis has failed.” If the p-value measures the probability of observing an association at least as large as that obtained, given an assumed null hypothesis, then of course a large p-value cannot speak to the failure of the hypothesis; but why declare that the study has failed? The study was perhaps indeterminate, but it still yielded information that can be combined with other data, or that can help guide future studies.
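A short simulation (my own sketch, with invented numbers, not anything taken from Kraemer’s article) shows how routinely an underpowered study of a perfectly real effect returns a “non-significant” result:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect, n, sims = 0.3, 30, 10_000  # real but modest effect; small study

    nonsig = 0
    for _ in range(sims):
        treated = rng.normal(true_effect, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        _, p = stats.ttest_ind(treated, control)
        nonsig += (p >= 0.05)
    print(f"non-significant despite a true effect: {nonsig / sims:.0%}")  # ~80%

Here the hypothesis is true by construction, yet roughly four of every five such studies “fail.” The sensible reading is that each study was indeterminate, not that the hypothesis was refuted.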

Perhaps in her most misleading advice, Kraemer asserts that:

“[w]hether P values are banned matters little. All readers (reviewers, patients, clinicians, policy makers, and researchers) can just ignore P values and focus on the quality of research studies and effect sizes to guide decision-making.”

Really? If a high-quality study finds an “effect size” of interest, can we now ignore random error?
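The trouble with “just look at effect sizes” is that an effect size says nothing about its own precision. A brief sketch (mine, with invented numbers) makes the point; the identical observed effect is compatible with almost any conclusion at n = 10, and is quite informative at n = 1,000:

    import numpy as np

    # Same observed effect size; very different random error.
    for n in (10, 1000):
        effect, sd = 0.5, 1.0
        se = sd * np.sqrt(2 / n)   # standard error of a difference in means
        lo, hi = effect - 1.96 * se, effect + 1.96 * se
        print(f"n = {n:4d}: effect 0.50, 95% CI ({lo:+.2f}, {hi:+.2f})")

    # n =   10: effect 0.50, 95% CI (-0.38, +1.38)
    # n = 1000: effect 0.50, 95% CI (+0.41, +0.59)

Ignoring random error means treating those two results as equally good evidence for decision-making.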

The ASA 2016 Statement, with its “six principles,” has provoked some deliberate or ill-informed distortions in American judicial proceedings, but Kraemer’s editorial creates idiosyncratic meanings for p-values. Even the 2019 ASA “post-modernism” does not advocate ignoring random error and p-values; it proscribes only the dichotomous characterization of results as “statistically significant,” or not.[4] The current author guidelines for articles submitted to the Journals of the American Medical Association clearly reject this new-fangled rejection of the need to assess the role of random error.[5]


[1]  See Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[2]  Helena Chmura Kraemer, “Is It Time to Ban the P Value?” J. Am. Med. Ass’n Psych. (August 7, 2019), in-press at doi:10.1001/jamapsychiatry.2019.1965.

[3]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

[4]  “Has the American Statistical Association Gone Post-Modern?” (May 24, 2019).

[5]  See instructions for authors at https://jamanetwork.com/journals/jama/pages/instructions-for-authors

Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL

July 30th, 2019

Judge Saylor, who presides over the Zofran MDL, handed down his opinion on the Zambelli-Weiner affair on July 25, 2019.[1] As discussed on these pages back in April of this year,[2] GlaxoSmithKline (GSK), the defendant in the Zofran birth defects litigation, sought documents from plaintiffs and Dr. Zambelli-Weiner (ZW) about her published study on Zofran and birth defects.[3] Plaintiffs refused to respond to the discovery on grounds of attorney work product[4] and of consulting expert witness confidential communications.[5] After an abstract of ZW’s study appeared in print, GSK subpoenaed ZW and her co-author, Dr. Russell Kirby, for a deposition and for production of documents.

Plaintiffs’ counsel sought a protective order. Their motion relied upon a characterization of ZW as a research scientist, while conveniently omitting their retention of her as a paid expert witness. In December 2018, the MDL court denied plaintiffs’ motion for a protective order, and allowed the deposition to go forward to explore the financial relationship between counsel and ZW.

In January 2019, when GSK served ZW with its subpoena duces tecum, ZW, through her own counsel, moved for a protective order, supported by her affidavit asserting facts to show that she was not subject to the deposition. The MDL court quickly denied her motion, and in short order her lawyer notified the court that ZW’s affidavit contained “factual misrepresentations,” which she refused to correct, and sought leave to withdraw.

According to the MDL court, the ZW affidavit contained three falsehoods. First, ZW claimed not to have been retained by any party, when she had been a paid consultant to plaintiffs at times over the previous five years, since December 2014. Second, she claimed to have no factual information about the litigation, when in fact she had participated in a Las Vegas plaintiffs’ lawyers’ conference, “Mass Torts Made Perfect,” in October 2015. Third, ZW falsely claimed that monies received from plaintiffs’ law firms did not fund the Zofran study, but went to her company, Translational Technologies International Health Research & Economics, for unrelated work. ZW received in excess of $200,000 for her work on the Zofran study.

After ZW obtained new counsel, she gave deposition testimony in February 2019, acknowledging her receipt of money for the study and her lengthy relationship with plaintiffs’ counsel. Armed with this information, GSK moved for full responses to its document requests. Again, plaintiffs’ counsel and ZW resisted on grounds of confidentiality and privilege.

Judge Saylor reviewed the requested documents in camera, and held last week that they were not protected by consulting expert witness privilege or by attorney work product confidentiality. ZW’s materials and communications in connection with the Las Vegas plaintiffs’ conference never had the protection of privilege or confidentiality. ZW presented at a “quasi-public” conference attended by lawyers who had no connection to the Zofran litigation.[6]

With respect to work product claims, Judge Saylor found that GSK had shown “exceptional circumstances” and “substantial need” for the requested materials given that the plaintiffs’ testifying expert witnesses had relied upon the ZW study, which had been covertly financially supported by plaintiffs’ lawyers.[7] With respect to whatever was thinly claimed to be privileged and confidential, Judge Saylor found the whole arrangement to fail the smell test:[8]

“It is troublesome, to say the least, for a party to engage a consulting, non-testifying expert; pay for that individual to conduct and publish a study, or otherwise affect or influence the study; engage a testifying expert who relies upon the study; and then cloak the details of the arrangement with the consulting expert in the confidentiality protections of Rule 26(b) in order to conceal it from a party opponent and the Court. The Court can see no valid reason to permit such an arrangement to avoid the light of discovery and the adversarial process. Under the circumstances, GSK has made a showing of substantial need and an inability to obtain these documents by other means without undue hardship.

Furthermore, in this case, the consulting expert made false statements to the Court as to the nature of her relationship with plaintiffs’ counsel. The Court would not have been made aware of those falsehoods but for the fact that her attorney became aware of the issue and sought to withdraw. Certainly plaintiffs’ counsel did nothing at the time to correct the false impressions created by the affidavit. At a minimum, the submission of those falsehoods effectively waived whatever protections might otherwise apply. The need to discover the truth and correct the record surely outweighs any countervailing policy in favor of secrecy, particularly where plaintiffs’ testifying experts have relied heavily on Dr. Zambelli-Weiner’s study as a basis for their causation opinions. In order to effectively cross-examine plaintiffs’ experts about those opinions at trial, GSK is entitled to review the documents. At a minimum, the documents shed additional light on the nature of the relationship between Dr. Zambelli-Weiner and plaintiffs’ counsel, and go directly to the credibility of Dr. Zambelli-Weiner and the reliability of her study results.”

It remains to be seen whether Judge Saylor will refer the matter of ZW’s false statements in her affidavit to the U.S. Attorney’s office, or the lawyers’ complicity in perpetuating these falsehoods to disciplinary boards.

Mass torts will never be perfect, or even very good. Judge Saylor, however, has managed to make the Zofran litigation a little less bad.


[1]  Memorandum and order on In Camera Production of Documents Concerning Dr. April Zambelli-Weiner, In re Zofran Prods. Liab. Litig., MDL 2657, D.Mass. (July 25, 2019) [cited as Mem.].

[2]  NAS, “Litigation Science – In re Zambelli-Weiner” (April 8, 2019).

[3]  April Zambelli-Weiner, et al., “First Trimester Ondansetron Exposure and Risk of Structural Birth Defects,” 83 Reproductive Toxicol. 14 (2019).

[4]  Fed. R. Civ. P. 26(b)(3).

[5]  Fed. R. Civ. P. 26(b)(4)(D).

[6]  Mem. at 7-9.

[7]  Mem. at 9.

[8]  Mem. at 9-10.

Statistical Significance at the New England Journal of Medicine

July 19th, 2019

Some wild stuff has been going on in the world of statistics, at the American Statistical Association, and elsewhere. A very few obscure journals have declared p-values to be verboten, and presumably confidence intervals as well. The world of biomedical research has generally reacted more sanely, with authors defending the existing frequentist approaches and standards.[1]

This week, the editors of the New England Journal of Medicine have issued new statistical guidelines for authors. The Journal’s approach seems appropriately careful and conservative for the world of biomedical research. In an editorial introducing the new guidelines,[2] the Journal editors remind their potential authors that statistical significance and p-values are here to stay:

“Despite the difficulties they pose, P values continue to have an important role in medical research, and we do not believe that P values and significance tests should be eliminated altogether. A well-designed randomized or observational study will have a primary hypothesis and a prespecified method of analysis, and the significance level from that analysis is a reliable indicator of the extent to which the observed data contradict a null hypothesis of no association between an intervention or an exposure and a response. Clinicians and regulatory agencies must make decisions about which treatment to use or to allow to be marketed, and P values interpreted by reliably calculated thresholds subjected to appropriate adjustments have a role in those decisions.”[3]

The Journal’s editors described their revamped statistical policy as being based upon three premises:

(1) adhering to prespecified analysis plans if they exist;

(2) declaring associations or effects only for statistical analyses that have prespecified “a method for controlling type I error”; and

(3) presenting “both point estimates and their margins of error” when reporting evidence about clinical benefits or harms.

With a hat tip to the ASA’s recent pronouncements on statistical significance,[4] the editors suggest that their new guidelines have moved away from using statistical significance “as a bright-line marker for a conclusion or a claim”[5]:

“[T]he notion that a treatment is effective for a particular outcome if P < 0.05 and ineffective if that threshold is not reached is a reductionist view of medicine that does not always reflect reality.”[6]

The editors’ language intimates greater latitude for authors in claiming associations or effects from their studies, but this latitude may well be circumscribed by tighter control over such claims in the inevitable context of multiple testing within a dataset.

The editors’ introduction of the new guidelines is not entirely coherent. The introductory editorial notes that using p-values to report multiple outcomes, without adjustment for multiplicity, inflates the number of findings with p-values less than 5%. The editors thus caution against “uncritical interpretation of multiple inferences,” which can be particularly threatening to valid inference when not all of the comparisons conducted by the study investigators have been reported in their manuscript.[7] They reassuringly tell prospective authors that many methods are available to adjust for multiple comparisons and to control type I error probability “when specified in the design of a study.”[8]
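For readers who have not seen such adjustments in action, here is a minimal sketch. The p-values are invented, and the two corrections shown (Bonferroni and Holm), as implemented in the Python statsmodels library, are my choice of familiar examples, not methods the Journal specifically prescribes:

    from statsmodels.stats.multitest import multipletests

    # Ten outcomes tested in one dataset; two fall below 0.05 unadjusted.
    p_values = [0.004, 0.03, 0.08, 0.12, 0.21, 0.35, 0.48, 0.60, 0.74, 0.91]

    for method in ("bonferroni", "holm"):
        reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
        print(f"{method}: {sum(reject)} of {len(p_values)} outcomes significant")
        # bonferroni: 1 of 10; holm: 1 of 10

After either correction, only one of the two nominally significant outcomes survives; the unadjusted analysis would have reported twice as many “findings.”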

But what happens when such adjustment methods are not pre-specified in the study design? The failure to do so does not appear to be a disqualifying factor for publication in the Journal. For one thing, when the statistical analysis plan of a study has not specified adjustment methods for controlling type I error probabilities, the authors must replace p-values with “estimates of effects or association and 95% confidence intervals.”[9] It is hard to understand how this edict helps, when the specified 95% coefficient is simply the complement of the same 5% alpha that would have been used in any event. The editors seem to be saying that if authors fail to pre-specify, or even post-specify, methods for controlling error probabilities, then they cannot declare statistical significance or use p-values, but they can use confidence intervals just as they have been using them, with the same misleading interpretations supplied by their readers.
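The duality between the two devices is easy to demonstrate. In the sketch below (mine, with simulated data), the pooled-variance t-test’s p-value falls below 0.05 exactly when the corresponding 95% confidence interval excludes the null value of zero:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    a = rng.normal(0.4, 1.0, 50)   # treatment arm
    b = rng.normal(0.0, 1.0, 50)   # control arm

    t, p = stats.ttest_ind(a, b)   # pooled-variance two-sample t-test
    diff = a.mean() - b.mean()
    df = len(a) + len(b) - 2
    sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
    tcrit = stats.t.ppf(0.975, df)
    lo, hi = diff - tcrit * se, diff + tcrit * se

    # The interval excludes zero if, and only if, p < 0.05: same alpha, same test.
    print(f"p = {p:.3f}; 95% CI ({lo:.2f}, {hi:.2f}); excludes 0: {not (lo <= 0 <= hi)}")

Swapping the one device for the other changes the reporting label, not the underlying error probability.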

More important, another price authors will have to pay for multiple testing without pre-specified methods of adjustment is that they will affirmatively have to announce their failure to adjust for multiplicity and that their putative associations “may not be reproducible.” Tepid as this concession is, it is better than previous practice, and perhaps it will become a badge of shame. The crucial question is whether judges, in exercising their gatekeeping responsibilities, will see these acknowledgements as disabling valid inferences from studies that carry this mandatory warning label.

The editors have not issued guidelines for the use of Bayesian statistical analyses, because “the large majority” of author manuscripts use only frequentist analyses.[10] The editors inform us that “[w]hen appropriate,” they will expand their guidelines to address Bayesian and other designs. Perhaps this expansion will be appropriate when Bayesian analysts establish a track record of abuse in their claiming of associations and effects.

The new guidelines themselves are not easy to find. The Journal has not published the guidelines as an article in its pages, but has relegated them to a subsection of its website’s instructions to authors for new manuscripts:

https://www.nejm.org/author-center/new-manuscripts

Presumably, the actual author instructions control in the event of any perceived discrepancy between this week’s editorial and the guidelines themselves. Authors are told that p-values generally should be two-sided, and that:

“Significance tests should be accompanied by confidence intervals for estimated effect sizes, measures of association, or other parameters of interest. The confidence intervals should be adjusted to match any adjustment made to significance levels in the corresponding test.”

Similarly, the guidelines call for, but do not require, pre-specified methods of controlling family-wise error rates for multiple comparisons. For observational studies submitted without pre-specified methods of error control, the guidelines recommend the use of point estimates and 95% confidence intervals, with an explanation that the interval widths have not been adjusted for multiplicity, and a caveat that the inferences from these findings may not be reproducible. The guidelines recommend against using p-values for such results; but again, it is difficult to see why reporting 95% confidence intervals is recommended when p-values are not.


[1]  Jonathan A. Cook, Dean A. Fergusson, Ian Ford, Mithat Gonen, Jonathan Kimmelman, Edward L. Korn, and Colin B. Begg, “There is still a place for significance testing in clinical trials,” 16 Clin. Trials 223 (2019).

[2]  David Harrington, Ralph B. D’Agostino, Sr., Constantine Gatsonis, Joseph W. Hogan, David J. Hunter, Sharon-Lise T. Normand, Jeffrey M. Drazen, and Mary Beth Hamel, “New Guidelines for Statistical Reporting in the Journal,” 381 New Engl. J. Med. 285 (2019).

[3]  Id. at 286.

[4]  See id. (“Journal editors and statistical consultants have become increasingly concerned about the overuse and misinterpretation of significance testing and P values in the medical literature. Along with their strengths, P values are subject to inherent weaknesses, as summarized in recent publications from the American Statistical Association.”) (citing Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s statement on p-values: context, process, and purpose,” 70 Am. Stat. 129 (2016); Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Moving to a world beyond ‘p < 0.05’,” 73 Am. Stat. s1 (2019)).

[5]  Id. at 285.

[6]  Id. at 285-86.

[7]  Id. at 285.

[8]  Id., citing Alex Dmitrienko, Frank Bretz, Ajit C. Tamhane, Multiple testing problems in pharmaceutical statistics (2009); Alex Dmitrienko & Ralph B. D’Agostino, Sr., “Multiplicity considerations in clinical trials,” 378 New Engl. J. Med. 2115 (2018).

[9]  Id.

[10]  Id. at 286.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.