TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Infante-lizing the IARC

May 13th, 2018

Peter Infante, a frequently partisan, paid expert witness for the Lawsuit Industry, recently published a “commentary” in the red journal, the American Journal of Industrial Medicine, about the evils of scientists with economic interests commenting upon the cancer causation pronouncements of the International Agency for Research on Cancer (IARC). Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am. J. Indus. Med. 277 (2018). Infante’s rant goes beyond calling out scientists with economic interests on IARC working groups; Infante would silence all criticism of IARC pronouncements by anyone, including scientists, who has an economic interest in the outcome of a scientific debate. Infante points to manufacturing industry’s efforts to “discredit” recent IARC pronouncements on glyphosate and red meat, by which he means that there were scientists who had the temerity to question IARC’s processes and conclusions.

Apparently, Infante did not think his bias was showing or would be detected. He and his co-authors invoke militaristic metaphors to claim that the IARC’s monograph program, and indeed all of public health, is “under siege by corporate interests.” A farcical commentary at that, coming from such stalwarts of the Lawsuit Industry. Infante lists his contact information as “Peter F. Infante Consulting, LLC, Falls Church, Virginia,” and co-author Ronald Melnick can be found at “Ronald Melnick Consulting, LLC, North Logan, Utah.” A search on Peter Infante in Westlaw yields 141 hits, all on the plaintiffs’ side of health effects disputes; he is clearly no stranger to the world of litigation. Melnick is, to be sure, harder to find, but he does show up as a signatory on Raphael Metzger’s supposed amicus briefs, filed by Metzger’s litigation front organization, Council for Education and Research on Toxics.1

Of the commentary’s authors, only James Huff, of “James Huff Consulting, Durham, North Carolina,” disclosed a connection with litigation, as a consultant to plaintiffs on animal toxicology of glyphosate. Huff’s apparent transparency clouded up when it came to disclosing how much he has been compensated for his consulting activities for claimants in glyphosate litigation. In the very next breath, in unison, the authors announce unabashedly that “[a]ll authors report no conflicts of interest.” Infante at 280.

Of course, reporting “no conflicts of interest” does not mean that the authors have no conflicts of interest, whether financial, positional, or ideological. Their statement means only that they have not reported any conflicts, whether through inadvertence, willfulness, or blindness. The authors, and the journal, are obviously content to mislead their readership with not-so-clever dodges.

The authors’ clumsy inability to appreciate their very own conflicts infects their substantive claims in this commentary. These “consultants” tell us solemnly that IARC “[m]eetings are openly transparent and members are vetted for conflicts of interest.” Infante at 277. Working group members, however, are vetted only for manufacturing industry conflicts, not for litigation industry conflicts or for motivational conflicts, such as advancing their own research agendas. Not many scientists have a research agenda to show that chemicals do not cause cancer.

At the end of this charade, the journal provides additional disclosures [sic]. As for “Ethics and Approval Consent,” we are met with a bold “Not Applicable.” Indeed; ethics need not apply. Perhaps, the American Journal of Industrial Medicine is beyond good and evil. The journal’s “Editor of Record,” Steven B. Markowitz “declares that he has no conflict of interest in the review and publication decision regarding this article.” This is, of course, the same Markowitz who testifies frequently for the Lawsuit Industry, without typically disclosing this conflict on his own publications.

This commentary is yet another brushback pitch, which tries to chill manufacturing industry and scientists from criticizing the work of agencies, such as IARC, captured by lawsuit industry consultants. No one should be fooled other than Mother Jones.


1See, e.g., Ramos v. Brenntag Specialties, Inc., 372 P.3d 200 (Cal. 2016) (where plaintiff was represented by Metzger, and where CERT filed an amicus brief by the usual suspects, plaintiffs’ expert witnesses, including Melnick).

P-Values: Pernicious or Perspicacious?

May 12th, 2018

Professor Kingsley R. Browne, of the Wayne State University Law School, recently published a paper that criticized the use of p-values and significance testing in discrimination litigation. Kingsley R. Browne, “Pernicious P-Values: Statistical Proof of Not Very Much,” 42 Univ. Dayton L. Rev. 113 (2017) (cited below as Browne). Browne amply documents the obvious and undeniable: judges, lawyers, and even some ill-trained expert witnesses are congenitally unable to describe and interpret p-values properly. Most of Browne’s examples are from the world of anti-discrimination law, but he cites a few from health effects litigation as well. Browne also canvasses many of the criticisms of p-values in the psychology and other social science literature.

Browne’s efforts to correct judicial innumeracy are welcomed, but they take a peculiar turn in this law review article. From the well-known state of affairs of widespread judicial refusal or inability to discuss statistical concepts accurately, Browne argues for what seem to be two incongruous, inconsistent responses. Rejecting the glib suggestion of former Judge Posner that evidence law is not “fussy” about evidence, Browne argues that federal evidence law requires courts to be “fussy” about evidence, and that Rule 702 requires courts to exclude expert witnesses whose opinions fail to “employ[] in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Browne at 143 (quoting Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999)). Browne tells us, with apparently appropriate intellectual rigor, that “[i]f a disparity that does not provide a p-value of less than 0.05 would not be accepted as meaningful in the expert’s discipline, it is not clear that the expert should be allowed to testify – on the basis of his expertise in that discipline – that the disparity is, in fact, meaningful.” Id.

In a volte-face, Browne then argues that p-values do “not tell us much,” basically because they are dependent upon sample size. Browne suggests that the quantitative disparity between expected value and observed proportion or average can be assessed without the use of p-values, and that measuring a p-value “adds virtually nothing and just muddies the water.” Id. at 152. The prevalent confusion among judges and lawyers seems sufficient in Browne’s view to justify his proposal, as well as his further suggestion that Rule 403 should be invoked to exclude p-values:

“The ease with which reported p-values cause a trier of fact to slip into the transposition fallacy and the difficulty of avoiding that lapse of logic, coupled with the relatively sparse information actually provided by the p-value, make p-values prime candidates for exclusion under Federal Rule of Evidence 403. *** If judges, not to mention the statistical experts they rely on, cannot use the information without falling into fallacious reasoning, the likelihood that the jury will misunderstand the evidence is very high. Since the p-value actually provides little useful relevant information, the high risk of misleading the jury greatly exceeds its scant probative value, so it simply should not be presented to the jury.”

Id. at 152-53.

And yet, elsewhere in the same article, Browne ridicules one court and several expert witnesses who have argued in favor of conclusions that were based upon p-values up to 50%.1 The concept of p-values cannot be so flexible as to straddle the extremes of having no probative value, and yet capable of rendering an expert witness’s opinions ludicrous. P-values quantify an estimate of random error, even if that error rate varies with sample size. To be sure, the measure of random error depends upon the specified model and assumption of a null hypothesis, but the crucial point is that the estimate (whether mean, proportion, risk ratio, risk difference, etc.) is rather meaningless without some further estimate of random variability of the estimate. Of course, random error is not the only type of error, but the existence of other potential systematic errors is hardly a reason to ignore random error.

In the science of health effects, many applications of p-values have given way to the use of confidence intervals, which arguably provide more direct assessments of sample estimates, along with ranges of potential outcomes that are reasonably compatible with those estimates. Remarkably, Browne never substantively discusses confidence intervals in his article.
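The point is easily illustrated. The following sketch (my own illustration, not Browne’s, using a textbook normal-approximation test for a single proportion) shows the identical observed disparity producing wildly different p-values at two sample sizes, while the confidence intervals convey the differing precision of the two estimates directly:

```python
import math

def prop_test(successes, n, p0=0.5):
    """Two-sided z-test and 95% CI for one proportion (normal approximation)."""
    phat = successes / n
    se_null = math.sqrt(p0 * (1 - p0) / n)       # standard error under the null
    z = (phat - p0) / se_null
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided tail area
    se_hat = math.sqrt(phat * (1 - phat) / n)    # standard error at the estimate
    ci = (phat - 1.96 * se_hat, phat + 1.96 * se_hat)
    return phat, p_value, ci

# The identical observed disparity (52% observed vs. 50% expected), twice:
small = prop_test(52, 100)      # p is large (about 0.69); the interval is wide
large = prop_test(5200, 10000)  # p is tiny; the interval is narrow
```

The two analyses report the very same point estimate; only the quantification of random error differs, and the interval widths make that difference visible in a way a bare p-value does not.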

Under the heading of other problems with p-values and significance testing, Browne advances four additional putative problems with p-values. First, Browne asserts with little to no support that “[t]he null hypothesis is unlikely a priori.” Id. at 155. He fails to tell us why the null hypothesis of no disparity is not a reasonable starting place in the absence of objective evidence of a prior estimate. Furthermore, a null hypothesis of no difference will have legal significance in claims of health effects, or of unlawful discrimination.

Second, Browne argues that significance testing will lead to “[c]onflation of statistical and practical (or legal) significance” in the minds of judges and jurors. Id. at 156-58. This charge is difficult to sustain. Of all the confusions in this area, the separation between practical and statistical significance is probably the one that judges and jurors can most readily appreciate. If a large class action showed that the expected value of a minority’s proportion was 15%, and the observed proportion was 14.8%, p < 0.05, even the most innumerate judges and jurors would sense that this disparity was unimportant, and that no employer would fine-tune its discriminatory activities so closely as to achieve such a meaningless difference.
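A back-of-the-envelope calculation (my own, assuming a simple one-sample z-test) shows just how large a class would have to be before so trivial a 0.2-percentage-point disparity could even attain statistical significance:

```python
import math

p0, phat = 0.150, 0.148             # expected vs. observed minority proportion
se_unit = math.sqrt(p0 * (1 - p0))  # standard error for a sample of one, under the null

# Smallest sample at which a one-sample z-test would flag the gap
# as "significant" at the conventional p < 0.05 (two-sided, z = 1.96):
n_needed = math.ceil((1.96 * se_unit / abs(phat - p0)) ** 2)
# n_needed comes out above 120,000 -- a workforce few employers have
```

Only in a sample of well over a hundred thousand would such a gap reach conventional significance, which is precisely why fact-finders have little trouble seeing that statistical significance and practical importance are different questions.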

Third, Browne reminds us that the validity and the interpretation of a p-value turns on the assumption that the statistical model is perfectly specified. Id. at 158-59. His reminder is correct, but again, this aspect of p-values (or confidence intervals) is relatively easy to explain, as well as to defend or challenge. To be sure, there may be legitimate disputes about whether an appropriate model was used (say binomial versus hypergeometric), but such disputes are hardly the most arcane issues that judges and jurors will face.

Fourth, Browne claims that “the alternative hypothesis is seldom properly specified.” Id. at 159-62. Unless analysts are focused on measuring pre-test power or type II error, however, they need not advance an alternative hypothesis. Furthermore, it is hardly a flaw with significance testing that it does not account for systematic bias or confounding.

Browne does not offer an affirmative response, such as urging courts to adopt a Bayesian program. A Bayesian response to prevalent blunders in interpreting statistical significance would introduce perhaps even more arcane and hard-to-discern blunders into court proceedings. Browne also leaves courts without a meaningful approach to evaluate random error, other than to engage in crude comparisons between two means or proportions. The recommendations in this law review article appear to be a giant step backwards, into an epistemic void.


1See Browne at 146, citing In re Photochromic Lens Antitrust Litig., 2014 WL 1338605 (M.D. Fla. April 3, 2014) (reversing magistrate judge’s exclusion of an expert witness who had advanced claims based upon p-value of 0.50); id. at 147 n. 116, citing In re High-Tech Employee Antitrust Litig., 2014 WL 1351040 (N.D. Cal. 2014).

Stranger to the Contract and to the World

March 10th, 2018

It Was a Dark and Stormy Night

All around the country, first year law students are staring at the prospect of their final examination in contracts, one of the required courses in the law school curriculum in the United States. So here is a practice question.

A lawyer for David Dennison drafts a memorandum of agreement between Mr Dennison and Stephanie Clifford. The memorandum calls for Ms Clifford to remain silent about a sexual liaison between Mr Dennison and her, in return for payment of $130,000, in “hush money.”

Mr Dennison never signed the putative contract, and he never provided the consideration for Ms Clifford’s silence. The lawyer for Mr Dennison, however, wired Ms Clifford the money, although he apparently was never given the money by his client, or reimbursed for the payment later.1

Mr Dennison’s lawyer also represents the President of the United States (POTUS). POTUS may well be Mr Dennison, but he has never acknowledged that Dennison was a name he used. Mr Dennison’s lawyer has publicly acknowledged that he provided the money to Ms Clifford, and that his client Mr Dennison, or whoever Mr Dennison is, did not reimburse him2.

The putative contract calls for arbitration and penalties. A company, EC, LLC, obtained a temporary restraining order (TRO) from the designated alternative dispute resolution company. EC, LLC v Peterson, ADR Services TRO (Feb. 27, 2018)3. A week later, Stephanie Clifford sued POTUS (a.k.a. David Dennison) for declaratory relief, in California Superior Court, Los Angeles County, after the TRO was entered. Clifford v. Trump, Calif. Super. Ct., Los Angeles Cty. Complaint (Mar. 06, 2018)4.

Prepare a bench memorandum for the trial court judge who has been assigned the declaratory judgment action. Make sure you address all issues of contract formation and enforcement, affirmative defenses such as the statute of frauds5, as well as professional ethics of the lawyers involved. Address the ethical propriety of POTUS’s lawyer’s paying consideration for a hush contract out of his own pocket and then claiming the benefit of the bargain for his client, as well as the legal consequences of his public disclosure on the enforceability of the putative contract. If you come up with a negotiation strategy for the wife of POTUS to vitiate her pre-nuptial agreements with POTUS, you will receive extra credit.

Watch the upcoming issues of the New York Times for the answer to this practice question.


1 Amy Davidson Sorkin, “Does Stormy Daniels Have a Case Against Donald Trump?” New Yorker (Mar. 7, 2018).

2 Debra Cassens Weiss, “Stormy Daniels sues Trump, says confidentiality deal is void because he didn’t sign it,” Am. Bar. Ass’n J. (Mar. 7, 2018).

3 Jim Rutenberg & Peter Baker, “Trump Lawyer Obtained Restraining Order to Silence Stormy Daniels,” N.Y. Times (Mar. 7, 2018).

4 Rebecca R. Ruiz & Matt Stevens, “Stormy Daniels Sues, Saying Trump Never Signed ‘Hush Agreement’,” N.Y. Times (Mar. 6, 2018).

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statistician’s principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including in litigation and regulatory rule making. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalts” and “thou shalt nots,” but I will focus on the issues more likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they probably would do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on P-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness ‘‘employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.’’ Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless whether they used p-values or confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

“The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

“The statistical analyst openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In the aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data when appropriate without revealing the identities of study participants. Statistical analysts should publicly correct any disseminated data and analyses in their own work, as well as working to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that’s a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.

Multiplicity

The ASA Guidelines address the impropriety of substantive statistical errors, such as:

“[R]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one ‘significant’ result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or to the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, and the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.
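The arithmetic behind the Guidelines’ warning is easily sketched (my own illustration, not the ASA’s): with twenty independent tests of true null hypotheses, each at the conventional 0.05 level, the chance of at least one spuriously “significant” result approaches two in three:

```python
import random

# Analytic: the chance that at least one of k independent tests of true null
# hypotheses comes out "significant" at alpha = 0.05.
k, alpha = 20, 0.05
p_any = 1 - (1 - alpha) ** k   # roughly 0.64 for twenty tests

# Simulation check: under the null, a p-value is uniform on [0, 1].
random.seed(1)
trials = 10_000
hits = sum(
    any(random.random() < alpha for _ in range(k))
    for _ in range(trials)
)
# hits / trials lands near p_any
```

Which is why selecting and reporting only the one “significant” result, without disclosing the full extent of the testing, is so misleading.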


2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.