TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Two Stanford Researchers Are Anti-Semantic

August 4th, 2018

Two Stanford University communications researchers have shown that fraudulent publications and authors’ linguistic obfuscation are correlated, p < 0.05. David M. Markowitz & Jeffrey T. Hancock, “Linguistic Obfuscation in Fraudulent Science,” 35 J. Language & Social Psych. 435 (2016); Bjorn Carey, “Stanford researchers uncover patterns in how scientists lie about their data,” Stanford Report (Nov. 16, 2015) [Carey, below].

Stanford Professor of Communication Jeff Hancock and graduate student David Markowitz observed that deceptive writing tends to show repeating patterns of expression, and they hypothesized that scientific fraudfeasors would signal their duplicity in their linguistic expressions as well. These authors created a linguistic obfuscation index based upon the prevalence of jargon, abstraction, positive emotion terms, and readability. They then compared the obfuscation index scores of 253 papers retracted for fraudulent data with 253 unretracted papers and 63 papers retracted for reasons other than fraud. Not surprisingly, Hancock and Markowitz found differences, with fraudulent papers having higher obfuscation scores and generally more jargon.
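For readers who want to see the mechanics, here is a minimal, purely hypothetical sketch of how such an obfuscation-index comparison might be run: standardize a few linguistic features across all papers (with readability reversed, since harder-to-read prose counts toward obfuscation), average them into a composite index, and compare the two groups. The feature values and simulated data below are invented for illustration; they are not Markowitz and Hancock’s measurements or scoring scheme.

```python
# Hypothetical sketch of an obfuscation-index comparison; the features and
# data are invented for illustration and are not the Markowitz-Hancock scheme.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 253  # papers per group, mirroring the study's matched design

def simulate_features(jargon_mu, abstraction_mu, emotion_mu, readability_mu, n):
    """Fake per-paper linguistic measurements (jargon, abstraction,
    positive-emotion terms, readability)."""
    return np.column_stack([
        rng.normal(jargon_mu, 10, n),
        rng.normal(abstraction_mu, 0.1, n),
        rng.normal(emotion_mu, 0.8, n),
        rng.normal(readability_mu, 8, n),
    ])

fraud_raw = simulate_features(55, 0.6, 3.0, 30, n)    # retracted for fraud
control_raw = simulate_features(50, 0.5, 2.5, 35, n)  # unretracted controls

# Standardize each feature over the pooled sample, flip readability
# (harder-to-read prose counts toward obfuscation), average into an index.
pooled = stats.zscore(np.vstack([fraud_raw, control_raw]), axis=0)
pooled[:, 3] *= -1
index = pooled.mean(axis=1)
fraud_idx, control_idx = index[:n], index[n:]

t, p = stats.ttest_ind(fraud_idx, control_idx, equal_var=False)
print(f"mean index: fraud {fraud_idx.mean():.2f}, "
      f"control {control_idx.mean():.2f}; p = {p:.3g}")
```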

As Markowitz explained:

We believe the underlying idea behind obfuscation is to muddle the truth. *** Scientists faking data know that they are committing a misconduct and do not want to get caught. Therefore, one strategy to evade this may be to obscure parts of the paper. We suggest that language can be one of many variables to differentiate between fraudulent and genuine science.”

Carey. Professor Hancock acknowledged that there remained a high error rate in their obfuscation analysis, which needed to be lowered before automatic linguistic analyses could be useful for detecting fraud. Hancock also acknowledged that the use of such a computerized linguistic tool might undermine the trust upon which science is based. Id.

Well, detecting fraud might undermine trust, but look where trust has gotten us in science.

Trust but verify.

I cannot wait until I proffer the first expert witness rebuttal report in litigation, to show that my adversary’s expert witness has crossed the obfuscation line.

Infante-lizing the IARC

May 13th, 2018

Peter Infante, a frequent, partisan, paid expert witness for the Lawsuit Industry, recently published a “commentary” in the red journal, the American Journal of Industrial Medicine, about the evils of scientists with economic interests commenting upon the cancer causation pronouncements of the International Agency for Research on Cancer (IARC). Peter F. Infante, Ronald Melnick, James Huff & Harri Vainio, “Commentary: IARC Monographs Program and public health under siege by corporate interests,” 61 Am. J. Indus. Med. 277 (2018). Infante’s rant goes beyond calling out scientists with economic interests on IARC working groups; Infante would silence all criticism of IARC pronouncements by anyone, including scientists, who has an economic interest in the outcome of a scientific debate. Infante points to manufacturing industry’s efforts to “discredit” recent IARC pronouncements on glyphosate and red meat, by which he means that there were scientists who had the temerity to question IARC’s processes and conclusions.

Apparently, Infante did not think his bias was showing or would be detected. He and his co-authors invoke militaristic metaphors to claim that the IARC’s monograph program, and indeed all of public health, is “under siege by corporate interests.” A farcical commentary at that, coming from such stalwarts of the Lawsuit Industry. Infante lists his contact information as “Peter F. Infante Consulting, LLC, Falls Church, Virginia,” and co-author Ronald Melnick can be found at “Ronald Melnick Consulting, LLC, North Logan, Utah.” A search on Peter Infante in Westlaw yields 141 hits, all on the plaintiffs’ side of health effects disputes; he is clearly no stranger to the world of litigation. Melnick is, to be sure, harder to find, but he does show up as a signatory on Raphael Metzger’s supposed amicus briefs, filed by Metzger’s litigation front organization, Council for Education and Research on Toxics.1

Of the commentary’s authors, only James Huff, of “James Huff Consulting, Durham, North Carolina,” disclosed a connection with litigation, as a consultant to plaintiffs on animal toxicology of glyphosate. Huff’s apparent transparency clouded up when it came to disclosing how much he has been compensated for his consulting activities for claimants in glyphosate litigation. In the very next breath, in unison, the authors announce unabashedly that “[a]ll authors report no conflicts of interest.” Infante at 280.

Of course, reporting “no conflicts of interest” does not mean that the authors have no conflicts of interest, whether financial, positional, or ideological. Their statement simply means that they have not reported any conflicts, through inadvertence, willfulness, or blindness. The authors, and the journal, are obviously content to mislead their readership by not-so-clever dodges.

The authors’ inability to appreciate their very own conflicts infects their substantive claims in this commentary. These “consultants” tell us solemnly that IARC “[m]eetings are openly transparent and members are vetted for conflicts of interest.” Infante at 277. Working group members, however, are vetted only for manufacturing industry conflicts, not for litigation industry conflicts or for motivational conflicts, such as advancing their own research agendas. Not many scientists have a research agenda to show that chemicals do not cause cancer.

At the end of this charade, the journal provides additional disclosures [sic]. As for “Ethics and Approval Consent,” we are met with a bold “Not Applicable.” Indeed, ethics need not apply. Perhaps the American Journal of Industrial Medicine is beyond good and evil. The journal’s “Editor of Record,” Steven B. Markowitz, “declares that he has no conflict of interest in the review and publication decision regarding this article.” This is, of course, the same Markowitz who testifies frequently for the Lawsuit Industry, typically without disclosing this conflict on his own publications.

This commentary is yet another brushback pitch, which tries to chill manufacturing industry and scientists from criticizing the work of agencies, such as IARC, captured by lawsuit industry consultants. No one should be fooled other than Mother Jones.


1See, e.g., Ramos v. Brenntag Specialties, Inc., 372 P.3d 200 (Calif. 2016) (where plaintiff was represented by Metzger, and where CERT filed an amicus brief by the usual suspects, plaintiffs’ expert witnesses, including Melnick).

Statistical Deontology

March 2nd, 2018

In courtrooms across America, there has been a lot of buzzing and palavering about the American Statistical Association’s Statement on Statistical Significance Testing,1 but very little discussion of the Association’s Ethical Guidelines, which were updated and promulgated in the same year, 2016. Statisticians and statistics, like lawyers and the law, receive their fair share of calumny over their professional activities, but the statisticians’ principal North American professional organization is trying to do something about members’ transgressions.

The American Statistical Association (ASA) has promulgated ethical guidelines for statisticians, as has the Royal Statistical Society,2 even if these organizations lack the means and procedures to enforce their codes. The ASA’s guidelines3 are rich with implications for statistical analyses put forward in all contexts, including in litigation and regulatory rulemaking. As such, the guidelines are well worth studying by lawyers.

The ASA Guidelines were prepared by the Committee on Professional Ethics, and approved by the ASA’s Board in April 2016. There are lots of “thou shalts” and “thou shalt nots,” but I will focus on the issues that are more likely to arise in litigation. What is remarkable about the Guidelines is that, if followed, they would probably do more to eliminate unsound statistical practices in the courtroom than the ASA Statement on p-values.

Defining Good Statistical Practice

“Good statistical practice is fundamentally based on transparent assumptions, reproducible results, and valid interpretations.” Guidelines at 1. The Guidelines thus incorporate something akin to the Kumho Tire standard that an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999).

A statistician engaged in expert witness testimony should provide “only expert testimony, written work, and oral presentations that he/she would be willing to have peer reviewed.” Guidelines at 2. “The ethical statistician uses methodology and data that are relevant and appropriate, without favoritism or prejudice, and in a manner intended to produce valid, interpretable, and reproducible results.” Id. Similarly, the statistician, if ethical, will identify and mitigate biases, and use analyses “appropriate and valid for the specific question to be addressed, so that results extend beyond the sample to a population relevant to the objectives with minimal error under reasonable assumptions.” Id. If the Guidelines were followed, a lot of spurious analyses would drop off the litigation landscape, regardless of whether they used p-values, confidence intervals, or a Bayesian approach.

Integrity of Data and Methods

The ASA’s Guidelines also have a good deal to say about data integrity and statistical methods. In particular, the Guidelines call for candor about limitations in the statistical methods or the integrity of the underlying data:

The ethical statistician is candid about any known or suspected limitations, defects, or biases in the data that may impact the integrity or reliability of the statistical analysis. Objective and valid interpretation of the results requires that the underlying analysis recognizes and acknowledges the degree of reliability and integrity of the data.”

Guidelines at 3.

The statistical analyst openly acknowledges the limits of statistical inference, the potential sources of error, as well as the statistical and substantive assumptions made in the execution and interpretation of any analysis,” including data editing and imputation. Id. The Guidelines urge analysts to address potential confounding not assessed by the study design. Id. at 3, 10. How often do we see these acknowledgments in litigation-driven analyses, or in peer-reviewed papers, for that matter?

Affirmative Actions Prescribed

In aid of promoting data and methodological integrity, the Guidelines also urge analysts to share data when appropriate, without revealing the identities of study participants. Statistical analysts should publicly correct any disseminated data and analyses in their own work, as well as work to “expose incompetent or corrupt statistical practice.” Of course, the Lawsuit Industry will call this ethical duty “attacking the messenger,” but maybe that’s a rhetorical strategy based upon an assessment of risks versus benefits to the Lawsuit Industry.

Multiplicity

The ASA Guidelines address the impropriety of substantive statistical errors, such as:

[r]unning multiple tests on the same data set at the same stage of an analysis increases the chance of obtaining at least one invalid result. Selecting the one “significant” result from a multiplicity of parallel tests poses a grave risk of an incorrect conclusion. Failure to disclose the full extent of tests and their results in such a case would be highly misleading.”

Guidelines at 9.
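The arithmetic behind that warning is worth spelling out. Assuming independent tests at significance level α = 0.05, the probability of at least one spurious “significant” result is 1 − (1 − α)^k, which climbs quickly with the number of tests k. The short sketch below (Python, for illustration only) does the computation.

```python
# Family-wise error rate under the usual independence assumption:
# P(at least one false positive in k tests) = 1 - (1 - alpha)**k.
alpha = 0.05
for k in (1, 5, 10, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>3} independent tests at alpha = {alpha}: "
          f"chance of at least one spurious 'finding' = {fwer:.0%}")
# With 20 tests the chance is roughly 64%; with 100 tests, over 99%.
```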

There are some Lawsuit Industrialists who have taken comfort in the pronouncements of Kenneth Rothman on corrections for multiple comparisons. Rothman’s views on multiple comparisons are, however, much broader and more nuanced than the Industry’s sound bites.4 Given that Rothman opposes anything like strict statistical significance testing, it follows that he is relatively unmoved by the need for adjustments to alpha or the coefficient of confidence. Rothman, however, has never deprecated the need to consider the multiplicity of testing, and the need for researchers to be forthright in disclosing the scope of comparisons originally planned and actually done.


1 Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129 (2016).

2 Royal Statistical Society – Code of Conduct (2014); Steven Piantadosi, Clinical Trials: A Methodologic Perspective 609 (2d ed. 2005).

3 Shelley Hurwitz & John S. Gardenier, “Ethical Guidelines for Statistical Practice: The First 60 Years and Beyond,” 66 Am. Statistician 99 (2012) (describing the history and evolution of the Guidelines).

4 Kenneth J. Rothman, “Six Persistent Research Misconceptions,” 29 J. Gen. Intern. Med. 1060, 1063 (2014).

Wrong Words Beget Causal Confusion

February 12th, 2018

In clinical medical and epidemiologic journals, most articles that report about associations will conclude with a discussion section in which the authors hold forth about

(1) how they have found that exposure to X “increases the risk” of Y, and

(2) how their finding makes sense because of some plausible (even if unproven) mechanism.

In an opinion piece in Significance,1 Dalmeet Singh Chawla cites to a study that suggests the “because” language frequently confuses readers into believing that a causal claim is being made. The study abstract explains:

Most researchers do not deliberately claim causal results in an observational study. But do we lead our readers to draw a causal conclusion unintentionally by explaining why significant correlations and relationships may exist? Here we perform a randomized study in a data analysis massive online open course to test the hypothesis that explaining an analysis will lead readers to interpret an inferential analysis as causal. We show that adding an explanation to the description of an inferential analysis leads to a 15.2% increase in readers interpreting the analysis as causal (95% CI 12.8% – 17.5%). We then replicate this finding in a second large scale massive online open course. Nearly every scientific study, regardless of the study design, includes explanation for observed effects. Our results suggest that these explanations may be misleading to the audience of these data analyses.”

Leslie Myint, Jeffrey T. Leek, and Leah R. Jager, “Explanation implies causation?” (Nov. 2017) (online manuscript).

Invoking the principle of charity, these authors suggest that most researchers are not deliberately claiming causal results. Indeed, the language of biomedical science itself is biased in favor of causal interpretation. The term “statistical significance” suggests causality to naive readers, as does stats talk about “effect size” and “fixed effect models,” for data sets that come nowhere near establishing causality.

Common epidemiologic publication practice tolerates, if not encourages, authors to state that their study shows (finds, demonstrates, etc.) that exposure to X “increases the risk” of Y in the studies’ samples. This language is deliberately causal, even if the study cannot support a causal conclusion alone, or even with other studies. After all, a risk is the antecedent of a cause, and in the stochastic model of causation involved in much of biomedical research, causation will manifest in a change of a base rate to a higher or lower post-exposure rate. Given that mechanism is often unknown and not required, showing an increased risk is the whole point. Attention to chance, bias, confounding, and study design often is lost in the irrational exuberance of declaring the “increased risk.”
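To make the arithmetic of an “increased risk” concrete with an invented example: if the base rate of Y is 1 per 1,000 and the rate among the exposed is 1.8 per 1,000, the study reports a risk ratio of 1.8, and nothing in that number, by itself, speaks to chance, bias, confounding, or study design. The sketch below simply does that arithmetic with made-up counts.

```python
# Made-up counts, purely to illustrate what an "increased risk" amounts to
# arithmetically: a change from the base rate to a higher post-exposure rate.
unexposed_cases, unexposed_n = 10, 10_000   # base rate: 1 per 1,000
exposed_cases, exposed_n = 18, 10_000       # post-exposure rate: 1.8 per 1,000

base_rate = unexposed_cases / unexposed_n
exposed_rate = exposed_cases / exposed_n
risk_ratio = exposed_rate / base_rate       # 1.8; silent on chance, bias, confounding

print(f"base rate = {base_rate:.4f}, exposed rate = {exposed_rate:.4f}, "
      f"risk ratio = {risk_ratio:.1f}")
```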

Tighter editorial control might have researchers qualify their findings by explaining that they found a higher rate in association with exposure, under the circumstances of the study, followed by an explanation that much more is needed to establish causation. But where is the fun and profit in that?

Journalists, lawyers, and advocacy scientists often use the word “link” to avoid having to endorse associations that they know, or should know, have not been shown to be causal.2 Using “link” as a noun or a verb clearly invokes a causal chain metaphor, which probably is often deliberate. Perhaps publishers would defend the use of “link” by noting that it is so much shorter than “association,” and thus saves typesetting costs.

More attention is needed to word choice, even and especially when statisticians and scientists are using their technical terms and jargon.3 If, for the sake of argument, we accept the sincerity of scientists who work as expert witnesses in litigation in which causal claims are overstated, we can see that poor word choices confuse scientists as well as lay people. Or you can just read the materials and methods and the results of published study papers; skip the introduction and discussion sections, as well as the newspaper headlines.


1 Dalmeet Singh Chawla, “Mind your language,” Significance 6 (Feb. 2018).

2 See, e.g., Perri Klass, M.D., “Does an A.D.H.D. Link Mean Tylenol Is Unsafe in Pregnancy?,” N.Y. Times (Dec. 4, 2017), available at https://www.nytimes.com/2017/12/04/well/family/does-an-adhd-link-mean-tylenol-is-unsafe-in-pregnancy.html; Nicholas Bakalar, “Body Chemistry: Lower Testosterone Linked to Higher Death Risk,” N.Y. Times (Aug. 15, 2006).

3 Fang Xuelan & Graeme Kennedy, “Expressing Causation in Written English,” 23 RELC J. 62 (1992); Bengt Altenberg, “Causal Linking in Spoken and Written English,” 38 Studia Linguistica 20 (1984).