TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

A New Egilman Bully Pulpit

February 19th, 2020

Larding Up the Literature

Another bio-medical journal? In October 2019, The Journal of Scientific Practice and Integrity published its inaugural volume one, number one issue, online. This journal purports to cover scientific integrity issues, which may well not be adequately covered in the major biomedical journals. There are reasons to believe, however, that this journal may be more of a threat to scientific integrity than a defender.

The journal describes itself:

“an interdisciplinary, peer-reviewed journal that publishes scholarly debate and original research on scientific practices that impact human and environmental health.”

The editorial board reads like a Who’s Who of “political scientists” who testify a LOT for claimants, and who, when not working for the lawsuit industry, practice occupational and environmental medicine for the redistribution of wealth in the Western world.

David Egilman, contemnor and frequent plaintiffs’ expert witness in personal injury litigation, is editor in chief. Tess Bird, an Egilman protégé, is managing editor. Another Egilman protégé, Susanna Rankin Bohme, an Associate Director of Research at Corporate Accountability International, also sits on the editorial board. You may be forgiven for believing that this journal will be an Egilman vanity press.

The editorial board also includes some high-volume testifying plaintiffs’ expert witnesses:

Peter Infante, of Peter F. Infante Consulting, LLC, Virginia

Adriane Fugh-Berman, of PharmedOut

Barry Castleman,

William E. Longo, President, MAS, LLC

David Madigan,

Michael R. Harbut, and

David Rosner and Gerald Markowitz, my favorite left-wing radical historians.

The journal identifies the Collegium Ramazzini as one of its partners. Cue the “Internationale”!

The first issue of this new journal features a letter[1] from the chief and managing editors, Egilman and Bird, which states wonderfully aspirational goals. The test will be whether the journal can apply its ethical microscope to all actors in the world of scientific publishing, or whether this new journal is just another propaganda outlet for special pleading by the lawsuit industry.


[1]  Tess Bird & David Egilman, “Letter from the Editors: An Introduction to the Journal of Scientific Practice and Integrity,” 1 J. Sci. Practice & Integrity 1 (2019).

Counter Cancel Culture Part III – Fixing Science

February 14th, 2020

This is the last of three posts about Cancel Culture, and the National Association of Scholars (NAS) conference on Fixing Science, held February 7th and 8th, in Oakland, California.

In finding my participation in the National Association of Scholars’ conference on Fixing Science, “worrying” and “concerning,” John Mashey takes his cues from the former OSHA Administrator, David Michaels. David Michaels has written much about industry conflicts of interests and efforts to influence scientific debates and discussions. He popularized the notion of “manufacturing doubt,”[1] with his book of that title. I leave it to others to decide whether Mashey’s adverting to Michaels’ work, in finding my writings on silica litigation “concerning” and “worrying,” is itself worrisome. In order to evaluate Mashey’s argument, such as it is, the reader should know something more about David Michaels, and his publications.[2]

As one might guess from its title, The Triumph of Doubt: Dark Money and the Science of Deception, Michaels’ new book appears to be a continuation of his attack on industry’s efforts to influence regulation. I confess not to have read this new book yet, but I am willing to venture a further guess that the industry Michaels is targeting is manufacturing industry, not the lawsuit industry, for which he has worked on many occasions. There is much irony (and no little hypocrisy) in Michaels’ complaints about dark money and the science of deception. For many years, Michaels ran the now defunct Project on Scientific Knowledge and Public Policy (SKAPP), which was bankrolled by the plaintiffs’ counsel in the silicone gel breast implant litigation. Whenever SKAPP sponsored a conference or a publication, the sponsors or authors dutifully gave a disclosure that the meeting or publication was underwritten by “a grant from the Common Benefit Trust, a fund established pursuant to a federal court order in the Silicone Gel Breast Implant Products Liability litigation.”

Non-lawyers might be forgiven for thinking that SKAPP and its propaganda had the imprimatur of the federal court system, but nothing could be further from the truth. A common benefits fund is the pool of money that is available to plaintiffs’ lawyers who serve on the steering committee of a large, multi-district litigation, to develop expert witnesses, analyze available scientific studies, and even commission studies of their own.[3] The source of the money was a “tax” imposed upon all settlements with defendants, which funneled the money into the so-called common benefits fund, controlled by the leadership of the plaintiffs’ counsel. When litigating the silicone gel breast implant cases involving claims of autoimmune disease became untenable due to an overwhelming scientific consensus against their causal claims,[4] the leadership of the plaintiffs’ steering committee gave the remaining money to SKAPP, rather than returning the money to the plaintiffs themselves.  David Michaels and his colleagues at SKAPP then misrepresented the source of the money as coming from a “trust fund” established by the federal court, which sounded rather like a neutral, disinterested source. This fund, however, was “walking around” money for the plaintiffs’ lawyers, which belonged to the settling plaintiffs, and which was diverted into a major propaganda effort against the judicial gatekeeping of expert witness opinion testimony.[5] A disinterested reader might well believe that David Michaels thus has some deep personal experience with “dark money,” and “the science of deception.” Mashey might be well advised to consider the adjacency issues raised by his placing such uncritical trust in what Michaels has published.

Regardless of David Michaels’ rhetoric, doubt is not such a bad thing in the face of uncertain and inconclusive evidence. In my view, we could use more doubt, and open-minded thought. Bertrand Russell is generally credited with having written some years ago:

“The biggest cause of trouble in the world today is that the stupid people are so sure about things and the intelligent folks are so full of doubts.”

What are we to make then of the charge by Dorothy Bishop that the conference would not be about regular scientific debate, but

“about weaponising the reproducibility debate to bolster the message that everything in science is uncertain — which is very convenient for those who wish to promote fringe ideas.”

I attended and presented at the conference because I have a long-standing interest in how scientific validity is assessed in the scientific and in the legal world. I have been litigating such issues in many different contexts for over 35 years, with notable scientific experts occasionally on either side. One phenomenon I have observed repeatedly is that expert witnesses of the greatest skill, experience, and knowledge are prone to cognitive biases, fallacies, and other errors; in other words, even expert witnesses of the highest scientific caliber succumb to biases in interpreting studies and evidence. One of my jobs as a legal advocate is to make sure that my own expert witnesses engage fully with the evidence, as well as with how my adversaries are interpreting it.

A quick anecdote, a war story, will, I hope, make the point. A few years ago, I was helping a scientist get ready to testify in a case involving welding fume exposure and Parkinson’s disease. The scientist arrived with some PowerPoint slides, one of which commented that a study relied upon by plaintiffs’ expert witnesses had a fatal design flaw that rendered its conclusions invalid. Another slide embraced a study, sponsored by a co-defendant company, which had a null result but the same design flaw called out in the study used by plaintiffs’ witnesses. It was one in the morning, but I gently pointed out the inconsistency, and the scientist immediately saw the problem and modified his slides.

The next day, my adversary noticed the absence of the codefendant’s study from the group of studies this scientist had relied upon. He cross-examined the scientist about why he had left out a study that the codefendant had actually sponsored. The defense expert witness testified that the omitted study had the same design flaw as the study embraced by plaintiffs’ expert witnesses, and that it had to be consigned to the same fate. The defense won this case, and long after the celebration died down, I received a very angry call from a lawyer for the codefendant. The embrace of bad studies and invalid inferences is not the exclusive province of the plaintiffs’ bar.

My response to Dorothy Bishop is that science ultimately has no political friends, although political actors will try to use criteria of validity selectively to arrive at convenient, and agreeable results. Do liberals ever advance junk science claims? Just say the words: Robert F. Kennedy, Jr. How bizarre and absurd for Kennedy to come out of a meeting with Trump’s organization, to proclaim a new vaccine committee to investigate autism outcomes! Although the issue has been explored in detail in medical journals for the last two decades, apparently there can even be bipartisan junk science. Another “litmus test” for conservatives would be whether they speak out against what are, in my view, unsubstantiated laws in several “Red States,” which mandate that physicians tell women who are seeking abortions that abortions cause breast cancer. There have been, to be sure, some studies that reported increased risks, but they were mostly case-control studies in which recall and reporting biases were uncontrolled. Much better, larger cohort studies done with unbiased information about history of abortions failed to support the association, which no medical organization has taken to be causal. This is actually a good example of irreproducibility that is corrected by the normal evolutionary process of scientific research, with political exploitation of the earlier, less valid studies.

Did presenters at the Fixing Science conference selectively present and challenge studies? It is difficult for me to say, not having a background in climate science. I participated in the conference to talk about how courts deal with problems of unreliable expert witness testimony and reliance upon unreliable studies. But what I heard at the conference was two main speakers arguing that climate change and its human cause were real. The thrust of the most data-rich presentation was that many of the climate models advanced are overstated and not properly calibrated. Is Bishop really saying that we cannot have a civil conversation about whether some climate change models are poorly constructed and validated? Assuming that the position I heard is a reasonable interpretation of the data and the models, it establishes a “floor” in opposition to the ceilings asserted by other climate scientists. There are some implications; perhaps the National Association of Scholars should condemn Donald Trump and others who claim that climate change is a hoax. Of course, condemning Trump every time he says something false, stupid, and unsupported would be a full-time job. Having staked out an interest in climate change, the Association might well consider balancing the negative impression others have of it as “deniers.”

The Science Brief

Back in June 2018, the National Association of Scholars issued a Science Brief, which it described as its official position statement in the area. A link to the brief online was broken, but a copy of the brief was distributed to those who attended the Fixing Science conference in Oakland. The NAS website does contain an open letter from Dr. Peter Wood, the president of the NAS, who described the brief thus:

“the positions we have put forward in these briefs are not settled once and for all. We expect NAS members will critique them. Please read and consider them. Are there essential points we got wrong? Others that we left out? Are there good points that could be made better?

We are not aiming to compile an NAS catechism. Rather, we are asked frequently by members, academics who are weighing whether to join, reporters, and others what NAS ‘thinks’ about various matters. Our 2,600 members (and growing) no doubt think a lot of different things. We prize that intellectual diversity and always welcome voices of dissent on our website, in our conferences, and in our print publications. But it helps if we can present a statement that offers a first-order approximation of how NAS’s general principles apply to particular disciplines or areas of inquiry.

We also hope that these issue briefs will make NAS more visible and that they will assist scholars who are finding their way in the maze of contemporary academic life.”

As a preface to an attempt to address general principles, Peter Wood’s language struck me as liberal, in the best sense of open-minded and generous in spirit to the possibility of reasoned disagreement.

So what are the NAS principles when it comes to science? Because the Science Brief seems not to be online at the moment, I will quote it here at length:

OVERVIEW

The National Association of Scholars (NAS) supports the proper teaching and practice of science: the systematic exercise of reason, observation, hypothesis, and experiment aimed at understanding and making reliable predictions about the material world. We work to keep science as a mode of inquiry engaged in the disinterested pursuit of truth rather than a collection of ‘settled’ conclusions. We also work to integrate course requirements in the unique history of Western science into undergraduate core curricula and distribution requirements. The NAS promotes scientific freedom and transparency.

We support researchers’ freedom to formulate and test any scientific hypothesis, unconstrained by political inhibitions. We support researchers’ freedom to pursue any scientific experiment, within ethical research guidelines. We support transparent scientific research, to foster the scientific community’s collective search for truth.

The NAS supports course requirements on the history and the nature of the Western scientific tradition.

All students should learn a coherent general narrative of the history of science that tells how the scientific disciplines interrelate. We work to restore core curricula that include both the unique history of Western science and an introduction to the distinctive mode of Western scientific reasoning. We also work to add new requirements in statistics and experimental design for majors and graduate students in the sciences and social sciences.

The NAS works to reform the practice of modern science so that it generates reproducible results. Modern science and social science are crippled by a crisis of reproducibility. This crisis springs from a combination of misused statistics, slipshod research techniques, and political groupthink. We aim to eliminate the crisis of reproducibility by grounding scientific practice in the meticulous traditions of Western scientific thought and rigorous reproducibility standards.

The NAS works to eliminate the politicization of undergraduate science education.

Our priority is to dismantle advocacy-based science, which discards the exercise of rational skepticism in pursuit of truth when it explicitly declares that scientific inquiry should serve policy advocacy. We therefore work to remove advocacy-based science from the classroom and from university bureaucracies. We also criticize student movements that demand the replacement of disinterested scientific inquiry with advocacy-based science. We focus our critiques on disciplines such as climate science that are mostly engaged in policy advocacy.

The NAS tracks scientific controversies that affect public policy, studies the remedies that scientists propose, and criticizes laws, regulations, and proposed policies based upon advocacy-based science.

We do this to prevent a vicious cycle in which advocacy-based science justifies the misuse of government and private funding to support yet more advocacy-based science. We also work to reform the administration of government science funding so as to prevent its capture by advocacy-scientists. The NAS’s scientific reports draw on the expertise of its member scholars and staff, as well as independent scholars. Our aim is to provide professionally credible critiques of America’s science education and science-based public policy.

In his critique of the NAS, John Mashey snarkily comments that the folks at the NAS lack the expertise to make the assessments they call for. Considering that Mashey is a computer scientist, without training in the climate or life sciences, his comments fall short of their mark. Still, if he were to have something worthwhile to say, and he supported his statements with sufficient evidence and reasoning, I believe we should take it seriously.

Nonetheless, the NAS statement of principles, and its concerns about how science and statistics are taught, are unexceptionable. I suspect that neither Mashey nor anyone else is against scientific freedom, methodological rigor, and ethical, transparent research.

The scientific, mathematical, and statistical literacy of most judges and lawyers is poor indeed. The Law School Admission Test (LSAT) does not ask any questions about statistical reasoning. A jury trial is not a fair, adequate opportunity to teach jurors the intricacies of statistical and scientific methods. Most medical schools still do not teach a course in experimental design and statistical analysis. Until recently, the Medical College Admission Test (MCAT) did not ask any questions of a statistical nature, and the test still does not require applicants to have taken a full course in statistics. I do not believe any reasonable person could be against the NAS’s call for better statistical education for scientists, and I would add for policy makers. Certainly, Mashey offers no arguments or insights on this topic.

Perhaps Mashey is wary of the position that we should be skeptical of advocacy-based science, for fear that climate-change science will come in for unwelcomed attention. If the science is sound, the data accurate, and the models valid, then this science does not need to be privileged and protected from criticism. Whether Mashey cares to acknowledge the phenomenon or not, scientists do become personally invested in their hypotheses.

The NAS statement of principles in its Science Brief thus seems worthy of everyone’s support. Whether the NAS is scrupulous in applying its own principles to positions it takes will require investigation and cautious vigilance. Still, I think Mashey should not judge anyone harshly lest he be so judged. We are a country of great principles, but a long history of indifferent and sometimes poor implementation. To take just a few obvious examples, despite the stirring words in the Declaration of Independence about the equality of all men, native people, women, and African slaves were treated in distinctly unequal and deplorable ways. Although our Constitution was amended after the Civil War to enfranchise former slaves, our federal government, after an all-too-short period of Reconstruction, failed to enforce the letter or the spirit of the Civil War amendments for 100 years, and then some. Less than seven years after our Constitution was amended to include freedom from governmental interference with speech or publication, a Federalist Congress passed the Alien and Sedition Acts, which President Adams signed into law in 1798. It would take over 100 years before the United States Supreme Court would make a political reality of the full promise of the First Amendment.

In these sad historical events, one thing is clear: the promise and hope of clearly stated principles eventually prevailed. To me, the lesson is not to belittle the principles or the people, but to hold the latter to the former. If Mashey believes that the NAS is inconsistent or hypocritical about its embrace of what otherwise seem like worthwhile first principles, he should say so. For my part, I think the NAS will find it difficult to avoid a charge of selectivity if it were to criticize climate change science, and not cast a wider net.

Finally, I can say that the event sponsored by the Independent Institute and the NAS featured speakers with diverse, disparate opinions. Some speakers denied that there was a “crisis,” and some saw the crisis as overwhelming and destructive of sound science. I heard some casual opinions of climate change skepticism, but from the most serious, sustained look at the actual data and models, an affirmation of anthropogenic climate change. In the area of health effects, the scientific study more relevant to what I do, I heard a fairly wide consensus about the need to infuse greater rigor into methodology and to reduce investigators’ freedom to cherry pick data and hypotheses after data collection is finished. Even so, there were speakers with stark disagreement over methods. The conference was an important airing and exchanging of many ideas. I believe that those who attended and who participated went away with less orthodoxy and much to contemplate. The Independent Institute and the NAS deserve praise for having organized and sponsored the event. The intellectual courage of the sponsors in inviting such an intellectually diverse group of speakers undermines the charge by Mashey, Teytelman, and Bishop that the groups are simply shilling for Big Oil.


[1]        David Michaels, Doubt is Their Product: How Industry’s Assault on Science Threatens Your Health (2008).

[2]        David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]        See, e.g., William Rubenstein, “On What a ‘Common Benefit Fee’ Is, Is Not, and Should Be,” Class Action Attorney Fee Digest 87, 89 (March 2009).

[4]        In 1999, after much deliberation, the Institute of Medicine issued a report that found the scientific claims in the silicone litigation to be without scientific support. Stuart Bondurant, et al., Safety of Silicone Breast Implants (I.O.M. 1999).

[5]        I have written about the lack of transparency and outright deception in SKAPP’s disclosures before; see “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “The Capture of the Public Health Community by the Litigation Industry” (Feb. 10, 2014); “Daubert’s Silver Anniversary – Retrospective View of Its Friends and Enemies” (Oct. 21, 2018).

Counter Cancel Culture – Part II: The Fixing Science Conference

February 12th, 2020

So this is what it is like to be denounced? My ancestors fled the Czar’s lands before they could be tyrannized by the denunciations of Stalin’s Soviets. The work of contemporary denunciators is surely much milder than the Soviet versions of yesteryear, but no more principled.

Now that I am back from the Fixing Science conference, sponsored by the Independent Institute and the National Association of Scholars (NAS), I can catch up with the media coverage of the event. I have already addressed Dr. Lenny Teytelman’s issues in an open letter to him. John Mashey is a computer scientist who has written critical essays on climate science denial. On the opening day of the conference, he published online his take on the NAS’s recent conference on scientific irreproducibility.[1] Mashey acknowledges that the Fixing Science conference included “credible speakers who want to improve some areas of science hurt by the use of poor statistical methods or making irreproducible claims,” but his post devolves into scurrilous characterizations of several presenters. Alas, some of the ad hominems are tossed at me, and here is what I have to say about them.

Mashey misspells my name, “Schactman,” but that is a minor flaw of scholarship. He writes that I have “published much on evidence,” which is probably too laudatory. I am hardly a recognized scholar on the law of evidence, although I know something about this area, and have published in it.

Mashey tautologically declares that I “may or may not be a ‘product defense lawyer’ (akin to Louis Anthony Cox) defending companies against legitimate complaints.” Mashey seems unaware of how the rule of law works in our country. Plaintiffs file complaints, but the standard for the legitimacy of these complaints is VERY low. Courts require the parties to engage in discovery of their claims and defenses, and then courts address dispositive motions to dismiss either the claims or the defenses. So, sometimes after years of work, apparently legitimate complaints are revealed to be bogus, and the courts dismiss them; “legitimate” complaints thus become illegitimate ones. In my 36 years at the bar, I am proud to have been able to show that a great many apparently legitimate complaints were anything but what they seemed.

Mashey finds me “worrying” and “concerning.” My children are sometimes concerned about me, and even worry about me, but I do not think that Mashey was trying to express solicitude for me.

Why worry? Well, David Michaels, in his most recent book, Triumph of Doubt (2020), has an entire chapter on silica dust. And I, worrisomely, have written and spoken about silica and silicosis litigation, sometimes in ways critical of the plaintiffs’ litigation claims. Apparently, Mashey does not worry that David Michaels may be an unreliable protagonist, who worked as a paid witness for the lawsuit industry on many occasions before becoming the OSHA Administrator, in which position he ignored enforcement of existing silica regulations in order to devote a great deal of time, energy, and money to revising them. The evidentiary warrant for Michaels’ new silica rule struck me then, and strikes me now, as slim; the real victims were workers, who suffered because Michaels was so intent on changing a rule, in the face of decades of declining silicosis mortality, that he failed, in my view, to attend to specific instances of over-exposure.

Mashey finds me concerning because two radical labor historians do not like me. (I think I am going to eat a worm ….) Mashey quotes at length from an article by these historians, criticizing me for having had the audacity to criticize them.[2] Oh my.

What Mashey does not tell his readers was that, as co-chair of a conference on silicosis litigation (along with a co-chair who was a plaintiffs’ lawyer), I invited historian Gerald Markowitz to speak and air his views on the history of silica regulation and litigation. In response, I delivered a paper that criticized, and I would dare say, rebutted many of Markowitz’s historical conclusions and his inferences from an incomplete, selectively assembled, and sometimes incorrect, set of historical facts. I later published my paper.

Mashey tells his readers that my criticism, based not upon what I wrote, but upon the partisan cries of Rosner and Markowitz, “seems akin to Wood’s style of attack.” Well, if so, nicely done, Wood.

But does Mashey believe that his readers deserve to know that Rosner and Markowitz have testified repeatedly on behalf of the lawsuit industry, that is, those entrepreneurs who make lawsuits?[3] And that Rosner and Markowitz have been amply remunerated for their labors as partisan witnesses in these lawsuits?

And is Mashey worried or concerned that, in the United States, silicosis litigation has been infused with fraud and deception, not by the defendants, but by the litigation industry that creates the lawsuits? Absent from Rosner and Markowitz’s historical narratives is any mention of the frauds that have led to the dismissals of thousands of cases, and to the professional defrocking of any number of physician witnesses. In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D. Tex. 2005). Even the redoubtable expert witness for the plaintiffs’ bar, David S. Egilman, has published articles that point out the unethical and unlawful nature of the medico-legal screenings that gave rise to the silicosis litigation, which Michaels, Rosner, and Markowitz seem to support, or at the very least to shield from criticism.[4]

So this is what it means to be denounced! Mashey’s piece is hardly an advertisement for the intellectual honesty of those who would de-platform the NAS conference. He has selectively and inaccurately addressed my credentials. As just one example, and in an effort to diminish the NAS, he has omitted that I received a grant from the NASEM to develop a teaching module on scientific causation. My finished paper is published online at the NASEM website.[5]

I do not know Mashey, but I leave it to you to judge him by his sour fruits.


[1]  John Mashey, “Dark-Moneyed Denialists Are Running ‘Fixing Science’ Symposium of Doubt,” Desmog Blog (Feb. 7, 2020).

[2]  David Rosner & Gerald Markowitz, “The Trials and Tribulations of Two Historians: Adjudicating Responsibility for Pollution and Personal Harm,” 53 Medical History 271, 280-81 (2009) (criticizing me for expressing the view that historians should not be permitted to testify and thereby circumvent the rules of evidence). See also David Rosner & Gerald Markowitz, “L’histoire au prétoire. Deux historiens dans les procès des maladies professionnelles et environnementales” [“History in the Courtroom: Two Historians in Occupational and Environmental Disease Trials”], 56 Revue d’Histoire Moderne & Contemporaine 227, 238-39 (2009) (same); David Rosner, “Trials and Tribulations: What Happens When Historians Enter the Courtroom,” 72 Law & Contemporary Problems 137, 152 (2009) (same). I once thought there was an academic standard that prohibited duplicative publication!

[3] I have been critical of Rosner and Markowitz on many occasions; they have never really responded to the substance of my criticisms. See, e.g., “How Testifying Historians Are Like Lawn-Mowing Dogs,” (May 15, 2010).

[4]  See David Egilman and Susanna Rankin Bohme, “Attorney-directed screenings can be hazardous,” 45 Am. J. Indus. Med. 305 (2004); David Egilman, “Asbestos screenings,” 42 Am. J. Indus. Med. 163 (2002).

[5]  “Drug-Induced Birth Defects: Exploring the Intersection of Regulation, Medicine, Science, and Law – An Educational Module” (2016) (A teaching module designed to help professional school students and others evaluate the role of science in decision-making, developed for the National Academies of Science, Engineering, and Medicine, and its Committee on Preparing the Next Generation of Policy Makers for Science-Based Decisions).

Counter Cancel Culture – The NAS Conference on Irreproducibility

February 9th, 2020

“The meaning of the world is the separation of wish and fact.”  Kurt Gödel

Back in October 2019, David Randall, the Director of Research of the National Association of Scholars, contacted me to ask whether I would be interested in presenting at a conference, to be titled “Fixing Science: Practical Solutions for the Irreproducibility Crisis.” David explained that the conference would be aimed at a high-level consideration of whether such a crisis existed, and if so, what salutary reforms might be implemented.

As for the character and commitments of the sponsoring organizations, David was candid and forthcoming. I will quote him, without his permission, and ask his forgiveness later:

“The National Association of Scholars is taken to be conservative by many scholars; the Independent Institute is (broadly speaking) in the libertarian camp. The NAS is open to but currently agnostic about the degree of human involvement in climate change. The Independent Institute I take to be institutionally skeptical of consensus climate change theory – e.g., they recently hosted Willie Soon for lecture. A certain number of speakers prefer not to participate in events hosted by institutions with these commitments.”

To me, the ask was for a presentation on how the so-called replication crisis, or the irreproducibility crisis, affected the law. This issue was certainly one I have had much occasion to consider. Although I am aware of the “adjacency” arguments made by some that people should be mindful of whom they align with, I felt that nothing in my participation would compromise my own views or unduly accredit institutional positions of the sponsors.

I was flattered by the invitation, but I did some due diligence on the sponsoring organizations. I vaguely recalled the Independent Institute from my more libertarian days, but the National Association of Scholars (NAS, not to be confused with Nathan A. Schachtman) was relatively unknown to me. A little bit of research showed that the NAS had a legitimate interest in the irreproducibility crisis. David Randall had written a monograph for the organization, which was a nice summary of some of the key problems. The Irreproducibility Crisis of Modern Science: Causes, Consequences, and the Road to Reform (2018).

On other issues, the NAS seemed to live up to its description as “an organization of scholars committed to higher education as the catalyst of American freedom.” I listened to some of the group’s podcasts, Curriculum Vitae, and browsed through its publications. I found myself agreeing with many positions articulated by or through the NAS, and disagreeing with a few positions very strongly.

In looking over the list of other invited speakers, I saw a great diversity of viewpoints and approaches. One distinguished speaker, Daniele Fanelli, had criticized the very notion that there was a reproducibility crisis. In the world of statistics, there were strong defenders of statistical tests, and vociferous critics. I decided to accept the invitation, not because I was flattered, but because the replication issue was important, and I believed that I could add something to the discussion before an audience of professional scientists, statisticians, and educated lay persons. In writing to David Randall to accept the invitation, I told him that with respect to the climate change issues, I was not at all put off by healthy skepticism in the face of all dogmas. Every dogma will have its day.

I did not give any further consideration to the political aspect of the conference until early January, when I received an email from a scientist, Lenny Teytelman, Ph.D., the C.E.O. of protocols.io, a company that addresses reproducibility issues. Dr. Teytelman’s interest in improving reproducibility seemed quite genuine, but he wrote to express his deep concern about the conference and the organizations that were sponsoring it.

Perhaps a bit pedantically, he cautioned me that the NAS was not the National Academy of Sciences, a confusion that never occurred to me because the National Academies has been known as the National Academies of Science, Engineering and Medicine for several years now. Dr. Teytelman’s real concern seemed to be that the NAS is a “‘politically conservative advocacy group’.” (The internal scare quotes were Teytelman’s, but I was not afraid.) According to Dr. Teytelman, the NAS sought to undermine climate science and environmental protection by advancing a call for more reproducible science. He pointed me to what he characterized as an exposé on NAS, in Undark,1 and he cautioned me that the National Association of Scholars’ work is “dangerous.” Finally, Dr. Teytelman urged me to reconsider my decision to participate in the conference.

I did reconsider my decision, but reaffirmed it in an email I sent back to Dr. Teytelman. I realized that I could be wrong, in which case, I would eat my words, confident that they would be most digestible:

Dear Dr Teytelman,

Thank you for your note. I was aware of the piece on Undark’s website, as well as the difference between the NAS and the NASEM. I don’t believe anyone involved in science education would likely be confused between the two organizations. A couple of years ago, I wrote a teaching module on biomedical causation for the National Academies. This is my first presentation at the request of the NAS, and frankly I am honored by the organization’s request that I present at its conference.

I have read other materials that have been critical of the NAS and its publications on climate change and other issues. I know that there are views of the organization from which I would dissent, but I do not see my disagreement on some issues as a reason not to attend, and present at a conference on an issue of great importance to the legal system.

I am hardly an expert on climate change issues, and that is my failing. Most of my professional work involves health effects regulation and litigation. If the NAS has advanced sophistical arguments against a scientific claim, then the proper antidote will be to demonstrate its fallacious reasoning and misleading marshaling of evidence. I should think, however, as someone interested in improving the reproducibility of scientific research, you will agree that there is much common ground for discussion and reform of scientific practice, on a broader arrange [sic] of issues than climate change.

As for the political ‘conservatism’ of the organization, I am not sure why that is a reason to eschew participation in a conference that should be of great importance to people of all political views. My own politics probably owe much to the influence of Michael Oakeshott, which puts me in perhaps the smallest political tribe of any in the United States. If conservatism means antipathy to post-modernism, identity politics, political orthodoxies, and assaults on Enlightenment values and the Rule of Law, then count me in.

In any event, thanks for your solicitude. I think I can participate and return with my soul intact.

All the best.

Nathan

To his credit, Dr. Teytelman tenaciously continued. He acknowledged that the political leanings of the organizers were not a reason to boycott, but he politely pressed his case. We were now on a first name basis:

Dear Nathan,

I very much applaud all efforts to improve the rigour of our science. The problem here is that this NAS organization has a specific goal – undermining the environmental protection and denying climate change. This is why 7 out of the 21 speakers at the event are climate change deniers. [https://docs.google.com/spreadsheets/d/136FNLtJzACc6_JbbOxjy2urbkDK7GefRZ/edit?usp=sharing] And this isn’t some small fringe effort to be ignored. Efforts of this organization and others like them have now gotten us to the brink of a regulatory change at the United States Environmental Protection Agency which can gut the entire EPA (see a recent editorial against this I co-authored). This conference is not a genuine effort to talk about reproducibility. The reproducibility part is a clever disguise for pushing a climate change denialism agenda.

Best,

Lenny

I looked more carefully at Lenny’s spreadsheet, and considered the issue afresh. We were both pretty stubborn:

Dear Lenny,

Thank you for this information. I will review with interest.

I do not see that the conference is primarily or even secondarily about climate change vel non. There are two scientists, Trafimow and Wasserstein with whom I have some disagreements about statistical methodology. Tony Cox and Stan Young, whatever their political commitments or views on climate change may be, are both very capable statisticians, from whom I have learned a great deal. The conference should be a lively conversation about reproducibility, not about climate change. Given your interests and background, you should go.

I believe that your efforts here are really quite illiberal, although they are in line with the ‘cancel culture’, so popular on campuses these days.

Forty-three years ago, I entered a Roman Catholic Church to marry the woman I love. There were no lightning bolts or temblors, even though I was then, and am now, an atheist. Yes, I am still married to my first wife. Although I share the late Christopher Hitchens’ low view of the Catholic Church, somehow I managed to overcome my antipathy to being married in what some would call a house of ill repute. I even manage to agree with some Papist opinions, although not for the superstitious reasons Papists embrace.

If I could tolerate the RC Church’s dogma for a morning, perhaps you could put aside the dichotomous ‘us and them’ view of the world and participate in what promises to be an interesting conference on reproducibility?

All the best.

Nathan

Lenny kindly acknowledged my having considered his issues, and wrote back a nice note, which I will quote again in full without permission, but with the hope that he will forgive me and even acknowledge that I have given his views an airing in this forum.

Hi Nathan,

We’ll have to agree to disagree. I don’t want to give a veneer of legitimacy to an organization whose goal is not improving reproducibility but derailing EPA and climate science.

Warmly,

Lenny

The business of psychoanalyzing motives and disparaging speakers and conference organizers is a dangerous business for several reasons. First, motives can be inscrutable. Second, they can be misinterpreted. And third, they can be mixed. When speaking of organizations, there is the further complication of discerning a corporate motive among the constituent members.

The conference was an exciting, intellectually challenging event, which took place in Oakland, California, on February 7 and 8. I can report back to Lenny that his characterizations of and fears about the conference were unwarranted. While there were some assertions of climate change skepticism made with little or no evidence, the evidence-based presentations essentially affirmed climate change and sought to understand its causes and future course in a scientific way. But climate change was not why I went to this conference. On the more general issue of reform of scientific procedures and methods, we had open debates, some agreement on important principles, and robust and reasoned disagreement.

Lenny, you were correct that the NAS should not be ignored, but you should have gone to the meeting and participated in the conversation.


1 Michael Schulson, “A Remedy for Broken Science, Or an Attempt to Undercut It?” Undark (April 18, 2018).

Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma

January 24th, 2020

The phosphodiesterase type 5 inhibitor medications (PDE5i) seem to arouse the litigation propensities of the lawsuit industry. The PDE5i medications (sildenafil, tadalafil, etc.) have multiple indications, but they are perhaps best known for their ability to induce penile erections, which in some situations can be a very useful outcome.

The launch of Viagra in 1998 was followed by litigation that claimed the drug caused heart attacks, and not the romantic kind. The only broken hearts, however, were those of the plaintiffs’ lawyers and their expert witnesses who saw their litigation claims excluded and dismissed.[1]

Then came claims that the PDE5i medications caused non-arteritic anterior ischemic optic neuropathy (“NAION”), based upon a dubious epidemiologic study by Dr. Gerald McGwin. This litigation demonstrated, if anything, that while love may be blind, erections need not be.[2] The NAION cases were consolidated in a multi-district litigation (MDL) in front of Judge Paul Magnuson, in the District of Minnesota. After considerable back and forth, Judge Magnuson ultimately concluded that the McGwin study was untrustworthy, and the NAION claims were dismissed.[3]

In 2014, the American Medical Association’s internal medicine journal published an observational epidemiologic study of sildenafil (Viagra) use and melanoma.[4] The authors of the study interpreted their study modestly, concluding:

“[s]ildenafil use may be associated with an increased risk of developing melanoma. Although this study is insufficient to alter clinical recommendations, we support a need for continued investigation of this association.”

Although the Li study eschewed causal conclusions and new clinical recommendations in view of the need for more research into the issue, the litigation industry filed lawsuits, claiming causality.[5]

In the new natural order of things, as soon as the litigation industry cranks out more than a few complaints, an MDL results, and the PDE5i – melanoma claims were no exception. By spring 2016, plaintiffs’ counsel had collected ten cases, a minyan, sufficient for an MDL.[6] The MDL plaintiffs, on behalf of putative victims, named as defendants the manufacturers of sildenafil and tadalafil, two of the more widely prescribed PDE5i medications.

While the MDL cases were winding their way through discovery and possible trials, additional studies and meta-analyses were published. None of the subsequent studies, including the systematic reviews and meta-analyses, concluded that there was a causal association. Most scientists who were publishing on the issue opined that systematic error (generally confounding) prevented a causal interpretation of the data.[7]

Many of the observational studies found statistically significant increased relative risks of about 1.1 to 1.2 (10 to 20 percent), typically with upper bounds of 95 percent confidence intervals below 2.0. The only scientists who inferred general causation from the available evidence were those who had been recruited and retained by plaintiffs’ counsel. As plaintiffs’ expert witnesses, they contended that the Li study, and the several studies that became available afterwards, collectively showed that PDE5i drugs cause melanoma in humans.

Not surprisingly, given the absence of any non-litigation experts endorsing the causal conclusion, the defendants challenged plaintiffs’ proffered expert witnesses under Federal Rule of Evidence 702. Plaintiffs’ counsel also embraced judicial gatekeeping and challenged the defense experts. The MDL trial judge, the Hon. Richard Seeborg, held hearings with four days of viva voce testimony from four of plaintiffs’ expert witnesses (two on biological plausibility, and two on epidemiology), and three of the defense’s experts. Last week, Judge Seeborg ruled by granting in part, and denying in part, the parties’ motions.[8]

The Decision

The MDL trial judge’s opinion is noteworthy in many respects. First, Judge Richard Seeborg cited and applied Rule 702, a statute, and not dicta from case law that predates the most recent statutory version of the rule. As a legal process matter, this respect for judicial process and the difference in legal authority between statutory and common law was refreshing. Second, the judge framed the Rule 702 issue, in line with the statute, and Ninth Circuit precedent, as an inquiry whether expert witnesses deviated from the standard of care of how scientists “conduct their research and reach their conclusions.”[9]

Biological Plausibility

Plaintiffs proffered three expert witnesses on biological plausibility, Drs. Rizwan Haq, Anand Ganesan, and Gary Piazza. All were subject to motions to exclude under Rule 702. Judge Seeborg denied the defense motions against all three of plaintiffs’ plausibility witnesses.[10]

The MDL judge determined that biological plausibility is neither necessary nor sufficient for inferring causation in science or in the law. The defense argued that the plausibility witnesses relied upon animal and cell culture studies that were unrealistic models of the human experience.[11] The MDL court, however, found that the standard for opinions on biological plausibility is relatively forgiving, and that the testimony of all three of plaintiffs’ proffered witnesses was admissible.

The subjective nature of opinions about biological plausibility is widely recognized in medical science.[12] Plausibility determinations are typically “Just So” stories, offered in the absence of hard evidence that postulated mechanisms are actually involved in a real causal pathway in human beings.

Causal Association

The real issue in the MDL hearings was the conclusion reached by plaintiffs’ expert witnesses that the PDE5i medications cause melanoma. The MDL court did not have to determine whether epidemiologic studies were necessary for such a causal conclusion. Plaintiffs’ counsel had proffered three expert witnesses with more or less expertise in epidemiology: Drs. Rehana Ahmed-Saucedo, Sonal Singh, and Feng Liu-Smith. All of plaintiffs’ epidemiology witnesses, and certainly all of defendants’ experts, implicitly if not explicitly embraced the proposition that analytical epidemiology was necessary to determine whether PDE5i medications can cause melanoma.

In their motions to exclude Ahmed-Saucedo, Singh, and Liu-Smith, the defense pointed out that, although many of the studies yielded statistically significant estimates of melanoma risk, none of the available studies adequately accounted for systematic bias in the form of confounding. Although the plaintiffs’ plausibility expert witnesses advanced “Just-So” stories about PDE5i and melanoma, the available studies showed an almost identical increased risk of basal cell carcinoma of the skin, which would be explained by confounding, but not by plaintiffs’ postulated mechanisms.[13]

The MDL court acknowledged that whether epidemiologic studies “adequately considered” confounding was “central” to the Rule 702 inquiry. Without any substantial analysis, however, the court gave its own ipse dixit that the existence vel non of confounding was an issue for cross-examination and the jury’s resolution.[14] Whether there was a reasonably valid association between PDE5i and melanoma was a jury question. This judicial refusal to engage with the issue of confounding was one of the disappointing aspects of the decision.

The MDL court was less forgiving when it came to the plaintiffs’ epidemiology expert witnesses’ assessment of the association as causal. All the parties’ epidemiology witnesses invoked Sir Austin Bradford Hill’s viewpoints or factors for judging whether associations were causal.[15] Although they embraced Hill’s viewpoints on causation, the plaintiffs’ epidemiologic expert witnesses had a much more difficult time faithfully applying them to the evidence at hand. The MDL court concluded that the plaintiffs’ witnesses deviated from their own professional standard of care in their analysis of the data.[16]

Hill’s first enumerated factor was “strength of association,” which is typically expressed epidemiologically as a risk ratio or a risk difference. The MDL court noted that the extant epidemiologic studies generally showed relative risks around 1.2 for PDE5i and melanoma, which was “undeniably” not a strong association.[17]

The plaintiffs’ epidemiology witnesses were at sea on how to explain away the lack of strength in the putative association. Dr. Ahmed-Saucedo retreated into an emphasis on how all or most of the studies found some increased risk, but the MDL court correctly found that this ruse merely conflated strength with consistency of the observed associations. Dr. Ahmed-Saucedo’s dismissal of dose-response, another Hill factor, as unimportant sealed her fate. The MDL court found that her Bradford Hill analysis was “unduly results-driven,” and that her proffered testimony was not admissible.[18] The MDL court likewise found that Dr. Feng Liu-Smith conflated strength of association with consistency, an error that was too great a deviation from the professional standard of care.[19]

Dr. Sonal Singh fared no better after he contradicted his own prior testimony that there is an order of importance to the Hill factors, with “strength of association” at or near the top. In the face of a set of studies, none of which showed a strong association, Dr. Singh abandoned his own interpretative principle to suit the litigation needs of the case. His analysis placed the greatest weight on the Li study, which had the highest risk ratio, but he failed to advance any persuasive reason for his emphasis on one of the smallest studies available. The MDL court found Dr. Singh’s claim to have weighed strength of association heavily, despite the obvious absence of strong associations, puzzling and too great an analytical gap to abide.[20]

Judge Seeborg thus concluded that while the plaintiffs’ expert witnesses could opine that there was an association, which was arguably plausible, they could not, under Rule 702, contend that the association was causal. In attempting to argue that the association met Bradford Hill’s factors for causality, the plaintiffs’ witnesses had ignored, misrepresented, or confused one of the most important factors, strength of association, in a way that revealed their analyses to be results-driven and unfaithful to the methodology they claimed to have followed. Judge Seeborg emphasized a feature of the revised Rule 702 that is often ignored by his fellow federal judges:[21]

“Under the amendment, as under Daubert, when an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied. See Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 598 (9th Cir. 1996). The amendment specifically provides that the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case.”

Given that the plaintiffs’ witnesses purported to apply a generally accepted methodology, Judge Seeborg was left to question why they would conclude causality when no one else in their field had done so.[22] The epidemiologic issue had been around for several years, and addressed not just in observational studies, but systematically reviewed and meta-analyzed. The absence of published causal conclusions was not just an absence of evidence, but evidence of absence of expert support for how plaintiffs’ expert witnesses applied the Bradford Hill factors.

Reliance Upon Studies That Did Not Conclude Causation Existed

Parties challenging causal claims will sometimes point to the absence of a causal conclusion in the publications of the individual epidemiologic studies that are the main basis for the causal claim. In the PDE5i-melanoma cases, the defense advanced this argument unsuccessfully. The MDL court rejected it because an individual study rarely contains a comprehensive review of all the pertinent evidence for or against causality; study authors are mostly concerned with conveying the results of their own research.[23] The authors may briefly discuss other study results as the rationale for their own study, but such discussions are often limited in scope and purpose. Judge Seeborg, in this latest round of PDE5i litigation, thus did not fault plaintiffs’ witnesses for relying upon epidemiologic or mechanistic studies that individually did not assert causal conclusions. Rather, it was the absence of causal conclusions in systematic reviews, meta-analyses, narrative reviews, regulatory agency pronouncements, or clinical guidelines that ultimately raised the fatal inference that the plaintiffs’ witnesses were not faithfully deploying a generally accepted methodology.

The defense argument that pointed to the individual epidemiologic studies themselves derives some legal credibility from the Supreme Court’s opinion in General Electric Co. v. Joiner, 522 U.S. 136 (1997). In Joiner, the SCOTUS took plaintiffs’ expert witnesses to task for drawing stronger conclusions than were offered in the papers upon which they relied. Chief Justice Rehnquist gave considerable weight to the consideration that the plaintiffs’ expert witnesses relied upon studies, the authors of which explicitly refused to interpret as supporting a conclusion of human disease causation.[24]

Joiner’s criticisms of the reliance upon studies that do not themselves reach causal conclusions have gained a foothold in the case law interpreting Rule 702. The Fifth Circuit, for example, has declared:[25]

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

This aspect of Joiner may properly limit the over-interpretation or misinterpretation of an individual study, which seems fine.[26] The Joiner case may, however, perpetuate an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions.  The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ discussion section is that sometimes study authors grossly over-interpret their data.  When it comes to scientific studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), then the discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible in trials. In other words, the misuse of non-rigorous comments in published articles can cut both ways.

There have been, and will continue to be, occasions in which published studies contain data, relevant and important to the causation issue, but which studies also contain speculative, personal opinions expressed in the Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither side’s expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be a dispositive factor, other than raising questions for the reviewing court.

In exercising their gatekeeping function, trial judges should exercise care in how they assess expert witnesses’ reliance upon study data and analyses, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should, in some instances,  have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

Judge Seeborg sensibly seems to have distinguished between the absence of causal conclusions in individual epidemiologic studies and the absence of causal conclusions in any reputable medical literature.[27] He refused to be ensnared in the Joiner argument because:[28]

“Epidemiology studies typically only expressly address whether an association exists between agents such as sildenafil and tadalafil and outcomes like melanoma progression. As explained in In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1116 (N.D. Cal. 2018), ‘[w]hether the agents cause the outcomes, however, ordinarily cannot be proven by epidemiological studies alone; an evaluation of causation requires epidemiologists to exercise judgment about the import of those studies and to consider them in context’.”

This new MDL opinion, relying upon the Advisory Committee Notes to Rule 702, is thus a more felicitous statement of the goals of gatekeeping.

Confidence Intervals

As welcome as some aspects of Judge Seeborg’s opinion are, the decision is not without mistakes. The district judge, like so many of his judicial colleagues, trips over the proper interpretation of a confidence interval:[29]

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”

This statement is inescapably wrong. The 95 percent probability attaches to the capturing of the true parameter – the actual relative risk – in the long run of repeated confidence intervals that result from repeated sampling of the same sample size, in the same manner, from the same population. In Judge Seeborg’s example, the next sample might give a relative risk point estimate of 1.9, and that new estimate will have a confidence interval that may run from just below 1.0 to over 3. A third sample might turn up a relative risk estimate of 0.8, with a confidence interval that runs from, say, 0.3 to 1.4. Neither the second nor the third sample would be reasonably incompatible with the first. A more accurate assessment of the true parameter is that it lies somewhere between 0.3 and 3, a considerably broader range than any single interval’s 95 percent would suggest.
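The long-run coverage interpretation can be illustrated with a short simulation. The sketch below is purely hypothetical (nothing in it comes from the opinion or the briefs; the true log relative risk, sample size, and spread are invented for illustration): the procedure of constructing 95 percent intervals captures the true value in roughly 95 percent of repeated samples, but any single interval either contains the true value or it does not.

```python
import random
import statistics

# Hypothetical illustration: the 95 percent refers to the long-run
# coverage of the interval-building *procedure*, not to the chance
# that any one computed interval contains the true value.
random.seed(42)

TRUE_LOG_RR = 0.0   # assumed true log relative risk (RR = 1.0)
SIGMA = 0.35        # assumed sampling standard deviation
N = 50              # observations per simulated study
TRIALS = 2000       # number of repeated studies

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_LOG_RR, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = mean - 1.96 * se, mean + 1.96 * se  # normal-approx 95% CI
    if lo <= TRUE_LOG_RR <= hi:
        covered += 1

# Over many repetitions, the fraction of intervals that capture the
# true parameter approaches 0.95; each individual interval's bounds
# bounce around from sample to sample.
print(f"long-run coverage: {covered / TRIALS:.3f}")
```

Each simulated study produces a different interval, just as the hypothetical 0.8–1.9, sub-1.0–3, and 0.3–1.4 intervals in the text differ; the 95 percent describes how often the procedure succeeds across repetitions, not the probability that any one published interval brackets the true relative risk.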

Judge Seeborg’s error is sadly all too common. Whenever I see the error, I wonder whence it came. Often the error is in briefs of both plaintiffs’ and defense counsel. In this case, I did not see the erroneous assertion about confidence intervals made in plaintiffs’ or defendants’ briefs.


[1]  Brumley  v. Pfizer, Inc., 200 F.R.D. 596 (S.D. Tex. 2001) (excluding plaintiffs’ expert witness who claimed that Viagra caused heart attack); Selig v. Pfizer, Inc., 185 Misc. 2d 600 (N.Y. Cty. S. Ct. 2000) (excluding plaintiff’s expert witness), aff’d, 290 A.D. 2d 319, 735 N.Y.S. 2d 549 (2002).

[2]  “Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I” (July 7, 2012); “Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference” (July 8, 2012).

[3]  In re Viagra Prods. Liab. Litig., 572 F.Supp. 2d 1071 (D. Minn. 2008), 658 F. Supp. 2d 936 (D. Minn. 2009), and 658 F. Supp. 2d 950 (D. Minn. 2009).

[4]  Wen-Qing Li, Abrar A. Qureshi, Kathleen C. Robinson, and Jiali Han, “Sildenafil use and increased risk of incident melanoma in US men: a prospective cohort study,” 174 J. Am. Med. Ass’n Intern. Med. 964 (2014).

[5]  See, e.g., Herrara v. Pfizer Inc., Complaint in 3:15-cv-04888 (N.D. Calif. Oct. 23, 2015); Diana Novak Jones, “Viagra Increases Risk Of Developing Melanoma, Suit Says,” Law360 (Oct. 26, 2015).

[6]  See In re Viagra (Sildenafil Citrate) Prods. Liab. Litig., 176 F. Supp. 3d 1377, 1378 (J.P.M.L. 2016).

[7]  See, e.g., Jenny Z. Wang, Stephanie Le, Claire Alexanian, Sucharita Boddu, Alexander Merleev, Alina Marusina, and Emanual Maverakis, “No Causal Link between Phosphodiesterase Type 5 Inhibition and Melanoma,” 37 World J. Men’s Health 313 (2019) (“There is currently no evidence to suggest that PDE5 inhibition in patients causes increased risk for melanoma. The few observational studies that demonstrated a positive association between PDE5 inhibitor use and melanoma often failed to account for major confounders. Nonetheless, the substantial evidence implicating PDE5 inhibition in the cyclic guanosine monophosphate (cGMP)-mediated melanoma pathway warrants further investigation in the clinical setting.”); Xinming Han, Yan Han, Yongsheng Zheng, Qiang Sun, Tao Ma, Li Dai, Junyi Zhang, and Lianji Xu, “Use of phosphodiesterase type 5 inhibitors and risk of melanoma: a meta-analysis of observational studies,” 11 OncoTargets & Therapy 711 (2018).

[8]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., Case No. 16-md-02691-RS, Order Granting in Part and Denying in Part Motions to Exclude Expert Testimony (N.D. Calif. Jan. 13, 2020) [cited as Opinion].

[9]  Opinion at 8 (“determin[ing] whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions”), citing Daubert v. Merrell Dow Pharm., Inc. (Daubert II), 43 F.3d 1311, 1317 (9th Cir. 1995).

[10]  Opinion at 11.

[11]  Opinion at 11-13.

[12]  See Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, “Introduction,” chap. 1, in Kenneth J. Rothman, et al., eds., Modern Epidemiology at 29 (3d ed. 2008) (“no approach can transform plausibility into an objective causal criterion”).

[13]  Opinion at 15-16.

[14]  Opinion at 16-17.

[15]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965); see also “Woodside & Davis on the Bradford Hill Considerations” (April 23, 2013).

[16]  Opinion at 17-21.

[17]  Opinion at 18. The MDL court cited In re Silicone Gel Breast Implants Prod. Liab. Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004), for the proposition that relative risks greater than 2.0 permit the inference that the agent under study “was more likely than not responsible for a particular individual’s disease.”
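The arithmetic behind the relative-risk-greater-than-2.0 doctrine deserves a gloss. Under the usual (and contestable) assumptions that the relative risk reflects a causal, unconfounded effect, the probability that the exposure produced a given exposed case is the attributable fraction among the exposed, (RR − 1)/RR, which exceeds one half only when RR exceeds 2.0. A minimal sketch, with my own illustrative numbers:

```python
def attributable_fraction(rr):
    """Attributable fraction among the exposed: the share of exposed cases
    that would not have occurred absent the exposure, assuming the relative
    risk (rr) reflects a causal, unconfounded effect."""
    return (rr - 1) / rr

print(attributable_fraction(2.0))  # 0.5: exactly as likely as not
print(attributable_fraction(3.0))  # about 0.667: more likely than not
print(attributable_fraction(1.5))  # about 0.333: less likely than not
```

The formula makes plain why courts treat RR = 2.0 as the tipping point for specific causation, and also why the inference collapses if the underlying association is confounded or biased.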

[18]  Opinion at 18.

[19]  Opinion at 20.

[20]  Opinion at 19.

[21]  Opinion at 21, quoting from Rule 702, Advisory Committee Notes (emphasis in Judge Seeborg’s opinion).

[22]  Opinion at 21.

[23]  SeeFollow the Data, Not the Discussion” (May 2, 2010).

[24]  Joiner, 522 U.S. at 145-46 (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

[25]  Huss v. Gayden, 571 F.3d 442 (5th Cir. 2009) (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia); see also McClain v. Metabolife Internat’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions); Happel v. Walmart, 602 F.3d 820, 826 (7th Cir. 2010) (observing that “it is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven”).

[26]  In re Accutane Prods. Liab. Litig., 511 F. Supp. 2d 1288, 1291 (M.D. Fla. 2007) (“When an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study. That is, he must not draw overreaching conclusions.”) (internal citations omitted).

[27]  See Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 785 (D.N.J. 1996), aff’d, 118 F.3d 1577 (3d Cir. 1997) (“law warns against use of medical literature to draw conclusions not drawn in the literature itself …. Reliance upon medical literature for conclusions not drawn therein is not an accepted scientific methodology.”).

[28]  Opinion at 14.

[29]  Opinion at 4-5.

American Statistical Association – Consensus versus Personal Opinion

December 13th, 2019

Lawyers and judges pay close attention to standards, guidances, and consensus statements from respected and recognized professional organizations. Deviations from these standards may be presumptive evidence of malpractice or malfeasance in civil and criminal litigation, in regulatory matters, and in other contexts. One important, recurring situation arises when trial judges must act as gatekeepers of the admissibility of expert witness opinion testimony. In making this crucial determination, judges will want to know whether a challenged expert witness has deviated from an accepted professional standard of care or practice.

In 2016, the American Statistical Association (ASA) published a consensus statement on p-values. The ASA statement grew out of a lengthy process that involved assembling experts of diverse viewpoints. In October 2015, the ASA convened a two-day meeting for 20 experts to meet and discuss areas of core agreement. Over the following three months, the participating experts and the ASA Board members continued their discussions, which led to the ASA Executive Committee’s approval of the statement that was published in March 2016.[1]

The ASA 2016 Statement spelled out six relatively uncontroversial principles of basic statistical practice.[2] Far from rejecting statistical significance, the six principles embraced statistical tests as an important but insufficient basis for scientific conclusions:

“3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.”

Despite the fairly clear and careful statement of principles, legal actors did not take long to misrepresent the ASA principles.[3] What had been a prescription about the insufficiency of p-value thresholds was distorted into strident assertions that statistical significance was unnecessary for scientific conclusions.

Three years after the ASA published its p-value consensus document, ASA Executive Director Ronald Wasserstein and two other statisticians published an editorial in a supplemental issue of The American Statistician, in which they called for the abandonment of significance testing.[4] Although the editorial was clearly labeled as such, it introduced the special journal issue, and it appeared over Wasserstein’s name and his official title as ASA Executive Director, without any disclaimer.

Sowing further confusion, the editorial made the following pronouncement:[5]

“The [2016] ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of ‘statistical significance’ be abandoned. We take that step here. We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as ‘significantly different’, ‘p < 0.05’, and ‘nonsignificant’ survive, whether expressed in words, by asterisks in a table, or in some other way.”

The ASA is a collective body, and its 2016 Statement spoke for that body after lengthy deliberation and debate. The language quoted above moves, within one paragraph, from the ASA Statement to the royal “We,” who are taking the step of abandoning the term “statistically significant.” Given the unqualified use of the collective first-person pronoun in the same paragraph that refers to the ASA, combined with Ronald Wasserstein’s official capacity, and the complete absence of a disclaimer that this pronouncement was simply a personal opinion, a reasonable reader could hardly avoid concluding that the pronouncement reflected ASA policy.

Your humble blogger, and others, read Wasserstein’s 2019 editorial as an ASA statement.[6] Although it is true that the 2019 paper is labeled “editorial,” and that the editorial does not describe a consensus process, there is no disclaimer such as is customary when someone in an official capacity publishes a personal opinion. Indeed, rather than the usual disclaimer, the Wasserstein editorial thanks the ASA Board of Directors “for generously and enthusiastically supporting the ‘p-values project’ since its inception in 2014.” This acknowledgement strongly suggests that the editorial is itself part of the “p-values project,” which is “enthusiastically” supported by the ASA Board of Directors.

If the editorial were not itself confusing enough, an unsigned email from “ASA <asamail@amstat.org>” was sent out in July 2019, in which the anonymous ASA author(s) takes credit for changing statistical guidelines at the New England Journal of Medicine:[7]

From: ASA <asamail@amstat.org>
Date: Thu, Jul 18, 2019 at 1:38 PM
Subject: Major Medical Journal Updates Statistical Policy in Response to ASA Statement
To: <XXXX>

The email is itself an ambiguous piece of evidence as to what the ASA is claiming. The email says that the New England Journal of Medicine changed its guidelines “in response to the ASA Statement on P-values and Statistical Significance and the subsequent The American Statistician special issue on statistical inference.” Of course, the “special issue” was not just Wasserstein’s editorial, but the 42 other papers. So this claim leaves open to doubt exactly what in the 2019 special issue the NEJM editors were responding to. Given that the 42 articles that followed Wasserstein’s editorial did not all agree with Wasserstein’s “steps taken,” or with each other, the only landmark in the special issue was the editorial over the name of the ASA’s Executive Director.

Moreover, a reading of the NEJM revised guidelines does not suggest that the journal’s editors were unduly influenced by the Wasserstein editorial or the 42 accompanying papers. The journal mostly responded to the ASA 2016 consensus paper by putting some teeth into its Principle 4, which dealt with multiplicity concerns in submitted manuscripts. The newly adopted (2019) NEJM author guidelines do not take the further step urged by Wasserstein and colleagues; there is no general prohibition on p-values or statements of “statistical significance.”

The confusion propagated by the Wasserstein 2019 editorial has not escaped the attention of other ASA officials. An editorial in the June 2019 issue of AmStat News, by ASA President Karen Kafadar, noted the prevalent confusion and uneasiness over the 2019 The American Statistician special issue, the lack of consensus, and the need for healthy debate.[8]

In this month’s issue of AmStat News, President Kafadar returned, in her “President’s Corner,” to the confusion over the 2019 special issue of The American Statistician. Because Executive Director Wasserstein’s editorial language about “we now take this step” is almost certain to find its way into opportunistic legal briefs, Kafadar’s comments are worth noting in some detail:[9]

“One final challenge, which I hope to address in my final month as ASA president, concerns issues of significance, multiplicity, and reproducibility. In 2016, the ASA published a statement that simply reiterated what p-values are and are not. It did not recommend specific approaches, other than ‘good statistical practice … principles of good study design and conduct, a variety of numerical and graphical summaries of data, understanding of the phenomenon under study, interpretation of results in context, complete reporting and proper logical and quantitative understanding of what data summaries mean’.

The guest editors of the March 2019 supplement to The American Statistician went further, writing: ‘The ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of “statistical significance” be abandoned. We take that step here. … [I]t is time to stop using the term “statistically significant” entirely’.

Many of you have written of instances in which authors and journal editors – and even some ASA members – have mistakenly assumed this editorial represented ASA policy. The mistake is understandable: The editorial was coauthored by an official of the ASA. In fact, the ASA does not endorse any article, by any author, in any journal – even an article written by a member of its own staff in a journal the ASA publishes.”

Kafadar’s caveat should quash incorrect assertions about the ASA’s position on statistical significance testing. It is a safe bet, however, that such assertions will appear in trial and appellate briefs.

Statistical reasoning is difficult enough for most people, but the hermeneutics of American Statistical Association publications on statistical significance may require a doctorate in divinity. In a cleverly titled post, Professor Deborah Mayo argues that there is no other way to interpret the Wasserstein 2019 editorial except as laying down an ASA prescription. Deborah G. Mayo, “Les stats, c’est moi,” Error Statistics Philosophy (Dec. 13, 2019). I accept President Kafadar’s correction at face value, and accept that I, like many other readers, misinterpreted the Wasserstein editorial as having the imprimatur of the ASA. Mayo points out, however, that Kafadar’s correction in a newsletter may be insufficient at this point, and that a stronger disclaimer is required. Officers of the ASA are certainly entitled to their opinions and the opportunity to present them, but disclaimers would bring clarity and transparency to the published work of these officials.

Wasserstein’s 2019 editorial goes further to make a claim about how his “step” will ameliorate the replication crisis:

“In this world, where studies with ‘p < 0.05’ and studies with ‘p > 0.05’ are not automatically in conflict, researchers will see their results more easily replicated – and, even when not, they will better understand why.”

The editorial here seems to be attempting to define replication failure out of existence. The claim, as stated, is problematic. A sophisticated practitioner may think of the situation in which two studies, one with p = 0.048 and another with p = 0.052, might be said not to be in conflict. In real-world litigation, however, advocates will take Wasserstein’s statement about studies not in conflict (despite p-values above and below a threshold, say 5%) to extremes. We can anticipate claims that two similar studies with p-values above and below 5%, say one at 0.04 and the other at 0.40, are not in conflict, with the second counted as a replication of the first. It is hard to see how this interpretation of Wasserstein’s editorial, although consistent with its language, will advance sound, replicable science.[10]
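The point can be made concrete with a back-of-the-envelope calculation (hypothetical numbers of my own devising, not drawn from any actual study): two studies with the identical point estimate can produce p-values of roughly 0.04 and 0.40 merely because the second study is much smaller and less precise. Whether such studies are "in conflict" is thus a question about precision, not about the direction of their results:

```python
import math

def two_sided_p(estimate, std_error):
    """Two-sided p-value for a normally distributed test statistic
    (estimate divided by its standard error)."""
    z = estimate / std_error
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical studies sharing a point estimate (say, a log relative risk of
# 0.3); the second study is much smaller, so its standard error is larger.
p_precise = two_sided_p(0.3, 0.146)
p_imprecise = two_sided_p(0.3, 0.357)

print(round(p_precise, 2), round(p_imprecise, 2))  # roughly 0.04 and 0.4
```

The two studies point in the same direction and are statistically compatible, yet the second hardly counts as a "replication" of anything; it is simply too imprecise to discriminate among hypotheses.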


[1] Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The Am. Statistician 129 (2016).

[2]The American Statistical Association’s Statement on and of Significance” (Mar. 17, 2016).

[3] See, e.g., “The Education of Judge Rufe – The Zoloft MDL” (April 9, 2016) (Zoloft litigation); “The ASA’s Statement on Statistical Significance – Buzzing from the Huckabees” (Mar. 19, 2016); “The American Statistical Association Statement on Significance Testing Goes to Court – Part I” (Nov. 13, 2018).

[4] Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[5] Id. at S2.

[6] SeeHas the American Statistical Association Gone Post-Modern?” (Mar. 24, 2019); Deborah G. Mayo, “The 2019 ASA Guide to P-values and Statistical Significance: Don’t Say What You Don’t Mean,” Error Statistics Philosophy (June 17, 2019); B. Haig, “The ASA’s 2019 update on P-values and significance,” Error Statistics Philosophy  (July 12, 2019).

[7] SeeStatistical Significance at the New England Journal of Medicine” (July 19, 2019); See also Deborah G. Mayo, “The NEJM Issues New Guidelines on Statistical Reporting: Is the ASA P-Value Project Backfiring?Error Statistics Philosophy  (July 19, 2019).

[8] See Kafadar, “Statistics & Unintended Consequences,” AmStat News 3,4 (June 2019).

[9] Karen Kafadar, “The Year in Review … And More to Come,” AmStat News 3 (Dec. 2019).

[10]  See also Deborah G. Mayo, “P‐value thresholds: Forfeit at your peril,” 49 Eur. J. Clin. Invest. e13170 (2019).


Is the IARC Lost in the Weeds?

November 30th, 2019

A couple of years ago, I met David Zaruk at a Society for Risk Analysis meeting, where we were both presenting. I was aware of David’s blogging and investigative journalism, but meeting him gave me a greater appreciation for the breadth and depth of his work. For those of you who do not know David, he is present in cyberspace as the Risk-Monger who blogs about risk and science communications issues. His blog has featured cutting-edge exposés about the distortions in risk communications perpetuated by the advocacy of non-governmental organizations (NGOs). Previously, I have recorded my objections to the intellectual arrogance of some such organizations that purport to speak on behalf of the public interest, when often they act in cahoots with the lawsuit industry in the manufacturing of tort and environmental litigation.

David’s writing on the lobbying and control of NGOs by plaintiffs’ lawyers from the United States should be required reading for everyone who wants to understand how litigation sausage is made. His series, “SlimeGate,” details the interplay among NGO lobbying, lawsuit industry maneuvering, and carcinogen determinations at the International Agency for Research on Cancer (IARC). The IARC, a branch of the World Health Organization, is headquartered in Lyon, France. The IARC convenes “working groups” to review the scientific studies of the carcinogenicity of various substances and processes. The working groups produce “monographs” of their reviews, which the IARC publishes, in print and online. The United States is in the top tier of participating countries for funding the IARC.

The IARC was founded in 1965, when observational epidemiology was still very much an emerging science, with expertise concentrated in only a few countries. For its first few decades, the IARC enjoyed a good reputation, and its monographs were considered definitive reviews, especially under its first director, Dr. John Higginson, who served from 1966 to 1981.[1] By the end of the 20th century, the need for the IARC and its reviews had waned, as the methods of systematic review and meta-analysis had evolved significantly and had become more widely standardized and practiced.

Understandably, the IARC has been concerned that the members of its working groups should be viewed as disinterested scientists. Unfortunately, this concern has been translated into an asymmetrical standard that excludes anyone with a hint of manufacturing connection, but keeps the door open for those scientists with deep lawsuit industry connections. Speaking on behalf of the plaintiffs’ bar, Michael Papantonio, a plaintiffs’ lawyer who founded Mass Torts Made Perfect, noted that “We [the lawsuit industry] operate just like any other industry.”[2]

David Zaruk has shown how this asymmetry has been exploited mercilessly by the lawsuit industry and its agents in connection with the IARC’s review of glyphosate.[3] The resulting IARC classification of glyphosate has led to a litigation firestorm and an all-out assault on agricultural sustainability and productivity.[4]

The anomaly of the IARC’s glyphosate classification has been noted by scientists as well. Dr. Geoffrey Kabat is a cancer epidemiologist, who has written perceptively on the misunderstandings and distortions of cancer risk assessments in various settings.[5] He has previously written about glyphosate in Forbes and elsewhere, but recently he has written an important essay on glyphosate in Issues in Science and Technology, which is published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. In his essay, Dr. Kabat details how the IARC’s evaluation of glyphosate is an outlier in the scientific and regulatory world, and is not well supported by the available evidence.[6]

The problems with the IARC are both substantive and procedural.[7] One key problem facing IARC evaluations is an incoherent classification scheme. IARC evaluations classify putative human carcinogenic risks into five categories: Group 1 (known), Group 2A (probably), Group 2B (possibly), Group 3 (unclassifiable), and Group 4 (probably not). Group 4 is virtually an empty set, with only one substance, caprolactam ((CH2)5C(O)NH), an organic compound used in the manufacture of nylon.

In the IARC evaluation at issue, glyphosate was placed into Group 2A, which would seem to satisfy the legal system’s requirement that an exposure more likely than not causes the harm in question. Appearances and word usage, however, can be deceiving. Probability is a continuous scale from zero to one. In Bayesian decision making, zero and one are unavailable as starting points because, if either were our prior, no amount of evidence could ever change our judgment of the probability of causation (Cromwell’s Rule). The IARC informs us that its use of “probably” is quite idiosyncratic; the probability that a Group 2A agent causes cancer has “no quantitative” meaning. All the IARC intends is that a Group 2A classification “signifies a greater strength of evidence than possibly carcinogenic.”[8]
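Cromwell’s Rule can be illustrated with a toy Bayesian update (my own hypothetical numbers): however strong the evidence, a prior probability of exactly zero or exactly one never moves, which is why those endpoints are unavailable as starting points.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis, computed by Bayes' theorem."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Suppose the observed evidence is 20 times more likely if causation is real:
print(bayes_update(0.10, 0.9, 0.045))  # a modest prior moves substantially
print(bayes_update(0.0, 0.9, 0.045))   # a prior of zero stays exactly 0.0
print(bayes_update(1.0, 0.9, 0.045))   # a prior of one stays exactly 1.0
```

Only the middle case behaves like genuine inference; the extremes are dogma, immune to any data.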

In other words, Group 2A classifications are consistent with posterior probabilities of less than 0.5 (50 percent). A working group could judge the probability that a substance or process is carcinogenic to humans to be greater than zero but no more than five or ten percent, and still vote for a 2A classification, in keeping with the IARC Preamble. This low probability threshold converts the judgment of “probably carcinogenic” into a precautionary prescription, rendered when the most probable assessment is either ignorance or lack of causality. There is thus a practical certainty, close to 100%, that a 2A classification will confuse judges and juries, as well as the scientific community.

In IARC-speak, a 2A “probability” connotes “sufficient evidence” in experimental animals and “limited evidence” in humans. A substance can receive a 2A classification even when the sufficient evidence of carcinogenicity occurs in one non-human animal species, while other animal species fail to show carcinogenicity. A 2A classification can thus raise the thorny question in court whether a claimant is more like a rat or a mouse.

Similarly, “limited evidence” in humans can be based upon inconsistent observational studies that fail to measure and adjust for known and potential confounding risk factors and systematic biases. The 2A classification requires little substantively or semantically, and many 2A classifications leave juries and judges to determine whether a chemical or medication caused a human being’s cancer, when the basic predicates for Sir Austin Bradford Hill’s factors for causal judgment have not been met.[9]

In courtrooms, IARC 2A classifications should be excluded as legally irrelevant, under Rule 403. Even if a 2A IARC classification were a credible judgment of causation, admitting evidence of the classification would be “substantially outweighed by a danger of … unfair prejudice, confusing the issues, [and] misleading the jury….”[10]

The IARC may be lost in the weeds, but there is no need to fret. A little Roundup™ will help.


[1]  See John Higginson, “The International Agency for Research on Cancer: A Brief History of Its History, Mission, and Program,” 43 Toxicological Sci. 79 (1998).

[2]  Sara Randazzo & Jacob Bunge, “Inside the Mass-Tort Machine That Powers Thousands of Roundup Lawsuits,” Wall St. J. (Nov. 25, 2019).

[3]  David Zaruk, “The Corruption of IARC,” Risk Monger (Aug. 24, 2019); David Zaruk, “Greed, Lies and Glyphosate: The Portier Papers,” Risk Monger (Oct. 13, 2017).

[4]  Ted Williams, “Roundup Hysteria,” Slate Magazine (Oct. 14, 2019).

[5]  See, e.g., Geoffrey Kabat, Hyping Health Risks: Environmental Hazards in Everyday Life and the Science of Epidemiology (2008); Geoffrey Kabat, Getting Risk Right: Understanding the Science of Elusive Health Risks (2016).

[6]  Geoffrey Kabat, “Who’s Afraid of Roundup?” 36 Issues in Science and Technology (Fall 2019).

[7]  See Schachtman, “Infante-lizing the IARC” (May 13, 2018); “The IARC Process is Broken” (May 4, 2016). See also Eric Lasker and John Kalas, “Engaging with International Carcinogen Evaluations,” Law360 (Nov. 14, 2019).

[8]  “IARC Preamble to the IARC Monographs on the Identification of Carcinogenic Hazards to Humans,” at Sec. B.5., p.31 (Jan. 2019); See alsoIARC Advisory Group Report on Preamble” (Sept. 2019).

[9]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965) (noting that only when “[o]ur observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” do we move on to consider the nine articulated factors for determining whether an association is causal).

[10]  Fed. R. Evid. 403.


Does the California State Bar Discriminate Unlawfully?

November 24th, 2019

Earlier this month, various news outlets announced a finding in a California study that black male attorneys are three times more likely to be disciplined by the State Bar than their white male counterparts.[1] Some of the news accounts treated the study findings as conclusions that the Bar had engaged in race discrimination. One particularly irresponsible website proclaimed that “bar discipline is totally racist.”[2] Indeed, the California State Bar itself apparently plans to hire consulting experts to help it achieve “bias-free decision-making and processes,” to eliminate “unintended bias,” and to consider how, if at all, to weigh prior complaints in the disciplinary procedure.[3]

The California Bar’s report was prepared by a social scientist, George Farkas, of the School of Education at the University of California, Irvine. Based upon data from attorneys admitted to the California bar between 1990 and 2008, Professor Farkas reported crude prevalence rates of discipline, probation, disbarment, or resignation, by race.[4] The disbarment/resignation rate for black male lawyers was 3.9%, whereas the rate for white male lawyers was 1%. Disparities, however, are not necessarily unlawful discrimination.

The disbarment/resignation rate for black female lawyers was 0.9%, but no one has suggested that there is implicit bias in favor of black women over both black and white male lawyers. White women were twice as likely as Asian women to resign, be placed on probation, or be disbarred (0.4% versus 0.2%).

The ABA’s coverage sheepishly admitted that “[d]ifferences could be explained by the number of complaints received about an attorney, the number of investigations opened, the percentage of investigations in which a lawyer was not represented by counsel, and previous discipline history.”[5]

Farkas’s report of October 31, 2019, was transmitted to the Bar’s Board of Trustees on November 14th.[6] As anyone familiar with discrimination law would have expected, Professor Farkas conducted multiple regression analyses that adjusted for the number of previous complaints filed against the errant lawyer, and for whether the lawyer was represented by counsel before the Bar. The full analyses showed that these other important variables, not race, did (not merely could) explain the variability in discipline rates:

“Statistically, these variables explained all of the differences in probation and disbarment rates by race/ethnicity. Among all variables included in the final analysis, prior discipline history was found to have the strongest effects [sic] on discipline outcomes, followed by the proportion of investigations in which the attorney under investigation was represented by counsel, and the number of investigations.”[7]

The number of previous complaints against a particular lawyer surely has a role in considering whether a miscreant lawyer should be placed on probation, or subjected to disbarment. And without further refinement of the analysis, and irrespective of race or ethnicity, failure to retain counsel for disciplinary hearings may correlate strongly with futility of any defense.
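A toy stratification (with invented numbers, not the Bar’s data) shows how a covariate such as prior complaint history can account for an entire crude disparity: the crude discipline rates differ by more than a factor of three, yet within each stratum the two groups are disciplined at identical rates.

```python
# Hypothetical counts: (disciplined, total) by group and prior-complaint history.
strata = {
    ("A", "few prior complaints"):  (2, 200),
    ("A", "many prior complaints"): (18, 100),
    ("B", "few prior complaints"):  (8, 800),
    ("B", "many prior complaints"): (9, 50),
}

def rate(disciplined, total):
    return disciplined / total

# Crude rates differ sharply, because group A has far more repeat-complaint cases:
crude_a = rate(2 + 18, 200 + 100)   # 20/300, about 6.7%
crude_b = rate(8 + 9, 800 + 50)     # 17/850, about 2.0%

# But within each stratum of prior-complaint history, the rates are identical:
few = (rate(*strata[("A", "few prior complaints")]),
       rate(*strata[("B", "few prior complaints")]))    # 1% and 1%
many = (rate(*strata[("A", "many prior complaints")]),
        rate(*strata[("B", "many prior complaints")]))  # 18% and 18%
print(crude_a, crude_b, few, many)
```

This is the logic of the regression adjustment in the Farkas report: once the covariates are held constant, the racial disparity in the crude rates disappears.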

Curiously, the Farkas report did not take into account the race or ethnicity of the complainants before the Bar’s disciplinary committee. The Farkas report seems reasonable as far as it goes, but the wild conclusions drawn in the media would not pass Rule 702 gatekeeping.


[1]  See, e.g., Emma Cueto, “Black Male Attorneys Disciplined More Often, California Study Finds,” Law360 (Nov. 18, 2019); Debra Cassens Weiss, “New California bar study finds racial disparities in lawyer discipline,” Am. Bar Ass’n J. (Nov. 18, 2019).

[2]  Joe Patrice, “Study Finds That Bar Discipline Is Totally Racist Shocking Absolutely No One: Black male attorneys are more likely to be disciplined than white attorneys,” Above the Law (Nov. 19, 2019).

[3]  Debra Cassens Weiss, “New California bar study finds racial disparities in lawyer discipline,” Am. Bar Ass’n J. (Nov. 18, 2019).

[4]  George Farkas, “Discrepancies by Race and Gender in Attorney Discipline by the State Bar of California: An Empirical Analysis” (Oct. 31, 2019).

[5]  Debra Cassens Weiss, supra at note 3.

[6]  Dag MacLeod (Chief of Mission Advancement & Accountability Division) & Ron Pi (Principal Analyst, Office of Research & Institutional Accountability), Report on Disparities in the Discipline System (Nov. 14, 2019).

[7] Dag MacLeod & Pi, Report on Disparities in the Discipline System at 4 (Nov. 14, 2019) (emphasis added).

Everything She Just Said Was Bullshit

September 26th, 2019

At this point, most products liability lawyers have read about the New Jersey verdicts returned earlier this month against Johnson & Johnson in four mesothelioma cases.[1] The Middlesex County jury found that the defendant’s talc, and its supposed asbestos impurities, were a cause of all four mesothelioma cases, and awarded compensatory damages of $37.3 million.[2]

Johnson & Johnson was prejudiced by having to try four cases questionably consolidated together, and then hobbled by having its affirmative defense evidence stricken, and finally crucified when the trial judge instructed the jury at the end of the defense lawyer’s closing argument: “everything she just said was bullshit.”

Judge Ana C. Viscomi, who presided over the trial, struck the entire summation of defense lawyer Diane Sullivan. The action effectively deprived Johnson & Johnson of a defense, as can be seen from the verdicts. Judge Viscomi’s egregious ruling was given without explaining which parts of Sullivan’s closing were objectionable, and without giving Sullivan an opportunity to argue against the sanction.

During the course of Sullivan’s closing argument, Judge Viscomi criticized Sullivan for calling the plaintiffs’ lawyers “sinister,” and suggested that her argument was defaming the legal profession in violation of the Rules of Professional Conduct.[3] Sullivan did use the word “sinister” several times, but in each instance, she referred to the plaintiffs’ arguments, allegations, and innuendo about Johnson & Johnson’s action. Judge Viscomi curiously imputed unprofessional conduct to Sullivan for referring to plaintiffs’ counsel’s “shows and props,” as a suggestion that plaintiffs’ counsel had fabricated evidence.

Striking an entire closing argument is, as far as anyone has determined, unprecedented. Of course, Judge Haller is fondly remembered for having stricken the entirety of Vinny Gambini’s opening statement, but the good judge did allow Vinny’s “thank you” to stand:

Vinny Gambini: “Yeah, everything that guy just said is bullshit… Thank you.”

D.A. Jim Trotter: “Objection. Counsel’s entire opening statement is argument.”

Judge Chamberlain Haller: “Sustained. Counselor’s entire opening statement, with the exception of ‘Thank you’ will be stricken from the record.”

My Cousin Vinny (1992).

In the real world of a New Jersey courtroom, even Ms. Sullivan’s expression of gratitude for the jury’s attention and service succumbed to Judge Viscomi’s unprecedented ruling,[4] as did almost 40 pages of argument in which Sullivan carefully debunked and challenged the opinion testimony of plaintiffs’ highly paid expert witnesses. The trial court’s ruling undermined the defense’s detailed rebuttal of plaintiffs’ evidence, as well as the defense’s comment upon the plaintiffs’ witnesses’ lack of credibility.

Judge Viscomi’s sua sponte ruling appears even more curious given what took place in the aftermath of her instructing the jury to disregard Sullivan’s argument. First, the trial court gave very disparate treatment to plaintiffs’ counsel. The lawyers for the plaintiffs gave extensive closing arguments that were replete with assertions that Johnson & Johnson and Ms. Sullivan were liars, predators, manipulators, poisoners, baby killers, and then some. Sullivan’s objections were perfunctorily overruled. Second, Judge Viscomi permitted plaintiffs’ counsel to comment extensively upon Ms. Sullivan’s closing, even though it had been stricken. Third, despite the judicial admonition about the Rules of Professional Conduct, neither the trial judge nor plaintiffs’ counsel appear to have filed a disciplinary complaint against Ms. Sullivan. Of course, if Judge Viscomi or the plaintiffs’ counsel thought that Ms. Sullivan had violated the Rules, then they would be obligated to report Ms. Sullivan for misconduct.

Bottom line: these verdicts are unsafe.


[1]  The cases were tried in a questionable consolidation in the New Jersey Superior Court, for Middlesex County, before Judge Viscomi. Barden v. Brenntag North America, No. L-1809-17; Etheridge v. Brenntag North America, No. L-932-17; McNeill-George v. Brenntag North America, No. L-7049-16; and Ronning v. Brenntag North America, No. L-6040-17.

[2]  Bill Wichert, “J&J Hit With $37.3M Verdict In NJ Talc Case,” Law360 (Sept. 11, 2019).

[3]  Amanda Bronstad, “J&J Moves for Talc Mistrial After Judge Strikes Entire Closing Argument,” N.J.L.J. (Sept. 10, 2019) (describing Judge Viscomi as having admonished Ms. Sullivan to “[s]top denigrating the lawyers”; J&J’s motion for mistrial was made before the case was submitted to the jury).

[4]  See Peder B. Hong, “Summation at the Border: Serious Misconduct in Final Argument in Civil Trials,” 19 Hamline L. Rev. 179 (1995); Ty Tasker, “Sticks and Stones: Judicial Handling of Invective in Advocacy,” 42 Judges J. 17 (2003); Janelle L. Davis, “Sticks and Stones May Break My Bones, But Names Could Get Me a Mistrial: An Examination of Name-Calling in Closing Argument in Civil Cases,” 42 Gonzaga L. Rev. 133 (2011).

Palavering About P-Values

August 17th, 2019

The American Statistical Association’s most recent confused and confusing communication about statistical significance testing has given rise to great mischief in the world of science and science publishing.[1] Take for instance last week’s opinion piece about “Is It Time to Ban the P Value?” Please.

Helena Chmura Kraemer is an accomplished professor of statistics at Stanford University. This week, the JAMA Network flagged Professor Kraemer’s opinion piece on p-values as one of its most-read articles. Kraemer’s eye-catching title creates the impression that the p-value is unnecessary and inimical to valid inference.[2]

Remarkably, Kraemer’s article commits the very mistake that the ASA set out to correct back in 2016,[3] by conflating the probability of the data under a hypothesis of no association with the probability of a hypothesis given the data:

“If P value is less than .05, that indicates that the study evidence was good enough to support that hypothesis beyond reasonable doubt, in cases in which the P value .05 reflects the current consensus standard for what is reasonable.”

The ASA tried to break the bad habit of scientists’ interpreting p-values as allowing us to assign posterior probabilities, such as beyond a reasonable doubt, to hypotheses, but obviously to no avail.

Kraemer also ignores the ASA 2016 Statement’s teaching of what the p-value is not and cannot do, by claiming that p-values are determined by non-random error probabilities such as:

“the reliability and sensitivity of the measures used, the quality of the design and analytic procedures, the fidelity to the research protocol, and in general, the quality of the research.”

Kraemer provides errant advice and counsel by insisting that “[a] non-significant result indicates that the study has failed, not that the hypothesis has failed.” If the p-value measures the probability of observing an association at least as large as the one obtained, given an assumed null hypothesis, then of course a large p-value cannot speak to the failure of the hypothesis; but why declare that the study has failed? The study was perhaps indeterminate, but it still yielded information that can be combined with other data, or that can help guide future studies.
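For readers who want to see the definition in action, here is a minimal sketch (hypothetical numbers, standard-library Python only) of what a p-value is: the probability, computed under the null hypothesis, of data at least as extreme as those observed — not the probability that any hypothesis is true or false.

```python
import random

random.seed(7)

def simulated_p_value(observed_heads, n_flips, n_sims=20_000):
    """Estimate a two-sided p-value by simulation: the fraction of
    null-model experiments (a fair coin) whose head count deviates
    from n/2 at least as much as the observed count does."""
    observed_dev = abs(observed_heads - n_flips / 2)
    extreme = 0
    for _ in range(n_sims):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - n_flips / 2) >= observed_dev:
            extreme += 1
    return extreme / n_sims

# Hypothetical data: 60 heads in 100 flips. The exact two-sided
# binomial p-value is roughly 0.057 -- a statement about the data
# given the null hypothesis, not a posterior probability that the
# null (or any alternative) hypothesis is true.
p = simulated_p_value(60, 100)
print(p)
```

The point of the sketch is the direction of the conditional: the simulation assumes the null hypothesis and asks how often such data arise; it never computes, and cannot compute, the probability of a hypothesis given the data.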

Perhaps in her most misleading advice, Kraemer asserts that:

“[w]hether P values are banned matters little. All readers (reviewers, patients, clinicians, policy makers, and researchers) can just ignore P values and focus on the quality of research studies and effect sizes to guide decision-making.”

Really? If a high-quality study finds an “effect size” of interest, we can now ignore random error?
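A short numerical sketch (hypothetical response rates, standard-library Python only) shows why an effect size, stripped of any assessment of random error, can mislead: the same observed rate carries very different evidential weight in a small study than in a large one.

```python
import math

def ci_95(successes, n):
    """Approximate 95% confidence interval for a proportion,
    using the normal (Wald) approximation."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return (p - 1.96 * se, p + 1.96 * se)

# The same observed "effect size" -- a 30% response rate -- from a
# small and a large hypothetical study:
small = ci_95(6, 20)       # 6/20 = 0.30; interval spans ~0.10 to ~0.50
large = ci_95(600, 2000)   # 600/2000 = 0.30; interval spans ~0.28 to ~0.32
print(small, large)
```

Both studies report the identical point estimate, yet the small study is compatible with effects ranging from trivial to dramatic. Focusing on “effect sizes” while ignoring random error erases exactly this distinction.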

The ASA 2016 Statement, with its “six principles,” has provoked some deliberate or ill-informed distortions in American judicial proceedings, but Kraemer’s editorial creates idiosyncratic meanings for p-values. Even the 2019 ASA “post-modernism” does not advocate ignoring random error and p-values; rather, it proscribes the dichotomous characterization of results as “statistically significant,” or not.[4] The current author guidelines for articles submitted to the Journals of the American Medical Association clearly reject this new-fangled dismissal of the need to assess the role of random error.[5]


[1]  See Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[2]  Helena Chmura Kraemer, “Is It Time to Ban the P Value?” J. Am. Med. Ass’n Psychiatry (Aug. 7, 2019), in-press at doi:10.1001/jamapsychiatry.2019.1965.

[3]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

[4]  “Has the American Statistical Association Gone Post-Modern?” (May 24, 2019).

[5]  See instructions for authors at https://jamanetwork.com/journals/jama/pages/instructions-for-authors

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.