TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it represents as the scientific method of hypothesis testing, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to not reasonably compatible with the study’s data. In observational studies, the statistical analysis must acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

4.10 Statistical Significance

“The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
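The conflation is easy to demonstrate numerically. The short Python sketch below, using made-up illustrative measurements, contrasts a 95% confidence interval for a mean, which speaks to the precision of the estimate, with the much wider "mean ± 2 standard deviations" range for individual values that the Bench Book's definition actually describes:

```python
import math
import statistics

# Hypothetical measurements, for illustration only.
sample = [9.8, 10.4, 9.6, 10.9, 10.1, 9.7, 10.3, 10.6, 9.9, 10.2]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)
se = sd / math.sqrt(len(sample))

# 95% confidence interval for the population mean: a statement about
# the precision of the estimate, not about where individual values fall.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}; 95% CI for the mean: ({lo:.2f}, {hi:.2f})")

# What the Bench Book describes is instead the rough range covering
# about 95% of individual values (mean plus or minus 2 SD), much wider.
spread_lo, spread_hi = mean - 2 * sd, mean + 2 * sd
print(f"2-SD range for individual values: ({spread_lo:.2f}, {spread_hi:.2f})")
```

The two intervals answer different questions; treating them as interchangeable is precisely the confusion the quoted definition invites.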

The Bench Book runs into problems in interpreting legal decisions, which would seem softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision of Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected or held to be indeterminate by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism, according to which science is just the placing of subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to help the judicial reader find more in-depth help on statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of this Bench Book was funded by the State Justice Institute (SJI), which is a creation of federal legislation enacted with the laudatory goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. A majority of the directors (six) are state court judges; the remainder are one state court administrator and four members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit (501)(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges could go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual for Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies of Sciences, the Bench Book’s authors are lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite to the Reference Manual or the Manual on Complex Litigation, at any point in its discussion of the federal law of expert witnesses or of scientific or statistical method. Perhaps taxpayers would have been spared substantial expense if state judges were simply encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).

N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses

August 8th, 2018

The United States Supreme Court’s decision in Daubert is now over 25 years old. The idea of judicial gatekeeping of expert witness opinion testimony is even older in New Jersey state courts. The New Jersey Supreme Court articulated a reliability standard before the Daubert case was even argued in Washington, D.C. See Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991). Articulating a standard, however, is something very different from following a standard, and in many New Jersey trial courts, until very recently, the standard was pretty much anything goes.

One counter-example to the general rule of dog-eat-dog in New Jersey was Judge Nelson Johnson’s careful review and analysis of the proffered causation opinions in cases in which plaintiffs claimed that their use of the anti-acne medication isotretinoin (Accutane) caused Crohn’s disease. Judge Johnson, who sits in the Law Division of the New Jersey Superior Court for Atlantic County, held a lengthy hearing, and reviewed the expert witnesses’ reliance materials.1 Judge Johnson found that the plaintiffs’ expert witnesses had employed undue selectivity in choosing what to rely upon. Perhaps even more concerning, Judge Johnson found that these witnesses had refused to rely upon reasonably well-conducted epidemiologic studies, while embracing unpublished, incomplete, and poorly conducted studies and anecdotal evidence. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J.Super. Law Div., Atlantic Cty. Feb. 20, 2015). In response, Judge Johnson politely but firmly closed the gate to conclusion-driven duplicitous expert witness causation opinions in over 2,000 personal injury cases. “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015).

Aside from resolving over 2,000 pending cases, Judge Johnson’s judgment was of intense interest to all who are involved in pharmaceutical and other products liability litigation. Judge Johnson had conducted a pretrial hearing, sometimes called a Kemp hearing in New Jersey, after the New Jersey Supreme Court’s opinion in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). At the hearing and in his opinion that excluded plaintiffs’ expert witnesses’ causation opinions, Judge Johnson demonstrated a remarkable aptitude for analyzing data and inferences in the gatekeeping process.

When the courtroom din quieted, the trial court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J.Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in closing the gate, Judge Johnson protected the judicial process from several bogus and misleading “lines of evidence,” which have become standard ploys to mislead juries in courthouses where the gatekeepers are asleep. Recognizing that not all evidence is on the same analytical plane, Judge Johnson gave case reports short shrift.

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16. Adverse event reports, largely driven by the very litigation in his courtroom, received little credit and were labeled as “not evidentiary in a court of law.” Id. at 14 (quoting FDA’s description of FAERS).

Judge Johnson recognized that there was a wide range of identified “risk factors” for irritable bowel syndrome, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that there were four medications generally acknowledged to be potential risk factors for inflammatory bowel disease: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics. Understandably, Judge Johnson was concerned that the plaintiffs’ expert witnesses preferred studies unadjusted for potential confounding co-variables and studies that had involved “cherry picking the subjects.” Id. at *18.

Judge Johnson had found that both sides in the isotretinoin cases conceded the relative unimportance of animal studies, but the plaintiffs’ expert witnesses nonetheless invoked the animal studies in the face of the artificial absence of epidemiologic studies that had been created by their cherry-picking strategies. Id.

Plaintiffs’ expert witnesses had reprised a common claimants’ strategy; namely, they claimed that all the epidemiology studies lacked statistical power. Their arguments often ignored that statistical power calculations depend upon statistical significance, a concept to which many plaintiffs’ counsel have virulent antibodies, as well as an arbitrarily selected alternative hypothesis of association size. Furthermore, the plaintiffs’ arguments ignored the actual point estimates, most of which were favorable to the defense, and the observed confidence intervals, most of which were reasonably narrow.
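That dependence of power upon alpha and upon an arbitrarily chosen alternative can be made concrete with a small calculation. The Python sketch below uses a standard normal approximation and wholly hypothetical risks and sample sizes; it is an illustration of the general point, not a reconstruction of any study in the Accutane litigation:

```python
import math

def power_two_prop(p0, p1, n_per_arm, z_alpha=1.96):
    """Approximate power of a two-sided z-test comparing two proportions."""
    se = math.sqrt(p0 * (1 - p0) / n_per_arm + p1 * (1 - p1) / n_per_arm)
    z = abs(p1 - p0) / se
    # Phi(z - z_alpha): probability the test statistic crosses the critical value
    return 0.5 * (1 + math.erf((z - z_alpha) / math.sqrt(2)))

# Same hypothetical study (baseline risk 10%, 500 subjects per arm);
# only the analyst's assumed alternative changes.
for assumed_risk in (0.15, 0.20, 0.30):
    print(f"alternative risk {assumed_risk:.2f}: "
          f"power = {power_two_prop(0.10, assumed_risk, 500):.2f}")
```

The same study is "underpowered" or "well powered" depending entirely on the alternative hypothesis selected, which is why a bare complaint of "lack of power" is meaningless without stating the alternative against which power was computed.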

The defense responded to the bogus statistical arguments by presenting an extremely capable clinical and statistical expert witness, Dr. Stephen Goodman, to present a meta-analysis of the available epidemiologic evidence.

Meta-analysis has become an important facet of pharmaceutical and other products liability litigation. Fortunately for Judge Johnson, Dr. Goodman was able to explain meta-analysis generally, as well as the two meta-analyses he had performed on isotretinoin and irritable bowel outcomes.

Dr. Goodman explained that the plaintiffs’ witnesses’ failure to perform a meta-analysis was telling when meta-analysis can obviate the plaintiffs’ hyperbolic statistical complaints:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

In re Accutane, 2015 WL 753674, at *8.
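For readers unfamiliar with the mechanics, the core of a fixed-effect, inverse-variance meta-analysis fits in a few lines of Python. The study results below are hypothetical numbers chosen for illustration, not Dr. Goodman's data:

```python
import math

# Hypothetical study results: relative risk with its 95% confidence interval.
studies = [
    (1.10, 0.80, 1.52),
    (0.95, 0.70, 1.29),
    (1.20, 0.85, 1.69),
    (0.90, 0.60, 1.35),
]

num = den = 0.0
for rr, ci_lo, ci_hi in studies:
    se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)  # SE recovered from CI
    w = 1.0 / se ** 2                                      # inverse-variance weight
    num += w * math.log(rr)
    den += w

log_pooled = num / den
se_pooled = math.sqrt(1.0 / den)
pooled = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),
      math.exp(log_pooled + 1.96 * se_pooled))
print(f"pooled RR = {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Every study contributes in proportion to its precision, and the pooled interval is narrower than any single study's, which is the sense in which "no one study is determinant."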

Judge Johnson’s judicial handiwork received non-deferential appellate review from a three-judge panel of the Appellate Division, which reversed the exclusion of Kornbluth and Madigan. In re Accutane Litig., 451 N.J. Super. 153, 165 A.3d 832 (App. Div. 2017). The New Jersey Supreme Court granted the isotretinoin defendants’ petition for appellate review, and the issues were joined over the appropriate standard of appellate review for expert witness opinion exclusions, and the appropriateness of Judge Johnson’s exclusions of Kornbluth and Madigan. A bevy of amici curiae joined in the fray.2

Last week, the New Jersey Supreme Court issued a unanimous opinion, which reversed the Appellate Division’s holding that Judge Johnson had “mistakenly exercised” discretion. Applying its own precedents from Rubanick, Landrigan, and Kemp, and the established abuse-of-discretion standard, the Court concluded that the trial court’s ruling to exclude Kornbluth and Madigan was “unassailable.” In re Accutane Litig., ___ N.J. ___, 2018 WL 3636867 (2018), Slip op. at 79.3

The high court graciously acknowledged that defendants and amici had “good reason” to seek clarification of New Jersey law. Slip op. at 67. In abandoning abuse-of-discretion as its standard of review, the Appellate Division had relied upon a criminal case that involved the application of the Frye standard, which is applied as a matter of law. Id. at 70-71. The high court also appeared to welcome the opportunity to grant review, reverse the intermediate court, and reinforce “the rigor expected of the trial court” in its gatekeeping role. Id. at 67. The Supreme Court, however, did not articulate a new standard; rather it demonstrated at length that Judge Johnson had appropriately applied the legal standards that had been previously announced in New Jersey Supreme Court cases.4

In attempting to defend the Appellate Division’s decision, plaintiffs sought to characterize New Jersey law as somehow different from, and more “liberal” than, the United States Supreme Court’s decision in Daubert. The New Jersey Supreme Court acknowledged that it had never formally adopted the dicta from Daubert about factors that could be considered in gatekeeping, slip op. at 10, but the Court went on to note what disinterested observers had long understood, that the so-called Daubert factors simply flowed from a requirement of sound methodology, and that there was “little distinction” and “not much light” between the Landrigan and Rubanick principles and the Daubert case or its progeny. Id. at 10, 80.

Curiously, the New Jersey Supreme Court announced that the Daubert factors should be incorporated into the New Jersey Rules 702 and 703 and their case law, but it stopped short of declaring New Jersey a “Daubert” jurisdiction. Slip op. at 82. In part, the Court’s hesitance followed from New Jersey’s bifurcation of expert witness standards for civil and criminal cases, with the Frye standard still controlling in the criminal docket. At another level, it makes no sense to describe any jurisdiction as a “Daubert” state because the relevant aspects of the Daubert decision were dicta, and the Daubert decision and its progeny were superseded by the revision of the controlling statute in 2000.5

There were other remarkable aspects of the Supreme Court’s Accutane decision. For instance, the Court put its weight behind the common-sense and accurate interpretation of Sir Austin Bradford Hill’s famous articulation of factors for causal judgment, which requires that sampling error, bias, and confounding be eliminated before assessing whether the observed association is strong, consistent, plausible, and the like. Slip op. at 20 (citing the Reference Manual at 597-99), 78.

The Supreme Court relied extensively on the National Academies’ Reference Manual on Scientific Evidence.6 That reliance is certainly preferable to judicial speculations and fabulations of scientific method. The reliance is also positive, considering that the Court did not look only at the problematic epidemiology chapter, but adverted also to the chapters on statistical evidence and on clinical medicine.

The Supreme Court recognized that the Appellate Division had essentially sanctioned an anything goes abandonment of gatekeeping, an approach that has been all-too-common in some of New Jersey’s lower courts. Contrary to the previously prevailing New Jersey zeitgeist, the Court instructed that gatekeeping must be “rigorous” to “prevent[] the jury’s exposure to unsound science through the compelling voice of an expert.” Slip op. at 68-9.

Not all evidence is equal. “[C]ase reports are at the bottom of the evidence hierarchy.” Slip op. at 73. Extrapolation from non-human animal studies is fraught with external validity problems, and such studies are “far less probative in the face of a substantial body of epidemiologic evidence.” Id. at 74 (internal quotations omitted).

Perhaps most chilling for the lawsuit industry will be the Supreme Court’s strident denunciation of expert witnesses’ selectivity in choosing lesser evidence in the face of a large body of epidemiologic evidence, id. at 77, and their unprincipled cherry picking among the extant epidemiologic publications. Like the trial court, the Supreme Court found that the plaintiffs’ expert witnesses’ inconsistent use of methodological criteria and their selective reliance upon studies (disregarding eight of the nine epidemiologic studies) that favored their task masters was the antithesis of sound methodology. Id. at 73, citing with approval, In re Lipitor, ___ F.3d ___ (4th Cir. 2018) (slip op. at 16) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

An essential feature of the Supreme Court’s decision is that it was not willing to engage in the common reductionism that “all epidemiologic studies are flawed,” which thus privileges cherry picking. Not all disagreements between expert witnesses can be framed as differences in interpretation. In re Accutane will likely stand as a bulwark against flawed expert witness opinion testimony in the Garden State for a long time.


1 Judge Nelson Johnson is also the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a spell-binding historical novel about political and personal corruption.

2 In support of the defendants’ positions, amicus briefs were filed by the New Jersey Business & Industry Association, Commerce and Industry Association of New Jersey, and New Jersey Chamber of Commerce; by law professors Kenneth S. Broun, Daniel J. Capra, Joanne A. Epps, David L. Faigman, Laird Kirkpatrick, Michael M. Martin, Liesa Richter, and Stephen A. Saltzburg; by medical associations the American Medical Association, Medical Society of New Jersey, American Academy of Dermatology, Society for Investigative Dermatology, American Acne and Rosacea Society, and Dermatological Society of New Jersey, by the Defense Research Institute; by the Pharmaceutical Research and Manufacturers of America; and by New Jersey Civil Justice Institute. In support of the plaintiffs’ position and the intermediate appellate court’s determination, amicus briefs were filed by political action committee the New Jersey Association for Justice; by the Ironbound Community Corporation; and by plaintiffs’ lawyer Allan Kanner.

3 Nothing in the intervening scientific record called into question Judge Johnson’s trial court judgment. See, e.g., I.A. Vallerand, R.T. Lewinson, M.S. Farris, C.D. Sibley, M.L. Ramien, A.G.M. Bulloch, and S.B. Patten, “Efficacy and adverse events of oral isotretinoin for acne: a systematic review,” 178 Brit. J. Dermatol. 76 (2018).

4 Slip op. at 9, 14-15, citing Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991) (“We initially took that step to allow the parties in toxic tort civil matters to present novel scientific evidence of causation if, after the trial court engages in rigorous gatekeeping when reviewing for reliability, the proponent persuades the court of the soundness of the expert’s reasoning.”).

5 The Court did acknowledge that Federal Rule of Evidence 702 had been amended in 2000, to reflect the Supreme Court’s decision in Daubert, Joiner, and Kumho Tire, but the Court did not deal with the inconsistencies between the present rule and the 1993 Daubert case. Slip op. at 64, citing Calhoun v. Yamaha Motor Corp., U.S.A., 350 F.3d 316, 320-21, 320 n.8 (3d Cir. 2003).

6 See Accutane slip op. at 12-18, 24, 73-74, 77-78. With respect to meta-analysis, the Reference Manual’s epidemiology chapter is still stuck in the 1980s and the prevalent resistance to poorly conducted, often meaningless meta-analyses. See “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (The Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).

Scientific Evidence in Canadian Courts

February 20th, 2018

A couple of years ago, Deborah Mayo called my attention to the Canadian version of the Reference Manual on Scientific Evidence.1 In the course of discussion of mistaken definitions and uses of p-values, confidence intervals, and significance testing, Sander Greenland pointed to some dubious pronouncements in the Science Manual for Canadian Judges [Manual].

Unlike the United States federal court Reference Manual, which is published through a joint effort of the National Academies of Sciences, Engineering, and Medicine, the Canadian version is the product of the Canadian National Judicial Institute (NJI, or the Institut National de la Magistrature, if you live in Quebec), which claims to be an independent, not-for-profit group that is committed to educating Canadian judges. In addition to the Manual, the Institute publishes Model Jury Instructions and a guide, Problem Solving in Canada’s Courtrooms: A Guide to Therapeutic Justice (2d ed.), as well as conducting educational courses.

The NJI’s website describes the Institute’s Manual as follows:

“Without the proper tools, the justice system can be vulnerable to unreliable expert scientific evidence.

         * * *

The goal of the Science Manual is to provide judges with tools to better understand expert evidence and to assess the validity of purportedly scientific evidence presented to them. …”

The Chief Justice of Canada, Hon. Beverley M. McLachlin, contributed an introduction to the Manual, which was notable for its frank admission that:

“[w]ithout the proper tools, the justice system is vulnerable to unreliable expert scientific evidence.

* * *

Within the increasingly science-rich culture of the courtroom, the judiciary needs to discern ‘good’ science from ‘bad’ science, in order to assess expert evidence effectively and establish a proper threshold for admissibility. Judicial education in science, the scientific method, and technology is essential to ensure that judges are capable of dealing with scientific evidence, and to counterbalance the discomfort of jurists confronted with this specific subject matter.”

Manual at 14. These are laudable goals, indeed, but did the National Judicial Institute live up to its stated goals, or did it leave Canadian judges vulnerable to the Institute’s own “bad science”?

In his comments on Deborah Mayo’s blog, Greenland noted some rather cavalier statements in Chapter two that suggest that the conventional alpha of 5% corresponds to a “scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it.” He also pointed to other passages where the chapter seems to suggest that the coefficient of confidence that corresponds to an alpha of 5% “constitutes a rather high standard of proof,” thus confusing and conflating the probability of random error with posterior probabilities. Greenland is absolutely correct that the Manual does a rather miserable job of educating Canadian judges if our standard for its work product is accuracy and truth.

Some of the most egregious errors are within what is perhaps the most important chapter of the Manual, Chapter 2, “Science and the Scientific Method.” The chapter has two authors, a scientist, Scott Findlay, and a lawyer, Nathalie Chalifour. Findlay is an Associate Professor in the Department of Biology at the University of Ottawa. Chalifour is an Associate Professor in the Faculty of Law, also at the University of Ottawa. Together, they produced some dubious pronouncements, such as:

Weight of the Evidence (WOE)

“First, the concept of weight of evidence in science is similar in many respects to its legal counterpart. In both settings, the outcome of a weight-of-evidence assessment by the trier of fact is a binary decision.”

Manual at 40. Findlay and Chalifour cite no support for their characterization of WOE in science. Most attempts to invoke WOE are woefully vague and amorphous, with no meaningful guidance or content.2 Sixty-five pages later, if anyone is noticing, the authors let us in on a dirty little secret:

“at present, there exists no established prescriptive methodology for weight of evidence assessment in science.”

Manual at 105. The authors omit, however, that there are prescriptive methods for inferring causation in science; you just will not see them in discussions of weight of the evidence. The authors then compound the semantic and conceptual problems by stating that “in a civil proceeding, if the evidence adduced by the plaintiff is weightier than that brought forth by the defendant, a judge is obliged to find in favour of the plaintiff.” Manual at 41. This is a remarkable suggestion, which implies that if the plaintiff adduces the crummiest crumb of evidence, a mere peppercorn on the scales of justice, and the defendant has none to offer, the plaintiff must win. The plaintiff wins notwithstanding that no reasonable person could believe that the plaintiff’s claims are more likely than not true. Even if this were the law of Canada, it is certainly not how scientists think about establishing the truth of empirical propositions.

Confusion of Hypothesis Testing with “Beyond a Reasonable Doubt”

The authors’ next assault comes in conflating significance probability with the probability connected with the burden of proof, a posterior probability. Legal proceedings have a defined burden of proof, with criminal cases requiring the state to prove guilt “beyond a reasonable doubt.” Findlay and Chalifour’s discussion then runs off the rails by likening hypothesis testing, with an alpha of 5% or its complement, 95%, as a coefficient of confidence, to a “very high” burden of proof:

“In statistical hypothesis-testing – one of the tools commonly employed by scientists – the predisposition is that there is a particular hypothesis (the null hypothesis) that is assumed to be true unless sufficient evidence is adduced to overturn it. But in statistical hypothesis-testing, the standard of proof has traditionally been set very high such that, in general, scientists will only (provisionally) reject the null hypothesis if they are at least 95% sure it is false. Third, in both scientific and legal proceedings, the setting of the predisposition and the associated standard of proof are purely normative decisions, based ultimately on the perceived consequences of an error in inference.”

Manual at 41. This is, as Greenland and many others have pointed out, a totally bogus conception of hypothesis testing, and an utterly false description of the probabilities involved.
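What a p-value actually is can be shown with a toy Monte Carlo simulation, with all numbers invented for illustration: under the null hypothesis, how often does chance alone produce data at least as extreme as what was observed? That tail frequency is the p-value, and nothing in the computation yields a probability that the null hypothesis itself is true or false:

```python
import random

random.seed(7)

n, p_null = 100, 0.5   # hypothetical: 100 trials, null says success rate 50%
observed = 60          # hypothetical observed count of successes

# Simulate the experiment many times assuming the null is true, and count
# how often chance alone yields a result at least as extreme as observed.
sims = 20_000
as_extreme = 0
for _ in range(sims):
    successes = sum(random.random() < p_null for _ in range(n))
    if abs(successes - n * p_null) >= abs(observed - n * p_null):
        as_extreme += 1

p_value = as_extreme / sims
print(f"two-sided p = {p_value:.3f}")
# The p-value is P(data at least this extreme | null is true).
# It is not the probability that the null is false, and rejecting at
# p < 0.05 does not make anyone "95% sure" the null is false.
```

The simulation never assigns any probability to the null hypothesis itself, which is exactly why the Manual's "95% sure it is false" gloss misdescribes the procedure.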

Later in the chapter, Findlay and Chalifour flirt with the truth, but then lapse into an unrecognizable parody of it:

“Inferential statistics adopt the frequentist view of probability whereby a proposition is either true or false, and the task at hand is to estimate the probability of getting results as discrepant or more discrepant than those observed, given the null hypothesis. Thus, in statistical hypothesis testing, the usual inferred conclusion is either that the null is true (or rather, that we have insufficient evidence to reject it) or it is false (in which case we reject it). The decision to reject or not is based on the value of p: if the estimated value of p is below some threshold value α, we reject the null; otherwise we accept it.”

Manual at 74. OK; so far so good, but here comes the train wreck:

“By convention (and by convention only), scientists tend to set α = 0.05; this corresponds to the collective – and, one assumes, consensual – scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it. It is partly because of this that scientists have the reputation of being a notoriously conservative lot, given that a 95% threshold constitutes a rather high standard of proof.”

Manual at 75. Uggh; so we are back to significance probability’s being a posterior probability. As if to atone for their sins, in the very next paragraph, the authors then remind the judicial readers that:

“As noted above, p is the probability of obtaining results at least as discrepant as those observed if the null is true. This is not the same as the probability of the null hypothesis being true, given the results.”

Manual at 75. True, true, and completely at odds with what the authors have stated previously. And to add to the reader’s now fully justified confusion, the authors describe the standard for rejecting the null hypothesis as “very high indeed.” Manual at 102, 109. Any reader who is following the discussion might wonder how and why there is such a problem of replication and reproducibility in contemporary science.
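The distinction the authors keep blurring can be made concrete with a little arithmetic. The sketch below (with entirely hypothetical numbers of my choosing, not drawn from the Manual) applies Bayes’ theorem to show that testing at a 5% significance level does not make the analyst “95% sure” the null is false: the probability that the null is true, given a “significant” result, depends on the base rate of true nulls among tested hypotheses.

```python
# Illustrative only: the p-value is P(data at least this extreme | null true),
# not P(null true | data). With a plausible base rate of true nulls among
# tested hypotheses, the two probabilities diverge sharply.
# All numbers below are hypothetical assumptions.

alpha = 0.05        # significance threshold: P(reject | null true)
power = 0.80        # assumed P(reject | null false)
prior_null = 0.90   # assumed share of tested hypotheses where the null is true

# P(null true | null rejected), by Bayes' theorem
false_discovery = (alpha * prior_null) / (
    alpha * prior_null + power * (1 - prior_null))

print(f"P(null true | 'significant' result) = {false_discovery:.2f}")  # 0.36, not 0.05
```

Even on these charitable assumptions, more than a third of “significant” results would come from true nulls, which is nowhere near the 5% that the Manual’s gloss would suggest.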

Conflating Bayesianism with Frequentist Modes of Inference

We have seen how Findlay and Chalifour conflate significance and posterior probabilities, some of the time. In a section of their chapter that deals explicitly with probability, the authors tell us that, before any study is conducted, the prior probability of the truth of the tested hypothesis is 50%, sans evidence. This is an astonishing creation of certainty out of nothingness, and perhaps it explains the authors’ implied claim that the crummiest morsel of evidence on one side is sufficient to compel a verdict, if the other side has no morsels at all. Here is how the authors put their claim to the Canadian judges:

“Before each study is conducted (that is, a priori), the hypothesis is as likely to be true as it is to be false. Once the results are in, we can ask: How likely is it now that the hypothesis is true? In the first study, the low a priori inferential strength of the study design means that this probability will not be much different from the a priori value of 0.5 because any result will be rather equivocal owing to limitations in the experimental design.”

Manual at 64. This implied Bayesian slant, with its 50% priors, would in the world of science lead anyone to believe “as many as six impossible things before breakfast,” and many more throughout the day.
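To see how much work the authors’ 50% prior is doing, consider a small sketch (my own hypothetical numbers, not the Manual’s). The same modest evidence yields a comfortable posterior from a coin-flip prior, but almost nothing from a skeptical prior; the evidence is doing little and the assumed prior is doing nearly everything.

```python
# Illustrative sketch: with evidence of fixed strength, the posterior
# probability of a hypothesis is driven largely by the prior.
# The likelihood ratio and priors below are hypothetical assumptions.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via Bayes' theorem in odds form."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr = 2.0  # weak evidence: data twice as likely under the hypothesis as under the null

print(f"{posterior(0.50, lr):.3f}")  # coin-flip prior -> posterior about 0.667
print(f"{posterior(0.05, lr):.3f}")  # skeptical prior -> posterior about 0.095
```

The design point is simple: a 50% prior is not an absence of assumptions; it is a substantive assumption that manufactures near-certainty from the crummiest morsel of evidence.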

Lest you think that the Manual is all rubbish, there are occasional gems of advice to the Canadian judges. The authors admonish the judges to

“be wary of individual ‘statistically significant’ results that are mined from comparatively large numbers of trials or experiments, as the results may be ‘cherry picked’ from a larger set of experiments or studies that yielded mostly negative results. The court might ask the expert how many other trials or experiments testing the same hypothesis he or she is aware of, and to describe the outcome of those studies.”

Manual at 87. Good advice, but at odds with the authors’ characterization of statistical significance as establishing the rejection of the null hypothesis well-nigh beyond a reasonable doubt.
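The arithmetic behind that advice is worth having at hand. A minimal sketch (my own illustration, assuming independent tests at the conventional 0.05 level) shows how quickly the chance of at least one spurious “significant” result grows with the number of comparisons:

```python
# Illustrative arithmetic for the 'cherry picking' warning: if m independent
# tests are run and every null hypothesis is in fact true, the chance of at
# least one nominally 'significant' (p < 0.05) result is 1 - 0.95^m.

alpha = 0.05
for m in (1, 5, 20, 100):
    p_at_least_one = 1 - (1 - alpha) ** m
    print(f"{m:3d} tests: P(>= 1 false positive) = {p_at_least_one:.2f}")
# With 20 tests of true nulls, the family-wise error rate is already about 0.64.
```

Hence the sensible question the authors urge: how many other trials or experiments testing the same hypothesis were run, and what did they show?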

When Greenland first called attention to this Manual, I reached out to some people who had been involved in its peer review. One reviewer told me that it was a “living document,” which would likely be revised after he had the chance to call the NJI’s attention to the errors. But two years later, the errors remain, and so we have to infer that the authors meant to say all the contradictory and false statements that are still present in the downloadable version of the Manual.


[2] See “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name” (May 1, 2012); “Weight of the Evidence in Science and in Law” (July 29, 2017); see also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).

High, Low and Right-Sided Colonics – Ridding the Courts of Junk Science

July 16th, 2016

Not surprisingly, many of Selikoff’s litigation- and regulatory-driven opinions have not fared well, such as the notions that asbestos causes gastrointestinal cancers and that all asbestos minerals have equal potential and strength to cause mesothelioma. Forty years after Selikoff testified in litigation that occupational asbestos exposure caused an insulator’s colorectal cancer, the Institute of Medicine reviewed the extant evidence and announced that the evidence was “suggestive but not sufficient to infer a causal relationship between asbestos exposure and pharyngeal, stomach, and colorectal cancers.” Jonathan Samet, et al., eds., Institute of Medicine Review of Asbestos: Selected Cancers (2006).[1] The Institute of Medicine’s monograph has fostered a more circumspect approach in some of the federal agencies. The National Cancer Institute’s website now proclaims that the evidence is insufficient to permit a conclusion that asbestos causes non-pulmonary cancers of the gastrointestinal tract and throat.[2]

As discussed elsewhere, Selikoff testified as early as 1966 that asbestos causes colorectal cancer, in advance of any meaningful evidence to support such an opinion, and then he, and his protégées, worked hard to lace the scientific literature with their pronouncements on the subject, without disclosing their financial, political, and positional conflicts of interest.[3]

With the plaintiffs’ firm’s (Lanier’s) zealous pursuit of bias information from the University of Idaho in the LoGiudice case, what are we to make of Selikoff’s and his minions’ dubious ethics of failed disclosure? Do Selikoff and Mount Sinai receive a pass because their asbestos research predated the discovery of ethics? The “Lobby” (as the late Douglas Liddell called Selikoff and his associates)[4] has seriously distorted truth-finding in any number of litigations, but nowhere are the Lobby’s distortions more at work than in lawsuits for claimed asbestos injuries. Here the conflicts of interest truly have had a deleterious effect on the quality of civil justice. As we saw with the Selikoff exceptionalism displayed by the New York Supreme Court in reviewing third-party subpoenas,[5] some courts seem bent on ignoring evidence-based analyses in favor of Mount Sinai faith-based initiatives.

Current Asbestos Litigation Claims Involving Colorectal Cancer

Although Selikoff has passed from the litigation scene, his trainees and followers have lined up at the courthouse door to propagate his opinions. Even before the IOM’s 2006 monograph, more sophisticated epidemiologists consistently rejected the Selikoff conclusion on asbestos and colon cancer, which grew out of Selikoff’s litigation activities.[6] And yet, the minions keep coming.

In the pre-Daubert era, defendants lacked an evidentiary challenge to Selikoff’s opinion that asbestos caused colorectal cancer. Instead of contesting the legal validity or sufficiency of the plaintiffs’ general causation claims, defendants often focused on the unreliability of the causal attribution for the specific claimant’s disease. These early cases are often misunderstood to be challenges to expert witnesses’ opinions about whether asbestos causes colorectal cancer; they were not.[7]

Of course, after the IOM’s 2006 monograph, active expert witness gatekeeping should have eliminated asbestos gastrointestinal cancer claims, but sadly they persist. Perhaps courts simply considered the issue “grandfathered” in from the era in which judicial scrutiny of expert witness opinion testimony was restricted. Perhaps defense counsel are failing to frame and support their challenges properly. Perhaps both.

Arthur Frank Jumps the Gate

Although ostensibly a “Frye” state, Pennsylvania judges have, when moved by the occasion, applied a fairly thorough analysis to proffered expert witness opinions.[8] On occasion, Pennsylvania judges have excluded unreliably or invalidly supported causation opinions under the Pennsylvania version of the Frye standard. A recent case, however, tried before a Workers’ Compensation Judge (WCJ), and appealed to the Commonwealth Court, shows how inconsistent the application of the standard can be, especially when Selikoff’s legacy views are at issue.

Michael Piatetsky, an architect, died of colorectal cancer. Before his death, he and his wife filed a workers’ compensation claim, in which they alleged that his disease was caused by his workplace exposure to asbestos. Garrison Architects v. Workers’ Comp. Appeal Bd. (Piatetsky), No. 1095 C.D. 2015, Pa. Cmwlth. Ct., 2016 Pa. Commw. Unpub. LEXIS 72 (Jan. 22, 2016) [cited as Piatetsky]. As an architect, Mr. Piatetsky was almost certainly knowledgeable about asbestos hazards generally. Despite his knowledge, Piatetsky eschewed personal protective equipment even when working at dusty work sites well marked with warnings. Although he had engaged in culpable conduct, the employer in workers’ compensation proceedings does not have ordinary negligence defenses, such as contributory negligence or assumption of risk.

In litigating the Piatetskys’ claim, the employer dragged its feet and failed to name an expert witness. Eventually, after many requests for continuances, the Workers’ Compensation Judge barred the employer from presenting an expert witness. With the record closed, and without an expert witness, the Judge understandably ruled in favor of the claimant.

The employer, sans expert witness, had to confront claimant’s expert witness, Arthur L. Frank, a minion of Selikoff and a frequent testifier in asbestos and many other litigations. Frank, of course, opined that asbestos causes colon cancer and that it caused Mr. Piatetsky’s cancer. Mr. Piatetsky’s colon cancer originated on the right side of his colon. Dr. Frank thus emphasized that asbestos causes colon cancer in all locations, but especially on the right side in view of one study’s having concluded “that colon cancer caused by asbestos is more likely to begin on the right side.” Piatetsky at *6.

On appeal, the employer sought relief on several issues, but the only one of interest here is the employer’s argument “that Claimant’s medical expert based his opinion on flimsy medical studies.” Piatetsky at *10. The employer’s appeal seemed to go off the rails with the insistence that the Claimant’s medical opinion was invalid because Dr. Frank relied upon studies not involving architects. Piatetsky at *14. The Commonwealth Court was able to point to testimony, although probably exaggerated, which suggested that Mr. Piatetsky had been heavily exposed, at least at times, and thus his exposure was similar to that in the studies cited by Frank.

With respect to Frank’s right-sided (non-sinister) opinion, the Commonwealth Court framed the employer’s issue as a contention that Dr. Frank’s opinion on the asbestos-relatedness of right-sided colon cancer was “not universally accepted.” But universal acceptance has never been the test or standard for the rejection or acceptance of expert witness opinion testimony in any state.  Either the employer badly framed its appeal, or the appellate court badly misstated the employer’s ground for relief. In any event, the Commonwealth Court never addressed the relevant legal standard in its discussion.

The Claimant argued that the hearing Judge had found that Frank’s opinion was based on “numerous studies.” Piatetsky at *15. None of these studies is cited to permit the public to assess the argument and the Court’s acceptance of it. The appellate court made inappropriately short work of this appellate issue by confusing general and specific causation, and invoking Mr. Piatetsky’s age, his lack of family history of colon cancer, Frank’s review of medical records, testimony, and work records, as warranting Frank’s causal inference. None of these factors is relevant to general causation, and none is probative of the specific causation claim.  Many if not most colon cancers have no identifiable risk factor, and Dr. Frank had no way to rule out baseline risk, even if there were an increased risk from asbestos exposure. Piatetsky at *16. With no defense expert witness, the employer certainly had a difficult appellate journey. It is hard for the reader of the Commonwealth Court’s opinion to determine whether the case was poorly defended, poorly briefed on appeal, or poorly described by the appellate judges.

In any event, the right-sided ruse of Arthur Frank went unreprimanded. Intellectual due process might have led the appellate court to cite the article at issue, but it failed to do so. It is interesting and curious to see how the appellate court gave a detailed recitation of the controverted facts of asbestos exposure, while remaining glib in its description of the scientific issues and evidence. Nonetheless, the article vaguely referenced, and uncited by the appellate court, was no doubt the paper: K. Jakobsson, M. Albin & L. Hagmar, “Asbestos, cement, and cancer in the right part of the colon,” 51 Occup. & Envt’l Med. 95 (1994).

These authors observed 24 right-sided colon cancers versus 9.63 expected, and they concluded that there was an increased rate of right-sided colon cancer in the asbestos cement plant workers. Notably, the authors’ reference population had a curiously low rate of right-sided colon cancer. For left-sided colon cancer, the authors expected 9.3 cases but observed only 5 in the asbestos-cement cohort. Contrary to Frank’s suggestion, the authors did not conclude that right-sided colon cancers had been caused by asbestos; indeed, the authors never reached any conclusion whether asbestos causes colorectal cancer under any circumstances. In their discussion, these authors noted that “[d]espite numerous epidemiological and experimental studies, there is no consensus concerning exposure to asbestos and risks of gastrointestinal cancer.” Jakobsson at 99; see also Dorsett D. Smith, “Does Asbestos Cause Additional Malignancies Other than Lung Cancer,” chap. 11, in Dorsett D. Smith, The Health Effects of Asbestos: An Evidence-based Approach 143, 154 (2015). Even this casual description of the Jakobsson study will alert the learned reader to the multiple comparisons that went on in this cohort study, with outcomes reported for left, right, rectum, and multiple sites, without any adjustment to the level of significance. Risk of right-sided colon cancer was not a pre-specified outcome of the study, and the results of subsequent studies have never corroborated this small cohort study.
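For readers who want to check the arithmetic, here is a rough sketch (my own, not from the opinion or the study) of the reported right-sided figures, using an exact Poisson upper tail and a Bonferroni-style threshold; the number of comparisons is an assumption for illustration:

```python
# Rough check of the reported right-sided figures: 24 observed vs 9.63
# expected right-sided colon cancers. The exact Poisson upper-tail p-value
# below is the nominal, unadjusted figure; it ignores the multiple
# site-by-side comparisons the study made and the post-hoc choice of subgroup.
import math

observed, expected = 24, 9.63
sir = observed / expected  # standardized incidence ratio, about 2.49

# P(X >= observed) for X ~ Poisson(expected)
p_upper = 1 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                  for k in range(observed))

# Bonferroni-style threshold, assuming (hypothetically) 8 site-by-side tests
adjusted_alpha = 0.05 / 8

print(f"SIR = {sir:.2f}, nominal one-sided p = {p_upper:.2g}, "
      f"adjusted alpha = {adjusted_alpha}")
```

Even where the nominal p-value looks small, a post-hoc subgroup finding drawn from many comparisons, against an anomalously low reference rate, carries far less evidential weight than the unadjusted figure suggests, which is the point the Commonwealth Court never engaged.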

A sane understanding of subgroup analyses is important to judicial gatekeeping. See “Sub-group Analyses in Epidemiologic Studies — Dangers of Statistical Significance as a Bright-Line Test” (May 17, 2011). The chapter on statistics in the Reference Manual for Scientific Evidence (3d ed. 2011) has some prudent caveats for multiple comparisons and testing, but neither the chapter on epidemiology nor the chapter on clinical medicine[9] provides any sense of the dangers of over-interpreting subgroup analyses.

Some commentators have argued that we must not dissuade scientists from doing subgroup analyses, but the issue is not whether they should be done, but how they should be interpreted.[10] Certainly many authors have called for caution in how subgroup analyses are interpreted,[11] but apparently expert witness Arthur Frank did not receive the memo before testifying in the Piatetsky case, and the Commonwealth Court did not receive it before deciding this case.


[1] As good as the IOM process can be on occasion, even its reviews are sometimes less than thorough. The asbestos monograph gave no consideration to alcohol in the causation of laryngeal cancer, and no consideration to smoking in its analysis of asbestos and colorectal cancer. See, e.g., Peter S. Liang, Ting-Yi Chen & Edward Giovannucci, “Cigarette smoking and colorectal cancer incidence and mortality: Systematic review and meta-analysis,” 124 Internat’l J. Cancer 2406, 2410 (2009) (“Our results indicate that both past and current smokers have an increased risk of [colorectal cancer] incidence and mortality. Significantly increased risk was found for current smokers in terms of mortality (RR = 1.40), former smokers in terms of incidence (RR = 1.25)”); Lindsay M. Hannan, Eric J. Jacobs and Michael J. Thun, “The Association between Cigarette Smoking and Risk of Colorectal Cancer in a Large Prospective Cohort from the United States,” 18 Cancer Epidemiol., Biomarkers & Prevention 3362 (2009).

[2] National Cancer Institute, “Asbestos Exposure and Cancer Risk” (last visited July 10, 2016) (“In addition to lung cancer and mesothelioma, some studies have suggested an association between asbestos exposure and gastrointestinal and colorectal cancers, as well as an elevated risk for cancers of the throat, kidney, esophagus, and gallbladder (3, 4). However, the evidence is inconclusive.”).

[3] Compare “Health Hazard Progress Notes: Compensation Advance Made in New York State,” 16(5) Asbestos Worker 13 (May 1966) (thanking Selikoff for testifying in a colon cancer case) with, Irving J. Selikoff, “Epidemiology of gastrointestinal cancer,” 9 Envt’l Health Persp. 299 (1974) (arguing for his causal conclusion between asbestos and all gastrointestinal cancers, with no acknowledgment of his role in litigation or his funding from the asbestos insulators’ union).

[4] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997); see also “The Lobby Lives – Lobbyists Attack IARC for Conducting Scientific Research” (Feb. 19, 2013).

[5] See “The LoGiudice Inquisitiorial Subpoena & Its Antecedents in N.Y. Law” (July 14, 2016).

[6] See, e.g., Richard Doll & Julian Peto, Asbestos: Effects on health of exposure to asbestos 8 (1985) (“In particular, there are no grounds for believing that gastrointestinal cancers in general are peculiarly likely to be caused by asbestos exposure.”).

[7] See “Landrigan v. The Celotex Corporation, Revisited” (June 4, 2013); Landrigan v. The Celotex Corp., 127 N.J. 404, 605 A.2d 1079 (1992); Caterinicchio v. Pittsburgh Corning Corp., 127 N.J. 428, 605 A.2d 1092 (1992). In both Landrigan and Caterinicchio, there had been no challenge to the reliability or validity of the plaintiffs’ expert witnesses’ general causation opinions. Instead, the trial courts entered judgments, assuming arguendo that asbestos can cause colorectal cancer (a dubious proposition), on the ground that the low relative risk cited by plaintiffs’ expert witnesses (about 1.5) was factually insufficient to support a verdict for plaintiffs on specific causation. Indeed, the relative risk suggested that the odds were about 2 to 1 in defendants’ favor that the plaintiffs’ colorectal cancers were not caused by asbestos.

[8] See, e.g., Porter v. Smithkline Beecham Corp., Sept. Term 2007, No. 03275. 2016 WL 614572 (Phila. Cty. Com. Pleas, Oct. 5, 2015); “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015).

[9] John B. Wong, Lawrence O. Gostin & Oscar A. Cabrera, “Reference Guide on Medical Testimony,” in Reference Manual for Scientific Evidence 687 (3d ed. 2011).

[10] See, e.g., Phillip I. Good & James W. Hardin, Common Errors in Statistics (and How to Avoid Them) 13 (2003) (proclaiming a scientists’ Bill of Rights under which they should be allowed to conduct subgroup analyses); Ralph I. Horwitz, Burton H. Singer, Robert W. Makuch, Catherine M. Viscoli, “Clinical versus statistical considerations in the design and analysis of clinical research,” 51 J. Clin. Epidemiol. 305 (1998) (arguing for the value of subgroup analyses). In United States v. Harkonen, the federal government prosecuted a scientist for fraud in sending a telecopy that described a clinical trial as “demonstrating” a benefit in a subgroup of a secondary trial outcome.  Remarkably, in the Harkonen case, the author, and criminal defendant, was describing a result in a pre-specified outcome, in a plausible but post-hoc subgroup, which result accorded with prior clinical trials and experimental evidence. United States v. Harkonen (D. Calif. 2009); United States v. Harkonen (D. Calif. 2010) (post-trial motions), aff’d, 510 F. App’x 633 (9th Cir. 2013) (unpublished), cert. denied, 134 S. Ct. 824, ___ U.S. ___ (2014); Brief by Scientists And Academics as Amici Curiae In Support Of Petitioner, On Petition For Writ Of Certiorari in the Supreme Court of the United States, W. Scott Harkonen v. United States, No. 13-180 (filed Sept. 4, 2013).

[11] See “Sub-group Analyses in Epidemiologic Studies — Dangers of Statistical Significance as a Bright-Line Test” (May 17, 2011) (collecting commentary); see also Lemuel A. Moyé, Statistical Reasoning in Medicine: The Intuitive P-Value Primer 206, 225 (2d ed. 2006) (noting that subgroup analyses are often misleading: “Fishing expeditions for significance commonly catch only the junk of sampling error”); Victor M. Montori, Roman Jaeschke, Holger J. Schünemann, Mohit Bhandari, Jan L. Brozek, P.J. Devereaux & Gordon H. Guyatt, “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004) (“Beware subgroup analysis”); Susan F. Assmann, Stuart J. Pocock, Laura E. Enos, Linda E. Kasten, “Subgroup analysis and other (mis)uses of baseline data in clinical trials,” 355 Lancet 1064 (2000); George Davey Smith & Mathias Egger, “Commentary: Incommunicable knowledge? Interpreting and applying the results of clinical trials and meta-analyses,” 51 J. Clin. Epidemiol. 289 (1998) (arguing against post-hoc hypothesis testing); Douglas G. Altman, “Statistical reviewing for medical journals,” 17 Stat. Med. 2662 (1998); Douglas G. Altman, “Commentary: Within trial variation – A false trail?” 51 J. Clin. Epidemiol. 301 (1998) (noting that observed associations are expected to vary across subgroups because of random variability); Christopher Bulpitt, “Subgroup Analysis,” 2 Lancet 31 (1988).