TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Historians Noir

November 18th, 2014

David Rosner and Gerald Markowitz are two “labor” historians who make it their business to testify as historian expert witnesses in occupational and environmental disease cases. They apparently do not like lawyers who argue that they should have less business in the courts[1]. Rosner and Markowitz have obsessed about my article critical of their scholarship, and about historian witnesses, but rather than respond as scholars, they have responded largely ad hominem by suggesting that my criticisms were motivated by their testifying for the litigation industry. They have accused me of “attacking the messenger,” and they have responded by attacking the messenger. And their “attacks,” feeble though they may be, have come repetitively[2], suggesting some obsession and compulsion.

Last month[3], Professor Rosner gave a public lecture on his testimonial adventures as an historian expert witness, “Judging Science: The Historian, the Courts, & Discerning Responsibility for Environmental Pollution.” The lecture, given at Columbia University’s Heyman Center, lasted a little over an hour and exemplifies Rosner’s approach to “historifying,” as well as why courts should be wary of permitting such testimony. Here is how the Heyman Center’s website describes the talk:

“Over the past twenty years a vast public negotiation has taken place over the causes of, and responsibility for, disease. For the most part this discussion has flown under the radar of doctors, historians and public health professionals. This talk will look at a number of environmental pollution and public health cases over the course of the past two decades in which Professor Rosner has participated.”

Rosner begins by recounting his initial involvement in litigation, in Texas cases involving claims for silicosis. Rosner asserts that his involvement was necessitated by the defendants’ position that no one had ever heard of silicosis, and that silicosis had vanished from the medical literature after 1940. Rosner’s characterization of the claims and defenses of the Odessa sandblasting cases is, however, badly flawed, and his suggestion that silicosis had disappeared from the medical literature at the end of the 1930s is simply false.

According to Rosner (about 22:40 into the video), Histrionic Historians was an “attack” made in response to his, and his friend Gerald Markowitz’s, testimony in the Odessa, Texas case. Wrong. By the time Histrionic Historians was published, Rosner and Markowitz were listed as retained expert witnesses in hundreds if not thousands of cases in the silicosis MDL, see In re Silica Prods. Liab. Litig., 398 F. Supp. 2d 563 (S.D. Tex. 2005), and they were showing up in several other isolated cases around the country. One of the Odessa silicosis cases had gone up to the Texas Supreme Court, which reversed the judgment for plaintiff on the ground that the jury must consider the knowledge and role of the intermediary employer in the context of an occupational disease claim against a remote supplier. Humble Sand & Gravel, Inc. v. Gomez, 146 S.W.3d 170 (Tex. 2004). The cases in front of Judge Jack were, of course, mostly fraudulent, and the liability in the remaining cases was tenuous to non-existent. In his Heyman Center lecture last month, Rosner suggests that my article was an attempt to “take back” from him and Markowitz the narrative that had been historically controlled by industry (around 34:50 of the video). The fact is, however, that industry never controlled the silicosis narrative, which played out in the 1930s among organized labor, government, academics, and industry. Histrionic Historians was only a preliminary essay designed to show that the Rosner narrative was false.

Towards the end of his lecture, Rosner attempts to describe the consequences of the workmen’s compensation system. He argues that byssinosis, anthracosilicosis, and asbestosis were once considered “silicosis,” on the theory that silica was doing the damage, a stunning claim considering that byssinosis is caused by cotton dust and does not involve mineral dust of any kind. According to Rosner, the other pneumoconioses were “politically divided off of the silicosis issue” so that workers could regain the ability to sue, since workers could not sue for silicosis (due to statutory employer immunity). Video at 59:15-40. With no regard for the medical or scientific history of the knowledge of the various pneumoconioses, Rosner states that asbestosis and byssinosis were:

“in some sense created as clinical entities because of the political implications of being identified as silicosis after 1940. Silicosis was no longer compensable and so you had to find new definitions. It is a very interesting history of these disease that were once considered forms of silicosis.”

Video at 1:00:30-51. Very interesting, and entirely bogus. Asbestosis and silicosis were considered distinct diseases well before 1940, and medical science distinguished the two pneumoconioses as having different causes, different diagnostic criteria, and different sequelae. And neither asbestos nor cotton dust contains silica. A great example of the misinformation that historians unfamiliar with the relevant medical history can spout.

Historians Acting Badly

In response to a question from the audience, Professor Rosner recounts the events of an historical society meeting at which he and his colleagues learned that the President had been consulting for tobacco defendants in litigation. Apparently, this revelation almost led to fistfights in the halls. So much for diversity and tolerance! Video around 1:10:00. Rosner tells us that he is one of only about three historians who have decided to work for plaintiffs and labor unions. Video at 1:09:45.

Standards for Historian Testimony

Rosner criticizes the historians who testify for tobacco defendants on the grounds that they were not shown everything known (secretly) by the tobacco companies. These historians thus testified on only the public record, and their testimony was therefore misleading. According to Rosner, you (the aspiring historian expert witness) “must see everything”; “you are entitled to see all the documents.” Otherwise, you are at risk of being given documents selectively by instructing counsel. Video at 1:11:10-29. There could be a semblance of a criterion in Rosner’s remarks for evaluating historian expert witness testimony. Rosner, however, understandably states that he does not know whether he wants the American Historical Association to become involved in policing historian witness testimony.

Historian Testimony – Beyond the Ken?

Rosner fielded a question from the audience about how courts viewed historian testimony. Of course, Rosner is not a lawyer, and his answer did not attempt to summarize the judicial antipathy towards historian testimony when such testimony is not necessary. Instead, Rosner focused on his own niche of testifying in lead, asbestos, and silica cases, where courts have been more indulgent in permitting historian expert witness testimony. “They [the courts] are getting used to it,” Rosner reports. “Juries love” historian testimony because historians speak English, and “they understand it,” unlike the scientific testimony in the case. According to Rosner, historians are not pretending to have a special expertise that the jury cannot understand, and the materials relied upon do not require interpretation by an expert the way scientific studies do. Video at 1:12:24-14:04. Q.E.D.!


[1] Nathan Schachtman & John Ulizio, “Courting Clio: Historians and Their Testimony in Products Liability Action,” in: Brian Dolan & Paul Blanc, eds., At Work in the World: Proceedings of the Fourth International Conference on the History of Occupational and Environmental Health, Perspectives in Medical Humanities, University of California Medical Humanities Consortium, University of California Press (2012); Schachtman, “On Deadly Dust & Histrionic Historians,” Mealey’s Silica Litigation Report, Vol. 2, No. 3 (Nov. 2003). See also “How Testifying Historians Are Like Lawn-Mowing Dogs” (May 15, 2010); “A Walk on the Wild Side” (July 16, 2010); “Counter Narratives for Hire” (Dec. 13, 2010).

[2] Four articles dwell on the issue. See D. Rosner & G. Markowitz, “The Trials and Tribulations of Two Historians: Adjudicating Responsibility for Pollution and Personal Harm,” 53 Medical History 271, 280-81 (2009); D. Rosner & G. Markowitz, “L’histoire au prétoire. Deux historiens dans les procès des maladies professionnelles et environnementales,” 56 Revue D’Histoire Moderne & Contemporaine 227, 238-39 (2009); David Rosner, “Trials and Tribulations: What Happens When Historians Enter the Courtroom,” 72 Law & Contemporary Problems 137, 152 (2009); David Rosner & Gerald Markowitz, “The Historians of Industry,” Academe (Nov. 2010). To these publications, these “forensic historians” have added yet another recitation in an epilogue to a revised edition of one of their books. Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution at 313-14 (U. Calif. rev. ed. 2013).

[3] October 22, 2014.

Rhetorical Strategy in Characterizing Scientific Burdens of Proof

November 15th, 2014

The recent opinion piece by Kevin Elliott and David Resnik exemplifies a rhetorical strategy that idealizes and elevates a burden of proof in science, and then declares it is different from legal and regulatory burdens of proof. Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik]. What is astonishing about this strategy is the lack of support for the claim that “science” imposes such a high burden of proof that we can safely ignore it when making “practical” legal or regulatory decisions. Here is how the authors state their claim:

“Very high standards of evidence are typically expected in order to infer causal relationships or to approve the marketing of new drugs. In other social contexts, such as tort law and chemical regulation, weaker standards of evidence are sometimes acceptable to protect the public (Cranor 2008).”

Id.[1] Remarkably, the authors cite no statute, no case law, and no legal treatise for the proposition that the tort law standard for causation is somehow lower than the standard for a scientific claim of causality. Similarly, the authors cite no support for their claim that regulatory pronouncements are judged under a lower burden. One need only consider the burden a sponsor faces in establishing medication efficacy and safety in a New Drug Application before the Food and Drug Administration. Of course, when agencies engage in assessing causal claims regarding safety, they often act under regulations and guidances that lessen the burden of proof from what would be required in a tort action.[2]

And most important, Elliott and Resnik fail to cite any work by scientists for the claim that scientists require a greater burden of proof before accepting a causal claim. When these authors’ claims of differential burdens of proof were challenged by a scientist, Dr. David Schwartz, in a letter to the editors, the authors insisted that they were correct, again citing Carl Cranor, a non-lawyer, non-scientist:

“we caution against equating the standards of evidence expected in tort law with those expected in more traditional scientific contexts. The tort system requires only a preponderance of evidence (> 50% likelihood) to win a case; this is much weaker evidence than scientists typically demand when presenting or publishing results, and confusion about these differing standards has led to significant legal controversies (Cranor 2006).”

Reply to Dr. Schwartz. The only thing the authors added to the discussion was a citation to the same work by Carl Cranor[3], with a different date for the book.

Whence comes the assertion that science has a heavier burden of proof? Elliott and Resnik cite Cranor for their remarkable proposition, and so where did Cranor find support for the proposition at issue here? In his 1993 book, Cranor suggests that we “can think of type I and II error rates as ‘standards of proof’,” which begs the question whether they are appropriately used to assess significance or posterior probabilities[4]. Cranor goes so far in his 1993 book as to describe the usual level of alpha as the “95%” rule, and to assert that regulatory agencies require something akin to proof “beyond a reasonable doubt” when they require two “statistically significant” studies[5]. Thus Cranor’s opinion has its origins in his commission of the transposition fallacy[6].
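
The transposition fallacy lies in reading the significance level, a probability of the data given the null hypothesis, as though it were the probability of the null hypothesis given the data. To make the point concrete, here is a minimal sketch in Python, with purely hypothetical numbers for the prior plausibility of the causal claim and for the power of the study; nothing turns on the particular values:

```python
# Illustrative only: all numbers are hypothetical, chosen to show that
# P(significant result | no effect) -- what the 5% significance level measures --
# is not P(no effect | significant result).
alpha = 0.05        # conventional significance level (Type I error rate)
power = 0.80        # assumed probability of detecting a real effect (1 - Type II)
prior_alt = 0.10    # assumed prior probability that the causal claim is true

prior_null = 1.0 - prior_alt

# Probability of a "statistically significant" result under each hypothesis
p_sig_given_null = alpha
p_sig_given_alt = power

# Bayes' theorem: probability that the null hypothesis is true,
# given that a significant result was observed
p_sig = p_sig_given_null * prior_null + p_sig_given_alt * prior_alt
p_null_given_sig = p_sig_given_null * prior_null / p_sig

print(f"P(null hypothesis true | significant result) = {p_null_given_sig:.2f}")
# ~0.36 on these assumptions, not 0.05
```

On these assumed inputs, a “statistically significant” result still leaves better than a one-in-three chance that the null hypothesis is true; the 5% significance level, by itself, supplies nothing like “95% certainty” of causation.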

Cranor has persisted in his fallacious analysis in his later books. In his 2006 book, he erroneously equates the 95% coefficient of statistical confidence with 95% certainty of knowledge[7]. Later in the text, he asserts that agency regulations are written when supported by proof “beyond a reasonable doubt.”[8]

To be fair, it is possible to find regulators stating something close to what Cranor asserts, but only when they themselves are committing the transposition fallacy:

“Statistical significance is a mathematical determination of the confidence in the outcome of a test. The usual criterion for establishing statistical significance is the p-value (probability value). A statistically significant difference in results is generally indicated by p < 0.05, meaning there is less than a 5% probability that the toxic effects observed were due to chance and were not caused by the chemical. Another way of looking at it is that there is a 95% probability that the effect is real, i.e., the effect seen was the result of the chemical exposure.”

U.S. Dep’t of Labor, Guidance for Hazard Determination for Compliance with the OSHA Hazard Communication Standard (29 CFR § 1910.1200) Section V (July 6, 2007).
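
The 5% in this guidance is, in fact, a statement about the data given the null hypothesis: it is the long-run rate at which studies of a chemical with no effect at all will nonetheless cross the significance threshold. A short simulation sketch (hypothetical study sizes; the point does not depend on them) makes the conditioning explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_arm = 10_000, 100

# Simulate studies of a chemical that has NO effect: exposed and control
# outcomes are drawn from the same distribution (the null hypothesis is true).
exposed = rng.normal(0.0, 1.0, size=(n_studies, n_per_arm))
control = rng.normal(0.0, 1.0, size=(n_studies, n_per_arm))

# Approximate two-sample test statistic for each simulated study
diff = exposed.mean(axis=1) - control.mean(axis=1)
se = np.sqrt(exposed.var(axis=1, ddof=1) / n_per_arm +
             control.var(axis=1, ddof=1) / n_per_arm)
z = diff / se

# Fraction of null studies declared "significant" at the two-sided 0.05 level
false_alarms = np.mean(np.abs(z) > 1.96)
print(f"'Significant' results when the chemical does nothing: {false_alarms:.3f}")
# ~0.05: the 5% is the rate of false alarms given the null hypothesis,
# not the probability that an observed effect "is real."
```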

And it is similarly possible to find policy wonks expressing similar views. In 1993, the Carnegie Commission published a report in which it tried to explain away junk science as simply the discrepancy in burdens of proof between law and science, but its reasoning clearly points to the Commission’s commission of the transposition fallacy:

“The reality is that courts often decide cases not on the scientific merits, but on concepts such as burden of proof that operate differently in the legal and scientific realms. Scientists may misperceive these decisions as based on a misunderstanding of the science, when in actuality the decision may simply result from applying a different norm, one that, for the judiciary, is appropriate.  Much, for instance, has been written about ‘junk science’ in the courtroom. But judicial decisions that appear to be based on ‘bad’ science may actually reflect the reality that the law requires a burden of proof, or confidence level, other than the 95 percent confidence level that is often used by scientists to reject the possibility that chance alone accounted for observed differences.”

The Carnegie Commission on Science, Technology, and Government, Report on Science and Technology in Judicial Decision Making 28 (1993)[9].

Resnik and Cranor’s rhetoric is a commonplace in the courtroom. Here is how the rhetorical strategy plays out. Plaintiffs’ counsel elicit concessions from defense expert witnesses that they are using the “norms” and standards of science in presenting their opinions. Counsel then argue to the finder of fact that the defense experts are wonderful, but irrelevant, because the fact finder must decide the case on a lower standard. This stratagem finds support in the writings of plaintiffs’ counsel and their expert witnesses[10]. The stratagem also shows up in the writings of law professors who are critical of the law’s embrace of scientific scruples in the courtroom[11].

The cacophony of error, from advocates and commentators, has led the courts into frequent error on the subject. Thus, Judge Pauline Newman, who sits on the United States Court of Appeals for the Federal Circuit, and who was a member of the Committee on the Development of the Third Edition of the Reference Manual on Scientific Evidence, wrote in one of her appellate opinions[12]:

“Scientists as well as judges must understand: ‘the reality that the law requires a burden of proof, or confidence level, other than the 95 percent confidence level that is often used by scientists to reject the possibility that chance alone accounted for observed differences’.”

Reaching back even further into the judiciary’s wrestling with the difference between legal and scientific standards of proof, we have one of the clearest, and most clearly incorrect, statements of the matter[13]:

“Petitioners demand sole reliance on scientific facts, on evidence that reputable scientific techniques certify as certain. Typically, a scientist will not so certify evidence unless the probability of error, by standard statistical measurement, is less than 5%. That is, scientific fact is at least 95% certain.  Such certainty has never characterized the judicial or the administrative process. It may be that the ‘beyond a reasonable doubt’ standard of criminal law demands 95% certainty.  Cf. McGill v. United States, 121 U.S.App. D.C. 179, 185 n.6, 348 F.2d 791, 797 n.6 (1965). But the standard of ordinary civil litigation, a preponderance of the evidence, demands only 51% certainty. A jury may weigh conflicting evidence and certify as adjudicative (although not scientific) fact that which it believes is more likely than not. ***”

The 95% certainty appears to derive from 95% confidence intervals, although “confidence” is a technical term in statistics, and it most certainly does not mean the probability of the alternative hypothesis under consideration. Similarly, the probability that is less than 5% is not the probability that the null hypothesis is correct. The United States Court of Appeals for the District of Columbia Circuit thus fell for the rhetorical gambit in accepting the strawman that scientific certainty is 95%, whereas civil and administrative law certainty is a smidgeon above 50%.
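
The technical meaning of “confidence” is a coverage property of the procedure that generates the intervals, not a 95% probability that any given hypothesis is true. A minimal simulation sketch, again with hypothetical numbers, illustrates what the 95% actually describes:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma, n, n_studies = 10.0, 2.0, 50, 10_000

# Repeatedly sample and build a nominal 95% confidence interval for the mean
# (population standard deviation treated as known, for simplicity)
samples = rng.normal(true_mean, sigma, size=(n_studies, n))
sample_means = samples.mean(axis=1)
half_width = 1.96 * sigma / np.sqrt(n)

covered = np.mean((sample_means - half_width <= true_mean) &
                  (true_mean <= sample_means + half_width))
print(f"Proportion of intervals covering the true mean: {covered:.3f}")
# ~0.95: the "95%" is a long-run property of the interval-generating procedure,
# not the probability that any particular study's hypothesis is true.
```

Calling that coverage property “95% certainty” of scientific fact, as the Ethyl Corp. footnote does, is precisely the transposition described above.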

We should not be too surprised that courts have erroneously described burdens of proof in the realm of science. Even within legal contexts, judges have a very difficult time articulating exactly how different verbal formulations of the burden of proof translate into probability statements. In one of his published decisions, Judge Jack Weinstein reported an informal survey of judges of the Eastern District of New York on what they believed were the correct quantifications of legal burdens of proof. The results confirm that judges, who must deal with burdens of proof as lawyers and then as “umpires” on the bench, have no idea how to translate verbal formulations into mathematical quantities.

U.S. v. Fatico, 458 F.Supp. 388 (E.D.N.Y. 1978). Thus one judge believed that “clear, unequivocal and convincing” required a higher level of proof (90%) than “beyond a reasonable doubt,” and no judge placed “beyond a reasonable doubt” above 95%. A majority of the judges polled placed the criminal standard below 90%.

In running down Elliott, Resnik, and Cranor’s assertions about burdens of proof, all I could find was the commonplace error involved in moving from 95% confidence to 95% certainty. Otherwise, I found scientists declaring that the burden of proof should rest with the scientist who is making the novel causal claim. Carl Sagan famously declaimed, “extraordinary claims require extraordinary evidence[14],” but he appears never to have succumbed to the temptation to provide a quantification of the posterior probability that would cinch the claim.

If anyone has any evidence supporting Resnik’s claim, other than the transposition fallacy or the confusion between certainty and the coefficient of statistical confidence, please share.


 

[1] The authors’ citation is to Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice (NY 2008). Professor Cranor teaches philosophy at one of the University of California campuses. He is neither a lawyer nor a scientist, but he does participate with some frequency as a consultant, and as an expert witness, in lawsuits, on behalf of claimants.

[2] See, e.g., In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (Weinstein, J.) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one.”), aff’d 818 F.2d 145 (2d Cir. 1987) (approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988).

[3] Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice (NY 2006).

[4] Carl F. Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law at 33-34 (Oxford 1993) (“One can think of α, β (the chances of type I and type II errors, respectively), and 1 − β as measures of the ‘risk of error’ or ‘standards of proof.’”). See also id. at 44, 47, 55, 72-76.

[5] Id. (squaring 0.05 to arrive at “the chances of two such rare events occurring” as 0.0025).

[6] Michael D. Green, “Science Is to Law as the Burden of Proof is to Significance Testing: Book Review of Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law,” 37 Jurimetrics J. 205 (1997) (taking Cranor to task for confusing significance and posterior (burden of proof) probabilities). At least one other reviewer was not as discerning as Professor Green and fell for Cranor’s fallacious analysis. Steven R. Weller, “Book Review: Regulating Toxic Substances: A Philosophy of Science and Law,” 6 Harv. J. L. & Tech. 435, 436, 437-38 (1993) (“only when the statistical evidence gathered from studies shows that it is more than ninety-five percent likely that a test substance causes cancer will the substance be characterized scientifically as carcinogenic … to determine legal causality, the plaintiff need only establish that the probability with which it is true that the substance in question causes cancer is at least fifty percent, rather than the ninety-five percent to prove scientific causality”).

[7] Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 100 (2006) (incorrectly asserting, without further support, that “[t]he practice of setting α =.05 I call the “95% rule,” for researchers want to be 95% certain that when knowledge is gained [a study shows new results] and the null hypothesis is rejected, it is correctly rejected.”).

[8] Id. at 266.

[9] There were some scientists on the Commission’s Task Force, but most of the members were lawyers.

[10] Jan Beyea & Daniel Berger, “Scientific misconceptions among Daubert gatekeepers: the need for reform of expert review procedures,” 64 Law & Contemporary Problems 327, 328 (2001) (“In fact, Daubert, as interpreted by ‛logician’ judges, can amount to a super-Frye test requiring universal acceptance of the reasoning in an expert’s testimony. It also can, in effect, raise the burden of proof in science-dominated cases from the acceptable “more likely than not” standard to the nearly impossible burden of ‛beyond a reasonable doubt’.”).

[11] Lucinda M. Finley, “Guarding the Gate to the Courthouse: How Trial Judges Are Using Their Evidentiary Screening Role to Remake Tort Causation Rules,” 49 DePaul L. Rev. 335, 348 n.49 (1999) (“Courts also require that the risk ratio in a study be ‘statistically significant,’ which is a statistical measurement of the likelihood that any detected association has occurred by chance, or is due to the exposure. Tests of statistical significance are intended to guard against what are called ‘Type I’ errors, or falsely ascribing a relationship when there in fact is not one (a false positive).” Finley erroneously ignores the conditioning of the significance probability on the null hypothesis, and she suggests that statistical significance is sufficient for ascribing causality); Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n.30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[12] Hodges v. Secretary Dep’t Health & Human Services, 9 F.3d 958, 967 (Fed. Cir. 1993) (Newman, J., dissenting) (citing and quoting from the Carnegie Commission on Science, Technology, and Government, Report on Science and Technology in Judicial Decision Making 28 (1993)).

[13] Ethyl Corp. v. EPA, 541 F.2d 1, 28 n.58 (D.C. Cir.), cert. denied, 426 U.S. 941 (1976).

[14] Carl Sagan, Broca’s Brain: Reflections on the Romance of Science 93 (1979).

The Standard of Appellate Review for Rule 702 Decisions

November 12th, 2014

Back in the day, some Circuits of the United States Courts of Appeals embraced an asymmetric standard of review of district court decisions concerning the admissibility of expert witness opinion evidence. If the trial court’s decision was to exclude an expert witness, and that exclusion resulted in summary judgment, then the appellate court would take a “hard look” at the trial court’s decision. If the trial court admitted the expert witness’s opinions, and the case proceeded to trial, with the opponent of the challenged expert witness losing the verdict, then the appellate court would take a not-so-“hard look” at the trial court’s decision to admit the opinion. In re Paoli RR Yard PCB Litig., 35 F.3d 717, 750 (3d Cir. 1994) (Becker, J.), cert. denied, 115 S. Ct. 1253 (1995).

In Kumho Tire, the 11th Circuit followed this asymmetric approach, only to have the Supreme Court reverse and render. Unlike the appellate procedure followed in Daubert, the high Court took the extra step of applying the symmetrical standard of review, presumably for the didactic purpose of showing the 11th Circuit how to engage in appellate review. Carmichael v. Kumho Tire Co., 131 F.3d 1433 (11th Cir. 1997), rev’d sub nom. Kumho Tire Co. v. Carmichael, 526 U.S. 137, 158-59 (1999).

If anything is clear from the Kumho Tire decision, it is that courts do not have discretion to apply an asymmetric standard to their evaluation of a challenge, under Federal Rule of Evidence 702, to a proffered expert witness opinion. Justice Stephen Breyer, in his opinion for the Court in Kumho Tire, went on to articulate the requirement that trial courts must inquire whether an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). Again, trial courts do not have the discretion to abandon this inquiry.

The “same intellectual rigor” test may have some ambiguities that make application difficult. For instance, identifying the “relevant” field or discipline may be contested. Physicians traditionally have not been trained in statistical analyses, yet they produce, and rely extensively upon, clinical research, the proper conduct and interpretation of which requires expertise in study design and data analysis. Is the relevant field biostatistics or internal medicine? Given that the validity and reliability of the relied upon studies come from biostatistics, courts need to acknowledge that the rigor test requires identification of the “appropriate” field — the field that produces the criteria or standards of validity and interpretation.

Justice Breyer did grant that trial courts must have some latitude in determining how to conduct their gatekeeping inquiries. Some cases may call for full-blown hearings and post-hearing proposed findings of fact and conclusions of law; some cases may be easily decided upon the moving papers. Justice Breyer’s grant of “latitude,” however, wanders off target:

“The trial court must have the same kind of latitude in deciding how to test an expert’s reliability, and to decide whether or when special briefing or other proceedings are needed to investigate reliability, as it enjoys when it decides whether that expert’s relevant testimony is reliable. Our opinion in Joiner makes clear that a court of appeals is to apply an abuse-of-discretion standard when it ‛review[s] a trial court’s decision to admit or exclude expert testimony’. 522 U. S. at 138-139. That standard applies as much to the trial court’s decisions about how to determine reliability as to its ultimate conclusion. Otherwise, the trial judge would lack the discretionary authority needed both to avoid unnecessary ‛reliability’ proceedings in ordinary cases where the reliability of an expert’s methods is properly taken for granted, and to require appropriate proceedings in the less usual or more complex cases where cause for questioning the expert’s reliability arises. Indeed, the Rules seek to avoid ‛unjustifiable expense and delay’ as part of their search for ‛truth’ and the ‛jus[t] determin[ation]’ of proceedings. Fed. Rule Evid. 102. Thus, whether Daubert ’s specific factors are, or are not, reasonable measures of reliability in a particular case is a matter that the law grants the trial judge broad latitude to determine. See Joiner, supra, at 143. And the Eleventh Circuit erred insofar as it held to the contrary.”

Kumho, 526 U.S. at 152-53.

The segue from discretion to fashion the procedural mechanism for gatekeeping review to discretion to fashion the substantive criteria or standards for determining “intellectual rigor in the relevant field” is a rather abrupt shift. The leap from discretion over procedure to discretion over the substantive criteria of validity has no basis in prior law, in linguistics, or in science. For instance, Justice Breyer would be hard pressed to uphold a trial court’s refusal to consider bias and confounding in assessing whether epidemiologic studies established causality in a given case, notwithstanding the careless language quoted above.

The troubling nature of Justice Breyer’s language did not go unnoticed at the time of the Kumho Tire case. Indeed, three of the Justices in Kumho Tire concurred to clarify:

“I join the opinion of the Court, which makes clear that the discretion it endorses—trial-court discretion in choosing the manner of testing expert reliability—is not discretion to abandon the gatekeeping function. I think it worth adding that it is not discretion to perform the function inadequately.”

Kumho Tire Co. v. Carmichael, 526 U.S. 137, 158-59 (1999) (Scalia, J., concurring, with O’Connor, J., and Thomas, J.).

Of course, this language from Kumho Tire really cannot be treated as binding after the statute interpreted, Rule 702, was modified in 2000. The judges of the inferior federal courts have struggled with Rule 702, sometimes more to evade its reach than to perform gatekeeping in an intelligent way. Quotations of passages from cases decided before the statute was amended and revised should be treated with skepticism.

Recently, the Sixth Circuit quoted Justice Breyer’s language about latitude from Kumho Tire, in the Circuit’s decision involving GE Healthcare’s gadolinium-based contrast medium, Omniscan. Decker v. GE Healthcare Inc., 2014 U.S. App. LEXIS 20049, at *29 (6th Cir. Oct. 20, 2014). Although the Decker case is problematic in many ways, the defendant did not challenge general causation between gadolinium and nephrogenic systemic fibrosis, a painful, progressive connective tissue disease, which afflicted the plaintiff. It is unclear exactly what sort of latitude in applying the statute the Sixth Circuit was hoping to excuse.

Teaching Statistics in Law Schools

November 12th, 2014

Back in 2011, I came across a blog post about a rumor of a trend in law school education to train law students in quantitative methods. Sasha Romanosky, “Two Law School Rumors,” Concurring Opinions (Jan. 20, 2011). Of course, the notion that quantitative methods and statistics would become essential to a liberal and a professional education reaches back to the 19th century. Holmes famously wrote:

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.”

Oliver Wendell Holmes, Jr., “The Path of the Law,” 10 Harvard Law Rev. 457 (1897). A few years later, H.G. Wells expanded the prerequisite from lawyering to citizenship generally:

“The great body of physical science, a great deal of the essential fact of financial science, and endless social and political problems are only accessible and only thinkable to those who have had a sound training in mathematical analysis, and the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex worldwide States that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write.”

Herbert George Wells, Mankind in the Making 204 (1903).

Certainly, there have been arguments made that statistics and quantitative analyses more generally should be part of the law school curriculum. See, e.g., Yair Listokin, “Why Statistics Should be Mandatory for Law Students” Prawfsblawg (May 22, 2006); Steven B. Dow, “There’s Madness in the Method: A Commentary on Law, Statistics, and the Nature of Legal Education,” 57 Okla. L. Rev. 579 (2004).

Judge Richard Posner has described the problem in dramatic Kierkegaardian terms of “fear and loathing.” Jackson v. Pollion, 733 F.3d 786, 790 (7th Cir. 2013). Stopping short of sickness unto death, Judge Posner catalogued the “lapse,” at the expense of others, in the words of judges and commentators:

“This lapse is worth noting because it is indicative of a widespread, and increasingly troublesome, discomfort among lawyers and judges confronted by a scientific or other technological issue. “As a general matter, lawyers and science don’t mix.” Peter Lee, “Patent Law and the Two Cultures,” 120 Yale L.J. 2, 4 (2010); see also Association for Molecular Pathology v. Myriad Genetics, Inc., ___ U.S. ___, 133 S.Ct. 2107, 2120, (2013) (Scalia, J., concurring in part and concurring in the judgment) (“I join the judgment of the Court, and all of its opinion except Part I–A and some portions of the rest of the opinion going into fine details of molecular biology. I am unable to affirm those details on my own knowledge or even my own belief”); Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 599 (1993) (Rehnquist, C.J., concurring in part and dissenting in part) (‘‘the various briefs filed in this case … deal with definitions of scientific knowledge, scientific method, scientific validity, and peer review—in short, matters far afield from the expertise of judges’’); Marconi Wireless Telegraph Co. of America v. United States, 320 U.S. 1, 60–61 (1943) (Frankfurter, J., dissenting in part) (‘‘it is an old observation that the training of Anglo–American judges ill fits them to discharge the duties cast upon them by patent legislation’’); Parke–Davis & Co. v. H.K. Mulford Co., 189 F. 95, 115 (S.D.N.Y. 1911) (Hand, J.) (‘‘I cannot stop without calling attention to the extraordinary condition of the law which makes it possible for a man without any knowledge of even the rudiments of chemistry to pass upon such questions as these … . How long we shall continue to blunder along without the aid of unpartisan and authoritative scientific assistance in the administration of justice, no one knows; but all fair persons not conventionalized by provincial legal habits of mind ought, I should think, unite to effect some such advance’’); Henry J. Friendly, Federal Jurisdiction: A General View 157 (1973) (‘‘I am unable to perceive why we should not insist on the same level of scientific understanding on the patent bench that clients demand of the patent bar, or why lack of such understanding by the judge should be deemed a precious asset’’); David L. Faigman, Legal Alchemy: The Use and Misuse of Science in Law xi (1999) (‘‘the average lawyer is not merely ignorant of science, he or she has an affirmative aversion to it’’).

Of course, ignorance of the law is no excuse for the ordinary citizen[1]. Ignorance of science and math should be no excuse for the ordinary judge or lawyer.

In the 1960s, Michael Finkelstein introduced a course on statistics and probability into the curriculum of the Columbia Law School. The class has had an unfortunate reputation of being “difficult.” One year, when Prof. Finkelstein taught the class at Yale Law School, the students petitioned him not to give a final examination. Apparently, the students were traumatized by facing problems that actually have right and wrong answers! Michael O. Finkelstein, “Teaching Statistics to Law Students,” in L. Pereira-Mendoza, L.S. Kea, T.W. Kee, & W.K. Wong, eds., I Proceedings of the Fifth International Conference on Teaching Statistics at 505 (1998).

Law school is academia’s “last clear chance” to avoid having statistically illiterate lawyers running amok. Do law schools take advantage of the opportunity? For the most part, understanding statistical concepts is not required for admission to, or for graduation from, law school. Some law schools helpfully offer courses to address the prevalent gap in statistics education at the university level. I have collected below some of the available offerings, gathered from law school websites. If you know of any omissions, please let me know.

Law School Courses

Columbia Law School: Statistics for Lawyers (Schachtman)

Emory Law:  Analytical Methods for Lawyers; Statistics for Lawyers (Joanna M. Shepherd)

Florida State College of Law:  Analytical Methods for Lawyers (Murat C. Mungan)

Fordham University School of Law:  Legal Process & Quantitative Methods

George Mason University School of Law:  Quantitative Forensics (Kobayashi); Statistics for Lawyers and Policy Analysts (Dick Ippolito)

George Washington University Law School:  Quantitative Analysis for Lawyers; The Law and Regulation of Science

Georgetown Law School:  Analytical Methods (Joshua Teitelbaum); Analyzing Empirical Research for Lawyers (Juliet Aiken); Epidemiology for Lawyers (Kraemer)

Santa Clara University, School of Law:  Analytical Methods for Lawyers (David Friedman)

Harvard Law School:  Analytical Methods for Lawyers (Kathryn Spier); Analytical Methods for Lawyers; Fundamentals of Statistical Analysis (David Cope)

Loyola Law School:  Statistics (Doug Stenstrom)

Marquette University School of Law:  Quantitative Methods

Michigan State:  Analytical Methods for Lawyers (Statistics) (Gia Barboza); Quantitative Analysis for Lawyers (Daniel Martin Katz)

New York Law School:  Statistical Literacy

New York University Law School:  Quantitative Methods in Law Seminar (Daniel Rubinfeld)

Northwestern Law School:  Quantitative Reasoning in the Law (Jonathan Koehler); Statistics & Probability (Jonathan Koehler)

Notre Dame Law School: Analytical Methods for Lawyers (M. Barrett)

Ohio Northern University Claude W. Pettit College of Law:  Analytical Methods for Lawyers

Stanford Law School:  Statistical Inference in the Law; Bayesian Statistics and Econometrics (Daniel E. Ho); Quantitative Methods – Statistical Inference (Jeff Strnad)

University of Arizona James E. Rogers College of Law:  Law, Statistics & Economics (Katherine Y. Barnes)

University of California at Berkeley:  Quantitative Methods (Kevin Quinn); Introductory Statistics (Justin McCrary)

University of California, Hastings College of Law:  Scientific Method for Lawyers (David Faigman)

University of California at Irvine:  Statistics for Lawyers

University of California at Los Angeles:  Quantitative Methods in the Law (Richard H. Sander)

University of Connecticut School of Law:  Statistical Reasoning in the Law

University of Michigan:  Statistics for Lawyers

University of Minnesota:  Analytical Methods for Lawyers: An Introduction (Parisi)

University of Pennsylvania Law School:  Analytical Methods (David S. Abrams); Statistics for Lawyers (Jon Klick)

University of Texas at Austin:  Analytical Methods (Farnworth)

University of Washington:  Quantitative Methods In The Law (Mike Townsend)

Vanderbilt Law School: Statistical Concepts for Lawyers (Edward Cheng)

Wake Forest: Analytical Methods for Lawyers

Washington University St. Louis School of Law: Social Scientific Research for Lawyers (Andrew D. Martin)

Washington & Lee Law School: The Role of Social Science in the Law (John Keyser)

William & Mary Law School: Statistics for Lawyers

William Mitchell College of Law:  Statistics Workshop (Herbert M. Kritzer)

Yale Law School:  Probability Modeling and Statistics LAW 26403


[1] See Ignorantia juris non excusat.

 

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.