TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Historians Noir

November 18th, 2014

David Rosner and Gerald Markowitz are two “labor” historians who make it their business to testify as historian expert witnesses in occupational and environmental disease cases. They apparently do not like lawyers who argue that they should have less business in the courts[1]. Rosner and Markowitz have obsessed about my article critical of their scholarship, and about historian witnesses, but rather than respond as scholars, they have responded largely ad hominem by suggesting that my criticisms were motivated by their testifying for the litigation industry. They have accused me of “attacking the messenger,” and they have responded by attacking the messenger. And their “attacks,” feeble though they may be, have come repetitively[2], suggesting some obsession and compulsion.

Last month[3], Professor Rosner gave a public lecture on his testimonial adventures as an historian expert witness, “Judging Science: The Historian, the Courts, & Discerning Responsibility for Environmental Pollution.” The lecture, given at Columbia University’s Heyman Center, lasted a little over an hour and exemplifies Rosner’s approach to “historifying,” as well as why courts should be wary of permitting such testimony. Here is how the Heyman Center’s website describes the talk:

“Over the past twenty years a vast public negotiation has taken place over the causes of, and responsibility for, disease. For the most part this discussion has flown under the radar of doctors, historians and public health professionals. This talk will look at a number of environmental pollution and public health cases over the course of the past two decades in which Professor Rosner has participated.”

Rosner begins by recounting his initial involvement in litigation, in Texas cases involving claims for silicosis. Rosner asserts that his involvement was necessitated by the defendants’ position that no one had ever heard of silicosis, and that silicosis had vanished from the medical literature after 1940. Rosner’s characterization of the claims and defenses of the Odessa sandblasting cases is, however, badly flawed, and his suggestion that silicosis had disappeared from the medical literature at the end of the 1930s is simply false.

According to Rosner (about 22:40 into the video), Histrionic Historians was an “attack” made in response to his, and his friend Gerald Markowitz’s, testimony in the Odessa, Texas case. Wrong. By the time Histrionic Historians was published, Rosner and Markowitz were listed as retained expert witnesses in hundreds if not thousands of cases, in the silicosis MDL, see In re Silica Prods. Liab. Litig., 398 F. Supp. 2d 563 (S.D. Tex. 2005), and they were showing up in several other isolated cases around the country. One of the Odessa silicosis cases had gone up to the Texas Supreme Court, which reversed the judgment for plaintiff on the ground that the jury must consider the knowledge and role of the intermediary employer in the context of an occupational disease claim against a remote supplier. Humble Sand & Gravel, Inc. v. Gomez, 146 S.W.3d 170 (Tex. 2004). The cases in front of Judge Jack were, of course, mostly fraudulent, and the liability in the remaining cases was tenuous to non-existent. In his Heyman Center lecture last month, Rosner suggests that my article was an attempt to “take back” from him and Markowitz the narrative that had been historically controlled by industry (around 34:50 of the video). The fact is, however, that industry never controlled the silicosis narrative, which was played out in the 1930s by organized labor, government, academics, and industry. Histrionic Historians was only a preliminary essay designed to show that the Rosner narrative was false.

Towards the end of his lecture, Rosner attempts to describe the consequences of the workman’s compensation system. He argues that byssinosis, anthracosilicosis, and asbestosis were once considered “silicosis,” on the theory that silica was doing the damage, a stunning claim considering that byssinosis is caused by cotton dust, and does not involve any mineral dust of any kind. According to Rosner, the other pneumoconioses were “politically divided off of the silicosis issue” so that workers could regain the ability to sue, since workers could not sue for silicosis (due to statutory employer immunity). Video at 59:15-40. With no regard for the medical or scientific history of the knowledge of the various pneumoconioses, Rosner states that asbestosis and byssinosis were:

“in some sense created as clinical entities because of the political implications of being identified as silicosis after 1940. Silicosis was no longer compensable and so you had to find new definitions. It is a very interesting history of these disease that were once considered forms of silicosis.”

Video at 1:00:30-51. Very interesting, and entirely bogus. Asbestosis and silicosis were considered distinct diseases well before 1940, and medical science distinguished the two pneumoconioses as having different causes, different diagnostic criteria, and different sequelae. And neither asbestos nor cotton dust contains silica. A great example of the misinformation that historians unfamiliar with the relevant medical history can spout.

Historians Acting Badly

In response to a question from the audience, Professor Rosner recounts the events of an historical society meeting at which he and his colleagues learned that the society’s president had been consulting for tobacco defendants in litigation. Apparently, this revelation almost led to fistfights in the halls. So much for diversity and tolerance! Video around 1:10:00. Rosner tells us that he is one of only about three historians who have decided to work for plaintiffs and labor unions. Video at 1:09:45.

Standards for Historian Testimony

Rosner criticizes the historians who testify for tobacco defendants on the grounds that they were not shown everything known (secretly) by the tobacco companies. These historians thus testified on only the public record, and their testimony was therefore misleading. According to Rosner, you (the aspiring historian expert witness) “must see everything”; “you are entitled to see all the documents.” Otherwise, you are at risk of being given documents selectively by instructing counsel. Video at 1:11:10-29. There could be a semblance of a criterion in Rosner’s remarks for evaluating historian expert witness testimony. Understandably, however, Rosner states that he does not know whether he wants the American Historian Association to become involved in policing historian witness testimony.

Historian Testimony – Beyond the Ken?

Rosner fielded a question from the audience about how courts viewed historian testimony. Of course, Rosner is not a lawyer, and his answer did not attempt to summarize the judicial antipathy towards historian testimony when such testimony is unnecessary. Instead, Rosner focused on his own niche of testifying in lead, asbestos, and silica cases, where courts have been more indulgent of permitting historian expert witness testimony. “They [the courts] are getting used to it,” Rosner reports. “Juries love” historian testimony because historians speak English, and “they understand it,” unlike the scientific testimony in the case. According to Rosner, historians are not pretending to have a special expertise that the jury cannot understand, and the materials relied upon do not require interpretation by an expert the way scientific studies do. Video at 1:12:24-1:14:04. Q.E.D.!


[1] Nathan Schachtman & John Ulizio, “Courting Clio: Historians and Their Testimony in Products Liability Action,” in: Brian Dolan & Paul Blanc, eds., At Work in the World: Proceedings of the Fourth International Conference on the History of Occupational and Environmental Health, Perspectives in Medical Humanities, University of California Medical Humanities Consortium, University of California Press (2012); Schachtman, “On Deadly Dust & Histrionic Historians,” Mealey’s Silica Litigation Report Vol. 2, No. 3 (Nov. 2003). See also “How Testifying Historians Are Like Lawn-Mowing Dogs” (May 15, 2010); “A Walk on the Wild Side” (July 16, 2010); “Counter Narratives for Hire” (Dec. 13, 2010).

[2] Four articles dwell on the issue. See D. Rosner & G. Markowitz, “The Trials and Tribulations of Two Historians: Adjudicating Responsibility for Pollution and Personal Harm,” 53 Medical History 271, 280-81 (2009); D. Rosner & G. Markowitz, “L’histoire au prétoire. Deux historiens dans les procès des maladies professionnelles et environnementales,” 56 Revue D’Histoire Moderne & Contemporaine 227, 238-39 (2009); David Rosner, “Trials and Tribulations: What Happens When Historians Enter the Courtroom,” 72 Law & Contemporary Problems 137, 152 (2009); David Rosner & Gerald Markowitz, “The Historians of Industry,” Academe (Nov. 2010). To these publications, these “forensic historians” have added yet another recitation in an epilogue to a revised edition of one of their books. Gerald Markowitz and David Rosner, Deceit and Denial: The Deadly Politics of Industrial Pollution at 313-14 (U. Calif. rev. ed. 2013).

[3] October 22, 2014.

Rhetorical Strategy in Characterizing Scientific Burdens of Proof

November 15th, 2014

The recent opinion piece by Kevin Elliott and David Resnik exemplifies a rhetorical strategy that idealizes and elevates a burden of proof in science, and then declares it is different from legal and regulatory burdens of proof. Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik]. What is astonishing about this strategy is the lack of support for the claim that “science” imposes such a high burden of proof that we can safely ignore it when making “practical” legal or regulatory decisions. Here is how the authors state their claim:

“Very high standards of evidence are typically expected in order to infer causal relationships or to approve the marketing of new drugs. In other social contexts, such as tort law and chemical regulation, weaker standards of evidence are sometimes acceptable to protect the public (Cranor 2008).”

Id.[1] Remarkably, the authors cite no statute, no case law, and no legal treatise for the proposition that the tort law standard for causation is somehow lower than for a scientific claim of causality. Similarly, the authors cite no support for their claim that regulatory pronouncements are judged under a lower burden. One need only consider the burden a sponsor faces in establishing medication efficacy and safety in a New Drug Application before the Food and Drug Administration. Of course, when agencies engage in assessing causal claims regarding safety, they often act under regulations and guidances that lessen the burden of proof from what would be required in a tort action.[2]

And most important, Elliott and Resnik fail to cite to any work of scientists for the claim that scientists require a greater burden of proof before accepting a causal claim. When these authors’ claims of differential burdens of proof were challenged by a scientist, Dr. David Schwartz, in a letter to the editors, the authors insisted that they were correct, again citing to Carl Cranor, a non-lawyer, non-scientist:

“we caution against equating the standards of evidence expected in tort law with those expected in more traditional scientific contexts. The tort system requires only a preponderance of evidence (> 50% likelihood) to win a case; this is much weaker evidence than scientists typically demand when presenting or publishing results, and confusion about these differing standards has led to significant legal controversies (Cranor 2006).”

Reply to Dr. Schwartz. The only thing the authors added to the discussion was to cite to the same work by Carl Cranor[3], but change the date of the book.

Whence comes the assertion that science has a heavier burden of proof? Elliott and Resnik cite Cranor for their remarkable proposition, and so where did Cranor find support for the proposition at issue here? In his 1993 book, Cranor suggests that we “can think of type I and II error rates as ‘standards of proof’,” which begs the question whether they are appropriately used to assess significance or posterior probabilities[4]. Cranor goes so far in his 1993 book as to describe the usual level of alpha as the “95%” rule, and to assert that regulatory agencies require something akin to proof “beyond a reasonable doubt,” when they require two “statistically significant” studies[5]. Thus Cranor’s opinion has its origins in his commission of the transposition fallacy[6].

Cranor has persisted in his fallacious analysis in his later books. In his 2006 book, he erroneously equates the 95% coefficient of statistical confidence with 95% certainty of knowledge[7]. Later in the text, he asserts that agency regulations are written when supported by proof “beyond a reasonable doubt.”[8]

To be fair, it is possible to find regulators stating something close to what Cranor asserts, but only when they themselves are committing the transposition fallacy:

“Statistical significance is a mathematical determination of the confidence in the outcome of a test. The usual criterion for establishing statistical significance is the p-value (probability value). A statistically significant difference in results is generally indicated by p < 0.05, meaning there is less than a 5% probability that the toxic effects observed were due to chance and were not caused by the chemical. Another way of looking at it is that there is a 95% probability that the effect is real, i.e., the effect seen was the result of the chemical exposure.”

U.S. Dep’t of Labor, Guidance for Hazard Determination for Compliance with the OSHA Hazard Communication Standard (29 CFR § 1910.1200) Section V (July 6, 2007).
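The arithmetic behind this confusion is easy to check. Below is a minimal numerical sketch, using Bayes’ rule with hypothetical prior and power values (not drawn from any source discussed here), of why a significance threshold of p < 0.05 does not mean “95% probability that the effect is real”:

```python
# The transposition fallacy in numbers: P(significant | no effect) = alpha
# is not the same as P(no effect | significant). The latter depends on the
# prior probability of a real effect and on the test's power, via Bayes' rule.

def prob_effect_given_significant(prior, alpha=0.05, power=0.8):
    """P(real effect | statistically significant result)."""
    true_pos = prior * power            # real effects that test significant
    false_pos = (1 - prior) * alpha     # null effects that test significant
    return true_pos / (true_pos + false_pos)

# Suppose only 10% of tested exposures truly cause the disease:
posterior = prob_effect_given_significant(prior=0.10)
print(round(posterior, 3))  # → 0.64, far from the claimed 0.95
```

With these illustrative numbers, a “statistically significant” finding leaves only about a 64% probability that the effect is real; only as the prior probability rises does the posterior approach the 95% figure that the transposition fallacy simply assumes.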

And it is similarly possible to find policy wonks expressing similar views. In 1993, the Carnegie Commission published a report in which it tried to explain away junk science as simply the discrepancy in burdens of proof between law and science, but its reasoning clearly points to the Commission’s commission of the transposition fallacy:

“The reality is that courts often decide cases not on the scientific merits, but on concepts such as burden of proof that operate differently in the legal and scientific realms. Scientists may misperceive these decisions as based on a misunderstanding of the science, when in actuality the decision may simply result from applying a different norm, one that, for the judiciary, is appropriate.  Much, for instance, has been written about ‘junk science’ in the courtroom. But judicial decisions that appear to be based on ‘bad’ science may actually reflect the reality that the law requires a burden of proof, or confidence level, other than the 95 percent confidence level that is often used by scientists to reject the possibility that chance alone accounted for observed differences.”

The Carnegie Commission on Science, Technology, and Government, Report on Science and Technology in Judicial Decision Making 28 (1993)[9].

Resnik and Cranor’s rhetoric is commonplace in the courtroom. Here is how the rhetorical strategy plays out. Plaintiffs’ counsel elicit concessions from defense expert witnesses that they are using the “norms” and standards of science in presenting their opinions. Counsel then argue to the finder of fact that the defense experts are wonderful, but irrelevant, because the fact finder must decide the case under a lower standard. This stratagem can be found supported by the writings of plaintiffs’ counsel and their expert witnesses[10]. The stratagem also shows up in the writings of law professors who are critical of the law’s embrace of scientific scruples in the courtroom[11].

The cacophony of error, from advocates and commentators, has led the courts into frequent error on the subject. Thus, Judge Pauline Newman, who sits on the United States Court of Appeals for the Federal Circuit, and who was a member of the Committee on the Development of the Third Edition of the Reference Manual on Scientific Evidence, wrote in one of her appellate opinions[12]:

“Scientists as well as judges must understand: ‘the reality that the law requires a burden of proof, or confidence level, other than the 95 percent confidence level that is often used by scientists to reject the possibility that chance alone accounted for observed differences’.”

Reaching back even further into the judiciary’s wrestling with the issue of the difference between legal and scientific standards of proof, we have one of the clearest and clearly incorrect statements of the matter[13]:

“Petitioners demand sole reliance on scientific facts, on evidence that reputable scientific techniques certify as certain. Typically, a scientist will not so certify evidence unless the probability of error, by standard statistical measurement, is less than 5%. That is, scientific fact is at least 95% certain.  Such certainty has never characterized the judicial or the administrative process. It may be that the ‘beyond a reasonable doubt’ standard of criminal law demands 95% certainty.  Cf. McGill v. United States, 121 U.S.App. D.C. 179, 185 n.6, 348 F.2d 791, 797 n.6 (1965). But the standard of ordinary civil litigation, a preponderance of the evidence, demands only 51% certainty. A jury may weigh conflicting evidence and certify as adjudicative (although not scientific) fact that which it believes is more likely than not. ***”

The 95% certainty appears to derive from 95% confidence intervals, although “confidence” is a technical term in statistics, and it most certainly does not mean the probability of the alternative hypothesis under consideration. Similarly, the probability that is less than 5% is not the probability that the null hypothesis is correct. The United States Court of Appeals for the District of Columbia Circuit thus fell for the rhetorical gambit in accepting the strawman that scientific certainty is 95%, whereas civil and administrative law certainty is a smidgeon above 50%.
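The point about what “confidence” does and does not mean can be illustrated by simulation. The sketch below (illustrative only, with arbitrary parameter choices) shows that a 95% confidence interval is a statement about the long-run coverage of the procedure over repeated samples, not a 95% probability that any particular hypothesis is true:

```python
# What "95% confidence" actually means: the interval-building procedure
# captures the true mean in about 95% of repeated samples. It is not a
# statement that any one interval has a 95% chance of being correct, and
# it is not 95% certainty about a scientific claim.
import random
import statistics

random.seed(1)
TRUE_MEAN, SD, N, TRIALS = 10.0, 2.0, 50, 2000
covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = m - 1.96 * se, m + 1.96 * se   # normal-approximation 95% CI
    covered += (lo <= TRUE_MEAN <= hi)
print(covered / TRIALS)  # typically close to 0.95
```

The 0.95 is a property of the repeated-sampling procedure, which is precisely why it cannot be transposed into a “95% certainty” that a given causal claim is true.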

We should not be too surprised that courts have erroneously described burdens of proof in the realm of science. Even within legal contexts, judges have a very difficult time articulating exactly how different verbal formulations of the burden of proof translate into probability statements. In one of his published decisions, Judge Jack Weinstein reported an informal survey of judges of the Eastern District of New York, on what they believed were the correct quantifications of legal burdens of proof. The results confirm that judges, who must deal with burdens of proof first as lawyers and then as “umpires” on the bench, have no idea how to translate verbal formulations into mathematical quantities:

U.S. v. Fatico, 458 F.Supp. 388 (E.D.N.Y. 1978). Thus one judge believed that “clear, unequivocal and convincing” required a higher level of proof (90%) than “beyond a reasonable doubt,” and no judge placed “beyond a reasonable doubt” above 95%. A majority of the judges polled placed the criminal standard below 90%.

In running down Elliott, Resnik, and Cranor’s assertions about burdens of proof, all I could find was the commonplace error involved in moving from 95% confidence to 95% certainty. Otherwise, I found scientists declaring that the burden of proof should rest with the scientist who is making the novel causal claim. Carl Sagan famously declaimed, “extraordinary claims require extraordinary evidence[14],” but he appears never to have succumbed to the temptation to provide a quantification of the posterior probability that would cinch the claim.

If anyone has any evidence supporting Resnik’s claim, other than the transposition fallacy or the confusion between certainty and the coefficient of statistical confidence, please share.


 

[1] The authors’ citation is to Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice (NY 2008). Professor Cranor teaches philosophy at one of the University of California campuses. He is neither a lawyer nor a scientist, but he does participate with some frequency as a consultant, and as an expert witness, in lawsuits, on behalf of claimants.

[2] See, e.g., In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (Weinstein, J.) (“The distinction between avoidance of risk through regulation and compensation for injuries after the fact is a fundamental one.”), aff’d 818 F.2d 145 (2d Cir. 1987) (approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988).

[3] Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice (NY 2006).

[4] Carl F. Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law at 33-34 (Oxford 1993) (“One can think of α, β (the chances of type I and type II errors, respectively), and 1 − β as measures of the ‘risk of error’ or ‘standards of proof’.”). See also id. at 44, 47, 55, 72-76.

[5] Id. (squaring 0.05 to arrive at “the chances of two such rare events occurring” as 0.0025).

[6] Michael D. Green, “Science Is to Law as the Burden of Proof is to Significance Testing: Book Review of Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law,” 37 Jurimetrics J. 205 (1997) (taking Cranor to task for confusing significance and posterior (burden of proof) probabilities). At least one other reviewer was not as discerning as Professor Green and fell for Cranor’s fallacious analysis. Steven R. Weller, “Book Review: Regulating Toxic Substances: A Philosophy of Science and Law,” 6 Harv. J. L. & Tech. 435, 436, 437-38 (1993) (“only when the statistical evidence gathered from studies shows that it is more than ninety-five percent likely that a test substance causes cancer will the substance be characterized scientifically as carcinogenic … to determine legal causality, the plaintiff need only establish that the probability with which it is true that the substance in question causes cancer is at least fifty percent, rather than the ninety-five percent to prove scientific causality”).

[7] Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 100 (2006) (incorrectly asserting, without further support, that “[t]he practice of setting α = .05 I call the ‘95% rule,’ for researchers want to be 95% certain that when knowledge is gained [a study shows new results] and the null hypothesis is rejected, it is correctly rejected.”).

[8] Id. at 266.

[9] There were some scientists on the Commission’s Task Force, but most of the members were lawyers.

[10] Jan Beyea & Daniel Berger, “Scientific misconceptions among Daubert gatekeepers: the need for reform of expert review procedures,” 64 Law & Contemporary Problems 327, 328 (2001) (“In fact, Daubert, as interpreted by ‛logician’ judges, can amount to a super-Frye test requiring universal acceptance of the reasoning in an expert’s testimony. It also can, in effect, raise the burden of proof in science-dominated cases from the acceptable “more likely than not” standard to the nearly impossible burden of ‛beyond a reasonable doubt’.”).

[11] Lucinda M. Finley, “Guarding the Gate to the Courthouse: How Trial Judges Are Using Their Evidentiary Screening Role to Remake Tort Causation Rules,” 49 DePaul L. Rev. 335, 348 n.49 (1999) (“Courts also require that the risk ratio in a study be ‘statistically significant,’ which is a statistical measurement of the likelihood that any detected association has occurred by chance, or is due to the exposure. Tests of statistical significance are intended to guard against what are called ‘Type I’ errors, or falsely ascribing a relationship when there in fact is not one (a false positive).” Finley erroneously ignores the conditioning of the significance probability on the null hypothesis, and she suggests that statistical significance is sufficient for ascribing causality); Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n.30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[12] Hodges v. Secretary Dep’t Health & Human Services, 9 F.3d 958, 967 (Fed. Cir. 1993) (Newman, J., dissenting) (citing and quoting from the Report of the Carnegie Commission on Science, Technology, and Government, Science and Technology in Judicial Decision Making 28 (1993)).

[13] Ethyl Corp. v. EPA, 541 F.2d 1, 28 n.58 (D.C. Cir.), cert. denied, 426 U.S. 941 (1976).

[14] Carl Sagan, Broca’s Brain: Reflections on the Romance of Science 93 (1979).

The Standard of Appellate Review for Rule 702 Decisions

November 12th, 2014

Back in the day, some Circuits of the United States Courts of Appeals embraced an asymmetric standard of review of district court decisions concerning the admissibility of expert witness opinion evidence. If the trial court excluded an expert witness, and that exclusion resulted in summary judgment, then the appellate court would take a “hard look” at the trial court’s decision. If the trial court admitted the expert witness’s opinions, and the case proceeded to trial, with the opponent of the challenged expert witness losing the verdict, then the appellate court would take a not-so-hard look at the trial court’s decision to admit the opinion. In re Paoli RR Yard PCB Litig., 35 F.3d 717, 750 (3d Cir. 1994) (Becker, J.), cert. denied, 115 S. Ct. 1253 (1995).

In Kumho Tire, the 11th Circuit followed this asymmetric approach, only to have the Supreme Court reverse and render. Unlike the appellate procedure followed in Daubert, the high Court took the extra step of applying the symmetrical standard of review, presumably for the didactic purpose of showing the 11th Circuit how to engage in appellate review. Carmichael v. Kumho Tire Co., 131 F.3d 1433 (11th Cir. 1997), rev’d sub nom. Kumho Tire Co. v. Carmichael, 526 U.S. 137, 158-59 (1999).

If anything is clear from the Kumho Tire decision, it is that courts do not have discretion to apply an asymmetric standard to their evaluation of a challenge, under Federal Rule of Evidence 702, to a proffered expert witness opinion. Justice Stephen Breyer, in his opinion for the Court in Kumho Tire, went on to articulate the requirement that trial courts must inquire whether an expert witness ‘‘employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.’’ Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). Again, trial courts do not have the discretion to abandon this inquiry.

The “same intellectual rigor” test may have some ambiguities that make application difficult. For instance, identifying the “relevant” field or discipline may be contested. Physicians traditionally have not been trained in statistical analyses, yet they produce, and rely extensively upon, clinical research, the proper conduct and interpretation of which requires expertise in study design and data analysis. Is the relevant field biostatistics or internal medicine? Given that the validity and reliability of the relied upon studies come from biostatistics, courts need to acknowledge that the rigor test requires identification of the “appropriate” field — the field that produces the criteria or standards of validity and interpretation.

Justice Breyer did grant that trial courts must have some latitude in determining how to conduct their gatekeeping inquiries. Some cases may call for full-blown hearings and post-hearing proposed findings of fact and conclusions of law; some cases may be easily decided upon the moving papers. Justice Breyer’s grant of “latitude,” however, wanders off target:

“The trial court must have the same kind of latitude in deciding how to test an expert’s reliability, and to decide whether or when special briefing or other proceedings are needed to investigate reliability, as it enjoys when it decides whether that expert’s relevant testimony is reliable. Our opinion in Joiner makes clear that a court of appeals is to apply an abuse-of-discretion standard when it ‛review[s] a trial court’s decision to admit or exclude expert testimony’. 522 U. S. at 138-139. That standard applies as much to the trial court’s decisions about how to determine reliability as to its ultimate conclusion. Otherwise, the trial judge would lack the discretionary authority needed both to avoid unnecessary ‛reliability’ proceedings in ordinary cases where the reliability of an expert’s methods is properly taken for granted, and to require appropriate proceedings in the less usual or more complex cases where cause for questioning the expert’s reliability arises. Indeed, the Rules seek to avoid ‛unjustifiable expense and delay’ as part of their search for ‛truth’ and the ‛jus[t] determin[ation]’ of proceedings. Fed. Rule Evid. 102. Thus, whether Daubert ’s specific factors are, or are not, reasonable measures of reliability in a particular case is a matter that the law grants the trial judge broad latitude to determine. See Joiner, supra, at 143. And the Eleventh Circuit erred insofar as it held to the contrary.”

Kumho, 526 U.S. at 152-53.

Now the segue from discretion to fashion the procedural mechanism for gatekeeping review to discretion to fashion the substantive criteria or standards for determining “intellectual rigor in the relevant field” represents a rather abrupt shift. The leap from discretion to fashion procedure to discretion to fashion substantive criteria of validity has no basis in prior law, in linguistics, or in science. For instance, Justice Breyer would be hard pressed to uphold a trial court’s refusal to consider bias and confounding in assessing whether epidemiologic studies established causality in a given case, notwithstanding the careless language quoted above.

The troubling nature of Justice Breyer’s language did not go unnoticed at the time of the Kumho Tire case. Indeed, three of the Justices in Kumho Tire concurred to clarify:

“I join the opinion of the Court, which makes clear that the discretion it endorses—trial-court discretion in choosing the manner of testing expert reliability—is not discretion to abandon the gatekeeping function. I think it worth adding that it is not discretion to perform the function inadequately.”

Kumho Tire Co. v. Carmichael, 526 U.S. 137, 158-59 (1999) (Scalia, J., concurring, with O’Connor, J., and Thomas, J.)

Of course, this language from Kumho Tire really cannot be treated as binding after the statute it interpreted, Rule 702, was amended in 2000. The judges of the inferior federal courts have struggled with Rule 702, sometimes more to evade its reach than to perform gatekeeping in an intelligent way. Quotations of passages from cases decided before the statute was amended and revised should be treated with skepticism.

Recently, the Sixth Circuit quoted Justice Breyer’s language about latitude from Kumho Tire, in the Circuit’s decision involving GE Healthcare’s magnetic resonance contrast medium, Omniscan. Decker v. GE Healthcare Inc., 2014 U.S. App. LEXIS 20049, at *29 (6th Cir. Oct. 20, 2014). Although the Decker case is problematic in many ways, the defendant did not challenge general causation between gadolinium and nephrogenic systemic fibrosis, a painful, progressive connective tissue disease, which afflicted the plaintiff. It is unclear exactly what sort of latitude in applying the statute the Sixth Circuit was hoping to excuse.

Teaching Statistics in Law Schools

November 12th, 2014

Back in 2011, I came across a blog post about a rumor of a trend in law school education to train law students in quantitative methods. Sasha Romanosky, “Two Law School Rumors,” Concurring Opinions (Jan. 20, 2011). Of course, the notion that quantitative methods and statistics would become essential to a liberal and a professional education reaches back to the 19th century. Holmes famously wrote that:

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.”

Oliver Wendell Holmes, Jr., “The Path of the Law,” 10 Harvard Law Rev. 457 (1897). A few years later, H.G. Wells expanded the pre-requisite from lawyering to citizenship, generally:

“The great body of physical science, a great deal of the essential fact of financial science, and endless social and political problems are only accessible and only thinkable to those who have had a sound training in mathematical analysis, and the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex worldwide States that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write.”

Herbert George Wells, Mankind in the Making 204 (1903).

Certainly, there have been arguments made that statistics and quantitative analyses more generally should be part of the law school curriculum. See, e.g., Yair Listokin, “Why Statistics Should be Mandatory for Law Students” Prawfsblawg (May 22, 2006); Steven B. Dow, “There’s Madness in the Method: A Commentary on Law, Statistics, and the Nature of Legal Education,” 57 Okla. L. Rev. 579 (2004).

Judge Richard Posner has described the problem in dramatic Kierkegaardian terms of “fear and loathing.” Jackson v. Pollion, 733 F.3d 786, 790 (7th Cir. 2013). Stopping short of sickness unto death, Judge Posner catalogued the “lapse,” at the expense of others, in the words of judges and commentators:

“This lapse is worth noting because it is indicative of a widespread, and increasingly troublesome, discomfort among lawyers and judges confronted by a scientific or other technological issue. “As a general matter, lawyers and science don’t mix.” Peter Lee, “Patent Law and the Two Cultures,” 120 Yale L.J. 2, 4 (2010); see also Association for Molecular Pathology v. Myriad Genetics, Inc., ___ U.S. ___, 133 S.Ct. 2107, 2120 (2013) (Scalia, J., concurring in part and concurring in the judgment) (“I join the judgment of the Court, and all of its opinion except Part I–A and some portions of the rest of the opinion going into fine details of molecular biology. I am unable to affirm those details on my own knowledge or even my own belief”); Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 599 (1993) (Rehnquist, C.J., concurring in part and dissenting in part) (‘‘the various briefs filed in this case … deal with definitions of scientific knowledge, scientific method, scientific validity, and peer review—in short, matters far afield from the expertise of judges’’); Marconi Wireless Telegraph Co. of America v. United States, 320 U.S. 1, 60–61 (1943) (Frankfurter, J., dissenting in part) (‘‘it is an old observation that the training of Anglo–American judges ill fits them to discharge the duties cast upon them by patent legislation’’); Parke–Davis & Co. v. H.K. Mulford Co., 189 F. 95, 115 (S.D.N.Y. 1911) (Hand, J.) (‘‘I cannot stop without calling attention to the extraordinary condition of the law which makes it possible for a man without any knowledge of even the rudiments of chemistry to pass upon such questions as these … . How long we shall continue to blunder along without the aid of unpartisan and authoritative scientific assistance in the administration of justice, no one knows; but all fair persons not conventionalized by provincial legal habits of mind ought, I should think, unite to effect some such advance’’); Henry J. Friendly, Federal Jurisdiction: A General View 157 (1973) (‘‘I am unable to perceive why we should not insist on the same level of scientific understanding on the patent bench that clients demand of the patent bar, or why lack of such understanding by the judge should be deemed a precious asset’’); David L. Faigman, Legal Alchemy: The Use and Misuse of Science in Law xi (1999) (‘‘the average lawyer is not merely ignorant of science, he or she has an affirmative aversion to it’’).

Of course, ignorance of the law is no excuse for the ordinary citizen[1]. Ignorance of science and math should be no excuse for the ordinary judge or lawyer.

In the 1960s, Michael Finkelstein introduced a course on statistics and probability into the curriculum of the Columbia Law School. The class has had an unfortunate reputation of being “difficult.” One year, when Prof. Finkelstein taught the class at Yale Law School, the students petitioned him not to give a final examination. Apparently, the students were traumatized by facing problems that actually have right and wrong answers! Michael O. Finkelstein, “Teaching Statistics to Law Students,” in L. Pereira-Mendoza, L.S. Kea, T.W. Kee, & W.K. Wong, eds., I Proceedings of the Fifth International Conference on Teaching Statistics at 505 (1998).

Law school is academia’s “last clear chance” to avoid having statistically illiterate lawyers running amok. Do law schools take advantage of the opportunity? For the most part, understanding statistical concepts is not required for admission to, or for graduation from, law school. Some law schools helpfully offer courses to address the prevalent gap in statistics education at the university level. I have collected below some of the available offerings, drawn from law school websites. If you know of any omissions, please let me know.

Law School Courses

Columbia Law School: Statistics for Lawyers (Schachtman)

Emory Law:  Analytical Methods for Lawyers; Statistics for Lawyers (Joanna M. Shepherd)

Florida State College of Law:  Analytical Methods for Lawyers (Murat C. Mungan)

Fordham University School of Law:  Legal Process & Quantitative Methods

George Mason University School of Law:  Quantitative Forensics (Kobayashi); Statistics for Lawyers and Policy Analysts (Dick Ippolito)

George Washington University Law School:  Quantitative Analysis for Lawyers; The Law and Regulation of Science

Georgetown Law School:  Analytical Methods (Joshua Teitelbaum); Analyzing Empirical Research for Lawyers (Juliet Aiken); Epidemiology for Lawyers (Kraemer)

Santa Clara University, School of Law:  Analytical Methods for Lawyers (David Friedman)

Harvard Law School:  Analytical Methods for Lawyers (Kathryn Spier); Analytical Methods for Lawyers; Fundamentals of Statistical Analysis (David Cope)

Loyola Law School:  Statistics (Doug Stenstrom)

Marquette University School of Law:  Quantitative Methods

Michigan State:  Analytical Methods for Lawyers (Statistics) (Gia Barboza); Quantitative Analysis for Lawyers (Daniel Martin Katz)

New York Law School:  Statistical Literacy

New York University Law School:  Quantitative Methods in Law Seminar (Daniel Rubinfeld)

Northwestern Law School:  Quantitative Reasoning in the Law (Jonathan Koehler); Statistics & Probability (Jonathan Koehler)

Notre Dame Law School: Analytical Methods for Lawyers (M. Barrett)

Ohio Northern University Claude W. Pettit College of Law:  Analytical Methods for Lawyers

Stanford Law School:  Statistical Inference in the Law; Bayesian Statistics and Econometrics (Daniel E. Ho); Quantitative Methods – Statistical Inference (Jeff Strnad)

University of Arizona James E. Rogers College of Law:  Law, Statistics & Economics (Katherine Y. Barnes)

University of California at Berkeley:  Quantitative Methods (Kevin Quinn); Introductory Statistics (Justin McCrary)

University of California, Hastings College of Law:  Scientific Method for Lawyers (David Faigman)

University of California at Irvine:  Statistics for Lawyers

University of California at Los Angeles:  Quantitative Methods in the Law (Richard H. Sander)

University of Colorado: Quantitative Methods in the Law (Paul Ohm)

University of Connecticut School of Law:  Statistical Reasoning in the Law

University of Michigan:  Statistics for Lawyers

University of Minnesota:  Analytical Methods for Lawyers: An Introduction (Parisi)

University of Pennsylvania Law School:  Analytical Methods (David S. Abrams); Statistics for Lawyers (Jon Klick)

University of Texas at Austin:  Analytical Methods (Farnworth)

University of Washington:  Quantitative Methods In The Law (Mike Townsend)

Vanderbilt Law School: Statistical Concepts for Lawyers (Edward Cheng)

Wake Forest: Analytical Methods for Lawyers

Washington University St. Louis School of Law: Social Scientific Research for Lawyers (Andrew D. Martin)

Washington & Lee Law School: The Role of Social Science in the Law (John Keyser)

William & Mary Law School: Statistics for Lawyers

William Mitchell College of Law:  Statistics Workshop (Herbert M. Kritzer)

Yale Law School:  Probability Modeling and Statistics LAW 26403


[1] See Ignorantia juris non excusat.

 

Expert Witness Mining – Antic Proposals for Reform

November 4th, 2014

Law Reviews and Altered States of Reality

In 2008, Justice Breyer observed wryly that “there is evidence that law review articles have left terra firma to soar into outer space”; and Judge Posner has criticized law review articles for the “silly titles, the many opaque passages, the antic proposals, the rude polemics, [and] the myriad pretentious citations.” In 2010, Justice Scalia, who was a law-review-producing law professor at the University of Virginia for several years, responded to a lawyer’s oral argument, in McDonald v. City of Chicago, by suggesting that the argument had no support in Supreme Court precedent, but that the unsupported argument would make the lawyer “the darling of the professoriate.” At the June 2011 Fourth Circuit Judicial Conference, Chief Justice Roberts opined that law reviews are generally not “particularly helpful for practitioners and judges.” In his words:

“Pick up a copy of any law review that you see and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th-century Bulgaria, or something, which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar.”

See Debra Cassens Weiss, “Law Prof Responds After Chief Justice Roberts Disses Legal Scholarship,” Am. Bar Ass’n J. (July 7, 2011). Lawyers might think the Justices view law review scholarship as a useless but generally harmless activity. Sometimes, however, law review articles can actually be harmful.

Selection Effects in the Retention and Presentation of Expert Witnesses

The complaints about law review scholarship are obviously based upon extremes and travesties. Interestingly, Judge Posner himself has been no slacker when it comes to producing law review articles with “antic proposals.” See, e.g., Richard A. Posner, “An Economic Approach to the Law of Evidence,” 51 Stan. L. Rev. 1477, 1541–42 (1999). In the tradition of non-traditional, rationalist proposals that ignore experience and make up something completely untested, Judge Richard Posner has advocated rule changes that would require lawyers

“to disclose the name of all the experts whom they approached as possible witnesses before settling on the one testifying. This would alert the jury to the problem of ‘witness shopping’.”

Posner, 51 Stan. L. Rev. at 1541. The point of Judge Posner’s radical reform is to alert triers of fact to whether the testifying expert witness is the first, or the umpteenth, expert witness interviewed before a suitable opinion had been “procured,” so that the fact finder can draw the “reasonable inference” that the case must be weaker than presented if the party went through so many expert witnesses before coming up with one who would testify in the case. If one party disclosed but one expert witness, the one who actually testified, and the other party disclosed X such witnesses (where X > 1), then the fact finder could find in favor of the first party upon the basis of the so-called reasonable inference.
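The selection effect that concerns Judge Posner can be made concrete with a toy simulation. The sketch below is my illustration, not Posner’s; the assumption that each consulted expert gives an honest, unbiased estimate with normally distributed error is mine, chosen only to isolate the selection effect:

```python
import random

def average_reported_opinion(n_consulted, cases=20_000, seed=7):
    """Each consulted expert gives an honest, unbiased estimate (standard
    normal noise around a true value of 0); the party presents only the
    most favorable draw. Returns the mean reported opinion over many cases."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(cases):
        total += max(rng.gauss(0.0, 1.0) for _ in range(n_consulted))
    return total / cases

# The reported opinion drifts away from the truth as more experts are
# consulted, even though no individual expert is biased:
for n in (1, 3, 10):
    print(n, round(average_reported_opinion(n), 2))
```

The drift is real, but note what the simulation also shows: the underlying evidence never changes, which is why the number of experts consulted is at best a proxy for, and not a measure of, the accuracy of the opinion finally presented.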

Posner’s proposal is at best a proxy for accuracy and validity in expert witness opinion testimony, and one for which Posner presents no evidence to support his hoped-for improvement in juridical accuracy. Not only does Judge Posner present no evidence that his proposed reform and suggested inference would be in the least bit reasonable and probative of the truth, he fails to address the obvious incentives that would be created by his proposal. Fearing the prejudicial inference from having consulted with “too many” expert witnesses, lawyers, operating under the Posner Rule, would have strong incentives to go to the expert witness “one-stop-shopping” mall, where they know they can obtain expert witnesses guaranteed to align themselves with the needed litigation positions and claims. The Posner Rule would also give a strong advantage to lawyers more skilled in vetting and selecting expert witnesses, to the detriment of less experienced lawyers. Of course, lawyers who are willing to go shopping at the meretricious mall or to employ a “cleaner” who brokers the selection without footprints might escape the bite of the Posner adverse inference.

Posner’s proposed rule ignores what is at the heart of identifying and selecting expert witnesses to testify. Obviously lawyers must identify potential witnesses with suitable expertise to address the issues raised by the litigation. Database searches, such as PubMed and Google Scholar searches for bio-medical experts, can go a long way towards identifying candidates, but interviews are important as well. Posner would chill lawyers’ effective representation by placing an adverse inference upon their diligence in any contact with the person other than the “one” who will be anointed to be the party’s designated testifier.

Meetings and interviews with prospective expert witnesses are needed to ascertain whether the witness candidate has sufficient time and interest to fulfill the litigation assignment. Expertise in the area is hardly a guarantee that the candidate will be interested in answering the specific questions that are contested in the litigation. The lawyers must also ascertain whether the witness candidate has the stamina, patience, and aptitude for the litigation context. Not all real experts do, and the consequences of engaging an expert who does not have the qualities to make a good expert witness can be disastrous. Witness candidates must also be screened for their communication skills, their appearance, and even basic hygiene. The most brilliant expert who mumbles, or who is unkempt, is useless in litigation.

Lawyers must evaluate witness candidates for conflicts of interest, many of which are unknowable until there is a face-to-face meeting. Does the witness candidate have a significant other or child who works for the litigation industry (plaintiffs’ bar) or for the defendant industry under assault in the litigation at hand? Either way, the candidate may be compromised. Was the candidate mentored by an expert witness on the other side? Is the candidate on an editorial board with the adversary’s witnesses? Is the candidate close personal friends of the adversaries or their witnesses, such that he will be less than enthusiastic in showing the infirmities of the other side’s positions? Any of these questions could lead to answers that practically disqualify a witness candidate from consideration. Proceeding without such vetting could be catastrophic for the client and counsel. Burdening the vetting process with the threat of an adverse inference is deeply unfair to diligent counsel trying to represent and serve their clients.

And there are yet additional considerations that require exploration with any witness candidate. Expert witnesses are not equally able to deal with adverse authority in the form of a noted scientist who has taken a stand on the litigation issue, or a superficially appearing authoritative author who has published an adverse opinion. As well trained as they might be, some real experts are “sheep,” who are most comfortable following the herd, and not independent thinkers. Not all experts are willing or able to read studies as critically as needed for the litigation situation, which can sometimes be more demanding than the scientific arena. Lawyers charged with retaining expert witnesses must assess their clients’ positions and determine how well their expert witnesses will perform under all the circumstances of the case.

Professor Christopher Robertson proposes an even more radical reform of the law of expert witnesses by removing the selection and control of expert witnesses from parties and their counsel, completely. Robertson would somehow create a pool of expert witnesses on the issues in each case, and assign them to parties in a double-blinded, randomized fashion. Christopher Tarver Robertson, “Blind Expertise,” 85 N.Y. Univ. L. Rev. 174, 211 (2010). Aside from depriving litigants of autonomy and control over their cases, this approach has even greater potential for generating false results. How do the expert witnesses come to be retained for this process? Any two expert witnesses may very well come to an incorrect analysis precisely because they do not have the benefit of each other’s reports to develop the full range of data to be considered. What if the expert assigned to the plaintiff concludes that there is no case, but the expert assigned to the defendant concludes that the plaintiff’s case is meritorious? Normally, plaintiffs’ expert witnesses must file their reports in advance of the defense witnesses, who then have the opportunity to rebut, but also the benefit of all the data included. Simultaneous reports risk major omissions of data to be considered on both sides. The adversarial cauldron works to ensure completeness in what data and studies are considered.

Now comes Jonah Gelbach to attempt a probabilistic, theoretical defense of reforms in the Posner-Robertson mold. Jonah B. Gelbach, “Expert Mining and Required Disclosure,” 81 U. Chicago L. Rev. 131 (2014). Professor Gelbach is a well-trained economist, and a recently minted lawyer (Yale 2013), who is now an Associate Professor at the University of Pennsylvania Law School. Gelbach’s experience with the practice of law is limited to working as a law-school intern at David Rosen & Associates, in New Haven, Connecticut, before joining the Penn faculty. His proposals may need to be taken with 100 grains of aspirin.

Although Gelbach disagrees with particulars of the Posner-Robertson proposals, Gelbach joins with them to opine that “[t]o the extent that additional fully disclosed expert testimony increases the fact finder’s information, we can expect a beneficial increase in accuracy.” Gelbach at 133. Gelbach’s dictum, however, is an ipse dixit, and he offers only a limited hypothetical case in which full disclosure of data should be required to solve the problem. And even in his hypothetical case, the disclosure of the identities of the testers is unnecessary to correct the error that Gelbach predicts. Gelbach’s call for the disclosure of consulting expert witnesses introduces only a collateral issue that has nothing to do with the accuracy of the scientific reasoning.

Gelbach analogizes “witness shopping” to data dredging and multiple testing, with a known inflation in the rate of false positive outcomes. If a party directs multiple experts to conduct single outcome measurements or tests, then that party can recreate the results of multiple testing without having to disclose the number of independent tests. Gelbach’s argument is at its strongest for a simplistic model of a simple measurement, with errors normally distributed, and with the accuracy of the measurement tied to the outcome of the case. Gelbach at 136. Gelbach analogizes expert witness mining with data mining, and goes so far as to provide a calculation of false positive rates from multiple testing.

The sort of multiple testing Gelbach condemns is even more obvious when something other than random error is involved. Consider the need of litigants to have chest radiographs interpreted for the presence or absence of a pneumoconiosis in occupational dust disease litigation. Not only is there intra-observer variability, there are potential or known subjective biases in radiograph interpretations. Gelbach need not worry about multiple testing because the need for economic efficiency already encourages many lawyers to employ radiologists who are most biased in favor of their clients’ positions. The bigger problem would be to encourage lawyers to obtain an honest second opinion, which might make them less strident about their litigation positions when discussing possible settlement.
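The false positive inflation that Gelbach invokes is easy to verify. The short simulation below is my sketch, not Gelbach’s calculation: each “test” of a true null hypothesis falsely comes up significant with probability 0.05, and the chance of at least one false positive across k independent tests grows as 1 − 0.95^k:

```python
import random

def prob_any_false_positive(k, alpha=0.05, trials=50_000, seed=1):
    """Estimate by simulation the chance that at least one of k independent
    tests of a true null hypothesis comes out 'statistically significant'."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if any(rng.random() < alpha for _ in range(k))
    )
    return hits / trials

# The simulated rate tracks the analytic value 1 - (1 - alpha)^k:
for k in (1, 5, 14, 20):
    print(k, round(prob_any_false_positive(k), 3), round(1 - 0.95 ** k, 3))
```

With fourteen independent tests, the chance of at least one spurious “significant” result already exceeds one half, which is the arithmetic behind the data-dredging worry.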

Gelbach appears to believe that mandatory disclosure of the number of expert witnesses hired as well as the contents of the written and oral reports issued by the party’s nontestifying expert witnesses is needed to abate the potential harm from “expert mining.” By introducing the probabilistic modeling of Type I and Type II errors, however, Gelbach elevates proofiness over clear thinking about the issue. The simple solution to Gelbach’s soil measurement hypothetical is to require disclosure of all testing data, regardless whether conducted by expert witnesses designated as testifying or as consulting. All are agents of the party for purposes of creating data in the form of the hypothesized soil measurement. Indeed, Gelbach’s hypothetical envisions a technical laboratory that conducts such measurements, and the lab might not even be associated with a person designated to serve as an expert witness on the litigation issues.

Gelbach’s soil-measurement case is thus, for the most part, a straw-person case. In the vast majority of cases, multiple expert witness interviews leading up to selection and retention are, however, not at all like multiple testing in their ability to generate deliberately false positive or false negative opinions. The evidence remains what it is, and the parameter unchanged, whatever the qualitative judgments of the witness candidates. In most litigation contexts, the data upon which the expert witnesses will rely come from published studies, and not from a single measurement under either side’s control, with the ability to resample many times through the agency of multiple expert witnesses. The Rules need to help the triers of fact discern the truth, not irrelevant proxies for the truth. If the triers of fact are incompetent to adjudge the actual evidence, then we may need to find triers who are competent.

The extension of the soil hypothetical to all of expert witness opinion testimony is unwarranted. Accuracy and validity of expert opinion is not “independent and identically distributed.” Truth and accuracy in scientific judgment as applied to litigation scientific questions are not random variables with known distributions.

A party may have to comb through dozens of potential expert witnesses before arriving at an expert witness with an appropriate, accurate answer to the litigation issue. When confronted with a pamphlet entitled “100 Authors against Einstein,” Albert Einstein quipped “if I were wrong, one would have been enough.”  See Remigio Russo, 18 Mathematical Problems in Elasticity 125 (1996) (quoting Einstein). Legal counsel should not have their clients’ cause compromised because they had the misfortune of consulting the “100 Authors” before arriving at Einstein’s door. The Posner-Robertson-Gelbach proposals all suffer the same flaw: they defer unduly to conformism and ignore the truth, validity, and accuracy of procured opinions.

Disputes in science are resolved with data, from high-quality, reproducible experimental or observational studies, not with appeals to the number of speakers. The number of expert witness candidates who were interviewed or who offered preliminary opinions is irrelevant to the task assigned to the finder of fact in a case involving scientific evidence. The final, proffered opinion of the testifying expert witness is only as good as the evidence and analysis upon which it rests, which under the current rules, should be fully disclosed.

Transparency, Confusion, and Obscurantism

October 31st, 2014

In NIEHS Transparency? We Can See Right Through You (July 10, 2014), I chastised authors Kevin C. Elliott and David B. Resnik for their confusing and confused arguments about standards of proof, the definition of risk, and conflicts of interest (COIs). See Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik]. In their focus on environmentalism and environmental policy, Elliott and Resnik seem intent upon substituting various presumptions, leaps of faith, and unproven extrapolations for actual evidence and valid inference, in the hope of improving the environment and reducing risk to life. But to get to their goal, Elliott and Resnik engage in various equivocations and ambiguities in their use of “risk,” and they compound the muddle by introducing a sliding scale of “standards of evidence,” for legal, regulatory, and scientific conclusions.

Dr. David H. Schwartz is a scientist who received his doctoral degree in Neuroscience from Princeton University, and his postdoctoral training in Neuropharmacology and Neurophysiology at the Center for Molecular and Behavioral Neuroscience, at Rutgers University. Dr. Schwartz has since gone on to found one of the leading scientific consulting firms, Innovative Science Solutions (ISS), which supports both regulatory and litigation claims and defenses, as may be scientifically appropriate. Given his experience, Dr. Schwartz is well positioned to address the standards of scientific evidentiary conclusions across regulatory, litigation, and scientific communities.

In this month’s issue of Environmental Health Perspectives (EHP), Dr. David Schwartz adds to the criticism of Elliott and Resnik’s tendentious editorial. David H. Schwartz, “Policy and the Transparency of Values in Science,” 122 Envt’l Health Persp. A291 (2014). Schwartz points out that “[a]lthough … different venues or contexts require different standards of evidence, it is important to emphasize that the actual scientific evidence remains constant.” Id.

Dr. Schwartz points out that transparency is needed in how standards and evidence are represented in scientific and legal discourse, and he takes Elliott and Resnik to task for arguing, from ignorance, that litigation burdens are different from scientific standards. At times some writers misrepresent the nature of their evidence, or its weakness, and when challenged, attempt to excuse the laxness in their standards by adverting to the regulatory or litigation contexts in which they are speaking. In some regulatory contexts, the burdens of proof are deliberately reduced, or shifted to the regulated industry. In litigation, the standard or burden of proof is rarely different from that of the scientific enterprise itself. As the United States Supreme Court made clear, trial courts must inquire whether an expert witness ‘‘employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.’’ Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). Expert witnesses who fail to exercise the same intellectual rigor in the courtroom as in the laboratory are eminently disposable, and excludable, from the legal process.

Schwartz also points out, as I had in my blog post, that “[w]hen using science to inform policy, transparency is critical. However, this transparency should include not only financial ties to industry but also ties to advocacy organizations and other strongly held points of view.”

In their Reply to Dr. Schwartz, Elliott and Resnik concede the importance of non-financial conflicts of interest, but they dig in on the supposed lower standard for scientific claims:

“we caution against equating the standards of evidence expected in tort law with those expected in more traditional scientific contexts. The tort system requires only a preponderance of evidence (> 50% likelihood) to win a case; this is much weaker evidence than scientists typically demand when presenting or publishing results, and confusion about these differing standards has led to significant legal controversies (Cranor 2006).”

Rather than citing any pertinent or persuasive legal authority, Elliott and Resnik cite an expert witness, Carl Cranor, neither a lawyer nor a scientist, who has worked steadfastly for the litigation industry (the plaintiffs’ bar) on various matters. The “caution” of Elliott and Resnik is directly contradicted by the Supreme Court’s pronouncement in Kumho Tire, and is fueled by an ignoratio elenchi based upon a confusion between the coverage probability of repeated sampling with confidence intervals (usually 95%) and the posterior probability of a claim: namely, the probability of the claim given the admissible evidence. As the Reference Manual on Scientific Evidence makes clear, these are very different probabilities, which Cranor and others have consistently confused. Elliott and Resnik ought to know better.
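The difference between the two probabilities can be shown with a back-of-the-envelope Bayesian calculation. The sketch below is my illustration, with assumed numbers (80% power, 5% significance level), and is not drawn from Elliott, Resnik, or Cranor:

```python
def posterior_probability(prior, power=0.8, alpha=0.05):
    """Bayes' theorem: the probability that a claimed effect is real, given
    a 'statistically significant' result, for assumed power and alpha."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects falsely 'detected'
    return true_pos / (true_pos + false_pos)

# A 95% confidence level (alpha = 0.05) does not make the claim 95% probable;
# the posterior depends heavily on the prior plausibility of the claim:
for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior {posterior_probability(prior):.3f}")
```

For an implausible claim, a “significant” finding at the conventional 95% confidence level can leave the claim far more likely false than true, which is precisely why the confidence level cannot be read as the probability of the claim.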

The Current Crisis – Ebola Comes to the Land of Litigation

October 29th, 2014

Lying About

President Obama has appointed a political operative, a lawyer, to be the “Ebola czar,” while the Surgeon General and Secretary of Health and Human Services remain in hiding. Dr. Craig Spencer, who lies in a Bellevue Hospital isolation ward, lied about his travels about New York City when talking to the New York City authorities. He claimed to have been in voluntary quarantine and isolation at his Manhattan home upon returning from West Africa. Jamie Schram & Bruce Golding, “Ebola doctor ‛lied’ about NYC travels,” NY Post (Oct. 29, 2014) (“The city’s first Ebola patient initially lied to authorities about his travels around the city following his return from treating disease victims in Africa, law-enforcement sources said.”) We now know he used the subways, ate at public restaurants, and generally cavorted about town.

Foolish Consistencies and Some Inconsistency, Too

President Obama has pressured Governors Christie and Cuomo to back off their stricter quarantine rules, and has demonstrated that Cuomo is politically soft in the center. At the same time that the Obama administration has bullied critics of its voluntary quarantine protocol, it has imposed mandatory quarantine on military personnel returning from West Africa. The Secretary of Defense has announced a mandatory quarantine. See Starr, “Hagel announces mandatory Ebola quarantine,” CNN (Oct. 29, 2014). Ah, our leaders would follow Ralph Waldo Emerson, on Self-Reliance and self-quarantine: “[a] foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall.”

Australia has banned travel from Ebola-affected countries, a list that should now include the United States. Michelle Nichols and Umaru Fofana, “Australia bans travel from Ebola-hit countries; U.S. isolates troops,” Reuters (Oct. 28, 2014). Of course, Australia was settled by criminals, as we all know.

Wild Nurse Hickox

Voluntary quarantine is a quaint notion. A healthcare worker takes his or her temperature twice a day, but fevers come on when they come on. Nurse Kaci Hickox, whose “human rights” were supposedly violated by Order of Governor Christie, has been removed to Maine, whence she has announced her intention to violate Maine’s lax rule that requires voluntary quarantine. Jennifer Levitz, “Nurse in Ebola Quarantine Flap Says She Won’t Obey Maine’s Isolation Rules: Kaci Hickox Says She Will Go to Court if Restrictions Aren’t Removed by Thursday,” Wall Street Journal (Oct. 29, 2014). So much for the human rights of Maine’s good citizens, not to mention the rights of the moose, and other innocent species.

The litigation industry is, I am sure, gearing up to meet the crisis. And Nurse Hickox is now free to litigate her voluntary quarantine in Maine.

Gad-zooks – Expert Witness Dishonesty

October 18th, 2014

This is the first in what I hope will be a continuing series, tagged as the Expert Witness Hall of Shame.

*     *     *     *     *

Shayne Cox Gad is a toxicologist and a principal in the firm, Gad Consulting Services, in Cary, North Carolina. In 1977, Gad was awarded his doctorate in pharmacology and toxicology by the University of Texas (Austin). Some years later, Gad apparently awarded himself a Silver Star, and three Purple Hearts.

The Stolen Valor Act[1], effective in December 2006, made falsely representing oneself as having received military decorations or awards a federal crime. Gad was charged with violating the Stolen Valor Act, and in February 2009, he pleaded guilty to the dishonest, specious claiming prohibited by the Act.

Before and after his conviction by guilty plea, Gad testified as an expert witness in litigation. He was an expert witness for plaintiff in an Oklahoma state court case, Helton v. Allergan, Inc., in which Dr. Sharla Helton complained that Botox caused her neurologic problems and pain that prevented her from working as an obstetrician/gynecologist.

Whatever the merits of the claims about Botox, Allergan might well have resisted settling a case in which plaintiff’s claim rested upon the testimony of an expert witness convicted for dishonesty. Trial counsel for Allergan, Vaughn Crawford, cross-examined Gad on April 27, 2010. Crawford’s examination went immediately to the prior conviction. “Allergan unmasks anti-Botox expert” (April 28, 2010; updated Aug. 21, 2013). Crawford sprang the impeachment:

Q You are the same Shayne Cox Gad who has been adjudged guilty by the Eastern Federal District Court in North Carolina for crimes involving false statements and dishonesty, aren’t you, sir.

A Yes, sir.

Q Yes. Specifically in February of 2009, you were adjudged guilty by that Court of falsely representing that you had been awarded military decorations and medals including the Navy cross, aren’t you, sir.

A Yes, sir.

Helton v. Allergan, Inc., Notes of Testimony by Shayne Cox Gad at 48-49 (April 27, 2010).

Crawford pressed. Not only had Gad confessed to the crime, he had made various acts of contrition in his Pre-sentencing Report, in which Gad asked that he be placed on probation rather than incarcerated. One of the representations Gad made in the Report was that he would no longer testify as an expert witness in litigation. Gad’s plea was accepted and he was placed on probation as he and his lawyer requested.

As dramatic as Crawford’s impeachment of Gad must have been, the jury shrugged it off and awarded Dr. Helton 15 million dollars, which came to 18 million with pre-judgment interest. Helton v. Allergan Inc., No. CJ-2009-2171 (Okla. Dist. Ct., Oklahoma Cty.) (jury voted 10 to 2 to award actual but no punitive damages). The Oklahoma intermediate appellate court affirmed in an unpublished opinion, and the Oklahoma Supreme Court refused to grant discretionary review. Helton v. Allergan Inc., No. 2009-2171 (Okla. Civ. App. Sept. 6, 2013); “Okla. Appeals Court Backs $15M Award In Botox Injury,” Law 360 (Sept. 10, 2013). See PR Newswire, “Botox Victim Wins $18 Million: Oklahoma Supreme Court Affirms Botulism Verdict for McGinnis Lochridge Client Against Allergan, Inc.,” (May 9, 2014) (law firm press release misleadingly claiming that the Oklahoma Supreme Court had affirmed, when in fact, the Court had declined discretionary review).

Having pled guilty in federal court, Gad would have recited the facts of his crime in court before the imposition of sentencing, as required under Federal Rule of Criminal Procedure 11. Furthermore, even if Gad’s criminal defense lawyer drafted the Pre-sentencing Report, Gad was the principal responsible for his agent attorney’s representation that he, Gad, would not testify again as an expert witness.

Gad tried to resist the gist of the cross-examination by suggesting that others, not he, had made the representation. And on redirect, plaintiff’s counsel Chester elicited an apology, not to the court, or to the defendants, but to Dr. Helton, the plaintiff:

Q Would you, at least, apologize to my client for me because she hired me and I hired you.

A I do apologize for that.

Q Have you lied about anything in this case?

A No, sir.

Q You put five kids through college; is that right?

A Yes, sir.

Q You’ve had this career. Why would you do something like this?

A Well, that, of course, was discussed in a lot more detail in the documents having to do with it, but it was something that got out there 25 years ago and I thought it was put away. I did my best to expunge it from the record, and I was unsuccessful. Twenty-five years ago I was a very different person, a lot younger than I am now.

Helton v. Allergan, Inc., Notes of Testimony by Shayne Cox Gad at 141 (April 27, 2010).

The internet is, however, unforgiving and unforgetting. A curriculum vitae for Gad, labeled August 2005, states the following for military service:

“June 1970 to April 1974 (Active):

Served on riverine craft in Mekong Delta of Vietnam and as O.I.C. of Armory, Quonset Point, Rhode Island. Served on USS Intrepid (CVS-11) as special weapons officer, deck division officer and as First Lieutenant. Qualified as O.D. underway on Intrepid. Made several deployments overseas – mainly to Europe and the Mediterranean. Released from active duty in the permanent grade of LT(jg). Received Silver Star, 3 Bronze Stars, 3 Purple Hearts.  * * *

Holds current (2003) top secret clearance.”

C.V. for Shayne Cox Gad, Ph.D., D.A.B.T., A.T.S. (emphasis added).

The charging document against Gad, from United States v. Gad, also refutes the notion that Gad’s false statements occurred in the distant past; rather, they were made “[o]n or about November 2004, and continuing up to and including March 29, 2007 … .” Immunity from prosecution for perjury in another case, United States v. Caputo, appeared to be part of the consideration for the plea deal in U.S. v. Gad. Thus the inclusion of a representation, in the pre-sentencing report, that “[a]dditionally, Dr. Gad has agreed to no longer testify as an expert witness in the future.”

The mischief Gad created by his dishonesty was thus not limited to the Helton case. Gad’s testimony looks even more dubious in view of the Caputo case, a criminal case in which Gad testified for a federal prosecutor. In Caputo, the prosecutor informed the defendants, executives of AbTox Inc., that Gad “had committed perjury by falsely claiming military experience and decorations.” United States v. Caputo, Case No. 10-1964, 397 Fed. Appx. 216 (7th Cir. Oct. 12, 2010) (unpublished). See also “AbTox Execs Seek New Trial Over Witness ‘Perjury’,” Law360 (Sept. 16, 2010).

The Caputo defendants had been charged with lying to the FDA and selling a misbranded medical sterilization device. United States v. Caputo, 517 F.3d 935 (7th Cir. 2008) (affirming convictions). In rebuttal, Gad testified that defendants could not reasonably have held the beliefs they claimed to have held in good faith. Because of how the issue of good faith arose, the Circuit held that Gad’s perjurious testimony was harmless error that could not support the grant of a new trial. United States v. Caputo, Case No. 10-1964, 397 Fed. Appx. 216 (7th Cir. Oct. 12, 2010).

When the government informed the defendants that Gad had committed perjury, the Caputo defendants moved for a new trial on grounds of newly discovered evidence. The defendants went beyond Gad’s perjury disclosed by the prosecutors, and charged that Gad’s resume was a sham and that Gad had lied about other credentials as well.

According to the defendants’ motion in Caputo, Gad had misrepresented several credentials and misleadingly claimed to have had professional experience in medicine and toxicology, which experience Gad, in fact, lacked. The defendants, in Caputo, alleged other misrepresentations. Gad had testified in their case, and in the Helton case, that he had taught a course at the Duke University Medical School in the early 2000s and had lectured at the school since. Gad’s resume listed a professorship of toxicology at the College of St. Elizabeth, where he purportedly developed the school’s bachelor of science toxicology program, according to the motion. In their motion for a new trial, the AbTox executives, Caputo and Riley, provided an offer of proof that neither Duke University Medical School nor the College of St. Elizabeth had any record of Gad’s faculty status, and St. Elizabeth lacked a bachelor’s program in toxicology. Caputo and Riley also adverted to the federal prosecutors’ own earlier finding that Gad had lied about his military record during their trial.

I don’t know whether Gad testified again.  Some of Gad’s dubious views on toxicology are cited with approval by legal commentators who would dilute the scientific standard for causation[2].

“Falsus in uno, falsus in omnibus.” The essence of the crime is specious claiming.


[1] United States v. Alvarez, 132 S. Ct. 1421 (2012) (holding that the Stolen Valor Act was unconstitutional).

[2] See Shayne C. Gad, “Model Selection and Scaling,” in Shayne C. Gad & Christopher P. Chengelis eds., Animal Models in Toxicology 813 (1992), cited by Carl F. Cranor & David A. Eastmond, “Scientific Ignorance and Reliable Patterns of Evidence in Toxic Tort Causation: Is There a Need for Liability Reform?” 64 Law & Contemporary Problems 5, 27 & n.120 (2001), and by Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 74 & n.63 (2007) (citing Gad’s book at page 826, for the argument that humans may be more sensitive to chemical effects than smaller species).


Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses

October 14th, 2014

In excluding the proffered testimony of Dr. Anick Bérard, a Canadian perinatal epidemiologist at the Université de Montréal, the Zoloft MDL trial court discussed several methodological shortcomings and failures, including Bérard’s reliance upon claims of statistical significance from studies that conducted dozens, or even hundreds, of multiple comparisons. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.). The Zoloft MDL court was not the first court to recognize the problem of over-interpreting the putative statistical significance of results that were one among many statistical tests in a single study. The court was, however, among a fairly small group of judges who have shown the statistical acumen needed to look beyond the reported p-value or confidence interval to the actual methods used in a study[1].

A complete and fair evaluation of the evidence in situations such as the Zoloft birth defects epidemiology requires more than a presentation of the size of the random error, or the width of the 95 percent confidence interval. When the sample estimate arises from a study with multiple testing, presenting the estimate with its confidence interval, or p-value, can be highly misleading if the p-value is used for hypothesis testing. The fact of multiple testing will inflate the false-positive error rate. Dr. Bérard ignored the context of the studies she relied upon. What was noteworthy was that Bérard encountered a federal judge who adhered to the assigned task of evaluating methodology and its relationship to conclusions.
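The inflation of the false-positive rate is easy to quantify. As a rough illustration (my own sketch, not anything from the Zoloft record), assuming k independent tests each run at a nominal 0.05 significance level, the chance of at least one false-positive "significant" result is 1 - (1 - 0.05)^k:

```python
# Illustrative sketch (not from the Zoloft record): the family-wise
# false-positive rate when k independent tests of true null hypotheses
# are each run at a nominal alpha = 0.05 significance level.
def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 20, 100):
    print(f"{k:>3} independent tests -> family-wise error rate {familywise_error_rate(k):.3f}")
```

With 20 comparisons, the chance of at least one spurious "significant" finding is about 0.64; with 100 comparisons, about 0.99, essentially a guarantee.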

*   *   *   *   *   *   *

There is no unique solution to the problem of multiple comparisons. Some researchers use Bonferroni or other quantitative adjustments to p-values or confidence intervals, whereas others reject adjustments in favor of qualitative assessments of the data in the full context of the study and its methods. See, e.g., Kenneth J. Rothman, “No Adjustments Are Needed For Multiple Comparisons,” 1 Epidemiology 43 (1990) (arguing that adjustments mechanize and trivialize the problem of interpreting multiple comparisons). Two things are clear from Professor Rothman’s analysis. First, for someone intent upon strict statistical significance testing, the presence of multiple comparisons means that the rejection of the null hypothesis cannot be done without further consideration of the nature and extent of both the disclosed and undisclosed statistical testing. Rothman, of course, has inveighed against strict significance testing under any circumstance, but multiple testing would only compound the problem. Second, although failure to adjust p-values or intervals quantitatively may be acceptable, failure to acknowledge the multiple testing is poor statistical practice. The practice is, alas, too prevalent for anyone to say that ignoring multiple testing is fraudulent, and the Zoloft MDL court certainly did not condemn Dr. Bérard as a fraudfeasor[2].
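For the quantitative route that Rothman criticizes, the Bonferroni adjustment is the simplest: each raw p-value is multiplied by the number of tests performed (capped at one) before being compared with the significance level. A minimal sketch, using hypothetical p-values rather than data from any study discussed here:

```python
# Minimal Bonferroni sketch; these p-values are hypothetical and not drawn
# from any study discussed above.
def bonferroni_adjust(p_values):
    """Multiply each raw p-value by the number of tests, capping at 1.0."""
    k = len(p_values)
    return [min(1.0, p * k) for p in p_values]

raw = [0.003, 0.040, 0.200]  # three hypothetical raw p-values
adjusted = bonferroni_adjust(raw)
significant = [p for p in adjusted if p < 0.05]
print(adjusted)      # each raw value tripled, since three tests were run
print(significant)   # only the smallest raw p-value survives the 0.05 threshold
```

Note how the nominally "significant" raw p-value of 0.040 no longer crosses the 0.05 line once the three-fold multiplicity is taken into account.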

In one case, a pharmaceutical company described a p-value of 0.058 as statistically significant in a “Dear Doctor” letter, no doubt to avoid a claim of under-warning physicians. Vanderwerf v. SmithKline Beecham Corp., 529 F.Supp. 2d 1294, 1301 & n.9 (D. Kan. 2008), appeal dism’d, 603 F.3d 842 (10th Cir. 2010). The trial court[3], quoting the FDA clinical review, reported that a finding of “significance” at the 0.05 level “must be discounted for the large number of comparisons made.” Id. at 1303, 1308.

Previous cases have also acknowledged the multiple testing problem. In litigation claims for compensation for brain tumors for cell phone use, plaintiffs’ expert witness relied upon subgroup analysis, which added to the number of tests conducted within the epidemiologic study at issue. Newman v. Motorola, Inc., 218 F. Supp. 2d 769, 779 (D. Md. 2002), aff’d, 78 Fed. App’x 292 (4th Cir. 2003). The trial court explained:

“[Plaintiff’s expert] puts overdue emphasis on the positive findings for isolated subgroups of tumors. As Dr. Stampfer explained, it is not good scientific methodology to highlight certain elevated subgroups as significant findings without having earlier enunciated a hypothesis to look for or explain particular patterns, such as dose-response effect. In addition, when there is a high number of subgroup comparisons, at least some will show a statistical significance by chance alone.”

Id. And shortly after the Supreme Court decided Daubert, the Tenth Circuit faced the reality of data dredging in litigation, and its effect on the meaning of “significance”:

“Even if the elevated levels of lung cancer for men had been statistically significant a court might well take account of the statistical “Texas Sharpshooter” fallacy in which a person shoots bullets at the side of a barn, then, after the fact, finds a cluster of holes and draws a circle around it to show how accurate his aim was. With eight kinds of cancer for each sex there would be sixteen potential categories here around which to “draw a circle” to show a statistically significant level of cancer. With independent variables one would expect one statistically significant reading in every twenty categories at a 95% confidence level purely by random chance.”

Boughton v. Cotter Corp., 65 F.3d 823, 835 n. 20 (10th Cir. 1995). See also Novo Nordisk A/S v. Caraco Pharm. Labs., 775 F.Supp. 2d 985, 1019-20 & n.21 (2011) (describing the Bonferroni correction, and noting that expert witness biostatistician Marcello Pagano had criticized the use of post-hoc, “cherry-picked” data that were not part of the prespecified protocol analysis, and the failure to use a “correction factor,” and that another biostatistician expert witness, Howard Tzvi Thaler, had described a “strict set of well-accepted guidelines for correcting or adjusting analysis obtained from the `post hoc’ analysis”).
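The arithmetic in the Boughton footnote is easy to verify by simulation. In this sketch (mine, not the court's), each simulated study runs 16 independent tests of a true null hypothesis, mirroring the sixteen cancer categories, and we count how often at least one test crosses the nominal p < 0.05 threshold by chance alone:

```python
import random

# Monte Carlo sketch (mine, not the court's) of the Boughton footnote:
# each simulated study runs 16 independent tests of a true null hypothesis;
# count how often at least one is nominally "significant" at p < 0.05.
random.seed(1)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(16))  # 16 null tests per study
    for _ in range(trials)
)
print(hits / trials)  # close to the analytic value 1 - 0.95**16, about 0.56
```

Analytically, 1 - 0.95^16 is roughly 0.56: more often than not, a purely random dataset with sixteen categories yields at least one "statistically significant" cluster around which to draw a circle.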

The notorious Wells[4] case was cited by the Supreme Court in Matrixx Initiatives[5] for the proposition that statistical significance was unnecessary. Ironically, at least one of the studies relied upon by the plaintiffs’ expert witnesses in Wells had some outcomes with p-values below five percent. The problem, addressed by defense expert witnesses and ignored by the plaintiffs’ witnesses and Judge Shoob, was that there were over 20 reported outcomes, and probably many more outcomes analyzed but not reported. Accordingly, some qualitative or quantitative adjustment was required in Wells. See Hans Zeisel & David Kaye, Prove It With Figures: Empirical Methods in Law and Litigation 93 (1997)[6].

Reference Manual on Scientific Evidence

David Kaye’s and the late David Freedman’s chapter on statistics in the third, most recent, edition of the Reference Manual offers some helpful insights into the problem of multiple testing:

4. How many tests have been done?

Repeated testing complicates the interpretation of significance levels. If enough comparisons are made, random error almost guarantees that some will yield ‘significant’ findings, even when there is no real effect. To illustrate the point, consider the problem of deciding whether a coin is biased. The probability that a fair coin will produce 10 heads when tossed 10 times is (1/2)^10 = 1/1024. Observing 10 heads in the first 10 tosses, therefore, would be strong evidence that the coin is biased. Nonetheless, if a fair coin is tossed a few thousand times, it is likely that at least one string of ten consecutive heads will appear. Ten heads in the first ten tosses means one thing; a run of ten heads somewhere along the way to a few thousand tosses of a coin means quite another. A test—looking for a run of ten heads—can be repeated too often.

Artifacts from multiple testing are commonplace. Because research that fails to uncover significance often is not published, reviews of the literature may produce an unduly large number of studies finding statistical significance.111 Even a single researcher may examine so many different relationships that a few will achieve statistical significance by mere happenstance. Almost any large dataset—even pages from a table of random digits—will contain some unusual pattern that can be uncovered by diligent search. Having detected the pattern, the analyst can perform a statistical test for it, blandly ignoring the search effort. Statistical significance is bound to follow.

There are statistical methods for dealing with multiple looks at the data, which permit the calculation of meaningful p-values in certain cases.112 However, no general solution is available… . In these situations, courts should not be overly impressed with claims that estimates are significant. …”

Reference Manual on Scientific Evidence at 256-57 (3d ed. 2011).

When a lawyer asks a witness whether a sample statistic is “statistically significant,” there is the danger that the answer will be interpreted or argued as a Type I error rate, or worse yet, as a posterior probability for the null hypothesis.  When the sample statistic has a p-value below 0.05, in the context of multiple testing, completeness requires the presentation of the information about the number of tests and the distorting effect of multiple testing on preserving a pre-specified Type I error rate.  Even a nominally statistically significant finding must be understood in the full context of the study.

Some texts and journals recommend that the Type I error rate not be modified in the paper, as long as readers can observe the number of multiple comparisons that took place and make the adjustment for themselves.  Most jurors and judges are not sufficiently knowledgeable to make the adjustment without expert assistance, and so the fact of multiple testing, and its implication, are additional examples of how the rule of completeness may require the presentation of appropriate qualifications and explanations at the same time as the information about “statistical significance.”

*     *     *     *     *

Despite the guidance provided by the Reference Manual, some courts have remained resistant to the need to consider multiple comparison issues. Statistical issues arise frequently in securities fraud cases against pharmaceutical companies, which involve the need to evaluate and interpret clinical trial data for the benefit of shareholders. In a typical case, joint venturers Aeterna Zentaris Inc. and Keryx Biopharmaceuticals, Inc. were both targeted by investors for alleged Rule 10b-5 violations involving statements of clinical trial results, made in SEC filings, press releases, investor presentations, and investor conference calls from 2009 to 2012. Abely v. Aeterna Zentaris Inc., No. 12 Civ. 4711(PKC), 2013 WL 2399869 (S.D.N.Y. May 29, 2013); In re Keryx Biopharms, Inc., Sec. Litig., 1307(KBF), 2014 WL 585658 (S.D.N.Y. Feb. 14, 2014).

The clinical trial at issue tested perifosine in conjunction with, and without, other therapies, in multiple arms, which examined efficacy for seven different types of cancer. After a preliminary phase II trial yielded promising results for metastatic colon cancer, the colon cancer arm proceeded. According to plaintiffs, the defendants repeatedly claimed that perifosine had demonstrated “statistically significant positive results.” In re Keryx at *2, 3.

The plaintiffs alleged that defendants’ statements omitted material facts, including the full extent of multiple testing in the design and conduct of the phase II trial, without adjustments supposedly “required” by regulatory guidance and generally accepted statistical principles. The plaintiffs asserted that the multiple comparisons involved in testing perifosine in so many different kinds of cancer patients, at various doses, with and against so many different types of other cancer therapies, compounded by multiple interim analyses, inflated the risk of Type I errors such that some statistical adjustment should have been applied before claiming that a statistically significant survival benefit had been found in one arm, with colorectal cancer patients. In re Keryx at *2-3, *10.

The trial court dismissed these allegations because the trial protocol had been published, albeit over two years after the initial press release that started the class period, a release that failed to disclose the full extent of multiple testing and the lack of statistical correction. In re Keryx at *4, *11. The trial court emphatically rejected the plaintiffs’ efforts to dictate methodology and interpretative strategy. The trial court was loath to allow securities fraud claims over allegations of improper statistical methodology, which:

“would be equivalent to a determination that if a researcher leaves any of its methodology out of its public statements — how it did what it did or was planning to do — it could amount to an actionable false statement or omission. This is not what the law anticipates or requires.”

In re Keryx at *10[7]. According to the trial court, providing p-values for comparisons between therapies, without disclosing the extent of unplanned interim analyses or the number of multiple comparisons is “not falsity; it is less disclosure than plaintiffs would have liked.” Id. at *11.

“It would indeed be unjust—and could lead to unfortunate consequences beyond a single lawsuit—if the securities laws become a tool to second guess how clinical trials are designed and managed. The law prevents such a result; the Court applies that law here, and thus dismisses these actions.”

Id. at *1.

The court’s characterization of the fraud claims as a challenge to trial methodology rather than data interpretation and communication decidedly evaded the thrust of the plaintiffs’ fraud complaint. Data interpretation will often be part of the methodology outlined in a protocol. The Keryx case also confused criticism of the design and execution of a clinical trial with criticism of the communication of the trial results.


[1] Predictably, some plaintiffs’ counsel accused the MDL trial judge of acting as a statistician and second-guessing the statistical inferences drawn by the party expert witness. See, e.g., Max Kennerly, “Daubert Doesn’t Ask Judges To Become Experts On Statistics” (July 22, 2014). Federal Rule of Evidence 702 requires trial judges to evaluate the methodology used to determine whether it is valid. Kennerly would limit the trial judge to a simple determination of whether the expert witness used statistics, and whether statistics generally are appropriately used. In his words, “[t]o go with the baseball metaphors so often (and wrongly) used in the law, when it comes to Daubert, the judge isn’t an umpire calling balls and strikes, they’re [sic] more like a league official checking to make sure the players are using regulation equipment. Mere disagreements about the science itself, and about the expert’s conclusions, are to be made by the jury in the courtroom.” This position is rejected by the explicit wording of the statute, as well as the Supreme Court opinions leading up to the revision in the statute. To extend Kennerly’s overextended metaphor even further, the trial court must not only make sure that the players are using regulation equipment, but also that the pitchers, expert witnesses, aren’t throwing spitballs or balking in their pitching of opinions. Judge Rufe, in the Zoloft MDL, did no more than was asked of her by Rule 702 and the Reference Manual.

[2] Perhaps the prosecutor, jury, and trial and appellate judges in United States v. Harkonen would be willing to brand Dr. Bérard a fraudfeasor. U.S. v. Harkonen, 2009 WL 1578712, 2010 WL 2985257 (N.D. Cal.), aff’d, 2013 WL 782354 (9th Cir. Mar. 4, 2013), cert. denied, ___ U.S. ___ (2013).

[3] The trial court also acknowledged the Reference Manual on Scientific Evidence 127-28 (2d ed. 2000). Unfortunately, the court erred in interpreting the meaning of a 95 percent confidence interval as showing “the true relative risk value will be between the high and low ends of the confidence interval 95 percent of the time.” Vanderwerf v. SmithKlineBeecham Corp., 529 F.Supp. 2d at 1302 n.10.

[4] Wells v. Ortho Pharm. Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff ’d, and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).

[5] Matrixx Initiatives, Inc. v. Siracusano, 131 S.Ct. 1309 (2011)

[6] Zeisel and Kaye contrast the lack of appreciation for statistical methodology in Wells with the handling of the multiple comparison issue in an English case, Reay v. British Nuclear Fuels (Q.B. Oct. 8, 1993). In Reay, children of fathers who worked in nuclear power plants and who developed leukemia, sued. Their expert witnesses relied upon a study that reported 50 or so hypotheses. Zeisel and Kaye quote the trial judge as acknowledging that the number of hypotheses considered inflates the nominal value of the p-value and reduces confidence in the study’s result. Hans Zeisel & David Kaye, Prove It With Figures: Empirical Methods in Law and Litigation 93 (1997) (discussing Reay case as published in The Independent, Nov. 22, 1993).

[7] Of course, this is exactly what happened to Dr. Scott Harkonen, who was indicted and convicted under the Wire Fraud Act, despite issuing a press release that included a notice of an investor conference call within a couple of weeks, when investors and others could inquire fully about the clinical trial results.

Can Expert Bias and Prejudice Disqualify a Witness From Testifying?

October 11th, 2014

The Center for Science in the Public Interest (CSPI) bills itself as a consumer advocate committed to research and education in sound science. The CSPI considers itself to be “one of the nation’s top consumer advocates,” which works to “ensure that science is used to promote the public welfare.”

You may wonder whether, and why, “science” turns out to promote the public welfare envisioned by the CSPI. According to the CSPI, you should just accept that it does. So sure is the CSPI that industry corrupts science that it features a web-based, open database of scientists with ties to industry. There is no database of scientists’ ties to the litigation industry (plaintiffs’ lawyers), to organized labor, or to advocacy groups. No doubt implicit in this choice is the claim that all science done by scientists with “ties” to the plaintiffs’ bar, to labor, or to advocacy groups is “in the public interest.”

The arrogance of the implicit claim is made even more clear by how the CSPI addresses supposed corruption and conflicts of interest in science. The CSPI features an Integrity in Science Project to ferret out corruption in science, but the Project concerns itself only with industry-sponsored and funded science. The Project is candid about its one-sided jihad against industry-based science:

“Although many have cheered partnerships between industry and the research community, it is also acknowledged that they entail conflicts of interest that may compromise the judgment of trusted professionals, the credibility of research institutions and scientific journals, the safety and transparency of human subjects research, the norms of free inquiry, and the legitimacy of science-based policy.

For example:

  • There is strong evidence that researchers’ financial ties to chemical, pharmaceutical, or tobacco manufacturers directly influence their published positions in supporting the benefit or downplaying the harm of the manufacturers’ product.
  • A growing body of evidence indicates that pharmaceutical industry gifts and inducements bias clinicians’ judgments and influence doctors’ prescribing practices.
  • There are well-known cases of industry seeking to discredit or prevent the publication of research results that are critical of its products.
  • Studies of life-science faculty indicate that researchers with industry funding are more likely to withhold research results in order to secure commercial advantage.
  • Increasingly, the same academic institutions that are responsible for oversight of scientific integrity and human subjects protection are entering financial relationships with the industries whose product-evaluations they oversee.

In response to the commercialization of science and the growing problem of conflicts of interest, the Integrity in Science Project seeks to:

  • raise awareness about the role that corporate funding and other corporate interests play in scientific research, oversight, and publication;

  • investigate and publicize conflicts of interest and other potentially destructive influences of industry-sponsored science;

  • advocate for full disclosure of funding sources by individuals, governmental and non-governmental organizations that conduct, regulate, or provide oversight of scientific investigation or promote specific scientific findings;

  • encourage policy-makers at all levels of government to seek balance on expert advisory committees and to provide public, web-based access to conflict-of-interest information collected in the course of committee formation;

  • encourage journalists to routinely ask scientists and others about their possible conflicts of interests and to provide this information to the public.”

The CSPI inquiry then is entirely one-sided, with no apparent or manifest interest in exploring and revealing conflicts created by scientists’ affiliations with advocacy groups, labor, or the litigation industry. The concern about conflicts of interest is, in my view, simply an attempt to disqualify industry-sponsored scientific studies from inclusion in policy discussions. To be sure, there are notorious examples of industry-sponsored, compromised studies. But there are similarly notorious examples of union and plaintiff-lawyer sponsored studies gone awry. Why then is there no concern at the CSPI about researchers’ ties with advocacy groups, labor unions, and, most important, the litigation industry? The obvious answer is that the CSPI is engaging in advocacy for certain conclusions. The CSPI wants to put its hand on one side of the balance, and do its best to ensure that scientific debates and discussions come out a certain way, a way that favors conclusions it desires. The CSPI wants to disqualify dissenters from the conversation. The so-called “Integrity” project thus appears to be a pretense, exactly the opposite of what it purports to be.

In 2004, the CSPI’s Integrity in Science Project sponsored a conference on, among other topics, corporate and governmental suppression of research. In the event, there was barely any discussion of governmental suppression; the speakers spoke almost entirely about corporate conduct.

One speaker on the panel presented on corporate conflicts of interest in the starkest Marxist terms. Corporations must cheat and lie because they are capable only of acting to maximize profits, and they will inevitably see safety as a dispensable cost. The speaker, who is a frequent testifier in mass tort litigation, held forth that the problem with corporations is not that there are some rotten apples, but that the entire barrel is rotten. Suppression of scientific research, according to this speaker, is not an anomaly, but is totally determined by the nature of the firm. Ethical companies cannot compete, and they go out of business; ergo, any company still in business is unethical.

Of course, the same uncharitable determinist views can be applied to expert witnesses, to plaintiffs’ counsel, to labor unions, and to advocacy groups. Remarkably, this speaker acknowledged that ideology is a much larger bias than money, and then confessed:

“My bias is ideological.”

This speaker testifies frequently for the litigation industry, and his zeal is so uncabined that he has been held in contempt and fined as part of his litigation activities. When one federal judge excluded his testimony, he attacked the bona fides of the judge and sought to appeal his exclusion personally. And yet the Integrity project featured him as a speaker!

The CSPI and its cadre of anti-industry scientists bring me to the question du jour: can an expert witness be too biased or prejudiced in a matter to serve as an expert witness? We exclude judges and jurors who have potential conflicts of interest. Surely there are fact and expert witnesses who are so untrustworthy that they should not be allowed to testify. Consider expert witnesses who, having demonstrated that they will violate court orders or other laws, nonetheless ask the court to qualify them as “expert witnesses” to give their opinions in a pending case. The trial court does not necessarily endorse the opinions proffered, but should the court give its imprimatur to such witnesses’ standing, allowing their opinions to be considered, relied upon to the exclusion of competing opinions, and made the basis for verdicts for the parties offering these suspect witnesses?

Just asking.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.