TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Consensus Rule – Shadows of Validity

April 26th, 2023

Back in 2011, at a Fourth Circuit Judicial Conference, Chief Justice John Roberts took a cheap shot at law professors and law reviews when he intoned:

“Pick up a copy of any law review that you see, and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th Century Bulgaria, or something, which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar.”[1]

Anti-intellectualism is in vogue these days. No doubt, Roberts was jocularly indulging in an over-generalization, but for anyone who tries to keep up with the law reviews, he has a small point. Other judges have rendered similar judgments. Back in 1993, in a cranky opinion piece – in a law review – then Judge Richard A. Posner channeled the liar paradox by criticizing law review articles for “the many silly titles, the many opaque passages, the antic proposals, the rude polemics, [and] the myriad pretentious citations.”[2] In a speech back in 2008, Justice Stephen Breyer noted that “[t]here is evidence that law review articles have left terra firma to soar into outer space.”[3]

The temptation to rationalize, and the urge to advocate for reflective equilibrium between the law as it exists and the law as we think it should be, combine to produce some silly and harmful efforts to rewrite the law as we know it. Jeremy Bentham, Mr. Nonsense-on-Stilts, who sits stuffed in a hallway of University College London, ushered in a now venerable tradition of rejecting tradition and common sense in proposing all sorts of law reforms.[4] In the early 1800s, Bentham, without much in the way of actual courtroom experience, deviled the English bench and bar with sweeping proposals to place evidence law on what he thought was a rational foundation. As with his naïve utilitarianism, Bentham’s contributions to jurisprudence often ignored the realities of human experience and decision making. The Benthamite tradition of anti-tradition is certainly alive and well in the law reviews.

Still, I have a soft spot in my heart for law reviews. Although not peer reviewed, law reviews give law students a tremendous opportunity to learn about writing and scholarship by publishing the work of legal scholars, judges, thoughtful lawyers, and other students. Not all law review articles are nonsense on stilts, but we certainly should have our wits about us when we read immodest proposals from the law professoriate.

*   *   *   *   *   *   *   *   *   *

Professor Edward Cheng has written broadly and insightfully about evidence law, and he certainly has the educational training to do so. Recently, Cheng has been bemused by the expert paradox, which asks how lay persons, without expertise, can evaluate and judge issues of the admissibility, validity, and correctness of expert opinion. The paradox has long haunted evidence law, and it is at center stage in the adjudication of expert admissibility issues, as well as in the trial of technical cases. Cheng has now proposed a radical overhaul to the law of evidence, which would require that we stop asking courts to act as gatekeepers and stop asking juries to determine the validity and correctness of the expert witness opinions before them. Cheng’s proposal would revert to the nose counting of Frye and permit consideration only of whether there is an expert witness consensus to support the proffered opinion for any claim or defense.[5] Or, to put it in terms of Plato’s allegory of the cave, we must learn to be content with the shadows on the wall rather than striving to know the real thing.

When Cheng’s proposal first surfaced, I wrote briefly about why it was a bad idea.[6] Since his initial publication, a law review symposium was assembled to address, and perhaps to celebrate, the proposal.[7] The papers from that symposium are now in print.[8] Unsurprisingly, the papers are largely (though not completely) sympathetic to Cheng’s proposal, and virtually devoid of references to actual experiences of gatekeeping or trials of technical issues.

Cheng contends that the so-called Daubert framework for addressing the admissibility of expert witness opinion is wrong. He does not argue that the existing law, in the form of Federal Rules of Evidence 702 and 703, fails to call for an epistemic standard, both for admitting opinion testimony and for the fact-finders’ assessments of it. There is no effort to claim that four Supreme Court cases, and thousands of lower court decisions, have somehow misconstrued the whole process. Rather, Cheng simply asserts that non-expert judges cannot evaluate the reliability (validity) of expert witness opinions, and that non-expert jurors cannot “reach independent, substantive conclusions about specialized facts.”[9] The law must change to accommodate his judgment.

In his symposium contribution, Cheng expands upon his previous articulation of his proposed “consensus rule.”[10] What is conspicuously absent, however, is any example of failed gatekeeping that excluded valid expert witness opinion. The one example Cheng does give, the appellate decision in Rosen v. Ciba-Geigy Corporation,[11] is illustrative of his project. The expert witness whose opinion was excluded was on the faculty of the University of Chicago medical school; Richard Posner, the appellate judge who wrote the opinion affirming the exclusion, was on the faculty of that university’s law school. Without any discussion of the reports, depositions, hearings, or briefs, Cheng concludes that “the very idea that a law professor would tell medical school colleagues that their assessments were unreliable seems both breathtakingly arrogant and utterly ridiculous.”[12]

Except, of course, very well qualified scientists and physicians advance invalid and incorrect claims all the time. What strikes me as breathtakingly arrogant and utterly ridiculous is the judgment of a law professor, with little to no experience trying or defending Rule 702 and 703 issues, that the “very idea” is arrogant and ridiculous. Aside from its being a petitio principii, we could probably add that the reaction is emotive, uninformed, and uninformative, and that it fails to support the author’s suggestion that “Daubert has it all wrong,” and that “[w]e need a different approach.”

Judges and jurors obviously will never fully understand the scientific issues before them. If and when this lack of epistemic competence is problematic, we should honestly acknowledge that we are beyond the realm of the Constitution’s Seventh Amendment. Since Cheng is fantasizing about what the law should be, why not fantasize about not allowing lay people to decide complex scientific issues at all? Verdicts from jurors who do not have to give reasons for their decisions, and who are not in any sense peers of the scientists whose work they judge, are normatively problematic.

Professor Cheng likens his consensus rule to the way the standard of care is decided in medical malpractice litigation. The analogy is interesting, but hardly compelling, in that it ignores the “two schools of thought” doctrine.[13] In litigation of claims of professional malpractice, the two schools of thought doctrine is a complete defense. As explained by the Pennsylvania Supreme Court,[14] physicians may defend against claims that they deviated from the standard of care by adverting to support for their treatment from a minority of professionals in their field:

“Where competent medical authority is divided, a physician will not be held responsible if in the exercise of his judgment he followed a course of treatment advocated by a considerable number of recognized and respected professionals in his given area of expertise.”[15]

The analogy to medical malpractice litigation seems inapt.

Professor Cheng advertises that he will be giving full-length book treatment to his proposal, and so perhaps my critique is uncharitable in looking at a preliminary (antic?) law review article. Still, his proposal seems to ignore that “general acceptance” renders consensus, when it truly exists, relevant both to the court’s gatekeeping decisions and to the fact finders’ determination of the facts and issues in dispute. Indeed, I have never seen a Rule 702 hearing that did not involve, to some extent, the assertion of a consensus, or the lack thereof.

To the extent that we remain committed to trials of scientific claims, we can see that judges and jurors often can detect inconsistencies, cherry picking, unproven assumptions, and other aspects of the patho-epistemology of expert witness opinions. It takes a community of scientists and engineers to build a space rocket, but any Twitter moron can determine when a rocket blows up on launch. Judges in particular have (and certainly should have) the competence to determine deviations from the scientific and statistical standards of care that pertain to litigants’ claims.

Cheng’s proposal also ignores how difficult and contentious it is to ascertain the existence, scope, and actual content of scientific consensus. In some areas of science, such as occupational and environmental epidemiology and medicine, faux consensuses are set up by would-be expert witnesses for both claimants and defendants. A search of the word “consensus” in the PubMed database yields over a quarter of a million hits. The race to the bottom is on. Replacing epistemic validity with sociological and survey navel gazing seems like a fool’s errand.

Perhaps the most disturbing aspect of Cheng’s proposal is what happens in the absence of consensus.  Pretty much anything goes, a situation that Cheng finds “interesting,” and I find horrifying:

“if there is no consensus, the legal system’s options become a bit more interesting. If there is actual dissensus, meaning that the community is fractured in substantial numbers, then the non-expert can arguably choose from among the available theories. If the expert community cannot agree, then one cannot possibly expect non-experts to do any better.”[16]

Cheng reports that textbooks and other documents “may be both more accurate and more efficient” evidence of consensus.[17] Maybe; maybe not. Textbooks are typically dated by the time they arrive on the shelves, and contentious scientists are not beyond manufacturing certainty or doubt in the form of falsely claimed consensus.

Of course, often, if not most of the time, there will be no identifiable, legitimate consensus for a litigant’s claim at trial. What would Professor Cheng do in this default situation? Here Cheng, fully indulging the frolic, tells us that non-experts

“should hypothetically ask what the expert community is likely to conclude, rather than try to reach conclusions on their own.”[18]

So the default situation transforms jurors into tea-leaf readers of what an expert community, unknown to them, will do if and when there is evidence of a quantum and quality to support a consensus, or when that community gets around to articulating what the consensus is. Why not just toss claims that lack consensus support?


[1] Debra Cassens Weiss, “Law Prof Responds After Chief Justice Roberts Disses Legal Scholarship,” Am. Bar Ass’n J. (July 7, 2011).

[2] Richard A. Posner, “Legal Scholarship Today,” 45 Stanford L. Rev. 1647, 1655 (1993), quoted in Walter Olson, “Abolish the Law Reviews!” The Atlantic (July 5, 2012); see also Richard A. Posner, “Against the Law Reviews: Welcome to a world where inexperienced editors make articles about the wrong topics worse,”
Legal Affairs (Nov. 2004).

[3] Brent Newton, “Scholar’s highlight: Law review articles in the eyes of the Justices,” SCOTUS Blog (April 30, 2012); “Fixing Law Reviews,” Inside Higher Education (Nov. 19, 2012).

[4] “More Antic Proposals for Expert Witness Testimony – Including My Own Antic Proposals” (Dec. 30, 2014).

[5] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022).

[6] “Cheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022); “Further Thoughts on Cheng’s Consensus Rule” (Oct. 3, 2022).

[7] Norman J. Shachoy Symposium, The Consensus Rule: A New Approach to the Admissibility of Scientific Evidence, 67 Villanova L. Rev. (2022).

[8] David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2022); Harry Collins, “The Owls: Some Difficulties in Judging Scientific Consensus,” 67 Villanova L. Rev. 877 (2022); Robert Evans, “The Consensus Rule: Judges, Jurors, and Admissibility Hearings,” 67 Villanova L. Rev. 883 (2022); Martin Weinel, “The Adversity of Adversarialism: How the Consensus Rule Reproduces the Expert Paradox,” 67 Villanova L. Rev. 893 (2022); Wendy Wagner, “The Consensus Rule: Lessons from the Regulatory World,” 67 Villanova L. Rev. 907 (2022); Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022).

[9] Embracing Deference at 876.

[10] Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022) [hereinafter Embracing Deference].

[11] Rosen v. Ciba-Geigy Corp., 78 F.3d 316 (7th Cir. 1996).

[12] Embracing Deference at 859.

[13] “Two Schools of Thought” (May 25, 2013).

[14] Jones v. Chidester, 531 Pa. 31, 40, 610 A.2d 964 (1992).

[15] Id. at 40.  See also Fallon v. Loree, 525 N.Y.S.2d 93, 93 (N.Y. App. Div. 1988) (“one of several acceptable techniques”); Dailey, “The Two Schools of Thought and Informed Consent Doctrine in Pennsylvania,” 98 Dickinson L. Rev. 713 (1994); Douglas Brown, “Panacea or Pandora’s Box: The Two Schools of Medical Thought Doctrine after Jones v. Chidester,” 44 J. Urban & Contemp. Law 223 (1993).

[16] Embracing Deference at 861.

[17] Embracing Deference at 866.

[18] Embracing Deference at 876.

Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it represents as the scientific method of hypothesis testing, however, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to a conclusion that the null hypothesis is not reasonably compatible with the study’s data. In observational studies, the statistical analysis must also acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

“4.10 Statistical Significance

The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
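The distinction the Bench Book muddles can be shown in a few lines of code. What follows is a minimal sketch using simulated data and arbitrary numbers of my own choosing (nothing drawn from the Bench Book or any actual study); it illustrates that a p-value gauges the compatibility of the data with the null hypothesis, and that the 95% attached to a confidence interval is a repeated-sampling coverage rate, not a probability that maps onto any legal burden of proof.

```python
# A minimal sketch with simulated data (nothing from the Bench Book): the p-value
# measures how compatible the observed data are with the null hypothesis; it does
# not establish the alternative as "true." The 95% in a confidence interval is a
# long-run coverage rate over repeated samples, not the probability that any one
# interval, or any one party's claim, is correct.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sigma, n = 10.0, 2.0, 50

# One hypothetical sample, testing H0: mean == 10 (here the null is in fact true)
sample = rng.normal(true_mean, sigma, n)
t_stat, p_value = stats.ttest_1samp(sample, popmean=10.0)
print(f"p-value = {p_value:.3f} (small p means poor compatibility with H0, not proof of H1)")

# Coverage simulation: how often do 95% confidence intervals contain the true mean?
trials, covered = 10_000, 0
for _ in range(trials):
    s = rng.normal(true_mean, sigma, n)
    half_width = stats.t.ppf(0.975, df=n - 1) * s.std(ddof=1) / np.sqrt(n)
    covered += abs(s.mean() - true_mean) <= half_width
print(f"empirical coverage of the 95% intervals: {covered / trials:.3f}")
```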

The Bench Book also runs into problems in interpreting legal decisions, which would seem to be softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision to Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected, or held to be indeterminate, by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism, according to which science is just the placing of subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to help the judicial reader find more in-depth guidance on the statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of this Bench Book was funded by the State Justice Institute (SJI), which was created by federal legislation enacted with the laudatory goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. Six of the directors, a majority, are state court judges; one is a state court administrator; and four are members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, the American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit 501(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges could go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual on Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies of Sciences, the Bench Book was written by lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite the Reference Manual or the Manual for Complex Litigation at any point in its discussion of the federal law of expert witnesses or of scientific and statistical method. Perhaps taxpayers would have been spared substantial expense if state judges were simply encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).

N.J. Supreme Court Uproots Weeds in Garden State’s Law of Expert Witnesses

August 8th, 2018

The United States Supreme Court’s decision in Daubert is now over 25 years old. The idea of judicial gatekeeping of expert witness opinion testimony is even older in New Jersey state courts. The New Jersey Supreme Court articulated a reliability standard before the Daubert case was even argued in Washington, D.C. See Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991). Articulating a standard, however, is something very different from following a standard, and in many New Jersey trial courts, until very recently, the standard was pretty much anything goes.

One counter-example to the general rule of dog-eat-dog in New Jersey was Judge Nelson Johnson’s careful review and analysis of the proffered causation opinions in cases in which plaintiffs claimed that their use of the anti-acne medication isotretinoin (Accutane) caused Crohn’s disease. Judge Johnson, who sits in the Law Division of the New Jersey Superior Court for Atlantic County, held a lengthy hearing and reviewed the expert witnesses’ reliance materials.1 Judge Johnson found that the plaintiffs’ expert witnesses had employed undue selectivity in choosing what to rely upon. Perhaps even more concerning, Judge Johnson found that these witnesses had refused to rely upon reasonably well-conducted epidemiologic studies, while embracing unpublished, incomplete, and poorly conducted studies and anecdotal evidence. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div., Atlantic Cty. Feb. 20, 2015). In response, Judge Johnson politely but firmly closed the gate to conclusion-driven, duplicitous expert witness causation opinions in over 2,000 personal injury cases. “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015).

Aside from resolving over 2,000 pending cases, Judge Johnson’s judgment was of intense interest to all who are involved in pharmaceutical and other products liability litigation. Judge Johnson had conducted a pretrial hearing, sometimes called a Kemp hearing in New Jersey, after the New Jersey Supreme Court’s opinion in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). At the hearing and in his opinion that excluded plaintiffs’ expert witnesses’ causation opinions, Judge Johnson demonstrated a remarkable aptitude for analyzing data and inferences in the gatekeeping process.

When the courtroom din quieted, the trial court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in closing the gate, Judge Johnson protected the judicial process from several bogus and misleading “lines of evidence,” which have become standard ploys to mislead juries in courthouses where the gatekeepers are asleep. Recognizing that not all evidence is on the same analytical plane, Judge Johnson gave case reports short shrift:

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16. Adverse event reports, largely driven by the very litigation in his courtroom, received little credit and were labeled as “not evidentiary in a court of law.” Id. at *14 (quoting the FDA’s description of FAERS).

Judge Johnson recognized that there was a wide range of identified “risk factors” for inflammatory bowel disease, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that there were four medications generally acknowledged to be potential risk factors for inflammatory bowel disease: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics. Understandably, Judge Johnson was concerned that the plaintiffs’ expert witnesses preferred studies unadjusted for potential confounding co-variables and studies that had involved “cherry picking the subjects.” Id. at *18.

Judge Johnson had found that both sides in the isotretinoin cases conceded the relative unimportance of animal studies, but the plaintiffs’ expert witnesses nonetheless invoked the animal studies in the face of the artificial absence of epidemiologic studies that had been created by their cherry-picking strategies. Id.

Plaintiffs’ expert witnesses had reprised a common claimants’ strategy; namely, they claimed that all the epidemiology studies lacked statistical power. Their arguments often ignored that statistical power calculations depend upon a chosen level of statistical significance, a concept to which many plaintiffs’ counsel have virulent antibodies, as well as upon an arbitrarily selected alternative hypothesis of association size. Furthermore, the plaintiffs’ arguments ignored the actual point estimates, most of which were favorable to the defense, and the observed confidence intervals, most of which were reasonably narrow.
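For readers who want to see why those two inputs matter, here is a minimal sketch, with arbitrary illustrative numbers of my own choosing (not the isotretinoin data), of an approximate power calculation for a rate-ratio study. Power cannot even be computed without first fixing a significance level and an assumed alternative association size, which is why a bare claim that the studies “lacked power” carries little information.

```python
# A minimal sketch (hypothetical numbers, chosen for illustration) of an
# approximate power calculation for a two-sided test of a rate ratio,
# using a normal approximation on the log scale.
import math
from scipy.stats import norm

alpha = 0.05                      # chosen significance level
z_crit = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided 0.05 test

assumed_rr = 1.5         # the selected alternative hypothesis (association size)
expected_exposed = 30    # expected case count among the exposed under that alternative
expected_unexposed = 40  # expected case count among the unexposed

# Approximate standard error of the log rate ratio, treating counts as Poisson
se_log_rr = math.sqrt(1 / expected_exposed + 1 / expected_unexposed)

# Power: probability that the test statistic falls beyond the critical value
z_effect = math.log(assumed_rr) / se_log_rr
power = norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)
print(f"approximate power = {power:.2f} at alpha = {alpha} for assumed RR = {assumed_rr}")
# Shrinking alpha or assuming a smaller RR lowers the computed power, which is
# why "the study lacked power" is meaningless without stating both choices.
```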

The defense responded to the bogus statistical arguments by presenting an extremely capable clinical and statistical expert witness, Dr. Stephen Goodman, who explained meta-analysis generally and presented two meta-analyses that he had performed on isotretinoin and inflammatory bowel disease outcomes. Meta-analysis has become an important facet of pharmaceutical and other products liability litigation.

Dr. Goodman explained that the plaintiffs’ witnesses’ failure to perform a meta-analysis was telling when meta-analysis can obviate the plaintiffs’ hyperbolic statistical complaints:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

In re Accutane, 2015 WL 753674, at *8.
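For readers unfamiliar with the technique, the following is a minimal sketch of a fixed-effect, inverse-variance meta-analysis on the log relative-risk scale, using made-up study results; it shows only the basic pooling arithmetic and is not a reconstruction of Dr. Goodman’s analyses.

```python
# A minimal sketch (hypothetical study results, not the isotretinoin data) of a
# fixed-effect, inverse-variance meta-analysis on the log relative-risk scale.
import math
from scipy.stats import norm

z = norm.ppf(0.975)  # ~1.96, for 95% confidence intervals

# Each tuple: (relative risk, lower 95% CI bound, upper 95% CI bound)
studies = [(0.9, 0.6, 1.4), (1.1, 0.7, 1.7), (0.8, 0.5, 1.3)]

weights, weighted_log_rrs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * z)  # back out the SE from the reported CI
    w = 1 / se ** 2                               # inverse-variance weight
    weights.append(w)
    weighted_log_rrs.append(w * math.log(rr))

pooled_log_rr = sum(weighted_log_rrs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lower, upper = (math.exp(pooled_log_rr - z * pooled_se),
                math.exp(pooled_log_rr + z * pooled_se))
print(f"pooled RR = {math.exp(pooled_log_rr):.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```

The design point of the pooling is the one Dr. Goodman made: no single study dominates, and the pooled estimate is typically more precise than any individual study, which is why complaints about the power of each study taken alone lose their force.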

Judge Johnson’s judicial handiwork received non-deferential appellate review from a three-judge panel of the Appellate Division, which reversed the exclusion of Kornbluth and Madigan. In re Accutane Litig., 451 N.J. Super. 153, 165 A.3d 832 (App. Div. 2017). The New Jersey Supreme Court granted the isotretinoin defendants’ petition for appellate review, and the issues were joined over the appropriate standard of appellate review for expert witness opinion exclusions, and the appropriateness of Judge Johnson’s exclusions of Kornbluth and Madigan. A bevy of amici curiae joined in the fray.2

Last week, the New Jersey Supreme Court issued a unanimous opinion, which reversed the Appellate Division’s holding that Judge Johnson had “mistakenly exercised” discretion. Applying its own precedents from Rubanick, Landrigan, and Kemp, and the established abuse-of-discretion standard, the Court concluded that the trial court’s ruling to exclude Kornbluth and Madigan was “unassailable.” In re Accutane Litig., ___ N.J. ___, 2018 WL 3636867 (2018), Slip op. at 79.3

The high court graciously acknowledged that defendants and amici had “good reason” to seek clarification of New Jersey law. Slip op. at 67. In abandoning abuse-of-discretion as its standard of review, the Appellate Division had relied upon a criminal case that involved the application of the Frye standard, which is applied as a matter of law. Id. at 70-71. The high court also appeared to welcome the opportunity to grant review, reverse the intermediate court, and reinforce “the rigor expected of the trial court” in its gatekeeping role. Id. at 67. The Supreme Court, however, did not articulate a new standard; rather it demonstrated at length that Judge Johnson had appropriately applied the legal standards previously announced in New Jersey Supreme Court cases.4

In attempting to defend the Appellate Division’s decision, plaintiffs sought to characterize New Jersey law as somehow different from, and more “liberal” than, the United States Supreme Court’s decision in Daubert. The New Jersey Supreme Court acknowledged that it had never formally adopted the dicta from Daubert about factors that could be considered in gatekeeping, slip op. at 10, but the Court went on to note what disinterested observers had long understood: that the so-called Daubert factors simply flowed from a requirement of sound methodology, and that there was “little distinction” and “not much light” between the Landrigan and Rubanick principles and the Daubert case or its progeny. Id. at 10, 80.

Curiously, the New Jersey Supreme Court announced that the Daubert factors should be incorporated into the New Jersey Rules 702 and 703 and their case law, but it stopped short of declaring New Jersey a “Daubert” jurisdiction. Slip op. at 82. In part, the Court’s hesitance followed from New Jersey’s bifurcation of expert witness standards for civil and criminal cases, with the Frye standard still controlling in the criminal docket. At another level, it makes no sense to describe any jurisdiction as a “Daubert” state because the relevant aspects of the Daubert decision were dicta, and the Daubert decision and its progeny were superseded by the revision of the controlling statute in 2000.5

There were other remarkable aspects of the Supreme Court’s Accutane decision. For instance, the Court put its weight behind the common-sense and accurate interpretation of Sir Austin Bradford Hill’s famous articulation of factors for causal judgment, which requires that sampling error, bias, and confounding be eliminated before assessing whether the observed association is strong, consistent, plausible, and the like. Slip op. at 20 (citing the Reference Manual at 597-99), 78.

The Supreme Court relied extensively on the National Academies’ Reference Manual on Scientific Evidence.6 That reliance is certainly preferable to judicial speculations and fabulations of scientific method. The reliance is also positive, considering that the Court did not look only at the problematic epidemiology chapter, but adverted also to the chapters on statistical evidence and on clinical medicine.

The Supreme Court recognized that the Appellate Division had essentially sanctioned an anything goes abandonment of gatekeeping, an approach that has been all-too-common in some of New Jersey’s lower courts. Contrary to the previously prevailing New Jersey zeitgeist, the Court instructed that gatekeeping must be “rigorous” to “prevent[] the jury’s exposure to unsound science through the compelling voice of an expert.” Slip op. at 68-9.

Not all evidence is equal. “[C]ase reports are at the bottom of the evidence hierarchy.” Slip op. at 73. Extrapolation from non-human animal studies is fraught with external validity problems, and such studies are “far less probative in the face of a substantial body of epidemiologic evidence.” Id. at 74 (internal quotations omitted).

Perhaps most chilling for the lawsuit industry will be the Supreme Court’s strident denunciation of expert witnesses’ selectivity in choosing lesser evidence in the face of a large body of epidemiologic evidence, id. at 77, and their unprincipled cherry picking among the extant epidemiologic publications. Like the trial court, the Supreme Court found that the plaintiffs’ expert witnesses’ inconsistent use of methodological criteria and their selective reliance upon studies (disregarding eight of the nine epidemiologic studies) that favored their taskmasters was the antithesis of sound methodology. Id. at 73, citing with approval In re Lipitor, ___ F.3d ___ (4th Cir. 2018) (slip op. at 16) (“Result-driven analysis, or cherry-picking, undermines principles of the scientific method and is a quintessential example of applying methodologies (valid or otherwise) in an unreliable fashion.”).

An essential feature of the Supreme Court’s decision is that it was unwilling to indulge the common reductionism which holds that “all epidemiologic studies are flawed,” and which thus privileges cherry picking. Not all disagreements between expert witnesses can be framed as differences in interpretation. In re Accutane will likely stand as a bulwark against flawed expert witness opinion testimony in the Garden State for a long time.


1 Judge Nelson Johnson is also the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a spell-binding history of political and personal corruption.

2 In support of the defendants’ positions, amicus briefs were filed by the New Jersey Business & Industry Association, Commerce and Industry Association of New Jersey, and New Jersey Chamber of Commerce; by law professors Kenneth S. Broun, Daniel J. Capra, Joanne A. Epps, David L. Faigman, Laird Kirkpatrick, Michael M. Martin, Liesa Richter, and Stephen A. Saltzburg; by medical associations the American Medical Association, Medical Society of New Jersey, American Academy of Dermatology, Society for Investigative Dermatology, American Acne and Rosacea Society, and Dermatological Society of New Jersey, by the Defense Research Institute; by the Pharmaceutical Research and Manufacturers of America; and by New Jersey Civil Justice Institute. In support of the plaintiffs’ position and the intermediate appellate court’s determination, amicus briefs were filed by political action committee the New Jersey Association for Justice; by the Ironbound Community Corporation; and by plaintiffs’ lawyer Allan Kanner.

3 Nothing in the intervening scientific record called question upon Judge Johnson’s trial court judgment. See, e.g., I.A. Vallerand, R.T. Lewinson, M.S. Farris, C.D. Sibley, M.L. Ramien, A.G.M. Bulloch, and S.B. Patten, “Efficacy and adverse events of oral isotretinoin for acne: a systematic review,” 178 Brit. J. Dermatol. 76 (2018).

4 Slip op. at 9, 14-15, citing Landrigan v. Celotex Corp., 127 N.J. 404, 414 (1992); Rubanick v. Witco Chem. Corp., 125 N.J. 421, 447 (1991) (“We initially took that step to allow the parties in toxic tort civil matters to present novel scientific evidence of causation if, after the trial court engages in rigorous gatekeeping when reviewing for reliability, the proponent persuades the court of the soundness of the expert’s reasoning.”).

5 The Court did acknowledge that Federal Rule of Evidence 702 had been amended in 2000, to reflect the Supreme Court’s decision in Daubert, Joiner, and Kumho Tire, but the Court did not deal with the inconsistencies between the present rule and the 1993 Daubert case. Slip op. at 64, citing Calhoun v. Yamaha Motor Corp., U.S.A., 350 F.3d 316, 320-21, 320 n.8 (3d Cir. 2003).

6 See Accutane slip op. at 12-18, 24, 73-74, 77-78. With respect to meta-analysis, the Reference Manual’s epidemiology chapter is still stuck in the 1980s and the prevalent resistance to poorly conducted, often meaningless meta-analyses. See “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (The Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).

Mississippi High Court Takes the Bite Out of Forensic Evidence

November 3rd, 2017

The Supreme Court’s 1993 decision in Daubert changed the thrust of Federal Rule of Evidence 702, which governs the admissibility of expert witness opinion testimony in both civil and criminal cases. Before Daubert, lawyers who hoped to exclude opinions lacking evidentiary and analytical support turned to the Frye decision on “general acceptance.” Frye, however, was an outdated rule that was rarely applied outside the context of devices. Furthermore, the meaning and application of Frye were unclear. Confusion reigned over whether expert witnesses could survive Frye challenges simply by adverting to their claimed use of a generally accepted science, such as epidemiology, even though their implementation of epidemiologic science was sloppy, incoherent, and invalid.

Daubert noted that Rule 702 should be interpreted in the light of the “liberal” goals of the Federal Rules of Evidence. Some observers rejoiced at the invocation of “liberal” values, but the history of the last 25 years has shown that they really yearned for libertine interpretations of the rules. Liberal, of course, never meant “anything goes.” It is unclear why “liberal” cannot mean restricting evidence that is not likely to advance the truth-finding function of trials.

Criminal versus Civil

Back on April 27, 2009, then-President Barack Obama announced the formation of the President’s Council of Advisors on Science and Technology (PCAST). The mission of PCAST was to advise the President and his administration on science and technology, and their policy implications. Although PCAST was a new council, presidents have had scientific advisors and advisory committees going back to Franklin Roosevelt, in 1933.

On September 20, 2016, PCAST issued an important report to President Obama, Report to the President on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods. Few areas of forensic “science,” beyond DNA matching, escaped the Council’s withering criticism. Bite-mark evidence in particular received a thorough mastication.

The criticism was hardly new. Seven years earlier, the National Academies of Science issued an indictment that forensic scientists had largely failed to establish the validity of their techniques and conclusions, and that the judiciary had “been utterly ineffective in addressing this problem.”1

The response from Obama’s Department of Justice, led by Loretta Lynch, was underwhelming.2 The Trump administration’s response was equally disappointing.3 The Left and the Right appear to agree that science is dispensable when it becomes politically inconvenient. It is a commonplace in the community of evidence scholars that Rule 702 is not applied in criminal cases, for the benefit of criminal defendants, with the same enthusiasm with which it is sometimes, sporadically and inconsistently, applied in civil cases. The Daubert revolution has failed the criminal justice system, perhaps because courts are unwilling to lift the veil on forensic evidence, for fear they may not like what they find.4

A Grudging Look at the Scientific Invalidity of Bite Mark Evidence

Sherwood Brown was convicted of a triple murder in large measure as a result of testimony from Dr. Michael West, a forensic odontologist. West, as well as another odontologist, opined that a cut on Brown’s wrist matched the shape of a victim’s mouth. DNA testing authorized after the conviction, however, rendered West’s opinions edentulous. Samples from inside the female victim’s mouth yielded male DNA, but not that of Mr. Brown.5

Did the PCAST report leave an impression upon the highest court of Mississippi? The Supreme Court of Mississippi vacated Brown’s conviction and remanded for a new trial, in an opinion that a bitemark expert might describe as reading like a bite into a lemon. Brown v. State, No. 2017 DR 00206 SCT, Slip op. (Miss. Sup. Ct. Oct. 26, 2017). The majority could not bring themselves to comment upon Dr. West’s toothless opinions. Three justices would have kicked the can down to the trial judge by voting to grant a new hearing without vacating Brown’s convictions. The decision seems mostly predicated on the strength of the DNA evidence, rather than the invalidity of the bite mark evidence. Mr. Brown will probably be vindicated, but bite mark evidence will continue to mislead juries, with judicial imprimatur.


1 National Research Council, Committee on Identifying the Needs of the Forensic Sciences Community, Strengthening Forensic Science in the United States: A Path Forward 53 (2009).

2 See Jordan Smith, “FBI and DoJ Vow to Continue Using Junk Science Rejected by White House Report,” The Intercept (Sept. 23, 2016); Radley Balko, “When Obama wouldn’t fight for science,” Wash. Post (Jan. 4, 2017).

3 See Radley Balko, “Jeff Sessions wants to keep forensics in the Dark Ages,” Wash. Post (April 11, 2017); Jessica Gabel Cino, “Session’s Assault on Forensic Science Will Lead to More Unsafe Convictions,” Newsweek (April 20, 2017).

4 See, e.g., Paul C. Giannelli, “Forensic Science: Daubert’s Failure,” Case Western Reserve L. Rev. (2017) (“in press”).

Every Time a Bell Rings

July 1st, 2017

“Every time a bell rings, an angel gets his wings.”
Zuzu Bailey

And every time a court issues a non-citable opinion, a judge breaks fundamental law. Whether it wants to or not, a common law court, in deciding a case, creates precedent, and an expectation and a right that other, similarly situated litigants will be treated similarly. Deciding a case while prohibiting its citation deprives future litigants of due process and equal protection of the law. If that makes for more citable opinions, and more work for judges and litigants, so be it; that is what our constitution requires.

Back in 2015, Judge Bernstein issued a ruling in a birth defects case in which the mother claimed to have taken sertraline during pregnancy, and alleged that this medication use caused her child to be born with congenital malformations. Applying what Pennsylvania courts insist is a Frye standard, Judge Bernstein excluded the proffered expert witness testimony that attempted to draw a causal connection between the plaintiff’s birth defect and the mother’s medication use. Porter v. SmithKline Beecham Corp., No. 03275, 2015 WL 5970639 (Phila. Cty. Pa. Ct. C.P. Oct. 5, 2015) (Mark I. Bernstein, J.). Judge Bernstein has since left the bench, but he was and is a respected commentator on Pennsylvania evidence1, even if he was generally known for his pro-plaintiff views on many legal issues. Bernstein’s opinion in Porter was a capable demonstration of how Pennsylvania’s Frye rule can be interpreted to reach essentially the same outcome that is required by Federal Rule of Evidence 702. See “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015); In re Zoloft Prod. Liab. Litig., No. 16-2247, __ F.3d __, 2017 WL 2385279, 2017 U.S. App. LEXIS 9832 (3d Cir. June 2, 2017) (affirming exclusion of dodgy statistical analyses and opinions, and the trial court’s entry of summary judgment on claims that sertraline causes birth defects).

In May of this year, the Pennsylvania Superior Court affirmed Judge Bernstein’s judgment, and essentially approved and adopted his reasoning. Porter v. SmithKline Beecham Corp., No. 3516 EDA 2015, 2017 WL 1902905 (Pa. Super. May 8, 2017). What the Superior Court giveth, the Superior Court taketh away. The Porter decision is franked as a “Non-Precedential Decision – See Superior Court I.O.P. 65.37.”

What is this Internal Operating Procedure that makes the Superior Court think that it can act and decide cases without creating precedent? Here is the relevant text from the Pennsylvania Code:

  1. An unpublished memorandum decision shall not be relied upon or cited by a Court or a party in any other action or proceeding, except that such a memorandum decision may be relied upon or cited
     a. when it is relevant under the doctrine of law of the case, res judicata, or collateral estoppel, and
     b. when the memorandum is relevant to a criminal action or proceeding because it recites issues raised and reasons for a decision affecting the same defendant in a prior action or proceeding.

210 Pa. Code § 65.37. Unpublished Memoranda Decisions. So, in other words, it is secret law.

No-citation and no-precedent rules are deeply problematic, and have attracted a great deal of scholarly attention2. And still, courts engage in this problematic practice. Prohibiting citation of Superior Court decisions in Pennsylvania is especially problematic in a state in which the highest court hears relatively few cases, and in which the Justices involve themselves in internecine disputes. As other commentators have noted, prohibiting citation to prior decisions admitting or excluding expert witness testimony stunts the development of an area of evidence law in which judges and litigants are often confused and in need of guidance. William E. Padgett, “‘Non-Precedential’ Unpublished Decisions in Daubert and Frye Cases, Often Silenced,” Nat’l L. Rev. (2017). The abuse of judge-made secret law from uncitable decisions has been abolished in the federal appeals courts for over a decade3. It is time for the state courts to follow suit.


1 See, e.g., Mark I. Bernstein, Pennsylvania Rules of Evidence (2017).

2 See Erica Weisgerber, “Unpublished Opinions: A Convenient Means to an Unconstitutional End,” 97 Georgetown L.J. 621 (2009); Rafi Moghadam, “Judge Nullification: A Perception of Unpublished Opinions,” 62 Hastings L.J. 1397 (2011); Norman R. Williams, “The Failings of Originalism: The Federal Courts and the Power of Precedent,” 37 U.C. Davis L. Rev. 761 (2004); Dione C. Greene, “The Federal Courts of Appeals, Unpublished Decisions, and the ‘No-Citation Rule’,” 81 Indiana L.J. 1503 (2006); Vincent M. Cox, “Freeing Unpublished Opinions from Exile: Going Beyond the Citation Permitted by Proposed Federal Rule of Appellate Procedure 32.1,” 44 Washburn L.J. 105 (2004); Sarah E. Ricks, “The Perils of Unpublished Non-Precedential Federal Appellate Opinions: A Case Study of The Substantive Due Process State-Created Danger Doctrine in One Circuit,” 81 Wash. L. Rev. 217 (2006); Michael J. Woodruff, “State Supreme Court Opinion Publication in the Context of Ideology and Electoral Incentives,” New York University Department of Politics (March 2011); Michael B. W. Sinclair, “Anastasoff versus Hart: The Constitutionality and Wisdom of Denying Precedential Authority to Circuit Court Decisions”; Thomas Healy, “Stare Decisis as a Constitutional Requirement,” 104 W. Va. L. Rev. 43 (2001); David R. Cleveland & William D. Bader, “Precedent and Justice,” 49 Duq. L. Rev. 35 (2011); Johanna S. Schiavoni, “Who’s Afraid of Precedent,” 49 UCLA L. Rev. 1859 (2002); Salem M. Katsh and Alex V. Chachkes, “Constitutionality of ‘No-Citation’ Rules,” 3 J. App. Prac. & Process 287 (2001); David R. Cleveland, “Appellate Court Rules Governing Publication, Citation, and Precedent of Opinions: An Update,” 16 J. App. Prac. & Process 257 (2015). See generally The Committee for the Rule of Law (website) (collecting scholarship and news on the issue of unpublished and supposedly non-precedential opinions). The problem even has its own Wikipedia page. See “Non-publication of legal opinions in the United States.”

3 See Fed. R. App. Proc. 32.1 (prohibiting federal courts from barring or limiting citation to unpublished federal court opinions, effective after Jan. 1, 2007).

New York Rejects the Asbestos Substantial Factor Ruse (Juni Case)

March 2nd, 2017

I recall encountering Dr. Joseph Sokolowski in one of my first asbestos personal injury cases, 32 years ago. Dr. Sokolowski was a pulmonary specialist in Cherry Hill, New Jersey, and he showed up for plaintiffs in cases in south Jersey as well as in Philadelphia. Plaintiffs’ counsel sought him out for his calm and unflappable demeanor, stentorian voice, and propensity for over-interpreting chest radiographs. (Dr. Sokolowski failed the NIOSH B-Reader examination.)

At the end of his direct examination, the plaintiff’s lawyer asked Dr. Sokolowski the de rigueur “substantial factor” question, which in 1985 had already become a customary feature of such testimony. And Dr. Sokolowski delivered his well-rehearsed answer: “Each and every exposure to asbestos was a substantial factor in causing the plaintiff’s disease.”

My cross-examination picked at the cliché. Some asbestos inhaled was then exhaled. Yes. Some asbestos inhaled was brought up and swallowed. Yes. Asbestos that was inhaled and retained near the hilum did not participate in causing disease at the periphery of the lung. Yes. And so on, and so forth. I finished with my rhetorical question, always a dangerous move, “So you have no way to say that each and every exposure to asbestos actually participated in causing the plaintiff’s disease?” Dr. Sokolowski was imperceptibly thrown off his game, but he confessed error by claiming the necessity to cover up the gap in the evidence. “Well, we have no way to distinguish among the exposures so we have to say all were involved.”

Huh? What did he say? I moved to strike the witness’s testimony as irrational and incoherent. How can a litigant affirmatively support a claim by asserting his ignorance of the necessary foundational facts? The trial judge overruled my motion with alacrity, and the parties continued with the passion play called asbestos litigation. The judge was perhaps simply eager to get on with his docket of thousands of asbestos cases, but at least Dr. Sokolowski and I recognized that the “substantial factor” testimony was empty rhetoric, with no scientific or medical basis.

Sadly, the “substantial factor” falsehood was already well ensconced in 1985, in Pennsylvania law, as well as in the law of most other states. Now, 32 years later, with ever more peripheral defendants, each involving less significant, if any, asbestos exposure, the “substantial factor” ruse is beginning to unravel.1

Juni v. A.O. Smith Water Products Co.

Arthur Juni was a truck and car mechanic, who worked on the clutches, brakes, and manifold gaskets of Ford trucks. Juni claimed to have sustained asbestos exposure in this work, as well as in other aspects of his work career. In 2012, Juni was diagnosed with mesothelioma; he died in 2014. Juni v. A.O. Smith Water Products Co., No. 190315/12, 2458, 2457, 2017 N.Y. Slip Op. 01523, at *1 (N.Y. App. Div. 1st Dep’t Feb. 28, 2017).

Juni sued multiple defendants in New York Supreme Court, for New York County. Most of the defendants settled, but Ford Motor Company tried the case against the plaintiff’s widow. Both sides called multiple expert witnesses, whose testimony disputed whether the chrysotile asbestos in Ford’s brakes and clutches could cause mesothelioma. The jury returned a verdict in favor of the plaintiff, but the trial court granted judgment notwithstanding the verdict, on the ground that the evidence failed to support the causation verdict. Id. at *1; see Juni v. A.O. Smith Water Prod., 48 Misc. 3d 460, 11 N.Y.S.3d 415 (N.Y. Sup. Ct. 2015).

Earlier this week, the First Department of the New York Appellate Division affirmed the judgment for Ford. 2017 N.Y. Slip Op. 01523. The Appellate Division refused to approve plaintiffs’ theory of cumulative exposure to show causation. The plaintiffs’ expert witnesses, Drs. Jacqueline Moline and Stephen Markowitz, both asserted that even a single asbestos exposure was a “substantial contributing” cause. The New York appellate court, like the trial court before it, saw through the ruse, and declared that both expert witnesses had failed to support their assertions.

The “Asbestos Exception” Rejected

Although New York has never enacted a codified set of evidence rules, and has never expressly adopted the rule of Daubert v. Merrell Dow Pharmaceuticals, the New York Court of Appeals has held that there are limits to the admissibility of expert witness opinion testimony. See Parker v. Mobil Oil Corp., 7 N.Y.3d 434 (2006); Cornell v. 360 W. 51st St. Realty, LLC, 22 N.Y.3d 762 (2014); Sean Reeps v. BMW of North Am., LLC, 26 N.Y.3d 801 (2016). In Juni, the Appellate Division, First Department, firmly rejected any suggestion that plaintiffs’ expert witnesses in asbestos cases are privileged against challenges to admissibility or sufficiency simply because the challenges occur in an asbestos case. The plaintiff’s special pleading that asbestos causation of mesothelioma is too difficult to prove was invalidated by the success of other plaintiffs, in other cases, in showing that a specific occupational exposure was sufficient to cause mesothelioma.

The Appellate Division also rejected the plaintiff’s claim, echoed in the dissenting opinion of one lone judge, that there exists a “consensus from the medical and scientific communities that even low doses of asbestos exposure, above that in the ambient environment, are sufficient to cause mesothelioma.” The Court held that this supposed consensus is not material to the claims of a particular plaintiff against a particular defendant, especially when the particular exposure circumstance is not associated with mesothelioma in most of the relevant studies. In Juni, the defense had presented many studies that failed to show any association between occupational brake work and mesothelioma. The court might also have added that any characterization of an exposure as low is extremely ambiguous, depending upon the implicit comparison being made with other exposures. It is impossible to fit a particular plaintiff’s exposure onto a scale of low, medium, and high without some further context.

Single Exposure Sufficiency Rejected

The evidence that chrysotile itself causes mesothelioma remains weak, but the outcome of Juni turned not on the broad general causation question, but on the question whether even suggestive evidence of chrysotile causation had been established for the exposure circumstances of an automobile mechanic, such as Mr. Juni. Plaintiffs’ expert witnesses maintained that Juni’s cumulative asbestos exposures caused his mesothelioma, but they had no meaningful quantification or even reasonable estimate of his exposure.

Citing the Court of Appeals decision in Reeps, the Appellate Division held that plaintiff’s expert witnesses’ causation opinions must be supported by reasonable quantification of the plaintiff’s exposure, or by some other scientific method, such as mathematical modeling based upon actual work history, or comparison of the plaintiff’s claimed exposure with the exposure of workers in reported studies that establish a relevant risk from those workers’ exposure. In the Juni case, however, there were no exposure measurements or scientific models, and the comparison with workers doing similar tasks failed to show a causal relationship between the asbestos exposure in those tasks and mesothelioma.

Expert Witness Admissibility and Sufficiency Requires Evaluation of Both Direct and Cross-examination Testimony and Relied Upon Studies

The Juni decision teaches another important lesson for challenging expert witness testimony in New York: glib generalizations delivered on direct examination must be considered in the light of admissions and concessions made on cross-examination, and the entire record. In Juni, the plaintiffs’ expert witnesses, Jacqueline Moline and Stephen Markowitz, asserted that asbestos in Ford’s friction products was a cause of plaintiff’s mesothelioma. Cross-examination, however, revealed that these assertions were lacking in factual support.

Cumulative Exposure

On cross-examination, the plaintiffs’ expert witnesses’ statements about exposure levels proved meaningless. Moline attempted to equate visible dust with sufficient asbestos exposure to cause disease, but she conceded on cross-examination that studies had shown that 99% of brake lining debris was not asbestos. Most of the dust observed from brake drums is composed of resins used to manufacture brake linings and pads. The heat and pressure of the brake drum cause much of the remaining chrysotile to transform into a non-fibrous mineral, forsterite.

Similarly, Markowitz had to acknowledge that chrysotile has a “serpentine” structure, with individual fibers curling in a way that makes deeper penetration into the lungs more difficult. Furthermore, chrysotile, a hydrated magnesium silicate, melts in the lungs, not in the hands. The human lung can clear particulates, and so there is no certainty that remaining chrysotile fibers from brake lining exposures ever reach the periphery of the lung, where they could interact with the pleura, the tissue in which mesothelioma arises.

Increased Risk, “Linking,” and Association Are Not Causation – Exculpatory Epidemiologic Studies

When pressed, plaintiffs’ expert witnesses lapsed into characterizing the epidemiologic studies of brake and automobile mechanics as showing increased risk or association, not causation. Causation, not association, however, was the issue. The witnesses’ invocation of weasel words, such as “increased risk,” “linkage,” and “association,” is insufficient in itself to show the requisite causation in long-latency toxic exposure cases. For automobile mechanics, even the claimed association was weak at best, with plaintiffs’ expert witnesses having to acknowledge that 21 of 22 epidemiologic studies failed to show an association between automobile mechanics’ asbestos exposure and risk of mesothelioma.

The Juni case was readily distinguishable from other cases in which Markowitz was able to identify epidemiologic studies showing that visible dust from a specific product contained sufficient respirable asbestos to cause mesothelioma. Id. (citing Caruolo v. John Crane, Inc., 226 F.3d 46 (2d Cir. 2000)). As the Appellate Division put the matter, there was “no valid line of reasoning or permissible inference which could have led the jury to reach its result.” Asbestos plaintiffs, as well, must satisfy the standards for exposure evidence and causal inference set out in the New York Court of Appeals decisions in Parker v. Mobil Oil Corp., 7 N.Y.3d 434 (2006), and Cornell v. 360 W. 51st St. Realty, LLC, 22 N.Y.3d 762 (2014).

New York now joins other discerning courts in rejecting regulatory rationales of “no safe exposure” and default “linear no threshold” exposure-response models as substitutes for inferring specific causation.[2] A foolish consistency may be the hobgoblin of little minds, but in jurisprudence, consistency is often the bedrock for the rule of law.


[1] The ruse of passing off “no known safe exposure” as evidence that even the lowest exposure was unsafe has been going on for a long time, but not all judges are snookered by this rhetorical sleight of hand. See, e.g., Bostic v. Georgia-Pacific Corp., 439 S.W.3d 332, 358 (Tex. 2014) (“the failure of science to isolate a safe level of exposure does not prove specific causation”).

[2] See, e.g., Bostic v. Georgia-Pacific Corp., 439 S.W.3d 332, 358 (Tex. 2014) (failing to identify safe levels of exposure does not suffice to show specific causation); Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1165-66 (E.D. Wash. 2009) (rejecting a “no threshold” model of exposure-response as unfalsifiable and unvalidated, and immaterial to the causation claims); Pluck v. BP Oil Pipeline Co., 640 F.3d 671, 679 (6th Cir. 2011) (rejecting claim that plaintiff’s exposure to benzene “above background level,” but below EPA’s maximum permissible contaminant level, caused her cancer); Newkirk v. ConAgra Foods, Inc., 727 F. Supp. 2d 1006, 1015 (E.D. Wash. 2010) (rejecting Dr. David Egilman’s proffered testimony on specific causation based upon his assertion that there was no known safe level of diacetyl exposure).

Omalu and Science — A Bad Weld

October 22nd, 2016

Bennet Omalu is a star of the silver screen and in the minds of conspiratorial thinkers everywhere. Actually Will Smith[1] stood in for Omalu in the movie Concussion (2015), but Smith’s skills as an actor bring out the imaginary best in Omalu’s persona.

Chronic Traumatic Encephalopathy (CTE) is the name that Bennet Omalu, a pathologist, gave to the traumatic brain injuries resulting from repeated concussions experienced by football players.[2] The concept is not particularly new; the condition of dementia pugilistica had been described previously in boxers. What was new with Omalu was his fervid imagination and his conspiratorial view of the world.[3] The movie Concussion actually gives an intimation of some of the problems in Omalu’s scientific work. See, e.g., Daniel Engber, “Concussion Lies: The film about the NFL’s apparent CTE epidemic feeds the pervasive national myths about head trauma,” Slate (Dec. 21, 2015); Bob Hohler, “BU rescinds award to ‘Concussion’ trailblazer,” Boston Globe (June 16, 2016).

Omalu has more dubious claims to fame. He has not cabined his unique, stylized approach to science to the subject of head trauma. Although Omalu is a pathologist, not a clinician, he recently weighed in with observations that Hillary Clinton was definitely unwell. Indeed, Omalu has now made a public nuisance of himself by floating conspiratorial theories that Hillary Clinton has been poisoned. Cindy Boren, “The man who discovered CTE thinks Hillary Clinton may have been poisoned,” Wash. Post (Sept. 12, 2016); Christine Rushton, “‘Concussion’ doctor suggests without evidence that poison a factor in Clinton’s illness,” Los Angeles Times (Sept. 13, 2016).

In the courtroom, in civil cases, Omalu has a poor track record for scientific rigor. The United States Court of Appeals for the Third Circuit, which can be tough on and skeptical of Rule 702 expert witness exclusions, readily affirmed an exclusion of Omalu’s testimony in Pritchard v. Dow Agro Sciences, 705 F. Supp. 2d 471 (W.D. Pa. 2010), aff’d, 430 F. App’x 102, 104 (3d Cir. 2011). In Pritchard, Omalu was caught misrepresenting the statistical data from published studies in a so-called toxic tort case. Fortunately, robust gatekeeping was able to detoxify the proffered testimony.[4]

More recently, Omalu was at it again in a case in which a welder claimed that exposure to welding and solvent fumes caused him to develop Parkinson’s disease. Brian v. Association of Independent Oil Distributors, No. 2011-3413, Westmoreland Cty. Ct. Common Pleas, Order of July 18, 2016. [cited here as Order].

James G. Brian developed Parkinson’s disease (PD) after 30 years of claimed exposure to welding and solvent fumes. It is America, so Brian sued Lincoln Electric and various chemical companies on his theory that his PD was caused by his welding and solvent exposures, either alone or together. Now, although manganese in very high exposures can cause a distinctive movement disorder, manganism, manganese in welding fume does not cause PD in humans.[5] Omalu was undeterred, however, and proceeded by conjecturing that welding fume interacted with solvent fumes to cause Brian’s PD.

At the outset of the case, Brian intended to present the testimony of three expert witnesses: Bennet Omalu, a pathologist; Richard A. Parent, a toxicologist; and Jordan Loyal Holtzman, a pharmacologist. Parent commenced giving a deposition, but became so uncomfortable with his own opinion that he put up a white flag at the deposition and withdrew from the case. On sober reflection, Holtzman also withdrew from the case.

Omalu was left alone, to make the case on general and specific causation. Defendant Lincoln Electric and others moved to exclude Omalu, under Pennsylvania’s standard for admissibility of expert witness opinion testimony, which is based upon a patch-work version of Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).

Invoking a quirky differential diagnosis, and an idiosyncratic reading of Sir Austin Bradford Hill’s work, Omalu defended his general and specific causation opinions. After briefing and a viva voce hearing, President Judge Richard E. McCormick ruled that Omalu had misapplied both methodologies in reaching his singular opinion. Order at 8.

Omalu did not make the matter easy for Judge McCormick. There was no question that Brian had PD.  Every clinician who had examined him made the diagnosis. Knowing that PD is generally regarded as idiopathic, with no known cause, Omalu thought up a new diagnosis: chronic toxic encephalopathy.

When confronted with the other clinicians’ diagnoses, Omalu did not dispute the diagnosis of PD. Instead, he attempted to evade the logical implications of the diagnosis of idiopathic PD by continually trying to change the terminology to suit his goals. Judge McCormick saw through Omalu’s semantic evasions, which bolstered the case for excluding him at trial.

Madness to His Method

In scrutinizing Omalu’s opinions, Judge McCormick found more madness than method. Omalu claimed that he randomly selected studies to rely upon, and he failed to explain the strengths and weaknesses of the cited studies when he formed his opinion.

Despite his claim to have randomly selected studies, Omalu remarkably managed to ignore epidemiologic studies that were contrary to his causal conclusions. Order at 9. Indeed, Omalu missed more than half the published studies on welding and PD. Not surprisingly, Omalu did not record his literature search; nor could he explain, in deposition or at the court hearing, his inclusionary or exclusionary criteria for pertinent studies. Id. at 10. When confronted about his “interaction” opinions concerning welding and solvent fumes, Omalu cited several studies, none of which measured or assessed combined exposures. Some of the papers flatly contradicted Omalu’s naked assertions. Id. at 9.

Judge McCormick rejected Omalu’s distorted invocation of the Bradford Hill factors to support a causal association when no association had yet been found. The court quoted from the explanation provided by Prof. James A. Mortimer, the defense neuroepidemiologist, at the Frye hearing:

“First, the Bradford Hill criteria should not be applied until you have ruled out a chance association, which [Omalu] did not do. In fact, as I will point out, carefully done epidemiologic studies will show there is no increased risk of Parkinson’s disease with exposure to welding fume and/or solvents, therefore the application of these criteria is inappropriate.”

Order at 11, citing to and quoting from Frye Hearing at 318 (Oct. 14, 2015).

When cornered, Omalu asserted that he never claimed that Mr. Brian’s PD was caused by welding or solvents; rather, his contention was simply that occupational exposures had created a “substantial increased risk” of PD. Id. at 14. Risk creation, however, is not causation; and Omalu had not shown even unquantified evidence of an increased risk before Brian developed PD. The court found that Omalu had not used any appropriate methodology with respect to general causation. Id. at 14.

Specific Causation

Undaunted, Omalu further compromised his credibility by claiming that Bradford Hill’s factors allowed him to establish specific causation, even in the absence of general causation. Id. at 12. Omalu suggested that he had performed a differential diagnosis, even though he is not a clinician, and as a pathologist had not evaluated any brain tissue. Id. at 10. The court deftly saw through these ruses. Id. at 11.

Judge McCormick’s conclusion should be a precautionary lesson to future courts that must gatekeep Omalu’s opinions, or Omalu-like opinions:

“In conclusion, we agree with the Defendants that while Dr. Omalu’s stated methodology in this case is generally accepted in the medical and scientific community, Dr. Omalu failed to properly apply it. He misused and demonstrated a lack of understanding of the Bradford Hill criteria and the Schaumburg criteria when he attempted to employ these methodologies to conduct a differential diagnosis or differential etiology analysis.”

Id. at 16. Gatekeeping is sometimes viewed as more difficult in Frye jurisdictions, but the exclusion of Omalu shows that it can be achieved when expert witnesses deviate materially from scientifically standard methodology.


[1] For other performances by Will Smith in this vein, see Six Degrees of Separation (1993); Focus (2015).

[2] See Bennet I. Omalu, Steven DeKosky, Ryan Minster, M. Ilyas Kamboh, Ronald Hamilton, Cyril H. Wecht, “Chronic Traumatic Encephalopathy in a National Football League Player, Part I,” 57 Neurosurgery 128 (2005); Bennet I. Omalu, Steven DeKosky, Ronald Hamilton, Ryan Minster, M. Ilyas Kamboh, Abdulrezak Shakir, and Cyril H. Wecht, “Chronic Traumatic Encephalopathy in a National Football League Player, Part II,” 59 Neurosurgery 1086 (2006).

[3] See Jeanne Marie Laskas, “The Doctor the NFL Tried to Silence,” Wall St. J. (Nov. 24, 2015).

[4] See “Pritchard v. Dow Agro – Gatekeeping Exemplified” (Aug. 25, 2014).

[5] See, e.g., Marianne van der Mark, Roel Vermeulen, Peter C.G. Nijssen, Wim M. Mulleners, Antonetta M.G. Sas, Teus van Laar, Anke Huss, and Hans Kromhout, “Occupational exposure to solvents, metals and welding fumes and risk of Parkinson’s disease,” 21 Parkinsonism Relat. Disord. 635 (2015); James Mortimer, Amy Borenstein & Laurene Nelson, “Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis,” 79 Neurology 1174 (2012); Joseph Jankovic, “Searching for a relationship between manganese and welding and Parkinson’s disease,” 64 Neurology 2012 (2005).

Judge Bernstein’s Criticism of Rule 703 of the Federal Rules of Evidence

August 30th, 2016

Federal Rule of Evidence 703 addresses the bases of expert witness opinions, and it is a mess. The drafting of this Rule is particularly sloppy. The Rule tells us, among other things, that:

“[i]f experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject, they need not be admissible for the opinion to be admitted.”

This sentence of the Rule has a simple grammatical and logical structure:

If A, then B;

where A contains the concept of reasonable reliance, and B tells us the consequence that the relied upon material need not be itself admissible for the opinion to be admissible.

But what happens if the expert witness has not reasonably relied upon certain facts or data; i.e., ~A?  The conditional statement as given does not describe the outcome in this situation. We are not told what happens when an expert witness’s reliance in the particular field is unreasonable.  ~A does not necessarily imply ~B. Perhaps the drafters meant to write:

B if and only if A.

But the drafters did not give us the above rule, and they have left judges and lawyers to make sense of their poor grammar and bad logic.
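For readers who want the logic laid bare, the following minimal sketch (my own illustration, not anything drawn from the Rule or its advisory committee notes) enumerates the truth table for the conditional as drafted. Both rows in which reliance is unreasonable (A false) satisfy “if A, then B,” so the text as written permits either admitting or excluding the opinion in that situation; only the biconditional would compel exclusion when reliance is unreasonable.

```python
# Illustrative only: A = reasonable reliance in the particular field;
# B = the opinion is admitted despite inadmissible bases.
from itertools import product

for A, B in product([True, False], repeat=2):
    conditional = (not A) or B     # the Rule as written: "if A, then B"
    biconditional = (A == B)       # what the drafters may have meant: "B if and only if A"
    print(f"A={A!s:<5}  B={B!s:<5}  A->B={conditional!s:<5}  A<->B={biconditional}")
```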

And what happens when the reliance material is independently admissible, say as a business record, a government report, or a first-person observation? May an expert witness rely upon admissible facts or data, even when a reasonable expert would not do so? Again, it seems that the drafters were trying to limit expert witness reliance to some rule of reason, but by tying reliance to the admissibility of the reliance material, they managed to conflate two separate notions.

And why is reliance judged by the expert witness’s particular field? Fields of study and areas of science and technology overlap. In some fields, it is commonplace for putative experts to rely upon materials that would not be given the time of day in other fields. Should we judge the reasonableness of homeopathic healthcare providers’ reliance by the standards of reasonableness in homeopathy, such as it is, or should we judge it by the standards of medical science? The answer to this rhetorical question seems obvious, but the drafters of Rule 703 introduced a Balkanized concept of science and technology by invoking the notion of the expert witness’s “particular field.” The standards of Rule 702 are “knowledge” and “helpfulness,” neither of which is constrained by “particular fields.”

And then Rule 703 leaves us in the dark about how to handle an expert witness’s reliance upon inadmissible facts or data. According to the Rule, “the proponent of the opinion may disclose [the inadmissible facts or data] to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.” And yet, disclosing inadmissible facts or data would always be highly prejudicial because they represent facts and data that the jury is forbidden to consider in reaching its verdict. Nonetheless, trial judges routinely tell juries that an expert witness’s opinion is no better than the facts and data on which the opinion is based. If the facts and data are inadmissible, the jury must disregard them in its fact finding; and if an expert witness’s opinion is based upon facts and data that are to be disregarded, then the expert witness’s opinion must be disregarded as well. Or so common sense and respect for the trial’s truth-finding function would suggest.

The drafters of Rule 703 do not shoulder all the blame for the illogic and bad results of the rule. The judicial interpretation of Rule 703 has been sloppy, as well. The Rule’s “plain language” tells us that “[a]n expert may base an opinion on facts or data in the case that the expert has been made aware of or personally observed.”  So expert witnesses should be arriving at their opinions through reliance upon facts and data, but many expert witnesses rely upon others’ opinions, and most courts seem to be fine with such reliance.  And the reliance is often blind, as when medical clinicians rely upon epidemiologic opinions, which in turn are based upon data from studies that the clinicians themselves are incompetent to interpret and critique.

The problem of reliance, as contained within Rule 703, is deep and pervasive in modern civil and criminal trials. In the trial of health effect claims, expert witnesses rely upon epidemiologic and toxicologic studies that contain multiple layers of hearsay, often with little or no validation of the trustworthiness of many of those factual layers. The inferential methodologies are often obscure, even to the expert witnesses, and trial counsel are frequently untrained and ill prepared to expose the ignorance and mistakes of the expert witnesses.

Back in February 2008, I presented at an ALI-ABA conference on expert witness evidence about the problems of Rule 703.[1] I laid out a critique of Rule 703, which showed that the Rule permitted expert witnesses to rely upon “castles in the air.” A distinguished panel of law professors and judges seemed to agree; at least no one offered a defense of Rule 703.

Shortly after I presented at the ALI-ABA conference, Professor Julie E. Seaman published an insightful law review article in which she framed the problems of Rule 703 as constitutional issues.[2] Encouraged by Professor Seaman’s work, I wrote up my comments on Rule 703 for an ABA publication,[3] and I have updated those comments in the light of subsequent judicial opinions,[4] as well as the failure of the Third Edition of the Reference Manual on Scientific Evidence to address the problems.[5]

===================

Judge Mark I. Bernstein is a trial court judge for the Philadelphia County Court of Common Pleas. I never tried a case before Judge Bernstein, who has announced his plans to leave the Philadelphia bench after 29 years of service,[6] but I had heard from some lawyers (on both sides of the bar) that he was a “pro-plaintiff” judge. Some years ago, I sat next to him on a CLE panel on trial evidence, at which he disparaged judicial gatekeeping,[7] which seemed to support his reputation. The reality seems to be more complex. Judge Bernstein has shown that he can be a critical consumer of complex scientific evidence, and an able gatekeeper under Pennsylvania’s crazy quilt-work pattern of expert witness law. For example, in a hotly contested birth defects case involving sertraline, Judge Bernstein held a pre-trial evidentiary hearing and looked carefully at the proffered testimony of Michael D. Freeman, a chiropractor and self-styled “forensic epidemiologist,” and Robert Cabrera, a teratologist. Applying a robust interpretation of Pennsylvania’s Frye rule, Judge Bernstein excluded Freeman’s and Cabrera’s proffered testimony, and entered summary judgment for defendant Pfizer, Inc. Porter v. Smithkline Beecham Corp., 2016 WL 614572 (Phila. Cty. Ct. Com. Pl.). See “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015).

And Judge Bernstein has shown that he is one of the few judges who take seriously Rule 705’s requirement that expert witnesses produce their relied-upon facts and data at trial, on cross-examination. In Hansen v. Wyeth, Inc., Dr. Harris Busch, a frequent testifier for plaintiffs, glibly opined about the defendant’s negligence. On cross-examination, he adverted to the volumes of depositions and documents he had reviewed, but when defense counsel pressed, the witness was unable to produce and show exactly what he had reviewed. After the jury returned a verdict for the plaintiff, Judge Bernstein set the verdict aside because of the expert witness’s failure to comply with Rule 705. Hansen v. Wyeth, Inc., 72 Pa. D. & C. 4th 225, 2005 WL 1114512, at *13, *19 (Phila. Ct. Common Pleas 2005) (granting new trial on post-trial motion), 77 Pa. D. & C. 4th 501, 2005 WL 3068256 (Phila. Ct. Common Pleas 2005) (opinion in support of affirmance after notice of appeal).

In a recent law review article, Judge Bernstein has issued a withering critique of Rule 703. See Hon. Mark I. Bernstein, “Jury Evaluation of Expert Testimony Under the Federal Rules,” 7 Drexel L. Rev. 239 (2015). Judge Bernstein is clearly dissatisfied with the current approach to expert witnesses in federal court, and he lays almost exclusive blame on Rule 703 and its permission to hide the crucial facts, data, and inferential processes from the jury. In his law review article, Judge Bernstein characterizes Rules 703 and 705 as empowering “the expert to hide personal credibility judgments, to quietly draw conclusions, to individually decide what is proper evidence, and worst of all, to offer opinions without even telling the jury the facts assumed.” Id. at 264. Judge Bernstein cautions that the subversion of the factual predicates for expert witnesses’ opinions under Rule 703 has significant, untoward consequences for the court system. Not only are lawyers allowed to hire professional advocates as expert witnesses, but the availability of such professional witnesses permits and encourages the filing of unnecessary litigation. Id. at 286. Hear hear.

Rule 703’s practical consequence of eliminating the hypothetical question has enabled the expert witness qua advocate, and has up-regulated the trial as a contest of opinions and opiners rather than as an adversarial procedure that is designed to get at the truth. Id. at 266-67. Without having access to real, admissible facts and data, the jury is forced to rely upon proxies for the truth: qualifications, demeanor, and courtroom poise, all of which fail the jury and the system in the end.

As a veteran trial judge, Judge Bernstein makes a persuasive case that the non-disclosure permitted under Rule 703 is not really curable under Rule 705. Id. at 288.  If the cross-examination inquiry into reliance material results in the disclosure of inadmissible facts, then judges and the lawyers must deal with the charade of a judicial instruction that the identification of the inadmissible facts is somehow “not for the truth.” Judge Bernstein argues, as have many others, that this “not for the truth” business is an untenable fiction, either not understood or ignored by jurors.

Opposing counsel, of course, may ask for an elucidation of the facts and data relied upon, but when they consider the time and difficulty involved in cross-examining highly experienced, professional witnesses, opposing counsel usually choose to traverse the adverse opinion by presenting their own expert witness’s opinion rather than getting into nettlesome details and risking looking foolish in front of the jury, or even worse, allowing the highly trained adverse expert witness to run off at the mouth.

As powerful as Judge Bernstein’s critique of Rule 703 is, his analysis misses some important points. Lawyers and judges have other motives for not wanting to elicit underlying facts and data: they do not want to “get into the weeds,” and they want to avoid technical questions of valid inference and quality of data. Yet sometimes the truth is in the weeds. Their avoidance of addressing the nature of inference, as well as facts and data, often serves to make gatekeeping a sham.

And then there is the problem that arises from the lack of time, interest, and competence among judges and jurors to understand the technical details of the facts and data, and inferences therefrom, which underlie complex factual disputes in contemporary trials. Cross-examination is reduced to the attempt to elicit “sound bites” and “cheap shots,” which can be used in closing argument. This approach is common on both sides of the bar, in trials before judges and juries, and even at so-called Daubert hearings. See David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1, 32 (2015) (“Rule 703 is frequently ignored in Daubert analyses”).

The Rule 702 and 703 pretrial hearing is an opportunity to address the highly technical validity questions, but even then, the process is doomed to failure unless trial judges make adequate time and adopt an attitude of real intellectual curiosity to permit a proper exploration of the evidentiary issues. Trial lawyers often discover that a full exploration is technical and tedious, and that it pisses off the trial judge. As much as judges dislike having to serve as gatekeepers of expert witness opinion testimony, they dislike even more having to assess the reasonableness of individual expert witness’s reliance upon facts and data, especially when this inquiry requires a deep exploration of the methods and materials of each relied upon study.

One point in favor of something like Rule 703: Judge Bernstein’s critique ignores that there are some facts and data that will never be independently admissible. Epidemiologic studies, with their multiple layers of hearsay, come to mind.

Judge Bernstein, as a reformer, is wrong to suggest that the problem is solely in hiding the facts and data from the jury. Rules 702 and 703 march together, and there are problems with both that require serious attention. See David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1 (2015); see also “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

And we should remember that the problem is not solely with juries and their need to see the underlying facts and data. Judges try cases too, and can butcher scientific inference without any help from a lay jury. Then there is the problem of relied-upon opinions, discussed above. And then there is the problem of unreasonable reliance of the sort that juries cannot discern even if they see the underlying, relied-upon facts and data.


[1] Schachtman, “Rule 703 – The Problem Child of Article VII”; and “The Effective Presentation of Defense Expert Witnesses and Cross-examination of Plaintiffs’ Expert Witnesses”; at the ALI-ABA Course on Opinion and Expert Witness Testimony in State and Federal Courts (February 14-15, 2008).

[2] See Julie E. Seaman, “Triangulating Testimonial Hearsay: The Constitutional Boundaries of Expert Opinion Testimony,” 96 Georgetown L.J. 827 (2008).

[3]  Nathan A. Schachtman, “Rule of Evidence 703—Problem Child of Article VII,” 17 Proof 3 (Spring 2009).

[4] “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 19, 2011).

[5] See “Giving Rule 703 the Cold Shoulder” (May 12, 2012); “New Reference Manual on Scientific Evidence Short Shrifts Rule 703” (Oct. 16, 2011).

[6] Max Mitchell, “Bernstein Announces Plan to Step Down as Judge,” The Legal Intelligencer (July 29, 2016).

[7] See Schachtman, “Court-Appointed Expert Witnesses,” for Mealey’s Judges & Lawyers in Complex Litigation, Class Actions, Mass Torts, MDL and the Monster Case Conference, in West Palm Beach, Florida (November 8-9, 1999). I don’t recall Judge Bernstein’s exact topic, but I remember he criticized the Pennsylvania Supreme Court’s decision in Blum v. Merrell Dow Pharmaceuticals, 534 Pa. 97, 626 A.2d 537 (1993), which reversed a judgment for plaintiffs, and adopted what Judge Bernstein derided as a blending of Frye and Daubert, which he called Fraubert. Judge Bernstein had presided over the Blum trial, which resulted in the verdict for plaintiffs.

Birth Defects Case Exceeds NY Court of Appeal’s Odor Threshold

March 14th, 2016

The so-called “weight of the evidence” (WOE) approach by expert witnesses has largely been an argument for subjective weighting of studies and cherry picking of data to reach a favored, pre-selected conclusion. The approach is so idiosyncratic and amorphous that it really is no method at all, which is exactly why it seems to have been embraced by the litigation industry and its cadre of expert witnesses.

The WOE enjoyed some success in the First Circuit’s Milward decision, with much harrumphing from the litigation industry and its proxies, but more recently courts have mostly seen through the ruse and employed their traditional screening approaches to exclude opinions that deviate from the relevant standard of care of scientific opinion testimony.[1]

In Reeps, the plaintiff child was born with cognitive and physical defects, which his family claimed resulted from his mother’s inhalation of gasoline fumes in her allegedly defective BMW. To support their causal claims, the Reeps proffered the opinions of two expert witnesses, Linda Frazier and Shira Kramer, on both general and specific causation of the child’s conditions. The defense presented reports from Anthony Scialli and Peter Lees.

Justice York, of the Supreme Court for New York County, sustained defendants’ objections to the admissibility of Frazier and Kramer’s opinions, in a careful opinion that dissected the general and specific causation opinions that invoked WOE methods. Reeps v. BMW of North America, LLC, 2012 NY Slip Op 33030(U), N.Y.S.Ct., Index No. 100725/08 (New York Cty. Dec. 21, 2012) (York, J.), 2012 WL 6729899, aff’d on rearg., 2013 WL 2362566.

The First Department of the Appellate Division affirmed Justice York’s exclusionary ruling and then certified the appellate question to the New York Court of Appeals. 115 A.D.3d 432, 981 N.Y.S.2d 514 (2013).[2] Last month, the New York high court affirmed in a short opinion that focused on the plaintiff’s claim that Mrs. Reeps must have been exposed to a high level of gasoline (and its minor constituents, such as benzene) because she experienced symptoms such as dizziness while driving the car. Sean R. v. BMW of North America, LLC, ___ N.E.3d ___, 2016 WL 527107, 2016 N.Y. Slip Op. 01000 (2016).[3]

The car in question was a model that BMW had recalled for a gasoline line leak, and there was thus no serious question that the plaintiff’s mother had sustained some gasoline exposure, and perhaps the plaintiff in utero as well. According to the Court of Appeals, the plaintiff’s expert witness Frazier concluded that the gasoline fume exposures to the car occupants exceeded 1,000 parts per million (ppm) because studies showed that symptoms of acute toxicity were reported when exposures reached or exceeded 1,000 ppm. The mother of the car’s owner claimed to suffer dizziness and nausea when riding in the car, and Frazier inferred from these symptoms, self-reported in litigation, that the plaintiff’s mother also sustained gasoline exposures in excess of 1,000 ppm. From this inference about the level of exposure, Frazier then proceeded to use the “Bradford Hill criteria” to opine that unleaded gasoline vapor is capable of causing the claimed birth defects based upon “the link between exposure to the constituent chemicals and adverse birth outcomes.” And then, using the wizardry of differential etiology, Frazier was able to conclude that the mother’s first-trimester exposure to gasoline fumes was the probable cause of plaintiff’s birth defects.

There was much wrong with Frazier’s opinions, as detailed in the trial court’s decision, but for reasons unknown, the Court of Appeals chose to focus on Frazier’s symptom-threshold analysis. The high court provided no explanation of how Frazier applied the Bradford Hill criteria, or of her downward extrapolation from studies of high benzene or solvent exposures and birth defects to a gasoline-exposure case, where gasoline contains only a small percentage of the benzene or solvents involved in the high-exposure studies. There is no description from the Court of what a “link” might be, or how it is related to a cause; nor is there any discussion of how Frazier might have excluded the most likely cause of birth defects: the unknown. The Court also noted that plaintiff’s expert witness Kramer had employed a WOE-ful analysis, but it provided no discussion of what was amiss with Kramer’s opinion. A curious reader might think that the Court had overlooked and dismissed “sound science,” but Justice York’s trial court opinion fully addressed the inadequacies of these other opinions.

The Court of Appeals acknowledged that “odor thresholds” can be helpful in estimating a plaintiff’s level of exposure to a potentially toxic chemical, but it noted that there was no generally accepted exposure assessment methodology that connected the report of an odor to adverse pregnancy outcomes.

Frazier, however, had not adverted to an odor threshold, but a symptom threshold. In support, Frazier pointed to three things:

  1. A report of the American Conference of Governmental Industrial Hygienists (ACGIH) (not otherwise identified), which synthesized the results of controlled studies, and reported a symptom threshold for “mild toxic effects” of about 1,000 ppm;
  2. A 1991 study (not further identified) that purportedly showed a dose-response between exposures to ethanol and toluene and headaches; and
  3. A 2008 report (again not further identified) that addressed the safety of n-Butyl alcohol in cosmetic products.

Item (2) seems irrelevant at best, given that ethanol and toluene are minor components of gasoline, and that the exposure levels in the study are not given. Item (3) also seems beside the point, because the Court’s description does not allude to any symptom threshold; nor is there any attempt to tie exposure levels of n-Butyl alcohol to the levels of gasoline experienced in the Reeps case.

With respect to item (1), which supposedly had reported that if exposure exceeded 1,000 ppm, then headaches and nausea can occur acutely, the Court asserted that the ACGIH report did not support an inverse inference, that if headaches and nausea had occurred, then exposures exceeded 1,000 ppm.

It is true that the conditional (if exposures exceeded 1,000 ppm, then symptoms occurred) does not logically support its converse (if symptoms occurred, then exposures exceeded 1,000 ppm), but the claimed symptoms, their onset and abatement, and the lack of other known precipitating causes would seem to provide some evidence for exposures above the symptom threshold. Rather than engaging with the lack of scientific evidence on the claimed causal connection between gasoline and birth defects, however, the Court invoked the lack of general acceptance of the “symptom-threshold” methodology to dispose of the case.

In its short opinion, the Court of Appeals did not address the quality, validity, or synthesis of studies urged by plaintiff’s expert witnesses; nor did it address the irrelevance of whether the plaintiff’s grandmother or his mother had experienced acute symptoms such as nausea to the question whether exposures reached a level that might cause embryological injury. Had it done so, the Court would have retraced the path of Justice York, in the trial court, who saw through the ruse of WOE and the blatantly false claim that the scientific evidence even came close to satisfying the Bradford Hill factors. Furthermore, the Court might have found that the defense expert witnesses were entirely consistent with the Centers for Disease Control:

“The hydrocarbons found in gasoline can cross the placenta. There is no direct evidence that maternal exposure to gasoline causes fetotoxic or teratogenic effects. Gasoline is not included in Reproductive and Developmental Toxicants, a 1991 report published by the U.S. General Accounting Office (GAO) that lists 30 chemicals of concern because of widely acknowledged reproductive and developmental consequences.”

Agency for Toxic Substances and Disease Registry, “Medical Management Guidelines for Gasoline” (Oct. 21, 2014, last updated) (“Toxic Substances Portal – Gasoline, Automotive”); Agency for Toxic Substances and Disease Registry, “Public Health Statement for Automotive Gasoline” (June 1995) (“There is not enough information available to determine if gasoline causes birth defects or affects reproduction.”); see also National Institute for Occupational Safety & Health, Occupational Exposure to Refined Petroleum Solvents: Criteria for a Recommended Standard (1977).


[1] See, e.g., In re Denture Cream Prods. Liab. Litig., 795 F. Supp. 2d 1345, 1367 (S.D. Fla. 2011), aff’d, Chapman v. Procter & Gamble Distrib., LLC, 766 F.3d 1296 (11th Cir. 2014). See also “Fixodent Study Causes Lockjaw in Plaintiffs’ Counsel” (Feb. 4, 2015); “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name” (May 1, 2012); “I Don’t See Any Method At All” (May 2, 2013).

[2] “New York Breathes Life Into Frye Standard – Reeps v. BMW” (March 5, 2013); “As They WOE, So No Recovery Have the Reeps” (May 22, 2013).

[3] See Sean T. Stadelman, “Symptom Threshold Methodology Rejected by Court of Appeals of New York Pursuant to Frye” (Feb. 18, 2016).

Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case

October 6th, 2015

Michael D. Freeman is a chiropractor and self-styled “forensic epidemiologist,” affiliated with Departments of Public Health & Preventive Medicine and Psychiatry, Oregon Health & Science University School of Medicine, in Portland, Oregon. His C.V. can be found here. Freeman has an interesting publication in press on his views of forensic epidemiology. Michael D. Freeman & Maurice Zeegers, “Principles and applications of forensic epidemiology in the medico-legal setting,” Law, Probability and Risk (2015); doi:10.1093/lpr/mgv010. Freeman’s views on epidemiology did not, however, pass muster in the courtroom. Porter v. Smithkline Beecham Corp., Phila. Cty. Ct. C.P., Sept. Term 2007, No. 03275. Slip op. (Oct. 5, 2015).

In Porter, plaintiffs sued Pfizer, the manufacturer of the SSRI antidepressant Zoloft. Plaintiffs claimed the mother plaintiff’s use of Zoloft during pregnancy caused her child to be born with omphalocele, a serious defect that occurs when the child’s intestines develop outside his body. Pfizer moved to exclude plaintiffs’ medical causation expert witnesses, Dr. Cabrera and Dr. Freeman. The trial judge was the Hon. Mark I. Bernstein, who has written and presented frequently on expert witness evidence.[1] Judge Bernstein held a two day hearing in September 2015, and last week, His Honor ruled that the plaintiffs’ expert witnesses failed to meet Pennsylvania’s Frye standard for admissibility. Judge Bernstein’s opinion reads a bit like a Berenstain Bear book on how not to use epidemiology.

GENERAL CAUSATION SCREW UPS

Proper Epidemiologic Method

First, Find An Association

Dr. Freeman had a methodologic map that included the Bradford Hill criteria at the back end of the procedure. Dr. Freeman, however, impetuously forgot that before you get to the back end, you must traverse the front end:

“Dr. Freemen agrees that he must, and claims he has, applied the Bradford Hill Criteria to support his opinion. However, the starting procedure of any Bradford-Hill analysis is ‘an association between two variables’ that is ‘perfectly clear-cut and beyond what we would care to attribute to the play of chance’.35 Dr. Freeman testified that generally accepted methodology requires a determination, first, that there’s evidence of an association and, second, whether chance, bias and confounding have been accounted for, before application of the Bradford-Hill criteria.36 Because no such association has been properly demonstrated, the Bradford Hill criteria could not have been properly applied.”

Slip op. at 12-13. In other words, don’t go rushing to the Bradford Hill factors until and unless you have first shown an association; second, you have shown that it is “clear cut,” and not likely the result of bias or confounding; and third, you have ruled out the play of chance or random variability in explaining the difference between the observed and expected rates of disease.

Proper epidemiologic method requires surveying the pertinent published studies that investigate whether there is an association between the medication use and the claimed harm. The expert witnesses must, however, do more than write a bibliography; they must assess any putative associations for “chance, confounding or bias”:

“Proper epidemiological methodology begins with published study results which demonstrate an association between a drug and an unfortunate effect. Once an association has been found, a judgment as whether a real causal relationship between exposure to a drug and a particular birth defect really exists must be made. This judgment requires a critical analysis of the relevant literature applying proper epidemiologic principles and methods. It must be determined whether the observed results are due to a real association or merely the result of chance. Appropriate scientific studies must be analyzed for the possibility that the apparent associations were the result of chance, confounding or bias. It must also be considered whether the results have been replicated.”

Slip op. at 7.

Then Rule Out Chance

So if there is something that appears to be an association in a study, the expert epidemiologist must assess whether it is consistent with a mere chance result. If we flip a fair coin 10 times, we “expect” 5 heads and 5 tails, but the probability of not getting the expected result is actually about three times greater than the probability of obtaining it. If on one series of 10 tosses we obtain 6 heads and 4 tails, we would certainly not reject the starting assumption that the expected outcome was 5 heads / 5 tails. Indeed, the probability of obtaining 6 heads / 4 tails or 4 heads / 6 tails is almost double the probability of obtaining the expected outcome of an equal number of heads and tails.
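A quick arithmetic check of the coin-flip illustration (my own sketch, not anything in the Porter opinion) bears out those figures:

```python
# Binomial arithmetic for 10 tosses of a fair coin.
from math import comb

p_exact_5 = comb(10, 5) / 2**10              # exactly 5 heads and 5 tails
p_not_5 = 1 - p_exact_5                      # any other split
p_6_4_either_way = 2 * comb(10, 6) / 2**10   # 6 heads/4 tails or 4 heads/6 tails

print(f"P(5H/5T)          = {p_exact_5:.3f}")        # ~0.246
print(f"P(not 5H/5T)      = {p_not_5:.3f}")          # ~0.754, about three times greater
print(f"P(6H/4T or 4H/6T) = {p_6_4_either_way:.3f}") # ~0.410, nearly double P(5H/5T)
```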

As it turned out in the Porter case, Dr. Freeman relied rather heavily upon one study, the Louik study, for his claim that Zoloft causes the birth defect in question. See Carol Louik, Angela E. Lin, Martha M. Werler, Sonia Hernández-Díaz, and Allen A. Mitchell, “First-Trimester Use of Selective Serotonin-Reuptake Inhibitors and the Risk of Birth Defects,” 356 New Engl. J. Med. 2675 (2007). The authors of the Louik study were quite clear that they were not able to rule out chance as a sufficient explanation for the observed data in their study:

“The previously unreported associations we identified warrant particularly cautious interpretation. In the absence of preexisting hypotheses and the presence of multiple comparisons, distinguishing random variation from true elevations in risk is difficult. Despite the large size of our study overall, we had limited numbers to evaluate associations between rare outcomes and rare exposures. We included results based on small numbers of exposed subjects in order to allow other researchers to compare their observations with ours, but we caution that these estimates should not be interpreted as strong evidence of increased risks.24

Slip op. at 10 (quoting from the Louik study).

Judge Bernstein thus criticized Freeman for failing to account for chance in explaining his putative association between maternal Zoloft use and infant omphalocele. The appropriate and generally accepted methodology for accomplishing this step of evaluating a putative association is to consider whether the association is statistically significant at the conventional level.

In relying heavily upon the Louik study, Dr. Freeman opened himself up to serious methodological criticism. Judge Bernstein’s opinion stands for the important proposition that courts should not be unduly impressed with nominal statistical significance in the presence of multiple comparisons and very broad confidence intervals:

“The Louik study is the only study to report a statistically significant association between Zoloft and omphalocele. Louik’s confidence interval which ranges between 1.6 and 20.7 is exceptionally broad. … The Louik study had only 3 exposed subjects who developed omphalocele thus limiting its statistical power. Studies that rely on a very small number of cases can present a random statistically unstable clustering pattern that may not replicate the reality of a larger population. The Louik authors were unable to rule out confounding or chance. The results have never been replicated concerning omphalocele. Dr. Freeman’s testimony does not explain, or seemingly even consider these serious limitations.”

Slip op. at 8. Assessing the statistical precision of the point estimate of risk, including whether the authors conducted multiple comparisons and whether the observed confidence intervals were very broad, is part of the generally accepted epidemiologic methodology, which Freeman flouted:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”

Slip op. at 9. The studies that Freeman cited and apparently relied upon failed to report statistically significant associations between sertraline (Zoloft) and omphalocele. Judge Bernstein found this lack to be a serious problem for Freeman and his epidemiologic opinion:

“While non-significant results can be of some use, despite a multitude of subsequent studies which isolated omphalocele, there is no study which replicates or supports Dr. Freeman’s conclusions.”

Slip op. at 10. The lack of statistical significance, in the context of repeated attempts to find it, helped sink Freeman’s proffered testimony.
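The multiple-comparisons point deserves emphasis. As a purely hypothetical illustration (the actual number of outcome categories examined in the Louik study is not given here), the chance of at least one nominally “significant” finding arising by chance alone grows quickly with the number of independent comparisons tested at the conventional 0.05 level:

```python
# Hypothetical illustration of how multiple comparisons inflate false positives.
for k in (1, 5, 20, 50):
    p_at_least_one = 1 - 0.95 ** k   # assumes independent tests at alpha = 0.05
    print(f"{k:2d} independent tests: P(at least one false positive) = {p_at_least_one:.2f}")
```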

Then Rule Out Bias and Confounding

As noted, Freeman relied heavily upon the Louik study, which was the only study to report a nominally statistically significant risk ratio for maternal Zoloft use and infant omphalocele. The Louik study, by its design, however, could not exclude chance or confounding as a full explanation for the apparent association, and Judge Bernstein chastised Dr. Freeman for overselling the study as support for the plaintiffs’ causal claim:

“The Louik authors were unable to rule out confounding or chance. The results have never been replicated concerning omphalocele. Dr. Freeman’s testimony does not explain, or seemingly even consider these serious limitations.”

Slip op. at 8.

And Only Then Consider the Bradford Hill Factors

Even when an association is clear cut, and beyond what we can likely attribute to chance, generally accepted methodology requires the epidemiologist to consider the Bradford Hill factors. As Judge Bernstein explains, generally accepted methodology for assessing causality in this area requires a proper consideration of Hill’s factors before a conclusion of causation is reached:

“As the Bradford-Hill factors are properly considered, causality becomes a matter of the epidemiologist’s professional judgment.”

Slip op. at 7.

Consistency or Replication

The nine Hill factors are well known to lawyers because they have been stated and discussed extensively in Hill’s original article, and in references such as the Reference Manual on Scientific Evidence. Not all the Hill factors are equally important, or important at all, but one that is important is consistency or concordance of results among the available epidemiologic studies. Stated alternatively, a clear cut association unlikely to be explained by chance is certainly interesting and probative, but it raises an important methodological question — can the result be replicated? Judge Bernstein restated this Hill factor as a key determinant of whether a challenged expert witness employed a generally accepted method:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”

Slip op. at 10.

“More significantly neither Reefhuis nor Alwan reported statistically significant associations between Zoloft and omphalocele. While non-significant results can be of some use, despite a multitude of subsequent studies which isolated omphalocele, there is no study which replicates or supports Dr. Freeman’s conclusions.”

Slip op. at 10.

Replication But Without Double Dipping the Data

Epidemiologic studies are sometimes updated and extended with additional follow-up. An expert witness who wished to skate over the replication and consistency requirement might be tempted, as was Dr. Freeman, to treat the earlier and later iterations of the same basic study as “replication.” The data underlying the Alwan study were indeed updated and extended this year in a published paper by Jennita Reefhuis and colleagues.[2] Proper methodology, however, prohibits double dipping the data by counting a later study that subsumes the earlier one as a “replication”:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology. Dr. Freeman claims the Alwan and Reefhuis studies demonstrate replication. However, the population Alwan studied is only a subset of the Reefhuis population and therefore they are effectively the same.”

Slip op. at 10.

The Lumping Fallacy

Analyzing the health outcome of interest at the right level of specificity can sometimes be a puzzle and a challenge, but Freeman generally got it wrong by opportunistically “lumping” disparate outcomes together when it helped him get a result that he liked. Judge Bernstein admonished:

“Proper methodology further requires that one not fall victim to the … the ‘Lumping Fallacy’. … Different birth defects should not be grouped together unless they a part of the same body system, share a common pathogenesis or there is a specific valid justification or necessity for an association20 and chance, bias, and confounding have been eliminated.”

Slip op. at 7. Dr. Freeman lumped a lot, but Judge Bernstein saw through the methodological ruse. As Judge Bernstein pointed out:

“Dr. Freeman’s analysis improperly conflates three types of data: Zoloft and omphalocele, SSRI’s generally and omphalocele, and SSRI’s and gastrointestinal and abdominal malformations.”

Slip op. at 8. Freeman’s approach, which sadly is seen frequently in pharmaceutical and other products liability cases, is methodologically improper:

“Generally accepted causation criteria must be based on the data applicable to the specific birth defect at issue. Dr. Freeman improperly lumps together disparate birth defects.”

Slip op. at 11.

Class Effect Fallacy

Another kind of improper lumping results from treating all SSRI antidepressants as the same, either by lumping them together or by picking and choosing, from among all the SSRIs, the data points that support the plaintiffs’ claims (while ignoring the SSRI data points that do not). To be sure, the SSRI antidepressants do form a “class,” in that they all have a similar pharmacologic effect. The SSRIs, however, do not all achieve their effect in the serotonergic neurons the same way; nor do they all have the same “off-target” effects. Treating all the SSRIs as interchangeable for a claimed adverse effect, without independent support for doing so, is known as the class effect fallacy. In Judge Bernstein’s words:

“Proper methodology further requires that one not fall victim to the ‘Class Effect Fallacy’ … . A class effect cannot be assumed. The causation conclusion must be drug specific.”

Slip op. at 7. Dr. Freeman’s analysis improperly conflated Zoloft data with SSRI data generally. Slip op. at 8. Assuming what you set out to demonstrate is, of course, a fine way to go methodologically into the ditch:

“Without significant independent scientific justification it is contrary to generally accepted methodology to assume the existence of a class effect. Dr. Freeman lumps all SSRI drug results together and assumes a class effect.”

Slip op. at 10.

SPECIFIC CAUSATION SCREW UPS

Dr. Freeman was also offered by plaintiffs to provide a specific causation opinion – that Mrs. Porter’s use of Zoloft in pregnancy caused her child’s omphalocele. Freeman claimed to have performed a differential diagnosis or etiology or something to rule out alternative causes.

Genetics

In the field of birth defects, one possible cause looming in any given case is an inherited or spontaneous genetic mutation. Freeman purported to have considered and ruled out genetic causes, which he acknowledged make up a substantial percentage of all omphalocele cases. Bo Porter, Mrs. Porter’s son, was tested for known genetic causes, and Freeman argued that this testing allowed him to “rule out” genetic causes. But the current state of the art in genetic testing allows for identifying only a small number of possible genetic causes, and Freeman failed to explain how he might have ruled out the as-yet unidentified genetic causes of birth defects:

“Dr. Freeman fails to properly rule out genetic causes. Dr. Freeman opines that 45-49% of omphalocele cases are due to genetic factors and that the remaining 50-55% of cases are due to non-genetic factors. Dr. Freeman relies on Bo Porter’s genetic testing which did not identify a specific genetic cause for his injury. However, minor plaintiff has not been tested for all known genetic causes. Unknown genetic causes of course cannot yet be tested. Dr. Freeman has made no analysis at all, only unwarranted assumptions.”

Slip op. at 15-16. Judge Bernstein reviewed Freeman’s attempted analysis and ruling out of potential causes, and found that it departed from the generally accepted methodology in conducting differential etiology. Slip op. at 17.

Timing Errors

One feature of putative teratogenicity is that an embryonic exposure must take place at a specific gestational developmental time in order to have its claimed deleterious effect. As Judge Bernstein pointed out, omphalocele results from an incomplete folding of the abdominal wall during the third to fifth weeks of gestation. Mrs. Porter, however, did not begin taking Zoloft until her seventh week of pregnancy, which left Dr. Freeman opinion-less as to how Zoloft contributed to the claimed causation of the minor plaintiff’s birth defect. Slip op. at 14. This aspect of Freeman’s specific causation analysis was glaringly defective, and clearly not the sort of generally accepted methodology for attributing a birth defect to a teratogen.

******************************************************

All in all, Judge Bernstein’s opinion is a tour de force demonstration of how a state court judge, in a so-called Frye jurisdiction, can show that failure to employ generally accepted methods renders an expert witness’s opinions inadmissible. There is one small problem in statistical terminology.

Statistical Power

Judge Bernstein states, at different places, that the Louik study was and was not statistically significant for Zoloft and omphalocele. The court’s opinion ultimately does explain that the nominal statistical significance was vitiated by multiple comparisons and an extremely broad confidence interval, which more than justified its statement that the study was not truly statistically significant. For some reason, however, Judge Bernstein chose at one point to describe the problem with the Louik study as a lack of statistical power:

“Equally significant is the lack of power concerning the omphalocele results. The Louik study had only 3 exposed subjects who developed omphalocele thus limiting its statistical power.”

Slip op. at 8. The adjusted odds ratio for Zoloft and omphalocele was 5.7, with a 95% confidence interval of 1.6 – 20.7. Power was not the issue, because, if the odds ratio were otherwise credible, free from bias, confounding, and chance, the study was able to observe an increased risk of close to 500%, which met the pre-stated level of significance. The problem, however, was multiple testing, fragile and imprecise results, and the inability to evaluate the odds ratio fully for bias and confounding.
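A back-of-the-envelope sketch (my own, using only the figures reported in the opinion, and assuming a conventional Wald-type interval) shows just how imprecise the Louik estimate was: the standard error on the log-odds scale is large, and the interval spans roughly a thirteen-fold range, which is the point about fragility and imprecision, not power.

```python
# Rough precision check from the reported odds ratio and 95% confidence interval.
from math import log

or_point, ci_low, ci_high = 5.7, 1.6, 20.7

# For a Wald-type interval, log(upper) - log(lower) = 2 * 1.96 * SE(log OR).
se_log_or = (log(ci_high) - log(ci_low)) / (2 * 1.96)

print(f"approximate SE of log(OR): {se_log_or:.2f}")        # ~0.65, a very noisy estimate
print(f"upper-to-lower CI ratio:   {ci_high / ci_low:.1f}")  # ~12.9-fold spread
```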


 

[1] Mark I. Bernstein, “Expert Testimony in Pennsylvania,” 68 Temple L. Rev. 699 (1995); Mark I. Bernstein, “Jury Evaluation of Expert Testimony under the Federal Rules,” 7 Drexel L. Rev. 239 (2014-2015).

[2] Jennita Reefhuis, Owen Devine, Jan M Friedman, Carol Louik, Margaret A Honein, “Specific SSRIs and birth defects: bayesian analysis to interpret new data in the context of previous reports,” 351 Brit. Med. J. (2015).