TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Judicial Dodgers – Reassigning the Burden of Proof on Rule 702

May 13th, 2020

Explaining the denial of a Rule 702 motion in terms of the availability of cross-examination is just one among several dodges that judges use to avoid fully engaging with Rule 702’s requirements.[1] Another dodge involves shifting the burden of proof on admissibility from the proponent of the challenged expert witness to the challenger. This dodge would appear to violate well-established law.

The Supreme Court, in deciding Daubert, made clear that the question whether an expert witness’s opinion was admissible was governed under the procedure set out in Federal Rule of Evidence 104(a).[2] The significance of placing the Rule 702 issues under the procedures set out in Rule 104(a) is that the trial judge must make the admissibility determination, and that he or she is not bound by the rules of evidence. The exclusion of the admissibility determination from the other rules of evidence means that trial judges can look at challenged expert witnesses’ relied-upon materials, and other facts, data, and opinions, regardless of these materials’ admissibility. The Supreme Court also made clear that the admissibility of an expert witness’s opinion testimony should be shown “by a preponderance of proof.”[3] Every court that has directly addressed the burden of proof issue in a Rule 702 challenge to expert witness testimony has clearly assigned that burden to the proponent of the testimony.[4]

Trial courts intent upon evading gatekeeping responsibility, however, have created a presumption of admissibility. When called upon to explain why they have denied Rule 702 challenges, these courts advert to the presumption as an explanation and justification for the denial.[5] Some courts even manage to discuss the burden of proof upon the proponent, and a presumption of admissibility, in almost the same breath.[6]

In his policy brief for amending Rule 702, Lee Mickus traces the presumption innovation to Borawick v. Shay, a 1995 Second Circuit decision that involved a challenge to hypnotically refreshed (or created) memory.[7] In Borawick, the Court of Appeals held that the plaintiff’s challenge turned upon whether Borawick’s testimony was competent or admissible, and that it did not involve “the admissibility of data derived from scientific techniques or expert opinions.”[8] Nevertheless, in dicta, the court observed that “by loosening the strictures on scientific evidence set by Frye, Daubert reinforces the idea that there should be a presumption of admissibility of evidence.”[9]

Presumptions come in different forms and operate differently, and this casual reference to a presumption in dictum could mean any number of things. A presumption of admissibility could mean simply that unless there is a challenge to an expert witness’s opinion, the opinion is admissible.[10] The presumption could be a bursting-bubble (Thayer) presumption, which disappears once the opponent of the evidence credibly raises questions about the evidence’s admissibility. The presumption might be something that does not disappear, but once the admissibility is challenged, the presumption continues to provide some evidence for the proponent. And in the most extreme forms, the (Morgan) presumption might be nothing less than a judicially artful way of saying that the burden of proof is shifted to the opponent of the evidence to show inadmissibility.[11]

Although Borawick suggested that there should be a presumption, it did not exactly hold that one existed. A presumption in favor of the admissibility of evidence raises many questions about the nature, definition, and operation of the presumption. It throws open the question of what evidence is needed to rebut the presumption. For instance, may a party whose expert witness is challenged decline to defend the witness’s compliance with Rule 702, stand on the presumption, and still prevail?

There is no mention of a presumption in Rule 702 itself, or in any Supreme Court decision on Rule 702, or in the advisory committee notes. Inventing a presumption, especially a poorly described one, turns the judicial discretion to grant or deny a Rule 702 challenge into an arbitrary decision.

Most importantly, given the ambiguity of “presumption,” a judicial opinion that denies a Rule 702 challenge by invoking a legal fiction fails to answer the question whether the proponent of the expert witness has carried the burden of showing that all the subparts of Rule 702 were satisfied by a preponderance of the evidence. While judges may prefer not to endorse or disavow the methodology of an otherwise “qualified” expert witness, their office requires them to do so, and not hide behind fictional presumptions.



[1]  “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[2]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 n.10 (1993).

[3]  Id., citing Bourjaily v. United States, 483 U. S. 171, 175-176 (1987).

[4]  Barrett v. Rhodia, Inc., 606 F.3d 975, 980 (8th Cir. 2010) (quoting Marmo v. Tyson Fresh Meats, Inc., 457 F.3d 748, 757 (8th Cir. 2006)); Beylin v. Wyeth, 738 F. Supp. 2d 887 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.); Pride v. BIC Corp., 218 F.3d 566, 578 (6th Cir. 2000); Reece v. Astrazeneca Pharms., LP, 500 F. Supp. 2d 736, 742 (S.D. Ohio 2007).

[5]  See, e.g., Cates v. Trustees of Columbia Univ. in City of New York, No. 16CIV6524GBDSDA, 2020 WL 1528124, at *6 (S.D.N.Y. Mar. 30, 2020) (discussing presumptive admissibility); Price v. General Motors, LLC, No. CIV-17-156-R, 2018 WL 8333415, at *1 (W.D. Okla. Oct. 3, 2018) (“[T]here is a presumption under the Rules that expert testimony is admissible.”) (internal citation omitted); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“The Second Circuit has made clear that Daubert contemplates liberal admissibility standards, and reinforces the idea that there should be a presumption of admissibility of evidence.”); Advanced Fiber Technologies (AFT) Trust v. J & L Fiber Services, Inc., No. 1:07-CV-1191, 2015 WL 1472015, at *20 (N.D.N.Y. Mar. 31, 2015) (“In assuming this [gatekeeper] role, the Court applies a presumption of admissibility.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *2 (S.D.N.Y. Jan. 22, 2015) (“[T]he court should apply ‘a presumption of admissibility’ of evidence” in carrying out the gatekeeper function.); Martinez v. Porta, 598 F. Supp. 2d 807, 812 (N.D. Tex. 2009) (“Expert testimony is presumed admissible”).

[6]  S.E.C. v. Yorkville Advisors, LLC, 305 F. Supp. 3d 486, 503-04 (S.D.N.Y. 2018) (“The party seeking to introduce the expert testimony bears the burden of establishing by a preponderance of the evidence that the proffered testimony is admissible. There is a presumption that expert testimony is admissible … .”) (internal citations omitted).

[7]  Borawick v. Shay, 68 F.3d 597, 610 (2d Cir. 1995), cert. denied, 517 U.S. 1229 (1996).

[8]  Id.

[9]  Id. (referring to Frye v. United States, 293 F. 1013 (D.C. Cir. 1923)).

[10]  In re Zyprexa Prod. Liab. Litig., 489 F. Supp. 2d 230, 282 (E.D.N.Y. 2007) (Weinstein, J.) (“Since Rule 702 embodies a liberal standard of admissibility for expert opinions, the assumption the court starts with is that a well-qualified expert’s testimony is admissible.”).

[11]  See, e.g., Orion Drilling Co., LLC v. EQT Prod. Co., No. CV 16-1516, 2019 WL 4273861, at *34 (W.D. Pa. Sept. 10, 2019) (after declaring that “[e]xclusion is disfavored” under Rule 702, the court flipped the burden of production and declared the opinion testimony admissible, stating “Orion has not established that incorporation of the data renders Ray’s opinion unreliable.”).

Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions

May 11th, 2020

In my last post,[1] I praised Lee Mickus’s recent policy paper on amending Rule 702 for its persuasive force on the need for an amendment, as well as a source for helping lawyers anticipate common judicial dodges to a faithful application of the rule.[2] There are multiple dodges used by judicial dodgers, and it behooves litigants to recognize and anticipate them. In this post, and perhaps future ones, I elaborate upon the concerns that Mickus documents.

One prevalent judicial response to the Rule 702 motion is to kick the can and announce that the challenge to an expert witness’s methodological shenanigans can and should be addressed by cross-examination. This judicial response was, of course, the standard one before the 1993 Daubert decision, but Justice Blackmun’s opinion kept it alive in frequently quoted dicta:

“Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.”[3]

Justice Blackmun, no doubt, believed he was offering a “helpful” observation here, but the reality is quite different. Traditionally, courts allowed qualified expert witnesses to opine with wild abandon, after showing that they had the very minimal qualifications required to do so in court. In the face of this traditional judicial lassitude, “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof” were all a litigant could hope to accomplish in litigation. Furthermore, the litany of remedies for “shaky but admissible evidence” fails to help lower court judges and lawyers sort shaky but admissible evidence from shaky and inadmissible evidence.

Perhaps even more to the point, cases at common law “traditionally” did not involve multivariate logistic regression, structural equation models, propensity score weighting, and the like. Juries did just fine on whether Farmer Brown had exercised due care when he ran over his neighbor’s cow with his tractor, or even on whether a child born 350 days after the putative father’s death had been sired by the testator and was entitled to inherit from “dad.”

Mickus is correct that a trial judge’s comment that the loser of a Rule 702 motion is free to cross-examine is often a dodge, an evasion, or an outright failure to engage with the intricacies of a complex methodological challenge.[4] Stating that the “traditional and appropriate means of attacking shaky but admissible evidence” remain available is a truism, and might be offered as judicial balm to the motion loser, but the availability of such means is hardly an explanation or justification for denying the Rule 702 motion. Furthermore, Justice Blackmun’s observation about traditional means was looking back at an era when, in most state and federal courts, a person found to be minimally qualified could pretty much say anything regardless of scientific validity. That was the tradition that stood in active need of reform when Daubert was decided in 1993.

Mickus is also certainly correct that the whole point of judicial gatekeeping is that the presentation of viva voce testimony before juries is not an effective method for revealing shaky, inadmissible opinion testimony. A few courts have acknowledged that cross-examination in front of a jury is not an appropriate justification for admitting methodologically infirm expert witness opinion testimony. Judge Jed Rakoff, who served on the President’s Council of Advisors on Science and Technology,[5] addressed the limited ability of cross-examination in the context of forensic evidence:

“Although effective cross-examination may mitigate some of these dangers, the explicit premise of Daubert and Kumho Tire is that, when it comes to expert testimony, cross-examination is inherently handicapped by the jury’s own lack of background knowledge, so that the Court must play a greater role, not only in excluding unreliable testimony, but also in alerting the jury to the limitations of what is presented.”[6]

Judge Rakoff’s point is by no means limited to forensic evidence, and it has been acknowledged more generally by Professor Daniel Capra, the Reporter to the Advisory Committee on Evidence Rules:

“the key to Daubert is that cross-examination alone is ineffective in revealing nuanced defects in expert opinion testimony and that the trial judge must act as a gatekeeper to ensure that unreliable opinions don’t get to the jury in the first place.”[7]

Juries do not arrive at the court house knowledgeable about statistical and scientific methods; nor are they prepared to spend weeks going over studies to assess their quality, and whether an expert witness engaged in cherry picking, misapplying methodologies, or insufficient investigation.[8] In discussing the problem of expert witnesses’ overstating the strength of their opinions, beyond what is supported by evidence, the Reporter stressed the limits and ineffectiveness of remedial adversarial cross-examination:

“Perhaps another way to think about cross-examination as a remedy is to compare the overstatement issue to the issues of sufficiency of basis, reliability of methodology, and reliable application of that methodology. As we know, those three factors must be shown by a preponderance of the evidence. The whole point of Rule 702 — and the Daubert-Rule 104(a) gatekeeping function — is that these issues cannot be left to cross-examination. The underpinning of Daubert is that an expert’s opinion could be unreliable and the jury could not figure that out, even given cross-examination and argument, because the jurors are deferent to a qualified expert (i.e., the white lab coat effect). The premise is that cross-examination cannot undo the damage that has been done by the expert who has power over the jury. This is because, for the very reason that an expert is needed (because lay jurors need assistance) the jury may well be unable to figure out whether the expert is providing real information or junk. The real question, then, is whether the dangers of overstatement are any different from the dangers of insufficient basis, unreliability of methodology, and unreliable application. Why would cross-examination be insufficient for the latter yet sufficient for the former?

It is hard to see any difference between the risk of overstatement and the other risks that are regulated by Rule 702. When an expert says that they are certain of a result — when they cannot be — how is that easier for the jury to figure out than if an expert says something like ‘I relied on four scientifically valid studies concluding that PCB’s cause small lung cancer’. When an expert says he employed a ‘scientific methodology’ when that is not so, how is that different from an expert saying “I employed a reliable methodology” when that is not so?”[9]

The Reporter’s example of PCBs and small lung cancer was an obvious reference to the Joiner case, in which the Supreme Court held that the trial judge had properly excluded causation opinions. The Reporter’s point goes directly to the cross-examination excuse for shirking the gatekeeping function. In Joiner, the Court held that gatekeeping was necessitated when cross-examination was insufficient in the face of an analytical gap between methodology and conclusion.[10] Indeed, such gaps are or should be present in most well-conceived Rule 702 challenges.

The problem is not only that juries defer to expert witnesses. Juries lack the competence to assess scientific validity. Although many judges are lacking in such competence, at least litigants can expect them to read the Reference Manual on Scientific Evidence before they read the parties’ briefs and the expert witnesses’ reports. If the trial judge’s opinion evidences ignorance of the Manual, then at least there is the possibility of an appeal. It will be a strange day in a stranger world, when a jury summons arrives in the mail with a copy of the Manual!

The rules of evidence permit expert witnesses to rely upon inadmissible evidence, at least when experts in their field would do so reasonably. To decide whether the reliance is reasonable requires the decision maker to go outside the “proofs” that would typically be offered at trial. Furthermore, the decision maker – gatekeeper – will have to read the relied-upon study and data to evaluate the reasonableness of the reliance. In a jury trial, the actual studies relied upon are rarely admissible, and so the jury almost never has the opportunity to read them to make its own determination of reasonableness of reliance, or of whether the study and its data really support what the expert witness draws from it.

Of course, juries do not have to write opinions about their findings. They need neither explain nor justify their verdicts, once the trial court has deemed that there is the minimally sufficient evidence to support a verdict. Juries, with whatever help cross-examination provides, in the absence of gatekeeping, cannot deliver anything approaching scientific due process of law.

Despite Supreme Court holdings, a substantially revised and amended Rule 702, and clear direction from the Advisory Committee, some lower courts have actively resisted enforcing the requirements of Rule 702. Part of this resistance consists in pushing the assessment of the reliability of the data and assumptions used in applying a given methodology out of the gatekeeping column and into the jury’s column. Despite the clear language of Rule 702, and the Advisory Committee Note,[11] some Circuits of the Court of Appeals have declared that assessing the reliability of assumptions and data is not judges’ work (outside of a bench trial).[12]

As Seinfeld has taught us, rules are like reservations. It is not enough to make the rules; you have to keep and follow them. Indeed, following the rule is really the important part.[13] Although an amended Rule 702 might include a provision that “we really mean this,” perhaps it is worth a stop at the Supreme Court first to put down the resistance.


[1]  “Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[2]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

[3]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 596 (1993).

[4]  See, e.g., AmGuard Ins. Co. v. Lone Star Legal Aid, No. CV H-18-2139, 2020 WL 60247, at *8 (S.D. Tex. Jan. 6, 2020) (“[O]bjections [that the expert could not link her experienced-based methodology to her conclusions] are better left for cross examination, not a basis for exclusion.”); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“To the extent Defendant argues that Mr. McPartland’s conclusions are unreliable, it may attack his report through cross examination.”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006) (“In a close case, a court should permit the testimony to be presented at trial, where it can be tested by cross-examination and measured against the other evidence in the case.”) (internal citation omitted). See also Adams v. Toyota Motor Corp., 867 F.3d 903, 916 (8th Cir. 2017) (affirming admission of expert testimony, reiterating the flexibility of the Daubert inquiry and emphasizing that defendant’s concerns could all be addressed with “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof”); Liquid Dynamics Corp. v. Vaughan Corp., 449 F.3d 1209, 1221 (Fed. Cir. 2006) (“The identification of such flaws in generally reliable scientific evidence is precisely the role of cross-examination.” (internal citation omitted)); Carmichael v. Verso Paper, LLC, 679 F. Supp. 2d 109, 119 (D. Me. 2010) (“[W]hen the adequacy of the foundation for the expert testimony is at issue, the law favors vigorous cross-examination over exclusion.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *6 (S.D.N.Y. Jan. 22, 2015) (“In light of the ‘presumption of admissibility of evidence,’ that opportunity [for cross-examination] is sufficient to ensure that the jury receives testimony that is both relevant and reliable.”) (internal citation omitted).

Even the most explicitly methodological challenges are transmuted into cross-examination issues by refusenik courts. For instance, cherry picking is reduced to a credibility issue for the jury and not germane to the court’s Rule 702 determination. In re Chantix Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1288 (N.D. Ala. 2012) (finding that an expert witness’s deliberate decision not to rely upon clinical trial data merely “is a matter for cross-examination, not exclusion under Daubert”); In re Urethane Antitrust Litig., 2012 WL 6681783, at *3 (D. Kan.) (“The extent to which [an expert] considered the entirety of the evidence in the case is a matter for cross-examination.”); Bouchard v. Am. Home Prods. Corp., 2002 WL 32597992, at *7 (N.D. Ohio) (“If the plaintiff believes that the expert ignored evidence that would have required him to substantially change his opinion, that is a fit subject for cross-examination.”). Similarly, courts have by ipse dixit made the flawed application of a standard methodology into merely a credibility issue to be explored by cross-examination rather than by judicial gatekeeping. United States v. Adam Bros. Farming, 2005 WL 5957827, at *5 (C.D. Cal. 2005) (“Defendants’ objections are to the accuracy of the expert’s application of the methodology, not the methodology itself, and as such are properly reserved for cross-examination.”); Oshana v. Coca-Cola Co., 2005 WL 1661999, at *4 (N.D. Ill.) (“Challenges addressing flaws in an expert’s application of reliable methodology may be raised on cross-examination.”).

[5]  President’s Council of Advisors on Science and Technology, Report to the President on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (Sept. 2016).

[6]  United States v. Glynn, 578 F. Supp. 2d 567, 574 (S.D.N.Y. 2008) (Rakoff, J.).

[7]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019) (comments of the Reporter).

[8]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 50 (April 1, 2018) (identifying issues such as insufficient investigation, cherry-picking data, or misapplying standard methodologies, as examples of a “white lab coat” problem resulting from juries’ inability to evaluate expert witnesses’ factual bases, methodologies, and applications of methods).

[9]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 10-11 (Oct. 1, 2019) (comments of the Reporter on possible amendment of Rule 702) (internal citation to Joiner omitted).

[10]  Id. at 11 n.5.

[11]  See In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994) (calling for a close, careful analysis of the application of a proper methodology to every step in the case; “any step that renders the analysis unreliable renders the expert’s testimony inadmissible whether the step completely changes a reliable methodology or merely misapplies that methodology”).

[12]  See, e.g., City of Pomona v. SQM North Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014) (rejecting the Paoli any-step approach without careful analysis of the statute, the advisory committee note, or Supreme Court decisions); Manpower, Inc. v. Ins. Co. of Pa., 732 F.3d 796, 808 (7th Cir. 2013) (“[t]he reliability of data and assumptions used in applying a methodology is tested by the adversarial process and determined by the jury; the court’s role is generally limited to assessing the reliability of the methodology – the framework – of the expert’s analysis”); Bonner v. ISP Techs., Inc., 259 F.3d 924, 929 (8th Cir. 2001) (“the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination”).

[13]  Despite the clarity of the revised Rule 702, and the intent to synthesize Daubert, Joiner, Kumho Tire, and Weisgram, some courts have insisted that nothing changed with the amended rule. See, e.g., Pappas v. Sony Elec., Inc., 136 F. Supp. 2d 413, 420 & n.11 (W.D. Pa. 2000) (opining that Rule 702 as amended did not change the application of Daubert within the Third Circuit) (“The Committee Notes to the amended Rule 702 cite and discuss several Court of Appeals decisions that have properly applied Daubert and its progeny. Among these decisions are numerous cases from the Third Circuit. See Committee Note to 2000 Amendments to Fed. R.Evid. 702. Accordingly, I conclude that amended Rule 702 does not effect a change in the application of Daubert in the Third Circuit.”). Of course, if nothing changed, then the courts that take this position should be able to square their decisions with text of Rule 702, as amended in 2000.

Should Federal Rule of Evidence 702 Be Amended?

May 8th, 2020

Almost 27 years have passed since the United States Supreme Court issued its opinion in Daubert.[1] The holding was narrow. The Court reminded the Bar that Federal Rule of Evidence 702 was a statute, and that courts were thus bound to read it as a statute. The plain language of Rule 702 had been adopted by the Court in 1972, and then enacted by Congress, to be effective on July 1, 1975. Absent from the enacted Rule 702 was the “twilight zone” test articulated by a lower federal court in 1923.[2] In the Daubert case, the defense erroneously urged the application of the twilight zone test. In the post-modern way, the plaintiffs urged the application of no test.[3] The Court held simply that the twilight zone test had not been incorporated in the statutory language of Rule 702. Instead, the Court observed that the plain language of the statute imposed “helpfulness” and epistemic requirements for admitting expert witness opinion testimony.

It took another two Supreme Court decisions to flesh out the epistemic requirements for expert witnesses’ opinions,[4] and a third decision in which the Court told the Bench and Bar that the requirements of Rule 702 are “exacting.”[5] After the Supreme Court had added significantly to Rule 702’s helpfulness and knowledge requirements, the Advisory Committee revised the rule in 2000, to synthesize and incorporate these four Supreme Court decisions, and scholarly thinking about the patho-epistemology of expert witness opinion testimony. The Committee revised Rule 702 again in 2011, but only on “stylistic” issues, without any intent to add to or subtract from the 2000 rule.

Not all judges got the memo, or bothered to read and implement the revised Rule 702, in 2000. At both the District Court and the Circuit levels, courts persisted, and continue to persist, in citing retrograde decisions that predate the 2000 amendment, and even predate the 1993 decision in Daubert. Even the Supreme Court, in a 2011 opinion that did not involve the interpretation of Rule 702, was misled by a Solicitor General’s amicus brief into citing one of the most anti-science, anti-method, post-modern, pre-Daubert, anything-goes decisions.[6] The judicial resistance to Rule 702 is well documented in many scholarly articles,[7] by the Reporter to the Advisory Committee,[8] and in the pages of this and other blogs.

In 2015, when evidence scholar David Bernstein argued that Rule 702 required amending,[9] I acknowledged the strength of his argument, but resisted because of what I perceived to be the danger of opening up the debate in Congress.[10] Professor Bernstein and lawyer Eric Lasker detailed and documented the many judicial dodges and evasions engaged in by many judges intent upon ignoring the clear requirements of Rule 702.

A paper published this week by the Washington Legal Foundation has updated and expanded the case for reform made by Professor Bernstein five years ago. In his advocacy paper, lawyer Lee Mickus has collated and analyzed some of the more recent dodges, which will depress the spirits of anyone who believes in evidence-based decision making.[11] My resistance to reform by amendment is waning. The meaning and intent of Rule 702 have been scarred over by precedent based upon judicial ipse dixit, and not Rule 702.

Mickus’s paper, like Professor Bernstein’s articles before, makes a persuasive case for reform, but this new paper does not evaluate the vagaries of navigating an amendment through the Advisory Committee, the Supreme Court, and Congress. Even if the reader is not interested in the amendment process, the paper can be helpful to the advocate in anticipating dodgy rule denialism.


[1]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[2]  Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (1923).

[3]  See “The Advocates’ Errors in Daubert” (Dec. 28, 2018).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999).

[5]  Weisgram v. Marley Co., 528 U.S. 440, 455 (2000) (Ginsburg, J.) (unanimous decision).

[6]  Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27, 131 S.Ct. 1309, 1319 (2011) (citing Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262, 298 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986)). See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1”; “Part 2”; “Part 3”; “Part 4”; “Part 5”; and “Part 6”.

[7]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015); David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2014).

[8]  See Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 52 (April 1, 2018) (“[T]he fact remains that some courts are ignoring the requirements of Rule 702(b) and (d). That is frustrating.”).

[9]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015).

[10]  “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

[11]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

Data Games – A Techno Thriller

April 22nd, 2020


Sherlock Holmes, Hercule Poirot, Miss Marple, Father Brown, Harry Bosch, Nancy Drew, Joe and Frank Hardy, Sam Spade, Columbo, Lennie Briscoe, Inspector Clouseau, and Dominic Da Vinci:

Move over; there is a new super sleuth in town.

Meet Professor Ken Wheeler.

Ken is a statistician, and so by profession, he is a data detective. In his day job, he teaches at a northeastern university, where his biggest challenges are managing the expectations of students and administrators, while trying to impart statistical learning. At home, Ken rarely manages to meet the expectations of his wife and son. But as some statisticians are wont to do, Ken sometimes takes on consulting gigs that require him to use his statistical skills to help litigants sort out the role of chance in cases that run from discrimination claims to rare health effects. In this contentious, sharp-elbowed environment, Ken excels. And truth be told, Ken actually finds great satisfaction in identifying the egregious errors and distortions of adversary statisticians.

Wheeler’s sleuthing usually involves ascertaining random error or uncovering a lurking variable, but in Herbert I. Weisberg’s just-published novel, Data Games: A Techno Thriller, Wheeler is drawn into a high-stakes conspiracy of intrigue, violence, and fraud that goes way beyond the run-of-the-mine p-hacking and data dredging.

An urgent call from a scientific consulting firm puts Ken Wheeler in the midst of imminent disaster for a pharmaceutical manufacturer, whose immunotherapy anti-cancer wonder drug, Verbana, is under attack. A group of apparently legitimate scientists have obtained the dataset from Verbana’s pivotal clinical trial, and they appear on the verge of blowing Verbana out of the formulary with a devastating analysis that will show that the drug causes early dementia. Wheeler’s mission is to debunk the debunking analysis when it comes.

For those readers who are engaged in the litigation defense of products liability claims against medications, the scenario is familiar enough. The scientific group studying Verbana’s alleged side effect seems on the up-and-up, but it appears to be engaged in a cherry-picking exercise, guided by a dubious theory of biological plausibility, known as the “Kreutzfeld hypothesis.”

It is not often that mystery novels turn on surrogate outcomes, biomarkers, genomic medicine, and predictive analytics, but Data Games is no ordinary mystery. And Wheeler is no ordinary detective. To be sure, the middle-aged Wheeler drives a middle-aged BMW, not a Bond car, and certainly not a Bonferroni. And Wheeler’s toolkit may not include a Glock, but he can handle the lasso, the jackknife, and the logit, and serve them up with SAS. Wheeler sees patterns where others see only chaos.

Unlike the typical Hollywood rubbish about stereotyped evil pharmaceutical companies, the hero of Data Games finds that there are sinister forces behind what looks like an honest attempt to uncover safety problems with Verbana. These sinister forces will use anything to achieve their illicit ends, including superficially honest academics with white hats. The attack on Verbana gets the FDA’s attention and an urgent hearing in White Oak, where Wheeler shines.

The author of Data Games, Herbert I. Weisberg, is himself a statistician, and a veteran of some of the dramatic data games he writes about in this novel. Weisberg is perhaps better known for his “homework” books, such as Willful Ignorance: The Mismeasure of Uncertainty (2014), and Bias and Causation: Models and Judgment for Valid Comparisons (2010). If, however, you ever find yourself in a pandemic lockdown, Weisberg’s Data Games: A Techno Thriller is a perfect way to escape. For under $3, you will be entertained, and you might even learn something about probability and statistics.

Disproportionality Analyses Misused by Lawsuit Industry

April 20th, 2020

Adverse event reporting is a recognized, important component of pharmacovigilance. Regulatory agencies around the world further acknowledge that an increased rate of reporting of a specific adverse event may signal the possible existence of an association. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication–adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]

Most sane and sensible epidemiologists recognize that the low quality, inconsistencies, and biases of the data in adverse event reporting databases render studies of disproportionate reporting “poor surrogates for controlled epidemiologic studies.” In the face of incomplete and inconsistent reporting, so-called disproportionality analyses (“DPA”) assume that incomplete reporting will be constant for all events for a specific medication. Regulatory attention, product labeling, lawyer advertising and client recruitment, social media and publicity, and time since launch are all known to affect reporting rates, and to ensure that reporting rates for some event types for a specific medication will be higher. Thus, the DPA assumptions are virtually always false and unverifiable.[2]

DPAs are non-analytical epidemiologic studies that cannot rise in quality or probativeness above the level of the anecdote upon which they are based. DPAs may generate signals or hypotheses, but they cannot test hypotheses of causality. Although simple in concept, DPAs involve some complicated computations that imbue them with an aura of “proofiness.” As would-be studies that lack probativeness for causality, they are thus ideal tools for the lawsuit industry to support litigation campaigns against drugs and medical devices. Indeed, if a statistical technique is difficult to understand but relatively easy to perform and even easier to pass off to unsuspecting courts and juries, then you can count on its metastatic use in litigation. The DPA has become one of the favorite tools of the lawsuit industry’s statisticians. This litigation use, however, cannot obscure the simple fact that the relative reporting risk provided by a DPA can never rise to the level of a relative risk.
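The computational core of a DPA is, in fact, simple arithmetic on a two-by-two table of report counts. A minimal sketch of one common measure, the proportional reporting ratio (PRR), using purely hypothetical counts, illustrates why the output is a ratio of reporting proportions, and not a measure of risk in any population:

```python
# Proportional reporting ratio (PRR) from a hypothetical 2x2 table of
# spontaneous reports -- illustrative counts only, not real FAERS data.
#
#                    event of interest   all other events
# drug of interest          a                   b
# all other drugs           c                   d

def prr(a, b, c, d):
    """PRR = [a / (a + b)] / [c / (c + d)] -- a ratio of reporting
    proportions within the database, not of incidence in any population."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical: 30 of 1,000 reports for the drug name the event of
# interest, versus 200 of 20,000 reports for all other drugs.
print(round(prr(30, 970, 200, 19800), 2))  # prints 3.0
```

Nothing in this calculation tells us how many patients took the drug, or how many experienced the event; the denominators are counts of reports, which is exactly why a PRR of 3.0 cannot be read as a tripled risk.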

In one case in which a Parkinson’s disease patient claimed that his compulsive gambling was caused by his use of the drug Requip, the plaintiff’s expert witness attempted to invoke a DPA in support of his causal claim. In granting a Rule 702 motion to exclude the expert witnesses who relied upon a DPA, the trial judge rejected the probativeness of DPAs, based upon the FDA’s rejection of such analyses for anything other than signal detection.[3]

In the Accutane litigation, statistician David Madigan attempted to support his fatally weak causation opinion with a DPA for Crohn’s disease and Accutane adverse event reports. According to the New Jersey Supreme Court, Madigan claimed that his DPA showed a “striking signal of disproportionality” indicative of a “strong association” between Accutane use and Crohn’s disease.[4] With the benefit of a thorough review by the trial court, the New Jersey Supreme Court found other indicia of unreliability in Madigan’s opinions, such that it was not fooled by Madigan’s shenanigans. In any event, no signal of disproportionality could ever show an association between medication use and a disease; at best the DPA can show only an association between reporting of the medication use and the outcome of interest.

In litigation over Mirena and intracranial hypertension, one of the lawsuit industry’s regulars, Mahyar Etminan, published a DPA based upon the FDA’s Adverse Event Reporting System, which purported to find an increased reporting odds ratio.[5] Unthinkingly, the plaintiffs’ other testifying expert witnesses relied upon Etminan’s study. When a defense expert witness pointed out that Etminan had failed to adjust for age and gender in his multivariate analysis,[6] Etminan repudiated his findings.[7] Remarkably, when Etminan published his original DPA in 2015, he declared that he had no conflicts, but when he published his repudiation, he disclosed that he “has been an expert witness in Mirena litigation in the past but is no longer part of the litigation.” The Etminan kerfuffle helped scuttle the plaintiffs’ assault on Mirena.[8]
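The adjustment failure that sank Etminan’s analysis is ordinary epidemiologic confounding. A hypothetical sketch (invented counts, not the actual Mirena data) shows how collapsing over a variable such as sex can manufacture an elevated crude reporting odds ratio even when no stratum shows any disproportionality at all:

```python
# Crude vs. stratified reporting odds ratios (ROR) -- hypothetical counts
# chosen to illustrate confounding, not the actual Mirena/FAERS data.

def odds_ratio(a, b, c, d):
    """OR = (a*d)/(b*c) for a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Each stratum: (drug & event, drug & no event, other & event, other & no event)
women = (90, 910, 900, 9100)
men = (1, 99, 100, 9900)

print(round(odds_ratio(*women), 2))  # prints 1.0 -- no signal among women
print(round(odds_ratio(*men), 2))    # prints 1.0 -- no signal among men

# Collapse the strata: the crude ROR is inflated even though neither
# stratum shows any disproportionality, because the drug's reports and
# the event's reports both come disproportionately from women.
crude = tuple(w + m for w, m in zip(women, men))
print(round(odds_ratio(*crude), 2))  # prints 1.71
```

An unadjusted reporting odds ratio of 1.71 here is pure artifact of the mix of reporters, which is one reason a failure to adjust for age and gender can be fatal to a DPA.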

DPAs have, on occasion, bamboozled federal judges into treating them as analytical epidemiology that can support causal claims. For instance, misrepresentations or misunderstandings of what DPAs can and cannot do carried the day in a Rule 702 contest on the admissibility of opinion testimony by statistician Rebecca Betensky. In multidistrict litigation over the safety of inferior vena cava (“IVC”) filters, plaintiffs’ counsel retained Betensky to prepare a DPA of adverse events reported for the defendants’ retrievable filters. The MDL judge’s description of Betensky’s opinion demonstrates that her DPA was either misrepresented or misunderstood:

“In this MDL, Dr. Betensky opines generally that there is a higher risk of adverse events for Bard’s retrievable IVC filters than for its permanent SNF.”[9]

The court clearly took Betensky to be opining about risk, and not the risk of reporting. The court’s opinion goes on to describe Betensky’s calculation of a “reporting risk ratio,” but nonetheless held that she could testify that the retrievable IVC filters increased the risk of the claimed adverse events, and not merely that there was an increase in reporting risk ratios.

Betensky acknowledged that the reporting risk ratios were “imperfect estimates of the actual risk ratios,”[10] but nevertheless dismissed all caveats about the inability of DPAs to assess actual increased risk. The trial court quoted Dr. Betensky’s attempt to infuse analytical rigor into a data mining exercise:

“[A]dverse events are generally considered to be underreported to the databases, and potentially differentially by severity of adverse event and by drug or medical device. . . . It is important to recognize that underreporting in and of itself is not problematic. Rather, differential underreporting of the higher risk device is what leads to bias. And even if there was differential underreporting of the higher risk device, given the variation in reporting relative risks across adverse events, the differential reporting would have had to have been highly variable across adverse events. This does not seem plausible given the severity of the adverse events considered. Given the magnitude of the RRR’s [relative reporting ratios], and their variability across adverse events, it seems implausible that differential underreporting by filter could fully explain the deviation of the observed RRR’s from 1.”[11]

Of course, this explanation fails to account for differential over-reporting for the newer, but less risky or equally risky, device. Betensky dismissed notoriety bias as having caused an increase in reporting adverse events because her DPA ended with 2014, before the FDA had issued a warning letter. The lawsuit industry, however, was on the attack against IVC filters years before 2014.[12] Similarly, Betensky dismissed consideration of the Weber effect, but her analysis apparently failed to acknowledge that notoriety and the Weber effect are just two of many possible biases in DPAs.
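Betensky’s implausibility argument can be stress-tested with trivial arithmetic. A hypothetical sketch (invented numbers, not the Bard IVC filter data) shows that differential reporting alone, with identical true risks for both devices, moves a relative reporting ratio well away from 1.0:

```python
# How differential reporting alone can move a relative reporting ratio
# (RRR) away from 1.0 -- hypothetical numbers, not the Bard filter data.

def rrr(reports_a, total_a, reports_b, total_b):
    """Ratio of reporting proportions for device A versus device B."""
    return (reports_a * total_b) / (reports_b * total_a)

true_events = 50        # identical true event count for both devices
per_device_use = 10_000 # identical usage, hence identical true risk

# Suppose 60% of events (30 of 50) get reported for the newer,
# litigation-targeted device, but only 20% (10 of 50) for the older one.
reports_new, reports_old = 30, 10

print(rrr(reports_new, per_device_use, reports_old, per_device_use))  # prints 3.0
```

A threefold RRR emerges here with zero difference in actual risk, which is why an elevated reporting ratio, standing alone, cannot carry a causal inference.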

In the face of her credentials, the MDL trial judge retreated to the usual chestnuts that are served up when a Rule 702 challenge is denied.  Judge Campbell thus observed that “[i]t is not the job of the court to insure that the evidence heard by the jury is error-free, but to insure that it is sufficiently reliable to be considered by the jury.”[13]  The trial judge professed a need to be “be careful not to conflate questions of admissibility of expert testimony with the weight appropriately to be accorded to such testimony by the fact finder.”[14] The court denied the claim that Betensky had engaged in an ipse dixit, by engaging in its own ipse dixit. Judge Campbell found that Betensky had explained her assumptions, had acknowledged shortcomings, and had engaged in various sensitivity tests of the validity of her DPA; and so he concluded that Betensky did not present “a case where ‘there is simply too great an analytical gap between the data and the opinion proffered’.”[15]

By closing off inquiry into the limits of the DPA methodology, Judge Campbell stumbled into a huge analytical gap that he either ignored or overlooked. Even the best DPAs cannot substitute for analytical epidemiology in a scientific methodology of determining causation. The ipse dixit becomes apparent when we consider that the MDL gatekeeping opinion on Rebecca Betensky fails to mention the extensive body of regulatory and scientific opinion about the distinct methodologic limitations of DPAs. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, instructs us that

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[16]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[17] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[18] Because of the uncertainties in DPAs,

“FDA suggests that a comparison of two or more reporting rates be viewed with extreme caution and generally considered exploratory or hypothesis-generating. Reporting rates can by no means be considered incidence rates, for either absolute or comparative purposes.”[19]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[20]

The current version of perhaps the leading textbook on pharmacoepidemiology is completely in accord with the above regulatory guidances. In addition to emphasizing the limitations on data quality from adverse event reporting, and the inability to interpret temporal trends, the textbook authors clearly characterize DPAs as generating signals, and unable to serve as hypothesis tests:

“a signal of disproportionality is a measure of a statistical association within a collection of AE/ADR reports (rather than in a population), and it is not a measure of causality. In this regard, it is important to underscore that the use of data mining is for signal detection – that is, for hypothesis  generation – and that further work is needed to evaluate the signal.”[21]

Reporting ratios are not, and cannot serve as, measures of incidence or prevalence, because adverse event databases do not capture all the events of interest, and so these ratios “must be interpreted cautiously.”[22] The authors further emphasize that “well-designed pharmacoepidemiology or clinical studies are needed to assess the signal.”[23]

The authors of this chapter are all scientists and officials at the FDA’s Center for Drug Evaluation and Research, and the World Health Organization. Although they properly disclaimed speaking for their agencies, their agencies have independently embraced their concepts in other agency publications. The consensus view of the hypothesis-generating nature of DPAs can easily be seen in surveying the relevant literature.[24] Passing off a DPA as a study that supports causal inference is not a mere matter of “weight,” or excluding any opinion that has some potential for error. The misuse of Betensky’s DPA is a methodological error that goes to the heart of what Congress intended to be screened and excluded by Rule 702.


[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004).

[2]  Id. See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 68-69 (2nd ed. 2017) (noting the example of the WHO’s DPA that found a 10-fold reporting rate increase for statins and ALS, which reporting association turned out to be spurious).

[3]  Wells v. SmithKline Beecham Corp., 2009 WL 564303, at *12 (W.D. Tex. 2009) (citing and quoting from the FDA’s Guidance for Industry: Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment (2005)), aff’d, 601 F.3d 375 (5th Cir. 2010). But see In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1324 (N.D. Fla. 2018) (noting that the finding of a DPA that compared Abilify with other anti-psychotics helped to show that a traditional epidemiologic study was not confounded by the indication for depressive symptoms).

[4]  In re Accutane Litig., 234 N.J. 340, 191 A.3d 560, 574 (2018).

[5]  See Mahyar Etminan, Hao Luo, and Paul Gustafson, et al., “Risk of intracranial hypertension with intrauterine levonorgestrel,” 6 Therapeutic Advances in Drug Safety 110 (2015).

[6]  Deborah Friedman, “Risk of intracranial hypertension with intrauterine levonorgestrel,” 7 Therapeutic Advances in Drug Safety 23 (2016).

[7]  Mahyar Etminan, “Revised disproportionality analysis of Mirena and benign intracranial hypertension,” 8 Therapeutic Advances in Drug Safety 299 (2017).

[8]  In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 387 F. Supp. 3d 323, 331 (S.D.N.Y. 2019) (Engelmayer, J.).

[9]  In re Bard IVC Filters Prods. Liab. Litig., No. MDL 15-02641-PHX DGC, Order Denying Motion to Exclude Rebecca Betensky at 2 (D. Ariz. Jan. 22, 2018) (Campbell, J.) (emphasis added) [hereinafter Order].

[10]  Id. at 4.

[11]  Id.

[12]  See Matt Fair, “C.R. Bard’s Faulty Filters Pose Health Risks, Suit Says,” Law360 (Aug. 10, 2012); see, e.g., Derrick J. Stobaugh, Parakkal Deepak, & Eli D. Ehrenpreis, “Alleged isotretinoin-associated inflammatory bowel disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 393 (2013) (documenting stimulated reporting from litigation activities).

[13]  Order at 6, quoting from Southwire Co. v. J.P. Morgan Chase & Co., 528 F. Supp. 2d 908, 928 (W.D. Wis. 2007).

[14]  Id., citing In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010).

[15]  Id., citing and quoting from In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010) ((quoting General Electric v. Joiner, 522 U.S. 136, 146 (1997)).

[16]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[17]  Id. at 9.

[18]  Id.

[19]  Id. at 11 (emphasis added).

[20]  EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added).

[21]  Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 185 (6th ed. 2020) (emphasis added).

[22]  Id. at 187.

[23]  Id. See also Andrew Bate, Gianluca Trifirò, Paul Avillach & Stephen J.W. Evans, “Data Mining and Other Informatics Approaches to Pharmacoepidemiology,” chap. 27, in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 685-88 (6th ed. 2020) (acknowledging the importance of DPAs for detecting signals that must then be tested with analytical epidemiology) (authors from industry, Pfizer, and academia, including NYU School of Medicine, Harvard Medical School, and London School of Hygiene and Tropical Medicine).

[24]  See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 61 (2nd ed. 2017) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk.” *** “Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Ronald D. Mann & Elizabeth B. Andrews, Pharmacovigilance 240 (2d ed. 2007) (“Importantly, data mining cannot prove or refute causal associations between drugs and events. Data mining simply identifies disproportionality of drug–event reporting patterns in databases. The absence of a signal does not rule out a safety problem. Similarly, the presence of a signal is not a proof of a causal relationship between a drug and an adverse event.”); Patrick Waller, An Introduction to Pharmacovigilance 49 (2010) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk. Whilst it is true that the greater the degree of disproportionality, the more reason there is to look further, the only real utility of the numbers is to decide whether or not there are more cases than might reasonably have been expected. Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Sidney N. Kahn, “You’ve found a safety signal–now what?  Regulatory implications of industry signal detection activities,” 30 Drug Safety 615 (2007).

Dark Money, Scott Augustine, and Hot Air

April 11th, 2020

Fraud by the litigation industry takes many different forms. In the massive silicosis litigation unleashed in Mississippi and Texas in the early 2000s, plaintiffs’ lawyers colluded with physicians to concoct dubious diagnoses of silicosis. Fraudulent diagnoses of silicosis led to dismissals of thousands of cases, as well as the professional defrocking of some physician witnesses.[1] For those trying to keep up with the lawsuit industry’s publishing arm, discussion of the Great Silicosis Fraud is completely absent from David Michaels’ recent book, The Triumph of Doubt.[2] So too is any mention of “dark money” that propelled the recently concluded Bair Hugger litigation.

Back in 2017, I wrote about the denial of a Rule 702 motion in the Bair Hugger litigation.[3] At the time, I viewed the trial court’s denial, on the facts of the case, to be a typical failure of gatekeeping.[4] Events in the Bair Hugger cases were only warming up in 2017.

After the court’s ruling, 3M took the first bellwether case to trial and won a jury verdict on May 30, 2018. Perhaps this jury verdict encouraged the MDL trial judge to take 3M’s motion for reconsideration of the Rule 702 motion seriously. In July 2019, the MDL court granted 3M’s motion to exclude the opinion testimony of plaintiffs’ general causation and mechanism expert witnesses, Drs. Jarvis, Samet, Stonnington, and Elghobashi.[5] Without these witnesses, over 5,000 plaintiffs, who had been misled about the merits of their cases, were stranded and set up for dismissal. On August 2, 2019, the MDL cases were dismissed for want of evidentiary support on causation. On August 29, 2019, plaintiffs filed a joint notice of appeal to the Eighth Circuit.

The two Bair Hugger Rule 702 federal court decisions focused (or failed to focus) on scientific considerations. Most of the story of “dark money” and the manufacturing of science to support the litigation was suppressed in the Rule 702 motion practice, and in the federal jury trial. In her second Rule 702 reconsideration opinion, the MDL judge did mention undisclosed conflicts of interest by authors of the key studies relied upon by plaintiffs’ witnesses.[6]

To understand how the Bair Hugger litigation got started, and to obtain a full understanding of the nature of the scientific evidence, a disinterested observer will have to read the state court decisions. Defendant 3M moved to exclude plaintiffs’ causation expert witnesses, in its Minnesota state court cases, under the so-called Frye standard. The state judge excluded plaintiffs’ witnesses for advancing a novel scientific theory that lacked acceptance in the relevant scientific community. The Minnesota Court of Appeals affirmed, with a decision that talked rather more freely about the plaintiffs’ counsel’s dark money. In re 3M Bair Hugger Litig., 924 N.W.2d 16 (Minn. App. 2019) [cited as Bair Hugger].

As the Minnesota Court of Appeals explained, a forced-air warming device (FAWD) is a very important, useful device to keep patients’ body temperatures normal during surgery. The “Bair Hugger” is a FAWD, which was invented in 1987, by Dr. Scott Augustine, an anesthesiologist, who at the time was the chief executive officer of Augustine Medical, Inc. Bair Hugger at 19.

In the following 15 years, the Bair Hugger became the leading FAWD in the world. In 2002, the federal government notified Augustine that it was investigating him for Medicare fraud. Augustine resigned from the company that bore his name, and the company purged the taint by reorganizing as Arizant Healthcare Inc. (Arizant), which continued to make the Bair Hugger. In the following year, 2003, Augustine pleaded guilty to fraud and paid a $2 million fine. His sentence included a five-year ban from involvement in federal health-care programs.

During the years of his banishment, fraudfeasor Augustine developed a rival product and then embarked upon a global attack on the safety of his own earlier invention, the Bair Hugger. In the United Kingdom, his claim that the Bair Hugger increased the risk of surgical site infections was rejected by the UK National Institute for Health and Clinical Excellence. A German court enjoined Augustine from falsely claiming that the Bair Hugger led to increased bacterial contamination.[7] The United States FDA considered and rejected Augustine’s claims, and recommended the use of FAWDs.

In 2009, Augustine began to work as a non-testifying expert witness with the Houston, Texas, plaintiffs’ law firm of Kennedy Hodges LLP. A series of publications resulted in which the authors attempted to raise questions about the safety of the Bair Hugger. By 2013, with the medical literature “seeded” with several studies attacking the Bair Hugger, the Kennedy Hodges law firm began to manufacture lawsuits against Arizant and 3M (which had bought the Bair Hugger product line from Arizant in 2010). Bair Hugger at 20.

The seeding studies were marketing and litigation propaganda used by Augustine to encourage the all-too-complicit lawsuit industry to ramp up production of complaints against 3M over the Bair Hugger. Several of the plaintiffs’ studies included as an author a young statistician, Mark Albrecht, an employee of, or a contractor for, Augustine’s new companies, Augustine Temperature Management and Augustine Medical. Even when disclosures were made, they were at best “anemic”:

“The author or one or more of the authors have received or will receive benefits for personal or professional use from a commercial party related directly or indirectly to the subject of this article.”[8]

Some of these studies generally included a disclosure that Albrecht was funded or employed by Augustine, but they did not disclose the protracted, bitter feud or Augustine’s confessed fraudulent conduct. Another author of some of the plaintiffs’ studies was David Leaper, a highly paid “consultant” to Augustine at the time of the work on the studies. None of the studies disclosed Leaper’s consultancy for Augustine:

  1. Mark Albrecht, Robert Gauthier, and David Leaper, “Forced air warming, a source of airborne contamination in the operating room?” 1 Orthopedic Rev. (Pavia) e28 (2009)
  2. Mark Albrecht, Robert L. Gauthier, Kumar Belani, Mark Litchy, and David Leaper, “Forced-air warming blowers: An evaluation of filtration adequacy and airborne contamination emissions in the operating room,” 39 Am. J. Infection Control 321 (2011)
  3. P.D. McGovern, Mark Albrecht, Kumar Belani, C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537 (2011)
  4. K.B. Dasari, Mark Albrecht, and M. Harper, “Effect of forced-air warming on the performance of operating-theatre laminar-flow ventilation,” 67 Anaesthesia 244 (2012)
  5. Mike Reed, Oliver Kimberger, Paul D. McGovern, and Mark C. Albrecht, “Forced-Air Warming Design: Evaluation of Intake Filtration, Internal Microbial Buildup, and Airborne-Contamination Emissions,” 81 Am. Ass’n Nurse Anesthetists 275 (2013)
  6. Kumar Belani, Mark Albrecht, Paul McGovern, Mike Reed, and Christopher Nachtsheim, “Patient warming excess heat: the effects on orthopedic operating room ventilation performance,” 117 Anesthesia & Analgesia 406 (2013)

In one study, Augustine’s employee Mark Albrecht conducted the experiment with one of the authors, but was not listed as an author although he wrote an early draft of the study. Augustine provided all the equipment used in the experiment. The published paper failed to disclose any of these questionable activities:

  1. A.J. Legg & A.J. Hammer, “Forced-air patient warming blankets disrupt unidirectional flow,” 95 Bone & Joint J. 407 (2013)

Another study had more peripheral but still questionable involvement of Augustine, whose company lent the authors equipment used to conduct the study, without proper acknowledgment and disclosure:

  1. A.J. Legg, T. Cannon, and A. J. Hamer, “Do forced-air warming devices disrupt unidirectional downward airflow?” 94 J. Bone & Joint Surg. – British 254 (2012)

In addition to the defects in the authors’ disclosures, 3M discovered that two of the studies had investigated whether the Bair Hugger spread bacteria in the surgical area. Although the experiments found no spread with the Bair Hugger, the researchers never publicly disclosed their exculpatory evidence.[9]

Augustine’s marketing campaign, through these studies, ultimately fell flat at the FDA, which denied his citizen’s petition and recommended that surgeons continue to use FAWDs such as the Bair Hugger.[10] Augustine’s proxy litigation war against 3M also fizzled, unless the 8th Circuit revives his vendetta. Nonetheless, the Augustine saga raises serious questions about how litigation funding of “scientific studies” will vex the search for the truth in pharmaceutical products litigation. The Augustine attempt to pollute the medical literature was relatively apparent, but dark money from undisclosed financiers may require greater attention from litigants and from journal editors.


[1]  In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D.Tex. 2005).

[2]  David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]  In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017).

[4]  “Gatekeeping of Expert Witnesses Needs a Bair Hug” (Dec. 20, 2017).

[5]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., MDL No. 15-2666, 2019 WL 4394812 (D. Minn. July 31, 2019). See Joe G. Hollingsworth & Caroline Barker, “Exclusion of Junk Science in ‘Bair Hugger’ MDL Shows Daubert Is Still Breathing,” Wash. Leg. Foundation (Jan 23, 2020); Christine Kain, Patrick Reilly, Hannah Anderson and Isabelle Chammas, “Top 5 Drug And Medical Device Developments Of 2019,” Law360 (Jan. 9, 2020).

[6]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., 2019 WL 4394812, at *10 n.13 (D. Minn. July 31, 2019) (observing that “[i]n the published study, the authors originally declared no conflicts of interest”).

[7]  Dr. Augustine has never been a stranger to the judicial system. See, e.g., Augustine Medical, Inc. v. Gaymar Industries, Inc., 181 F.3d 1291 (Fed. Cir. 1999); Augustine Medical, Inc. v. Progressive Dynamics, Inc., 194 F.3d 1367 (Fed. Cir. 1999); Cincinnati Sub-Zero Products, Inc. v. Augustine Medical, Inc., 800 F. Supp. 1549 (S.D. Ohio 1992).

[8]  P.D. McGovern, Mark Albrecht, Kumar Belani, and C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537, 1544 (2011).

[9]  See https://www.truthaboutbairhugger.com/truth-science-behind-claims-3m-bair-hugger-system-look-augustine-connections-research-studies/.

[10]  William Maisel, “Information about the Use of Forced Air Thermal Regulating Systems – Letter to Health Care Providers”; Center for Devices and Radiological Health, U.S. Food and Drug Administration (Aug. 30, 2017).

April Fool – Zambelli-Weiner Must Disclose

April 2nd, 2020

Back in the summer of 2019, Judge Saylor, the MDL judge presiding over the Zofran birth defect cases, ordered epidemiologist Dr. Zambelli-Weiner to produce documents relating to an epidemiologic study of Zofran,[1] as well as her claimed confidential consulting relationship with plaintiffs’ counsel.[2]

This previous round of motion practice and discovery established that Zambelli-Weiner was a paid consultant in advance of litigation, that her Zofran study was funded by plaintiffs’ counsel, and that she presented at a Las Vegas conference, for plaintiffs’ counsel only, on [sic] how to make mass torts perfect. Furthermore, she had made false statements to the court about her activities.[3]

Zambelli-Weiner ultimately responded to the discovery requests, but she and plaintiffs’ counsel withheld several documents as confidential, pursuant to the MDL’s procedure for protective orders. Yesterday, April 1, 2020, Judge Saylor granted GlaxoSmithKline’s motion to de-designate four documents that plaintiffs claimed to be confidential.[4]

Zambelli-Weiner sought to resist GSK’s motion to compel disclosure of the documents on a claim that GSK was seeking the documents to advance its own litigation strategy. Judge Saylor acknowledged that Zambelli-Weiner’s psycho-analysis might be correct, but held that GSK’s motive was not the critical issue. According to Judge Saylor, the proper inquiry was whether the claim of confidentiality was proper in the first place, and whether removing the cloak of secrecy was appropriate under the facts and circumstances of the case. Indeed, the court found “persuasive public-interest reasons” to support disclosure, including providing the FDA and the EMA a complete, unvarnished view of Zambelli-Weiner’s research.[5] Of course, the plaintiffs’ counsel, in close concert with Zambelli-Weiner, had created GSK’s need for the documents.

This discovery battle has no doubt been fought because plaintiffs and their testifying expert witnesses rely heavily upon the Zambelli-Weiner study to support their claim that Zofran causes birth defects. The present issue is whether four of the documents produced by Dr. Zambelli-Weiner pursuant to subpoena should continue to enjoy confidential status under the court’s protective order. GSK argued that the documents were never properly designated as confidential, and alternatively, the court should de-designate the documents because, among other things, the documents would disclose information important to medical researchers and regulators.

Judge Saylor’s Order considered GSK’s objections to plaintiffs’ and Zambelli-Weiner’s withholding four documents:

(1) Zambelli-Weiner’s Zofran study protocol;

(2) Undisclosed, hidden analyses that compared birth defect rates for children born to mothers who used Zofran with the rates seen with the use of other anti-emetic medications;

(3) An earlier draft Zambelli-Weiner’s Zofran study, which she had prepared to submit to the New England Journal of Medicine; and

(4) Zambelli-Weiner’s advocacy document, a “Causation Briefing Document,” which she prepared for plaintiffs’ lawyers.

Judge Saylor noted that none of the withheld documents would typically be viewed as confidential. None contained “sensitive personal, financial, or medical information.”[6] The court dismissed Zambelli-Weiner’s contention that the documents all contained “business and proprietary information,” as conclusory and meritless. Neither she nor plaintiffs’ counsel explained how the requested documents implicated proprietary information when Zambelli-Weiner’s only business at issue is to assist in making lawsuits. The court observed that she is not “engaged in the business of conducting research to develop a pharmaceutical drug or other proprietary medical product or device,” and that her work at issue related solely to her paid consultancy to plaintiffs’ lawyers. Neither she nor the plaintiffs’ lawyers showed how public disclosure would hurt her proprietary or business interests. Of course, if Zambelli-Weiner had been dishonest in carrying out the Zofran study, as reflected in study deviations from its protocol, her professional credibility and her business of conducting such studies might well suffer. Zambelli-Weiner, however, was not prepared to affirm the antecedent of that hypothetical. In any event, the court found that whatever right Zambelli-Weiner might have enjoyed to avoid discovery evaporated with her previous dishonest representations to the MDL court.[7]

The Zofran Study Protocol

GSK sought production of the Zofran study protocol, which in theory contained the research plan for the Zofran study and the analyses the researchers intended to conduct. Zambelli-Weiner attempted to resist production on the specious theory that she had not published the protocol, but the court found this “non-publication” irrelevant to the claim of confidentiality. Most professional organizations, such as the International Society of Pharmacoepidemiology (“ISPE”), which ultimately published Zambelli-Weiner’s study, encourage the publication and sharing of study protocols.[8] Disclosure of protocols helps ensure the integrity of studies by allowing readers to assess whether the researchers have adhered to their study plan, or have engaged in ad hoc data dredging in search of a desired result.[9]

The Secret, Undisclosed Analyses

Perhaps even more egregious than withholding the study protocol was the refusal to disclose unpublished analyses comparing the rate of birth defects among children born to mothers who used Zofran with the birth defect rates of children with in utero exposure to other anti-emetic medications.  In ruling that Zambelli-Weiner must produce the unpublished analyses, the court expressed its skepticism over whether these analyses could ever have been confidential. Under ISPE guidelines, researchers must report findings that significantly affect public health, and the relative safety of Zofran is essential to its evaluation by regulators and prescribing physicians.

Not only was Zambelli-Weiner’s failure to include these analyses in her published article ethically problematic, but she apparently hid these analyses from the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, which specifically inquired of Zambelli-Weiner whether she had performed such analyses. As a result, the PRAC recommended a label change based upon Zambelli-Weiner’s failure to disclose material information. Furthermore, the plaintiffs’ counsel represented that they intended to oppose GSK’s citizen petition to the FDA, based upon the Zambelli-Weiner study. The apparently fraudulent non-disclosure of relevant analyses could not have been more fraught with public health significance. The MDL court found that the public health need trumped any (doubtful) claim to confidentiality.[10] Against the obvious public interest, Zambelli-Weiner offered no “compelling countervailing interest” in keeping her secret analyses confidential.

There were other aspects to the data-dredging rationale not discussed in the court’s order. Without seeing the secret analyses of other anti-emetics, readers were deprived of an important opportunity to assess actual and potential confounding in her study. Perhaps even more important, the statistical tools that Zambelli-Weiner used, including any measurements of p-values and confidence intervals, and any declarations of “statistical significance,” were rendered meaningless by her secret, undisclosed, multiple testing. As noted by the American Statistical Association (ASA) in its 2016 position statement, “4. Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”[11]
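The arithmetic behind the ASA’s warning is easy to verify. The following Python sketch is purely illustrative (the numbers are hypothetical and are not drawn from the Zambelli-Weiner study): an analyst who quietly runs twenty independent tests on data in which no real effect exists has roughly a 64 percent chance of turning up at least one “statistically significant” p-value below 0.05.

```python
import random

def familywise_error_rate(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive among n_tests independent
    tests of true null hypotheses, each run at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** n_tests

# Analytic result: twenty looks at null data, each at alpha = 0.05.
print(round(familywise_error_rate(0.05, 20), 3))  # 0.642

# Simulation check: under the null, each p-value is uniform on [0, 1],
# so a "significant" result is a draw below 0.05.
random.seed(0)
trials = 20_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20))
    for _ in range(trials)
)
print(round(hits / trials, 2))  # close to 0.64
```

The point is not the particular numbers, but that a reported p < 0.05 cannot be interpreted without knowing how many analyses were run and how the reported one was selected.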

The Draft Manuscript for the New England Journal of Medicine

The MDL court wasted little time and ink in dispatching Zambelli-Weiner’s claim of confidentiality for her draft New England Journal of Medicine manuscript. The court found that she failed to explain how any differences in content between this manuscript and the published version constituted “proprietary business information,” or how disclosure would cause her any actual prejudice.

Zambelli-Weiner’s Litigation Road Map

In a world where social justice warriors complain about organizations such as Exponent, for its litigation support of defense efforts, the revelation that Zambelli-Weiner was helping to quarterback the plaintiffs’ offense deserves greater recognition. Zambelli-Weiner’s litigation road map was clearly created to help Grant & Eisenhofer, P.A., the plaintiffs’ lawyers, create a causation strategy (to which she would add her Zofran study). Such a document from a consulting expert witness is typically the sort of document that enjoys confidentiality and protection from litigation discovery. The MDL court, however, looked beyond Zambelli-Weiner’s role as a “consulting witness” to her involvement in designing and conducting research. The broader extent of her involvement in producing studies and communicating with regulators made her litigation “strategery” “almost certainly relevant to scientists and regulatory authorities” charged with evaluating her study.[12]

Despite Zambelli-Weiner’s protestations that she had made a disclosure of conflict of interest, the MDL court found her disclosure anemic and the public interest in knowing the full extent of her involvement in advising plaintiffs’ counsel, long before the study was conducted, great.[13]

The legal media has been uncommonly quiet about the rulings on April Zambelli-Weiner in the Zofran litigation. From the Union of Concerned Scientists, and other industry scolds such as David Egilman, David Michaels, and Carl Cranor – crickets. Meanwhile, as the appeal over the admissibility of her testimony is pending before the Pennsylvania Supreme Court,[14] Zambelli-Weiner continues to create an unenviable record in Zofran, Accutane,[15] Mirena,[16] and other litigations.


[1]  April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

[2]  See In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657) [cited as In re Zofran].

[3]  “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[4]  In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) [Order].

[5]  Order at n.3.

[6]  Order at 3.

[7]  See In re Zofran, 392 F. Supp. 3d at 186.

[8]  Order at 4. See also Xavier Kurz, Susana Perez-Gutthann, the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidemiology & Drug Safety 245 (2018).

[9]  Order at note 2 (citing Charles J. Walsh & Marc S. Klein, “From Dog Food to Prescription Drug Advertising: Litigating False Scientific Establishment Claims Under the Lanham Act,” 22 Seton Hall L. Rev. 389, 431 (1992) (noting that adherence to study protocol “is essential to avoid ‘data dredging’—looking through results without a predetermined plan until one finds data to support a claim”)).

[10]  Order at 5, citing Anderson v. Cryovac, Inc., 805 F.2d 1, 8 (1st Cir. 1986) (describing public-health concerns as “compelling justification” for requiring disclosure of confidential information).

[11]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

See also “The American Statistical Association’s Statement on and of Significance” (March 17, 2016); “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).

[12]  Order at 6.

[13]  Cf. Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002) (arguing that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions).

[14]  Walsh v. BASF Corp., GD #10-018588 (Oct. 5, 2016, Pa. Ct. C.P. Allegheny Cty., Pa.) (finding that Zambelli-Weiner’s and Nachman Brautbar’s opinions that pesticides generally cause acute myelogenous leukemia, and that even the smallest exposure to benzene increases the risk of leukemia, offended generally accepted scientific methodology), rev’d, 2018 Pa. Super. 174, 191 A.3d 838, 842-43 (Pa. Super. 2018), appeal granted, 203 A.3d 976 (Pa. 2019).

[15]  In re Accutane Litig., No. A-4952-16T1 (N.J. App. Div. Jan. 17, 2020) (affirming exclusion of Zambelli-Weiner as an expert witness).

[16]  In re Mirena IUD Prods. Liab. Litig., 169 F. Supp. 3d 396 (S.D.N.Y. 2016) (excluding Zambelli-Weiner in part).

Dodgy Data Duck Daubert Decisions

March 11th, 2020

Judges say the darndest things, especially when it comes to their gatekeeping responsibilities under Federal Rules of Evidence 702 and 703. One of the darndest things judges say is that they do not have to assess the quality of the data underlying an expert witness’s opinion.

Even when acknowledging their obligation to “assess the reasoning and methodology underlying the expert’s opinion, and determine whether it is both scientifically valid and applicable to a particular set of facts,”[1] judges have excused themselves from having to look at the trustworthiness of the underlying data for assessing the admissibility of an expert witness’s opinion.

In McCall v. Skyland Grain LLC, the defendant challenged an expert witness’s reliance upon oral reports of clients. The witness, Mr. Bradley Walker, asserted that he regularly relied upon such reports in contexts similar to the allegations that the defendant misapplied herbicide to plaintiffs’ crops. The trial court ruled that the defendant could cross-examine the declarant, who was available at trial, and concluded that the “reliability of that underlying data can be challenged in that manner and goes to the weight to be afforded Mr. Walker’s conclusions, not their admissibility.”[2] Remarkably, the district court never evaluated the reasonableness of Mr. Walker’s reliance upon client reports in this or any context.

In another federal district court case, Rodgers v. Beechcraft Corporation, the trial judge explicitly acknowledged the responsibility to assess whether the expert witness’s opinion was based upon “sufficient facts and data,” but disclaimed any obligation to assess the quality of the underlying data.[3] The trial court in Rodgers cited a Tenth Circuit case from 2005,[4] which in turn cited the Supreme Court’s 1993 decision in Daubert, for the proposition that the admissibility review of an expert witness’s opinion was limited to a quantitative sufficiency analysis, and precluded a qualitative analysis of the underlying data’s reliability. Quoting from another district court criminal case, the court in Rodgers announced that “the Court does not examine whether the facts obtained by the witness are themselves reliable – whether the facts used are qualitatively reliable is a question of the weight to be given the opinion by the factfinder, not the admissibility of the opinion.”[5]

In a 2016 decision, United States v. DishNetwork LLC, the court explicitly disclaimed that it was required to “evaluate the quality of the underlying data or the quality of the expert’s conclusions.”[6] This district court pointed to a Seventh Circuit decision, which maintained that  “[t]he soundness of the factual underpinnings of the expert’s analysis and the correctness of the expert’s conclusions based on that analysis are factual matters to be determined by the trier of fact, or, where appropriate, on summary judgment.”[7] The Seventh Circuit’s decision, however, issued in June 2000, several months before the effective date of the amendments to Federal Rule of Evidence 702 (December 2000).

In 2012, a magistrate judge issued an opinion along the same lines, in Bixby v. KBR, Inc.[8] After acknowledging what must be done in ruling on a challenge to an expert witness, the judge took joy in what could be overlooked. If the facts or data upon which the expert witness has relied are “minimally sufficient,” then, in the gatekeeper’s view, “the nature or quality of the underlying data bear upon the weight to which the opinion is entitled or to the credibility of the expert’s opinion, and do not bear upon the question of admissibility.”[9]

There need not be any common law mysticism to the governing standard. The relevant law is, of course, a statute, which appears to be forgotten in many of the failed gatekeeping decisions:

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

It would seem that you could not produce testimony that is the product of reliable principles and methods by starting with unreliable underlying facts and data. Certainly, having a reliable method would require selecting reliable facts and data from which to start. What good is the reliable application of reliable principles to crummy data?

The Advisory Committee Notes to Rule 702 hint at an answer to the problem:

“There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ‘reasonable reliance’ requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information—whether admissible information or not—is governed by the requirements of Rule 702.”

The answer is only partially satisfactory. First, if the underlying data are independently admissible, then there may indeed be no gatekeeping of an expert witness’s reliance upon such data. Rule 703 imposes a reasonableness test for reliance upon inadmissible underlying facts and data, but appears to give otherwise admissible facts and data a pass. Second, the above judicial decisions do not mention any Rule 703 challenge to the expert witnesses’ reliance. If no such challenge was made, then there is a clear lesson for counsel. When framing a challenge to the admissibility of an expert witness’s opinion, show that the witness has unreasonably relied upon facts and data, from whatever source, in violation of Rule 703. Then show that without the unreasonably relied upon facts and data, the witness cannot show that his or her opinion satisfies Rule 702(a)-(d).


[1]  See, e.g., McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order (D. Colo. June 22, 2010) (Brimmer, J.) (citing Dodge v. Cotter Corp., 328 F.3d 1212, 1221 (10th Cir. 2003), citing in turn Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993)).

[2]  McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order at p.9 n.6 (D. Colo. June 22, 2010) (Brimmer, J.).

[3]  Rodgers v. Beechcraft Corp., Case No. 15-CV-129-CVE-PJC, Report & Recommendation at p.6 (N.D. Okla. Nov. 29, 2016).

[4]  Id., citing United States v. Lauder, 409 F.3d 1254, 1264 (10th Cir. 2005) (“By its terms, the Daubert opinion applies only to the qualifications of an expert and the methodology or reasoning used to render an expert opinion” and “generally does not, however, regulate the underlying facts or data that an expert relies on when forming her opinion.”), citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993).

[5]  Id., citing and quoting United States v. Crabbe, 556 F. Supp. 2d 1217, 1223 (D. Colo. 2008) (emphasis in original). In Crabbe, the district judge mostly excluded the challenged expert witness, thus rendering its verbiage on the quality of data obiter dicta. The pronouncements about the nature of gatekeeping proved harmless error when the court dismissed the case on other grounds. Rodgers v. Beechcraft Corp., 248 F. Supp. 3d 1158 (N.D. Okla. 2017) (granting summary judgment).

[6]  United States v. DishNetwork LLC, No. 09-3073, Slip op. at 4-5 (C.D. Ill. Jan. 13, 2016) (Myerscough, J.).

[7]  Smith v. Ford Motor Co., 215 F.3d 713, 718 (7th Cir. 2000).

[8]  Bixby v. KBR, Inc., Case 3:09-cv-00632-PK, Slip op. at 6-7 (D. Ore. Aug. 29, 2012) (Papak, M.J.).

[9]  Id. (citing Hangarter v. Provident Life & Accident Ins. Co., 373 F.3d 998, 1017 (9th Cir. 2004), quoting Children’s Broadcasting Corp. v. Walt Disney Co., 357 F.3d 860, 865 (8th Cir. 2004) (“The factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”)).

Practical Solutions for the Irreproducibility Crisis

March 3rd, 2020

I have previously praised the National Association of Scholars (NAS) for its efforts to sponsor a conference on “Fixing Science: Practical Solutions for the Irreproducibility Crisis.” The conference was a remarkable event, with a good deal of diverse viewpoints, civil discussion and debate, and collegiality.

The NAS has now posted a follow-up to its conference, with a link to slide presentations, and to a YouTube page with videos of the presentations. The NAS, along with The Independent Institute, should be commended for their organizational efforts, and their transparency in making the conference contents available now to a wider audience.

The conference took place on February 7th and 8th, and I had the privilege of starting the event with my presentation, “Not Just an Academic Dispute: Irreproducible Scientific Evidence Renders Legal Judgments Unsafe”.

Some, but not all, of the interesting presentations that followed:

Tim Edgell, “Stylistic Bias, Selective Reporting, and Climate Science” (Feb. 7, 2020)

Patrick J. Michaels, “Biased Climate Science” (Feb. 7, 2020)

Daniele Fanelli, “Reproducibility Reforms if there is no Irreproducibility Crisis” (Feb. 8, 2020)

On Saturday, I had the additional privilege of moderating a panel on “Group Think” in science, and its potential for skewing research focus and publication:

Lee Jussim, “Intellectual Diversity Limits Groupthink in Scientific Psychology” (Feb. 8, 2020)

Mark Regnerus, “Groupthink in Sociology” (Feb. 8, 2020)

Michael Shermer, “Giving the Devil His Due” (Feb. 8, 2020)

Later on Saturday, the presenters turned to methodological issues, many of which are key to understanding ongoing scientific and legal controversies:

Stanley Young, “Prevention and Management of Acute and Late Toxicities in Radiation Oncology” (Feb. 8, 2020)

James E. Enstrom, “Reproducibility is Essential to Combating Environmental Lysenkoism” (Feb. 8, 2020)

Deborah Mayo, “P-Value ‘Reforms’: Fixing Science or Threats to Replication and Falsification?” (Feb. 8, 2020)

Ronald L. Wasserstein, “What Professional Organizations Can Do To Fix The Irreproducibility Crisis” (Feb. 8, 2020)

Louis Anthony Cox, Jr., “Causality, Reproducibility, and Scientific Generalization in Public Health” (Feb. 8, 2020)

David Trafimow, “What Journals Can Do To Fix The Irreproducibility Crisis” (Feb. 8, 2020)

David Randall, “Regulatory Science and the Irreproducibility Crisis” (Feb. 8, 2020)

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an on-line popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must commit to perform the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and English communication skills to meet the needs of this task.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter, who often covers science issues. In his Undark piece, Smith covered some of the oft-told history of the Daubert case, which has been related better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt,” and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article will you find, however, the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases eventually, however, become obvious. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a scientific journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he describes some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations of perjury on credentials by Done. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger as saying that claims from both sides in Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce is “not convinced the drug should be back on the market,” but really, why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and is well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law, for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggested necessity of relying upon the litigation system as an adjunct to regulatory approval is at best tenuous and misplaced.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open-minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert did not see the case as having “ruined his life.” Indeed, Jason seemed to approve the basic principle of the Daubert case, and the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”

[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2] Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.