TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Judicial Dodgers – Weight not Admissibility

May 28th, 2020

Another vacuous response to a methodological challenge under Rule 702 is to label the challenge as “going to the weight, not the admissibility” of the challenged expert witness’s testimony. Of course, a challenge may be solely focused upon the expert witness’s credibility, such as when an expert witness testifies on many occasions only for one side in similar disputes, or for one whose political commitments render him unable to acknowledge the bona fides of any studies conducted by the adversarial parties.[1] If, however, the Rule 702 challenge stated an objection to the witness’s methodology, then the objection would count against both the opinion’s weight and its admissibility. The judicial “weight not admissibility” label conveys the denial of the challenge, but it hardly explains how and why the challenge failed under Rule 702. Applying such a label without addressing the elements of Rule 702, and how the challenged expert witness satisfied those elements, is often nothing less than a failure of judging.

The Flawed Application of a Generally Accepted Methodology

If a meretricious expert witness, by pretense or ignorance, invokes a standard methodology but applies it in a flawed, distorted, or invalid way, then there will be a clear break in the chain of inferences from data to conclusion. The clear language of Rule 702 should render such an expert witness’s conclusion inadmissible. Some courts, however, retreat into a high level of generality about the method used rather than inspecting the method as applied. For example, a court might look at an expert witness’s opinion and correctly find that it relied upon epidemiology, and that epidemiology is a generally accepted discipline concerned with identifying causes. The specific detail of the challenge, however, may have shown that the witness relied upon a study that was thoroughly flawed,[2] or upon an epidemiologic study of a type that cannot support a causal inference.[3]

Rule 702 and the Supreme Court’s decision in Joiner make clear that the trial court must evaluate the expert witness’s application of methodology and whether it actually supports valid inferences leading to the witness’s claims and conclusions.[4] And yet, lower courts continue to characterize the gatekeeping process as “hands off” the application of methodology and conclusions:

“Where the court has determined that plaintiffs have met their burden of showing that the methodology is reliable, the expert’s application of the methodology and his or her conclusions are issues of credibility for the jury.”[5]

This rejection of the clear demands of a statute has infected even the intermediate appellate courts, the United States Courts of Appeals. In a muddled application of Rule 702, the Third Circuit approved admitting expert witness testimony, explaining that “because [the objecting party / plaintiff] objected to the application rather than the legitimacy of [the expert’s] methodology, such objections were more appropriately addressed on cross-examination and no Daubert hearing was required.”[6] Such a ruling in the Third Circuit is especially jarring because it violates not only the clear language of Rule 702, but also established precedent within the Circuit that holds that “any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[7]

The Eighth Circuit seems to have set itself stridently against the law by distinguishing between scientific methodologies and their applications, and holding that “when the application of a scientific methodology is challenged as unreliable under Daubert and the methodology itself is otherwise sufficiently reliable, outright exclusion of the evidence in question is warranted only if the methodology was so altered by a deficient application as to skew the methodology itself.”[8]

The Ninth Circuit similarly has followed this dubious distinction between methodology in the abstract and methodology as applied. In City of Pomona, the Circuit addressed the admissibility of an expert witness whose testing deviated from protocols. Relying upon pre-2000 Ninth Circuit case law, decided before the statutory language of Rule 702 was adopted, the court found that:

“expert evidence is inadmissible where the analysis is the result of a faulty methodology or theory as opposed to imperfect execution of laboratory techniques whose theoretical foundation is sufficiently accepted in the scientific community to pass muster under Daubert.”[9]

The Eleventh Circuit has similarly disregarded Rule 702 by adverting to an improvised distinction between validity of methodology and flawed application of methodology.[10]

Cherry Picking and Inadequate Bases

Most of the United States Courts of Appeals have contributed to the mistaken belief that “[a]s a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility.”[11] Clearly, such questions can undermine the admissibility of an expert witness’s opinion under Rule 702, and courts need to say why they have found the challenged opinion to have had a “sufficient basis.” For example, in the notorious Milward case, the First Circuit, citing legally invalid pre-Daubert decisions, stated that “when the factual underpinning of an expert’s opinion is weak it is a matter affecting the weight and credibility of the testimony − a question to be resolved by the jury.”[12]

After Milward, the Eighth Circuit followed suit in a hormone replacement therapy case. The district court had excluded an expert witness who ignored contrary studies, but the Court of Appeals found an abuse of discretion, holding that the sufficiency of an expert’s basis is a question of weight, not admissibility.[13]

These rulings elevate form over substance by halting the gatekeeping inquiry at an irrelevant, high level of abstraction, and finding that the challenged expert witness was doing something “sciencey,” which is good enough for government work. The courts involved evaded their gatekeeping duties and ignored the undue selectivity in reliance materials and the inadequacy and insufficiency of the challenged expert witness’s factual predicate. The question is not whether expert witnesses relied upon “scientific studies,” but whether their causal conclusions and claims are well supported, under scientific standards, by the studies upon which they relied.

Like the covert shifting of the burden of proof, or the glib assessment that the loser can still cross-examine in front of the jury,[14] the rulings discussed represent another way that judges kick the can on Rule 702 motions. Despite the confusing verbiage, these judicial rulings are a serious deviation from the text of Rule 702, as well as the Advisory Committee Note to the 2000 Amendments, which embraced the standard articulated in In re Paoli, that

“any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[15]

On a positive note, some courts have recognized that the conclusory assessment that a challenge goes to weight, not admissibility, is a delegation of the court’s gatekeeping duty to the jury.[16]

In 2018, Professor Daniel Capra, the Reporter to the Rules Committee, addressed the “weight not admissibility dodge” at length in his memorandum to the Rules Committee:

“Rule 702 clearly states that these are questions of admissibility, but many courts treat them as questions of weight. The issue for the Committee is whether something/anything can be done about these wayward decisions.”[17]

The Reporter charitably noted that the problem could lie in the infelicitous expression of some courts that short-circuit their analyses by saying “I see the problems, but they go to the weight of the evidence.”[18] Perhaps these courts meant to say that they had found that the proponent of the challenged expert witness testimony had shown admissibility by a preponderance, and that whatever non-disqualifying problems remained should be taken up on cross-examination.[19] The principle of charity, however, cannot exonerate federal judges who resort to this dodge repeatedly in the face of clear statutory language. Indeed, the Reporter reaffirmed the Rules Committee’s substantive judgment that questions of sufficient basis and reliable application of methodology are admissibility issues:[20]

“It is hard to see how expert testimony is reliable if the expert has not done sufficient investigation, or has cherry-picked the data, or has misapplied the methodology. The same ‘white lab coat’ problem − that the jury will not be able to figure out the expert’s missteps − would seem to apply equally to basis, methodology and application.”

Although the Reporter opined that some authors may have overstated judicial waywardness, he found the judicial disregard of the requirements of Rule 702(b) and (d) incontrovertible.[21]

Professor Capra restated his conclusions a year later, in 2019, when he characterized broad statements such as “challenges to the sufficiency of an expert’s basis raise questions of weight and not admissibility” as “misstatement[s] made by circuit courts in a disturbing number of cases… .”[22] Factual insufficiency and unreliable application of methodology are, of course, also credibility and ethical considerations, but they are the fact finders’ concern only after the proponent has shown admissibility by a preponderance of the evidence. Principled adjudication requires judges to say what they mean and mean what they say.


[1]  See also Cruz-Vazquez v. Mennonite Gen. Hosp. Inc., 613 F.3d 54 (1st Cir. 2010) (reversing exclusion of an expert witness who was biased in favor of plaintiffs in medical cases and who was generally affiliated with plaintiffs’ lawyers; such issues of personal bias are for the jury in assessing the weight of the expert witness’s testimony). Another example would be those expert witnesses whose commitment to Marxist ideology is such that they reject any evidence proffered by manufacturing industry as inherently corrupt, while embracing any evidence proffered by labor or the lawsuit industry without critical scrutiny.

[2]  In re Phenylpropanolamine (PPA) Prods. Liab. Litig., MDL No. 1407, 289 F. Supp. 2d 1230 (W.D. Wash. 2003) (Yale Hemorrhagic Stroke Project).

[3]  Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants next claim that Dr. Clapp’s study and the conclusions he drew from it are unreliable because they failed to comply with four factors or criteria for drawing causal inferences from epidemiological studies: accounting for known confounders … .”), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___, 133 S.Ct. 22 (2012). For another example of a trial court refusing to see through important qualitative differences between and among epidemiologic studies, see In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, *33 (N.D. Ohio 2006) (reducing all studies to one level, and treating all criticisms as though they rendered all studies invalid).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997).

[5]  Procter & Gamble Co. v. Haugen, 2007 WL 709298, at *2 (D. Utah 2007); see also United States v. McCluskey, 954 F.Supp.2d 1227, 1247-48 (D.N.M. 2013) (“the trial judge decides the scientific validity of underlying principles and methodology” and “once that validity is demonstrated, other reliability issues go to the weight − not the admissibility − of the evidence”); Murphy-Sims v. Owners Ins. Co., No. 16-CV-0759-CMA-MLC, 2018 WL 8838811, at *7 (D. Colo. Feb. 27, 2018) (“Concerns surrounding the proper application of the methodology typically go to the weight and not admissibility[.]”).

[6]  Walker v. Gordon, 46 F. App’x 691, 696 (3d Cir. 2002).

[7]  In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994).

[8]  United States v. Gipson, 383 F.3d 689, 696 (8th Cir. 2004) (relying upon pre-2000 authority for this proposition).

[9]  City of Pomona v. SQM N. Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014).

[10]  Quiet Tech. DC-8, Inc. v. Hurel-Dubois UK Ltd., 326 F.3d 1333, 1343 (11th Cir. 2003).

[11]  Puga v. RCX Sols., Inc., 922 F.3d 285, 294 (5th Cir. 2019). See also United States v. Hodge, 933 F.3d 468, 478 (5th Cir. 2019)(“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility and should be left for the jury’s consideration.”); MCI Communications Service Inc. v. KC Trucking & Equip. LLC, 403 F. Supp. 3d 548, 556 (W.D. La. 2019); Coleman v. United States, No. SA-16-CA-00817-DAE, 2017 WL 9360840, at *4 (W.D. Tex. Aug. 16, 2017); Alvarez v. State Farm Lloyds, No. SA-18-CV-01191-XR, 2020 WL 734482, at *3 (W.D. Tex. Feb. 13, 2020)(“To the extent State Farm wishes to attack the ‘bases and sources’ of Dr. Hall’s opinion, such questions affect the weight to be assigned to that opinion rather than its admissibility and should also be left for the jury’s consideration.”)(internal quotation and citation omitted); Patenaude v. Dick’s Sporting Goods, Inc., No. 9:18-CV-3151-RMG, 2019 WL 5288077, at *2 (D.S.C. Oct. 18, 2019) (“More fundamentally, each of these arguments goes to the factual basis of the report, … and it is well settled that the factual basis for an expert opinion generally goes to weight, not admissibility.”); Wischermann Partners, Inc. v. Nashville Hosp. Capital LLC, No. 3:17-CV-00849, 2019 WL 3802121, at *3 (M.D. Tenn. Aug. 13, 2019) (“[A]rguments that Pinkowski’s opinions are unreliable because he failed to review other relevant information and ignored certain facts bear on the factual basis for Pinkowski’s opinions, and, therefore, go to the weight, rather than the admissibility, of Pinkowski’s testimony.”).

[12]  Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 22 (1st Cir. 2011) (internal citations omitted), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[13]  Kuhn v. Wyeth, Inc., 686 F.3d 618, 633 (8th Cir. 2012), rev’g Beylin v. Wyeth, 738 F.Supp. 2d 887, 892 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (excluding proffered testimony of Dr. Jasenka Demirovic, who appeared to have “selected study data that best supported her opinion, while downplaying contrary findings or conclusions.”); see United States v. Finch, 630 F.3d 1057 (8th Cir. 2011) (the sufficiency of the factual basis for an expert’s testimony goes to credibility rather than admissibility, and only where the testimony “is so fundamentally unsupported that it can offer no assistance to the jury must such testimony be excluded”); Katzenmeier v. Blackpowder Prods., Inc., 628 F.3d 948, 952 (8th Cir. 2010) (“As a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”); Paul Beverage Co. v. American Bottling Co., No. 4:17CV2672 JCH, 2019 WL 1044057, at *2 (E.D. Mo. Mar. 5, 2019) (admitting challenged opinion testimony without addressing the expert witness’s basis or application of methodology, following Eighth Circuit’s incorrect statement in Nebraska Plastics, Inc. v. Holland Colors Americas, Inc., 408 F.3d 410, 416 (8th Cir. 2005) that “[a]s a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination[,]”). See also “The Fallacy of Cherry Picking As Seen in American Courtrooms” (May 3, 2014).

[14]  See “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702” (May 13, 2020); “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[15]  Fed. R. Evid. 702, Advisory Note (quoting In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994)).

[16]  See Nease v. Ford Motor Co., 848 F.3d 219, 231 (4th Cir. 2017) (“For the district court to conclude that Ford’s reliability arguments simply ‘go to the weight the jury should afford Mr. Sero’s testimony’ is to delegate the court’s gatekeeping responsibility to the jury.”).

[17]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702, at 1-2 (Apr. 1, 2018).

[18]  Id. at 43.

[19]  Id. at 43, 49-50.

[20]  Id. at 49-50.

[21]  Id. at 52.

[22]  Daniel J. Capra, Reporter, Reporter’s Memorandum re Possible Amendments to Rule 702, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019).

Judicial Dodgers – Reassigning the Burden of Proof on Rule 702

May 13th, 2020

Explaining the denial of a Rule 702 motion in terms of the availability of cross-examination is just one among several dodges that judges use to avoid fully engaging with Rule 702’s requirements.[1] Another dodge involves shifting the burden of proof on admissibility from the proponent of the challenged expert witness to the challenger. This dodge would appear to violate well-established law.

The Supreme Court, in deciding Daubert, made clear that the question whether an expert witness’s opinion was admissible was governed under the procedure set out in Federal Rule of Evidence 104(a).[2] The significance of placing the Rule 702 issues under the procedures set out in Rule 104(a) is that the trial judge must make the admissibility determination, and that he or she is not bound by the rules of evidence. The exclusion of the admissibility determination from the other rules of evidence means that trial judges can look at challenged expert witnesses’ relied-upon materials, and other facts, data, and opinions, regardless of these materials’ admissibility. The Supreme Court also made clear that the admissibility of an expert witness’s opinion testimony should be shown “by a preponderance of proof.”[3] Every court that has directly addressed the burden of proof issue in a Rule 702 challenge to expert witness testimony has clearly assigned that burden to the proponent of the testimony.[4]

Trial courts intent upon evading gatekeeping responsibility, however, have created a presumption of admissibility. When called upon to explain why they have denied Rule 702 challenges, these courts advert to the presumption as an explanation and justification for the denial.[5] Some courts even manage to discuss the burden of proof upon the proponent, and a presumption of admissibility, in almost the same breath.[6]

In his policy brief for amending Rule 702, Lee Mickus traces the presumption innovation to Borawick v. Shay, a 1995 Second Circuit decision that involved a challenge to hypnotically refreshed (or created) memory.[7] In Borawick, the Court of Appeals held that the plaintiff’s challenge turned upon whether Borawick’s testimony was competent or admissible, and that it did not involve “the admissibility of data derived from scientific techniques or expert opinions.”[8] Nevertheless, in dicta, the court observed that “by loosening the strictures on scientific evidence set by Frye, Daubert reinforces the idea that there should be a presumption of admissibility of evidence.”[9]

Presumptions come in different forms and operate differently, and this casual reference to a presumption in dictum could mean any number of things. A presumption of admissibility could mean simply that unless there is a challenge to an expert witness’s opinion, the opinion is admissible.[10] The presumption could be a bursting-bubble (Thayer) presumption, which disappears once the opponent of the evidence credibly raises questions about the evidence’s admissibility. The presumption might be something that does not disappear, but once the admissibility is challenged, the presumption continues to provide some evidence for the proponent. And in the most extreme forms, the (Morgan) presumption might be nothing less than a judicially artful way of saying that the burden of proof is shifted to the opponent of the evidence to show inadmissibility.[11]

Although Borawick suggested that there should be a presumption, it did not exactly hold that one existed. A presumption in favor of the admissibility of evidence raises many questions about the nature, definition, and operation of the presumption. It throws open the question of what evidence is needed to rebut the presumption. For instance, may a party whose expert witness is challenged decline to defend the witness’s compliance with Rule 702, stand on the presumption, and still prevail?

There is no mention of a presumption in Rule 702 itself, or in any Supreme Court decision on Rule 702, or in the advisory committee notes. Inventing a presumption, especially a poorly described one, turns the judicial discretion to grant or deny a Rule 702 challenge into an arbitrary decision.

Most importantly, given the ambiguity of “presumption,” a judicial opinion that denies a Rule 702 challenge by invoking a legal fiction fails to answer the question whether the proponent of the expert witness has carried the burden of showing that all the subparts of Rule 702 were satisfied by a preponderance of the evidence. While judges may prefer not to endorse or disavow the methodology of an otherwise “qualified” expert witness, their office requires them to do so, and not hide behind fictional presumptions.


[1]  “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[2]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 n.10 (1993).

[3]  Id., citing Bourjaily v. United States, 483 U. S. 171, 175-176 (1987).

[4]  Barrett v. Rhodia, Inc., 606 F.3d 975, 980 (8th Cir. 2010) (quoting Marmo v. Tyson Fresh Meats, Inc., 457 F.3d 748, 757 (8th Cir. 2006)); Beylin v. Wyeth, 738 F. Supp. 2d 887 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.); Pride v. BIC Corp., 218 F.3d 566, 578 (6th Cir. 2000); Reece v. Astrazeneca Pharms., LP, 500 F. Supp. 2d 736, 742 (S.D. Ohio 2007).

[5]  See, e.g., Cates v. Trustees of Columbia Univ. in City of New York, No. 16CIV6524GBDSDA, 2020 WL 1528124, at *6 (S.D.N.Y. Mar. 30, 2020) (discussing presumptive admissibility); Price v. General Motors, LLC, No. CIV-17-156-R, 2018 WL 8333415, at *1 (W.D. Okla. Oct. 3, 2018) (“[T]here is a presumption under the Rules that expert testimony is admissible.”) (internal citation omitted); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“The Second Circuit has made clear that Daubert contemplates liberal admissibility standards, and reinforces the idea that there should be a presumption of admissibility of evidence.”); Advanced Fiber Technologies (AFT) Trust v. J & L Fiber Services, Inc., No. 1:07-CV-1191, 2015 WL 1472015, at *20 (N.D.N.Y. Mar. 31, 2015) (“In assuming this [gatekeeper] role, the Court applies a presumption of admissibility.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *2 (S.D.N.Y. Jan. 22, 2015) (“[T]he court should apply ‘a presumption of admissibility’ of evidence” in carrying out the gatekeeper function.); Martinez v. Porta, 598 F. Supp. 2d 807, 812 (N.D. Tex. 2009) (“Expert testimony is presumed admissible”).

[6]  S.E.C. v. Yorkville Advisors, LLC, 305 F. Supp. 3d 486, 503-04 (S.D.N.Y. 2018) (“The party seeking to introduce the expert testimony bears the burden of establishing by a preponderance of the evidence that the proffered testimony is admissible. There is a presumption that expert testimony is admissible … .”) (internal citations omitted).

[7]  Borawick v. Shay, 68 F.3d 597, 610 (2d Cir. 1995), cert. denied, 517 U.S. 1229 (1996).

[8]  Id.

[9]  Id. (referring to Frye v. United States, 293 F. 1013 (D.C. Cir. 1923)).

[10]  In re Zyprexa Prod. Liab. Litig., 489 F. Supp. 2d 230, 282 (E.D.N.Y. 2007) (Weinstein, J.) (“Since Rule 702 embodies a liberal standard of admissibility for expert opinions, the assumption the court starts with is that a well-qualified expert’s testimony is admissible.”).

[11]  See, e.g., Orion Drilling Co., LLC v. EQT Prod. Co., No. CV 16-1516, 2019 WL 4273861, at *34 (W.D. Pa. Sept. 10, 2019) (after declaring that “[e]xclusion is disfavored” under Rule 702, the court flipped the burden of production and declared the opinion testimony admissible, stating “Orion has not established that incorporation of the data renders Ray’s opinion unreliable.”).

Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions

May 11th, 2020

In my last post,[1] I praised Lee Mickus’s recent policy paper on amending Rule 702 for its persuasive force on the need for an amendment, as well as a source for helping lawyers anticipate common judicial dodges to a faithful application of the rule.[2] There are multiple dodges used by judicial dodgers, and it behooves litigants to recognize and anticipate them. In this post, and perhaps future ones, I elaborate upon the concerns that Mickus documents.

One prevalent judicial response to a Rule 702 motion is to kick the can and announce that the challenge to an expert witness’s methodological shenanigans can and should be addressed by cross-examination. This judicial response was, of course, the standard one before the 1993 Daubert decision, but Justice Blackmun’s opinion kept it alive in frequently quoted dicta:

“Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.”[3]

Justice Blackmun, no doubt, believed he was offering a “helpful” observation here, but the reality is quite different. Traditionally, courts allowed qualified expert witnesses to opine with wild abandon, after showing that they had the very minimal qualifications required to do so in court. In the face of this traditional judicial lassitude, “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof” were all a litigant could hope to accomplish in litigation. Furthermore, the litany of remedies for “shaky but admissible evidence” fails to help lower court judges and lawyers sort shaky but admissible evidence from shaky and inadmissible evidence.

Perhaps even more to the point, cases at common law “traditionally” did not involve multivariate logistic regression, structural equation models, propensity score weighting, and the like. Juries did just fine on whether Farmer Brown had exercised due care when he ran over his neighbor’s cow with his tractor, or even on whether a child born 350 days after the putative father’s death had been sired by the testator and was entitled to inherit from “dad.”

Mickus is correct that a trial judge’s comment that the loser of a Rule 702 motion is free to cross-examine is often a dodge, an evasion, or an outright failure to engage with the intricacies of a complex methodological challenge.[4] Stating that the “traditional and appropriate means of attacking shaky but admissible evidence” remain available is a truism, and might be offered as judicial balm to the motion loser, but the availability of such means is hardly an explanation or justification for denying the Rule 702 motion. Furthermore, Justice Blackmun’s observation about traditional means was looking back at an era when, in most state and federal courts, a person found to be minimally qualified could pretty much say anything regardless of scientific validity. That was the tradition that stood in active need of reform when Daubert was decided in 1993.

Mickus is also certainly correct that the whole point of judicial gatekeeping is that the presentation of viva voce testimony before juries is not an effective method for revealing shaky, inadmissible opinion testimony. A few courts have acknowledged that cross-examination in front of a jury is not an appropriate justification for admitting methodologically infirm expert witness opinion testimony. Judge Jed Rakoff, who served on the President’s Council of Advisors on Science and Technology,[5] addressed the limited ability of cross-examination in the context of forensic evidence:

“Although effective cross-examination may mitigate some of these dangers, the explicit premise of Daubert and Kumho Tire is that, when it comes to expert testimony, cross-examination is inherently handicapped by the jury’s own lack of background knowledge, so that the Court must play a greater role, not only in excluding unreliable testimony, but also in alerting the jury to the limitations of what is presented.”[6]

Judge Rakoff’s point is by no means limited to forensic evidence, and it has been acknowledged more generally by Professor Daniel Capra, the Reporter to the Advisory Committee on Evidence Rules:

“the key to Daubert is that cross-examination alone is ineffective in revealing nuanced defects in expert opinion testimony and that the trial judge must act as a gatekeeper to ensure that unreliable opinions don’t get to the jury in the first place.”[7]

Juries do not arrive at the courthouse knowledgeable about statistical and scientific methods; nor are they prepared to spend weeks going over studies to assess their quality, or to determine whether an expert witness cherry picked data, misapplied a methodology, or conducted an insufficient investigation.[8] In discussing the problem of expert witnesses’ overstating the strength of their opinions, beyond what is supported by evidence, the Reporter stressed the limits and ineffectiveness of remedial adversarial cross-examination:

“Perhaps another way to think about cross-examination as a remedy is to compare the overstatement issue to the issues of sufficiency of basis, reliability of methodology, and reliable application of that methodology. As we know, those three factors must be shown by a preponderance of the evidence. The whole point of Rule 702 — and the Daubert-Rule 104(a) gatekeeping function — is that these issues cannot be left to cross-examination. The underpinning of Daubert is that an expert’s opinion could be unreliable and the jury could not figure that out, even given cross-examination and argument, because the jurors are deferent to a qualified expert (i.e., the white lab coat effect). The premise is that cross-examination cannot undo the damage that has been done by the expert who has power over the jury. This is because, for the very reason that an expert is needed (because lay jurors need assistance) the jury may well be unable to figure out whether the expert is providing real information or junk. The real question, then, is whether the dangers of overstatement are any different from the dangers of insufficient basis, unreliability of methodology, and unreliable application. Why would cross-examination be insufficient for the latter yet sufficient for the former?

It is hard to see any difference between the risk of overstatement and the other risks that are regulated by Rule 702. When an expert says that they are certain of a result — when they cannot be — how is that easier for the jury to figure out than if an expert says something like ‘I relied on four scientifically valid studies concluding that PCB’s cause small lung cancer’. When an expert says he employed a ‘scientific methodology’ when that is not so, how is that different from an expert saying “I employed a reliable methodology” when that is not so?”[9]

The Reporter’s example of PCBs and small lung cancer was an obvious reference to the Joiner case, in which the Supreme Court held that the trial judge had properly excluded causation opinions. The Reporter’s point cuts directly against the cross-examination excuse for shirking the gatekeeping function. In Joiner, the Court held that gatekeeping was necessary precisely because cross-examination was insufficient in the face of an analytical gap between methodology and conclusion.[10] Indeed, such gaps are or should be present in most well-conceived Rule 702 challenges.

The problem is not only that juries defer to expert witnesses. Juries lack the competence to assess scientific validity. Although many judges are lacking in such competence, at least litigants can expect them to read the Reference Manual on Scientific Evidence before they read the parties’ briefs and the expert witnesses’ reports. If the trial judge’s opinion evidences ignorance of the Manual, then at least there is the possibility of an appeal. It will be a strange day in a stranger world, when a jury summons arrives in the mail with a copy of the Manual!

The rules of evidence permit expert witnesses to rely upon inadmissible evidence, at least when experts in their field would reasonably do so. To decide whether the reliance is reasonable requires that the decision maker go outside the “proofs” that would typically be offered at trial. Furthermore, the decision maker – the gatekeeper – will have to read the relied-upon study and data to evaluate the reasonableness of the reliance. In a jury trial, the actual studies relied upon are rarely admissible, and so the jury almost never has the opportunity to read them to make its own determination of the reasonableness of reliance, or of whether the study and its data really support what the expert witness draws from them.

Of course, juries do not have to write opinions about their findings. They need neither explain nor justify their verdicts, once the trial court has deemed that there is the minimally sufficient evidence to support a verdict. Juries, with whatever help cross-examination provides, in the absence of gatekeeping, cannot deliver anything approaching scientific due process of law.

Despite Supreme Court holdings, a substantially revised and amended Rule 702, and clear direction from the Advisory Committee, some lower courts have actively resisted enforcing the requirements of Rule 702. Part of this resistance consists in pushing the assessment of the reliability of the data and assumptions used in applying a given methodology out of the gatekeeping column and into the jury’s column. Despite the clear language of Rule 702, and the Advisory Committee Note,[11] some Circuits of the Courts of Appeals have declared that assessing the reliability of assumptions and data is not judges’ work (outside of a bench trial).[12]

As Seinfeld has taught us, rules are like reservations. It is not enough to make the rules; you have to keep and follow them. Indeed, following the rule is really the important part.[13] Although an amended Rule 702 might include a provision that “we really mean this,” perhaps it is worth a stop at the Supreme Court first to put down the resistance.


[1]  “Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[2]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

[3]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 596 (1993).

[4]  See, e.g., AmGuard Ins. Co. v. Lone Star Legal Aid, No. CV H-18-2139, 2020 WL 60247, at *8 (S.D. Tex. Jan. 6, 2020) (“[O]bjections [that the expert could not link her experienced-based methodology to her conclusions] are better left for cross examination, not a basis for exclusion.”); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“To the extent Defendant argues that Mr. McPartland’s conclusions are unreliable, it may attack his report through cross examination.”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006) (“In a close case, a court should permit the testimony to be presented at trial, where it can be tested by cross-examination and measured against the other evidence in the case.”) (internal citation omitted). See also Adams v. Toyota Motor Corp., 867 F.3d 903, 916 (8th Cir. 2017) (affirming admission of expert testimony, reiterating the flexibility of the Daubert inquiry and emphasizing that defendant’s concerns could all be addressed with “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof”); Liquid Dynamics Corp. v. Vaughan Corp., 449 F.3d 1209, 1221 (Fed. Cir. 2006) (“The identification of such flaws in generally reliable scientific evidence is precisely the role of cross-examination.” (internal citation omitted)); Carmichael v. Verso Paper, LLC, 679 F. Supp. 2d 109, 119 (D. Me. 2010) (“[W]hen the adequacy of the foundation for the expert testimony is at issue, the law favors vigorous cross-examination over exclusion.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *6 (S.D.N.Y. Jan. 22, 2015) (“In light of the ‘presumption of admissibility of evidence,’ that opportunity [for cross-examination] is sufficient to ensure that the jury receives testimony that is both relevant and reliable.”) (internal citation omitted).

Even the most explicitly methodological challenges are transmuted into cross-examination issues by refusenik courts. For instance, cherry picking is reduced to a credibility issue for the jury and deemed not germane to the court’s Rule 702 determination. In re Chantix Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1288 (N.D. Ala. 2012) (finding that an expert witness’s deliberate decision not to rely upon clinical trial data merely “is a matter for cross-examination, not exclusion under Daubert”); In re Urethane Antitrust Litig., 2012 WL 6681783, at *3 (D. Kan.) (“The extent to which [an expert] considered the entirety of the evidence in the case is a matter for cross-examination.”); Bouchard v. Am. Home Prods. Corp., 2002 WL 32597992, at *7 (N.D. Ohio) (“If the plaintiff believes that the expert ignored evidence that would have required him to substantially change his opinion, that is a fit subject for cross-examination.”). Similarly, courts have by ipse dixit made the flawed application of a standard methodology into merely a credibility issue to be explored by cross-examination rather than addressed by judicial gatekeeping. United States v. Adam Bros. Farming, 2005 WL 5957827, at *5 (C.D. Cal. 2005) (“Defendants’ objections are to the accuracy of the expert’s application of the methodology, not the methodology itself, and as such are properly reserved for cross-examination.”); Oshana v. Coca-Cola Co., 2005 WL 1661999, at *4 (N.D. Ill.) (“Challenges addressing flaws in an expert’s application of reliable methodology may be raised on cross-examination.”).

[5]  President’s Council of Advisors on Science and Technology, Report to the President on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (Sept. 2016).

[6]  United States v. Glynn, 578 F. Supp. 2d 567, 574 (S.D.N.Y. 2008) (Rakoff, J.).

[7]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019) (comments of the Reporter).

[8]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 50 (April 1, 2018) (identifying issues such as insufficient investigation, cherry-picking data, or misapplying standard methodologies, as examples of a “white lab coat” problem resulting from juries’ inability to evaluate expert witnesses’ factual bases, methodologies, and applications of methods).

[9]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 10-11 (Oct. 1, 2019) (comments of the Reporter on possible amendment of Rule 702) (internal citation to Joiner omitted).

[10]  Id. at 11 n.5.

[11]  See In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994) (calling for a close, careful analysis of the application of a proper methodology to every step in the case; “any step that renders the analysis unreliable renders the expert’s testimony inadmissible whether the step completely changes a reliable methodology or merely misapplies that methodology”).

[12]  See, e.g., City of Pomona v. SQM North Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014) (rejecting the Paoli any-step approach without careful analysis of the statute, the advisory committee note, or Supreme Court decisions); Manpower, Inc. v. Ins. Co. of Pa., 732 F.3d 796, 808 (7th Cir. 2013) (“[t]he reliability of data and assumptions used in applying a methodology is tested by the adversarial process and determined by the jury; the court’s role is generally limited to assessing the reliability of the methodology – the framework – of the expert’s analysis”); Bonner v. ISP Techs., Inc., 259 F.3d 924, 929 (8th Cir. 2001) (“the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination”).

[13]  Despite the clarity of the revised Rule 702, and the intent to synthesize Daubert, Joiner, Kumho Tire, and Weisgram, some courts have insisted that nothing changed with the amended rule. See, e.g., Pappas v. Sony Elec., Inc., 136 F. Supp. 2d 413, 420 & n.11 (W.D. Pa. 2000) (opining that Rule 702 as amended did not change the application of Daubert within the Third Circuit) (“The Committee Notes to the amended Rule 702 cite and discuss several Court of Appeals decisions that have properly applied Daubert and its progeny. Among these decisions are numerous cases from the Third Circuit. See Committee Note to 2000 Amendments to Fed. R.Evid. 702. Accordingly, I conclude that amended Rule 702 does not effect a change in the application of Daubert in the Third Circuit.”). Of course, if nothing changed, then the courts that take this position should be able to square their decisions with the text of Rule 702, as amended in 2000.

Should Federal Rule of Evidence 702 Be Amended?

May 8th, 2020

Almost 27 years have passed since the United States Supreme Court issued its opinion in Daubert.[1] The holding was narrow. The Court reminded the Bar that Federal Rule of Evidence 702 was a statute, and that courts were thus bound to read it as a statute. The plain language of Rule 702 had been adopted by the Court in 1972, and then enacted by Congress, to be effective on July 1, 1975. Absent from the enacted Rule 702 was the “twilight zone” test articulated by a lower federal court in 1923.[2] In the Daubert case, the defense erroneously urged the application of the twilight zone test. In the post-modern way, the plaintiffs urged the application of no test.[3] The Court held simply that the twilight zone test had not been incorporated in the statutory language of Rule 702. Instead, the Court observed that the plain language of the statute imposed “helpfulness” and epistemic requirements for admitting expert witness opinion testimony.

It took another two Supreme Court decisions to flesh out the epistemic requirements for expert witnesses’ opinions,[4] and a third decision in which the Court told the Bench and Bar that the requirements of Rule 702 are “exacting.”[5] After the Supreme Court had added significantly to Rule 702’s helpfulness and knowledge requirements, the Advisory Committee revised the rule in 2000, to synthesize and incorporate these four Supreme Court decisions, and scholarly thinking about the patho-epistemology of expert witness opinion testimony. The Committee revised Rule 702 again in 2011, but only on “stylistic” issues, without any intent to add to or subtract from the 2000 rule.

Not all judges got the memo in 2000, or bothered to read and implement the revised Rule 702. At both the District Court and the Circuit levels, courts persisted, and continue to persist, in citing retrograde decisions that predate the 2000 amendment, and even predate the 1993 decision in Daubert. Even the Supreme Court, in a 2011 opinion that did not involve the interpretation of Rule 702, was misled by a Solicitor General’s amicus brief into citing one of the most anti-science, anti-method, post-modern, pre-Daubert, anything-goes decisions.[6] The judicial resistance to Rule 702 is well documented in many scholarly articles,[7] by the Reporter to the Advisory Committee,[8] and in the pages of this and other blogs.

In 2015, when evidence scholar David Bernstein argued that Rule 702 required amending,[9] I acknowledged the strength of his argument, but resisted because of what I perceived to be the danger of opening up the debate in Congress.[10] Professor Bernstein and lawyer Eric Lasker detailed and documented the many judicial dodges and evasions engaged in by many judges intent upon ignoring the clear requirements of Rule 702.

A paper published this week by the Washington Legal Foundation has updated and expanded the case for reform made by Professor Bernstein five years ago. In his advocacy paper, lawyer Lee Mickus has collated and analyzed some of the more recent dodges, which will depress the spirits of anyone who believes in evidence-based decision making.[11] My resistance to reform by amendment is waning. The meaning and intent of Rule 702 have been scarred over by precedent based upon judicial ipse dixit, and not upon Rule 702.

Mickus’s paper, like Professor Bernstein’s articles before it, makes a persuasive case for reform, but this new paper does not evaluate the vagaries of navigating an amendment through the Advisory Committee, the Supreme Court, and Congress. Even if the reader is not interested in the amendment process, the paper can be helpful to the advocate in anticipating dodgy rule denialism.


[1]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[2]  Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (1923).

[3]  See “The Advocates’ Errors in Daubert” (Dec. 28, 2018).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999).

[5]  Weisgram v. Marley Co., 528 U.S. 440, 455 (2000) (Ginsburg, J.) (unanimous decision).

[6]  Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27, 131 S.Ct. 1309, 1319 (2011) (citing Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262, 298 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986)). See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1”; “Part 2”; “Part 3”; “Part 4”; “Part 5”; and “Part 6”.

[7]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015); David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2014).

[8]  See Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 52 (April 1, 2018) (“[T]he fact remains that some courts are ignoring the requirements of Rule 702(b) and (d). That is frustrating.”).

[9]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015).

[10]  “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

[11]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

Dark Money, Scott Augustine, and Hot Air

April 11th, 2020

Fraud by the litigation industry takes many different forms. In the massive silicosis litigation unleashed in Mississippi and Texas in the early 2000s, plaintiffs’ lawyers colluded with physicians to concoct dubious diagnoses of silicosis. Fraudulent diagnoses of silicosis led to dismissals of thousands of cases, as well as the professional defrocking of some physician witnesses.[1] For those trying to keep up with the lawsuit industry’s publishing arm, discussion of the Great Silicosis Fraud is completely absent from David Michaels’ recent book, The Triumph of Doubt.[2] So too is any mention of the “dark money” that propelled the recently concluded Bair Hugger litigation.

Back in 2017, I wrote about the denial of a Rule 702 motion in the Bair Hugger litigation.[3] At the time, I viewed the trial court’s denial, on the facts of the case, to be a typical failure of gatekeeping.[4] Events in the Bair Hugger cases were only warming up in 2017.

After the court’s ruling, 3M took the first bellwether case to trial, and won a jury verdict on May 30, 2018. Perhaps this verdict encouraged the MDL trial judge to take 3M’s motion for reconsideration of the Rule 702 motion seriously. In July 2019, the MDL court granted 3M’s motion to exclude the opinion testimony of plaintiffs’ general causation and mechanism expert witnesses, Drs. Jarvis, Samet, Stonnington, and Elghobashi.[5] Without these witnesses, over 5,000 plaintiffs, who had been misled about the merits of their cases, were stranded and set up for dismissal. On August 2, 2019, the MDL cases were dismissed for want of evidentiary support on causation. On August 29, 2019, plaintiffs filed a joint notice of appeal to the Eighth Circuit.

The two Bair Hugger Rule 702 federal court decisions focused (or failed to focus) on scientific considerations. Most of the story of “dark money” and the manufacturing of science to support the litigation was suppressed in the Rule 702 motion practice, and in the federal jury trial. In her second Rule 702 reconsideration opinion, the MDL judge did mention undisclosed conflicts of interest by authors of the key studies relied upon by plaintiffs’ witnesses.[6]

To understand how the Bair Hugger litigation got started, and to obtain a full understanding of the nature of the scientific evidence, a disinterested observer will have to read the state court decisions. Defendant 3M moved to exclude plaintiffs’ causation expert witnesses, in its Minnesota state court cases, under the so-called Frye standard. The state judge excluded plaintiffs’ witnesses for advancing a novel scientific theory that lacked acceptance in the relevant scientific community. The Minnesota Court of Appeals affirmed, with a decision that talked rather more freely about the plaintiffs’ counsel’s dark money. In re 3M Bair Hugger Litig., 924 N.W.2d 16 (Minn. App. 2019) [cited as Bair Hugger].

As the Minnesota Court of Appeals explained, a forced-air warming device (FAWD) is a very important, useful device to keep patients’ body temperatures normal during surgery. The “Bair Hugger” is a FAWD, which was invented in 1987, by Dr. Scott Augustine, an anesthesiologist, who at the time was the chief executive officer of Augustine Medical, Inc. Bair Hugger at 19.

In the following 15 years, the Bair Hugger became the leading FAWD in the world. In 2002, the federal government notified Augustine that it was investigating him for Medicare fraud. Augustine resigned from the company that bore his name, and the company purged the taint by reorganizing as Arizant Healthcare Inc. (Arizant), which continued to make the Bair Hugger. In the following year, 2003, Augustine pleaded guilty to fraud and paid a $2 million fine. His sentence included a five-year ban from involvement in federal health-care programs.

During the years of his banishment, fraudfeasor Augustine developed a rival product and then embarked upon a global attack on the safety of his own earlier invention, the Bair Hugger. In the United Kingdom, his claim that the Bair Hugger increased the risk of surgical site infections was rejected by the UK National Institute for Health and Clinical Excellence. A German court enjoined Augustine from falsely claiming that the Bair Hugger led to increased bacterial contamination.[7] The United States FDA considered and rejected Augustine’s claims, and recommended the use of FAWDs.

In 2009, Augustine began to work as a non-testifying expert witness with the Houston, Texas, plaintiffs’ law firm of Kennedy Hodges LLP. A series of publications resulted in which the authors attempted to raise questions about the safety of the Bair Hugger. By 2013, with the medical literature “seeded” with several studies attacking the Bair Hugger, the Kennedy Hodges law firm began to manufacture lawsuits against Arizant and 3M (which had bought the Bair Hugger product line from Arizant in 2010). Bair Hugger at 20.

The seeding studies were marketing and litigation propaganda used by Augustine to encourage the all-too-complicit lawsuit industry to ramp up production of complaints against 3M over the Bair Hugger. Several of the plaintiffs’ studies included as an author a young statistician, Mark Albrecht, an employee of, or a contractor for, Augustine’s new companies, Augustine Temperature Management and Augustine Medical. Even when disclosures were made, they were at best “anemic”:

“The author or one or more of the authors have received or will receive benefits for personal or professional use from a commercial party related directly or indirectly to the subject of this article.”[8]

Some of these studies included a disclosure that Albrecht was funded or employed by Augustine, but they did not disclose the protracted, bitter feud or Augustine’s confessed fraudulent conduct. Another author of some of the plaintiffs’ studies was David Leaper, who was a highly paid “consultant” to Augustine at the time of the work on the studies. None of the studies disclosed Leaper’s consultancy for Augustine:

  1. Mark Albrecht, Robert Gauthier, and David Leaper, “Forced air warming, a source of airborne contamination in the operating room?” 1 Orthopedic Rev. (Pavia) e28 (2009)
  2. Mark Albrecht, Robert L. Gauthier, Kumar Belani, Mark Litchy, and David Leaper, “Forced-air warming blowers: An evaluation of filtration adequacy and airborne contamination emissions in the operating room,” 39 Am. J. Infection Control 321 (2011)
  3. P.D. McGovern, Mark Albrecht, Kumar Belani, C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537 (2011)
  4. K.B. Dasari, Mark Albrecht, and M. Harper, “Effect of forced-air warming on the performance of operating-theatre laminar-flow ventilation,” 67 Anaesthesia 244 (2012)
  5. Mike Reed, Oliver Kimberger, Paul D. McGovern, and Mark C. Albrecht, “Forced-Air Warming Design: Evaluation of Intake Filtration, Internal Microbial Buildup, and Airborne-Contamination Emissions,” 81 Am. Ass’n Nurse Anesthetists 275 (2013)
  6. Kumar Belani, Mark Albrecht, Paul McGovern, Mike Reed, and Christopher Nachtsheim, “Patient warming excess heat: the effects on orthopedic operating room ventilation performance,” 117 Anesthesia & Analgesia 406 (2013)

In one study, Augustine’s employee Mark Albrecht conducted the experiment with one of the authors, but was not listed as an author although he wrote an early draft of the study. Augustine provided all the equipment used in the experiment. The published paper failed to disclose any of these questionable activities:

  1. A.J. Legg & A.J. Hammer, “Forced-air patient warming blankets disrupt unidirectional flow,” 95 Bone & Joint J. 407 (2013)

Another study had more peripheral but still questionable involvement of Augustine, whose company lent the authors equipment used to conduct the study, without proper acknowledgment and disclosure:

  1. A.J. Legg, T. Cannon, and A. J. Hamer, “Do forced-air warming devices disrupt unidirectional downward airflow?” 94 J. Bone & Joint Surg. – British 254 (2012)

In addition to the defects in the authors’ disclosures, 3M discovered that two of the studies had investigated whether the Bair Hugger spread bacteria in the surgical area. Although the experiments found no spread with the Bair Hugger, the researchers never publicly disclosed their exculpatory evidence.[9]

Augustine’s marketing campaign, through these studies, ultimately fell flat at the FDA, which denied his citizen’s petition and recommended that surgeons continue to use FAWDs such as the Bair Hugger.[10] Augustine’s proxy litigation war against 3M also fizzled, unless the 8th Circuit revives his vendetta. Nonetheless, the Augustine saga raises serious questions about how litigation funding of “scientific studies” will vex the search for the truth in pharmaceutical products litigation. The Augustine attempt to pollute the medical literature was relatively apparent, but dark money from undisclosed financiers may require greater attention from litigants and from journal editors.


[1]  In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D. Tex. 2005).

[2]  David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]  In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017).

[4]  “Gatekeeping of Expert Witnesses Needs a Bair Hug” (Dec. 20, 2017).

[5]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., MDL No. 15-2666, 2019 WL 4394812 (D. Minn. July 31, 2019). See Joe G. Hollingsworth & Caroline Barker, “Exclusion of Junk Science in ‘Bair Hugger’ MDL Shows Daubert Is Still Breathing,” Wash. Leg. Foundation (Jan 23, 2020); Christine Kain, Patrick Reilly, Hannah Anderson and Isabelle Chammas, “Top 5 Drug And Medical Device Developments Of 2019,” Law360 (Jan. 9, 2020).

[6]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., 2019 WL 4394812, at *10 n.13 (D. Minn. July 31, 2019) (observing that “[i]n the published study, the authors originally declared no conflicts of interest”).

[7]  Dr. Augustine has never been a stranger to the judicial system. See, e.g., Augustine Medical, Inc. v. Gaymar Industries, Inc., 181 F.3d 1291 (Fed. Cir. 1999); Augustine Medical, Inc. v. Progressive Dynamics, Inc., 194 F.3d 1367 (Fed. Cir. 1999); Cincinnati Sub-Zero Products, Inc. v. Augustine Medical, Inc., 800 F. Supp. 1549 (S.D. Ohio 1992).

[8]  P.D. McGovern, Mark Albrecht, Kumar Belani, and C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537, 1544 (2011).

[9]  See https://www.truthaboutbairhugger.com/truth-science-behind-claims-3m-bair-hugger-system-look-augustine-connections-research-studies/.

[10]  William Maisel, “Information about the Use of Forced Air Thermal Regulating Systems – Letter to Health Care Providers”; Center for Devices and Radiological Health, U.S. Food and Drug Administration (Aug. 30, 2017).

Dodgy Data Duck Daubert Decisions

March 11th, 2020

Judges say the darndest things, especially when it comes to their gatekeeping responsibilities under Federal Rules of Evidence 702 and 703. One of the darndest things judges say is that they do not have to assess the quality of the data underlying an expert witness’s opinion.

Even when acknowledging their obligation to “assess the reasoning and methodology underlying the expert’s opinion, and determine whether it is both scientifically valid and applicable to a particular set of facts,”[1] judges have excused themselves from having to look at the trustworthiness of the underlying data for assessing the admissibility of an expert witness’s opinion.

In McCall v. Skyland Grain LLC, the defendant challenged an expert witness’s reliance upon oral reports of clients. The witness, Mr. Bradley Walker, asserted that he regularly relied upon such reports in contexts similar to the allegations that the defendant had misapplied herbicide to plaintiffs’ crops. The trial court ruled that the defendant could cross-examine the declarant, who was available at trial, and concluded that the “reliability of that underlying data can be challenged in that manner and goes to the weight to be afforded Mr. Walker’s conclusions, not their admissibility.”[2] Remarkably, the district court never evaluated the reasonableness of Mr. Walker’s reliance upon client reports in this or any context.

In another federal district court case, Rodgers v. Beechcraft Corporation, the trial judge explicitly acknowledged the responsibility to assess whether the expert witness’s opinion was based upon “sufficient facts and data,” but disclaimed any obligation to assess the quality of the underlying data.[3] The trial court in Rodgers cited a Tenth Circuit case from 2005,[4] which in turn cited the Supreme Court’s 1993 decision in Daubert, for the proposition that the admissibility review of an expert witness’s opinion was limited to a quantitative sufficiency analysis, and precluded a qualitative analysis of the underlying data’s reliability. Quoting from another district court criminal case, the court in Rodgers announced that “the Court does not examine whether the facts obtained by the witness are themselves reliable – whether the facts used are qualitatively reliable is a question of the weight to be given the opinion by the factfinder, not the admissibility of the opinion.”[5]

In a 2016 decision, United States v. DishNetwork LLC, the court explicitly disclaimed that it was required to “evaluate the quality of the underlying data or the quality of the expert’s conclusions.”[6] This district court pointed to a Seventh Circuit decision, which maintained that “[t]he soundness of the factual underpinnings of the expert’s analysis and the correctness of the expert’s conclusions based on that analysis are factual matters to be determined by the trier of fact, or, where appropriate, on summary judgment.”[7] The Seventh Circuit’s decision, however, issued in June 2000, several months before the effective date of the amendments to Federal Rule of Evidence 702 (December 2000).

In 2012, a magistrate judge issued an opinion along the same lines, in Bixby v. KBR, Inc.[8] After acknowledging what must be done in ruling on a challenge to an expert witness, the judge took joy in what could be overlooked. If the facts or data upon which the expert witness has relied are “minimally sufficient,” then the gatekeeper can conclude that questions about “the nature or quality of the underlying data bear upon the weight to which the opinion is entitled or to the credibility of the expert’s opinion, and do not bear upon the question of admissibility.”[9]

There need not be any common law mysticism to the governing standard. The relevant law is, of course, a statute, which appears to be forgotten in many of the failed gatekeeping decisions:

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

It would seem that you could not produce testimony that is the product of reliable principles and methods by starting with unreliable underlying facts and data. Certainly, a reliable method would require selecting reliable facts and data from which to start. What good is the reliable application of reliable principles to crummy data?

The Advisory Committee Note to Rule 702 hints at an answer to the problem:

“There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ‘reasonable reliance’ requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information—whether admissible information or not—is governed by the requirements of Rule 702.”

The answer is only partially satisfactory. First, if the underlying data are independently admissible, then there may indeed be no gatekeeping of an expert witness’s reliance upon such data. Rule 703 imposes a reasonableness test for reliance upon inadmissible underlying facts and data, but appears to give otherwise admissible facts and data a pass. Second, the above judicial decisions do not mention any Rule 703 challenge to the expert witnesses’ reliance. If that is so, then there is a clear lesson for counsel. When framing a challenge to the admissibility of an expert witness’s opinion, show that the witness has unreasonably relied upon facts and data, from whatever source, in violation of Rule 703. Then show that without the unreasonably relied upon facts and data, the witness cannot show that his or her opinion satisfies Rule 702(a)-(d).


[1]  See, e.g., McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order (D. Colo. June 22, 2010) (Brimmer, J.) (citing Dodge v. Cotter Corp., 328 F.3d 1212, 1221 (10th Cir. 2003), citing in turn Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993)).

[2]  McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order at p.9 n.6 (D. Colo. June 22, 2010) (Brimmer, J.).

[3]  Rodgers v. Beechcraft Corp., Case No. 15-CV-129-CVE-PJC, Report & Recommendation at p.6 (N.D. Okla. Nov. 29, 2016).

[4]  Id., citing United States v. Lauder, 409 F.3d 1254, 1264 (10th Cir. 2005) (“By its terms, the Daubert opinion applies only to the qualifications of an expert and the methodology or reasoning used to render an expert opinion” and “generally does not, however, regulate the underlying facts or data that an expert relies on when forming her opinion.”), citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993).

[5]  Id., citing and quoting United States v. Crabbe, 556 F. Supp. 2d 1217, 1223 (D. Colo. 2008) (emphasis in original). In Crabbe, the district judge mostly excluded the challenged expert witness, thus rendering its verbiage on the quality of data obiter dicta. The pronouncements about the nature of gatekeeping proved harmless error when the court dismissed the case on other grounds. Rodgers v. Beechcraft Corp., 248 F. Supp. 3d 1158 (N.D. Okla. 2017) (granting summary judgment).

[6]  United States v. DishNetwork LLC, No. 09-3073, Slip op. at 4-5 (C.D. Ill. Jan. 13, 2016) (Myerscough, J.).

[7]  Smith v. Ford Motor Co., 215 F.3d 713, 718 (7th Cir. 2000).

[8]  Bixby v. KBR, Inc., Case 3:09-cv-00632-PK, Slip op. at 6-7 (D. Ore. Aug. 29, 2012) (Papak, M.J.).

[9]  Id. (citing Hangarter v. Provident Life & Accident Ins. Co., 373 F.3d 998, 1017 (9th Cir. 2004), quoting Children’s Broad Corp. v. Walt Disney Co., 357 F.3d 860, 865 (8th Cir. 2004) (“The factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”).

Science Journalism – UnDark Noir

February 23rd, 2020

Critics of the National Association of Scholars’ conference on Fixing Science pointed readers to an article in Undark, an on-line popular science site for lay audiences, and they touted the site for its science journalism. My review of the particular article left me unimpressed and suspicious of Undark’s darker side. When I saw that the site featured an article on the history of the Supreme Court’s Daubert decision, I decided to give the site another try. For one thing, I am sympathetic to the task science journalists take on: it is important and difficult. In many ways, lawyers must commit to perform the same task. Sadly, most journalists and lawyers, with some notable exceptions, lack the scientific acumen and English communication skills to meet the needs of this task.

The Undark article that caught my attention was a history of the Daubert decision and the Bendectin litigation that gave rise to the Supreme Court case.[1] The author, Peter Andrey Smith, is a freelance reporter who often covers science issues. In his Undark piece, Smith covered the oft-told history of the Daubert case, a story told better and in more detail in many legal sources. Smith gets some credit for giving the correct pronunciation of the plaintiff’s name – “DAW-burt” – and for recounting how both sides declared victory after the Supreme Court’s ruling. The explanation Smith gives of the opinion by Associate Justice Harry Blackmun is reasonably accurate, and he correctly notes that a partial dissenting opinion by Chief Justice Rehnquist complained that the majority’s decision would have trial judges become “amateur scientists.” Nowhere in the article, however, will you find the counter to the dissent: an honest assessment of the institutional and individual competence of juries to decide complex scientific issues.

The author’s biases eventually become obvious, however. He recounts his interviews with Jason Daubert and his mother, Joyce Daubert. He earnestly reports how Joyce Daubert remembered having taken Bendectin during her pregnancy with Jason, and in the moment of that recall, “she felt she’d finally identified the teratogen that harmed Jason.” Really? Is that how teratogens are identified? Might it have been useful and relevant for a scientific journalist to explain that there are four million live births every year in the United States and that 3% of children born each year have major congenital malformations? And that most malformations have no known cause? Smith ingenuously relays that Jason Daubert had genetic testing, but omits that genetic testing in the early 1990s was fairly primitive and limited. In any event, how were any expert witnesses supposed to rule out the base-line risk of birth defects, especially given weak to non-existent epidemiologic support for the Dauberts’ claims? Smith does not answer these questions; he does not even acknowledge them.

Smith later quotes Joyce Daubert as describing the litigation she signed up for as “the hill I’ll die on. You only go to war when you think you can win.” Without comment or analysis, Smith gives Joyce Daubert an opportunity to rant against the “injustice” of how her lawsuit turned out. Smith tells us that the Dauberts found the “legal system remains profoundly disillusioning.” Joyce Daubert told Smith that “it makes me feel stupid that I was so naïve to think that, after we’d invested so much in the case, that we would get justice.”  When called for jury duty, she introduces herself as

“I’m Daubert of Daubert versus Merrell Dow … ; I don’t want to sit on this jury and pretend that I can pass judgment on somebody when there is no justice. Please allow me to be excused.”

But didn’t she really get all the justice she deserved? Given her zealotry, doesn’t she deserve to have her name on the decision that serves to rein in expert witnesses who outrun their scientific headlights? Smith is coy and does not say, but in presenting Mrs. Daubert’s rant, without presenting the other side, he is using his journalistic tools in a fairly blatant attempt to mislead. At this point, I begin to get the feeling that Smith is preaching to a like-minded choir over there at Undark.

The reader is not treated to any interviews with anyone from the company that made Bendectin, any of its scientists, or any of the scientists who published actual studies on whether Bendectin was associated with the particular birth defects Jason Daubert had, or for that matter, with any birth defects at all. The plaintiffs’ expert witnesses quoted and cited never published anything at all on the subject. The readers are left to their imagination about how the people who developed Bendectin felt about the litigation strategies and tactics of the lawsuit industry.

The journalistic ruse is continued with Smith’s treatment of the other actors in the Daubert passion play. Smith describes the Bendectin plaintiffs’ lawyer Barry Nace in hagiographic terms, but omits his bar disciplinary proceedings.[2] Smith tells us that Nace had an impressive background in chemistry, and quotes him in an interview in which he described the evidentiary rules on scientific witness testimony as “scientific evidence crap.”

Smith never describes the Dauberts’ actual affirmative evidence in any detail, which one might expect in a sophisticated journalistic outlet. Instead, he describes some of their expert witnesses, Shanna Swan, a reproductive epidemiologist, and Alan K. Done, “a former pediatrician from Wayne State University.” Smith is secretive about why Done was done in at Wayne State; and we learn nothing about the serious accusations of perjury on credentials by Done. Instead, Smith regales us with Done’s tsumish theory, which takes inconclusive bits of evidence, throws them together, and then declares causation that somehow eludes the rest of the scientific establishment.

Smith tells us that Swan was a rebuttal witness, who gave an opinion that the data did not rule out “the possibility Bendectin caused defects.” Legally and scientifically, Smith is derelict in failing to explain that the burden was on the party claiming causation, and that Swan’s efforts to manufacture doubt were beside the point. Merrell Dow did not have to rule out any possibility of causation; the plaintiffs had to establish causation. Nor does Smith delve into how Swan sought to reprise her performance in the silicone gel breast implant litigation, only to be booted by several judges as an expert witness. And then for a convincer, Smith sympathetically repeats plaintiffs’ lawyer Barry Nace’s hyperbolic claim that Bendectin manufacturer, Merrell Dow had been “financing scientific articles to get their way,” adding by way of emphasis, in his own voice:

“In some ways, here was the fake news of its time: If you lacked any compelling scientific support for your case, one way to undermine the credibility of your opponents was by calling their evidence ‘junk science’.”

Against Nace’s scatological Jackson Pollock approach, Smith is silent about another plaintiffs’ expert witness, William McBride, who was found guilty of scientific fraud.[3] Smith reports interviews of several well-known, well-respected evidence scholars. He dutifully reports Professor Edward Cheng’s view that “the courts were right to dismiss the [Bendectin] plaintiffs’ claims.” Smith quotes Professor D. Michael Risinger as saying that claims from both sides in the Bendectin cases were exaggerated, that the 1970s and 1980s saw an “unbridled expansion of self-anointed experts,” and that “causation in toxic torts had been allowed to become extremely lax.” So a critical reader might wonder why someone like Professor Cheng, who has a doctorate in statistics, a law degree from Harvard, and teaches at Vanderbilt Law School, would vindicate the manufacturers’ position in the Bendectin litigation. Smith never attempts to reconcile his interviews of the law professors with the emotive comments of Barry Nace and Joyce Daubert.

Smith acknowledges that a reformulated version of Bendectin, known as Diclegis, was approved by the Food and Drug Administration in the United States, in 2013, for treatment of nausea and vomiting during pregnancy. Smith tells us that Joyce “is not convinced the drug should be back on the market,” but really why would any reasonable person care about her view of the matter? The challenge by Nav Persaud, a Toronto physician, is cited, but Persaud’s challenge is to the claim of efficacy, not to the safety of the medication. Smith tells us that Jason Daubert “briefly mulled reopening his case when Diclegis, the updated version of Bendectin, was re-approved.” But how would the approval of Diclegis, on the strength of a full new drug application, somehow support his claim anew? And how would he “reopen” a claim that had been fully litigated in the 1990s, and was well past any statute of limitations?

Is this straight reporting? I think not. It is manipulative and misleading.

Smith notes, without attribution, that some scholars condemn litigation, such as the cases involving Bendectin, as an illegitimate form of regulation of medications. In opposition, he appears to rely upon Elizabeth Chamblee Burch, a professor at the University of Georgia School of Law for the view that because the initial pivotal clinical trials for regulatory approvals take place in limited populations, litigation “serves as a stopgap for identifying rare adverse outcomes that could crop up when several hundreds of millions of people are exposed to those products over longer periods of time.” The problem with this view is that Smith ignores the whole process of pharmacovigilance, post-registration trials, and pharmaco-epidemiologic studies conducted after the licensing of a new medication. The suggested necessity of reliance upon the litigation system as an adjunct to regulatory approval is at best misplaced and tenuous.

Smith correctly explains that the Daubert standard is still resisted in criminal cases, where it could much improve the gatekeeping of forensic expert witness opinion. But while the author gets his knickers in a knot over wrongful convictions, he seems quite indifferent to wrongful judgments in civil actions.

Perhaps the one positive aspect of this journalistic account of the Daubert case was that Jason Daubert, unlike his mother, was open-minded about his role in transforming the law of scientific evidence. According to Smith, Jason Daubert said the case had “not ruined his life.” Indeed, Jason seemed to approve the basic principle of the Daubert case, and the subsequent legislation that refined the admissibility standard: “Good science should be all that gets into the courts.”


[1] Peter Andrey Smith, “Where Science Enters the Courtroom, the Daubert Name Looms Large: Decades ago, two parents sued a drug company over their newborn’s deformity – and changed courtroom science forever,” Undark (Feb. 17, 2020).

[2]  Lawyer Disciplinary Board v. Nace, 753 S.E.2d 618, 621–22 (W. Va.) (per curiam), cert. denied, 134 S. Ct. 474 (2013).

[3] Neil Genzlinger, “William McBride, Who Warned About Thalidomide, Dies at 91,” N.Y. Times (July 15, 2018); Leigh Dayton, “Thalidomide hero found guilty of scientific fraud,” New Scientist (Feb. 27, 1993); G.F. Humphrey, “Scientific fraud: the McBride case,” 32 Med. Sci. Law 199 (1992); Andrew Skolnick, “Key Witness Against Morning Sickness Drug Faces Scientific Fraud Charges,” 263 J. Am. Med. Ass’n 1468 (1990).

Judicial Gatekeeping Cures Claims That Viagra Can Cause Melanoma

January 24th, 2020

The phosphodiesterases 5 inhibitor medications (PDE5i) seem to arouse the litigation propensities of the lawsuit industry. The PDE5i medications (sildenafil, tadalafil, etc.) have multiple indications, but they are perhaps best known for their ability to induce penile erections, which in some situations can be a very useful outcome.

The launch of Viagra in 1998 was followed by litigation that claimed the drug caused heart attacks, and not the romantic kind. The only broken hearts, however, were those of the plaintiffs’ lawyers and their expert witnesses who saw their litigation claims excluded and dismissed.[1]

Then came claims that the PDE5i medications caused non-arteritic anterior ischemic optic neuropathy (“NAION”), based upon a dubious epidemiologic study by Dr. Gerald McGwin. This litigation demonstrated, if anything, that while love may be blind, erections need not be.[2] The NAION cases were consolidated in a multi-district litigation (MDL) in front of Judge Paul Magnuson, in the District of Minnesota. After considerable back and forth, Judge Magnuson ultimately concluded that the McGwin study was untrustworthy, and the NAION claims were dismissed.[3]

In 2014, the American Medical Association’s internal medicine journal published an observational epidemiologic study of sildenafil (Viagra) use and melanoma.[4] The authors of the study interpreted their study modestly, concluding:

“[s]ildenafil use may be associated with an increased risk of developing melanoma. Although this study is insufficient to alter clinical recommendations, we support a need for continued investigation of this association.”

Although the Li study eschewed causal conclusions and new clinical recommendations in view of the need for more research into the issue, the litigation industry filed lawsuits, claiming causality.[5]

In the new natural order of things, as soon as the litigation industry cranks out more than a few complaints, an MDL results, and the PDE5i – melanoma claims were no exception. By spring 2016, plaintiffs’ counsel had collected ten cases, a minyan, sufficient for an MDL.[6] The MDL plaintiffs named the manufacturers of sildenafil and tadalafil, two of the more widely prescribed PDE5i medications, on behalf of putative victims.

While the MDL cases were winding their way through discovery and possible trials, additional studies and meta-analyses were published. None of the subsequent studies, including the systematic reviews and meta-analyses, concluded that there was a causal association. Most scientists who were publishing on the issue opined that systematic error (generally confounding) prevented a causal interpretation of the data.[7]

Many of the observational studies found statistically significantly increased relative risks about 1.1 to 1.2 (10 to 20%), typically with upper bounds of 95% confidence intervals less than 2.0. The only scientists who inferred general causation from the available evidence were those who had been recruited and retained by plaintiffs’ counsel. As plaintiffs’ expert witnesses, they contended that the Li study, and the several studies that became available afterwards, collectively showed that PDE5i drugs cause melanoma in humans.
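To make concrete how such weak associations look numerically, here is a minimal sketch, in Python, of how a risk ratio and its 95 percent confidence interval are computed on the log scale. The counts are hypothetical, chosen only to reproduce the pattern described above (a risk ratio near 1.15 whose interval excludes 1.0 but stays well under 2.0); they are not data from any actual Bendectin or PDE5i study.

```python
import math

# Hypothetical cohort-style counts, for illustration only
exposed_cases, exposed_total = 2_300, 1_000_000
unexposed_cases, unexposed_total = 2_000, 1_000_000

# Relative risk: ratio of the two cumulative incidences
rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Standard error of log(RR) for cumulative-incidence (risk) data
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total
    + 1 / unexposed_cases - 1 / unexposed_total
)

# 95% Wald interval on the log scale, back-transformed
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# The interval excludes 1.0 (statistically significant), yet the upper
# bound stays well under 2.0 -- a weak association in Bradford Hill terms.
```

As the sketch shows, statistical significance and strength of association are distinct questions: with large enough samples, even a 15 percent increased risk can be statistically significant while remaining weak.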

Not surprisingly, given the absence of any non-litigation experts endorsing the causal conclusion, the defendants challenged plaintiffs’ proffered expert witnesses under Federal Rule of Evidence 702. Plaintiffs’ counsel also embraced judicial gatekeeping and challenged the defense experts. The MDL trial judge, the Hon. Richard Seeborg, held hearings with four days of viva voce testimony from four of plaintiffs’ expert witnesses (two on biological plausibility, and two on epidemiology), and three of the defense’s experts. Last week, Judge Seeborg ruled by granting in part, and denying in part, the parties’ motions.[8]

The Decision

The MDL trial judge’s opinion is noteworthy in many respects. First, Judge Richard Seeborg cited and applied Rule 702, a statute, and not dicta from case law that predates the most recent statutory version of the rule. As a legal process matter, this respect for judicial process and the difference in legal authority between statutory and common law was refreshing. Second, the judge framed the Rule 702 issue, in line with the statute, and Ninth Circuit precedent, as an inquiry whether expert witnesses deviated from the standard of care of how scientists “conduct their research and reach their conclusions.”[9]

Biological Plausibility

Plaintiffs proffered three expert witnesses on biological plausibility, Drs. Rizwan Haq, Anand Ganesan, and Gary Piazza. All were subject to motions to exclude under Rule 702. Judge Seeborg denied the defense motions against all three of plaintiffs’ plausibility witnesses.[10]

The MDL judge determined that biological plausibility is neither necessary nor sufficient for inferring causation in science or in the law. The defense argued that the plausibility witnesses relied upon animal and cell culture studies that were unrealistic models of the human experience.[11] The MDL court, however, found that the standard for opinions on biological plausibility is relatively forgiving, and that the testimony of all three of plaintiffs’ proffered witnesses was admissible.

The subjective nature of opinions about biological plausibility is widely recognized in medical science.[12] Plausibility determinations are typically “Just So” stories, offered in the absence of hard evidence that postulated mechanisms are actually involved in a real causal pathway in human beings.

Causal Association

The real issue in the MDL hearings was the conclusion reached by plaintiffs’ expert witnesses that the PDE5i medications cause melanoma. The MDL court did not have to determine whether epidemiologic studies were necessary for such a causal conclusion. Plaintiffs’ counsel had proffered three expert witnesses with more or less expertise in epidemiology: Drs. Rehana Ahmed-Saucedo, Sonal Singh, and Feng Liu-Smith. All of plaintiffs’ epidemiology witnesses, and certainly all of defendants’ experts, implicitly if not explicitly embraced the proposition that analytical epidemiology was necessary to determine whether PDE5i medications can cause melanoma.

In their motions to exclude Ahmed-Saucedo, Singh, and Liu-Smith, the defense pointed out that, although many of the studies yielded statistically significant estimates of melanoma risk, none of the available studies adequately accounted for systematic bias in the form of confounding. Although the plaintiffs’ plausibility expert witnesses advanced “Just-So” stories about PDE5i and melanoma, the available studies showed an almost identical increased risk of basal cell carcinoma of the skin, which would be explained by confounding, but not by plaintiffs’ postulated mechanisms.[13]

The MDL court acknowledged that whether epidemiologic studies “adequately considered” confounding was “central” to the Rule 702 inquiry. Without any substantial analysis, however, the court gave its own ipse dixit that the existence vel non of confounding was an issue for cross-examination and the jury’s resolution.[14] Whether there was a reasonably valid association between PDE5i and melanoma was a jury question. This judicial refusal to engage with the issue of confounding was one of the disappointing aspects of the decision.

The MDL court was less forgiving when it came to the plaintiffs’ epidemiology expert witnesses’ assessment of the association as causal. All the parties’ epidemiology witnesses invoked Sir Austin Bradford Hill’s viewpoints or factors for judging whether associations were causal.[15] Although they embraced Hill’s viewpoints on causation, the plaintiffs’ epidemiologic expert witnesses had a much more difficult time faithfully applying them to the evidence at hand. The MDL court concluded that the plaintiffs’ witnesses deviated from their own professional standard of care in their analysis of the data.[16]

Hill’s first enumerated factor was “strength of association,” which is typically expressed epidemiologically as a risk ratio or a risk difference. The MDL court noted that the extant epidemiologic studies generally showed relative risks around 1.2 for PDE5i and melanoma, which was “undeniably” not a strong association.[17]

The plaintiffs’ epidemiology witnesses were at sea on how to explain away the lack of strength in the putative association. Dr. Ahmed-Saucedo retreated into an emphasis on how all or most of the studies found some increased risk, but the MDL court correctly found that this ruse merely conflated strength with consistency of the observed associations. Her dismissal of dose-response relationship, another Hill factor, as unimportant sealed her fate. The MDL court found that her Bradford Hill analysis was “unduly results-driven,” and that her proffered testimony was not admissible.[18] The court found that Dr. Feng Liu-Smith similarly conflated strength of association with consistency, an error that was too great a deviation from the professional standard of care.[19]

Dr. Sonal Singh fared no better after he contradicted his own prior testimony that there is an order of importance to the Hill factors, with “strength of association” at or near the top. In the face of a set of studies, none of which showed a strong association, Dr. Singh abandoned his own interpretative principle to suit the litigation needs of the case. His analysis placed the greatest weight on the Li study, which had the highest risk ratio, but he failed to advance any persuasive reason for his emphasis on one of the smallest studies available. The MDL court found Dr. Singh’s claim to have weighed strength of association heavily, despite the obvious absence of strong associations, puzzling, and too great an analytical gap to abide.[20]

Judge Seeborg thus concluded that while the plaintiffs’ expert witnesses could opine that there was an association, which was arguably plausible, they could not, under Rule 702, contend that the association was causal. In attempting to advance an argument that the association met Bradford Hill’s factors for causality, the plaintiffs’ witnesses had ignored, misrepresented, or confused one of the most important factors, strength of the association, in a way that revealed their analyses to be results-driven and unfaithful to the methodology they claimed to have followed. Judge Seeborg emphasized a feature of the revised Rule 702, which often is ignored by his fellow federal judges:[21]

“Under the amendment, as under Daubert, when an expert purports to apply principles and methods in accordance with professional standards, and yet reaches a conclusion that other experts in the field would not reach, the trial court may fairly suspect that the principles and methods have not been faithfully applied. See Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 598 (9th Cir. 1996). The amendment specifically provides that the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case.”

Given that the plaintiffs’ witnesses purported to apply a generally accepted methodology, Judge Seeborg was left to question why they would conclude causality when no one else in their field had done so.[22] The epidemiologic issue had been around for several years, and addressed not just in observational studies, but systematically reviewed and meta-analyzed. The absence of published causal conclusions was not just an absence of evidence, but evidence of absence of expert support for how plaintiffs’ expert witnesses applied the Bradford Hill factors.

Reliance Upon Studies That Did Not Conclude Causation Existed

Parties challenging causal claims will sometimes point to the absence of a causal conclusion in the publications of the individual epidemiologic studies that are the main basis for the causal claim. In the PDE5i-melanoma cases, the defense advanced this argument unsuccessfully. The MDL court rejected the argument because an individual study does not purport to offer a comprehensive review of all the pertinent evidence for or against causality; study authors are mostly concerned with conveying the results of their own study.[23] The authors may include a short discussion of other study results as the rationale for their own study, but such discussions are often limited in scope and purpose. Judge Seeborg, in this latest round of PDE5i litigation, thus did not fault plaintiffs’ witnesses’ reliance upon epidemiologic or mechanistic studies that individually did not assert causal conclusions; rather, it was the absence of causal conclusions in systematic reviews, meta-analyses, narrative reviews, regulatory agency pronouncements, or clinical guidelines that ultimately raised the fatal inference that the plaintiffs’ witnesses were not faithfully deploying a generally accepted methodology.

The defense argument that pointed to the individual epidemiologic studies themselves derives some legal credibility from the Supreme Court’s opinion in General Electric Co. v. Joiner, 522 U.S. 136 (1997). In Joiner, the SCOTUS took plaintiffs’ expert witnesses to task for drawing stronger conclusions than were offered in the papers upon which they relied. Chief Justice Rehnquist gave considerable weight to the consideration that the plaintiffs’ expert witnesses relied upon studies, the authors of which explicitly refused to interpret as supporting a conclusion of human disease causation.[24]

Joiner’s criticisms of the reliance upon studies that do not themselves reach causal conclusions have gained a foothold in the case law interpreting Rule 702. The Fifth Circuit, for example, has declared:[25]

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

This aspect of Joiner may properly limit the over-interpretation or misinterpretation of an individual study, which seems fine.[26] The Joiner case may, however, perpetuate an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions.  The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ discussion section is that sometimes study authors grossly over-interpret their data.  When it comes to scientific studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), then the discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible in trials. In other words, the misuse of non-rigorous comments in published articles can cut both ways.

There have been, and will continue to be, occasions in which published studies contain data, relevant and important to the causation issue, but which studies also contain speculative, personal opinions expressed in the Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither side’s expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be a dispositive factor, other than raising questions for the reviewing court.

In exercising their gatekeeping function, trial judges should exercise care in how they assess expert witnesses’ reliance upon study data and analyses, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should, in some instances,  have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

Judge Seeborg sensibly seems to have distinguished between the absence of causal conclusions in individual epidemiologic studies and the absence of causal conclusions in any reputable medical literature.[27] He refused to be ensnared in the Joiner argument because:[28]

“Epidemiology studies typically only expressly address whether an association exists between agents such as sildenafil and tadalafil and outcomes like melanoma progression. As explained in In re Roundup Prod. Liab. Litig., 390 F. Supp. 3d 1102, 1116 (N.D. Cal. 2018), ‘[w]hether the agents cause the outcomes, however, ordinarily cannot be proven by epidemiological studies alone; an evaluation of causation requires epidemiologists to exercise judgment about the import of those studies and to consider them in context’.”

This new MDL opinion, relying upon the Advisory Committee Notes to Rule 702, is thus a more felicitous statement of the goals of gatekeeping.

Confidence Intervals

As welcome as some aspects of Judge Seeborg’s opinion are, the decision is not without mistakes. The district judge, like so many of his judicial colleagues, trips over the proper interpretation of a confidence interval:[29]

“When reviewing the results of a study it is important to consider the confidence interval, which, in simple terms, is the ‘margin of error’. For example, a given study could calculate a relative risk of 1.4 (a 40 percent increased risk of adverse events), but show a 95 percent ‘confidence interval’ of .8 to 1.9. That confidence interval means there is 95 percent chance that the true value—the actual relative risk—is between .8 and 1.9.”

This statement is inescapably wrong. The 95 percent probability attaches to the capturing of the true parameter – the actual relative risk – in the long run of repeated confidence intervals that result from repeated sampling of the same sample size, in the same manner, from the same population. In Judge Seeborg’s example, the next sample might give a relative risk point estimate of 1.9, and that new estimate will have a confidence interval that may run from just below 1.0 to over 3. A third sample might turn up a relative risk estimate of 0.8, with a confidence interval that runs from, say, 0.3 to 1.4. Neither the second nor the third sample would be reasonably incompatible with the first. A more accurate assessment is that the true parameter lies somewhere between 0.3 and 3, a considerably broader range than any single 95 percent interval suggests.
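The frequentist logic can be demonstrated with a short simulation (a sketch using illustrative numbers, not data from this or any litigation): draw many cohort studies from a population whose true relative risk is fixed, compute a 95 percent confidence interval for each study, and count how often the intervals capture the truth.

```python
import math
import random

random.seed(1)

TRUE_RR, BASE_RISK, N = 1.4, 0.025, 2000  # hypothetical, illustrative values only

def one_study():
    """Simulate one cohort study and return a 95% CI for the relative
    risk, computed on the log scale (the standard Katz approximation)."""
    exposed_cases = sum(random.random() < BASE_RISK * TRUE_RR for _ in range(N))
    unexposed_cases = sum(random.random() < BASE_RISK for _ in range(N))
    if exposed_cases == 0 or unexposed_cases == 0:
        return None
    rr = (exposed_cases / N) / (unexposed_cases / N)
    se = math.sqrt(1 / exposed_cases - 1 / N + 1 / unexposed_cases - 1 / N)
    return math.exp(math.log(rr) - 1.96 * se), math.exp(math.log(rr) + 1.96 * se)

cis = [ci for ci in (one_study() for _ in range(500)) if ci]
coverage = sum(lo <= TRUE_RR <= hi for lo, hi in cis) / len(cis)
print(f"coverage: {coverage:.2f}")  # close to 0.95 in the long run
```

No single interval has a 95 percent chance of containing the true relative risk; the 95 percent describes the long-run performance of the interval-generating procedure across repeated samples.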

Judge Seeborg’s error is sadly all too common. Whenever I see the error, I wonder whence it came. Often the error is in briefs of both plaintiffs’ and defense counsel. In this case, I did not see the erroneous assertion about confidence intervals made in plaintiffs’ or defendants’ briefs.


[1]  Brumley v. Pfizer, Inc., 200 F.R.D. 596 (S.D. Tex. 2001) (excluding plaintiffs’ expert witness who claimed that Viagra caused heart attack); Selig v. Pfizer, Inc., 185 Misc. 2d 600 (N.Y. Cty. S. Ct. 2000) (excluding plaintiff’s expert witness), aff’d, 290 A.D. 2d 319, 735 N.Y.S. 2d 549 (2002).

[2]  “Love is Blind but What About Judicial Gatekeeping of Expert Witnesses? – Viagra Part I” (July 7, 2012); “Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference” (July 8, 2012).

[3]  In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071 (D. Minn. 2008), 658 F. Supp. 2d 936 (D. Minn. 2009), and 658 F. Supp. 2d 950 (D. Minn. 2009).

[4]  Wen-Qing Li, Abrar A. Qureshi, Kathleen C. Robinson, and Jiali Han, “Sildenafil use and increased risk of incident melanoma in US men: a prospective cohort study,” 174 J. Am. Med. Ass’n Intern. Med. 964 (2014).

[5]  See, e.g., Herrara v. Pfizer Inc., Complaint in 3:15-cv-04888 (N.D. Calif. Oct. 23, 2015); Diana Novak Jones, “Viagra Increases Risk Of Developing Melanoma, Suit Says,” Law360 (Oct. 26, 2015).

[6]  See In re Viagra (Sildenafil Citrate) Prods. Liab. Litig., 176 F. Supp. 3d 1377, 1378 (J.P.M.L. 2016).

[7]  See, e.g., Jenny Z. Wang, Stephanie Le , Claire Alexanian, Sucharita Boddu, Alexander Merleev, Alina Marusina, and Emanual Maverakis, “No Causal Link between Phosphodiesterase Type 5 Inhibition and Melanoma,” 37 World J. Men’s Health 313 (2019) (“There is currently no evidence to suggest that PDE5 inhibition in patients causes increased risk for melanoma. The few observational studies that demonstrated a positive association between PDE5 inhibitor use and melanoma often failed to account for major confounders. Nonetheless, the substantial evidence implicating PDE5 inhibition in the cyclic guanosine monophosphate (cGMP)-mediated melanoma pathway warrants further investigation in the clinical setting.”); Xinming Han, Yan Han, Yongsheng Zheng, Qiang Sun, Tao Ma, Li Dai, Junyi Zhang, and Lianji Xu, “Use of phosphodiesterase type 5 inhibitors and risk of melanoma: a meta-analysis of observational studies,” 11 OncoTargets & Therapy 711 (2018).

[8]  In re Viagra (Sildenafil Citrate) and Cialis (Tadalafil) Prods. Liab. Litig., Case No. 16-md-02691-RS, Order Granting in Part and Denying in Part Motions to Exclude Expert Testimony (N.D. Calif. Jan. 13, 2020) [cited as Opinion].

[9]  Opinion at 8 (“determin[ing] whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions”), citing Daubert v. Merrell Dow Pharm., Inc. (Daubert II), 43 F.3d 1311, 1317 (9th Cir. 1995).

[10]  Opinion at 11.

[11]  Opinion at 11-13.

[12]  See Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, “Introduction,” chap. 1, in Kenneth J. Rothman, et al., eds., Modern Epidemiology at 29 (3d ed. 2008) (“no approach can transform plausibility into an objective causal criterion”).

[13]  Opinion at 15-16.

[14]  Opinion at 16-17.

[15]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965); see also “Woodside & Davis on the Bradford Hill Considerations” (April 23, 2013).

[16]  Opinion at 17-21.

[17]  Opinion at 18. The MDL court cited In re Silicone Gel Breast Implants Prod. Liab. Litig., 318 F. Supp. 2d 879, 893 (C.D. Cal. 2004), for the proposition that relative risks greater than 2.0 permit the inference that the agent under study “was more likely than not responsible for a particular individual’s disease.”

[18]  Opinion at 18.

[19]  Opinion at 20.

[20]  Opinion at 19.

[21]  Opinion at 21, quoting from Rule 702, Advisory Committee Notes (emphasis in Judge Seeborg’s opinion).

[22]  Opinion at 21.

[23]  SeeFollow the Data, Not the Discussion” (May 2, 2010).

[24]  Joiner, 522 U.S. at 145-46 (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

[25]  Huss v. Gayden, 571 F.3d 442 (5th Cir. 2009) (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia); see also McClain v. Metabolife Internat’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions); Happel v. Walmart, 602 F.3d 820, 826 (7th Cir. 2010) (observing that it “is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven”).

[26]  In re Accutane Prods. Liab. Litig., 511 F. Supp. 2d 1288, 1291 (M.D. Fla. 2007) (“When an expert relies on the studies of others, he must not exceed the limitations the authors themselves place on the study. That is, he must not draw overreaching conclusions.”) (internal citations omitted).

[27]  See Rutigliano v. Valley Bus. Forms, 929 F. Supp. 779, 785 (D.N.J. 1996), aff’d, 118 F.3d 1577 (3d Cir. 1997) (“law warns against use of medical literature to draw conclusions not drawn in the literature itself …. Reliance upon medical literature for conclusions not drawn therein is not an accepted scientific methodology.”).

[28]  Opinion at 14.

[29]  Opinion at 4-5.

Is the IARC Lost in the Weeds?

November 30th, 2019

A couple of years ago, I met David Zaruk at a Society for Risk Analysis meeting, where we were both presenting. I was aware of David’s blogging and investigative journalism, but meeting him gave me a greater appreciation for the breadth and depth of his work. For those of you who do not know David, he is present in cyberspace as the Risk-Monger who blogs about risk and science communications issues. His blog has featured cutting-edge exposés about the distortions in risk communications perpetuated by the advocacy of non-governmental organizations (NGOs). Previously, I have recorded my objections to the intellectual arrogance of some such organizations that purport to speak on behalf of the public interest, when often they act in cahoots with the lawsuit industry in the manufacturing of tort and environmental litigation.

David’s writing on the lobbying and control of NGOs by plaintiffs’ lawyers from the United States should be required reading for everyone who wants to understand how litigation sausage is made. His series, “SlimeGate,” details the interplay among NGO lobbying, lawsuit industry maneuvering, and carcinogen determinations at the International Agency for Research on Cancer (IARC). The IARC, a branch of the World Health Organization, is headquartered in Lyon, France. The IARC convenes “working groups” to review the scientific studies of the carcinogenicity of various substances and processes. The IARC working groups produce “monographs” of their reviews, and the IARC publishes these monographs, in print and on-line. The United States is in the top tier of participating countries for funding the IARC.

The IARC was founded in 1965, when observational epidemiology was still very much an emerging science, with expertise concentrated in only a few countries. For its first few decades, the IARC enjoyed a good reputation, and its monographs were considered definitive reviews, especially under its first director, Dr. John Higginson, from 1966 to 1981.[1] By the end of the 20th century, the need for the IARC and its reviews had waned, as the methods of systematic review and meta-analysis had evolved significantly, and had become more widely standardized and practiced.

Understandably, the IARC has been concerned that the members of its working groups should be viewed as disinterested scientists. Unfortunately, this concern has been translated into an asymmetrical standard that excludes anyone with a hint of manufacturing connection, but keeps the door open for those scientists with deep lawsuit industry connections. Speaking on behalf of the plaintiffs’ bar, Michael Papantonio, a plaintiffs’ lawyer who founded Mass Torts Made Perfect, noted that “We [the lawsuit industry] operate just like any other industry.”[2]

David Zaruk has shown how this asymmetry has been exploited mercilessly by the lawsuit industry and its agents in connection with the IARC’s review of glyphosate.[3] The resulting IARC classification of glyphosate has led to a litigation firestorm and an all-out assault on agricultural sustainability and productivity.[4]

The anomaly of the IARC’s glyphosate classification has been noted by scientists as well. Dr. Geoffrey Kabat is a cancer epidemiologist, who has written perceptively on the misunderstandings and distortions of cancer risk assessments in various settings.[5] He has previously written about glyphosate in Forbes and elsewhere, but recently he has written an important essay on glyphosate in Issues in Science and Technology, which is published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. In his essay, Dr. Kabat details how the IARC’s evaluation of glyphosate is an outlier in the scientific and regulatory world, and is not well supported by the available evidence.[6]

The problems with the IARC are both substantive and procedural.[7] One of the key problems that face IARC evaluations is an incoherent classification scheme. IARC evaluations classify putative human carcinogenic risks into five categories: Group 1 (known), Group 2A (probably), Group 2B (possibly), Group 3 (unclassifiable), and Group 4 (probably not). Group 4 is virtually an empty set, with only one substance, caprolactam ((CH2)5C(O)NH), an organic compound used in the manufacture of nylon.

In the IARC evaluation at issue, glyphosate was placed into Group 2A, which would seem to satisfy the legal system’s requirement that an exposure more likely than not causes the harm in question. Appearances and word usage, however, can be deceiving. Probability is a continuous scale from zero to one. In Bayesian decision making, zero and one are unavailable as starting points because, if either were our prior, no amount of evidence could ever change our judgment of the probability of causation (Cromwell’s Rule). The IARC informs us that its use of “probably” is quite idiosyncratic; the probability that a Group 2A agent causes cancer has “no quantitative” meaning. All the IARC intends is that a Group 2A classification “signifies a greater strength of evidence than possibly carcinogenic.”[8]
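Cromwell’s Rule can be illustrated with a two-line application of Bayes’ theorem (a sketch with hypothetical numbers): any prior strictly between zero and one moves with the evidence, but a prior of exactly zero or one never does.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem for a binary hypothesis."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Evidence 100 times more likely if the hypothesis is true moves a 10% prior:
print(posterior(0.10, 0.99, 0.0099))  # about 0.92
# But priors of exactly 0 or 1 are immovable, whatever the evidence:
print(posterior(0.0, 0.99, 0.0099))   # 0.0
print(posterior(1.0, 0.99, 0.0099))   # 1.0
```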

In other words, Group 2A classifications are consistent with having posterior probabilities of less than 0.5 (or 50 percent). A working group could judge the probability of a substance or a process to be carcinogenic to humans to be greater than zero, but no more than five or ten percent, and still vote for a 2A classification, in keeping with the IARC Preamble. This low probability threshold for a 2A classification converts the judgment of “probably carcinogenic” into a precautionary prescription, rendered when the most probable assessment is either ignorance or lack of causality. There is thus a practical certainty, close to 100%, that a 2A classification will confuse judges and juries, as well as the scientific community.

In IARC-speak, a 2A “probability” connotes “sufficient evidence” in experimental animals, and “limited evidence” in humans. A substance can receive a 2A classification even when the sufficient evidence of carcinogenicity occurs in one non-human animal species, even though other animal species fail to show carcinogenicity. A 2A classification can raise the thorny question in court whether a claimant is more like a rat or a mouse.

Similarly, “limited evidence” in humans can be based upon inconsistent observational studies that fail to measure and adjust for known and potential confounding risk factors and systematic biases. The 2A classification requires little substantively or semantically, and many 2A classifications leave juries and judges to determine whether a chemical or medication caused a human being’s cancer, when the basic predicates for Sir Austin Bradford Hill’s factors for causal judgment have not been met.[9]

In courtrooms, IARC 2A classifications should be excluded as legally irrelevant, under Rule 403. Even if a 2A IARC classification were a credible judgment of causation, admitting evidence of the classification would be “substantially outweighed by a danger of … unfair prejudice, confusing the issues, [and] misleading the jury….”[10]

The IARC may be lost in the weeds, but there is no need to fret. A little Round Up™ will help.


[1]  See John Higginson, “The International Agency for Research on Cancer: A Brief Review of Its History, Mission, and Program,” 43 Toxicological Sci. 79 (1998).

[2]  Sara Randazzo & Jacob Bunge, “Inside the Mass-Tort Machine That Powers Thousands of Roundup Lawsuits,” Wall St. J. (Nov. 25, 2019).

[3]  David Zaruk, “The Corruption of IARC,” Risk Monger (Aug. 24, 2019); David Zaruk, “Greed, Lies and Glyphosate: The Portier Papers,” Risk Monger (Oct. 13, 2017).

[4]  Ted Williams, “Roundup Hysteria,” Slate Magazine (Oct. 14, 2019).

[5]  See, e.g., Geoffrey Kabat, Hyping Health Risks: Environmental Hazards in Everyday Life and the Science of Epidemiology (2008); Geoffrey Kabat, Getting Risk Right: Understanding the Science of Elusive Health Risks (2016).

[6]  Geoffrey Kabat, “Who’s Afraid of Roundup?” 36 Issues in Science and Technology (Fall 2019).

[7]  See Schachtman, “Infante-lizing the IARC” (May 13, 2018); “The IARC Process is Broken” (May 4, 2016). See also Eric Lasker and John Kalas, “Engaging with International Carcinogen Evaluations,” Law360 (Nov. 14, 2019).

[8]  “IARC Preamble to the IARC Monographs on the Identification of Carcinogenic Hazards to Humans,” at Sec. B.5., p.31 (Jan. 2019); See alsoIARC Advisory Group Report on Preamble” (Sept. 2019).

[9]  See Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965) (noting that only when “[o]ur observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance,” do we move on to consider the nine articulated factors for determining whether an association is causal).

[10]  Fed. R. Evid. 403.


Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it represents the scientific method of hypothesis testing, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to not reasonably compatible with the study’s data. In observational studies, the statistical analysis must also acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

4.10 Statistical Significance

“The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
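The distinction the Bench Book muddles can be made concrete with a toy significance test (hypothetical counts, not drawn from the Bench Book or any study): a p-value measures how surprising the observed data would be if the null hypothesis were true, not the probability that either hypothesis, or a party’s causal claim, is true.

```python
import math

def two_proportion_p_value(cases1, n1, cases2, n2):
    """Two-sided z-test for equality of two proportions (normal approximation)."""
    p1, p2 = cases1 / n1, cases2 / n2
    pooled = (cases1 + cases2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# 70 cases among 5,000 exposed vs. 50 among 5,000 unexposed (observed RR = 1.4):
p = two_proportion_p_value(70, 5000, 50, 5000)
print(f"p = {p:.3f}")  # roughly 0.07: not "significant" at 0.05
```

A p-value just above 0.05 neither proves the null hypothesis true nor establishes the claimed effect; it speaks only to the compatibility of the data with the null, which is why equating statistical significance or the coefficient of confidence with the civil burden of proof is a category mistake.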

The Bench Book runs into problems in interpreting legal decisions, which would seem softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision to Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected or held to be indeterminate by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism that science is just placing subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to help the judicial reader find more in-depth help on statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of this Bench Book was funded by the State Justice Institute (SJI), a creation of federal legislation enacted with the laudatory goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. Six of the directors are state court judges; the others are one state court administrator and four members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, the American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit 501(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges can go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual on Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies of Sciences, the Bench Book’s authors are lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite to the Reference Manual or the Manual on Complex Litigation at any point in its discussion of the federal law of expert witnesses or of scientific or statistical method. Perhaps taxpayers would have been spared substantial expense if state judges were simply encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).