TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Judicial Dodgers – Weight not Admissibility

May 28th, 2020

Another vacuous response to a methodological challenge under Rule 702 is to label the challenge as “going to the weight, not the admissibility” of the challenged expert witness’s testimony. Of course, a challenge may be focused solely upon the expert witness’s credibility, such as when an expert witness testifies on many occasions only for one side in similar disputes, or when the witness’s political commitments render him unable to acknowledge the bona fides of any studies conducted by the adversarial parties.[1] If, however, the Rule 702 challenge stated an objection to the witness’s methodology, then the objection would count against both the opinion’s weight and its admissibility. The judicial “weight not admissibility” label conveys the denial of the challenge, but it hardly explains how and why the challenge failed under Rule 702. Applying such a label without addressing the elements of Rule 702, and how the challenged expert witness satisfied those elements, is often nothing less than a failure of judging.

The Flawed Application of a Generally Accepted Methodology

If a meretricious expert witness by pretense or ignorance invokes a standard methodology but applies it in a flawed, distorted, or invalid way, then there will be a clear break in the chain of inferences from data to conclusion. The clear language of Rule 702 should render such an expert witness’s conclusion inadmissible. Some courts, however, retreat into a high level of generality about the method used rather than inspecting the method as applied. For example, a court might look at an expert witness’s opinion and correctly find that it relied upon epidemiology, and that epidemiology is a generally accepted discipline concerned with identifying causes. The specific detail of the challenge may have shown that the witness had relied upon a study that was thoroughly flawed,[2] or that the witness relied upon an epidemiologic study of a type that cannot support a causal inference.[3]

Rule 702 and the Supreme Court’s decision in Joiner make clear that the trial court must evaluate the expert witness’s application of methodology and whether it actually supports valid inferences leading to the witness’s claims and conclusions.[4] And yet, lower courts continue to characterize the gatekeeping process as “hands off” the application of methodology and conclusions:

“Where the court has determined that plaintiffs have met their burden of showing that the methodology is reliable, the expert’s application of the methodology and his or her conclusions are issues of credibility for the jury.”[5]

This rejection of the clear demands of a statute has infected even the intermediate appellate courts, the United States Courts of Appeals. In a muddled application of Rule 702, the Third Circuit approved admitting expert witness testimony in a case, explaining that “because [the objecting party / plaintiff] objected to the application rather than the legitimacy of [the expert’s] methodology, such objections were more appropriately addressed on cross-examination and no Daubert hearing was required.”[6] Such a ruling in the Third Circuit is especially jarring because it violates not only the clear language of Rule 702, but also established precedent within the Circuit that holds that “any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[7]

The Eighth Circuit seems to have set itself stridently against the law by distinguishing between scientific methodologies and their applications, and holding that “when the application of a scientific methodology is challenged as unreliable under Daubert and the methodology itself is otherwise sufficiently reliable, outright exclusion of the evidence in question is warranted only if the methodology was so altered by a deficient application as to skew the methodology itself.”[8]

The Ninth Circuit similarly has followed this dubious distinction between methodology in the abstract and methodology as applied. In City of Pomona, the Circuit addressed the admissibility of an expert witness whose testing deviated from protocols. Relying upon pre-2000 Ninth Circuit case law, decided before the statutory language of Rule 702 was adopted, the court found that:

“expert evidence is inadmissible where the analysis is the result of a faulty methodology or theory as opposed to imperfect execution of laboratory techniques whose theoretical foundation is sufficiently accepted in the scientific community to pass muster under Daubert.”[9]

The Eleventh Circuit has similarly disregarded Rule 702 by adverting to an improvised distinction between validity of methodology and flawed application of methodology.[10]

Cherry Picking and Inadequate Bases

Most of the Circuits of the United States Court of Appeals have contributed to the mistaken belief that “[a]s a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility.”[11] Clearly, such questions can undermine the admissibility of an expert witness’s opinion under Rule 702, and courts need to say why they have found the challenged opinion to have had a “sufficient basis.” For example, in the notorious Milward case, the First Circuit, citing legally invalid pre-Daubert decisions, stated that “when the factual underpinning of an expert’s opinion is weak it is a matter affecting the weight and credibility of the testimony − a question to be resolved by the jury.”[12]

After Milward, the Eighth Circuit followed suit in a hormone replacement therapy case. An expert who ignored studies was excluded by the district court, but the Court of Appeals found an abuse of discretion, holding that the sufficiency of an expert’s basis is a question of weight and not admissibility.[13]

These rulings elevate form over substance by halting the gatekeeping inquiry at an irrelevant, high level of abstraction, and finding that the challenged expert witness was doing something “sciencey,” which is good enough for government work. The courts involved evaded their gatekeeping duties and ignored the undue selectivity in reliance materials and the inadequacy and insufficiency of the challenged expert witness’s factual predicate. The question is not whether expert witnesses relied upon “scientific studies,” but whether their causal conclusions and claims are well supported, under scientific standards, by the studies upon which they relied.

Like the covert shifting of the burden of proof, or the glib assessment that the loser can still cross-examine in front of the jury,[14] the rulings discussed represent another way that judges kick the can on Rule 702 motions. Despite the confusing verbiage, these judicial rulings are a serious deviation from the text of Rule 702, as well as the Advisory Committee Note to the 2000 Amendments, which embraced the standard articulated in In re Paoli, that

“any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[15]

On a positive note, some courts have recognized that responding to a challenge with the conclusory assessment that it goes to weight, not admissibility, is a delegation of the court’s gatekeeping duty to the jury.[16]

In 2018, Professor Daniel Capra, the Reporter to the Rules Committee, addressed the “weight not admissibility dodge” at length in his memorandum to the Rules Committee:

“Rule 702 clearly states that these are questions of admissibility, but many courts treat them as questions of weight. The issue for the Committee is whether something/anything can be done about these wayward decisions.”[17]

The Reporter charitably noted that the problem could be in the infelicitous expression of some courts that short-circuit their analyses by saying “I see the problems, but they go to the weight of the evidence.”[18] Perhaps these courts meant to say that they had found that the proponent of the challenged expert witness testimony had shown admissibility by a preponderance, and that whatever non-disqualifying problems remained should be taken up on cross-examination.[19] The principle of charity, however, cannot exonerate federal judges from exercising the dodge repeatedly in the face of clear statutory language. Indeed, the Reporter reaffirmed the Rules Committee’s substantive judgment that questions of sufficient basis and reliable application of methodology are admissibility issues:[20]

“It is hard to see how expert testimony is reliable if the expert has not done sufficient investigation, or has cherry-picked the data, or has misapplied the methodology. The same ‘white lab coat’ problem − that the jury will not be able to figure out the expert’s missteps − would seem to apply equally to basis, methodology and application.”

Although the Reporter opined that some authors may have overstated judicial waywardness, he found the judicial disregard of the requirements of Rule 702(b) and (d) incontrovertible.[21]

Professor Capra restated his conclusions a year later, in 2019, when he characterized broad statements such as “challenges to the sufficiency of an expert’s basis raise questions of weight and not admissibility” as “misstatement[s] made by circuit courts in a disturbing number of cases… .”[22] Factual insufficiency and unreliable application of methodology are, of course, also credibility and ethical considerations, but they are the fact finders’ concern only after the proponent has shown admissibility by a preponderance of the evidence. Principled adjudication requires judges to say what they mean and mean what they say.


[1]  See also Cruz-Vazquez v. Mennonite Gen. Hosp. Inc., 613 F.3d 54 (1st Cir. 2010) (reversing exclusion of an expert witness who was biased in favor of plaintiffs in medical cases and who was generally affiliated with plaintiffs’ lawyers; such issues of personal bias are for the jury in assessing the weight of the expert witness’s testimony). Another example would be those expert witnesses whose commitment to Marxist ideology is such that they reject any evidence proffered by manufacturing industry as inherently corrupt, while embracing any evidence proffered by labor or the lawsuit industry without critical scrutiny.

[2]  In re Phenylpropanolamine (PPA) Prods. Liab. Litig., MDL No. 1407, 289 F. Supp. 2d 1230 (W.D. Wash. 2003) (Yale Hemorrhagic Stroke Project).

[3]  Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants next claim that Dr. Clapp’s study and the conclusions he drew from it are unreliable because they failed to comply with four factors or criteria for drawing causal interferences from epidemiological studies: accounting for known confounders … .”), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___, 133 S.Ct. 22 (2012). For another example of a trial court refusing to see through important qualitative differences between and among epidemiologic studies, see In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, *33 (N.D. Ohio 2006) (reducing all studies to one level, and treating all criticisms as though they rendered all studies invalid).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997).

[5]  Proctor & Gamble Co. v. Haugen, 2007 WL 709298, at *2 (D. Utah 2007); see also United States v. McCluskey, 954 F.Supp.2d 1227, 1247-48 (D.N.M. 2013) (“the trial judge decides the scientific validity of underlying principles and methodology” and “once that validity is demonstrated, other reliability issues go to the weight − not the admissibility − of the evidence”); Murphy-Sims v. Owners Ins. Co., No. 16-CV-0759-CMA-MLC, 2018 WL 8838811, at *7 (D. Colo. Feb. 27, 2018) (“Concerns surrounding the proper application of the methodology typically go to the weight and not admissibility[.]”).

[6]  Walker v. Gordon, 46 F. App’x 691, 696 (3rd Cir. 2002).

[7]  In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994).

[8]  United States v. Gipson, 383 F.3d 689, 696 (8th Cir. 2004)(relying upon pre-2000 authority for this proposition).

[9]  City of Pomona v. SQM N.Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014).

[10]  Quiet Tech. DC-8, Inc. v. Hurel-Dubois UK Ltd., 326 F.3d 1333, 1343 (11th Cir. 2003).

[11]  Puga v. RCX Sols., Inc., 922 F.3d 285, 294 (5th Cir. 2019). See also United States v. Hodge, 933 F.3d 468, 478 (5th Cir. 2019)(“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility and should be left for the jury’s consideration.”); MCI Communications Service Inc. v. KC Trucking & Equip. LLC, 403 F. Supp. 3d 548, 556 (W.D. La. 2019); Coleman v. United States, No. SA-16-CA-00817-DAE, 2017 WL 9360840, at *4 (W.D. Tex. Aug. 16, 2017); Alvarez v. State Farm Lloyds, No. SA-18-CV-01191-XR, 2020 WL 734482, at *3 (W.D. Tex. Feb. 13, 2020)(“To the extent State Farm wishes to attack the ‘bases and sources’ of Dr. Hall’s opinion, such questions affect the weight to be assigned to that opinion rather than its admissibility and should also be left for the jury’s consideration.”)(internal quotation and citation omitted); Patenaude v. Dick’s Sporting Goods, Inc., No. 9:18-CV-3151-RMG, 2019 WL 5288077, at *2 (D.S.C. Oct. 18, 2019) (“More fundamentally, each of these arguments goes to the factual basis of the report, … and it is well settled that the factual basis for an expert opinion generally goes to weight, not admissibility.”); Wischermann Partners, Inc. v. Nashville Hosp. Capital LLC, No. 3:17-CV-00849, 2019 WL 3802121, at *3 (M.D. Tenn. Aug. 13, 2019) (“[A]rguments that Pinkowski’s opinions are unreliable because he failed to review other relevant information and ignored certain facts bear on the factual basis for Pinkowski’s opinions, and, therefore, go to the weight, rather than the admissibility, of Pinkowski’s testimony.”).

[12]  Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 22 (1st Cir. 2011) (internal citations omitted), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[13]  Kuhn v. Wyeth, Inc., 686 F.3d 618, 633 (8th Cir. 2012), rev’g Beylin v. Wyeth, 738 F. Supp. 2d 887, 892 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (excluding proffered testimony of Dr. Jasenka Demirovic, who appeared to have “selected study data that best supported her opinion, while downplaying contrary findings or conclusions.”); see United States v. Finch, 630 F.3d 1057 (8th Cir. 2011) (the sufficiency of the factual basis for an expert’s testimony goes to credibility rather than admissibility, and only where the testimony “is so fundamentally unsupported that it can offer no assistance to the jury must such testimony be excluded”); Katzenmeier v. Blackpowder Prods., Inc., 628 F.3d 948, 952 (8th Cir. 2010) (“As a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”); Paul Beverage Co. v. American Bottling Co., No. 4:17CV2672 JCH, 2019 WL 1044057, at *2 (E.D. Mo. Mar. 5, 2019) (admitting challenged opinion testimony without addressing the expert witness’s basis or application of methodology, following the Eighth Circuit’s incorrect statement in Nebraska Plastics, Inc. v. Holland Colors Americas, Inc., 408 F.3d 410, 416 (8th Cir. 2005), that “[a]s a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination[,]”). See also “The Fallacy of Cherry Picking As Seen in American Courtrooms” (May 3, 2014).

[14]  SeeJudicial Dodgers – Reassigning the Burden of Proof on Rule 702” (May 13, 2020); “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[15]  Fed. R. Evid. 702, Advisory Note (quoting In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994)).

[16]  See Nease v. Ford Motor Co., 848 F.3d 219, 231 (4th Cir. 2017) (“For the district court to conclude that Ford’s reliability arguments simply ‘go to the weight the jury should afford Mr. Sero’s testimony’ is to delegate the court’s gatekeeping responsibility to the jury.”).

[17]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702, at 1-2 (Apr. 1, 2018).

[18]  Id. at 43.

[19]  Id. at 43, 49-50.

[20]  Id. at 49-50.

[21]  Id. at 52.

[22]  Daniel J. Capra, Reporter, Reporter’s Memorandum re Possible Amendments to Rule 702, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019).

SKAPPOLOGY

May 26th, 2020

The Genetic Literacy Project (GLP) asks:

“Who is David and who is Goliath in the lobbying battle over agricultural biotechnology? Activists? Agro-business? In a commitment to transparency, the GLP has mined 5 years of data to help the public understand the funding network that shapes the biotechnology debate.”

The amount of money flowing into the campaign against genetically modified organisms (GMOs) is astonishing, but it does not stop the hypocritical complaints against industry’s sponsorship of studies to help show the safety of GMOs. In a recent on-line article, the GLP has published charts to map contributions from not-for-profit non-governmental organizations to anti-biotechnology advocacy groups. Close to a billion dollars ($850M) flowed into the coffers of these organizations from 2012 to 2016. The GLP’s work on tracking this funding is commendable for bringing balance to the debate about the effect of corporate money on health and environmental issues. “Corporate” money, after all, includes money from the lawsuit industry and the advocacy industries.

Well actually, it would be a wonderful world if the GLP’s tracking were unnecessary. In one such alternative universe, people would ask to examine the evidence for and against claims, and they would have a healthy respect for uncertainty.

Studies funded by parties are routinely relied upon in litigation, and they are often pivotal in how courts decide significant claims of environmental or occupational harm.[1] Unfortunately, the sponsorship of studies by plaintiffs’ counsel, third-party litigation funding entities, and advocacy groups is often obscured or hidden.

* * * * * * * * * * * *

I recently happened upon an article of interest in an obscure journal, by a well-known author.[2] The author, John C. Bailar, formerly an Editor-in-Chief of the Journal of the National Cancer Institute, was professor emeritus in the University of Chicago’s Department of Public Health Sciences. He died in September 2016. Bailar was a graduate of the Yale University medical school, and also held a doctorate in statistics.

There is nothing groundbreaking in Bailar’s article, but it is a nice summary of the ways that errors can creep into the scientific literature, short of actual fabrication or falsification of data.[3] It is also worth reading because it comes from one of the several Coronado Conferences, sponsored by an advocacy organization that has fraudulently concealed its funding, The Project on Scientific Knowledge and Public Policy, aka SKAPP.

To be sure, authors of SKAPP-funded articles have invariably cited their funding from SKAPP, and Bailar was no exception. Bailar made the following acknowledgements:

“Support for this paper was provided by The Project on Scientific Knowledge and Public Policy (SKAPP) at The George Washington University School of Public Health and Health Services. It is revised from a paper presented at SKAPP’s March 2006 Coronado Conference “Truth and Advocacy: The Quality and Nature of Litigation and Regulatory Science.” The papers from that conference will be published elsewhere.”[4]

The acknowledgement of support was rather anemic by SKAPP standards. Most SKAPP-funded articles recited something closer to the following statement provided by David Michaels, who headed up SKAPP and worked as an expert witness for the litigation industry, until becoming the Administrator of the Occupational Safety and Health Administration, in President Obama’s administration:[5]

“DM [David Michaels] and CM [Celeste Monforton] are employed by the George Washington University School of Public Health and Health Services as part of the Project on Scientific Knowledge and Public Policy (SKAPP). Their salaries, in part, are funded by the Common Benefit Litigation Expense Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability litigation. SKAPP’s funding is unrestricted; its funders are not given advance notice or the opportunity to review or approve any documents produced by the project. PL [Peter Lurie] is with Public Citizen’s Health Research Group.”

Michaels’ statement was perhaps a little more forthcoming, but few scientists or lay persons would know that his salary and support came from plaintiffs’ lawyers as part of an active litigation effort. Although Michaels claimed that the funding was unrestricted, like Big Tobacco funding, the sponsor, plaintiffs’ counsel, created a substantial selection effect in choosing beneficiaries who would deliver its pre-approved message. The Common Benefit Trust may sound like an eleemosynary, public-spirited organization, with the imprimatur of the federal court system. It was not.

Was Bailar influenced by his source of funding? His topic would have permitted him many examples from the annals of science or litigation, but interestingly one of the few examples Bailar chose to give details about was a scientific dispute between the semiconductor industry and Richard Clapp, who was acting as an expert witness in litigation against that industry. Although Clapp used a study design known to be inaccurate and biased, Bailar touted Clapp’s research over that sponsored by members of the industry. Richard Clapp, in addition to having been an expert witness for the litigation industry on many occasions, also happened to have been a member of SKAPP’s advisory committee. Hmmm.

Whence comes SKAPP funding? SKAPP trades on most readers’ lack of familiarity with how “common benefit funds” are established. They sound like some sort of disembodied charitable trust, such as the Pew Charitable Trusts. In fact, the silicone common benefit trust was nothing more than a funding device for mass federal litigation involving silicone breast implants. Ironically, the funding came from a litigation in which one leading judge described plaintiffs’ expert witnesses as “charlatans,” and the litigation claims as largely based upon fraud.[6] Cynics might believe that Bailar’s choice of Clapp versus the semiconductor industry, regardless of the merits, was driven by a desire to please SKAPP & Clapp.

The common benefit fund for the silicone-gel breast implant litigation was created by Order 13, “Establishing Plaintiffs’ Litigation Expense Fund to Compensate and Reimburse Attorneys for Services Performed and Expenses Incurred for Common Benefit.” The late Judge Sam Pointer, appointed to preside over MDL 926, In re Silicone Gel Breast Implants Products Liability Litigation, Master File No. CV 92-P-10000-S, entered the order on July 23, 1993.  Some of the pertinent terms of Order 13 illustrate how it was supposed to operate:

This order is entered in order to provide for the fair and equitable sharing among plaintiffs of the cost of special services performed and expenses incurred by attorneys acting for the common benefit of all plaintiffs.

  1. Plaintiffs’ Litigation Expense Fund to be Established. Plaintiffs’ National Liaison Counsel … are directed to establish an interest-bearing account to receive and disburse funds as provided in this order.

***

  2. Assessment.

(a)    All plaintiffs and their attorneys who, after this date, either agree — for a monetary consideration — to settle, compromise, dismiss, or reduce the amount of a claim or, with or without a trial, recover a judgment for monetary damages or other monetary relief, including both compensatory and punitive damages, with respect to a breast implant claim are hereby assessed:

(1)    5% of the “gross monetary recovery,” if the agreement is made or the judgment is entered after this date and before November 1, 1993, or

(2)    6% of the “gross monetary recovery,” if the agreement is made or the judgment is entered after October 31, 1993.

Defendants are directed to withhold this assessment from amounts paid to plaintiffs and their counsel, and to pay the assessment into the fund as a credit against the settlement or judgment.  ***

  3. Disbursements.

(a)    Payments may be made from the fund to attorneys who provide services or incur expenses for the joint and common benefit of plaintiffs in addition to their own client or clients.  Attorneys eligible are not limited to Plaintiffs’ National Liaison Counsel and members of Plaintiffs’ National Steering Committee, but include, for example, other attorneys called upon by them to assist in performing their responsibilities, State Liaison Counsel, and other attorneys performing similar responsibilities in state court actions in which the presiding state-court judge has imposed similar obligations upon plaintiffs to contribute to the fund.

(b)    Payments will be allowed only to compensate for special services performed, and to reimburse for special expenses incurred, for the joint and common benefit of all plaintiffs.

***

(c)    No amounts will be disbursed without review and approval by a committee of federal and state judicial officers to be designated by the court.  The committee may, however, utilize the services of a special master to assist in this review, and may authorize one or more of its members to act for the committee in approving particular types of applications for disbursement.

(d)    If the fund exceeds the amount needed to make payments as provided in this order, the court will order a refund to those who have contributed to the fund.  Any such refund will be made in proportion to the amount of the contributions.

For a while, a defense lawyer, representing the defendants in the silicone MDL, participated in discussions concerning MDL 926 Order 13 funds, until the plaintiffs’ lawyers decided that his services were not needed, and excluded him from discussions of the use of the monies. The reality is that the plaintiffs’ lawyers in the silicone litigation were able to bamboozle the slim oversight committee into approving a propaganda campaign against Daubert gatekeeping, and that recipients of the plaintiffs’ lawyers’ largesse were able to misrepresent their funding as though it were from a federal court.

There are further ironies connected with the silicone common benefit trust. First, the silicone litigation was effectively over when the court-appointed expert witnesses’ reports announced that the plaintiffs’ expert witnesses lacked sound scientific evidence to support conclusions of causation. SKAPP’s website reports that its activities started around 2002, by which time the court-appointed witnesses, the British Ministry of Health, and the Institute of Medicine’s select committee had all reported that there was no basis for the plaintiffs’ causal claims in litigation.[7] The second irony is that SKAPP, through its sponsorship of various research and writing projects, had made the recipients of SKAPP money, by the terms of Order 13, agents of the silicone plaintiffs’ lawyers and their clients. Recipients of SKAPP funding who did not disclose that their support or salaries came from the coffers of plaintiffs’ counsel were engaged in misleading their readers and the scientific and legal communities.

I have written often in the past about SKAPP as an agent of plaintiffs’ counsel in mass tort litigation.[8] The concern is not new, but it has continuing significance because of the asymmetrical standard advanced by the lawsuit industry and its scientific advisors who seek to disqualify manufacturing industry and its scientific advisors from participating in scientific debate and argument about various health claims.[9]


[1]  See, e.g., Leaf River Forest Prods. v. Ferguson, 662 So. 2d 648, 657 (Miss. 1995) (litigation involving defense expert witness’s reliance upon dioxin studies funded by defendant paper mills); Maurer v. Heyer-Schulte Corp., No. Civ. A. 92-3485, 2002 WL 31819160 at *3 (E.D. La. Dec. 13, 2002) (granting defendant’s summary judgment against plaintiff’s claim that breast implants caused her harm; citing defendants’ sponsored epidemiologic studies showing no causal link, including epidemiologic study conducted in Sweden); Nat’l Res. Def. Council v. Evans, 232 F. Supp. 2d 1003, 1013 (N.D. Cal. 2002) (“commend[ing] defendants’ sponsorship of independent scientific research…”); FTC v. Pantron I, Corp., 1991 U.S. Dist. LEXIS 21858 (C.D. Cal. Sept 6, 1991) (finding study funded by defendants met “basic and fundamental requirements for scientific validity and reliability”).

[2]  John C. Bailar, “How to distort the scientific record without actually lying: truth, and the arts of science,” 11 European J. Oncol. 217 (2006).

[3]  Id. at 218.

[4]  Id. at 223.

[5]  David Michaels, Celeste Monforton & Peter Lurie, “Selected science: an industry campaign to undermine an OSHA hexavalent chromium standard,” 65 Envt’l Health 5 (2006).

[6]     Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009).

[7]   Independent Review Group, Silicone Breast Implants: The Report of the Independent Review Group 8, 22-23 (July 1998) (concluding that there was no demonstrable risk of connective tissue disease from silicone breast implants); Stuart Bondurant, Virginia Ernster, and Roger Herdman, eds., Safety of Silicone Breast Implants (1999) (rejecting plaintiffs’ theories and litigation claims of systemic disease).

[8]   “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013). See also Walter Olson, Schools for Misrule: Legal Academia and an Overlawyered America 121-22 (2011); David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1, 39 & n.211 (2015); Ted Frank, “Daubert Debate,” Overlawyered (July 5, 2003); Peter Nordberg, “Bernstein on SKAPP (part 1),” Daubert on the Web (July 2, 2003).

[9]   Consider the media hysteria over former President Obama’s nomination of Dr. Robert Califf to serve as Commissioner of the Food and Drug Administration. The criticism was based upon his having served as the founding director of the Duke Clinical Research Institute, which received funding directly from pharmaceutical companies. The Senate confirmed Califf (89 to 4), but the controversy highlights the hypocrisy in play. Brady Dennis, “Senate confirms Robert Califf as new FDA commissioner,” Wash. Post (Feb. 24, 2016).

Judicial Dodgers – Reassigning the Burden of Proof on Rule 702

May 13th, 2020

Explaining the denial of a Rule 702 motion in terms of the availability of cross-examination is just one among several dodges that judges use to avoid fully engaging with Rule 702’s requirements.[1] Another dodge involves shifting the burden of proof on admissibility from the proponent of the challenged expert witness to the challenger. This dodge would appear to violate well-established law.

The Supreme Court, in deciding Daubert, made clear that the question whether an expert witness’s opinion was admissible was governed under the procedure set out in Federal Rule of Evidence 104(a).[2] The significance of placing the Rule 702 issues under the procedures set out in Rule 104(a) is that the trial judge must make the admissibility determination, and that he or she is not bound by the rules of evidence. The exclusion of the admissibility determination from the other rules of evidence means that trial judges can look at challenged expert witnesses’ relied-upon materials, and other facts, data, and opinions, regardless of these materials’ admissibility. The Supreme Court also made clear that the admissibility of an expert witness’s opinion testimony should be shown “by a preponderance of proof.”[3] Every court that has directly addressed the burden of proof issue in a Rule 702 challenge to expert witness testimony has clearly assigned that burden to the proponent of the testimony.[4]

Trial courts intent upon evading gatekeeping responsibility, however, have created a presumption of admissibility. When called upon to explain why they have denied Rule 702 challenges, these courts advert to the presumption as an explanation and justification for the denial.[5] Some courts even manage to discuss the burden of proof upon the proponent, and a presumption of admissibility, in almost the same breath.[6]

In his policy brief for amending Rule 702, Lee Mickus traces the presumption innovation to Borawick v. Shay, a 1995 Second Circuit decision that involved a challenge to hypnotically refreshed (or created) memory.[7] In Borawick, the Court of Appeals held that the plaintiff’s challenge turned upon whether Borawick’s testimony was competent or admissible, and that it did not involve “the admissibility of data derived from scientific techniques or expert opinions.”[8] Nevertheless, in dicta, the court observed that “by loosening the strictures on scientific evidence set by Frye, Daubert reinforces the idea that there should be a presumption of admissibility of evidence.”[9]

Presumptions come in different forms and operate differently, and this casual reference to a presumption in dictum could mean any number of things. A presumption of admissibility could mean simply that unless there is a challenge to an expert witness’s opinion, the opinion is admissible.[10] The presumption could be a bursting-bubble (Thayer) presumption, which disappears once the opponent of the evidence credibly raises questions about the evidence’s admissibility. The presumption might be something that does not disappear, but once the admissibility is challenged, the presumption continues to provide some evidence for the proponent. And in the most extreme forms, the (Morgan) presumption might be nothing less than a judicially artful way of saying that the burden of proof is shifted to the opponent of the evidence to show inadmissibility.[11]

Although Borawick suggested that there should be a presumption, it did not exactly hold that one existed. A presumption in favor of the admissibility of evidence raises many questions about the nature, definition, and operation of the presumption. It throws open the question of what evidence is needed to rebut the presumption. For instance, may a party whose expert witness is challenged decline to defend the witness’s compliance with Rule 702, stand on the presumption, and still prevail?

There is no mention of a presumption in Rule 702 itself, or in any Supreme Court decision on Rule 702, or in the advisory committee notes. Inventing a presumption, especially a poorly described one, turns the judicial discretion to grant or deny a Rule 702 challenge into an arbitrary decision.

Most importantly, given the ambiguity of “presumption,” a judicial opinion that denies a Rule 702 challenge by invoking a legal fiction fails to answer the question whether the proponent of the expert witness has carried the burden of showing that all the subparts of Rule 702 were satisfied by a preponderance of the evidence. While judges may prefer not to endorse or disavow the methodology of an otherwise “qualified” expert witness, their office requires them to do so, and not hide behind fictional presumptions.



[1]  “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[2]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592 n.10 (1993).

[3]  Id., citing Bourjaily v. United States, 483 U. S. 171, 175-176 (1987).

[4]  Barrett v. Rhodia, Inc., 606 F.3d 975, 980 (8th Cir. 2010) (quoting Marmo v. Tyson Fresh Meats, Inc., 457 F.3d 748, 757 (8th Cir. 2006)); Beylin v. Wyeth, 738 F. Supp. 2d 887 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.); Pride v. BIC Corp., 218 F.3d 566, 578 (6th Cir. 2000); Reece v. Astrazeneca Pharms., LP, 500 F. Supp. 2d 736, 742 (S.D. Ohio 2007).

[5]  See, e.g., Cates v. Trustees of Columbia Univ. in City of New York, No. 16CIV6524GBDSDA, 2020 WL 1528124, at *6 (S.D.N.Y. Mar. 30, 2020) (discussing presumptive admissibility); Price v. General Motors, LLC, No. CIV-17-156-R, 2018 WL 8333415, at *1 (W.D. Okla. Oct. 3, 2018) (“[T]here is a presumption under the Rules that expert testimony is admissible.”) (internal citation omitted); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“The Second Circuit has made clear that Daubert contemplates liberal admissibility standards, and reinforces the idea that there should be a presumption of admissibility of evidence.”); Advanced Fiber Technologies (AFT) Trust v. J & L Fiber Services, Inc., No. 1:07-CV-1191, 2015 WL 1472015, at *20 (N.D.N.Y. Mar. 31, 2015) (“In assuming this [gatekeeper] role, the Court applies a presumption of admissibility.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *2 (S.D.N.Y. Jan. 22, 2015) (“[T]he court should apply ‘a presumption of admissibility’ of evidence” in carrying out the gatekeeper function.); Martinez v. Porta, 598 F. Supp. 2d 807, 812 (N.D. Tex. 2009) (“Expert testimony is presumed admissible”).

[6]  S.E.C. v. Yorkville Advisors, LLC, 305 F. Supp. 3d 486, 503-04 (S.D.N.Y. 2018) (“The party seeking to introduce the expert testimony bears the burden of establishing by a preponderance of the evidence that the proffered testimony is admissible. There is a presumption that expert testimony is admissible … .”) (internal citations omitted).

[7]  Borawick v. Shay, 68 F.3d 597, 610 (2d Cir. 1995), cert. denied, 517 U.S. 1229 (1996).

[8]  Id.

[9]  Id. (referring to Frye v. United States, 293 F. 1013 (D.C.Cir.1923)).

[10]  In re Zyprexa Prod. Liab. Litig., 489 F. Supp. 2d 230, 282 (E.D.N.Y. 2007) (Weinstein, J.) (“Since Rule 702 embodies a liberal standard of admissibility for expert opinions, the assumption the court starts with is that a well-qualified expert’s testimony is admissible.”).

[11]  See, e.g., Orion Drilling Co., LLC v. EQT Prod. Co., No. CV 16-1516, 2019 WL 4273861, at *34 (W.D. Pa. Sept. 10, 2019) (after declaring that “[e]xclusion is disfavored” under Rule 702, the court flipped the burden of production and declared the opinion testimony admissible, stating “Orion has not established that incorporation of the data renders Ray’s opinion unreliable.”).

Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions

May 11th, 2020

In my last post,[1] I praised Lee Mickus’s recent policy paper on amending Rule 702 for its persuasive force on the need for an amendment, as well as a source for helping lawyers anticipate common judicial dodges to a faithful application of the rule.[2] There are multiple dodges used by judicial dodgers, and it behooves litigants to recognize and anticipate them. In this post, and perhaps future ones, I elaborate upon the concerns that Mickus documents.

One prevalent judicial response to the Rule 702 motion is to kick the can and announce that the challenge to an expert witness’s methodological shenanigans can and should be addressed by cross-examination. This judicial response was, of course, the standard one before the 1993 Daubert decision, but Justice Blackmun’s opinion kept it alive in frequently quoted dicta:

“Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.”[3]

Justice Blackmun, no doubt, believed he was offering a “helpful” observation here, but the reality is quite different. Traditionally, courts allowed qualified expert witnesses to opine with wild abandon, after showing that they had the very minimal qualifications required to do so in court. In the face of this traditional judicial lassitude, “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof” were all a litigant could hope to accomplish in litigation. Furthermore, the litany of remedies for “shaky but admissible evidence” fails to help lower court judges and lawyers sort shaky but admissible evidence from shaky and inadmissible evidence.

Perhaps even more to the point, cases at common law “traditionally” did not involve multivariate logistic regression, structural equation models, propensity score weighting, and the like. Juries did just fine on whether Farmer Brown had exercised due care when he ran over his neighbor’s cow with his tractor, or even on whether a child born 350 days after the putative father’s death had been sired by the testator and was entitled to inherit from “dad.”

Mickus is correct that a trial judge’s comment that the loser of a Rule 702 motion is free to cross-examine is often a dodge, an evasion, or an outright failure to engage with the intricacies of a complex methodological challenge.[4] Stating that the “traditional and appropriate means of attacking shaky but admissible evidence” remain available is a truism, and might be offered as judicial balm to the motion loser, but the availability of such means is hardly an explanation or justification for denying the Rule 702 motion. Furthermore, Justice Blackmun’s observation about traditional means looked back at an era when, in most state and federal courts, a person found to be minimally qualified could pretty much say anything, regardless of scientific validity. That was the tradition that stood in active need of reform when Daubert was decided in 1993.

Mickus is also certainly correct that the whole point of judicial gatekeeping is that the presentation of viva voce testimony before juries is not an effective method for revealing shaky, inadmissible opinion testimony. A few courts have acknowledged that cross-examination in front of a jury is not an appropriate justification for admitting methodologically infirm expert witness opinion testimony. Judge Jed Rakoff, who served on the President’s Council of Advisors on Science and Technology,[5] addressed the limited ability of cross-examination in the context of forensic evidence:

“Although effective cross-examination may mitigate some of these dangers, the explicit premise of Daubert and Kumho Tire is that, when it comes to expert testimony, cross-examination is inherently handicapped by the jury’s own lack of background knowledge, so that the Court must play a greater role, not only in excluding unreliable testimony, but also in alerting the jury to the limitations of what is presented.”[6]

Judge Rakoff’s point is by no means limited to forensic evidence, and it has been acknowledged more generally by Professor Daniel Capra, the Reporter to the Advisory Committee on Evidence Rules:

“the key to Daubert is that cross-examination alone is ineffective in revealing nuanced defects in expert opinion testimony and that the trial judge must act as a gatekeeper to ensure that unreliable opinions don’t get to the jury in the first place.”[7]

Juries do not arrive at the courthouse knowledgeable about statistical and scientific methods; nor are they prepared to spend weeks going over studies to assess their quality, and to determine whether an expert witness engaged in cherry picking, misapplied a methodology, or conducted an insufficient investigation.[8] In discussing the problem of expert witnesses’ overstating the strength of their opinions, beyond what is supported by evidence, the Reporter stressed the limits and ineffectiveness of remedial adversarial cross-examination:

“Perhaps another way to think about cross-examination as a remedy is to compare the overstatement issue to the issues of sufficiency of basis, reliability of methodology, and reliable application of that methodology. As we know, those three factors must be shown by a preponderance of the evidence. The whole point of Rule 702 — and the Daubert-Rule 104(a) gatekeeping function — is that these issues cannot be left to cross-examination. The underpinning of Daubert is that an expert’s opinion could be unreliable and the jury could not figure that out, even given cross-examination and argument, because the jurors are deferent to a qualified expert (i.e., the white lab coat effect). The premise is that cross-examination cannot undo the damage that has been done by the expert who has power over the jury. This is because, for the very reason that an expert is needed (because lay jurors need assistance) the jury may well be unable to figure out whether the expert is providing real information or junk. The real question, then, is whether the dangers of overstatement are any different from the dangers of insufficient basis, unreliability of methodology, and unreliable application. Why would cross-examination be insufficient for the latter yet sufficient for the former?

It is hard to see any difference between the risk of overstatement and the other risks that are regulated by Rule 702. When an expert says that they are certain of a result — when they cannot be — how is that easier for the jury to figure out than if an expert says something like ‘I relied on four scientifically valid studies concluding that PCB’s cause small lung cancer’. When an expert says he employed a ‘scientific methodology’ when that is not so, how is that different from an expert saying “I employed a reliable methodology” when that is not so?”[9]

The Reporter’s example of PCBs and small lung cancer was an obvious reference to the Joiner case, in which the Supreme Court held that the trial judge had properly excluded causation opinions. The Reporter’s point goes directly to the cross-examination excuse for shirking the gatekeeping function. In Joiner, the Court held that gatekeeping was necessary when cross-examination was insufficient in the face of an analytical gap between methodology and conclusion.[10] Indeed, such gaps are, or should be, at the heart of most well-conceived Rule 702 challenges.

The problem is not only that juries defer to expert witnesses. Juries lack the competence to assess scientific validity. Although many judges are lacking in such competence, at least litigants can expect them to read the Reference Manual on Scientific Evidence before they read the parties’ briefs and the expert witnesses’ reports. If the trial judge’s opinion evidences ignorance of the Manual, then at least there is the possibility of an appeal. It will be a strange day in a stranger world, when a jury summons arrives in the mail with a copy of the Manual!

The rules of evidence permit expert witnesses to rely upon inadmissible evidence, at least when experts in their field would reasonably do so. Deciding whether the reliance is reasonable requires the decision maker to go outside the “proofs” that would typically be offered at trial. Furthermore, the decision maker – the gatekeeper – will have to read the relied-upon study and data to evaluate the reasonableness of the reliance. In a jury trial, the actual studies relied upon are rarely admissible, and so the jury almost never has the opportunity to read them to make its own determination of the reasonableness of reliance, or of whether the study and its data really support what the expert witness draws from them.

Of course, juries do not have to write opinions about their findings. They need neither explain nor justify their verdicts, once the trial court has deemed that there is the minimally sufficient evidence to support a verdict. Juries, with whatever help cross-examination provides, in the absence of gatekeeping, cannot deliver anything approaching scientific due process of law.

Despite Supreme Court holdings, a substantially revised and amended Rule 702, and clear direction from the Advisory Committee, some lower courts have actively resisted enforcing the requirements of Rule 702. Part of this resistance consists in pushing the assessment of the reliability of the data and assumptions used in applying a given methodology out of the gatekeeping column and into the jury’s column. Despite the clear language of Rule 702, and the Advisory Committee Note,[11] some Circuits of the Court of Appeals have declared that assessing the reliability of assumptions and data is not judges’ work (outside of a bench trial).[12]

As Seinfeld has taught us, rules are like reservations. It is not enough to make the rules; you have to keep and follow them. Indeed, following the rule is really the important part.[13] Although an amended Rule 702 might include a provision that “we really mean this,” perhaps it is worth a stop at the Supreme Court first to put down the resistance.


[1]  “Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[2]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

[3]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 596 (1993).

[4]  See, e.g., AmGuard Ins. Co. v. Lone Star Legal Aid, No. CV H-18-2139, 2020 WL 60247, at *8 (S.D. Tex. Jan. 6, 2020) (“[O]bjections [that the expert could not link her experienced-based methodology to her conclusions] are better left for cross examination, not a basis for exclusion.”); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“To the extent Defendant argues that Mr. McPartland’s conclusions are unreliable, it may attack his report through cross examination.”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006) (“In a close case, a court should permit the testimony to be presented at trial, where it can be tested by cross-examination and measured against the other evidence in the case.”) (internal citation omitted). See also Adams v. Toyota Motor Corp., 867 F.3d 903, 916 (8th Cir. 2017) (affirming admission of expert testimony, reiterating the flexibility of the Daubert inquiry and emphasizing that defendant’s concerns could all be addressed with “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof”); Liquid Dynamics Corp. v. Vaughan Corp., 449 F.3d 1209, 1221 (Fed. Cir. 2006) (“The identification of such flaws in generally reliable scientific evidence is precisely the role of cross-examination.” (internal citation omitted)); Carmichael v. Verso Paper, LLC, 679 F. Supp. 2d 109, 119 (D. Me. 2010) (“[W]hen the adequacy of the foundation for the expert testimony is at issue, the law favors vigorous cross-examination over exclusion.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *6 (S.D.N.Y. Jan. 22, 2015) (“In light of the ‘presumption of admissibility of evidence,’ that opportunity [for cross-examination] is sufficient to ensure that the jury receives testimony that is both relevant and reliable.”) (internal citation omitted).

Even the most explicitly methodological challenges are transmuted into cross-examination issues by refusenik courts. For instance, cherry picking is reduced to a credibility issue for the jury and not germane to the court’s Rule 702 determination. In re Chantix Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1288 (N.D. Ala. 2012) (finding that an expert witness’s deliberate decision not to rely upon clinical trial data merely “is a matter for cross-examination, not exclusion under Daubert”); In re Urethane Antitrust Litig., 2012 WL 6681783, at *3 (D. Kan.) (“The extent to which [an expert] considered the entirety of the evidence in the case is a matter for cross-examination.”); Bouchard v. Am. Home Prods. Corp., 2002 WL 32597992, at *7 (N.D. Ohio) (“If the plaintiff believes that the expert ignored evidence that would have required him to substantially change his opinion, that is a fit subject for cross-examination.”). Similarly, courts have by ipse dixit made the flawed application of a standard methodology into merely a credibility issue to be explored by cross-examination rather than addressed by judicial gatekeeping. United States v. Adam Bros. Farming, 2005 WL 5957827, at *5 (C.D. Cal. 2005) (“Defendants’ objections are to the accuracy of the expert’s application of the methodology, not the methodology itself, and as such are properly reserved for cross-examination.”); Oshana v. Coca-Cola Co., 2005 WL 1661999, at *4 (N.D. Ill.) (“Challenges addressing flaws in an expert’s application of reliable methodology may be raised on cross-examination.”).

[5]  President’s Council of Advisors on Science and Technology, Report to the President on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (Sept. 2016).

[6]  United States v. Glynn, 578 F. Supp. 2d 567, 574 (S.D.N.Y. 2008) (Rakoff, J.).

[7]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019) (comments of the Reporter).

[8]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 50 (April 1, 2018) (identifying issues such as insufficient investigation, cherry-picking data, or misapplying standard methodologies, as examples of a “white lab coat” problem resulting from juries’ inability to evaluate expert witnesses’ factual bases, methodologies, and applications of methods).

[9]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 10-11 (Oct. 1, 2019) (comments of the Reporter on possible amendment of Rule 702) (internal citation to Joiner omitted).

[10]  Id. at 11 n.5.

[11]  See In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994) (calling for a close, careful analysis of the application of a proper methodology to every step in the case; “any step that renders the analysis unreliable renders the expert’s testimony inadmissible whether the step completely changes a reliable methodology or merely misapplies that methodology”).

[12]  See, e.g., City of Pomona v. SQM North Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014) (rejecting the Paoli any-step approach without careful analysis of the statute, the advisory committee note, or Supreme Court decisions); Manpower, Inc. v. Ins. Co. of Pa., 732 F.3d 796, 808 (7th Cir. 2013) (“[t]he reliability of data and assumptions used in applying a methodology is tested by the adversarial process and determined by the jury; the court’s role is generally limited to assessing the reliability of the methodology – the framework – of the expert’s analysis”); Bonner v. ISP Techs., Inc., 259 F.3d 924, 929 (8th Cir. 2001) (“the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination”).

[13]  Despite the clarity of the revised Rule 702, and the intent to synthesize Daubert, Joiner, Kumho Tire, and Weisgram, some courts have insisted that nothing changed with the amended rule. See, e.g., Pappas v. Sony Elec., Inc., 136 F. Supp. 2d 413, 420 & n.11 (W.D. Pa. 2000) (opining that Rule 702 as amended did not change the application of Daubert within the Third Circuit) (“The Committee Notes to the amended Rule 702 cite and discuss several Court of Appeals decisions that have properly applied Daubert and its progeny. Among these decisions are numerous cases from the Third Circuit. See Committee Note to 2000 Amendments to Fed. R.Evid. 702. Accordingly, I conclude that amended Rule 702 does not effect a change in the application of Daubert in the Third Circuit.”). Of course, if nothing changed, then the courts that take this position should be able to square their decisions with text of Rule 702, as amended in 2000.

Should Federal Rule of Evidence 702 Be Amended?

May 8th, 2020

Almost 27 years have passed since the United States Supreme Court issued its opinion in Daubert.[1] The holding was narrow. The Court reminded the Bar that Federal Rule of Evidence 702 was a statute, and that courts were thus bound to read it as a statute. The plain language of Rule 702 had been adopted by the Court in 1972, and then enacted by Congress, to be effective on July 1, 1975. Absent from the enacted Rule 702 was the “twilight zone” test articulated by a lower federal court in 1923.[2] In the Daubert case, the defense erroneously urged the application of the twilight zone test. In the post-modern way, the plaintiffs urged the application of no test.[3] The Court held simply that the twilight zone test had not been incorporated in the statutory language of Rule 702. Instead, the Court observed that the plain language of the statute imposed “helpfulness” and epistemic requirements for admitting expert witness opinion testimony.

It took another two Supreme Court decisions to flesh out the epistemic requirements for expert witnesses’ opinions,[4] and a third decision in which the Court told the Bench and Bar that the requirements of Rule 702 are “exacting.”[5] After the Supreme Court had added significantly to Rule 702’s helpfulness and knowledge requirements, the Advisory Committee revised the rule in 2000, to synthesize and incorporate these four Supreme Court decisions, and scholarly thinking about the patho-epistemology of expert witness opinion testimony. The Committee revised Rule 702 again in 2011, but only on “stylistic” issues, without any intent to add to or subtract from the 2000 rule.

Not all judges got the memo, or bothered to read and implement the revised Rule 702, in 2000. At both the District Court and the Circuit levels, courts persisted, and continue to persist, in citing retrograde decisions that predate the 2000 amendment, and even predate the 1993 decision in Daubert. Even the Supreme Court, in a 2011 opinion that did not involve the interpretation of Rule 702, was misled by a Solicitor General’s amicus brief into citing one of the most anti-science, anti-method, post-modern, pre-Daubert, anything-goes decisions.[6] The judicial resistance to Rule 702 is well documented in many scholarly articles,[7] by the Reporter to the Advisory Committee,[8] and in the pages of this and other blogs.

In 2015, when evidence scholar David Bernstein argued that Rule 702 required amending,[9] I acknowledged the strength of his argument, but resisted because of what I perceived to be the danger of opening up the debate in Congress.[10] Professor Bernstein and lawyer Eric Lasker detailed and documented the many judicial dodges and evasions engaged in by many judges intent upon ignoring the clear requirements of Rule 702.

A paper published this week by the Washington Legal Foundation has updated and expanded the case for reform made by Professor Bernstein five years ago. In his advocacy paper, lawyer Lee Mickus has collated and analyzed some of the more recent dodges, which will depress the spirits of anyone who believes in evidence-based decision making.[11] My resistance to reform by amendment is waning. The meaning and intent of Rule 702 has been scarred over by precedent based upon judicial ipse dixit, and not Rule 702.

Mickus’s paper, like Professor Bernstein’s articles before, makes a persuasive case for reform, but this new paper does not evaluate the vagaries of navigating an amendment through the Advisory Committee, the Supreme Court, and Congress. Even if the reader is not interested in the amendment process, the paper can be helpful to the advocate in anticipating dodgy rule denialism.


[1]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[2]  Frye v. United States, 54 App. D.C. 46, 293 F. 1013 (1923).

[3]  See “The Advocates’ Errors in Daubert” (Dec. 28, 2018).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999).

[5]  Weisgram v. Marley Co., 528 U.S. 440, 455 (2000) (Ginsburg, J.) (unanimous decision).

[6]  Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27, 131 S. Ct. 1309, 1319 (2011) (citing Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262, 298 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986)). See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1”; “Part 2”; “Part 3”; “Part 4”; “Part 5”; and “Part 6”.

[7]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015); David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2014).

[8]  See Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 52 (April 1, 2018) (“[T]he fact remains that some courts are ignoring the requirements of Rule 702(b) and (d). That is frustrating.”).

[9]  David E. Bernstein & Eric Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 Wm. & Mary L. Rev. 1 (2015).

[10]  “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

[11]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

Data Games – A Techno Thriller

April 22nd, 2020

Sherlock Holmes, Hercule Poirot, Miss Marple, Father Brown, Harry Bosch, Nancy Drew, Joe and Frank Hardy, Sam Spade, Columbo, Lennie Briscoe, Inspector Clouseau, and Dominic Da Vinci:

Move over; there is a new super sleuth in town.

Meet Professor Ken Wheeler.

Ken is a statistician, and so by profession, he is a data detective. In his day job, he teaches at a northeastern university, where his biggest challenges are managing the expectations of students and administrators, while trying to impart statistical learning. At home, Ken rarely manages to meet the expectations of his wife and son. But as some statisticians are wont to do, Ken sometimes takes on consulting gigs that require him to use his statistical skills to help litigants sort out the role of chance in cases that run from discrimination claims to rare health effects. In this contentious, sharp-elbowed environment, Ken excels. And truth be told, Ken finds great satisfaction in identifying the egregious errors and distortions of adversary statisticians.

Wheeler’s sleuthing usually involves ascertaining random error or uncovering a lurking variable, but in Herbert I. Weisberg’s just-published novel, Data Games: A Techno Thriller, Wheeler is drawn into a high-stakes conspiracy of intrigue, violence, and fraud that goes way beyond the run-of-the-mine p-hacking and data dredging.

An urgent call from a scientific consulting firm puts Ken Wheeler in the midst of imminent disaster for a pharmaceutical manufacturer, whose immunotherapy anti-cancer wonder drug, Verbana, is under attack. A group of apparently legitimate scientists have obtained the dataset from Verbana’s pivotal clinical trial, and they appear on the verge of blowing Verbana out of the formulary with a devastating analysis that will show that the drug causes early dementia. Wheeler’s mission is to debunk the debunking analysis when it comes.

For those readers who are engaged in the litigation defense of products liability claims against medications, the scenario is familiar enough. The scientific group studying Verbana’s alleged side effect seems on the up-and-up, but its members appear to be engaged in a cherry-picking exercise, guided by a dubious theory of biological plausibility, known as the “Kreutzfeld hypothesis.”

It is not often that mystery novels turn on surrogate outcomes, biomarkers, genomic medicine, and predictive analytics, but Data Games is no ordinary mystery. And Wheeler is no ordinary detective. To be sure, the middle-aged Wheeler drives a middle-aged BMW, not a Bond car, and certainly not a Bonferroni. And Wheeler’s toolkit may not include a Glock, but he can handle the lasso, the jackknife, and the logit, and serve them up with SAS. Wheeler sees patterns where others see only chaos.

Unlike the typical Hollywood rubbish about stereotyped evil pharmaceutical companies, the hero of Data Games finds that there are sinister forces behind what looks like an honest attempt to uncover safety problems with Verbana. These sinister forces will use anything to achieve their illicit ends, including superficially honest academics with white hats. The attack on Verbana gets the FDA’s attention and an urgent hearing in White Oak, where Wheeler shines.

The author of Data Games, Herbert I. Weisberg, is himself a statistician, and a veteran of some of the dramatic data games he writes about in this novel. Weisberg is perhaps better known for his “homework” books, such as Willful Ignorance: The Mismeasure of Uncertainty (2014), and Bias and Causation: Models and Judgment for Valid Comparisons (2010). If, however, you ever find yourself in a pandemic lockdown, Weisberg’s Data Games: A Techno Thriller is a perfect way to escape. For under $3, you will be entertained, and you might even learn something about probability and statistics.

Disproportionality Analyses Misused by Lawsuit Industry

April 20th, 2020

Adverse event reporting is a recognized, important component of pharmacovigilance. Regulatory agencies around the world further acknowledge that an increased rate of reporting of a specific adverse event may signal the possible existence of an association. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication – adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]

Most sane and sensible epidemiologists recognize that the low quality, inconsistencies, and biases of the data in adverse event reporting databases render studies of disproportionate reporting “poor surrogates for controlled epidemiologic studies.” In the face of incomplete and inconsistent reporting, so-called disproportionality analyses (“DPA”) assume that incomplete reporting will be constant for all events for a specific medication. Regulatory attention, product labeling, lawyer advertising and client recruitment, social media and publicity, and time since launch are all known to affect reporting rates, and to ensure that reporting rates for some event types for a specific medication will be higher. Thus, the DPA assumptions are virtually always false and unverifiable.[2]

DPAs are non-analytical epidemiologic studies that cannot rise in quality or probativeness above the level of the anecdotes upon which they are based. DPAs may generate signals or hypotheses, but they cannot test hypotheses of causality. Although simple in concept, DPAs involve some complicated computations that imbue them with an aura of “proofiness.” As would-be studies that lack probativeness for causality, they are thus ideal tools for the lawsuit industry to support litigation campaigns against drugs and medical devices. Indeed, if a statistical technique is difficult to understand but relatively easy to perform and even easier to pass off to unsuspecting courts and juries, then you can count on its metastatic use in litigation. The DPA has become one of the favorite tools of the lawsuit industry’s statisticians. This litigation use, however, cannot obscure the simple fact that the relative reporting risk provided by a DPA can never rise to the level of a relative risk.
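The computations behind a DPA are indeed simple in concept. A minimal sketch (all counts hypothetical, and the helper names `prr` and `ror` illustrative rather than any agency’s software) of the two most common disproportionality measures, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), computed from the standard 2×2 table of spontaneous reports – note that the inputs are counts of reports, not patients, which is why the outputs are ratios of reporting, never of risk:

```python
# Standard 2x2 table of spontaneous adverse event reports:
#   a = reports of the event of interest for the drug of interest
#   b = reports of all other events for the drug of interest
#   c = reports of the event of interest for all other drugs
#   d = reports of all other events for all other drugs

def prr(a, b, c, d):
    """Proportional reporting ratio: the event's share of the drug's
    reports, divided by its share of all other drugs' reports."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting odds ratio: the cross-product ratio of the 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical counts: 30 reports of the event with the drug, 970 other
# reports for the drug; 100 reports of the event with other drugs,
# 9,900 other reports for other drugs.
print(round(prr(30, 970, 100, 9900), 2))  # 3.0
print(round(ror(30, 970, 100, 9900), 2))  # 3.06
```

The arithmetic resembles a relative risk calculation, which is precisely why the results are so easily passed off as risk estimates; but because the denominators are counts of reports rather than persons at risk, the output measures only disproportionate reporting.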

In one case in which a Parkinson’s disease patient claimed that his compulsive gambling was caused by his use of the drug Requip, the plaintiff’s expert witness attempted to invoke a DPA in support of his causal claim. In granting a Rule 702 motion to exclude the expert witnesses who relied upon a DPA, the trial judge rejected the probativeness of DPAs, based upon the FDA’s rejection of such analyses for anything other than signal detection.[3]

In the Accutane litigation, statistician David Madigan attempted to support his fatally weak causation opinion with a DPA for Crohn’s disease and Accutane adverse event reports. According to the New Jersey Supreme Court, Madigan claimed that his DPA showed a “striking signal of disproportionality” indicative of a “strong association” between Accutane use and Crohn’s disease.[4]  With the benefit of a thorough review by the trial court, the New Jersey Supreme Court found other indicia of unreliability in Madigan’s opinions, such that it was not fooled by Madigan’s shenanigans. In any event, no signal of disproportionality could ever show an association between medication use and a disease; at best the DPA can show only an association between reporting of the medication use and the outcome of interest.

In litigation over Mirena and intracranial hypertension, one of the lawsuit industry’s regulars, Mahyar Etminan, published a DPA based upon the FDA’s Adverse Event Reporting System, which purported to find an increased reporting odds ratio.[5] Unthinkingly, the plaintiffs’ other testifying expert witnesses relied upon Etminan’s study. When a defense expert witness pointed out that Etminan had failed to adjust for age and gender in his multivariate analysis,[6] Etminan repudiated his findings.[7] Remarkably, when Etminan published his original DPA in 2015, he declared that he had no conflicts, but when he published his repudiation, he disclosed that he “has been an expert witness in Mirena litigation in the past but is no longer part of the litigation.” The Etminan kerfuffle helped scuttle the plaintiffs’ assault on Mirena.[8]

DPAs have, on occasion, bamboozled federal judges into treating them as analytical epidemiology that can support causal claims. For instance, misrepresentations or misunderstandings of what DPAs can and cannot do carried the day in a Rule 702 contest on the admissibility of opinion testimony by statistician Rebecca Betensky. In multidistrict litigation over the safety of inferior vena cava (“IVC”) filters, plaintiffs’ counsel retained Betensky to prepare a DPA of adverse events reported for the defendants’ retrievable filters. The MDL judge’s description of Betensky’s opinion demonstrates that her DPA was either misrepresented or misunderstood:

“In this MDL, Dr. Betensky opines generally that there is a higher risk of adverse events for Bard’s retrievable IVC filters than for its permanent SNF.”[9]

The court clearly took Betensky to be opining about risk and not the risk of reporting. The court’s opinion goes on to describe Betensky’s calculation of a “reporting risk ratio,” but held that she could testify that the retrievable IVC filters increased the risk of the claimed adverse events, and not merely that there was an increase in reporting risk ratios.

Betensky acknowledged that the reporting risk ratios were “imperfect estimates of the actual risk ratios,”[10] but nevertheless dismissed all caveats about the inability of DPAs to assess actual increased risk. The trial court quoted Dr. Betensky’s attempt to infuse analytical rigor into a data mining exercise:

“[A]dverse events are generally considered to be underreported to the databases, and potentially differentially by severity of adverse event and by drug or medical device. . . . It is important to recognize that underreporting in and of itself is not problematic. Rather, differential underreporting of the higher risk device is what leads to bias. And even if there was differential underreporting of the higher risk device, given the variation in reporting relative risks across adverse events, the differential reporting would have had to have been highly variable across adverse events. This does not seem plausible given the severity of the adverse events considered. Given the magnitude of the RRR’s [relative reporting ratios], and their variability across adverse events, it seems implausible that differential underreporting by filter could fully explain the deviation of the observed RRR’s from 1.”[11]

Of course, this explanation fails to account for differential over-reporting for the newer, but less risky or equally risky, device. Betensky dismissed notoriety bias as having caused an increase in reporting adverse events because her DPA ended with 2014, before the FDA had issued a warning letter. The lawsuit industry, however, was on the attack against IVC filters years before 2014.[12] Similarly, Betensky dismissed consideration of the Weber effect, but her analysis apparently failed to acknowledge that notoriety and the Weber effect are just two of many possible biases in DPAs.
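The point about differential reporting can be made with grade-school arithmetic. A hypothetical sketch (every number invented for illustration) of how unequal reporting rates alone can manufacture an elevated relative reporting ratio for two devices with identical true risk:

```python
# Two devices with the SAME true adverse event rate, but with different
# fractions of those events actually reported to the database
# (e.g., because of notoriety, lawyer advertising, or time since launch).

true_events_per_10k = 20       # identical true event count for both devices
reporting_rate_new = 0.50      # 50% of events reported for the newer, notorious device
reporting_rate_old = 0.10      # 10% of events reported for the older device

reports_new = true_events_per_10k * reporting_rate_new   # 10 reports
reports_old = true_events_per_10k * reporting_rate_old   # 2 reports

rrr = reports_new / reports_old
print(rrr)  # 5.0 -- a fivefold "signal" with zero difference in actual risk
```

The observed fivefold disproportion is pure reporting artifact, which is why a DPA cannot, by itself, distinguish a real increase in risk from a difference in the propensity to report.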

In the face of her credentials, the MDL trial judge retreated to the usual chestnuts that are served up when a Rule 702 challenge is denied.  Judge Campbell thus observed that “[i]t is not the job of the court to insure that the evidence heard by the jury is error-free, but to insure that it is sufficiently reliable to be considered by the jury.”[13]  The trial judge professed a need to be “be careful not to conflate questions of admissibility of expert testimony with the weight appropriately to be accorded to such testimony by the fact finder.”[14] The court denied the claim that Betensky had engaged in an ipse dixit, by engaging in its own ipse dixit. Judge Campbell found that Betensky had explained her assumptions, had acknowledged shortcomings, and had engaged in various sensitivity tests of the validity of her DPA; and so he concluded that Betensky did not present “a case where ‘there is simply too great an analytical gap between the data and the opinion proffered’.”[15]

By closing off inquiry into the limits of the DPA methodology, Judge Campbell managed to stumble into a huge analytical gap, which he either ignored or failed to perceive. Even the best DPAs cannot substitute for analytical epidemiology in a scientific methodology of determining causation. The ipse dixit becomes apparent when we consider that the MDL gatekeeping opinion on Rebecca Betensky fails to mention the extensive body of regulatory and scientific opinion about the distinct methodologic limitations of DPA. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, instructs us that

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[16]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[17] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[18] Because of the uncertainties in DPAs,

“FDA suggests that a comparison of two or more reporting rates be viewed with extreme caution and generally considered exploratory or hypothesis-generating. Reporting rates can by no means be considered incidence rates, for either absolute or comparative purposes.”[19]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[20]

The current version of perhaps the leading textbook on pharmacoepidemiology is completely in accord with the above regulatory guidances. In addition to emphasizing the limitations on data quality from adverse event reporting, and the inability to interpret temporal trends, the textbook authors clearly characterize DPAs as generating signals, not as testing hypotheses:

“a signal of disproportionality is a measure of a statistical association within a collection of AE/ADR reports (rather than in a population), and it is not a measure of causality. In this regard, it is important to underscore that the use of data mining is for signal detection – that is, for hypothesis  generation – and that further work is needed to evaluate the signal.”[21]

Reporting ratios are not, and cannot serve as, measures of incidence or prevalence, because adverse event databases do not capture all the events of interest; these ratios thus “must be interpreted cautiously.”[22] The authors further emphasize that “well-designed pharmacoepidemiology or clinical studies are needed to assess the signal.”[23]

The authors of this chapter are all scientists and officials at the FDA’s Center for Drug Evaluation and Research, and the World Health Organization. Although they properly disclaimed writing on behalf of their agencies, those agencies have independently embraced their concepts in other agency publications. The consensus view of the hypothesis-generating nature of DPAs can easily be seen in surveying the relevant literature.[24] Passing off a DPA as a study that supports causal inference is not a mere matter of “weight,” or excluding any opinion that has some potential for error. The misuse of Betensky’s DPA is a methodological error that goes to the heart of what Congress intended to be screened and excluded by Rule 702.


[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004).

[2]  Id. See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 68-69 (2nd ed. 2017) (noting the example of the WHO’s DPA that found a 10-fold reporting rate increase for statins and ALS, which reporting association turned out to be spurious).

[3]  Wells v. SmithKline Beecham Corp., 2009 WL 564303, at *12 (W.D. Tex. 2009) (citing and quoting from the FDA’s Guidance for Industry: Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment (2005)), aff’d, 601 F.3d 375 (5th Cir. 2010). But see In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1324 (N.D. Fla. 2018) (noting that the finding of a DPA that compared Abilify with other anti-psychotics helped to show that a traditional epidemiologic study was not confounded by the indication for depressive symptoms).

[4]  In re Accutane Litig., 234 N.J. 340, 191 A.3d 560, 574 (2018).

[5]  See Mahyar Etminan, Hao Luo, and Paul Gustafson, et al., “Risk of intracranial hypertension with intrauterine levonorgestrel,” 6 Therapeutic Advances in Drug Safety 110 (2015).

[6]  Deborah Friedman, “Risk of intracranial hypertension with intrauterine levonorgestrel,” 7 Therapeutic Advances in Drug Safety 23 (2016).

[7]  Mahyar Etminan, “Revised disproportionality analysis of Mirena and benign intracranial hypertension,” 8 Therapeutic Advances in Drug Safety 299 (2017).

[8]  In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 387 F. Supp. 3d 323, 331 (S.D.N.Y. 2019) (Engelmayer, J.).

[9]  In re Bard IVC Filters Prods. Liab. Litig., No. MDL 15-02641-PHX DGC, Order Denying Motion to Exclude Rebecca Betensky at 2 (D. Ariz. Jan. 22, 2018) (Campbell, J.) (emphasis added) [Order].

[10]  Id. at 4.

[11]  Id.

[12]  See Matt Fair, “C.R. Bard’s Faulty Filters Pose Health Risks, Suit Says,” Law360 (Aug. 10, 2012); see, e.g., Derrick J. Stobaugh, Parakkal Deepak & Eli D. Ehrenpreis, “Alleged isotretinoin-associated inflammatory bowel disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 393 (2013) (documenting stimulated reporting from litigation activities).

[13]  Order at 6, quoting from Southwire Co. v. J.P. Morgan Chase & Co., 528 F. Supp. 2d 908, 928 (W.D. Wis. 2007).

[14]  Id., citing In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010).

[15]  Id., citing and quoting from In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010) ((quoting General Electric v. Joiner, 522 U.S. 136, 146 (1997)).

[16]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[17]  Id. at 9.

[18]  Id.

[19]  Id. at 11 (emphasis added).

[20]  EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added).

[21]  Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom, Stephen E. Kimmel & Sean Hennessy, Pharmacoepidemiology at 185 (6th ed. 2020) (emphasis added).

[22]  Id. at 187.

[23]  Id. See also Andrew Bate, Gianluca Trifirò, Paul Avillach & Stephen J.W. Evans, “Data Mining and Other Informatics Approaches to Pharmacoepidemiology,” chap. 27, in Brian L. Strom, Stephen E. Kimmel & Sean Hennessy, Pharmacoepidemiology at 685-88 (6th ed. 2020) (acknowledging the importance of DPAs for detecting signals that must then be tested with analytical epidemiology) (authors from industry, Pfizer, and academia, including NYU School of Medicine, Harvard Medical School, and London School of Hygiene and Tropical Medicine).

[24]  See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 61 (2nd ed. 2017) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk.” *** “Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Ronald D. Mann & Elizabeth B. Andrews, Pharmacovigilance 240 (2d ed. 2007) (“Importantly, data mining cannot prove or refute causal associations between drugs and events. Data mining simply identifies disproportionality of drug–event reporting patterns in databases. The absence of a signal does not rule out a safety problem. Similarly, the presence of a signal is not a proof of a causal relationship between a drug and an adverse event.”); Patrick Waller, An Introduction to Pharmacovigilance 49 (2010) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk. Whilst it is true that the greater the degree of disproportionality, the more reason there is to look further, the only real utility of the numbers is to decide whether or not there are more cases than might reasonably have been expected. Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Sidney N. Kahn, “You’ve found a safety signal–now what?  Regulatory implications of industry signal detection activities,” 30 Drug Safety 615 (2007).

Dark Money, Scott Augustine, and Hot Air

April 11th, 2020

Fraud by the litigation industry takes many different forms. In the massive silicosis litigation unleashed in Mississippi and Texas in the early 2000s, plaintiffs’ lawyers colluded with physicians to concoct dubious diagnoses of silicosis. Fraudulent diagnoses of silicosis led to dismissals of thousands of cases, as well as the professional defrocking of some physician witnesses.[1] For those trying to keep up with the lawsuit industry’s publishing arm, discussion of the Great Silicosis Fraud is completely absent from David Michaels’ recent book, The Triumph of Doubt.[2] So too is any mention of “dark money” that propelled the recently concluded Bair Hugger litigation.

Back in 2017, I wrote about the denial of a Rule 702 motion in the Bair Hugger litigation.[3] At the time, I viewed the trial court’s denial, on the facts of the case, to be a typical failure of gatekeeping.[4] Events in the Bair Hugger cases were only warming up in 2017.

After the court’s ruling, 3M took the first bellwether case to trial and won a jury verdict on May 30, 2018. Perhaps this jury verdict encouraged the MDL trial judge to take 3M’s motion for reconsideration of the Rule 702 motion seriously. In July 2019, the MDL court granted 3M’s motion to exclude the opinion testimony of plaintiffs’ general causation and mechanism expert witnesses, Drs. Jarvis, Samet, Stonnington, and Elghobashi.[5] Without these witnesses, over 5,000 plaintiffs, who had been misled about the merits of their cases, were stranded and set up for dismissal. On August 2, 2019, the MDL cases were dismissed for want of evidentiary support on causation. On August 29, 2019, plaintiffs filed a joint notice of appeal to the Eighth Circuit.

The two Bair Hugger Rule 702 federal court decisions focused (or failed to focus) on scientific considerations. Most of the story of “dark money” and the manufacturing of science to support the litigation was suppressed in the Rule 702 motion practice, and in the federal jury trial. In her second Rule 702 reconsideration opinion, the MDL judge did mention undisclosed conflicts of interest by authors of the key studies relied upon by plaintiffs’ witnesses.[6]

To understand how the Bair Hugger litigation got started, and to obtain a full understanding of what the scientific evidence was, a disinterested observer will have to read the state court decisions. Defendant 3M moved in its Minnesota state court cases to exclude plaintiffs’ causation expert witnesses under the so-called Frye standard. The state judge excluded plaintiffs’ witnesses for advancing a novel scientific theory that lacked acceptance in the relevant scientific community. The Minnesota Court of Appeals affirmed, with a decision that talked rather more freely about the plaintiffs’ counsel’s dark money. In re 3M Bair Hugger Litig., 924 N.W.2d 16 (Minn. App. 2019) [cited as Bair Hugger].

As the Minnesota Court of Appeals explained, a forced-air warming device (FAWD) is a very important, useful device to keep patients’ body temperatures normal during surgery. The “Bair Hugger” is a FAWD invented in 1987 by Dr. Scott Augustine, an anesthesiologist, who at the time was the chief executive officer of Augustine Medical, Inc. Bair Hugger at 19.

In the following 15 years, the Bair Hugger became the leading FAWD in the world. In 2002, the federal government notified Augustine that it was investigating him for Medicare fraud. Augustine resigned from the company that bore his name, and the company purged the taint by reorganizing as Arizant Healthcare Inc. (Arizant), which continued to make the Bair Hugger. In the following year, 2003, Augustine pleaded guilty to fraud and paid a $2 million fine. His sentence included a five-year ban from involvement in federal health-care programs.

During the years of his banishment, fraudfeasor Augustine developed a rival product and then embarked upon a global attack on the safety of his own earlier invention, the Bair Hugger. In the United Kingdom, his claim that the Bair Hugger increased the risk of surgical site infections was rejected by the UK National Institute for Health and Clinical Excellence. A German court enjoined Augustine from falsely claiming that the Bair Hugger led to increased bacterial contamination.[7] The United States FDA considered and rejected Augustine’s claims, and recommended the use of FAWDs.

In 2009, Augustine began to work as a non-testifying expert witness with the Houston, Texas, plaintiffs’ law firm of Kennedy Hodges LLP. A series of publications resulted in which the authors attempted to raise questions about the safety of the Bair Hugger. By 2013, with the medical literature “seeded” with several studies attacking the Bair Hugger, the Kennedy Hodges law firm began to manufacture law suits against Arizant and 3M (which had bought the Bair Hugger product line from Arizant in 2010). Bair Hugger at 20.

The seeding studies were marketing and litigation propaganda used by Augustine to encourage the all-too-complicit lawsuit industry to ramp up production of complaints against 3M over the Bair Hugger. Several of the plaintiffs’ studies included as an author a young statistician, Mark Albrecht, an employee of, or a contractor for, Augustine’s new companies, Augustine Temperature Management and Augustine Medical. Even when disclosures were made, they were at best “anemic”:

“The author or one or more of the authors have received or will receive benefits for personal or professional use from a commercial party related directly or indirectly to the subject of this article.”[8]

Some of these studies included a disclosure that Albrecht was funded or employed by Augustine, but they did not disclose the protracted, bitter feud or Augustine’s confessed fraudulent conduct. David Leaper, another author of some of the plaintiffs’ studies, was a highly paid “consultant” to Augustine at the time of the work on the studies. None of the studies disclosed Leaper’s consultancy for Augustine:

  1. Mark Albrecht, Robert Gauthier, and David Leaper, “Forced air warming, a source of airborne contamination in the operating room?” 1 Orthopedic Rev. (Pavia) e28 (2009)
  2. Mark Albrecht, Robert L. Gauthier, Kumar Belani, Mark Litchy, and David Leaper, “Forced-air warming blowers: An evaluation of filtration adequacy and airborne contamination emissions in the operating room,” 39 Am. J. Infection Control 321 (2011)
  3. P.D. McGovern, Mark Albrecht, Kumar Belani, C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537 (2011)
  4. K.B. Dasari, Mark Albrecht, and M. Harper, “Effect of forced-air warming on the performance of operating-theatre laminar-flow ventilation,” 67 Anaesthesia 244 (2012)
  5. Mike Reed, Oliver Kimberger, Paul D. McGovern, and Mark C. Albrecht, “Forced-Air Warming Design: Evaluation of Intake Filtration, Internal Microbial Buildup, and Airborne-Contamination Emissions,” 81 Am. Ass’n Nurse Anesthetists 275 (2013)
  6. Kumar Belani, Mark Albrecht, Paul McGovern, Mike Reed, and Christopher Nachtsheim, “Patient warming excess heat: the effects on orthopedic operating room ventilation performance,” 117 Anesthesia & Analgesia 406 (2013)

In one study, Augustine’s employee Mark Albrecht conducted the experiment with one of the authors, but was not listed as an author although he wrote an early draft of the study. Augustine provided all the equipment used in the experiment. The published paper failed to disclose any of these questionable activities:

  1. A.J. Legg & A.J. Hammer, “Forced-air patient warming blankets disrupt unidirectional flow,” 95 Bone & Joint J. 407 (2013)

Another study had more peripheral but still questionable involvement of Augustine, whose company lent the authors equipment used to conduct the study, without proper acknowledgment and disclosure:

  1. A.J. Legg, T. Cannon, and A. J. Hamer, “Do forced-air warming devices disrupt unidirectional downward airflow?” 94 J. Bone & Joint Surg. – British 254 (2012)

In addition to the defects in the authors’ disclosures, 3M discovered that two of the studies had investigated whether the Bair Hugger spread bacteria in the surgical area. Although the experiments found no spread with the Bair Hugger, the researchers never publicly disclosed their exculpatory evidence.[9]

Augustine’s marketing campaign, through these studies, ultimately fell flat at the FDA, which denied his citizen’s petition and recommended that surgeons continue to use FAWDs such as the Bair Hugger.[10] Augustine’s proxy litigation war against 3M also fizzled, unless the 8th Circuit revives his vendetta. Nonetheless, the Augustine saga raises serious questions about how litigation funding of “scientific studies” will vex the search for the truth in pharmaceutical products litigation. The Augustine attempt to pollute the medical literature was relatively apparent, but dark money from undisclosed financiers may require greater attention from litigants and from journal editors.


[1]  In re Silica Products Liab. Litig., MDL No. 1553, 398 F. Supp. 2d 563 (S.D.Tex. 2005).

[2]  David Michaels, The Triumph of Doubt: Dark Money and the Science of Deception (2020).

[3]  In re Bair Hugger Forced Air Warming, MDL No. 15-2666, 2017 WL 6397721 (D. Minn. Dec. 13, 2017).

[4]  “Gatekeeping of Expert Witnesses Needs a Bair Hug” (Dec. 20, 2017).

[5]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., MDL No. 15-2666, 2019 WL 4394812 (D. Minn. July 31, 2019). See Joe G. Hollingsworth & Caroline Barker, “Exclusion of Junk Science in ‘Bair Hugger’ MDL Shows Daubert Is Still Breathing,” Wash. Leg. Foundation (Jan 23, 2020); Christine Kain, Patrick Reilly, Hannah Anderson and Isabelle Chammas, “Top 5 Drug And Medical Device Developments Of 2019,” Law360 (Jan. 9, 2020).

[6]  In re Bair Hugger Forced Air Warming Devices Prods. Liab. Litig., 2019 WL 4394812, at *10 n.13 (D. Minn. July 31, 2019) (observing that “[i]n the published study, the authors originally declared no conflicts of interest”).

[7]  Dr. Augustine has never been a stranger to the judicial system. See, e.g., Augustine Medical, Inc. v. Gaymar Industries, Inc., 181 F.3d 1291 (Fed. Cir. 1999); Augustine Medical, Inc. v. Progressive Dynamics, Inc., 194 F.3d 1367 (Fed. Cir. 1999); Cincinnati Sub-Zero Products, Inc. v. Augustine Medical, Inc., 800 F. Supp. 1549 (S.D. Ohio 1992).

[8]  P.D. McGovern, Mark Albrecht, Kumar Belani, and C. Nachtsheim, “Forced-air warming and ultra-clean ventilation do not mix,” 93 J. Bone & Joint Surg. – British 1537, 1544 (2011).

[9]  See https://www.truthaboutbairhugger.com/truth-science-behind-claims-3m-bair-hugger-system-look-augustine-connections-research-studies/.

[10]  William Maisel, “Information about the Use of Forced Air Thermal Regulating Systems – Letter to Health Care Providers”; Center for Devices and Radiological Health, U.S. Food and Drug Administration (Aug. 30, 2017).

April Fool – Zambelli-Weiner Must Disclose

April 2nd, 2020

Back in the summer of 2019, Judge Saylor, the MDL judge presiding over the Zofran birth defect cases, ordered epidemiologist, Dr. Zambelli-Weiner to produce documents relating to an epidemiologic study of Zofran,[1] as well as her claimed confidential consulting relationship with plaintiffs’ counsel.[2]

This previous round of motion practice and discovery established that Zambelli-Weiner was a paid consultant in advance of litigation, that her Zofran study was funded by plaintiffs’ counsel, and that she presented at a Las Vegas conference, for plaintiffs’ counsel only, on [sic] how to make mass torts perfect. Furthermore, she had made false statements to the court about her activities.[3]

Zambelli-Weiner ultimately responded to the discovery requests, but she and plaintiffs’ counsel withheld several documents as confidential, pursuant to the MDL’s procedure for protective orders. Yesterday, April 1, 2020, Judge Saylor granted GlaxoSmithKline’s motion to de-designate four documents that plaintiffs claimed to be confidential.[4]

Zambelli-Weiner sought to resist GSK’s motion to compel disclosure of the documents on a claim that GSK was seeking the documents to advance its own litigation strategy. Judge Saylor acknowledged that Zambelli-Weiner’s psycho-analysis might be correct, but held that GSK’s motive was not the critical issue. According to Judge Saylor, the proper inquiry was whether the claim of confidentiality was proper in the first place, and whether removing the cloak of secrecy was appropriate under the facts and circumstances of the case. Indeed, the court found “persuasive public-interest reasons” to support disclosure, including providing the FDA and the EMA a complete, unvarnished view of Zambelli-Weiner’s research.[5] Of course, the plaintiffs’ counsel, in close concert with Zambelli-Weiner, had created GSK’s need for the documents.

This discovery battle has no doubt been fought because plaintiffs and their testifying expert witnesses rely heavily upon the Zambelli-Weiner study to support their claim that Zofran causes birth defects. The present issue is whether four of the documents produced by Dr. Zambelli-Weiner pursuant to subpoena should continue to enjoy confidential status under the court’s protective order. GSK argued that the documents were never properly designated as confidential, and alternatively, the court should de-designate the documents because, among other things, the documents would disclose information important to medical researchers and regulators.

Judge Saylor’s Order considered GSK’s objections to plaintiffs’ and Zambelli-Weiner’s withholding four documents:

(1) Zambelli-Weiner’s Zofran study protocol;

(2) Undisclosed, hidden analyses that compared birth defect rates for children born to mothers who used Zofran with the rates seen with the use of other anti-emetic medications;

(3) An earlier draft of Zambelli-Weiner’s Zofran study, which she had prepared to submit to the New England Journal of Medicine; and

(4) Zambelli-Weiner’s advocacy document, a “Causation Briefing Document,” which she prepared for plaintiffs’ lawyers.

Judge Saylor noted that none of the withheld documents would typically be viewed as confidential. None contained “sensitive personal, financial, or medical information.”[6] The court dismissed Zambelli-Weiner’s contention that the documents all contained “business and proprietary information” as conclusory and meritless. Neither she nor plaintiffs’ counsel explained how the requested documents implicated proprietary information when Zambelli-Weiner’s only business at issue is to assist in making lawsuits. The court observed that she is not “engaged in the business of conducting research to develop a pharmaceutical drug or other proprietary medical product or device,” and that the research at issue related solely to her paid consultancy to plaintiffs’ lawyers. Neither she nor the plaintiffs’ lawyers showed how public disclosure would hurt her proprietary or business interests. Of course, if Zambelli-Weiner had been dishonest in carrying out the Zofran study, as reflected in deviations from its protocol, her professional credibility and her business of conducting such studies might well suffer. Zambelli-Weiner, however, was not prepared to affirm the antecedent of that hypothetical. In any event, the court found that whatever right Zambelli-Weiner might have enjoyed to avoid discovery evaporated with her previous dishonest representations to the MDL court.[7]

The Zofran Study Protocol

GSK sought production of the Zofran study protocol, which in theory contained the research plan for the Zofran study and the analyses the researchers intended to conduct. Zambelli-Weiner attempted to resist production on the specious theory that she had not published the protocol, but the court found this “non-publication” irrelevant to the claim of confidentiality. Most professional organizations, such as the International Society for Pharmacoepidemiology (“ISPE”), which ultimately published Zambelli-Weiner’s study, encourage the publication and sharing of study protocols.[8] Disclosure of protocols helps ensure the integrity of studies by allowing readers to assess whether the researchers adhered to their study plan, or engaged in ad hoc data dredging in search of a desired result.[9]

The Secret, Undisclosed Analyses

Perhaps even more egregious than withholding the study protocol was the refusal to disclose unpublished analyses comparing the rate of birth defects among children born to mothers who used Zofran with the birth defect rates of children with in utero exposure to other anti-emetic medications.  In ruling that Zambelli-Weiner must produce the unpublished analyses, the court expressed its skepticism over whether these analyses could ever have been confidential. Under ISPE guidelines, researchers must report findings that significantly affect public health, and the relative safety of Zofran is essential to its evaluation by regulators and prescribing physicians.

Not only was Zambelli-Weiner’s failure to include these analyses in her published article ethically problematic, but she apparently hid these analyses from the Pharmacovigilance Risk Assessment Committee (PRAC) of the European Medicines Agency, which specifically inquired of Zambelli-Weiner whether she had performed such analyses. As a result, the PRAC recommended a label change based upon Zambelli-Weiner’s failure to disclose material information. Furthermore, the plaintiffs’ counsel represented that they intended to oppose GSK’s citizen petition to the FDA, based upon the Zambelli-Weiner study. The apparently fraudulent non-disclosure of relevant analyses could not have been more fraught with public health significance. The MDL court found that the public health need trumped any (doubtful) claim to confidentiality.[10] Against the obvious public interest, Zambelli-Weiner offered no “compelling countervailing interest” in keeping her secret analyses confidential.

There were other aspects to the data-dredging rationale not discussed in the court’s order. Without seeing the secret analyses of other anti-emetics, readers were deprived of an important opportunity to assess actual and potential confounding in her study. Perhaps even more important, the statistical tools that Zambelli-Weiner used, including any measurements of p-values and confidence intervals, and any declarations of “statistical significance,” were rendered meaningless by her secret, undisclosed, multiple testing. As the American Statistical Association (ASA) noted in its 2016 position statement, “4. Proper inference requires full reporting and transparency.”

The ASA explains that the proper inference from a p-value can be completely undermined by “multiple analyses” of study data, with selective reporting of sample statistics that have attractively low p-values, or cherry picking of suggestive study findings. The ASA points out that common practices of selective reporting compromise valid interpretation. Hence the correlative recommendation:

“Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.”[11]
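The ASA’s point can be made concrete with a short, purely hypothetical calculation, which assumes independent tests and has nothing to do with the actual Zofran data: when a researcher runs many analyses on data in which no true effect exists, the chance that at least one comparison yields a nominally “significant” p-value below 0.05 grows rapidly.

```python
# Hypothetical illustration of the ASA's multiple-comparisons point
# (an assumption-laden sketch, not data from any actual study): with n
# independent tests of true null hypotheses, the probability that at
# least one p-value falls below alpha by chance alone is 1 - (1 - alpha)^n.

def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive among n_tests
    independent tests of true null hypotheses at level alpha."""
    return 1.0 - (1.0 - alpha) ** n_tests

for n in (1, 5, 20):
    print(f"{n:2d} independent analyses -> "
          f"{familywise_error_rate(n):.0%} chance of a spurious 'finding'")
```

Under these assumptions, a single test carries the advertised 5% false-positive risk, but five undisclosed analyses raise it to roughly 23%, and twenty to roughly 64% — which is why a reader cannot interpret a reported p-value without knowing how many analyses were run and how the reported ones were selected.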

The Draft Manuscript for the New England Journal of Medicine

The MDL court wasted little time and ink in dispatching Zambelli-Weiner’s claim of confidentiality for her draft New England Journal of Medicine manuscript. The court found that she failed to explain how any differences in content between this manuscript and the published version constituted “proprietary business information,” or how disclosure would cause her any actual prejudice.

Zambelli-Weiner’s Litigation Road Map

In a world where social justice warriors complain about organizations such as Exponent for its litigation support of defense efforts, the revelation that Zambelli-Weiner was helping to quarterback the plaintiffs’ offense deserves greater recognition. Zambelli-Weiner’s litigation road map was clearly created to help Grant & Eisenhofer, P.A., the plaintiffs’ lawyers, create a causation strategy (to which she would add her Zofran study). Such a document from a consulting expert witness is typically the sort of document that enjoys confidentiality and protection from litigation discovery. The MDL court, however, looked beyond Zambelli-Weiner’s role as a “consulting witness” to her involvement in designing and conducting research. The broader extent of her involvement in producing studies and communicating with regulators made her litigation “strategery” “almost certainly relevant to scientists and regulatory authorities” charged with evaluating her study.[12]

Despite Zambelli-Weiner’s protestations that she had disclosed her conflict of interest, the MDL court found her disclosure anemic, and found great the public interest in knowing the full extent of her involvement in advising plaintiffs’ counsel, long before the study was conducted.[13]

The legal media has been uncommonly quiet about the rulings against April Zambelli-Weiner in the Zofran litigation. From the Union of Concerned Scientists, and other industry scolds such as David Egilman, David Michaels, and Carl Cranor – crickets. Meanwhile, while the appeal over the admissibility of her testimony is pending before the Pennsylvania Supreme Court,[14] Zambelli-Weiner continues to create an unenviable record in the Zofran, Accutane,[15] Mirena,[16] and other litigations.


[1]  April Zambelli‐Weiner, Christina Via, Matt Yuen, Daniel Weiner, and Russell S. Kirby, “First Trimester Pregnancy Exposure to Ondansetron and Risk of Structural Birth Defects,” 83 Reproductive Toxicology 14 (2019).

[2]  See In re Zofran (Ondansetron) Prod. Liab. Litig., 392 F. Supp. 3d 179, 182-84 (D. Mass. 2019) (MDL 2657) [cited as In re Zofran].

[3]  “Litigation Science – In re Zambelli-Weiner” (April 8, 2019); “Mass Torts Made Less Bad – The Zambelli-Weiner Affair in the Zofran MDL” (July 30, 2019). See also Nate Raymond, “GSK accuses Zofran plaintiffs’ law firms of funding academic study,” Reuters (Mar. 5, 2019).

[4]  In re Zofran Prods. Liab. Litig., MDL No. 1:15-md-2657-FDS, Order on Defendant’s Motion to De-Designate Certain Documents as Confidential Under the Protective Order (D.Mass. Apr. 1, 2020) [Order].

[5]  Order at n.3.

[6]  Order at 3.

[7]  See In re Zofran, 392 F. Supp. 3d at 186.

[8]  Order at 4. See also Xavier Kurz, Susana Perez-Gutthann, the ENCePP Steering Group, “Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP),” 27 Pharmacoepidemiology & Drug Safety 245 (2018).

[9]  Order at note 2 (citing Charles J. Walsh & Marc S. Klein, “From Dog Food to Prescription Drug Advertising: Litigating False Scientific Establishment Claims Under the Lanham Act,” 22 Seton Hall L. Rev. 389, 431 (1992) (noting that adherence to study protocol “is essential to avoid ‘data dredging’—looking through results without a predetermined plan until one finds data to support a claim”).

[10]  Order at 5, citing Anderson v. Cryovac, Inc., 805 F.2d 1, 8 (1st Cir. 1986) (describing public-health concerns as “compelling justification” for requiring disclosing of confidential information).

[11]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

See also “The American Statistical Association’s Statement on and of Significance” (March 17, 2016); “Courts Can and Must Acknowledge Multiple Comparisons in Statistical Analyses” (Oct. 14, 2014).

[12]  Order at 6.

[13]  Cf. Elizabeth J. Cabraser, Fabrice Vincent & Alexandra Foote, “Ethics and Admissibility: Failure to Disclose Conflicts of Interest in and/or Funding of Scientific Studies and/or Data May Warrant Evidentiary Exclusions,” Mealey’s Emerging Drugs Reporter (Dec. 2002) (arguing that failure to disclose conflicts of interest and study funding should result in evidentiary exclusions).

[14]  Walsh v. BASF Corp., GD #10-018588 (Pa. Ct. C.P. Allegheny Cty. Oct. 5, 2016) (finding that Zambelli-Weiner’s and Nachman Brautbar’s opinions that pesticides generally cause acute myelogenous leukemia, and that even the smallest exposure to benzene increases the risk of leukemia, offended generally accepted scientific methodology), rev’d, 2018 Pa. Super. 174, 191 A.3d 838, 842-43 (Pa. Super. 2018), appeal granted, 203 A.3d 976 (Pa. 2019).

[15]  In re Accutane Litig., No. A-4952-16T1 (N.J. App. Div. Jan. 17, 2020) (affirming exclusion of Zambelli-Weiner as an expert witness).

[16]  In re Mirena IUD Prods. Liab. Litig., 169 F. Supp. 3d 396 (S.D.N.Y. 2016) (excluding Zambelli-Weiner in part).

Dodgy Data Duck Daubert Decisions

March 11th, 2020

Judges say the darndest things, especially when it comes to their gatekeeping responsibilities under Federal Rules of Evidence 702 and 703. One of the darndest things judges say is that they do not have to assess the quality of the data underlying an expert witness’s opinion.

Even when acknowledging their obligation to “assess the reasoning and methodology underlying the expert’s opinion, and determine whether it is both scientifically valid and applicable to a particular set of facts,”[1] judges have excused themselves from looking at the trustworthiness of the underlying data when assessing the admissibility of an expert witness’s opinion.

In McCall v. Skyland Grain LLC, the defendant challenged an expert witness’s reliance upon oral reports of clients. The witness, Mr. Bradley Walker, asserted that he regularly relied upon such reports in contexts similar to the allegations that the defendant had misapplied herbicide to plaintiffs’ crops. The trial court ruled that the defendant could cross-examine the declarant, who was available at trial, and concluded that the “reliability of that underlying data can be challenged in that manner and goes to the weight to be afforded Mr. Walker’s conclusions, not their admissibility.”[2] Remarkably, the district court never evaluated the reasonableness of Mr. Walker’s reliance upon client reports in this or any context.

In another federal district court case, Rodgers v. Beechcraft Corporation, the trial judge explicitly acknowledged the responsibility to assess whether the expert witness’s opinion was based upon “sufficient facts and data,” but disclaimed any obligation to assess the quality of the underlying data.[3] The trial court in Rodgers cited a Tenth Circuit case from 2005,[4] which in turn cited the Supreme Court’s 1993 decision in Daubert, for the proposition that the admissibility review of an expert witness’s opinion was limited to a quantitative sufficiency analysis, and precluded a qualitative analysis of the underlying data’s reliability. Quoting from another district court criminal case, the court in Rodgers announced that “the Court does not examine whether the facts obtained by the witness are themselves reliable – whether the facts used are qualitatively reliable is a question of the weight to be given the opinion by the factfinder, not the admissibility of the opinion.”[5]

In a 2016 decision, United States v. DishNetwork LLC, the court explicitly disclaimed that it was required to “evaluate the quality of the underlying data or the quality of the expert’s conclusions.”[6] This district court pointed to a Seventh Circuit decision, which maintained that  “[t]he soundness of the factual underpinnings of the expert’s analysis and the correctness of the expert’s conclusions based on that analysis are factual matters to be determined by the trier of fact, or, where appropriate, on summary judgment.”[7] The Seventh Circuit’s decision, however, issued in June 2000, several months before the effective date of the amendments to Federal Rule of Evidence 702 (December 2000).

In 2012, a magistrate judge issued an opinion along the same lines in Bixby v. KBR, Inc.[8] After acknowledging what must be done in ruling on a challenge to an expert witness, the judge took joy in what could be overlooked. If the facts or data upon which the expert witness relied are “minimally sufficient,” then the gatekeeper may conclude that questions about “the nature or quality of the underlying data bear upon the weight to which the opinion is entitled or to the credibility of the expert’s opinion, and do not bear upon the question of admissibility.”[9]

There need not be any common law mysticism to the governing standard. The relevant law is, of course, a statute, which appears to be forgotten in many of the failed gatekeeping decisions:

Rule 702. Testimony by Expert Witnesses

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

It would seem that you could not produce testimony that is the product of reliable principles and methods by starting with unreliable underlying facts and data. Certainly, having a reliable method would require selecting reliable facts and data from which to start. What good is the reliable application of reliable principles to crummy data?

The Advisory Committee Note to Rule 702 hints at an answer to the problem:

“There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ‘reasonable reliance’ requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information—whether admissible information or not—is governed by the requirements of Rule 702.”

The answer is only partially satisfactory. First, if the underlying data are independently admissible, then there may indeed be no gatekeeping of an expert witness’s reliance upon such data. Rule 703 imposes a reasonableness test for reliance upon inadmissible underlying facts and data, but appears to give otherwise admissible facts and data a pass. Second, the above judicial decisions do not mention any Rule 703 challenge to the expert witnesses’ reliance. If so, then there is a clear lesson for counsel. When framing a challenge to the admissibility of an expert witness’s opinion, show that the witness has unreasonably relied upon facts and data, from whatever source, in violation of Rule 703. Then show that without the unreasonably relied upon facts and data, the witness cannot show that his or her opinion satisfies Rule 702(a)-(d).


[1]  See, e.g., McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order (D. Colo. June 22, 2010) (Brimmer, J.) (citing Dodge v. Cotter Corp., 328 F.3d 1212, 1221 (10th Cir. 2003), citing in turn Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993)).

[2]  McCall v. Skyland Grain LLC, Case 1:08-cv-01128-KHV-BNB, Order at p.9 n.6 (D. Colo. June 22, 2010) (Brimmer, J.).

[3]  Rodgers v. Beechcraft Corp., Case No. 15-CV-129-CVE-PJC, Report & Recommendation at p.6 (N.D. Okla. Nov. 29, 2016).

[4]  Id., citing United States v. Lauder, 409 F.3d 1254, 1264 (10th Cir. 2005) (“By its terms, the Daubert opinion applies only to the qualifications of an expert and the methodology or reasoning used to render an expert opinion” and “generally does not, however, regulate the underlying facts or data that an expert relies on when forming her opinion.”), citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 592-93 (1993).

[5]  Id., citing and quoting United States v. Crabbe, 556 F. Supp. 2d 1217, 1223 (D. Colo. 2008) (emphasis in original). In Crabbe, the district judge mostly excluded the challenged expert witness, thus rendering the court’s verbiage on the quality of data obiter dicta. The pronouncements about the nature of gatekeeping proved harmless error when the court dismissed the case on other grounds. Rodgers v. Beechcraft Corp., 248 F. Supp. 3d 1158 (N.D. Okla. 2017) (granting summary judgment).

[6]  United States v. DishNetwork LLC, No. 09-3073, Slip op. at 4-5 (C.D. Ill. Jan. 13, 2016) (Myerscough, J.).

[7]  Smith v. Ford Motor Co., 215 F.3d 713, 718 (7th Cir. 2000).

[8]  Bixby v. KBR, Inc., Case 3:09-cv-00632-PK, Slip op. at 6-7 (D. Ore. Aug. 29, 2012) (Papak, M.J.).

[9]  Id. (citing Hangarter v. Provident Life & Accident Ins. Co., 373 F.3d 998, 1017 (9th Cir. 2004), quoting Children’s Broad. Corp. v. Walt Disney Co., 357 F.3d 860, 865 (8th Cir. 2004) (“The factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”)).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.