TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Judicial Dodgers – Weight not Admissibility

May 28th, 2020

Another vacuous response to a methodological challenge under Rule 702 is to label the challenge as “going to the weight, not the admissibility” of the challenged expert witness’s testimony. Of course, a challenge may be focused solely upon the expert witness’s credibility, such as when an expert witness testifies on many occasions only for one side in similar disputes, or when his political commitments render him unable to acknowledge the bona fides of any studies conducted by the adversarial parties.[1] If, however, the Rule 702 challenge stated an objection to the witness’s methodology, then the objection would count against both the opinion’s weight and its admissibility. The judicial “weight not admissibility” label conveys the denial of the challenge, but it hardly explains how and why the challenge failed under Rule 702. Applying such a label without addressing the elements of Rule 702, and how the challenged expert witness satisfied those elements, is often nothing less than a failure of judging.

The Flawed Application of a Generally Accepted Methodology

If a meretricious expert witness, by pretense or ignorance, invokes a standard methodology but applies it in a flawed, distorted, or invalid way, then there will be a clear break in the chain of inferences from data to conclusion. The clear language of Rule 702 should render such an expert witness’s conclusion inadmissible. Some courts, however, retreat into a high level of generality about the method used rather than inspecting the method as applied. For example, a court might look at an expert witness’s opinion and correctly find that it relied upon epidemiology, and that epidemiology is a generally accepted discipline concerned with identifying causes. The specifics of the challenge, however, may have shown that the witness relied upon a study that was thoroughly flawed,[2] or upon an epidemiologic study of a type that cannot support a causal inference.[3]

Rule 702 and the Supreme Court’s decision in Joiner make clear that the trial court must evaluate the expert witness’s application of methodology and whether it actually supports valid inferences leading to the witness’s claims and conclusions.[4] And yet, lower courts continue to characterize the gatekeeping process as “hands off” the application of methodology and conclusions:

“Where the court has determined that plaintiffs have met their burden of showing that the methodology is reliable, the expert’s application of the methodology and his or her conclusions are issues of credibility for the jury.”[5]

This rejection of the clear demands of a statute has infected even the United States Courts of Appeals. In a muddled application of Rule 702, the Third Circuit approved the admission of expert witness testimony, explaining that “because [the objecting party / plaintiff] objected to the application rather than the legitimacy of [the expert’s] methodology, such objections were more appropriately addressed on cross-examination and no Daubert hearing was required.”[6] Such a ruling in the Third Circuit is especially jarring because it violates not only the clear language of Rule 702, but also established precedent within the Circuit that holds that “any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[7]

The Eighth Circuit seems to have set itself up stridently against the law by distinguishing between scientific methodologies and their applications, and holding that “when the application of a scientific methodology is challenged as unreliable under Daubert and the methodology itself is otherwise sufficiently reliable, outright exclusion of the evidence in question is warranted only if the methodology was so altered by a deficient application as to skew the methodology itself.”[8]

The Ninth Circuit similarly has followed this dubious distinction between methodology in the abstract and methodology as applied. In City of Pomona, the Circuit addressed the admissibility of an expert witness whose testing deviated from protocols. Relying upon pre-2000 Ninth Circuit case law, decided before the statutory language of Rule 702 was adopted, the court found that:

“expert evidence is inadmissible where the analysis is the result of a faulty methodology or theory as opposed to imperfect execution of laboratory techniques whose theoretical foundation is sufficiently accepted in the scientific community to pass muster under Daubert.”[9]

The Eleventh Circuit has similarly disregarded Rule 702 by adverting to an improvised distinction between validity of methodology and flawed application of methodology.[10]

Cherry Picking and Inadequate Bases

Most of the Circuits of the United States Court of Appeals have contributed to the mistaken belief that “[a]s a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility.”[11] Clearly, such questions can undermine the admissibility of an expert witness’s opinion under Rule 702, and courts need to say why they have found the challenged opinion to have had a “sufficient basis.” For example, in the notorious Milward case, the First Circuit, citing legally invalid pre-Daubert decisions, stated that “when the factual underpinning of an expert’s opinion is weak it is a matter affecting the weight and credibility of the testimony − a question to be resolved by the jury.”[12]

After Milward, the Eighth Circuit followed suit in a hormone replacement therapy case. An expert who ignored studies was excluded by the district court, but the Court of Appeals found an abuse of discretion, holding that the sufficiency of an expert’s basis is a question of weight and not admissibility.[13]

These rulings elevate form over substance by halting the gatekeeping inquiry at an irrelevant, high level of abstraction, and finding that the challenged expert witness was doing something “sciencey,” which is good enough for government work. The courts involved evaded their gatekeeping duties and ignored the undue selectivity in reliance materials and the inadequacy and insufficiency of the challenged expert witness’s factual predicate. The question is not whether expert witnesses relied upon “scientific studies,” but whether their causal conclusions and claims are well supported, under scientific standards, by the studies upon which they relied.

Like the covert shifting of the burden of proof, or the glib assessment that the loser can still cross-examine in front of the jury,[14] the rulings discussed represent another way that judges kick the can on Rule 702 motions. Despite the confusing verbiage, these judicial rulings are a serious deviation from the text of Rule 702, as well as the Advisory Committee Note to the 2000 Amendments, which embraced the standard articulated in In re Paoli, that

“any step that renders the analysis unreliable . . . renders the expert’s testimony inadmissible. This is true whether the step completely changes a reliable methodology or merely misapplies that methodology.”[15]

On a positive note, some courts have recognized that responding to a challenge with the conclusory assessment that it goes to weight, not admissibility, is a delegation of the court’s gatekeeping duty to the jury.[16]

In 2018, Professor Daniel Capra, the Reporter to the Rules Committee, addressed the “weight not admissibility dodge” at length in his memorandum to the Rules Committee:

“Rule 702 clearly states that these are questions of admissibility, but many courts treat them as questions of weight. The issue for the Committee is whether something/anything can be done about these wayward decisions.”[17]

The Reporter charitably noted that the problem could be in the infelicitous expression of some courts that short-circuit their analyses by saying “I see the problems, but they go to the weight of the evidence.”[18] Perhaps these courts meant to say that they had found that the proponent of the challenged expert witness testimony had shown admissibility by a preponderance, and that what non-disqualifying problems remained should be taken up on cross-examination.[19] The principle of charity, however, cannot exonerate federal judges from exercising the dodge repeatedly in the face of clear statutory language. Indeed, the Reporter reaffirmed the Rules Committee’s substantive judgment that questions of sufficient basis and reliable application of methodology are admissibility issues:[20]

“It is hard to see how expert testimony is reliable if the expert has not done sufficient investigation, or has cherry-picked the data, or has misapplied the methodology. The same ‘white lab coat’ problem − that the jury will not be able to figure out the expert’s missteps − would seem to apply equally to basis, methodology and application.”

Although the Reporter opined that some authors may have overstated judicial waywardness, he found the judicial disregard of the requirements of Rule 702(b) and (d) incontrovertible.[21]

Professor Capra restated his conclusions a year later, in 2019, when he characterized broad statements such as “challenges to the sufficiency of an expert’s basis raise questions of weight and not admissibility” as “misstatement[s] made by circuit courts in a disturbing number of cases… .”[22] Factual insufficiency and unreliable application of methodology are, of course, also credibility and ethical considerations, but they are the fact finders’ concern only after the proponent has shown admissibility by a preponderance of the evidence. Principled adjudication requires judges to say what they mean and mean what they say.


[1]  See also Cruz-Vazquez v. Mennonite Gen. Hosp. Inc., 613 F.3d 54 (1st Cir. 2010) (reversing exclusion of an expert witness who was biased in favor of plaintiffs in medical cases and who was generally affiliated with plaintiffs’ lawyers; such issues of personal bias are for the jury in assessing the weight of the expert witness’s testimony). Another example would be those expert witnesses whose commitment to Marxist ideology is such that they reject any evidence proffered by manufacturing industry as inherently corrupt, while embracing any evidence proffered by labor or the lawsuit industry without critical scrutiny.

[2]  In re Phenylpropanolamine (PPA) Prods. Liab. Litig., MDL No. 1407, 289 F. Supp. 2d 1230 (W.D. Wash. 2003) (Yale Hemorrhagic Stroke Project).

[3]  Cook v. Rockwell Internat’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants next claim that Dr. Clapp’s study and the conclusions he drew from it are unreliable because they failed to comply with four factors or criteria for drawing causal interferences from epidemiological studies: accounting for known confounders … .”), rev’d and remanded on other grounds, 618 F.3d 1127 (10th Cir. 2010), cert. denied, ___ U.S. ___, 133 S.Ct. 22 (2012). For another example of a trial court refusing to see through important qualitative differences between and among epidemiologic studies, see In re Welding Fume Prods. Liab. Litig., 2006 WL 4507859, *33 (N.D. Ohio 2006) (reducing all studies to one level, and treating all criticisms as though they rendered all studies invalid).

[4]  General Electric Co. v. Joiner, 522 U.S. 136 (1997).

[5]  Proctor & Gamble Co. v. Haugen, 2007 WL 709298, at *2 (D. Utah 2007); see also United States v. McCluskey, 954 F.Supp.2d 1227, 1247-48 (D.N.M. 2013) (“the trial judge decides the scientific validity of underlying principles and methodology” and “once that validity is demonstrated, other reliability issues go to the weight − not the admissibility − of the evidence”); Murphy-Sims v. Owners Ins. Co., No. 16-CV-0759-CMA-MLC, 2018 WL 8838811, at *7 (D. Colo. Feb. 27, 2018) (“Concerns surrounding the proper application of the methodology typically go to the weight and not admissibility[.]”).

[6]  Walker v. Gordon, 46 F. App’x 691, 696 (3rd Cir. 2002).

[7]  In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994).

[8]  United States v. Gipson, 383 F.3d 689, 696 (8th Cir. 2004)(relying upon pre-2000 authority for this proposition).

[9]  City of Pomona v. SQM N.Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014).

[10]  Quiet Tech. DC-8, Inc. v. Hurel-Dubois UK Ltd., 326 F.3d 1333, 1343 (11th Cir. 2003).

[11]  Puga v. RCX Sols., Inc., 922 F.3d 285, 294 (5th Cir. 2019). See also United States v. Hodge, 933 F.3d 468, 478 (5th Cir. 2019)(“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility and should be left for the jury’s consideration.”); MCI Communications Service Inc. v. KC Trucking & Equip. LLC, 403 F. Supp. 3d 548, 556 (W.D. La. 2019); Coleman v. United States, No. SA-16-CA-00817-DAE, 2017 WL 9360840, at *4 (W.D. Tex. Aug. 16, 2017); Alvarez v. State Farm Lloyds, No. SA-18-CV-01191-XR, 2020 WL 734482, at *3 (W.D. Tex. Feb. 13, 2020)(“To the extent State Farm wishes to attack the ‘bases and sources’ of Dr. Hall’s opinion, such questions affect the weight to be assigned to that opinion rather than its admissibility and should also be left for the jury’s consideration.”)(internal quotation and citation omitted); Patenaude v. Dick’s Sporting Goods, Inc., No. 9:18-CV-3151-RMG, 2019 WL 5288077, at *2 (D.S.C. Oct. 18, 2019) (“More fundamentally, each of these arguments goes to the factual basis of the report, … and it is well settled that the factual basis for an expert opinion generally goes to weight, not admissibility.”); Wischermann Partners, Inc. v. Nashville Hosp. Capital LLC, No. 3:17-CV-00849, 2019 WL 3802121, at *3 (M.D. Tenn. Aug. 13, 2019) (“[A]rguments that Pinkowski’s opinions are unreliable because he failed to review other relevant information and ignored certain facts bear on the factual basis for Pinkowski’s opinions, and, therefore, go to the weight, rather than the admissibility, of Pinkowski’s testimony.”).

[12]  Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 22 (1st Cir. 2011) (internal citations omitted), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

[13]  Kuhn v. Wyeth, Inc., 686 F.3d 618, 633 (8th Cir. 2012), rev’g Beylin v. Wyeth, 738 F.Supp. 2d 887, 892 (E.D.Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (excluding proffered testimony of Dr. Jasenka Demirovic who appeared to have “selected study data that best supported her opinion, while downplaying contrary findings or conclusions.”); see United States v. Finch, 630 F.3d 1057 (8th Cir. 2011) (the sufficiency of the factual basis for an expert’s testimony goes to credibility rather than admissibility, and only where the testimony “is so fundamentally unsupported that it can offer no assistance to the jury must such testimony be excluded”); Katzenmeier v. Blackpowder Prods., Inc., 628 F.3d 948, 952 (8th Cir. 2010)(“As a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination.”); Paul Beverage Co. v. American Bottling Co., No. 4:17CV2672 JCH, 2019 WL 1044057, at *2 (E.D. Mo. Mar. 5, 2019) (admitting challenged opinion testimony without addressing the expert witness’s basis or application of methodology, following the Eighth Circuit’s incorrect statement in Nebraska Plastics, Inc. v. Holland Colors Americas, Inc., 408 F.3d 410, 416 (8th Cir. 2005) that “[a]s a general rule, the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination[,]”). See also “The Fallacy of Cherry Picking As Seen in American Courtrooms” (May 3, 2014).

[14]  See “Judicial Dodgers – Reassigning the Burden of Proof on Rule 702” (May 13, 2020); “Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions” (May 11, 2020).

[15]  Fed. R. Evid. 702, Advisory Note (quoting In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994)).

[16]  See Nease v. Ford Motor Co., 848 F.3d 219, 231 (4th Cir. 2017) (“For the district court to conclude that Ford’s reliability arguments simply ‘go to the weight the jury should afford Mr. Sero’s testimony’ is to delegate the court’s gatekeeping responsibility to the jury.”).

[17]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702, at 1-2 (Apr. 1, 2018).

[18]  Id. at 43.

[19]  Id. at 43, 49-50.

[20]  Id. at 49-50.

[21]  Id. at 52.

[22]  Daniel J. Capra, Reporter, Reporter’s Memorandum re Possible Amendments to Rule 702, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019).

Judicial Dodgers – The Crossexamination Excuse for Denying Rule 702 Motions

May 11th, 2020

In my last post,[1] I praised Lee Mickus’s recent policy paper on amending Rule 702 for its persuasive force on the need for an amendment, as well as a source for helping lawyers anticipate common judicial dodges to a faithful application of the rule.[2] There are multiple dodges used by judicial dodgers, and it behooves litigants to recognize and anticipate them. In this post, and perhaps future ones, I elaborate upon the concerns that Mickus documents.

One prevalent judicial response to a Rule 702 motion is to kick the can and announce that the challenge to an expert witness’s methodological shenanigans can and should be addressed by cross-examination. This judicial response was, of course, the standard one before the 1993 Daubert decision, but Justice Blackmun’s opinion kept it alive in frequently quoted dicta:

“Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.”[3]

Justice Blackmun, no doubt, believed he was offering a “helpful” observation here, but the reality is quite different. Traditionally, courts allowed qualified expert witnesses to opine with wild abandon, after showing that they had the very minimal qualifications required to do so in court. In the face of this traditional judicial lassitude, “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof” were all a litigant could hope to accomplish in litigation. Furthermore, the litany of remedies for “shaky but admissible evidence” fails to help lower court judges and lawyers sort shaky but admissible evidence from shaky and inadmissible evidence.

Perhaps even more to the point, cases at common law “traditionally” did not involve multivariate logistic regression, structural equation models, propensity score weighting, and the like. Juries did just fine on whether Farmer Brown had exercised due care when he ran over his neighbor’s cow with his tractor, or even on whether, as a physician opined, a child born 350 days after the putative father’s death had been sired by the testator and was entitled to inherit from “dad.”

Mickus is correct that a trial judge’s comment that the loser of a Rule 702 motion is free to cross-examine is often a dodge, an evasion, or an outright failure to engage with the intricacies of a complex methodological challenge.[4] Stating that the “traditional and appropriate means of attacking shaky but admissible evidence” remain available is a truism, and might be offered as judicial balm to the motion loser, but the availability of such means is hardly an explanation or justification for denying the Rule 702 motion. Furthermore, Justice Blackmun’s observation about traditional means looked back at an era when, in most state and federal courts, a person found to be minimally qualified could pretty much say anything, regardless of scientific validity. That was the tradition that stood in active need of reform when Daubert was decided in 1993.

Mickus is also certainly correct that the whole point of judicial gatekeeping is that the presentation of viva voce testimony before juries is not an effective method for revealing shaky, inadmissible opinion testimony. A few courts have acknowledged that cross-examination in front of a jury is not an appropriate justification for admitting methodologically infirm expert witness opinion testimony. Judge Jed Rakoff, who served on the President’s Council of Advisors on Science and Technology,[5] addressed the limited ability of cross-examination in the context of forensic evidence:

“Although effective cross-examination may mitigate some of these dangers, the explicit premise of Daubert and Kumho Tire is that, when it comes to expert testimony, cross-examination is inherently handicapped by the jury’s own lack of background knowledge, so that the Court must play a greater role, not only in excluding unreliable testimony, but also in alerting the jury to the limitations of what is presented.”[6]

Judge Rakoff’s point is by no means limited to forensic evidence, and it has been acknowledged more generally by Professor Daniel Capra, the Reporter to the Advisory Committee on Evidence Rules:

“the key to Daubert is that cross-examination alone is ineffective in revealing nuanced defects in expert opinion testimony and that the trial judge must act as a gatekeeper to ensure that unreliable opinions don’t get to the jury in the first place.”[7]

Juries do not arrive at the court house knowledgeable about statistical and scientific methods; nor are they prepared to spend weeks going over studies to assess their quality, and whether an expert witness cherry picked data, misapplied a methodology, or conducted an insufficient investigation.[8] In discussing the problem of expert witnesses’ overstating the strength of their opinions, beyond what is supported by evidence, the Reporter stressed the limits and ineffectiveness of remedial adversarial cross-examination:

“Perhaps another way to think about cross-examination as a remedy is to compare the overstatement issue to the issues of sufficiency of basis, reliability of methodology, and reliable application of that methodology. As we know, those three factors must be shown by a preponderance of the evidence. The whole point of Rule 702 — and the Daubert-Rule 104(a) gatekeeping function — is that these issues cannot be left to cross-examination. The underpinning of Daubert is that an expert’s opinion could be unreliable and the jury could not figure that out, even given cross-examination and argument, because the jurors are deferent to a qualified expert (i.e., the white lab coat effect). The premise is that cross-examination cannot undo the damage that has been done by the expert who has power over the jury. This is because, for the very reason that an expert is needed (because lay jurors need assistance) the jury may well be unable to figure out whether the expert is providing real information or junk. The real question, then, is whether the dangers of overstatement are any different from the dangers of insufficient basis, unreliability of methodology, and unreliable application. Why would cross-examination be insufficient for the latter yet sufficient for the former?

It is hard to see any difference between the risk of overstatement and the other risks that are regulated by Rule 702. When an expert says that they are certain of a result — when they cannot be — how is that easier for the jury to figure out than if an expert says something like ‘I relied on four scientifically valid studies concluding that PCB’s cause small lung cancer’. When an expert says he employed a ‘scientific methodology’ when that is not so, how is that different from an expert saying “I employed a reliable methodology” when that is not so?”[9]

The Reporter’s example of PCBs and small lung cancer was an obvious reference to the Joiner case, in which the Supreme Court held that the trial judge had properly excluded causation opinions. The Reporter’s point goes directly to the cross-examination excuse for shirking the gatekeeping function. In Joiner, the Court held that gatekeeping was necessitated because cross-examination was insufficient in the face of an analytical gap between methodology and conclusion.[10] Indeed, such gaps are, or should be, at the heart of most well-conceived Rule 702 challenges.

The problem is not only that juries defer to expert witnesses. Juries lack the competence to assess scientific validity. Although many judges are lacking in such competence, at least litigants can expect them to read the Reference Manual on Scientific Evidence before they read the parties’ briefs and the expert witnesses’ reports. If the trial judge’s opinion evidences ignorance of the Manual, then at least there is the possibility of an appeal. It will be a strange day in a stranger world, when a jury summons arrives in the mail with a copy of the Manual!

The rules of evidence permit expert witnesses to rely upon inadmissible evidence, at least when experts in their field would do so reasonably. To decide whether the reliance is reasonable requires the decision maker to go outside the “proofs” that would typically be offered at trial. Furthermore, the decision maker – the gatekeeper – will have to read the relied-upon study and data to evaluate the reasonableness of the reliance. In a jury trial, the actual studies relied upon are rarely admissible, and so the jury almost never has the opportunity to read them to make its own determination of the reasonableness of reliance, or of whether the study and its data really support what the expert witness draws from them.

Of course, juries do not have to write opinions about their findings. They need neither explain nor justify their verdicts, once the trial court has deemed that there is the minimally sufficient evidence to support a verdict. Juries, with whatever help cross-examination provides, in the absence of gatekeeping, cannot deliver anything approaching scientific due process of law.

Despite Supreme Court holdings, a substantially revised and amended Rule 702, and clear direction from the Advisory Committee, some lower courts have actively resisted enforcing the requirements of Rule 702. Part of this resistance consists in pushing the assessment of the reliability of the data and assumptions used in applying a given methodology out of the gatekeeping column and into the jury’s column. Despite the clear language of Rule 702, and the Advisory Committee Note,[11] some Circuits of the Court of Appeals have declared that assessing the reliability of assumptions and data is not judges’ work (outside of a bench trial).[12]

As Seinfeld has taught us, rules are like reservations. It is not enough to make the rules, you have to keep and follow them. Indeed, following the rule is really the important part.[13] Although an amended Rule 702 might include a provision that “we really mean this,” perhaps it is worth a stop at the Supreme Court first to put down the resistance.


[1]  “Should Federal Rule of Evidence 702 Be Amended?” (May 8, 2020).

[2]  Lee Mickus, “Gatekeeping Reorientation: Amend Rule 702 to Correct Judicial Misunderstanding about Expert Evidence,” Washington Legal Foundation Critical Legal Issues Working Paper No. 217 (May 2020).

[3]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 596 (1993).

[4]  See, e.g., AmGuard Ins. Co. v. Lone Star Legal Aid, No. CV H-18-2139, 2020 WL 60247, at *8 (S.D. Tex. Jan. 6, 2020) (“[O]bjections [that the expert could not link her experienced-based methodology to her conclusions] are better left for cross examination, not a basis for exclusion.”); Powell v. Schindler Elevator Corp., No. 3:14cv579 (WIG), 2015 WL 7720460, at *2 (D. Conn. Nov. 30, 2015) (“To the extent Defendant argues that Mr. McPartland’s conclusions are unreliable, it may attack his report through cross examination.”); Wright v. Stern, 450 F. Supp. 2d 335, 359–60 (S.D.N.Y. 2006) (“In a close case, a court should permit the testimony to be presented at trial, where it can be tested by cross-examination and measured against the other evidence in the case.”) (internal citation omitted). See also Adams v. Toyota Motor Corp., 867 F.3d 903, 916 (8th Cir. 2017) (affirming admission of expert testimony, reiterating the flexibility of the Daubert inquiry and emphasizing that defendant’s concerns could all be addressed with “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof”); Liquid Dynamics Corp. v. Vaughan Corp., 449 F.3d 1209, 1221 (Fed. Cir. 2006) (“The identification of such flaws in generally reliable scientific evidence is precisely the role of cross-examination.” (internal citation omitted)); Carmichael v. Verso Paper, LLC, 679 F. Supp. 2d 109, 119 (D. Me. 2010) (“[W]hen the adequacy of the foundation for the expert testimony is at issue, the law favors vigorous cross-examination over exclusion.”); Crawford v. Franklin Credit Mgt. Corp., 08-CV-6293 (KMW), 2015 WL 13703301, at *6 (S.D.N.Y. Jan. 22, 2015) (“In light of the ‘presumption of admissibility of evidence,’ that opportunity [for cross-examination] is sufficient to ensure that the jury receives testimony that is both relevant and reliable.”) (internal citation omitted).

Even the most explicitly methodological challenges are transmuted into cross-examination issues by refusenik courts. For instance, cherry picking is reduced to a credibility issue for the jury and treated as not germane to the court’s Rule 702 determination. In re Chantix Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1288 (N.D. Ala. 2012) (finding that an expert witness’s deliberate decision not to rely upon clinical trial data merely “is a matter for cross-examination, not exclusion under Daubert”); In re Urethane Antitrust Litig., 2012 WL 6681783, at *3 (D.Kan.) (“The extent to which [an expert] considered the entirety of the evidence in the case is a matter for cross-examination.”); Bouchard v. Am. Home Prods. Corp., 2002 WL 32597992, at *7 (N.D. Ohio) (“If the plaintiff believes that the expert ignored evidence that would have required him to substantially change his opinion, that is a fit subject for cross-examination.”). Similarly, courts have by ipse dixit transformed the flawed application of a standard methodology into merely a credibility issue to be explored on cross-examination rather than addressed by judicial gatekeeping. United States v. Adam Bros. Farming, 2005 WL 5957827, at *5 (C.D. Cal. 2005) (“Defendants’ objections are to the accuracy of the expert’s application of the methodology, not the methodology itself, and as such are properly reserved for cross-examination.”); Oshana v. Coca-Cola Co., 2005 WL 1661999, at *4 (N.D. Ill.) (“Challenges addressing flaws in an expert’s application of reliable methodology may be raised on cross-examination.”).

[5]  President’s Council of Advisors on Science and Technology, Report to the President on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (Sept. 2016).

[6]  United States v. Glynn, 578 F. Supp. 2d 567, 574 (S.D.N.Y. 2008) (Rakoff, J.).

[7]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 23 (May 3, 2019) (comments of the Reporter).

[8]  Daniel J. Capra, Reporter’s Memorandum re Forensic Evidence, Daubert and Rule 702 at 50 (April 1, 2018) (identifying issues such as insufficient investigation, cherry-picking data, or misapplying standard methodologies, as examples of a “white lab coat” problem resulting from juries’ inability to evaluate expert witnesses’ factual bases, methodologies, and applications of methods).

[9]  Daniel J. Capra, Reporter, Advisory Comm. on Evidence Rules, Minutes of Meeting at 10-11 (Oct. 1, 2019) (comments of the Reporter on possible amendment of Rule 702) (internal citation to Joiner omitted).

[10]  Id. at 11 n.5.

[11]  See In re Paoli RR Yard PCB Litig., 35 F.3d 717, 745 (3d Cir. 1994) (calling for a close, careful analysis of the application of a proper methodology to every step in the case; “any step that renders the analysis unreliable renders the expert’s testimony inadmissible whether the step completely changes a reliable methodology or merely misapplies that methodology”).

[12]  See, e.g., City of Pomona v. SQM North Am. Corp., 750 F.3d 1036, 1047 (9th Cir. 2014) (rejecting the Paoli any-step approach without careful analysis of the statute, the advisory committee note, or Supreme Court decisions); Manpower, Inc. v. Ins. Co. of Pa., 732 F.3d 796, 808 (7th Cir. 2013) (“[t]he reliability of data and assumptions used in applying a methodology is tested by the adversarial process and determined by the jury; the court’s role is generally limited to assessing the reliability of the methodology – the framework – of the expert’s analysis”); Bonner v. ISP Techs., Inc., 259 F.3d 924, 929 (8th Cir. 2001) (“the factual basis of an expert opinion goes to the credibility of the testimony, not the admissibility, and it is up to the opposing party to examine the factual basis for the opinion in cross-examination”).

[13]  Despite the clarity of the revised Rule 702, and the intent to synthesize Daubert, Joiner, Kumho Tire, and Weisgram, some courts have insisted that nothing changed with the amended rule. See, e.g., Pappas v. Sony Elec., Inc., 136 F. Supp. 2d 413, 420 & n.11 (W.D. Pa. 2000) (opining that Rule 702 as amended did not change the application of Daubert within the Third Circuit) (“The Committee Notes to the amended Rule 702 cite and discuss several Court of Appeals decisions that have properly applied Daubert and its progeny. Among these decisions are numerous cases from the Third Circuit. See Committee Note to 2000 Amendments to Fed. R.Evid. 702. Accordingly, I conclude that amended Rule 702 does not effect a change in the application of Daubert in the Third Circuit.”). Of course, if nothing changed, then the courts that take this position should be able to square their decisions with the text of Rule 702, as amended in 2000.

Disproportionality Analyses Misused by Lawsuit Industry

April 20th, 2020

Adverse event reporting is a recognized, important component of pharmacovigilance. Regulatory agencies around the world further acknowledge that an increased rate of reporting of a specific adverse event may signal the possible existence of an association. In the last two decades, pharmacoepidemiologists have developed techniques for mining databases of adverse event reports for evidence of a disproportionate level of reporting for a particular medication – adverse event pair. Such studies can help identify “signals” of potential issues for further study with properly controlled epidemiologic studies.[1]

Most sane and sensible epidemiologists recognize that the low quality, inconsistencies, and biases of the data in adverse event reporting databases render studies of disproportionate reporting “poor surrogates for controlled epidemiologic studies.” In the face of incomplete and inconsistent reporting, so-called disproportionality analyses (“DPA”) assume that incomplete reporting will be constant for all events for a specific medication. Regulatory attention, product labeling, lawyer advertising and client recruitment, social media and publicity, and time since launch are all known to affect reporting rates, and to ensure that reporting rates for some event types for a specific medication will be higher. Thus, the DPA assumptions are virtually always false and unverifiable.[2]

DPAs are non-analytical epidemiologic studies that cannot rise in quality or probativeness above the level of the anecdotes upon which they are based. DPAs may generate signals or hypotheses, but they cannot test hypotheses of causality. Although simple in concept, DPAs involve some complicated computations that imbue them with an aura of “proofiness.” As would-be studies that lack probativeness for causality, they are ideal tools for the lawsuit industry to support litigation campaigns against drugs and medical devices. Indeed, if a statistical technique is difficult to understand but relatively easy to perform and even easier to pass off to unsuspecting courts and juries, then you can count on its metastatic use in litigation. The DPA has become one of the favorite tools of the lawsuit industry’s statisticians. This litigation use, however, cannot obscure the simple fact that the relative reporting risk provided by a DPA can never rise to the level of a relative risk.
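For readers who want to see what the “complicated computations” amount to, here is a minimal sketch, using entirely hypothetical report counts (not drawn from any study or litigation discussed here), of the two most common disproportionality measures: the proportional reporting ratio (PRR) and the reporting odds ratio (ROR). The point is that both statistics compare reporting proportions within a spontaneous-report database; neither says anything about incidence or relative risk.

```python
# Hypothetical 2x2 table of spontaneous adverse event reports (assumed counts):
#                      event of interest    all other events
# drug of interest            a                    b
# all other drugs             c                    d
import math

a, b = 40, 960        # reports for the drug of interest
c, d = 200, 48_800    # reports for all other drugs

prr = (a / (a + b)) / (c / (c + d))            # proportional reporting ratio
ror = (a * d) / (b * c)                        # reporting odds ratio
se_log_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)  # rough standard error on log scale
ci = (math.exp(math.log(ror) - 1.96 * se_log_ror),
      math.exp(math.log(ror) + 1.96 * se_log_ror))

print(f"PRR = {prr:.1f}; ROR = {ror:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
# Both ratios measure disproportionate *reporting*; even a large value with a
# tight interval is a signal for further study, not evidence of causation.
```

Even when the interval excludes 1.0, the output is a reporting association only, which is why regulators treat such results as hypothesis-generating.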

In one case in which a Parkinson’s disease patient claimed that his compulsive gambling was caused by his use of the drug Requip, the plaintiff’s expert witness attempted to invoke a DPA in support of his causal claim. In granting a Rule 702 motion to exclude the expert witnesses who relied upon a DPA, the trial judge rejected the probativeness of DPAs, based upon the FDA’s rejection of such analyses for anything other than signal detection.[3]

In the Accutane litigation, statistician David Madigan attempted to support his fatally weak causation opinion with a DPA for Crohn’s disease and Accutane adverse event reports. According to the New Jersey Supreme Court, Madigan claimed that his DPA showed a “striking signal of disproportionality” indicative of a “strong association” between Accutane use and Crohn’s disease.[4] With the benefit of a thorough review by the trial court, the New Jersey Supreme Court found other indicia of unreliability in Madigan’s opinions, such that it was not fooled by Madigan’s shenanigans. In any event, no signal of disproportionality could ever show an association between medication use and a disease; at best the DPA can show only an association between reporting of the medication use and the outcome of interest.

In litigation over Mirena and intracranial hypertension, one of the lawsuit industry’s regulars, Mahyar Etminan, published a DPA based upon the FDA’s Adverse Event Reporting System, which purported to find an increased reporting odds ratio.[5] Unthinkingly, the plaintiffs’ other testifying expert witnesses relied upon Etminan’s study. When a defense expert witness pointed out that Etminan had failed to adjust for age and gender in his multivariate analysis,[6] Etminan repudiated his findings.[7] Remarkably, when Etminan published his original DPA in 2015, he declared that he had no conflicts, but when he published his repudiation, he disclosed that he “has been an expert witness in Mirena litigation in the past but is no longer part of the litigation.” The Etminan kerfuffle helped scuttle the plaintiffs’ assault on Mirena.[8]

DPAs have, on occasion, bamboozled federal judges into treating them as analytical epidemiology that can support causal claims. For instance, misrepresentations or misunderstandings of what DPAs can and cannot do carried the day in a Rule 702 contest over the admissibility of opinion testimony by statistician Rebecca Betensky. In multidistrict litigation over the safety of inferior vena cava (“IVC”) filters, plaintiffs’ counsel retained Betensky to prepare a DPA of adverse events reported for the defendants’ retrievable filters. The MDL judge’s description of Betensky’s opinion demonstrates that her DPA was either misrepresented or misunderstood:

“In this MDL, Dr. Betensky opines generally that there is a higher risk of adverse events for Bard’s retrievable IVC filters than for its permanent SNF.”[9]

The court clearly took Betensky to be opining about risk and not the risk of reporting. The court’s opinion goes on to describe Betensky’s calculation of a “reporting risk ratio,” but the court found that she could testify that the retrievable IVC filters increased the risk of the claimed adverse events, and not merely that there was an increase in reporting risk ratios.

Betensky acknowledged that the reporting risk ratios were “imperfect estimates of the actual risk ratios,”[10] but nevertheless dismissed all caveats about the inability of DPAs to assess actual increased risk. The trial court quoted Dr. Betensky’s attempt to infuse analytical rigor into a data mining exercise:

“[A]dverse events are generally considered to be underreported to the databases, and potentially differentially by severity of adverse event and by drug or medical device. . . . It is important to recognize that underreporting in and of itself is not problematic. Rather, differential underreporting of the higher risk device is what leads to bias. And even if there was differential underreporting of the higher risk device, given the variation in reporting relative risks across adverse events, the differential reporting would have had to have been highly variable across adverse events. This does not seem plausible given the severity of the adverse events considered. Given the magnitude of the RRR’s [relative reporting ratios], and their variability across adverse events, it seems implausible that differential underreporting by filter could fully explain the deviation of the observed RRR’s from 1.”[11]

Of course, this explanation fails to account for differential over-reporting for the newer, but less risky or equally risky, device. Betensky dismissed notoriety bias as having caused an increase in reporting adverse events because her DPA ended with 2014, before the FDA had issued a warning letter. The lawsuit industry, however, was on the attack against IVC filters years before 2014.[12] Similarly, Betensky dismissed consideration of the Weber effect, but her analysis apparently failed to acknowledge that notoriety and the Weber effect are just two of many possible biases in DPAs.
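To make concrete how differential reporting alone can produce an impressive-looking reporting risk ratio, consider a minimal sketch with assumed numbers (my illustration, not data from the Bard record): two devices with identical true event risks, one of which attracts far more reports because of publicity and litigation activity.

```python
# Assumed values for illustration only -- not taken from the IVC filter litigation.
true_risk = 0.01                     # identical true event risk for both devices
users_new = users_old = 100_000      # same number of users of each device

report_rate_new = 0.60               # stimulated reporting (publicity, lawyer ads)
report_rate_old = 0.20               # older device, fewer events get reported

reports_new = true_risk * users_new * report_rate_new    # 600 reports
reports_old = true_risk * users_old * report_rate_old    # 200 reports

rrr = (reports_new / users_new) / (reports_old / users_old)
print(f"Reporting risk ratio = {rrr:.1f}, although the true risk ratio is 1.0")
# A three-fold "signal" generated entirely by who reports, not by any difference in risk.
```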

In the face of her credentials, the MDL trial judge retreated to the usual chestnuts that are served up when a Rule 702 challenge is denied.  Judge Campbell thus observed that “[i]t is not the job of the court to insure that the evidence heard by the jury is error-free, but to insure that it is sufficiently reliable to be considered by the jury.”[13]  The trial judge professed a need to be “be careful not to conflate questions of admissibility of expert testimony with the weight appropriately to be accorded to such testimony by the fact finder.”[14] The court denied the claim that Betensky had engaged in an ipse dixit, by engaging in its own ipse dixit. Judge Campbell found that Betensky had explained her assumptions, had acknowledged shortcomings, and had engaged in various sensitivity tests of the validity of her DPA; and so he concluded that Betensky did not present “a case where ‘there is simply too great an analytical gap between the data and the opinion proffered’.”[15]

By closing off inquiry into the limits of the DPA methodology, Judge Campbell stumbled into a huge analytical gap that he either ignored or failed to see. Even the best DPAs cannot substitute for analytical epidemiology in a scientific methodology for determining causation. The ipse dixit becomes apparent when we consider that the MDL gatekeeping opinion on Rebecca Betensky fails to mention the extensive body of regulatory and scientific opinion about the distinct methodologic limitations of DPAs. The U.S. FDA’s official guidance on good pharmacovigilance practices, for example, instructs us that

“[d]ata mining is not a tool for establishing causal attributions between products and adverse events.”[16]

The FDA specifically cautions that the signals detected by data mining techniques should be acknowledged to be “inherently exploratory or hypothesis generating.”[17] The agency exercises caution when making its own comparisons of adverse events between products in the same class because of the low quality of the data themselves, and uncontrollable and unpredictable biases in how the data are collected.[18] Because of the uncertainties in DPAs,

“FDA suggests that a comparison of two or more reporting rates be viewed with extreme caution and generally considered exploratory or hypothesis-generating. Reporting rates can by no means be considered incidence rates, for either absolute or comparative purposes.”[19]

The European Medicines Agency offers similar advice and caution:

“Therefore, the concept of SDR [Signal of Disproportionate Reporting] is applied in this guideline to describe a ‘statistical signal’ that has originated from a statistical method. The underlying principle of this method is that a drug–event pair is reported more often than expected relative to an independence model, based on the frequency of ICSRs on the reported drug and the frequency of ICSRs of a specific adverse event. This statistical association does not imply any kind of causal relationship between the administration of the drug and the occurrence of the adverse event.”[20]

The current version of perhaps the leading textbook on pharmacoepidemiology is completely in accord with the above regulatory guidances. In addition to emphasizing the limitations on data quality from adverse event reporting, and the inability to interpret temporal trends, the textbook authors clearly characterize DPAs as generating signals, and unable to serve as hypothesis tests:

“a signal of disproportionality is a measure of a statistical association within a collection of AE/ADR reports (rather than in a population), and it is not a measure of causality. In this regard, it is important to underscore that the use of data mining is for signal detection – that is, for hypothesis  generation – and that further work is needed to evaluate the signal.”[21]

Reporting ratios are not, and cannot serve as, measures of incidence or prevalence, because adverse event databases do not capture all the events of interest, and so, as the authors note, such a ratio “must be interpreted cautiously.”[22] The authors further emphasize that “well-designed pharmacoepidemiology or clinical studies are needed to assess the signal.”[23]

The authors of this chapter are all scientists and officials at the FDA’s Center for Drug Evaluation and Research, and the World Health Organization. Although they properly disclaimed writing on behalf of their agencies, their agencies have independently embraced their concepts in other agency publications. The consensus view of the hypothesis-generating nature of DPAs can easily be seen in surveying the relevant literature.[24] Passing off a DPA as a study that supports causal inference is not a mere matter of “weight,” nor is its exclusion a matter of rejecting any opinion that has some potential for error. The misuse of Betensky’s DPA is a methodological error that goes to the heart of what Congress intended to be screened and excluded by Rule 702.


[1]  Sean Hennessy, “Disproportionality analyses of spontaneous reports,” 13 Pharmacoepidemiology & Drug Safety 503, 503 (2004).

[2]  Id. See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 68-69 (2nd ed. 2017) (noting the example of the WHO’s DPA that found a 10-fold reporting rate increase for statins and ALS, which reporting association turned out to be spurious).

[3]  Wells v. SmithKline Beecham Corp., 2009 WL 564303, at *12 (W.D. Tex. 2009) (citing and quoting from the FDA’s Guidance for Industry: Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment (2005)), aff’d, 601 F.3d 375 (5th Cir. 2010). But see In re Abilify (Aripiprazole) Prods. Liab. Litig., 299 F. Supp. 3d 1291, 1324 (N.D. Fla. 2018) (noting that the finding of a DPA that compared Abilify with other anti-psychotics helped to show that a traditional epidemiologic study was not confounded by the indication for depressive symptoms).

[4]  In re Accutane Litig., 234 N.J. 340, 191 A.3d 560, 574 (2018).

[5]  See Mahyar Etminan, Hao Luo, and Paul Gustafson, et al., “Risk of intracranial hypertension with intrauterine levonorgestrel,” 6 Therapeutic Advances in Drug Safety 110 (2015).

[6]  Deborah Friedman, “Risk of intracranial hypertension with intrauterine levonorgestrel,” 7 Therapeutic Advances in Drug Safety 23 (2016).

[7]  Mahyar Etminan, “Revised disproportionality analysis of Mirena and benign intracranial hypertension,” 8 Therapeutic Advances in Drug Safety 299 (2017).

[8]  In re Mirena IUS Levonorgestrel-Related Prods. Liab. Litig. (No. II), 387 F. Supp. 3d 323, 331 (S.D.N.Y. 2019) (Engelmayer, J.).

[9]  In re Bard IVC Filters Prods. Liab. Litig., No. MDL 15-02641-PHX DGC, Order Denying Motion to Exclude Rebecca Betensky at 2 (D. Ariz. Jan. 22, 2018) (Campbell, J.) (emphasis added) [Order]

[10]  Id. at 4.

[11]  Id.

[12]  See Matt Fair, “C.R. Bard’s Faulty Filters Pose Health Risks, Suit Says,” Law360 (Aug. 10, 2012); see, e.g., Derrick J. Stobaugh, Parakkal Deepak, & Eli D. Ehrenpreis, “Alleged isotretinoin-associated inflammatory bowel disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 393 (2013) (documenting stimulated reporting from litigation activities).

[13]  Order at 6, quoting from Southwire Co. v. J.P. Morgan Chase & Co., 528 F. Supp. 2d 908, 928 (W.D. Wis. 2007).

[14]  Id., citing In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010).

[15]  Id., citing and quoting from In re Trasylol Prods. Liab. Litig., No. 08-MD-01928, 2010 WL 1489793, at *7 (S.D. Fla. Feb. 24, 2010) (quoting General Electric v. Joiner, 522 U.S. 136, 146 (1997)).

[16]  FDA, “Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment Guidance for Industry” at 8 (2005) (emphasis added).

[17]  Id. at 9.

[18]  Id.

[19]  Id. at 11 (emphasis added).

[20]  EUDRAVigilance Expert Working Group, European Medicines Agency, “Guideline on the Use of Statistical Signal Detection Methods in the EUDRAVigilance Data Analysis System,” at 3 (2006) (emphasis added).

[21]  Gerald J. Dal Pan, Marie Lindquist & Kate Gelperin, “Postmarketing Spontaneous Pharmacovigilance Reporting Systems,” in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 185 (6th ed. 2020) (emphasis added).

[22]  Id. at 187.

[23]  Id. See also Andrew Bate, Gianluca Trifirò, Paul Avillach & Stephen J.W. Evans, “Data Mining and Other Informatics Approaches to Pharmacoepidemiology,” chap. 27, in Brian L. Strom & Stephen E. Kimmel and Sean Hennessy, Pharmacoepidemiology at 685-88 (6th ed. 2020) (acknowledging the importance of DPAs for detecting signals that must then be tested with analytical epidemiology) (authors from industry, Pfizer, and academia, including NYU School of Medicine, Harvard Medical School, and London School of Hygiene and Tropical Medicine).

[24]  See, e.g., Patrick Waller & Mira Harrison-Woolrych, An Introduction to Pharmacovigilance 61 (2nd ed. 2017) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk.” *** “Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Ronald D. Mann & Elizabeth B. Andrews, Pharmacovigilance 240 (2d ed. 2007) (“Importantly, data mining cannot prove or refute causal associations between drugs and events. Data mining simply identifies disproportionality of drug–event reporting patterns in databases. The absence of a signal does not rule out a safety problem. Similarly, the presence of a signal is not a proof of a causal relationship between a drug and an adverse event.”); Patrick Waller, An Introduction to Pharmacovigilance 49 (2010) (“[A]lthough the numbers are calculated in a similar way to relative risks, they do not represent a meaningful calculation of risk. Whilst it is true that the greater the degree of disproportionality, the more reason there is to look further, the only real utility of the numbers is to decide whether or not there are more cases than might reasonably have been expected. Indicators of disproportionality are measures of association and even quite extreme results may not be causal.”); Sidney N. Kahn, “You’ve found a safety signal–now what?  Regulatory implications of industry signal detection activities,” 30 Drug Safety 615 (2007).

Palavering About P-Values

August 17th, 2019

The American Statistical Association’s most recent confused and confusing communication about statistical significance testing has given rise to great mischief in the world of science and science publishing.[1] Take for instance last week’s opinion piece about “Is It Time to Ban the P Value?” Please.

Helena Chmura Kraemer is an accomplished professor of statistics at Stanford University. This week the Journal of the American Medical Association network flagged Professor Kraemer’s opinion piece on p-values as one of its most read articles. Kraemer’s eye-catching title creates the impression that the p-value is unnecessary and inimical to valid inference.[2]

Remarkably, Kraemer’s article commits the very mistake that the ASA set out to correct back in 2016,[3] by conflating the probability of the data under a hypothesis of no association with the probability of a hypothesis given the data:

“If P value is less than .05, that indicates that the study evidence was good enough to support that hypothesis beyond reasonable doubt, in cases in which the P value .05 reflects the current consensus standard for what is reasonable.”

The ASA tried to break the bad habit of scientists’ interpreting p-values as allowing us to assign posterior probabilities, such as beyond a reasonable doubt, to hypotheses, but obviously to no avail.
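The distinction matters arithmetically, not just philosophically. A minimal sketch, with assumed values for the prior plausibility of tested hypotheses and for study power (my numbers, not the ASA’s or Kraemer’s), shows that a “significant” p-value can leave a hypothesis far short of “beyond reasonable doubt”:

```python
# Bayes' theorem applied to "significant" results; all inputs are assumed for illustration.
prior_true = 0.10    # assumed share of tested hypotheses that are actually true
alpha = 0.05         # false-positive rate when the null is true
power = 0.50         # assumed chance of detecting a real effect

p_significant = prior_true * power + (1 - prior_true) * alpha
posterior_true = prior_true * power / p_significant
print(f"P(hypothesis true | p < .05) = {posterior_true:.2f}")   # about 0.53
# A posterior near a coin flip, even though every such result "passed" p < .05 --
# the p-value is P(data | null hypothesis), not P(hypothesis | data).
```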

Kraemer also ignores the ASA 2016 Statement’s teaching of what the p-value is not and cannot do, by claiming that p-values are determined by sources of non-random error, such as:

“the reliability and sensitivity of the measures used, the quality of the design and analytic procedures, the fidelity to the research protocol, and in general, the quality of the research.”

Kraemer provides errant advice and counsel by insisting that “[a] non-significant result indicates that the study has failed, not that the hypothesis has failed.” If the p-value is the probability of observing an association at least as large as the one observed, given an assumed null hypothesis, then of course a large p-value cannot speak to the failure of the hypothesis, but why declare that the study has failed? The study was perhaps indeterminate, but it still yielded information that can be combined with other data, or that can help guide future studies.
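A minimal simulation, under assumptions of my own choosing (a modest true effect and a small sample size), illustrates why a non-significant result more often reflects an indeterminate study than a failed hypothesis:

```python
# Sketch: with a real but modest effect and an under-powered design, most
# simulated studies return p > 0.05. Effect size and sample size are assumed
# for illustration; nothing here comes from Kraemer's article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_arm, n_sims = 0.3, 30, 10_000   # assumed values

p_values = np.empty(n_sims)
for i in range(n_sims):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    p_values[i] = stats.ttest_ind(treated, control).pvalue

print(f"Share of studies with p > 0.05 despite a real effect: {(p_values > 0.05).mean():.0%}")
# Most of these simulated studies are "non-significant," although the effect is
# real; the studies are uninformative, not the hypothesis refuted.
```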

Perhaps in her most misleading advice, Kraemer asserts that:

“[w]hether P values are banned matters little. All readers (reviewers, patients, clinicians, policy makers, and researchers) can just ignore P values and focus on the quality of research studies and effect sizes to guide decision-making.”

Really? If a high quality study finds an “effect size” of interest, we can now ignore random error?

The ASA 2016 Statement, with its “six principles,” has provoked some deliberate or ill-informed distortions in American judicial proceedings, but Kraemer’s editorial creates idiosyncratic meanings for p-values. Even the 2019 ASA “post-modernism” does not advocate ignoring random error and p-values; it proscribes only the dichotomous characterization of results as “statistically significant,” or not.[4] The current author guidelines for articles submitted to the Journals of the American Medical Association clearly reject this new-fangled dismissal of the need to assess the role of random error.[5]


[1]  See Ronald L. Wasserstein, Allen L. Schirm, and Nicole A. Lazar, “Editorial: Moving to a World Beyond ‘p < 0.05’,” 73 Am. Statistician S1, S2 (2019).

[2]  Helena Chmura Kraemer, “Is It Time to Ban the P Value?J. Am. Med. Ass’n Psych. (August 7, 2019), in-press at doi:10.1001/jamapsychiatry.2019.1965.

[3]  Ronald L. Wasserstein & Nicole A. Lazar, “The ASA’s Statement on p-Values: Context, Process, and Purpose,” 70 The American Statistician 129 (2016).

[4]  “Has the American Statistical Association Gone Post-Modern?” (May 24, 2019).

[5]  See instructions for authors at https://jamanetwork.com/journals/jama/pages/instructions-for-authors