For your delectation and delight, desultory dicta on the law of delicts.

Biostatistics and FDA Regulation: The Convergence of Science and Law

May 29th, 2014

On May 20, 2014, the Food and Drug Law Institute (FDLI), the Drug Information Association (DIA), and the Harvard Law School’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, in collaboration with the Harvard School of Public Health Department of Biostatistics and Harvard Catalyst | The Harvard Clinical and Translational Science Center, presented a symposium on “Biostatistics and FDA Regulation: The Convergence of Science and Law.”

The symposium might just as well have been described as the collision of science and law.

The Symposium agenda addressed several cutting-edge issues on statistical evidence in the law: criminal, civil, and regulatory. Names of presenters are hyperlinked to presentation slides, where available.

I. Coleen Klasmeier, of Sidley Austin LLP, introduced and moderated the first section, “Introduction to Statistics and Regulatory Law,” which focused on current biostatistical issues in regulation of drugs, devices, and foods by the Food and Drug Administration (FDA). Qi Jiang, Executive Director of Amgen, Robert T. O’Neill, retired from the FDA, and now Statistical Advisor in CDER, and Jerald S. Schindler, of Merck Research Laboratories, presented.

II. Qi Jiang moderated and introduced the second section on safety issues, and the difficulties presented by meta-analysis and other statistical assessments of safety outcomes in clinical trials and in marketing of drugs and devices. Lee-Jen Wei, of the Harvard School of Public Health, Geoffrey M. Levitt, an Associate General Counsel of Pfizer, Inc., and Janet Wittes, of the Statistics Collaborative, presented.

III. Aaron Katz, of Ropes & Gray LLP, introduced the third section, on “Statistical Disputes in Life Sciences Litigation,” which addressed recent developments in expert witness gatekeeping, the Avandia litigation, and the role of statistics in two recent cases, Matrixx, Inc. v. Siracusano, and United States v. Harkonen. Anand Agneshwar, of Arnold & Porter LLP, Lee-Jen Wei, Christina L. Diaz, Assistant General Counsel of GlaxoSmithKline, and Nathan A. Schachtman presented.

IV. Christopher Robertson, a law professor now visiting at Harvard Law School, moderated a talk by Robert O’Neill on “Emerging Issues,” at the FDA.

V. Dr. Wittes moderated a roundtable discussion on “Can We Handle the Truth,” which explored developments in First Amendment and media issues involved in regulation and litigation. Anand Agneshwar and Freddy A. Jimenez, Assistant General Counsel, Johnson & Johnson, presented.

The Outer Limits (and Beyond?) of Ex Parte Advocacy of Federal Judges

May 23rd, 2014

As every trial lawyer knows, people sometimes reveal important facts in curious ways, incorporated in their own biased narrative of events.  Recently, I heard a recorded lecture about expert witnesses, by a plaintiffs’ lawyer, who revealed a damning fact about a judge.  The lawyer clearly thought that this fact was commendatory, but in fact revealed another effort of scientific advocates and zealots to subvert the neutrality of federal judges.  See In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992) (describing effort by plaintiffs’ lawyers and the late Dr. Irving Selikoff to corrupt state and federal judges with one-sided ex parte presentations of their views at the so-called Third-Wave Conference).

Anthony Z. Roisman is the Managing Partner of the National Legal Scholars Law Firm.  This firm has a roster of affiliated law professors who serve as consultants for plaintiffs in environmental and tort cases. (Some other participants in this law firm include Jay M. Feinman, Lucinda M. Finley, Neil Vidmar, and Richard W. Wright.) Roisman has been active in various plaintiff organizations, including serving as the head of the ATLA Section on Toxic, Environmental & Pharmaceutical Torts (STEP). 

Roisman lectures frequently for the American Law Institute on expert witness issues. Recently, I was listening to an mp3 recording of one of Roisman’s lectures on expert witnesses in environmental litigation. Given Roisman’s practice and politics, I was not surprised to hear him praise Judge Rothstein’s opinion that refused to exclude plaintiffs’ expert witnesses’ causation opinions in the PPA litigation. See In re Phenylpropanolamine Prod. Liab. Litig., 289 F. Supp. 2d 1230 (W.D. Wash. 2003). What stunned me, however, was his statement that Judge Rothstein issued her opinion “fresh from a seminar at the Tellus Institute,” which he described as an “organization set up by scientists trying to bring common sense to the interpretation of science.”

Post hoc; ergo propter hoc?

Judge Rothstein’s PPA decision stands as a landmark of judicial gullibility. Judge Rothstein conducted hearings and entertained extensive briefing on the reliability of plaintiffs’ expert witnesses’ opinions, which were based largely upon one epidemiologic study, known as the “Yale Hemorrhagic Stroke Project (HSP).” In the end, publication in a prestigious peer-reviewed journal proved to be a proxy for independent review: “The prestigious NEJM published the HSP results, further substantiating that the research bears the indicia of good science.” Id. at 1239 (citing Daubert II for the proposition that peer review shows the research meets the minimal criteria for good science). The admissibility challenges were rejected.

Ultimately, the HSP study received much more careful analysis before juries, which uniformly returned verdicts for the defense. After one of the early defense verdicts, plaintiffs’ counsel challenged the defendant’s reliance upon underlying data in the HSP, a challenge that went behind the peer-reviewed publication, and that showed that the peer review had failed to prevent serious errors. The trial court rejected the plaintiffs’ request for a new trial, and spoke to the importance of looking past the superficial significance of peer review of the key study relied upon by plaintiffs in the PPA litigation:

“I mean, you could almost say that there was some unethical activity with that Yale Study.  It’s real close.  I mean, I — I am very, very concerned at the integrity of those researchers.”

“Yale gets — Yale gets a big black eye on this.”

O’Neill v. Novartis AG, California Superior Court, Los Angeles Cty., Transcript of Oral Argument on Post-Trial Motions, at 46-47 (March 18, 2004) (Hon. Anthony J. Mohr).

Roisman’s endorsement of the PPA decision may have been purely result-oriented jurisprudence, but what of his enthusiasm for the “learning” that Judge Rothstein received at the Tellus Institute? Tell us, what is this Tellus Institute?

In 2003, roughly contemporaneously with Judge Rothstein’s PPA decision, SKAPP published a jeremiad against the Daubert decision, with support from none other than the Tellus Institute. See Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of; A Publication of the Project on Scientific Knowledge and Public Policy, coordinated by the Tellus Institute (2003). The Tellus Institute website tells us very little specific detail about the Institute’s projects, other than stating some vague and pious goals. The alignment, however, of the Tellus Institute with David Michaels’ SKAPP, which was created with plaintiffs’ lawyers’ funding, certainly seems like a dubious indicator of neutrality and scientific commitment. See “Skapp a Lot” (April 30, 2010).

We might get a better idea of the organization from the Tellus membership.

Richard Clapp and David Ozonoff are both regular testifiers for plaintiffs in so-called toxic tort and environmental litigation. In an article published about the time of the PPA decision, Clapp and Ozonoff acknowledged having benefited from discussions with colleagues at the Tellus Institute.  See Richard W. Clapp & David Ozonoff, “Environment and Health: Vital Intersection or Contested Territory?” 30 Am. J. L. & Med. 189, 189 (2004) (“This Article also benefited from discussions with colleagues in the project on Scientific Knowledge and Public Policy at Tellus Institute, in Boston, Massachusetts.”).

In the infamous case of Selikoff and Motley and their effort to subvert the neutrality of Judge James M. Kelly in the school district asbestos litigation, the conspiracy was detected in time for a successful recusal effort. In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992).  Unfortunately, in the PPA litigation, there was no disclosure of the efforts by the advocacy group, Tellus Institute, to undermine the neutrality of a federal judge. 

Outside observers will draw their own inferences about whether Tellus was an “honest broker” of scientific advice to Judge Rothstein. One piece of evidence may be SKAPP’s website, which contains a page about Richard Clapp’s courtroom advocacy in the PPA litigation. Additional evidence comes from Clapp’s leadership role in Physicians for Social Responsibility, and his own characterization of himself as a healthcare professional advocate. Clapp, a member of Tellus, was an expert witness for plaintiffs in PPA cases.

Was Clapp present at the Tellus Institute meeting attended by Judge Rothstein? History will judge whether the Tellus Institute participated in corrupting the administration of justice.

The Fallacy of Cherry Picking As Seen in American Courtrooms

May 3rd, 2014

After a long winter, the cherry trees are finally managing to blossom.  Before we know it, it will be cherry-picking time.

Cherry picking is a good thing; right?  Cherry picking yields cherries, and cherries are good.  Selective cherry picking yields the best, ripest, sweetest, tastiest cherries. Cherry picking data no doubt yields the best, unbiased, unconfounded, most probative data to be had.  Well, maybe not.

What could be wrong with picking cherries?  At the end of the process you have cherries, and if you do it right, you have all ripe, and no rotten, cherries.  Your collection of ripe cherries, however, will be unrepresentative of the universe of cherries, but at least we understand how and why your cherries were selected.

Elite colleges cherry pick the best high school students; leading law schools cherry pick the top college students; and top law firms and federal judges cherry pick the best graduates from the best law schools.  Lawyers are all-too-comfortable with “cherry picking.”  Of course, the cherry-picking process here has at least some objective criteria, which can be stated in advance of the selection.

In litigation, each side is expected to “cherry pick” the favorable evidence, and ignore or flyblow the contrary evidence. Perhaps this aspect of the adversarial system induces complacency in judges about selectivity in the presentation of evidence by parties and their witnesses. In science, this kind of adversarial selectivity is a sure way to inject bias and subjectivity into claims of knowledge. And even in law, there are limits to this adversarial system. Undue selectivity in citing precedent can land a lawyer in a heap of trouble. See Thul v. OneWest Bank, FSB, No. 12 C 6380, 2013 WL 212926 (N.D. Ill. Jan. 18, 2013) (failure to cite relevant judicial precedent constitutes an ethical offense).

In science, the development of the systematic review, in large measure, has been supported by the widespread recognition that studies cannot be evaluated with post hoc, subjective evaluative criteria. See generally Matthias Egger, George Davey Smith, and Douglas Altman, Systematic Reviews in Health Care: Meta-Analysis in Context (2001).

Farmers pick the cherries they want to go to market, to make money and satisfy customers. The harvesters’ virtue lies in knowing what to pick to obtain the best crop.  The scientist’s virtue lies in the disinterested acquisition of data pursuant to a plan, and the evaluation of the data pursuant to pre-specified criteria.

The scientist’s virtue is threatened by motivations that are all-too human, and all-too common. The vice in science is wanting data that yields marketable publications, grants, promotions, awards, prizes, and perhaps a touch of fame. Picking data based upon a desired outcome is at the very least scientific fallacy if not scientific fraud. Cherry picking does not necessarily imply scienter, but in science, it is a strict liability offense.

The metaphor of cherry picking, mixed as it may be, thus gives us a label for fallacy and error.  Cherry picking incorporates sampling bias, selection bias,  confirmation bias, hasty generalization, and perhaps others as well. As explained recently, in Nature:

“Data can be dredged or cherry picked. Evidence can be arranged to support one point of view. * * * The question to ask is: ‘What am I not being told?’”

William J. Sutherland, David Spiegelhalter & Mark Burgman, “Policy: Twenty tips for interpreting scientific claims,” 503 Nature 335, 337 (2013).
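The statistical point is easy to demonstrate. The following sketch (a hypothetical Python simulation, with invented numbers, not drawn from any case discussed here) generates forty studies of an exposure that truly has no effect, and then “cherry picks” only the studies whose point estimates favor an effect:

```python
import random
import statistics

# Hypothetical illustration: 40 studies of a truly null exposure
# (true log relative risk = 0); each study's estimate departs from
# zero only through sampling error.
random.seed(7)
log_rrs = [random.gauss(0.0, 0.25) for _ in range(40)]

# "Cherry pick" only the studies whose point estimates suggest harm.
cherry_picked = [x for x in log_rrs if x > 0]

full_mean = statistics.mean(log_rrs)          # close to zero
picked_mean = statistics.mean(cherry_picked)  # apparently "consistent" effect

print(f"All 40 studies: mean log-RR = {full_mean:+.3f}")
print(f"Cherry picked:  mean log-RR = {picked_mean:+.3f}")
```

The complete body of evidence shows essentially nothing; the selected subset appears to show a consistent positive association. That manufactured consistency is the suppressed-evidence fallacy in quantitative dress.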

Cherry picking in the orchard may be a good thing, but in the scientific world, it refers to the selection of studies, or of data within studies, to yield desired results, however misleading or counterfactual. See Ben Goldacre, Bad Science 97-99 (2008). The selective use of evidence is not a fallacy unique to science. Cherry picking is widely acknowledged to undermine seriously the quality of public debate. See Gary Klass, “Just Plain Data Analysis: Common Statistical Fallacies in Analyses of Social Indicator Data” (2008). See generally Bradley Dowden, “Fallacies,” in James Fieser & Bradley Dowden, eds., Internet Encyclopedia of Philosophy.

The Internet Encyclopedia of Philosophy describes “cherry picking” as a fallacy, “a kind of error in reasoning.” Cherry-picking the evidence, also known as “suppressed evidence,” is:

“[i]ntentionally failing to use information suspected of being relevant and significant is committing the fallacy of suppressed evidence. This fallacy usually occurs when the information counts against one’s own conclusion. * * * If the relevant information is not intentionally suppressed but rather inadvertently overlooked, the fallacy of suppressed evidence also is said to occur, although the fallacy’s name is misleading in this case.”

Bradley Dowden, “Suppressed Evidence,” Internet Encyclopedia of Philosophy (last updated Dec. 31, 2010). See also “Cherry picking (fallacy),” Wikipedia (describing cherry picking as pointing to data that appear to confirm one’s opinion, while ignoring contradictory data).

In 1965, in his landmark paper, Sir Austin Bradford Hill described some important factors to consider in determining whether a clear-cut association, beyond that which we would attribute to chance, was a causal association. Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295, 295 (1965).

One of the key Hill factors is, of course, consistent, replicated results.  Surely, an expert witness should not be permitted to manufacture a faux consistency by conducting a partial review.  In birth defects litigation, the problem of  “cherry picking” is so severe that one of the leading professional societies concerned with birth defects has issued a position paper to remind its members, other scientists, and the public that “[c]ausation determinations are made using all the scientific evidence”:

“Causation determinations are made using all the scientific evidence. This evidence is derived from correctly interpreted papers that have been published in the peer-reviewed literature. Unpublished data may be useful if available in sufficient detail for an evaluation and if derived from a source that is known to use reliable internal or external review standards. A National Toxicology Program report would be an example of an unpublished source that is typically reliable. All available papers are considered in a scientific deliberation; selective consideration of the literature is not a scientific procedure.”

The Public Affairs Committee of the Teratology Society, “Teratology Society Public Affairs Committee Position Paper Causation in Teratology-Related Litigation,” 73 Birth Defects Research (Part A) 421, 422 (2005) (emphasis added).

* * * * * *

Cherry picking is a main rhetorical device for the litigator. Given the pejorative connotations of “cherry picking,” no one should be very surprised that lawyers and judges couch their Rule 702 arguments and opinions in terms of whether expert witnesses engaged in this fulsome fruitful behavior.

The judicial approach to cherry picking is just a little schizophrenic. Generally, in the context of exercising its gatekeeping function for expert witnesses, the elimination of cherry picking is an important goal. Lust v. Merrell Dow Pharmaceuticals, Inc., 89 F.3d 594, 596-98 (9th Cir. 1996) (affirming exclusion of Dr. Done in a Clomid birth defects case; district court found that “Dr. Done has seen fit to ‘pick and chose’ [sic] from the scientific landscape and present the Court with what he believes the final picture looks like. This is hardly scientific.”) (internal citation omitted); Barber v. United Airlines, Inc., 17 Fed. Appx. 433, 437 (7th Cir. 2001) (holding that a “selective use of facts fails to satisfy the scientific method and Daubert”). See also Crawford v. Indiana Harbor Belt Railroad Co., 461 F.3d 844 (7th Cir. 2006) (affirming summary judgment in disparate treatment discharge case, and noting judicial tendency to require “comparability” between plaintiffs and comparison group as a “natural response to cherry-picking by plaintiffs”); Miller v. Pfizer, Inc., 196 F. Supp. 2d 1062 (D. Kan. 2002) (excluding, with the aid of independent, court-appointed expert witnesses, a party expert witness, David Healy, who failed to reconcile the fact that other research is contrary to his conclusion), aff’d, 356 F.3d 1326 (10th Cir.), cert. denied, 125 S. Ct. 40 (2004).

In Ellis v. Barnhart, the Eighth Circuit affirmed a district court’s reversal of an Administrative Law Judge for “cherry picking” the record in a disability case. 392 F.3d 988 (8th Cir. 2005). Clearly, cherry picking was a bad thing for a judicial officer to do when charged with the administration of justice. Several years later, however, the Eighth Circuit held that a trial court erred in excluding an expert witness who had offered an opinion that ignored the witness’s own prior, contrary opinions, a key National Institutes of Health clinical trial, and multiple other studies. The adversary’s charges of “cherry picking” were to no avail. Kuhn v. Wyeth, Inc., 686 F.3d 618, 633 (8th Cir. 2012) (“There may be several studies supporting Wyeth’s contrary position, but it is not the province of the court to choose between the competing theories when both are supported by reliable scientific evidence.”), rev’g Beylin v. Wyeth, 738 F. Supp. 2d 887, 892 (E.D. Ark. 2010) (MDL court) (Wilson, J. & Montgomery, J.) (excluding proffered testimony of Dr. Jasenka Demirovic, who appeared to have “selected study data that best supported her opinion, while downplaying contrary findings or conclusions.”).

But wait, the court in Kuhn did not cite its own published opinion on cherry picking in Ellis.  Some might say that the Circuit cherry picked its own precedents to get to a desired result. Anthony Niblett, “Do Judges Cherry Pick Precedents to Justify Extralegal Decisions?: A Statistical Examination,” 70 Maryland L. Rev. 234 (2010) (reviewing charges of cherry picking, and examining data [cherry picked?] from California).

The situation in the federal trial courts is chaotic. Most of the caselaw recognizes the fallacy of an expert witness’s engaging in ad hoc selection of studies upon which to rely.  Federal courts, clear on their gatekeeping responsibilities and aware of the selection fallacy, have condemned cherry-picking expert witnesses. Judge Lewis Kaplan, in the Southern District of New York, expressed the proper judicial antipathy to cherry picking:

“[A]ny theory that fails to explain information that otherwise would tend to cast doubt on that theory is inherently suspect,” and “courts have excluded expert testimony ‘where the expert selectively chose his support from the scientific landscape.’”

In re Rezulin Prod. Liab. Litig., 369 F. Supp. 2d 398, 425 & n.164 (S.D.N.Y. 2005) (citation omitted).

Judge Breyer, of the Northern District of California, expressed similar sentiments in ruling on Rule 702 motions in the Celebrex personal injury litigation:

“these experts ignore the great weight of the observational studies that contradict their conclusion and rely on the handful that appear to support their litigation-created opinion.”

In re Bextra & Celebrex Mktg. Sales Pracs. & Prods. Liab. Litig., 524 F. Supp. 2d 1166, 1181 (N.D. Cal. 2007).  The “cherry-picking” of favorable data “does not reflect scientific knowledge, is not derived by the scientific method, and is not ‘good science.’” Id. at 1176.

Other illustrative federal cases include:

In re Bausch & Lomb, Inc., 2009 WL 2750462 at *13-14 (D.S.C. 2009) (“Dr. Cohen did not address [four contradictory] studies in her expert reports or affidavit, and did not include them on her literature reviewed list [. . .] This failure to address this contrary data renders plaintiffs’ theory inherently unreliable.”)

Rimbert v. Eli Lilly & Co., No. 06-0874, 2009 WL 2208570, at *19 (D.N.M. July 21, 2009) (“Even more damaging . . . is her failure to grapple with any of the myriad epidemiological studies that refute her conclusion.”), aff’d, 647 F.3d 1247 (10th Cir. 2011) (affirming exclusion but remanding to permit plaintiff to find a new expert witness)

LeClercq v. The Lockformer Co., No. 00C7164, 2005 WL 1162979, at *4, 2005 U.S. Dist. LEXIS 7602, at *15 (N.D. Ill. Apr. 28, 2005) (“failure to discuss the import of, or even mention … material facts in [expert] reports amounts to ‘cherry-pick[ing]’ … and such selective use of facts fail[s] to satisfy the scientific method and Daubert.”) (internal citations and quotations omitted)

Contractors Ass’n of E. Pa. Inc. v. City of Philadelphia, 893 F. Supp. 419, 436 (E.D. Pa., 1995) (holding that expert witness opinion was unreliable when witness’s conclusions rested on incomplete factual data)

Galaxy Computer Servs. Inc. v. Baker, 325 B.R. 544 (E.D. Va. 2005) (excluding expert witness when witness relied upon incomplete data in reaching a valuation assessment).

Dwyer v. Sec’y of Health & Human Servs., No. 03-1202V, 2010 WL 892250, at *14 (Fed. Cl. Spec. Mstr. Mar. 12, 2010)(recommending rejection of thimerosal autism claim)(“In general, respondent’s experts provided more responsive answers to such questions.  Respondent’s experts were generally more careful and nuanced in their expert reports and testimony. In contrast, petitioners’ experts were more likely to offer opinions that exceeded their areas of expertise, to “cherry-pick” data from articles that were otherwise unsupportive of their position, or to draw conclusions unsupported by the data cited… .”)

Holden Metal & Aluminum Works, Ltd. v. Wismarq Corp., No. 00C0191, 2003 WL 1797844, at *2 (N.D. Ill. Apr. 3, 2003) (“Essentially, the expert ‘cherrypicked’ the facts he considered to render his opinion, and such selective use of facts failed to satisfy the scientific method and Daubert.”) (internal citation omitted).

Flue-Cured Tobacco Cooperative Stabilization Corp. v. EPA, 4 F. Supp. 2d 435, 459-60 (M.D.N.C. 1998) (finding EPA’s selection of studies for inclusion in a meta-analysis to be “disturbing,” and that the agency’s selective, incomplete inclusion of studies violated its own guidelines for conducting risk assessments), rev’d on other grounds, 313 F.3d 852, 862 (4th Cir. 2002) (Widener, J.) (holding that the issuance of the report was not “final agency action”)

Fail-Safe, LLC v. AO Smith Corp., 744 F. Supp. 2d 870, 889 (E.D. Wis. 2010) (“the court also finds the witness’s methodology unreliable because of how Dr. Keegan uniformly treated all evidence that undermined his underlying conclusion: unwarranted dismissal of the evidence or outright blindness to contrary evidence. In fact, it is readily apparent that Dr. Keegan all but ‘cherry picked’ the data he wanted to use, providing the court with another strong reason to conclude that the witness utilized an unreliable methodology. * * * Dr. Keegan’s two reports are rich with examples of his ‘cherry picking’ of the evidence.”)
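The meta-analytic worry voiced in cases like Flue-Cured Tobacco can be made concrete. The sketch below (a hypothetical Python illustration; the five study figures are invented, not taken from any litigation or agency record) pools log relative risks by the standard fixed-effect, inverse-variance method, first from all studies, then from only the “favorable” ones:

```python
import math

# Invented data for illustration: study name -> (log relative risk, standard error)
studies = {
    "A": (0.40, 0.20),   # small positive study
    "B": (0.35, 0.25),   # small positive study
    "C": (-0.05, 0.10),  # large, near-null study
    "D": (-0.10, 0.12),  # contrary study
    "E": (0.02, 0.15),   # null study
}

def pooled_log_rr(items):
    """Fixed-effect (inverse-variance weighted) pooled log relative risk."""
    weights = {name: 1.0 / se ** 2 for name, (_, se) in items.items()}
    total = sum(weights.values())
    return sum(weights[n] * est for n, (est, _) in items.items()) / total

all_studies = pooled_log_rr(studies)
favorable = pooled_log_rr({n: v for n, v in studies.items() if v[0] > 0})

print(f"Pooled RR, all five studies:        {math.exp(all_studies):.2f}")
print(f"Pooled RR, 'favorable' subset only: {math.exp(favorable):.2f}")
```

Dropping the contrary studies converts an essentially null pooled estimate (about 1.02) into an apparent 20-plus percent increase in relative risk, without changing a single data point in any study.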

As noted, however, there are federal trial courts that are all too willing to suspend judgment and kick the case to the jury.  Here is a sampler of cases that found cherry picking to be an acceptable methodology, or at least a methodology sufficient to require that the case be submitted to the finder of fact.

In Berg v. Johnson & Johnson, the district court noted the defendants’ argument that the proffered testimony was unreliable because the witness had “cherry-picked” data in order to form an opinion solely for purposes of litigation. 940 F. Supp. 2d 983, 991-92 (D.S.D. 2013). The trial judge, however, was not willing to look particularly closely at what was excluded or why:

“The only difference between his past and present research seems to exist in how he categorized his data. Defendants label this ‘cherry-picking’. The court views it as simply looking at the existing data from a different perspective.”

Id.  Of course, expert witnesses on opposite sides look at the case from different perspectives, but the question begged was whether the challenged expert witness had categorized data in an unprincipled way. Other cases of this ilk include:

United States v. Paracha, 2006 WL 12768, at *20 (S.D.N.Y. Jan. 3, 2006) (rejecting challenge to terrorism expert witness on grounds that he cherry picked evidence in conspiracy prosecution involving al Qaeda)

In re Chantix (Varenicline) Products Liab. Litig., 889 F. Supp. 2d 1272, 1288 (N.D. Ala. 2012) (“Why Dr. Kramer chose to include or exclude data from specific clinical trials is a matter for cross-examination, not exclusion under Daubert.”)

Bouchard v. Am. Home Prods. Corp., 2002 WL 32597992 at *7 (N.D. Ohio May 24, 2002) (“If Bouchard believes that [the expert]… ignored evidence that would have required him to substantially change his opinion, that is a fit subject for cross-examination, not a grounds for wholesale rejection of an expert opinion.”)

In re Celexa & Lexapro Prods. Liab. Litig., 927 F. Supp. 2d 758, 2013 WL 791780, at *5, *7, *8 (E.D. Mo. 2013) (Sippel, J.) (rejecting challenge to David Healy in antidepressant suicide case)

Allen v. Takeda Pharms., MDL No. 6:11-md-2299, No. 12-cv-00064, 2013 WL 6825953, at *11 (W.D. La. Dec. 20, 2013) (challenged expert witness in Actos litigation sufficiently explained his choices to be exonerated from charges of cherry picking)

In re NuvaRing Prods. Liab. Litig., No. 4:08–MD–1964 RWS, 2013 WL 791787 (E.D. Mo. Mar. 4, 2013) (“As to cherry picking data, the Eighth Circuit has recently made clear that such allegations should be left for cross-examination.”)

McClellan v. I-Flow Corp., 710 F. Supp. 2d 1092, 1114 (D. Ore. 2010) (“Defendants are correct that plaintiffs’ experts must elucidate how the relevant evidence lends support to their opinions by explaining . . . .”) (rejecting cherry picking but denying Rule 702 challenge based in part upon alleged cherry picking)

Rich v. Taser Internat’l, Inc., No. 2:09–cv–02450–ECR–RJJ, 2012 WL 1080281, at *6 (D. Nev. March 30, 2012) (noting the objection to cherry picking but holding that it was an issue for cross-examination)

In re Urethane Antitrust Litig., No. 04-1313-JWL, MDL No. 1616, 2012 WL 6681783, at *3 (D. Kan. Dec. 21, 2012) (allowing expert testimony that “certain events are consistent with collusion”; “the extent to which [an expert] considered the entirety of the evidence in the case is a matter for cross-examination.”)

In re Titanium Dioxide Antitrust Litig., No. RDB-10-0318, 2013 WL 1855980, 2013 U.S. Dist. LEXIS 62394 (D. Md. May 1, 2013) (rejecting Rule 702 cherry-picking challenge to an expert witness who cherry picked; the witness’s selection of documents upon which to rely, from a record that exceeded 14 million pages, was not unreliable: “If important portions of the record were overlooked, then the Defendants may address that issue at trial.”)


The situation in state courts is similarly chaotic and fragmented.

In Lakey v. Puget Sound Energy, Inc., the Washington Supreme Court resoundingly rejected “cherry picking” by expert witnesses in a public and private nuisance case against a local utility, brought for fear of future illnesses from exposure to electro-magnetic frequency radiation (EMF). Lakey v. Puget Sound Energy, Inc., 176 Wn.2d 909 (2013). The court held that the plaintiffs’ expert witness’s cherry-picking approach to data and studies was properly excluded under Rule 702. His selective approach vitiated the reliability of his opinion by:

“seriously tainting his conclusions because epidemiology is an iterative science relying on later studies to refine earlier studies in order to reach better and more accurate conclusions. Carpenter refused to account for the data from the toxicological studies, which epidemiological methodology requires unless the evidence for the link between exposure and disease is unequivocal and strong, which is not the case here. Carpenter also selectively sampled data within one of the studies he used, taking data indicating an EMF-illness link and ignoring the larger pool of data within the study that showed no such link, Carpenter’s treatment of this data created an improper false impression about what the study actually showed.”

Id.; see also “Washington Supreme Court Illustrates the Difference Between Frye and Rule 702” (April 15, 2013).

Other state supreme courts have recognized and rejected the gerrymandering of scientific evidence. Betz v. Pneumo Abex LLC, 2012 WL 1860853, at *16 (Pa. May 23, 2012) (“According to Appellants, moreover, the pathologist’s self-admitted selectivity in his approach to the literature is decidedly inconsistent with the scientific method. Accord Brief for Amici Scientists at 17 n.2 (“‘Cherry picking’ the literature is also a departure from ‘accepted procedure’.”)); George v. Vermont League of Cities and Towns, 2010 VT 1, 993 A.2d 367, 398 (Vt. 2010) (expressing concern about how and why plaintiff’s expert witnesses selected some studies to include in their “weight of evidence” methodology; without an adequate explanation of selection and weighting criteria, the choices seemed to be arbitrary “cherry picking”); Bowen v. E.I. DuPont de Nemours & Co., 906 A.2d 787, 797 (Del. 2006) (noting that expert witnesses cannot ignore studies contrary to their opinions).

Lower state courts have also quashed the cherry-picking harvest. Scaife v. AstraZeneca LP, 2009 WL 1610575, at *8 (Del. Super. June 9, 2009) (“Simply stated, the expert cannot accept some but reject other data from the medical literature without explaining the bases for her acceptance or rejection.”); see also In re Bextra & Celebrex Prod. Liab. Litig., No. 762000/2006, 2008 N.Y. Misc. LEXIS 720, at *47 (Sup. Ct. N.Y. Co. Jan 7, 2008) (stating that plaintiffs must show that their experts “do not ignore contrary data”).

The Nebraska Supreme Court appears to recognize the validity of considering the existence of cherry picking in expert witness gatekeeping. In practice, however, that Court has shown an unwillingness to tolerate close scrutiny of what was included in, and excluded from, the expert witness’s consideration. King v. Burlington Northern Santa Fe Ry., ___ N.W.2d ___, 277 Neb. 203, 234 (2009) (noting that the law does “not preclude a trial court from considering as part of its reliability inquiry whether an expert has cherry-picked a couple of supporting studies from an overwhelming contrary body of literature,” but ignoring the force of the fallacious expert witness testimony by noting that the questionable expert witness (Frank) had some studies that showed associations between exposure to diesel exhaust or benzene and multiple myeloma).

“Of all the offspring of time, Error is the most ancient, and is so old and familiar an acquaintance, that Truth, when discovered, comes upon most of us like an intruder, and meets the intruder’s welcome.”

Charles MacKay, Extraordinary Popular Delusions and the Madness of Crowds (1841)