TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

From Here to CERT-ainty

June 28th, 2018

An enterprising journalist, Michael Waters, recently published an important exposé on the Council for Education and Research on Toxics (CERT). Michael Waters, “The Secretive Non-Profit Gaming California’s Health Laws: The Council for Education and Research on Toxics has won million-dollar settlements using a controversial public health law,” The Outline (June 18, 2018). Digging deep into the shadowy organization, Mr. Waters reported that:

“CERT doesn’t have a website, a social media account, or any notable public presence, despite having won million-dollar judgments by suing corporations. However, files from the California Secretary of State show that in May 30, 2001, four people co-founded the non-profit: C. Sterling Wolfe, a former environmental lawyer; Brad Lunn; Carl Cranor, a toxicology professor at University of California Riverside; and Martyn T. Smith, a toxicology professor at Berkeley.”

Id.

Mr. Waters’ investigation puts important new facts on the table about the conduct of the CERT corporation. The involvement of Christopher Sterling Wolfe, a Torrance, California, plaintiffs’ lawyer, is not terribly surprising. The involvement in CERT of frequent plaintiffs’ expert witnesses, Carl F. Cranor and Martyn T. Smith, however, raises serious ethical questions. Both Cranor and Smith were expert witnesses for plaintiffs in the infamous Milward case,1 and after the trial court excluded their testimony and granted summary judgment, CERT filed an amicus brief in the Court of Appeals.2

The rules governing amicus briefs in federal appellate courts require disclosure of the amicus’s interest in the proceedings. By the time that CERT filed its amicus brief in Milward, Cranor and Smith may not have been officers of the corporation, but given CERT’s funding of Smith’s research, these “Founding Fathers” certainly had a continuing close relationship with the corporation.3 See “Coffee with Cream, Sugar & a Dash of Acrylamide” (June 9, 2018). Given CERT’s name, which suggests a public interest mission, the corporation’s litigation activities on behalf of its founders, Cranor and Smith, exhibit a certain lack of candor with the court.

======================

My discussions with Mr. Waters, and his insightful piece in The Outline, led to a call from Madeleine Brand, who wanted to discuss CERT’s litigation against Starbucks, under California’s Proposition 65 laws, over acrylamide content in coffee. David Roe, a self-styled environmental activist and drafter of California’s bounty hunting law, was interviewed directly after me.4

As every Californian now no doubt knows, acrylamide is present in many foods. The substance is created when the amino acid asparagine is heated in the presence of sugars. Of course, I expected to hear Roe defend his creation, Proposition 65, generally, and the application of Proposition 65 to the low levels of acrylamide in coffee, perhaps on contrary-to-fact precautionary principle grounds. What surprised me were Roe’s blaming the victim, Starbucks, for not settling, and his strident assertions that it was a long-established fact that acrylamide causes cancer.

Contrary to Roe’s asseverations, the National Cancer Institute has evaluated the acrylamide issues quite differently. On its website, the NCI has addressed “Acrylamide and Cancer Risk,” and mostly found none. Roe had outrageously suggested that there were no human data, because of the ethics of feeding acrylamide to humans, and so regulators had to rely upon rodent studies. The NCI, however, had looked at occupational studies in which workers were exposed to acrylamide in manufacturing processes at levels much higher than any dietary intake. The NCI observed “studies of occupational exposure have not suggested increased risks of cancer.” As for rodents, the NCI noted that “toxicology studies have shown that humans and rodents not only absorb acrylamide at different rates, they metabolize it differently as well.”

The NCI’s fact sheet is a relatively short précis, but the issue of acrylamide has been addressed in many studies, collected and summarized in meta-analyses.5 Since the NCI’s summary of the animal toxicology and human epidemiology, several important research groups have reported careful human studies that consistently have found no association between dietary acrylamide and cancer risk.6


1 Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied sub nom., U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).

2 See “The Council for Education and Research on Toxics” (July 9, 2013).

3 A Guidestar Report shows that in 2007, the corporate officers were Nancy L. Quam-Wickham and Nancy Perley, in addition to Lunn and Wolfe.

4 Not to be confused with David Roe, the famous snooker player.

5 Claudio Pelucchi, Carlo La Vecchia, Cristina Bosetti, P. Boyle & Paolo Boffetta, “Exposure to acrylamide and human cancer–a review and meta-analysis of epidemiologic studies,” 22 Ann. Oncology 1487 (2011); Claudio Pelucchi, Cristina Bosetti, Carlotta Galeone & Carlo La Vecchia, “Dietary acrylamide and cancer risk: An updated meta-analysis,” 136 Internat’l J. Cancer 2912 (2015).

6 C. Pelucchi, V. Rosato, P. M. Bracci, D. Li, R. E. Neale, E. Lucenteforte, D. Serraino, K. E. Anderson, E. Fontham, E. A. Holly, M. M. Hassan, J. Polesel, C. Bosetti, L. Strayer, J. Su, P. Boffetta, E. J. Duell & C. La Vecchia, “Dietary acrylamide and the risk of pancreatic cancer in the International Pancreatic Cancer Case–Control Consortium (PanC4),” 28 Ann. Oncology 408 (2017) (reporting that the PanC4 pooled-analysis found no association between dietary acrylamide and pancreatic cancer); Rebecca E. Graff, Eunyoung Cho, Mark A. Preston, Alejandro Sanchez, Lorelei A. Mucci & Kathryn M. Wilson, “Dietary acrylamide intake and risk of renal cell carcinoma in two large prospective cohorts,” 27 Cancer Epidemiol., Biomarkers & Prevention (2018) (in press at doi: 10.1158/1055-9965.EPI-18-0320) (failing to find an association between dietary acrylamide and renal cell carcinoma); Andy Perloy, Leo J. Schouten, Piet A. van den Brandt, Roger Godschalk, Frederik-Jan van Schooten & Janneke G. F. Hogervorst, “The Role of Genetic Variants in the Association between Dietary Acrylamide and Advanced Prostate Cancer in the Netherlands Cohort Study on Diet and Cancer,” 70 Nutrition & Cancer 620 (2018) (finding “no clear evidence was found for interaction between acrylamide intake and selected genetic variants for advanced prostate cancer”).

Scientific Evidence in Canadian Courts

February 20th, 2018

A couple of years ago, Deborah Mayo called my attention to the Canadian version of the Reference Manual on Scientific Evidence.1 In the course of discussion of mistaken definitions and uses of p-values, confidence intervals, and significance testing, Sander Greenland pointed to some dubious pronouncements in the Science Manual for Canadian Judges [Manual].

Unlike the United States federal court Reference Manual, which is published through a joint effort of the National Academies of Sciences, Engineering, and Medicine, the Canadian version is the product of the Canadian National Judicial Institute (NJI, or the Institut National de la Magistrature, if you live in Quebec), which claims to be an independent, not-for-profit group committed to educating Canadian judges. In addition to the Manual, the Institute publishes Model Jury Instructions and a guide, Problem Solving in Canada’s Courtrooms: A Guide to Therapeutic Justice (2d ed.), as well as conducting educational courses.

The NJI’s website describes the Institute’s Manual as follows:

“Without the proper tools, the justice system can be vulnerable to unreliable expert scientific evidence.

         * * *

The goal of the Science Manual is to provide judges with tools to better understand expert evidence and to assess the validity of purportedly scientific evidence presented to them. …”

The Chief Justice of Canada, Hon. Beverley M. McLachlin, contributed an introduction to the Manual, which was notable for its frank admission that:

“[w]ithout the proper tools, the justice system is vulnerable to unreliable expert scientific evidence.

* * *

Within the increasingly science-rich culture of the courtroom, the judiciary needs to discern ‘good’ science from ‘bad’ science, in order to assess expert evidence effectively and establish a proper threshold for admissibility. Judicial education in science, the scientific method, and technology is essential to ensure that judges are capable of dealing with scientific evidence, and to counterbalance the discomfort of jurists confronted with this specific subject matter.”

Manual at 14. These are laudable goals, indeed, but did the National Judicial Institute live up to its stated goals, or did it leave Canadian judges vulnerable to the Institute’s own “bad science”?

In his comments on Deborah Mayo’s blog, Greenland noted some rather cavalier statements in Chapter 2 that suggest that the conventional alpha of 5% corresponds to a “scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it.” And he pointed to other passages in which the chapter seems to suggest that the coefficient of confidence that corresponds to an alpha of 5% “constitutes a rather high standard of proof,” thus confusing and conflating the probability of random error with posterior probabilities. Greenland is absolutely correct that the Manual does a rather miserable job of educating Canadian judges, if our standard for its work product is accuracy and truth.

Some of the most egregious errors are within what is perhaps the most important chapter of the Manual, Chapter 2, “Science and the Scientific Method.” The chapter has two authors, a scientist, Scott Findlay, and a lawyer, Nathalie Chalifour. Findlay is an Associate Professor in the Department of Biology at the University of Ottawa; Chalifour is an Associate Professor in the Faculty of Law, also at the University of Ottawa. Together, they produced some dubious pronouncements, such as:

Weight of the Evidence (WOE)

“First, the concept of weight of evidence in science is similar in many respects to its legal counterpart. In both settings, the outcome of a weight-of-evidence assessment by the trier of fact is a binary decision.”

Manual at 40. Findlay and Chalifour cite no support for their characterization of WOE in science. Most attempts to invoke WOE are woefully vague and amorphous, with no meaningful guidance or content.2 Sixty-five pages later, if anyone is still noticing, the authors let us in on a dirty little secret:

“[A]t present, there exists no established prescriptive methodology for weight of evidence assessment in science.”

Manual at 105. The authors omit, however, that there are prescriptive methods for inferring causation in science; you just will not see them in discussions of weight of the evidence. The authors then compound the semantic and conceptual problems by stating that “in a civil proceeding, if the evidence adduced by the plaintiff is weightier than that brought forth by the defendant, a judge is obliged to find in favour of the plaintiff.” Manual at 41. This is a remarkable suggestion, which implies that if the plaintiff adduces the crummiest crumb of evidence, a mere peppercorn on the scales of justice, and the defendant has none to offer, then the plaintiff must win. The plaintiff wins notwithstanding that no reasonable person could believe that the plaintiff’s claims are more likely than not true. Even if this were the law of Canada, it is certainly not how scientists think about establishing the truth of empirical propositions.

Confusion of Hypothesis Testing with “Beyond a Reasonable Doubt”

The authors’ next assault comes in conflating significance probability with the probability connected with the burden of proof, a posterior probability. Legal proceedings have a defined burden of proof, with criminal cases requiring the state to prove guilt “beyond a reasonable doubt.” Findlay and Chalifour’s discussion then runs off the rails by likening hypothesis testing, with an alpha of 5% or its complement, 95%, as a coefficient of confidence, to a “very high” burden of proof:

“In statistical hypothesis-testing – one of the tools commonly employed by scientists – the predisposition is that there is a particular hypothesis (the null hypothesis) that is assumed to be true unless sufficient evidence is adduced to overturn it. But in statistical hypothesis-testing, the standard of proof has traditionally been set very high such that, in general, scientists will only (provisionally) reject the null hypothesis if they are at least 95% sure it is false. Third, in both scientific and legal proceedings, the setting of the predisposition and the associated standard of proof are purely normative decisions, based ultimately on the perceived consequences of an error in inference.”

Manual at 41. This is, as Greenland and many others have pointed out, a totally bogus conception of hypothesis testing, and an utterly false description of the probabilities involved.

Later in the chapter, Findlay and Chalifour flirt with the truth, but then lapse into an unrecognizable parody of it:

“Inferential statistics adopt the frequentist view of probability whereby a proposition is either true or false, and the task at hand is to estimate the probability of getting results as discrepant or more discrepant than those observed, given the null hypothesis. Thus, in statistical hypothesis testing, the usual inferred conclusion is either that the null is true (or rather, that we have insufficient evidence to reject it) or it is false (in which case we reject it). The decision to reject or not is based on the value of p: if the estimated value of p is below some threshold value α, we reject the null; otherwise we accept it.”

Manual at 74. OK; so far so good, but here comes the train wreck:

“By convention (and by convention only), scientists tend to set α = 0.05; this corresponds to the collective – and, one assumes, consensual – scientific attitude that unless we are 95% sure the null hypothesis is false, we provisionally accept it. It is partly because of this that scientists have the reputation of being a notoriously conservative lot, given that a 95% threshold constitutes a rather high standard of proof.”

Manual at 75. Uggh; so we are back to significance probability’s being a posterior probability. As if to atone for their sins, in the very next paragraph, the authors then remind the judicial readers that:

“As noted above, p is the probability of obtaining results at least as discrepant as those observed if the null is true. This is not the same as the probability of the null hypothesis being true, given the results.”

Manual at 75. True, true, and completely at odds with what the authors have stated previously. And to add to the reader’s now fully justified confusion, the authors describe the standard for rejecting the null hypothesis as “very high indeed.” Manual at 102, 109. Any reader who is following the discussion might wonder how and why there is such a problem of replication and reproducibility in contemporary science.
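The distinction matters. Here is a minimal sketch in Python (the numbers are illustrative assumptions of my own, not anything drawn from the Manual) applying Bayes’ theorem to show that rejecting a null hypothesis at the conventional 5% level is a far cry from being “95% sure the null hypothesis is false.” The posterior probability of the null depends upon the prior probability and the power of the test, neither of which a p-value supplies.

# Minimal sketch (illustrative numbers): why "p < 0.05" does not mean
# "95% sure the null hypothesis is false."

alpha = 0.05        # type I error rate (significance level)
power = 0.80        # probability of rejecting the null when it is false
prior_null = 0.90   # assumed prior probability that the null is true

# Probability of obtaining a "statistically significant" result at all
p_reject = alpha * prior_null + power * (1 - prior_null)

# Posterior probability that the null is TRUE despite a significant result
posterior_null = (alpha * prior_null) / p_reject

print(f"P(null | significant result) = {posterior_null:.2f}")
# With these assumed inputs, about 0.36 -- nowhere near the 0.05 that the
# Manual's "95% sure the null is false" gloss would suggest.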

Conflating Bayesianism with Frequentist Modes of Inference

We have seen how Findlay and Chalifour conflate significance and posterior probabilities, some of the time. In a section of their chapter that deals explicitly with probability, the authors tell us that, before any study is conducted, the prior probability of the truth of the tested hypothesis is 50%, sans evidence. This is an astonishing creation of certainty out of nothingness, and perhaps it explains the authors’ implied claim that the crummiest morsel of evidence on one side is sufficient to compel a verdict, if the other side has no morsels at all. Here is how the authors put their claim to the Canadian judges:

“Before each study is conducted (that is, a priori), the hypothesis is as likely to be true as it is to be false. Once the results are in, we can ask: How likely is it now that the hypothesis is true? In the first study, the low a priori inferential strength of the study design means that this probability will not be much different from the a priori value of 0.5 because any result will be rather equivocal owing to limitations in the experimental design.”

Manual at 64. This implied Bayesian slant, with its 50% priors, would lead anyone in the world of science to believe “as many as six impossible things before breakfast,” and many more throughout the day.
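A short sketch shows just how much work the assumed 50% prior is doing. In the following Python fragment (the likelihood ratio and priors are again my own illustrative assumptions, not the Manual’s), the same modestly probative study that moves a 50% prior to 75% moves a 1% prior to barely 3%.

# Minimal sketch (illustrative numbers): the posterior probability of a
# hypothesis depends heavily on the prior probability assigned to it.

def posterior(prior, likelihood_ratio):
    """Posterior probability of H, given evidence whose likelihood ratio
    P(E | H) / P(E | not-H) is as stated, via Bayes' theorem in odds form."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 3.0  # a weak study: evidence only three times more probable under H
for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:>4}: posterior = {posterior(prior, lr):.2f}")

# prior =  0.5: posterior = 0.75
# prior =  0.1: posterior = 0.25
# prior = 0.01: posterior = 0.03

Assuming a 0.5 prior for every hypothesis thus bakes half of the conclusion into the analysis before any evidence arrives.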

Lest you think that the Manual is all rubbish, there are occasional gems of advice to the Canadian judges. The authors admonish the judges to

“be wary of individual ‘statistically significant’ results that are mined from comparatively large numbers of trials or experiments, as the results may be ‘cherry picked’ from a larger set of experiments or studies that yielded mostly negative results. The court might ask the expert how many other trials or experiments testing the same hypothesis he or she is aware of, and to describe the outcome of those studies.”

Manual at 87. Good advice, but at odds with the authors’ characterization of statistical significance as establishing the rejection of the null hypothesis well-nigh beyond a reasonable doubt.
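The arithmetic behind the Manual’s admonition is easy to illustrate. The following hypothetical calculation (mine, not the Manual’s) shows that when many independent tests of true null hypotheses are run at the conventional 5% level, a nominally “significant” result somewhere becomes nearly inevitable.

# Minimal sketch: the chance of at least one "significant" finding by
# chance alone, across multiple independent tests of true null hypotheses.

alpha = 0.05
for n_tests in (1, 5, 10, 20, 50):
    p_at_least_one = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>2} independent tests: "
          f"P(at least one p < 0.05 by chance) = {p_at_least_one:.2f}")

# 1 test: 0.05; 5 tests: 0.23; 10 tests: 0.40; 20 tests: 0.64; 50 tests: 0.92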

When Greenland first called attention to this Manual, I reached out to some people who had been involved in its peer review. One reviewer told me that it was a “living document,” and would likely be revised after he had the chance to call the NJI’s attention to the errors. But two years later, the errors remain, and so we have to infer that the authors meant to say all the contradictory and false statements that are still present in the downloadable version of the Manual.


2 See “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name” (May 1, 2012); “Weight of the Evidence in Science and in Law” (July 29, 2017); see also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).

Ninth Circuit Quashes Harkonen’s Last Chance

January 8th, 2018

With the benefit of hindsight, even the biggest whopper can be characterized as a strategic choice for trial counsel. As a result of this sort of thinking, the convicted have a very difficult time in pressing claims of ineffective assistance of counsel. After the fact, a reviewing or an appellate court can always imagine a strategic reason for trial counsel’s decisions, even if they contributed to the client’s conviction.

In the Harkonen case, a pharmaceutical executive was indicted and tried for wire fraud and misbranding. His crime was to send out a fax with a preliminary assessment of a recently unblinded clinical trial. In his fax, Dr Harkonen described the trial’s results as “demonstrating” a survival benefit in study participants with mild and moderate disease. Survival (or mortality) was not a primary outcome of the trial, but it was a secondary outcome, and arguably the most important one of all. The subgroup of “mild and moderate” was not pre-specified, but it was highly plausible.

Clearly, Harkonen’s post hoc analysis normally would not be sufficient to persuade the FDA to approve a medication, but Harkonen did not assert or predict that the company would obtain FDA approval. He simply claimed that the trial “demonstrated” a benefit. A charitable interpretation of his statement, which was several pages long, would include the prior successful clinical trial as important context for Harkonen’s claim.

The United States government, however, was not interested in the principle of charity, the context, or even its own pronouncements on the issue of statistical significance. Instead, the United States Attorney pushed for draconian sentences under the Wire Fraud Act and the misbranding sections of the Food, Drug, and Cosmetic Act. A jury acquitted on the misbranding charge, but convicted on wire fraud. The government’s request for an extreme prison term and fines was rebuffed by the trial court, which imposed a term of six months of house arrest, and a small fine.1 The conviction, however, effectively keeps Dr Harkonen from working again in the pharmaceutical industry.

In post-verdict challenges to the conviction, Harkonen’s lawyers were able to marshal support from several renowned statisticians and epidemiologists, but the trial court was reluctant to consider these post-verdict opinions when the defense called no expert witness at trial. The trial situation, however, was complicated and confused by the government’s pre-trial position that it would not call expert witnesses on the statistical and clinical trial interpretative issues. Contrary to these representations, the government called Dr Thomas Fleming, a statistician, who testified at some length, and without objection, to strict criteria for assessing statistical significance and causation in clinical trials.

Having read Fleming’s testimony, I can say that the government got away with introducing a great deal of expert witness opinion testimony, without effective contradiction or impeachment. With the benefit of hindsight, the defense decision not to call an expert witness looks like a serious deviation from the standard of care. Fleming’s “facts” about how the FDA would evaluate the success or failure of the clinical trial were not relevant to whether Harkonen’s claim of a demonstrated benefit was true or false. More importantly, Harkonen’s claim involved an inference, which is not a fact, but an opinion. Fleming’s contrary opinion really did not turn Harkonen’s claim into a falsehood. A contrary rule would have many expert witnesses in civil and in criminal litigation behind bars on similar charges of wire or mail fraud.

After Harkonen exhausted his direct appeals,2 he petitioned for a writ of coram nobis. The trial court denied the petition,3 and in a non-precedential opinion [sic], the Ninth Circuit affirmed the denial of coram nobis.4 United States v. Harkonen, slip op., No. 15-16844 (9th Cir., Dec. 4, 2017) [cited below as Harkonen].

The Circuit rejected Harkonen’s contention that the Supreme Court had announced a new rule with respect to statistical significance, in Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27 (2011), which change in law required that his conviction be vacated. Harkonen’s lawyer, like much of the plaintiffs’ tort bar, oversold the Supreme Court’s comments about statistical significance, which were at best dicta, and not very well considered or supported dicta, at that. Still, there was an obvious tension, and duplicity, between positions that the government, through the Solicitor General’s office, had taken in Siracusano, and positions the government took in the Harkonen case.5 Given the government’s opportunistic double-faced arguments about statistical significance, the Ninth Circuit held that Harkonen’s proffered evidence was “compelling, especially in light of Matrixx,” but the panel concluded that his conviction was not the result of a “manifest injustice” that requires the issuance of the writ of coram nobis. Harkonen at 2 (emphasis added). Apparently, Harkonen had suffered an injustice of a less obvious and blatant variety, which did not rise to the level of manifest injustice.

The Ninth Circuit gave similarly short shrift to Harkonen’s challenge to the competency of his counsel. His trial lawyers had averred that they thought that they were doing well enough not to risk putting on an expert witness, especially given that the defense’s view of the evidence came out in the testimony of the government’s witnesses. The Circuit thus acquiesced in the view that both sides had chosen to forgo expert witness testimony, and overlooked the competency issue raised by the defense’s failure to object to Fleming’s opinion testimony at trial. Harkonen at 2-4. Remarkably, the appellate court did not look at how Fleming was allowed to testify on statistical issues, without being challenged on cross-examination.


2 United States v. Harkonen, 510 F. App’x 633, 638 (9th Cir. 2013), cert. denied, 134 S. Ct. 824 (2013).

4 Dave Simpson, “9th Circuit Refuses To Rethink Ex-InterMune CEO’s Conviction,” Law360 (Dec. 5, 2017).

Failed Gatekeeping in Ambrosini v. Labarraque (1996)

December 28th, 2017

The Ambrosini case straddled the Supreme Court’s 1993 Daubert decision. The case began before the Supreme Court clarified the federal standard for expert witness gatekeeping, and ended in the Court of Appeals for the District of Columbia, after the high court adopted the curious notion that scientific claims should be based upon reliable evidence and valid inferences. That notion has only slowly and inconsistently trickled down to the lower courts.

Given that Ambrosini was litigated in the District of Columbia, where the docket is dominated by regulatory controversies, frequently involving dubious scientific claims, no one should be surprised that the D.C. Circuit did not see that the Supreme Court had read “an exacting standard” into Federal Rule of Evidence 702. And so we see, in Ambrosini, this Court of Appeals citing and purportedly applying its own pre-Daubert decision in Ferebee v. Chevron Chem. Co., 552 F. Supp. 1297 (D.D.C. 1982), aff’d, 736 F.2d 1529 (D.C. Cir.), cert. denied, 469 U.S. 1062 (1984).1 In 2000, Federal Rule of Evidence 702 was revised in a way that extinguishes the precedential value of Ambrosini and the broad dicta of Ferebee, but some courts and commentators have failed to stay abreast of the law.

Escolastica Ambrosini was using a synthetic progestin birth control, Depo-Provera, as well as an anti-nausea medication, Bendectin, when she became pregnant. The child that resulted from this pregnancy, Teresa Ambrosini, was born with malformations of her face, eyes, and ears, cleft lip and palate, and vertebral malformations. About three percent of all live births in the United States have a major malformation. Perhaps because the Divine Being has sovereign immunity, Escolastica sued the manufacturers of Bendectin and Depo-Provera, as well as the prescribing physician.

The causal claims were controversial when made, and they still are. The progestin at issue, medroxyprogesterone acetate (MPA), was embryotoxic in the cynomolgus monkey2, but not in the baboon3. The evidence in humans was equivocal at best, and involved mostly genital malformations4; the epidemiologic evidence for the MPA causal claim to this day remains unconvincing5.

At the close of discovery in Ambrosini, Upjohn (the manufacturer of the progestin) moved for summary judgment, with a supporting affidavit of a physician and geneticist, Dr. Joe Leigh Simpson. In his affidavit, Simpson discussed three epidemiologic studies, as well as other published papers, in support of his opinion that the progestin at issue did not cause the types of birth defects manifested by Teresa Ambrosini.

Ambrosini had disclosed two expert witnesses, Dr. Allen S. Goldman and Dr. Brian Strom. Neither Goldman nor Strom bothered to identify the papers, studies, data, or methodology used in arriving at an opinion on causation. Not surprisingly, the district judge was unimpressed with their opposition, and granted summary judgment for the defendant. Ambrosini v. Labarraque, 966 F.2d 1462, 1466 (D.C. Cir. 1992).

The plaintiffs appealed on the remarkable ground that Goldman’s and Strom’s crypto-evidence satisfied Federal Rule of Evidence 703. Even more remarkably, the Circuit, in a strikingly unscholarly opinion by Judge Mikva, opined that disclosure of relied-upon studies was not required for expert witnesses under Rules 703 and 705. Judge Mikva seemed to forget that the opinions being challenged were not given in testimony, but in (late-filed) affidavits that had to satisfy the requirements of Federal Rule of Civil Procedure 26. Id. at 1468-69. At trial, an expert witness may express an opinion without identifying its bases, but of course the adverse party may compel disclosure of those bases. In discovery, the proffered expert witness must supply all opinions and the evidence relied upon in reaching those opinions. In any event, the Circuit remanded the case for a hearing and further proceedings, at which the two challenged expert witnesses, Goldman and Strom, would have to identify the bases of their opinions. Id. at 1471.

Not long after the case landed back in the district court, the Supreme Court decided Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). With an order to produce entered, plaintiffs’ counsel could no longer hide Goldman and Strom’s evidentiary bases, and their scientific inferences came under judicial scrutiny.

Upjohn moved again to exclude Goldman and Strom’s opinions. The district court upheld Upjohn’s challenges, and granted summary judgment in favor of Upjohn for the second time. The Ambrosinis appealed again, but the second case in the D.C. Circuit resulted in a split decision, with the majority holding that the exclusion of Goldman and Strom’s opinions under Rule 702 was erroneous. Ambrosini v. Labarraque, 101 F.3d 129 (D.C. Cir. 1996).

Although issued two decades ago, the majority’s opinion remains noteworthy as an example of judicial resistance to the existence and meaning of the Supreme Court’s Daubert opinion. The majority opinion uncritically cited the notorious Ferebee6 and other pre-Daubert decisions. The court embraced the Daubert dictum about gatekeeping being limited to methodologic considerations, and then proceeded to interpret methodology as superficially as necessary to sustain admissibility. If an expert witness claimed to have looked at epidemiologic studies, and epidemiology was an accepted methodology, then the opinion of the expert witness satisfied the legal requirements of Daubert, or so it would seem from the opinion of the U.S. Court of Appeals for the District of Columbia.

Despite the majority’s hand waving, a careful reader will discern that there must have been substantial gaps and omissions in the explanations and evidence cited by plaintiffs’ expert witnesses. Seeing anything clearly in the Circuit’s opinion is made difficult, however, by careless and imprecise language, such as its descriptions of studies as showing, or not showing “causation,” when it could have meant only that such studies showed associations, with more or less random and systematic error.

Dr. Strom’s report addressed only general causation, and even so, he apparently did not address general causation of the specific malformations manifested by the plaintiffs’ child. Strom claimed to have relied upon the “totality of the data,” but his methodologic approach seems to have required him to dismiss studies that failed to show an association.

“Dr. Strom first set forth the reasoning he employed that led him to disagree with those studies finding no causal relationship [sic] between progestins and birth defects like Teresa’s. He explained that an epidemiologist evaluates studies based on their ‘statistical power’. Statistical power, he continued, represents the ability of a study, based on its sample size, to detect a causal relationship. Conventionally, in order to be considered meaningful, negative studies, that is, those which allege the absence of a causal relationship, must have at least an 80 to 90 percent chance of detecting a causal link if such a link exists; otherwise, the studies cannot be considered conclusive. Based on sample sizes too small to be reliable, the negative studies at issue, Dr. Strom explained, lacked sufficient statistical power to be considered conclusive.”

Id. at 136-37.

Putting aside the problem of suggesting that an observational study detects a “causal relationship,” as opposed to an association in need of further causal evaluation, the Court’s précis of Strom’s testimony on power is troublesome, and typical of how other courts have misunderstood and misapplied the concept of statistical power. Statistical power is the probability of observing a statistically significant association, at a specified level of significance, when an alternative hypothesis of a specified size is true. The calculation of statistical power thus turns on sample size, the level of significance preselected for “statistical significance,” an assumed probability distribution of the sample, and, critically, an alternative hypothesis. Without a specified alternative hypothesis, the notion of statistical power is meaningless, regardless of what probability (80% or 90% or some other percentage) is sought for finding the alternative hypothesis. Furthermore, the notion that the defense must adduce studies with “sufficient statistical power to be considered conclusive” creates an unscientific standard that can never be met, while subverting the law’s requirement that the claimant establish causation.

The suggestion that the studies that failed to find an association cannot be considered conclusive because they “lacked sufficient statistical power” is troublesome because it distorts and misapplies the very notion of statistical power. No attempt was made to describe the confidence intervals surrounding the point estimates of the null studies; nor was there any discussion of whether the studies could be aggregated to increase their power to rule out meaningful associations.
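To see why power talk is empty without a specified alternative, consider a rough Python sketch of a power calculation for a two-sample comparison of proportions. The three percent baseline risk echoes the background malformation rate noted above; the sample sizes and relative risks are hypothetical, and the normal approximation is used only for illustration.

# Rough sketch: power of a two-sided, two-sample test of proportions,
# by the usual normal approximation. Power is defined only relative to a
# SPECIFIED alternative (here, a posited relative risk).
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p0, rr, n_per_group, alpha=0.05):
    p1 = p0 * rr  # risk under the posited alternative
    se = sqrt(p0 * (1 - p0) / n_per_group + p1 * (1 - p1) / n_per_group)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(p1 - p0) / se - z_crit)

baseline_risk = 0.03  # assumed background risk of a major malformation
for n in (200, 1000, 5000):
    for rr in (1.5, 2.0, 3.0):
        print(f"n = {n:>4} per group, RR = {rr}: "
              f"power = {power_two_proportions(baseline_risk, rr, n):.2f}")

The point of the exercise is that “the study lacked power” is an incomplete statement until the alternative hypothesis, here a posited relative risk, is specified; and reporting the confidence intervals around the null studies’ point estimates would have shown directly how large an association the data could reasonably exclude.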

The Circuit court’s scientific jurisprudence was thus seriously flawed. Without a discussion of the end points observed, the relevant point estimates of risk ratios, and the confidence intervals, the reader cannot assess the strength of the claims made by Goldman and Strom, or by defense expert Simpson, in their reports. Without identifying the study endpoints, the reader cannot evaluate whether the plaintiffs’ expert witnesses relied upon relevant outcomes in formulating their opinions. The court viewed the subject matter from 30,000 feet, passing over at 600 mph, without engagement or care. A strong dissent, however, suggested serious mischaracterizations of the plaintiffs’ evidence by the majority.

The only specific causation testimony to support the plaintiffs’ claims came from Goldman, in what appears to have been a “differential etiology.” Goldman purported to rule out a genetic cause, even though he had not conducted a critical family history or ordered a state-of-the-art chromosomal study. Id. at 140. Of course, nothing in a differential etiology approach would allow a physician to rule out “unknown” causes, which, for birth defects, make up the most prevalent and likely causes of any particular case. The majority acknowledged that these were shortcomings, but rhetorically characterized them as substantive, not methodologic, and therefore as issues for cross-examination, not for consideration in judicial gatekeeping. All this is magical thinking, but it continues to infect judicial approaches to specific causation. See, e.g., Green Mountain Chrysler Plymouth Dodge Jeep v. Crombie, 508 F. Supp. 2d 295, 311 (D. Vt. 2007) (citing Ambrosini for the proposition that “the possibility of uneliminated causes goes to weight rather than admissibility, provided that the expert has considered and reasonably ruled out the most obvious”). In Ambrosini, however, Dr. Goldman had not ruled out much of anything.

Circuit Judge Karen LeCraft Henderson dissented in a short, but pointed opinion that carefully marshaled the record evidence. Drs. Goldman and Strom had relied upon a study by Greenberg and Matsunaga, whose data failed to show a statistically significant association between MPA and cleft lip and palate, when the crucial issue of timing of exposure was taken into consideration. Ambrosini, 101 F.3d at 142.

Beyond the specific claims and evidence, Judge Henderson anticipated the subsequent Supreme Court decisions in Joiner, Kumho Tire, and Weisgram, and the year 2000 revision of Rule 702, in noting that the majority’s acceptance of glib claims to have used a “traditional methodology” would render Daubert nugatory. Id. at 143-45 (characterizing Strom and Goldman’s methodologies as “wispish”). Even more importantly, Judge Henderson refused to indulge the assumption that somehow the length of Goldman’s C.V. substituted for evidence that his methods satisfied the legal (or scientific) standard of reliability. Id.

The good news is that little or nothing in Ambrosini survives the 2000 amendment to Rule 702. The bad news is that not all federal judges seem to have noticed, and that some commentators continue to cite the case approvingly.

Probably no commentator has embraced Ambrosini as promiscuously or as warmly as Carl Cranor, a philosopher and occasional expert witness for the lawsuit industry, who has endorsed the case in several publications and presentations.8 Cranor has been particularly enthusiastic about Ambrosini’s approval of expert witness testimony that failed to address “the relative risk between exposed and unexposed populations of cleft lip and palate, or any other of the birth defects from which [the child] suffers,” as well as differential etiologies that exclude nothing.9 Somehow Cranor, like the majority in Ambrosini, believes that testimony that fails to identify the magnitude of the point estimate of relative risk can “assist the trier of fact to understand the evidence or to determine a fact in issue.”10 Of course, without that magnitude given, the trier of fact could not evaluate the strength of the alleged association; nor could the trier assess the probability of individual causation to the plaintiff. Cranor also has written approvingly of lumping unrelated end points, which defeats the assessment of biological plausibility and coherence by the trier of fact. When the defense expert witness in Ambrosini adverted to the point estimates for relevant end points, the majority, with Cranor’s approval, rejected the null findings as “too small to be significant.”11 If the null studies were, in fact, too small to be useful tests of the plaintiffs’ claims, intellectual and scientific honesty required an acknowledgement that the evidentiary display was not one from which a reasonable scientist would draw a causal conclusion.


1 Ambrosini v. Labarraque, 101 F.3d 129, 138-39 (D.C. Cir. 1996) (citing and applying Ferebee), cert. dismissed sub nom. Upjohn Co. v. Ambrosini, 117 S.Ct. 1572 (1997). See also David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27, 31 (2013).

2 S. Prahalada, E. Carroad, M. Cukierski, and A.G. Hendrickx, “Embryotoxicity of a single dose of medroxyprogesterone acetate (MPA) and maternal serum MPA concentrations in cynomolgus monkey (Macaca fascicularis),” 32 Teratology 421 (1985).

3 S. Prahalada, E. Carroad, and A.G. Hendrickx, “Embryotoxicity and maternal serum concentrations of medroxyprogesterone acetate (MPA) in baboons (Papio cynocephalus),” 32 Contraception 497 (1985).

4 See, e.g., Z. Katz, M. Lancet, J. Skornik, J. Chemke, B.M. Mogilner, and M. Klinberg, “Teratogenicity of progestogens given during the first trimester of pregnancy,” 65 Obstet Gynecol. 775 (1985); J.L. Yovich, S.R. Turner, and R. Draper, “Medroxyprogesterone acetate therapy in early pregnancy has no apparent fetal effects,” 38 Teratology 135 (1988).

5 G. Saccone, C. Schoen, J.M. Franasiak, R.T. Scott, and V. Berghella, “Supplementation with progestogens in the first trimester of pregnancy to prevent miscarriage in women with unexplained recurrent miscarriage: a systematic review and meta-analysis of randomized, controlled trials,” 107 Fertil. Steril. 430 (2017).

6 Ferebee v. Chevron Chemical Co., 736 F.2d 1529, 1535 (D.C. Cir.), cert. denied, 469 U.S. 1062 (1984).

7 Dr. Strom was also quoted as having provided a misleading definition of statistical significance: “whether there is a statistically significant finding at greater than 95 percent chance that it’s not due to random error.” Ambrosini, 101 F.3d at 136. Given the majority’s inadequate description of the record, the description of witness testimony may not be accurate, and error cannot properly be allocated.

8 Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 320, 327-28 (2006); see also Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 238 (2d ed. 2016).

9 Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 320 (2006).

10 Id.

11 Id.; see also Carl F. Cranor, Toxic Torts: Science, Law, and the Possibility of Justice 238 (2d ed. 2016).