TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

New Superhero?

December 31st, 2012

The Verdict. A Civil Action. Class Action. My Cousin Vinny.

Wonder Woman, Superman, Batman, Iron Man.

America loves movies, and superheroes.

So 2013 should be an exciting year with a new superhero movie coming to a theater, or a courthouse, near you: Egilman.

Actor-producer-director Patrick Coppola has announced that he is developing a film, which has yet to be given a catchy name.  Coppola calls the film in development:  the DOCTOR DAVID EGILMAN PROJECT.  According to Coppola, he was hired

“by world famous MD – Doctor David Egilman to create and write a Screenplay based on Doctor Egilman’s life and the many cases he has served on as an expert witness in various chemical poisoning trials. Doctor Egilman is a champion of the underdog and has several worldwide charities and medical clinics he funds and donates his time to.”

Patrick Coppola describes his screenplay for the “Doctor David Egilman Project” as a story of conspiracy among corporate suppliers of beryllium materials, the government, and the thought leaders in occupational medicine to suppress information about harm to workers. In this narrative, which is a familiar refrain for plaintiffs’ counsel in toxic tort litigation, profits always take precedence over safety, and unions mysteriously are silently complicit in the carnage.

Can’t wait!

 

Reanalysis of Epidemiologic Studies – Not Intrinsically WOEful

December 27th, 2012

A recent student law review article discusses reanalyses of epidemiologic studies, an important and overlooked topic in the jurisprudence of scientific evidence.  Alexander J. Bandza, “Epidemiological-Study Reanalyses and Daubert: A Modest Proposal to Level the Playing Field in Toxic Tort Litigation,” 39 Ecology L. Q. 247 (2012).

In the Daubert case itself, the Ninth Circuit, speaking through Judge Kozinski, avoided the methodological issues raised by Shanna Swan’s reanalysis of Bendectin epidemiologic studies, by assuming arguendo its validity, and holding that the small relative risk yielded by the reanalysis would not support a jury verdict of specific causation. Daubert v. Merrell Dow Pharm., Inc., 43 F.3d 1311, 1317–18 (9th Cir. 1995).

There is much that can, and should, be said about reanalyses in litigation and in the scientific process, but Bandza never really gets down to the business at hand. His 36-page article curiously does not begin to address reanalysis until the bottom of the 20th page. The first half of the article, and then some, reviews some time-worn insights and factoids about scientific evidence. Finally, at page 266, the author introduces and defines reanalysis:

“Reanalysis occurs ‘when a person other than the original investigator obtains an epidemiologic data set and conducts analyses to evaluate the quality, reliability or validity of the dataset, methods, results or conclusions reported by the original investigator’.”

Bandza at 266 (quoting Raymond Neutra et al., “Toward Guidelines for the Ethical Reanalysis and Reinterpretation of Another’s Research,” 17 Epidemiology 335, 335 (2006)).

Bandza correctly identifies some of the bases for judicial hostility to re-analyses. For instance, some courts are troubled or confused when expert witnesses disagree with, or reevaluate, the conclusions of a published article. The witnesses’ conclusions may not be published or peer reviewed, and thus the proffered testimony fails one of the Daubert factors.  Bandza correctly notes that peer review is greatly overrated by judges. Bandza at 270. I would add that peer review is an inappropriate proxy for validity, a “test” that reflects a distrust of the unpublished.  Unfortunately, this judicial factor ignores the poor quality of much of what is published, and the extreme variability in the peer review process. Judges overrate peer review because they are desperate for a proxy for the validity of the studies relied upon, which will allow them to pass their gatekeeping responsibility on to the jury. Furthermore, the authors’ own conclusions are hearsay, and their qualifications are often not fully before the court.  What is important is the opinion of the expert witness who can be cross-examined and challenged.  See “FOLLOW THE DATA, NOT THE DISCUSSION.” What counts is the validity of the expert witness’s reasoning and inferences.

Bandza’s article, which by title advertises itself to be about re-analyses, gives only a few examples of re-analyses without much detail.  He notes concerns that reanalyses may impugn the reputation of published scientists, and burden them with defending their data.  Who would have it any other way? After this short discussion, the article careens into a discussion of “weight of the evidence” (WOE) methodology. Bandza tells us that the rejection of re-analyses in judicial proceedings “implicitly rules out using the weight-of-the-evidence methodology often appropriate for, or even necessary to, scientific analysis of potentially toxic substances.” Bandza at 270.  This argument, however, is one sustained non-sequitur.  WOE is defined in several ways, but none of the definitions requires or suggests the incorporation of re-analyses. Re-analyses raise reliability and validity issues regardless of whether an expert witness incorporates them into a WOE assessment. Yet Bandza tells us that the rejection of re-analyses “Implicitly Ignores the Weight-of-the-Evidence Methodology Appropriate for the Scientific Analysis of Potentially Toxic Substances.” Bandza at 274. This conclusion simply does not follow from the nature of WOE methodology or reanalyses.

Bandza’s ipse dixit raises the independent issue whether WOE methodology is appropriate for scientific analysis at all. WOE is described as embraced or used by regulatory agencies, but that description hardly recommends the methodology as the basis for a scientific, as opposed to a regulatory, conclusion.  Furthermore, Bandza ignores the ambiguity and variability of WOE by referring to it as a methodology, when in reality, WOE is used to describe a wide variety of methods of reasoning to a conclusion. Bandza cites Douglas Weed’s article on WOE, but fails to come to grips with the serious objections Weed raises to the use of WOE methodologies.  Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546–52 (2005) (describing the vagueness and imprecision of WOE methodologies). See also “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name.”

Bandza concludes his article with a hymn to the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011). Plaintiffs’ expert witness, Dr. Martyn Smith claimed to have performed a WOE analysis, which in turn was based upon a re-analysis of several epidemiologic studies. True, true, and immaterial.  The re-analyses were not inherently a part of a WOE approach. Presumably, Smith re-analyzed some of the epidemiologic studies because he felt that the data as presented did not support his desired conclusion.  Given the motivations at work, the district court in Milward was correct to look skeptically and critically at the re-analyses.

Bandza notes that there are procedural and evidentiary safeguards in federal court against unreliable or invalid re-analyses of epidemiologic studies.  Bandza at 277. Yes, there are safeguards, but they help only when they are actually used. The First Circuit in Milward reversed the district court for looking too closely at the re-analyses, spouting the chestnut that the objections went to the weight, not the admissibility, of the evidence.  Bandza embraces the rhetoric of the Circuit, but he offers no description or analysis of the liberties that Martyn Smith took with the data, or the reasonableness of Smith’s reliance upon the re-analyzed data.

There is no necessary connection between WOE methodologies and re-analyses of epidemiologic studies.  Re-analyses can be done properly to support or deconstruct the conclusions of published papers.  As Bandza points out, some re-analyses may go on to be peer reviewed and published themselves.  Validity is the key, and WOE methodologies have little to do with the process of evaluating the original or the re-analyzed study.

 

 

Litmus Tests

December 27th, 2012

Rule 702 is, or is not, a litmus test for expert witness opinion admissibility.  Relative risk is, or is not, a litmus test for specific causation.  Statistical significance is, or is not, a litmus test for reasonable reliance upon the results of a study.  It is relatively easy to find judicial opinions on either side of the litmus divide.  Compare National Judicial College, Resource Guide for Managing Complex Litigation at 57 (2010) (Daubert is not a litmus test) with Cryer v. Werner Enterprises, Inc., Civ. Action No. 05-S-696-NE, Mem. Op. & Order at 16 n. 63 (N.D. Ala. Dec. 28, 2007) (describing the Eleventh Circuit’s restatement of Rule 702’s “litmus test” for the methodological reliability of proffered expert witness opinion testimony).

The “litmus test” is one sorry, overworked metaphor.  Perhaps its appeal has to do with a vague collective memory that litmus paper is one of those “things of science,” which we used in high school chemistry, and never had occasion to use again. Perhaps litmus tests have the appeal of “proofiness.”

The reality is different. The litmus test is a semi-quantitative test for acidity or alkalinity.  Neutral litmus is purple.  Under acidic conditions, litmus turns red; under basic conditions, it turns blue.  For some time, scientists have used pH meters when they want a precise quantification of acidity or alkalinity.  Litmus paper is a fairly crude test, which easily discriminates moderate acidity from alkalinity (say pH 4 from pH 11), but is relatively useless for detecting an acidity at pH 6.95, or an alkalinity at pH 7.05.

So what exactly are legal authors trying to say when they say that some feature of a test is, or is not, a “litmus test”? The litmus test is accurate, but not precise at the important boundary at neutrality.  The litmus test color can be interpreted for degree of acidity or alkalinity, but it is not the preferred method to obtain a precise measurement. Saying that a judicial candidate’s views on abortion are a litmus test for the Senate’s evaluation of the candidate makes sense, given the essentially binary outcome of a litmus test, and the polarization of political views on abortion. Apparently, neutral views, or views close to neutrality, on abortion are not a desideratum for judicial candidates.  A cruder, binary test is exactly what is desired by politicians.

The litmus test that is used for judicial candidates does not seem to work so well when used to describe scientific or statistical inference.  The litmus test is well understood, but fairly obsolete in modern laboratory practice.  When courts say that statistical significance is not a litmus test for the acceptability of a study’s results, clearly they are correct, because the measurement of random error is only one aspect of judging a body of evidence for, or against, an association.  Yet courts seem to imply something else, at least at times:

statistical significance is not an important showing in making a case that an exposure is reliably associated with a particular outcome.

Here courts are trading in half-truths.  Statistical significance is quantitative, and the choice of a level of significance is not based upon immutable law. So like the slight difference between a pH of 6.95 and 7.05, statistical significance tests have a boundary issue.  Nonetheless, a consideration of random error cannot be dismissed or overlooked on the theory that significance level is not a “litmus test.”  This metaphor obscures and attempts to excuse sloppy thinking.  It is time to move beyond it.
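A small numerical illustration of the boundary problem may help. Two test statistics that are nearly identical can fall on opposite sides of the conventional 0.05 line, much as litmus paper cannot distinguish pH 6.95 from pH 7.05. The z-values below are hypothetical, chosen only to straddle the boundary:

```python
# Hypothetical z-statistics chosen to straddle the 0.05 boundary.
from scipy.stats import norm

for z in (1.95, 1.97):
    p = 2 * norm.sf(z)  # two-sided p-value from a standard normal test statistic
    print(f"z = {z:.2f}  ->  p = {p:.4f}")

# z = 1.95  ->  p = 0.0512  ("not significant")
# z = 1.97  ->  p = 0.0488  ("significant")
```

The two results are practically indistinguishable; only the convention of treating 0.05 as a sharp line separates them.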

Lumpenepidemiology

December 24th, 2012

Judge Helen Berrigan, who presides over the Paxil birth defects MDL in New Orleans, has issued a nicely reasoned Rule 702 opinion, upholding defense objections to plaintiffs’ expert witnesses, Paul Goldstein, Ph.D., and Shira Kramer, Ph.D. Frischhertz v SmithKline Beecham EDLa 2012 702 MSJ Op.

The plaintiff, Andrea Frischhertz, took GSK’s Paxil, a selective serotonin reuptake inhibitor (SSRI), for depression while pregnant with her daughter, E.F. The parties agreed that E.F. was born with a deformity of her right hand.  Plaintiffs originally claimed that E.F. had a heart defect, but their expert witnesses appeared to give up this claim at deposition, as lacking evidential support.

Adhering to Daubert’s Epistemologic Lesson

Like many other lower federal courts, Judge Berrigan focused her analysis on the language of Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993), a case that has been superseded by subsequent cases and a revision to the operative statute, Rule 702.  Fortunately, the trial court did not lose sight of the key epistemological teaching of Daubert, which is based upon Rule 702:

“Regarding reliability, the [Daubert] Court said: ‘the subject of an expert’s testimony must be “scientific . . . knowledge.” The adjective “scientific” implies a grounding in the methods and procedures of science. Similarly, the word “knowledge” connotes more than subjective belief or unsupported speculation’.”

Slip Op. at 3 (quoting Daubert, 509 U.S. at 589-590).

There was not much to the plaintiffs’ expert witnesses’ opinion beyond speculation, but many other courts have been beguiled by speculation dressed up as “scientific … knowledge.”  Dr. Goldstein relied upon whole embryo culture testing of SSRIs, but in the face of overwhelming evidence, Dr. Goldstein was forced to concede that this test may generate hypotheses about, but cannot predict, human risk of birth defects.  No doubt this concession made the trial court’s decision easier, but the result would have been required regardless of Dr. Goldstein’s exhibition of truthfulness at deposition.

Statistical Association – A Good Place to Begin

More interestingly, the trial court rejected the plaintiffs’ expert witnesses’ efforts to leapfrog over the finding of a statistically significant association, straight to a parsing of the so-called Bradford Hill factors:

“The Bradford-Hill criteria can only be applied after a statistically significant association has been identified. Federal Judicial Center, Reference Manual on Scientific Evidence, 599, n.141 (3d. ed. 2011) (“In a number of cases, experts attempted to use these guidelines to support the existence of causation in the absence of any epidemiologic studies finding an association . . . . There may be some logic to that effort, but it does not reflect accepted epidemiologic methodology.”). See, e.g., Dunn v. Sandoz Pharms., 275 F. Supp. 2d 672, 678 (M.D.N.C. 2003). Here, Dr. Goldstein attempted to use the Bradford-Hill criteria to prove causation without first identifying a valid statistically significant association. He first developed a hypothesis and then attempted to use the Bradford-Hill criteria to prove it. Rec. Doc. 187, Exh. 2, depo. Goldstein, p. 103. Because there is no data showing an association between Paxil and limb defects, no association existed for Dr. Goldstein to apply the Bradford-Hill criteria. Hence, Dr. Goldstein’s general causation opinion is not reliable.”

Slip op. at 6.

The trial court’s rejection of Dr. Goldstein’s attempted end run is particularly noteworthy given the Reference Manual’s weak-kneed attempt to suggest that this reasoning has “some logic” to it.  The Manual never articulates what “logic” commends Dr. Goldstein’s approach; nor does it identify any causal relationship ever established with such paltry evidence in the real world of science. The Manual does cite several legal cases that excused or overlooked the need to find a statistically significant association, and even elevated such reasoning into a legally acceptable, admissible method.  See Reference Manual on Scientific Evidence at 599 n. 141 (describing cases in which purported expert witnesses attempted to use Bradford Hill factors in the absence of a statistically significant association; citing Rains v. PPG Indus., Inc., 361 F. Supp. 2d 829, 836–37 (S.D. Ill. 2004); Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 460–61 (W.D. Pa. 2003)).  The Reference Manual also cited cases, without obvious disapproval, which completely dispensed with any necessity of considering any of the Bradford Hill factors, or the precondition of a statistically significant association.  See Reference Manual at 599 n. 144 (citing Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants cite no authority, scientific or legal, that compliance with all, or even one, of these factors is required. . . . The scientific consensus is, in fact, to the contrary. It identifies Defendants’ list of factors as some of the nine factors or lenses that guide epidemiologists in making judgments about causation. . . . These factors are not tests for determining the reliability of any study or the causal inferences drawn from it.“).

Shira Kramer Takes Her Lumpings

The plaintiffs’ other key expert witness, Dr. Shira Kramer, was a more sophisticated and experienced obfuscator.  Kramer attempted to provide plaintiffs with a necessary association by “lumping” all birth defects together in her analysis of epidemiologic data of birth defects among children of women who had ingested Paxil (or other SSRIs).  Given the clear evidence that different birth defects arise at different times, based upon interference with different embryological processes, the trial court discerned this “lumping” of end points to be methodologically inappropriate.  Slip op. at 8 (citing Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000), aff’d, 247 F.3d 240 (5th Cir. 2001) (unpublished)).

Without her “lumping”, Dr. Kramer was left with only a weak, inconsistent claim of biological plausibility and temporality. Finding that Dr. Kramer’s opinion had outrun her headlights, Judge Berrigan excluded Dr. Kramer as an expert witness, and granted GSK summary judgment.

Merry Christmas!

 

The Matrixx Motion in U.S. v. Harkonen

December 17th, 2012

United States of America v. W. Scott Harkonen, MD — Part III

Background

The recent oral argument in United States v. Harkonen (see “The (Clinical) Trial by Franz Kafka” (Dec. 9, 2012)), pushed me to revisit the brief filed by the Solicitor General’s office in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011).  One of Dr. Harkonen’s post-trial motions contended that the government’s failure to disclose its Matrixx amicus brief deprived him of a powerful argument that would have resulted from citing the language of the brief, which disparaged the necessity of statistical significance for “demonstrating” causal inferences. See “Multiplicity versus Duplicity – The Harkonen Conviction” (Dec. 11, 2012).

Matrixx Initiatives is a good example of how litigants make bad law when they press for rulings on bad facts.  The Supreme Court ultimately held that pleading and proving causation were not necessary for a securities fraud action that turned on non-disclosure of information about health outcomes among users of the company’s medication. What is required is “materiality,” which may be satisfied upon a much lower showing than causation.  Because Matrixx Initiatives contended that statistical significance was necessary to causation, which in turn was needed to show materiality, much of the briefing before the Supreme Court addressed statistical significance, but the reality is that the Court’s disposition obviated any discussion of the role of statistical inferences for causation. 131 S.Ct. at 1319.

Still, the Supreme Court, in a unanimous opinion, plowed forward and issued its improvident dicta about statistical significance. Taken at face value, the Court’s statement that “the premise that statistical significance is the only reliable indication of causation … is flawed,” is unexceptionable. Matrixx Initiatives, 131 S.Ct. at 1319.  For one thing, the statement would be true if statistical significance were necessary but not sufficient to “indicate” causation. But more to the point, there are some cases in which statistical significance may not be part of the analytical toolkit for reaching a causal conclusion. For instance, the infamous Ferebee case, which did not involve Federal Rule of Evidence 702, is a good example of a case that did not involve epidemiologic or statistical evidence.  See “Ferebee Revisited” (Nov. 8, 2012) (discussing the agreement of both parties that statistical evidence was not necessary to resolve general causation because of the acute onset, post-exposure, of an extremely uncommon medical outcome – severe diffuse interstitial pulmonary fibrosis).

Surely, there are other such cases, but in modern products liability law, many causation puzzles turn upon the interpretation of rate-driven processes, measured using epidemiologic studies, involving a measurable base-line risk and an observed higher or lower risk among a sample of an exposed population. In this context, some evaluation of the size of random error is, indeed, necessary. The Supreme Court’s muddled dicta, however, have confused the issues by painting with an extremely broad brush.
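To make the point concrete, evaluating the size of random error in a rate-driven process typically means attaching a confidence interval to the observed risk ratio. Here is a minimal sketch, with entirely hypothetical counts:

```python
# A minimal sketch of evaluating random error for a rate-driven process:
# an observed relative risk plus a 95% confidence interval.
# All counts are hypothetical.
import math

exposed_cases, exposed_n = 30, 1000      # hypothetical exposed cohort
unexposed_cases, unexposed_n = 20, 1000  # hypothetical unexposed cohort

rr = (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

# Standard error of log(RR) (Katz method), then a 95% CI on the ratio scale.
se = math.sqrt(1/exposed_cases - 1/exposed_n + 1/unexposed_cases - 1/unexposed_n)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 1.50, 95% CI (0.86, 2.62)
```

The interval here spans 1.0, which is precisely the sort of random-error information that the Court’s broad brush obscures.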

The dicta in Matrixx Initiatives have already led to judicial errors. The MDL court in the Chantix litigation provides one such instance. Plaintiffs claimed that Chantix, a medication that helps people stop smoking, causes suicide. Pfizer, the manufacturer, challenged plaintiffs’ general causation expert witnesses, for not meeting the standards of Federal Rule of Evidence 702, for various reasons, not the least of which was that the studies relied upon by plaintiffs’ witnesses did not show statistical significance.  In re Chantix Prods. Liab. Litig., MDL 2092, 2012 U.S. Dist. LEXIS 130144 (Aug. 21, 2012).  The Chantix MDL court, citing Matrixx Initiatives for a blanket rejection of the need to consider random error, denied the defendant’s challenge. Id. at *41-42 (citing Matrixx Initiatives, 131 S.Ct. at 1319).

The Supreme Court, in Matrixx, however, never stated or implied such a blanket rejection of the importance of considering random error in evidence that was essentially statistical in nature. Of course, if it had done so, it would have been wrong.

Within two weeks of the Chantix decision, a similar erroneous interpretation of Matrixx Initiatives surfaced in MDL litigation over fenfluramine.  Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012). Rejecting a Rule 702 challenge to plaintiffs’ expert witness’s opinion, the MDL trial judge cited Matrixx Initiatives for the assertion that:

Daubert does not require that an expert opinion regarding causation be based on statistical evidence in order to be reliable. * * * In fact, many courts have recognized that medical professionals often base their opinions on data other than statistical evidence from controlled clinical trials or epidemiological studies.”

Id. at *22 (citing Matrixx Initiatives, 131 S. Ct. at 1319, 1320).  While some causation opinions might appropriately be based upon evidence other than statistical evidence, the Supreme Court specifically disclaimed any comment upon Rule 702 in Matrixx Initiatives, which was a case about proper pleading of materiality in a securities fraud case, not about proper foundations for actual evidence of causation, at trial, of a health-effects claim. The Cheek decision is thus remarkable for profoundly misunderstanding the Matrixx case. There was no resolution of any Rule 702 issue in Matrixx.

The Trial Court’s Denial of the Matrixx Motion in Harkonen

Dr. Harkonen argued that he is entitled to a new trial on the basis of “newly discovered evidence” in the form of the government’s amicus brief in Matrixx. The trial court denied this motion on several grounds.  First, the government’s amicus brief was filed after the jury returned its verdict against Dr. Harkonen.  Second, the language in the Solicitor General’s amicus brief was just “argument.”  And third, the issue in Matrixx involved adverse events, not efficacy, and the FDA, as well as investors, would be concerned with lesser levels of evidence that did not “demonstrate” causation.  United States v. Harkonen, Memorandum & Order re Defendant Harkonen’s Motions for a New Trial, No. C 08-00164 MHP (N.D. Calif. April 18, 2011). Perhaps the most telling ground might have been that the government’s amicus briefing about statistical significance, prompted by Matrixx Initiatives’ appellate theory, was irrelevant to the proper resolution of that Supreme Court case.  Still, if these reasons are taken individually, or in combination, they fail to mitigate the unfairness of the government’s prosecution of Dr. Harkonen.

The Amicus Brief Behind the Matrixx Motion

Judge Patel’s denial of the motion raised serious problems. See “Multiplicity versus Duplicity – The Harkonen Conviction” (Dec. 11, 2012).  It may thus be worth a closer look at the government’s amicus brief to evaluate Dr. Harkonen’s Matrixx motion. The distinction between efficacy and adverse effects is particularly unconvincing.  Similarly, it does not seem fair to permit the government to take inconsistent positions, whether on facts or on inferences and arguments, when those inconsistencies confuse criminal defendants, prosecutors, civil litigants, and lower court judges. After all, Dr. Harkonen’s use of the key word, “demonstrate,” was an argument about the epistemic strength of the evidence at hand.

The government’s amicus brief was filed by the Solicitor General’s office, along with counsel for the Food and Drug Division of the Department of Health & Human Services. The government, in its brief, appeared to disclaim the necessity, or even the importance, of statistical significance:

“[w]hile statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant … does not refute an inference of causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *14 (Nov. 12, 2010). This statement, with its double negatives, is highly problematic.  Validity of a correlation is really not what is at issue in a randomized clinical trial; rather it is the statistical reliability or stability of the measurement that is called into question when the result is not statistically significant.  A statistically insignificant result may not refute causation, but it certainly does not thereby support an inference of causation.  The Solicitor General’s brief made this statement without citation to any biostatistics text or treatise.

The government’s amicus brief introduces its discussion of statistical significance with a heading entitled “Statistical significance is a limited and non-exclusive tool for inferring causation.” Id. at *13.  In a footnote, the government elaborated that its position applied to both safety and efficacy outcomes:

“[t]he same principle applies to studies suggesting that a particular drug is efficacious. A study  in which the cure rate for cancer patients who took a drug was twice the cure rate for those who took a placebo could generate meaningful interest even if the results were not statistically significant.”

Id. at *15 n.2.  Judge Patel’s distinction between efficacy and adverse events thus cannot be sustained. Of course, “meaningful interest” is not exactly a sufficient basis for a causal conclusion. As a general matter, Dr. Harkonen’s motion seems well grounded.  Although not a model of clarity, the amicus brief appears to disparage the necessity of statistical significance for supporting a causal conclusion. A criminal defendant being prosecuted for using the wrong verb to describe his characterization of the inference he drew from a clinical trial would certainly want to showcase these high-profile statements made by Solicitor General’s office to the highest court of the land.

Solicitor General’s Good Advice

Much of the Solicitor General’s brief is directly on point for the Matrixx case. The amicus brief leads off by insisting that information that supports reasonable suspicions about adverse events may be material absent sufficient evidence of causation.  Id. at 11.  Of course, this is the dispositive argument, and it is stated well in the brief.  The brief then wanders into scientific and statistical territory, with little or no authority, at times misciting important works such as the Reference Manual on Scientific Evidence.

The Solicitor General’s amicus brief homes in on the key issue: materiality, which does not necessarily involve causation:

“Second, a reasonable investor may consider information suggesting an adverse drug effect important even if it does not prove that the drug causes the effect.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *8.

“As explained above (see p. 19, supra), however, adverse event reports do not lend themselves to a statistical-significance analysis. At a minimum, the standard petitioners advocate would require the design of a scientific study able to capture the relative rates of incidence (either through a clinical trial or observational study); enough participants and data to perform such a study and make it powerful enough to detect any increased incidence of the adverse effect; and a researcher equipped and interested enough to conduct it.”

Id. at 23.

“As petitioners acknowledge (Br. 23), FDA does not apply any single metric for determining when additional inquiry or action is necessary, and it certainly does not insist upon ‘statistical significance.’ See Adverse Event Reporting 7. Indeed, statistical significance is not a scientifically appropriate or meaningful standard in evaluating adverse event data outside of carefully designed studies. Id. at 5; cf. Lempert 240 (‘it is meaningless to talk about receiving a statistically significant number of complaints’).”

Id. at 19. So statistical significance is unrelated to the case, and the kind of evidence of materiality alleged by plaintiffs does not even lend itself to a measurement of statistical significance.  At this point, the brief writers might have called it a day.  The amicus brief, however, pushes on.

Solicitor General’s Ignoratio Elenchi

A good part of the government’s amicus brief in Matrixx presented argument irrelevant to the issues before the Court, even assuming that statistical significance was relevant to materiality.

“First, data showing a statistically significant association are not essential to establish a link between use of a drug and an adverse effect. As petitioners ultimately acknowledge (Br. 44 n.22), medical researchers, regulators, and courts consider multiple factors in assessing causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *12.  This statement is a non-sequitur.  The consideration of multiple factors in assessing causation does not make the need for a statistically significant association any more or less essential. Statistical significance could still be necessary but not sufficient in assessing causation.  The government’s brief writers pick up the thread a few pages later:

“More broadly, causation can appropriately be inferred through consideration of multiple factors independent of statistical significance. In a footnote, petitioners acknowledge that critical fact: ‘[C]ourts permit an inference of causation on the basis of scientifically reliable evidence other than statistically significant epidemiological data. In such cases experts rely on a lengthy list of factors to draw reliable inferences, including, for example,

(1) the “strength” of the association, including “whether it is statistically significant”;

(2) temporal relationship between exposure and the adverse event;

(3) consistency across multiple studies;

(4) “biological plausibility”;

(5) “consideration of alternative explanations” (i.e., confounding);

(6) “specificity” (i.e., whether the specific chemical is associated with the specific disease at issue); and

(7) dose-response relationship (i.e., whether an increase in exposure yields an increase in risk).’ ”

Pet. Br. 44 n.22 (citations omitted). Those and other factors for inferring causation have been well recognized in the medical literature and by the courts of appeals. See, e.g., Reference Guide on Epidemiology 345-347 (discussing relevance of toxicologic studies), 375-379 (citing, e.g., Austin Bradford Hill, The Environment and Disease: Association or Causation?, 58 Proc. Royal Soc’y Med. 295 (1965))… .”

Id. at 15-16. These enumerated factors are obviously due to Sir Austin Bradford Hill. No doubt Matrixx Initiatives cited the Bradford Hill factors, but that was because the company was contending that statistical significance was necessary but not sufficient to show causation.  As Bradford Hill showed by his famous conclusion that smoking causes lung cancer, these factors were considered after statistical significance was shown in several epidemiologic studies.  The Supreme Court incorporated this non-argument into its opinion, even after disclaiming that causation was needed for materiality or that the Court was going to assess the propriety of causal findings in other cases.

The Solicitor General went on to cite three cases for the proposition that statistical significance is not necessary for assessing causation:

Best v. Lowe’s Home Centers, Inc., 563 F.3d 171, 178 (6th Cir. 2009) (“an ‘overwhelming majority of the courts of appeals’ agree” that differential diagnosis, a process for medical diagnosis that does not entail statistical significance tests, informs causation) (quoting Westberry v. Gislaved Gummi AB, 178 F.3d 257, 263 (4th Cir. 1999)).”

Id. at 16.  These two cases both involved so-called “differential diagnosis” or differential etiology, a process of ruling in, by ruling out.  This method, which involves iterative disjunctive syllogism, starts from established causes, and reasons to a single cause responsible for a given case of the disease.  The citation of these cases was irrelevant and bad scholarship by the government.  The Solicitor General’s error here seems to have been responsible for the Supreme Court’s unthinking incorporation of these cases into its opinion.

The Solicitor General went on to cite a third case, the infamous Ferebee, for its suggestion that statistical significance was not necessary to establish causation:

Ferebee v. Chevron Chem. Co., 736 F.2d 1529, 1536 (D.C. Cir.) (‘[P]roducts liability law does not preclude recovery until a “statistically significant” number of people have been injured’.), cert. denied, 469 U.S. 1062 (1984). As discussed below (see pp. 19-20, infra), FDA relies on a number of those factors in deciding whether to take regulatory action based on reports of an adverse drug effect.”

Id. at 16.  Curiously, the Supreme Court departed from its reliance on the Solicitor General’s brief, with respect to Ferebee, and substituted its own citation to Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d in relevant part, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986). See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1” (Nov. 12, 2012).  The reliance upon the two differential etiology cases was “demonstrably” wrong, but citing Wells was even more bizarre because that case featured at least one statistically significant study relied upon by plaintiffs’ expert witnesses. Ferebee, on the other hand, involved an acute onset of a rare condition – severe pulmonary fibrosis – shortly after exposure to paraquat.  Ferebee was thus a case in which the parties agreed that the causal relationship between paraquat and lung fibrosis had been established by non-analytical epidemiologic evidence.  See “Ferebee Revisited.”

The government then pointed out in its amicus brief that sometimes statistical significance is hard to obtain:

“In some circumstances —e.g., where an adverse effect is subtle or has a low rate of incidence —an inability to obtain a data set of appropriate quality or quantity may preclude a finding of statistical significance. Ibid. That does not mean, however, that researchers have no basis on which to infer a plausible causal link between a drug and an adverse effect.”

Id. at 15. Biological plausibility is hardly a biologically established causal link.  Inability to find an appropriate data set often translates into an inability to draw a causal conclusion; inappropriate data are not an excuse for jumping to unsupported conclusions.

Solicitor General’s Bad Advice – Crimen Falsi?

The government’s brief then manages to go from bad to worse. The government’s amicus brief in Matrixx raises serious concerns about criminalizing inappropriate statistical statements, inferences, or conclusions.  If the Solicitor General’s office, with input from the Chief Counsel of the Food and Drug Division of the Department of Health & Human Services, cannot correctly state basic definitions of statistical significance, then the government has no business prosecuting others for similar offenses.

“To assess statistical significance in the medical context, a researcher begins with the ‘null hypothesis’, i.e., that there is no relationship between the drug and the adverse effect. The researcher calculates a ‘p-value’, which is the probability that the association observed in the study would have occurred even if there were in fact no link between the drug and the adverse effect. If that p-value is lower than the ‘significance level’ selected for the study, then the results can be deemed statistically significant.”

Id. at 13. Here the government’s brief commits a common error that results when lawyers want to simplify the definition of a p-value. The p-value is a cumulative probability of observing a disparity at least as great as observed, given the assumption that there is no difference.  Furthermore, the subjunctive is not appropriate to describe the basic assumption of significance probability.

“The significance level most commonly used in medical studies is 0.05. If the p-value is less than 0.05, there is less than a 5% chance that the observed association between the drug and the effect would have occurred randomly, and the results from such a study are deemed statistically significant. Conversely, if the p-value is greater than 0.05, there is greater than a 5% chance that the observed association would have occurred randomly, and the results are deemed not statistically significant. See Reference Guide on Epidemiology 357-358; David Kaye & David A. Freedman, Reference Guide on Statistics, in Reference Manual on Scientific Evidence 123, 123-125 (2d ed. 2000) (Reference Guide on Statistics).”

Id. at 14. Here the government’s brief drops the conditional of the significance probability; the p-value provides the probability that a disparity at least as large as observed would have occurred (based upon the assumed probability model), given the assumption that there really is no difference between the observed and expected results.
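The correct, conditional definition is easy to exhibit by simulation. In the sketch below, every number is hypothetical, the null hypothesis is true by construction, and the p-value is simply the frequency with which chance alone produces a disparity at least as large as the one “observed”:

```python
# A simulation of the conditional definition of a p-value: the probability,
# computed on the ASSUMPTION that the null hypothesis is true, of observing
# a disparity at least as large as the one actually observed.
# All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_per_arm = 165        # hypothetical arm size
observed_diff = 0.068  # hypothetical observed difference in event rates
null_rate = 0.133      # pooled event rate under the null (no drug effect)

# Simulate many two-arm trials in which the null is true by construction.
sims = 100_000
treated = rng.binomial(n_per_arm, null_rate, sims) / n_per_arm
placebo = rng.binomial(n_per_arm, null_rate, sims) / n_per_arm

# Two-sided p-value: how often does chance alone produce a disparity
# at least as large as the observed one?
p_value = np.mean(np.abs(treated - placebo) >= observed_diff)
print(f"simulated p-value = {p_value:.3f}")
```

Nothing in this calculation speaks to the probability that the null hypothesis itself is true; the conditioning runs entirely the other way.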

“While statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant – let alone, as here, the absence of any determination one way or the other — does not refute an inference of causation. See Michael D. Green, Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation, 86 Nw. U. L. Rev. 643, 682- 683 (1992).”

Id. at 14. Validity is probably the wrong word since most statisticians and scientific authors use validity to refer to features other than low random error.

“Take, for example, results from a study, with a p-value of 0.06, showing that those who take a drug develop a rare but serious adverse effect (e.g., permanent paralysis) three times as often as those who do not. Because the p-value exceeds 5%, the study’s results would not be considered statistically significant at the 0.05 level. But since the results indicate a 94% likelihood that the observed association between the drug and the effect would not have occurred randomly, the data would clearly bear on the drug’s safety. Upon release of such a study, ‘confidence in the safety of the drug in question should diminish, and if the drug were important enough to [the issuer’s] balance sheet, the price of its stock would be expected to decline.’ Lempert 239.”

Id. at 14-15. The citation to Lempert’s article is misleading. At the cited page, Professor Lempert is simply making the point that materiality in a securities fraud case will often be present when evidence for a causal conclusion is not. Richard Lempert, “The Significance of Statistical Significance:  Two Authors Restate An Incontrovertible Caution. Why A Book?” 34 Law & Social Inquiry 225, 239 (2009).  In so writing, Lempert anticipated the true holding of Matrixx Initiatives.  The calculation of the 94% likelihood is also incorrect.  The quantity (1 – [p-value]) yields a probability that describes the probability of obtaining a disparity no greater than the observed result, on the assumption that there is no difference at all between observed and expected results. There is, however, a larger point lurking in this passage of the amicus brief, which is that the difference between a p-value of 0.05 and 0.06 is not particularly large, and there is thus a degree of arbitrariness to treating it as too sharp a line.
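The error is easy to demonstrate. Under the null hypothesis, p-values from a continuous test statistic are uniformly distributed, so a p-value of 0.06 licenses no “94% likelihood” that the association is real. A minimal simulation, with all parameters hypothetical:

```python
# Under a true null hypothesis, p-values are uniformly distributed, so small
# p-values occur at exactly their nominal rates even when there is NO effect.
# All parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 20,000 two-arm "trials" in which the null is true by construction.
sims, n = 20_000, 100
a = rng.normal(0.0, 1.0, (sims, n))
b = rng.normal(0.0, 1.0, (sims, n))
p_values = stats.ttest_ind(a, b, axis=1).pvalue

# (1 - p) is P(disparity NO GREATER than the one observed | null); it is not
# the probability that the association is real.
print(np.mean(p_values <= 0.06))  # ~0.06: null trials alone yield p <= 0.06 about 6% of the time
```

The simulation shows why (1 - p) cannot be read as the probability that the observed association is genuine.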

All in all, a distressingly poor performance by the Solicitor General’s office.  With access to many talented statisticians, the government could at least have had a competent statistician review and approve the content of this amicus brief.  I suspect that most judges and lawyers, however, would balk at drawing an inference that the Solicitor General intended to mislead the Court simply because the brief contained so many misstatements about statistical inference.  This reluctance should have obvious implications for the government’s attempt to criminalize Dr. Harkonen’s statistical inferences.

Egilman Petitions the Supreme Court for Review of His Own Exclusion in Newkirk v. Conagra Foods

December 13th, 2012

Last year, the United States Court of Appeals for the Ninth Circuit affirmed a district judge’s decision to exclude Dr. David S. Egilman from testifying in a consumer-exposure diacetyl case.  Newkirk v. Conagra Foods Inc., 438 Fed. Appx. 607 (9th Cir. 2011).  The plaintiff moved on, but his expert witness could not let his exclusion go.

To get the full “flavor” of this diacetyl case, read the district court’s opinion, which excluded Egilman and other witnesses, and entered summary judgment for the defense. Newkirk v. Conagra Foods, Inc., 727 F. Supp. 2d 1006  (E.D. Wash. July 2, 2010).  Here is the language that had Dr. Egilman popping mad:

“In other parts of his reports and testimony, Dr. Egilman relies on existing data, mostly in the form of published studies, but draws conclusions far beyond what the study authors concluded, or Dr. Egilman manipulates the data from those studies to reach misleading conclusions of his own. See Daubert I, 509 U.S. at 592–93, 113 S.Ct. 2786.”

727 F. Supp. 2d at 1018.

This language cut Dr. Egilman to the kernel, and provoked him to lodge a personal appeal to the Ninth Circuit, based in part upon the economic harm done to his litigation consulting and testimonial practice. (See attached Egilman Motion Appeal Diacetyl Exclusion 2011 and Egilman Declaration Newkirk Diacetyl Appeal 2011.)  Not only did the exclusion hurt Dr. Egilman’s livelihood, but also his eleemosynary endeavors:

“The Daubert ruling eliminates my ability to testify in this case and in others. I will lose the opportunity to bill for services in this case and in others (although I generally donate most fees related to courtroom testimony to charitable organizations, the lack of opportunity to do so is an injury to me). Based on my experience, it is virtually certain that some lawyers will choose not to attempt to retain me as a result of this ruling. Some lawyers will be dissuaded from retaining my services because the ruling is replete with unsubstantiated pejorative attacks on my qualifications as a scientist and expert. The judge’s rejection of my opinion is primarily an ad hominem attack and not based on an actual analysis of what I said – in an effort to deflect the ad hominem nature of the attack the judge creates ‘straw man’ arguments and then knocks the straw men down, without ever addressing the substance of my positions.”

Egilman Declaration in Newkirk at Paragraph 11.

The Ninth Circuit affirmed Dr. Egilman’s exclusion. Newkirk v. Conagra Foods, Inc., 438 Fed. Appx. 607 (9th Cir. 2011).  See “Ninth Circuit Affirms Rule 702 Exclusion of Dr David Egilman in Diacetyl Case.”

This year, the Ninth Circuit dismissed his personal appeal for lack of standing.  Egilman v. Conagra Foods, Inc., 2012 WL 3836100 (9th Cir. 2012). Previously, I suggested that the Ninth Circuit had issued a judgment from which there will be no appeal.  I may have been mistaken.  Last week, counsel for Dr. Egilman filed a petition for certiorari in the United States Supreme Court.  Smarting from the district court’s attack on his character and professionalism, Dr. Egilman is seeking the personal right to appeal an adverse Rule 702 ruling.  The Circuit split, which Dr. Egilman hopes will get him a hearing in the Supreme Court, involves the issue whether he, as a non-party witness, must intervene in the proceedings in order to preserve his right to appeal:

“Whether a nonparty to a district court proceeding has a right to appeal a decision that adversely affects his interest, as the Second, Sixth, and D.C. Circuits hold, or whether, as six other circuit courts hold, the nonparty must intervene or otherwise participate in the district court proceedings to have a right to appeal.”

Egilman Pet’n Cert Newkirk v Conagra SCOTUS at 5 (Dec. 2012).  Of course, there is also a split among courts about Dr. Egilman’s reliability.

And who represents Dr. Egilman?  Counsel of record is Alexander A. Reinert, who teaches at Cardozo Law School, here in New York.  Dr. Egilman and Reinert have published several articles together, within the scope of Dr. Egilman’s litigation-oriented practice.[i]  In the past, I have commented upon Reinert’s work.  See, e.g., Schachtman, “Confidence in Intervals and Diffidence in the Courts” (May 8, 2012) (Arthur H. Bryant & Alexander A. Reinert, “The Legal System’s Use of Epidemiology,” 87 Judicature 12, 19 (2003) (“The confidence interval is intended to provide a range of values within which, at a specified level of certainty, the magnitude of association lies.”) (incorrectly citing the first edition of Rothman & Greenland, Modern Epidemiology 190 (Philadelphia 1998))). It should be interesting to see what mischief Egilman & Reinert can make in the Supreme Court.


[i] David S. Egilman & Alexander A. Reinert, “Corruption of Previously Published Asbestos Research,” 55 Arch. Envt’l Health 75 (2000); David S. Egilman & Alexander A. Reinert, “Asbestos Exposure and Lung Cancer: Asbestosis Is Not Necessary,” 30 Am. J. Indus. Med. 398 (1996); David S. Egilman & Alexander A. Reinert, “The Asbestos TLV: Early Evidence of Inadequacy,” Am. J. Indus. Med. 369 (1996); David S. Egilman & Alexander A. Reinert, “The Origin and Development of the Asbestos Threshold Limit Value: Scientific Indifference and Corporate Influence,” 25 Internat’l J. Health Serv. 667 (1995).

Multiplicity versus Duplicity – The Harkonen Conviction

December 11th, 2012

United States of America v. W. Scott Harkonen, MD — Part II

The Alleged Fraud – “False as a matter of statistics”

The essence of the government’s case was that drawing an inference of causation from a statistically nonsignificant, post-hoc analysis was “false as a matter of statistics.” ER2498.  Dr. Harkonen’s trial counsel did not present any statistician testimony at trial.  In their final argument, his counsel explained that they obtained sufficient concessions at trial to make their point.

In post-trial motions, new counsel for Dr. Harkonen submitted affidavits from Dr. Steven Goodman and Dr. Donald Rubin, two very capable and highly accomplished statisticians, who explained the diversity of views in their field about the role of p-values in interpreting study data and drawing causal inferences.  At trial, however, the government’s witnesses, Drs. Crager and Fleming, testified that p-values of [less than] 0.05 were “magic numbers.”  United States v. Harkonen, 2010 WL 2985257, at *5 (N.D. Calif. 2010) (Judge Patel’s opinion denying defendant’s post-trial motions to dismiss the indictment, for acquittal, or for a new trial).  Sometimes judges are looking for bright lines in the wrong places.

The Multiplicity Problem

The government argued that the proper interpretation of a given p-value requires information about the nature and context of the statistical test that gave rise to the p-value.  If many independent tests are run on the same set of data, a low p-value would be expected to occur by chance alone.  Multiple testing can inflate the rate of false-positive findings (Type I errors).  The generation of these potentially false-positive results is sometimes called the “multiplicity problem”; in the face of multiple testing, a stated p-value can greatly understate the chance of obtaining at least one false-positive finding.
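The arithmetic of the multiplicity problem is simple and striking. If each of k independent tests is run at the 0.05 significance level, and every null hypothesis is true, the chance of at least one nominally “significant” result is 1 - (0.95)^k:

```python
# Family-wise error rate for k independent tests, each at the 0.05 level,
# when every null hypothesis is true: FWER = 1 - (1 - 0.05)^k
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - 0.05) ** k
    print(f"{k:2d} tests -> P(at least one false positive) = {fwer:.0%}")

# 1 test -> 5%;  5 tests -> 23%;  10 tests -> 40%;  20 tests -> 64%
```

With ten end points, the chance of a spurious “significant” finding is roughly 40%, even though each individual test is nominally held to a 5% error rate.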

In the context of a randomized clinical trial, it is thus important to know what the prespecified primary and secondary end points were.  David Moher, Kenneth F. Schulz, and Douglas G. Altman, “The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials,” 357 Lancet 1191 (2001). Post hoc data dredging can lead to the “Texas Sharpshooter Fallacy,” which results when an investigator draws a target around a hit, after the fact, and declares a bulls-eye.

Dr. Fleming thus had a limited point; namely, the use of the verb “demonstrate” rather than “show” or “suggest” was too strong if based solely upon InterMune’s clinical trial, given that the low p-value came in the context of a non-prespecified subgroup analysis. (The supposedly offensive press release issued by Dr. Harkonen did indicate that the data confirmed the results in a previously reported phase II trial.) If the government engaged in some counter-speech to say that Dr. Harkonen’s statements fell below an idealized “best statistical practice” in his use of “demonstrate,” many statisticians might well agree with the government.  Even this limited point would evaporate if Dr. Harkonen had stated that the phase III subgroup analysis, along with the earlier published clinical trial, and clinical experience, “demonstrated” a survival benefit.  Had Dr. Harkonen issued this more scientifically felicitous statement, the government could not have made a claim of falsity in using the verb “to demonstrate” with a single p-value from a post hoc subgroup analysis.  Such a statement would have taken Dr. Harkonen’s analytic inference out of the purely statistical realm. Indeed, Dr. Harkonen’s press release did reference an earlier phase II trial, as well as notify readers that more detailed analyses would be presented at upcoming medical conferences.  Although Dr. Harkonen did use “demonstrate” to characterize the results of the phase III trial standing alone, the entire press release made clear that the data were preliminary. It is difficult to imagine any reasonable physician prescribing Actimmune on the basis of the press release.

The prosecution and conviction of Dr. Harkonen thus raise the issue whether the alleged improper characterization of a study’s statistical result can be criminalized by the State.  Clearly, the federal prosecutors were motivated by their perception that the alleged fraud was connected to an attempt to promote an off-label use of Actimmune.  Such linguistic precision, however, is widely flouted in the world of law and science.  Lawyers use the word “proofs,” which often admit of inferences for either side, to describe real, demonstrative, and testimonial evidence.  A mathematician might be moved to prosecute all lawyers for fraudulent speech.  From the mathematician’s perspective, the lawyers have made a claim of certainty in using “proof,” which is totally out of place.  Even in the world of science, the verb “to demonstrate” is used in a way that does not imply the sort of certitude that the purists might wish to retain for the strongest of empirical inferences from clinical trials. See, e.g., William B. Wong, Vincent W. Lin, Denise Boudreau, and Emily Beth Devine, “Statins in the prevention of dementia and Alzheimer’s disease: A meta-analysis of observational studies and an assessment of confounding,” 21 Pharmacoepidemiology & Drug Safety in-press, at Abstract (2012) (“Studies demonstrate the potential for statins to prevent dementia and Alzheimer’s disease (AD), but the evidence is inconclusive.”) (emphasis added).

The Duplicity Problem – The Matrixx Motion

After the conviction, Dr. Harkonen’s counsel moved for a new trial on grounds of newly discovered evidence. Dr. Harkonen’s counsel hoisted the prosecutors with their own petards, by quoting the government’s amicus brief to the United States Supreme Court in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011).  In Matrixx, the securities fraud plaintiffs contended that they need not plead “statistically significant” evidence for adverse drug effects.  The Solicitor General’s office, along with counsel for the Food and Drug Division of the Department of Health & Human Services, in their zeal to assist plaintiffs in their claims against an over-the-counter pharmaceutical manufacturer, disclaimed the necessity, or even the importance, of statistical significance:

“[w]hile statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant … does not refute an inference of causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *14 (Nov. 12, 2010).

The government’s amicus brief introduces its discussion of this topic with a heading entitled “Statistical significance is a limited and non-exclusive tool for inferring causation.” Id. at *13.  In a footnote, the government elaborated that its position applied to both safety and efficacy outcomes:

“[t]he same principle applies to studies suggesting that a particular drug is efficacious. A study  in which the cure rate for cancer patients who took a drug was twice the cure rate for those who took a placebo could generate meaningful interest even if the results were not statistically significant.”

Id. at *15 n.2.

The government might have suggested that Dr. Harkonen was parsing the amicus brief incorrectly.  After all, generating “meaningful interest” is not the same as generating a scientific conclusion, or as “demonstrating.” As I will show in a future post, the government, in its amicus brief, consistently misstated the meaning of statistical significance, and of significance probability.  The government’s inability to communicate these concepts correctly raises serious due process issues with a prosecution against someone for having used the wrong verb to describe a statistical inference.

SCOTUS

The government’s amicus brief was clearly influential before the Supreme Court. The Court cited to, and adopted in dictum, the claim that the absence of statistical significance did not mean that medical expert witnesses could not have a reliable basis for inferring causation between a drug and an adverse event.  Matrixx Initiatives, Inc. v. Siracusano, — U.S. –, 131 S.Ct. 1309, 1319-20 (2011) (“medical professionals and researchers do not limit the data they consider to the results of randomized clinical trials or to statistically significant evidence”).

In any event, the prosecutor, in Dr. Harkonen’s trial, argued in summation that InterMune’s clinical trial had “failed,” and no conclusions could be drawn from the trial.  If this argument was not flatly contradicted by the government’s Matrixx brief, then the argument was certainly undermined by the rhetorical force of the government’s amicus brief.

The district court denied Dr. Harkonen’s motion for a new trial, explaining that the government’s Matrixx amicus brief contained “argument” rather than “newly discovered evidence.” United States v. Harkonen, No. C 08-00164 MHP, Memorandum and Order re Defendant Harkonen’s Motions for a New Trial at 14 (N.D. Calif. April 18, 2011). This rationale seems particularly inapt because the interpretation of a statistical test and the drawing of an inference are both “arguments,” and it is a fact that the government contended that p < 0.05 was not necessary for drawing causal inferences. The district court also offered that Matrixx was distinguishable on the ground that the securities fraud there involved a safety outcome rather than an efficacy conclusion. This distinction truly lacks a difference:  the standards for determining causation do not differ as between establishing harm and establishing efficacy.  Of course, the FDA does employ a lesser, precautionary standard for regulating against harm, but this difference does not mean that causal connections between drugs and harms are assessed under different standards.

On December 6th, the appeals in United States v. Harkonen were argued and submitted for decision.  Win or lose, Dr. Harkonen’s case is likely to make important law on how scientists and lawyers speak about statistical inferences.

The (Clinical) Trial by Franz Kafka

December 9th, 2012

United States of America v. W. Scott Harkonen, MD — Part I

Last week, Mark Haddad, of Sidley Austin, argued Dr. W. Scott Harkonen’s appeal in the Ninth Circuit.   In 2009, Dr. Harkonen was convicted by a jury, before the Hon. Marilyn Hall Patel, on a single count of wire fraud, under 18 U.S.C. § 1343. The jury acquitted Dr. Harkonen of felony misbranding, 21 U.S.C. §§ 331(k), 333(a)(2), 352(a).  Dr. Harkonen’s crime?  Bad statistical practice!

Dr. Harkonen, a physician, was the President and CEO of InterMune, Inc., a biotechnology company that researches and develops medications. InterMune developed interferon gamma-1b (Actimmune®), which was licensed by the FDA for the treatment of two rare diseases, chronic granulomatous disease and severe, malignant osteopetrosis.  In 1999, Austrian researchers published the results of a small randomized clinical trial, which concluded that at 12 months, treatment with interferon gamma-1b (Actimmune®) plus prednisolone was associated with “substantial improvements in the conditions of patients with idiopathic pulmonary fibrosis [IPF] who had had no response to glucocorticoids alone.” Rolf Ziesche, Elisabeth Hofbauer, Karin Wittmann, Ventzislav Petkov & Lutz-Henning Block, 341 New Engl. J. Med. 1264 (1999).  Based upon this 1999 clinical trial, InterMune conducted another clinical trial, with a primary end point of “progression-free survival,” measured by decline in specified pulmonary function tests or death.  InterMune’s trial specified nine secondary end points, including survival time from randomization until the end of the trial.

InterMune’s trial failed to show a statistically significant improvement in progression-free survival, the primary end point.  Patients on Actimmune did, however, fare better on the survival end point, although the difference was not statistically significant at the pre-specified level of alpha (p < 0.05).  Twenty-eight of 168 patients on placebo died, while only 16 of 162 patients on Actimmune died, a relative reduction in mortality of roughly 40%, with a p-value of 0.084.  The relative survival benefit was greater (70%) for a non-prespecified subgroup that had mild-to-moderate IPF (by pulmonary function criteria) at the outset of the trial.

For a combined subgroup of all mild-to-moderate IPF patients (FVC > 55%), making up 77% of all trial participants, only 6 of 126 patients on Actimmune died, compared with 21 of 128 patients on placebo. For this non-prespecified subgroup, the relative reduction in mortality was roughly 70%, with p = 0.004. A rough check on these figures appears in the sketch below.
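The trial’s reported p-values came from time-to-event analyses, so a crude two-by-two comparison of the raw death counts will not reproduce them exactly, but it lands in the same neighborhood. A minimal sketch, using only the counts reported above (purely illustrative, not the trial’s actual statistical analysis):

```python
# A rough check on the reported figures, using Fisher's exact test on the
# raw death counts (illustrative only; the trial's actual p-values came
# from survival analyses, so these will differ somewhat).
from scipy.stats import fisher_exact

# Overall trial: 16 of 162 deaths on Actimmune vs. 28 of 168 on placebo
_, p_overall = fisher_exact([[16, 162 - 16], [28, 168 - 28]])

# Mild-to-moderate subgroup: 6 of 126 deaths vs. 21 of 128 on placebo
_, p_subgroup = fisher_exact([[6, 126 - 6], [21, 128 - 21]])

print(f"overall:  p = {p_overall:.3f}")   # in the neighborhood of the reported 0.084
print(f"subgroup: p = {p_subgroup:.3f}")  # in the neighborhood of the reported 0.004

# The relative reductions in mortality quoted in the press release
print(f"overall:  {1 - (16 / 162) / (28 / 168):.0%} relative reduction")  # about 40%
print(f"subgroup: {1 - (6 / 126) / (21 / 128):.0%} relative reduction")   # about 70%
```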

In August 2002, Dr. Harkonen approved a press release, which carried the headline, “phase III data demonstrating survival benefit of Actimmune in IPF.” A subtitle announced the 70% relative reduction in mortality among patients with mild to moderate disease.  The text of the press release stated that the company’s view was based upon “preliminary” clinical trial data, which “demonstrate a significant survival benefit in patients with mild to moderate disease randomly assigned to Actimmune versus control treatment (p=0.004).” The press release also stated the results and associated p-value for the survival end point for the whole study population, as well as the results of the long-term follow-up study of the patients from the original study by Ziesche et al. (which also showed a survival benefit for those randomized to Actimmune).  The remainder of the four-page press release acknowledged that the results for the primary end point did not reach statistical significance, and identified two upcoming medical conferences, as well as a conference call with the investment community that would be recorded and posted on the company’s website for two days, at which further details would be provided.

Dr. Harkonen was acquitted of misbranding, but convicted of wire fraud for having issued this press release.  The gravamen of his crime was stating that the clinical trial “demonstrated” prolonged survival for IPF patients.  The prosecution asserted that Dr. Harkonen had engaged in data dredging, grasping for a non-prespecified end point with a suitably low p-value attached. Such data dredging implicates the problem of multiple comparisons or tests, which inflates the risk of a false-positive finding notwithstanding a nominal p-value below 0.05, as the arithmetic sketched below illustrates.
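Some back-of-the-envelope arithmetic shows why multiplicity matters. The sketch below uses hypothetical numbers of comparisons (the trial itself had nine pre-specified secondary end points, plus however many subgroup analyses were run) to show how quickly the family-wise false-positive rate grows, and what a simple Bonferroni adjustment would do to the subgroup’s nominal p-value:

```python
# With k independent tests each run at alpha = 0.05, the chance of at
# least one false positive is 1 - 0.95**k (hypothetical k values).
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests: P(at least one false positive) = {1 - (1 - alpha) ** k:.2f}")

# A simple Bonferroni adjustment of the subgroup's nominal p = 0.004:
# it survives correction for 10 comparisons (0.04 < 0.05) but not for
# 20 (0.08), which is why the absence of any corrected p-value in the
# government's proof left the multiplicity question hanging.
for k in (10, 20):
    print(f"Bonferroni-adjusted p for {k:2d} comparisons: {min(1.0, 0.004 * k):.3f}")
```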

Supported by the testimony of Professor Thomas Fleming, who chaired the Data Safety Monitoring Board for the clinical trial in question, the government claimed that the trial results were “negative” because the p-values for all the pre-specified endpoints exceeded 0.05.  Shortly after the press release, Fleming sent InterMune a letter that strongly dissented from the language of the press release, which he characterized as misleading.  Because the primary and secondary end points were not statistically significant, and because the reported mortality benefit was found in a non-prespecified subgroup, the interpretation of the trial data required “greater caution,” and the press release was a “serious misrepresentation of results obtained from exploratory data subgroup analyses.”

The district court sentenced Dr. Harkonen to six months of home confinement, three years of probation, 200 hours of community service, and a fine of $20,000. Dr. Harkonen appealed on grounds that the federal fraud statutes do not permit the government to prosecute persons for expressing scientific opinions about which reasonable minds can differ.  If any reasonable person could find the defendant’s statement to be true, the argument runs, the trial court should dismiss the prosecution.  Statements that have support from even a minority of the scientific community should not be the basis for a fraud charge.  In Dr. Harkonen’s case, the government did not allege any misstatement of an objectively verifiable fact, but alleged falsity in his characterization of the data’s “demonstration” of an efficacy effect.  The government cross-appealed to complain about the leniency of the sentence.

Dr. Harkonen’s trial counsel did not present any expert witnesses, but he did elicit testimony from some of the government’s witnesses about the proper interpretation of the trial data, and about the controversy over reliance upon a precise p-value threshold for drawing causal inferences.  On appeal, for instance, Dr. Harkonen’s counsel quoted government witness Dr. Wayne Hockmeyer:

“Many times people have the impression that—that when you look at data, it’s immediately clear what conclusions you ought to draw from those data. . . . And sometimes that’s true. And sometimes there are gray areas. And it is not true all the time. And there’s a lot of vigorous debate that goes on amongst members of the scientific and medical community about the conclusions that one ought to draw from those data. ER1085.”

A panel of three judges, Judges Nelson, Tashima, and Murguia, heard Dr. Harkonen’s appeal.  The case presents obvious First Amendment issues, but the more curious issues involve whether the government can impose a statistical orthodoxy on pain of punishment under the wire fraud statutes.  There is much that can be said against Dr. Harkonen’s interpretation of the data.  Clearly, multiplicity was a problem that diluted the meaning of the reported p-value, but the government never presented evidence of what the p-value, corrected for multiple testing, might be.  If Dr. Harkonen committed a crime, then so have many biomedical journal editors, article authors, and government scientists who have over-interpreted evidence in communications that travel by U.S. mail and over the internet.

EPA Post Hoc Statistical Tests – One Tail vs Two

December 2nd, 2012

EPA 1992 Meta-Analysis of ETS & Lung Cancer – Part 2

In 1992, the U.S. Environmental Protection Agency (EPA) published a risk assessment of lung cancer (and other) risks from environmental tobacco smoke (ETS).  See Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders EPA/600/6-90/006F (1992).  The agency concluded that ETS causes about 3,000 lung cancer deaths each year among non-smoking adults.  See also EPA “Fact Sheet: Respiratory Health Effects of Passive Smoking,” Office of Research and Development, and Office of Air and Radiation, EPA Document Number 43-F-93-003 (Jan. 1993).

In my last post, I discussed how various plaintiffs, including tobacco companies, challenged the EPA’s conclusions as agency action that violated administrative and statutory procedures. “EPA Cherry Picking (WOE) – EPA 1992 Meta-Analysis of ETS & Lung Cancer – Part 1” (Dec. 2, 2012). The plaintiffs further claimed that the EPA had manufactured its methods to achieve the result it desired in advance of the analyses. A federal district court agreed with the methodological challenges to the EPA’s report, but the Court of Appeals reversed on grounds that the agency’s report was not reviewable agency action.  Flue-Cured Tobacco Cooperative Stabilization Corp. v. EPA, 4 F. Supp. 2d 435 (M.D.N.C. 1998), rev’d, 313 F.3d 852, 862 (4th Cir. 2002) (Widener, J.) (holding that the issuance of the report was not “final agency action”).

One of the grounds of the plaintiffs’ challenge was that the EPA had changed, without explanation, from a 95% to a 90% confidence interval.  The change in the specification of the coefficient of confidence was equivalent to a shift from a two-tailed to a one-tailed test of significance, with alpha set at 5%.  This change, along with gerrymandering or “cherry picking” of studies, allowed the EPA to claim a statistically significant association between ETS and lung cancer. 4 F. Supp. 2d at 461.  The plaintiffs pointed to the EPA’s own previous risk assessments, as well as statistical analyses by the World Health Organization (International Agency for Research on Cancer), the National Research Council, and the Surgeon General, all of which routinely use 95% intervals and two-tailed tests of significance.  Id.
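The equivalence is easy to verify. In the sketch below, which uses an illustrative summary relative risk and standard error rather than the EPA’s actual meta-analytic data, the critical value for a two-tailed 90% interval and for a one-tailed 95% bound is the same 1.645, so the two conventions share the lower confidence bound, which is all that matters for excluding a relative risk of 1.0:

```python
# Why a two-sided 90% interval and a one-sided 95% bound coincide at the
# lower end: each puts 5% of the probability in the lower tail.
# (Illustrative numbers; not the EPA's actual meta-analytic data.)
import math
from scipy.stats import norm

rr = 1.19           # hypothetical summary relative risk
se_log_rr = 0.095   # hypothetical standard error of log(RR)

z = norm.ppf(0.95)  # 1.645, the critical value for BOTH conventions
lower = math.exp(math.log(rr) - z * se_log_rr)
upper = math.exp(math.log(rr) + z * se_log_rr)
print(f"two-sided 90% CI:     ({lower:.3f}, {upper:.3f})")
print(f"one-sided 95% bound:  RR > {lower:.3f}")  # identical lower bound

# Under the conventional two-sided 95% interval (z = 1.960), the lower
# bound with these illustrative numbers falls below 1.0, which is the
# practical stake of the agency's switch.
lower_95 = math.exp(math.log(rr) - norm.ppf(0.975) * se_log_rr)
print(f"two-sided 95% CI lower bound: {lower_95:.3f}")
```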

In its 1990 Draft ETS Risk Assessment, the EPA had used a 95% confidence interval, but in later drafts, it changed to a 90% interval.  One of the epidemiologists on the EPA’s Scientific Advisory Board, Geoffrey Kabat, criticized this post hoc change, noting that 90% intervals are disfavored, and that the post hoc change in statistical methodology created the appearance of an intent to influence the outcome of the analysis. Id. (citing Geoffrey Kabat, “Comments on EPA’s Draft Report: Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders,” II.SAB.9.15 at 6 (July 28, 1992) (JA 12,185)).

The EPA argued that its adoption of a one-tailed test of significance was justified on the basis of an a priori hypothesis that ETS is associated with lung cancer.  Id. at 451-52, 461 (citing ETS Risk Assessment at 5–2). The court found this argument hopelessly circular.  The agency postulated its a priori hypothesis, which it then took as license to dilute the statistical test for assessing the evidence.  The agency thus assumed what it wished to show, in order to achieve the result it sought.  Id. at 456.  The EPA claimed that the one-tailed test had more power, but with dozens of studies aggregated into a summary result, the court recognized that Type I error was the larger threat to the validity of the agency’s conclusions.
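The power claim is true as far as it goes, but the gain comes entirely from relaxing the Type I error standard in the hypothesized direction. A minimal sketch, assuming a purely hypothetical effect size:

```python
# The tradeoff the court weighed: a one-tailed test gains power only by
# doubling the false-positive rate spent in the hypothesized direction.
# (Hypothetical effect size, in standard-error units, for illustration.)
from scipy.stats import norm

effect = 1.8  # hypothetical true effect, in standard-error units

for label, z_crit in [("two-tailed, alpha = 0.05", norm.ppf(0.975)),
                      ("one-tailed, alpha = 0.05", norm.ppf(0.95))]:
    power = 1 - norm.cdf(z_crit - effect)  # chance of rejecting upward
    print(f"{label}: critical z = {z_crit:.3f}, power = {power:.2f}")

# The one-tailed test rejects at z > 1.645 rather than z > 1.960, gaining
# power, but any chance positive fluke past 1.645 now counts as
# "significant": a 5% rather than 2.5% false-positive rate in the
# direction the agency hypothesized.
```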

The EPA also advanced a muddled defense of its use of 90% confidence intervals by arguing that if it used a 95% interval, the results would have been incongruent with the one-tailed p-values.  The court recognized that this was really no discrepancy at all, but only a corollary of using either one-tailed 5% tests or 90% confidence intervals.  Id. at 461.

If the EPA had adhered to its normal methodology, there would have been no statistically significant association between ETS and lung cancer. With its post hoc methodological choice, and its highly selective approach to study inclusion in its meta-analysis, the EPA was able to claim a weak, but statistically significant, association between ETS and lung cancer.  Id. at 463.  The court found this to be a deviation from the legally required use of “best judgment possible based upon the available evidence.”  Id.

Of course, the EPA could have announced its one-tailed test from the inception of the risk assessment, and justified its use on grounds that it was attempting to reach only a precautionary judgment for purposes of regulation.  Instead, the agency tried to showcase its finding as a scientific conclusion, which only further supported the tobacco companies’ challenge to the post hoc change in plan for statistical analysis.

Although the validity issues in the EPA’s 1992 meta-analysis should have been superseded by later studies, and later meta-analyses, the government’s fraud case, before Judge Kessler, resurrected the issue:

“3344. Defendants criticized EPA’s meta-analysis of U.S. epidemiological studies, particularly its use of an ‘unconventional 90 percent confidence interval’. However, Dr. [David] Burns, who participated in the EPA Risk Assessment, testified that the EPA used a one-tailed 95% confidence interval, not a two-tailed 90% confidence interval. He also explained in detail why a one-tailed test was proper: The EPA did not use a 90% confidence interval. They used a traditional 95% confidence interval, but they tested for that interval only in one direction. That is, rather than testing for both the possibility that exposure to ETS increased risk and the possibility that it decreased risk, the EPA only tested for the possibility that it increased the risk. It tested for that possibility using the traditional 5% chance or a P value of 0.05. It did not test for the possibility that ETS protected those exposed from developing lung cancer at the direction of the advisory panel which made that decision based on its prior decision that the evidence established that ETS was a carcinogen. What was being tested was whether the exposure was sufficient to increase lung cancer risk, not whether the agent itself, that is cigarette smoke, had the capacity to cause lung cancer with sufficient exposure. The statement that a 90% confidence interval was used comes from the observation that if you test for a 5% probability in one direction the boundary is the same as testing for a 10% probability in two directions. Burns WD, 67:5-15. In fact, the EPA Risk Assessment stated, ‘Throughout this chapter, one-tailed tests of significance (p = 0.05) are used …’ .”

U.S. v. Philip Morris USA, Inc., 449 F. Supp. 2d 1, 702-03 (D.D.C., 2006) (Kessler, J.) (internal citations omitted).

Judge Kessler was misled by Dr. Burns, a frequent testifier for plaintiffs’ counsel in tobacco cases.  Burns should have known that, with respect to the lower bound of the confidence interval, which is what matters for determining whether the meta-analysis excludes a risk ratio of 1.0, there is no difference between a one-tailed 95% confidence interval and a two-tailed 90% interval.  Burns’ sophistry hardly saves the EPA’s error in changing its pre-specified statistical analysis, or cures the danger of unduly increasing the risk of Type I error in the EPA meta-analysis. See “Pin the Tail on the Significance Test” (July 14, 2012).

Post-script

Judge Widener wrote the opinion for the panel of the United States Court of Appeals for the Fourth Circuit, which reversed the district court’s judgment against the EPA’s report.  The Circuit’s decision did not address the scientific issues, but by holding that the agency action was not reviewable, it removed the basis for the district court’s review of the scientific and statistical issues.  For those pundits who see only self-interested behavior in judging, the author of the Circuit’s decision was a lifetime smoker, who grew Burley tobacco on his farm outside Abingdon, Virginia.  Judge Widener died on September 19, 2007, of lung cancer.

EPA Cherry Picking (WOE) – EPA 1992 Meta-Analysis of ETS & Lung Cancer – Part 1

December 2nd, 2012

Somehow, before the Supreme Court breathed life into Federal Rule of Evidence 702, parties sometimes found a way to challenge dubious scientific evidence in court.  One good example is the challenge to the United States Environmental Protection Agency’s risk assessment of passive smoking, also known as environmental tobacco smoke (ETS).  In 1992, the Environmental Protection Agency (EPA) published a risk assessment of lung cancer (and other) risks from ETS.  See Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders EPA/600/6-90/006F (1992).  The agency concluded that ETS causes about 3,000 lung cancer deaths each year among non-smoking adults in the United States.  See also EPA “Fact Sheet: Respiratory Health Effects of Passive Smoking,” Office of Research & Development; EPA Document Number 43-F-93-003 (Jan. 1993).

Various plaintiffs, including tobacco companies, challenged the EPA’s conclusions as agency action that violated administrative and statutory procedures.  The plaintiffs further claimed that the EPA had manufactured its methods to achieve the result it desired in advance of the analyses. In other words, the plaintiffs asserted that the EPA’s issuance of the ETS report violated the Administrative Procedure Act’s procedural requirements, as well as the requirements of the specific enabling legislation, the Radon Gas and Indoor Air Quality Research Act, Pub. L. No. 99–499, 100 Stat. 1758–60 (1986) (codified at 42 U.S.C. § 7401 note (1994)).  A federal district court agreed with the methodological challenges to the EPA’s report, but the Court of Appeals reversed on grounds that the agency’s report was not reviewable agency action.  Flue-Cured Tobacco Cooperative Stabilization Corp. v. EPA, 4 F. Supp. 2d 435 (M.D.N.C. 1998), rev’d on other grounds, 313 F.3d 852, 862 (4th Cir. 2002) (Widener, J.) (holding that the issuance of the report was not “final agency action”). The district court’s assessment of the validity issues was not addressed by the appellate court.

Notwithstanding the district court’s findings, the EPA continues to claim that it had reached valid scientific conclusions using a “scientific approach”:

“EPA reached its conclusions concerning the potential for ETS to act as a human carcinogen based on an analysis of all of the available data, including more than 30 epidemiologic (human) studies looking specifically at passive smoking as well as information on active or direct smoking. In addition, EPA considered animal data, biological measurements of human uptake of tobacco smoke components and other available data. The conclusions were based on what is commonly known as the “total weight-of-evidence” rather than on any one study or type of study.

The finding that ETS should be classified as a Group A carcinogen is based on the conclusive evidence of the dose-related lung carcinogenicity of mainstream smoke in active smokers and the similarities of mainstream and sidestream smoke given off by the burning end of the cigarette. The finding is bolstered by the statistically significant exposure-related increase in lung cancer in nonsmoking spouses of smokers which is found in an analysis of more than 30 epidemiology studies that examined the association between secondhand smoke and lung cancer.”

EPA “Fact Sheet: Respiratory Health Effects of Passive Smoking,”  Office of Research and Development; EPA Document Number 43-F-93-003, January 1993 (emphasis added).

A prominent feature of the EPA’s analysis was a meta-analysis of epidemiologic studies of ETS and lung cancer.  Interestingly, the tobacco industry plaintiffs did not appear to challenge the legitimacy of the basic meta-analytic enterprise, which was still controversial at the time.  See, e.g., Samuel Shapiro, “Meta-analysis/Shmeta-analysis,” 140 Am. J. Epidem. 771 (1994); Alvan Feinstein, “Meta-Analysis: Statistical Alchemy for the 21st Century,” 48 J. Clin. Epidem. 71 (1995).  Their challenge went straight to the validity of the EPA’s meta-analysis, and to a documented post hoc change in the agency’s statistical plan for analyzing the meta-analysis results.  Only a few years earlier, the defense in polychlorinated biphenyl (PCB) litigation had broadly challenged a plaintiffs’ expert witness’s use of meta-analysis of observational epidemiologic studies, only to have the Third Circuit reject the challenge and direct the district court to review the validity of the meta-analysis as conducted by the witness.  In re Paoli RR Yard PCB Litig., 706 F. Supp. 358, 373 (E.D. Pa. 1988), rev’d, 916 F.2d 829, 856-57 (3d Cir. 1990), cert. denied, 499 U.S. 961 (1991); see also Hines v. Consol. Rail Corp., 926 F.2d 262, 273 (3d Cir. 1991).

The EPA report was not the first attempt to use meta-analysis for the epidemiology of ETS and lung cancer.  In 1986, the National Academy of Sciences reported a meta-analysis on the subject.  See National Research Council, National Academy of Sciences,  Environmental tobacco smoke: measuring exposures and assessing health effects (Wash. DC 1986).  This earlier meta-analysis was also controversial.  Indeed, some of the early concerns over the use of meta-analysis for observational epidemiologic studies arose in the context of studies of ETS.  See, e.g., Joseph L. Fleiss & Alan J. Gross, “Meta-Analysis in Epidemiology, with Special Reference to Studies of the Association between Exposure to Environmental Tobacco Smoke and Lung Cancer:  A Critique,” 44 J. Clin. Epidem. 127 (1991) (criticizing the National Research Council 1986 meta-analysis of ETS and lung cancer studies as unwarranted based upon the low quality of the studies included).  These concerns were heightened by politicized use of meta-analyses in regulatory agencies to overclaim scientific conclusions from weak, inconclusive data.

In the EPA’s meta-analysis, statistical significance was achieved only by changing the criterion of significance, post hoc, from a two-tailed to a one-tailed 5% test.  Perhaps more disturbing was the scientific gerrymandering that took place as to which studies to include and exclude from the meta-analysis.

In its first review of the EPA’s draft report, a committee of the agency’s Scientific Advisory Board, the IAQC [the Indoor Air Quality/Total Human Exposure Committee], found that the EPA’s ETS risk assessment violated one of the necessary criteria for a valid meta-analysis – a “precise definition of criteria used to include (or exclude) studies.”  4 F. Supp. 2d at 459 (citing EPA, An SAB Report: Review of Draft Environmental Tobacco Smoke Health Effects Document, EPA/SAB/IAQC/91/007 at 32–33 (1991) (SAB 1991 Review) (JA 9,497–98)).  The agency had not provided specific criteria for including studies. The IAQC also noted that it was important to evaluate the consequences of having excluded studies, in the form of sensitivity analyses, as sketched below. In a later review, in 1992, both the EPA and the IAQC dropped this critique of the agency’s meta-analysis, without explanation.  Id. at 459.
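What such a sensitivity analysis looks like is easy to illustrate. The sketch below pools made-up study results by fixed-effect, inverse-variance weighting, and then recomputes the summary estimate leaving each study out in turn; the study values are invented for illustration and have nothing to do with the actual ETS literature:

```python
# A minimal sketch of a leave-one-out sensitivity analysis for a
# fixed-effect, inverse-variance meta-analysis.
# (Made-up log relative risks and standard errors, for illustration only.)
import math

studies = [  # (log RR, standard error)
    (0.26, 0.15), (0.10, 0.20), (-0.05, 0.25), (0.30, 0.18), (0.08, 0.12),
]

def pooled_rr(data):
    """Fixed-effect inverse-variance summary of (log RR, SE) pairs."""
    weights = [1 / se ** 2 for _, se in data]
    log_rr = sum(w * lr for (lr, _), w in zip(data, weights)) / sum(weights)
    return math.exp(log_rr)

print(f"all studies pooled: RR = {pooled_rr(studies):.2f}")
for i in range(len(studies)):
    subset = studies[:i] + studies[i + 1:]
    print(f"without study {i + 1}: RR = {pooled_rr(subset):.2f}")
```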

By the time the EPA released its ETS report in 1993, there were about 58 published epidemiologic studies available for inclusion in any meta-analysis.  The EPA included only 31.  The agency limited its analysis to nonsmoking women married to smoking spouses.  There were 33 studies of this exposed group; the EPA included 31 of the 33.  Also available were 12 studies of women exposed to ETS in their workplaces, and 13 studies of women who had been exposed to ETS as children.  Id. at 458. There were three late-breaking studies of women with spousal exposures, but the EPA excluded two of them, without explanation.  Id. at 459.

In reviewing the plaintiffs’ challenge, the district court noted that the EPA had given a bare, unconvincing explanation for excluding the childhood and workplace studies.  Id.  The EPA argued that there was less data in the childhood and workplace studies, but this assertion struck the court as an evasive rationale, when one of the purposes of conducting a meta-analysis is to incorporate the data from smaller, less powerful studies.  Id. at 458-59.  The primary author of the disputed chapter of the EPA report, Kenneth Brown, called the disputed studies “inadequate,” without providing a rational basis or explanation.  The IAQC, in its earlier review of a 1991 draft report, recognized that the excluded studies provided less information, but concluded that “the report should review and comment on the data that do exist… .” Id. at 459.

The court found the EPA’s selection of studies for inclusion in a meta-analysis to be “disturbing”:

“First, there is evidence in the record supporting the accusation that EPA ‘cherry picked’ its data. Without criteria for pooling studies into a meta-analysis, the court cannot determine whether the exclusion of studies likely to disprove EPA’s a priori hypothesis was coincidence or intentional. Second, EPA’s excluding nearly half of the available studies directly conflicts with EPA’s purported purpose for analyzing the epidemiological studies and conflicts with EPA’s Risk Assessment Guidelines. See ETS Risk Assessment at 4–29 (“These data should also be examined in the interest of weighing all the available evidence, as recommended by EPA’s carcinogen risk assessment guidelines (U.S.EPA, 1986a) ….” (emphasis added)). Third, EPA’s selective use of data conflicts with the Radon Research Act. The Act states EPA’s program shall ‘‘gather data and information on all aspects of indoor air quality….’’ Radon Research Act § 403(a)(1) (emphasis added). In conducting a risk assessment under the Act, EPA deliberately refused to assess information on all aspects of indoor air quality.”

4 F. Supp. 2d at 460.

The court was no doubt impressed by the duplicity of the agency’s claim to have used a “total weight of the evidence” approach to the question of causality, while censoring the analysis in a way that appeared to game the result.  Id. at 454.  The EPA’s own guidelines called for basing conclusions on all available evidence.  EPA’s Guidelines for Carcinogen Risk Assessment, 51 Fed. Reg. 33,996, 33,999-34,000 (1986).

Using evidence selectively, with a post hoc adoption of a one-tailed test of statistical significance, the EPA reported a summary risk estimate of 1.19, and categorized ETS as a “Group A” carcinogen. In most of its previous Group A classifications, the agency had based its decisions upon much higher relative risks.  Indeed, the agency had rejected Group A classifications when relative risks were found to be less than three.  4 F. Supp. 2d at 461.  The sum total of the agency’s methodological laxity was too much for the district court, which struck the offending chapters of the EPA report.  Four years later, the United States Court of Appeals for the Fourth Circuit reversed, on grounds that the EPA report was not reviewable agency action.

The EPA report became a lightning rod for methodological criticism, both of meta-analysis for observational studies generally and of the EPA’s use of meta-analysis in particular.  Critics argued that the EPA had succumbed to political pressure from the anti-tobacco lobby.  See, e.g., Gio B. Gori & John C. Luik, Passive Smoke: The EPA’s Betrayal of Science and Policy (Vancouver, BC: The Fraser Institute 1999); John C. Luik, “Pandora’s Box: The Dangers of Politically Corrupted Science for Democratic Public Policy,” Bostonia 54 (Winter 1993-94).  See also Elizabeth Fisher, “Case law analysis. Passive smoking and active courts: the nature and role of risk regulators in the US and UK.  Flue-cured Tobacco Co-op v US Environmental Protection Agency,” 12 J. Envt’l Law 79 (2000).

The federal government has been trying to defend the EPA’s 1992 report ever since.  In 1998, upon listing ETS as a known carcinogen, the Department of Health and Human Services noted that “[t]he individual studies were carefully summarized and evaluated” in the 1992 EPA report.  U.S. Dep’t of Health & Human Services, National Toxicology Program, Final Report on Carcinogens – Background Document for Environmental Tobacco Smoke: Meeting of the NTP Board of Scientific Counselors – Report on Carcinogens Subcommittee at 24 (Research Triangle Park, NC 1998).  Anti-tobacco scientists, including scientists involved in the EPA report, have attacked the motives of the industry, and of the scientists who have challenged the report.  See, e.g., Jonathan M. Samet & Thomas A. Burke, “Turning Science Into Junk: The Tobacco Industry and Passive Smoking,” 91 Am. J. Pub. Health 1742 (2001); Monique E. Muggli, Richard D. Hurt & James Repace, “The Tobacco Industry’s Political Efforts to Derail the EPA Report on ETS,” 26 Am. J. Prev. Med. 167 (2004); Deborah E. Barnes & Lisa A. Bero, “Why review articles on the health effects of passive smoking reach different conclusions,” 279 J. Am. Med. Ass’n 1566 (1998).

Of course, science did not remain frozen at the status quo of 1992.  Later studies were published, and the controversy continued, such that the 1992 meta-analysis is now largely scientifically irrelevant.  See James Enstrom & Geoffrey Kabat, “Environmental tobacco smoke and tobacco related mortality in a prospective study of Californians, 1960-98,” 326 Br. Med. J. 1057 (2003); G. Davey Smith, “Effect of passive smoking on health: More information is available, but the controversy still persists,” 326 Br. Med. J. 1048-49 (2003).

A troubling implication of the attacks on the tobacco industry is that the industry should not have been permitted to raise methodological challenges to the EPA’s purported use of a scientific method.  The EPA’s defenders rarely engage with the specifics of the methodological challenge, or with the district court’s review.  Another implication is that the EPA’s meta-analysis remains a clear example of a regulatory agency that could have acted upon a precautionary principle, but chose instead to dress up its analysis as something it was not:  a scientific conclusion of causality.  Given that the agency was not even engaged in reviewable agency action, and that it had ample biological plausibility for a precautionary finding that ETS causes lung cancer, the agency could easily have avoided the vitriolic debate engendered by its 1992 report.