TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Egilman Instigates Kerfuffle at McGill University

January 15th, 2013

Last February, the Canadian Broadcasting Corporation unleashed a one-sided, twenty-minute investigative journalistic film on the Quebec asbestos industry.  All allegations from the plaintiffs’ litigation world were accepted as true, and the asbestos mining industry was cast as a manufacturer of doubt and deception.  See Fatal Deception (Feb. 2, 2012).

The narrator raises the suggestion that the Canadian federal government is relying upon “junk science” to justify support for continuing exports of chrysotile and for reopening the Quebec mines.  This CBC production features Dr. David Egilman, holding forth on his views on the relationship between McGill University, Professor Corbett McDonald, and the Quebec asbestos industry. Mostly, Egilman is permitted to define the issues and provide the “answers,” although the CBC film does give some air time to Professor Bruce Case, who points out that Egilman is not a scientist, but rather a social critic. When Professor Case was asked on air how he would respond to Egilman’s allegations, Case replied, “I wouldn’t give Dr. Egilman the time of day…because he’s not an honorable person.”

For over a decade, Egilman has been pressing his allegations that asbestos research conducted by McGill University investigators was tainted.  In September 2012, McGill University’s Research Integrity Officer, Abraham Fuks, reported that the Egilman allegations were baseless and unsupported.  Consultation Report to Dean David Eidelman (Sept. 23, 2012). See Egilman’s Allegations Against McDonald and His Epidemiologic Research Are Baseless (Oct. 20, 2012).  Egilman responded to Professor Fuks’ report by labeling it “a shameful cover-up.”  Eric Andrew-Gee, “Asbestos debate rages on at the Faculty Club:  American researcher attacks McGill’s asbestos investigation,” The McGill Daily (Jan. 10, 2013).

The Egilman show apparently kicked off the new year at the McGill University Faculty Club, earlier this month, with a shouting match.  According to the University’s newspaper, Egilman called McDonald’s research on the Quebec chrysotile miners and millers “garbage,” and he called upon McGill University to retract the paper.

Egilman’s argument took the high road and the low road.  He understandably objected to McGill’s and McDonald’s refusal to share mineralogical data about the tremolite content of asbestos from the Thetford Mines.  Of course, the sad state of epidemiology today is that there is no mechanism for requiring data sharing, and the authors of pro-plaintiff studies have consistently refused to share data, and have fought subpoenas tooth and nail.

But then there was the low road. According to the McGill Daily, Egilman lapsed into name calling.  During his presentation, Egilman referred to McGill’s Professor Fuks as Inspector Fox and included a cartoon in his slideshow of a henhouse guarded by a grinning fox.  “Fuks, by the way, is German for Fox,” Egilman said.

One of the McGill professors chided Egilman for his ad hominem attack on Professor Fuks, and pointed out that Egilman could have made his points without personal attacks. Egilman responded “I could have, but it’s funny.” Id.

Egilman called upon his audience to evaluate his claims against those of Professors Case and Fuks. “One of us is an asshole,” he announced. Id. Indeed. Just perform the iterative disjunctive syllogism; it’s a matter of elimination.  For a more scholarly analysis of assholes, see Aaron James, Assholes:  A Theory (2012).

Tunnel Vision on Conflicts of Interest

January 13th, 2013

Judge Alsup’s order requiring disclosure of money paid to bloggers and journalists is only a recent manifestation of a misguided attempt to control conflicts of interest among non-parties.  See Can a Court Engage in Abusive Discovery? (Jan. 11, 2013).  Judge Alsup’s curious orders can be traced to encouragement in the Federal Judicial Center’s “pocket guide” to managing an MDL for products liability cases.  Barbara Rothstein & Catherine Borden, Managing Multidistrict Litigation in Products Liability Cases: A Pocket Guide for Transferee Judges (2011).  This FJC publication suggested that an MDL court should unleash discovery against authors of published works for evidence of bias, citing an MDL trial court that ordered parties to produce lists of payments to authors of articles relied upon by expert witnesses. Id. at 35 n.48 (citing In re Welding Fume Prods. Liab. Litig., 534 F. Supp. 2d 761 (N.D. Ohio 2008)).

The United States Supreme Court has also encouraged hostility to party-funded research and writing.  In Exxon Shipping Co. v. Baker, 554 U.S. 471, 501 (2008), the Court struck down a large punitive damage award.  Justice Souter, writing for a divided court, noted in footnote 17:

“The Court is aware of a body of literature running parallel to anecdotal reports, examining the predictability of punitive awards by conducting numerous ‘mock juries’, where different ‘jurors’ are confronted with the same hypothetical case. See, e.g., C. Sunstein, R. Hastie, J. Payne, D. Schkade, W. Viscusi, Punitive Damages: How Juries Decide (2002); Schkade, Sunstein, & Kahneman, Deliberating About Dollars: The Severity Shift, 100 Colum. L.Rev. 1139 (2000); Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445 (1999); Sunstein, Kahneman, & Schkade, Assessing Punitive Damages (with Notes on Cognition and Valuation in Law), 107 Yale L.J. 2071 (1998). Because this research was funded in part by Exxon, we decline to rely on it.”

Id. at n.17; see also Conflicts of Interest, Footnote 17, and Scientific McCarthyism.  The glib dismissal of behavioral research on a relevant topic by the Supreme Court was remarkable.  Professor Sunstein is a professor at the University of Chicago, and formerly served as Administrator of the White House Office of Information and Regulatory Affairs in President Obama’s administration.  Professor Kahneman, a Nobel Laureate, is Professor Emeritus at Princeton University. Professor W. Kip Viscusi has been one of the most prolific writers about, and investigators of, punitive damages.  Justice Souter’s footnote might be interpreted to impugn the integrity of their research by virtue of its corporate sponsorship. More important, Justice Souter’s opinion fails to explain why the Court would not look beyond funding to the merits of the funded research. Courts consider the arguments of the parties’ counsel, although of course the parties compensate their counsel for marshalling facts, and for formulating and presenting arguments. Perhaps Justice Souter would have been justified in announcing that he and his judicial colleagues had looked at Sunstein’s research more closely than other cited research.  The wholesale dismissal of relevant evidence based upon funding is irrational.

A new article posted on the Social Science Research Network explores the misdirection and distortion created by the single-minded focus on financial conflicts of interest.  Richard S. Saver,  “Is It Really All About The Money? Reconsidering Non-Financial Interests In Medical Research,” Journal of Law, Medicine & Ethics (forthcoming 2013).

Richard Saver is a professor of law at the University of North Carolina School of Law, and also holds appointments in UNC’s School of Medicine.  Saver describes how conflicts of interest (COIs) have largely, and incorrectly, been reduced to financial conflicts.  For instance, in 2011, the National Institutes of Health (NIH) addressed only financial issues when it promulgated rules for managing conflicts of interest in the field of medical research.  Department of Health and Human Services, “Responsibility of Applicants for Promoting Objectivity in Research for Which Public Health Service Funding is Sought and Responsible Prospective Contractors,” 76 Fed. Reg. 53256 (Aug. 25, 2011).  Several commentators advocated regulation of non-financial COIs, but the agency refused to include such COIs within its rules. Id. at 53258.  The Institute of Medicine (IOM), in its monograph on COI in medicine, similarly gave almost exclusive priority to financial ties.  Institute of Medicine, Conflict of Interest in Medical Research, Education, and Practice (Washington, D.C.: The National Academies Press, 2009).

Saver argues that the focus on economic COIs is dangerous because it instills complacency about non-financial interests, and provides a false sense of assurance that the most serious biases are disclosed or eliminated.  Saver’s review of retractions, frauds, and ethical lapses in biomedical research suggests that non-financial interests, such as friendships and alliances, institutional hierarchies, intellectual biases and commitments, beneficence, “white-hat” advocacy, and the drive for professional achievement, recognition, and rewards, all have the potential to complicate, distort, and sometimes undermine scientific research in myriad ways. The failure to recognize serious non-economic COIs and biases, and the reluctance to treat them differently from financial COIs, endanger the validity of science.  Not only are these non-financial threats ignored, but financial interests receive undue attention, resulting in the erosion of public trust in scientific research that is sound.

Professor Saver’s caveats about COI moralism apply beyond biomedical research.  The Exxon Shipping case, the MDL Pocket Guide, and Judge Alsup’s opinion on disclosure of payments to journalists and bloggers signal that courts are well on their way towards selectively and arbitrarily screening out evidence and arguments based upon sponsorship.  What is needed is a whole-hearted commitment to consider and analyze all the available data. Time to shed the blinders.

Can a Court Engage in Abusive Discovery?

January 11th, 2013

ABA’s Litigation News reported that the federal district judge in a lawsuit between Oracle and Google sua sponte ordered the parties to identify all journalists, bloggers, or other writers paid to comment on any of the issues in the case. Jannis E. Goodnow, “Surprise Order Forces Parties to Identify Bloggers on Payroll,” Litigation News (Jan. 3, 2013).

Judge William Alsup cited his “concern” that the parties or their counsel “may have retained or paid print or Internet authors, journalists, commentators or bloggers who have and/or may publish comments on the issues.” Oracle America, Inc. v. Google Inc., No. C 10-03561 WHA, Order re Disclosure of Financial Relationships with Commentators on Issues in This Case (N.D. Calif. Aug. 7, 2012). And in a follow-up order, Judge Alsup expanded the scope of the order to include disclosure of those authors even if their payment was not specifically for commenting on the case before him. Order to Supplement (Aug. 20, 2012).

Judge Alsup offered a strange rationale for his disclosure order:

“Just as a treatise on the law may influence the courts, public commentary that purports to be independent may have an influence on the courts and/or their staff if only in subtle ways. If a treatise author or blogger is paid by a litigant, should not that relationship be known?”

Well, not really.  A treatise author’s opinion is only as good as the facts and inferences employed in reaching its conclusion.  The judge’s job is, of course, to assess the reasoning for himself.

Judge Alsup’s rhetorical question reveals a fundamental misunderstanding of the judicial function.  The arguments in a case come from counsel who are being paid.  The financial biases are complex.  Generally, the court might expect each side to benefit in some way financially from obtaining a ruling in line with the arguments it has advanced.  Sometimes, defense counsel may benefit from perpetuating a litigation, but their reputational interests alone will certainly motivate them to prevail.  Plaintiffs’ counsel hope to make money in pursuing a case.  Typically, legal counsel can be trusted to address the issues comprehensively.  Money will be involved.

Of course, journalists, bloggers, law professors, or pundits may put forward an argument not advanced by the parties.  A trial or an appellate judge might see such an argument, but why would funding make a difference to the merits of the argument?  Judges are, after all, supposed to be good at evaluating arguments without favor or prejudice.  And if a judge were to encounter an argument, presented by a non-party but not by the parties, which argument was persuasive, the judge would have the option of requesting the parties’ briefing on the matter.

Disclosure of party funding of writers might legitimately be needed when poll or survey evidence has been influenced by the funded writers.  Such evidence apparently was not involved in the Oracle case; indeed, Judge Alsup’s request was based upon a purely fictitious concern.  After the parties made their disclosures, Judge Alsup announced that he would not take any action because none of the journalistic commentary or blogging had influenced any of his decisions.

So this intrusive exercise in court-ordered disclosure really served no purpose at all.  If a party were to propound discovery requests that did not advance the litigation, we would not hesitate to call the discovery abusive.

Unfortunately, no one is paying me to blog on the issues I find interesting.

The Dubious Origins of the Linear No Threshold Model of Carcinogenesis

January 10th, 2013

Most regulation of chemical exposures for causing cancer outcomes is based upon a linear no-threshold exposure-response (LNT) model.  This model informs risk assessments and policy judgments in the United States, Europe, and throughout the world.  If the LNT model were offered to support only precautionary principle judgments, then we might excuse the pretensions of LNT advocates.  The authors of LNT models, environmentalists, and a cadre of anti-industry scientists (“The Lobby”) are not willing, however, to be understood to be offering mere prudential guidance.  They claim that they are offering scientific conclusions, worthy of being taken seriously, as knowledge.

Passing off prudence, aesthetics, and hostility to industrialization as science is a deception that takes place virtually every day in the front pages of the lay media, in scientific journals, as well as in the Federal Register.  Implicit in the stridency of the Lobby is a desire to impose an often impossible burden upon industry to prove that there is no increased risk at miniscule exposures, and to challenge negative results with claims of inadequate power.

Professor Calabrese, in the current issue of the University of Chicago Law Review, has traced a source of the deception behind LNT to a series of events tied to the United States nuclear weapons program during World War II.  Edward J. Calabrese, “US Risk Assessment Policy: A History of Deception — A Response to Arden Rowell, Allocating Pollution,” 79 U. Chi. L. Rev. 985 (2012).   In 1927, Hermann J. Muller, a radiation geneticist, discovered that X-ray radiation caused mutation of fruit fly germ cells. The importance of health outcomes from radiation exposure led to Muller’s service as an advisor to the Manhattan Project during World War II.  In 1946, Muller received the Nobel Prize in medicine, for his work in genetics.

Professor Calabrese points out that Muller so thoroughly dismissed threshold models of radiation mutagenicity in his Nobel address that the LNT became accepted dogma in the regulatory and scientific agencies of the United States government, and later, around the world.  Edward J. Calabrese, “Muller’s Nobel Prize Lecture: when ideology prevailed over science,” 126 Toxicol. Sci. 1 (2012).  What Professor Calabrese adds to the historical narrative is that Muller was aware of high-quality research that undermined the LNT when he announced in his Nobel lecture that there was no evidence for thresholds.  In the 1950s, Muller continued to use his influence to advance the LNT model of radiation-induced carcinogenesis with the National Academy of Sciences’ Biological Effects of Atomic Radiation (“BEAR I”) Committee.

Professor Calabrese has provided an important cautionary tale about how scientific beliefs take hold and are propagated.  Muller’s influence and the rise of the LNT model make a recurring tale of the role of power, persuasion, and prestige in claiming scientific truths. It is not just about money.  Conflicts of interest involving professional honors, institutional recognition, friendships, intellectual commitments, and investigative “zeal” are often much greater threats to the integrity of science and medical research than the receipt of payments.  See Richard S. Saver, “Is It Really All about the Money? Reconsidering Non-Financial Interests in Medical Research,” 40 J. Law, Med. & Ethics (2012).

If Professor Calabrese is right in his historical analysis, Muller achieved his goal of protecting his discovery from challenge by a selective presentation of the relevant data at a crucial moment in the formation of scientific thinking about mutagenesis. The rise and fall of the LNT model carries with it an important lesson for judicial gatekeepers.  Unlike the audience of Muller’s Nobel speech, lawyers and judges can, and must, insist upon a declaration of all materials considered, so that they can determine whether an expert witness has proffered an opinion based upon a thorough, complete review of the relevant data.

New Superhero?

December 31st, 2012

The Verdict. A Civil Action. Class Action. My Cousin Vinny.

Wonder Woman, Superman, Batman, Ironman.

America loves movies, and superheroes.

So 2013 should be an exciting year with a new superhero movie coming to a theater, or a courthouse, near you: Egilman.

Actor-producer-director Patrick Coppola has announced that he is developing a film, which has yet to be given a catchy name.  Coppola calls the film in development the DOCTOR DAVID EGILMAN PROJECT.  According to Coppola, he was hired

“by world famous MD – Doctor David Egilman to create and write a Screenplay based on Doctor Egilman’s life and the many cases he has served on as an expert witness in various chemical poisoning trials. Doctor Egilman is a champion of the underdog and has several worldwide charities and medical clinics he funds and donates his time to.”

Patrick Coppola describes his screenplay for the “Doctor David Egilman Project” as a story of conspiracy among corporate suppliers of beryllium materials, the government, and the thought leaders in occupational medicine to suppress information about harm to workers. In this narrative, which is a familiar refrain for plaintiffs’ counsel in toxic tort litigation, profits always take precedence over safety, and unions mysteriously are silently complicit in the carnage.

Can’t wait!

 

Reanalysis of Epidemiologic Studies – Not Intrinsically WOEful

December 27th, 2012

A recent student law review article discusses reanalyses of epidemiologic studies, an important and overlooked topic in the jurisprudence of scientific evidence.  Alexander J. Bandza, “Epidemiological-Study Reanalyses and Daubert: A Modest Proposal to Level the Playing Field in Toxic Tort Litigation,” 39 Ecology L.Q. 247 (2012).

In the Daubert case itself, the Ninth Circuit, speaking through Judge Kozinski, avoided the methodological issues raised by Shanna Swan’s reanalysis of Bendectin epidemiologic studies by assuming arguendo its validity, and holding that the small relative risk yielded by the reanalysis would not support a jury verdict of specific causation. Daubert v. Merrell Dow Pharm., Inc., 43 F.3d 1311, 1317–18 (9th Cir. 1995).

There is much that can, and should, be said about reanalyses in litigation and in the scientific process, but Bandza never really gets down to the business at hand. His 36-page article curiously does not begin to address reanalysis until the bottom of the 20th page. The first half of the article, and then some, reviews some time-worn insights and factoids about scientific evidence. Finally, at page 266, the author introduces and defines reanalysis:

“Reanalysis occurs ‘when a person other than the original investigator obtains an epidemiologic data set and conducts analyses to evaluate the quality, reliability or validity of the dataset, methods, results or conclusions reported by the original investigator’.”

Bandza at 266 (quoting Raymond Neutra et al., “Toward Guidelines for the Ethical Reanalysis and Reinterpretation of Another’s Research,” 17 Epidemiology 335, 335 (2006)).

Bandza correctly identifies some of the bases for judicial hostility to re-analyses. For instance, some courts are troubled or confused when expert witnesses disagree with, or reevaluate, the conclusions of a published article. The witnesses’ conclusions may not be published or peer reviewed, and thus the proffered testimony fails one of the Daubert factors.  Bandza correctly notes that peer review is greatly overrated by judges. Bandza at 270. I would add that peer review is an inappropriate proxy for validity, a “test” that reflects a distrust of the unpublished.  Unfortunately, this judicial factor ignores the poor quality of much of what is published, and the extreme variability in the peer review process. Judges overrate peer review because they are desperate for a proxy for the validity of the studies relied upon, which will allow them to pass their gatekeeping responsibility on to the jury. Furthermore, the authors’ own conclusions are hearsay, and their qualifications are often not fully before the court.  What is important is the opinion of the expert witness, who can be cross-examined and challenged.  See “Follow the Data, Not the Discussion.”  What counts is the validity of the expert witness’s reasoning and inferences.

Bandza’s article, which by title advertises itself to be about re-analyses, gives only a few examples of re-analyses, without much detail.  He notes concerns that reanalyses may impugn the reputations of published scientists, and burden them with defending their data.  Who would have it any other way? After this short discussion, the article careens into a discussion of “weight of the evidence” (WOE) methodology. Bandza tells us that the rejection of re-analyses in judicial proceedings “implicitly rules out using the weight-of-the-evidence methodology often appropriate for, or even necessary to, scientific analysis of potentially toxic substances.” Bandza at 270.  This argument, however, is one sustained non sequitur.  WOE is defined in several ways, but none of the definitions requires or suggests the incorporation of re-analyses. Re-analyses raise reliability and validity issues regardless of whether an expert witness incorporates them into a WOE assessment. Yet Bandza tells us that the rejection of re-analyses “Implicitly Ignores the Weight-of-the-Evidence Methodology Appropriate for the Scientific Analysis of Potentially Toxic Substances.” Bandza at 274. This conclusion simply does not follow from the nature of WOE methodology or of reanalyses.

Bandza’s ipse dixit raises the independent issue whether WOE methodology is appropriate for scientific analysis. WOE is described as embraced or used by regulatory agencies, but that description hardly recommends the methodology as the basis for a scientific, as opposed to a regulatory, conclusion.  Furthermore, Bandza ignores the ambiguity and variability of WOE by referring to it as a methodology, when in reality, WOE is used to describe a wide variety of methods of reasoning to a conclusion. Bandza cites Douglas Weed’s article on WOE, but fails to come to grips with the serious objections raised by Weed in his article to the use of WOE methodologies.  Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546–52 (2005) (describing the vagueness and imprecision of WOE methodologies). See also “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name.”

Bandza concludes his article with a hymn to the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011). Plaintiffs’ expert witness, Dr. Martyn Smith, claimed to have performed a WOE analysis, which in turn was based upon a re-analysis of several epidemiologic studies. True, true, and immaterial.  The re-analyses were not inherently a part of a WOE approach. Presumably, Smith re-analyzed some of the epidemiologic studies because he felt that the data as presented did not support his desired conclusion.  Given the motivations at work, the district court in Milward was correct to look skeptically and critically at the re-analyses.

Bandza notes that there are procedural and evidentiary safeguards in federal court against unreliable or invalid re-analyses of epidemiologic studies.  Bandza at 277. Yes, there are safeguards but they help only when they are actually used. The First Circuit in Milward reversed the district court for looking too closely at the re-analyses, spouting the chestnut that the objections went to the weight not the admissibility of the evidence.  Bandza embraces the rhetoric of the Circuit, but he offers no description or analysis of the liberties that Martyn Smith took with the data, or the reasonableness of Smith’s reliance upon the re-analyzed data.

There is no necessary connection between WOE methodologies and re-analyses of epidemiologic studies.  Re-analyses can be done properly to support or deconstruct the conclusions of published papers.  As Bandza points out, some re-analyses may go on to be peer reviewed and published themselves.  Validity is the key, and WOE methodologies have little to do with the process of evaluating the original or the re-analyzed study.

 

 

Litmus Tests

December 27th, 2012

Rule 702 is, or is not, a litmus test for expert witness opinion admissibility.  Relative risk is, or is not, a litmus test for specific causation.  Statistical significance is, or is not, a litmus test for reasonable reliance upon the results of a study.  It is relatively easy to find judicial opinions on either side of the litmus divide.  Compare National Judicial College, Resource Guide for Managing Complex Litigation at 57 (2010) (Daubert is not a litmus test) with Cryer v. Werner Enterprises, Inc., Civ. Action No. 05-S-696-NE, Mem. Op. & Order at 16 n. 63 (N.D. Ala. Dec. 28, 2007) (describing the Eleventh Circuit’s restatement of Rule 702’s “litmus test” for the methodological reliability of proffered expert witness opinion testimony).

The “litmus test” is one sorry, overworked metaphor.  Perhaps its appeal has to do with a vague collective memory that litmus paper is one of those “things of science,” which we used in high school chemistry, and never had occasion to use again. Perhaps litmus tests have the appeal of “proofiness.”

The reality is different. The litmus test is a semi-quantitative test for acidity or alkalinity.  Neutral litmus is purple.  Under acidic conditions, litmus turns red; under basic conditions, it turns blue.  For some time, scientists have used pH meters when they want a precise quantification of acidity or alkalinity.  Litmus paper is a fairly crude test, which easily discriminates moderate acidity from alkalinity (say pH 4 from pH 11), but is relatively useless for detecting acidity at pH 6.95, or alkalinity at pH 7.05.
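The contrast between the crude test and the precise instrument can be put in a few lines of code. This is a toy sketch, purely for the metaphor: the litmus transition range used here (roughly pH 4.5 to 8.3) is approximate, and the function names are invented for illustration.

```python
def litmus_color(ph: float) -> str:
    """Crude, semi-quantitative litmus reading (transition range approximate)."""
    if ph < 4.5:
        return "red"      # clearly acidic
    if ph > 8.3:
        return "blue"     # clearly basic
    return "purple"       # indeterminate near neutrality

def ph_meter(ph: float) -> float:
    """A pH meter reports the number itself, to two decimals."""
    return round(ph, 2)

# Litmus cannot tell 6.95 from 7.05; a meter can.
print(litmus_color(6.95), litmus_color(7.05))  # purple purple
print(ph_meter(6.95), ph_meter(7.05))          # 6.95 7.05
```

The binary-or-nothing reading is precisely why the metaphor suits polarized politics better than quantitative science.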

So what exactly are legal authors trying to say when they assert that some feature of a test is, or is not, a “litmus test”? The litmus test is accurate, but not precise at the important boundary at neutrality.  The litmus test color can be interpreted for degree of acidity or alkalinity, but it is not the preferred method for obtaining a precise measurement. Saying that a judicial candidate’s views on abortion are a litmus test for the Senate’s evaluation of the candidate makes sense, given the relatively binary nature of the outcome of a litmus test, and the polarization of political views on abortion. Apparently, neutral views, or views close to neutrality, on abortion are not a desideratum for judicial candidates.  A crude, binary test is exactly what is desired by politicians.

The litmus test that is used for judicial candidates does not work so well when used to describe scientific or statistical inference.  The litmus test is well understood, but fairly obsolete in modern laboratory practice.  When courts say things such as “statistical significance is not a litmus test for the acceptability of a study’s results,” clearly they are correct, because a measure of random error is only one aspect of judging a body of evidence for, or against, an association.  Yet courts seem to imply something else, at least at times:

statistical significance is not an important showing in making a case that an exposure is reliably associated with a particular outcome.

Here courts are trading in half-truths.  Statistical significance is quantitative, and the choice of a level of significance is not based upon immutable law. So, like the slight difference between a pH of 6.95 and 7.05, statistical significance tests have a boundary issue.  Nonetheless, a consideration of random error cannot be dismissed or overlooked on the theory that significance level is not a “litmus test.”  This metaphor obscures and excuses sloppy thinking.  It is time to move beyond it.
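The boundary issue is easy to make concrete. A minimal sketch, using only the Python standard library; the two test statistics are invented to straddle the conventional two-sided cutoff of z = 1.96 (p = 0.05):

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

# Two hypothetical studies whose test statistics sit just either side of
# the conventional cutoff; their evidential weight is nearly identical.
print(round(two_sided_p(1.95), 4))  # a hair above 0.05: "not significant"
print(round(two_sided_p(1.97), 4))  # a hair below 0.05: "significant"
```

The two p-values differ by a fraction of a percent, yet one result is conventionally labeled significant and the other not, which is the pH 6.95 versus 7.05 problem in statistical dress.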

Lumpenepidemiology

December 24th, 2012

Judge Helen Berrigan, who presides over the Paxil birth defects MDL in New Orleans, has issued a nicely reasoned Rule 702 opinion, upholding defense objections to plaintiffs’ expert witnesses, Paul Goldstein, Ph.D., and Shira Kramer, Ph.D.  Frischhertz v. SmithKline Beecham (E.D. La. 2012).

The plaintiff, Andrea Frischhertz, took GSK’s Paxil, a selective serotonin reuptake inhibitor (SSRI), for depression while pregnant with her daughter, E.F. The parties agreed that E.F. was born with a deformity of her right hand.  Plaintiffs originally claimed that E.F. had a heart defect, but their expert witnesses appeared to give up this claim at deposition, as lacking evidential support.

Adhering to Daubert’s Epistemologic Lesson

Like many other lower federal courts, Judge Berrigan focused her analysis on the language of Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993), a case that has been superseded by subsequent cases and a revision to the operative statute, Rule 702.  Fortunately, the trial court did not lose sight of the key epistemological teaching of Daubert, which is based upon Rule 702:

“Regarding reliability, the [Daubert] Court said: ‘the subject of an expert’s testimony must be “scientific . . . knowledge.” The adjective “scientific” implies a grounding in the methods and procedures of science. Similarly, the word “knowledge” connotes more than subjective belief or unsupported speculation’.”

Slip Op. at 3 (quoting Daubert, 509 U.S. at 589-590).

There was not much to the plaintiffs’ expert witnesses’ opinions beyond speculation, but many other courts have been beguiled by speculation dressed up as “scientific … knowledge.”  Dr. Goldstein relied upon whole-embryo culture testing of SSRIs, but in the face of overwhelming evidence, Dr. Goldstein was forced to concede that this test may generate hypotheses about, but cannot predict, human risk of birth defects.  No doubt this concession made the trial court’s decision easier, but the result would have been required regardless of Dr. Goldstein’s exhibition of truthfulness at deposition.

Statistical Association – A Good Place to Begin

More interestingly, the trial court rejected the plaintiffs’ expert witnesses’ efforts to leapfrog over the finding of a statistically significant association, straight to a parsing of the so-called Bradford Hill factors:

“The Bradford-Hill criteria can only be applied after a statistically significant association has been identified. Federal Judicial Center, Reference Manual on Scientific Evidence, 599, n.141 (3d. ed. 2011) (“In a number of cases, experts attempted to use these guidelines to support the existence of causation in the absence of any epidemiologic studies finding an association . . . . There may be some logic to that effort, but it does not reflect accepted epidemiologic methodology.”). See, e.g., Dunn v. Sandoz Pharms., 275 F. Supp. 2d 672, 678 (M.D.N.C. 2003). Here, Dr. Goldstein attempted to use the Bradford-Hill criteria to prove causation without first identifying a valid statistically significant association. He first developed a hypothesis and then attempted to use the Bradford-Hill criteria to prove it. Rec. Doc. 187, Exh. 2, depo. Goldstein, p. 103. Because there is no data showing an association between Paxil and limb defects, no association existed for Dr. Goldstein to apply the Bradford-Hill criteria. Hence, Dr. Goldstein’s general causation opinion is not reliable.”

Slip op. at 6.

The trial court’s rejection of Dr. Goldstein’s attempted end run is particularly noteworthy given the Reference Manual’s weak-kneed suggestion that this reasoning has “some logic” to it.  The Manual never articulates what “logic” commends Dr. Goldstein’s approach; nor does it identify any causal relationship ever established with such paltry evidence in the real world of science. The Manual does cite several legal cases that excused or overlooked the need to find a statistically significant association, and even elevated such reasoning into a legally acceptable method for admissibility.  See Reference Manual on Scientific Evidence at 599 n.141 (describing cases in which purported expert witnesses attempted to use Bradford Hill factors in the absence of a statistically significant association; citing Rains v. PPG Indus., Inc., 361 F. Supp. 2d 829, 836–37 (S.D. Ill. 2004); Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 460–61 (W.D. Pa. 2003)).  The Reference Manual also cited cases, without obvious disapproval, which completely dispensed with any necessity of considering any of the Bradford Hill factors, or the precondition of a statistically significant association.  See Reference Manual at 599 n.144 (citing Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants cite no authority, scientific or legal, that compliance with all, or even one, of these factors is required. . . . The scientific consensus is, in fact, to the contrary. It identifies Defendants’ list of factors as some of the nine factors or lenses that guide epidemiologists in making judgments about causation. . . . These factors are not tests for determining the reliability of any study or the causal inferences drawn from it.”)).

Shira Kramer Takes Her Lumpings

The plaintiffs’ other key expert witness, Dr. Shira Kramer, was a more sophisticated and experienced obfuscator.  Kramer attempted to provide plaintiffs with a necessary association by “lumping” all birth defects together in her analysis of epidemiologic data of birth defects among children of women who had ingested Paxil (or other SSRIs).  Given the clear evidence that different birth defects arise at different times, based upon interference with different embryological processes, the trial court discerned this “lumping” of end points to be methodologically inappropriate.  Slip op. at 8 (citing Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000), aff’d, 247 F.3d 240 (5th Cir. 2001) (unpublished)).

Without her “lumping”, Dr. Kramer was left with only a weak, inconsistent claim of biological plausibility and temporality. Finding that Dr. Kramer’s opinion had outrun her headlights, Judge Berrigan excluded Dr. Kramer as an expert witness, and granted GSK summary judgment.

Merry Christmas!

 

The Matrixx Motion in U.S. v. Harkonen

December 17th, 2012

United States of America v. W. Scott Harkonen, MD — Part III

Background

The recent oral argument in United States v. Harkonen (see “The (Clinical) Trial by Franz Kafka” (Dec. 11, 2012)) pushed me to revisit the brief filed by the Solicitor General’s office in Matrixx Initiatives Inc. v. Siracusano, 131 S. Ct. 1309 (2011).  One of Dr. Harkonen’s post-trial motions contended that the government’s failure to disclose its Matrixx amicus brief deprived him of a powerful argument that would have resulted from citing the language of the brief, which disparaged the necessity of statistical significance for “demonstrating” causal inferences. See “Multiplicity versus Duplicity – The Harkonen Conviction” (Dec. 11, 2012).

Matrixx Initiatives is a good example of how litigants make bad law when they press for rulings on bad facts.  The Supreme Court ultimately held that pleading and proving causation were not necessary for a securities fraud action that turned on non-disclosure of information about health outcomes among users of the company’s medication. What is required is “materiality,” which may be satisfied upon a much lower showing than causation.  Because Matrixx Initiatives contended that statistical significance was necessary to causation, which in turn was needed to show materiality, much of the briefing before the Supreme Court addressed statistical significance; the Court’s disposition, however, obviated any discussion of the role of statistical inferences for causation. 131 S.Ct. at 1319.

Still, the Supreme Court, in a unanimous opinion, plowed forward and issued its improvident dicta about statistical significance. Taken at face value, the Court’s statement that “the premise that statistical significance is the only reliable indication of causation … is flawed,” is unexceptionable. Matrixx Initiatives, 131 S.Ct. at 1319.  For one thing, the statement would be true if statistical significance were necessary but not sufficient to “indicate” causation. But more to the point, there are some cases in which statistical significance may not be part of the analytical toolkit for reaching a causal conclusion. For instance, the infamous Ferebee case, which did not involve Federal Rule of Evidence 702, is a good example of a case that did not involve epidemiologic or statistical evidence.  See “Ferebee Revisited” (Nov. 8, 2012) (discussing the agreement of both parties that statistical evidence was not necessary to resolve general causation because of the acute onset, post-exposure, of an extremely uncommon medical outcome – severe diffuse interstitial pulmonary fibrosis).

Surely, there are other such cases, but in modern products liability law, many causation puzzles turn on the interpretation of rate-driven processes, measured using epidemiologic studies, involving a measurable base-line risk and an observed higher or lower risk among a sample of an exposed population. In this context, some evaluation of the size of random error is, indeed, necessary. The Supreme Court’s muddled dicta, however, have confused the issues by painting with an extremely broad brush.

The dicta in Matrixx Initiatives have already led to judicial errors. The MDL court in the Chantix litigation provides one such instance. Plaintiffs claimed that Chantix, a medication that helps people stop smoking, causes suicide. Pfizer, the manufacturer, challenged plaintiffs’ general causation expert witnesses for not meeting the standards of Federal Rule of Evidence 702, for various reasons, not the least of which was that the studies relied upon by plaintiffs’ witnesses did not show statistical significance.  In re Chantix Prods. Liab. Litig., MDL 2092, 2012 U.S. Dist. LEXIS 130144 (Aug. 21, 2012).  The Chantix MDL court, citing Matrixx Initiatives for a blanket rejection of the need to consider random error, denied the defendant’s challenge. Id. at *41-42 (citing Matrixx Initiatives, 131 S.Ct. at 1319).

The Supreme Court, in Matrixx, however, never stated or implied such a blanket rejection of the importance of considering random error in evidence that was essentially statistical in nature. Of course, if it had done so, it would have been wrong.

Within two weeks of the Chantix decision, a similar erroneous interpretation of Matrixx Initiatives surfaced in MDL litigation over fenfluramine.  Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012). Rejecting a Rule 702 challenge to plaintiffs’ expert witness’s opinion, the MDL trial judge cited Matrixx Initiatives for the assertion that:

“Daubert does not require that an expert opinion regarding causation be based on statistical evidence in order to be reliable. * * * In fact, many courts have recognized that medical professionals often base their opinions on data other than statistical evidence from controlled clinical trials or epidemiological studies.”

Id. at *22 (citing Matrixx Initiatives, 131 S. Ct. at 1319, 1320).  While some causation opinions might appropriately be based upon evidence other than statistical evidence, the Supreme Court specifically disclaimed any comment upon Rule 702 in Matrixx Initiatives, which was a case about proper pleading of materiality in a securities fraud case, not about proper foundations for actual evidence of causation, at trial, of a health-effects claim. The Cheek decision is thus remarkable for profoundly misunderstanding the Matrixx case. There was no resolution of any Rule 702 issue in Matrixx.

The Trial Court’s Denial of the Matrixx Motion in Harkonen

Dr. Harkonen argued that he was entitled to a new trial on the basis of “newly discovered evidence” in the form of the government’s amicus brief in Matrixx. The trial court denied this motion on several grounds.  First, the government’s amicus brief was filed after the jury returned its verdict against Dr. Harkonen.  Second, the language in the Solicitor General’s amicus brief was just “argument.”  And third, the issue in Matrixx involved adverse events, not efficacy, and the FDA, as well as investors, would be concerned with lesser levels of evidence that did not “demonstrate” causation.  United States v. Harkonen, Memorandum & Order re Defendant Harkonen’s Motions for a New Trial, No. C 08-00164 MHP (N.D. Calif. April 18, 2011). Perhaps the most telling ground might have been that the government’s amicus briefing about statistical significance, prompted by Matrixx Initiatives’ appellate theory, was irrelevant to the proper resolution of that Supreme Court case.  Still, these reasons, taken individually or in combination, fail to mitigate the unfairness of the government’s prosecution of Dr. Harkonen.

The Amicus Brief Behind the Matrixx Motion

Judge Patel’s denial of the motion raised serious problems. See “Multiplicity versus Duplicity – The Harkonen Conviction” (Dec. 11, 2012).  It may thus be worth a closer look at the government’s amicus brief to evaluate Dr. Harkonen’s Matrixx motion. The distinction between efficacy and adverse effects is particularly unconvincing.  Similarly, it does not seem fair to permit the government to take inconsistent positions, whether on facts or on inferences and arguments, when those inconsistencies confuse criminal defendants, prosecutors, civil litigants, and lower court judges. After all, Dr. Harkonen’s use of the key word, “demonstrate,” was an argument about the epistemic strength of the evidence at hand.

The government’s amicus brief was filed by the Solicitor General’s office, along with counsel for the Food and Drug Division of the Department of Health & Human Services. The government, in its brief, appeared to disclaim the necessity, or even the importance, of statistical significance:

“[w]hile statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant … does not refute an inference of causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *14 (Nov. 12, 2010). This statement, with its double negatives, is highly problematic.  Validity of a correlation is not really what is at issue in a randomized clinical trial; rather, it is the statistical reliability or stability of the measurement that is called into question when the result is not statistically significant.  A statistically insignificant result may not refute causation, but it certainly does not thereby support an inference of causation.  The Solicitor General’s brief made this statement without citation to any biostatistics text or treatise.

The government’s amicus brief introduces its discussion of statistical significance with a heading, entitled “Statistical significance is a limited and non-exclusive tool for inferring causation.” Id. at *13.  In a footnote, the government elaborated that its position applied to both safety and efficacy outcomes:

“[t]he same principle applies to studies suggesting that a particular drug is efficacious. A study  in which the cure rate for cancer patients who took a drug was twice the cure rate for those who took a placebo could generate meaningful interest even if the results were not statistically significant.”

Id. at *15 n.2.  Judge Patel’s distinction between efficacy and adverse events thus cannot be sustained. Of course, “meaningful interest” is not exactly a sufficient basis for a causal conclusion. As a general matter, Dr. Harkonen’s motion seems well grounded.  Although not a model of clarity, the amicus brief appears to disparage the necessity of statistical significance for supporting a causal conclusion. A criminal defendant being prosecuted for using the wrong verb to describe his characterization of the inference he drew from a clinical trial would certainly want to showcase these high-profile statements made by Solicitor General’s office to the highest court of the land.

Solicitor General’s Good Advice

Much of the Solicitor General’s brief is directly on point for the Matrixx case. The amicus brief leads off by insisting that information that supports reasonable suspicions about adverse events may be material absent sufficient evidence of causation.  Id. at 11.  Of course, this is the dispositive argument, and it is stated well in the brief.  The brief then wanders into scientific and statistical territory, with little or no authority, at times misciting important works such as the Reference Manual on Scientific Evidence.

The Solicitor General’s amicus brief homes in on the key issue: materiality, which does not necessarily involve causation:

“Second, a reasonable investor may consider information suggesting an adverse drug effect important even if it does not prove that the drug causes the effect.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *8.

“As explained above (see p. 19, supra), however, adverse event reports do not lend themselves to a statistical-significance analysis. At a minimum, the standard petitioners advocate would require the design of a scientific study able to capture the relative rates of incidence (either through a clinical trial or observational study); enough participants and data to perform such a study and make it powerful enough to detect any increased incidence of the adverse effect; and a researcher equipped and interested enough to conduct it.”

Id. at 23.

“As petitioners acknowledge (Br. 23), FDA does not apply any single metric for determining when additional inquiry or action is necessary, and it certainly does not insist upon ‘statistical significance.’ See Adverse Event Reporting 7. Indeed, statistical significance is not a scientifically appropriate or meaningful standard in evaluating adverse event data outside of carefully designed studies. Id. at 5; cf. Lempert 240 (‘it is meaningless to talk about receiving a statistically significant number of complaints’).”

Id. at 19. So statistical significance is unrelated to the case, and the kind of evidence of materiality alleged by plaintiffs does not even lend itself to a measurement of statistical significance.  At this point, the brief writers might have called it a day.  The amicus brief, however, pushes on.

Solicitor General’s Ignoratio Elenchi

A good part of the government’s amicus brief in Matrixx presented argument irrelevant to the issues before the Court, even assuming that statistical significance was relevant to materiality.

“First, data showing a statistically significant association are not essential to establish a link between use of a drug and an adverse effect. As petitioners ultimately acknowledge (Br. 44 n.22), medical researchers, regulators, and courts consider multiple factors in assessing causation.”

Brief for the United States as Amicus Curiae Supporting Respondents, in Matrixx Initiatives, Inc. v. Siracusano, 2010 WL 4624148, at *12.  This statement is a non-sequitur.  The consideration of multiple factors in assessing causation does not make the need for a statistically significant association any more or less essential. Statistical significance could still be necessary but not sufficient in assessing causation.  The government’s brief writers pick up the thread a few pages later:

“More broadly, causation can appropriately be inferred through consideration of multiple factors independent of statistical significance. In a footnote, petitioners acknowledge that critical fact: ‘[C]ourts permit an inference of causation on the basis of scientifically reliable evidence other than statistically significant epidemiological data. In such cases experts rely on a lengthy list of factors to draw reliable inferences, including, for example,

(1) the “strength” of the association, including “whether it is statistically significant”;

(2) temporal relationship between exposure and the adverse event;

(3) consistency across multiple studies;

(4) “biological plausibility”;

(5) “consideration of alternative explanations” (i.e., confounding);

(6) “specificity” (i.e., whether the specific chemical is associated with the specific disease at issue); and

(7) dose-response relationship (i.e., whether an increase in exposure yields an increase in risk).’ ”

Pet. Br. 44 n.22 (citations omitted). Those and other factors for inferring causation have been well recognized in the medical literature and by the courts of appeals. See, e.g., Reference Guide on Epidemiology 345-347 (discussing relevance of toxicologic studies), 375-379 (citing, e.g., Austin Bradford Hill, The Environment and Disease: Association or Causation?, 58 Proc. Royal Soc’y Med. 295 (1965))… .”

Id. at 15-16. These enumerated factors are obviously due to Sir Austin Bradford Hill. No doubt Matrixx Initiatives cited the Bradford Hill factors, but that was because the company was contending that statistical significance was necessary but not sufficient to show causation.  As Bradford Hill showed by his famous conclusion that smoking causes lung cancer, these factors were considered after statistical significance was shown in several epidemiologic studies.  The Supreme Court incorporated this non-argument into its opinion, even after disclaiming that causation was needed for materiality or that the Court was going to assess the propriety of causal findings in other cases.

The Solicitor General went on to cite three cases for the proposition that statistical significance is not necessary for assessing causation:

“Best v. Lowe’s Home Centers, Inc., 563 F.3d 171, 178 (6th Cir. 2009) (“an ‘overwhelming majority of the courts of appeals’ agree” that differential diagnosis, a process for medical diagnosis that does not entail statistical significance tests, informs causation) (quoting Westberry v. Gislaved Gummi AB, 178 F.3d 257, 263 (4th Cir. 1999)).”

Id. at 16.  These two cases both involved so-called “differential diagnosis” or differential etiology, a process of ruling in, by ruling out.  This method, which involves iterative disjunctive syllogism, starts from established causes, and reasons to a single cause responsible for a given case of the disease.  The citation of these cases was irrelevant and bad scholarship by the government.  The Solicitor General’s error here seems to have been responsible for the Supreme Court’s unthinking incorporation of these cases into its opinion.

The Solicitor General went on to cite a third case, the infamous Ferebee, for its suggestion that statistical significance was not necessary to establish causation:

“Ferebee v. Chevron Chem. Co., 736 F.2d 1529, 1536 (D.C. Cir.) (‘[P]roducts liability law does not preclude recovery until a “statistically significant” number of people have been injured’.), cert. denied, 469 U.S. 1062 (1984). As discussed below (see pp. 19-20, infra), FDA relies on a number of those factors in deciding whether to take regulatory action based on reports of an adverse drug effect.”

Id. at 16.  Curiously, the Supreme Court departed from its reliance on the Solicitor General’s brief, with respect to Ferebee, and substituted its own citation to Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d in relevant part, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986). See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1” (Nov. 12, 2012).  The reliance upon the two differential etiology cases was “demonstrably” wrong, but citing Wells was even more bizarre because that case featured at least one statistically significant study relied upon by plaintiffs’ expert witnesses. Ferebee, on the other hand, involved an acute onset of a rare condition – severe pulmonary fibrosis – shortly after exposure to paraquat.  Ferebee was thus a case in which the parties agreed that the causal relationship between paraquat and lung fibrosis had been established by non-analytical epidemiologic evidence.  See “Ferebee Revisited.”

The government then pointed out in its amicus that sometimes statistical significance is hard to obtain:

“In some circumstances —e.g., where an adverse effect is subtle or has a low rate of incidence —an inability to obtain a data set of appropriate quality or quantity may preclude a finding of statistical significance. Ibid. That does not mean, however, that researchers have no basis on which to infer a plausible causal link between a drug and an adverse effect.”

Id. at 15. Biological plausibility is hardly a biologically established causal link.  Inability to find an appropriate data set often translates into an inability to draw a causal conclusion; inappropriate data are not an excuse for jumping to unsupported conclusions.
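The quantitative point in the brief is real, even if the inference drawn from it is not: with a rare outcome, even a substantially elevated risk can fail to reach conventional significance in a modest sample. For readers who want to see the arithmetic, here is a minimal sketch in Python, with illustrative numbers of my own invention (not from the brief or any study), assuming a simple Poisson model for case counts:

```python
from math import exp, factorial

def poisson_upper_tail(observed, expected):
    """One-sided p-value: P(X >= observed) for a Poisson count,
    computed under the null hypothesis that the true mean is `expected`."""
    # P(X >= k) = 1 - P(X <= k - 1)
    return 1 - sum(exp(-expected) * expected**i / factorial(i)
                   for i in range(observed))

# Hypothetical figures: a background expectation of 1 case per 1,000
# exposed, and 3 cases observed among 1,000 -- a tripled rate.
p = poisson_upper_tail(observed=3, expected=1.0)
# p is about 0.08: despite the threefold elevation, the result is not
# "statistically significant" at the conventional 0.05 level.
```

The failure to cross 0.05 here reflects the rarity of the outcome and the small sample, not an affirmative showing of no effect, which is the most that can fairly be squeezed out of the government’s observation.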

Solicitor General’s Bad Advice – Crimen Falsi?

The government’s brief then manages to go from bad to worse. The government’s amicus brief in Matrixx raises serious concerns about criminalizing inappropriate statistical statements, inferences, or conclusions.  If the Solicitor General’s office, with input from the Chief Counsel of the Food and Drug Division of the Department of Health & Human Services, cannot correctly state basic definitions of statistical significance, then the government has no business prosecuting others for similar offenses.

“To assess statistical significance in the medical context, a researcher begins with the ‘null hypothesis’, i.e., that there is no relationship between the drug and the adverse effect. The researcher calculates a ‘p-value’, which is the probability that the association observed in the study would have occurred even if there were in fact no link between the drug and the adverse effect. If that p-value is lower than the ‘significance level’ selected for the study, then the results can be deemed statistically significant.”

Id. at 13. Here the government’s brief commits a common error that results when lawyers want to simplify the definition of a p-value. The p-value is a cumulative probability of observing a disparity at least as great as observed, given the assumption that there is no difference.  Furthermore, the subjunctive is not appropriate to describe the basic assumption of significance probability.

“The significance level most commonly used in medical studies is 0.05. If the p-value is less than 0.05, there is less than a 5% chance that the observed association between the drug and the effect would have occurred randomly, and the results from such a study are deemed statistically significant. Conversely, if the p-value is greater than 0.05, there is greater than a 5% chance that the observed association would have occurred randomly, and the results are deemed not statistically significant. See Reference Guide on Epidemiology 357-358; David Kaye & David A. Freedman, Reference Guide on Statistics, in Reference Manual on Scientific Evidence 123, 123-125 (2d ed. 2000) (Reference Guide on Statistics).”

Id. at 14. Here the government’s brief drops the conditional of the significance probability; the p-value provides the probability that a disparity at least as large as observed would have occurred (based upon the assumed probability model), given the assumption that there really is no difference between the observed and expected results.
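The corrected definition is easy to see in a toy computation. A short Python sketch, using an invented example of my own (a sign-test-like setup of 10 paired observations under a fair-coin null hypothesis), shows that the p-value is the cumulative tail probability of results at least as extreme as the one observed, computed on the assumption that the null hypothesis is true; it is not the probability of the observed result alone, and certainly not the probability that the null hypothesis is true:

```python
from math import comb

def one_sided_p(n, k, p0=0.5):
    """P(X >= k) for X ~ Binomial(n, p0): the probability, computed
    assuming the null hypothesis, of a result at least as extreme
    as the one observed."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

# Hypothetical data: 8 of 10 paired comparisons favor the drug.
point_prob = comb(10, 8) * 0.5**10   # P(exactly 8)  -- about 0.044
p_value = one_sided_p(10, 8)         # P(8 or more)  -- about 0.055
# The p-value exceeds the point probability because it accumulates
# every outcome at least as extreme (8, 9, and 10 of 10).
```

Collapsing the cumulative tail into the probability of the single observed result, as the government’s brief does, is precisely the simplification that makes its definition wrong.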

“While statistical significance provides some indication about the validity of a correlation between a product and a harm, a determination that certain data are not statistically significant – let alone, as here, the absence of any determination one way or the other — does not refute an inference of causation. See Michael D. Green, Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation, 86 Nw. U. L. Rev. 643, 682- 683 (1992).”

Id. at 14. Validity is probably the wrong word since most statisticians and scientific authors use validity to refer to features other than low random error.

“Take, for example, results from a study, with a p-value of 0.06, showing that those who take a drug develop a rare but serious adverse effect (e.g., permanent paralysis) three times as often as those who do not. Because the p-value exceeds 5%, the study’s results would not be considered statistically significant at the 0.05 level. But since the results indicate a 94% likelihood that the observed association between the drug and the effect would not have occurred randomly, the data would clearly bear on the drug’s safety. Upon release of such a study, “confidence in the safety of the drug in question should diminish, and if the drug were important enough to [the issuer’s] balance sheet, the price of its stock would be expected to decline.” Lempert 239.”

Id. at 14-15. The citation to Lempert’s article is misleading. At the cited page, Professor Lempert is simply making the point that materiality in a securities fraud case will often be present when evidence for a causal conclusion is not. Richard Lempert, “The Significance of Statistical Significance:  Two Authors Restate An Incontrovertible Caution. Why A Book?” 34 Law & Social Inquiry 225, 239 (2009).  In so writing, Lempert anticipated the true holding of Matrixx Initiatives.  The calculation of the 94% likelihood is also incorrect.  The quantity (1 – [p-value]) yields the probability of obtaining a disparity smaller than the observed result, on the assumption that there is no difference at all between observed and expected results. There is, however, a larger point lurking in this passage of the amicus brief: the difference between a p-value of 0.05 and one of 0.06 is not particularly large, and there is thus a degree of arbitrariness in treating 0.05 as a bright line.
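The error in the “94% likelihood” passage can be made concrete with a back-of-the-envelope Bayesian calculation. Because 1 – p is computed on the assumption that the null hypothesis is true, it cannot double as the probability that the null hypothesis is false; that posterior probability depends upon inputs the p-value does not contain. A minimal sketch in Python, with prior and likelihood figures invented purely for illustration:

```python
# Invented illustrative inputs -- not from the brief or any study.
prior_null = 0.5          # assumed prior probability of no effect
p_data_given_null = 0.06  # probability of data this extreme, under the null
p_data_given_real = 0.50  # assumed probability of such data if the effect is real

# Bayes' theorem: posterior probability that the null hypothesis is true.
posterior_null = (p_data_given_null * prior_null) / (
    p_data_given_null * prior_null + p_data_given_real * (1 - prior_null))

# posterior_null is about 0.107, so P(real effect | data) is about 0.893
# under these assumptions -- and it moves with the prior and the assumed
# likelihoods. Nothing fixes it at 0.94 = 1 - p.
```

Change the assumed prior to 0.9 and the posterior moves substantially, which is the point: the p-value alone cannot yield the probability that the association is real.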

All in all, a distressingly poor performance by the Solicitor General’s office.  With access to many talented statisticians, the government could at least have had a competent statistician review and approve the content of this amicus brief.  I suspect that most judges and lawyers, however, would balk at drawing an inference that the Solicitor General intended to mislead the Court simply because the brief contained so many misstatements about statistical inference.  This reluctance should have obvious implications for the government’s attempt to criminalize Dr. Harkonen’s statistical inferences.

Egilman Petitions the Supreme Court for Review of His Own Exclusion in Newkirk v. Conagra Foods

December 13th, 2012

Last year, the United States Court of Appeals for the Ninth Circuit affirmed a district judge’s decision to exclude Dr. David S. Egilman from testifying in a consumer-exposure diacetyl case.  Newkirk v. Conagra Foods Inc., 438 Fed. Appx. 607 (9th Cir. 2011).  The plaintiff moved on, but his expert witness could not let his exclusion go.

To get the full “flavor” of this diacetyl case, read the district court’s opinion, which excluded Egilman and other witnesses, and entered summary judgment for the defense. Newkirk v. Conagra Foods, Inc., 727 F. Supp. 2d 1006  (E.D. Wash. July 2, 2010).  Here is the language that had Dr. Egilman popping mad:

“In other parts of his reports and testimony, Dr. Egilman relies on existing data, mostly in the form of published studies, but draws conclusions far beyond what the study authors concluded, or Dr. Egilman manipulates the data from those studies to reach misleading conclusions of his own. See Daubert I, 509 U.S. at 592–93, 113 S.Ct. 2786.”

727 F. Supp. 2d at 1018.

This language cut Dr. Egilman to the kernel, and provoked him to lodge a personal appeal to the Ninth Circuit, based in part upon the economic harm done to his litigation consulting and testimonial practice. (See attached Egilman Motion Appeal Diacetyl Exclusion 2011 and Egilman Declaration Newkirk Diacetyl Appeal 2011.)  Not only did the exclusion hurt Dr. Egilman’s livelihood, but also his eleemosynary endeavors:

“The Daubert ruling eliminates my ability to testify in this case and in others. I will lose the opportunity to bill for services in this case and in others (although I generally donate most fees related to courtroom testimony to charitable organizations, the lack of opportunity to do so is an injury to me). Based on my experience, it is virtually certain that some lawyers will choose not to attempt to retain me as a result of this ruling. Some lawyers will be dissuaded from retaining my services because the ruling is replete with unsubstantiated pejorative attacks on my qualifications as a scientist and expert. The judge’s rejection of my opinion is primarily an ad hominem attack and not based on an actual analysis of what I said – in an effort to deflect the ad hominem nature of the attack the judge creates ‘straw man’ arguments and then knocks the straw men down, without ever addressing the substance of my positions.”

Egilman Declaration in Newkirk at Paragraph 11.

The Ninth Circuit affirmed Dr. Egilman’s exclusion. Newkirk v. Conagra Foods, Inc., 438 Fed. Appx. 607 (9th Cir. 2011).  See “Ninth Circuit Affirms Rule 702 Exclusion of Dr. David Egilman in Diacetyl Case.”

This year, the Ninth Circuit dismissed his personal appeal for lack of standing.  Egilman v. Conagra Foods, Inc., 2012 WL 3836100 (9th Cir. 2012). Previously, I suggested that the Ninth Circuit had issued a judgment from which there would be no appeal.  I may have been mistaken.  Last week, counsel for Dr. Egilman filed a petition for certiorari in the United States Supreme Court.  Smarting from the district court’s attack on his character and professionalism, Dr. Egilman is seeking the personal right to appeal an adverse Rule 702 ruling.  The Circuit split, which Dr. Egilman hopes will get him a hearing in the Supreme Court, involves the issue whether he, as a non-party witness, must intervene in the proceedings in order to preserve his right to appeal:

“Whether a nonparty to a district court proceeding has a right to appeal a decision that adversely affects his interest, as the Second, Sixth, and D.C. Circuits hold, or whether, as six other circuit courts hold, the nonparty must intervene or otherwise participate in the district court proceedings to have a right to appeal.”

Egilman Pet’n Cert Newkirk v Conagra SCOTUS at 5 (Dec. 2012).  Of course, there is also a split among courts about Dr. Egilman’s reliability.

And who represents Dr. Egilman?  Counsel of record is Alexander A. Reinert, who teaches at Cardozo Law School, here in New York.  Dr. Egilman and Reinert have published several articles together, within the scope of Dr. Egilman’s litigation-oriented practice.[i]  In the past, I have commented upon Reinert’s work.  See, e.g., Schachtman, “Confidence in Intervals and Diffidence in the Courts” (May 8, 2012) (discussing Arthur H. Bryant & Alexander A. Reinert, “The Legal System’s Use of Epidemiology,” 87 Judicature 12, 19 (2003) (“The confidence interval is intended to provide a range of values within which, at a specified level of certainty, the magnitude of association lies.”) (incorrectly citing the first edition of Rothman & Greenland, Modern Epidemiology 190 (Philadelphia 1998))). It should be interesting to see what mischief Egilman & Reinert can make in the Supreme Court.


[i] David S. Egilman & Alexander A. Reinert, “Corruption of Previously Published Asbestos Research,” 55 Arch. Envt’l Health 75 (2000); David S. Egilman & Alexander A. Reinert, “Asbestos Exposure and Lung Cancer: Asbestosis Is Not Necessary,” 30 Am. J. Indus. Med. 398 (1996); David S. Egilman & Alexander A. Reinert, “The Asbestos TLV: Early Evidence of Inadequacy,” Am. J. Indus. Med. 369 (1996); David S. Egilman & Alexander A. Reinert, “The Origin and Development of the Asbestos Threshold Limit Value: Scientific Indifference and Corporate Influence,” 25 Internat’l J. Health Serv. 667 (1995).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.