TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Probabilism Case Law

January 28th, 2013

Some judges and commentators have characterized all evidence as ultimately “probable,” but other writers have criticized this view as trading on the ambiguities inherent in our ordinary use of “probable” to convey an epistemic hedge or uncertainty.  How successful is the probabilistic program in the law?  In the context of assessing causation, many courts have succumbed to the temptation to substitute risk for causation.  Other courts have noticed the difference between a prospective risk and a retrospective factual determination that a risk factor actually participated in bringing about the caused result.  In any event, judicial skepticism about probabilistic evidence, in many contexts, has found its expression in the holdings and dicta of common law courts.  The following is a chronological listing of some pertinent cases that rejected or limited the use of overtly probabilistic evidence.  Only two of the pre-1970 cases on the list involve epidemiologic evidence.

Day v. Boston & Maine R.R., 96 Me. 207, 217–218, 52 A. 771, 774 (1902) (“Quantitative probability, however, is only the greater chance. It is not proof, nor even probative evidence, of the proposition to be proved. That in one throw of dice, there is a quantitative probability, or greater chance, that a less number of spots than sixes will fall uppermost is no evidence whatever that in a given throw such was the actual result. Without something more, the actual result of the throw would still be utterly unknown. The slightest real evidence would outweigh all the probability otherwise.”)

Toledo, St. L. & W. R. Co. v. Howe, 191 F. 776, 782-83 (6th Cir. 1911) (holding that evidence at issue was not probabilistic, but noting in dictum that “[n]o man’s property should be taken from him on the mere guess that he has committed a wrong. . . because of a probability among other probabilities that the accident for which recovery is sought might have happened in the way charged.”)

People v. Risley, 214 N.Y. 75, 86, 108 N.E. 200, 203 (1915) (holding that probability calculations were improper when “the fact to be established in this case was not the probability of a future event, but whether an occurrence asserted by the people to have happened had actually taken place”)

Lampe v. Franklin Am. Trust, 339 Mo. 361, 384, 96 S.W.2d 710, 723 (1936) (verdict must be based upon what the jury finds to be facts rather than what they find to be ‘more probable’.)

Sargent v. Massachusetts Accident Co., 307 Mass. 246, 250, 29 N.E.2d 825, 827 (1940) (the preponderance standard requires more than showing that the chances mathematically favor a fact in dispute; the proponent must prove the proposition in dispute such that the jurors form an actual belief in the truth of the proposition) (“It has been held not enough that mathematically the chances somewhat favor a proposition to be proved; for example, the fact that colored automobiles made in the current year outnumber black ones would not warrant a finding that an undescribed automobile of the current year is colored and not black, nor would the fact that only a minority of men die of cancer warrant a finding that a particular man did not die of cancer. The weight or preponderance of the evidence is its power to convince the tribunal which has the determination of the fact, of the actual truth of the proposition to be proved. After the evidence has been weighed, that proposition is proved by a preponderance of the evidence if it is made to appear more likely or probable in the sense that actual belief in its truth, derived from the evidence, exists in the mind or minds of the tribunal notwithstanding any doubts that may linger there.”)

Smith v. Rapid Transit, 317 Mass. 469, 470, 58 N.E.2d 754, 755 (1945) (evidence that defendant was the only bus franchise operating in the area where the accident took place was not sufficient to establish that the bus that caused the accident belonged to the defendant where private or chartered buses could have been in the area; it is not enough that mathematically the chances somewhat favor the proposition to be proved)

Kamosky v Owens-Illinois Co., 89 F. Supp. 561, 561-62 (M.D.Pa. 1950) (directing verdict in favor of defendant; statistical likelihood that defendant manufactured the bottle that injured plaintiff was insufficient to satisfy plaintiff’s burden of proof)

Mahoney v. United States, 220 F. Supp. 823, 840–41 (E.D. Tenn. 1963) (Taylor, C.J.) (holding that plaintiffs had failed to prove that their cancers were caused by radiation exposures, on the basis of their statistical, epidemiological proofs), aff’d, 339 F.2d 605 (6th Cir. 1964) (per curiam)

In re King, 352 Mass. 488, 491-92, 225 N.E.2d 900, 902 (1967) (physician expert’s opinion that expressed a mathematical likelihood, unsupported by clinical evidence, that claimant’s death from cancer was caused by his accidental fall was legally insufficient to support a judgment)

Garner v. Hecla Mining Co., 19 Utah 2d 367, 431 P.2d 794, 796–97 (1967) (affirming denial of compensation to the family of a uranium miner who had smoked cigarettes and died of lung cancer; statistical evidence of a synergistically increased risk of lung cancer among uranium miners was insufficient to show causation of the decedent’s lung cancer, especially given his cigarette smoking)

Whitehurst v. Revlon, 307 F. Supp. 918, 920 (E.D. Va. 1969) (holding that the challenged evidence was not probabilistic, and noting in dictum that probability evidence of negligence would leave a verdict based upon conjecture, guess, or speculation)

Guenther v. Armstrong Rubber Co., 406 F.2d 1315, 1318 (3d Cir. 1969) (holding that defendant cannot be found liable on the basis that it supplied 75-80% of the kind of tire purchased by the plaintiff; any verdict based on this evidence “would at best be a guess”)

Crawford v. Industrial Comm’n, 23 Ariz. App. 578, 582-83, 534 P.2d 1077, 1078, 1082-83 (1975) (affirming the denial of compensation to an employee who was exposed to disease-producing conditions both on and off the job; a physician’s testimony, expressed to a reasonable degree of medical certainty, that the working conditions statistically increased the probability of developing a disease does not satisfy the reasonable-certainty standard)

Olson v. Federal American Partners, 567 P.2d 710, 712–13 (Wyo. 1977) (affirming judgment for employer in compensation proceedings; cigarette-smoking claimant failed to show that his lung cancer resulted from workplace exposure to radiation, despite alleged synergism between smoking and radiation).

Heckman v. Federal Press Co., 587 F.2d 612, 617 (3d Cir. 1977) (statistical data about a group do not establish facts about an individual)

Bazemore v. Davis, 394 A.2d 1377, 1382 n.7 (D.C. 1978) (if verdicts were determined on the basis of statistics indicating high probability of alleged facts, more often than not they would be correct guesses, but this is not a sufficient basis for reaching verdicts)
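Bazemore’s point, that verdicts rendered on base rates alone would be “correct guesses” more often than not, can be illustrated with a short simulation. The 75% figure below is a hypothetical market share, in the spirit of the tire statistics in Guenther; it is not data drawn from any of these cases:

```python
import random

def naked_statistics_verdicts(base_rate=0.75, n_cases=100_000, seed=1):
    """Simulate deciding every case solely on a base rate:
    find for the plaintiff whenever base_rate > 0.5."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_cases):
        defendant_responsible = rng.random() < base_rate
        verdict_for_plaintiff = base_rate > 0.5  # same verdict in every case
        if verdict_for_plaintiff == defendant_responsible:
            correct += 1
    return correct / n_cases

# With a 75% base rate, the fraction of "correct guesses" comes out
# at roughly 0.75 over many simulated cases.
print(naked_statistics_verdicts())
```

The simulation makes Bazemore’s dictum concrete: the base-rate verdict is right about three times out of four, but roughly one in four defendants would be held liable for a harm it did not cause.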

Kaminsky v. Hertz Corp., 94 Mich. App. 356 (1979) (dictum; reversing summary judgment)

Sulesky v. United States, 545 F. Supp. 426, 430 (S.D.W.Va. 1982) (swine flu vaccine GBS cases; epidemiological studies alone do not prove or disprove causation in an individual)

Robinson v. United States, 533 F. Supp. 320, 330 (E.D. Mich. 1982) (finding for the government in a swine flu vaccine case; the court found that the epidemiological evidence offered by the plaintiff was not probative, and that it “would reach the same result if the epidemiological data were entirely excluded since statistical evidence cannot establish cause and effect in an individual”)

Iglarsh v. United States, No. 79 C 2148, 1983 U.S. Dist. LEXIS 10950, *10 (N.D.Ill. Dec. 9, 1983) (“In the absence of a statistically valid epidemiological study, even the plaintiff’s treating physician or expert witness, or any clinician for that matter, is unable to attribute a plaintiff’s injury to the swine flu vaccination.”)

Johnston v. United States, 597 F. Supp. 374, 412, 425-26 (D.Kan. 1984) (although the probability of attribution increases with the relative risk, an expert must still speculate in making an individual attribution; “a statistical method which shows a greater than 50% probability does not rise to the required level of proof”; plaintiffs’ expert witnesses’ reports were “statistical sophistry,” not medical opinion)

Kramer v. Weedhopper of Utah, Inc., 490 N.E.2d 104, 108 (Ill. App. Ct. 1986) (Stamos, J., dissenting) (“Liability is not based on a balancing of probabilities, but on a finding of fact.  While the majority contends that the measure of what is considered sufficient evidence [to support submitting a case to the jury] resolves itself into a question of probability, a review of case law … reveals that a theoretical probability alone cannot be the basis for [a prima facie case].  There must be some evidence in addition to the abstraction which will enable a jury to choose between competing probabilities.”)

Washington v. Armstrong World Industries, 839 F.2d 1121 (5th Cir. 1988) (affirming grant of summary judgment on grounds that statistical correlation between asbestos exposure and disease did not support specific causation)

Thompson v. Merrell Dow Pharm., 229 N.J. Super. 230, 244, 551 A.2d 177, 185 (1988) (epidemiology looks at increased incidences of diseases in populations)

Norman v. National Gypsum Co., 739 F. Supp. 1137, 1138 (E.D. Tenn. 1990) (statistical evidence of risk of lung cancer from asbestos and smoking was insufficient to show individual causation, without evidence of asbestos fibers in the plaintiff’s lung tissue)

Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561, 1576 (N.D. Ga. 1991) (“The court notes that, in an individual case, epidemiology cannot conclusively prove causation; at best, it can establish only a certain probability that a randomly selected case of birth defect was one that would not have occurred absent exposure (or the ‘relative risk’ of the exposed population).”)

Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561, 1573 (N.D. Ga. 1991) (“However, in an individual case, epidemiology cannot conclusively prove causation; at best, it can only establish a certain probability that a randomly selected case of disease was one that would not have occurred absent exposure, or the ‘relative risk’ of the exposed population.  Epidemiology, therefore, involves evidence on causation derived from group-based information, rather than specific conclusions regarding causation in an individual case.”)
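The link between relative risk and the probability of individual attribution that runs through Johnston and Smith v. Ortho is conventionally expressed as the attributable fraction, AF = (RR − 1)/RR, so that only a relative risk above 2.0 yields a greater-than-50% probability of attribution, and then only on the contested assumption that a group-based figure can be applied to a randomly selected individual case. A minimal sketch of the arithmetic:

```python
def attributable_fraction(relative_risk: float) -> float:
    """Fraction of cases among the exposed attributable to the exposure,
    AF = (RR - 1) / RR, defined here for RR >= 1."""
    if relative_risk < 1:
        raise ValueError("exposure is not a risk factor if RR < 1")
    return (relative_risk - 1) / relative_risk

# A relative risk of 2.0 sits exactly at the 50% line invoked in Johnston:
print(attributable_fraction(2.0))  # 0.5
print(attributable_fraction(3.0))  # ~0.667: "more likely than not"
```

As the Smith v. Ortho court emphasized, this remains group-based information; the arithmetic yields a probability for a randomly selected case, not a specific conclusion about any individual plaintiff.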

Howard v. Wal-Mart Stores, Inc., 160 F.3d 358, 359–60 (7th Cir. 1998) (Posner, C.J.)

Krim v. pcOrder.com, Inc., 402 F.3d 489 (5th Cir. 2005) (rejecting plaintiffs’ standing to sue for fraud absent a showing that their shares actually traced to the offending public offering; the statistical likelihood that their shares were among those purchased in the offering was insufficient to confer standing)

New Release of PLI’s Treatise on Product Liability Litigation

January 19th, 2013

The Practicing Law Institute (PLI) has released a new edition of its treatise on product liability litigation.  Stephanie A. Scharf, Lise T. Spacapan, Traci M. Braun, and Sarah R. Marmor, eds., Product Liability Litigation:  Current Law, Strategies and Best Practices (PLI Dec. 2012).

The new edition, the third release of the treatise, has several new chapters, including my contribution, Chapter 30A, “Statistical Evidence in Products Liability Litigation,” which discusses the use of, and recent developments in, statistical and scientific evidence in the law, including judicial mishandling of “significance probability,” statistical significance, statistical power, and meta-analysis.  Here is the table of contents for this new chapter on statistical evidence:

  • § 30A:1 : Overview 30A-2
  • § 30A:2 : Litigation Context of Statistical Issues 30A-2
  • § 30A:3 : Qualification of Expert Witnesses Who Give Testimony on Statistical Issues 30A-3
  • § 30A:4 : Admissibility of Statistical Evidence—Rules 702 and 703 30A-3
  • § 30A:5 : Significance Probability 30A-5
    • § 30A:5.1 : Definition of Significance Probability (The “p-value”) 30A-5
    • § 30A:5.2 : The Transpositional Fallacy 30A-5
    • § 30A:5.3 : Confusion Between Significance Probability and The Burden of Proof 30A-6
    • § 30A:5.4 : Hypothesis Testing 30A-7
    • § 30A:5.5 : Confidence Intervals 30A-8
    • § 30A:5.6 : Inappropriate Use of Statistical Significance—Matrixx Initiatives, Inc. v. Siracusano 30A-9
      • [A] : Sequelae of Matrixx Initiatives 30A-12
      • [B] : Is Statistical Significance Necessary? 30A-13
  • § 30A:6 : Statistical Power 30A-14
    • § 30A:6.1 : Definition of Statistical Power 30A-14
    • § 30A:6.2 : Cases Involving Statistical Power 30A-15
  • § 30A:7 : Meta-Analysis 30A-17
    • § 30A:7.1 : Definition and History of Meta-Analysis 30A-17
    • § 30A:7.2 : Consensus Statements 30A-18
    • § 30A:7.3 : Use of Meta-Analysis in Litigation 30A-18
    • § 30A:7.4 : Competing Models for Meta-Analysis 30A-20
    • § 30A:7.5 : Recent Cases Involving Meta-Analyses 30A-21
  • § 30A:8 : Conclusion 30A-23

The treatise weighs in with over 40 chapters, and over 1,000 pages.  The table of contents and table of authorities are available online at the PLI’s website.

The PLI is a non-profit educational organization, chartered by the Regents of the University of the State of New York.  The PLI provides continuing legal education, and publishes treatises and handbooks geared for the practitioner.

Egilman Instigates Kerfuffle at McGill University

January 15th, 2013

Last February, the Canadian Broadcasting Corporation unleashed a one-sided, twenty-minute investigative film on the Quebec asbestos industry.  All allegations from the plaintiffs’ litigation world were accepted as true, and the asbestos mining industry was cast as a manufacturer of doubt and deception.  See Fatal Deception (Feb. 2, 2012).

The narrator raises the suggestion that the Canadian federal government is relying upon “junk science” to justify support for continuing exports of chrysotile and for reopening the Quebec mines.  This CBC production features Dr. David Egilman, holding forth on his views of the relationship among McGill University, Professor Corbett McDonald, and the Quebec asbestos industry.  Mostly, Egilman is permitted to define the issues and provide the “answers,” although the CBC film does give some air time to Professor Bruce Case, who points out that Egilman is not a scientist, but rather a social critic.  Asked on air how he would respond to Egilman’s allegations, Case answered, “I wouldn’t give Dr. Egilman the time of day…because he’s not an honorable person.”

For over a decade, Egilman has been pressing his allegations that asbestos research conducted by McGill University investigators was tainted.  In September 2012, McGill University’s Research Integrity Officer, Abraham Fuks, reported that the Egilman allegations were baseless and unsupported.  Consultation Report to Dean David Eidelman (Sept. 23, 2012). See Egilman’s Allegations Against McDonald and His Epidemiologic Research Are Baseless (Oct. 20, 2012).  Egilman responded to Professor Fuks’ report by labeling it “a shameful cover-up.”  Eric Andrew-Gee, “Asbestos debate rages on at the Faculty Club:  American researcher attacks McGill’s asbestos investigation,” The McGill Daily (Jan. 10, 2013).

The Egilman show apparently kicked off the new year at the McGill University Faculty Club, earlier this month, with a shouting match.  According to the University’s newspaper, Egilman called McDonald’s research on the Quebec chrysotile miners and millers “garbage,” and he called upon McGill University to retract the paper.

Egilman’s argument took the high road and the low road.  He understandably objected to McGill’s and McDonald’s refusal to share mineralogical data about the tremolite content of asbestos from the Thetford Mines.  Of course, the sad state of epidemiology today is that there is no mechanism for requiring data sharing; the authors of pro-plaintiff studies have consistently refused to share their data, and have fought subpoenas tooth and nail.

But then there was the low road. According to the McGill Daily, Egilman lapsed into name calling.  During his presentation, Egilman referred to McGill’s Professor Fuks as Inspector Fox and included a cartoon in his slideshow of a henhouse guarded by a grinning fox.  “Fuks, by the way, is German for Fox,” Egilman said.

One of the McGill professors chided Egilman for his ad hominem attack on Professor Fuks, and pointed out that Egilman could have made his points without personal attacks. Egilman responded “I could have, but it’s funny.” Id.

Egilman called upon his audience to evaluate his claims against those of Professors Case and Fuks. “One of us is an asshole,” he announced. Id. Indeed. Just perform the iterative disjunctive syllogism; it’s a matter of elimination.  For a more scholarly analysis of assholes, see Aaron James, Assholes:  A Theory (2012).

Tunnel Vision on Conflicts of Interest

January 13th, 2013

Judge Alsup’s order requiring disclosure of money paid to bloggers and journalists is only a recent manifestation of a misguided attempt to control conflicts of interest among non-parties.  See Can a Court Engage in Abusive Discovery? (Jan. 10, 2013).  Judge Alsup’s curious orders can be traced to encouragement in the Federal Judicial Center’s “pocket guide” to managing an MDL for products liability cases.  Barbara Rothstein & Catherine Borden, Managing Multidistrict Litigation in Products Liability Cases: A Pocket Guide for Transferee Judges (2011).  This FJC publication suggested that an MDL court should unleash discovery against authors of published works for evidence of bias, citing an MDL trial court that ordered parties to produce lists of payments to authors of articles relied upon by expert witnesses. Id. at 35 n.48 (citing In re Welding Fume Prods. Liab. Litig., 534 F. Supp. 2d 761 (N.D. Ohio 2008)).

The United States Supreme Court has also encouraged hostility to party-funded research and writing.  In Exxon Shipping Co. v. Baker, 554 U.S. 471, 501 (2008), the Court struck down a large punitive damage award.  Justice Souter, writing for a divided court, noted in footnote 17:

“The Court is aware of a body of literature running parallel to anecdotal reports, examining the predictability of punitive awards by conducting numerous ‘mock juries’, where different ‘jurors’ are confronted with the same hypothetical case. See, e.g., C. Sunstein, R. Hastie, J. Payne, D. Schkade, W. Viscusi, Punitive Damages: How Juries Decide (2002); Schkade, Sunstein, & Kahneman, Deliberating About Dollars: The Severity Shift, 100 Colum. L.Rev. 1139 (2000); Hastie, Schkade, & Payne, Juror Judgments in Civil Cases: Effects of Plaintiff’s Requests and Plaintiff’s Identity on Punitive Damage Awards, 23 Law & Hum. Behav. 445 (1999); Sunstein, Kahneman, & Schkade, Assessing Punitive Damages (with Notes on Cognition and Valuation in Law), 107 Yale L.J. 2071 (1998). Because this research was funded in part by Exxon, we decline to rely on it.”

Id. at n.17; see also Conflicts of Interest, Footnote 17, and Scientific McCarthyism.  The Supreme Court’s glib dismissal of behavioral research on a relevant topic was remarkable.  Professor Sunstein is a professor at the University of Chicago, and formerly served as the Administrator of the White House Office of Information and Regulatory Affairs in President Obama’s administration.  Professor Kahneman, a Nobel Laureate, is Professor Emeritus at Princeton University.  Professor W. Kip Viscusi has been one of the most prolific investigators of, and writers about, punitive damages.  Justice Souter’s footnote might be interpreted to impugn the integrity of their research by virtue of its corporate sponsorship.  More important, Justice Souter’s opinion fails to explain why the Court would not look beyond funding to the merits of the funded research.  Courts consider the arguments of the parties’ counsel, although, of course, the parties compensate their counsel for marshalling facts and for formulating and presenting arguments.  Perhaps Justice Souter would have been justified in announcing that he and his judicial colleagues had looked at Sunstein’s research more closely than other cited research.  The wholesale dismissal of relevant evidence based upon funding is irrational.

A new article posted on the Social Science Research Network explores the misdirection and distortion created by the single-minded focus on financial conflicts of interest.  Richard S. Saver,  “Is It Really All About The Money? Reconsidering Non-Financial Interests In Medical Research,” Journal of Law, Medicine & Ethics (forthcoming 2013).

Richard Saver is a professor of law at the University of North Carolina School of Law, and also holds appointments in the UNC’s School of Medicine.  Saver describes how conflicts of interest (COIs) have largely and incorrectly been reduced to financial conflicts.  For instance, in 2011, the National Institutes of Health (NIH) addressed only financial issues when it promulgated rules for managing conflicts of interest in the field of medical research.  Department of Health and Human Services, “Responsibility of Applicants for Promoting Objectivity in Research for Which Public Health Service Funding is Sought and Responsible Prospective Contractors,” 76 Fed. Reg. 53256 (Aug. 25, 2011).  Several commentators advocated regulation of non-financial COI, but the agency refused to include such COIs within its rules. Id. at 53258.  The Institute of Medicine (IOM), in its monograph on COI in medicine, similarly gave almost exclusive priority to financial ties.   Institute of Medicine, Conflict of Interest in Medical Research, Education, and Practice (Washington, D.C.: The National Academies Press, 2009).

Saver argues that the focus on economic COI is dangerous because it instills complacency about non-financial interests, and provides a false sense of assurance that the most serious biases are disclosed or eliminated.  Saver’s review of retractions, frauds, and ethical lapses in biomedical research suggests that non-financial interests, such as friendships and alliances, institutional hierarchies, intellectual biases and commitments, beneficence, “white-hat” advocacy, as well as the drive for professional achievement, recognition, and rewards, all have the potential to complicate, distort, and sometimes undermine scientific research in myriad ways. The failure to recognize serious non-economic COIs and biases, and the reluctance to treat them differently from financial COIs endangers the validity of science.  Not only are these non-financial threats ignored, but financial interests receive undue attention, resulting in the erosion of public trust in scientific research that is sound.

Professor Saver’s caveats about COI moralism apply beyond biomedical research.  The Exxon Shipping case, the MDL Pocket Guide, and Judge Alsup’s opinion on disclosure of payments to journalists and bloggers signal that courts are well on their way towards selectively and arbitrarily screening out evidence and arguments based upon sponsorship.  What is needed is a whole-hearted commitment to consider and analyze all the available data. Time to shed the blinders.

Can a Court Engage in Abusive Discovery?

January 11th, 2013

ABA’s Litigation News reported that the federal district judge in the lawsuit between Oracle and Google sua sponte ordered the parties to identify all journalists, bloggers, and other writers paid to comment on any of the issues in the case.  Jannis E. Goodnow, “Surprise Order Forces Parties to Identify Bloggers on Payroll,” Litigation News (Jan. 3, 2013).

Judge William Alsup cited his “concern” that the parties or their counsel “may have retained or paid print or Internet authors, journalists, commentators or bloggers who have and/or may publish comments on the issues.” Oracle America, Inc. v. Google Inc., No. C 10-03561 WHA, Order re Disclosure of Financial Relationships with Commentators on Issues in This Case (N.D. Calif. Aug. 7, 2012). And in a follow-up order, Judge Alsup expanded the scope of the order to include disclosure of those authors even if their payment was not specifically for commenting on the case before him. Order to Supplement (Aug. 20, 2012).

Judge Alsup offered a strange rationale for his disclosure order:

“Just as a treatise on the law may influence the courts, public commentary that purports to be independent may have an influence on the courts and/or their staff if only in subtle ways. If a treatise author or blogger is paid by a litigant, should not that relationship be known?”

Well, not really.  A treatise author’s opinion is only as good as the facts and inferences employed in reaching its conclusion.  The judge’s job is, of course, to assess that reasoning for himself.

Judge Alsup’s rhetorical question reveals a fundamental misunderstanding of the judicial function.  The arguments in a case come from counsel who are being paid.  The financial biases are complex.  Generally, a court might expect each side to benefit financially in some way from obtaining a ruling in line with the arguments it has advanced.  Sometimes defense counsel may benefit from perpetuating a litigation, but their reputational interests alone will motivate them to prevail.  Plaintiffs’ counsel hope to make money by pursuing the case.  Typically, legal counsel can be trusted to address the issues comprehensively.  Money will be involved.

Of course, journalists, bloggers, law professors, or pundits may put forward an argument not advanced by the parties.  A trial or an appellate judge might see such an argument, but why would funding make a difference to the merits of the argument?  Judges are, after all, supposed to be good at evaluating arguments without favor or prejudice.  And if a judge were to encounter an argument, presented by a non-party but not by the parties, which argument was persuasive, the judge would have the option of requesting the parties’ briefing on the matter.

Disclosure of party funding of writers might legitimately be needed when poll or survey evidence was influenced by the funded writers.  Such evidence apparently was not involved in the Oracle case; indeed, Judge Alsup’s request was based upon a purely fictitious concern.  After the parties made their disclosures, Judge Alsup announced that he would take no action because none of the journalistic commentary or blogging had influenced any of his decisions.

So this intrusive exercise in court-ordered disclosure really served no purpose at all.  If a party were to propound discovery requests that did not advance the litigation, we would not hesitate to call the discovery abusive.

Unfortunately, no one is paying me to blog on the issues I find interesting.

The Dubious Origins of the Linear No Threshold Model of Carcinogenesis

January 10th, 2013

Most regulation of chemical exposures for causing cancer outcomes is based upon a linear no-threshold exposure-response (LNT) model.  This model informs risk assessments and policy judgments in the United States, Europe, and throughout the world.  If the LNT model were offered to support only precautionary principle judgments, then we might excuse the pretensions of LNT advocates.  The authors of LNT models, environmentalists, and a cadre of anti-industry scientists (“The Lobby”) are not willing, however, to be understood to be offering mere prudential guidance.  They claim that they are offering scientific conclusions, worthy of being taken seriously, as knowledge.
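The competing dose-response assumptions can be made concrete. Under the LNT model, excess risk is proportional to dose all the way down to zero; a threshold model assigns no excess risk below some dose. The slope and threshold values in this sketch are arbitrary illustrations, not regulatory figures:

```python
def lnt_excess_risk(dose: float, slope: float = 1e-3) -> float:
    """Linear no-threshold model: excess risk scales with dose
    all the way down to zero dose."""
    return slope * dose

def threshold_excess_risk(dose: float, slope: float = 1e-3,
                          threshold: float = 10.0) -> float:
    """Threshold model: no excess risk below the threshold dose."""
    return slope * (dose - threshold) if dose > threshold else 0.0

# Under LNT, even a tiny dose carries some nonzero excess risk;
# under a threshold model, the same dose carries none.
print(lnt_excess_risk(0.5))        # 0.0005
print(threshold_excess_risk(0.5))  # 0.0
```

The practical consequence follows directly from the first function: because LNT assigns a nonzero excess risk to every dose, negative studies at low exposures can always be challenged as underpowered rather than accepted as evidence of safety.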

Passing off prudence, aesthetics, and hostility to industrialization as science is a deception that takes place virtually every day in the front pages of the lay media, in scientific journals, as well as in the Federal Register.  Implicit in the stridency of the Lobby is a desire to impose an often impossible burden upon industry to prove that there is no increased risk at miniscule exposures, and to challenge negative results with claims of inadequate power.

Professor Calabrese, in the current issue of the University of Chicago Law Review, has traced a source of the deception behind LNT to a series of events tied to the United States nuclear weapons program during World War II.  Edward J. Calabrese, “US Risk Assessment Policy: A History of Deception — A Response to Arden Rowell, Allocating Pollution,” 79 U. Chi. L. Rev. 985 (2012).   In 1927, Hermann J. Muller, a radiation geneticist, discovered that X-ray radiation caused mutation of fruit fly germ cells. The importance of health outcomes from radiation exposure led to Muller’s service as an advisor to the Manhattan Project during World War II.  In 1946, Muller received the Nobel Prize in medicine, for his work in genetics.

Professor Calabrese points out that Muller so thoroughly dismissed threshold models of radiation mutagenicity in his Nobel address that the LNT became accepted dogma in the regulatory and scientific agencies of the United States government, and later, around the world.   Edward J. Calabrese, “Muller’s Nobel Prize Lecture: when ideology prevailed over science,” 126 Toxicol. Sci. 1 (2012).   What Professor Calabrese adds to the historical narrative is that Muller was aware of high-quality research that undermined the LNT when he announced in his Nobel lecture that there was no evidence for thresholds.  In the 1950s, Muller continued to use his influence to advance the LNT model of radiation-induced carcinogenesis with the National Academy of Sciences’ Biological Effects of Atomic Radiation (“BEAR I”) Committee.

Professor Calabrese has provided an important cautionary tale about how scientific beliefs take hold and are propagated.  Muller’s influence and the rise of the LNT model make a recurring tale of the role of power, persuasion, and prestige in claiming scientific truths.  It is not just about money.  Conflicts of interest involving professional honors, institutional recognition, friendships, intellectual commitments, and investigative “zeal” are often much greater threats to the integrity of science and medical research than the receipt of payments.  See Richard S. Saver, “Is It Really All about the Money? Reconsidering Non-Financial Interests in Medical Research,”  40 J. Law, Med. & Ethics (2012).

If Professor Calabrese is right in his historical analysis, Muller achieved his goal of protecting his discovery from challenge by a selective presentation of the relevant data at a crucial moment in the formation of scientific thinking about mutagenesis.  The rise and fall of the LNT model carries with it an important lesson for judicial gatekeepers.  Unlike the audience of Muller’s Nobel speech, lawyers and judges can, and must, insist upon a declaration of all materials considered so that they can determine whether an expert witness has proffered an opinion based upon a thorough, complete review of the relevant data.

New Superhero?

December 31st, 2012

The Verdict. A Civil Action. Class Action. My Cousin Vinny.

Wonder Woman, Superman, Batman, Ironman.

America loves movies, and superheroes.

So 2013 should be an exciting year with a new superhero movie coming to a theater, or a courthouse, near you: Egilman.

Actor-producer-director Patrick Coppola has announced that he is developing a film, which has yet to be given a catchy name.  Coppola calls the film in development the DOCTOR DAVID EGILMAN PROJECT.  According to Coppola, he was hired

“by world famous MD – Doctor David Egilman to create and write a Screenplay based on Doctor Egilman’s life and the many cases he has served on as an expert witness in various chemical poisoning trials. Doctor Egilman is a champion of the underdog and has several worldwide charities and medical clinics he funds and donates his time to.”

Patrick Coppola describes his screenplay for the “Doctor David Egilman Project” as a story of conspiracy among corporate suppliers of beryllium materials, the government, and the thought leaders in occupational medicine to suppress information about harm to workers. In this narrative, which is a familiar refrain for plaintiffs’ counsel in toxic tort litigation, profits always take precedence over safety, and unions mysteriously are silently complicit in the carnage.

Can’t wait!

 

Reanalysis of Epidemiologic Studies – Not Intrinsically WOEful

December 27th, 2012

A recent student law review article discusses reanalyses of epidemiologic studies, an important and overlooked topic in the jurisprudence of scientific evidence.  Alexander J. Bandza, “Epidemiological-Study Reanalyses and Daubert: A Modest Proposal to Level the Playing Field in Toxic Tort Litigation,” 39 Ecology L. Q. 247 (2012).

In the Daubert case itself, the Ninth Circuit, speaking through Judge Kozinski, avoided the methodological issues raised by Shanna Swan’s reanalysis of Bendectin epidemiologic studies, by assuming arguendo its validity, and holding that the small relative risk yielded by the reanalysis would not support a jury verdict of specific causation. Daubert v. Merrell Dow Pharm., Inc., 43 F.3d 1311, 1317–18 (9th Cir. 1995).

There is much that can, and should, be said about reanalyses in litigation and in the scientific process, but Bandza never really gets down to the business at hand. His 36-page article curiously does not begin to address reanalysis until the bottom of its 20th page. The first half of the article, and then some, reviews some time-worn insights and factoids about scientific evidence. Finally, at page 266, the author introduces and defines reanalysis:

“Reanalysis occurs ‘when a person other than the original investigator obtains an epidemiologic data set and conducts analyses to evaluate the quality, reliability or validity of the dataset, methods, results or conclusions reported by the original investigator’.”

Bandza at 266 (quoting Raymond Neutra et al., “Toward Guidelines for the Ethical Reanalysis and Reinterpretation of Another’s Research,” 17 Epidemiology 335, 335 (2006)).

Bandza correctly identifies some of the bases for judicial hostility to re-analyses. For instance, some courts are troubled or confused when expert witnesses disagree with, or reevaluate, the conclusions of a published article. The witnesses’ conclusions may not be published or peer reviewed, and thus the proffered testimony fails one of the Daubert factors.  Bandza correctly notes that peer review is greatly overrated by judges. Bandza at 270. I would add that peer review is an inappropriate proxy for validity, a “test” that reflects a distrust of the unpublished.  Unfortunately, this judicial factor ignores the poor quality of much of what is published, and the extreme variability in the peer review process. Judges overrate peer review because they are desperate for a proxy for the validity of the studies relied upon, which will allow them to pass their gatekeeping responsibility on to the jury. Furthermore, the authors’ own conclusions are hearsay, and their qualifications are often not fully before the court.  What is important is the opinion of the expert witness, who can be cross-examined and challenged.  See “FOLLOW THE DATA, NOT THE DISCUSSION.” What counts is the validity of the expert witness’s reasoning and inferences.

Bandza’s article, which by title advertises itself to be about re-analyses, gives only a few examples of re-analyses, without much detail.  He notes concerns that reanalyses may impugn the reputation of published scientists, and burden them with defending their data.  Who would have it any other way? After this short discussion, the article careens into a discussion of “weight of the evidence” (WOE) methodology. Bandza tells us that the rejection of re-analyses in judicial proceedings “implicitly rules out using the weight-of-the-evidence methodology often appropriate for, or even necessary to, scientific analysis of potentially toxic substances.” Bandza at 270.  This argument, however, is one sustained non-sequitur.  WOE is defined in several ways, but none of the definitions requires or suggests the incorporation of re-analyses. Re-analyses raise reliability and validity issues regardless of whether an expert witness incorporates them into a WOE assessment. Yet Bandza tells us that the rejection of re-analyses “Implicitly Ignores the Weight-of-the-Evidence Methodology Appropriate for the Scientific Analysis of Potentially Toxic Substances.” Bandza at 274. This conclusion simply does not follow from the nature of WOE methodology or reanalyses.

Bandza’s ipse dixit raises the independent issue whether WOE methodology is appropriate for scientific analysis. WOE is described as embraced or used by regulatory agencies, but that description hardly recommends the methodology as the basis for a scientific, as opposed to a regulatory, conclusion.  Furthermore, Bandza ignores the ambiguity and variability of WOE by referring to it as a methodology, when in reality, WOE is used to describe a wide variety of methods of reasoning to a conclusion. Bandza cites Douglas Weed’s article on WOE, but fails to come to grips with the serious objections raised by Weed in his article to the use of WOE methodologies.  Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546–52 (2005) (describing the vagueness and imprecision of WOE methodologies). See also “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name.”

Bandza concludes his article with a hymn to the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011). Plaintiffs’ expert witness, Dr. Martyn Smith, claimed to have performed a WOE analysis, which in turn was based upon a re-analysis of several epidemiologic studies. True, true, and immaterial.  The re-analyses were not inherently a part of a WOE approach. Presumably, Smith re-analyzed some of the epidemiologic studies because he felt that the data as presented did not support his desired conclusion.  Given the motivations at work, the district court in Milward was correct to look skeptically and critically at the re-analyses.

Bandza notes that there are procedural and evidentiary safeguards in federal court against unreliable or invalid re-analyses of epidemiologic studies.  Bandza at 277. Yes, there are safeguards, but they help only when they are actually used. The First Circuit in Milward reversed the district court for looking too closely at the re-analyses, spouting the chestnut that the objections went to the weight, not the admissibility, of the evidence.  Bandza embraces the rhetoric of the Circuit, but he offers no description or analysis of the liberties that Martyn Smith took with the data, or the reasonableness of Smith’s reliance upon the re-analyzed data.

There is no necessary connection between WOE methodologies and re-analyses of epidemiologic studies.  Re-analyses can be done properly to support or deconstruct the conclusions of published papers.  As Bandza points out, some re-analyses may go on to be peer reviewed and published themselves.  Validity is the key, and WOE methodologies have little to do with the process of evaluating the original or the re-analyzed study.

 

 

Litmus Tests

December 27th, 2012

Rule 702 is, or is not, a litmus test for expert witness opinion admissibility.  Relative risk is, or is not, a litmus test for specific causation.  Statistical significance is, or is not, a litmus test for reasonable reliance upon the results of a study.  It is relatively easy to find judicial opinions on either side of the litmus divide.  Compare National Judicial College, Resource Guide for Managing Complex Litigation at 57 (2010) (Daubert is not a litmus test) with Cryer v. Werner Enterprises, Inc., Civ. Action No. 05-S-696-NE, Mem. Op. & Order at 16 n. 63 (N.D. Ala. Dec. 28, 2007) (describing the Eleventh Circuit’s restatement of Rule 702’s “litmus test” for the methodological reliability of proffered expert witness opinion testimony).

The “litmus test” is one sorry, overworked metaphor.  Perhaps its appeal has to do with a vague collective memory that litmus paper is one of those “things of science,” which we used in high school chemistry, and never had occasion to use again. Perhaps litmus tests have the appeal of “proofiness.”

The reality is different. The litmus test is a semi-quantitative test for acidity or alkalinity.  Neutral litmus is purple.  Under acidic conditions, litmus turns red; under basic conditions, it turns blue.  For some time, scientists have used pH meters when they want a precise quantification of acidity or alkalinity.  Litmus paper is a fairly crude test, which easily discriminates moderate acidity from alkalinity (say, pH 4 from pH 11), but is relatively useless for detecting acidity at pH 6.95, or alkalinity at pH 7.05.
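
The boundary problem can be put in a few lines of Python. This is a minimal sketch, not a chemical model; the cutoffs of 6.5 and 7.5 are invented here purely to illustrate the indicator’s dead zone around neutrality:

```python
def litmus(ph):
    """Crude litmus reading: red for acid, blue for base,
    and an uninformative purple near neutrality.
    Thresholds are hypothetical, for illustration only."""
    if ph < 6.5:
        return "red"      # clearly acidic
    if ph > 7.5:
        return "blue"     # clearly basic
    return "purple"       # indistinguishable near pH 7

# Litmus easily separates pH 4 from pH 11 ...
print(litmus(4.0))    # red
print(litmus(11.0))   # blue
# ... but cannot tell pH 6.95 from pH 7.05
print(litmus(6.95))   # purple
print(litmus(7.05))   # purple
```

A pH meter, by contrast, would simply report 6.95 and 7.05 as distinct numbers; the binary (or ternary) readout is what makes litmus crude.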

So what exactly are legal authors trying to say when they say that some feature of a test is, or is not, a “litmus test”? The litmus test is accurate, but not precise at the important boundary at neutrality.  The litmus test’s color can be interpreted for degree of acidity or alkalinity, but it is not the preferred method to obtain a precise measurement. Saying that a judicial candidate’s views on abortion are a litmus test for the Senate’s evaluation of the candidate makes sense, given the relatively binary nature of the outcome of a litmus test, and the polarization of political views on abortion. Apparently, neutral views, or views close to neutrality, on abortion are not a desideratum for judicial candidates.  A cruder, binary test is exactly what politicians desire.

The litmus test that is used for judicial candidates does not seem to work so well when used to describe scientific or statistical inference.  The litmus test is well understood, but fairly obsolete in modern laboratory practice.  When courts say that statistical significance is not a litmus test for the acceptability of a study’s results, they are clearly correct, because the measure of random error is only one aspect of judging a body of evidence for, or against, an association.  Yet courts seem to imply something else, at least at times:

statistical significance is not an important showing in making a case that an exposure is reliably associated with a particular outcome.

Here courts are trading in half-truths.  Statistical significance is quantitative, and the choice of a level of significance is not based upon immutable law. So, like the slight difference between a pH of 6.95 and one of 7.05, statistical significance tests have a boundary issue.  Nonetheless, a consideration of random error cannot be dismissed or overlooked on the theory that the significance level is not a “litmus test.”  This metaphor obscures and attempts to excuse sloppy thinking.  It is time to move beyond it.
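
The parallel to litmus paper is exact: a fixed significance level collapses a continuous measure of random error into a binary verdict. A short sketch, using the conventional (but not legally mandated) alpha of 0.05, makes the boundary issue concrete:

```python
def significant(p_value, alpha=0.05):
    """Conventional fixed-threshold significance test.
    A continuous measure of random error (the p-value) is
    collapsed into a binary verdict at an arbitrary cutoff."""
    return p_value < alpha

# Two nearly identical results fall on opposite sides of the line,
# much as pH 6.95 and pH 7.05 straddle litmus neutrality.
print(significant(0.049))  # True
print(significant(0.051))  # False
```

The point is not that the cutoff is meaningless, but that the one-bit answer discards information, which is why a consideration of random error cannot simply be waved away with the “no litmus test” metaphor.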

Lumpenepidemiology

December 24th, 2012

Judge Helen Berrigan, who presides over the Paxil birth defects MDL in New Orleans, has issued a nicely reasoned Rule 702 opinion, upholding defense objections to plaintiffs’ expert witnesses, Paul Goldstein, Ph.D., and Shira Kramer, Ph.D. Frischhertz v SmithKline Beecham EDLa 2012 702 MSJ Op.

The plaintiff, Andrea Frischhertz, took GSK’s Paxil, a selective serotonin reuptake inhibitor (SSRI), for depression while pregnant with her daughter, E.F. The parties agreed that E.F. was born with a deformity of her right hand.  Plaintiffs originally claimed that E.F. had a heart defect, but their expert witnesses appeared to give up this claim at deposition, as lacking evidential support.

Adhering to Daubert’s Epistemiologic Lesson

Like many other lower federal courts, Judge Berrigan focused her analysis on the language of Daubert v. Merrell Dow Pharmaceuticals Inc., 509 U.S. 579 (1993), a case that has been superseded by subsequent cases and a revision to the operative statute, Rule 702.  Fortunately, the trial court did not lose sight of the key epistemological teaching of Daubert, which is based upon Rule 702:

“Regarding reliability, the [Daubert] Court said: ‘the subject of an expert’s testimony must be “scientific . . . knowledge.” The adjective “scientific” implies a grounding in the methods and procedures of science. Similarly, the word “knowledge” connotes more than subjective belief or unsupported speculation’.”

Slip Op. at 3 (quoting Daubert, 509 U.S. at 589-590).

There was not much to the plaintiffs’ expert witnesses’ opinion beyond speculation, but many other courts have been beguiled by speculation dressed up as “scientific … knowledge.”  Dr. Goldstein relied upon whole embryo culture testing of SSRIs, but in the face of overwhelming evidence, Dr. Goldstein was forced to concede that this test may generate hypotheses about, but cannot predict, human risk of birth defects.  No doubt this concession made the trial court’s decision easier, but the result would have been required regardless of Dr. Goldstein’s exhibition of truthfulness at deposition.

Statistical Association – A Good Place to Begin

More interestingly, the trial court rejected the plaintiffs’ expert witnesses’ effort to leapfrog over finding a statistically significant association and proceed directly to parsing the so-called Bradford Hill factors:

“The Bradford-Hill criteria can only be applied after a statistically significant association has been identified. Federal Judicial Center, Reference Manual on Scientific Evidence, 599, n.141 (3d. ed. 2011) (“In a number of cases, experts attempted to use these guidelines to support the existence of causation in the absence of any epidemiologic studies finding an association . . . . There may be some logic to that effort, but it does not reflect accepted epidemiologic methodology.”). See, e.g., Dunn v. Sandoz Pharms., 275 F. Supp. 2d 672, 678 (M.D.N.C. 2003). Here, Dr. Goldstein attempted to use the Bradford-Hill criteria to prove causation without first identifying a valid statistically significant association. He first developed a hypothesis and then attempted to use the Bradford-Hill criteria to prove it. Rec. Doc. 187, Exh. 2, depo. Goldstein, p. 103. Because there is no data showing an association between Paxil and limb defects, no association existed for Dr. Goldstein to apply the Bradford-Hill criteria. Hence, Dr. Goldstein’s general causation opinion is not reliable.”

Slip op. at 6.

The trial court’s rejection of Dr. Goldstein’s attempted end run is particularly noteworthy given the Reference Manual’s weak-kneed attempt to suggest that this reasoning has “some logic” to it.  The Manual never articulates what “logic” commends Dr. Goldstein’s approach; nor does it identify any causal relationship ever established with such paltry evidence in the real world of science. The Manual does cite several legal cases that excused or overlooked the need to find a statistically significant association, and even elevated such reasoning into a legally acceptable admissibility method.  See Reference Manual on Scientific Evidence at 599 n. 141 (describing cases in which purported expert witnesses attempted to use Bradford Hill factors in the absence of a statistically significant association; citing Rains v. PPG Indus., Inc., 361 F. Supp. 2d 829, 836–37 (S.D. Ill. 2004); Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 460–61 (W.D. Pa. 2003)).  The Reference Manual also cited cases, without obvious disapproval, which completely dispensed with any necessity of considering any of the Bradford Hill factors, or the precondition of a statistically significant association.  See Reference Manual at 599 n. 144 (citing Cook v. Rockwell Int’l Corp., 580 F. Supp. 2d 1071, 1098 (D. Colo. 2006) (“Defendants cite no authority, scientific or legal, that compliance with all, or even one, of these factors is required. . . . The scientific consensus is, in fact, to the contrary. It identifies Defendants’ list of factors as some of the nine factors or lenses that guide epidemiologists in making judgments about causation. . . . These factors are not tests for determining the reliability of any study or the causal inferences drawn from it.”)).

Shira Kramer Takes Her Lumpings

The plaintiffs’ other key expert witness, Dr. Shira Kramer, was a more sophisticated and experienced obfuscator.  Kramer attempted to provide plaintiffs with a necessary association by “lumping” all birth defects together in her analysis of epidemiologic data of birth defects among children of women who had ingested Paxil (or other SSRIs).  Given the clear evidence that different birth defects arise at different times, based upon interference with different embryological processes, the trial court discerned this “lumping” of end points to be methodologically inappropriate.  Slip op. at 8 (citing Chambers v. Exxon Corp., 81 F. Supp. 2d 661 (M.D. La. 2000), aff’d, 247 F.3d 240 (5th Cir. 2001) (unpublished)).

Without her “lumping”, Dr. Kramer was left with only a weak, inconsistent claim of biological plausibility and temporality. Finding that Dr. Kramer’s opinion had outrun her headlights, Judge Berrigan excluded Dr. Kramer as an expert witness, and granted GSK summary judgment.

Merry Christmas!

 

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.