TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Exposure, Epidemiology, and External Validity under Rule 702

May 14th, 2012

Sometimes legal counsel take positions in court determined solely by the expediency of what expert witnesses are available, and what opinions are held by those witnesses.

Back in the early days of asbestos litigation in Philadelphia, a hotbed of such litigation, plaintiffs and defendants each identified a pool of available expert witnesses on lung diseases.  Each side found witnesses who held views on important issues, such as whether asbestos caused lung cancer, with or without pre-existing asbestosis; whether all types of asbestos caused mesothelioma; whether asbestos caused gastrointestinal cancers; and whether “each and every exposure was a substantial factor” in producing an asbestos-related disease.  Some expert witnesses adopted opinions as a matter of convenience and malleability, but most expressed sincerely held opinions.  Either way, each expert witness active in the asbestos litigation came to be seen as a partisan of one side.  Because of the volume of cases, there was the opportunity to be engaged in a large number of cases, and to earn sizable fees as an expert witness.  Both sides’ expert witnesses struggled to avoid being labeled hired guns.

A few expert witnesses, eager to avoid being locked in as either a “plaintiff’s” or a “defendant’s” expert witness, with perhaps some damage to their professional reputations, balanced their views in a way to avoid being classified as working exclusively for one side or the other.  The late Paul Epstein, MD, adopted this strategy to great effect.  Dr. Epstein had excellent credentials, and he was an excellent physician.  He was on the faculty at the University of Pennsylvania, and he was a leader in the American College of Physicians, where he was the deputy editor of the Annals of Internal Medicine.  Dr. Epstein exemplified gravitas and learning.  He was not, however, above adopting views in such a way as to balance out his commitments to both the plaintiffs’ and defense bars.  By doing so, Dr. Epstein made himself invaluable to both sides, and he made aggressive cross-examination difficult, if not impossible, when he testified.  I suspect his positions had this strategic goal.

In his first testimonies, in the late 1970’s and early 1980’s, Dr. Epstein expressed the view that asbestos exposure caused parietal pleural plaques, but these plaques rarely interfered with respiration.  Pleural plaques did not cause impairment or disability, and thus they were not an “injury.”  Dr. Epstein’s views were very helpful in obtaining defense verdicts in cases of disputed pleural thickening or plaques, and they led to his being much sought after by defense counsel for their independent medical examinations.  Dr. Epstein also strongly believed, based upon the epidemiologic evidence, that asbestos did not cause gastrointestinal or laryngeal cancer.

Dr. Epstein was wary of being labeled a “defendants’ expert” in the asbestos litigation, especially given the social opprobrium that attached to working for the “asbestos industry.”  And so, by the mid-1980’s, Dr. Epstein surprised the defense bar by showing up in a plaintiff’s lung cancer case, without underlying asbestosis.  Dr. Epstein took the position that if the plaintiff worked around asbestos, and later developed lung cancer, then asbestos caused his lung cancer, and “each and every exposure to asbestos” contributed substantially to the outcome.  Risk was causation; ipse dixit.  Dr. Epstein recited the Selikoff multiplicative “synergy” theory, with relative risks of 5 (for non-smoking asbestos workers), 10 (for smoking non-asbestos workers), and 50 (for smoking asbestos-exposed workers).  Every worker was described with the same set of risk ratios.  Remarkably, and unscientifically, Dr. Epstein gave the same risk figures in every plaintiff’s lung cancer case, regardless of the duration or level of exposure.  In mesothelioma cases, Dr. Epstein took the unscientific position that all fiber types (chrysotile, amosite, crocidolite, and anthophyllite) contributed to any patient’s mesothelioma.

Dr. Epstein’s views made him off limits to plaintiffs in non-malignancy cases, and off limits to defendants in lung cancer and mesothelioma cases.

Because of his careful alignment with both plaintiffs’ and defense bars, Dr. Epstein’s views were never forcefully challenged.  Of course, the Pennsylvania case law in the 1980’s and 1990’s was not particularly favorable to challenges to the validity of opinions about causation, but even as Rule 702 evolved in federal court, both plaintiffs’ and defense counsel were unable to antagonize Dr. Epstein.  The inanity of “each and every exposure” was not seriously hurtful in the early asbestos litigation, when the defendants were almost all manufacturers of asbestos-containing insulation, and if a manufacturer had supplied insulation to a worksite, then the proportion of asbestos exposure for that manufacturer would likely have been “substantial.”

Today, the nature of the asbestos litigation has changed, but when we examine Pennsylvania law and procedure, it is not surprising to see that Dr. Epstein’s views have had a long-lasting effect.  Claimants with only pleural plaques have been relegated to an “inactive” docket.  Plaintiffs’ expert witnesses still opine that each and every exposure was substantial, without any basis in evidence, and they still recite the same 5x, 10x, and 50x risk ratios, based upon Selikoff’s insulator studies, even though the Philadelphia Court of Common Pleas probably has not seen more than a handful of insulators’ cases in the last decade.  Dozens of epidemiologic studies have shown that bystander trades, chrysotile factory workers, and other non-insulator occupational groups have lower risks of asbestos-related diseases.

The failure to challenge the Selikoff risk ratios is regrettable, especially because that failure rested upon politics and personalities, and not upon scientific or legal evidentiary grounds.

As Irving Selikoff observed about his frequently cited statistics:

“These particular figures apply to the particular groups of asbestos workers in this study.  The net synergistic effect would not have been the same if their smoking habits had been different; and it probably would have been different if their lapsed time from first exposure to asbestos dust had been different or if the amount of asbestos dust they had inhaled had been different.”

E. Cuyler Hammond, Irving Selikoff, and Herbert Seidman, “Asbestos Exposure, Cigarette Smoking and Death Rates,” 330 Ann. N.Y. Acad. Sci. 473, 487 (1979).

The Selikoff risk figures were unreliable even for insulators, given that the so-called non-smokers were admittedly occasional smokers, and the low relative risk for smokers in the general population came from an historical cohort of relatively healthy American Cancer Society volunteers. The updated risk figures for smokers in the general population placed their lung cancer risk closer to, and above, 20-fold, which raised doubts about Selikoff’s neat multiplicative theory.
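The arithmetic of the multiplicative model is simple enough to sketch in a few lines of code.  This is an illustration only, using the 5x, 10x, and 50x figures quoted above and the roughly 20-fold updated smoking estimate:

```python
# Illustrative sketch of a strictly multiplicative ("synergy")
# relative-risk model, using the figures discussed in the text.

def combined_rr(rr_asbestos: float, rr_smoking: float) -> float:
    """Under a strictly multiplicative interaction, the joint relative
    risk is the product of the individual relative risks."""
    return rr_asbestos * rr_smoking

# Selikoff's insulator figures: 5 (asbestos-exposed non-smokers) times
# 10 (smoking, non-asbestos workers) yields 50 (smoking asbestos workers).
print(combined_rr(5, 10))   # 50

# Substituting the updated, roughly 20-fold smoking risk, the same model
# would predict a joint relative risk of about 100, not the reported 50.
print(combined_rr(5, 20))   # 100
```

On this model, doubling the baseline smoking risk from 10-fold to 20-fold doubles the predicted joint risk, which is why the updated smoking estimates strained the neat 5 × 10 = 50 story.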

The more important lesson though is that the Philadelphia courts, with acquiescence from most defense counsel, never challenged the use of Selikoff’s 5x, 10x, and 50x risk ratios to describe asbestos effects and smoking interactions.  Dr. Epstein made such a challenge impolitic and imprudent.  In Philadelphia, the Selikoff risk ratios gained a measure of respectability that they never deserved in science, or in the courtroom.

*****

Under Rule 702, the law has evolved to require reasonable assessments of plaintiffs’ exposures, and supporting epidemiology that shows relevant increased risks at the exposure level, and with the latency, actually experienced by each plaintiff.  This criterion does not come from a “sufficiency” review, as some have suggested; it is clearly a requirement of external validity for the epidemiologic studies relied upon by expert witnesses.

The following cases excluded or limited expert witness opinion testimony with respect to epidemiological studies that the court concluded were not sufficiently similar to the facts of the case to warrant the admission of an expert’s opinion based on their results:

SUPREME COURT

General Electric Co. v. Joiner, 522 U.S. 136 (1997) (questioning the external validity of a study of massive injected doses of PCBs in baby mice, with an outcome unrelated to the cancer claimed by plaintiff)

1st Circuit

Sutera v. Perrier Group of America Inc., 986 F. Supp. 655 (D. Mass. 1997) (occupational epidemiology of benzene exposure does not inform health effects from vanishingly low exposures to benzene in bottled water)

Whiting v. Boston Edison Co., 891 F. Supp. 12 (D. Mass. 1995) (excluding plaintiff’s expert witnesses; holding that epidemiology of Japanese atom bomb victims, and of patients treated with X-rays for spinal arthritis, and acute lymphocytic leukemia (ALL), was an invalid extrapolative model for plaintiff’s much lower exposure)

2d Circuit

Wills v. Amerada Hess Corp., 2002 WL 140542 (S.D. N.Y. 2002)(excluding plaintiff’s expert witness who attempted to avoid exposure assessment by arguing no threshold)(‘‘[E]ven though benzene and PAHs have been shown to cause some types of cancer, it is too difficult a leap to allow testimony that says any amount of exposure to these toxins caused squamous cell carcinoma of the head and neck in the decedent… . It is not grounded in reliable scientific methods, but only Dr. Bidanset’s presumptions. It fails all of the Daubert factors.’’), aff’d, 379 F.3d 32 (2d Cir. 2004)(Sotomayor, J.), cert. denied, 126 S.Ct. 355 (2005)

Amorgianos v. National RR Passenger Corp., 137 F. Supp. 2d 147 (E.D.N.Y. 2001), aff’d, 303 F.3d 256 (2d Cir. 2002)

Mancuso v. Consolidated Edison Co., 967 F.Supp. 1437, 1444 (S.D.N.Y. 1997)

3d Circuit

Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584 (D.N.J. 2002), aff’d, 68 Fed. Appx. 356 (3d Cir. 2003)

In re W.R. Grace & Co., 355 B.R. 462 (Bankr. D. Del. 2006)

4th Circuit

White v. Dow Chemical Co., 321 F.Appx. 266, 273 (4th Cir. 2009)

Newman v. Motorola, Inc., 78 Fed. Appx. 292 (4th Cir. 2003)

Cavallo v. Star Enterprise, 892 F. Supp. 756, 764, 773 (E.D. Va. 1995) (excluding opinion of expert witness who failed to identify plaintiff ’s exposure levels to jet fuel, and failed to characterize the relevant dose-response relationship), aff’d in relevant part, 100 F.3d 1150, 1159 (4th Cir. 1996)

5th Circuit

LeBlanc v. Chevron USA, Inc., 396 Fed. Appx. 94 (5th Cir. 2010)

Brooks v. Ingram Barge Co., 2008 WL 5070243 *5 (N.D. Miss. 2008) (noting that plaintiff’s expert witness “acknowledges that it is unclear how much exhaust Brooks was exposed to, how much exhaust it takes to make developing cancer a probability, or how much other factors played a role in Brooks developing cancer.”)

Knight v. Kirby Inland Marine Inc., 482 F.3d 347 (5th Cir. 2007)

Cotroneo v. Shaw Environmental & Infrastructure, Inc., 2007 WL 3145791 (S.D. Tex. 2007)

Castellow v. Chevron USA, 97 F. Supp. 2d 780, 796 (S.D. Tex. 2000) (‘‘[T]here is no reliable evidence before this court on the amount of benzene, from gasoline or any other source, to which Mr. Castellow was exposed.’’)

Moore v. Ashland Chemical Inc., 151 F.3d 269, 278 (5th Cir. 1998) (en banc)

Cuevas v. E.I. DuPont de Nemours & Co., 956 F. Supp. 1306, 1312 (S.D. Miss. 1997)

Allen v. Pennsylvania Engineering Corp., 102 F.3d 194, 198-99 (5th Cir. 1996)

6th Circuit

Pluck v. BP Oil Pipeline Co., 640 F.3d 671 (6th Cir. 2011) (affirming district court’s exclusion of Dr. James Dahlgren; noting that he lacked reliable data to support his conclusion of heavy benzene exposure; holding that without quantifiable exposure data, Dahlgren’s causation opinion was mere “speculation and conjecture”)

Nelson v. Tennessee Gas Pipeline Co., 243 F.3d 244, 252 (6th Cir. 2001) (noting ‘‘with respect to the question of dose, plaintiffs cannot dispute that [their expert] made no attempt to determine what amount of PCB exposure the Lobelville subjects had received and simply assumed that it was sufficient to make them ill.’’)

Conde v. Velsicol Chemical Corp., 24 F.3d 809, 810 (6th Cir. 1994) (excluding expert testimony that chlordane, although an acknowledged carcinogen that was applied in a manner that violated federal criminal law, caused plaintiffs’ injuries, when the expert witness’s opinion was based upon high-dose animal studies as opposed to the low exposure levels experienced by the plaintiffs)

7th Circuit

Cunningham v. Masterwear Corp., 2007 WL 1164832 (S.D. Ind. Apr. 19, 2007) (excluding plaintiff’s expert witnesses who opined without valid evidence of plaintiffs’ exposure to perchloroethylene (PCE)), aff’d, 569 F.3d 673 (7th Cir. 2009) (Posner, J.) (affirming exclusion of expert witness and grant of summary judgment)

Wintz v. Northrop Corp., 110 F.3d 508, 513 (7th Cir. 1997)

Schmaltz v. Norfolk & Western Ry. Co., 878 F. Supp. 1119, 1122 (N.D. Ill. 1995) (excluding expert witness opinion testimony that was offered in ignorance of plaintiff’s level of exposure to herbicide)

8th Circuit

Junk v. Terminix Intern. Co. Ltd. Partnership, 594 F. Supp. 2d 1062, 1073 (S.D. Iowa 2008)

Medalen v. Tiger Drylac U.S.A., Inc., 269 F. Supp. 2d 1118, 1132 (D. Minn. 2003)

Savage v. Union Pacific RR, 67 F. Supp. 2d 1021 (E.D. Ark. 1999)

National Bank of Commerce v. Associated Milk Producers, Inc., 22 F. Supp. 2d 942 (E.D. Ark. 1998) (excluding causation opinion that lacked exposure level data), aff’d, 191 F.3d 858 (8th Cir. 1999)

Bednar v. Bassett Furniture Mfg. Co., Inc., 147 F.3d 737, 740 (8th Cir. 1998) (“The Bednars had to make a threshold showing that the dresser exposed the baby to levels of gaseous formaldehyde known to cause the type of injuries she suffered”)

Wright v. Willamette Industries, Inc., 91 F.3d 1105, 1106 (8th Cir. 1996) (affirming exclusion; requiring evidence of actual exposure to levels of substance known to cause claimed injury)

National Bank of Commerce v. Dow Chemical Co., 965 F. Supp. 1490, 1502 (E.D. Ark. 1996)

9th Circuit

Henricksen v. ConocoPhillips Co., 605 F. Supp. 2d 1142, 1157 (E.D. Wash. 2009)

In re Bextra & Celebrex Marketing Sales Practices & Product Liab. Litig., 524 F. Supp. 2d 1166, 1180 (N.D. Cal. 2007) (granting Rule 702 exclusion of expert witness’s opinions with respect to low-dose exposure, but admitting opinions with respect to high-dose Bextra and Celebrex)

Valentine v. Pioneer Chlor Alkali Co., Inc., 921 F. Supp. 666, 676 (D. Nev. 1996)

Abuan v. General Electric Co., 3 F.3d 329, 333 (9th Cir. 1993) (Guam)

10th Circuit

Estate of Mitchell v. Gencorp, Inc., 968 F. Supp. 592, 600 (D. Kan. 1997), aff’d, 165 F.3d 778, 781 (10th Cir. 1999)

Maddy v. Vulcan Materials Co., 737 F. Supp. 1528, 1533 (D. Kan. 1990) (noting the lack of any scientific evidence of the level or duration of plaintiff’s exposure to specific toxins)

11th Circuit

Chikovsky v. Ortho Pharmaceutical Corp., 832 F. Supp. 341, 345-46 (S.D. Fla. 1993) (excluding opinion of an expert witness who did not know plaintiff’s actual exposure or dose of Retin-A, or the level of absorbed Retin-A that is unsafe for gestating women)

 

STATE CASES

California

Jones v. Ortho Pharmaceutical Corp., 163 Cal. App. 3d 396, 404, 209 Cal. Rptr. 456, 461 (1985) (duration of use in the studies relied upon was not comparable to plaintiff’s use)

Michigan

Nelson v. American Sterilizer Co., 566 N.W. 2d 671 (Mich. Ct. App. 1997)(affirming exclusion of expert witness who opined, based upon high-dose animal studies, that plaintiff’s liver disease was caused by low-level exposure to chemicals used in sterilizing medical equipment)

Mississippi

Watts v. Radiator Specialty Co., 2008 WL 2372694 *3 (Miss. 2008)

Ohio

Valentine v. PPG Indus., Inc., 158 Ohio App. 3d 615, 821 N.E.2d 580 (2004)

Oklahoma

Christian v. Gray, 2003 Okla. 10, 65 P.3d 591, 601 (2003)

Holstine v. Texaco, 2001 WL 605137 (Okla. Dist. Ct. 2001) (excluding expert witness testimony that failed to assess whether plaintiff’s short-term, low-level benzene exposure fit the epidemiology relied upon to link plaintiff’s claimed injury with his exposure)

Texas

Merrell Dow Pharm., Inc. v. Havner, 953 S.W.2d 706, 720 (Tex. 1997) (“To raise a fact issue on causation and thus to survive legal sufficiency review, a claimant must do more than simply introduce into evidence epidemiological studies that show a substantially elevated risk. A claimant must show that he or she is similar to those in the studies.”).

Merck & Co. v. Garza, 347 S.W.3d 256 (Tex. 2011)

Frias v. Atlantic Richfield Co., 104 S.W.3d 925, 929 (Tex. App. Houston 2003)(holding that plaintiffs’ expert witness’s testimony was inadmissible for relying upon epidemiologic studies that involved much higher levels of exposure than experienced by plaintiff)

Daniels v. Lyondell-Citgo Refining Co., 99 S.W.3d 722 (Tex. App. 2003) (claim that benzene exposure caused plaintiff’s lung cancer had to be supported by studies involving exposure and latency comparable to the plaintiff’s own)

Austin v. Kerr-McGee Refining Corp., 25 S.W.3d 280, 292 (Tex. App. Texarkana 2000)

Giving Rule 703 the Cold Shoulder

May 12th, 2012

I have written previously about the gap in Rule 702, which provides a multi-factorial test for the admissibility of an opinion from a properly qualified expert witness:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

Noticeably absent from Rule 702 is any requirement that the facts or data upon which the expert witness relies be worth a damn.  From Rule 702(b), (c), and (d) alone, an expert witness, armed with sufficient unreliable, fraudulent, imaginary, or simply incorrect facts and data, using reliable principles and methods, and applying those principles and methods reliably to the facts of the case, gets to testify at trial.  Arguably, the first subsection, Rule 702(a), which limits testimony to helpful “knowledge,” provides an overriding condition that qualifies the next three.  It is difficult to imagine knowledge based upon unreliable facts and data.

Still, the failure to require reliable data explicitly within the scope of Rule 702 is disturbing.  This unhappy state of affairs, in which courts do not exercise gatekeeping over the quality of the data themselves, is apparently the law of the United States Court of Appeals for the Tenth Circuit.

In Pritchett v. I-Flow Corporation, the plaintiff had shoulder surgery, which required the use of a “pain pump” to inject anesthetic medication into the shoulder post-operatively.  The plaintiff went on to develop “chondrolysis” in his shoulder joint, a condition that involves partial or complete loss of cartilage in the joint.  Pritchett v. I-Flow Corp., Civil Action No. 09-cv-02433-WJM-KLM (D. Colo. April 17, 2012) (Mix, M.J.).

The opinion is a mechanical recitation of Daubert procedure and method, with little analysis of the expert witness’s opinion, until the magistrate judge describes the requirement of Rule 702(b) for “sufficient facts and data”:

“i. Sufficient Facts and Data

The proponent of the opinion must first show that the witness gathered “sufficient facts and data” to formulate the opinion. In the Tenth Circuit, assessment of the sufficiency of the facts and data used by the witness is a quantitative, rather than a qualitative, analysis. Fed. R. Evid. 702, Advisory Committee Notes to 2000 Amendments; see also United States v. Lauder, 409 F.3d 1254, 1264 n.5 (10th Cir. 2005). That is to say, the Court does not examine whether the facts obtained by the witness are themselves reliable; whether the facts used are qualitatively reliable is a question of the weight that should be given to the opinion by the fact-finder, not the admissibility of the opinion. Lauder, 409 F.3d at 1264. Instead, “this inquiry examines only whether the witness obtained the amount of data that the methodology itself demands.” Crabbe, 556 F. Supp. 2d at 1223.”

Pritchett v. I-Flow Corp. (emphasis added).  That is to say: the whole gatekeeping enterprise is really about appearances and not about trying to ensure more accurate fact finding.

Even if the court’s analysis of Rule 702 is correct, it is in any event incomplete, because it omits the important role of Rule 703:

Rule 703. Bases of an Expert’s Opinion Testimony

An expert may base an opinion on facts or data in the case that the expert has been made aware of or personally observed. If experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject, they need not be admissible for the opinion to be admitted. But if the facts or data would otherwise be inadmissible, the proponent of the opinion may disclose them to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.

According to Magistrate Judge Mix, the reliability of the facts and data does not count for gatekeeping.  Chalk up another loophole in the law’s requirement of reliable scientific evidence.

 

Philadelphia Plaintiff’s Claims Against Fixodent Prove Toothless

May 2nd, 2012

In Milward, Martyn Smith got a pass from the United States Court of Appeals for the First Circuit on his “weight of the evidence” (WOE) approach to formulating an opinion as an expert witness.  Last week, Smith’s WOE did not fare so well.  The Honorable Sandra Mazer Moss, in one of her last rulings as judge presiding over the Philadelphia Court of Common Pleas mass tort program, sprinkled some cheer to dispel WOE in Jacoby v. Rite Aid, PCCP (Order of April 27, 2012; Opinion of April 12, 2012).

Applying Pennsylvania’s Frye standard, Judge Moss upheld Procter & Gamble’s challenge to Dr. Martyn Smith, as well as to two other plaintiff’s expert witnesses, Dr. Ebbing Lautenbach and Dr. Frederick Askari.  The plaintiff, Mr. Mark Jacoby, used Fixodent for six years before he first experienced paresthesias and numbness in his hands and feet.  Jacoby’s expert witnesses claimed that Fixodent contains zinc compounds, which are released upon use, and are absorbed into the bloodstream.  Very high zinc levels suppress copper levels, and cause a copper deficiency myeloneuropathy.  Finding that the plaintiff’s causal claims were toothless in the face of sound science, Judge Moss excluded the reports and proffered testimony of Drs. Smith, Askari, and Lautenbach.

Although Pennsylvania courts follow a Frye standard, Judge Moss followed the lead of a federal judge who had previously examined the same body of evidence, and who excluded plaintiffs’ expert witnesses under Federal Rule of Evidence 702, in In re Denture Cream Prods. Liab. Litig., 795 F. Supp. 2d 1345 (S.D. Fla. 2011).  Without explication, Judge Moss stated that Judge Altonaga’s reasoning and conclusions, reached under federal law, were “very persuasive” under Frye.  Moss Opinion at 5.  In particular, Judge Moss appeared to be impressed by the lack of baseline incidence data on copper deficiency myeloneuropathy, the lack of exposure-response information, and the lack of risk ratios for any level of use of Fixodent.  Id. at 6-10.

Judge Moss accepted at face value Martyn Smith’s claims that WOE can be used to demonstrate causation when no individual study is conclusive.  Her Honor did, however, look more critically at the component parts of Smith’s particular application of WOE in the Jacoby case.  Smith used various steps of extrapolation, dose-response analysis, and differential diagnosis in applying WOE, but these steps were woefully unsound.  Id. at 9.  There was no evidence of how low, and for how long, a person’s copper levels must drop before injury results.  Having attacked Procter & Gamble’s pharmacokinetic studies, the plaintiffs’ expert witnesses had no basis for inferring zinc or copper levels for any plaintiff.  Furthermore, the plaintiffs’ witnesses had no baseline incidence data, and no risk ratios to apply for any level of exposure to, or use of, the defendant’s product.

Predictably, plaintiffs invoked the pass that Smith had received in Milward, but Judge Moss easily distinguished Milward as having involved baseline rates and risk ratios (even if Smith may have imagined the data used to calculate those ratios).

Another plaintiff’s witness, Dr. Askari, used a method he called the “totality of the evidence” (TOE) approach.  In short, TOE is WOE, and no good as applied in this case.  Id. at 10-11.

Finally, another plaintiff’s witness, Dr. Lautenbach, applied the Naranjo Adverse Drug Reaction Probability Scale, by which he purported to transmute case reports and case series into a conclusion of causality.  Actually, Lautenbach seems to have claimed only that the lack of analytical epidemiologic studies supporting an association between Fixodent and myeloneuropathy did not refute the existence of a causal relationship.  Of course, this lack of evidence hardly supports a causal relationship.  Judge Moss assumed that Lautenbach was actually asserting a causal relationship, but because he relied upon the same woefully, toefully flawed body of evidence, Her Honor excluded Dr. Lautenbach as well.  Id. at 12.

WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name

May 1st, 2012

Take all the evidence, throw it into the hopper, close your eyes, open your heart, and guess the weight.  You could be a lucky winner!  The weight of the evidence suggests that the weight-of-the-evidence (WOE) method is little more than subjective opinion, but why care if it helps you to get to a verdict?

The scientific community has never been seriously impressed by the so-called weight of the evidence (WOE) approach to determining causality.  The phrase is vague and ambiguous; its use, inconsistent.  See, e.g., V.H. Dale, G.R. Biddinger, M.C. Newman, J.T. Oris, G.W. Suter II, T. Thompson, et al., “Enhancing the ecological risk assessment process,” 4 Integrated Envt’l Assess. Management 306 (2008) (“An approach to interpreting lines of evidence and weight of evidence is critically needed for complex assessments, and it would be useful to develop case studies and/or standards of practice for interpreting lines of evidence.”); Igor Linkov, Drew Loney, Susan M. Cormier, F. Kyle Satterstrom & Todd Bridges, “Weight-of-evidence evaluation in environmental assessment: review of qualitative and quantitative approaches,” 407 Science of the Total Env’t 5199 (2009); Douglas L. Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of “weight of evidence” review); R.G. Stahl Jr., “Issues addressed and unaddressed in EPA’s ecological risk guidelines,” 17 Risk Policy Report 35 (1998) (noting that the U.S. Environmental Protection Agency’s guidelines for ecological weight-of-evidence approaches to risk assessment fail to provide guidance); Glenn W. Suter II & Susan M. Cormier, “Why and how to combine evidence in environmental assessments: Weighing evidence and building cases,” 409 Science of the Total Environment 1406, 1406 (2011) (noting the arbitrariness and subjectivity of WOE “methodology”).

 

General Electric v. Joiner

Most savvy judges quickly figured out that weight of the evidence (WOE) was suspect methodology, woefully lacking, and indeed, not really a methodology at all.

The WOE method was part of the hand waving in Joiner by plaintiffs’ expert witnesses, including the frequent testifier Rabbi Teitelbaum.  The majority recognized that Rabbi Teitelbaum’s WOE weighed in at less than a peppercorn, and affirmed the district court’s exclusion of his opinions.  The Joiner Court’s assessment provoked a dissent from Justice Stevens, who was troubled by the Court’s undressing of the WOE methodology:

“Dr. Daniel Teitelbaum elaborated on that approach in his deposition testimony: ‘[A]s a toxicologist when I look at a study, I am going to require that that study meet the general criteria for methodology and statistical analysis, but that when all of that data is collected and you ask me as a patient, Doctor, have I got a risk of getting cancer from this? That those studies don’t answer the question, that I have to put them all together in my mind and look at them in relation to everything I know about the substance and everything I know about the exposure and come to a conclusion. I think when I say, “To a reasonable medical probability as a medical toxicologist, this substance was a contributing cause,” … to his cancer, that that is a valid conclusion based on the totality of the evidence presented to me. And I think that that is an appropriate thing for a toxicologist to do, and it has been the basis of diagnosis for several hundred years, anyway’.

* * * *

Unlike the District Court, the Court of Appeals expressly decided that a ‘weight of the evidence’ methodology was scientifically acceptable. To this extent, the Court of Appeals’ opinion is persuasive. It is not intrinsically “unscientific” for experienced professionals to arrive at a conclusion by weighing all available scientific evidence—this is not the sort of ‘junk science’ with which Daubert was concerned. After all, as Joiner points out, the Environmental Protection Agency (EPA) uses the same methodology to assess risks, albeit using a somewhat different threshold than that required in a trial.  Petitioners’ own experts used the same scientific approach as well. And using this methodology, it would seem that an expert could reasonably have concluded that the study of workers at an Italian capacitor plant, coupled with data from Monsanto’s study and other studies, raises an inference that PCB’s promote lung cancer.”

General Electric v. Joiner, 522 U.S. 136, 152-54 (1997) (Stevens, J., dissenting) (internal citations omitted) (confusing critical assessment of studies with WOE, and quoting Rabbi Teitelbaum’s attempt to conflate diagnosis with etiological attribution).  Justice Stevens could reach his assessment only by ignoring the serious lack of internal and external validity in the studies relied upon by Rabbi Teitelbaum.  Those studies did not support his opinion, individually or collectively.

Justice Stevens was wrong as well about the claimed scientific adequacy of WOE.  Courts have long understood that precautionary, preventive judgments of regulatory agencies are different from scientific conclusions that are admissible in civil and criminal litigation.  See Allen v. Pennsylvania Engineering Corp., 102 F.3d 194 (5th Cir. 1996)(WOE, although suitable for regulatory risk assessment, is not appropriate in civil litigation).  Justice Stevens’ characterization of WOE was little more than judicial ipse dixit, and it was, in any event, not the law; it was the argument of a dissenter.

 

Milward v. Acuity Specialty Products

Admittedly, dissents can sometimes help lower court judges chart a path of evasion and avoidance of a higher court’s holding.  In Milward, Justice Stevens’ mischaracterization of WOE and scientific method was adopted as the legal standard for expert witness testimony by a panel of the United States Court of Appeals for the First Circuit.  Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137 (D. Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. denied, U.S. Steel Corp. v. Milward, ___ U.S. ___, 2012 WL 33303 (2012).

Mr. Milward claimed that he was exposed to benzene as a refrigerator technician, and that he developed acute promyelocytic leukemia (APL) as a result.  664 F. Supp. 2d at 140.  In support of his claim, Mr. Milward offered the testimony of Dr. Martyn T. Smith, a toxicologist, who testified that the “weight of the evidence” supported his opinion that benzene exposure causes APL.  Id.  Smith, in his litigation report, described his methodology as an application of WOE:

“The term WOE has come to mean not only a determination of the statistical and explanatory power of any individual study (or the combined power of all the studies) but the extent to which different types of studies converge on the hypothesis. In assessing whether exposure to benzene may cause APL, I have applied the Hill considerations. Nonetheless, application of those factors to a particular causal hypothesis, and the relative weight to assign each of them, is both context dependent and subject to the independent judgment of the scientist reviewing the available body of data. For example, some WOE approaches give higher weight to mechanistic information over epidemiological data.”

Smith Report at ¶¶19, 21 (citing Sheldon Krimsky, “The Weight of Scientific Evidence in Policy and Law,” 95(S1) Am. J. Public Health 5130, 5130-31 (2005))(March 9, 2009).  Smith marshaled several bodies of evidence, which he claimed collectively supported his opinion that benzene causes APL.  Milward, 664 F. Supp. 2d at 143.

Milward also offered the testimony of a philosophy professor, Carl F. Cranor, for the opinion that WOE was an acceptable methodology, and that all scientific inference is subject to judgment.  This is the same Cranor who, advocating for open admission of all putative scientific opinions, showcased his confusion between statistical significance probability and the posterior probability involved in a conclusion of causality.  Carl F. Cranor, Regulating Toxic Substances: A Philosophy of Science and the Law at 33-34 (Oxford 1993)(“One can think of α, β (the chances of type I and type II errors, respectively) and 1- β as measures of the ‘risk of error’ or ‘standards of proof.’”)  See also id. at 44, 47, 55, 72-76.

After a four-day evidentiary hearing, the district court found that Martyn Smith’s opinion was merely a plausible hypothesis, and not admissible.  Milward, 664 F. Supp. 2d at 149.  The Court of Appeals, in an opinion by Chief Judge Lynch, however, reversed, and ruled that an inference of general causation based on a WOE methodology satisfied the reliability requirement for admission under Federal Rule of Evidence 702.  639 F.3d at 26.  According to the Circuit, the WOE methodology was scientifically sound.  Id. at 22-23.

 

WOE Cometh

Because the WOE methodology is not well described, either in the published literature or in Martyn Smith’s litigation report, it is difficult to understand exactly what the First Circuit approved by reversing Smith’s exclusion.  Usually the burden is on the proponent of the opinion testimony, and one would have thought that the vagueness of the described methodology would count against admissibility.  It is hard to escape the conclusion that the Circuit elevated a poorly described method, best characterized as hand waving, into a description of scientific method.

The Panel appeared to have been misled by Carl F. Cranor, who described “inference to the best explanation” as requiring a scientist to “consider all of the relevant evidence” and “integrate the evidence using professional judgment to come to a conclusion about the best explanation.”  Id. at 18.  The available explanations are then weighed, and a would-be expert witness is free to embrace the one he feels offers the “best” explanation.  The appellate court’s opinion takes WOE, combined with Cranor’s “inference to the best explanation,” to hold that an expert witness need only opine that he has considered the range of plausible explanations for the association, and that he believes that the causal explanation is the best or “most plausible.”  Id. at 20 (upholding this approach as “methodologically reliable”).

What is missing of course is the realization that plausible does not mean established, reasonably certain, or even more likely than not.  The Circuit’s invocation of plausibility also obscures the indeterminacy of the available data for supporting a reliable conclusion of causation in many cases.

Curiously, the Panel likened WOE to the use of differential diagnosis, which is a method for inferring the specific cause of a particular patient’s disease or disorder.  Id. at 18.  This is a serious confusion between a method concerned with general causation and one concerned with specific causation.  By the principle of charity, we might allow that the First Circuit was thinking of some process of differential etiology rather than diagnosis, given that diagnoses (other than for infectious diseases and a few pathognomonic disorders) do not usually carry with them information about unique etiologic agents.  But even such a process of differential etiology is a well-structured disjunctive syllogism of the form:

A ∨ B ∨ C

¬A ∧ ¬B

∴ C

There is nothing subjective about assigning weights or drawing inferences in applying such a syllogism.  In the Milward case, one of the propositional facts that might well have explained the available evidence was chance, but plaintiff’s expert witness Smith could not and did not rule out chance, because the studies upon which he relied were not statistically significant.  Smith could thus never get past “therefore” in any syllogism or in any other recognizable process of reasoning.

The Circuit Court provides no insight into the process Smith used to weigh the available evidence, and it failed to address the analytical gaps and evidentiary insufficiencies addressed by the trial court, other than to invoke the mantra that all these issues go to “the weight, not the admissibility” of Smith’s opinions.  This, of course, is a conclusion, not an explanation or a legal theory.

There is also a cute semantic trick lurking in plaintiffs’ position in Milward, which results from their witnesses describing their methodology as “WOE.”  Since the jury is charged with determining the “weight of the evidence,” any evaluation of the WOE would be an invasion of the province of the jury.  Milward, 639 F.3d at 20.  QED, by the semantic device of deliberately conflating the name of the putative scientific methodology with the term traditionally used to describe jury fact finding.

In any event, the Circuit’s chastisement of the district court for evaluating Smith’s implementation of the WOE methodology, including his logical, mathematical, and epidemiological errors and his result-driven reinterpretation of study data, threatens to read an Act of Congress — the Federal Rules of Evidence, and especially Rules 702 and 703 — out of existence by judicial fiat.  The Circuit’s approach is also at odds with Supreme Court precedent (now codified in Rule 702) on the importance and the requirement of evaluating opinion testimony for analytical gaps and the ipse dixit of expert witnesses.  General Electric Co. v. Joiner, 522 U.S. 136, 146 (1997).

 

Smith’s Errors in Recalculating Odds Ratios of Published Studies

In the district court, the defendants presented testimony of an epidemiologist, Dr. David H. Garabrant, who took Smith to task for calculating risk ratios incorrectly.  Smith did not have any particular expertise in epidemiology, and his faulty calculations were problematic from the perspective of both Rule 702 and Rule 703.  The district court found the criticisms of Smith’s calculations convincing, 664 F. Supp. 2d at 149, but the appellate court held that the technical dispute was for the jury; “both experts’ opinions are supported by evidence and sound scientific reasoning.”  Milward, 639 F.3d at 24.  This ruling is incomprehensible.  Plaintiffs had the burden of showing the admissibility of Smith’s opinion generally, but also the reasonableness of his reliance upon the calculated odds ratio.  The defendants had no burden of persuasion on the issue of Smith’s calculations, but they presented testimony, which apparently carried the day in the district court.  The appellate court had no basis for reversing the specific ruling with respect to the erroneously calculated risk ratio.
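To see what is at stake in such a calculation dispute, consider the standard cross-product formula for an odds ratio from a case-control study.  The numbers below are invented purely for illustration; they are not drawn from any study in the Milward record:

```python
# Hypothetical 2x2 case-control table (invented numbers, not from the record):
#                exposed   unexposed
# cases             a=20        b=80
# controls          c=10        d=90
a, b, c, d = 20, 80, 10, 90

# The odds ratio is the cross-product ratio ad/bc.  In a case-control design,
# risks (and hence risk ratios) cannot be estimated directly, because the
# investigator fixes the numbers of cases and controls; the odds ratio is the
# appropriate measure, and treating case-control data as if they yielded
# risks is one of the classic calculation errors.
odds_ratio = (a * d) / (b * c)
print(odds_ratio)  # 2.25
```

The arithmetic is trivial; the epidemiologic judgment lies in choosing the right measure for the study design, which is precisely the kind of error an opposing expert like Dr. Garabrant can identify.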

 

Smith’s Reliance upon Statistically Insignificant Studies

Smith relied upon studies that were not statistically significant at any accepted level.  An opinion of causality requires a showing that chance, bias, and confounding have been excluded in assessing an existing association.  Smith failed to exclude chance as an explanation for the association, and the burden to make this exclusion was on the plaintiffs.  This failure was not something that could readily be patched by adverting to other evidence, such as studies in animals or in test tubes.  The Court of Appeals excused this important analytical gap in plaintiffs’ witness’s opinion because APL is rare, and data collection is difficult in the United States.  Id. at 24.  Evidence “consistent with” and “suggestive of” the challenged witness’s opinion thus suffices.  This is a remarkable homeopathic dilution of both legal and scientific causation.  Now we have a rule of law that allows plaintiffs to be excused from having to prove their case with reliable evidence if they allege a rare disease for which they lack evidence.
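The point about chance can be made concrete.  A result is conventionally deemed statistically significant at the five percent level only when the 95 percent confidence interval for the odds ratio excludes 1.0.  A minimal sketch, using Woolf’s logit method and invented numbers (not from any study Smith cited):

```python
import math

# Hypothetical 2x2 table (invented numbers for illustration only):
# exposed cases, unexposed cases, exposed controls, unexposed controls.
a, b, c, d = 9, 91, 6, 94

or_ = (a * d) / (b * c)                  # point estimate of the odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf's standard error of ln(OR)
lower = math.exp(math.log(or_) - 1.96 * se)
upper = math.exp(math.log(or_) + 1.96 * se)

# Here the interval spans 1.0, so chance has not been excluded at the
# conventional 95% level, even though the point estimate exceeds 1.0.
print(round(or_, 2), round(lower, 2), round(upper, 2))
```

A study of this sort shows an elevated point estimate, but its confidence interval includes 1.0, which is what it means to say that chance has not been ruled out as an explanation for the observed association.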

 

Leveling the Hierarchy of Evidence

Imagine trying to bring a medication to market with a small case-control study, with a non-statistically significant odds ratio!  Oh, but these clinical trials are so difficult and expensive; and they take such a long time.  Like a moment’s thought, when thinking is so hard and a moment such a long time.  We would be quite concerned if the FDA abridged the standard for causal efficacy in the licensing of new medications; we should be just as concerned about judicial abridgments of standards for causation of harm in tort actions.

Leveling the hierarchy of evidence has been an explicit or implicit goal of several law professors.  Some of the leveling efforts even show up in the new Reference Manual for Scientific Evidence (RMSE 3d ed. 2011).  See “New-Age Levellers – Flattening Hierarchy of Evidence.”

The Circuit, in Milward, quoted an article published in the Journal of the National Cancer Institute by Michele Carbone and others, who suggest that there should be no hierarchy, but the Court ignored a huge body of literature that explains and defends the need for recognizing that not all study designs or types are equal.  Interestingly, the RMSE chapter on epidemiology by Professor Green (see more below) cites the same article.  RMSE 3d at 564 & n.48 (citing and quoting symposium paper that “[t]here should be no hierarchy [among different types of scientific methods to determine cancer causation]. Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.” Michele Carbone et al., “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Res. 5518, 5522 (2004)).  Carbone, of course, is best known for his advocacy of a viral cause (SV40) of human mesothelioma, a claim unsupported, and indeed contradicted, by epidemiologic studies.  Carbone’s statement does not support the RMSE chapter’s leveling of epidemiology and toxicology, and Carbone is, in any event, an unlikely source to cite.

The First Circuit, in Milward, studiously ignored a mountain of literature on evidence-based medicine, including the RMSE 3d chapter on “Reference Guide on Medical Testimony,” which teaches that leveling of study designs and types is inappropriate.  The RMSE chapter devotes several pages to explaining the role of study design in assessing an etiological issue:

3. Hierarchy of medical evidence

With the explosion of available medical evidence, increased emphasis has been placed on assembling, evaluating, and interpreting medical research evidence.  A fundamental principle of evidence-based medicine (see also Section IV.C.5, infra) is that the strength of medical evidence supporting a therapy or strategy is hierarchical.

When ordered from strongest to weakest, systematic review of randomized trials (meta-analysis) is at the top, followed by single randomized trials, systematic reviews of observational studies, single observational studies, physiological studies, and unsystematic clinical observations.150 An analysis of the frequency with which various study designs are cited by others provides empirical evidence supporting the influence of meta-analysis followed by randomized controlled trials in the medical evidence hierarchy.151 Although they are at the bottom of the evidence hierarchy, unsystematic clinical observations or case reports may be the first signals of adverse events or associations that are later confirmed with larger or controlled epidemiological studies (e.g., aplastic anemia caused by chloramphenicol,152 or lung cancer caused by asbestos153). Nonetheless, subsequent studies may not confirm initial reports (e.g., the putative association between coffee consumption and pancreatic cancer).154

John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” RMSE 3d 687, 723-24 (2011).  The implication that there is no hierarchy of evidence in causal inference, and that tissue culture studies are as relevant as epidemiology, is patently absurd.  The Circuit not only went out on a limb, it managed to saw the limb off, while “out there.”

 

Milward – Responses Critical and Otherwise

The First Circuit’s decision in Milward made an immediate impression upon those writers who have worked hard to dismantle or marginalize Rule 702.  The Circuit’s decision was mysteriously cited with obvious approval by Professor Margaret Berger, even though she had died before the decision was published!  Margaret A. Berger, “The Admissibility of Expert Testimony,” RMSE 3d at 20 & n.51 (2011).  Professor Michael Green, one of the reporters for the ALI’s Restatement (Third) of Torts, hyperbolically called Milward “[o]ne of the most significant toxic tort causation cases in recent memory.”  Michael D. Green, “Introduction: Restatement of Torts as a Crystal Ball,” 37 Wm. Mitchell L. Rev. 993, 1009 n.53 (2011).

The WOE approach, and its embrace in Milward, obscures the reality that sometimes the evidence does not logically or analytically support the offered conclusion, and at other times, the best explanation is uncertainty.  By adopting the WOE approach, vague and ambiguous as it is, the Milward Court was beguiled into holding that WOE determinations are for the jury.  The lack of meaningful content of WOE means that decisions such as Milward effectively remove the gatekeeping function, or permit that function to be minimally satisfied by accepting an expert witness’s claim to have employed WOE.  The epistemic warrant required by Rule 702 is diluted if not destroyed.  Scientific hunch and speculation, proper in their place, can be passed off for scientific knowledge to gullible or result-oriented judges and juries.

Admissibility versus Sufficiency of Expert Witness Evidence

April 18th, 2012

Professors Michael Green and Joseph Sanders are two of the longest serving interlocutors in the never-ending discussion and debate about the nature and limits of expert witness testimony on scientific questions about causation.  Both have made important contributions to the conversation, and both have been influential in academic and judicial circles.  Professor Green has served as the co-reporter for the American Law Institute’s Restatement (Third) of Torts: Liability for Physical Harm.  Whether wrong or right, new publications about expert witness issues by Green or Sanders call for close attention.

Early last month, Professors Green and Sanders presented together at a conference, on “Admissibility Versus Sufficiency: Controlling the Quality of Expert Witness Testimony in the United States.” Video and audio of their presentation can be found online.  The authors posted a manuscript of their draft article on expert witness testimony to the Social Science Research Network.  See Michael D. Green & Joseph Sanders, “Admissibility Versus Sufficiency: Controlling the Quality of Expert Witness Testimony in the United States,” <downloaded on March 25, 2012>.

The authors argue that most judicial exclusions of expert witness causal opinion testimony are based upon a judgment that the challenged witness’s opinion is based upon insufficient evidence.  They point to litigations, such as the Bendectin and silicone gel breast implant cases, where the defense challenges were supported in part by a body of “exonerative” epidemiologic studies.  Legal theory construction is always fraught with danger: a theory either stands to be readily refuted by counterexample, or it is put forward as a normative, prescriptive tool to change the world, and thus lacks any descriptive or explanatory component.  Green and Sanders, however, seem to be earnest in suggesting that their reductionist approach is both descriptive and elucidative of actual judicial practice.

The authors’ reductionist approach in this area, and especially as applied to the Bendectin and silicone decisions, however, ignores that even before the so-called exonerative epidemiology on Bendectin and silicone was available, the plaintiffs’ expert witnesses were presenting opinions on general and specific causation, based upon studies and evidence of dubious validity.  Given that the silicone litigation erupted before Daubert was decided, and the Bendectin cases pretty much ended with Daubert, neither litigation really permits a clean before-and-after picture.  Before Daubert, courts struggled with how to handle both the invalidity and the insufficiency (once the impermissible inferences were stripped away) in the Bendectin cases.  And before Daubert, all silicone cases went to the jury.  Even after Daubert, for some time, silicone cases resulted in jury verdicts, which were upheld on appeal.  It took defendants some time to uncover the nature and extent of the invalidity in plaintiffs’ expert witnesses’ opinions, the invalidity of the studies upon which these witnesses relied, and the unreasonableness of the witnesses’ reliance upon various animal and in vitro toxicologic and immunologic studies.  And it took trial courts a few years after the Supreme Court’s 1993 Daubert decision to warm up to their new assignment.  Indeed, Green and Sanders get a good deal of mileage in their reductionist approach from trial and appellate courts that were quite willing to collapse the distinction between reliability or validity on the one hand, and sufficiency on the other.  Some of those “back benching” courts used consensus statements and reviews, which both marshaled the contrary evidence and documented the invalidity of the would-be affirmative evidence.  This judicial reliance upon external sources that encompassed both sufficiency and reliability should not be understood to mean that reliability (or validity) is nothing other than sufficiency.

A post-Daubert line of cases is more revealing: the claim that the ethyl mercury vaccine preservative, thimerosal, causes autism.  Professors Green and Sanders touch briefly upon this litigation.  See Blackwell v. Wyeth, 971 A.2d 235 (Md. 2009).  Plaintiffs’ expert witness, David Geier, had published several articles in which he claimed to have supported a causal nexus between thimerosal and autism.  Green and Sanders dutifully note that the Maryland courts ultimately rejected the claims based upon Geier’s data as wholly inadequate, standing alone, to support the inference he zealously urged to be drawn.  Id. at 32.  Whether this is insufficiency or the invalidity of his ultimate inference of causation from an inadequate data set perhaps can be debated, but surely the validity concerns should not be lost in the shuffle of evaluating the evidence available.  Of course, exculpatory epidemiologic studies ultimately were published, based upon high-quality data and inferences, but strictly speaking, these studies were not necessary to the process of ruling Geier’s advocacy science out of bounds for valid scientific discourse and legal proceedings.

Some additional comments.

 

1. Questionable reductionism.  The authors describe the thrust of their argument as a need to understand judicial decisions on expert witness admissibility as “sufficiency judgments.”  While their analysis simplifies the gatekeeping decisions, it also abridges the process in a way that omits important determinants of the law and its application.  Sufficiency, or the lack thereof, is often involved as a fatal deficiency in expert witness opinion testimony on causal issues, but the authors’ attempt to reduce many exclusionary decisions to insufficiency determinations ignores the many ways that expert witnesses (and scientists in the real world outside of courtrooms) go astray.  The authors’ reductionism seems a weak, if not flawed, predictive, explanatory, and normative theory of expert witness gatekeeping.  Furthermore, this reductionism holds a false allure to judges who may be tempted to oversimplify their gatekeeping task by conflating gatekeeping with the jury’s role:  exclude the proffered expert witness opinion testimony because, considering all the available evidence, the testimony is probably wrong.

 

2. Weakness of peer review, publication, and general acceptance in predicting gatekeeping decisions.  The authors further describe a “sufficiency approach” as openly acknowledging the relative unimportance of peer review, publication, and general acceptance.  Id. at 39.  These factors do not lack importance because they are unrelated to sufficiency; they are unimportant because they are weak proxies for validity.  Their presence or absence does not really help predict whether the causal opinion offered is invalid or otherwise unreliable.  The existence of published, high-quality, peer-reviewed systematic reviews does, however, bear on sufficiency of the evidence.  At least in some cases, courts consider such reviews and rely upon them heavily in reaching a decision on Rule 702, but we should ask to what extent the court has simply avoided the hard work of thinking through the problem on its own.

 

3. Questionable indictment of juries and the adversarial system for the excesses of expert witnesses.  Professors Green and Sanders describe the development of common law, and rules, to control expert witness testimony as “a judicial attempt to moderate the worst consequences of two defining characteristics of United States civil trials:  party control of experts and the widespread use of jury decision makers.”  Id. at 2.  There is no doubt that these are two contributing factors in some of the worst excesses, but the authors really offer no support for their causal judgment.  The experience of courts in Europe, where civil juries and party control of expert witnesses are often absent from the process, raises questions about Green and Sanders’ attribution.  See, e.g., R. Meester, M. Collins, R.D. Gill & M. van Lambalgen, “On the (ab)use of statistics in the legal case against the nurse Lucia de B,” 5 Law, Probability and Risk 233 (2007)(describing the conviction of Nurse Lucia de Berk in the Netherlands, based upon shabby statistical evidence).

Perhaps a more general phenomenon is at play, such as an epistemologic pathology of expert witnesses who feel empowered and unconstrained by speaking in court, to untutored judges or juries.  The thrill of power, the arrogance of asserted opinion, the advancement of causes and beliefs, the lure of lucre, the freedom from contradiction, and a whole array of personality quirks are strong inducements for expert witnesses, in many countries, to outrun their scientific headlights.  See Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans”; “[t]he breast implant litigation was largely based on a litigation fraud. … Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”)

In any event, there have been notoriously bad verdicts in cases decided by trial judges as the finders of fact.  See, e.g., Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986); Barrow v. Bristol-Myers Squibb Co., 1998 WL 812318, at *23 (M.D. Fla. Oct. 29, 1998)(finding for breast implant plaintiff whose claims were supported by dubious scientific studies), aff’d, 190 F.3d 541 (11th Cir. 1999).  Bad things can happen in the judicial process even without the participation of lay juries.

Green and Sanders are correct to point out that juries are often confused by scientific evidence, and lack the time, patience, education, and resources to understand it.  Same for judges.  The real difference is that the decisions of judges are public.  Judges are expected to explain their reasoning, and there is some, even if limited, appellate review of judicial gatekeeping decisions.  In this vein, Green and Sanders dismiss the hand wringing over disagreements among courts on admissibility decisions by noting that similar disagreements over evidentiary sufficiency issues fill the appellate reporters.  Id. at 37.  Green and Sanders might well add that at least the disagreements are out in the open, advanced with supporting reasoning, for public discussion and debate, unlike the unimpeachable verdicts of juries, with their cloistered, secretive reasoning or lack thereof.

In addition, Green and Sanders fail to mention a considerable problem:  the admission of weak, pathologic, or overstated scientific opinion undermines confidence in judicial judgments based upon verdicts that come out of a process featuring the dubious opinions of expert witnesses.  The public embarrassment of the court system for its judgments, based upon questionable expert witness opinion testimony, was a strong inducement to changing the libertine pre-Daubert laissez-faire approach.

 

4.  Failure to consider the important role of Rule 703, which is quite independent of any “sufficiency” considerations, in the gatekeeping process.  Green and Sanders properly acknowledge the historical role that Rule 703, of the Federal Rules of Evidence, played in judicial attempts to regain some semblance of control over expert witness opinion.  They do not pursue the issue of its present role, which is often neglected and underemphasized.  In part, Rule 703, with its requirement that courts screen expert witness reliance upon independently inadmissible evidence (which means virtually all epidemiologic and animal studies and their data analyses), goes to the heart of gatekeeping by requiring judges to examine the quality of study data, and the reasonableness of reliance upon such data, by testifying expert witnesses.  See Schachtman, RULE OF EVIDENCE 703 — Problem Child of Article VII (Sept. 19, 2011).  Curiously, the authors try to force Rule 703 into their sufficiency pigeonhole even though it calls for a specific inquiry into the reasonableness (vel non) of reliance upon specific (hearsay or otherwise inadmissible) studies.  In my view, Rule 703 is predominantly a validity, and not a sufficiency, inquiry.

Judge Weinstein’s use of Rule 703, in In re Agent Orange, to strip out the most egregiously weak evidence did not predominantly speak to the evidentiary insufficiency of the plaintiffs’ expert witnesses’ reliance materials; nor did it look to the defendants’ expert witnesses’ reliance upon contradicting evidence.  Judge Weinstein was troubled by the plaintiffs’ expert witnesses’ reliance upon hearsay statements, from biased witnesses, about the plaintiffs’ medical condition.  Judge Weinstein did, of course, famously apply sufficiency criteria, including relative risks too low to permit an inference of specific causation, and the insubstantial totality of the evidence, but Judge Weinstein’s judicial philosophy then was to reject Rule 702 as a quality-control procedure for expert witness opinion testimony.  See In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 785, 817 (E.D.N.Y. 1984)(plaintiffs must prove at least a two-fold increase in rate of disease allegedly caused by the exposure), aff’d, 818 F.2d 145, 150-51 (2d Cir. 1987)(approving district court’s analysis), cert. denied sub nom. Pinkney v. Dow Chemical Co., 484 U.S. 1004 (1988); see also In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223, 1240, 1262 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).  A decade later, in the breast implant litigation, Judge Weinstein adhered to his rejection of Rule 702 as a vehicle for explicit expert witness validity or sufficiency rulings, and instead granted summary judgment on the entire evidentiary display.  That assessment of sufficiency was not, however, driven by the rules of evidence; it was based firmly upon Federal Rule of Civil Procedure 56’s empowerment of the trial judge to make an overall assessment that plaintiffs lacked a submissible case.  See In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996)(granting summary judgment because of insufficiency of plaintiffs’ evidence, but specifically declining to rule on defendants’ Rule 702 and Rule 703 motions).  Within a few years, court-appointed expert witnesses, and the Institute of Medicine, weighed in with withering criticisms of plaintiffs’ attempted scientific case.  Given that there was so little valid evidence, sufficiency really never was at issue for these experts, but Judge Weinstein chose to frame the issue as sufficiency to avoid ruling on the pending motions under Rule 702.

 

5. Re-analyzing Re-analysis.  In the Bendectin litigation, some of the plaintiffs’ expert witnesses sought to offer various re-analyses of published papers.  Defendant Merrell Dow objected, and appears to have framed its objections as general prohibitions against unpublished re-analyses of published papers.  Green and Sanders properly note that some of the defense arguments, to the extent stated generally as prohibitions against re-analyses, were overblown and overstated.  Re-analyses can take so many forms, and the quality of peer-reviewed papers is so variable, that it would be foolhardy to frame a judicial rule as a prohibition against re-analyzing data in published studies.  Indeed, so many studies are published with incorrect statistical analyses that parties and expert witnesses have an obligation to call the problems to the courts’ attention, and to correct the problems when possible.

The notion that peer review was important in any way to serve as a proxy for reliability or validity has not been borne out.  Similarly, the suggestion that reanalyses of existing data from published papers were presumptively suspect was also not well considered.  Id. at 13.

 

6. Comments dismissive of statistical significance and methodological rigor.  Judgments of causality are, at the end of the day, qualitative judgments, but is it really true that:

“Ultimately, of course, regardless of how rigorous the methodology of more probative studies, the magnitude of any result and whether it is statistically significant, judgment and inference is required as to whether the available research supports an inference of causation.”

Id. at 16 (citing among sources a particularly dubious case, Milward v. Acuity Specialty Prods. Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied, ___ U.S. ___ (2012)).  Can the authors really intend to say that the judgment of causal inference is or should be made “regardless” of the rigor of methodology, regardless of statistical significance, and regardless of a hierarchy of study probativeness?  Perhaps the authors simply meant to say that, at the end of the day, judgments of causal inference are qualitative judgments.  As much as I would like to extend the principle of charity to the authors, their own labeling of appellate decisions contrary to Milward as “silly” makes the benefit of the doubt seem inappropriate.


7.  The shame of scientists and physicians opining on specific causation.  Green and Sanders acknowledge that judgments of specific causation – the causation of harm in a specific person – are often uninformed by scientific considerations, and that Daubert criteria are unhelpful.

“Unfortunately, outside the context of litigation this is an inquiry to which most doctors devote very little time.46  True, they frequently serve as expert witnesses in such cases (because the law demands evidence on this issue) but there is no accepted scientific methodology for determining the cause of an individual’s disease and, therefore, the error rate is simply unknown and unquantifiable.47”

Id. at 18. (Professor Green’s comments at the conference seemed even more apodictic.) The authors, however, seem to have no sense of outrage that expert witnesses offer opinions on this topic, for which the witnesses have no epistemic warrant, and that courts accept these facile, if not fabricated, judgments.  Furthermore, specific causation is very much a scientific issue.  Scientists may, as a general matter, concentrate on population studies that show associations, which may be found to be causal, but some scientists have worked on gene associations that define extremely high-risk sub-populations, which account for much of the overall population risk.  As Green and Sanders acknowledge, when the relative risk is extremely high (say > 100), we do not need any fancy math to know that most cases in the exposed group would not have occurred but for their exposure.  A tremendous amount of scientific work has been done to identify biomarkers of increased risk, and to tie the increased risk to an agent-specific causal mechanism.  See, e.g., Gregory L. Erexson, James L. Wilmer, and Andrew D. Kligerman, “Sister Chromatid Exchange Induction in Human Lymphocytes Exposed to Benzene and Its Metabolites in Vitro,” 45 Cancer Research 2471 (1985).
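The arithmetic behind the high-relative-risk point is simple: the attributable fraction among the exposed is (RR − 1)/RR.  A minimal sketch (the helper function and its name are mine, offered for illustration only):

```python
def probability_of_causation(rr):
    """Attributable fraction among the exposed: (RR - 1) / RR.

    Hypothetical helper, not anything drawn from Green and Sanders.
    """
    if rr <= 1.0:
        return 0.0
    return (rr - 1.0) / rr

# RR = 2 is the familiar break-even point: half of exposed cases attributable.
print(probability_of_causation(2.0))    # 0.5
# RR > 100: virtually every exposed case is attributable to the exposure.
print(probability_of_causation(100.0))  # 0.99
```

At RR = 100, the exposure accounts for 99 percent of cases among the exposed, which is why no refined probabilistic machinery is needed for the but-for judgment.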


8. Sufficiency versus admissibility.  Green and Sanders opine that many gatekeeping decisions, such as the Bendectin and breast implant cases, should be understood as sufficiency decisions that have incorporated the significant exculpatory epidemiologic evidence offered by defendants.  Id. at 20.  The “mature epidemiologic evidence” overwhelmed the plaintiffs’ meager evidence to the point that a jury verdict was not sustainable as a matter of law.  Id.  The authors’ approach requires a weighing of the complete evidentiary display, “entirely apart from the [plaintiffs’] expert’s testimony, to determine the overall sufficiency and reasonableness of the claimed inference of causation.”  Id. at 21.  What is missing from this approach is that, even without the defendants’ mature or solid body of epidemiologic evidence, the plaintiffs’ expert witnesses were urging an inference of causation based upon fairly insubstantial evidence.

Green and Sanders are concerned, no doubt, that if sufficiency were the main driver of exclusionary rulings, a disconnect would open between the appellate standard of review for expert witness opinion admissibility, which is reversed only for an “abuse of discretion” by the trial court, and the standard of review for typical grants of summary judgment, which are evaluated “de novo” by the appellate court.  Green and Sanders hint that the expert witness decisions, which they see as mainly sufficiency judgments, may not be appropriate for the non-searching “abuse of discretion” standard.  See id. at 40-41 (citing the asymmetric “hard look” approach taken in In re Paoli RR Yard PCB Litig., 35 F.3d 717, 749-50 (3d Cir. 1994), and in the intermediate appellate court in Joiner itself).  Of course, the Supreme Court’s decision in Joiner was an abandonment of something akin to de novo hard-look appellate review, lopsidedly applied to exclusions only.  Decisions to admit did not lead to summary dispositions without trial, and thus were never given any meaningful appellate review.

Elsewhere, Green and Sanders note that they do not necessarily share the doubts of the “hand wringers” over the inconsistent exclusionary rulings that result from an abuse of discretion standard.  At the end of their article, however, the authors note that viewing expert witness opinion exclusions as “sufficiency determinations” raises the question whether appellate courts should review these determinations de novo, as they would review ordinary factual “no evidence” or “insufficient evidence” grants of summary judgment.  Id. at 40.  There are reasonable arguments both ways, but it is worth pointing out that appellate decisions affirming rulings going both ways on the same expert witnesses, opining about the same litigated causal issue, are different from jury verdicts going both ways on causation.  First, the reasoning of the courts is, we hope, set out for public consumption, discussion, and debate, in a way that a jury’s deliberations are not.  Second, the fact of decisions “going both ways” is a statement that the courts view the issue as close and subject to debate.  Third, if the scientific and legal communities are paying attention, as they should, they can weigh in on the disparity, and on the stated reasons.  Assuming that courts are amenable to good reasons, they may have the opportunity to revisit the issue in a way that juries, which serve for one time on the causal issue, can never do.  We might hope that the better reasoned decisions, especially those that were supported by the disinterested scientific community, would have some persuasive authority.


9.  Abridgment of Rule 702’s approach to gatekeeping.  The authors’ approach to sufficiency also suffers from ignoring not only Rule 703’s inquiry into the reasonableness of reliance upon individual studies, but also Rule 702(c) and (d), which require that:

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

These subsections of Rule 702 do not readily allow the use of proxy or substitute measures of validity or reliability; they require the trial court to assess the expert witnesses’ reasoning from data to conclusions. In large part, Green and Sanders have been misled by the instincts of courts to retreat to proxies for validity in the form of “general acceptance,” “peer review,” and contrary evidence that makes the challenged opinion appear “insubstantial.”

There is a substantial danger that Green and Sanders’s reductionist approach, and their equation of admissibility with sufficiency, will undermine trial courts’ willingness to assess the more demanding, and time-consuming, validity claims that are inherent in all expert witness causation opinions.


10. Weight-of-the-evidence (WOE) reasoning.  The authors appear captivated by the use of so-called weight-of-the-evidence (WOE) reasoning, questionably featured in some recent judicial decisions.  The so-called WOE method is really not much of a method at all, but rather a hand-waving process that often excuses the poverty of data and valid analysis.  See, e.g., Douglas L. Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of “weight of evidence” review).  See also Schachtman, “Milward — Unhinging the Courthouse Door to Dubious Scientific Evidence” (Sept. 2, 2011).

In Allen v. Pennsylvania Engineering Corp., 102 F.3d 194 (5th Cir.1996), the appellate court disparaged WOE as a regulatory tool for making precautionary judgments, not fit for civil litigation that involves actual causation as opposed to “as if” judgments.  Green and Sanders pejoratively label the Allen court’s approach as “silly”:

“The idea that a regulatory agency would make a carcinogenicity determination if it were not the best explanation of the evidence, i.e., more likely than not, is silly.”

Id. at 29 n.82 (emphasis added).  But silliness is as silliness does.  Only a few pages later in their paper, Green and Sanders admit that:

“As some courts have noted, the regulatory threshold is lower than required in tort claims. With respect to the decision of the FDA to withdraw approval of Parlodel, the court in Glastetter v. Novartis Pharmaceuticals Corp., 107 F. Supp. 2d 1015 (E.D. Mo. 2000), aff’d, 252 F.3d 986 (8th Cir. 2001), commented that the FDA’s withdrawal statement ‘does not establish that the FDA had concluded that bromocriptine can cause an ICH [intracerebral hemorrhage]; instead, it indicates that in light of the limited social utility of bromocriptine in treating lactation and the reports of possible adverse effects, the drug should no longer be used for that purpose. For these reasons, the court does not believe that the FDA statement alone establishes the reliability of plaintiffs’ experts’ causation testimony.’”

Id. at 34 n.101. Not only do the authors appear to contradict themselves on the burden of persuasion for regulatory decisions, but they also offer no support for their silliness indictment.  Certainly, regulatory decisions, and not only the FDA’s, are frequently based upon precautionary principles that involve applying uncertain, ambiguous, or confusing data analyses to the process of formulating protective rules and regulations in the absence of scientific knowledge.  Unlike regulatory agencies, which operate under the Administrative Procedure Act, federal courts, and many state courts, operate under Rules 702 and 703’s requirements that expert witness opinion have the epistemic warrant of “knowledge,” not hunch, conjecture, or speculation.

Confidence in Intervals and Diffidence in the Courts

March 4th, 2012

Next year, the Supreme Court’s Daubert decision will turn 20.  The decision, interpreting Federal Rule of Evidence 702, dramatically changed the landscape of expert witness testimony.  Still, there are many who would turn the clock back and disable the gatekeeping function.  In past posts, I have identified scholars, such as Erica Beecher-Monas and the late Margaret Berger, who tried to eviscerate judicial gatekeeping.  Recently, a student note argued for the complete abandonment of all judicial control of expert witness testimony.  See Note, “Admitting Doubt: A New Standard for Scientific Evidence,” 123 Harv. L. Rev. 2021 (2010) (arguing that courts should admit all relevant evidence).

One advantage that comes from requiring trial courts to serve as gatekeepers is that the expert witnesses’ reasoning is approved or disapproved in an open, transparent, and rational way.  Trial courts subject themselves to public scrutiny in a way that jury decision making does not permit.  The critics of Daubert often engage in a cynical attempt to remove all controls over expert witnesses in order to empower juries to act on their populist passions and prejudices.  When courts misinterpret statistical and scientific evidence, there is some hope of changing subsequent decisions by pointing out their errors.  Jury errors on the other hand, unless they involve determinations of issues for which there were “no evidence,” are immune to institutional criticism or correction.

Despite my whining, not all courts butcher statistical concepts.  There are many astute judges out there who see error and call it error.  Take for instance, the trial judge who was confronted with this typical argument:

“While Giles admits that a p-value of .15 is three times higher than what scientists generally consider statistically significant—that is, a p-value of .05 or lower—she maintains that this ‘represents 85% certainty, which meets any conceivable concept of preponderance of the evidence.’ (Doc. 103 at 16).”

Giles v. Wyeth, Inc., 500 F.Supp. 2d 1048, 1056-57 (S.D.Ill. 2007), aff’d, 556 F.3d 596 (7th Cir. 2009).  Despite having case law cited to it (such as In re Ephedra), the trial court looked to the Reference Manual on Scientific Evidence, a resource that seems to be ignored by many federal judges, and rejected the bogus argument.  Unfortunately, the lawyers who made the bogus argument still are licensed, and at large, to incite the same error in other cases.
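The argument the Giles court rejected rests on the transpositional fallacy: reading a p-value of .15 as “85% certainty” that the causal claim is true.  A short simulation (a sketch of my own, using only the Python standard library) shows what a p-value actually measures, namely how often data at least this extreme arise when the null hypothesis is true:

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
trials = 100_000
# Simulate study results when the null hypothesis is TRUE (no effect at all).
hits = sum(two_sided_p(random.gauss(0.0, 1.0)) < 0.15 for _ in range(trials))

# Under the null, a p-value below .15 turns up about 15% of the time.
# The p-value describes the data given the null hypothesis; it says
# nothing like "85% certainty" that the alternative hypothesis is true.
print(round(hits / trials, 2))  # ~0.15
```

Turning that 15 percent frequency into a posterior probability of causation would require prior probabilities and study power, neither of which the p-value supplies.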

This business perhaps would be amenable to an empirical analysis.  An enterprising sociologist of the law could conduct some survey research on the science and math training of the federal judiciary, on whether the federal judges have read chapters of the Reference Manual before deciding cases involving statistics or science, and whether federal judges expressed the need for further education.  This survey evidence could be capped by an analysis of the prevalence of certain kinds of basic errors, such as the transpositional fallacy committed by so many judges (but decisively rejected in the Giles case).  Perhaps such an empirical analysis would advance our understanding whether we need specialty science courts.

One of the reasons that the Reference Manual on Scientific Evidence is worthy of so much critical attention is that the volume has the imprimatur of the Federal Judicial Center, and now the National Academies.  Putting aside the idiosyncratic chapter by the late Professor Berger, the Manual clearly presents guidance on many important issues.  To be sure, there are gaps, inconsistencies, and mistakes, but the statistics chapter should be a must-read for federal (and state) judges.

Unfortunately, the Manual has competition from lesser authors whose work obscures, misleads, and confuses important issues.  Consider an article by two would-be expert witnesses, who testify for plaintiffs, and confidently misstate the meaning of a confidence interval:

“Thus, a RR [relative risk] of 1.8 with a confidence interval of 1.3 to 2.9 could very likely represent a true RR of greater than 2.0, and as high as 2.9 in 95 out of 100 repeated trials.”

Richard W. Clapp & David Ozonoff, “Environment and Health: Vital Intersection or Contested Territory?” 30 Am. J. L. & Med. 189, 210 (2004).  This misstatement was then cited and quoted with obvious approval by Professor Beecher-Monas in her text on scientific evidence.  Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 60-61 n.17 (2007).  Beecher-Monas goes on, however, to argue that confidence interval coefficients are not the same as burdens of proof, but then implies that scientific standards of proof are different from the legal preponderance of the evidence.  She provides no citation or support for the higher burden of scientific proof:

“Some commentators have attributed the causation conundrum in the courts to the differing burdens of proof in science and law.28 In law, the civil standard of ‘more probable than not’ is often characterized as a probability greater than 50 percent.29 In science, on the other hand, the most widely used standard is a 95 percent confidence interval (corresponding to a 5 percent level of significance, or p-level).30 Both sound like probabilistic assessment. As a result, the argument goes, civil judges should not exclude scientific testimony that fails scientific validity standards because the civil legal standards are much lower. The transliteration of the ‘more probable than not’ standard of civil factfinding into a quantitative threshold of statistical evidence is misconceived. The legal and scientific standards are fundamentally different. They have different goals and different measures.  Therefore, one cannot justifiably argue that evidence failing to meet the scientific standards nonetheless should be admissible because the scientific standards are too high for preponderance determinations.”

Id. at 65.  This seems to be on the right track, although Beecher-Monas does not state clearly whether she subscribes to the notion that the burdens of proof in science and law differ.  The argument then takes a wrong turn:

“Equating confidence intervals with burdens of persuasion is simply incoherent. The goal of the scientific standard – the 95 percent confidence interval – is to avoid claiming an effect when there is none (i.e., a false positive).31”

Id. at 66.  But this is plain error; confidence intervals are not burdens of persuasion, legal or scientific.  Beecher-Monas is not, however, content to leave this alone:

“Scientists using a 95 percent confidence interval are making a prediction about the results being due to something other than chance.”

Id. at 66 (emphasis added).  Other than chance?  Well, this implies causality, as well as bias and confounding, but the confidence interval, like the p-value, addresses only random or sampling error.  Beecher-Monas’s error is neither random nor scientific.  Indeed, she perpetuates the same error committed by the Fifth Circuit in a frequently cited Bendectin case, which interpreted the confidence interval as resolving questions about the role of matters “other than chance,” such as bias and confounding.  Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307, 311-12 (5th Cir. 1989) (“Fortunately, we do not have to resolve any of the above questions [as to bias and confounding], since the studies presented to us incorporate the possibility of these factors by the use of a confidence interval.”) (emphasis in original).  See, e.g., David H. Kaye, David E. Bernstein, and Jennifer L. Mnookin, The New Wigmore – A Treatise on Evidence: Expert Evidence § 12.6.4, at 546 (2d ed. 2011); Michael O. Finkelstein, Basic Concepts of Probability and Statistics in the Law 86-87 (2009) (criticizing the overinterpretation of confidence intervals by the Brock court).

Clapp, Ozonoff, and Beecher-Monas are not alone in offering bad advice to judges who must help resolve statistical issues.  Déirdre Dwyer, a prominent scholar of expert evidence in the United Kingdom, manages to bundle up the transpositional fallacy and a misstatement of the meaning of the confidence interval into one succinct exposition:

“By convention, scientists require a 95 per cent probability that a finding is not due to chance alone. The risk ratio (e.g. ‘2.2’) represents a mean figure. The actual risk has a 95 per cent probability of lying somewhere between upper and lower limits (e.g. 2.2 ±0.3, which equals a risk somewhere between 1.9 and 2.5) (the ‘confidence interval’).”

Déirdre Dwyer, The Judicial Assessment of Expert Evidence 154-55 (Cambridge Univ. Press 2008).

Of course, Clapp, Ozonoff, Beecher-Monas, and Dwyer build upon a long tradition of academics’ giving errant advice to judges on this very issue.  See, e.g., Christopher B. Mueller, “Daubert Asks the Right Questions:  Now Appellate Courts Should Help Find the Right Answers,” 33 Seton Hall L. Rev. 987, 997 (2003) (describing the 95% confidence interval as “the range of outcomes that would be expected to occur by chance no more than five percent of the time”); Arthur H. Bryant & Alexander A. Reinert, “The Legal System’s Use of Epidemiology,” 87 Judicature 12, 19 (2003) (“The confidence interval is intended to provide a range of values within which, at a specified level of certainty, the magnitude of association lies.”) (incorrectly citing the first edition of Rothman & Greenland, Modern Epidemiology 190 (Philadelphia 1998)); John M. Conley & David W. Peterson, “The Science of Gatekeeping: The Federal Judicial Center’s New Reference Manual on Scientific Evidence,” 74 N.C.L.Rev. 1183, 1212 n.172 (1996) (“a 95% confidence interval … means that we can be 95% certain that the true population average lies within that range”).

Who has prevailed?  The statistically correct authors of the statistics chapter of the Reference Manual on Scientific Evidence, or the errant commentators?  It would be good to have some empirical evidence to help evaluate the judiciary’s competence.  Here are some cases, many drawn from the Manual’s discussions, arranged chronologically, before and after the first appearance of the Manual:

Before First Edition of the Reference Manual on Scientific Evidence:

DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 948 (3d Cir. 1990)(“A 95% confidence interval is constructed with enough width so that one can be confident that it is only 5% likely that the relative risk attained would have occurred if the true parameter, i.e., the actual unknown relationship between the two studied variables, were outside the confidence interval.   If a 95% confidence interval thus contains ‘1’, or the null hypothesis, then a researcher cannot say that the results are ‘statistically significant’, that is, that the null hypothesis has been disproved at a .05 level of significance.”)(internal citations omitted)(citing in part, D. Barnes & J. Conley, Statistical Evidence in Litigation § 3.15, at 107 (1986), as defining a CI as “a limit above or below or a range around the sample mean, beyond which the true population is unlikely to fall”).

United States ex rel. Free v. Peters, 806 F. Supp. 705, 713 n.6 (N.D. Ill. 1992) (“A 99% confidence interval, for instance, is an indication that if we repeated our measurement 100 times under identical conditions, 99 times out of 100 the point estimate derived from the repeated experimentation will fall within the initial interval estimate … .”), rev’d in part, 12 F.3d 700 (7th Cir. 1993)

DeLuca v. Merrell Dow Pharms., Inc., 791 F. Supp. 1042, 1046 (D.N.J. 1992) (“A 95% confidence interval means that there is a 95% probability that the ‘true’ relative risk falls within the interval”), aff’d, 6 F.3d 778 (3d Cir. 1993)

Turpin v. Merrell Dow Pharms., Inc., 959 F.2d 1349, 1353-54 & n.1 (6th Cir. 1992)(describing a 95% CI of 0.8 to 3.10, to mean that “random repetition of the study should produce, 95 percent of the time, a relative risk somewhere between 0.8 and 3.10”)

Hilao v. Estate of Marcos, 103 F.3d 767, 787 (9th Cir. 1996)(Rymer, J., dissenting and concurring in part).

After the first publication of the Reference Manual on Scientific Evidence:

American Library Ass’n v. United States, 201 F.Supp. 2d 401, 439 & n.11 (E.D.Pa. 2002), rev’d on other grounds, 539 U.S. 194 (2003)

SmithKline Beecham Corp. v. Apotex Corp., 247 F.Supp.2d 1011, 1037-38 (N.D. Ill. 2003) (“the probability that the true value was between 3 percent and 7 percent, that is, within two standard deviations of the mean estimate, would be 95 percent”) (also confusing attained significance probability with posterior probability: “This need not be a fatal concession, since 95 percent (i.e., a 5 percent probability that the sign of the coefficient being tested would be observed in the test even if the true value of the sign was zero) is an arbitrary measure of statistical significance.  This is especially so when the burden of persuasion on an issue is the undemanding ‘preponderance’ standard, which requires a confidence of only a mite over 50 percent. So recomputing Niemczyk’s estimates as significant only at the 80 or 85 percent level need not be thought to invalidate his findings.”), aff’d on other grounds, 403 F.3d 1331 (Fed. Cir. 2005)

In re Silicone Gel Breast Implants Prods. Liab. Litig., 318 F.Supp.2d 879, 897 (C.D. Cal. 2004) (interpreting a relative risk of 1.99, in a subgroup of women who had had polyurethane foam covered breast implants, with a 95% CI that ran from 0.5 to 8.0, to mean that “95 out of 100 a study of that type would yield a relative risk somewhere between on 0.5 and 8.0.  This huge margin of error associated with the PUF-specific data (ranging from a potential finding that implants make a woman 50% less likely to develop breast cancer to a potential finding that they make her 800% more likely to develop breast cancer) render those findings meaningless for purposes of proving or disproving general causation in a court of law.”) (emphasis in original)

Ortho–McNeil Pharm., Inc. v. Kali Labs., Inc., 482 F.Supp. 2d 478, 495 (D.N.J. 2007) (“Therefore, a 95 percent confidence interval means that if the inventors’ mice experiment was repeated 100 times, roughly 95 percent of results would fall within the 95 percent confidence interval ranges.”) (apparently relying on a party’s expert witness’s report), aff’d in part, vacated in part, sub nom. Ortho McNeil Pharm., Inc. v. Teva Pharms Indus., Ltd., 344 Fed.Appx. 595 (Fed. Cir. 2009)

Eli Lilly & Co. v. Teva Pharms, USA, 2008 WL 2410420, *24 (S.D.Ind. 2008)(stating incorrectly that “95% percent of the time, the true mean value will be contained within the lower and upper limits of the confidence interval range”)

Benavidez v. City of Irving, 638 F.Supp. 2d 709, 720 (N.D. Tex. 2009)(interpreting a 90% CI to mean that “there is a 90% chance that the range surrounding the point estimate contains the truly accurate value.”)

Estate of George v. Vermont League of Cities and Towns, 993 A.2d 367, 378 n.12 (Vt. 2010)(erroneously describing a confidence interval to be a “range of values within which the results of a study sample would be likely to fall if the study were repeated numerous times”)

Correct Statements

There is no reason for any of these courts to have struggled so with the concept of statistical significance or of the confidence interval.  These concepts are well elucidated in the Reference Manual on Scientific Evidence (RMSE):

“To begin with, ‘confidence’ is a term of art. The confidence level indicates the percentage of the time that intervals from repeated samples would cover the true value. The confidence level does not express the chance that repeated estimates would fall into the confidence interval.91

* * *

According to the frequentist theory of statistics, probability statements cannot be made about population characteristics: Probability statements apply to the behavior of samples. That is why the different term ‘confidence’ is used.”

RMSE 3d at 247 (2011).

Even before the Manual, many capable authors have tried to reach the judiciary to help judges learn and apply statistical concepts more confidently.  Professors Michael Finkelstein and Bruce Levin, of Columbia Law School and the Mailman School of Public Health, respectively, have worked hard to educate lawyers and judges in the important concepts of statistical analysis:

“It is the confidence limits PL and PU that are random variables based on the sample data. Thus, a confidence interval (PL, PU) is a random interval, which may or may not contain the population parameter P. The term ‘confidence’ derives from the fundamental property that, whatever the true value of P, the 95% confidence interval will contain P within its limits 95% of the time, or with 95% probability. This statement is made only with reference to the general property of confidence intervals and not to a probabilistic evaluation of its truth in any particular instance with realized values of PL and PU.”

Michael O. Finkelstein & Bruce Levin, Statistics for Lawyers at 169-70 (2d ed. 2001)
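The coverage property that Finkelstein and Levin describe can be illustrated with a short simulation (a sketch of my own, using a known-variance normal model for simplicity; the names and values are illustrative):

```python
import random

random.seed(42)
TRUE_MEAN, SD, N = 10.0, 2.0, 50
Z95 = 1.96  # two-sided 95% critical value, normal approximation

def ci95(sample):
    """95% interval for the mean, treating the s.d. as known for simplicity."""
    m = sum(sample) / len(sample)
    half = Z95 * SD / len(sample) ** 0.5
    return m - half, m + half

# Draw 10,000 samples and count how often the interval covers the truth.
intervals = [ci95([random.gauss(TRUE_MEAN, SD) for _ in range(N)])
             for _ in range(10_000)]
covered = sum(lo <= TRUE_MEAN <= hi for lo, hi in intervals)

# The "95%" is the long-run coverage of the *procedure*; any single
# realized interval simply contains the parameter or it does not.
print(round(covered / len(intervals), 2))  # ~0.95
```

The simulation makes the frequentist point concrete: about 95 percent of the intervals cover the true mean, but no probability attaches to any one realized interval.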

Courts have no doubt been confused to some extent between the operational definition of a confidence interval and the role of the sample point estimate as an estimator of the population parameter.  In some instances, the sample statistic may be the best estimate of the population parameter, but that estimate may be rather crummy because of the sampling error involved.  See, e.g., Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 158 (3d ed. 2008) (“Although a single confidence interval can be much more informative than a single P-value, it is subject to the misinterpretation that values inside the interval are equally compatible with the data, and all values outside it are equally incompatible. * * *  A given confidence interval is only one of an infinite number of ranges nested within one another. Points nearer the center of these ranges are more compatible with the data than points farther away from the center.”); Nicholas P. Jewell, Statistics for Epidemiology 23 (2004)(“A popular interpretation of a confidence interval is that it provides values for the unknown population proportion that are ‘compatible’ with the observed data.  But we must be careful not to fall into the trap of assuming that each value in the interval is equally compatible.”); Charles Poole, “Confidence Intervals Exclude Nothing,” 77 Am. J. Pub. Health 492, 493 (1987)(“It would be more useful to the thoughtful reader to acknowledge the great differences that exist among the p-values corresponding to the parameter values that lie within a confidence interval … .”).
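The point made by Rothman, Greenland, and Lash, that values inside a confidence interval are not equally compatible with the data, can be made concrete with the RR of 1.8 (95% CI 1.3 to 2.9) quoted earlier from Clapp and Ozonoff.  Assuming the usual log-normal approximation for ratio measures, a hypothetical helper of my own devising computes the p-value for each candidate “true” relative risk:

```python
import math

def p_for_null(rr_hat, lo, hi, rr0):
    """Normal-approximation p-value for the hypothesis RR = rr0, given a
    point estimate and 95% CI on the ratio scale (log-normal assumption).
    Hypothetical helper written for illustration only."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out the std. error
    z = (math.log(rr_hat) - math.log(rr0)) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Candidate "true" relative risks inside the interval are not equally
# compatible with the data: p is 1.0 at the point estimate and shrinks
# toward .05 at the interval's limits.
for rr0 in (1.8, 1.4, 2.8):
    print(rr0, round(p_for_null(1.8, 1.3, 2.9, rr0), 3))
```

Values near 1.8 yield p-values near 1.0, while values near the interval’s edges yield p-values near .05, which is exactly the unequal-compatibility point the epidemiology texts make.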

Admittedly, I have given an impressionistic account, and I have used anecdotal methods, to explore the question whether the courts have improved in their statistical assessments in the 20 years since the Supreme Court decided Daubert.  Many decisions go unreported, and perhaps many errors are cut off from the bench in the course of testimony or argument.  I personally doubt that judges exercise greater care in their comments from the bench than they do in published opinions.  Still, the quality of care exercised by the courts would be a worthy area of investigation by the Federal Judicial Center, or perhaps by other sociologists of the law.

Relative Risk > Two in the Courts – Updated

March 3rd, 2012

See the updated case law on the issue of using relative and attributable risks to satisfy a plaintiff’s burden of showing, more likely than not, that an exposure or condition caused the plaintiff’s disease or injury.

Meta-Analysis of Observational Studies in Non-Pharmaceutical Litigations

February 26th, 2012

Yesterday, I posted on several pharmaceutical litigations that have involved meta-analytic studies.  Meta-analytic studies have also figured prominently in non-pharmaceutical product liability litigation, as well as in litigation over videogames, criminal recidivism, and eyewitness testimony.  Some, but not all, of the cases in these other areas of litigation are collected below.  In some cases, the reliability or validity of the meta-analyses was challenged; in some cases, the court fleetingly referred to meta-analyses relied upon by the parties.  Some of the courts’ treatments of meta-analysis are woefully inadequate or erroneous.  The failure of the Reference Manual on Scientific Evidence to update its treatment of meta-analysis is telling.  See “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011).


Abortion (Breast Cancer)

Christ’s Bride Ministries, Inc. v. Southeastern Pennsylvania Transportation Authority, 937 F.Supp. 425 (E.D. Pa. 1996), rev’d, 148 F.3d 242 (3d Cir. 1997)

Asbestos

In re Joint E. & S. Dist. Asbestos Litig., 827 F. Supp. 1014, 1042 (S.D.N.Y. 1993)(“adding a series of positive but statistically insignificant SMRs [standardized mortality ratios] together does not produce a statistically significant pattern”), rev’d, 52 F.3d 1124 (2d Cir. 1995).

In Re Asbestos Litigation, Texas Multi District Litigation Cause No. 2004-03964 (June 30, 2005)(Davidson, J.)(“The Defendants’ response was presented by Dr. Timothy Lash.  I found him to be highly qualified and equally credible.  He largely relied on the report submitted to the Environmental Protection Agency by Berman and Crump (“B&C”).  He found the meta-analysis contained in B&C credible and scientifically based.  B&C has not been published or formally accepted by the EPA, but it does perform a valuable study of the field.  If the question before me was whether B&C is more credible than the Plaintiffs’ studies taken together, my decision might well be different.”)

Jones v. Owens-Corning Fiberglas, 288 N.J. Super. 258, 672 A.2d 230 (1996)

Berger v. Amchem Prods., 818 N.Y.S.2d 754 (2006)

Grenier v. General Motors Corp., 2009 WL 1034487 (Del.Super. 2009)

Benzene

Knight v. Kirby Inland Marine, Inc., 363 F. Supp. 2d 859 (N.D. Miss. 2005)(precluding proffered opinion that benzene caused bladder cancer and lymphoma; noting without elaboration or explanation, that meta-analyses are “of limited value in combining the results of epidemiologic studies based on observation”), aff’d, 482 F.3d 347 (5th Cir. 2007)

Baker v. Chevron USA, Inc., 680 F.Supp. 2d 865 (S.D. Ohio 2010)

Diesel Exhaust Exposure

King v. Burlington Northern Santa Fe Ry. Co., 277 Neb. 203, 762 N.W.2d 24 (2009)

Kennecott Greens Creek Mining Co. v. Mine Safety & Health Admin., 476 F.3d 946 (D.C. Cir. 2007)

Eyewitness Testimony

State of New Jersey v. Henderson, 208 N.J. 208, 27 A.3d 872 (2011)

Valle v. Scribner, 2010 WL 4671466 (C.D. Calif. 2010)

People v. Banks, 16 Misc.3d 929, 842 N.Y.S.2d 313 (2007)

Lead

Palmer v. Asarco Inc., 510 F.Supp.2d 519 (N.D. Okla. 2007)

PCBs

In re Paoli R.R. Yard PCB Litigation, 916 F.2d 829, 856-57 (3d Cir.1990) (‘‘There is some evidence that half the time you shouldn’t believe meta-analysis, but that does not mean that meta-analyses are necessarily in error. It means that they are, at times, used in circumstances in which they should not be.’’) (internal quotation marks and citations omitted), cert. denied, 499 U.S. 961 (1991)

Repetitive Stress

Allen v. International Business Machines Corp., 1997 U.S. Dist. LEXIS 8016 (D. Del. 1997)

Tobacco

Flue-Cured Tobacco Cooperative Stabilization Corp. v. United States Envt’l Protection Agency, 4 F.Supp.2d 435 (M.D.N.C. 1998), vacated, 313 F.3d 852 (4th Cir. 2002)

Tocolytics – Medical Malpractice

Hurd v. Yaeger, 2009 WL 2516874 (M.D. Pa. 2009)

Toluene

Black v. Rhone-Poulenc, Inc., 19 F.Supp.2d 592 (S.D.W.Va. 1998)

Video Games (Violent Behavior)

Brown v. Entertainment Merchants Ass’n, ___ U.S.___, 131 S.Ct. 2729 (2011)

Entertainment Software Ass’n v. Blagojevich, 404 F.Supp.2d 1051 (N.D. Ill. 2005)

Entertainment Software Ass’n v. Hatch, 443 F.Supp.2d 1065 (D. Minn. 2006)

Video Software Dealers Ass’n v. Schwarzenegger, 556 F.3d 950 (9th Cir. 2009)

Vinyl Chloride

Taylor v. Airco, 494 F. Supp. 2d 21 (D. Mass. 2007)(permitting opinion testimony that vinyl chloride caused intrahepatic cholangiocarcinoma, without commenting upon the reasonableness of reliance upon the meta-analysis cited)

Welding

Cooley v. Lincoln Electric Co., 693 F.Supp.2d 767 (N.D. Ohio. 2010)

Meta-Analysis in Pharmaceutical Cases

February 25th, 2012

The Third Edition of the Reference Manual on Scientific Evidence attempts to cover a lot of ground to give the federal judiciary guidance on scientific, medical, statistical, and engineering issues.  It has some successes, and some failures.  One of the major problems in coverage in the new Manual is its inconsistent, sparse, and at points outdated treatment of meta-analysis.  See “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011).

As I have pointed out elsewhere, the gaps and problems in the Manual’s coverage are not “harmless error,” when some courts have struggled to deal with methodological and evaluative issues in connection with specific meta-analyses.  See “Learning to Embrace Flawed Evidence – The Avandia MDL’s Daubert Opinion” (Jan. 10, 2011).

Perhaps the reluctance to treat meta-analysis more substantively comes from a perception that the technique for analyzing multiple studies does not come up frequently in litigation.  If so, let me help dispel the notion.  I have collected a partial list of drug and medical device cases that have confronted meta-analysis in one form or another.  In some cases, such as the Avandia MDL, a meta-analysis was a key, or the key, piece of evidence.  In other cases, meta-analysis may have been treated more peripherally.  Still, there are over 20 pharmaceutical cases in the last two decades that have dealt with the statistical techniques involved in meta-analysis.  In another post, I will collect the non-pharmaceutical cases as well.

 

Aredia – Zometa

Deutsch v. Novartis Pharm. Corp., 768 F. Supp. 2d 420 (E.D.N.Y. 2011)

 

Avandia

In re Avandia Marketing, Sales Practices and Product Liability Litigation, 2011 WL 13576, *12 (E.D. Pa. 2011)

Avon Pension Fund v. GlaxoSmithKline PLC, 343 Fed.Appx. 671 (2d Cir. 2009)

 

Baycol

In re Baycol Prods. Litig., 532 F.Supp. 2d 1029 (D. Minn. 2007)

 

Bendectin

Daubert v. Merrell Dow Pharm., 43 F.3d 1311 (9th Cir. 1995) (on remand from Supreme Court)

DePyper v. Navarro, 1995 WL 788828 (Mich.Cir.Ct. 1995)

 

Benzodiazepine

Vinitski v. Adler, 69 Pa. D. & C.4th 78, 2004 WL 2579288 (Phila. Cty. Ct. Common Pleas 2004)

 

Celebrex – Bextra

In re Bextra & Celebrex Marketing Sales Practices & Prod. Liab. Litig., 524 F.Supp.2d 1166 (N.D. Cal. 2007)


E5 (anti-endotoxin monoclonal antibody for gram-negative sepsis)

Warshaw v. Xoma Corp., 74 F.3d 955 (9th Cir. 1996)

 

Excedrin vs. Tylenol

McNeil-P.C.C., Inc. v. Bristol-Myers Squibb Co., 938 F.2d 1544 (2d Cir. 1991)

 

Fenfluramine, Phentermine

In re Diet Drugs Prod. Liab. Litig., 2000 WL 1222042 (E.D.Pa. 2000)

 

Fosamax

In re Fosamax Prods. Liab. Litig., 645 F.Supp.2d 164 (S.D.N.Y. 2009)

 

Gadolinium

In re Gadolinium-Based Contrast Agents Prod. Liab. Litig., 2010 WL 1796334 (N.D. Ohio 2010)

 

Neurontin

In re Neurontin Marketing, Sales Practices, and Products Liab. Litig., 612 F.Supp.2d 116 (D. Mass. 2009)

 

Paxil (SSRI)

Tucker v. Smithkline Beecham Corp., 2010 U.S. Dist. LEXIS 30791 (S.D.Ind. 2010)

 

Prozac (SSRI)

Rimberg v. Eli Lilly & Co., 2009 WL 2208570 (D.N.M. 2009)

 

Seroquel

In re Seroquel Products Liab. Litig., 2009 WL 3806434, at *5 (M.D. Fla. 2009)

 

Silicone – Breast Implants

Allison v. McGhan Med. Corp., 184 F.3d 1300, 1315 n. 12 (11th Cir. 1999)(noting, in passing, that the district court had found a meta-analysis (the “Kayler study”) unreliable “because it was a re-analysis of other studies that had found no statistical correlation between silicone implants and disease”)

Thimerosal – Vaccine

Salmond v. Sec’y Dep’t of Health & Human Services, 1999 WL 778528 (Fed.Cl. 1999)

Hennessey v. Sec’y Dep’t Health & Human Services, 2009 WL 1709053 (Fed.Cl. 2009)

 

Trasylol

In re Trasylol Prods. Liab. Litig., 2010 WL 1489793 (S.D. Fla. 2010)

 

Vioxx

Merck & Co., Inc. v. Ernst, 296 S.W.3d 81 (Tex. Ct. App. 2009)
Merck & Co., Inc. v. Garza, 347 S.W.3d 256 (Tex. 2011)

 

X-Ray Contrast Media (Nephrotoxicity of Visipaque versus Omnipaque)

Bracco Diagnostics, Inc. v. Amersham Health, Inc., 627 F.Supp.2d 384 (D.N.J. 2009)

Zestril

E.R. Squibb & Sons, Inc. v. Stuart Pharms., 1990 U.S. Dist. LEXIS 15788 (D.N.J. 1990)(Zestril versus Squibb’s competing product, Capoten)

 

Zoloft (SSRI)

Miller v. Pfizer, Inc., 356 F.3d 1326 (10th Cir. 2004)

 

Zymar

Senju Pharmaceutical Co. Ltd. v. Apotex Inc., 2011 WL 6396792 (D.Del. 2011)

 

Zyprexa

In re Zyprexa Products Liab. Litig., 489 F.Supp.2d 230 (E.D.N.Y. 2007) (Weinstein, J.)

Unreported Decisions on Expert Witness Opinion in New Jersey

February 21st, 2012

In New Jersey, as in other states, unpublished opinions have a quasi-outlaw existence.  According to the New Jersey Rules of Court, unpublished opinions are not precedential.  By court fiat, the court system has declared that it can act a certain way in a given case, and not have to follow its own lead in other cases:

No unpublished opinion shall constitute precedent or be binding upon any court. Except for appellate opinions not approved for publication that have been reported in an authorized administrative law reporter, and except to the extent required by res judicata, collateral estoppel, the single controversy doctrine or any other similar principle of law, no unpublished opinion shall be cited by any court. No unpublished opinion shall be cited to any court by counsel unless the court and all other parties are served with a copy of the opinion and of all contrary unpublished opinions known to counsel.

New Jersey Rule of Court 1:36-3 (Unpublished Opinions).

Litigants down the road may feel that they are not being given the equal protection of the law, but never mind.  Res judicata and collateral estoppel are in, but stare decisis is out.  Consistency and coherence are so difficult; surely it is better to be free from these criteria of rationality unless we decide to “opt in” by publishing opinions with our decisions.  As many other scholars and commentators have noted, rules of this sort allow decisions from other states, and even other countries, to be potentially persuasive, whereas by court rule and fiat, an unpublished decision of the deciding court cannot have any precedential value.  Why then permit unpublished cases to be cited at all?

Having tracked decisions, published and un-, in New Jersey for many years, I am left with an impression that the Appellate Division has a tendency to refuse to publish opinions of decisions in which it has reversed the trial court’s refusal to exclude expert witness testimony, or in which it has affirmed the trial court’s exclusion of expert testimony.  Opinions that explain the affirmance of a denial of expert witness exclusion or the reversal of a trial court’s grant of exclusion appear to be published more often.  Stated as a four-fold table:

                            Trial Court Permits Expert    Trial Court Bars Expert
Appellate Court Affirms     Published                     Not Published
Appellate Court Reverses    Not Published                 Published

My impression is that there is an institutional bias against creating a body of law that illuminates the criteria for admission and for exclusion of expert witness opinion testimony.  This is only an impression, and I do not have statistics, descriptive or inferential, on these judicial behaviors.  From a jurisprudential perspective, the affirmance of an exclusion below, or the reversal of a denial of exclusion below, should be at least as important as the reversal of an exclusion below.  The goal of announcing to the Bar and to trial judges the criteria for inclusion and exclusion would seem to call for greater publication of the opinions from the two unpublished cells in the contingency table, above.

No-citation and no-precedent rules are deeply problematic, and have attracted a great deal of scholarly attention.  See Erica Weisgerber, “Unpublished Opinions: A Convenient Means to an Unconstitutional End,” 97 Georgetown L.J. 621 (2009); Rafi Moghadam, “Judge Nullification: A Perception of Unpublished Opinions,” 62 Hastings L.J. 1397 (2011); Norman R. Williams, “The Failings of Originalism: The Federal Courts and the Power of Precedent,” 37 U.C. Davis L. Rev. 761 (2004); Dione C. Greene, “The Federal Courts of Appeals, Unpublished Decisions, and the ‘No-Citation Rule,’” 81 Indiana L.J. 1503 (2006); Vincent M. Cox, “Freeing Unpublished Opinions from Exile: Going Beyond the Citation Permitted by Proposed Federal Rule of Appellate Procedure 32.1,” 44 Washburn L.J. 105 (2004); Sarah E. Ricks, “The Perils of Unpublished Non-Precedential Federal Appellate Opinions: A Case Study of The Substantive Due Process State-Created Danger Doctrine in One Circuit,” 81 Wash. L. Rev. 217 (2006); Michael J. Woodruff, “State Supreme Court Opinion Publication in the Context of Ideology and Electoral Incentives,” New York University Department of Politics (March 2011); Michael B. W. Sinclair, “Anastasoff versus Hart: The Constitutionality and Wisdom of Denying Precedential Authority to Circuit Court Decisions.”  See generally The Committee for the Rule of Law (website) (collecting scholarship and news on the issue of unpublished and supposedly non-precedential opinions).

What would be useful is an empirical analysis of the New Jersey Appellate Division’s judicial behavior in deciding whether or not to publish decisions for each of the four cells, in the four-fold table, above.  If my impression is correct, the suggestion of institutional bias would give further support to the abandonment of N.J. Rule of Court 1:36-3.