TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Landrigan v. The Celotex Corporation, Revisited

June 4th, 2013

Old-fashioned torts presented few problems for attributing causation of the plaintiff’s harm.  Summers v. Tice, 33 Cal.2d 80, 199 P.2d 1 (1948), may have involved uncertainty about the shooter, but there was no doubt that a pellet from one of the two defendants’ guns hit the plaintiff and caused a legally recognized injury.

Specific causation has been, and remains, the soft underbelly of the toxic tort world, at least for those cases not involving so-called signature diseases.  A priori assessments of risk do not necessarily translate into post-exposure, post-diagnosis attributions of outcome to exposure.  Put simply, risk is not cause. Guinn v AstraZeneca Pharms. LP, 602 F.3d 1245, 1255 (11th Cir. 2010) (“An expert, however, cannot merely conclude that all risk factors for a disease are substantial contributing factors in its development.  The fact that exposure to a substance may be a risk factor for a disease does not make it an actual cause simply because the disease developed.”) Unless there is a “fingerprint of causation,” what scientists would call a completely specific biomarker, then specific causation opinions are mostly guesswork.

Tobacco companies and others exploited this fact, in face of large relative risks of lung cancer among smokers, to maintain that these epidemiologic assessment were not probative of specific causation.  Andrew See, “Use of Human Epidemiology Studies in Proving Causation,” 67 Def. Couns. J. 478, 478 (2000) (“Epidemiology studies are relevant only to the issue of general causation and cannot establish whether an exposure or factor caused disease or injury in a specific individual.”); Melissa Moore Thomson, Causal Inference in Epidemiology: Implications for Toxic Tort Litigation, 71 N.C. L. Rev. 247, 255 (1992) (“statistic-based epidemiological study results should not be applied directly to establish the likelihood of causation in an individual plaintiff”); Michael Dore, Commentary on the Use of Epidemiological Evidence in Demonstrating Cause-in-Fact, 7 Harv. Envt’l L. Rev. 429, 433 (1983) (“Epidemiological evidence, like other generalized evidence, deals with categories of occurrences rather than particular individual occurrences. . . . Such evidence may help demonstrate that a particular event occurred, but only when accompanied by more specific evidence.”).  See, e.g., In re Fibreboard Corp.,893 F.2d 706, 712 (5th Cir.1990) (“It is evident that these statistical estimates deal only with general causation, for population-based probability estimates do not speak to a probability of causation in any one case; the estimate of relative risk is a property of the studied population, not of an individual’s case.” (emphasis in original; internal quotation omitted)).

Indeed, some courts continue to uphold this extreme anti-probabilistic view, even when relative risks exceed 20, or more.  McTear v. Imperial Tobacco Ltd., [2005] CSOH 69, at ¶ 6.180 (Nimmo Smith, L.J.) (“epidemiological evidence cannot be used to make statements about individual causation… . Epidemiology cannot provide information on the likelihood that an exposure produced an individual’s condition. The population attributable risk is a measure for populations only and does not imply a likelihood of disease occurrence within an individual, contingent upon that individual’s exposure.

In past posts, I have addressed some misunderstandings and misrepresentations concerning the use of a priori risk to assessment of specific causation.  One of the more glaring examples of bad scholarship in this area comes in a text edited by Professor Joseph Gastwirth:

“The court in Landrigan v. Celotex Corp. (1992: 1087) arrived at a similar conclusion, finding that:

a relative risk of 2.0 is not so much a password to a finding of causation as one piece of evidence, among others, for the court to consider in determining whether the expert has employed a sound methodology in reaching his or her conclusion.”

Accordingly the court granted recovery for injuries alleged to have arisen as the result of exposure to asbestos, although the demonstrated relative risk was 1.5.

Sana Loue, “Epidemiological Causation in the Legal Context: Substance and Procedures,” in Joseph Gastwirth, ed., Statistical Science in the Courtroom 263, 277 (2000).

Now that is stunningly bad scholarship, from someone who is both a lawyer and a scientist. The New Jersey Supreme Court, in the cited case, reversed a directed verdict for the defendants, and remanded for reconsideration of the admissibility of the plaintiffs’ expert witnesses.  There was never even an opportunity for the Supreme Court to “grant recoveries.”  Indeed, Mrs. Landrigan never obtained a favorable verdict in her lawsuit.  After remand, she dismissed her action in the face of the daunting task faced by her expert witnesses.

The author of the chapter, Sana Loue, is a Professor and Director in the Department of Epidemiology and Biostatistics in the School of Medicine of Case Western Reserve University, Cleveland, Ohio. Dr. Loue holds doctoral degrees in epidemiology and medical anthropology, as well as a law degree.

Dr. Loue is not alone in misunderstanding the Landrigan case. Some of the confusion perhaps results from the New Jersey Supreme Court’s errant opinion.  Some language in the Supreme Court’s decision makes it seem that there was an objection to the admissibility of the plaintiff’s expert witnesses’ opinions.  There was none.  Unlike many gatekeeping decisions, the plaintiff had a full opportunity to be heard; the defendants moved for a directed verdict at the end of the plaintiff’s case.  In addressing the defendants’ motion, the trial court assumed, for the sake of argument, that asbestos can cause colorectal cancer.  General causation was, of course, contested, but the motion turned on whether there was evidence in the record that would support specific causation.  The trial court held that specific causation required expert witness opinion testimony, but that the testimony in the case failed to provide a basis on which a reasonable jury could conclude that Mr. Landrigan’s colorectal cancer was caused by his alleged occupational asbestos exposure.

The New Jersey Appellate Division affirmed in a published opinion.  579 A.2d 1268 (1990).  The Appellate Division’s decision is still worth reading, not only because it correctly decided the issues, but because it reports material facts that the Supreme Court chose to ignore.  First, the Appellate Division noted that the most that Mr. Landrigan had sustained in terms of respiratory effects from his occupational asbestos exposure was pleural thickening, which never caused him impairment in his lifetime.  Indeed, Mr. Landrigan never was aware of this radiographic change, which only an expert witness hired by plaintiff’s counsel could see.  Id. at 1269.  (Plaintiff’ pulmonary physician expert witness, Dr. Sokolowski, had failed his B-reader examination, but he was a favorite of the asbestos plaintiffs’ bar for his “liberal” readings of chest films.)

The Appellate Division also emphasized the record evidence that the cause of most cases of colon cancer was (and remains) unknown, and more important that Mr. Landrigan’s colorectal cancer was physically indistinguishable from almost all other cases of the disease.  Id. at 1270.  The plaintiff’s hired expert witnesses had only epidemiologic evidence of an increased risk of colorectal cancer among asbestos-exposed workers. Although most of the better conducted studies fail to support the claim of increased risk, Drs. Sokolowski and Wagoner, the plaintiff’s witnesses, relied upon Selikoff’s cohort study of insulation workers, and its mortality risk ratios.  Irving J. Selikoff, E. Cuyler Hammond, and Herbert Seidman, “Mortality Experience of Insulation Workers in the United States and Canada, 1943-1976,” 330 Ann. N.Y. Acad. Sci. 91, 103 (1979) (colorectal cancer risk ratio 1.55);  E. Cuyler Hammond, Irving J. Selikoff,  and Herbert Seidman, “Asbestos Exposure, Cigarette Smoking and Death Rates,” 330 Ann. N.Y. Acad. Sci. 473, 480 (1979) (colorectal cancer mortality ratios 1.59 to 1.81).

Mrs. Landrigan’s witnesses both relied upon evidence of an increased risk, while ignoring or dismissing studies that found no such risk, and upon what they claimed was an absence of risk factors, such as fatty diet, excessive alcohol consumption, family history, and prior bowel disease, in Mr. Landrigan.  The trial court, and the Appellate Division, realized that the reasoning that these witnesses advanced failed to support their conclusions, as a matter of science, logic, and law:

“Although not stated by Dr. Sokolowski in so many words, he seems to be saying that risk exposure equates with causation, a proposition which we find legally untenable.”

579 A.2d at 1270 (1990).  The hand waving about ruling out known risk factors left the most likely cause in plain view:  unknown:

“One cannot rule out the presence of other risk factors without knowing what those factors may be.”

Id. at 1271.

The New Jersey Supreme Court reversed and remanded the case for further inquiries into the reliability of the expert witnesses’ opinions.  Landrigan v. The Celotex Corp., 127 N.J. 404, 605 A.2d 1079 (1992).  Therese Keeley, the capable lawyer who tried the Landrigan case for the defense, had argued the appeal before the Appellate Division, but another lawyer, less familiar with the issues, argued for the defendant, in the Supreme Court.  The Supreme Court made much of the new lawyer’s concessions in oral argument:

“Defense counsel urges that the Appellate Division opinion may be read as requiring that an expert may not rely on an epidemiological study to support a finding of individual causation unless the relative risk is greater than 2.0. See 243 N.J.Super. at 457-59, 579 A.2d 1268. At oral argument before us, they agreed that such a requirement may be unnecessary. Counsel acknowledged that under certain circumstances a study with a relative risk of less than 2.0 could support a finding of specific causation. Those circumstances would include, for example, individual clinical data, such as asbestos in or near the tumor or a documented history of extensive asbestos exposure. So viewed, a relative risk of 2.0 is not so much a password to a finding of causation as one piece of evidence, among others, for the court to consider in determining whether the expert has employed a sound methodology in reaching his or her conclusion.”

Id. at 419.  Even so, these concessions, improvident as they might have been, would not permit the Supreme Court to resolve the case as it did.  There was nothing in the Landrigan case, however, which would count as a biomarker of individual causation, or as support for a claim that Mr. Landrigan’s exposure was so much heavier than average that his personal exposure put him on the dose-response curve at a point that corresponded to a relative risk greater than two.

Here is how the Supreme Court described Dr. Sokolowski’s attempted reasoning process:

“In the present case, Dr. Sokolowski began by reviewing the scientific literature to establish both the ability of asbestos to cause colon cancer and the magnitude of the risk that it would cause that result. Next, he assumed that decedent was exposed to asbestos and that his exposure, in both intensity and duration, was comparable to that of the study populations described in the literature. He then assumed that other known risk factors for colon cancer did not apply to decedent. After considering decedent’s exposure and the absence of those factors, Dr. Sokolowski concluded that decedent’s exposure more likely than not had been the cause of his colon cancer.”

Id. at 420-21, 1087-88.  The obvious fly in the ointment is simply that many people with no known risk factors for colon cancer develop the disease.  The assumption behind a cohort study is that all the risk factors are even balanced between the exposed and the unexposed cohorts, and so the relative risk reflects the role of the exposure in question.  Of course, this assumption is rarely true outside the context of a randomized clinical trial, and the Selikoff studies relied upon by plaintiff’s witnesses were particularly inept in controlling or accounting for confounding factors.  Assuming, however, that both the exposed and unexposed groups had the same proportion of men without “known” risk factors, then the most Sokolowski and Wagoner could say was that Mr. Landrigan’s risk of colorectal cancer had been increased by 55% or so, above that of the risk for men similarly situated but lacking occupational asbestos exposure.  This 55% increase was the basis for the Court’s observation that the attributable risk was about 35%.  What the Court left for another day was how, if at all, this evidentiary display could support a conclusion of specific causation.  The trial and intermediate appellate courts saw clearly that Sokolowski and Wagoner had utterly failed to support their specific causation opinions, but the Supreme Court was intent upon giving them another bite at the apple:

“Without limiting the trial court on remand, its assessment of Dr. Sokolowski’s testimony should include an evaluation of the validity both of the studies on which he relied and of his assumption that the decedent’s asbestos exposure was like that of the members of the study populations. The court should also verify Dr.  Sokolowski’s assumption concerning the absence of other risk factors. Finally, the court should ascertain if the relevant scientific community accepts the process by which Dr. Sokolowski reasoned to the conclusion that the decedent’s asbestos exposure had caused his cancer.”

Id. at 420, 1088.  The Court thus did not give plaintiff’s expert witnesses a free pass for trial number two.  When faced with the prospect of having to show that Sokolowski’s and Wagoner’s ipse dixit were reaccepted by the relevant scientific community, the plaintiff dismissed her case.

As They WOE, So No Recovery Have the Reeps

May 22nd, 2013

Late last year, Justice York excluded Dr. Shira Kramer’s WOE-ful opinion that gasoline fumes from an alleged fuel-line leak caused Sean Reep to be born with cerebral palsy.  Reeps v. BMW of North America, LLC, 2012 NY Slip Op 33030(U), N.Y.S.Ct., Index No. 100725/08 (New York Cty. Dec. 21, 2012) (York, J.).  Kramer’s opinion was a parody of science, pieced together from case reports, animal studies, and epidemiologic studies that looked at exposures utterly unlike that of Mrs. Reep’s exposure.

Justice York saw through the charade.  The animal studies were largely exonerative. The case reports were of birth defects quite different from those sustained by Sean Reeps.  The epidemiologic studies were of different chemicals or chemicals at levels very different from those experienced by Mrs. Reeps. Plaintiffs’ expert witnesses ignored established principles of teratology in claiming late-term birth defects to have been causally related to early term exposures. Plaintiffs’ expert witnesses gave a convincing presentation of how not to do science, and why judicial gatekeeping is necessary.  SeeNew York Breathes Life into Frye Standard – Reeps v. BMW” (Mar. 5, 2013).

Justice York clearly articulated that the “plaintiff’s burden to prove the methodology applied to reach the conclusions will not be rejected by specialists in the field.”  Reeps, slip op. at 11.  The trial court recognized that under the New York state version of Frye, the court must determine whether plaintiffs’ expert witnesses are faithfully applying a methodology, such as the Bradford Hill criteria, or whether they are they are “pay[ing] lip service to them while pursuing a completely different enterprise.”  Id.  Justice York recognized that the court must examine a proffered opinion to determine whether it “properly relates existing data, studies or literature to the plaintiff’s situation, or whether, instead, it is connected to existing data only by the ipse dixit of the expert.” Id. (internal quotations omitted).

Plaintiffs were unhappy with Justice York’s decision, and their counsel moved for reconsideration, positing only 15 supposed errors or misunderstandings in the opinion. On May 10, the trial court denied the motion for reconsideration and further explicated the scientific deficiencies of plaintiffs’ witnesses’ opinions.

The trial court was unimpressed:

“In general, attorney for plaintiffs misrepresents the substance of this court’s Decision. The court did not prefer conclusions of defendants’ experts to that of plaintiffs – disagreement among experts is to be expected, since causation analysis involves professional judgment in interpreting data and literature. An expert opinion is precluded when it is reached in violation of generally accepted scientific principles. The court determined that Drs. Kramer and Frazier did not follow generally accepted scientific methodology.”

Reeps, 2013 NY Slip Op 31055(U) at 2 (Opinion on Motions to Reargue, to Renew, and for Oral Hearing) (May 10, 2013).

The court noted that the plaintiffs’ witnesses’ novel claim that low-level gasoline vapor inhalation causes birth defects, a claim that had escaped the attention of all other scientists and regulatory agencies, cried out for judicial intervention.  Id. at 3.

The court also rebuffed the claim that plaintiffs’ witnesses, Shira Kramer and Linda Frazier, had followed the Bradford Hill guidelines:

 “These guidelines are employed only after a study finds an association to determine whether that association reflects a true causal relationship.”

Id. at 5 (quoting Federal Judicial Center, National Research Council, Reference Manual on Scientific Evidence at 598-599 (3d ed. 2011)) (emphasis in the original). Kramer and Frazier never got off the dime with the Bradford Hill guidelines.

In considering the plaintiffs’ motions, the trial court also had occasion to revisit the assertion that “weight of the evidence” (WOE) substituted for, or counted as, a scientific basis for a conclusion of causality:

“The metaphorical use of the term is, if nothing else, ‘a colorful way to say the body of evidence we have examined and judged using a method we have not described but could be more or less inferred from a careful between-the-lines reading of our paper’.”

Id. at 5 (quoting Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546-47 (2005).

Unmoved by the sophistical hand waving, the court emphasized that Kramer and Frazier had confused “suggestive” evidence with “conclusions,” and they had misrepresented the meaning and significance of threshold limit values.  All in all, a convincing demonstration of the need for, and the judicial competence to carry out, gatekeeping of expert witness opinion testimony.

Where Are They Now? Marc Lappé and the Missing Data

May 19th, 2013

The recrudescence of silicone “science” made me wonder where some of the major players in the silicone litigation are today. Some of the plaintiffs’ expert witnesses were characters who gave the litigation “atmosphere.”

Marc Alan Lappé was an experimental pathologist, who testified frequently for plaintiffs in toxic exposure cases.  He founded an organization, The Center for Ethics & Toxics (CETOS), to serve as platform for his advocacy activities.  Lappé was a new-age scientist, and an author of popular books on toxic everything:  Chemical Deception: The Toxic Threat to Health and the Environment, and Against the Grain: Biotechnology and the Corporate Takeover of Your Food. When the silicone-gel breast implant litigation went viral, or immunologic, Lappé was embraced by the silicone sisters and their lawyers as one of their leading immunology guys. Lappé, a revolutionary, obliged and produced another pop science classic: Marc Lappé, The Tao Of Immunology: A Revolutionary New Understanding Of Our Body’s Defenses (1997).

Lappé jumped in to the silicone litigation early.  He supported autoimmune claims, as well as the dubious claim that polyurethane-covered breast implants caused or accelerated breast cancer.  Livshits v. Natural Y Surgical Specialties, Inc., 1991 WL 261770 (S.D.N.Y. Nov. 27, 1991).  In depositions and in trial testimony, Lappé was combative and evasive, but when he wanted to be clear, he could be clear enough:

“It’s my opinion that silicone directly or indirectly can precipitate an activated immune state in such women that can lead to an autoimmune condition.”

Lappé Dep. 44:19-22 (Aug. 21, 1995), in Roden v. Medical Engineering Corp., No. 94-02-103, in District Court of Wise County, Texas, 271st Judicial District.

Plaintiffs also offered Lappé as an ethicist, in what was an obvious attempt to turn personal injury cases into passion plays and to raise the emotional temperature of the court rooms.  Plaintiffs were able to get away with such nonsense in some state court cases, but the federal judges generally would not abide expert witnesses on ethics.  See, e.g., Switzer v. McGhan Medical Corp., CV 94-P-14229-S, Transcript at 96-98, N.D. Ala. (Jan. 4, 1996) (Pointer, J.) (noting that Lappé would not be permitted to testify that the defendant’s conduct was unethical or unconscionable). Ironically, Lappé would become ensnared by an article, the publication of which was surrounded in ethical controversy.

Although Lappé had some experience in experimental immunology, he had no background in silicone.  Undaunted, he set about to publish a work of science fiction.  Marc Lappé, “Silicone-reactive disorder: a new autoimmune disease caused by immunostimulation and superantigens,” 41 Medical Hypotheses 348 (1993).  Lappé went on to find some researchers with whom he could join forces, and in 1993, he and his co-authors published an article based upon what was ostensibly bench research on silicone immunology. Alas, Lappé did not really know the other authors, who were pitching their immunological screening test to plaintiffs’ support groups and to plaintiffs’ lawyers.  Lappé signed up as a co-author without knowing the authors’ marketing plan, and without ever having seen the underlying data and statistical analyses. Given his credentials as a bioethicist, the lapse was remarkable. Lappé learned only through his involvement as an expert witness that his coauthors had been warned by the Food and Drug Administration about unlawful marketing of their “test,” and that some of his co-authors were involved in litigation against Bristol-Myers Squibb.  Although the article was clearly intended to support both the marketing of the test, the litigation that would have benefited his coauthors directly, as well as Lappé’s testimonial adventures, the article contained no conflict of interest disclosures. Laurence Wolf, Marc Lappé, Robert Peterson, and Edward Ezrailson, “Human immune response to polydimethylsiloxane (silicone): screening studies in a breast implant population,” 7 Faseb J. 1265 (1993).

Lappé was an advocate, but he was not stupid.  The late Chuck Walsh took some of Lappé’s early depositions in the breast implant litigation, and pressed him on whether he had seen or had access to the underlying data. Lappé Dep. Roden at 94:4 -21 (Aug. 21, 1995).  Lappé also acknowledged that he had been unaware that the data presented in the published paper was truncated from the data originally obtained in the study. Id. at 108:19 – 109:7.  Lappé bristled, as well he should, at these challenges to his ethical bona fides.  He apparently requested  the underlying data on more than one occasion, but his colleagues would not share the data with him:

Question:  I want to ask you, did you ever get the basic raw data?

Answer:    That was asked and answered as recently as three weeks ago.  And the same answer applies today:  No.  I had asked for it.  It was not give[n] to me.  I have asked for it again.  It’s not been given to me.

Lappé  Dep. at 172:9-14 (Mar. 21, 1996), in Wolf v. Surgitek, Inc., No. 92-60186, 113th Judicial District, District Court of Harris County, Texas.

In early 1998, before Judge Pointer’s neutral expert witnesses delivered their reports in the multi-district proceedings, I traveled to Gualala, California, to take Lappé’s deposition in Page v. Bristol-Myers Squibb Co., No. JCCP-2754-03740, California Superior Court, County of San Diego (Jan. 19, 1998).  I recall the little coastal town of Gualala well.  The hotel, restaurant, and even the deposition room were infested with fruit flies, no doubt because pesticides were banned under Lappé’s influence.  When I asked Lappé whether he had changed his views in any way, he gracefully backed away from his previous testimony:

“I believe that the current evidence, the weight of evidence suggests that the antibodies that are formed in women, perhaps in excess of their background levels for IGG antibody, may not have a specificity towards silicone itself as an antigen but may bind preferentially to silicone and therefore given nonspecific binding results such as those as the Emerald Labs detected in their plate bioassay.   I think their evidence does not presently weigh in favor of considering silicone by itself as an antigen.”

Lappé  Dep. Wolf at 100: 5-18.  Later that year, 1998, Tim Pratt extracted further concessions from Lappé, in a Mississippi case.  Lappé acknowledged that the MDL court’s neutral expert witnesses had done a “reasonably good job,” and that he agreed with them that there was not consistent evidence to support the claim that silicone caused autoimmune disease.  Lappé Dep. at 26:1-9 (Dec. 17, 1998), in Brassell v. Medical Engineering Corp., Case No. 251-96-1074 CIV, Hinds County Circuit Court, Mississippi.

The litigation faded away, and so did Lappé.  He died in 2005. Douglas Martin, “Marc Lappé, 62, Dies; Fought Against Chemical Perils,” N.Y. Times (May 21, 2005).  Few other expert witnesses for silicone plaintiffs had the intellectual integrity to confess error.  I hope he has found his missing data.

IARC and Cranor’s Apologetics for Flawed and Fallacious Science

May 13th, 2013

In his recent contribution to the Center for Progressive Reform’s symposium on the Milward case, Professor Cranor suggests that the International Agency for Research on Cancer (IARC) uses weight of the evidence (WOE) in its carcinogenicity determinations.  See Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” PDF 3 Wake Forest J. L. & Policy 105 (2013)[hereinafter cited as Cranor]  Cranor’s suggestion is demonstrably wrong.

The IARC process is described in several places, but the definitive presentation of the IARC’s goals and methods is set out in a document known as the “Preamble.”   World Health Organization, IARC Monographs on the Evaluation of Carcinogenic Risks to Humans — Preamble (2006) [cited herein as Preamble]. There is no mention of WOE in the Preamble.

The IARC process consists upon assessments of carcinogenicity of  substances or exposure circumstances, and categorization into specified groups:

IARC Category

Verbal Description

IARC “Findings”

Group 1 [Known] Carcinogenic to humans

111

Group 2A Probably carcinogenic to humans

65

Group 2B Possibly carcinogenic to humans

274

Group 3 Not classifiable as to its carcinogenicity to humans

504

Group 4 Probably not carcinogenic to humans

1

The IARC operative definitions that a substance to a category are highly stylized and unique to IARC.  The definitions do not coincide with ordinary language definitions or general scientific usage.  Only one substance is categorized as “probably not carcinogenic to humans” is Caprolactam.

Alas, oxygen, nitrogen, carbon dioxide, sugar, table salt, water, and many other exposures we all experience, and even require, do not make it to “probably not carcinogenic.”  This fact should clue in the casual reader that the IARC classifications are greatly influenced by the precautionary principle.  There is nothing wrong with this influence, as long as we realize that IARC categorizations do not necessarily line up with scientific determinations.

Cranor attempts to exploit IARC classifications and their verbiage, but in doing so he misrepresents the IARC enterprise.  For instance, his paper for the CPR symposium strongly suggests that a case involving a Group 2A carcinogen would necessarily satisfy the preponderance of evidence standard common in civil cases because the IARC denominates the substance or exposure circumstance as “probably carcinogenic to humans.”  This suggestion is wrong because of the technical, non-ordinary language meanings given to “probably” and “known.” The IARC terminology involves a good bit of epistemic inflation.  Consider first what it means for something to be “probably” a carcinogen:

“Group 2.

This category includes agents for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents are assigned to either Group 2A (probably carcinogenic to humans) or Group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and mechanistic and other relevant data. The terms probably carcinogenic and possibly carcinogenic have no quantitative significance and are used simply as descriptors of different levels of evidence of human carcinogenicity, with probably carcinogenic signifying a higher level of evidence than possibly carcinogenic.”

Preamble at 22, § 6(d).  So probably does not mean “more likely than not,” and “possibly” means something even less than some unspecified level of probability.  An IARC classification of 2A will not help a plaintiff reach the jury because it does not connote more likely than not.

A category I finding is usually described as a “known” carcinogen, but the reality is that there may still be a good deal of epistemic uncertainty over the classification:

“Group 1: The agent is carcinogenic to humans.

This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent may be placed in this category when evidence of carcinogenicity in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity.”

Preamble at 22, § 6(d).

Again, the precautionary nature of the categorization should  be obvious.  Knowledge of carcinogenicity is equated to sufficient evidence, which leaves open whether there is a body of contradictory evidence.  The IARC’s definition of “sufficiency” does place some limits on what may affirmatively count as “sufficient” evidence:

Sufficient evidence of carcinogenicity: The Working Group considers that a causal relationship has been established between exposure to the agent and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence. A statement that there is sufficient evidence is followed by a separate sentence that identifies the target organ(s) or tissue(s) where an increased risk of cancer was observed in humans. Identification of a specific target organ or tissue does not preclude the possibility that the agent may cause cancer at other sites.”

Preamble (2006), at 19, § 6(a).  This definition hardly helps Cranor in his attempt to defend bad science.  Scientists may reasonably disagree over what is sufficient evidence, but the IARC requires, at a minimum, that “chance, bias and confounding could be ruled out with reasonable confidence.”  Id.  Ruling out chance, of course, introduces considerations of statistical significance, multiple comparisons, and the like.  Ruling out bias and confounding with confidence is an essential part of the IARC categorization process,  just as it is an essential part of the scientific process.  Reviewing the relied upon studies for whether they ruled out chance, bias, and confounding, was precisely what the Supreme Court did in General Electric v. Joiner, and what the current statute, Federal Rule of Evidence 702, now requires.  Failing to review the extant epidemiologic studies for their ability to rule out chance,  bias, and confounding is exactly what the district court judge did in Milward.

IARC and Conflicts of Interest – Nemo iudex in causa sua

Holding out the IARC process as exemplifying scientific method involves other controversial aspects of the process.  IARC’s classifications are determined by “working groups” that review the available scientific literature on an agent’s carcinogenicity.  Members of these of groups are selected in part because they have “have published significant research related to the carcinogenicity of the agents being reviewed… .” Preamble at 5.  See also Vincent Cogliano, Robert A. Baan, Kurt Straif, et al., “The science and practice of carcinogen identification and evaluation,” 112 Envt’l Health Persp. 1269, 1273 (2004).

While the IARC tries hard to avoid apparent financial conflicts of interest, its approach to selecting voting members of the working groups invites a more pervasive, more corrupting influence:  working group members must vote on the validity of their own research.  The prestige of their own research will thus be directly affected by the group’s vote, as well as by the analysis in the resulting IARC monograph.  Many writers have criticized this approach.  See, e.g., Paolo Boffetta, Joseph McLaughlin, Carlo La Vecchia, Robert Tarone, Loren Lipworth, and William Blot, “A further plea for adherence to the principles underlying science in general and the epidemiologic enterprise in particular,” 38 Internat’l J. Epidemiol. 678 (2009); Michael Hauptmann & Cecile Ronckers, “A further plea for adherence to the principles underlying science in general and the epidemiologic enterprise in particular,” 39 Internat’l J. Epidemiol. 1677 (2010).

Notably absent from Cranor’s defense of using bad science and incomplete evidence is his disregard of systematic reviews and meta-analysis.  Although “agency” science is a weak shadow of the real thing, even federal agencies have come to see the importance of using principles of systematic reviews in their assessments of science for policy purposes.  See, e.g., FDA, Guidance for industry evidence-based review system for the scientific evaluation of health claims (2009).  Currently underway at the National Toxicology Program’s Office of Health Assessment and Translation (OHAT) is an effort to implement systematic review methodology in the Program’s assessments of potential human health hazards.  That the NTP is only now articulating an OHAT Evaluation Process, incorporating principles of systematic review, suggests that something less rigorous has been used previously.  See Federal Register Notice , 78 Fed. Reg. 37 (Feb. 25, 2013).

No one should be fooled by Cranor’s attempt to pass off  precautionary judgments as scientific determinations of causality.

Cranor’s Defense of Milward at the CPR’s Celebration

May 12th, 2013

THE RISE OF THE UBER-EXPERT

One of the curious aspects of the First Circuit’s decision in Milward was the court’s willingness to tolerate a so-called weight of the evidence (WOE) assessment of a causal issue by toxicologist Martyn Smith, when much of the key evidence did not involve toxicology.  In defending WOE, Professor Cranor argues that scientists (such as those in an International Agency for Research on Cancer (IARC) working group) evaluate evidence from different lines of research into a single, evaluative judgment of the likelihood of causation.  The lines of evidence may involve animal toxicology, cell biology, epidemiology or other disciplines:

“In drawing conclusions from the data to a theory or explanation, it is necessary for scientists to evaluate the quality of different lines of evidence, to integrate them and to assess what conclusion the lines of evidence most likely supports and how well they do so in comparison with alternative explanations.”

See Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” PDF 3 Wake Forest J. L. & Policy 105, 117 (2013)[hereinafter cited as Cranor].

Presumably, the scientists will come to the table with the training, experience, and expertise appropriate to their discipline.  The curious aspect of Cranor’s defense is that Martyn Smith’s expertise did not encompass many of  the lines of research advanced, in particular, the epidemiologic.  Of course, in the real world of science, the assessment of the “lines” of evidence is conducted by scientists from the different, relevant disciplines.  In the make-believe world of courtroom science, the collaboration breaks down when a single expert witness, such as Smith, offers opinions outside his real expertise.  Because the law is not particularly demanding with respect to the extent and scope of expertise, Smith was able to hold forth not only on animal experiments, but on human epidemiologic studies.  The defense was able to show that Smith disregarded basic principles of epidemiology, but the First Circuit agreed with Cranor, that consideration of Smith’s disregard should be kicked down the road, to the jury for its consideration.

As a practical matter, in today’s world of highly specialized scientific disciplines, it is simply not possible for an expert witness to address evidence from all the fields needed to evaluate the multiple lines of evidence relevant to a causal issues.  We should rightfully be skeptical of a single expert witness who claims the ability to weigh disparate lines of evidence to synthesize a judgment of causation.  Of course, this is how science is practiced in a courtroom, not in a university.

REJECTION OF EVIDENCE HIERARCHY

Another salient feature of Cranor’s argument is his insistence that there is no hierarchy of evidence.  Cranor’s argument is ambiguous between rejecting a hierarchy of disciplines or a hierarchy within epidemiology itself .  Cranor never actually argues directly for a leveling of all types of epidemiologic studies, and as we will see, his one key citation (repeated three times) is for the hierarchy of disciplines:  epidemiology, molecular biology, genetics, pathology, and the like.

Clearly there are instances of causation determined without epidemiology.  The Henle-Koch postulates after all were developed to assess causation by infection biological organisms.  And in some instances, very suggestive evidence of viral causes of cancer has been attained before confirming epidemiologic evidence.  If there is a meaningful population attributable risk, however, epidemiology should be able to confirm the suspicions of virology or molecular biology.

Cranor repeatedly cites a meeting report of a workshop held in Washington, D.C., in 2003.  See also Michael Green, Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549, 564 (3d ed. 2011) (citing same meeting report).  Cranor’s citations and quotations misleadingly suggest that the report was an official function of the National Cancer Institute (NCI), and that the published report was an official pronouncement of the NCI.  Neither suggestion is true.

Cranor praises the Circuit’s Milward decision for adopting his argument and citing the meeting report for his claim that there is no hierarchy of evidence:

“Citing National Cancer Institute scientists, [the Circuit] also added that “[t]here should be no such hierarchy” of evidence for carcinogenicity as between epidemiological and some other kinds of evidence.100 These scientists and many distinguished scientific committees would not require epidemiological studies to support claims that a substance can cause adverse effects in humans or place certain other a priori constraints on evidence.101

Cranor at 119 (citing Milward, at 17, citing Michele Carbone, et al., Modern Criteria to Establish Human Cancer Etiology, 64 Cancer Research 5518, 5522 (2004)).

Given the emphasis that Cranor places upon the Carbone article, it is worth taking a close look.  Carbone’s article was styled “Meeting Report.” See also Michelle Carbone, Jack Gruber, and May Wong, “Modern criteria to establish human cancer etiology,” 14 Semin. Cancer Biol. 397 (2004).  The article was a report of a workshop, not an official NCI publication.  The NCI hosted the meeting; the meeting was not sponsored by the NCI, and the published meeting report was not an official statement of the NCI.  Notably, the report appeared in Cancer Research as a paid advertisement, not in the Journal of the National Cancer Institute as a scholarly article.

In assessing the citation, readers should consider the authors of the meeting report.  Importantly, the discipline of epidemiology was not strongly represented; most of the chairpersons and scientists in attendance were pathologists, cell biologists, virologists, and toxicologists.  The authors of the meeting report reflect the interests and focus of the scientists in attendance.  The lead author was Michele Carbone, a pathologist at Loyola University Chicago.  Some may recognize Carbone as one of the proponents of Simian Virus 40 as a cause of mesothelioma, a hypothesis that has not fared terribly well in the crucible of epidemiologic science.  Other authors included:

George Klein, with the Microbiology and Tumor Biology Center, Karolinska Institute, in Stockholm,

Jack Gruber, a virologist with the Cancer Etiology Branch of the NCI, and

May Wong, a biochemist, with the NCI.

The basis of the citation to Carbone’s meeting report is an informal discussion session that took place at the meeting.  Those in attendance broke out into two groups, one chaired by Brook Mossman, a pathologist, and the other group chaired by Dr. Harald zur Hausen, a famous virologist who discovered the causal relationship between human papilloma virus and cervical cancer.

The meeting report included a narrative of how the two groups responded to twelve questions. Cranor’s citation to this article is based upon one sentence in Carbone’s report, about one of twelve questions:

6. What is the hierarchy of state-of-the-art approaches needed for confirmation criteria, and which bioassays are critical for decisions: epidemiology, animal testing, cell culture, genomics, and so forth?

There should be no such hierarchy.  Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.”

Carbone at 5522.  Considering the fuller context of the meeting and this report, there is nothing particularly surprising about this statement.  It is not clear that the full question and answer even remotely supports the weight that Cranor places upon it.  Clearly, Cranor’s quotations are unduly selective.  For instance, Cranor does not discuss the disagreement among those in attendance over criteria for different carcinogens:

“2. Should the criteria be the same for different agents (viruses, chemicals, physical agents, promoting agents versus initiating DNA-damaging agents)?

There were different opinions. Group 1 debated this issue and concluded that the current listing of criteria should remain the same because we lack sufficient evidence to develop a separate classification. Group 2 strongly supported the view that it is useful to separate the biological or infectious agents from chemical and physical carcinogens due to their frequently entirely different mode of action.”

Carbone at 5521.

Perhaps Cranor did not think a legal audience would be interested in the emphasis given to epidemiology.  The authors of the meeting report noted that the importance to epidemiology for general causation, but its limitations for determining specific causation:

“Concerning the respective roles of epidemiology and molecular pathology, it was noted that epidemiology allows the determination of the overall effect of a given carcinogen in the human population (e.g., hepatitis B virus and hepatocellular carcinoma) but cannot prove causality in the individual tumor patient.”

Carbone at 5518.  The report did not state that epidemiology was not necessary for confirmation of carcinogenicity in the species of interest (humans). The meeting report emphasized the need to integrate the findings of epidemiology and of molecular biology; it did not urge that epidemiology be ignored or disregarded:

“A general consensus was often reached on several topics such as the need to integrate molecular pathology and epidemiology for a more accurate and rapid identification of human carcinogens.”

Carbone at 5518.

“Ideally, before labeling an agent as a human carcinogen, it is important to have epidemiological, experimental animals, and mechanistic evidences (molecular pathology). Not all of the evidence is always available, and, at times, it may be prudent to identify a human carcinogen earlier rather than later.”

Carbone at 5519 (emphasis added).  Unlike Cranor, the authors of the meeting report distinguish between instance when they are acting on a scientific determination of causation, and a precautionary assessment that proceeds prudentially “as if” causation is determined.

Against this fuller context, Cranor’s characterization of the meeting report, and his limited citations and quotations can be seen to be misleading:

“The First Circuit wisely followed the Etiology Branch of the National Cancer Institute, which sponsored a workshop on cancer causation that concluded ‘there should be no . . . hierarchy’ among epidemiology, animal testing, cell culture, genomics, and so forth.164

Cranor at 129.  The suggestion that the informal workshop statement represented the views of the Etiology Branch is bogus.  Not content to misrepresent twice, Cranor comes back for yet a third misleading citation to this report:

“A further conclusion, already noted, is that scientific experts in court should be permitted to rely upon all scientifically relevant evidence in nondeductive arguments to draw conclusions about causation.209 “There should be no such hierarchy” of evidence, as the Milward court put it, following scientists conducting a workshop at the National Cancer Institute.210 This decision stands as an important corrective to the views of some other appellate and district courts concerning the scientific foundation for expert testimony in toxic tort cases.”

Cranor at 135 (emphasis in original) (citing Carbone for a third time).  To see how misleading is Cranor’s suggestion that scientists should be permitted upon all scientific relevant evidence, consider the meeting report’s careful admonition about the lack of validity of some animal models and mechanistic research:

“Moreover, carcinogens and anticarcinogens can have different effects in different situations.  As shown by the example of addition of β-carotene in the diet, β-carotene has chemopreventive effects in many experimental systems, yet it appears to have increased the incidence of lung cancer in heavy smokers. Animal experiments can be very useful in predicting the carcinogenicity of a given chemical. However, there are significant differences in susceptibility among species and within organs in the same species, and differences in the metabolic pathway of a given chemical among human and animals could lead to error.”

Carbone at 5521.  Obviously relevance is conditioned upon validity, a relationship that is ignored, suppressed, and dismissed in Cranor’s article.

The devil, or the WOE, comes from with ignoring the details.

Professor Sanders’ Paen to Milward

May 7th, 2013

Deconstructing the Deconstruction of Deconstruction

Some scholars have suggested that the most searching scrutiny of scientific research takes place in the courtroom.  Barry Nace’s discovery of the “mosaic method” notwithstanding, lawyers rarely contribute new findings, which I suppose supports Professor Sanders’ characterization of the process as “deconstructive.”  The scrutiny of courtroom science is encouraged by the large quantity of poor quality opinions, on issues that must be addressed by lawyers and their clients who wish to prevail.  As philosopher Harry Frankfurt described this situation:

“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.  Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”

Harry Frankfurt, On Bullshit 63 (Princeton Univ. 2005).

This unfortunate situation would seem to be especially true for advocacy science that involves scientists who are intent upon influencing public policy questions, regulation, and litigation outcomes.  Some of the most contentious issues, and tendentious studies, take place within the realm of occupational, environmental, and related disciplines. Sadly, many occupational and environmental medical practitioners seem particularly prone to publish in journals with low standards and poor peer review.  Indeed, the scientists and clinicians who work in some areas make up an insular community, in which the members are the peer reviewers and editors of each other’s work.  The net result is that any presumption of reliability for peer-reviewed biomedical research is untenable.

The silicone gel-breast implant litigation provides an interesting case study of the phenomenon.  Contrary to post-hoc glib assessments that there was “no” scientific evidence offered by plaintiffs, the fact is that there was a great deal.  Most of what was offered was published in peer-reviewed journals; some was submitted by scientists who had some credibility and standing within their scientific, academic communities:  Gershwin, Kossovsky, Lappe, Shanklin, Garrido, et al.  Lawyers, armed with subpoenas, interrogatories, and deposition notices, were able to accomplish what peer reviewers could not.  What Professor Sanders and others call “deconstruction” was none other than a scientific demonstration of study invalidity, seriously misleading data collection and analysis, and even fraud.  See Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Some scientific publications are motivated almost exclusively by the goal of influencing regulatory or political action.  Consider the infamous meta-analysis by Nissen and Wolski, of clinical trials and heart attack among patients taking Avandia.  Steven Nissen & Kathy Wolski, “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007). The New England Journal of Medicine rushed the meta-analysis into print in order to pressure the FDA to step up its regulation of post-marketing surveillance of licensed medications.  Later, better-conducted meta-analyses showed how fragile Nissen’s findings were.  See, e.g., George A. Diamond, MD, et al., “Uncertain Effects of Rosiglitazone on the Risk for Myocardial Infarction and Cardiovascular Death,” 147 Ann. Intern. Med. 578 (2007); Tian, et al., “Exact and efficient inference procedure for meta-analysis and its application to the analysis of independent 2 × 2 tables with all available data but without artificial continuity correction,” 10 Biostatistics 275 (2008).  Lawyers should not be shy about pointing out political motivations of badly conducted scientific research, regardless of authorship or where published.

On the other hand, lawyers on both sides of litigation are prone to attack on personal bias and potential conflicts of interest because these attacks are more easily made, and better understood by judges and jurors.  Perhaps it is these “deconstructions” that Professor Sanders finds overblown, in which case, I would agree.  Precisely because jurors have difficulty distinguishing between allegations of funding bias and validity flaws that render studies nugatory, and because inquiries into validity require more time, care, analysis, attention, and scientific and statistical learning, pretrial gatekeeping of expert witnesses is an essential part of achieving substantial justice in litigation of scientific issues.  This is a message that is obscured by the recent cheerleading for the Milward decision at the litigation industry’s symposium on the case.

Deconstructing Professor Sanders’ Deconstruction of the Deconstruction in Milward

A few comments about Professor Sanders’ handling of the facts of Milward itself.

The case arose from a claim of occupational exposure to benzene and an outcome known as APL (acute promyelocytic leukemia), which makes up about 10% of AML (acute myeloid leukemia).  Sanders argues, without any support, that APL is too rare for epidemiology to be definitive.  Sanders at 164.  Here Sanders asserts what Martyn Smith opined, and ignores the data that contradicted Smith.  At least one of the epidemiologic studies cited by Smith was quite large and was able to discern small statistically significant associations when present.  See, e.g., Nat’l Investigative Group for the Survey of Leukemia & Aplastic Anemia, “Countrywide Analysis of Risk Factors for Leukemia and Aplastic Anemia,” 14 Acta Academiae Medicinae Sinicae (1992).  This study found a crude odds ratio of 1.42 for benzene exposure and APL (M3). The study had adequate power to detect a statistically significant odds ratio of 1.54 between benzene and M2a.  Of course, even if one study’s “power” were low, there are other, aggregative strategies, such as meta-analysis, available.  This was not a credibility issue concerning Dr. Smith, for the jury; Smith’s opinion turned on an incorrect and fallacious analyses that did not deserve “jury time.”

The problem is, according to Sanders one of “power.”  In a lengthy footnote, Sander explains what “power” is, and why he believes it is a problem:

“The problem is one of power. Tests of statistical significance are designed to guard against one type error, commonly called Type I Error. This error occurs when one declares a causal relationship to exist when in fact there is no relationship, … . A second type of error, commonly called Type II Error, occurs when one declares a causal relationship does not exist when in fact it does. Id. The “power” of a study measures its ability to avoid a Type II Error. Power is a function of a study’s sample size, the size of the effect one wishes to detect, and the significance level used to guard against Type I Error. . Because power is a function of, among other things, the significance level used to guard against Type I errors, all things being equal, minimizing the probability of one type of error can be done only by increasing the probability of making the other.  Formulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data.

Because the power of any test is reduced as the incidence of an effect decreases, Type II threats to causal conclusions are particularly relevant with respect to rare events. Plaintiffs make a fair criticism of randomized trials or epidemiological cohort studies when they note that sometimes the studies have insufficient power to detect rare events. In this situation, case-control studies are particularly valuable because of their relatively greater power. In most toxic tort contexts, the defendant would prefer to minimize Type I Error while the plaintiffs would prefer to minimize Type II Error. Ideally, what we would prefer are studies that minimize the probability of both types of errors. Given the importance of power in assessing epidemiological evidence, surprisingly few appellate opinions discuss this issue. But see DeLuca v. Merrell Dow Pharm., Inc., 911 F.2d 941, 948 (3d Cir. 1990), which contains a good discussion of epidemiological evidence. The opinion discusses the two types of error and suggests that courts should be concerned about both. Id. Unfortunately, neither the district court opinion nor the court of appeals opinion in Milward discusses power.”

Sanders at 164 n.115 (internal citations omitted).

Sanders is one of the few law professors who almost manages to describe statistical power correctly.  Calculating and evaluating power requires pre-specification of alpha (our maximum tolerated Type I error), sample size, and an alternative hypothesis that we would want to be able to identify at a statistically significant level.  This much is set out in the footnote quoted above.

Sample size, however, is just one of the factors that determine a study’s variance.  More important, Sanders’ invocation of power to evaluate the exonerative quality of a study has been largely rejected in the world of epidemiology.  His note that “[f]ormulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data” is largely beside the point because power calculations are mostly confined to determining sample size before a study is conducted.  After the data are collected, studies are evaluated by their point estimates and their corresponding confidence intervals. See, e.g., Vandenbroucke, et al., “Strengthening the reporting of observational studies in epidemiology (STROBE):  Explanation and elaboration,” 18 Epidemiology 805, 815 (2007) (Section 10, sample size) (“Do not bother readers with post hoc justifications for study size or retrospective power calculations. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained.”) (emphasis added). See also “Power in the Courts — Part Two” (Jan. 21, 2011).
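To make the STROBE point concrete, here is a minimal sketch, with hypothetical cell counts, showing how the crude odds ratio and its Wald 95% confidence interval fall directly out of a 2 x 2 case-control table. The width of the interval, not a retrospective power calculation, tells the reader how much precision the study actually achieved.

```python
# A crude odds ratio and Wald 95% confidence interval from a 2 x 2
# case-control table.  The counts are hypothetical, for illustration only.
import math

a, b = 30, 70     # cases:    exposed, unexposed
c, d = 45, 155    # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```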

Type II error is important in the evaluation of evidence, but it requires a commitment to a specific alternative hypothesis.  That alternative can always be set closer and closer to the null hypothesis of no association in order to conclude, as some plaintiffs’ counsel would want, that all studies lack power (except of course the ones that turn out to support their claims).  Sanders’ discussion of statistical power ultimately falters because claiming a lack of power without specifying the size of the alternative hypothesis is unprincipled and meaningless.
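The point can be illustrated with a back-of-the-envelope calculation. The sketch below, with a hypothetical sample size and control exposure prevalence, computes the approximate power of a simple two-group comparison for a range of assumed alternative odds ratios; as the assumed alternative drifts toward the null, the computed “power” of the very same study collapses toward the significance level.

```python
# Approximate power of a two-group comparison of exposure proportions,
# as a function of the assumed "true" odds ratio.  Sample size and control
# exposure prevalence are hypothetical.
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(alt_or, n_per_group=500, p0=0.10, z_alpha=1.96):
    # exposure prevalence among cases implied by the assumed odds ratio
    p1 = alt_or * p0 / (1 + p0 * (alt_or - 1))
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p0 * (1 - p0) / n_per_group)
    return normal_cdf(abs(p1 - p0) / se - z_alpha)

for alt in (3.0, 2.0, 1.5, 1.2, 1.05):
    print(f"assumed alternative OR {alt:4.2f}: power ~ {approx_power(alt):.2f}")
# As the alternative approaches 1.0, "power" falls toward alpha, so the claim
# that a study "lacked power" is empty until the alternative is specified.
```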

Sanders tells us that cohorts will have less power than case-control studies, but again the devil is in the details.  Case-control studies are of course relatively more efficient in studying rare diseases, but the statistical precision of their odds ratios will be given by the corresponding confidence intervals.

What is missing from Sanders’ scholarship is a simple statement of what the point estimates and their confidence intervals are.  Plaintiffs in Milward argued that epidemiology was well-nigh unable to detect increased risks of APL, but then embraced epidemiology once Smith had manipulated and re-arranged the data in published studies.

The Yuck Factor

One of the looming problems in expert witness gatekeeping is judicial discomfort with, and inability to recount, the parties’ contentions, the studies’ data, and the witnesses’ testimony.  In a red car/blue car case, judges are perfectly comfortable giving detailed narratives of the undisputed facts, and of the conditions that give rise to discounting or excluding evidence or testimony.  In science cases, not so much.

Which brings us to the data manipulation conducted by Martyn Smith in the Milward case.  Martyn Smith is not an epidemiologist, and he has little or no experience or expertise in conducting and analyzing epidemiologic studies.  The law of expert witnesses makes challenges to an expert’s qualifications very difficult; generally courts presume that expert witnesses are competent to testify about general scientific and statistical matters.  Often the presumption is incorrect.

In Milward, Smith claimed, on the one hand, that he did not need epidemiology to reach his conclusion, but on the other hand that “suggestive” findings supported his opinion.  On the third hand, he seemed to care enough about the epidemiologic evidence to engage in fairly extensive reanalysis of published studies.  As the district court noted, Smith made “unduly favorable assumptions in reinterpreting the studies, such as that cases reported as AML could have been cases of APL.”  Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137, 149 (D. Mass. 2009), rev’d, 639 F.3d 11, 19 (1st Cir. 2011), cert. denied sub nom. U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).  Put less charitably, Smith made up data to suit his hypothesis.

The details of Smith’s manipulations go well beyond cherry picking.  Smith assumed, without evidence, that AML cases were APL cases.  Smith arbitrarily chose and rearranged data to create desirable results.  See Deposition Testimony of Dr. David Garabrant at 22 – 53, in Milward (Feb. 18, 2009).  In some studies, Smith discarded APL cases from the unexposed group, with the consequence of increasing the apparent association; he miscalculated odds ratios; and he presented odds ratios without p-values or confidence intervals.  The district court certainly was entitled to conclude that Smith had sufficiently deviated from scientific standards of care as to make his testimony inadmissible.

Regrettably, the district court did not provide many details of Smith’s reanalyses of the studies and their data.  The failure to document Smith’s deviations facilitated the Circuit’s easy generalization that the fallacious reasoning and methodology were somehow the district court’s invention.

The appellate court gave no deference to the district court’s assessment, and by judicial fiat turned methodological missteps into credibility issues for the jury.  The Circuit declared that the analytical gap was of the district court’s making, which seemed plausible enough if one read only the appellate decision.  If one reads the actual testimony, the Yuck Factor becomes palpable.

WOE Unto Bradford Hill

Professor Sanders accepts the appellate court’s opinion at face value for its suggestion that:

“Dr. Smith’s opinion was based on a ‘weight of the evidence’ methodology in which he followed the guidelines articulated by world-renowned epidemiologist Sir Arthur Bradford Hill in his seminal methodological article on inferences of causality.”

Sanders at 170 n.140 (quoting Milward, 639 F.3d at 17).

Sanders (and the First Circuit) is unclear whether WOE consists of following the guidelines articulated by Sir Arthur (perhaps Sir Austin Bradford Hill’s less distinguished brother?), or merely includes the guidelines as part of a larger process.  Not only was there no Sir Arthur, but Sir Austin’s guidelines are distinctly different from WOE in that they pre-specify the considerations to be applied.  Nowhere does the appellate court give any meaningful consideration to whether there was an exposure-response gradient shown, or whether the epidemiologic studies consistently showed an association between benzene and APL.  Had the Circuit given any consideration to the specifics of the guidelines, it would likely have concluded that the district court had engaged in fairly careful, accurate gatekeeping, well within its discretion.  (If the standard were de novo review rather than “abuse of discretion,” the Circuit would have had to confront the significant analytical gaps and manipulations in Smith’s testimony.)  Furthermore, it is time to acknowledge that Bradford Hill’s “guidelines” are taken from a speech given by Sir Austin almost 50 years ago; they hardly represent a comprehensive, state-of-the-art set of guidelines for causal analysis in epidemiology today.

So there you have it.  WOE means the Bradford Hill guidelines, except that the individual guidelines need not be considered.  And although Bradford Hill’s guidelines were offered to evaluate a body of epidemiologic studies, WOE teaches us that we do not need epidemiologic studies, especially if they do not help to establish a plaintiff’s claim.  Sanders at 168 & n.133 (citing Milward at 22-24).

What is WOE?

If WOE were not really the Bradford Hill guidelines, then what might it be? Attempting to draw a working definition of WOE from the Milward appellate decision, Sanders tells us that WOE requires looking at all the relevant evidence.  Sanders at 169.  Not much guidance there.  Elsewhere he tells us that WOE is “reasoning to the best explanation,” without explicating what such reasoning entails.  Sanders at 169 & n.136 (quoting Milward at 23, “The hallmark of the weight of the evidence approach is reasoning to the best explanation.”).  This hardly tells us anything about what method Smith and his colleagues were using.

Sanders then tells us that WOE means the whole “tsumish.” (My word; not his.)  Not only should expert witnesses rely upon all the relevant evidence, but they should eschew an atomistic approach that looks (too hard) at individual studies.  Of course, there may be value in looking at the entire evidentiary display.  Indeed, a holistic view may be needed to show the absence of causation.  In many litigations, plaintiffs’ counsel are adept in filling up the courtroom with “bricks,” which do not fit together to form the wall they claim.  In the silicone gel breast implant litigation, plaintiffs’ counsel were able to pick out factoids from studies to create sufficient confusion and doubt that there might be a causal connection between silicone and autoimmune disease.  A careful, systematic analysis, which looked at the big picture, demonstrated that these contentions were bogus.  Committee on the Safety of Silicone Breast Implants, Institute of Medicine, Safety of Silicone Breast Implants (Wash. D.C. 1999) (reviewing studies, many of which were commissioned by litigation defendants, and which collectively showed lack of association between silicone and autoimmune diseases).  Sometimes, however, taking in the view of the entire evidentiary display may obscure what makes up the display.  A piece by El Anatsui may look like a beautiful tapestry, but a closer look will reveal it is just a bunch of bottle caps wired together.

Contrary to Professor Sanders’ assertions, nothing in the Milward appellate opinion explains why studies should be viewed only as a group, or why this view will necessarily show something greater than the parts. Sanders at 170.  Although Sanders correctly discerns that the Circuit elevated WOE from “perspective” to a methodology, there is precious little content to the methodology, especially if it permits witnesses to engage in all sorts of data shenanigans or arbitrary weighting of evidence.  The quaint notion that there is always a best explanation obscures the reality that in science, and especially in science that is likely to be contested in a courtroom, the best explanation will often be “we don’t know.”

Sanders eventually comes around to admitting that WOE is perplexingly vague as to how the weighing should be done.  Id. at 170.  He also admits that the holistic view is not always helpful.  Id. at 170 & n.139 (the sum is greater than its parts, but only when the combination enhances the supportiveness of the parts, and the collective support for the conclusion at issue, etc.).  These concessions should give courts serious pause before they adopt a dissent from a Supreme Court case that has been repeatedly rejected by courts, by commentators, and ultimately by Congress in revising Rule 702.

WOE is Akin to Differential Diagnosis

The Milward opinion seems like a bottomless reserve of misunderstandings.  Professor Sanders barely flinches at the court’s statement that “The use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis.”  Milward at 18.  See Sanders at 171.  Differential “diagnosis” requires a previous demonstration of general causation, and proceeds by iterative disjunctive syllogism.  Sanders, and the First Circuit, somehow missed that this syllogistic reasoning is completely unrelated to the abductive inferences that may play a role in reaching conclusions about general causation.  Sanders revealingly tells us that “[e]xperts using a weight of the evidence methodology should be given the same wide latitude as is given those employing the differential diagnosis method.”  Sanders at 172 & n.147.  This counsel appears to be an invitation to speculate.  If the “wide latitude” to which Sanders refers means the approach of a minority of courts that allow expert witnesses to rule in differentials by speculation, and then to reach specific causation by failing to rule out idiopathic causes, then Sanders’ approach is advocacy for epistemic nihilism.

The Corpuscular Approach

Professor Sanders seems to endorse the argument of Milward, as well as Justice Stevens’ dissent in Joiner, that scientists do not assess research by looking at the validity (vel non) of individual studies, and therefore courts should not permit this approach.  Sanders at 173 & n.15.  Neither Justice Stevens nor Professor Sanders presents any evidence for the predicate assertion, which a brief tour of IARC’s less political working group reports would show to be incorrect.

The rationale for Sanders’ (and Milward’s) reductionism of science to WOE becomes clear when Sanders asserts that “[p]erhaps all or nearly all critiques of an expert employing a weight of the evidence methodology should go to weight, not admissibility.” Id. at 173 & n.155.  To be fair, Sanders notes that the Milward court carved out a “solid-body” of exonerative epidemiology exception to WOE.  Id. at 173-74.  This exception, however, does nothing other than place a substantial burden upon the opponent of expert witness opinion to show that the opinion is demonstrably incorrect.  The proponent gets a free pass as long as there is no “solid body” of such evidence showing he is affirmatively wrong.  Discerning readers will observe that this maneuver simply shifts the burden of admissibility to the opponent, and eschews the focus on methodology for a renewed emphasis upon general acceptance of conclusions.  Id.

Sanders also notes that other courts have seen through the emptiness of WOE and rejected its application in specific cases.  Id. at 174 & n.163-64 (citing Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 601-02 (D.N.J. 2002), aff’d, 68 F. App’x 356 (3d Cir. 2003), where the trial court rejected Dr. Ozonoff’s attempt to deploy WOE without explaining or justifying the mixing and matching of disparate kinds of studies with disparate results).  Sanders’ analysis of Milward seems, however, designed to skim the surface of the case in an effort to validate the First Circuit’s superficial approach.

 

Reconstructing Professor Sanders on Expert Witness Gatekeeping

May 5th, 2013

Last week, I addressed two papers from a symposium organized by the litigation industry to applaud the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).  Professor Joseph Sanders also contributed to the symposium, in a paper that is a bit more measured, scholarly, and disinterested than the other papers in the group.  Joseph Sanders, “Milward v. Acuity Specialty Products Group: Constructing and Deconstructing Sciences and Law in Judicial Opinion,” 3 Wake Forest J. L & Policy 141 (2013).  Still, the industry sponsor, the so-called Center for Progressive Reform, has reasons to be satisfied with the result.

Sanders argues that the Milward opinion is important because it highlights what he characterizes as a “rhetorical conflict that has been ongoing, often below the surface, since the United States Supreme Court’s 1993 opinion in Daubert v. Merrell Dow Pharmaceuticals, Inc.”  Id. at 142.  The argument is overly kind to the judiciary.  There has not been so much a rhetorical conflict as a reactionary revolt against evidence-based decision making in the federal courts.  Milward simply represents the high-water mark of this revolt against law and science.  See, e.g., “David Bernstein on the Daubert Counterrevolution” (April 19, 2013).

Sanders invokes the ghost of Derrida and his black brush of deconstruction to suggest that the Daubert process is nothing more than the unraveling of the scientific enterprise, with the goal of showing that it is arbitrary and subjective.  Sanders at 143-44.  According to Sanders, radical deconstruction pushes towards a leveling of “distinctions between fact and faction … more akin to poetry and music than to evidence and argument.” Id. at 145 (citing Stephan Fuchs & Steven Ward, “What Is Deconstruction and Where and When Does It Take Place? Making Facts in Science, Building Cases in Law,” 59 Am. Soc. Rev. 481, 482-83 (1994)).

Lawyers sometimes realize the cost of radical deconstruction is a nihilism that undermines their own credibility and their ability to claim or defend factual assertions.  Sometimes, of course, lawyers ignore these considerations and talk out of both sides of their mouths. In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2006 WL 4507859, *33 (N.D. Ohio 2006) (“According to plaintiffs, the rate of PD [Parkinson’s disease] mortality is so poor a proxy for measuring the rate of overall PD incidence, that the Coggon study proves nothing. In the next breath, however, plaintiffs set forth an unpublished statistical analysis (by Dr. Wells) of PD mortality data collected by the National Center for Health Statistics, arguing it proves that welders, as a group, suffer earlier onset of PD than the general population.77 Of course, the devil is in the details, discussion of which is beyond the scope of this opinion (and perhaps beyond the scope of understanding of the average juror),78 but this example shows how hard it is to tease out whether the limitations of a given study make it unreliable under Daubert.”).

Sanders gives a nod to Sheila Jasanoff, whom he quotes with apparent approval:

“[t]he adversarial structure of litigation is particularly conducive to the deconstruction of scientific facts, since it provides both the incentive (winning the lawsuit) and the formal means (cross-examination) for challenging the contingencies in their opponents’ scientific arguments.”

Sanders at 147 (quoting Sheila Jasanoff, “What Judges Should Know About the Sociology of Science,” 32 Jurimetrics J. 345, 348 (1992)).

With his acknowledgment that adversarial litigation of scientific issues involves  “deconstruction,” or just good, old-fashioned rhetorical excess, Sanders points to the Daubert trilogy as the Supreme Court’s measured response to the problem.

Sanders is, however, not entirely happy about the judiciary’s attempt to rein in the rhetorical excesses of adversarial litigation of scientific issues. Daubert barely scratched the surface of the scientific validity and reliability issues in the Bendectin record, but Sanders asserts that Chief Justice Rehnquist went too far in looking under the hood of the lemon that Joiner’s expert witnesses were trying to sell:

“perhaps Chief Justice Rehnquist erred in the other direction in Joiner when he systematically reviewed the animal studies and four separate epidemiological studies cited by the plaintiff as supporting the position that exposure to PCBs either caused or ‘promoted’66 the plaintiff’s lung cancer.”67

Sanders at 154.  Horrors!  A systematic review!! Perish the thought.

Sanders seems to fault the Chief Justice’s approach of picking off “one-by-one” the studies relied upon by Joiner’s expert witnesses as a deconstructive exercise.  Id. at 154 – 55.  As Sanders notes, the Court’s opinion delved into the internal and external validity of the four cited epidemiologic studies to recognize that the:

“studies did not support the plaintiff’s position because the authors of the study said there was no evidence of a relationship, the relationship was not statistically significant, the substance to which the subjects were exposed was not the same as that to which Mr. Joiner was exposed, and the subjects were simultaneously exposed to other carcinogens.”70

Id. at 155.  To be fair, not all of these were dispositive considerations, but they represent a summary of a district court’s extensive consideration of the scientific record.

Channeling Susan Haack, Sanders argues that the one-by-one approach (which Professor Green pejoratively called a “Balkanized” approach, and Sanders calls “atomistic”) ignores that a wall is made up of constituent bricks.  Sanders might have done better to study a more accomplished philosopher-scientist:

“[O]n fait la science avec des faits comme une maison avec des pierres; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.”

Jules Henri Poincaré, La Science et l’Hypothèse (1905) (chapter 9, Les Hypothèses en Physique) (“Science is built up with facts, as a house is with stones. But a collection of facts is no more a science than a heap of stones is a house.”).  Poincaré’s metaphor is more powerful and descriptive than Sanders’ because it acknowledges that interlocking pieces of evidence may cohere as a building, or they may be no more than a pile of rubble.  Deeper analysis is required. Poorly constructed walls revert to the pile of bricks from which they came.  Furthermore, the mason must look at the individual bricks to see whether they are cracked, crumbling, or crooked before building a wall that must endure. We want a wall that will endure at least long enough to stand on, or to put a roof on. Much more is required than simply invoking “mosaics,” “walls from bricks,” or “crossword puzzles” to transmute a pile of studies into a “warranted” claim to knowledge. Litigants, whether plaintiff or defendant, should not be allowed to pick out isolated findings from a variety of studies and throw them together as if that were science.  This is precisely the rhetorical excess against which Rule 702, with its requirement of “knowledge,” should protect judges, juries, and litigants.

Indeed, as Sanders eventually concedes, the Joiner Court noted the appropriateness of considering the four relied-upon epidemiologic studies, individually or collectively:

“We further hold that, because it was within the District Court’s discretion to conclude that the studies upon which the experts relied were not sufficient, whether individually or in combination, to support their conclusions that Joiner’s exposure to PCB’s contributed to his cancer, the District Court did not abuse its discretion in excluding their testimony.”71

General Elec. Co. v. Joiner, 522 U.S. 136, 146-47 (1997).

Professor Sanders might have raised a justifiable argument against the gatekeeping process in Joiner if he had shown, or if he had adverted to other analyses showing, that the four relied-upon studies collectively meshed to overcome each other’s clear inadequacies.  The silence of Sanders, and other critics, on this score is telling.  Was Rabbi Teitelbaum (one of the Joiners’ expert witnesses) simply insightful or prescient in reading the early returns on PCBs and lung cancer?  What has happened subsequently?  Has the IARC embraced PCBs as a known cause of lung cancer?  Have well-conducted studies and meta-analyses vindicated Teitelbaum’s claims, or have they further confirmed that the gatekeeping in Joiner successfully excluded witnesses who were advancing pathologically weak, specious claims by pushing and squeezing data until they fit a pre-determined causal conclusion?

The complaints about judicial “deconstruction” are unfair and empty without looking at these details.  It behooves evidence scholars who want to write in this area to roll up their sleeves, look at the evidence that was in front of the courts, and learn something about science.  Of course, scientists did not stop exploring the PCB/lung cancer hypothesis after Joiner was decided.  See, e.g., Avima Ruder, Misty Hein, Nancy Nilsen, et al., “Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update,” 114 Envt’l Health Perspect. 18 (2006) (study by the National Institute for Occupational Safety and Health, showing reduced rates of respiratory cancer among PCB-exposed workers, with an age-standardized risk ratio of 0.85 and a 95% confidence interval of 0.6 to 1.1).

Stevens’ partial dissent in Joiner of course invested deeply in the mosaic theory, which we now know was the brainchild of plaintiffs’ counsel, Barry Nace. Michael D. Green, “Pessimism about Milward,” 3 Wake Forest J. L & Policy 41, 63 (2013) (reporting that Barry Nace acknowledged having “fed” this rhetorical device to expert witness Alan Done to support arguments for manufacturing certainty in the face of an emerging body of exonerative evidence).  Justice Stevens also cited the EPA as employing “weight of the evidence,” which simply makes the point that WOE is a precautionary approach to scientific evidence, not one for serious causal determinations.  Justice Stevens, and Professor Sanders, might have done better to have looked at what the FDA requires for health claims.  See, e.g., FDA, Guidance for industry evidence-based review system for the scientific evaluation of health claims (2009) (articulating an evidence-based approach). Justice Stevens’ argument fundamentally misconstrues the scientific enterprise of determining causation of health outcomes by reducing it to a precautionary enterprise of regulating possible harms.  Professor Sanders is unclear whether he is restating Justice Stevens’ view, or his own, when he writes:

“Chief Justice Rehnquist was wrong to show the flaws of individual bricks, because it is the wall as a whole that makes up the plaintiff’s case.”74

Sanders at 157 (citing Stevens’ opinion in Joiner, 522 U.S. 136, 153 n.5 (1997)).  If this is Professor Sanders’ view, it is profoundly wrong.  Looking at the individual bricks is necessary to determine whether they can support the plaintiff’s case.  Of course, to the extent it was Justice Stevens’ view, it was a dissent, not a holding, and it was superseded by statute when Rule 702 was revised in 2000.

 

Milward’s Singular Embrace of Comment C

May 4th, 2013

Professor Michael D. Green is one of the Reporters for the American Law Institute’s Restatement (Third) of Torts: Liability for Physical and Emotional Harm.  Green has been an important interlocutor in the ongoing debate and discussion over standards for expert witness opinions.  Although many of his opinions are questionable, his writing is clear, and his positions, transparent.  The seduction of Professor Green and the Wake Forest School of Law by one of the litigation industry’s organizations, the Center for Progressive Reform, is unfortunate, but the resulting symposium gave Professor Green an opportunity to speak and write about the justly controversial comment c.   Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28, cmt. c (2010).

Mock Pessimism Over Milward

Professor Green professes to be pessimistic about the Milward decision, but his only real ground for pessimism is that Milward will not be followed.  Michael D. Green, “Pessimism about Milward,” 3 Wake Forest J. L & Policy 41 (2013).  Green describes the First Circuit’s decision in Milward as “fresh,” “virtually unique and sophisticated,” and “satisfying.” Id. at 41, 43, and 50.  Green describes his own reaction to the decision in terms approaching ecstasy:  “delighted,” “favorable,” and “elation.”  Id. at 42, 42, and 43.

Green interprets Milward to embrace four comment c propositions:

  1. “Recognizing that judgment and interpretation are required in assessments of causation.52
  2. Endorsing explicitly and taking seriously weight of the evidence methodology,53 against the great majority of federal courts that had, since Joiner, employed a Balkanized approach to assessing different pieces of evidence bearing on causation.54
  3. Appreciating that because no algorithm exists to constrain the inferential process, scientists may reasonably reach contrary conclusions.55
  4. Not only stating, but taking seriously, the proposition that epidemiology demonstrating the connection between plaintiff’s disease and defendant’s harm is not required for an expert to testify on causation.56 Many courts had stated that idea, but very few had found non-epidemiologic evidence that satisfied them.57

Id. at 50-51.

Green’s points suggest that comment c was designed to reinject a radical subjectivism into scientific judgments allowed to pass for expert witness opinions in American courts.  None of the points is persuasive.  Point (1) is vacuous.  Saying that judgment is necessary does not imply that anything goes or that we will permit the expert witness to be the judge of whether his opinion rises to the level of scientific knowledge.  The required judgment involves an exacting attention to the role of random error, bias, or confounding in producing an apparent association, as well as to the validity of the data, methods, and analyses used to interpret observational or experimental studies.  The required judgment involves an appreciation that not all studies are equally weighty, or equally worthy of consideration for use in reaching causal knowledge.  Some inferences are fatally weak or wrong; some analyses or re-analyses of data are incorrect.  Not all judgments can be blessed by anointing some of them “subjective.”

Point (2) illustrates how far the Restatement process has wandered into the radical terrain of abandoning gatekeeping altogether.  The approach that Green pejoratively calls “Balkanized” is a careful look at what expert witnesses have relied upon, to assess whether their conclusions or claims follow from the relied-upon sources.  This is the approach used by the International Agency for Research on Cancer (IARC) working groups, whose method Green seems to endorse.  Id. at 59.  IARC working groups discuss and debate their inclusionary and exclusionary criteria for studies to be considered, and the validity of each study and its analyses, before they get to an assessment of the entire evidentiary display.  (And several of the IARC working groups have been by no means free of the conscious bias and advocacy that Green sees in party-selected expert witnesses.)  Elsewhere, Green refers to the approach of most federal courts as “corpuscular.”  Id. at 51. Clearly, expert witnesses want to say things in court that do not, so to speak, add up, but Green appears to want to give them all a pass.

Point (3) is, at best, a half truth.  Is Green claiming that reasonable scientists always disagree?  His statement of the point suggests epistemic nihilism. Although there are no clear algorithms, the field of science is littered with abandoned and unsuccessful theories from which we can learn when to be skeptical or dismissive of claims and conclusions.  Certainly there are times when reasonable experts will disagree, but there are also times when experts on one side or the other, or both, are overinterpreting or misinterpreting the available evidence.  The judicial system has the option and the obligation to withhold judgment when faced with sparse or inconsistent data.  In many instances, litigation arises because the scientific issues are controversial and unsettled, and the only reasonable position is to look for more evidence, or to look more carefully at the extant evidence.

Point (4) is similarly overblown and misguided.  Green states his point as though epidemiology will never be required.  Here Green’s sympathies betray any sense of fidelity to law or science.  Of course, there may be instances in which epidemiologic evidence will not be necessary, but it is also clear that sometimes only epidemiologic methods can establish the causal claim with any meaningful degree of epistemic warrant.

ANECDOTES TO LIVE BY

Anthony Robbins’ Howler

Professor Green delightfully shares two important anecdotes.  Both are revealing of the process that led up to comment c, and to Milward.

The first anecdote involves the 2002 meeting of the American Law Institute.  Apparently someone thought to invite Dr. Anthony Robbins as a guest. (Green does not tell us who performed this subversive act.)  Robbins is a member of SKAPP, the organization started with plaintiffs’ counsel’s slush fund money diverted from MDL 926, the silicone-gel breast implant litigation.

Robbins rose at the meeting to chastise the ALI for not knowing what it was talking about:

“clear, in my opinion, misstatements of . . . science” or reflected a misunderstanding of scientific principles that “leaves everyone in doubt as to whether you know what you are talking about . . . .”

Id. at 44 (quoting from 79th Annual Meeting, 2002 A.L.I. PROC. at 294).  Pretty harsh, except that Professor Green proceeds to show that it was Robbins who had no idea of what he was talking about.

Robbins asserted that the requirement of a relative risk of greater than two was scientifically incorrect. From Green’s telling of the story, it is difficult to understand whether Robbins was complaining about the use of relative risks (greater than two) for inferring general or specific causation.  If the former, there is some truth to his point, but Robbins would be wrong as to the latter.  Many scientists have opined that relative risks provide information about attributable fractions, which in turn permit inferences about individual cases.  See, e.g., Troyen A. Brennan, “Can Epidemiologists Give Us Some Specific Advice?” 1 Courts, Health Science & the Law 397, 398 (1991) (“This indeterminancy complicates any case in which epidemiological evidence forms the basis for causation, especially when attributable fractions are lower than 50%.  In such cases, it is more probable than not that the individual has her illness as a result of unknown causes, rather than as a result of exposure to hazardous substance.”).  Others have criticized the inference, but usually on the basis that the inference requires that the risk be stochastically distributed in the population under consideration, and we often do not know whether this assumption is true.  Of course, the alternative is that we must stand mute in the face of even very large relative risks and established general causation.  See, e.g., McTear v. Imperial Tobacco Ltd., [2005] CSOH 69, at ¶ 6.180 (Nimmo Smith, L.J.) (“epidemiological evidence cannot be used to make statements about individual causation… . Epidemiology cannot provide information on the likelihood that an exposure produced an individual’s condition.  The population attributable risk is a measure for populations only and does not imply a likelihood of disease occurrence within an individual, contingent upon that individual’s exposure.”).
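The arithmetic behind the relative-risk-greater-than-two heuristic is straightforward, on the contested assumptions just noted (general causation established, and the excess risk distributed evenly across the exposed population):

```latex
% Attributable fraction among the exposed, derived from the relative risk (RR).
\[
  \mathrm{AF}_{\mathrm{exposed}} \;=\; \frac{RR - 1}{RR},
  \qquad
  RR = 2 \;\Rightarrow\; \mathrm{AF} = \tfrac{1}{2},
  \qquad
  RR > 2 \;\Rightarrow\; \mathrm{AF} > \tfrac{1}{2}.
\]
% On those assumptions, AF > 1/2 is read as "more likely than not" that the
% exposure, rather than some other cause, produced the individual case.
```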

Robbins’ second point was truly a howler, one that suggests his animus against gatekeeping may grow out of a concern that he would never pass a basic test of statistical competency.  According to Green, Robbins claimed that “increasing the number of subjects in an epidemiology study can identify small effects with ‘an almost indisputable causal role’.” Id. at 45 (quoting Robbins).  Ironically, lawyer and law professor Green was left to take Robbins to school, to educate him on the differences between sampling error, bias, and confounding.  Green does not get the story completely right because he draws an artificial line between observational epidemiology and experimental clinical trials, and incorrectly implies that bias and confounding are problems only in observational studies.  Id. at 45 n.24.  Although randomization is undertaken in clinical trials to control for bias and confounding, it is not true that this strategy always works, or always works completely.  Still, here we have a lawyer delivering the comeuppance to the scolding scientist.  Sometimes scientists really have no good basis to support their claims, and it is the responsibility of the courts to say so.  Green’s handling of Robbins’ errant views is actually a wonderful demonstration of gatekeeping in action.  What is lovely about it is that the claims and their rebuttal were documented and reported, rather than being swept away in the fog of a jury verdict.

Professor Green’s account of Robbins’ foolery should be troubling because, despite Robbins’ manifest errors, and his more covert biases, we learn that Robbins’ remarks had “a profound impact” on the ALI’s deliberations. Courts that are tempted by the facile answers of comment c should find this impact profoundly disturbing.

Alan Done’s Weight of the Evidence (WOE) or Mosaic Methodology

Professor Green relays an anecdote that bears repeating, many times.  In the Bendectin litigation, plaintiffs’ expert witness Alan Done testified that Bendectin caused birth defects in children of mothers who ingested the anti-nausea medication during pregnancy.  Done had a relatively easy time spinning his speculative web in the first Bendectin trial because there was only one epidemiologic study, which qualitatively was not very good.  In his second outing, Done was confronted by the defense with an emerging body of exonerative epidemiologic research. In response, he deployed his “mosaic theory” of evidence: different pieces or lines of evidence that singly do not show much, but together paint a conclusive picture of the causal pattern. Id. at 61 (describing Done’s use of structure-activity relationships, in vitro studies, in vivo animal studies, and his own [idiosyncratic] interpretation of the epidemiologic studies).  Done called his pattern a “mosaic,” which Green correctly sees is none other than “weight of the evidence.”  Id. at 62.

After this second trial was won with the jury, but lost on post-trial motions, plaintiffs’ counsel, Barry Nace, pressed the mosaic theory as a legitimate scientific strategy to demonstrate causation, and the appellate court accepted the stratagem:

“Like the pieces of a mosaic, the individual studies showed little or nothing when viewed separately from one another, but they combined to produce a whole that was greater than the sum of its parts: a foundation for Dr. Done’s opinion that Bendectin caused appellant’s birth defects. The evidence also established that Dr. Done’s methodology was generally accepted in the field of teratology, and his qualifications as an expert have not been challenged.”103

Id. at 61 (citing Oxendine v. Merrell Dow Pharm., Inc., 506 A.2d 1100, 1110 (D.C. 1986)).  Green then drops his bombshell:  the philosopher of science who developed the “mosaic theory” (WOE) was the plaintiffs’ lawyer, Barry Nace. According to Green, Nace declared the mosaic idea “Damn brilliant, and I was the one who thought of it and fed it to Alan [Done].”  Id. at 63.

Green attempts to reassure himself that Milward does not mean that Done could use his WOE approach to testify today that Bendectin causes human birth defects.  Id. at 63.  Alas, he provides no meaningful solution to protect against future bogus cases.  Green fails to come to grips with the obvious truth that Done was wrong ab initio.  He was wrong before he was exposed for his perjurious testimony, see id. at 62 n.107, and he was wrong before there was a “solid body” of exonerative epidemiology.  His method never had the epistemic warrant he claimed for it, and the only thing that changed over time was a greater recognition of his character for veracity, and the emergence of evidence that collectively supported the null hypothesis of no association.  The defense, however, never had the burden to show that Done’s methodology was unreliable or invalid, and we should look to the more discerning scientists who saw through the smokescreen from the beginning.

I Don’t See Any Method At All

May 2nd, 2013

Kurtz: Did they say why, Willard, why they want to terminate my command?
Willard: I was sent on a classified mission, sir.
Kurtz: It’s no longer classified, is it? Did they tell you?
Willard: They told me that you had gone totally insane, and that your methods were unsound.
Kurtz: Are my methods unsound?
Willard: I don’t see any method at all, sir.
Kurtz: I expected someone like you. What did you expect? Are you an assassin?
Willard: I’m a soldier.
Kurtz: You’re neither. You’re an errand boy, sent by grocery clerks, to collect a bill.

* * * * * * * * * * * * * * * *

The Royal Society, the National Academies of Science, the Nobel Laureates have nothing on the organized plaintiffs’ bar.  Consider the genius and the accomplishments of these men and women.  They have discovered and built a perpetual motion machine — the asbestos litigation.  They have learned how to violate the law of non-contradiction with impunity (e.g., industry is evil, and (litigation) industry is good).  In the realm of the sciences, especially as applied in the courtroom, they have demonstrated the falsity of one of our core beliefs: ex nihilo nihil fit.  We have a lot to learn from the plaintiffs’ bar.

WOE to Corporate America

Steve Baughman Jensen is a plaintiffs’ lawyer, and he justifiably gloats over his success as lead counsel in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).  In a recent article for the litigation industry’s scholarly journal, Jensen touts Milward as Ariadne’s thread, which will lead plaintiffs out of the mazes and traps set for them by the benighted law of expert witnesses.  Steve Baughman Jensen, “Reframing the Daubert Issue in Toxic Tort Cases,” 49 Trial 46 (Feb. 2013).  Jensen alleged that his client worked with solvents that contained varying amounts of benzene, which caused him to develop Acute Promyelocytic Leukemia (APL), a subtype of Acute Myeloid Leukemia (AML).  The district court excluded plaintiffs’ expert witnesses’ causation opinions; the First Circuit reversed.  Jensen crows about his feat.

Weight of the Evidence (WOE) — Let Them Drink Ripple

Jensen, with help from philosopher of popular science Carl Cranor and toxicologist Martyn Smith, persuaded the appellate court that a “weight of the evidence” (WOE) analysis necessarily involves scientific judgment (Milward, 639 F.3d at 18), and that this “use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis, which we have repeatedly found to be a reliable method of medical diagnosis.” Id. (internal citations omitted).

Is this what judicial gatekeeping of scientific expert opinion has come to?  Phrenology, homeopathy, aroma therapy, and reflexology involve medical judgment, of sorts, and so they too are now reliable methodologies.  Ripple makes red wine, and so does Chateau Margaux.  Chateau Margaux is based upon judgment in oenology, and so is Ripple.  That only one of these products will stand the test of time is irrelevant; both are the product of oenological judgment.  It’s all a question of the weight you would assign the differing qualities of Ripple and a premier cru bordeaux.

Jensen never defines WOE; the closest he comes to describing WOE is to tell us that it essentially involves a delegation to expert witnesses to validate their own subjective weighing of the evidence. As in the King of Hearts, Jensen rejoices that the inmates are now running the asylum.

Too Much of Nothing

Jensen complains about a “divide and conquer” strategy by which defendants take individual studies, one at a time, pronounce them inadequate to support a judgment of causality, and then claim that the aggregate evidence fails to support causality as well.  Surely sometimes that approach is misguided; yet sometimes the evidentiary display collectively represents “too much of nothing.”  In some litigations, there are hundreds of studies, which despite their numbers still fail to support causation.  In General Electric v. Joiner, the Supreme Court discerned that the studies relied upon were largely irrelevant or inconclusive, and that taken alone or together, the cited studies failed to support plaintiffs’ claim of causality.  In the silicone-gel breast implant litigation, the plaintiffs’ steering committee submitted banker boxes of studies and argument to the court-appointed expert witnesses, in an attempt to manufacture causation.  Those court-appointed experts, however, took their time and saw that the evidence, taken individually or collectively, did not amount to a scientific peppercorn.

Let Ignorance Rule

One of Jensen’s clever attempts to beguile the judiciary involves the transmutation of scientific inference into personal credibility.  “Second-guessing an expert’s application of scientific judgment necessarily requires assessing that expert’s credibility, which is the jury’s role.” Jensen, 49 Trial at 49.  Jensen attempts to reduce the “battle of experts” to a credibility contest, and thus to place it outside the purview of judicial gatekeeping.  His argument conflates credibility with methodology and its application. Because the expert witness will predictably opine that he applied the methodology faithfully, Jensen asserts that the court is barred from examining the correctness of the expert witness’s self-validation.

But scientific inference is scientific because it does not depend upon the person drawing it.  The inference may be weak, strong, erroneous, valid, or invalid.  How we characterize the inference will turn on the data and their analysis, not on the witness’s say so.

Jensen cites comment c, to Section 28 of Restatement (Third) of Torts, as supporting his reactionary arguments for abandoning judicial gatekeeping of expert witness opinion testimony.  “Juries, not judges, should determine the validity of two competing expert opinions, both of which typically fall within the realm of reasonable science.” Jensen, 49 Trial at 51 (emphasis added).  The law, however, requires trial courts to assess the validity vel non of would-be testifying expert witnesses:

“[A] trial judge, acting as ‘gatekeeper’, must ‘ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable’.  This requirement will sometimes ask judges to make subtle and sophisticated determinations about scientific methodology and its relation to the conclusions an expert witness seeks to offer— particularly when a case arises in an area where the science itself is tentative or uncertain, or where testimony about general risk levels in human beings or animals is offered to prove individual causation.”

General Elec. Co. v. Joiner, 522 U.S. 136, 147–49 (1997) (Breyer, J., concurring) (citations omitted).  Not only is Jensen’s argument contrary to the law, the argument is based upon a cynical understanding that juries will usually have little time, experience, or aptitude for assessing validity issues, and that delegating validity issues to juries ensures that the legal system will not be able to root out pathologically weak evidence and inferences. The resolution of validity issues will be hidden behind the secretive walls of the jury room, rather than in the open sight of reasoned, published opinions, subject to public and scholarly commentary.  See, e.g., In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2006 WL 4507859, *33 n.78 (N.D. Ohio 2006) (“even the smartest and most attentive juror will be challenged by the parties’ assertions of observation bias, selection bias, information bias, sampling error, confounding, low statistical power, insufficient odds ratio, excessive confidence intervals, miscalculation, design flaws, and other alleged shortcomings of all of the epidemiological studies.”).

Martyn Smith

Jensen extols the achievements of Dr. Martyn Smith, his expert witness who was excluded by the trial court in Milward.  A disinterested reader might mistakenly think that Smith was among the leading benzene researchers in the world, but a little Googling would turn up that Milward was not his first litigation engagement.  Smith has been pulled over for outrunning his expert-witness headlights in several other litigations, including:

  • Jacoby v. Rite Aid, Phila. Cty. Ct. Common Pleas (Order of April 27, 2012; Opinion of April 12, 2012) (excluding Smith as an expert witness on the toxicity of Fixodent)
  • In re Baycol Prods. Litig., 495 F. Supp.2d 977 (D. Minn. 2007)
  • In re Rezulin Prods. Liab. Litig., MDL 1348, 441 F.Supp.2d 567 (S.D.N.Y. 2006)(“silent injury”)

None of these other cases involved benzene, but they all involved speculative opinions.

The Milward Symposium

Jensen took another victory lap at the Milward Symposium Organized By Plaintiffs’ Counsel and Witnesses.  The presentations from this symposium have now appeared in print.  See “Wake Forest Publishes the Litigation Industry’s Views on Milward”; Steve Baughman Jensen, “Sometimes Doubt Doesn’t Sell:  A Plaintiffs’ Lawyer’s Perspective on Milward v. Acuity Products,” 3 Wake Forest J. L. & Policy 177 (2013).  Jensen’s contribution was mostly a shrill ad hominem against corporations, as well as their lawyers and scientists who complicitly support an alleged campaign to manufacture doubt.

Perhaps someday the law journal’s faculty advisors and editors will feel some embarrassment over the lack of balance and scholarship in Jensen’s contribution to the symposium.  Corporations are bad; get it?  They manufacture doubt about the litigation industry’s enterprise.  Don’t pay attention to massive litigation fraud, such as faux silicosis, faux asbestosis, faux fen-phen heart disease, faux product identification, etc.  See Larry Husten, “79-Year-Old Cardiologist Sentenced To 6 Years In Prison For Fen-Phen Fraud,” Forbes (Mar. 27, 2013).   Forget that ATLA/AAJ is one of the most powerful rent-seeking lobbies in the United States.  Litigants have a constitutional right to extrapolate as they please.  If a substance causes one disease at a very high dose, then it causes every ailment known to mankind at moderate or low doses.  Specific disease entails general disease, etc.  What, you balk?  You must be a doubt monger.

Jensen assures us that many scientists support and agree with Martyn Smith, both in his methodology and in his conclusions.  Jensen’s articles are sketchy on details, and of course, the devil is in the details.  See Amended Amicus Curiae Brief of the Council for Education and Research on Toxins et al., In Support of Appellants, in Milward.  This Council seems to fly under the internet radar, but I suspect that its membership and that of the Center for Progressive Reform overlap somewhat.

Jensen’s article is just one of several published in the Wake Forest Journal of Law & Policy.  Let’s hope the remaining articles have more substance to them.

David Bernstein on the Daubert Counterrevolution

April 19th, 2013

David Bernstein has posted a draft of an important new article on the law of expert witnesses, in which he documents the widespread resistance to judicial gatekeeping of expert witness opinion testimony among federal judges.  Bernstein, “The Daubert Counterrevolution” (Mar. 11, 2013).  Professor Bernstein has posted his draft article, set to be published in the Notre Dame Law Review, both on the Social Science Research Network, and on his law school’s website.

Professor Bernstein correctly notes that the Daubert case, and the subsequent revision to Federal Rule of Evidence 702, marked important changes in the law of expert witnesses.  These changes constituted important reforms, which in my view were as much evolutionary as revolutionary.  Even before the Daubert case, the law was working to find ways to improve expert witness testimony, and to downplay “authoritative” opining in favor of well-documented ways of knowing.  After all, Rule 702, with its emphasis on “knowledge,” was part of the Federal Rules of Evidence, as adopted in 1975.  Pub. L. 93–595, § 1, Jan. 2, 1975, 88 Stat. 1926 (effective July 1, 1975).  From its first adoption, Rule 702 required that expert witnesses’ knowledge be helpful to the trier of fact.  By implication, the rule has always suggested that hunches, speculation, and flights of fancy did not belong in the courtroom.

Professor Bernstein certainly acknowledges that Daubert did not spring out of a vacuum.  Critics of judicial decisions on expert witnesses had agitated for decades to limit expert witness conduct by standards and guidances that operate in the scientific community itself.  The Supreme Court’s serial opinions on Rule 702 (Daubert, Joiner, Kumho Tire, and Weisgram) reflect the need for top-down enforcement of a rule, on the books since 1975, while many lower courts were allowing “anything goes.”

What is perhaps surprising, but well documented by Professor Bernstein, is that after four opinions from the Supreme Court, and a revision in the operative statute itself (Rule 702), some lower federal courts have engaged in a rearguard action against expert witness gatekeeping.  Professor Bernstein rightfully settles on the First Circuit’s decision in Milward as exemplifying a trend to disregard the statutory language and mandate for gatekeeping.  For Bernstein, Milward represents the most recent high-water mark of counterrevolution, with its embrace of errors and fallacies in the name of liberal, if not libertine, admissibility.

I suppose that I would go a step further than Professor Bernstein and label the trend he identifies as “reactionary.”  What is clear is that many courts have been willing to flout the statutory language of Rule 702, in favor of old case law and evasive reasoning on expert witness admissibility.  Indeed, the Supreme Court itself joined the trend in Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011), when it unanimously affirmed the reversal of a trial court’s Rule 12(b)(6) dismissal of a securities fraud class action.  The corporate defendant objected that the plaintiffs failed to plead statistical significance in alleging causation between Zicam and the loss of the sense of smell.  The Supreme Court, however, made clear that causation was not required to make out a claim of securities fraud.  It was, and would be, sufficient that the company’s product had raised regulatory concerns serious enough to bring regulatory scrutiny and action that would affect the product’s marketability.

Not content to resolve a relatively simple issue of materiality, for which causation and statistical significance were irrelevant, the Supreme Court waxed on, in obiter dicta, about causation and statistical significance, perhaps unwittingly planting seeds for those who would eviscerate Rule 702.  See Matrixx Unloaded (Mar. 29, 2011).  Although the Supreme Court disclaimed any intention to address expert witness admissibility in a case that was solely about the sufficiency of pleading allegations, it cited three cases for the proposition that statistical significance was not necessary for assessing biological causation:

“We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”

Id. at 1319.

Remarkably, two of the three cases were about specific causation, arrived at using so-called “differential etiology,” which presupposed the establishment of general causation.  These cases never involved general causation or statistical reasoning, but rather simply the process of elimination (iterative disjunctive syllogism).  The citation to the third case, Wells, a notorious pre-Daubert, pre-Rule 702-revision case, revealed disappointing scholarship.  Wells involved at least one study that purported to find a statistically significant association.  What was problematic about Wells was its failure to consider the complete evidentiary picture, and to evaluate study validity, bias, and confounding, as well as significance probability.  See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1” (Nov. 12, 2012).

Wells was an important precursor to Daubert in that it brought notoriety and disrepute to how federal courts (and state courts as well) were handling expert witness evidence.  Significantly, Wells was a bench trial, where the trial judge opined that plaintiffs’ expert witnesses seemed more credible based upon atmospherics rather than upon their engagement with the actual evidence. See Marc S. Klein, “After Daubert:  Going Forward with Lessons from the Past,” 15 Cardozo L. Rev. 2219, 2225-26 (1994) (quoting trial testimony of Dr. Bruce Buehler: “I am sorry sir, I am not a statistician . . . I don’t understand confidence levels. I never use them. I have to use the author’s conclusions.” Transcript of Jan. 9, 1985, at 358, Wells v. Ortho Pharm. Corp., 615 F. Supp. 262 (N.D. Ga. 1985)).

Given the Supreme Court’s opinion in Matrixx, the reactionary movement among the lower courts is unsurprising.  Lower courts have now cited and followed the Matrixx dicta on statistical significance in expert witness gatekeeping, despite the Supreme Court’s clear pronouncement that it did not intend to address Rule 702.  See In re Chantix (Varenicline) Prods. Liab. Litig., 2012 U.S. Dist. LEXIS 130144, at *22 (N.D. Ala. 2012); Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012).

Professor Bernstein’s article goes a long way towards documenting the disregard for law and science in this movement.  The examples of reactionary decisions could easily be multiplied.  Take, for instance, the recent Rule 702 gatekeeping decision in litigation over Celexa and Lexapro, two antidepressant medications.  Judge Rodney W. Sippel denied the defense motions to exclude plaintiffs’ principal expert witness, Dr. David Healy.  In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780 (E.D. Mo. 2013).  In attempting to support its decision, the court announced that:

1. Cherry picking of studies, and of data within studies, is acceptable for expert witnesses.  Id. at *5, *7, *8.

2. Outdated law applies, notwithstanding its supersession by later Supreme Court decisions and by the statutory revision of Rule 702.  Id. at *2 (citing a pre-Joiner case: “The exclusion of an expert’s opinion is proper only if it is so fundamentally unsupported that it can offer no assistance to the jury.” Wood v. Minn. Mining & Mfg. Co., 112 F.3d 306, 309 (8th Cir. 1997) (internal quotation marks and citation omitted)).

3. The Bradford Hill factors can be disregarded.  Id. at *6 (citing In re Neurontin Mktg., Sales Practices, and Prod. Liab. Litig., 612 F. Supp. 2d 116, 133 (D. Mass. 2009) (MDL 1629), and In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071 (D. Minn. 2008)).

These features of the Celexa decision are hardly novel.  As Professor Bernstein shows in his draft article, disregard of Rule 702’s actual language, and of the post-Daubert Supreme Court decisions, is prevalent.  See, e.g., In re Avandia Marketing, Sales Practices & Prod. Liab. Litig., 2011 WL 13576 (E.D. Pa. 2011) (announcing that the MDL district judge was bound to apply a “Third Circuit” approach to expert witness gatekeeping, which focused on the challenged expert witnesses’ methodology, not their conclusions, in contravention of Joiner, and of Rule 702 itself).

The Celexa decision pushes the envelope on Bradford Hill.  The two decisions cited downplayed Bradford Hill’s considerations, but did not dismiss them.  In re Neurontin Mktg., Sales Practices, and Prod. Liab. Litig., 612 F. Supp. 2d 116, 133 (D. Mass. 2009) (MDL 1629) (“Although courts have not embraced the Bradford Hill criteria as a litmus test of general causation, both parties repeatedly refer to the criteria, seemingly agreeing that it is a useful launching point and guide. Accordingly, this Court will begin its inquiry by evaluating Plaintiffs’ evidence of an association between Neurontin and suicide-related events, the starting point for an investigation under the criteria.”); In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d at 1081 (“The Court agrees that the Bradford Hill criteria are helpful for determining reliability but rejects Pfizer’s suggestion that any failure to satisfy those criteria provides independent grounds for granting its Daubert Motion.”).

Of course, Sir Austin’s considerations were merely those that he identified in a speech to a medical society.  They were not put forward in a scholarly article; nor are his considerations the last word on the subject.  Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965).

Even as a précis, delivered almost 50 years ago, Hill’s factors (strength, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment, and analogy) warrant some consideration, rather than a wave of the hand dismissing them as not a “litmus test” (whatever that means), followed by complete disregard of the considerations important to evaluating whether an association is causal.

There was a brief bright spot in this otherwise dim judicial decision.  The district judge refused to exclude Dr. Healy on the ground that his opinions about particular studies differed from the authors’ own interpretations.  In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780, at *5 (E.D. Mo. 2013) (Sippel, J.).

That is the correct approach, even though there is language in Joiner suggesting that the authors’ views dominate.  See Follow the Data Not the Discussion.  But the refusal to discount Healy’s opinions on this ground came without any real inquiry into whether Healy had offered a valid, competing interpretation of the data in the published studies.

At the core of the reactionary movement identified by Professor Bernstein is an unwillingness, or an inability, to engage with the scientific evidence that is at issue in various Rule 702 challenges.  Let’s hope that Bernstein’s article induces closer attention to the law and the science in future judicial gatekeeping.