TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

As They WOE, So No Recovery Have the Reeps

May 22nd, 2013

Late last year, Justice York excluded Dr. Shira Kramer’s WOE-ful opinion that gasoline fumes from an alleged fuel-line leak caused Sean Reeps to be born with cerebral palsy.  Reeps v. BMW of North America, LLC, 2012 NY Slip Op 33030(U), N.Y.S.Ct., Index No. 100725/08 (New York Cty. Dec. 21, 2012) (York, J.).  Kramer’s opinion was a parody of science, pieced together from case reports, animal studies, and epidemiologic studies that looked at exposures utterly unlike Mrs. Reeps’s.

Justice York saw through the charade.  The animal studies were largely exonerative. The case reports were of birth defects quite different from those sustained by Sean Reeps.  The epidemiologic studies were of different chemicals, or of chemicals at exposure levels very different from those experienced by Mrs. Reeps. Plaintiffs’ expert witnesses ignored established principles of teratology in claiming that late-term birth defects were causally related to early-term exposures. Plaintiffs’ expert witnesses gave a convincing presentation of how not to do science, and of why judicial gatekeeping is necessary.  See “New York Breathes Life into Frye Standard – Reeps v. BMW” (Mar. 5, 2013).

Justice York clearly articulated that the “plaintiff’s burden to prove the methodology applied to reach the conclusions will not be rejected by specialists in the field.”  Reeps, slip op. at 11.  The trial court recognized that under the New York state version of Frye, the court must determine whether plaintiffs’ expert witnesses are faithfully applying a methodology, such as the Bradford Hill criteria, or whether they are “pay[ing] lip service to them while pursuing a completely different enterprise.”  Id.  Justice York recognized that the court must examine a proffered opinion to determine whether it “properly relates existing data, studies or literature to the plaintiff’s situation, or whether, instead, it is connected to existing data only by the ipse dixit of the expert.” Id. (internal quotations omitted).

Plaintiffs were unhappy with Justice York’s decision, and their counsel moved for reconsideration, positing only 15 supposed errors or misunderstandings in the opinion. On May 10, the trial court denied the motion for reconsideration and further explicated the scientific deficiencies of plaintiffs’ witnesses’ opinions.

The trial court was unimpressed:

“In general, attorney for plaintiffs misrepresents the substance of this court’s Decision. The court did not prefer conclusions of defendants’ experts to that of plaintiffs – disagreement among experts is to be expected, since causation analysis involves professional judgment in interpreting data and literature. An expert opinion is precluded when it is reached in violation of generally accepted scientific principles. The court determined that Drs. Kramer and Frazier did not follow generally accepted scientific methodology.”

Reeps, 2013 NY Slip Op 31055(U) at 2 (Opinion on Motions to Reargue, to Renew, and for Oral Hearing) (May 10, 2013).

The court noted that the plaintiffs’ witnesses’ novel claim that low-level gasoline vapor inhalation causes birth defects, a claim that had escaped the attention of all other scientists and regulatory agencies, cried out for judicial intervention.  Id. at 3.

The court also rebuffed the claim that plaintiffs’ witnesses, Shira Kramer and Linda Frazier, had followed the Bradford Hill guidelines:

 “These guidelines are employed only after a study finds an association to determine whether that association reflects a true causal relationship.”

Id. at 5 (quoting Federal Judicial Center, National Research Council, Reference Manual on Scientific Evidence at 598-599 (3d ed. 2011)) (emphasis in the original). Kramer and Frazier never got off the dime with the Bradford Hill guidelines.

In considering the plaintiffs’ motions, the trial court also had occasion to revisit the assertion that “weight of the evidence” (WOE) substituted for, or counted as, a scientific basis for a conclusion of causality:

“The metaphorical use of the term is, if nothing else, ‘a colorful way to say the body of evidence we have examined and judged using a method we have not described but could be more or less inferred from a careful between-the-lines reading of our paper’.”

Id. at 5 (quoting Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546-47 (2005)).

Unmoved by the sophistical hand waving, the court emphasized that Kramer and Frazier had confused “suggestive” evidence with “conclusions,” and they had misrepresented the meaning and significance of threshold limit values.  All in all, a convincing demonstration of the need for, and the judicial competence to carry out, gatekeeping of expert witness opinion testimony.

Where Are They Now? Marc Lappé and the Missing Data

May 19th, 2013

The recrudescence of silicone “science” made me wonder where some of the major players in the silicone litigation are today. Some of the plaintiffs’ expert witnesses were characters who gave the litigation “atmosphere.”

Marc Alan Lappé was an experimental pathologist who testified frequently for plaintiffs in toxic exposure cases.  He founded an organization, The Center for Ethics & Toxics (CETOS), to serve as a platform for his advocacy activities.  Lappé was a new-age scientist, and an author of popular books on toxic everything:  Chemical Deception: The Toxic Threat to Health and the Environment, and Against the Grain: Biotechnology and the Corporate Takeover of Your Food. When the silicone-gel breast implant litigation went viral, or immunologic, Lappé was embraced by the silicone sisters and their lawyers as one of their leading immunology guys. Lappé, a revolutionary, obliged and produced another pop-science classic: Marc Lappé, The Tao Of Immunology: A Revolutionary New Understanding Of Our Body’s Defenses (1997).

Lappé jumped into the silicone litigation early.  He supported autoimmune claims, as well as the dubious claim that polyurethane-covered breast implants caused or accelerated breast cancer.  Livshits v. Natural Y Surgical Specialties, Inc., 1991 WL 261770 (S.D.N.Y. Nov. 27, 1991).  In depositions and in trial testimony, Lappé was combative and evasive, but when he wanted to be clear, he could be clear enough:

“It’s my opinion that silicone directly or indirectly can precipitate an activated immune state in such women that can lead to an autoimmune condition.”

Lappé Dep. 44:19-22 (Aug. 21, 1995), in Roden v. Medical Engineering Corp., No. 94-02-103, in District Court of Wise County, Texas, 271st Judicial District.

Plaintiffs also offered Lappé as an ethicist, in what was an obvious attempt to turn personal injury cases into passion plays and to raise the emotional temperature of the courtrooms.  Plaintiffs were able to get away with such nonsense in some state court cases, but the federal judges generally would not abide expert witnesses on ethics.  See, e.g., Switzer v. McGhan Medical Corp., CV 94-P-14229-S, Transcript at 96-98, N.D. Ala. (Jan. 4, 1996) (Pointer, J.) (noting that Lappé would not be permitted to testify that the defendant’s conduct was unethical or unconscionable). Ironically, Lappé would become ensnared by an article whose publication was shrouded in ethical controversy.

Although Lappé had some experience in experimental immunology, he had no background in silicone.  Undaunted, he set about to publish a work of science fiction.  Marc Lappé, “Silicone-reactive disorder: a new autoimmune disease caused by immunostimulation and superantigens,” 41 Medical Hypotheses 348 (1993).  Lappé went on to find some researchers with whom he could join forces, and in 1993, he and his co-authors published an article based upon what was ostensibly bench research on silicone immunology. Alas, Lappé did not really know the other authors, who were pitching their immunological screening test to plaintiffs’ support groups and to plaintiffs’ lawyers.  Lappé signed on as a co-author without knowing the authors’ marketing plan, and without ever having seen the underlying data and statistical analyses. Given his credentials as a bioethicist, the lapse was remarkable. Lappé learned only through his involvement as an expert witness that his co-authors had been warned by the Food and Drug Administration about unlawful marketing of their “test,” and that some of his co-authors were involved in litigation against Bristol-Myers Squibb.  Although the article was clearly intended to support the marketing of the test, the litigation that would have benefited his co-authors directly, and Lappé’s own testimonial adventures, it contained no conflict-of-interest disclosures. Laurence Wolf, Marc Lappé, Robert Peterson, and Edward Ezrailson, “Human immune response to polydimethylsiloxane (silicone): screening studies in a breast implant population,” 7 FASEB J. 1265 (1993).

Lappé was an advocate, but he was not stupid.  The late Chuck Walsh took some of Lappé’s early depositions in the breast implant litigation, and pressed him on whether he had seen or had access to the underlying data. Lappé Dep. Roden at 94:4-21 (Aug. 21, 1995).  Lappé also acknowledged that he had been unaware that the data presented in the published paper were truncated from the data originally obtained in the study. Id. at 108:19-109:7.  Lappé bristled, as well he should, at these challenges to his ethical bona fides.  He apparently requested the underlying data on more than one occasion, but his colleagues would not share the data with him:

Question:  I want to ask you, did you ever get the basic raw data?

Answer:    That was asked and answered as recently as three weeks ago.  And the same answer applies today:  No.  I had asked for it.  It was not give[n] to me.  I have asked for it again.  It’s not been given to me.

Lappé Dep. at 172:9-14 (Mar. 21, 1996), in Wolf v. Surgitek, Inc., No. 92-60186, 113th Judicial District, District Court of Harris County, Texas.

In early 1998, before Judge Pointer’s neutral expert witnesses delivered their reports in the multi-district proceedings, I traveled to Gualala, California, to take Lappé’s deposition in Page v. Bristol-Myers Squibb Co., No. JCCP-2754-03740, California Superior Court, County of San Diego (Jan. 19, 1998).  I recall the little coastal town of Gualala well.  The hotel, restaurant, and even the deposition room were infested with fruit flies, no doubt because pesticides were banned under Lappé’s influence.  When I asked Lappé whether he had changed his views in any way, he gracefully backed away from his previous testimony:

“I believe that the current evidence, the weight of evidence suggests that the antibodies that are formed in women, perhaps in excess of their background levels for IGG antibody, may not have a specificity towards silicone itself as an antigen but may bind preferentially to silicone and therefore given nonspecific binding results such as those as the Emerald Labs detected in their plate bioassay.   I think their evidence does not presently weigh in favor of considering silicone by itself as an antigen.”

Lappé Dep. Wolf at 100:5-18.  Later that year, 1998, Tim Pratt extracted further concessions from Lappé, in a Mississippi case.  Lappé acknowledged that the MDL court’s neutral expert witnesses had done a “reasonably good job,” and that he agreed with them that there was not consistent evidence to support the claim that silicone caused autoimmune disease.  Lappé Dep. at 26:1-9 (Dec. 17, 1998), in Brassell v. Medical Engineering Corp., Case No. 251-96-1074 CIV, Hinds County Circuit Court, Mississippi.

The litigation faded away, and so did Lappé.  He died in 2005. Douglas Martin, “Marc Lappé, 62, Dies; Fought Against Chemical Perils,” N.Y. Times (May 21, 2005).  Few other expert witnesses for silicone plaintiffs had the intellectual integrity to confess error.  I hope he has found his missing data.

Biopersistent Silicone

May 18th, 2013

From the late 1980s until the late 1990s, a cadre of public health zealots waged war against various silicone medical devices, but especially against silicone gel breast implants.  Their charge was that silicone degraded in vivo to silica, and that it caused autoimmune disease.  Their supposed method:  weight of the evidence.

I recall sitting next to Professor Carl Cranor at a meeting in Washington, D.C.  When the subject of silicone gel breast implants came up, he started trash talking the exonerative epidemiology.  When I introduced myself and told him that I represented one of the defendants in that litigation, he got up and moved.  Thankfully.

In 1999, the Institute of Medicine issued its consensus report that debunked the plaintiffs’ attempts to draw a causal connection between silicone and autoimmune disease.  Stuart Bondurant, et al., Safety of Silicone Breast Implants (1999).   The phrases “weight of the evidence” or “weight of evidence” are never mentioned in the report, over 500 pages long.

Recently, the silicone plaintiffs’ causal theory has resurfaced. There has been no new important evidence, but with the scientific community’s attention drawn elsewhere, some old zealots and some new have wandered back into the field to recycle the claims and hypotheses that consumed lawyers and scientists in the last century.

Last year saw a review by Yehuda Shoenfeld and his Israeli colleagues, who described a “new” syndrome that manifests with various immune-system disturbances.  These authors call their syndrome ASIA (autoimmune/inflammatory syndrome induced by adjuvants). M. Lidar, N. Agmon-Levin, P. Langevitz, and Y. Shoenfeld, “Silicone and scleroderma revisited,” 21 Lupus 121 (2012).

Shoenfeld, who has dabbled with this theory for 20 years, acknowledges that the epidemiologic studies fail to support the ASIA notion.  Despite the lack of support from controlled, observational studies, these authors proceed to describe “the mechanisms by which silicone may mediate autoimmunity in general, as well as the evidence for causal associations with more specific autoimmune syndromes in general, and scleroderma in particular.”  Id. at 121.

Last month, an article was published online with a collection of case reports from the Netherlands. Jan Tervaert & R. M. Kappel, “Silicone implant incompatibility syndrome (SIIS): A frequent cause of ASIA (Shoenfeld’s syndrome),” 56 Immunologic Research (2013), published online, April 2013.  The authors employ Shoenfeld’s criteria for ASIA, and postulate a causal relationship between silicone implants and the syndrome in 32 cases.

This month, the assault has stepped up.  Yehuda Shoenfeld, “Video Q&A: what is ASIA? An interview with Yehuda Shoenfeld,” 11 BMC Medicine 118 (2013). The video of Dr. Shoenfeld is also available for those who may find it hard to believe that the article has found its way into print.

Silicone.  It never goes away.

IARC and Cranor’s Apologetics for Flawed and Fallacious Science

May 13th, 2013

In his recent contribution to the Center for Progressive Reform’s symposium on the Milward case, Professor Cranor suggests that the International Agency for Research on Cancer (IARC) uses weight of the evidence (WOE) in its carcinogenicity determinations.  See Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” 3 Wake Forest J. L. & Policy 105 (2013) [hereinafter cited as Cranor].  Cranor’s suggestion is demonstrably wrong.

The IARC process is described in several places, but the definitive presentation of the IARC’s goals and methods is set out in a document known as the “Preamble.”   World Health Organization, IARC Monographs on the Evaluation of Carcinogenic Risks to Humans — Preamble (2006) [cited herein as Preamble]. There is no mention of WOE in the Preamble.

The IARC process consists of assessments of the carcinogenicity of substances or exposure circumstances, and their categorization into specified groups:

IARC Category   Verbal Description                                       IARC “Findings”
Group 1         [Known] Carcinogenic to humans                           111
Group 2A        Probably carcinogenic to humans                           65
Group 2B        Possibly carcinogenic to humans                          274
Group 3         Not classifiable as to its carcinogenicity to humans     504
Group 4         Probably not carcinogenic to humans                        1

The operative definitions by which IARC assigns a substance to a category are highly stylized and unique to IARC.  The definitions do not coincide with ordinary-language definitions or general scientific usage.  The only substance categorized as “probably not carcinogenic to humans” is caprolactam.

Alas, oxygen, nitrogen, carbon dioxide, sugar, table salt, water, and many other exposures we all experience, and even require, do not make it to “probably not carcinogenic.”  This fact should clue in the casual reader that the IARC classifications are greatly influenced by the precautionary principle.  There is nothing wrong with this influence, as long as we realize that IARC categorizations do not necessarily line up with scientific determinations.

Cranor attempts to exploit IARC classifications and their verbiage, but in doing so he misrepresents the IARC enterprise.  For instance, his paper for the CPR symposium strongly suggests that a case involving a Group 2A carcinogen would necessarily satisfy the preponderance of evidence standard common in civil cases because the IARC denominates the substance or exposure circumstance as “probably carcinogenic to humans.”  This suggestion is wrong because of the technical, non-ordinary language meanings given to “probably” and “known.” The IARC terminology involves a good bit of epistemic inflation.  Consider first what it means for something to be “probably” a carcinogen:

“Group 2.

This category includes agents for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents are assigned to either Group 2A (probably carcinogenic to humans) or Group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and mechanistic and other relevant data. The terms probably carcinogenic and possibly carcinogenic have no quantitative significance and are used simply as descriptors of different levels of evidence of human carcinogenicity, with probably carcinogenic signifying a higher level of evidence than possibly carcinogenic.”

Preamble at 22, § 6(d).  So “probably” does not mean “more likely than not,” and “possibly” means something even less than some unspecified level of probability.  An IARC classification of 2A will not help a plaintiff reach the jury because it does not connote more likely than not.

A Group 1 finding is usually described as a “known” carcinogen, but the reality is that there may still be a good deal of epistemic uncertainty over the classification:

“Group 1: The agent is carcinogenic to humans.

This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent may be placed in this category when evidence of carcinogenicity in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity.”

Preamble at 22, § 6(d).

Again, the precautionary nature of the categorization should be obvious.  Knowledge of carcinogenicity is equated with sufficient evidence, which leaves open whether there is a body of contradictory evidence.  The IARC’s definition of “sufficiency” does place some limits on what may affirmatively count as “sufficient” evidence:

“Sufficient evidence of carcinogenicity: The Working Group considers that a causal relationship has been established between exposure to the agent and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence. A statement that there is sufficient evidence is followed by a separate sentence that identifies the target organ(s) or tissue(s) where an increased risk of cancer was observed in humans. Identification of a specific target organ or tissue does not preclude the possibility that the agent may cause cancer at other sites.”

Preamble (2006), at 19, § 6(a).  This definition hardly helps Cranor in his attempt to defend bad science.  Scientists may reasonably disagree over what is sufficient evidence, but the IARC requires, at a minimum, that “chance, bias and confounding could be ruled out with reasonable confidence.”  Id.  Ruling out chance, of course, introduces considerations of statistical significance, multiple comparisons, and the like.  Ruling out bias and confounding with confidence is an essential part of the IARC categorization process, just as it is an essential part of the scientific process.  Reviewing the relied-upon studies for whether they ruled out chance, bias, and confounding was precisely what the Supreme Court did in General Electric v. Joiner, and what the current statute, Federal Rule of Evidence 702, now requires.  Failing to review the extant epidemiologic studies for their ability to rule out chance, bias, and confounding is exactly what the First Circuit condoned in Milward.

IARC and Conflicts of Interest – Nemo iudex in causa sua

Holding out the IARC process as exemplifying scientific method involves other controversial aspects of the process.  IARC’s classifications are determined by “working groups” that review the available scientific literature on an agent’s carcinogenicity.  Members of these groups are selected in part because they “have published significant research related to the carcinogenicity of the agents being reviewed… .” Preamble at 5.  See also Vincent Cogliano, Robert A. Baan, Kurt Straif, et al., “The science and practice of carcinogen identification and evaluation,” 112 Envt’l Health Persp. 1269, 1273 (2004).

While the IARC tries hard to avoid apparent financial conflicts of interest, its approach to selecting voting members of the working groups invites a more pervasive, more corrupting influence:  working group members must vote on the validity of their own research.  The prestige of their own research will thus be directly affected by the group’s vote, as well as by the analysis in the resulting IARC monograph.  Many writers have criticized this approach.  See, e.g., Paolo Boffetta, Joseph McLaughlin, Carlo La Vecchia, Robert Tarone, Loren Lipworth, and William Blot, “A further plea for adherence to the principles underlying science in general and the epidemiologic enterprise in particular,” 38 Internat’l J. Epidemiol. 678 (2009); Michael Hauptmann & Cecile Ronckers, “A further plea for adherence to the principles underlying science in general and the epidemiologic enterprise in particular,” 39 Internat’l J. Epidemiol. 1677 (2010).

Notably absent from Cranor’s defense of using bad science and incomplete evidence is any discussion of systematic reviews and meta-analyses.  Although “agency” science is a weak shadow of the real thing, even federal agencies have come to see the importance of using the principles of systematic review in their assessments of science for policy purposes.  See, e.g., FDA, Guidance for industry evidence-based review system for the scientific evaluation of health claims (2009).  Currently underway at the National Toxicology Program’s Office of Health Assessment and Translation (OHAT) is an effort to implement systematic review methodology in the Program’s assessments of potential human health hazards.  That the NTP is only now articulating an OHAT Evaluation Process, incorporating principles of systematic review, suggests that something less rigorous was used previously.  See Federal Register Notice, 78 Fed. Reg. 37 (Feb. 25, 2013).

No one should be fooled by Cranor’s attempt to pass off precautionary judgments as scientific determinations of causality.

Cranor’s Defense of Milward at the CPR’s Celebration

May 12th, 2013

THE RISE OF THE UBER-EXPERT

One of the curious aspects of the First Circuit’s decision in Milward was the court’s willingness to tolerate a so-called weight of the evidence (WOE) assessment of a causal issue by toxicologist Martyn Smith, when much of the key evidence did not involve toxicology.  In defending WOE, Professor Cranor argues that scientists (such as those in an International Agency for Research on Cancer (IARC) working group) evaluate evidence from different lines of research into a single, evaluative judgment of the likelihood of causation.  The lines of evidence may involve animal toxicology, cell biology, epidemiology or other disciplines:

“In drawing conclusions from the data to a theory or explanation, it is necessary for scientists to evaluate the quality of different lines of evidence, to integrate them and to assess what conclusion the lines of evidence most likely supports and how well they do so in comparison with alternative explanations.”

See Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” 3 Wake Forest J. L. & Policy 105, 117 (2013) [hereinafter cited as Cranor].

Presumably, the scientists will come to the table with the training, experience, and expertise appropriate to their disciplines.  The curious aspect of Cranor’s defense is that Martyn Smith’s expertise did not encompass many of the lines of research advanced, in particular, the epidemiologic.  Of course, in the real world of science, the assessment of the “lines” of evidence is conducted by scientists from the different, relevant disciplines.  In the make-believe world of courtroom science, the collaboration breaks down when a single expert witness, such as Smith, offers opinions outside his real expertise.  Because the law is not particularly demanding with respect to the extent and scope of expertise, Smith was able to hold forth not only on animal experiments, but on human epidemiologic studies.  The defense was able to show that Smith disregarded basic principles of epidemiology, but the First Circuit agreed with Cranor that consideration of Smith’s disregard should be kicked down the road, to the jury.

As a practical matter, in today’s world of highly specialized scientific disciplines, it is simply not possible for an expert witness to address evidence from all the fields needed to evaluate the multiple lines of evidence relevant to a causal issue.  We should rightfully be skeptical of a single expert witness who claims the ability to weigh disparate lines of evidence to synthesize a judgment of causation.  Of course, this is how science is practiced in a courtroom, not in a university.

REJECTION OF EVIDENCE HIERARCHY

Another salient feature of Cranor’s argument is his insistence that there is no hierarchy of evidence.  Cranor’s argument is ambiguous between rejecting a hierarchy of disciplines and rejecting a hierarchy within epidemiology itself.  Cranor never actually argues directly for a leveling of all types of epidemiologic studies, and as we will see, his one key citation (repeated three times) goes to the hierarchy of disciplines:  epidemiology, molecular biology, genetics, pathology, and the like.

Clearly there are instances of causation determined without epidemiology.  The Henle-Koch postulates, after all, were developed to assess causation of disease by infectious biological organisms.  And in some instances, very suggestive evidence of viral causes of cancer has been attained before confirming epidemiologic evidence.  If there is a meaningful population attributable risk, however, epidemiology should be able to confirm the suspicions of virology or molecular biology.

Cranor repeatedly cites a meeting report of a workshop held in Washington, D.C., in 2003.  See also Michael Green, Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Reference Manual on Scientific Evidence 549, 564 (3d ed. 2011) (citing same meeting report).  Cranor’s citations and quotations misleadingly suggest that the report was an official function of the National Cancer Institute (NCI), and that the published report was an official pronouncement of the NCI.  Neither suggestion is true.

Cranor praises the Circuit’s Milward decision for adopting his argument and citing the meeting report for his claim that there is no hierarchy of evidence:

“Citing National Cancer Institute scientists, [the Circuit] also added that ‘[t]here should be no such hierarchy’ of evidence for carcinogenicity as between epidemiological and some other kinds of evidence. These scientists and many distinguished scientific committees would not require epidemiological studies to support claims that a substance can cause adverse effects in humans or place certain other a priori constraints on evidence.”

Cranor at 119 (citing Milward, at 17, citing Michele Carbone, et al., Modern Criteria to Establish Human Cancer Etiology, 64 Cancer Research 5518, 5522 (2004)).

Given the emphasis that Cranor places upon the Carbone article, it is worth taking a close look.  Carbone’s article was styled a “Meeting Report.” See also Michele Carbone, Jack Gruber, and May Wong, “Modern criteria to establish human cancer etiology,” 14 Semin. Cancer Biol. 397 (2004).  The article was a report of a workshop, not an official NCI publication.  The NCI hosted the meeting; the meeting was not sponsored by the NCI, and the published meeting report was not an official statement of the NCI.  Notably, the report appeared in Cancer Research as a paid advertisement, not in the Journal of the National Cancer Institute as a scholarly article.

In assessing the citation, readers should consider the authors of the meeting report.  Importantly, the discipline of epidemiology was not strongly represented; most of the chairpersons and scientists in attendance were pathologists, cell biologists, virologists, and toxicologists.  The authors of the meeting report reflect the interests and focus of the scientists in attendance.  The lead author was Michele Carbone, a pathologist at Loyola University Chicago.  Some may recognize Carbone as one of the proponents of Simian Virus 40 as a cause of mesothelioma, a hypothesis that has not fared terribly well in the crucible of epidemiologic science.  Other authors included:

George Klein, with the Microbiology and Tumor Biology Center, Karolinska Institute, in Stockholm,

Jack Gruber, a virologist with the Cancer Etiology Branch of the NCI, and

May Wong, a biochemist, with the NCI.

The basis of the citation to Carbone’s meeting report is an informal discussion session that took place at the meeting.  Those in attendance broke out into two groups, one chaired by Brooke Mossman, a pathologist, and the other chaired by Dr. Harald zur Hausen, the famous virologist who discovered the causal relationship between human papilloma virus and cervical cancer.

The meeting report included a narrative of how the two groups responded to twelve questions. Cranor’s citation to this article is based upon one sentence in Carbone’s report, about one of twelve questions:

“6. What is the hierarchy of state-of-the-art approaches needed for confirmation criteria, and which bioassays are critical for decisions: epidemiology, animal testing, cell culture, genomics, and so forth?

There should be no such hierarchy.  Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.”

Carbone at 5522.  Considering the fuller context of the meeting and this report, there is nothing particularly surprising about this statement.  It is not clear that the full question and answer even remotely support the weight that Cranor places upon them.  Clearly, Cranor’s quotations are unduly selective.  For instance, Cranor does not discuss the disagreement among those in attendance over criteria for different carcinogens:

“2. Should the criteria be the same for different agents (viruses, chemicals, physical agents, promoting agents versus initiating DNA-damaging agents)?

There were different opinions. Group 1 debated this issue and concluded that the current listing of criteria should remain the same because we lack sufficient evidence to develop a separate classification. Group 2 strongly supported the view that it is useful to separate the biological or infectious agents from chemical and physical carcinogens due to their frequently entirely different mode of action.”

Carbone at 5521.

Perhaps Cranor did not think a legal audience would be interested in the emphasis given to epidemiology.  The authors of the meeting report noted the importance of epidemiology for general causation, but its limitations for determining specific causation:

“Concerning the respective roles of epidemiology and molecular pathology, it was noted that epidemiology allows the determination of the overall effect of a given carcinogen in the human population (e.g., hepatitis B virus and hepatocellular carcinoma) but cannot prove causality in the individual tumor patient.”

Carbone at 5518.  The report did not state that epidemiology was not necessary for confirmation of carcinogenicity in the species of interest (humans). The meeting report emphasized the need to integrate the findings of epidemiology and of molecular biology; it did not urge that epidemiology be ignored or disregarded:

“A general consensus was often reached on several topics such as the need to integrate molecular pathology and epidemiology for a more accurate and rapid identification of human carcinogens.”

Carbone at 5518.

“Ideally, before labeling an agent as a human carcinogen, it is important to have epidemiological, experimental animals, and mechanistic evidences (molecular pathology). Not all of the evidence is always available, and, at times, it may be prudent to identify a human carcinogen earlier rather than later.”

Carbone at 5519 (emphasis added).  Unlike Cranor, the authors of the meeting report distinguish between instances in which they are acting on a scientific determination of causation, and a precautionary assessment that proceeds prudentially “as if” causation were determined.

Against this fuller context, Cranor’s characterization of the meeting report, and his limited citations and quotations can be seen to be misleading:

“The First Circuit wisely followed the Etiology Branch of the National Cancer Institute, which sponsored a workshop on cancer causation that concluded ‘there should be no . . . hierarchy’ among epidemiology, animal testing, cell culture, genomics, and so forth.”164

Cranor at 129.  The suggestion that the informal workshop statement represented the views of the Etiology Branch is bogus.  Not content to misrepresent twice, Cranor comes back for yet a third misleading citation to this report:

“A further conclusion, already noted, is that scientific experts in court should be permitted to rely upon all scientifically relevant evidence in nondeductive arguments to draw conclusions about causation.209 “There should be no such hierarchy” of evidence, as the Milward court put it, following scientists conducting a workshop at the National Cancer Institute.210 This decision stands as an important corrective to the views of some other appellate and district courts concerning the scientific foundation for expert testimony in toxic tort cases.”

Cranor at 135 (emphasis in original) (citing Carbone for a third time).  To see how misleading Cranor’s suggestion is that expert witnesses should be permitted to rely upon all scientifically relevant evidence, consider the meeting report’s careful admonition about the lack of validity of some animal models and mechanistic research:

“Moreover, carcinogens and anticarcinogens can have different effects in different situations.  As shown by the example of addition of β-carotene in the diet, β-carotene has chemopreventive effects in many experimental systems, yet it appears to have increased the incidence of lung cancer in heavy smokers. Animal experiments can be very useful in predicting the carcinogenicity of a given chemical. However, there are significant differences in susceptibility among species and within organs in the same species, and differences in the metabolic pathway of a given chemical among human and animals could lead to error.”

Carbone at 5521.  Obviously, relevance is conditioned upon validity, a relationship that is ignored, suppressed, or dismissed in Cranor’s article.

The devil, or the WOE, comes from ignoring the details.

Clowns to the left of me, Jokers to the right

May 11th, 2013

Both the left and the right are infused with hypocrisy when it comes to accepting science and evidence-based evaluations.  The source and cause of the antagonism should be obvious.  A scientific worldview requires a commitment to changing positions if and when new evidence develops, models are refined, and theory deepens.  A political (or a religious) worldview places core commitments above empirical data, as was so clearly revealed in the case of The Vatican v. Galileo Galilei.  The left wants scientists to practice science for the redistribution of wealth.  The right wants scientists to practice science for the Ad maiorem Dei gloriam.

The latest assault on science comes from the right, and has the name “The High Quality Research Act” (HQRA).  The text of the bill provides:

SEC. 2. HIGH QUALITY RESEARCH.

(a) CERTIFICATION

Prior to making an award of any contract or grant funding for a scientific research project, the Director of the National Science Foundation shall publish a statement on the public website of the Foundation that certifies that the research project—

(1) is in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;

(2) is the finest quality, is ground breaking, and answers questions or solves problems that are of utmost importance to society at large; and

(3) is not duplicative of other research projects being funded by the Foundation or other Federal science agencies.

(b) TRANSFER OF FUNDS

Any unobligated funds for projects not meeting the requirements of subsection (a) may be awarded to other scientific research projects that do meet such requirements.

(c) INITIAL IMPLEMENTATION REPORT .

Not later than 60 days after the date of enactment of this Act, the Director shall report to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science, Space, and Technology of the House of Representatives on how the requirements set forth in subsection (a) are being implemented.

(d) NATIONAL SCIENCE BOARD IMPLEMENTATION REPORT

Not later than 1 year after the date of enactment of this Act, the National Science Board shall report to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science, Space, and Technology of the House of Representatives its findings and recommendations on how the requirements of subsection (a) are being implemented.

(e) IMPLEMENTATION BY OTHER AGENCIES

Not later than 1 year after the date of enactment of this Act, the Director of the Office of Science and Technology Policy, in collaboration with the National Science and Technology Council, shall report to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science, Space, and Technology of the House of Representatives on how the requirements of sub-section (a) may be implemented in other Federal science agencies.

Although the Bill applies by its terms to the National Science Foundation, the Congressional mandate envisions implementation by other Federal science agencies such as the National Institutes of Health.  The heart of the Bill is the required certification from the Director of the NSF under Section (2)(a)(1), (2), and (3), above.

The proposed statutory criteria for the Director’s certification virtually ensure that no research will be funded; indeed, the criteria are inimical to the very idea of research.

The first criterion, the general welfare and defense of the country, is an utterly vacuous standard.  The Director could certify any research under this criterion.

The second criterion, which I call the criterion of hyperbolic research, can virtually never be met.  Funded research must now not only be of good or excellent quality; it must be of “the finest quality.”  The research must be “ground breaking.”  Of course, no honest researcher knows in advance that the research will be ground breaking.  There are no guarantees of success in research.  If there were, then the research would be unnecessary because the grant proposal would suffice.  I suspect that every researcher believes his or her research will “answer questions or solve problems that are of the utmost importance to society at large,” but the bill is written to suggest that the Director must certify success in advance, as justification for the funding.  Of course, the NSF Director, if honest, will never be able to satisfy this criterion for most research, even when the research is relatively successful in advancing scientific understanding of some phenomenon.  If this is the standard, nothing would or should be funded.  Why not just say that government should not be a funding source, because political processes can never ensure the highest quality research?

The third criterion for certification by the Director, which requires that the funded research not be duplicative of other federally funded research projects, is the hardest to understand.  A crucial part of the scientific process is replication and demonstration of consistency.  Furthermore, non-duplication is a vague and contested criterion at best.  If a previous cross-sectional study suggested an association between an environmental exposure and a particular disease, would NSF funding of a case-control study be duplicative because it would look at a possible association between the same exposure and outcome as studied in the previous study?  I would think not, but the language of the bill invites an attack on the Director for certifying the case-control study.

I would be the first to agree that there is some poor science conducted at the public’s expense (and some very good science too), but the certification is poorly designed to advance the quality of federally funded scientific research.  No doubt, the sponsors of the bill see the certification requirement as an opportunity to haul the Director of the NSF (and ultimately the Director of the National Institutes of Health) into Congressional committee meetings to be publicly dressed down for research that the committee members disapprove of.

Dylan Walsh of the New Yorker has reported about the introduction of the HQRA bill by Representative Lamar Smith, chairman of the House Committee on Science, Space, and Technology. Dylan Walsh, “Not Safe for Funding: The N.S.F. and the Economics of Science,” The New Yorker (May 9, 2013).

Among the distinguished members of the House Committee on Science, Space, and Technology is scientist Congressman Paul Broun.  Back in September 2012, at a church-sponsored event in Georgia, Dr. Broun declared that “all that stuff I was taught about evolution and embryology and the Big Bang theory” are “lies straight from the pit of hell.” These lies, according to Broun, are no casual deviation from the truth; they are part of a conspiracy “to try to keep me and all the folks who were taught that from understanding that they need a savior.”

This is the same Representative Broun who declared:

“You see, there are a lot of scientific data that I’ve found out as a scientist that actually show that this is really a young Earth. I don’t believe that the earth’s but about 9,000 years old. I believe it was created in six days as we know them. That’s what the Bible says.”

The proposed HQRA is all about turning control of science funding over to politicians, people like Representative Broun and his colleagues.

There is an interesting discussion of the HQRA at Professor Deborah Mayo’s blog, “If it’s called the ‘The High Quality Research Act’, then ….” (May 9, 2013).

Professor Sanders’ Paean to Milward

May 7th, 2013

Deconstructing the Deconstruction of Deconstruction

Some scholars have suggested that the most searching scrutiny of scientific research takes place in the courtroom.  Barry Nace’s discovery of the “mosaic method” notwithstanding, lawyers rarely contribute new findings, which I suppose supports Professor Sanders’ characterization of the process as “deconstructive.”  The scrutiny of courtroom science is encouraged by the large quantity of poor quality opinions, on issues that must be addressed by lawyers and their clients who wish to prevail.  As philosopher Harry Frankfurt described this situation:

“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.  Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”

Harry Frankfurt, On Bullshit 63 (Princeton Univ. 2005).

This unfortunate situation would seem to be especially true for advocacy science that involves scientists who are intent upon influencing public policy questions, regulation, and litigation outcomes.  Some of the most contentious issues, and tendentious studies, take place within the realm of occupational, environmental, and related disciplines. Sadly, many occupational and environmental medical practitioners seem particularly prone to publish in journals with low standards and poor peer review.  Indeed, the scientists and clinicians who work in some areas make up an insular community, in which the members are the peer reviewers and editors of each other’s work.  The net result is that any presumption of reliability for peer-reviewed biomedical research is untenable.

The silicone gel-breast implant litigation provides an interesting case study of the phenomenon.  Contrary to post-hoc glib assessments that there was “no” scientific evidence offered by plaintiffs, the fact is that there was a great deal.  Most of what was offered was published in peer-reviewed journals; some was submitted by scientists who had some credibility and standing within their scientific, academic communities:  Gershwin, Kossovsky, Lappe, Shanklin, Garrido, et al.  Lawyers, armed with subpoenas, interrogatories, and deposition notices, were able to accomplish what peer reviewers could not.  What Professor Sanders and others call “deconstruction” was none other than a scientific demonstration of study invalidity, seriously misleading data collection and analysis, and even fraud.  See Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Some scientific publications are motivated almost exclusively by the goal of influencing regulatory or political action.  Consider the infamous meta-analysis by Nissen and Wolski, of clinical trials and heart attack among patients taking Avandia.  Steven Nissen & Kathy Wolski, “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007). The New England Journal of Medicine rushed the meta-analysis into print in order to pressure the FDA to step up its regulation of post-marketing surveillance of licensed medications.  Later, better-conducted meta-analyses showed how fragile Nissen’s findings were.  See, e.g., George A. Diamond, MD, et al., “Uncertain Effects of Rosiglitazone on the Risk for Myocardial Infarction and Cardiovascular Death,” 147 Ann. Intern. Med. 578 (2007); Tian, et al., “Exact and efficient inference procedure for meta-analysis and its application to the analysis of independent 2 × 2 tables with all available data but without artificial continuity correction,” 10 Biostatistics 275 (2008).  Lawyers should not be shy about pointing out political motivations of badly conducted scientific research, regardless of authorship or where published.
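For readers who want to see the mechanics behind such pooling, the following is a minimal sketch of fixed-effect, inverse-variance meta-analysis of odds ratios.  The numbers are hypothetical, and this is only one common pooling method; it is not offered as a reconstruction of the method Nissen and Wolski actually used on their sparse trial data.

```python
import math

def fixed_effect_meta(odds_ratios, cis):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Each CI is a (lower, upper) 95% interval; the standard error of the
    log odds ratio is recovered as (ln(upper) - ln(lower)) / (2 * 1.96).
    """
    weights, weighted_logs = [], []
    for orr, (lo, hi) in zip(odds_ratios, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2                      # weight = inverse variance
        weights.append(w)
        weighted_logs.append(w * math.log(orr))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    pooled = math.exp(pooled_log)
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return pooled, ci

# Hypothetical study results, for illustration only:
pooled, ci = fixed_effect_meta([1.8, 1.2, 0.9],
                               [(0.9, 3.6), (0.8, 1.8), (0.5, 1.6)])
```

The pooled estimate is a weighted geometric mean of the study odds ratios, which is why a few imprecise studies with wide intervals contribute little weight, and why results can be fragile when event counts are low.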

On the other hand, lawyers on both sides of litigation are prone to attack on personal bias and potential conflicts of interest because these attacks are more easily made, and better understood by judges and jurors.  Perhaps it is these “deconstructions” that Professor Sanders finds overblown, in which case, I would agree.  Precisely because jurors have difficulty distinguishing between allegations of funding bias and validity flaws that render studies nugatory, and because inquiries into validity require more time, care, analysis, attention, and scientific and statistical learning, pretrial gatekeeping of expert witnesses is an essential part of achieving substantial justice in litigation of scientific issues.  This is a message that is obscured by the recent cheerleading for the Milward decision at the litigation industry’s symposium on the case.

Deconstructing Professor Sanders’ Deconstruction of the Deconstruction in Milward

A few comments about Professor Sanders’ handling of the facts of Milward itself.

The case arose from a claim of occupational exposure to benzene and an outcome known as APL (acute promyelocytic leukemia), which makes up about 10% of AML (acute myeloid leukemia).  Sanders argues, without any support, that APL is too rare for epidemiology to be definitive.  Sanders at 164.  Here Sanders asserts what Martyn Smith opined, and ignores the data that contradicted Smith.  At least one of the epidemiologic studies cited by Smith was quite large and was able to discern small statistically significant associations when present.  See, e.g., Nat’l Investigative Group for the Survey of Leukemia & Aplastic Anemia, “Countrywide Analysis of Risk Factors for Leukemia and Aplastic Anemia,” 14 Acta Academiae Medicinae Sinicae (1992).  This study found a crude odds ratio of 1.42 for benzene exposure and APL (M3). The study had adequate power to detect a statistically significant odds ratio of 1.54 between benzene and M2a.  Of course, even if one study’s “power” were low, there are other, aggregative strategies, such as meta-analysis, available.  This was not a credibility issue concerning Dr. Smith, for the jury; Smith’s opinion turned on incorrect and fallacious analyses that did not deserve “jury time.”
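The arithmetic behind a crude odds ratio and its confidence interval is simple enough to sketch.  The counts below are hypothetical, chosen only for illustration, and are not the actual data from the study cited above:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf (log-based) 95% CI from a 2x2 table:

                exposed   unexposed
    cases          a          b
    controls       c          d
    """
    orr = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of ln(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, (lo, hi)

# Hypothetical counts: 14 of 100 cases exposed, 10 of 100 controls exposed
orr, (lo, hi) = odds_ratio_ci(14, 86, 10, 90)
```

The confidence interval, not a retrospective power calculation, is what conveys the statistical precision actually achieved by the data.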

The problem, according to Sanders, is one of “power.”  In a lengthy footnote, Sanders explains what “power” is, and why he believes it is a problem:

“The problem is one of power. Tests of statistical significance are designed to guard against one type of error, commonly called Type I Error. This error occurs when one declares a causal relationship to exist when in fact there is no relationship, … . A second type of error, commonly called Type II Error, occurs when one declares a causal relationship does not exist when in fact it does. Id. The “power” of a study measures its ability to avoid a Type II Error. Power is a function of a study’s sample size, the size of the effect one wishes to detect, and the significance level used to guard against Type I Error. Because power is a function of, among other things, the significance level used to guard against Type I errors, all things being equal, minimizing the probability of one type of error can be done only by increasing the probability of making the other.  Formulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data.

Because the power of any test is reduced as the incidence of an effect decreases, Type II threats to causal conclusions are particularly relevant with respect to rare events. Plaintiffs make a fair criticism of randomized trials or epidemiological cohort studies when they note that sometimes the studies have insufficient power to detect rare events. In this situation, case-control studies are particularly valuable because of their relatively greater power. In most toxic tort contexts, the defendant would prefer to minimize Type I Error while the plaintiffs would prefer to minimize Type II Error. Ideally, what we would prefer are studies that minimize the probability of both types of errors. Given the importance of power in assessing epidemiological evidence, surprisingly few appellate opinions discuss this issue. But see DeLuca v. Merrell Dow Pharm., Inc., 911 F.2d 941, 948 (3d Cir. 1990), which contains a good discussion of epidemiological evidence. The opinion discusses the two types of error and suggests that courts should be concerned about both. Id. Unfortunately, neither the district court opinion nor the court of appeals opinion in Milward discusses power.”

Sanders at 164 n.115 (internal citations omitted).

Sanders is one of the few law professors who almost manages to describe statistical power correctly.  Calculating and evaluating power requires pre-specification of alpha (our maximum tolerated Type I error), sample size, and an alternative hypothesis that we would want to be able to identify at a statistically significant level.  This much is set out in the footnote quoted above.

Sample size, however, is just one determinant of a study’s variance; variance is not completely specified by sample size alone.  More important, Sanders’ invocation of power to evaluate the exonerative quality of a study has been largely rejected in the world of epidemiology.  His note that “[f]ormulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data” is largely irrelevant because power is mostly confined to sample-size determinations made before a study is conducted.  After the data are collected, studies are evaluated by their point estimates and their corresponding confidence intervals. See, e.g., Vandenbroucke, et al., “Strengthening the reporting of observational studies in epidemiology (STROBE):  Explanation and elaboration,” 18 Epidemiology 805, 815 (2007) (Section 10, sample size) (“Do not bother readers with post hoc justifications for study size or retrospective power calculations. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained.”) (emphasis added). See also “Power in the Courts — Part Two” (Jan. 21, 2011).

Type II error is important in the evaluation of evidence, but it requires a commitment to a specific alternative hypothesis.  That alternative can always be set closer and closer to the null hypothesis of no association in order to conclude, as some plaintiffs’ counsel would want, that all studies lack power (except of course the ones that turn out to support their claims).  Sanders’ discussion of statistical power ultimately falters because claiming a lack of power without specifying the size of the alternative hypothesis is unprincipled and meaningless.
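The point can be illustrated numerically: power cannot be computed at all until a specific alternative is stated, and the nearer the chosen alternative sits to the null, the lower the computed "power" for the very same study.  The sketch below uses a simple two-proportion normal approximation with hypothetical numbers; it is illustrative only, not any party's actual power analysis:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_case_control(or_alt, p0, n, alpha_z=1.96):
    """Approximate power of a case-control study (n cases, n controls,
    control exposure prevalence p0) to detect odds ratio or_alt at
    two-sided alpha = 0.05, via the two-proportion z-test."""
    p1 = or_alt * p0 / (1 + p0 * (or_alt - 1))  # implied case prevalence
    pbar = (p0 + p1) / 2
    se0 = math.sqrt(2 * pbar * (1 - pbar) / n)             # SE under null
    se1 = math.sqrt((p1 * (1 - p1) + p0 * (1 - p0)) / n)   # SE under alt.
    return phi((abs(p1 - p0) - alpha_z * se0) / se1)

# Same hypothetical study (500 cases, 500 controls, 10% control exposure);
# only the posited alternative changes:
for or_alt in (2.0, 1.5, 1.1):
    print(or_alt, round(power_case_control(or_alt, p0=0.10, n=500), 2))
```

An advocate who quietly moves the alternative from 2.0 down to 1.1 can thus declare the same study "underpowered," which is why a claimed lack of power is meaningless without a stated alternative.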

Sanders tells us that cohorts will have less power than case-control studies, but again the devil is in the details.  Case-control studies are of course relatively more efficient in studying rare diseases, but the statistical precision of their odds ratios will be given by the corresponding confidence intervals.

What is missing from Sanders’ scholarship is a simple statement of what the point estimates and their confidence intervals are.  Plaintiffs in Milward argued that epidemiology was well-nigh unable to detect increased risks of APL, but then embraced epidemiology once Smith had manipulated and re-arranged the data in published studies.

The Yuck Factor

One of the looming problems in expert witness gatekeeping is judicial discomfort and disability in recounting the parties’ contentions, the studies’ data, and the witnesses’ testimony.  In a red car/blue car case, judges are perfectly comfortable giving detailed narratives of the undisputed facts, and the conditions that give rise to discounting or excluding evidence or testimony.  In science cases, not so much.

Which brings us to the data manipulation conducted by Martyn Smith in the Milward case.  Martyn Smith is not an epidemiologist, and he has little or no experience or expertise in conducting and analyzing epidemiologic studies.  The law of expert witnesses makes challenges to an expert’s qualifications very difficult; generally courts presume that expert witnesses are competent to testify about general scientific and statistical matters.  Often the presumption is incorrect.

In Milward, Smith claimed, on the one hand, that he did not need epidemiology to reach his conclusion, but on the other hand that “suggestive” findings supported his opinion.  On the third hand, he seemed to care enough about the epidemiologic evidence to engage in fairly extensive reanalysis of published studies.  As the district court noted,  Smith made “unduly favorable assumptions in reinterpreting the studies, such as that cases reported as AML could have been cases of APL.”  Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp. 2d 137, 149 (D. Mass. 2009), rev’d, 639 F.3d 11, 19 (1st Cir. 2011), cert. denied sub nom. U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).  Put less charitably, Smith made up data to suit his hypothesis.

The details of Smith’s manipulations go well beyond cherry picking.  Smith assumed, without evidence, that AML cases were APL cases.  Smith arbitrarily chose and rearranged data to create desirable results.  See Deposition Testimony of Dr. David Garabrant at 22 – 53, in Milward (Feb. 18, 2009).  In some studies, Smith discarded APL cases from the unexposed group, with the consequence of increasing the apparent association; he miscalculated odds ratios; and he presented odds ratios without p-values or confidence intervals.  The district court certainly was entitled to conclude that Smith had sufficiently deviated from scientific standards of care as to make his testimony inadmissible.

Regrettably, the district court did not provide many details of Smith’s reanalyses of studies and their data.  The failure to document Smith’s deviations facilitated the Circuit’s easy generalization that the fallacious reasoning and methodology was somehow invented by the district court.

The appellate court gave no deference to the district court’s assessment, and by judicial fiat turned methodological missteps into credibility issues for the jury.  The Circuit declared that the analytical gap was of the district court’s making, which seemed plausible enough if one read only the appellate decision.  If one reads the actual testimony, the Yuck Factor becomes palpable.

WOE Unto Bradford Hill

Professor Sanders accepts the appellate court’s opinion at face value for its suggestion that:

“Dr. Smith’s opinion was based on a ‘weight of the evidence’ methodology in which he followed the guidelines articulated by world-renowned epidemiologist Sir Arthur Bradford Hill in his seminal methodological article on inferences of causality.”

Sanders at 170 n.140 (quoting Milward, 639 F.3d at 17).

Sanders (and the First Circuit) is unclear whether WOE consists of following the guidelines articulated by Sir Arthur (perhaps Sir Austin Bradford Hill’s less distinguished brother?), or merely includes the guidelines in a larger process.  Not only was there no Sir Arthur, but Sir Austin’s guidelines are distinctly different from WOE in that they pre-specify the considerations to be applied.  Nowhere does the appellate court give any meaningful consideration to whether there was an exposure-response gradient shown, or whether the epidemiologic studies consistently showed an association between benzene and APL.  Had the Circuit given any consideration to the specifics of the guidelines, it would likely have concluded that the district court had engaged in fairly careful, accurate gatekeeping, well within its discretion.  (If the standard were de novo review rather than “abuse of discretion,” the Circuit would have had to confront the significant analytical gaps and manipulations in Smith’s testimony.)  Furthermore, it is time to acknowledge that Bradford Hill’s “guidelines” are taken from a speech given by Sir Austin almost 50 years ago; they hardly represent a comprehensive, state-of-the-art set of guidelines for causal analysis in epidemiology today.

So there you have it.  WOE means the Bradford Hill guidelines, except that the individual guidelines need not be considered.  And although Bradford Hill’s guidelines were offered to evaluate a body of epidemiologic studies, WOE teaches us that we do not need epidemiologic studies, especially if they do not help to establish a plaintiff’s claim.  Sanders at 168 & n.133 (citing Milward at 22-24).

What is WOE?

If WOE were not really the Bradford Hill guidelines, then what might it be? Attempting to draw a working definition of WOE from the Milward appellate decision, Sanders tells us that WOE requires looking at all the relevant evidence.  Sanders at 169.  Not much guidance there.  Elsewhere he tells us that WOE is “reasoning to the best explanation,” without explicating what such reasoning entails.  Sanders at 169 & n.136 (quoting Milward at 23, “The hallmark of the weight of the evidence approach is reasoning to the best explanation.”).  This hardly tells us anything about what method Smith and his colleagues were using.

Sanders then tells us that WOE means the whole “tsumish.” (My word; not his.)  Not only should expert witnesses rely upon all the relevant evidence, but they should eschew an atomistic approach that looks (too hard) at individual studies.  Of course, there may be value in looking at the entire evidentiary display.  Indeed, a holistic view may be needed to show the absence of causation.  In many litigations, plaintiffs’ counsel are adept in filling up the courtroom with “bricks,” which do not fit together to form the wall they claim.  In the silicone gel breast implant litigation, plaintiffs’ counsel were able to pick out factoids from studies to create sufficient confusion and doubt that there might be a causal connection between silicone and autoimmune disease.  A careful, systematic analysis, which looked at the big picture, demonstrated that these contentions were bogus.  Committee on the Safety of Silicone Breast Implants, Institute of Medicine, Safety of Silicone Breast Implants (Wash. D.C. 1999) (reviewing studies, many of which were commissioned by litigation defendants, and which collectively showed lack of association between silicone and autoimmune diseases).  Sometimes, however, taking in the view of the entire evidentiary display may obscure what makes up the display.  A piece by El Anatsui may look like a beautiful tapestry, but a closer look will reveal it is just a bunch of bottle caps wired together.

Contrary to Professor Sanders’ assertions, nothing in the Milward appellate opinion explains why studies should be viewed only as a group, or why this view will necessarily show something greater than the parts. Sanders at 170.  Although Sanders correctly discerns that the Circuit elevated WOE from “perspective” to a methodology, there is precious little content to the methodology, especially if it permits witnesses to engage in all sorts of data shenanigans or arbitrary weighting of evidence.  The quaint notion that there is always a best explanation obscures the reality that in science, and especially in science that is likely to be contested in a courtroom, the best explanation will often be “we don’t know.”

Sanders eventually comes around to admit that WOE is perplexingly vague as to how the weighing should be done.  Id. at 170.  He also admits that the holistic view is not always helpful.  Id. at 170 & n.139 (the sum is greater than its parts but only when the combination enhances the supportiveness of the parts, and the collective support for the conclusion at issue, etc.).  These concessions should give courts serious pause before they adopt a dissent from a Supreme Court case that has been repeatedly rejected by courts, commentators, and ultimately by Congress in revising Rule 702.

WOE is Akin to Differential Diagnosis

The Milward opinion seems like a bottomless reserve of misunderstandings.  Professor Sanders barely flinches at the court’s statement that “The use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis.”  Milward at 18.  See Sanders at 171.  Differential “diagnosis” requires a previous demonstration of general causation, and proceeds by iterative disjunctive syllogism.  Sanders, and the First Circuit, somehow missed that this syllogistic reasoning is completely unrelated to the abductive inferences that may play a role in reaching conclusions about general causation.  Sanders revealingly tells us that “[e]xperts using a weight of the evidence methodology should be given the same wide latitude as is given those employing the differential diagnosis method.”  Sanders at 172 & n.147.  This counsel appears to be an invitation to speculate.  If the “wide latitude” to which Sanders refers means the approach of a minority of courts that allow expert witnesses to rule in differentials by speculation, and then to reach their conclusions without ruling out idiopathic causes, then Sanders’ approach is advocacy for epistemic nihilism.
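The disjunctive structure of differential etiology can be sketched in a few lines.  This is an illustrative toy, not anything drawn from the Milward record; the candidate causes are hypothetical, and the point is only that the syllogism presupposes an exhaustive set of causes already established at the general-causation stage:

```python
def differential_etiology(candidates, ruled_out):
    """Iterative disjunctive syllogism: begin with causes already shown
    capable of producing the condition (general causation), then
    eliminate.  The inference is valid only if `candidates` is an
    exhaustive set and each elimination is itself well-founded."""
    remaining = [c for c in sorted(candidates) if c not in ruled_out]
    if len(remaining) == 1:
        return remaining[0]  # the lone surviving disjunct
    return None              # inconclusive: zero or several candidates survive

# Hypothetical example: so long as "idiopathic" is never ruled out,
# the syllogism cannot close on a single cause.
print(differential_etiology({"solvent exposure", "genetic syndrome", "idiopathic"},
                            {"genetic syndrome"}))
```

The sketch makes plain why the method cannot supply general causation: a cause that was never validly “ruled in” has no business in `candidates` to begin with.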

The Corpuscular Approach

Professor Sanders seems to endorse the argument of Milward, as well as Justice Stevens’ dissent in Joiner, that scientists do not assess research by looking at the validity (vel non) of individual studies, and therefore courts should not permit this approach.  Sanders at 173 & n.15.  Neither Justice Stevens nor Professor Sanders presents any evidence for the predicate assertion, which a brief tour of IARC’s less political working group reports would show to be incorrect.

The rationale for Sanders’ (and Milward’s) reduction of science to WOE becomes clear when Sanders asserts that “[p]erhaps all or nearly all critiques of an expert employing a weight of the evidence methodology should go to weight, not admissibility.”  Id. at 173 & n.155.  To be fair, Sanders notes that the Milward court carved out a “solid-body” of exonerative epidemiology exception to WOE.  Id. at 173-74.  This exception, however, does nothing other than place a substantial burden upon the opponent of the expert witness opinion to show that the opinion is demonstrably incorrect.  The proponent gets a free pass as long as there is no “solid body” of such evidence showing that he is affirmatively wrong.  Discerning readers will observe that this maneuver simply shifts the burden of admissibility to the opponent, and eschews the focus on methodology for a renewed emphasis upon general acceptance of conclusions.  Id.

Sanders also notes that other courts have seen through the emptiness of WOE and rejected its application in specific cases.  Id. at 174 & n.163-64 (citing Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 601-02 (D.N.J. 2002), aff’d, 68 F. App’x 356 (3d Cir. 2003), where the trial court rejected Dr. Ozonoff’s attempt to deploy WOE without explaining or justifying the mixing and matching of disparate kinds of studies with disparate results).  Sanders’ analysis of Milward seems, however, designed to skim the surface of the case in an effort to validate the First Circuit’s superficial approach.


Milward’s Singular Embrace of Comment C

May 4th, 2013

Professor Michael D. Green is one of the Reporters for the American Law Institute’s Restatement (Third) of Torts: Liability for Physical and Emotional Harm.   Green has been an important interlocutor in the on-going debate and discussion over standards for expert witness opinions.  Although many of his opinions are questionable, his writing is clear, and his positions, transparent.  The seduction of Professor Green and the Wake Forest School of Law by one of the litigation-industry’s organizations, the Center for Progressive Reform, is unfortunate, but the resulting symposium gave Professor Green an opportunity to speak and write about the justly controversial comment c.   Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28, cmt. c (2010).

Mock Pessimism Over Milward

Professor Green professes to be pessimistic about the Milward decision, but his only real ground for pessimism is that Milward will not be followed.  Michael D. Green, “Pessimism about Milward,” 3 Wake Forest J. L & Policy 41 (2013).  Green describes the First Circuit’s decision in Milward as “fresh,” “virtually unique and sophisticated,” and “satisfying.” Id. at 41, 43, and 50.  Green describes his own reaction to the decision in terms approaching ecstasy:  “delighted,” “favorable,” and “elation.”  Id. at 42, 42, and 43.

Green interprets Milward to embrace four comment c propositions:

  1. “Recognizing that judgment and interpretation are required in assessments of causation.”
  2. Endorsing explicitly and taking seriously weight of the evidence methodology, against the great majority of federal courts that had, since Joiner, employed a Balkanized approach to assessing different pieces of evidence bearing on causation.
  3. Appreciating that because no algorithm exists to constrain the inferential process, scientists may reasonably reach contrary conclusions.
  4. Not only stating, but taking seriously, the proposition that epidemiology demonstrating the connection between plaintiff’s disease and defendant’s harm is not required for an expert to testify on causation.  Many courts had stated that idea, but very few had found non-epidemiologic evidence that satisfied them.

Id. at 50-51.

Green’s points suggest that comment c was designed to reinject a radical subjectivism into scientific judgments allowed to pass for expert witness opinions in American courts.  None of the points is persuasive.  Point (1) is vacuous.  Saying that judgment is necessary does not imply that anything goes or that we will permit the expert witness to be the judge of whether his opinion rises to the level of scientific knowledge.  The required judgment involves an exacting attention to the role of random error, bias, or confounding in producing an apparent association, as well as to the validity of the data, methods, and analyses used to interpret observational or experimental studies.  The required judgment involves an appreciation that not all studies are equally weighty, or equally worthy of consideration for use in reaching causal knowledge.  Some inferences are fatally weak or wrong; some analyses or re-analyses of data are incorrect.  Not all judgments can be blessed by anointing some of them “subjective.”

Point (2) illustrates how far the Restatement process has wandered into the radical terrain of abandoning gatekeeping altogether.  The approach that Green pejoratively calls “Balkanized” is a careful look at what expert witnesses have relied upon, to assess whether their conclusions or claims follow from their relied-upon sources.  This is the approach used by International Agency for Research on Cancer (IARC) working groups, whose method Green seems to endorse.  Id. at 59.  IARC working groups discuss and debate their inclusionary and exclusionary criteria for studies to be considered, and the validity of each study and its analyses, before they reach an assessment of the entire evidentiary display.  (And several of the IARC working groups have been by no means free of the conscious bias and advocacy that Green sees in party-selected expert witnesses.)  Elsewhere, Green refers to the approach of most federal courts as “corpuscular.”  Id. at 51.  Clearly, expert witnesses want to say things in court that do not, so to speak, add up, but Green appears to want to give them all a pass.

Point (3) is, at best, a half-truth.  Is Green claiming that reasonable scientists always disagree?  His statement of the point suggests epistemic nihilism.  Although there are no clear algorithms, the field of science is littered with abandoned and unsuccessful theories from which we can learn when to be skeptical or dismissive of claims and conclusions.  Certainly there are times when reasonable experts will disagree, but there are also times when experts on one side or the other, or both, are overinterpreting or misinterpreting the available evidence.  The judicial system has the option, and the obligation, to withhold judgment when faced with sparse or inconsistent data.  In many instances, litigation arises because the scientific issues are controversial and unsettled, and the only reasonable position is to look for more evidence, or to look more carefully at the extant evidence.

Point (4) is similarly overblown and misguided.  Green states his point as though epidemiology will never be required.  Here Green’s sympathies betray any sense of fidelity to law or science.  Of course, there may be instances in which epidemiologic evidence will not be necessary, but it is also clear that sometimes only epidemiologic methods can establish the causal claim with any meaningful degree of epistemic warrant.

ANECDOTES TO LIVE BY

Anthony Robbins’ Howler

Professor Green delightfully shares two important anecdotes.  Both are revealing of the process that led up to comment c, and to Milward.

The first anecdote involves the 2002 meeting of the American Law Institute.  Apparently someone thought to invite Dr. Anthony Robbins as a guest. (Green does not tell us who performed this subversive act.)  Robbins is a member of SKAPP, the organization started with plaintiffs’ counsel’s slush fund money diverted from MDL 926, the silicone-gel breast implant litigation.

Robbins rose at the meeting to chastise the ALI for not knowing what it was talking about:

“clear, in my opinion, misstatements of . . . science” or reflected a misunderstanding of scientific principles that “leaves everyone in doubt as to whether you know what you are talking about . . . .”

Id. at 44 (quoting from 79th Annual Meeting, 2002 A.L.I. PROC. at 294).  Pretty harsh, except that Professor Green proceeds to show that it was Robbins who had no idea of what he was talking about.

Robbins asserted that the requirement of a relative risk of greater than two was scientifically incorrect. From Green’s telling of the story, it is difficult to understand whether Robbins was complaining about the use of relative risks (greater than two) for inferring general or specific causation.  If the former, there is some truth to his point, but Robbins would be wrong as to the latter.  Many scientists have opined that relative risks provide information about attributable fractions, which in turn permit inferences about individual cases.  See, e.g., Troyen A. Brennan, “Can Epidemiologists Give Us Some Specific Advice?” 1 Courts, Health Science & the Law 397, 398 (1991) (“This indeterminancy complicates any case in which epidemiological evidence forms the basis for causation, especially when attributable fractions are lower than 50%.  In such cases, it is more probable than not that the individual has her illness as a result of unknown causes, rather than as a result of exposure to hazardous substance.”).  Others have criticized the inference, but usually on the basis that the inference requires that the risk be stochastically distributed in the population under consideration, and we often do not know whether this assumption is true.  Of course, the alternative is that we must stand mute in the face of even very large relative risks and established general causation.  See, e.g., McTear v. Imperial Tobacco Ltd., [2005] CSOH 69, at ¶ 6.180 (Nimmo Smith, L.J.) (“epidemiological evidence cannot be used to make statements about individual causation… . Epidemiology cannot provide information on the likelihood that an exposure produced an individual’s condition.  The population attributable risk is a measure for populations only and does not imply a likelihood of disease occurrence within an individual, contingent upon that individual’s exposure.”).
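The arithmetic behind the relative-risk-of-two heuristic is simple.  As a rough sketch (setting aside the stochastic-distribution caveat just noted), the attributable fraction among the exposed is (RR − 1)/RR, which makes RR = 2 the break-even point for “more probable than not”:

```python
def attributable_fraction(rr):
    """Attributable fraction among the exposed: (RR - 1) / RR.
    Under the usual (and contested) assumptions, this is read as the
    probability that an exposed individual's disease is due to the
    exposure rather than to background causes."""
    if rr <= 1.0:
        return 0.0  # no excess risk to attribute
    return (rr - 1.0) / rr

# RR = 2 yields exactly 0.5 -- the "more probable than not" threshold;
# larger relative risks push the fraction above one half.
for rr in (1.5, 2.0, 3.0, 10.0):
    print(f"RR {rr:>4}: attributable fraction {attributable_fraction(rr):.3f}")
```

The sketch also shows why the heuristic bears on specific, not general, causation: the formula presupposes a real association and merely apportions it.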

Robbins’ second point was truly a howler, one that suggests his animus against gatekeeping may grow out of a concern that he would never pass a basic test of statistical competency.  According to Green, Robbins claimed that “increasing the number of subjects in an epidemiology study can identify small effects with ‘an almost indisputable causal role’.”  Id. at 45 (quoting Robbins).  Ironically, lawyer and law professor Green was left to take Robbins to school, to educate him on the differences among sampling error, bias, and confounding.  Green does not get the story completely right because he draws an artificial line between observational epidemiology and experimental clinical trials, and incorrectly implies that bias and confounding are problems only in observational studies.  Id. at 45 n.24.  Although randomization is undertaken in clinical trials to control for bias and confounding, it is not true that this strategy always works, or always works completely.  Still, here we have a lawyer delivering the comeuppance to the scolding scientist.  Sometimes scientists really have no good basis to support their claims, and it is the responsibility of the courts to say so.  Green’s handling of Robbins’ errant views is actually a wonderful demonstration of gatekeeping in action.  What is lovely about it is that the claims and their rebuttal were documented and reported, rather than being swept away in the fog of a jury verdict.

Professor Green’s account of Robbins’ foolery should be troubling because, despite Robbins’ manifest errors, and his more covert biases, we learn that Robbins’ remarks had “a profound impact” on the ALI’s deliberations. Courts that are tempted by the facile answers of comment c should find this impact profoundly disturbing.

Alan Done’s Weight of the Evidence (WOE) or Mosaic Methodology

Professor Green relays an anecdote that bears repeating, many times.  In the Bendectin litigation, plaintiffs’ expert witness, Alan Done testified that Bendectin caused birth defects in children of mothers who ingested the anti-nausea medication during pregnancy.  Done had a relatively easy time spinning his speculative web in the first Bendectin trial because there was only one epidemiologic study, which qualitatively was not very good.  In his second outing, Done was confronted by the defense with an emerging body of exonerative epidemiologic research. In response, he deployed his “mosaic theory” of evidence, of different pieces or lines of evidence that singularly do not show much, but together paint a conclusive picture of the causal pattern. Id. at 61 (describing Done’s use of structure-activity, in vitro animal studies, in vivo animal studies, and his own [idiosyncratic] interpretation of the epidemiologic studies).  Done called his pattern a “mosaic,” which Green correctly sees is none other than “weight of the evidence.”  Id. at 62.

After this second trial was won with the jury, but lost on post-trial motions, plaintiffs’ counsel, Barry Nace, pressed the mosaic theory as a legitimate scientific strategy to demonstrate causation, and the appellate court accepted the stratagem:

“Like the pieces of a mosaic, the individual studies showed little or nothing when viewed separately from one another, but they combined to produce a whole that was greater than the sum of its parts: a foundation for Dr. Done’s opinion that Bendectin caused appellant’s birth defects. The evidence also established that Dr. Done’s methodology was generally accepted in the field of teratology, and his qualifications as an expert have not been challenged.”

Id. at 61 (citing Oxendine v. Merrell Dow Pharm., Inc., 506 A.2d 1100, 1110 (D.C. 1986)).  Green then drops his bombshell:  the philosopher of science who developed the “mosaic theory” (WOE) was the plaintiffs’ lawyer, Barry Nace.  According to Green, Nace declared the mosaic idea “Damn brilliant, and I was the one who thought of it and fed it to Alan [Done].”  Id. at 63.

Green attempts to reassure himself that Milward does not mean that Done could use his WOE approach to testify today that Bendectin causes human birth defects.  Id. at 63.  Alas, he provides no meaningful solution to protect against future bogus cases.  Green fails to come to grips with the obvious truth that Done was wrong ab initio.  He was wrong before he was exposed for his perjurious testimony, see id. at 62 n.107, and he was wrong before there was a “solid body” of exonerative epidemiology.  His method never had the epistemic warrant he claimed for it, and the only things that changed over time were a greater recognition of his character for veracity, and the emergence of evidence that collectively supported the null hypothesis of no association.  The defense, however, never had the burden to show that Done’s methodology was unreliable or invalid, and we should look to the more discerning scientists who saw through the smokescreen from the beginning.

Wake Forest Publishes the Litigation Industry’s Views on Milward

April 20th, 2013

This week, The Wake Forest Journal of Law & Policy published six articles from its 2012 Spring Symposium, on “Toxic Tort Litigation After Milward v. Acuity Products.”  The Symposium was a joint production of The Center for Progressive Reform and the Wake Forest University School of Law.  The articles are now available online:

Steve C. Gold, “A Fitting Vision of Science for the Courtroom” PDF

Michael D. Green, “Pessimism About Milward” PDF

Thomas O. McGarity & Sidney A. Shapiro, “Regulatory Science in Rulemaking and Tort: Unifying the Weight of the Evidence Approach,” PDF

Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” PDF

Joseph Sanders, “Milward v. Acuity Specialty Products Group: Constructing and Deconstructing Sciences and Law in Judicial Opinion,” PDF

Steve Baughman Jensen, “Sometimes Doubt Doesn’t Sell: A Plaintiffs’ Lawyer’s Perspective on Milward v. Acuity Products” PDF

As I noted previously, this symposium was a decidedly lopsided affair, as one might expect from its sponsorship by The Center for Progressive Reform (CPR), which speaks for the litigation industry in the United States. SeeMilward Symposium Organized By Plaintiffs’ Counsel and Witnesses” (Feb. 16th, 2013).

Consistent with its sponsorship, the articles are largely cheerleading for the Milward decision.  The Milward plaintiffs’ partisan expert, Carl Cranor, has a paper here, as does a plaintiffs’ lawyer prominent in ATLA/AAJ.  The defense expert witnesses from Milward were not represented in the symposium proceedings; nor were any papers from defense counsel presented or published.  Of the six published papers, only Professor Sanders adopts a somewhat neutral stance towards Milward and the First Circuit’s embrace of weight of the evidence in analyzing Rule 702 issues.  Cf. Elizabeth Laposata, Richard Barnes & Stanton Glantz, “Tobacco Industry Influence on the American Law Institute’s Restatements of Torts and Implications for Its Conflict of Interest Policies,” 98 Iowa L. Rev. 1 (2012) (arguing that tobacco lawyers influenced the American Law Institute’s Restatement process).

David Bernstein on the Daubert Counterrevolution

April 19th, 2013

David Bernstein has posted a draft of an important new article on the law of expert witnesses, in which he documents the widespread resistance to judicial gatekeeping of expert witness opinion testimony among federal judges.  Bernstein, “The Daubert Counterrevolution” (Mar. 11, 2013).  Professor Bernstein has posted his draft article, set to be published in the Notre Dame Law Review, both on the Social Science Research Network, and on his law school’s website.

Professor Bernstein correctly notes that the Daubert case, and the subsequent revision to Federal Rule of Evidence 702, marked important changes in the law of expert witnesses.  These changes constituted important reforms, which in my view were as much evolutionary as revolutionary.  Even before the Daubert case, the law was working to find ways to improve expert witness testimony, and to downplay “authoritative” opining in favor of well-documented ways of knowing.  After all, Rule 702, with its emphasis on “knowledge,” was part of the Federal Rules of Evidence, as adopted in 1975.  Pub. L. 93–595, § 1, Jan. 2, 1975, 88 Stat. 1926 (effective July 1, 1975).  Since its first adoption, Rule 702 has required that expert witnesses’ knowledge be helpful to the trier of fact.  By implication, the rule has always suggested that hunches, speculation, and flights of fancy do not belong in the courtroom.

Professor Bernstein certainly acknowledges that Daubert did not spring out of a vacuum.  Critics of judicial decisions on expert witnesses had agitated for decades to limit expert witness conduct by standards and guidances that operate in the scientific community itself.  The Supreme Court’s serial opinions on Rule 702 (Daubert, Joiner, Kumho Tire, and Weisgram) reflect the need for top-down enforcement of a rule, on the books since 1975, while many lower courts were allowing “anything goes.”

What is perhaps surprising, but well documented by Professor Bernstein, is that after four opinions from the Supreme Court, and a revision in the operative statute itself (Rule 702), some lower federal courts have engaged in a rearguard action against expert witness gatekeeping.  Professor Bernstein rightfully settles on the First Circuit’s decision in Milward as exemplifying a trend to disregard the statutory language and mandate for gatekeeping.  For Bernstein, Milward represents the most recent high-water mark of counterrevolution, with its embrace of errors and fallacies in the name of liberal, if not libertine, admissibility.

I suppose that I would go a step further than Professor Bernstein and label the trend he identifies as “reactionary.”  What is clear is that many courts have been willing to flout the statutory language of Rule 702, in favor of old case law, and evasive reasoning on expert witness admissibility.  Indeed, the Supreme Court itself joined the trend in Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011), when it unanimously affirmed the reversal of a trial court’s Rule 12(b)(6) dismissal of a securities fraud class action.  The corporate defendant objected that the plaintiffs failed to plead statistical significance in alleging causation between Zicam and the loss of the sense of smell.  The Supreme Court, however, made clear that causation was not required to make out a claim of securities fraud.  It was, and would be, sufficient for the company’s product to have raised sufficient regulatory concerns, which in turn would bring regulatory scrutiny and action that would affect the product’s marketability.

Not content to resolve a relatively simple issue of materiality, for which causation and statistical significance were irrelevant, the Supreme Court waxed on, in obiter dicta, about causation and statistical significance, perhaps unwittingly planting seeds for those who would eviscerate Rule 702.  See Matrixx Unloaded (Mar. 29, 2011).  Although the Supreme Court disclaimed any intention to address expert witness admissibility in a case that was solely about the sufficiency of pleading allegations, it cited three cases for the proposition that statistical significance was not necessary for assessing biological causation:

“We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”

Id. at 1319.

Remarkably, two of the three cases were about specific causation, arrived at using so-called “differential etiology,” which presupposes the establishment of general causation.  These cases never involved general causation or statistical reasoning, but rather simply the process of elimination (iterative disjunctive syllogism).  The citation to the third case, Wells, a notorious pre-Daubert, pre-Rule 702-revision case, revealed disappointing scholarship.  Wells involved at least one study that purported to find a statistically significant association.  What was problematic about Wells was its failure to consider the complete evidentiary picture, and to evaluate study validity, bias, and confounding, as well as significance probability.  See “Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 1” (Nov. 12, 2012).

Wells was an important precursor to Daubert in that it brought notoriety and disrepute to how federal courts (and state courts as well) were handling expert witness evidence.  Significantly, Wells was a bench trial, where the trial judge opined that plaintiffs’ expert witnesses seemed more credible based upon atmospherics rather than upon their engagement with the actual evidence.  See Marc S. Klein, “After Daubert:  Going Forward with Lessons from the Past,” 15 Cardozo L. Rev. 2219, 2225-26 (1994) (quoting the trial testimony of Dr. Bruce Buehler: “I am sorry sir, I am not a statistician . . . I don’t understand confidence levels. I never use them. I have to use the author’s conclusions.” Transcript of Jan. 9, 1985, at 358, Wells v. Ortho Pharm. Corp., 615 F. Supp. 262 (N.D. Ga. 1985)).

Given the Supreme Court’s opinion in Matrixx, the reactionary movement among lower courts is unsurprising.  Lower courts have now cited and followed the Matrixx dicta on statistical significance in expert witness gatekeeping, despite the Supreme Court’s clear pronouncement that it did not intend to address Rule 702.  See In re Chantix (Varenicline) Prods. Liab. Litig., 2012 U.S. Dist. LEXIS 130144, at *22 (N.D. Ala. 2012); Cheek v. Wyeth Pharm. Inc., 2012 U.S. Dist. LEXIS 123485 (E.D. Pa. Aug. 30, 2012).

Professor Bernstein’s article goes a long way towards documenting the disregard for law and science in this movement.  The examples of reactionary decisions could easily be multiplied.  Take, for instance, the recent Rule 702 gatekeeping decision in litigation over Celexa and Lexapro, two antidepressant medications.  Judge Rodney W. Sippel denied the defense motions to exclude plaintiffs’ principal expert witness, Dr. David Healy.  In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780 (E.D. Mo. 2013).  In attempting to support its decision, the court announced that:

1. Cherry picking of studies, and data within studies, is acceptable for expert witnessesId. at *5, *7, *8.

2. Outdated law applies, regardless of being superseded by later Supreme Court decisions and the statutory revision of Rule 702.  Id. at *2 (citing a pre-Joiner case: “The exclusion of an expert’s opinion is proper only if it is so fundamentally unsupported that it can offer no assistance to the jury.” Wood v. Minn. Mining & Mfg. Co., 112 F.3d 306, 309 (8th Cir. 1997) (internal quotation marks and citation omitted)).

3.  The Bradford Hill factors can be disregarded.  Id. at *6 (citing In re Neurontin Mktg., Sales Practices, and Prod. Liab. Litig., 612 F. Supp. 2d 116, 133 (D. Mass. 2009) (MDL 1629), and In re Viagra Prods. Liab. Litig., 572 F. Supp. 2d 1071 (D. Minn. 2008)).

These features of the Celexa decision are hardly novel.  As Professor Bernstein shows in his draft article, disregard of Rule 702’s actual language, and of the post-Daubert Supreme Court decisions, is prevalent.  See, e.g., In re Avandia Marketing, Sales Practices & Prod. Liab. Litig., 2011 WL 13576 (E.D. Pa. 2011)(announcing that MDL district judge was bound to apply a “Third Circuit” approach to expert witness gatekeeping, which focused on the challenged expert witnesses’ methodology, not their conclusions, in contravention of Joiner, and of Rule 702 itself).

The Celexa decision pushes the envelope on Bradford Hill.  The two decisions cited downplayed Bradford Hill’s considerations, but did not dismiss them.  In re Neurontin Mktg., Sales Practices, and Prod. Liab. Litig., 612 F. Supp. 2d 116, 133 (D. Mass. 2009) (MDL 1629)(“Although courts have not embraced the Bradford Hill criteria as a litmus test of general causation, both parties repeatedly refer to the criteria, seemingly agreeing that it is a useful launching point and guide. Accordingly, this Court will begin its inquiry by evaluating Plaintiffs’ evidence of an association between Neurontin and suicide-related events, the starting point for an investigation under the criteria.”);  In re Viagra Prods. Liab. Litig., 572 F.Supp.2d at 1081 (“The Court agrees that the Bradford Hill criteria are helpful for determining reliability but rejects Pfizer’s suggestion that any failure to satisfy those criteria provides independent grounds for granting its Daubert Motion.”).

Of course, Sir Austin’s considerations were merely those that he identified in a speech to a medical society.  They were not put forward in a scholarly article; nor are his considerations the last word on the subject.  Austin Bradford Hill, “The Environment and Disease: Association or Causation?” 58 Proc. Royal Soc’y Med. 295 (1965).

Even as a précis, given almost 50 years ago, Hill’s factors warrant some consideration rather than waving them off as not a litmus test (whatever that means), followed by complete disregard for any of the important considerations in evaluating the causality of an association.

There was a brief bright spot in this fairly dim judicial decision.  The district judge refused to exclude Dr. Healy on the ground that his opinion about particular studies differed from the authors’ own interpretations.  In re Celexa & Lexapro Prods. Liab. Litig., ___ F. Supp. 2d ___, 2013 WL 791780, at *5 (E.D. Mo. 2013) (Sippel, J.).

That is the correct approach, even though there is language in Joiner that suggests that the authors’ views are dominant.  See “Follow the Data, Not the Discussion.”  But the refusal to discount Healy’s opinions on this ground was made without any real inquiry into whether Healy had offered a valid, competing interpretation of the data in the published studies.

At the core of the reactionary movement identified by Professor Bernstein is an unwillingness, or an inability, to engage with the scientific evidence that is at issue in various Rule 702 challenges.  Let’s hope that Bernstein’s article induces closer attention to the law and the science in future judicial gatekeeping.