TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

VECTORS, STRESSORS, AND CRANORS

May 10th, 2013

Offense to that man by whom the WOE cometh!

A few weeks ago, the Wake Forest Journal of Law & Policy published six articles from its 2012 Spring Symposium, on “Toxic Tort Litigation After Milward v. Acuity Products.”  Not a single paper is critical of Milward, which is no surprise given that the Symposium was a joint production of The Center for Progressive Reform and the Wake Forest University School of Law.

In previous posts, I addressed concerns about papers from Professors Green and Sanders.  One of the partisan expert witnesses from the Milward case, Carl Cranor, presented at the symposium, and published in the Journal.  See Carl F. Cranor, “Milward v. Acuity Specialty Products: Advances in General Causation Testimony in Toxic Tort Litigation,” 3 Wake Forest J. L. & Policy 105 (2013) [cited herein as Cranor].

The partisan nature of the Wake Forest/CPR symposium is obvious, and perhaps disclosures of conflicts of interest, so real and palpable, are unnecessary.  Cranor acknowledges that he testified for plaintiffs in Milward, but his disclosure does not address how deep his conflict of interest was.  Cranor at 105. In addition to his consulting, report writing, and testifying, Cranor has written briefs for plaintiffs in this and in other litigations.  Unlike the potential for conflict of interest supposedly raised by payments, Cranor’s conflict of interest is actual.  He has been a long-time advocate for radical precautionary principle regulation, legislation, and adjudication.  Cranor’s conflicts are revealed by his writings, his associations, and his activities.

There is nothing wrong with advocacy per se, and Cranor’s ideas, such as they are, deserve to be judged on their merits.  Cranor will no doubt complain that I am addressing one idea at a time, in a corpuscularian fashion, and that his ideas can only be appreciated as a complete gestalt.  If the individual ideas and claims, however, are incorrect, incomplete, inconsistent, and incoherent, we may rightfully reject the entirety of his claims.

Cranor asserts that he is representing how scientists go about their business in reaching judgments of causality.  He has, however, inaccurately described serious attempts to judge causality in order to distort causal assessments into precautionary practice. Cranor presents a reductionist, abridged notion of scientific assessment of causality in order to legitimate the CPR’s radical agenda in both regulation and adjudication.

VAPID AND VACUOUS

Cranor’s principal claim is that “weight of the evidence” (WOE) is a complete, sufficient description of how scientists do, and should, engage in judging causality.  This claim, however, fails because WOE is not a methodology for attributing causation.

In his symposium article, Cranor introduces WOE by telling us that:

“‘Weight of the evidence argument’ is just another name for nondeductive reasoning.”

Cranor at 113 (citing Larry Wright, Practical 46-49 (Fogelin, ed. 1989)).

So WOE is equivalent to induction, abduction, analogy, and every other form and manner of non-deductive reasoning.  Let’s see.  A rat dropped from the top of my building falls to the ground, accelerating at 32 ft/sec/sec.  A mouse, dropped, accelerates at the same rate.  Rats and mice are both murine mammal species.  This could be a great analogy.  Rats fed large quantities of saccharin develop bladder tumors.  So by analogy, mice develop bladder tumors from saccharin.  It is an analogy, but it goes very badly wrong.  Cranor’s explication of WOE, however, cannot tell us why this analogy fails to predict the outcome in mice.  See Kenneth Rothman, Sander Greenland, and Timothy Lash, Modern Epidemiology 30 (3d ed. 2008) (“Whatever insight might be derived from analogy is handicapped by the inventive imagination of scientists who can find analogies everywhere”).

Cranor proceeds to introduce a qualitative criterion, “best support,” in non-deductive settings:

“[N]o one conclusion is ‘guaranteed’ by the premises. Consequently, the evaluative task in assessing such inferences is to judge which conclusion the evidence best supports (or, to put it another way, which explanation best accounts for the evidence in the premises) and how well it does so.”

Cranor at 114-15.  Although the requirement of a superlative qualitative assessment seems promising, as we will see, Cranor ensures that the assessment is empty.  Cranor applauds the First Circuit for adopting his identification of WOE with non-deductive reasoning to the best explanation:

“[Nondeductive reasoning or reasoning] to the best explanation can be thought of as involving six general steps, some of which may be implicit. The scientist must

(1) identify an association between an exposure and a disease,

(2) consider a range of plausible explanations for the association,

(3) rank the rival explanations according to their plausibility,

(4) seek additional evidence to separate the more plausible from the less plausible explanations,

(5) consider all of the relevant available evidence, and

(6) integrate the evidence using professional judgment to come to a conclusion about the best explanation.”

Cranor at 115 (citing Milward v. Acuity Specialty Prods. Group, Inc., 639 F.3d 11, 17−18 (1st Cir. 2011)).  Cranor’s and the Circuit’s embrace of this description is seriously flawed.  They move from WOE to “reasoning to the best explanation,” but they do not provide any guidance on the key elements:

(1) what constitutes an association?

(2) what renders an explanation plausible, and when is “unknown on the current evidence” an appropriate explanation to offer?

(3) what are the criteria for ranking plausible explanations?

(4) what evidence will discriminate between and among rival explanations?

(5) how do we consider all relevant evidence without using qualitative and quantitative weights?  If we use weights, how?

(6) how do we integrate disparate lines of evidence, and which profession will provide the critical assessment of validity for the integration, and the determinant of what is the “best explanation”?

Cranor harrumphs with the First Circuit:

“[n]o serious argument can be made that the weight of the evidence approach is inherently unreliable.”

Cranor at 115 (quoting Milward, at 18−19).  The double negative is revealing, and so is the utter lack of content to the so-called methodology.  Even if WOE were a method, the Circuit’s statement is meaningless, much like saying that no one could say that physics is inherently unreliable.  Such a statement certainly would not help us judge the bona fides of cold fusion advocates.

WOE IN COURT

Perhaps in an attempt to induce judges and lawyers to lower their intellectual guard, Cranor tells us that WOE is nothing other than what happens in jury trials:

“Jurors, or judges conducting bench trials, use such inferences to find the most plausible account of whether a person is guilty of a crime or has committed a tort. To convict a person of a crime, jurors must find that the total body of relevant evidence supports the conclusion of a nondeductive argument beyond a reasonable doubt; to hold a person accountable for a tort, jurors must find that the total body of relevant evidence supports the conclusion of a nondeductive argument by a preponderance of the evidence, a lower standard of proof.”

Cranor at 116.  To be sure, “weight of the evidence” does have a legal usage.  Any superficial appeal of this analogy between scientific assessment of causation and litigation of facts quickly dissipates when we realize that the relevant evidence has been filtered for the jury by recognized rules of evidence.  Evidence deemed too weak, too speculative, or too prejudicial will have been excluded in a jury trial.  In addition, there are social norms and contexts that operate in a jury trial that may be inimical to the truth-finding process.  My favorite Philadelphia trial anecdote from a former Assistant District Attorney is about a jury that convicted a man for murder.  Although there were racial issues that made the case difficult and the outcome uncertain, after the trial, the forewoman explained that reaching a verdict was easy because the defendant’s mother, who was identified as living near the courthouse, never showed up for the trial.  More “scientific” studies document the role of racial, ethnic, and socio-economic prejudice and bias.  Furthermore, in civil and criminal trials, the evidence is generally unweighted except by lawyers’ argument and rhetoric.  A lawyer unhappy with a study’s result may argue that one author had a conflict of interest, even though the study was well designed and conducted, and provided the “weightiest” evidence on the issue to be decided.  Perhaps Cranor advances the trial example because almost “anything goes” in lawyers’ argument and juries’ assessments of scientific issues.

WOE is vacuous as described by Cranor. Statements that all types of relevant research should be considered do not tell us anything.  Stating that “scientific judgment” is necessary says everything, and thus nothing, because it leaves out any description of the methodology to inform and apply the judgment.  Expert witnesses should not be allowed to invoke WOE as a way to avoid methodological scrutiny.

The WOE Cranor would inflict upon the judicial process has been described as a “black box,” which fails to provide any operative method of specifying relevancy or weight for differing kinds of evidence, or method for synthesizing the disparate studies.  We should not be surprised by the lack of endorsement from the scientific community itself for WOE-ful methods.  The phrase is vague and ambiguous; its use, inconsistent.

See, e.g., V. H. Dale, G.R. Biddinger, M.C. Newman, J.T. Oris, G.W. Suter II, T. Thompson, et al., “Enhancing the ecological risk assessment process,” 4 Integrated Envt’l Assess. Management 306 (2008)(“An approach to interpreting lines of evidence and weight of evidence is critically needed for complex assessments, and it would be useful to develop case studies and/or standards of practice for interpreting lines of evidence.”);

Igor Linkov, Drew Loney, Susan M. Cormier, F. Kyle Satterstrom, and Todd Bridges, “Weight-of-evidence evaluation in environmental assessment: review of qualitative and quantitative approaches,” 407 Science of Total Env’t 5199–205 (2009) (reviewing the use of WOE methods and concluding that the approach is not particularly rigorous, and that the approach “does not lend itself to transparency or repeatability except in simple cases”);

Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of “weight of evidence” review);

R.G. Stahl, Jr., “Issues addressed and unaddressed in EPA’s ecological risk guidelines,” 17 Risk Policy Report 35 (1998) (noting that U.S. Environmental Protection Agency’s guidelines for ecological weight-of-evidence approaches to risk assessment fail to provide guidance);

Glenn Suter II & Susan Cormier, “Why and how to combine evidence in environmental assessments:  Weighing evidence and building cases,” 409 Science of the Total Environment 1406, 1406 (2011)(noting arbitrariness and subjectivity of WOE “methodology”);

Charles Menzie, Miranda Hope Henning, Jerome Curac, et al. “A weight-of-evidence approach for evaluating ecological risks; report of the Massachusetts Weight-of-Evidence Work Group,” 2 Human Ecological Risk Assessment 277 (1996) (“although the term ‘weight of evidence’ is used frequently in ecological risk assessment, there is no consensus on its definition or how it should be applied”);

Sheldon Krimsky, “The Weight of Scientific Evidence in Policy and Law,” 95 Supp.(1) Am. J. Pub. Health S129, S131 (2005) (“However, the term [WOE] is applied quite liberally in the regulatory literature, the methodology behind it is rarely explicated.”).

Describing WOE, Krimsky notes that “WOE seems to be coming out of a ‘black box’ of scientific judgment.”  Krimsky at S131.  Revealingly, Krimsky references a report from the Agency for Toxic Substances and Disease Registry (ATSDR) of the Department of Health and Human Services, which describes WOE as an alternative to causal determinations when trying to set policy, when “causality is out of reach.”  Id. (citing ATSDR, “The Assessment Process: An Interactive Learning Program,” available at http://www.atsdr.cdc.gov/training/public-health-assessment-overview/html/module2/sv18.html (last visited May 8, 2013)).

Krimsky thus acknowledges what Cranor tries so hard to obscure:  WOE is a precautionary approach to be applied when the scientific answer is “I don’t know.”

Professor Sanders’ Paean to Milward

May 7th, 2013

Deconstructing the Deconstruction of Deconstruction

Some scholars have suggested that the most searching scrutiny of scientific research takes place in the courtroom.  Barry Nace’s discovery of the “mosaic method” notwithstanding, lawyers rarely contribute new findings, which I suppose supports Professor Sanders’ characterization of the process as “deconstructive.”  The scrutiny of courtroom science is encouraged by the large quantity of poor quality opinions, on issues that must be addressed by lawyers and their clients who wish to prevail.  As philosopher Harry Frankfurt described this situation:

“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.  Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic exceed his knowledge of the facts that are relevant to that topic.”

Harry Frankfurt, On Bullshit 63 (Princeton Univ. 2005).

This unfortunate situation would seem to be especially true for advocacy science that involves scientists who are intent upon influencing public policy questions, regulation, and litigation outcomes.  Some of the most contentious issues, and tendentious studies, take place within the realm of occupational, environmental, and related disciplines. Sadly, many occupational and environmental medical practitioners seem particularly prone to publish in journals with low standards and poor peer review.  Indeed, the scientists and clinicians who work in some areas make up an insular community, in which the members are the peer reviewers and editors of each other’s work.  The net result is that any presumption of reliability for peer-reviewed biomedical research is untenable.

The silicone gel-breast implant litigation provides an interesting case study of the phenomenon.  Contrary to post-hoc glib assessments that there was “no” scientific evidence offered by plaintiffs, the fact is that there was a great deal.  Most of what was offered was published in peer-reviewed journals; some was submitted by scientists who had some credibility and standing within their scientific, academic communities:  Gershwin, Kossovsky, Lappe, Shanklin, Garrido, et al.  Lawyers, armed with subpoenas, interrogatories, and deposition notices, were able to accomplish what peer reviewers could not.  What Professor Sanders and others call “deconstruction” was none other than a scientific demonstration of study invalidity, seriously misleading data collection and analysis, and even fraud.  See Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud).

Some scientific publications are motivated almost exclusively by the goal of influencing regulatory or political action.  Consider the infamous meta-analysis by Nissen and Wolski, of clinical trials and heart attack among patients taking Avandia.  Steven Nissen & Kathy Wolski, “Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes,” 356 New Engl. J. Med. 2457 (2007). The New England Journal of Medicine rushed the meta-analysis into print in order to pressure the FDA to step up its regulation of post-marketing surveillance of licensed medications.  Later, better-conducted meta-analyses showed how fragile Nissen’s findings were.  See, e.g., George A. Diamond, MD, et al., “Uncertain Effects of Rosiglitazone on the Risk for Myocardial Infarction and Cardiovascular Death,” 147 Ann. Intern. Med. 578 (2007); Tian, et al., “Exact and efficient inference procedure for meta-analysis and its application to the analysis of independent 2 × 2 tables with all available data but without artificial continuity correction,” 10 Biostatistics 275 (2008).  Lawyers should not be shy about pointing out political motivations of badly conducted scientific research, regardless of authorship or where published.
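The fragility that Diamond and Tian described can be seen in miniature.  Here is a sketch, with entirely hypothetical trial counts (and a simple inverse-variance pooling, not the method Nissen and Wolski actually used), showing how a pooled odds ratio from sparse trials moves with the arbitrary choice of continuity correction for zero-event arms:

```python
import math

def log_or_with_cc(a, b, c, d, cc=0.5):
    """Log odds ratio and its variance from a 2x2 table
    (a = exposed events, b = exposed non-events, c = unexposed
    events, d = unexposed non-events).  When any cell is zero, a
    continuity correction cc is added to every cell -- the arbitrary
    step criticized in the artificial-continuity-correction literature."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + cc, b + cc, c + cc, d + cc
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def fixed_effect_pool(tables, cc=0.5):
    """Inverse-variance (fixed-effect) pooled odds ratio."""
    num = den = 0.0
    for table in tables:
        lor, var = log_or_with_cc(*table, cc=cc)
        num += lor / var
        den += 1.0 / var
    return math.exp(num / den)

# Entirely hypothetical small trials; two have zero events in one arm.
trials = [(2, 98, 1, 99), (0, 50, 1, 49), (3, 197, 1, 199), (1, 99, 0, 100)]
print(round(fixed_effect_pool(trials, cc=0.5), 2))
print(round(fixed_effect_pool(trials, cc=0.1), 2))
```

With data this sparse, the “answer” shifts with an analytic choice that has nothing to do with the drug under study, which is the nub of the criticism of fragile meta-analyses.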

On the other hand, lawyers on both sides of litigation are prone to attack on personal bias and potential conflicts of interest because these attacks are more easily made, and better understood by judges and jurors.  Perhaps it is these “deconstructions” that Professor Sanders finds overblown, in which case, I would agree.  Precisely because jurors have difficulty distinguishing between allegations of funding bias and validity flaws that render studies nugatory, and because inquiries into validity require more time, care, analysis, attention, and scientific and statistical learning, pretrial gatekeeping of expert witnesses is an essential part of achieving substantial justice in litigation of scientific issues.  This is a message that is obscured by the recent cheerleading for the Milward decision at the litigation industry’s symposium on the case.

Deconstructing Professor Sanders’ Deconstruction of the Deconstruction in Milward

A few comments about Professor Sanders’ handling of the facts of Milward itself.

The case arose from a claim of occupational exposure to benzene and an outcome known as APL (acute promyelocytic leukemia), which makes up about 10% of AML (acute myeloid leukemia).  Sanders argues, without any support, that APL is too rare for epidemiology to be definitive.  Sanders at 164.  Here Sanders asserts what Martyn Smith opined, and ignores the data that contradicted Smith.  At least one of the epidemiologic studies cited by Smith was quite large and was able to discern small statistically significant associations when present.  See, e.g., Nat’l Investigative Group for the Survey of Leukemia & Aplastic Anemia, “Countrywide Analysis of Risk Factors for Leukemia and Aplastic Anemia,” 14 Acta Academiae Medicinae Sinicae (1992).  This study found a crude odds ratio of 1.42 for benzene exposure and APL (M3).  The study had adequate power to detect a statistically significant odds ratio of 1.54 between benzene and M2a.  Of course, even if one study’s “power” were low, there are other, aggregative strategies, such as meta-analysis, available.  This was not a credibility issue for the jury concerning Dr. Smith; Smith’s opinion turned on incorrect and fallacious analyses that did not deserve “jury time.”
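For readers who want to see what a “crude odds ratio” involves, here is a minimal sketch with hypothetical cell counts (chosen only to yield an odds ratio near 1.42; these are not the Chinese study’s actual data), computing the odds ratio and a Woolf-type 95% confidence interval from a 2 × 2 table:

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf-type 95% confidence interval from a
    2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # s.e. of the log odds ratio
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts, for illustration only:
odds_ratio, lower, upper = crude_or_ci(40, 160, 70, 398)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

The confidence interval, not a naked point estimate, is what tells the reader how precisely the study measured the association.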

The problem, according to Sanders, is one of “power.”  In a lengthy footnote, Sanders explains what “power” is, and why he believes it is a problem:

“The problem is one of power. Tests of statistical significance are designed to guard against one type of error, commonly called Type I Error. This error occurs when one declares a causal relationship to exist when in fact there is no relationship, … . A second type of error, commonly called Type II Error, occurs when one declares a causal relationship does not exist when in fact it does. Id. The “power” of a study measures its ability to avoid a Type II Error. Power is a function of a study’s sample size, the size of the effect one wishes to detect, and the significance level used to guard against Type I Error. Because power is a function of, among other things, the significance level used to guard against Type I errors, all things being equal, minimizing the probability of one type of error can be done only by increasing the probability of making the other.  Formulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data.

Because the power of any test is reduced as the incidence of an effect decreases, Type II threats to causal conclusions are particularly relevant with respect to rare events. Plaintiffs make a fair criticism of randomized trials or epidemiological cohort studies when they note that sometimes the studies have insufficient power to detect rare events. In this situation, case-control studies are particularly valuable because of their relatively greater power. In most toxic tort contexts, the defendant would prefer to minimize Type I Error while the plaintiffs would prefer to minimize Type II Error. Ideally, what we would prefer are studies that minimize the probability of both types of errors. Given the importance of power in assessing epidemiological evidence, surprisingly few appellate opinions discuss this issue. But see DeLuca v. Merrell Dow Pharm., Inc., 911 F.2d 941, 948 (3d Cir. 1990), which contains a good discussion of epidemiological evidence. The opinion discusses the two types of error and suggests that courts should be concerned about both. Id. Unfortunately, neither the district court opinion nor the court of appeals opinion in Milward discusses power.”

Sanders at 164 n.115 (internal citations omitted).

Sanders is one of the few law professors who almost manages to describe statistical power correctly.  Calculating and evaluating power requires pre-specification of alpha (our maximum tolerated Type I error), sample size, and an alternative hypothesis that we would want to be able to identify at a statistically significant level.  This much is set out in the footnote quoted above.

Sample size, however, is just one factor in the study’s variance, which is not in turn completely specified by sample size.  More important, Sanders’ invocation of power to evaluate the exonerative quality of a study has been largely rejected in the world of epidemiology.  His note that “[f]ormulae exist to calculate the power of case-control and cohort studies from 2 x 2 contingency table data” is largely irrelevant because power is mostly confined to sample size determinations before a study is conducted.  After the data are collected, studies are evaluated by their point estimates and their corresponding confidence intervals.  See, e.g., Vandenbroucke, et al., “Strengthening the reporting of observational studies in epidemiology (STROBE):  Explanation and elaboration,” 18 Epidemiology 805, 815 (2007) (Section 10, sample size) (“Do not bother readers with post hoc justifications for study size or retrospective power calculations. From the point of view of the reader, confidence intervals indicate the statistical precision that was ultimately obtained.”) (emphasis added).  See also “Power in the Courts — Part Two” (Jan. 21, 2011).

Type II error is important in the evaluation of evidence, but it requires a commitment to a specific alternative hypothesis.  That alternative can always be set closer and closer to the null hypothesis of no association in order to conclude, as some plaintiffs’ counsel would want, that all studies lack power (except of course the ones that turn out to support their claims).  Sanders’ discussion of statistical power ultimately falters because claiming a lack of power without specifying the size of the alternative hypothesis is unprincipled and meaningless.
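The point can be made concrete with a sketch.  Using a standard normal approximation for the Wald test on a case-control log odds ratio (with hypothetical study sizes and exposure prevalence, not Milward’s actual data), the very same study can be made to look “underpowered” simply by moving the chosen alternative odds ratio toward the null:

```python
from math import log, sqrt
from statistics import NormalDist

def wald_power(n_cases, n_controls, p0, alt_or, alpha=0.05):
    """Approximate power of a case-control study to detect the
    alternative odds ratio alt_or at two-sided level alpha, given
    exposure prevalence p0 among controls (normal approximation to
    the Wald test on the log odds ratio)."""
    # exposure prevalence among cases implied by the alternative OR
    p1 = alt_or * p0 / (1 + p0 * (alt_or - 1))
    se = sqrt(1 / (n_cases * p1) + 1 / (n_cases * (1 - p1))
              + 1 / (n_controls * p0) + 1 / (n_controls * (1 - p0)))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(log(alt_or)) / se - z_crit)

# The same hypothetical study (500 cases, 2,000 controls, 10% exposure
# among controls) becomes "underpowered" as the alternative approaches
# the null odds ratio of 1.0:
for alt in (2.0, 1.5, 1.2, 1.05):
    print(f"alternative OR {alt}: power {wald_power(500, 2000, 0.10, alt):.2f}")
```

Nothing about the study changes between the lines of output; only the advocate’s choice of alternative hypothesis does, which is why an unspecified claim of “low power” proves nothing.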

Sanders tells us that cohorts will have less power than case-control studies, but again the devil is in the details.  Case-control studies are of course relatively more efficient in studying rare diseases, but the statistical precision of their odds ratios will be given by the corresponding confidence intervals.

What is missing from Sanders’ scholarship is a simple statement of what the point estimates and their confidence intervals are.  Plaintiffs in Milward argued that epidemiology was well-nigh unable to detect increased risks of APL, but then embraced epidemiology once Smith had manipulated and re-arranged data in published studies.

The Yuck Factor

One of the looming problems in expert witness gatekeeping is judicial discomfort and disability in recounting the parties’ contentions, the studies’ data, and the witnesses’ testimony.  In a red car/blue car case, judges are perfectly comfortable giving detailed narratives of the undisputed facts, and the conditions that give rise to discounting or excluding evidence or testimony.  In science cases, not so much.

Which brings us to the data manipulation conducted by Martyn Smith in the Milward case.  Martyn Smith is not an epidemiologist, and he has little or no experience or expertise in conducting and analyzing epidemiologic studies.  The law of expert witnesses makes challenges to an expert’s qualifications very difficult; generally courts presume that expert witnesses are competent to testify about general scientific and statistical matters.  Often the presumption is incorrect.

In Milward, Smith claimed, on the one hand, that he did not need epidemiology to reach his conclusion, but on the other hand that “suggestive” findings supported his opinion.  On the third hand, he seemed to care enough about the epidemiologic evidence to engage in fairly extensive reanalysis of published studies.  As the district court noted,  Smith made “unduly favorable assumptions in reinterpreting the studies, such as that cases reported as AML could have been cases of APL.”  Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp. 2d 137, 149 (D. Mass. 2009), rev’d, 639 F.3d 11, 19 (1st Cir. 2011), cert. denied sub nom. U.S. Steel Corp. v. Milward, 132 S. Ct. 1002 (2012).  Put less charitably, Smith made up data to suit his hypothesis.

The details of Smith’s manipulations go well beyond cherry picking.  Smith assumed, without evidence, that AML cases were APL cases.  Smith arbitrarily chose and rearranged data to create desirable results.  See Deposition Testimony of Dr. David Garabrant at 22 – 53, in Milward (Feb. 18, 2009).  In some studies, Smith discarded APL cases from the unexposed group, with the consequence of increasing the apparent association; he miscalculated odds ratios; and he presented odds ratios without p-values or confidence intervals.  The district court certainly was entitled to conclude that Smith had sufficiently deviated from scientific standards of care as to make his testimony inadmissible.
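The arithmetic effect of discarding unexposed cases is easy to demonstrate.  With purely hypothetical counts (not the actual data of the studies Smith reworked), halving the unexposed cases doubles the crude odds ratio:

```python
def crude_or(a, b, c, d):
    """Crude odds ratio: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# Entirely hypothetical counts, for illustration only:
print(round(crude_or(9, 991, 10, 1990), 2))  # baseline table
print(round(crude_or(9, 991, 5, 1990), 2))   # after discarding half the unexposed cases
```

Because the odds ratio varies inversely with the unexposed-case cell, shaving cases out of the comparison group mechanically inflates the apparent association, whatever the true relationship may be.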

Regrettably, the district court did not provide many details of Smith’s reanalyses of studies and their data.  The failure to document Smith’s deviations facilitated the Circuit’s easy generalization that the fallacious reasoning and methodology was somehow invented by the district court.

The appellate court gave no deference to the district court’s assessment, and by judicial fiat turned methodological missteps into credibility issues for the jury.  The Circuit declared that the analytical gap was of the district court’s making, which seemed plausible enough if one read only the appellate decision.  If one reads the actual testimony, the Yuck Factor becomes palpable.

WOE Unto Bradford Hill

Professor Sanders accepts the appellate court’s opinion at face value for its suggestion that:

“Dr. Smith’s opinion was based on a ‘weight of the evidence’ methodology in which he followed the guidelines articulated by world-renowned epidemiologist Sir Arthur Bradford Hill in his seminal methodological article on inferences of causality.”

Sanders at 170 n.140 (quoting Milward, 639 F.3d at 17).

Sanders (and the First Circuit) leaves unclear whether WOE consists of following the guidelines articulated by Sir Arthur (perhaps Sir Austin Bradford Hill’s less distinguished brother?), or merely includes the guidelines within a larger process.  Not only was there no Sir Arthur, but Sir Austin’s guidelines are distinctly different from WOE in that they pre-specify the considerations to be applied.  Nowhere does the appellate court give any meaningful consideration to whether there was an exposure-response gradient shown, or whether the epidemiologic studies consistently showed an association between benzene and APL.  Had the Circuit given any consideration to the specifics of the guidelines, it would likely have concluded that the district court had engaged in fairly careful, accurate gatekeeping, well within its discretion.  (If the standard were de novo review rather than “abuse of discretion,” the Circuit would have had to confront the significant analytical gaps and manipulations in Smith’s testimony.)  Furthermore, it is time to acknowledge that Bradford Hill’s “guidelines” are taken from a speech given by Sir Austin almost 50 years ago; they hardly represent a comprehensive, state-of-the-art set of guidelines for causal analysis in epidemiology today.

So there you have it.  WOE means the Bradford Hill guidelines, except that the individual guidelines need not be considered.  And although Bradford Hill’s guidelines were offered to evaluate a body of epidemiologic studies, WOE teaches us that we do not need epidemiologic studies, especially if they do not help to establish a plaintiffs’ claim.  Sanders at 168 & n.133 (citing Milward at 22-24).

What is WOE?

If WOE were not really the Bradford Hill guidelines, then what might it be?  Attempting to draw a working definition of WOE from the Milward appellate decision, Sanders tells us that WOE requires looking at all the relevant evidence.  Sanders at 169.  Not much guidance there.  Elsewhere he tells us that WOE is “reasoning to the best explanation,” without explicating what such reasoning entails.  Sanders at 169 & n.136 (quoting Milward at 23, “The hallmark of the weight of the evidence approach is reasoning to the best explanation.”).  This hardly tells us anything about what method Smith and his colleagues were using.

Sanders then tells us that WOE means the whole “tsumish.” (My word; not his.)  Not only should expert witnesses rely upon all the relevant evidence, but they should eschew an atomistic approach that looks (too hard) at individual studies.  Of course, there may be value in looking at the entire evidentiary display.  Indeed, a holistic view may be needed to show the absence of causation.  In many litigations, plaintiffs’ counsel are adept in filling up the courtroom with “bricks,” which do not fit together to form the wall they claim.  In the silicone gel breast implant litigation, plaintiffs’ counsel were able to pick out factoids from studies to create sufficient confusion and doubt that there might be a causal connection between silicone and autoimmune disease.  A careful, systematic analysis, which looked at the big picture, demonstrated that these contentions were bogus.  Committee on the Safety of Silicone Breast Implants, Institute of Medicine, Safety of Silicone Breast Implants (Wash. D.C. 1999) (reviewing studies, many of which were commissioned by litigation defendants, and which collectively showed lack of association between silicone and autoimmune diseases).  Sometimes, however, taking in the view of the entire evidentiary display may obscure what makes up the display.  A piece by El Anatsui may look like a beautiful tapestry, but a closer look will reveal it is just a bunch of bottle caps wired together.

Contrary to Professor Sanders’ assertions, nothing in the Milward appellate opinion explains why studies should be viewed only as a group, or why this view will necessarily show something greater than the parts. Sanders at 170.  Although Sanders correctly discerns that the Circuit elevated WOE from “perspective” to a methodology, there is precious little content to the methodology, especially if it permits witnesses to engage in all sorts of data shenanigans or arbitrary weighting of evidence.  The quaint notion that there is always a best explanation obscures the reality that in science, and especially in science that is likely to be contested in a courtroom, the best explanation will often be “we don’t know.”

Sanders eventually comes around to admit that WOE is perplexingly vague as to how the weighing should be done.  Id. at 170.  He also admits that the holistic view is not always helpful.  Id. at 170 & n.139 (the sum is greater than its parts, but only when the combination enhances the supportiveness of the parts, and the collective support for the conclusion at issue, etc.).  These concessions should give courts serious pause before they adopt a dissent from a Supreme Court case that has been repeatedly rejected by courts, by commentators, and ultimately by Congress in revising Rule 702.

WOE is Akin to Differential Diagnosis

The Milward opinion seems like a bottomless reserve of misunderstandings.  Professor Sanders barely flinches at the court’s statement that “The use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis.”  Milward at 18.  See Sanders at 171.  Differential “diagnosis” requires a previous demonstration of general causation, and proceeds by iterative disjunctive syllogism.  Sanders, and the First Circuit, somehow missed that this syllogistic reasoning is completely unrelated to the abductive inferences that may play a role in reaching conclusions about general causation.  Sanders revealingly tells us that “[e]xperts using a weight of the evidence methodology should be given the same wide latitude as is given those employing the differential diagnosis method.”  Sanders at 172 & n.147.  This counsel appears to be an invitation to speculate.  If the “wide latitude” to which Sanders refers means the approach of a minority of courts that allow expert witnesses to rule in differentials by speculation, and then to reach a conclusion by failing to rule out idiopathic causes, then Sanders’ approach is advocacy for epistemic nihilism.

The Corpuscular Approach

Professor Sanders seems to endorse the argument of Milward, as well as Justice Stevens’ dissent in Joiner, that scientists do not assess research by looking at the validity (vel non) of individual studies, and therefore courts should not permit this approach.  Sanders at 173 & n.15.  Neither Justice Stevens nor Professor Sanders presents any evidence for the predicate assertion, which a brief tour of IARC’s less political working group reports would show to be incorrect.

The rationale for Sanders’ (and Milward’s) reduction of science to WOE becomes clear when Sanders asserts that “[p]erhaps all or nearly all critiques of an expert employing a weight of the evidence methodology should go to weight, not admissibility.”  Id. at 173 & n.155.  To be fair, Sanders notes that the Milward court carved out a “solid-body” of exonerative epidemiology exception to WOE.  Id. at 173-74.  This exception, however, does nothing other than place a substantial burden upon the opponent of expert witness opinion to show that the opinion is demonstrably incorrect.  The proponent gets a free pass as long as there is no “solid body” of such evidence that shows he is affirmatively wrong.  Discerning readers will observe that this maneuver simply shifts the burden of admissibility to the opponent, and eschews the focus on methodology for a renewed emphasis upon general acceptance of conclusions.  Id.

Sanders also notes that other courts have seen through the emptiness of WOE and rejected its application in specific cases.  Id. at 174 & n.163-64 (citing Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584, 601-02 (D.N.J. 2002), aff’d, 68 F. App’x 356 (3d Cir. 2003), where the trial court rejected Dr. Ozonoff’s attempt to deploy WOE without explaining or justifying the mixing and matching of disparate kinds of studies with disparate results).  Sanders’ analysis of Milward seems, however, designed to skim the surface of the case in an effort to validate the First Circuit’s superficial approach.

 

Reconstructing Professor Sanders on Expert Witness Gatekeeping

May 5th, 2013

Last week, I addressed two papers from a symposium organized by the litigation industry to applaud the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).  Professor Joseph Sanders also contributed to the symposium, in a paper that is a bit more measured, scholarly, and disinterested than the other papers in the group.  Joseph Sanders, “Milward v. Acuity Specialty Products Group: Constructing and Deconstructing Sciences and Law in Judicial Opinion,” 3 Wake Forest J. L. & Policy 141 (2013).  PDF  Still, the industry sponsor, the so-called Center for Progressive Reform, has reasons to be satisfied with the result.

Sanders argues that the Milward opinion is important because it highlights what he characterizes as a “rhetorical conflict that has been ongoing, often below the surface, since the United States Supreme Court’s 1993 opinion in Daubert v. Merrell Dow Pharmaceuticals, Inc.”  Id. at 142.  The argument is overly kind to the judiciary.  There has been not so much a rhetorical conflict as a reactionary revolt against evidence-based decision making in the federal courts.  Milward simply represents the high-water mark of this revolt against law and science.  See, e.g., David Bernstein on the Daubert Counterrevolution (April 19, 2013).

Sanders invokes the ghost of Derrida and his black brush of deconstruction to suggest that the Daubert process is nothing more than the unraveling of the scientific enterprise, with the goal of showing that it is arbitrary and subjective.  Sanders at 143-44.  According to Sanders, radical deconstruction pushes towards a leveling of “distinctions between fact and fiction … more akin to poetry and music than to evidence and argument.”  Id. at 145 (citing Stephan Fuchs & Steven Ward, “What Is Deconstruction and Where and When Does It Take Place? Making Facts in Science, Building Cases in Law,” 59 Am. Soc. Rev. 481, 482-83 (1994)).

Lawyers sometimes realize the cost of radical deconstruction is a nihilism that undermines their own credibility and their ability to claim or defend factual assertions.  Sometimes, of course, lawyers ignore these considerations and talk out of both sides of their mouths. In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2006 WL 4507859, *33 (N.D. Ohio 2006) (“According to plaintiffs, the rate of PD [Parkinson’s disease] mortality is so poor a proxy for measuring the rate of overall PD incidence, that the Coggon study proves nothing. In the next breath, however, plaintiffs set forth an unpublished statistical analysis (by Dr. Wells) of PD mortality data collected by the National Center for Health Statistics, arguing it proves that welders, as a group, suffer earlier onset of PD than the general population.77 Of course, the devil is in the details, discussion of which is beyond the scope of this opinion (and perhaps beyond the scope of understanding of the average juror),78 but this example shows how hard it is to tease out whether the limitations of a given study make it unreliable under Daubert.”).

Sanders gives a nod to Sheila Jasanoff, whom he quotes with apparent approval:

“[t]he adversarial structure of litigation is particularly conducive to the deconstruction of scientific facts, since it provides both the incentive (winning the lawsuit) and the formal means (cross-examination) for challenging the contingencies in their opponents’ scientific arguments.”

Sanders at 147 (quoting Sheila Jasanoff, “What Judges Should Know About the Sociology of Science,” 32 Jurimetrics J. 345, 348 (1992)).

With his acknowledgment that adversarial litigation of scientific issues involves “deconstruction,” or just good, old-fashioned rhetorical excess, Sanders points to the Daubert trilogy as the Supreme Court’s measured response to the problem.

Sanders is, however, not entirely happy about the judiciary’s attempt to rein in the rhetorical excesses of adversarial litigation of scientific issues. Daubert barely scratched the surface of the scientific validity and reliability issues in the Bendectin record, but Sanders asserts that Chief Justice Rehnquist went too far in looking under the hood of the lemon that Joiner’s expert witnesses were trying to sell:

“perhaps Chief Justice Rehnquist erred in the other direction in Joiner when he systematically reviewed the animal studies and four separate epidemiological studies cited by the plaintiff as supporting the position that exposure to PCBs either caused or ‘promoted’66 the plaintiff’s lung cancer.67”

Sanders at 154.  Horrors!  A systematic review!! Perish the thought.

Sanders seems to fault the Chief Justice’s approach of picking off “one-by-one” the studies relied upon by Joiner’s expert witnesses as a deconstructive exercise.  Id. at 154-55.  As Sanders notes, the Court’s opinion delved into the internal and external validity of the four cited epidemiologic studies, recognizing that the:

“studies did not support the plaintiff’s position because the authors of the study said there was no evidence of a relationship, the relationship was not statistically significant, the substance to which the subjects were exposed was not the same as that to which Mr. Joiner was exposed, and the subjects were simultaneously exposed to other carcinogens.70”

Id. at 155.  To be fair, not all of these were dispositive considerations, but they represent a summary of a district court’s extensive consideration of the scientific record.

Channeling Susan Haack, Sanders argues that the one-by-one approach (which Professor Green pejoratively called a “Balkanized” approach, and Sanders calls “atomistic”) ignores that a wall is made up of constituent bricks.  Sanders might have done better to study a more accomplished philosopher-scientist:

“[O]n fait la science avec des faits comme une maison avec des pierres; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.”

Jules Henri Poincaré, La Science et l’Hypothèse (1905) (chapter 9, Les Hypothèses en Physique) (“Science is built up with facts, as a house is with stones.  But a collection of facts is no more a science than a heap of stones is a house.”).  Poincaré’s metaphor is more powerful and descriptive than Sanders’ because it acknowledges that interlocking pieces of evidence may cohere as a building, or they may be no more than a pile of rubble.  Deeper analysis is required.  Poorly constructed walls revert to the pile of bricks from which they came.  Furthermore, the mason must look at the individual bricks to see whether they are cracked, crumbling, or crooked before building a wall that must endure.  We want a wall that will endure at least long enough to stand on, or to put a roof on.  Much more is required than simply invoking “mosaics,” “walls from bricks,” or “crossword puzzles” to transmute a pile of studies into a “warranted” claim to knowledge.  Litigants, whether plaintiff or defendant, should not be allowed to pick out isolated findings from a variety of studies, and throw them together as if that were science.  This is precisely the rhetorical excess against which Rule 702, with its requirement of “knowledge,” should protect judges, juries, and litigants.

Indeed, as Sanders eventually concedes, the Joiner Court noted the appropriateness of considering the four relied-upon epidemiologic studies, individually or collectively:

“We further hold that, because it was within the District Court’s discretion to conclude that the studies upon which the experts relied were not sufficient, whether individually or in combination, to support their conclusions that Joiner’s exposure to PCB’s contributed to his cancer, the District Court did not abuse its discretion in excluding their testimony.71”

General Elec. Co. v. Joiner, 522 U.S. 136, 146-47 (1997).

Professor Sanders might have raised a justiciable argument against the gatekeeping process in Joiner if he had shown, or if he had adverted to other analyses showing, that the four relied-upon studies collectively meshed to overcome each other’s clear inadequacies.  The silence of Sanders, and other critics, on this score is telling.  Was Rabbi Teitelbaum (one of the Joiners’ expert witnesses) simply insightful or prescient in reading the early returns on PCBs and lung cancer?  What has happened subsequently?  Has the IARC embraced PCBs as a known cause of lung cancer?  Have well-conducted studies and meta-analyses vindicated Teitelbaum’s claims, or have they further confirmed that the gatekeeping in Joiner successfully excluded witnesses who were advancing pathologically weak, specious claims, by pushing and squeezing data until they fit a pre-determined causal conclusion?

The complaints about judicial “deconstruction” are unfair and empty without looking at these details.  It behooves evidence scholars who want to write in this area to roll up their sleeves, look at the evidence that was in front of the courts, and learn something about science.  Of course, scientists did not stop exploring the PCB/lung cancer hypothesis after Joiner was decided.  See, e.g., Avima Ruder, Misty Hein, Nancy Nilsen, et al., “Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update,” 114 Envt’l Health Perspect. 18 (2006) (study by the National Institute for Occupational Safety and Health, showing reduced rates of respiratory cancer among PCB-exposed workers, with an age-standardized risk ratio of 0.85, and a 95% confidence interval of 0.6 to 1.1).

Stevens’ partial dissent in Joiner of course invested deeply in the mosaic theory, which we now know was the brainchild of plaintiffs’ counsel, Barry Nace.  Michael D. Green, “Pessimism about Milward,” 3 Wake Forest J. L. & Policy 41, 63 (2013) (reporting that Barry Nace acknowledged having “fed” this rhetorical device to expert witness Alan Done to support arguments for manufacturing certainty in the face of an emerging body of exonerative evidence).  Justice Stevens also cited the EPA as employing “weight of the evidence,” which simply makes the point that WOE is a precautionary approach to scientific evidence, not one for serious causal determinations.  Justice Stevens, and Professor Sanders, might have done better to have looked at what the FDA requires for health claims.  See, e.g., FDA, Guidance for Industry: Evidence-Based Review System for the Scientific Evaluation of Health Claims (2009) (articulating an evidence-based approach).  Justice Stevens’ argument fundamentally misconstrues the scientific enterprise of determining causation of health outcomes by reducing it to a precautionary enterprise of regulating possible harms.  Professor Sanders is unclear whether he is restating Justice Stevens’ view, or his own, when he writes:

“Chief Justice Rehnquist was wrong to show the flaws of individual bricks, because it is the wall as a whole that makes up the plaintiff’s case.74”

Sanders at 157 (citing Stevens’ opinion in Joiner, 522 U.S. 136, 153 n.5 (1997)).  If this is Professor Sanders’ view, it is profoundly wrong.  Looking at the individual bricks is necessary to determine whether they can support the plaintiff’s case.  Of course, to the extent it was Justice Stevens’ view, it was a dissent, not a holding, and it was superseded by statute when Rule 702 was revised in 2000.

 

Milward’s Singular Embrace of Comment C

May 4th, 2013

Professor Michael D. Green is one of the Reporters for the American Law Institute’s Restatement (Third) of Torts: Liability for Physical and Emotional Harm.  Green has been an important interlocutor in the on-going debate and discussion over standards for expert witness opinions.  Although many of his opinions are questionable, his writing is clear, and his positions, transparent.  The seduction of Professor Green and the Wake Forest School of Law by one of the litigation industry’s organizations, the Center for Progressive Reform, is unfortunate, but the resulting symposium gave Professor Green an opportunity to speak and write about the justly controversial comment c.  Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28, cmt. c (2010).

Mock Pessimism Over Milward

Professor Green professes to be pessimistic about the Milward decision, but his only real ground for pessimism is that Milward will not be followed.  Michael D. Green, “Pessimism about Milward,” 3 Wake Forest J. L. & Policy 41 (2013).  Green describes the First Circuit’s decision in Milward as “fresh,” “virtually unique and sophisticated,” and “satisfying.”  Id. at 41, 43, and 50.  Green describes his own reaction to the decision in terms approaching ecstasy:  “delighted,” “favorable,” and “elation.”  Id. at 42, 42, and 43.

Green interprets Milward to embrace four comment c propositions:

  1. “Recognizing that judgment and interpretation are required in assessments of causation.52
  2. Endorsing explicitly and taking seriously weight of the evidence methodology,53 against the great majority of federal courts that had, since Joiner, employed a Balkanized approach to assessing different pieces of evidence bearing on causation.54
  3. Appreciating that because no algorithm exists to constrain the inferential process, scientists may reasonably reach contrary conclusions.55
  4. Not only stating, but taking seriously, the proposition that epidemiology demonstrating the connection between plaintiff’s disease and defendant’s harm is not required for an expert to testify on causation.56 Many courts had stated that idea, but very few had found non-epidemiologic evidence that satisfied them.57

Id. at 50-51.

Green’s points suggest that comment c was designed to reinject a radical subjectivism into scientific judgments allowed to pass for expert witness opinions in American courts.  None of the points is persuasive.  Point (1) is vacuous.  Saying that judgment is necessary does not imply that anything goes or that we will permit the expert witness to be the judge of whether his opinion rises to the level of scientific knowledge.  The required judgment involves an exacting attention to the role of random error, bias, or confounding in producing an apparent association, as well as to the validity of the data, methods, and analyses used to interpret observational or experimental studies.  The required judgment involves an appreciation that not all studies are equally weighty, or equally worthy of consideration for use in reaching causal knowledge.  Some inferences are fatally weak or wrong; some analyses or re-analyses of data are incorrect.  Not all judgments can be blessed by anointing some of them “subjective.”

Point (2) illustrates how far the Restatement process has wandered into the radical terrain of abandoning gatekeeping altogether.  The approach that Green pejoratively calls “Balkanized” is a careful look at what expert witnesses have relied upon to assess whether their conclusions or claims follow from their relied-upon sources.  This is the approach used by International Agency for Research on Cancer (IARC) working groups, whose method Green seems to endorse.  Id. at 59.  IARC working groups discuss and debate their inclusionary and exclusionary criteria for studies to be considered, and the validity of each study and its analyses, before they get to an assessment of the entire evidentiary display.  (And several of the IARC working groups have been by no means free of the conscious bias and advocacy that Green sees in party-selected expert witnesses.)  Elsewhere, Green refers to the approach of most federal courts as “corpuscular.”  Id. at 51.  Clearly, expert witnesses want to say things in court that do not, so to speak, add up, but Green appears to want to give them all a pass.

Point (3) is, at best, a half truth.  Is Green claiming that reasonable scientists always disagree?  His statement of the point suggests epistemic nihilism.  Although there are no clear algorithms, the field of science is littered with abandoned and unsuccessful theories from which we can learn when to be skeptical or dismissive of claims and conclusions.  Certainly there are times when reasonable experts will disagree, but there are also times when experts on one side or the other, or both, are overinterpreting or misinterpreting the available evidence.  The judicial system has the option and the obligation to withhold judgment when faced with sparse or inconsistent data.  In many instances, litigation arises because the scientific issues are controversial and unsettled, and the only reasonable position is to look for more evidence, or to look more carefully at the extant evidence.

Point (4) is similarly overblown and misguided.  Green states his point as though epidemiology will never be required.  Here Green’s sympathies betray any sense of fidelity to law or science.  Of course, there may be instances in which epidemiologic evidence will not be necessary, but it is also clear that sometimes only epidemiologic methods can establish the causal claim with any meaningful degree of epistemic warrant.

ANECDOTES TO LIVE BY

Anthony Robbins’ Howler

Professor Green delightfully shares two important anecdotes.  Both are revealing of the process that led up to comment c, and to Milward.

The first anecdote involves the 2002 meeting of the American Law Institute.  Apparently someone thought to invite Dr. Anthony Robbins as a guest. (Green does not tell us who performed this subversive act.)  Robbins is a member of SKAPP, the organization started with plaintiffs’ counsel’s slush fund money diverted from MDL 926, the silicone-gel breast implant litigation.

Robbins rose at the meeting to chastise the ALI for not knowing what it was talking about:

“clear, in my opinion, misstatements of . . . science” or reflected a misunderstanding of scientific principles that “leaves everyone in doubt as to whether you know what you are talking about . . . .”

Id. at 44 (quoting from 79th Annual Meeting, 2002 A.L.I. PROC. at 294).  Pretty harsh, except that Professor Green proceeds to show that it was Robbins who had no idea of what he was talking about.

Robbins asserted that the requirement of a relative risk of greater than two was scientifically incorrect. From Green’s telling of the story, it is difficult to understand whether Robbins was complaining about the use of relative risks (greater than two) for inferring general or specific causation.  If the former, there is some truth to his point, but Robbins would be wrong as to the latter.  Many scientists have opined that relative risks provide information about attributable fractions, which in turn permit inferences about individual cases.  See, e.g., Troyen A. Brennan, “Can Epidemiologists Give Us Some Specific Advice?” 1 Courts, Health Science & the Law 397, 398 (1991) (“This indeterminancy complicates any case in which epidemiological evidence forms the basis for causation, especially when attributable fractions are lower than 50%.  In such cases, it is more probable than not that the individual has her illness as a result of unknown causes, rather than as a result of exposure to hazardous substance.”).  Others have criticized the inference, but usually on the basis that the inference requires that the risk be stochastically distributed in the population under consideration, and we often do not know whether this assumption is true.  Of course, the alternative is that we must stand mute in the face of even very large relative risks and established general causation.  See, e.g., McTear v. Imperial Tobacco Ltd., [2005] CSOH 69, at ¶ 6.180 (Nimmo Smith, L.J.) (“epidemiological evidence cannot be used to make statements about individual causation… . Epidemiology cannot provide information on the likelihood that an exposure produced an individual’s condition.  The population attributable risk is a measure for populations only and does not imply a likelihood of disease occurrence within an individual, contingent upon that individual’s exposure.”).
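The arithmetic behind the relative-risk-greater-than-two inference for specific causation can be made explicit.  On the stochastic-distribution assumption discussed above, the attributable fraction among the exposed is (RR − 1)/RR, which exceeds 50% exactly when the relative risk exceeds 2.  A minimal sketch (the function name is my own):

```python
def attributable_fraction(relative_risk: float) -> float:
    """Attributable fraction among the exposed: (RR - 1) / RR.

    Assumes the excess risk is distributed stochastically across the
    exposed population; returns 0.0 for RR <= 1 (no excess risk)."""
    if relative_risk <= 1.0:
        return 0.0
    return (relative_risk - 1.0) / relative_risk

# RR = 2 sits exactly at the "more probable than not" threshold:
# attributable_fraction(2.0) == 0.5
```

On this assumption, a claimant facing an RR of 3 can say there are two chances in three that exposure caused his disease, while an RR of 1.5 yields an attributable fraction of only one-third, which is why courts drawing this inference demand a relative risk greater than two.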

Robbins’ second point was truly a howler, one that suggests his animus against gatekeeping may grow out of a concern that he would never pass a basic test of statistical competency.  According to Green, Robbins claimed that “increasing the number of subjects in an epidemiology study can identify small effects with ‘an almost indisputable causal role’.”  Id. at 45 (quoting Robbins).  Ironically, lawyer and law professor Green was left to take Robbins to school, to educate him on the differences among sampling error, bias, and confounding.  Green does not get the story completely right, because he draws an artificial line between observational epidemiology and experimental clinical trials, and incorrectly implies that bias and confounding are problems only in observational studies.  Id. at 45 n.24.  Although randomization is undertaken in clinical trials to control for bias and confounding, it is not true that this strategy always works, or always works completely.  Still, here we have a lawyer delivering the comeuppance to the scolding scientist.  Sometimes scientists really have no good basis to support their claims, and it is the responsibility of the courts to say so.  Green’s handling of Robbins’ errant views is actually a wonderful demonstration of gatekeeping in action.  What is lovely about it is that the claims and their rebuttal were documented and reported, rather than being swept away in the fog of a jury verdict.
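The distinction between sampling error on the one hand, and bias and confounding on the other, can be illustrated with a toy simulation (entirely my own construction, not anything in Green’s paper or Robbins’ remarks): enlarging a study shrinks random error, but a crude risk ratio from a confounded cohort converges toward the wrong number, not toward the truth.

```python
import random

def crude_risk_ratio(n: int, seed: int = 1) -> float:
    """Simulate a cohort in which a confounder (think: smoking) drives both
    exposure and disease, while the exposure itself has no effect at all
    (true RR = 1.0).  The crude, unadjusted risk ratio is biased upward."""
    rng = random.Random(seed)
    cases = {True: 0, False: 0}   # disease counts, by exposure status
    totals = {True: 0, False: 0}  # cohort sizes, by exposure status
    for _ in range(n):
        confounder = rng.random() < 0.5                        # e.g., a smoker
        exposed = rng.random() < (0.8 if confounder else 0.2)  # linked to confounder
        diseased = rng.random() < (0.30 if confounder else 0.05)  # driven only by confounder
        totals[exposed] += 1
        cases[exposed] += diseased
    return (cases[True] / totals[True]) / (cases[False] / totals[False])

# As n grows, the crude RR settles near 2.5 -- nowhere near the true value
# of 1.0.  More subjects sharpen the estimate of the wrong quantity.
```

The defect is not cured by adding subjects; rerunning with a larger n only tightens the estimate around the same wrong number.  Randomization in a clinical trial breaks the link between confounder and exposure, which is Green’s point, subject to the caveat in the text that randomization does not always work completely.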

Professor Green’s account of Robbins’ foolery should be troubling because, despite Robbins’ manifest errors, and his more covert biases, we learn that Robbins’ remarks had “a profound impact” on the ALI’s deliberations. Courts that are tempted by the facile answers of comment c should find this impact profoundly disturbing.

Alan Done’s Weight of the Evidence (WOE) or Mosaic Methodology

Professor Green relays an anecdote that bears repeating, many times.  In the Bendectin litigation, plaintiffs’ expert witness, Alan Done testified that Bendectin caused birth defects in children of mothers who ingested the anti-nausea medication during pregnancy.  Done had a relatively easy time spinning his speculative web in the first Bendectin trial because there was only one epidemiologic study, which qualitatively was not very good.  In his second outing, Done was confronted by the defense with an emerging body of exonerative epidemiologic research. In response, he deployed his “mosaic theory” of evidence, of different pieces or lines of evidence that singularly do not show much, but together paint a conclusive picture of the causal pattern. Id. at 61 (describing Done’s use of structure-activity, in vitro animal studies, in vivo animal studies, and his own [idiosyncratic] interpretation of the epidemiologic studies).  Done called his pattern a “mosaic,” which Green correctly sees is none other than “weight of the evidence.”  Id. at 62.

After this second trial was won with the jury, but lost on post-trial motions, plaintiffs’ counsel, Barry Nace, pressed the mosaic theory as a legitimate scientific strategy to demonstrate causation, and the appellate court accepted the stratagem:

“Like the pieces of a mosaic, the individual studies showed little or nothing when viewed separately from one another, but they combined to produce a whole that was greater than the sum of its parts: a foundation for Dr. Done’s opinion that Bendectin caused appellant’s birth defects. The evidence also established that Dr. Done’s methodology was generally accepted in the field of teratology, and his qualifications as an expert have not been challenged.103”

Id. at 61 (citing Oxendine v. Merrell Dow Pharm., Inc., 506 A.2d 1100, 1110 (D.C. 1986)).  Green then drops his bombshell:  the philosopher of science who developed the “mosaic theory” (WOE) was the plaintiffs’ lawyer, Barry Nace.  According to Green, Nace declared the mosaic idea “Damn brilliant, and I was the one who thought of it and fed it to Alan [Done].”  Id. at 63.

Green attempts to reassure himself that Milward does not mean that Done could use his WOE approach to testify today that Bendectin causes human birth defects.  Id. at 63.  Alas, he provides no meaningful solution to protect against future bogus cases.  Green fails to come to grips with the obvious truth that Done was wrong ab initio.  He was wrong before he was exposed for his perjurious testimony, see id. at 62 n.107, and he was wrong before there was a “solid body” of exonerative epidemiology.  His method never had the epistemic warrant he claimed for it, and the only things that changed over time were a greater recognition of his character for veracity and the emergence of evidence that collectively supported the null hypothesis of no association.  The defense, however, never had the burden to show that Done’s methodology was unreliable or invalid, and we should look to the more discerning scientists who saw through the smokescreen from the beginning.

I Don’t See Any Method At All

May 2nd, 2013

Kurtz: Did they say why, Willard, why they want to terminate my command?
Willard: I was sent on a classified mission, sir.
Kurtz: It’s no longer classified, is it? Did they tell you?
Willard: They told me that you had gone totally insane, and that your methods were unsound.
Kurtz: Are my methods unsound?
Willard: I don’t see any method at all, sir.
Kurtz: I expected someone like you. What did you expect? Are you an assassin?
Willard: I’m a soldier.
Kurtz: You’re neither. You’re an errand boy, sent by grocery clerks, to collect a bill.

* * * * * * * * * * * * * * * *

The Royal Society, the National Academies of Science, the Nobel Laureates have nothing on the organized plaintiffs’ bar.  Consider the genius and the accomplishments of these men and women.  They have discovered and built a perpetual motion machine — the asbestos litigation.  They have learned how to violate the law of non-contradiction with impunity (e.g., industry is evil, and (litigation) industry is good).  In the realm of the sciences, especially as applied in the courtroom, they have demonstrated the falsity of one of its core beliefs: ex nihilo nihil fit.  We have a lot to learn from the plaintiffs’ bar.

WOE to Corporate America

Steve Baughman Jensen is a plaintiffs’ lawyer, and he justifiably gloats over his success as lead counsel in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).  In a recent article for the litigation industry’s scholarly journal, Jensen touts Milward as Ariadne’s thread, which will lead plaintiffs out of the mazes and traps set for them by the benighted law of expert witnesses.  Steve Baughman Jensen, “Reframing the Daubert Issue in Toxic Tort Cases,” 49 Trial 46 (Feb. 2013).  Jensen alleged that his client worked with solvents that contained varying amounts of benzene, which caused him to develop Acute Promyelocytic Leukemia (APL), a subtype of Acute Myeloid Leukemia (AML).  The district court excluded plaintiffs’ expert witnesses’ causation opinions; the First Circuit reversed.  Jensen crows about his feat.

Weight of the Evidence (WOE) — Let Them Drink Ripple

Jensen, with help from philosopher of popular science Carl Cranor and toxicologist Martyn Smith, persuaded the appellate court that a “weight of the evidence” (WOE) analysis necessarily involves scientific judgment, Milward, 639 F.3d at 18, and that this “use of judgment in the weight of the evidence methodology is similar to that in differential diagnosis, which we have repeatedly found to be a reliable method of medical diagnosis.”  Id. (internal citations omitted).

Is this what judicial gatekeeping of scientific expert opinion has come to?  Phrenology, homeopathy, aroma therapy, and reflexology involve medical judgment, of sorts, and so they too are now reliable methodologies.  Ripple makes red wine, and so does Chateau Margaux.  Chateau Margaux is based upon judgment in oenology, and so is Ripple.  That only one of these products will stand the test of time is irrelevant; both are the product of oenological judgment.  It’s all a question of the weight you would assign the differing qualities of Ripple and a premier cru bordeaux.

Jensen never defines WOE; the closest he comes to a description is to tell us that WOE essentially involves a delegation to expert witnesses to validate their own subjective weighing of the evidence.  As in the King of Hearts, Jensen rejoices that the inmates are now running the asylum.

Too Much of Nothing

Jensen complains about a “divide and conquer” strategy, by which defendants take individual studies, one at a time, pronounce them inadequate to support a judgment of causality, and then claim that the aggregate evidence fails to support causality as well.  Surely that approach is sometimes misguided; yet sometimes the evidentiary display collectively represents “too much of nothing.”  In some litigations, there are hundreds of studies, which, despite their numbers, still fail to support causation.  In General Electric Co. v. Joiner, the Supreme Court discerned that the studies relied upon were largely irrelevant or inconclusive, and that, taken alone or together, the cited studies failed to support plaintiffs’ claim of causality.  In the silicone-gel breast implant litigation, the plaintiffs’ steering committee submitted bankers boxes of studies and argument to the court’s appointed expert witnesses, in an attempt to manufacture causation.  The court-appointed panel, however, took its time and saw that the evidence, taken individually or collectively, did not amount to a scientific peppercorn.

Let Ignorance Rule

One of Jensen’s clever attempts to beguile the judiciary involves the transmutation of scientific inference into personal credibility.  “Second-guessing an expert’s application of scientific judgment necessarily requires assessing that expert’s credibility, which is the jury’s role.” Jensen, 49 Trial at 49.  Jensen attempts to reduce the “battle of experts” to a credibility contest, and thus to place it outside the purview of judicial gatekeeping.  His argument conflates credibility with methodology and its application.  Because the expert witness will predictably opine that he applied the methodology faithfully, Jensen asserts that the court is barred from examining the correctness of the expert witness’s self-validation.

But scientific inference is scientific because it does not depend upon the person drawing it.  The inference may be weak, strong, erroneous, valid, or invalid.  How we characterize the inference will turn on the data and their analysis, not on the witness’s say-so.

Jensen cites comment c to Section 28 of the Restatement (Third) of Torts as supporting his reactionary arguments for abandoning judicial gatekeeping of expert witness opinion testimony.  “Juries, not judges, should determine the validity of two competing expert opinions, both of which typically fall within the realm of reasonable science.” Jensen, 49 Trial at 51 (emphasis added).  The law, however, requires trial courts to assess the validity vel non of the opinions proffered by would-be testifying expert witnesses:

“[A] trial judge, acting as ‘gatekeeper’, must ‘ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable’.  This requirement will sometimes ask judges to make subtle and sophisticated determinations about scientific methodology and its relation to the conclusions an expert witness seeks to offer— particularly when a case arises in an area where the science itself is tentative or uncertain, or where testimony about general risk levels in human beings or animals is offered to prove individual causation.”

General Elec. Co. v. Joiner, 522 U.S. 136, 147–49 (1997) (Breyer, J., concurring) (citations omitted).  Not only is Jensen’s argument contrary to the law, but it also rests upon a cynical understanding that juries will usually have little time, experience, or aptitude for assessing validity issues, and that delegating validity issues to juries ensures that the legal system will not be able to root out pathologically weak evidence and inferences.  The resolution of validity issues will be hidden behind the secretive walls of the jury room, rather than conducted in the open sight of reasoned, published opinions, subject to public and scholarly commentary.  See, e.g., In re Welding Fume Prods. Liab. Litig., No. 1:03-CV-17000, MDL 1535, 2006 WL 4507859, at *33 n.78 (N.D. Ohio 2006) (“even the smartest and most attentive juror will be challenged by the parties’ assertions of observation bias, selection bias, information bias, sampling error, confounding, low statistical power, insufficient odds ratio, excessive confidence intervals, miscalculation, design flaws, and other alleged shortcomings of all of the epidemiological studies.”).

Martyn Smith

Jensen extols the achievements of Dr. Martyn Smith, his expert witness whose causation opinion was excluded by the trial court in Milward.  A disinterested reader might mistakenly think that Smith was among the leading benzene researchers in the world, but a little Googling would reveal that Milward was not Smith’s first litigation citation.  Smith has been pulled over for outrunning his expert-witness headlights in several other litigations, including:

  • Jacoby v. Rite Aid, Phila. Cty. Ct. Common Pleas (Order of April 27, 2012; Opinion of April 12, 2012) (excluding Smith as an expert witness on the toxicity of Fixodent)
  • In re Baycol Prods. Litig., 495 F. Supp. 2d 977 (D. Minn. 2007)
  • In re Rezulin Prods. Liab. Litig., MDL 1348, 441 F. Supp. 2d 567 (S.D.N.Y. 2006) (“silent injury”)

None of these other cases involved benzene, but they all involved speculative opinions.

The Milward Symposium

Jensen took another victory lap at the Milward symposium organized by plaintiffs’ counsel and witnesses.  The presentations from this symposium have now appeared in print.  See “Wake Forest Publishes the Litigation Industry’s Views on Milward”; Steve Baughman Jensen, “Sometimes Doubt Doesn’t Sell: A Plaintiffs’ Lawyer’s Perspective on Milward v. Acuity Products,” 3 Wake Forest J. L. & Policy 177 (2013).  Jensen’s contribution was mostly a shrill ad hominem attack on corporations, as well as on their lawyers and scientists, who, he alleges, complicitly support a campaign to manufacture doubt.

Perhaps someday the law journal’s faculty advisors and editors will feel some embarrassment over the lack of balance and scholarship in Jensen’s contribution to the symposium.  Corporations are bad; get it?  They manufacture doubt about the litigation industry’s enterprise.  Pay no attention to massive litigation fraud, such as faux silicosis, faux asbestosis, faux fen-phen heart disease, faux product identification, etc.  See Larry Husten, “79-Year-Old Cardiologist Sentenced To 6 Years In Prison For Fen-Phen Fraud,” Forbes (Mar. 27, 2013).  Forget that ATLA/AAJ is one of the most powerful rent-seeking lobbies in the United States.  Litigants have a constitutional right to extrapolate as they please.  If a substance causes one disease at a very high dose, then it causes every ailment known to mankind at moderate or low doses.  Specific disease entails general disease, etc.  What, you balk?  You must be a doubt monger.

Jensen assures us that many scientists support and agree with Martyn Smith, both in his methodology and in his conclusions.  Jensen’s articles are sketchy on the details, and of course, the devil is in the details.  See Amended Amicus Curiae Brief of the Council for Education and Research on Toxins et al., In Support of Appellants, in Milward.  This Council seems to fly under the internet radar, but I suspect that its membership and that of the Center for Progressive Reform overlap somewhat.

Jensen’s article is just one of several published in the Wake Forest Journal of Law & Policy.  Let’s hope the remaining articles have more substance to them.