TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

A New Day – A New Edition of the Reference Manual of Scientific Evidence

September 28th, 2011

It’s an event that happens about once every ten years – a new edition of the Reference Manual on Scientific Evidence.  This sort of thing gets science-geek-nerd lawyers excited.  Today, the National Academies of Science released the new, third edition of the Manual.  The work has grown from the second edition into a hefty volume, now over 1,000 pages. Shorter than War and Peace, but easier to read than Gravity’s Rainbow.  Paperback volumes are available for $71.96, but a PDF file is available for free.

Unlike the first two editions, which were products of the Federal Judicial Center, the new edition was produced under the supervision of an ad hoc committee – the Committee on the Development of the Third Edition of the Reference Manual on Scientific Evidence – under the auspices of the National Academies’ Committee on Science, Technology and the Law.  See Media Release from the National Academies.

The release of the Third Edition of the Manual was accompanied by a ceremony today at the National Academies’ Keck Center in Washington, DC.   Dr. Cicerone, President of the National Academies, and Judge Barbara Rothstein, director of the Federal Judicial Center, gave inspirational talks on the rightful, triumphant, aspirational role of science in the courtroom, and the Manual’s role of ensuring that science gets its day in the adversarial push and shove of litigation.

Co-chairs Judge Kessler and Dr. Kassirer introduced the other members of the ad hoc committee, and the substantive work that makes up the Third Edition.

Other members of the ad hoc committee were:

  • Hon. Ming Chin, Associate Justice of the Supreme Court of California.
  • Hon. Pauline Newman, Judge, U.S. Court of Appeals for the Federal Circuit.
  • Hon. Kathleen O’Malley, Judge, U.S. Court of Appeals for the Federal Circuit.
  • Hon. Jed S. Rakoff, Judge, U.S. District Court for the Southern District of New York.
  • Channing R. Robertson, Ph.D., School of Engineering, Stanford University.
  • Joseph V. Rodricks, Ph.D., Environ Corp.
  • Allen J. Wilcox, Ph.D., Senior Investigator in the Epidemiology Branch of the NIEHS.
  • Sandy L. Zabell, Professor of Statistics and Mathematics, Northwestern University.

The Third Edition has some notable new chapters on “Forensic Identification Expertise” (Paul Giannelli, Edward Imwinkelried, and Joseph Peterson), on “Neuroscience” (Henry Greely and Anthony Wagner), on “Mental Health Evidence” (Paul Appelbaum), and on “Exposure Science” (Joseph Rodricks).

Other chapters that were present in the Second Edition are substantially revised and updated, including the chapters on “Statistics” (David Kaye and the late David Freedman), on “DNA Identification Evidence” (David Kaye and George Sensabaugh), on “Epidemiology” (Michael Green, Michal Freedman, and Leon Gordis), and on “Engineering” (Channing R. Robertson, John E. Moalli, and David L. Black).

The chapter on “Medical Testimony” (John Wong, Lawrence Gostin, and Oscar Cabrera) is also substantially revised and expanded, with new authors, a welcome change.

A substantial portion of the new edition incorporates chapters from the second edition with little or no change:

Justice Stephen Breyer’s “Introduction,” the late Professor Margaret Berger’s essay on “The Admissibility of Expert Testimony,” David Goodstein’s primer on “How Science Works,” as well as the chapters on “Multiple Regression” (Daniel L. Rubinfeld), on “Survey Research” (Shari Diamond), and on “Toxicology” (Bernard Goldstein and Mary Sue Henifin).

Also new in this edition is the support of private organizations, the Starr Foundation and the Carnegie Corporation of New York.

Judge Kessler explained how the Supreme Court’s decision in Daubert introduced “rigorous standards” for scientific evidence in federal courtrooms.  Expert witnesses must follow the reasoning, principles, and procedures of modern science.  Lovely sentiments, and wonderful if heeded.

The public release ceremony ended with questions from the audience, both live and from the webcast.

To provide some comic relief to the serious encomia to science in the law, Dr. Tee Guidotti rose to give a deadpan imitation of Emily Litella.  Dr. Guidotti started by lecturing the gathering on the limitations of frequentist statistics and the need to accommodate Bayesian statistical analyses in medical testimony.  Dr. Kassirer politely interrupted Dr. Guidotti to point out that Bayesian analyses were covered in some detail in the chapters on statistics and on medical testimony.  Without missing a beat, Dr. Guidotti shifted blame to the index, which he claimed failed to identify these discussions for him.  Judge Kessler delivered the coup de grace by pointing out that the discussions of Bayesian statistics were amply referenced in the index.  Ooops; this was a hot court.  A chorus of sotto voce “never minds” rippled through the audience.

One question not asked was whether there are mandatory minimum sentences for judges who fail to bother to read the relevant sections before opining on science issues.

The Third Edition of the Manual appears to be a substantial work.  Lawyers will ignore this work at their peril.  Trial judges of course have immunity except to the extent appellate judges are paying attention.  Like its predecessors, the new edition is likely to have a few quibbles, quirks, and quarrels lurking in its 1,000-plus pages, and I am sure that the scrutiny of the bar and the academy will find them.  Let’s hope that it takes less than a decade to see them corrected in a Fourth Edition.

Playing Hide the Substantial Factors in Asbestos Litigation

September 27th, 2011

In previous posts, I have noted that Dr. Selikoff, who did so much to shine light on the health hazards of asbestos, did much to keep fiber type differential causation in the dark.  Selikoff was a “crocidolite denier,” who went so far as to deny that American workers had crocidolite exposure at all.  See “Selikoff and the Mystery of the Disappearing Amphiboles.”

Dr. Selikoff’s extreme positions on crocidolite are difficult to explain in terms of the data known to him.  In addition to some of the data already presented, consider the following statistical tables from the 1965 volume of the Annals of the New York Academy of Sciences, edited by Dr. Selikoff:

US Dept. of Commerce statistics on imported amosite and crocidolite

year      amosite      crocidolite
1957      14,197       17,820
1958      16,994       19,690
1959      16,614       18,006
1960      19,581       14,899
1961      15,501       14,978
1962       9,602       20,235

App. 3, Statistical Tables – Asbestos, prepared by T. May, United States Bureau of Mines, in I.J. Selikoff & J. Churg, eds., “Biological Effects of Asbestos,” 132 Ann. N.Y. Acad. Sci. at 753, Table 17 (1965).

Blue wins by about 13,000 short tons, over these six years.  Dr. Selikoff presided over the Academy meeting that gave rise to this publication, and he edited the volume that contained these statistics.  Why did Selikoff deny the obvious?

A fair historical hypothesis, to be investigated, would posit that Dr. Selikoff was well aware of the fiber type differential, but he was also aware that the Canadian mining concerns were poised to play up the difference in mesothelioma potency, both in regulatory and litigation contexts.  We have seen how Dr. Selikoff was in close touch with plaintiffs’ advocates, such as Barry Castleman.  The hypothesis is that people like Barry Castleman and his principals, the plaintiffs’ asbestos bar, encouraged or pressured Dr. Selikoff to promote the notion that all asbestos minerals were equally pathogenic to undermine a substantial factor defense from companies that mined or used chrysotile fiber.

Dr. Selikoff almost certainly was aware that the South African companies were judgment proof in U.S. courtrooms.  South Africa was a renegade nation at the time, increasingly the subject of disinvestment campaigns and economic boycotts.  South Africa would not honor court judgments based upon verdicts in U.S. asbestos personal injury cases, and the intermediaries, distributors of amosite and crocidolite, were little more than shell corporations.

Plaintiffs’ counsel, as far back as the late 1970s, surely anticipated the substantial-factor battles ahead.  They obviously had talked to Dr. Schepers, who told them that in his view, chrysotile was innocuous with respect to mesothelioma causation.  The plaintiffs’ lawyers needed to keep the solvent North American companies in the courtroom.

I do not have a Castleman letter to, or a tape recording of a Ron Motley conversation with, Dr. Selikoff to document my postulated scenario.  It is hard, however, to fathom any good reason as to why Dr. Selikoff was so motivated to be a crocidolite denier, when the evidence on both prevalence of, and health effects from, the use of crocidolite and amosite, was so obvious.

Law school professors are fond of analogizing asbestos mesothelioma cases to the famous “two fires” hypothetical in the law of torts. See, e.g., Anderson v. Minneapolis, St. Paul & Sault Ste. Marie Railway, 146 Minn. 430, 179 N.W. 45 (1920) (abandoning “but for” causation where either of two fires alone would have tortiously burned the house); Restatement (Second) of Torts § 432(2).  The analogy is far removed from the typical mesothelioma case, which involves multiple fiber types, with widely varying levels of exposure.

Rather than 10 defendants, each responsible for 10% of the total risk, real-world court cases illustrate the misuse of joint and several liability, and the abuse of hiding exposures to products of bankrupt and judgment-proof companies.  The following hypothetical is more typical of cases I have litigated:

Plaintiff was a shipyard worker, with 30 years of worksite exposure.  Plaintiff worked with a range of insulation products, some of which had crocidolite or amosite content, but most of which had only chrysotile asbestos in their makeup.  All or almost all of the insulation manufacturers are bankrupt.  The plaintiff claims to have changed his car’s brake linings, and that he was exposed to chrysotile once a year, when he did this car repair.

To put some figures to the hypothetical, suppose a range of “potency factors” for the different fiber types, with the following breakdown of the three major asbestos mineral varieties in the plaintiff’s exposure:

  • 10% crocidolite, with a potency factor of 200x
  • 20% amosite, with a potency factor of 50x
  • 70% chrysotile, with a potency factor of 1x

These potency factors are realistic, although not everyone would agree.  On these facts, the chrysotile exposure, although quantitatively substantial, would have an insubstantial role in producing mesothelioma in such a shipyard worker.  The relative chrysotile contribution would be about 2.28% of the total.  Realistically, all chrysotile products, considered together, would not be a substantial factor in producing a mesothelioma.

Now the brake-lining exposure claimed from changing brakes once a year supposedly involved only chrysotile exposure.  Compared to the occupational exposure in the hulls of ships, this outdoor work rarely took more than a couple of hours.  A conservative estimate would put the chrysotile exposure from brake work somewhere between 0 and 0.01% of all the chrysotile exposure sustained, or somewhere from 0% to about 0.00023% of causation, assuming that chrysotile can even cause mesothelioma (a doubtful assumption).
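
To make the arithmetic behind these figures explicit (a back-of-the-envelope sketch using the hypothetical potency factors and exposure percentages assumed above, not measured values from any actual case):

$$\text{weighted exposure} = (0.10 \times 200) + (0.20 \times 50) + (0.70 \times 1) = 20 + 10 + 0.7 = 30.7$$

$$\text{chrysotile share} = \frac{0.7}{30.7} \approx 0.0228 \approx 2.28\%$$

$$\text{brake-lining share} \leq 0.0001 \times 2.28\% \approx 0.00023\%$$

The brake-lining figure simply takes the assumed upper bound of 0.01% of the total chrysotile exposure and multiplies it by chrysotile’s 2.28% share of the weighted total.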

Dr. Selikoff surely did not envision the gritty details of today’s world of asbestos litigation, in the wake of 90 bankruptcies, with its cynical game of hiding the bankrupt and judgment-proof companies’ shares of liability.  He did, however, likely see that chrysotile mining and manufacturing firms would press the relative innocuousness of chrysotile fiber in causing mesothelioma.  The groundwork for the injustice of the mantra that “each and every exposure” to asbestos is a substantial factor was laid a long time ago.

Expert Evidence Free-for-All in Washington State

September 23rd, 2011

Daubert/Frye issues are fact specific. Meaningful commentary about expert witness decisions requires a close familiarity with the facts and data in the case under scrutiny.  A recent case in point comes from the Washington Supreme Court.  The plaintiff alleged that her child was born with birth defects as a result of her workplace exposure to solvents from mixing paints.  The trial court dismissed the case on summary judgment, after excluding plaintiff’s expert witnesses’ causation opinions. On appeal, the Court, en banc, reversed the summary judgment, and remanded for trial.  Anderson v. Akzo Nobel Coatings Inc., No. 82264-6, Wash. Sup.; 2011 Wash. LEXIS 669 (Sept. 8, 2011).

Anderson worked for Akzo Nobel Coatings, Inc., until the time she was fired, which occurred shortly after she filed a safety complaint.  Her last position was plant environmental coordinator for health and safety. Her job occasionally required her to mix paints.  Akzo’s safety policies required respirator usage when mixing paints, although Anderson claimed that enforcement was lax.  Slip op. at 2.  Anderson gave birth to a son, who was diagnosed with congenital nervous and renal system defects.  Id. at 3.

Anderson apparently had two expert witnesses:  one of her child’s treating physicians and Dr. Khattak, an author of an epidemiologic study on birth defects in women exposed to organic solvents. Sohail Khattak, et al., “Pregnancy Outcome Following Gestational Exposure to Organic Solvents,” 281 J. Am. Med. Ass’n 1106 (1999). See Slip op. at 3.

The conclusions of the published paper were modest, and no claim of causality was made from either the study alone or from the study combined with the prior knowledge in the field.  When the author, Dr. Khattak, donned the mantle of expert witness, intellectual modesty went out the door:  He opined that the association was causal.  The treating physician echoed Dr. Khattak’s causal opinion.

The fact-specific nature of the decision makes it difficult to assess the accuracy or validity of the plaintiff’s expert witnesses’ opinions.  The claimed teratogenicity of paint solvents is an interesting issue, but I confess it is one with which I am not familiar.  Perhaps others will address the claim.  Regardless of whether the claim has scientific merit, the Anderson decision is itself seriously defective.  The Washington Supreme Court’s opinion shows that the Court did little to familiarize itself with the factual issue, and holds that judges need not tax themselves very much to understand the application of scientific principles to the facts and data of their cases.  Indeed, what is disturbing about this decision is that it sets the bar so low for medical causation claims. Although Anderson does not mark a reversion to the old Ferebee standard, which would allow any qualified, willing expert witness to testify to any conclusion, the decision does appear to permit any opinion based upon a generally accepted methodology, without gatekeeping analysis of whether the expert has actually faithfully and appropriately applied the claimed methodology.  The decision eschews the three subparts of Federal Rule of Evidence 702, which require that the proffered opinion:

(1) … is based upon sufficient facts or data,

(2) … is the product of reliable principles and methods, and

(3) …[is the product of the application of] the principles and methods reliably to the facts of the case.

Federal Rule of Evidence 702.

In abrogating standards for expert witness opinion testimony, the Washington Supreme Court manages to commit several important errors about the nature of scientific and medical testimony.  These errors are much more serious than any possible denial of intellectual due process in the Anderson case because they virtually ensure that meaningful gatekeeping will not take place in future Washington state court cases.

I. The Court Confused Significance Probability with Expert Witnesses’ Subjective Assessment of Posterior Probability

The Washington Supreme Court advances two grounds for abrogating gatekeeping in medical causation cases.  First, the Court mistakenly states that the degree of certainty for scientific propositions is greater in the scientific world than it is in a civil proceeding:

“Generally, the degree of certainty required for general acceptance in the scientific community is much higher than the concept of probability used in civil courts.  While the standard of persuasion in criminal cases is “beyond a reasonable doubt,” the standard in most civil cases is a mere “preponderance.”

Id. at 14.  No citation is provided for the proposition that the scientific degree of certainty is “much higher,” other than a misleading reference to a book by Marcia Angell, former editor of the New England Journal of Medicine:

“By contrast, “[f]or a scientific finding to be accepted, it is customary to require a 95 percent probability that it is not due to chance alone.”  Marcia Angell, M.D., Science on Trial: The Clash of Medical Evidence and the Law in the Breast Implant Case 114 (1996).  The difference in degree of confidence to satisfy the Frye “general acceptance” standard and the substantially lower standard of “preponderance” required for admissibility in civil matters has been referred to as “comparing apples to oranges.” Id. To require the exacting level of scientific certainty to support opinions on causation would, in effect, change the standard for opinion testimony in civil cases.”

Id. at 15.  This popular press book hardly supports the Court’s contention. The only charitable interpretation of the 95% probability is that the Court, through Dr. Angell, is taking an acceptable rate of false positive errors to be no more than the customary 5%, and is looking at a confidence interval, with a confidence coefficient of 1 – α, based upon this specified error rate. This error rate, however, is not the probability that the null hypothesis is true.  Had the Court read the very next sentence, after the first sentence it quotes from Dr. Angell, it would have seen:

“(I am here giving a shorthand version of a much more complicated statistical concept.)”

Science on Trial at 114 (1996).  The Court failed to note that Dr. Angell was talking about significance probability, which is used to assess the strength of the evidence in a single study against the null hypothesis of no association.  Dr. Angell was well aware that she was simplifying the meaning of significance probability in order to distinguish it from a totally different concept, the probability of attribution of a specific case to a known cause of the disease.  It is the probability of attribution that has some relevance to the Court’s preponderance standard; and the probability of attribution standard is not different from the civil preponderance standard.
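
For readers keeping score (this is the standard epidemiologic formulation, not language from Dr. Angell’s book or from the Court’s opinion), the probability of attribution is usually expressed in terms of the relative risk:

$$PA = \frac{RR - 1}{RR}, \qquad PA > 50\% \iff RR > 2$$

On this formulation, a relative risk greater than 2 supports an attribution that is “more likely than not,” which maps directly onto the civil preponderance standard, and which has nothing to do with the 95% figure the Court borrowed from significance testing.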

The Court’s citation of Dr. Angell for the proposition that the “degree of confidence” and the “preponderance” standard are like “comparing apples to oranges” is a complete distortion of Dr. Angell’s book.  She is comparing the attributable risk, based upon an effect size (the relative risk), which need be only greater than 50% for specific causation, with a significance probability for the interpretation of the data from a single study, based upon the assumption of the null hypothesis:

“Comparing the size of an effect with the probability that a given finding isn’t due to chance is comparing apples and oranges.”

Id. This statement is a far cry from the Court’s misleading paraphrase, and is no support at all for the Court’s statistical solecism. Implicit in the Court’s error is its commission of the transpositional fallacy; it has confused significance probability (the probability of the evidence given the null hypothesis) with Bayesian posterior probabilities (the probability of the null hypothesis given all the data and evidence in the case).
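
A simple numerical sketch (with assumed, illustrative figures, not data from the Anderson record) shows why the two probabilities cannot be interchanged. Suppose a 5% significance level (α = 0.05), 80% power, and a prior probability of 90% that the null hypothesis of no association is true. Bayes’ theorem then gives the posterior probability of the null hypothesis, given a “statistically significant” result:

$$P(H_0 \mid \text{significant}) = \frac{0.05 \times 0.90}{(0.05 \times 0.90) + (0.80 \times 0.10)} = \frac{0.045}{0.125} = 0.36$$

The significance probability is 5%, but on these assumptions the posterior probability that there is no association is still 36%; reading the one as the other is precisely the transpositional fallacy.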

Having misunderstood significance probability to be at odds with the preponderance standard, the Court notes that the “absence of a statistically significant basis” for an expert witness’s opinion does not implicate Frye or render the expert witness’s opinion inadmissible.  Id. at 16.  In the Anderson case, this musing is pure dictum because Dr. Khattak’s study showed a highly statistically significant difference in the rate of birth defects among women with solvent exposures compared with women without such exposures.

II.  The Court Abandons Evidence or Data as Necessary to Support Judgments of Causality

The Anderson Court did not stop with its misguided distinction between burdens of proof in science and in law.  The Court went on to offer the remarkable suggestion that gatekeeping is unnecessary for medical opinions because they are not, in any event, evidence-based:

“Many expert medical opinions are pure opinions and are based on experience and training rather than scientific data.  We only require that ‘medical expert testimony . . . be based upon ‘a reasonable degree of medical certainty’ or probability.”

Slip op. at 16-17 (internal citations omitted).  There may be some opinions that are experientially based, but the Court did not, and could not, adduce any support for the proposition that judgments of teratogenic causation do not require scientific data.  Troublingly, the Court appears to allow medical expert opinions to be “pure opinions,” unsupported by empirical, scientific data.

Presumably as an example of non-evidence based medical opinions, the Anderson Court offers the example of differential diagnosis:

“Many medical opinions on causation are based upon differential diagnoses. A physician or other qualified expert may base a conclusion about causation through a process of ruling out potential causes with due consideration to temporal factors, such as events and the onset of symptoms.”

Id. at 17. This example, however, does not explain or justify anything the Court claimed.  Differential diagnosis, or more accurately “differential etiology,” is a process of reasoning by iterative disjunctive syllogism to the most likely cause of a particular patient’s disease.  The syllogism assumes that any disjunct – possible cause of this specific case – has previously, independently been shown to be capable of causing the outcome in question.  There is no known methodology by which this syllogism itself can show general causation.
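
Sketched schematically (the lettered causes are placeholders, not facts of any particular case), the syllogism runs:

$$(A \lor B \lor C), \quad \neg B, \quad \neg C, \quad \therefore A$$

The first premise, that the patient’s disease was caused by A, B, or C, must be established by independent evidence of general causation for each candidate; ruling out B and C does nothing to show that A is capable of causing the disease in the first place.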

Not surprisingly, the Court makes no attempt to support its mistaken claim that differential diagnosis permits the assessment of general causation without the necessity of “scientific data.”

The Court’s confusion between significance probability (and the related 95%, or 1 – α, confidence coefficient) and posterior probability based upon all the evidence, as well as its confusion between differential diagnosis and evidence-based assessments of general causation, allowed the Court to take a short way with medical causation evidence.  The denial of scientific due process followed inevitably.

III.  The Court Abandoned All Gatekeeping for Expert Witness Opinion Testimony

The Anderson Court suggested that gatekeeping was required by Washington’s continued adherence to the stringent Frye test, but the Court then created an exception bigger than the rule:

“Once a methodology is accepted in the scientific community, then application of the science to a particular case is a matter of weight and admissibility under ER 702, the Frye test is only implicated where the opinion offered is based upon novel science.  It applies where either the theory and technique or method of arriving at the data relied upon is so novel that it is not generally accepted by the relevant scientific community.  There is nothing novel about the theory that organic solvent exposure may cause brain damage and encephalopathy.  See, e.g., Berry v. CSX Transp., Inc., 709 So. 2d 552, 568 & n.12, 571-72 (Fla. Dist. Ct. App. 1998) (surveying medical literature). Nor does it appear that there is anything novel about the methods of the study about which Dr. Khattak wrote. Khattak, supra, at 1106. Frye does not require that the specific conclusions drawn from the scientific data upon which Dr. Khattak relied be generally accepted in the scientific community.  Frye does not require every deduction drawn from generally accepted theories to be generally accepted.”

Slip op. at 18-19 (internal citations omitted).

By excepting the specific inferences and conclusions from judicial review, the Court has sanctioned any nonsense as long as the expert witness can proclaim that he used the methods of “toxicology,” or of “epidemiology,” or some other generally accepted branch of science.  The Court left no room to challenge whether the claim is correct at any but the most general level.  The studies cited in support of causation may completely lack internal or external validity, but if they are of a class of studies that are “scientific,” and purport to use a method that is generally accepted (e.g., cohort or case-control studies), then the inquiry is over. Indeed, the Court left no room at all for challenges to expert witnesses who give dubious opinions about medical causation.

IV. Fault Issues

Not content to banish science from the judicial assessment of scientific causality judgments, the Anderson Court went further to take away any defense based upon the mother’s fault in engaging in unprotected mixing of paints while pregnant, or the mother’s fault in smoking while pregnant.   Slip op. at 20.  Suing the mother as a tortfeasor may not be an attractive litigation option to the defendant in a case arising out of workplace exposure to an alleged teratogen, but clearly the mother could be at fault with respect to the causation of her child’s harm. She was in charge of environmental health and safety, and she may well have been aware of the hazards of solvent exposures.  In this case, there were grounds to assert the mother’s fault both in failing to comply with workplace safety rules, and in smoking during her pregnancy (assuming that there was evidence, at the same level as paint fumes, for the teratogenicity of smoking).

RULE OF EVIDENCE 703 — Problem Child of Article VII

September 19th, 2011

Except among a few evidence scholars, Federal Rule of Evidence 703 is ignored or misunderstood in practice.  There was a time when virtually every motion to exclude an expert witness’s opinion was framed on Rule 703 concepts, either alone or in conjunction with Rule 702 requirements.  The Supreme Court’s decision in Daubert changed practice by holding that Rule 702 required gatekeeping, and by generally slighting Rule 703.

I.  Reform of the Common Law

Federal Rule of Evidence 703 formally abandoned the common-law requirement that expert witnesses base their opinions upon evidence of record, either personal observations or facts admitted into evidence.  The first sentence of Rule 703, which has remained unchanged since its original adoption, makes clear that an expert witness may rely upon facts or data that are never admitted into evidence.  This sentence details three methods of putting “facts or data” before expert witnesses.  First, expert witnesses may themselves be percipient witnesses to the facts or data upon which they rely.  Second, expert witnesses may learn of facts or data at the trial by observing other witnesses testify or by being asked to assume facts or data for purposes of giving an opinion.  Third, expert witnesses may come to learn of “facts or data” before the hearing.  It is this third method that represents a departure from the common law, and that raises the issue whether the expert witness has relied upon facts or data that are themselves inadmissible.

The rationale for Rule 703 was the recognition that much of the expert witness’s understanding of an area of science, medicine, or technology was governed by training, prior experience, professional collaborations, and extensive reading, all of which represented the basis, often in large part, of the case-specific opinions that are then offered in the courtroom.  These bases are mostly hearsay, and mostly inadmissible if expert witnesses were to try to articulate any particular aspect of their personal learning.  The rationale for Rule 703, however, also included the economy and convenience of presenting expert testimony without the need of formal proof of predicate “facts or data,” at least if those facts or data were of the type reasonably relied upon by experts in the relevant field.  Not surprisingly, advocates responded by using Rule 703 to inject all manner of hearsay into their trials, including opinion testimony from witnesses that would never testify at trial.  Courts and commentators responded with confusion over whether Rule 703 created a new exception to the rule against hearsay.

II.  Conduit for Inadmissible Evidence

Much academic, judicial, and professional criticism of Rule 703, before its amendment in 2000, centered on the mischief created by expert witnesses’ reliance upon inadmissible evidence and the disclosure of this information to the jury.  To be sure, Federal Rule 705 made clear that the expert witness need not disclose any basis; the expert opinion could be elicited as a conclusory opinion, or the expert could disclose some but not all bases.  Parties, however, were often intent on using Rule 703 to present, at least selectively, those relied upon facts and data (and sometimes opinions) that would aid their case, regardless of the admissibility of the disclosed expert witness bases.  If the other side were foolish enough to request a limiting instruction, the proponent would revel in the emphasis that the Court gave to their inadmissible facts and data.[i]

Of course, the presentation of expert opinion without requiring disclosure of bases is hardly calculated to permit jurors or trial judges to assess the validity or correctness of the opinions that they must weigh at trial.  Furthermore, Rule 703 shifted the burden to opposing counsel to elicit bases in order to show flaws or weaknesses in reasoning and inference.  This cross-examination frequently could not take place without eliciting inadmissible evidence.

In 2000, Rule 703 was amended to include its third, last sentence, which creates a presumption against disclosure of inadmissible facts or data to the jury.  The presumption against disclosure may be overcome by a judicial finding that the probative value of the information in helping the jury evaluate the opinion substantially outweighs the prejudice of injecting inadmissible evidence into the trial.  Nothing in the revised rule makes the inadmissible “facts or data” admissible, although at one point the Advisory Committee Note confuses admissibility and disclosure when it writes in terms of relied upon information that is “admissible only for the purpose of assisting the jury in evaluating an expert’s opinion.”  Such evidence is not admissible at all, which is exactly why the presumption is against disclosure and the alternative is disclosure, along with consideration of a limiting instruction.

III.  Expert Witness Opinions – Castles in the Air

Whether underlying facts are disclosed or not, Rule 703, as currently applied in federal courts, raises serious concerns about whether expert witness opinion testimony has a reliable foundation.  The law in most states is that an expert witness’s opinion can rise no higher than the facts upon which the opinion is based.  If the jury does not hear the bases of the opinion, it cannot meaningfully evaluate the opinion.  Furthermore, the jury cannot make sense of an expert witness’s opinion, when it is bound by a limiting instruction, which explains that it may consider the basis in evaluating the expert witness’s opinion, but it may not consider the basis as evidence that has been established in the case. If this basis is not otherwise established in the case, then the jury would be compelled to reject the testimony as unsupported by facts or data in the case.  If the jury must consider the opinion because the expert witness claims to have relied reasonably upon inadmissible “facts or data,” then the expert witness has been given important fact-finding power in the case.

Perhaps Rule 702, with its imposition of gatekeeping responsibilities upon the trial court, is supposed to solve this problem.  Many of the Circuits appear to be moving toward a requirement of pretrial hearings for Rule 702 challenges, at least when requested, and sometimes even when not.  In some instances, the lack of a proper factual predicate, or unreasonableness in reliance upon an inadmissible factual predicate, can be developed in a pretrial hearing that allows the parties to join issue over the reasonableness of reliance and proof of the predicate facts or data.

IV.  Who Decides Reasonable Reliance?

Some of the earlier case law suggested that the expert witness could validate his or her own reliance upon “facts or data,” as “reasonable.”[ii] Judges, like most people, glibly assumed that what people normally or customarily do is reasonable.  Extending this assumption to the law of expert witnesses, courts have equated the reasonable reliance of Rule 703 with what experts customarily do in their field.[iii] Other courts appeared to go further, especially in the context of forensic expert witness opinion, to equate reasonable reliance with what experts do in their courtroom testimony.

The current view, influenced no doubt by the Supreme Court’s holdings in Daubert, Joiner, and Kumho Tire, has settled on requiring the trial court to make an independent assessment, based upon a factual showing, that the “facts or data” in question may be reasonably relied upon by experts in the relevant field.[iv] One of the important implications of this shift is that courts may now accept an expert witness’s testimony about what he normally does, but if opposing counsel challenges the reasonableness of the practice, with affidavits, testimony, learned treatises, and the like, then the court will be required to make a preliminary determination of the reasonableness of the expert’s “normal practice.”  Given that litigation often involves unusual situations outside both the statistical and prescriptive “norms” of ordinary life, the abandonment of extreme deference to expert witnesses as the ultimate arbiters of reasonableness is a significant advance in the evolution of the Federal Rules of Evidence.

V.  Reasonable Reliance and Reliability:  The Intersection Between Rules 702 and 703

Some of the early enthusiasm for Rule 703 as a speed bump for unreliable expert witness testimony came from the explicit use of the concept of “reasonable reliance” in the second sentence of the Rule.  The original Advisory Committee Note encouraged this view by giving an example, without much analysis, of an accident reconstruction expert whose testimony would not be reasonably based upon the statements of bystanders.  Before the advent of Daubert, this example was a tease to lawyers who were looking for some way to limit the flood of unreliable expert witness opinion testimony.  The Advisors, however, did not explain why such reliance would be unreasonable.  We could certainly imagine situations in which bystanders’ statements were essential to recreating an accident.  Furthermore, the statements of bystanders might be admissible under various exceptions to the rule against hearsay, and the Note thus seems to contradict the actual language of the Rule, which limits the reasonableness requirement to reliance upon inadmissible evidence.  In any event, the Advisory Committee’s example of an “accidentologist” seemed to imply a requirement of trustworthiness, which might apply to both admissible and inadmissible “facts or data.”

Perhaps because of the original Advisory Committee Note, litigants, in challenging the reliability of expert witness opinion testimony, frequently invoked both Rules 702 and 703 in support of exclusion.  Indeed, cases that focus on only Rule 703 are relatively uncommon; most cases note that they are addressing motions to bar expert witnesses, made under both rules.  After the Supreme Court’s Quartet on Rule 702 (Daubert, Joiner, Kumho, and Weisgram), the need to frame an exclusionary motion on Rule 703 has been largely dispelled.

One case that gave rise to much of the enthusiasm for Rule 703 as a basis for expert witness preclusion was Judge Weinstein’s decision in In re Agent Orange.[v] Some of the expert witnesses in the Agent Orange litigation relied upon checklists of symptoms prepared by the litigants.  Invoking Rule 703 to support exclusion of the expert witnesses’ opinions, the trial court observed that “no reputable physician relies on hearsay checklists by litigants to reach a conclusion with respect to the cause of their affliction.”[vi]

The lesson of Agent Orange was that Rule 703 could serve as a basis for excluding expert witness testimony.  If the expert witness relied unreasonably upon “facts or data,” then that expert witness’s testimony was fatally flawed under the Rules and had to be excluded.  The Court in Agent Orange avoided the obvious conclusion that an expert witness’s opinion, which was not reasonably based upon “facts or data,” could not be helpful to the trier of fact, and thus the opinion would necessarily offend Rule 702, as well.

Practitioners, faced with dubious expert witness opinion testimony after Agent Orange, increasingly relied upon Rule 703, along with Rules 702 and 403, in stating their challenges to proffered opinions.  Many Courts, in ruling upon these challenges, did not separate out their holdings or reasoning, in applying Rules 702 and 703 to exclude opinions.[vii] Some courts, especially before the Supreme Court’s decision in Daubert, framed reliability challenges almost exclusively in terms of compliance with Rule 703.[viii]

The early enthusiasm for an expansive role for Rule 703 as a tool for broad gatekeeping was problematic from the beginning.  Rule 703 has always required “reasonableness” for an expert witness’s reliance upon inadmissible “facts or data.”  The Rule is, and has always been, silent about reliance upon admissible “facts or data.”  As a result, Rule 703 could never have aspired to the principal role of limiting the flow of unreliable expert testimony.  The Rules of Evidence provide ample bases for expert witnesses to formulate unreliable opinions based solely, and unreasonably, upon admissible “facts or data,” such as inadvertent and false party admissions, self-serving statements made to examining physicians, or vanity press publications elevated to “learned treatise” status.  The resulting opinions have little or no epistemic warrant or claim to reliable methodology, but they may readily pass muster under Rule 703.  Furthermore, even if Rule 703 were applied to eliminate all unreasonable reliance upon “facts or data,” the Rule would not have guarded against unreliability that crept into the opinions as a result of invalid inferences or reasoning from “facts or data,” which themselves were beyond reproach.

The Advisory Committee Note to Rule 702, from 2000, attempts to answer some of the questions about the proper scope of Rule 703:

There has been some confusion over the relationship between Rules 702 and 703. The amendment makes clear that the sufficiency of the basis of an expert’s testimony is to be decided under Rule 702. Rule 702 sets forth the overarching requirement of reliability, and an analysis of the sufficiency of the expert’s basis cannot be divorced from the ultimate reliability of the expert’s opinion. In contrast, the ”reasonable reliance” requirement of Rule 703 is a relatively narrow inquiry. When an expert relies on inadmissible information, Rule 703 requires the trial court to determine whether that information is of a type reasonably relied on by other experts in the field. If so, the expert can rely on the information in reaching an opinion. However, the question whether the expert is relying on a sufficient basis of information – whether admissible information or not – is governed by the requirements of Rule 702.

This Note leaves a large gap in the analysis of expert witness opinion evidence.  The question of the sufficiency of an expert’s bases is understandably different from whether the “facts or data” are themselves reasonably (and thus presumably also reliably) relied upon by experts in the field.  Rule 702 provides guidance about the sufficiency of “facts or data,” as well as the reliable application of reliable principles and methods to the facts of the case.  Rule 702, however, is silent about the reliability of the starting point of scientific or technical knowledge:  the data.  Perhaps the Advisory Committee meant to imply that reliable methodology requires obtaining “facts or data” in a reliable way, but it failed to address the issue in the recent amendments to Rule 702.

There is another problem that amended Rules 702 and 703, along with the Advisory Notes, fail to address.  This problem further illustrates the gap in the coverage of the rules, and perhaps it explains why courts have strained at times to include Rule 703 as part of their analysis of the reliability of expert witness opinion testimony.  Consider what happens when a proffered expert witness’s opinion has already been held to satisfy the relevance and reliability requirements of Rule 702.  The Court has explicitly ruled that the expert’s opinion has a sufficient factual basis, and that the expert has reached the opinion by reliably applying reliable methods to the facts of a case.  After the Court’s Rule 702 ruling, the expert witness amends her report to add reliance upon a new study.  The study is unfinished, and unpublished.  The paper has yet to be peer-reviewed.  Furthermore, the study is written in a foreign language, and the expert has relied upon a translation that appears to have errors, with analyses that are at least partially incoherent or incorrect.  This new study raises no questions about the sufficiency of data, and the expert’s overall opinion, ex hypothesi, satisfies Rule 702.  This new study appears to raise fresh questions under Rule 703, not provided for in the Advisory Committee’s allocation of issues between Rules 702 and 703.[ix] Some courts might think that the addition of another study, even if the study were scientifically questionable, in support of an already 702-sufficient opinion could not be harmful error.  Yet, the additional study would give the jury the sense that the expert witness had a surfeit of support for her opinion.  Furthermore, the additional study would prejudice the adverse party by requiring more cross-examination on details that may test the patience of the factfinder.

VI.  “Facts or Data” versus “Opinions”

Rule 703 describes the condition for permitting expert witnesses to rely upon inadmissible “facts or data.”  The Rule is silent about reliance upon others’ opinions.  Of course, the distinction between facts (or data) and opinions may occasionally be blurred or difficult to discern, but the entirety of Article VII is predicated upon the existence of the distinction.[x] The conspicuous absence of “opinions” from the Rule’s conditional allowance of expert testimony based upon inadmissible “facts or data” would seem to mean that such reliance upon extra-record opinion was not authorized under Rule 703.[xi] Other courts, especially the Third Circuit, have given their blessing to the wholesale backdoor introduction of opinions, and they have not distinguished facts or data from opinions, as the potentially reasonably relied upon inadmissible evidence under Rule 703.[xii]

The Advisory Committee Note to the 2000 amendment to Rule 702 purports to answer the question of the scope of “facts or data” under Rule 703:

The term ”data” is intended to encompass the reliable opinions of other experts. See the original Advisory Committee Note to Rule 703. The language ”facts or data” is broad enough to allow an expert to rely on hypothetical facts that are supported by the evidence.

Id.

The original Advisory Committee Note to Rule 703, however, refers to opinions as within the scope of “facts or data” in just one single passage, and in a relatively narrow context:

Thus a physician in his own practice bases his diagnosis on information from numerous sources and of considerable variety, including statements by patients and relatives, reports and opinions from nurses, technicians and other doctors, hospital records, and X rays.  Most of them are admissible in evidence, but only with the expenditure of substantial time in producing and examining various authenticating witnesses. The physician makes life-and-death decisions in reliance upon them. His validation, expertly performed and subject to cross-examination, ought to suffice for judicial purposes. Rheingold, supra, at 531; McCormick § 15. A similar provision is California Evidence Code § 801(b).[xiii]

The original Note to Rule 703 is highly misleading because opinions that are recorded in medical records would be admissible in any event as business records.  Furthermore, even if physicians must sometimes make life-or-death decisions on the basis of limited, incomplete, undocumented opinions offered by another medical care provider, that in extremis scenario is hardly a propitious basis for opinion testimony at a judicial hearing where the trier of fact is charged with making a deliberate evaluation of the evidence.  Courts and juries are charged with trying to ascertain the truth, and they do not have a warrant to abridge the fact-finding process because a physician, or any other “expert,” at time past was acting under exigent circumstances.

This more recent attempt to endorse Rule 703 as a conduit for other expert “opinions” should fail for several reasons.  First, the entire Article VII concerns itself with opinions and opinion testimony.  To suggest that Rule 703 used “facts or data” to include “opinions” ignores the context of Article VII and the limited exception that Rule 703 was making to common-law procedure.  Second, the original Advisory Committee note spoke only, in one sentence, to opinions of medical-care providers.  These opinions would normally be recorded in the patient’s medical charts and records, and they would be admissible in any event.[xiv] There is nothing in the notes to Rule 703 to support the wholesale inclusion of hearsay opinion testimony.  Third, the expansion of Rule 703 to include opinions should not circumvent the reliability requirements of Rule 702.  Fourth, the rationale of convenience used to support the expansion of the common law through Rule 703 is stood on its head by this expansion to include opinions.  The Rule puts a heavy burden on opposing counsel to ferret out reliance upon opinions of other non-testifying experts, and to take adequate discovery of those persons or organizations. This is a heavy price to pay for the “convenience” of having an opinion introduced without the usual safeguards of critical examination of the qualifications of the expert, or the reliability of his opinion.

Until this extension of Rule 703 is checked, practitioners must inquire of their adversary’s expert witnesses, either in interrogatories or in depositions, whether the witnesses have consulted with and relied upon the writings of, or oral discussions with, any other person regarded by the testifying expert witness as an expert.  If the testifying expert witness has relied upon these non-testifying experts’ statements or opinions, opposing counsel may have to entertain the expensive, inconvenient resort to additional discovery of the out-of-court declarant.

VII.  Fulsome Importation of Untrustworthy Opinions Through Rule 703

One prevalent and problematic practice is for expert witnesses to rely upon a study in order to pass through the study authors’ conclusions. Most published studies have a basic ordered structure (IMRAD):

  • Introduction – identifies the purpose and scientific context of the study;
  • Methods – identifies the materials used, the identification, organization, collection of data and controls;
  • Results – reports the data obtained and any statistical analyses of the data; and
  • Discussion – reports the study authors’ interpretation of the results and how they fit within the larger array of data from other studies.

See Luciana B. Sollaci & Mauricio G. Pereira, “The introduction, methods, results, and discussion (IMRAD) structure: a fifty-year survey,” 92 J. Med. Libr. Ass’n 364 (2004).

What becomes clear is that testifying expert witnesses need to have access to the methods and the results of published (and unpublished) papers in order to formulate and express their own opinions.  The introduction and discussion sections of relied upon papers are the scholarship and opinions of hearsay declarants, who in modern-day publications are often quite untrustworthy.  The first and last sections of most articles would rarely satisfy the procedural requirements of Federal Rule of Civil Procedure 26; nor would they satisfy the evidential reliability requirements of Rule 702. Rule 703’s limitation to “facts or data” should exclude the flood of hearsay opinion from relied upon studies by forcing expert witnesses to rely upon what is really necessary to their opinions.  If the testifying expert witness cannot testify without the scholarship and opinions of the relied upon studies, then he is probably not sufficiently expert to be giving an opinion in court.

There are many clear statements in the medical literature that caution the consumers of medical studies against misleading claims.  Several years ago, the British Medical Journal published a paper by Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Br. Med. J. 1093 (2004).  The authors distill their advice down to six suggestions in a “[g]uide to avoid being misled by biased presentation and interpretation of data,” the first [suggestion] of which is to “Read only the Methods and Results sections; bypass the Discussion section.”  Id. at 1093 (emphasis added).

The federal courts have generally been oblivious to the problem of permitting fulsome presentation of hearsay opinions from the discussion and conclusion sections of articles relied upon by testifying expert witnesses.

The Supreme Court’s decision in Joiner provides a striking example.  The Court correctly assessed that plaintiffs’ expert witnesses in that case were relying upon pathologically deficient and unreliable evidence.  (Some of the expert witnesses in Joiner are known repeated offenders against Rule 702.)  In reaching the right result, and in advancing the jurisprudence of the reliability of expert witness opinion testimony, Joiner, however, stumbled in its analysis of the role of reliance upon published studies.  In his opinion in Joiner, Chief Justice Rehnquist gave considerable weight to the consideration that the plaintiffs’ expert witnesses relied upon studies, the authors of which explicitly refused to interpret as supporting a conclusion of human disease causation.  See General Electric Co. v. Joiner, 522 U.S. 136, 145-46 (1997) (noting that the PCB studies at issue did not support expert witnesses’ conclusion that PCB exposure caused cancer because the study authors, who conducted the research, were not willing to endorse a conclusion of causation).

Although the PCB study authors were well justified in their respective papers in refraining from over-interpreting their data and analyses, this consideration is of doubtful general value in evaluating the reliability of an expert witness’s proposed testimony.  First, as some plaintiffs’ counsel have argued, the testifying expert witness may be relying upon a more extensive and supportive evidentiary display than considered by the study authors.  The study, standing alone, might not support causation, but when considered with other evidence, the study could take on some importance in supporting a causal conclusion.  (This consideration would surely not save the sadly deficient opinions challenged in Joiner.) Second, as I have pointed out above, the Discussion sections of published papers are of little value.  They are almost never comprehensive reviews of the subject matter, and they are often little more than the personal opinions of the study authors.  Peer reviewers may call for some acknowledgments of the weaknesses of the study, but the authors are generally allowed to press their speculations unchecked.

The use of a paper’s Discussion section to measure the reliability of proffered expert testimony runs contrary to how scientists generally read and interpret papers.  Chief Justice Rehnquist’s emphasis upon the study authors’ Discussion of their own studies ignores the first important principle of interpreting medical studies in an evidence-based world view:  In critically reading and evaluating a study, one should ignore anything in the paper other than the Methods and Results sections.

Joiner’s misplaced emphasis upon study authors’ Discussion sections has gained a foothold in the case law interpreting Rule 702.  In Huss v. Gayden, 571 F.3d 442  (5th Cir. 2009), for example, the Court declared:

“It is axiomatic that causation testimony is inadmissible if an expert relies upon studies or publications, the authors of which were themselves unwilling to conclude that causation had been proven.”

Id. (citing Vargas v. Lee, 317 F.3d 498, 501-01 (5th Cir. 2003) (noting that studies that did not themselves embrace causal conclusions undermined the reliability of the plaintiffs’ expert witness’s testimony that trauma caused fibromyalgia), and McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1247-48 (11th Cir. 2005) (expert witnesses’ reliance upon studies that did not reach causal conclusions about ephedrine supported the challenge to the reliability of their proffered opinions)).

The reference to what authors of relied upon papers state, in Joiner, perpetuates an authority-based view of science to the detriment of requiring good and sufficient reasons to support the testifying expert witnesses’ opinions.  The problem with Joiner’s suggestion that expert witness opinion should not be admissible if it disagrees with the study authors’ Discussion section is that sometimes study authors grossly over-interpret their data.  When it comes to scientific studies written by “political scientists” (scientists who see their work as advancing a political cause or agenda), the Discussion section often becomes a fertile source of unreliable, speculative opinions that should not be given credence in Rule 104(a) contexts, and certainly should not be admissible in trials.

Perhaps the Discussion section, in the context of a Rule 104(a) proceeding, has some role in evaluating the challenged expert witness’s opinion, but surely it is a weak factor at best.  And clearly, the disagreement with the study authors’ conclusions or opinions, as reflected by speculative Discussion sections, can cut both ways.  Study authors may downplay their findings, appropriately or inappropriately, but study authors often overplay their findings and distort or misinterpret how their findings fit into the full picture of other studies and other evidence.  The quality of peer-reviewed publications is simply too irregular and unpredictable to make the subjective, evaluative comments in hearsay papers the touchstone for admissibility or inadmissibility.

There have been, and will continue to be, occasions in which published studies contain data, relevant and important to the causation issue, but which studies also contain speculative, personal opinions expressed in the Introduction and Discussion sections.  The parties’ expert witnesses may disagree with those opinions, but such disagreements hardly reflect poorly upon the testifying witnesses.  Neither sides’ expert witnesses should be judged by those out-of-court opinions.  Perhaps the hearsay Discussion section may be considered under Rule 104(a), which suspends the application of the Rules of Evidence, but it should hardly be an important or dispositive factor, other than raising questions for the reviewing court.

Expert witnesses should not be constrained or excluded for relying upon study data, when they disagree with the hearsay authors’ conclusions or discussions.  Given how many journals cater to advocacy scientists, and how variable the quality of peer review is, testifying expert witnesses should be required to have the expertise to interpret the data without substantial reliance upon, or reference to, the interpretative comments in the published literature.

VIII.  The Relationship Between Rules 703 and 705

Rule 705 simply provides:

The expert may testify in terms of opinion or inference and give reasons therefor without first testifying to the underlying facts or data, unless the court requires otherwise. The expert may in any event be required to disclose the underlying facts or data on cross-examination.

Rule 705, despite its brevity and apparent simplicity, encourages radical changes in the presentation of expert witness testimony in the courtroom.  Rule 705 permits expert witnesses to give opinions in the most conclusory terms.  Combined with Rule 703’s removal of admissibility as a requirement for materials “reasonably relied upon” by the expert witness, Rule 705 collapses difficult, technical issues into sound bites for juries and judges who increasingly suffer from an inability to give sustained attention to such matters.  Under the banner of “convenience” and “economy,” these Rules operate to shift the burden to the cross-examiner to elicit the bases of an expert’s opinion, and then to engage the expert witness on the reasonableness of his reliance, his methodology, and his application of method to the facts of the case, admissible or not.

The upshot of these changes is that the direct examination of an expert witness can often be very short, and it can be filled up with details of the expert witness’s qualifications and thinly veiled attempts to accredit the witness, even in advance of any attack on credibility.  The expert can then state his opinion as a conclusion, without any of the “messy” research facts or data, or other details.  The cross-examiner is left to dig through the bases, with judge and jury looking impatiently at the clock.  This imbalance creates practical and equitable hardships in how the Federal Rules allocate responsibility for developing factual bases for expert witness opinion between presenting and opposing counsel.  When courts impose time limits in trials of complex matters, the inequity created by the modern Rule 703 is compounded.[xv] Rule 705 gives trial courts discretion to require disclosure of bases.  In the proper case, counsel must be vigilant in moving to require this disclosure before the expert witness delivers his opinion.

Conclusion

Although Rule 703 successfully addresses some evidentiary problems in presenting expert witness opinion testimony, serious problems remain.  The Rule continues to permit expert witnesses to serve as conduits for inadmissible evidence, including opinion evidence that may escape the gatekeeping of Rule 702.  As legal scholars have pointed out, the Rule raises basic issues of fundamental fairness and constitutionality in both civil and criminal proceedings.[xvi] It is time for the Advisory Committee to go beyond restyling the Rule, and to reconsider its substance.

{This post is a revision of my article in 7 Proof 3 (Spring 2009).   An earlier version of that article was presented as part of an ALI-ABA Course of Study, “Opinion and Expert Testimony in Federal and State Courts,” on February 15, 2008, in San Diego, California.}


[i] Although elsewhere in the Federal Rules, the Advisory Committee disparaged limiting instructions, commentators and some courts engaged in the “judicial deception” of instructing the jury to accept the inadmissible basis as part of the explanation for the expert witness’s opinion, but not to accept or consider the basis for its truth.  See United States v. Grunewald, 233 F.2d 556, 574 (2d Cir. 1956)(“judicial deception”)(Frank, J.); Nash v. United States, 54 F.2d 1006, 1007 (2d Cir. 1932)(“mental gymnastic”)(Hand, J.).  Not only would such limiting instructions aggravate the problem by giving emphasis to the inadmissible evidence, the instructions surely would confuse most reasonable people who are trying to understand whether an expert witness has applied a reliable method to correctly ascertained “facts or data.”  The bases of an expert witness’s opinion are irrelevant if they do not have some evidential support.

[ii] Peteet v. Dow Chemical Co., 868 F.2d 1428, 1432 (8th Cir. 1989)(“[T]he trial court should defer to the expert’s opinion of what they find reasonably reliable.”); United States v. Sims, 514 F.2d 147 (9th Cir. 1975)(Rule 703 enacted, but not yet in effect)(affirming trial court’s allowing government’s psychologist to rely upon I.R.S. agent’s statement that defendant had previous “legal difficulties” to counter defendant’s claim of recent insanity against tax enforcement).

[iii] International Adhesive Coating Co. v. Bolton Emerson International, Inc., 851 F.2d 540, 544-45 (1st Cir. 1988)(equating reasonableness with “normal practice”).

[iv] United States v. Locascio, 6 F.3d 924, 938 (2d Cir. 1993).  The Third Circuit, which had adopted an extremely laissez-faire approach to expert witness testimony, signaled its compliance with the Supreme Court’s decision in Daubert, in In re Paoli Railroad Yard PCB Litigation:

We now make clear that it is the judge who makes the determination of reasonable reliance, and that for the judge to make the factual determination under Rule 104(a) that an expert is basing his or her opinion on a type of data reasonably relied upon by experts, the judge must conduct an independent evaluation into reasonableness.  The judge can of course take into account the particular expert’s opinion that experts reasonably rely on that type of data, as well as the opinions of other experts as to its reliability, but the judge can also take into account other factors he or she deems relevant.

35 F.3d 717, 748 (3d Cir. 1994)(emphasis in original).

[v] In re Agent Orange Product Liability Lit., 611 F. Supp. 1223 (E.D.N.Y. 1985), aff’d on other grounds, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

[vi] 611 F. Supp. at 1246.  But see Fed. R. Evid. 803(4).

[vii] See, e.g., Soldo v. Sandoz Pharm. Corp., 244 F.Supp. 2d 434, 572 (W.D.Pa. 2003)(barring expert witness opinion testimony under Rules 702 and 703).

[viii] See, e.g., Ealy v. Richardson-Merrell, Inc., 897 F.2d 1159, 1161-62 (D.C. Cir. 1990)(affirming exclusion of an expert whose opinion lacked scientific foundation, and ignored extensive contrary, published data); Lima v. United States, 708 F.2d 502, 508 (10th Cir. 1983)(affirming exclusion of epidemiologist who relied upon data not reasonably relied upon by experts in the fields of epidemiology and neurology).

[ix] See, e.g., Opinion, N.J. Super. Ct., Middlesex Cty., Docket L-5532-01-MT (denying motion to preclude expert witnesses from relying, in part, on unpublished study)(Garruto, J.).  This issue was anticipated in one of the leading cases on expert witness opinion testimony.  In re Paoli, 35 F.3d 717, 749 n. 19 (3d Cir. 1994)(pointing out that Rules 702 and 703 were not redundant, and that reliable opinions might be partially based upon unreliable data).

[x] See Beech Aircraft v. Rainey, 488 U.S. 153, 168 (1988)(“The distinction between statements of facts and opinion is, at best, one of degree.”).

[xi] American Key Corp. v. Cole Nat’l Corp., 762 F.2d 1569, 1580 (11th Cir. 1985)(“Expert opinions ordinarily cannot be based upon the opinions of others whether those opinions are in evidence or not.”); see also TK-7 Corp. v. Estate of Barbouti, 993 F.2d 722, 732 (10th Cir. 1993)(affirming exclusion of expert testimony under Rule 703 “where the expert failed to demonstrate any basis for concluding that another individual’s opinion on a subjective financial prediction was reliable, other than the fact that it was the opinion of someone he believed to be an expert who had a financial interest in making an accurate prediction”).

[xii] See, e.g., Lewis v. Rego Co.,757 F.2d 66, 73-74 (3d Cir. 1985)(holding that trial court had erred in excluding a testifying expert witness’s recounting of, and reliance upon, an out-of-court conversation with a non-testifying expert).  See also Barris v. Bob’s Drag Chutes & Safety Equipment, 685 F.2d 94, 102 n.10 (3d Cir. 1982)(“Under Rule 703, an expert’s testimony may be formulated by the use of the facts, data and conclusions of other experts.”); Seese v. Volkswagenwerk A.G., 648 F.2d 833, 845 (affirming admissibility, under Rule 703, of accident-reconstruction expert, whose opinion was based upon facts, data, and conclusions of a physician).

[xiii] Advisory Committee Note to Rule 703 (emphasis added).

[xiv] Fed. R. Evid. 803(6).

[xv] Evidentiary rules in state courts, even those states that have adopted Rule 703, vary considerably in how disclosure is required or allowed.  Pennsylvania, for instance, has adopted its Rule 703 verbatim from the Federal Rules, but it handles disclosure very differently under its version of Rule 705:

The expert may testify in terms of opinion or inference and give reasons therefor; however, the expert must testify as to the facts or data on which the opinion or inference is based.

Pa. R. Evid. 705 (emphasis added).  See, e.g., Hansen v. Wyeth, Inc., 72 Pa. D. & C. 4th 225, 2005 WL 1114512, at *13, *19 (Phila. Ct. Common Pleas 2005)(Bernstein, J.)(granting new trial to verdict loser as a result of expert witness’s failure or inability to provide all bases for his opinion).

[xvi] See, e.g., Seaman, “Triangulating Testimonial Hearsay:  The Constitutional Boundaries of Expert Opinion Testimony,” 96 Georgetown L.J. 827 (2008).

The New Wigmore on Learned Treatises

September 12th, 2011

I am indebted to Professor David Bernstein for calling my attention to the treatment of learned treatises in the new edition of his treatise on expert evidence:  David H. Kaye, David E. Bernstein, and Jennifer L. Mnookin, The New Wigmore:  A Treatise on Evidence – Expert Evidence (2d ed. 2011).  Professor Bernstein suggested that I might find the treatment of learned treatises consistent with some of my concerns about the outdated rationale for allowing such works to be admissible for their truth.  See “Unlearning The Learned Treatise Exception,” and “Further Unraveling of the Learned Treatise Exception.”

Having used the first edition of the New Wigmore, I purchased a copy of the second edition of the volume on expert evidence.  The second edition appears to be a valuable addition to the scholarly literature on expert witness opinion evidence, and I recommend it strongly to students and practitioners who wrestle with expert witness issues.

Chapter 5, a treatment of “Treatises and Other Learned Writings,” is a good descriptive account of the historical development of the common law hearsay exception and its modification by various statutes and codes.  Unlike many discussions of the learned treatise exception, The New Wigmore delves into the overlap between 803(18), which specifies “reliable authority,” and the reliability factors set out in the most recent version of Rule 702.  Although the case law on the relationship between the two rules is sparse and inconsistent, the authors make a strong case for a reliability criterion for learned treatises when such treatises are offered for the truth of the matters asserted.

The New Wigmore acknowledges that many courts and scholars have assumed that juries and most normal people have a difficult time following a limiting instruction to consider a learned treatise for assessing credibility but not for the truth. Refreshingly, the New Wigmore rejects the notion that difficulty in following a limiting instruction (if real) renders the distinction meaningless.  In the context of Rule 702 or 703 motions to exclude, and accompanying motions for summary judgment, the issue whether a learned treatise statement is admissible for its truth may be outcome determinative of the motions.

The sad truth, touched on but not directly confronted by the New Wigmore, is that so much of the biomedical literature is carelessly written, with only cursory “peer review.”  See “Misplaced Reliance On Peer Review to Separate Valid Science From Nonsense” (Aug. 14, 2011). Professor Wigmore was impressed by the desire of treatise authors to offer trustworthy opinions to avoid ridicule by their peers; in our era, scientists are not so impressed by publication as a guarantor of trustworthiness.  See, e.g., Douglas G. Altman, “Poor Quality Medical Research:  What Can Journals Do?” 287 J. Am. Med. Ass’n 2765 (2002).  There is a good deal of rubbish out there in the published literature, and most courts have not considered how to stem the flood of this rubbish into the courtroom through the 803(18) loophole.

There are yet other problems with Rule 803(18) discussed in the New Wigmore.  The language of the rule is ambiguous. Does the requirement of “reliable authority” apply to the author, the text or journal, or the statement itself?  If the author or the publication, then there really is no assurance that the work satisfies reliability in the way required by 702.  If the status of the text, the journal, or the author is the sole criterion under 803(18), then we have a Ferebee-like rule that countenances the opinion of any willing, available, qualified author.  And the bar to publication these days is probably lower than the bar to being selected as a suitable testifying expert witness.

Authority is not a concept that is much at home in scientific discourse.  Nullius in verba, and all that.  If a statement in a publication is truly “authoritative,” it is because it is well supported by the facts and data on which it is based.

The New Wigmore goes beyond the coincidence of the word “reliable” in Rules 702 and 803(18), and argues that the logic of using a hearsay “learned treatise” for the truth of the matter asserted requires that the statement itself is reliably based. Here is how the second edition states its case for importing the requirements of Rule 702 into Rule 803(18):

“It would be not so difficult to conclude that assertions in a treatise that are not ‘the product of reliable principles or methods’ under Rule 702(2), for example, also are not ‘a reliable authority’ under Rule 803(18).”

Id. at 228, § 5.4.2. The triple negative may obscure the gist of the authors’ meaning, but I think their point is clear.  Let me attempt to restate their point without the negatives:

It is easy to conclude that treatise opinions that fail Rule 702 would fail to qualify for the 803(18) exception.

Of course, even if a treatise statement satisfies Rule 702, that statement would not necessarily qualify for the 803(18) exception.  The learned treatise exception also has a “recognition” requirement; one of the testifying expert witnesses must recognize the treatise as “authoritative,” “learned,” or whatnot, or the court must take judicial notice of its status.  The treatise could have the most detailed discussion and documentation of its opinions, with flawless reasoning and evidential assessment, but if it were just translated from Georgian, and unknown to the expert witnesses and the court, it would not qualify as a learned treatise.  More than epistemic reliability seems to be required: the status of the publication, in terms of the renown of the author and/or text.  The status of the publication creates a normative obligation upon the expert witnesses to be aware of its pronouncements and to reconcile or to incorporate the publication’s statements into their courtroom opinions.

The New Wigmore’s rejection of “authoritarianism” for Rule 803(18) is commendable, but difficult to achieve in practice.  Rule 702 has evolved into an important tool to ensure that opinions offered in court are “evidence based,” rather than predicated solely on the professional status of their authors.  Along with the epistemic requirements of Rule 702, the procedural requirements of Federal Rule of Civil Procedure 26 ensure that the opinion’s author has stated all opinions, and all bases, as well as everything considered along the way in forming the opinion.  The reality is that most textbooks and treatises offer only short, conclusory consideration of the issues that are likely important to the resolution of a lawsuit.  Frequently, a textbook cites a few studies that support the author’s opinion, without a sustained discussion of conflicting evidence, study validity, and the like.  An opinion that might be the subject of a 50-page Rule 26-compliant report may be reduced to a sentence or two in a textbook, which was published several years before the close of discovery in the case.  These are hardly propitious conditions for a truly learned treatise, and a 702-sufficient opinion.

Perhaps more promising is the development of the “systematic review,” which sets out to provide an evidence-based basis for causal claims. See, e.g., Michael B. Bracken, “Commentary: Toward systematic reviews in epidemiology,” 30 Internat’l J.  Epidem. 954 (2001).  Such reviews identify a research question, pre-specify the methodological approach to varying study designs and validity questions, search for all the data available that can contribute to answering the question, and provide a disciplined attempt to answer the research question.  Systematic reviews come very close to satisfying the needs of the courtroom, and the requirements of both Rules 702 and 803(18).  The trouble is, of course, that most traditional textbooks and narrative reviews, and “learned treatises,” are far off course from the epistemic path taken by systematic reviews.

The New Wigmore also raises the interesting question whether individual published studies are “learned treatises.” If they were, then an expert witness could rely upon them, per Rule 703, and the sponsoring party could actually offer them into evidence (or at least as an exhibit, with some right to show the jury their results).  An individual study, however, would seem to fall way short of the mark of the comprehensiveness required for a Rule 702 opinion, at least in the situation where there were other studies.

An irreducible problem in this area is that Rule 702 separates the “authority” of the speaker, in the form of qualifications to give an expert opinion, from the “reliability” of the opinion itself.  This separation, when followed, has been a huge achievement for the improvement of science in the courtroom.  Qualifications are a rather minimal necessary requirement, and even at best are a weak proxy for the reliability of the opinion given in court.  Many key 702 decisions involved expert witnesses with substantial, impressive qualifications. Despite these qualifications, courts excluded the witnesses’ proffered opinions because they were inadequately or unreliably supported.  Reliability under Rule 702 is thus an “evidence-based” requirement. The New Wigmore authors are correct that it is time to abandon “authority” as the guarantor of reliability in favor of “evidence-based principles.”

Gerrit W. H. Schepers, MD — RIP

September 9th, 2011

Earlier this week, Barry Castleman, ScD, consultant to the asbestos plaintiffs’ bar, wrote to the Occupational and Environmental Medicine List to alert subscribers that Dr. Gerrit Schepers had died, on September 6, at the age of 97.  Dr. Castleman took the opportunity to portray Dr. Schepers as someone who had identified asbestos hazards early in his career and worked hard to call attention to those hazards.

Schepers was born and educated in South Africa, where he practiced medicine with South Africa’s Pneumoconiosis Bureau.  Most of his experience in South Africa was with amphibole asbestos – amosite and crocidolite.  According to Castleman’s narrative, Schepers complained to the Prime Minister about the horrific working conditions of black children who worked at Cape Asbestos’ amosite mill at Penge, in the Transvaal.  Schepers was curiously silent, however, in the printed medical literature, about these horrors, until after others, including Dr. Irving Selikoff, began to publish about them in the 1960s.

Schepers came to the United States in 1949, and worked in various positions, including at the Trudeau Laboratories, at Saranac Lake, New York.  According to Castleman, when Schepers planned to return to South Africa, he found himself bullied by Vandiver Brown, a lawyer for the Johns Manville Corporation, over turning in a report of his American research and observations to the South African government.  According to Castleman, the report was “suppressed,” but no details are provided.

Schepers ultimately stayed in the United States, and moved through jobs with DuPont, and later with the Veterans Administration.  At the famous 1964 New York Academy of Sciences meeting on asbestos, Schepers was a voluble presenter and discussant of papers.

Schepers’ Career as Testifying Expert Witness in Asbestos Litigation

Castleman reports that in the late 1970s, Schepers agreed to testify in asbestos personal injury cases against some of the companies that had employed him.  Castleman generously offers that “[s]ome of his recollections were later supported by corporate documents revealed in the litigation.”

And many of Schepers’ recollections proved to be illusory and fantastical.  I have previously discussed some of Dr. Schepers’ writings in a post on “Manufactured Certainty” (May 27, 2011).

Schepers was not drafted reluctantly to the role of testifying witness.  Here is a quote from a letter, dated March 10, 1978, which Dr. Schepers wrote to Captain Hoeffler, of the Department of the Navy’s Bureau of Medicine and Surgery, in Washington, DC:

“Here is a CV and some reprints which will possibly be helpful.  Since I have been involved with so many things, my expertise with respect to asbestosis is somewhat hidden among the rest.  For emphasis therefore let me summarize that my clinical and research involvement with asbestosis and thus also lung cancer spans some thirty years.  I commenced this work in South Africa, where as a …. Medical director for the pneumoconiosis Bureau we researched the working conditions and health of all employees of that countries [sic] extensive crocidolite and amosite mines and industries.  The fact that mesothelioma can be associated with asbestos dust was first discovered by me during 1949 at the Penge Egnep mines in the Eastern Transvaal.  It is also important to know that only one out of three persons who develop mesothelioma ever was exposed to asbestos dust.  The Institute for Pneumoconiosis Research which I started there has abundant evidence about this.

In the USA I next studied the asbestos problem for the Quebec Government and the Johns Mansville Company and also for various asbestos producing companies.  This embraced research on human subjects, lung tissue and experimental animals.  The net result of my fifteen years of work in this field has been to convince me that chrysotile, which is the North American type of asbestos, is relatively innocuous as compared to the African and Russian varieties.  I have never seen a case of lung cancer develop on any person exposed to chrysotile only.  However I have seen plenty of lung cancers in asbestos workers.

This is because most asbestos workers are exposed to carcinogenic materials other than asbestos and all the cases with lung cancer also were chronic lung self-mutilators through cigarette smoking.  In a rathe major set of experiments of mine I exposed animals to the most potent known carcinogenic (beryllium sulphate) and then exposed them to asbestos (chrysotile) dust.  These animals had fewer cancers than those exposed to the beryllium sulphate.  So chrysotile is not even a significant co-carcinogen.  I reversed the order of the exposure – namely asbestos (chrysotile) first and the the BeSO4.  The result was the same.  The animals exposed only to chrysotile never developed any lung cancers.

I probably have the largest collection of asbestosis case materials, having been a consultant to hundreds of physicians.  I have a very detailed knowledge of what various types of asbestos can and cannot do to the lungs.  If my command of this subject can be of use to the Navy in the current law suit, please feel to use my services as consultant as you deem fit.”

Unfortunately, there is no similar letter to Ron Motley or Gene Locks readily available to detail how Dr. Schepers ended up on virtually every plaintiffs’ witness list.

Chrysotile

As we can see from his 1978 correspondence, Dr. Schepers was not shy about touting his expertise, or his opinions about the innocuousness of chrysotile asbestos.  Castleman’s revisionist history has some support from Schepers’ own attempts to reinvent his past.  See, e.g., Gerrit W.H. Schepers, “Chronology of Asbestos Cancer Discoveries: Experimental Studies of the Saranac Laboratory,” 27 Am. J. Indus. Med. 593-606 (1995). The contemporaneous history seems at odds with words written after decades of consulting with and testifying for plaintiffs’ counsel in asbestos litigation.

These revisions to the historical record are, however, quite incredible.  Indeed, Dr. Schepers weighed in on the fiber-type controversies that were being fought out before the then-young Occupational Safety and Health Administration.  In a letter dated July 19, 1976, Schepers wrote to Grover Wrenn, Chief, Division of Health Standards Development, OSHA:

“This is a follow-up on our recent meeting with the Assistant Secretary of Labor at which we discussed the question of asbestosis and berylliosis and the relationship of exposure of various industrial substances to lung cancer.

I promised to help you place items in the record which you appeared not have available.”

***

“As you can see my researches cast considerable doubt on the proposition that American fibrous minerals are carcinogenic.  I am not one of those that deny the carcinogenicity of everything.  To the contrary, I believe that I have helped prove that some environmental pollutants are carcinogenic.  For this reason you may perhaps accept the credibility of my findings when I state that I could detect no evidence of carcinogenicity for either chrysotile, talc or fiberglass.”

Unlike Selikoff, Schepers was never a crocidolite denier, although after he started testifying regularly for plaintiffs’ counsel, his views appeared to change.

Wilhelm Hueper:  Genius or Criminal?

According to Castleman, others (unidentified) directed Schepers to “snub Dr. Hueper at scientific gatherings and ‘knock him’ in conversations with others.”  In my courtroom encounters with Dr. Schepers, I never found him shy about his opinions of other people and their abilities.  Here is what Schepers, under oath, told me about Wilhelm Hueper:

“Q . Surely. Back in the 1950s Doctor Hueper was fairly well regarded as an expert in industrial medicine?

A. No. No. No. No. He was a — he was a pathologist, epidemiologist, whose main focus was cancer, not all of the industrial medicine or hygiene, and his focus was almost singularly on the issue of relationship between industrial processes and cancer. That’s about the only way I can answer that question.

Q. All right. Was he regarded — was his opinions regarded — well regarded in the 1950s?

A. Oh, my goodness, some — some people thought that he was criminally irresponsible, and others thought he was a genius. I can’t answer that question.

Q. Did some think he was irresponsible because he rejected the association between smoking and lung cancer?

A. No. No. No. No. It is because he blamed everything, he blamed he just blamed everything as a cause. By then he got to the stage where you could get cancer from riding down the highway. You could get cancer from working with silica bricks, all things that are — you know, had been disproven, SO forth. So I would not classify him, you know, although I knew Doctor Hueper and respected him, I would not classify him as a final authority on that subject.”

Testimony on Cross-examination, at 234: 18 – 235:23, in De bene esse videotaped deposition of Gerrit W. H. Schepers, M.D. (June 14, 1991) (presented by plaintiffs’ counsel Jim Pettit, of Greitzer & Locks), in Radcliff v. Eagle-Picher Indus., Inc., Superior Court of New Jersey, Gloucester County, Law Div., Docket No. W-023456-88.

I had any number of courtroom encounters with Dr. Schepers over the years.  I owe him a debt for having carefully recorded his thoughts on chrysotile before they drew the opprobrium of plaintiffs’ counsel.  He helped me win some interesting cases.

Milward — Unhinging the Courthouse Door to Dubious Scientific Evidence

September 2nd, 2011

It has been an interesting year in the world of expert witnesses.  We have seen David Egilman attempt a personal appeal of a district court’s order excluding him as an expert.  Stephen Ziliak has prattled on about how he steered the Supreme Court from the brink of disaster by helping them to avoid the horrors of statistical significance.  And then we had a philosophy professor turned expert witness, Carl Cranor, publicly touting an appellate court’s decision that held his testimony admissible.  Cranor, under the banner of the Center for Progressive Reform (CPR), hails the First Circuit’s opinion as the greatest thing since Sir Isaac Newton.   Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).

Philosophy Professor Carl Cranor has been trying for decades to dilute the scientific approach to causal conclusions to permit the precautionary principle to find its way into toxic tort cases.  Cranor, along with others, has also criticized federal court expert witness gatekeeping for deconstructing individual studies, showing that the individual studies are weak, and ignoring the overall pattern of evidence from different disciplines.  This criticism has some theoretical merit, but it is typically advanced as an excuse for “manufacturing certainty” from weak, inconsistent, and incoherent scientific evidence.  The criticism also ignores the actual text of the relevant rule – Rule 702, which does not limit the gatekeeping court to assessing individual “pieces” of evidence.  The scientific community acknowledges that there are times when a weaker epidemiologic dataset may be supplemented by strong experimental evidence that leads appropriately to a conclusion of causation.  See, e.g., Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sci. 223 (2011) (noting the lack of a systematic, transparent way to integrate toxicologic and epidemiologic data to support conclusions of causality; proposing a “grid” to permit disparate lines of evidence to be integrated into more straightforward conclusions).

For the most part, Cranor’s publications have been ignored in the Rule 702 gatekeeping process.  Perhaps that is why he shrugged off his academic regalia and took on the mantle of the expert witness, in Milward v. Acuity Specialty Products, a case involving a claim that benzene exposure caused the plaintiff’s acute promyelocytic leukemia (APL), one of several types of acute myeloid leukemia.  Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp. 2d 137 (D.Mass. 2009) (O’Toole, J.).

Philosophy might seem like the wrong discipline to help a court or a jury decide general and specific causation of a rare cancer, with an incidence of less than 8 cases per million per year.  (A PubMed search on leukemia and Cranor yielded no hits.)  Cranor supplemented the other, more traditional testimony from a toxicologist by attempting to show that the toxicologist’s testimony was based upon sound scientific method.  Cranor was particularly intent to show that the toxicologist, Dr. Martyn Smith, had used sound method to reach a scientific conclusion, even though he lacked strong epidemiologic studies to support his opinion.

The district court excluded Cranor’s testimony, along with plaintiff’s scientific expert witnesses.  The Court of Appeals, however, reversed, and remanded with instructions that plaintiff’s scientific expert witnesses’ opinions were admissible.  639 F.3d 11 (1st Cir. 2011).  Hence Cranor’s and the CPR’s hyperbole about the opening of the courthouse doors.

The district court was appropriately skeptical about plaintiff’s expert witnesses’ reliance upon epidemiologic studies, the results of which were not statistically significant.  Before reaching the issue of statistical significance, however, the district court found that Dr. Smith had relied upon studies that did not properly support his opinion.  664 F.Supp. 2d at 148.  The defense presented Dr. David Garabrant, an expert witness with substantial qualifications and accomplishments in epidemiologic science.  Dr. Garabrant persuaded the Court that Dr. Smith had relied upon some studies that tended to show no association, and others that presented faulty statistical analyses.  Other studies, relied upon by Dr. Smith, presented data on AML, but Dr. Smith speculated that these AML cases could have been APL cases.  Id.

None of the studies relied upon by plaintiffs’ expert witness, Dr. Smith, had a statistically significant result for APL.  Id. at 144. The district court pointed out that scientists typically take care to rely only upon data that show “statistical significance,” and that Dr. Smith deviated from sound scientific method in attempting to support his conclusion with studies that had not ruled out chance as an explanation for their increased risk ratios.  Id.  The district court did not summarize the studies’ results, and so the unsoundness of plaintiffs’ method is difficult to evaluate.  Rather than engaging in hand waving and speculating about “trends” and suggestions, the plaintiffs’ witnesses could have performed a meta-analysis to increase the statistical precision of a summary point estimate beyond what was achieved in any single, small study, as sketched below.  Neither the plaintiffs nor the district court addressed the issue of aggregating study results to address the role of chance in producing the observed results.
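
To make the aggregation point concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis of the sort the plaintiffs’ witnesses could have offered.  The three study results are invented placeholders, not figures from the Milward record; the point is only to show how pooling several small, individually non-significant studies narrows the confidence interval around a summary risk ratio.

```python
import math

# Hypothetical study results: (risk ratio, lower 95% bound, upper 95% bound).
# These numbers are invented for illustration; none comes from the Milward record.
studies = [
    (1.8, 0.6, 5.4),
    (1.4, 0.5, 3.9),
    (2.1, 0.7, 6.3),
]

# Fixed-effect, inverse-variance pooling on the log scale.
weights, weighted_logs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1.0 / se ** 2
    weights.append(w)
    weighted_logs.append(w * math.log(rr))

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

pooled_rr = math.exp(pooled_log)
ci_low = math.exp(pooled_log - 1.96 * pooled_se)
ci_high = math.exp(pooled_log + 1.96 * pooled_se)
print(f"Pooled RR = {pooled_rr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

On these made-up inputs, each study’s interval spans 1.0, but the pooled interval is noticeably narrower than any single study’s; whether a pooled estimate from the real data would have excluded 1.0 is exactly the question that neither Dr. Smith nor the district court answered.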

The inability to show a statistically significant result was not surprising given how rare the APL subtype of AML is.  Sample size might legitimately interfere with the ability of epidemiologic studies to detect a statistically significant association that really existed.  If this were truly the case, the lack of a statistically significant association could not be interpreted to mean the absence of an association without potentially committing a type II error. In any event, the district court in Milward was willing to credit the plaintiffs’ claim that epidemiologic evidence may not always be essential for establishing causality.  If causality does exist, however, epidemiologic studies are usually required to confirm the existence of the causal relationship.  Id. at 148.
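
A rough back-of-the-envelope power calculation illustrates why sample size bites so hard for a tumor this rare.  The sketch below uses a standard normal approximation to the log rate ratio; every input (the baseline APL incidence, the cohort sizes, the true rate ratio to be detected) is an assumption chosen for illustration rather than a figure from the Milward studies, and with so few expected cases the approximation is crude at best.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical planning inputs (assumptions, not Milward data).
baseline_rate = 3e-6            # assumed APL incidence: 3 cases per million person-years
person_years_exposed = 200_000
person_years_unexposed = 200_000
true_rate_ratio = 2.0           # the effect size the study hopes to detect
z_alpha = 1.96                  # two-sided 5% significance level

# Expected case counts if the true rate ratio really is 2.0.
cases_exposed = true_rate_ratio * baseline_rate * person_years_exposed
cases_unexposed = baseline_rate * person_years_unexposed

# Normal approximation: SE of the log rate ratio is roughly sqrt(1/a + 1/c).
se_log_rr = math.sqrt(1.0 / cases_exposed + 1.0 / cases_unexposed)
power = norm_cdf(abs(math.log(true_rate_ratio)) / se_log_rr - z_alpha)

print(f"Expected cases: {cases_exposed:.1f} exposed vs. {cases_unexposed:.1f} unexposed")
print(f"Approximate power: {power:.0%}; risk of a type II error: {1 - power:.0%}")
```

On these assumed inputs, the study would expect only a case or two in each group and would have power in the single digits, so a statistically non-significant result would say almost nothing either way.  That is the type II error point the district court was willing to credit.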

The district court also took a close look at Smith’s mechanistic biological evidence, and found it equally speculative.  Although plausibility is a desirable feature of a causal hypothesis, it only sets the stage for actual data:

“Dr. Smith’s opinion is that ‘[s]ince benzene is clastogenic and has the capability of breaking and rearranging chromosomes, it is biologically plausible for benzene to cause’ the t(15;17) translocation. (Smith Decl. ¶ 28.b.) This is a kind of ‘bull in the china shop’ generalization: since the bull smashes the teacups, it must also smash the crystal. Whether that is so, of course, would depend on the bull having equal access to both teacups and crystal.”

Id. at 146.

“Since general extrapolation is not justified and since there is no direct observational evidence that benzene causes the t(15;17) translocation, Dr. Smith’s opinion — that because benzene is an agent that can cause some chromosomal mutations, it is ‘plausible’ that it causes the one critical to APL—is simply an hypothesis, not a reliable scientific conclusion.”

Id. at 147.

Judge O’Toole’s opinion is a careful, detailed consideration of the facts and data upon which Dr. Smith relied, but the First Circuit found an abuse of discretion, and reversed. 639 F.3d 11 (1st Cir. 2011).

The Circuit incorrectly suggested that Smith’s opinion was based upon a “weight of the evidence” methodology described by “the world-renowned epidemiologist Sir Arthur Bradford Hill in his seminal methodological article on inferences of causality. See Arthur Bradford Hill, The Environment and Disease: Association or Causation?, 58 Proc. Royal Soc’y Med. 295 (1965).” Id. at 17.  This suggestion is remarkable because everyone knows that it was Arthur’s much smarter brother, Austin, who wrote the seminal article and gave the Bradford Hill name to the famous presidential address published by the Royal Society of Medicine.  Arthur Bradford Hill was not even a knight if he existed at all.

The Circuit’s suggestion is also remarkable for confusing a vague “weight of the evidence” methodology with the statistical and epidemiologic approach of one of the 20th century’s great methodologists.  Sir Austin is known for having conducted the first randomized clinical trial, as well as having shown, with fellow knight Sir Richard Doll, the causal relationship between smoking and lung cancer.  Sir Austin wrote one of the first texts on medical statistics, Principles of Medical Statistics (London 1937).  Sir Austin no doubt was turning in his grave when he was associated with Cranor’s loosey-goosey “weight of the evidence” methodology.  See, e.g., Douglas L. Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545 (2005) (noting the vague, ambiguous, indefinite nature of the concept of “weight of evidence” review).

The Circuit adopted a dismissive attitude towards epidemiology in general, citing to an opinion piece by several cancer tumor biologists, whom the court described as a group from the National Cancer Institute (NCI).  The group was actually a workshop sponsored by the NCI, with participants from many institutions.  Id. at 17 (citing Michele Carbon[e] et al., “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Res. 5518, 5522 (2004)).  The cited article did report some suggestions for modifying Bradford Hill’s criteria in the light of modern molecular biology, as well as a sense of the group that there was no “hierarchy” in which epidemiology was at the top.  (The group definitely did not address the established concept that some types of epidemiologic studies are analytically more powerful to support inferences of causality than others — the hierarchy of epidemiologic evidence.)

The Circuit then proceeded to evaluate Dr. Smith’s consideration of the available epidemiologic studies.  The Circuit mistakenly defined an “odds ratio” as “the difference in the incidence of a disease between a population that has been exposed to benzene and one that has not.”  Id. at 24. Having failed to engage with the evidence sufficiently to learn what an odds ratio was, the Circuit Court then proceeded to state that the difference between Dr. Garabrant and Dr. Smith, as to how to calculate the odds ratio in some of the studies, was a mere difference in opinion between experts, and Dr. Garabrant’s criticisms of Dr. Smith’s approach went to the weight, not the admissibility, of the evidence.  These sparse words are, of course, a legal conclusion, not an explanation, and the Circuit leaves us without any real understanding of how Dr. Smith may have gone astray, but still have been advancing a legitimate opinion within epidemiology, which was not his discipline.  Id. at 22. If Dr. Smith’s idea of an odds ratio was as incorrect as the Circuit’s, his calculation may have had no validity whatsoever, and his opinions derived from his flawed ideas may well have failed the requirements of Rule 702.  The Circuit’s opinion is not terribly helpful in understanding anything other than its summary rejection of the district court’s more detailed analysis.
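
For readers who want to see why the Circuit’s definition misses the mark, here is a minimal sketch built on an invented two-by-two table (the counts are placeholders, not data from any study in the Milward record).  The “difference in the incidence of a disease” between exposed and unexposed groups describes the risk difference; the odds ratio is a ratio of odds, and in a case-control design it is ordinarily the only one of these measures that can be estimated directly.

```python
# Hypothetical 2x2 table (invented counts, for illustration only):
#                 diseased   not diseased
#   exposed           a            b
#   unexposed         c            d
a, b = 12, 488
c, d = 5, 495

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)

risk_difference = risk_exposed - risk_unexposed   # what the Circuit described
relative_risk = risk_exposed / risk_unexposed     # ratio of incidences
odds_ratio = (a / b) / (c / d)                    # ratio of odds, = (a*d)/(b*c)

print(f"Risk difference: {risk_difference:.4f}")
print(f"Relative risk:   {relative_risk:.2f}")
print(f"Odds ratio:      {odds_ratio:.2f}")
```

For a rare disease the odds ratio and the relative risk are numerically close, as they are here, but they are not the same quantity, and neither is a “difference” in incidence.  A court that conflates the three cannot meaningfully evaluate a dispute over how an odds ratio should be calculated.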

The Circuit also advanced the “impossibility” defense for Dr. Smith’s failure to rely upon epidemiologic studies with statistically significant results.  Id. at 24. As noted above, such studies fail to rule out chance for their finding of risk ratios above or below 1.0 (the measure of no association).  Because the likelihood of obtaining a risk ratio of exactly 1.0 is vanishingly small, epidemiologic science must and does consider the role of chance in explaining data that diverges from a measure of no association.  Dr. Smith’s hand waving about the large size of the studies needed to show an increased risk may have some validity in the context of benzene exposure and APL, but it does not explain or justify the failure to use aggregative techniques such as meta-analysis.  The hand waving also does nothing to rule out the role of chance in producing the results he relied upon in court.

The Circuit Court appeared to misunderstand the very nature of the need for statistical evaluation of stochastic biological events, such as APL incidence in a population.  According to the Circuit, Dr. Smith’s reliance upon epidemiologic data was merely

“meant to challenge the theory that benzene exposure could not cause APL, and to highlight that the limited data available was consistent with the conclusions that he had reached on the basis of other bodies of evidence. He stated that ‘[i]f epidemiologic studies of benzene-exposed workers were devoid of workers who developed APL, one could hypothesize that benzene does not cause this particular subtype of AML.’ The fact that, on the  contrary, ‘APL is seen in studies of workers exposed to benzene where the subtypes of AML have been separately analyzed and has been found at higher levels than expected’ suggested to him that the limited epidemiological evidence was at the very least consistent with, and suggestive of, the conclusion that benzene can cause APL.

* * *

Dr. Smith did not infer causality from this suggestion alone, but rather from the accumulation of multiple scientifically acceptable inferences from different bodies of evidence.”

Id. at 25

But challenging the theory that benzene exposure does not cause APL does not help show the validity of the studies relied upon, or the inferences drawn from them.  This was plaintiffs’ and Dr. Smith’s burden under Rule 702, and the Circuit seemed to lose sight of the law and the science with Professor Cranor’s and Dr. Smith’s sleight of hand.  As for the Circuit’s suggestion that scraps of evidence from different kinds of scientific studies can establish scientific knowledge, this approach was rejected by the great mathematician, physicist, and philosopher of science, Henri Poincaré:

“[O]n fait la science avec des faits comme une maison avec des pierres; mais une accumulation de faits n’est pas plus une science qu’un tas de pierres n’est une maison.”

Henri Poincaré, La Science et l’Hypothèse (1905) (chapter 9, Les Hypothèses en Physique) (“Science is built up of facts, as a house is built of stones; but an accumulation of facts is no more a science than a heap of stones is a house.”).  Litigants, either plaintiff or defendant, should not be allowed to pick out isolated findings in a variety of studies, and throw them together as if that were science.

As unclear and dubious as the Circuit’s opinion is, the court did not throw out the last 18 years of Rule 702 law.  The Court distinguished the Milward case, with its sparse epidemiologic studies, from those cases “in which the available epidemiological studies found that there is no causal link.”  Id. at 24 (citing Norris v. Baxter Healthcare Corp., 397 F.3d 878, 882 (10th Cir. 2005), and Allen v. Pa. Eng’g Corp., 102 F.3d 194, 197 (5th Cir. 1996)).  The Court, however, provided no insight into why the epidemiologic studies must rise to the level of showing no causal link before an expert can torture weak, inconsistent, and contradictory data to claim such a link.  This legal sleight of hand is simply a shifting of the burden of proof, which should have been on plaintiffs and Dr. Smith.  Desperation is not a substitute for adequate scientific evidence to support a scientific conclusion.

The Court’s failure to engage more directly with the actual data, facts, and inferences, however, is likely to cause mischief in federal cases around the country.