TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

New-Age Levellers – Flattening Hierarchy of Evidence

October 30th, 2011

The Levellers were political dissidents in mid-17th-century England.  Among their causes, the Levellers advanced popular sovereignty, equal protection of the law, and religious tolerance.

The political agenda of the Levellers sounds quite noble to 21st-century Americans, but their ideals have no place in the world of science:  not all opinions or scientific studies are created equal; not all opinions are worthy of being taken seriously in scientific discourse or in courtroom presentations of science; and not all opinions should be tolerated, especially when they claim causal conclusions based upon shoddy or inadequate evidence.

In some litigations, legal counsel set out to obscure the important quantitative and qualitative distinctions among scientific studies.  Sometimes, lawyers find cooperative expert witnesses, willing to engage in hand waving about “the weight of the evidence,” where the weights are assigned post hoc, in a highly biased fashion.  No study (that favors the claim) left behind.  This is not science, and it is not how science operates, even though some expert witnesses, such as Professor Cranor in the Milward case, have been able to pass off their views as representative of scientific practice.

A sound appreciation of how scientists evaluate studies, and of why not all studies are equal, is essential to any educated evaluation of scientific controversies.  Litigants who face high-quality studies, with results inconsistent with their litigation claims, may well resort to “leveling” of studies.  This leveling may be advanced out of ignorance, but more likely it is an attempt to snooker courts into treating exploratory, preliminary, and hypothesis-generating studies as somehow equal or superior in value to hypothesis-testing studies.

Some of the leveling tactics that have become commonplace in litigation include asserting that:

  • All expert witnesses are the same;
  • All expert witnesses conduct the same analysis;
  • All expert witnesses read articles, interpret them, and offer opinions;
  • All expert witnesses are inherently biased;
  • All expert witnesses select the articles to read and interpret in line with their biases;
  • All epidemiologic studies are the same;
  • All studies are flawed; and
  • All opinions are, in the final analysis, subjective.

This leveling strategy can be seen in Professor Margaret Berger’s introduction to the Reference Manual on Scientific Evidence (RMSE 3d), where she supported an ill-defined “weight-of-the-evidence” approach to causal judgments.  See “Late Professor Berger’s Introduction to the Reference Manual on Scientific Evidence” (Oct. 23, 2011).

Other chapters in the RMSE 3d are at odds with Berger’s introduction.  The epidemiology chapter does not explicitly address the hierarchy of studies, but it does describe cross-sectional, ecological, and secular-trend studies as less able to support causal conclusions.  Cross-sectional studies are described as “rarely useful in identifying toxic agents,” RMSE 3d at 556, and as “used infrequently when the exposure of interest is an environmental toxic agent,” RMSE 3d at 561.  Cross-sectional studies are characterized as hypothesis-generating as opposed to hypothesis-testing, although not in those specific terms.  Id. (describing cross-sectional studies as providing valuable leads for future research).  Ecological studies are described as useful for identifying associations, but not helpful in determining whether such associations are causal; and ecological studies are identified as a fertile source of error in the form of the “ecological fallacy.”  Id. at 561–62.

The epidemiology chapter perhaps weakens its helpful description of the limited role of ecological studies by citing, with apparent approval, a district court that blinked at its gatekeeping responsibility to ensure that testifying expert witnesses did, in fact, rely upon “sufficient facts or data,” as well as upon studies that are “of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.” Rule 703. RMSE 3d at 561 n.34 (citing Cook v. Rockwell International Corp., 580 F. Supp. 2d 1071, 1095–96 (D. Colo. 2006), where the district court acknowledged the severe limitations of ecological studies in supporting causal inferences, but opined that the limitations went to the weight of the study). Of course, the insubstantial weight of an ecological study is precisely what may result in the study’s failure to support a causal claim.
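The ecological fallacy that the chapter flags is easy to demonstrate with a toy computation.  In the sketch below (a hypothetical illustration with invented data, not anything drawn from the Manual or from Cook), exposure and disease move in opposite directions within every region, yet a correlation computed only from the region-level averages is perfectly positive, which is precisely the trap of inferring individual-level effects from aggregate data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented data: three regions, three subjects each.  Within every
# region, higher exposure accompanies a LOWER disease score.
regions = {
    "A": ([1, 2, 3], [5, 4, 3]),
    "B": ([4, 5, 6], [8, 7, 6]),
    "C": ([7, 8, 9], [11, 10, 9]),
}

# Individual-level association within each region: inverse (-1.0).
within = {name: pearson(xs, ys) for name, (xs, ys) in regions.items()}

# Ecological association across region averages: positive (+1.0).
mean_x = [sum(xs) / len(xs) for xs, _ in regions.values()]
mean_y = [sum(ys) / len(ys) for _, ys in regions.values()]
ecological = pearson(mean_x, mean_y)

print(within)      # each region: -1.0
print(ecological)  # 1.0
```

An expert who looked only at the ecological correlation would report a strong positive association that runs exactly opposite to the relationship observed within every region of the data.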

The ray of clarity in the epidemiology chapter about the hierarchical nature of studies is muddled by an attempt to level epidemiology and toxicology.  The chapter suggests that there is no hierarchy of disciplines (as opposed to studies within a discipline).  RMSE 3d at 564 & n.48 (citing and quoting a symposium paper stating that “[t]here should be no hierarchy [among different types of scientific methods to determine cancer causation]. Epidemiology, animal, tissue culture and molecular pathology should be seen as integrating evidences in the determination of human carcinogenicity.” Michele Carbone et al., “Modern Criteria to Establish Human Cancer Etiology,” 64 Cancer Res. 5518, 5522 (2004)).  Carbone, of course, is best known for his advocacy of a viral cause (SV40) of human mesothelioma, a claim unsupported, and indeed contradicted, by epidemiologic studies.  His statement does not support the chapter’s leveling of epidemiology and toxicology, and Carbone is, in any event, an unlikely source to cite.

The epidemiology chapter undermines its own description of the role of study design in evaluating causality by pejoratively asserting that most epidemiologic studies are “flawed”:

“It is important to emphasize that all studies have ‘flaws’ in the sense of limitations that add uncertainty about the proper interpretation of the results.9 Some flaws are inevitable given the limits of technology, resources, the ability and willingness of persons to participate in a study, and ethical constraints. In evaluating epidemiologic evidence, the key questions, then, are the extent to which a study’s limitations compromise its findings and permit inferences about causation.”

RMSE 3d at 553.  This statement is actually a significant improvement over the second edition, where the authors of the epidemiology chapter asserted, without qualification:

“It is important to emphasize that most studies have flaws.”

RMSE 2d 337.  The “flaws” language from the earlier chapter was used on occasion by courts that were set on ignoring competing interpretations of epidemiologic studies.  Since all or most studies are flawed, why bother figuring out what is valid and reliable?  Just let the jury sort it out.  This is not an aid to gatekeeping, but rather a prescription for allowing the gatekeeper to call in sick.

The current epidemiology chapter essentially backtracks from the harsh connotations of its use of the term “flaws” by now equating the term with “limitations.”  Flaws and limitations, however, are quite different from one another.  What is left out of the third edition’s description is the sense that some studies are so flawed that they must be disregarded altogether.  There may also be limitations in studies, especially observational studies, which is why the party with the burden of proof should generally not be allowed to proceed with only one or two epidemiologic studies.  Rule 702, after all, requires that an expert opinion be based upon “sufficient facts or data.”

The RMSE 3d chapter on medical evidence is a refreshing break from the leveling approach seen elsewhere.  Here, at least, the chapter authors devote several pages to explaining the role of study design in assessing an etiological issue:

3. Hierarchy of medical evidence

With the explosion of available medical evidence, increased emphasis has been placed on assembling, evaluating, and interpreting medical research evidence.  A fundamental principle of evidence-based medicine (see also Section IV.C.5, infra) is that the strength of medical evidence supporting a therapy or strategy is hierarchical.

When ordered from strongest to weakest, systematic review of randomized trials (meta-analysis) is at the top, followed by single randomized trials, systematic reviews of observational studies, single observational studies, physiological studies, and unsystematic clinical observations.150 An analysis of the frequency with which various study designs are cited by others provides empirical evidence supporting the influence of meta-analysis followed by randomized controlled trials in the medical evidence hierarchy.151 Although they are at the bottom of the evidence hierarchy, unsystematic clinical observations or case reports may be the first signals of adverse events or associations that are later confirmed with larger or controlled epidemiological studies (e.g., aplastic anemia caused by chloramphenicol,152 or lung cancer caused by asbestos153). Nonetheless, subsequent studies may not confirm initial reports (e.g., the putative association between coffee consumption and pancreatic cancer).154

John B. Wong, Lawrence O. Gostin, and Oscar A. Cabrera, “Reference Guide on Medical Testimony,” RMSE 3d 687, 723–24 (2011).  The third edition’s chapter is a significant improvement over the second edition’s chapter on medical testimony, which did not mention the hierarchy of evidence.  Mary Sue Henifin, Howard M. Kipen, and Susan R. Poulter, “Reference Guide on Medical Testimony,” RMSE 2d 440 (2000).  Indeed, the only time the word “hierarchy” appeared in the entire second edition was in connection with the hierarchy of the federal judiciary.
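The ordering that the Wong chapter spells out lends itself to a trivial encoding.  The following sketch (my own Python illustration; the ranking function and design names are paraphrased from the quoted passage, not taken verbatim from the Manual) captures the hierarchy, strongest evidence first:

```python
# Evidence hierarchy described in the RMSE 3d medical testimony
# chapter, ordered strongest to weakest (see the quoted passage above).
EVIDENCE_HIERARCHY = [
    "systematic review of randomized trials (meta-analysis)",
    "single randomized trial",
    "systematic review of observational studies",
    "single observational study",
    "physiological study",
    "unsystematic clinical observation (case report)",
]

def rank(design: str) -> int:
    """Return the 1-based rank of a study design; 1 is the strongest."""
    return EVIDENCE_HIERARCHY.index(design) + 1

print(rank("single randomized trial"))  # 2
print(rank("physiological study"))      # 5
```

A case report’s bottom rank does not make it worthless: as the chapter itself notes, case reports may be the first signal of an adverse association that larger, controlled studies later confirm.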

The tension, contradictions, and differing emphases among the various chapters of the RMSE 3d point to an important “flaw” in the new edition.  The chapters appear to have been written largely in isolation, without much regard for what the other chapters contain.  The chapters overlap, and indeed contradict one another on key points.  Witness Berger’s rejection of the hierarchy of evidence, the epidemiology chapter’s inconstant presentation of the concept without mentioning it by name, and the medical testimony chapter’s embrace and explicit presentation of the hierarchical nature of medical study evidence.  Fortunately, the laissez-faire editorial approach allowed the disagreement to remain, without censoring any position, but the federal judiciary is not aided by the contradiction and tension among the approaches.

Given the importance of the concept, even the medical testimony chapter in RMSE 3d may seem too little, too late, to be helpful to the judiciary.  There are book-length treatments of systematic reviews and “evidence-based medicine”; the three pages in Wong’s chapter barely scratch the surface of this important topic of how evidence is categorized, evaluated, and synthesized in making judgments of causality.

There are many textbooks and articles available to judges and lawyers on how to assess medical studies.  Recently, John Cherrie posted on his blog, OH-world, about a series of 17 articles, in the journal Deutsches Ärzteblatt International, on the proper evaluation of medical and epidemiologic studies.

These papers, overall, make the point that not all studies are equal, and that not all evidentiary displays are adequate to support conclusions of causal association.  The papers are available without charge from the journal’s website:

01. Critical Appraisal of Scientific Articles

02. Study Design in Medical Research

03. Types of Study in Medical Research

04. Confidence Interval or P-Value?

05. Requirements and Assessment of Laboratory Tests: Inpatient Admission Screening

06. Systematic Literature Reviews and Meta-Analyses

07. The Specification of Statistical Measures and Their Presentation in Tables and Graphs

08. Avoiding Bias in Observational Studies

09. Interpreting Results in 2×2 Tables

10. Judging a Plethora of p-Values: How to Contend With the Problem of Multiple Testing

11. Data Analysis of Epidemiological Studies

12. Choosing Statistical Tests

13. Sample Size Calculation in Clinical Trials

14. Linear Regression Analysis

15. Survival Analysis

16. Concordance Analysis

17. Randomized Controlled Trials

This year, the Journal of Clinical Epidemiology began publishing a series of papers, known by the acronym GRADE, which aim to provide guidance on how studies are categorized and assessed for their evidential quality in supporting treatments and interventions.  The GRADE project is led by Gordon Guyatt, who is known for having coined the term “evidence-based medicine,” and for having written widely on the subject.  Guyatt, along with colleagues including Peter Tugwell (who was one of the court-appointed expert witnesses in MDL 926), has described the GRADE project:

“The ‘Grades of Recommendation, Assessment, Development, and Evaluation’ (GRADE) approach provides guidance for rating quality of evidence and grading strength of recommendations in health care. It has important implications for those summarizing evidence for systematic reviews, health technology assessment, and clinical practice guidelines. GRADE provides a systematic and transparent framework for clarifying questions, determining the outcomes of interest, summarizing the evidence that addresses a question, and moving from the evidence to a recommendation or decision. Wide dissemination and use of the GRADE approach, with endorsement from more than 50 organizations worldwide, many highly influential (http://www.gradeworkinggroup.org/), attests to the importance of this work. This article introduces a 20-part series providing guidance for the use of GRADE methodology that will appear in the Journal of Clinical Epidemiology.”

Gordon Guyatt, Andrew D. Oxman, Holger Schünemann, Peter Tugwell, and Andre Knottnerus, “GRADE guidelines – new series of articles in Journal of Clinical Epidemiology,” 64 J. Clin. Epidem. 380 (2011).  See also Gordon Guyatt, Andrew Oxman, et al., for the GRADE Working Group, “GRADE: an emerging consensus on rating quality of evidence and strength of recommendations,” 336 Brit. Med. J. 924 (2008).

Of the 20 GRADE papers planned, nine have been published to date in the Journal of Clinical Epidemiology:

01 Intro – GRADE evidence profiles & summary of findings tables

02 Framing question & deciding on important outcomes

03 Rating quality of evidence

04 Rating quality of evidence – study limitations (risk of bias)

05 Rating the quality of evidence—publication bias

06 Rating quality of evidence – imprecision

07 Rating quality of evidence – inconsistency

08 Rating quality of evidence – indirectness

09 Rating up quality of evidence

The GRADE guidance papers focus on the efficacy of treatments and interventions, but in doing so, they evaluate “effects” and are thus applicable to the etiologic issues of alleged harm that find their way into court.  The papers build on other grading systems advanced previously by the Oxford Centre for Evidence-Based Medicine, the U.S. Preventive Services Task Force, the Agency for Healthcare Research and Quality (AHRQ), and the Cochrane Collaboration, as well as many individual professional organizations.

GRADE has had some success in harmonizing disparate grading systems, and in forging a consensus among organizations that had been using their own systems, such as the World Health Organization, the American College of Physicians, the American Thoracic Society, the Cochrane Collaboration, the American College of Chest Physicians, the British Medical Journal, and Kaiser Permanente.

There are many other important efforts to provide consensus support for improving the quality of the design, conduct, and reporting of published studies, as well as the interpretation of those studies once published.  Although the RMSE 3d does a good job of introducing its readers to the basics of study design, it could have done considerably more to help judges become discerning critics of scientific studies and of conclusions based upon individual or multiple studies.

Historians As Expert Witnesses – A Wiki

October 28th, 2011

“The one duty we owe to history is to rewrite it.”

Oscar Wilde, The Critic As Artist (1891)

“What will history say?  History, sir, will tell lies as usual.”

George Bernard Shaw, The Devil’s Disciple (1901)

* * * * * * * * * * * * * * * * * * * * * * * * *

The Defense Research Institute recently announced that Bill Childs, a professor at the Western New England University School of Law, will be speaking on the use of historians as expert witnesses in litigation.  Having puzzled over this very issue in previous writings, I look forward to Professor Childs’ contributions on the issue.  The announcement also noted Professor Childs’ creation, the “Historians as Experts Wiki,” which I knew about, but had not previously visited.

The wiki is a valuable resource of information about historians who have participated in the litigation process in all manner of cases, including art, asbestos, creationism, Native Americans, the Holocaust, products liability, intellectual property, and voting rights.  There are pages for each historian witness, including expert witnesses in other fields who have given testimony of an explicitly historical nature.  The website is still in its formative stages, but it holds great promise as a resource for lawyers who are researching historians who have been listed as expert witnesses in their cases.

Most of my musings about historians as expert witnesses have been provoked by those who have testified about the history of silicosis.  Last year, I presented at a conference sponsored by the International Commission on Occupational Health (ICOH), about such historians.  See “A Walk on the Wild Side” (July 16, 2010).  My presentation abstract, along with all the proceedings of that conference, will be published next year as “Courting Clio: Historians and Their Testimony in Products Liability Action,” in: Brian Dolan and Paul Blanc, eds., At Work in the World: Proceedings of the Fourth International Conference on the History of Occupational and Environmental Health, Perspectives in Medical Humanities, University of California Medical Humanities Consortium, University of California Press (2012) (in press).

Philadelphia Courts – Structural Bias and Reverse Bifurcation

October 27th, 2011

When I studied federal courts in law school, some of the most interesting cases involving federal diversity and removal jurisdiction were decisions of the Third Circuit, on appeals from the Eastern District of Pennsylvania.  At the time, it did not occur to me that there must be strong incentives to push the boundaries of federal jurisdiction so hard to avoid state court.  A few years later, when I started to try cases in the Philadelphia County Court of Common Pleas, I “got it.”

You probably do not need to have a doctorate in economics to object when someone pisses on you, and calls it rain.  Still, it is comforting to have corroboration from someone with a doctorate.

Joshua D. Wright, a professor of law and economics at George Mason University School of Law, has written up the results of a study, “Are Plaintiffs Drawn to Philadelphia’s Civil Courts? An Empirical Examination,” published by the International Center for Law & Economics.  Professor Wright finds that the Philadelphia civil court system contains significant structural biases, which make the Philadelphia Court of Common Pleas (PCCP) a magnet for plaintiffs from around the country, and which inflate verdicts and settlements in civil cases.

One such structural bias is the existence of a Complex Litigation Center.  Some of the judges and administrators in charge of the Center have seen their role as rainmakers, bringing litigation business to Philadelphia.  Of course, proper venue and the doctrine of forum non conveniens may tend to get in the way of such an official business plan.

Another structural bias in the Philadelphia courts is the automatic, unthinking use of a procedure called reverse bifurcation.  Typical bifurcation requires the plaintiff to establish liability before proceeding to causation and damages, but reverse bifurcation puts causation and damages first.  This bizarre procedure was first urged by Johns-Manville lawyers in asbestos litigation, to avoid the shame and shock of having the jury hear their company’s liability case at the same time that it heard the evidence of whether plaintiff was injured.  Reverse bifurcation gave them a chance to sanitize the trial on medical causation.  If they lost an up-or-down medical issue, the Johns-Manville lawyers could settle to avoid having the ugly liability evidence shared with the jury.

Johns-Manville soon filed for bankruptcy, but the plaintiffs’ bar learned that reverse bifurcation was a wonderful procedure.  They could get a verdict after three days of trial, and the second phase of the case was virtually untriable by the defense.  Why?  Because the plaintiffs’ lawyers found that they could inject their liability case surreptitiously into the first phase.  Claiming relevance to fear and emotional distress, plaintiffs’ counsel asked their clients whether they ever contemplated the horror of living with the increased risks of disease they now supposedly faced, and plaintiffs responded that they had no idea of the risks when they worked at the shipyards, refineries, or other workplaces.  In summation, plaintiffs’ counsel would slip in something like “After the last few days, you, members of the Jury, now know more about asbestos than my client did after 30 years of working in the shipyard.”  Defense objections and motions in limine were studiously ignored.  Who needs to prove a failure to warn, when you can simply assert it?

Egregiously, the reverse bifurcation procedure stuck, even when defendants, unlike Johns-Manville, had potent defenses.  Some Philadelphia judges, in second phase trials, tolerate indignant arguments from plaintiff’s counsel, to the effect that first the (recalcitrant) defendant caused this injury to his client, and now that defendant wants to take away plaintiff’s money, which the jury so thoughtfully, carefully, and justly awarded in the first phase.  Winning a second phase trial, in a case that has been reverse bifurcated, is a bit like cleaning out the Augean stables.

Some judges even went so far, in phase II liability trials, as to sever crossclaims of the non-settling defendant.  This procedural maneuver required the defendant to post a bond for the entire judgment, without any offsets, in order to pursue an appeal.  The lack of a final judgment seemed not to disturb anyone other than the victimized defendant.

Not all Philadelphia judges were keen on these inequitable procedures.  I recall trying an asbestos case in front of Judge Levan Gordon, who refused to be bullied by the head of the Complex Litigation Center into reverse bifurcating asbestos trials.  (O’Donnell v. Celotex Corp., PCCP July Term 1982, No. 1619; May 1989)  Judge Gordon had his own strong medicine for defendants:  he tried the cases on all issues, with no bifurcation of punitive damages.  Judge Gordon tried my case, which was prosecuted by now-Philadelphia Judge Sandy Byrd, straight through.  Because my adversary, Sandy Byrd, insisted on pressing negligence and punitive damages, I was able to try an empty-chair defense against the United States government, which owned and ran the Philadelphia Naval Shipyard, where plaintiff worked.  I was also able to put on a state-of-the-art defense.  And my jury saw what juries rarely see in Philadelphia:  the complete story.  They refused to hold my clients responsible for what really was the negligence of the government, even though I had a weak medical defense.

The head of the Complex Litigation Center was furious that Judge Gordon had taken up three weeks of courtroom time.  Her Honor was deaf to explanations that it was plaintiffs’ choice to pursue negligence and punitive damages, which claims opened the door to the sophisticated intermediary and state-of-the-art defenses.  Somehow it was the defendants’ fault for tying up a courtroom, and for derailing the all-important case statistics.

Then, as now, there are some excellent judges in Philadelphia, who are intent on trying cases fairly and impartially, with even-handed procedures.  And then there are other judges, who have helped create Philadelphia’s reputation, and the statistics that support Professor Wright’s conclusions.

Manufacturing Certainty

October 25th, 2011

Steven Wodka is a plaintiffs’ lawyer, based in New Jersey, who has worked closely, for many years, with Dr. David Michaels, as his paid expert witness.  Yes, the David Michaels who is now the head of the Occupational Safety and Health Administration (OSHA).

When Michaels was nominated for his current post, the Democratic majority leaders in the Senate protected him from hearings, which would have revealed Michaels’ deep and disturbing conflicts of interest.  The Democratic Senators succeeded in their efforts, and Michaels was confirmed as undersecretary of the Department of Labor, on a voice vote, without hearings.

Mr. Wodka may have lost his friend, colleague, and expert witness to OSHA, but at the same time he gained an ally in his litigation efforts on behalf of plaintiffs.  Wodka, who litigates in New Jersey and elsewhere, was troubled by court decisions holding that OSHA’s Hazard Communication regulations preempted his state-law tort claims.  See, e.g., Bass v. Air Products, 2006 WL 1419375 (N.J. App. Div. 2006) (holding that OSHA’s hazard communication standard was a comprehensive regulatory scheme that preempted state tort failure-to-warn claims for warnings that complied with federal regulations).

Wodka may have lost his expert witness (for a while), but he gained an inside track to the Department of Labor.  Disappointed by New Jersey’s appellate court, Wodka sought an advisory opinion from the Department of Labor on the preemptive effect of HazCom.  See David Schwartz, “Solicitor Says Hazard Communication Rule Does Not Preempt Failure-to-Warn Lawsuits,” BNA (October 20, 2011).

The Department of Labor, now under the control of his friend and paid expert witness, Dr. Michaels, did not disappoint.  Solicitor of Labor M. Patricia Smith, in a letter dated October 18, 2011, wrote Mr. Wodka that, notwithstanding what the appellate courts may have told him, he was correct after all.  OSHA’s Hazard Communication Standard, 29 C.F.R. § 1910.1200(a)(2), does not, according to the Department, preempt state tort claims alleging failures to warn.

The solicitor relied upon Section 4(b)(4) of the OSH Act, which states that nothing in the Act is intended to “enlarge or diminish or affect in any other manner the common law or statutory rights, duties or liabilities of employers and employees under any law with respect to injuries, diseases, or death arising out of, or in the course of, employment.”  The OSH Act, however, in making this disclaimer, was focused on the employer-employee relationship, with its attendant duties, rights, and obligations.  Failure-to-warn claims arise out of laws, whether statutory or common law, designed to protect consumers.  The solicitor’s analysis misses the key point that a comprehensive scheme, such as the Hazard Communication Standard and its regulations, applies to strangers to the employer-employee relationship, and constrains the nature and content of warning communications to the employees of purchasers of chemical products and raw materials.

The solicitor was clear that “a definitive determination of conflict can only be made based on the particulars of each case.”  Smith Letter, at footnote 4.  This slight speedbump did not slow down Mr. Wodka, who was quoted by the BNA as saying that “[t]his letter makes the question clear,” and “I’m already going to move for reconsideration of one of my cases based on this letter.”

It is good to have friends in powerful places.

Of course, there is a good deal of irony involved in this story.  David Michaels has made a career out of scolding industry over conflicts of interest.  Michaels’ book, Doubt Is Their Product, gets waved around in courtrooms when defense expert witnesses testify that the plaintiffs’ evidence fails to show that a product causes harm, or has caused plaintiff’s harm.  Some people may find this scolding a little irritating, especially from someone, like Michaels, who fails to disclose his own significant conflicts of interest, from monies received as a testifying and consulting expert witness, and from running an organization, The Project on Scientific Knowledge and Public Policy (SKAPP), bankrolled by the plaintiffs’ counsel in the silicone gel breast implant litigation.

Doubt is not such a bad thing in the face of uncertain and inconclusive evidence.  We could use more doubt, and open-minded thought.  As Bertrand Russell wrote some years ago:

“The biggest cause of trouble in the world today is that the stupid people are so sure about things and the intelligent folks are so full of doubts.”

Late Professor Berger’s Introduction to the Reference Manual on Scientific Evidence

October 23rd, 2011

In several posts, I have addressed isolated issues in Professor Margaret Berger’s introductory chapter to the third edition of the Reference Manual on Scientific Evidence (RMSE 3d).  Let me back up and address the bigger, more disturbing picture.

Professor Berger was a well-respected evidence scholar, who wrote about Daubert issues throughout her career.  See generally Edward K. Cheng, “Introduction: Festschrift in Honor of Margaret A. Berger,” 75 Brooklyn L. Rev. 1057 (2010).  Along with Judge Jack Weinstein, she was a co-author of Weinstein’s Evidence and Cases and Materials on Evidence.  Berger was intellectually opposed to the Daubert enterprise.  See, e.g., Margaret A. Berger & Aaron D. Twerski, “Uncertainty and Informed Choice: Unmasking Daubert,” 104 Mich. L. Rev. 257 (2005).  This opposition is clearly reflected in Berger’s chapter in the RMSE 3d.

Over the course of several years, Berger organized and supervised a series of symposia, Science for Judges.  Berger’s symposia involved many respected authors as well as some highly partisan, pro-plaintiff scholars.  Berger also participated in some of the four so-called Coronado Conferences, which featured discussions, with subsequent publications, on expert witness issues.  Both Science for Judges and the Coronado Conferences were sponsored by SKAPP, the Project on Scientific Knowledge and Public Policy, an anti-Daubert advocacy group, headed up mostly by plaintiffs’ expert witnesses.

According to SKAPP’s website, the organization enjoyed past support from the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability litigation.  SKAPP has consistently misrepresented the funding source of its anti-Daubert organization.  What SKAPP hides is that this “fund” is nothing more than plaintiffs’ counsel’s walking-around money from MDL 926, which involved, ironically, claims for autoimmune disease allegedly caused by silicone gel breast implants.  This MDL collapsed after 1999, when court-appointed experts and then the Institute of Medicine declared that the scientific evidence did not support plaintiffs’ causal claims.  See Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans”; “[t]he breast implant litigation was largely based on a litigation fraud. … Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”)

Flush with silicone MDL “common benefits money,” plaintiffs’ counsel helped fund SKAPP, rather than returning the money to their clients.  See Ralph Klier v. Elf Atochem North America Inc., 2011 U.S. App. LEXIS 19650 (5th Cir. 2011) (holding that district court abused its discretion in distributing residual funds from class action over arsenic exposure to charities; directing that residual funds be distributed to class members with manifest personal injuries).  As with all common benefit funds in multi-district litigations, the fund in MDL 926 was established pursuant to a court order, but it was certainly not money from the federal courts; SKAPP’s funding was from plaintiffs’ lawyers, who had been rebuffed and refuted by science in the courtroom.  Some of those plaintiffs’ lawyers used their left-over “walking-around” money, laundered through SKAPP, to help sponsor anti-Daubert articles in several fora, including Berger’s Science for Judges symposia, and the Coronado Conferences.  See “SKAPP A LOT” (April 30, 2010).

Given the misleading propaganda from SKAPP about the sources of its funding, Professor Berger may well have been misled, along with other scholars who participated at SKAPP-funded events.  On the other hand, I would have hoped that these scholars were aware that the “Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Liability litigation,” was nothing more than plaintiffs’ counsels’ spending allowance for advancing their own litigation goals.

Back in 2000, Professor Berger wrote a similar introductory chapter on admissibility of expert witness testimony in the second edition of the RMSE.  The second edition’s chapter, however, was decidedly less partisan, with relatively neutral presentations and discussions of the leading Supreme Court and lower court decisions.  Berger’s opposition to judicial gatekeeping was subdued and in check, as befitted a neutral introduction in a volume published by the Federal Judicial Center.

The third edition of the RMSE features a very different introduction by Professor Berger.  The gloves are off, and so is any pretense at non-partisanship.

Berger, in her chapter in RMSE 3d, provides a detailed discussion of Daubert, Joiner, Kumho Tire, and Weisgram, but remarkably, she offers virtually no discussion of the 2000 amendments to, and revision of, Rule 702, adopted after she wrote the RMSE 2d chapter.  The actual text of the Rule, which is now the operative, controlling legal language, is not set out in her RMSE 3d chapter; nor does Berger present any of the discussion from the Advisory Committee notes on the scope and purpose of the 2000 revision.  Instead, Berger reports, and acquiesces in, a loose practice employed by some trial courts that continue to cite and rely upon Daubert, or Circuit-level pre-2000 precedent, without mentioning the new Rule.  Later in the chapter, Berger does discuss a specific-causation decision by Judge Jack Weinstein, In re Zyprexa, 2009 WL 1357236 (E.D.N.Y. May 12, 2009), in which he excluded the expert witness.  A footnote makes clear that Judge Weinstein held that the witness’s testimony failed the three prongs of the new Rule 702.  RMSE 3d at 24 & n.64.  This discussion obscures as much as it illustrates that the Rule, as amended, is the operative language.  The chapter fails to note that Judge Weinstein’s practice of citing the actual Rule is correct as a matter of legal process.  Berger is not shy elsewhere about criticizing trial judges’ practices, so her passivity in the face of this disregard of a statutory revision of Rule 702 is difficult to understand, except as a way to dodge the mandates of the revised Rule.

The second edition had a lengthy discussion of Rosen v. Ciba-Geigy Corp., 78 F.3d 316, 319 (7th Cir.), cert. denied, 519 U.S. 819 (1996), where Judge Posner famously declared “the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”  See Margaret A. Berger, “The Supreme Court’s Trilogy on the Admissibility of Expert Testimony,” RMSE 2d 9, 24 (2000).  In the RMSE 3d, Rosen is gone; in its place we have the philosophy of Milward, with its radical leveling of evidence and expert witness opinion.  Remarkably, the cite to Milward had to have been added after Professor Berger’s death, but she no doubt would have approved.  There are no counterbalancing citations to important decisions, decided before Professor Berger’s death, that reversed trial judges for inadequate gatekeeping, such as Tamraz v. Lincoln Elec. Co., 620 F.3d 665 (6th Cir. 2010), cert. denied, ___ U.S. ___ (2011).

As an academic scholar and a citizen, Berger was entitled to her views about Daubert.  In her lifetime, she wrote and spoke about those views, sincerely and passionately.  Her writings and lectures helped provoke an important discussion on the role of science in the courtroom.  Her selection to introduce a National Research Council volume on science in the courtroom, however, seems dubious, given her partisan views.  One can only imagine the hue and cry if, say, Peter Huber (of Galileo’s Revenge fame) were selected to write the volume’s introduction to the law of expert witness admissibility, or if tobacco companies had funded Science for Judges seminars, with money laundered through not-for-profit organizations.

Libertine View of Expert Witness Admissibility

Berger complains that the Federal Rules of Evidence were intended to be interpreted liberally in favor of the admissibility of evidence.  RMSE 3d at 36 (“the preference for admissibility contained both in the Federal Rules of Evidence and in Daubert itself”).  The word “liberal” does not appear in the Federal Rules of Evidence.  Instead, the Rules contain an explicit statement of how judges must construe and apply their evidentiary provisions:

“These rules shall be construed to secure fairness in administration, elimination of unjustifiable expense and delay, and promotion of growth and development of the law of evidence to the end that the truth may be ascertained and proceedings justly determined.”

Rule 102 (“Purpose and Construction”).

Berger does not, nor can she, explain how a “let it all in” approach helps to secure fairness, to eliminate unjustifiable expense and delay, or to promote just determinations.  This would be a most illiberal result.  The truth will not be readily ascertained if expert witnesses are permitted to pass off hypotheses and ill-founded conclusions as scientific knowledge.

In any event, we should resist the mechanical, outcome-determinative interpretation of “liberal.”  Bertrand Russell presented a much more compelling understanding of what it means to have a liberal outlook in human enterprises:

“The essence of the liberal outlook lies not in what opinions are held, but in how they are held: instead of being held dogmatically, they are held tentatively, and with a consciousness that new evidence may at any moment lead to their abandonment. This is the way opinions are held in science, as opposed to the way in which they are held in theology.”

Bertrand Russell, “Philosophy and Politics,” in Unpopular Essays 15 (N.Y. 1950) (emphasis in original).  Lord Russell’s admonition counsels greater, not less, skepticism in the liberal outlook on opinions that lie at the fringes, and beyond the fringes, of human knowledge.

Now, it is true that the Supreme Court, back in 1993, observed that the “Rule’s basic standard of relevance … is a liberal one.”  Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 587, 588 (1993).  Similarly, the Court spoke of the Rules’ general “liberal thrust” in relaxing barriers to opinion testimony.  But in adopting an epistemic standard, rather than a nose-counting, sociological standard of “general acceptance,” the Court did, in fact, liberalize the rules of admissibility for expert witness opinions.  Implicit in Professor Berger’s critique is an unhappiness with both the liberal epistemic approach and the conservative general-acceptance approach.  The principal remaining option apparently would be Ferebee‘s libertine, “let it all in” approach, which was rejected by the Supreme Court and by Congress.

Serious Omissions in Berger’s “Admissibility of Expert Testimony”

A. Short Shrifting The Rules

I have previously written about the complete omission of Rule 703 and its role in ensuring the trustworthiness of expert witness opinion.  See New Reference Manual on Scientific Evidence Short Shrifts Rule 703 (Oct. 16, 2011).  And above, I have explored how Professor Berger studiously ignored the amended Rule 702 itself, in order to hold on to inconsistent dicta in cases that predated the statutory amendment.

The Federal Rules of Evidence are statutory law.  In 1972, the Rules were adopted by order of the Supreme Court, and were transmitted by the Chief Justice to Congress.  By law, the proposed rules “shall have no force or effect except to the extent, and with such amendments, as they may be expressly approved by Act of Congress.”  Pub. L. 93-12, Mar. 30, 1973, 87 Stat. 9.  The Supreme Court has made clear that the Federal Rules of Evidence are legislatively enacted and that the Court must interpret them as it would any statute.  See, e.g., Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 587 (1993) (courts must “interpret the legislatively enacted Federal Rules of Evidence as [they] would any statute”); United States v. Salerno, 505 U.S. 317, 322 (1992) (refusing to ignore the plain language of Rule 802 and 803; “To respect [the legislature’s] determination, we must enforce the words that it enacted.”); Beech Aircraft Corp. v. Rainey, 488 U.S. 153, 163 (1988).

One of the key lessons of Daubert itself was that the Frye rule did not survive the 1972 enactment of the Federal Rules of Evidence, given the lack of reference to the Frye rule in Article VII of the Rules.  The Rules trump precedent.  See David Bernstein, “Courts Refusing to Apply Federal Rule of Evidence 702” (May 6, 2006) (arguing that the language of the 2000 amended Rule 702 trumps the various dicta scattered about the Daubert quartet as a matter of legal process).  But see Glen Weissenberger, “The Proper Interpretation of the Federal Rules of Evidence: Insights from Article VI,” 30 Cardozo L. Rev. 4 (2009) (arguing, admittedly contrary to Supreme Court precedent and the majority of evidence scholars, that the Federal Rules of Evidence are something more akin to a codification of common law, and that the usual canons of statutory interpretation do not fully apply).

B.  Ignoring the Hierarchy of Evidence

Professor Berger not only omits consideration of the reasonableness of relying upon individual scientific studies; she also fails to give any consideration to a hierarchy of evidence, which distinguishes between and among study designs.  To some extent, the RMSE 3d chapters on epidemiology and on medical testimony remedy this failure, but Berger’s chapter is badly out of sync with key chapters in the RMSE 3d, as well as with how science evaluates claims of causality and reaches conclusions of causality (or not) from multiple studies of varying designs and quality.  See RMSE 3d, at 561 (noting that certain study designs, such as cross-sectional and ecological studies, are frequently unsuitable for supporting inferences of causal association); id. at 723-34 (describing the hierarchy of evidence in which some studies may raise interesting questions without offering much in the way of answering those questions).  The result of Berger’s treatment is that evidence is “leveled,” allowing litigants to escape meaningful gatekeeping as long as they can point to some study, regardless of the study’s invalidity or poor quality.

Berger’s Concerns About Credibility

A. The Credibility of Theories

Berger worries that the Rule 702 gatekeeping process leads courts to make credibility determinations about expert witnesses and their scientific theories.  RMSE 3d at 36.  Surely federal judges have at least the ability to distinguish analytically between the credibility of witnesses and the scientific opinions that they proffer.  As for the credibility of experts’ theories, I confess it is difficult to understand what Berger may have had in mind other than the actual requirements of Rule 702 itself.  If the proffered testimony is not based upon:

1. sufficient facts or data,

2. the product of reliable principles and methods, and

3. a reliable application of principles and methods to the facts of the case

then, no doubt, the testimony will be unreliable and incredible.  The clear lesson of expert witness litigation, and of science in the law generally, is that qualified and apparently credible expert witnesses sometimes advance opinions and conclusions that fail one or more of the requirements of Rule 702.  Berger seems to have conflated reliability and credibility as a way of waving judges off any searching inquiry into the former.

B. The Credibility of Defense Expert Witnesses

Without any substantial support in case law or in the Rules, Professor Berger posits a concern over whether courts should permit a broad inquiry into the defense expert witnesses’ relationships with the defendant.  RMSE 3d at 21-22.  Berger worries that defendants will support their Daubert challenges with testimony from academics from “highly respected academic institution[s],” which likely receive donations and research grants from private corporations.

The posited concern is curious because it assumes that the “Daubert” challenge is to the plaintiff’s expert witness.  Accepting the assumption, why should not the concern be over whether the plaintiffs’ expert witnesses are compromised by their bias, whether financial or positional?  Berger’s assumption ignores the fact that the credibility and qualifications of expert witnesses are generally not at issue in a challenge to the reliability of proffered opinion testimony.

Berger’s entire discussion of credibility is a rather fanciful and far-fetched attempt to inject credibility into Rule 702 determinations, in order to argue that such determinations must be left for the ultimate trier of fact, the jury, which is charged with resolving credibility issues.

Berger’s discussion is itself an incredibly lopsided and biased attack on defendants’ expert witnesses.  Her discussion is also beside the point of the Rule 702 and 703 evidentiary issues.  Courts should be focused on the reasonableness of the challenged expert witness’s reliance upon facts and data, and on whether the witness has used the methods of science in a reliable way to reach his or her opinions.  Furthermore, there is a stark asymmetry between plaintiffs and defendants, and their expert witnesses, with respect to litigation bias.  Defense counsel and defense expert witnesses (assuming that they are financially compensated) stand to lose by having courts exclude plaintiffs’ expert witnesses and dismiss plaintiffs’ claims.  Plaintiffs’ expert witnesses and plaintiffs’ counsel, collectively the litigation industry, have everything to gain and nothing to lose by abrogating the gatekeeping process.  Professor Berger’s introduction to expert witness admissibility in RMSE 3d, wittingly or not, attempts to aid that litigation industry.

New Reference Manual on Scientific Evidence Short Shrifts Rule 703

October 16th, 2011

In “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 19, 2011), I wrote about how Federal Rule of Evidence 703 is generally ignored and misunderstood in current federal practice.  The Supreme Court, in deciding Daubert, shifted the focus to Rule 702, as the primary tool to deploy in admitting, as well as limiting and excluding, expert witness opinion testimony.  The Court’s decision, however, did not erase the need for an additional, independent rule to control the quality of inadmissible materials upon which expert witnesses rely.  Indeed, Rule 702, as amended in 2000, incorporated much of the learning of the Daubert decision, and then some, but it does not address the starting place of any scientific opinion:  the data, the analyses (usually statistical) of data, and the reasonableness of relying upon those data and analyses.  Instead, Rule 702 asks whether the proffered testimony is based upon:

  1. sufficient facts or data,
  2. the product of reliable principles and methods, and
  3. a reliable application of principles and methods to the facts of the case

Noticeably absent from Rule 702, in its current form, is any directive to determine whether the proffered expert witness opinion is based upon facts or data of the sort upon which experts in the pertinent field would reasonably rely.  Furthermore, Daubert did not address the fulsome importation and disclosure of untrustworthy hearsay opinions through Rule 703.  See Problem Child (discussing the courts’ failure to appreciate the structure of peer-reviewed articles, and the need to ignore the discussion and introduction sections of such articles as often containing speculative opinions and comments).  See also Luciana B. Sollaci & Mauricio G. Pereira, “The introduction, methods, results, and discussion (IMRAD) structure: a fifty-year survey,” 92 J. Med. Libr. Ass’n 364 (2004); Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Br. Med. J. 1093, 1093 (2004) (advising readers on how to avoid being misled by published literature, and counseling readers to “Read only the Methods and Results sections; bypass the Discuss section.”) (emphasis added).

Given this background, it is disappointing but not surprising that the new Reference Manual on Scientific Evidence severely slights Rule 703.  Either a word search in the PDF version or the index at the end of the book tells the story:  there are five references to Rule 703 in the entire RMSE!  The statistics chapter has an appropriate but fleeting reference:

“Or the study might rest on data of the type not reasonably relied on by statisticians or substantive experts and hence run afoul of Federal Rule of Evidence 703. Often, however, the battle over statistical evidence concerns weight or sufficiency rather than admissibility.”

RMSE 3d at 214. At least this chapter acknowledges, however briefly, the potential problem that Rule 703 poses for expert witnesses.  The chapter on survey research similarly discusses how the data collected in a survey may “run afoul” of Rule 703.  RMSE 3d at 361, 363-364.

The chapter on epidemiology takes a different approach by interpreting Rule 703 as a rule of admissibility of evidence:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible,184 as it tends to make an issue in dispute more or less likely.185

Id. at 610.  This view is mistaken.  Sufficient rigor in an epidemiologic study is certainly needed for reliance by an expert witness, but such rigor does not make the study itself admissible; the rigor simply permits the expert witness to rely upon a study that is typically several layers of inadmissible hearsay.  See Reference Manual on Scientific Evidence v3.0 – Disregarding Study Validity in Favor of the “Whole Gamish” (Oct. 14, 2011) (discussing the argument put forward by the epidemiology chapter for considering Rule 703 as an exception to the rule against hearsay).

While the treatment of Rule 703 in the epidemiology chapter is troubling, the introductory chapter on the admissibility of expert witness opinion testimony, by the late Professor Margaret Berger, really sets the tone and approach for the entire volume.  See Berger, “The Admissibility of Expert Testimony,” RMSE 3d 11 (2011).  Professor Berger never mentions Rule 703 at all!  Gone and forgotten.  The omission is not, however, an oversight.  Rule 703, with its requirement that each study relied upon qualify as “reasonably relied upon,” as measured by the practice of experts in the appropriate discipline, is the refutation of Berger’s argument that somehow a pile of weak, flawed studies, taken together, can yield a scientifically reliable conclusion.  See “Whole Gamish” (Oct. 14, 2011).

Rule 703 is not merely an invitation to trial judges; it is a requirement to look at the discrete studies relied upon to determine whether the building blocks are sound.  Only then can the methods and procedures of science begin to analyze the entire evidentiary display to yield reliable scientific opinions and conclusions.

Reference Manual on Scientific Evidence v3.0 – Disregarding Study Validity in Favor of the “Whole Gamish”

October 14th, 2011

There is much to digest in the new Reference Manual on Scientific Evidence, third edition (RMSE 3d).  Much of the volume provides solid information on the individual scientific and technical disciplines it covers.  Although the information is easily available from other sources, there is some value in collecting the material in a single volume for the convenience of judges.  Of course, given that this information is provided to judges from an ostensibly neutral, credible source, lawyers will naturally focus on what is doubtful or controversial in the RMSE.

I have already noted some preliminary concerns, however, with some of the comments in the Preface, by Judge Kessler and Dr. Kassirer.  See “New Reference Manual’s Uneven Treatment of Conflicts of Interest.”  In addition, there is a good deal of overlap among the chapters on statistics, epidemiology, and medical testimony.  This overlap is at first blush troubling because the RMSE has the potential to confuse and obscure issues by having multiple authors address them inconsistently.  This is an area where reviewers should pay close attention.

From first looks at the RMSE 3d, there is a good deal of equivocation between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.  (As I have pointed out, the new RMSE did not do quite so well in addressing its own conflicts of interest.  See “Toxicology for Judges – The New Reference Manual on Scientific Evidence (2011).”)

The strengths of the chapter on statistical evidence, updated from the second edition, remain, as do some of the strengths and flaws of the chapter on epidemiology.  I hope to write more about each of these important chapters at a later date.

The late Professor Margaret Berger has an updated version of her chapter from the second edition, “The Admissibility of Expert Testimony,” RMSE 3d 11 (2011).  Berger’s chapter has a section criticizing “atomization,” a process she describes pejoratively as a “slicing-and-dicing” approach.  Id. at 19.  Drawing on the publications of Daubert-critic Susan Haack, Berger rejects the notion that courts should examine the reliability of each study independently. Id. at 20 & n. 51 (citing Susan Haack, “An Epistemologist in the Bramble-Bush: At the Supreme Court with Mr. Joiner,” 26 J. Health Pol. Pol’y & L. 217–37 (1999)).  Berger contends that the “proper” scientific method, as evidenced by works of the International Agency for Research on Cancer, the Institute of Medicine, the National Institutes of Health, the National Research Council, and the National Institute of Environmental Health Sciences, “is to consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.” Id. at 19-20 & n.52.  This contention, however, is profoundly misleading.  Of course, scientists undertaking a systematic review should identify all the relevant studies, but some of the “relevant” studies may well be insufficiently reliable (because of internal or external validity issues) to answer the research question at hand.  All the cited agencies, and other research organizations and researchers, exclude studies that are fundamentally flawed, whether as a result of bias, confounding, erroneous data analyses, or related problems.  Berger cites no support for the remarkable suggestion that scientists do not make “reliability” judgments about available studies when assessing the “totality of the evidence.”

Professor Berger, who had a distinguished career as a law professor and evidence scholar, died in November 2010.  She was no friend of Daubert, but remarkably her antipathy has outlived her.  Her critical discussion of “atomization” cites the notorious decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), which was decided four months after her passing. Id. at 20 n.51. (The editors note that the published chapter was Berger’s last revision, with “a few edits to respond to suggestions by reviewers.”)

Professor Berger’s contention about the need to avoid assessments of individual studies in favor of the whole gamish must also be rejected because Federal Rule of Evidence 703 requires that each study considered by an expert witness “qualify” for reasonable reliance by virtue of the study’s containing facts or data that are “of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.”  One of the deeply troubling aspects of the Milward decision is that it reversed the trial court’s sensible decision to exclude a toxicologist, Dr. Martyn Smith, who outran his headlights on issues having to do with a field in which he was clearly inexperienced – epidemiology.

Scientific studies, and especially epidemiologic studies, involve multiple levels of hearsay.  A typical epidemiologic study may contain hearsay leaps from patient to clinician, to laboratory technicians, to specialists interpreting test results, back to the clinician for a diagnosis, to a nosologist for disease coding, to a national or hospital database, to a researcher querying the database, to a statistician analyzing the data, to a manuscript that details data, analyses, and results, to editors and peer reviewers, back to study authors, and on to publication.  Those leaps do not mean that the final results are untrustworthy, only that the study itself is not likely admissible in evidence.

The inadmissibility of scientific studies is not problematic, because Rule 703 permits testifying expert witnesses to formulate opinions based upon facts and data that are not themselves admissible in evidence.  The distinction between relied-upon and admissible studies is codified in the Federal Rules of Evidence, and in virtually every state’s evidence law.

Referring to studies, without qualification, as admissible in themselves is wrong as a matter of evidence law.  The error has the potential to encourage carelessness in gatekeeping expert witnesses’ opinions for their reliance upon inadmissible studies.  The error is doubly wrong if this approach to expert witness gatekeeping is taken as license to permit expert witnesses to rely upon any marginally relevant study of their choosing.  It is therefore disconcerting that the new Reference Manual on Scientific Evidence (RMSE 3d) fails to make the appropriate distinction between admissibility of studies and admissibility of expert witness opinion that has reasonably relied upon appropriate studies.

Consider the following statement from the chapter on epidemiology:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible,184 as it tends to make an issue in dispute more or less likely.185

RMSE 3d at 610.  Curiously, the authors of this chapter have ignored Professor Berger’s caution against slicing and dicing, and speak to a single study’s ability to justify a conclusion. The authors of the epidemiology chapter seem to be stressing that scientifically valid studies should be admissible.  The footnote emphasizes the point:

See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990); cf. Kehm v. Procter & Gamble Co., 580 F. Supp. 890, 902 (N.D. Iowa 1982) (“These [epidemiologic] studies were highly probative on the issue of causation—they all concluded that an association between tampon use and menstrually related TSS [toxic shock syndrome] cases exists.”), aff’d, 724 F.2d 613 (8th Cir. 1984). Hearsay concerns may limit the independent admissibility of the study, but the study could be relied on by an expert in forming an opinion and may be admissible pursuant to Fed. R. Evid. 703 as part of the underlying facts or data relied on by the expert. In Ellis v. International Playtex, Inc., 745 F.2d 292, 303 (4th Cir. 1984), the court concluded that certain epidemiologic studies were admissible despite criticism of the methodology used in the studies. The court held that the claims of bias went to the studies’ weight rather than their admissibility. Cf. Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1109 (5th Cir. 1991) (“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility. . . .”).”

RMSE 3d at 610 n.184 (emphasis in bold, added).  This statement, that studies relied upon by an expert in forming an opinion may be admissible pursuant to Rule 703, is unsupported by Rule 703 and the overwhelming weight of case law interpreting and applying the rule.  (Interestingly, the authors of this chapter seem to abandon their suggestion that studies relied upon “might qualify for the learned treatise exception to the hearsay rule, Fed. R. Evid. 803(18), or possibly the catchall exceptions, Fed. R. Evid. 803(24) & 804(5),” which was part of their argument in the Second Edition of the RMSE.  RMSE 2d at 335 (2000).)  See also RMSE 3d at 214 (discussing statistical studies as generally “admissible,” but acknowledging that admissibility may be no more than permission to explain the basis for an expert’s opinion).

The cases cited by the epidemiology chapter, Kehm and Ellis, both involved “factual findings” in public investigative or evaluative reports, which were independently admissible under Federal Rule of Evidence 803(8)(C).  See Ellis, 745 F.2d at 299-303; Kehm, 724 F.2d at 617-18.  As such, the cases hardly support the chapter’s suggestion that Rule 703 is a rule of admissibility for epidemiologic studies.

Here the RMSE, in one sentence, confuses Rule 703 with an exception to the rule against hearsay, which would otherwise prevent the statistical studies from being received in evidence.  The point is reasonably clear, however, that the studies “may be offered” to explain an expert witness’s opinion.  Under Rule 705, that offer may also be refused.  The offer, however, is to “explain,” not to have the studies admitted in evidence.

The RMSE is certainly not alone in advancing this notion that studies are themselves admissible.  Other well-respected evidence scholars lapse into this position:

“Well conducted studies are uniformly admitted.”

David L. Faigman, et al., Modern Scientific Evidence:  The Law and Science of Expert Testimony v.1, § 23:1, at 206 (2009).

Evidence scholars should not conflate admissibility of the epidemiologic (or other) studies with the ability of an expert witness to advert to a study to explain his or her opinion.  The testifying expert witness really has no need to become a conduit for off-hand comments and opinions in the introduction or discussion section of relied upon articles, and the wholesale admission of such hearsay opinions undermines the court’s control over opinion evidence.  Rule 703 authorizes reasonable reliance upon “facts and data,” not every opinion that creeps into the published literature.

New Reference Manual’s Uneven Treatment of Conflicts of Interest

October 12th, 2011

The new, third edition of the Reference Manual on Scientific Evidence (RMSE) appears to get off to a good start in the Preface by Judge Kessler and Dr. Kassirer, when they note that the Supreme Court mandated federal courts to

“examine the scientific basis of expert testimony to ensure that it meets the same rigorous standard employed by scientific researchers and practitioners outside the courtroom.”

RMSE at xiii.  The preface falters, however, on two key issues, causation and conflicts of interest, which are taken up as an introduction to the new volume.

1. CAUSATION

The authors tell us in squishy terms that causal assessments are judgments:

“Fundamentally, the task is an inferential process of weighing evidence and using judgment to conclude whether or not an effect is the result of some stimulus. Judgment is required even when using sophisticated statistical methods. Such methods can provide powerful evidence of associations between variables, but they cannot prove that a causal relationship exists. Theories of causation (evolution, for example) lose their designation as theories only if the scientific community has rejected alternative theories and accepted the causal relationship as fact. Elements that are often considered in helping to establish a causal relationship include predisposing factors, proximity of a stimulus to its putative outcome, the strength of the stimulus, and the strength of the events in a causal chain.”

RMSE at xiv.

The authors leave the inferential process as a matter of “weighing evidence,” but without saying anything about how the scientific community does its “weighing.”  The language about “proving” causation is also unclear, because “proof” in scientific parlance connotes a demonstration of the sort we typically find in logic or in mathematics.  Requiring “proof” of empirical propositions sets the bar so high that courts must inevitably lower it considerably.  The question, of course, is how low judges will go to admit evidence.

The authors thus introduce hand waving and excuses for why evidence can be weighed differently in court proceedings from the world of science:

“Unfortunately, judges may be in a less favorable position than scientists to make causal assessments. Scientists may delay their decision while they or others gather more data. Judges, on the other hand, must rule on causation based on existing information. Concepts of causation familiar to scientists (no matter what stripe) may not resonate with judges who are asked to rule on general causation (i.e., is a particular stimulus known to produce a particular reaction) or specific causation (i.e., did a particular stimulus cause a particular consequence in a specific instance). In the final analysis, a judge does not have the option of suspending judgment until more information is available, but must decide after considering the best available science.”

RMSE at xiv.  But the “best available science” may be pretty crummy, and the temptation to turn desperation into evidence (“well, it’s the best we have now”) is often severe.  The authors of the Preface signal that “inconclusive” is not a judgment open to judges charged with expert witness gatekeeping.  If the authors truly mean to suggest that judges should go with whatever is dished out as “the best available science,” then they have overlooked the obvious:  Rule 702 opens the door to “scientific, technical, or other specialized knowledge,” not to hunches, suggestive but inconclusive evidence, and wishful thinking about how the science may turn out when further along.  Courts have a choice to exclude expert witness opinion testimony that is based upon incomplete or inconclusive evidence.

2. CONFLICTS OF INTEREST

Surprisingly, given the scope of the scientific areas covered in the RMSE, the authors discuss conflicts of interest (COI) at some length.  Conflicts of interest are a fact of life in all endeavors, and it is understandable that judges and juries are counseled to try to identify, assess, and control them.  COIs, however, are weak proxies for unreliability.  The emphasis given here is undue because federal judges are misled into thinking that they can discern unreliability from COI, when they should be focused on the data and the analysis.

The authors of the Preface set about to use COI as a basis for giving litigation plaintiffs a pass, and for holding back studies sponsored by corporate defendants.

“Conflict of interest manifests as bias, and given the high stakes and adversarial nature of many courtroom proceedings, bias can have a major influence on evidence, testimony, and decisionmaking. Conflicts of interest take many forms and can be based on religious, social, political, or other personal convictions. The biases that these convictions can induce may range from serious to extreme, but these intrinsic influences and the biases they can induce are difficult to identify. Even individuals with such prejudices may not appreciate that they have them, nor may they realize that their interpretations of scientific issues may be biased by them. Because of these limitations, we consider here only financial conflicts of interest; such conflicts are discoverable. Nonetheless, even though financial conflicts can be identified, having such a conflict, even one involving huge sums of money, does not necessarily mean that a given individual will be biased. Having a financial relationship with a commercial entity produces a conflict of interest, but it does not inevitably evoke bias. In science, financial conflict of interest is often accompanied by disclosure of the relationship, leaving to the public the decision whether the interpretation might be tainted. Needless to say, such an assessment may be difficult. The problem is compounded in scientific publications by obscure ways in which the conflicts are reported and by a lack of disclosure of dollar amounts.

Judges and juries, however, must consider financial conflicts of interest when assessing scientific testimony. The threshold for pursuing the possibility of bias must be low. In some instances, judges have been frustrated in identifying expert witnesses who are free of conflict of interest because entire fields of science seem to be co-opted by payments from industry. Judges must also be aware that the research methods of studies funded specifically for purposes of litigation could favor one of the parties. Though awareness of such financial conflicts in itself is not necessarily predictive of bias, such information should be sought and evaluated as part of the deliberations.”

RMSE at xiv-xv.  All in all, rather misleading advice.  Financial conflicts are not the only conflicts that can be “discovered.”  Often expert witnesses will have political and organizational alignments, which will show deep-seated ideological alignments with the party for which they are testifying.  For instance, in one silicosis case, an expert witness in the field of history of medicine testified, at an examination before trial, that his father suffered from a silica-related disease.  This witness’s alignment with Marxist historians and his identification with radical labor movements made his non-financial conflicts obvious, although these COI would not necessarily have been apparent from his scholarly publications alone.

How low will the bar be set for discovering COI?  If testifying expert witnesses are relying upon textbooks, articles, essays, will federal courts open the authors/hearsay declarants up to searching discovery of their finances?

Also misleading is the suggestion that “entire fields of science seem to be co-opted by payments from industry.”  Do the authors mean to exclude the plaintiffs’ lawyer litigation industry, which has grown so large and politically powerful in this country?  In litigations in which I have been involved, I have certainly seen plaintiffs’ counsel, or their proxies – labor unions or “victim support groups” – provide substantial funding for studies.  The Preface authors themselves show an untoward bias in pointing out industry payments without giving balanced attention to other interested parties’ funding of scientific studies.

The attention to COI is also surprising given that one of the key chapters, for toxic tort practitioners, was written by Dr. Bernard D. Goldstein, who has testified in toxic tort cases, mostly (but not exclusively) for plaintiffs.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006); Exxon Corp. v. Makofski, 116 SW 3d 176 (Tex. Ct. App. 2003).  The Makofski case is particularly interesting because Dr. Goldstein was forced to explain why he was willing to opine that benzene caused acute lymphocytic leukemia, despite the plethora of published studies finding no statistically significant relationship.  Dr. Goldstein resorted to the inaccurate notion that scientific “proof” of causation requires 95 percent certainty, whereas he imposed only a 51 percent certainty for his medico-legal testimonial adventures.  Dr. Goldstein also attempted to justify the discrepancy from the published literature by adverting to the lower standards used by federal regulatory agencies and treating physicians.  Id.

These explanations are particularly concerning because they reflect basic errors in statistics and in causal reasoning.  The 95 percent derives from the use of the same percentage in confidence intervals, but the probability involved there is not the probability that the association is correct; it has nothing to do with the degree of belief that an association is real or causal.  (Thankfully the RMSE chapter on statistics gets this right, but my fear is that judges will skip over the more demanding chapter on statistics and place undue weight on the toxicology chapter, written by Dr. Goldstein.)  The reference to federal agencies (OSHA, EPA, etc.) and to treating physicians was meant, no doubt, to invoke precautionary-principle concepts as a justification for some vague, ill-defined, lower standard of causal assessment.
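The statistical point can be made concrete with a small simulation (a toy example with made-up numbers, not anything drawn from the Reference Manual).  The “95 percent” describes the long-run coverage of the interval-generating procedure over many repeated studies; it is not the probability that any particular reported association is real or causal:

```python
import random
import statistics

random.seed(1)
TRUE_MEAN = 10.0          # the "real" value, known only in a simulation
N, TRIALS = 30, 2000      # sample size per study; number of repeated studies
Z = 1.96                  # two-sided 95% normal critical value

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # each simulated study reports one 95% confidence interval
    if m - Z * se <= TRUE_MEAN <= m + Z * se:
        covered += 1

# close to 95% of the intervals cover the true value over the long run,
# but any single study's interval either contains it or does not
print(covered / TRIALS)
```

The 95 percent thus characterizes the procedure, not the conclusion of any one study, and a fortiori it says nothing about how certain an expert witness must be before opining on causation.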

The Preface authors might well have taken their own counsel and conducted a more searching assessment of COI among the authors of the Reference Manual.  Better yet, they might have focused the judiciary on the data and the analysis.

Toxicology for Judges – The New Reference Manual on Scientific Evidence (2011)

October 5th, 2011

I have begun to dip into the massive third edition of the Reference Manual on Scientific Evidence.  To date, there have been only a couple of acknowledgments of this new work, which was released to the public on September 28, 2011.  See “A New Day – A New Edition of the Reference Manual of Scientific Evidence”; and David Kaye, “Prometheus Unbound: Releasing the New Edition of the FJC Reference Manual on Scientific Evidence.”

Like previous editions, the substantive scientific areas are covered in discrete chapters, written by subject matter specialists, often along with a lawyer who addresses the legal implications and judicial treatment of that subject matter.  From my perspective, the chapters on statistics, epidemiology, and toxicology are the most important in my practice and in teaching, and I decided to start with the toxicology.  The toxicology chapter, “Reference Guide on Toxicology,” in the third edition is written by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the law firm of Buchanan Ingersoll, P.C.

CONFLICTS OF INTEREST

At the question and answer session of the public release ceremony, one gentleman rose to note that some of the authors were lawyers with big firm affiliations, which he supposed must mean that they represent mostly defendants.  Based upon his premise, he asked what the review committee had done to ensure that conflicts of interest did not skew or distort the discussions in the affected chapters.  Dr. Kassirer and Judge Kessler responded by pointing out that the chapters were peer reviewed by outside reviewers, and reviewed by members of the supervising review committee.  The questioner seemed reassured, but now that I have looked at the toxicology chapter, I am not so sure.

The questioner’s premise that a member of a large firm will represent mostly defendants, and thus have a pro-defense bias, is probably a common perception among unsophisticated lay observers.  What is missing from this analysis is the realization that although gatekeeping helps the defense lawyers’ clients, it takes away legal work from the firms that represent defendants in the litigations pretermitted by effective judicial gatekeeping.  Erosion of gatekeeping concepts, however, inures to the benefit of plaintiffs, their counsel, and the expert witnesses engaged on behalf of plaintiffs in litigation.

The questioner’s supposition in the case of the toxicology chapter, however, is doubly flawed.  If he had known more about the authors, he would probably not have asked his question.  First, the lawyer author, Ms. Henifin, is known for having taken virulently anti-manufacturer positions.  See Richard M. Lynch and Mary S. Henifin, “Causation in Occupational Disease: Balancing Epidemiology, Law and Manufacturer Conduct,” 9 Risk: Health, Safety & Environment 259, 269 (1998) (conflating distinct causal and liability concepts, and arguing that legal and scientific causal criteria should be abrogated when manufacturing defendant has breached a duty of care).

As for the scientist author of the toxicology chapter, Professor Goldstein, the casual reader of the chapter may want to know that he has testified in any number of toxic tort cases, almost invariably on the plaintiffs’ side.  Unlike the defense lawyer, who loses business revenue when courts shut down unreliable claims, plaintiffs’ testifying or consulting expert witnesses stand to gain by minimalist expert witness opinion gatekeeping.  Given the economic asymmetries, the reader may thus want to know that Prof. Goldstein was excluded as an expert witness in some high-profile toxic tort cases.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005).  No; you will not find the Parker case cited in the Manual’s chapter on toxicology.  (Parker is, however, cited in the chapter on exposure science.)

I have searched but could not find any disclosure of Professor Goldstein’s conflicts of interest in this new edition of the Reference Manual.  I would welcome a correction if I am wrong.  Having pointed out this conflict, I would note that financial conflicts of interest are really nothing compared to ideological conflicts of interest, which often propel scientists into service as expert witnesses.

HORMESIS

One way that ideological conflicts might be revealed is to look for imbalances in the presentation of toxicologic concepts.  Most lawyers who litigate cases that involve exposure-response issues are familiar with the “linear no threshold” (LNT) concept that is used frequently in regulatory risk assessments, and which has metastasized to toxic tort litigation, where LNT often has no proper place.

LNT is a dubious assumption because it claims to “know” the dose response at very low exposure levels in the absence of data.  There is a thin plausibility for genotoxic chemicals claimed to be carcinogens, but even that plausibility evaporates when one realizes that there are defense and repair mechanisms against genotoxicity, which must first be saturated before there can be a carcinogenic response.  Hormesis, by contrast, is today an accepted concept that describes a dose-response relationship showing a benefit at low doses but harm at high doses.
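The contrast between the two shapes can be sketched with toy functions (arbitrary, made-up parameters, purely to illustrate the shapes; no real chemical is modeled):

```python
def lnt_risk(dose, slope=0.1):
    # linear no-threshold: any dose, however small, adds risk
    return slope * dose

def hormetic_risk(dose, benefit=0.05, threshold=5.0, slope=0.1):
    # biphasic (J-shaped) curve: net benefit below a threshold,
    # harm only once defense and repair mechanisms are saturated
    if dose < threshold:
        return -benefit * dose
    return slope * (dose - threshold)

# under LNT every dose is harmful; under hormesis low doses show
# a net benefit and harm appears only above the threshold
for d in [1, 3, 5, 10]:
    print(d, round(lnt_risk(d), 2), round(hormetic_risk(d), 2))
```

The litigation point is that importing the LNT assumption from regulatory risk assessment quietly forecloses the hormetic shape that much of the toxicologic literature reports.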

The toxicology chapter in the Reference Manual has several references to LNT but none to hormesis.  That font of all knowledge, Wikipedia, reports that hormesis is controversial; but so is LNT.  This is the sort of imbalance that may well reflect an ideological bias.

One of the leading textbooks on toxicology describes hormesis:

“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”

Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (internal citations omitted).

Similarly, the Encyclopedia of Toxicology describes hormesis as an important phenomenon in toxicologic science:

“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”

Philip Wexler et al., eds., 2 Encyclopedia of Toxicology 96 (2005).  One might think that hormesis would also be of great interest to federal judges, but they will not learn about it from reading the Reference Manual.

Hormesis research has come into its own.  The International Dose-Response Society, which “focus[es] on the dose-response in the low-dose zone,” publishes a journal, Dose-Response, and a newsletter, BELLE:  Biological Effects of Low Level Exposure.  In 2009, two leading researchers in the area of hormesis published a collection of important papers:  Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (N.Y. 2009).

A check in PubMed shows that LNT has more “hits” than “hormesis” or “hormetic,” but the latter terms still return over 1,267 references, hardly an insubstantial number.  In actuality, there are many more hormetic relationships identified in the scientific literature, which often fails to label the relationship with the terms hormesis or hormetic.  See Edward J. Calabrese and Robyn B. Blain, “The hormesis database: The occurrence of hormetic dose responses in the toxicological literature,” 61 Regulatory Toxicology and Pharmacology 73 (2011) (reviewing about 9,000 dose-response relationships for hormesis, to create a database of various aspects of hormesis).  See also Edward J. Calabrese and Robyn B. Blain, “The occurrence of hormetic dose responses in the toxicological literature, the hormesis database: An overview,” 202 Toxicol. & Applied Pharmacol. 289 (2005) (earlier effort to establish hormesis database).

The Reference Manual’s omission of hormesis is regrettable.  Its inclusion of references to LNT but not to hormesis appears to result from an ideological bias.

QUESTIONABLE SUBSTANTIVE OPINIONS

One would hope that the toxicology chapter would not put forward partisan substantive positions on issues that are currently the subject of active litigation.  Fondly we would hope that any substantive position advanced would at least be well documented.

For at least one issue, the toxicology chapter dashes our fondest hopes.  Table 1 in the chapter presents a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.”  No documentation or citations are provided for this table.  Most of the exposure agent/disease outcome relationships in the table are well accepted, but curiously at least one agent-disease pair, which is the subject of current litigation, is wildly off the mark:

Parkinson’s disease and manganese

Reference Manual at 653.  If the chapter’s authors had looked, they would have found that Parkinson’s disease is almost universally accepted to have no known cause, except among a few plaintiffs’ litigation expert witnesses.  They would also have found that the issue has been addressed carefully, and the claimed relationship or “concern” has been rejected by the leading researchers in the field (who have no litigation ties).  See, e.g., Karin Wirdefeldt, Hans-Olaf Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ. Health Perspect. 1071, 1078 (2010) (“The available evidence from human and nonhuman primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD.”)

WHEN ALL YOU HAVE IS A HAMMER, EVERYTHING LOOKS LIKE A NAIL

The substantive specialist author, Professor Goldstein, is not a physician; nor is he an epidemiologist.  His professional focus on animal and cell research shows, and biases the opinions offered in this chapter.

“In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology.  If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans.”

Reference Manual at 646.

Such extrapolations may make sense in regulatory contexts, where precautionary judgments are of interest, but they can hardly be said to be generally accepted in controversies over actual causation in civil actions.  Crystalline silica, for instance, causes something resembling lung cancer in rats, but not in mice, guinea pigs, or hamsters.  It hardly makes sense to ask juries to decide whether the plaintiff is more like a rat than a mouse.

For a sober second opinion to the toxicology chapter, one may consider the views of some well-known authors:

“Whereas the concordance was high between cancer-causing agents initially discovered in humans and positive results in animal studies (Tomatis et al., 1989; Wilbourn et al., 1984), the same could not be said for the reverse relationship: carcinogenic effects in animals frequently lacked concordance with overall patterns in human cancer incidence (Pastoor and Stevens, 2005).”

Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sciences 223, 224 (2011).

Once again, there is a sense that the scholarship of the toxicology chapter is not as complete or thorough as we would hope.

Diluting “Reasonable Degree of Medical Certainty” – An AAJ-Restatement “Tool” to Help Plaintiffs

October 3rd, 2011

In “The Top Reason that the ALI’s Restatement of Torts Should Steer Clear of Partisan Conflicts,” I pointed out the inappropriateness of advertising the ALI’s Restatement of Torts to the organized plaintiffs’ bar, much as the plaintiffs’ bar advertises potential huge recoveries for the latest tort du jour.  See Michael D. Green & Larry S. Stewart, “The New Restatement’s Top 10 Tort Tools,” Trial 44 (April 2010).

Some of the authors’ tort tool kit may be unexceptionable.  Among these authors’ top ten tort tools, however, is the new Restatement’s edict that “reasonable degree of medical certainty” means, or should mean, nothing more than saying “more likely than not.”  The authors criticize the reasonable certainty standard with an abbreviated rendition of the Restatement’s critique:

“Many courts hold that expert opinion must be expressed in terms of ‘medical or scientific certainty’. Requiring certainty seems to impose a criminal law-like burden of proof that is inconsistent with civil burdens of preponderance of the evidence to establish a fact. Such a requirement is also problematic at best because medical and scientific communities have no such ‘reasonable certainty’ standard. The standard then becomes whatever the attorney who hired the expert tells the expert it means or, absent that, whatever the expert imagines it means. Section 28, comment e, of the Restatement criticizes this standard and makes clear that the same preponderance standard (or ‘more likely than not’ standard), which is universally applied in all aspects of civil cases, also applies to expert testimony.”

Id. at 46-47.

Well, the more likely than not standard is not “universally applied in all aspects of civil cases,” because several states require exemplary damages to be proven by “clear and convincing” or greater evidence.  In some states, the burden of proof in fraud cases is higher than a mere preponderance of the evidence. This premise of the authors’ article is incorrect.

But even if the authors were correct that the preponderance standard applied “in all aspects” of civil cases, their scholarship would remain suspect, as others and I have previously pointed out.  See “Reasonable Degree of Medical Certainty,” and “More Uncertainty About Reasonable Degree of Medical Certainty.”

1. The Restatement’s Treatment of Expert Witness Evidentiary Rules Exceeded the Scope of the Tort Restatement.

The most peculiar aspect of this “top tool” is that it has nothing to do with the law of torts.  The level of certitude required of an expert witness is an evidentiary and procedural issue.  Of course the issue comes up in tort cases, which frequently involve medical and scientific causation opinions, as well as other expert witness opinions.  The issue, however, comes up in all cases that involve expert witnesses:  trusts and estates, regulatory, environmental, securities fraud, commercial, and other cases.

The Restatement of Torts weakly acknowledges its frolic and detour in treating a procedural issue concerning the admissibility of expert witness opinion testimony, by noting that it does “not address any other requirements for the admissibility of an expert witness’s testimony, including qualifications, expertise, investigation, methodology, or reasoning.” Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28, cmt. e (2010).  The certitude issue has nothing special to do with the substantive law of torts, and should not have been addressed in the torts restatement.

2. The Restatement’s Treatment of “Reasonable Degree of Medical Certainty” Has No Relevance to the Burden of Proof in Tort Cases.

The expert witness certitude issue has nothing to do with the burden of proof, and the Restatement should not have confused and conflated the burden of proof with the standard of certitude for expert witnesses.  The clear but unacceptable implication is that expert witnesses in criminal cases must testify to certitude “beyond a reasonable doubt,” and in claims for equitable relief, expert witnesses may share only opinions that are made, in their minds, by “clear and convincing evidence.”  There is no support in law or logic for the identification of witness certitude with parties’ burdens of proof.

Comment e states the critique more fully:

“If courts do interpret the reasonable-certainty standard to require a level of certitude greater than the preponderance-of-the-evidence standard requires, this creates a troubling inconsistency between standards for the admissibility of evidence and the threshold required for sufficiency of proof. The threshold for admissibility should not be higher than the threshold to sufficiency.  Moreover, the reasonable-certainty standard provides no assurance of the quality of the expert’s qualifications, expertise, investigation, methodology, or reasoning.  Thus, the Section adopts the same preponderance standard that is universally adopted in civil cases.  Direct and cross-examination can be employed to flesh out the degree of certainty with which an expert’s opinion is held and to identify opinions that are speculative and therefore inadmissible.”

Id. The critique badly misfires because there is no inconsistency and no trouble in having different standards for the admissibility of opinion evidence and the burden of proof.  As noted, expert witnesses testify on causation and other issues in criminal, equity, and tort cases, all with different burdens of proof.  Juries in criminal and tort cases must apply instructions on burdens of proof to an entire evidentiary display, not just the expert witnesses’ opinions.  In logic and law, there ultimately must be different burdens for admissibility of expert witness testimony and for sufficiency of a party’s proofs.

3. The Restatement’s Treatment of “Reasonable Degree of Medical Certainty” Incoherently Confuses Two Different Standards.

We can see that Comment e’s approach to legislating an equivalence between expert witness certitude and the burden of proof must fail even on its own terms.  Consider the legal consequences for tort claimants, bearing the burden of proof, who produce expert witnesses to opine about key elements (e.g., causation) of their claims by stating that their opinions are held by a mere “preponderance of the evidence.”

If this probability is understood to be only infinitesimally greater than 50%, then courts would have to direct verdicts in many (and perhaps most) cases.

Courts must ensure that a rational jury can find for the party with the burden of proof.  Juries must evaluate the credibility and reliability of expert witnesses, their opinions, as well as the predicate facts for those opinions.  If those expert witness opinions were barely greater than 50% probable on an essential element, then unless the witnesses had perfect credibility, and all predicate facts were as probable as the witnesses claimed, juries would frequently have to reject the witnesses’ opinions.  The bare preponderance of the expert witnesses’ opinions would yield an overall probability for the essential element of less than 50%.
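The arithmetic is simple and, on the Restatement’s own premises, fatal.  A back-of-the-envelope sketch (with hypothetical, illustrative numbers for credibility and the predicate facts, and treating the components as independent for simplicity):

```python
p_opinion = 0.501       # opinion "more likely than not" by the barest margin
p_credibility = 0.95    # hypothetical: jury finds the witness 95% credible
p_predicates = 0.95     # hypothetical: predicate facts 95% probable

# assuming independence, the probability of the essential element
# is the product of the three component probabilities
p_element = p_opinion * p_credibility * p_predicates
print(round(p_element, 3))  # 0.452 -- below the preponderance threshold
```

Unless credibility and the predicate facts are taken as certainties, a bare 50.1% opinion cannot carry an essential element past 50%.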

4. The Restatement Incorrectly Implies that Expert Witnesses Can Quantify Their Opinions in Probabilistic Terms.

There are even more far-reaching problems with simply substituting “more likely than not” for “reasonable degree of medical certainty” as a threshold requirement of expert witness testimony.  Comment e implies that expert witnesses can discern the difference between an opinion that they believe is “more likely than not” and another that is “as likely as not.”  On some occasions, there may be opinions that derive from quantitative reasoning, for which an expert witness could truly say, with some level of certainty, that his or her opinion is “more likely than not.”  On most occasions, however, an expert witness’s degree of certainty is a qualitative opinion that simply does not admit of quantitative characterization.  The Restatement’s comment perpetuates this confusion by casting the reasonable certainty standard as a bare probability.

Comment e further suggests that expert witnesses are themselves expert in assessing their own level of certainty, and that they have the training and experience to distinguish an opinion that is 50.1% likely from another that is only 50% likely. The assignment of precise mathematical probabilities to personal, subjective beliefs is a doubtful exercise, at best. See, e.g., Daniel Kahneman and Amos Tversky, “Judgment under Uncertainty: Heuristics and Biases,” 185 Science 1124 (1974).

5. The Restatement Incorrectly Labels “Reasonable Degree of Medical Certainty” As An Empty Formalism.

Comment e ignores the epistemic content of reasonable certainty, which bears an uncanny resemblance to the knowledge requirement of Rule 702.  The “mantra” is helpful to the extent it imposes an objective epistemic standard, especially in states that have failed to impose, or that have abrogated, expert witness gatekeeping.  In some states, there is no meaningful expert witness gatekeeping under either the Frye standard or Rule 702. See, e.g., “Expert Evidence Free-for-All in Washington State.”  See also Joseph Sanders, “Science, Law, and the Expert Witness,” 72 Law & Contemporary Problems 63, 87 & n. 118 (2009) (noting that the meaning of “reasonable degree of scientific certainty” is unclear, but that it can be understood as an alternative formulation of Kumho’s “same intellectual rigor” test).

Some of these “top” tools may be defective.  The authors may need good defense counsel.