TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

New Jersey Kemps Ovarian Cancer – Talc Cases

September 16th, 2016

Gatekeeping in many courtrooms has been reduced to requiring expert witnesses to swear an oath and testify that they have followed a scientific method. The federal rules of evidence and most state evidence codes require more. The law, in most jurisdictions, requires that judges actively engage with, and inspect, the bases for expert witnesses’ opinions and claims to determine whether expert witnesses who want to be heard in a courtroom have actually, faithfully followed a scientific methodology. In other words, the law requires judges to assess the scientific reasonableness of reliance upon the actual data cited, and to evaluate whether the inferences drawn from the data, to reach a stated conclusion, are valid.

We are getting close to a quarter of a century since the United States Supreme Court outlined the requirements of gatekeeping, in Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993). Since the Daubert decision, the Supreme Court’s decisional law, and changes in the evidence rules themselves, have clarified the nature and extent of the inquiry judges must conduct into the reasonable reliance upon facts and data, and into the inferential steps leading to a conclusion.  And yet, many judges resist, and offer up excuses and dodges for shirking their gatekeeping obligations.  See generally David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).

There is a courtroom in New Jersey, in which gatekeeping is taken seriously from beginning to end.  There is at least one trial judge who encourages and even demands that the expert witnesses appear and explain their methodologies and actually show their methodological compliance.  Judge Johnson first distinguished himself in In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J.Super. Law Div. Atlantic Cty. Feb. 20, 2015).[1] And more recently, in two ovarian cancer cases, Judge Johnson dusted two expert witnesses, who thought they could claim their turn in the witness chair by virtue of their credentials and some rather glib hand waving. Judge Johnson conducted the New Jersey analogue of a Federal Rule of Evidence 104(a) Daubert hearing, as required by the New Jersey Supreme Court’s decision in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). The result was disastrous for the two expert witnesses who opined that use of talcum powder by women causes ovarian cancer. Carl v. Johnson & Johnson, No. ATL-L-6546-14, 2016 WL 4580145 (N.J. Super. Ct. Law Div., Atl. Cty., Sept. 2, 2016) [cited as Carl].

Judge Johnson obviously had a good epidemiology teacher in Professor Stephen Goodman, who testified in the Accutane case. Against this standard, it is easy to see how the plaintiffs’ talc expert witnesses, Drs. Daniel Cramer and Graham Colditz, fell “significantly” short. After presiding over seven days of court hearings, and reviewing extensive party submissions, including the actual studies relied upon by the expert witnesses and the parties, Judge Johnson made no secret of his disappointment with the lack of rigor in the analyses proffered by Cramer and Colditz:

“Throughout these proceedings the court was disappointed in the scope of Plaintiffs’ presentation; it almost appeared as if counsel wished the court to wear blinders. Plaintiffs’ two principal witnesses on causation, Dr. Daniel Cramer and Dr. Graham Colditz, were generally dismissive of anything but epidemiological studies, and within that discipline of scientific investigation they confined their analyses to evidence derived only from small retrospective case-control studies. Both witnesses looked askance upon the three large cohort studies presented by Defendants. As confirmed by studies listed at Appendices A and B, the participants in the three large cohort studies totaled 191,090 while those case-control studies advanced by Plaintiffs’ witnesses, and which were the ones utilized in the two meta-analyses performed by Langseth and Terry, total 18,384 participants. As these proceedings drew to a close, two words reverberated in the court’s thinking: “narrow and shallow.” It was almost as if counsel and the expert witnesses were saying, Look at this, and forget everything else science has to teach us.”

Carl at *12.

Judge Johnson did what for so many judges is unthinkable; he looked behind the curtain put up by highly credentialed Oz expert witnesses in his courtroom. What he found was unexplained, unjustified selectivity in their reliance upon some but not all the available data, and glib conclusions that gloss over significant limits in the resolving power of the available epidemiologic studies. Judge Johnson was particularly unsparing of Graham Colditz, a capable scientist, who deviated from the standards he set for himself in the work he had published in the scientific community:

“Dr. Graham Colditz is a brilliant scientist and a dazzling witness. His vocal inflection, cadence, and adroit use of histrionics are extremely effective. Dr. Colditz’s reputation for his breadth of knowledge about cancer and the esteem in which he is held by his peers is well deserved. Yet, at times, it seemed that issues raised in these proceedings, and the questions posed to him, were a bit mundane for a scientist of his caliber.”

Carl at *15. Dr. Colditz and the plaintiffs’ cause were not helped by Dr. Colditz’s own previous publications of studies and reviews that failed to support any “substantial association between perineal talc use and ovarian cancer risk overall,” and failed to conclude that talc was even a “risk factor” for ovarian cancer.  Carl at *18.

Relative Risk Size

Many courts have fumbled their handling of the issue of whether applicable relative risks must exceed two before fact finders may infer specific causation between claimed exposures and specific diseases. There certainly can be causal associations that involve relative risks greater than 1.0, up to and including 2.0. Eliminating validity concerns may be more difficult with such smaller relative risks, but there is nothing theoretically insuperable about basing a causal association upon such small relative risks. Judge Johnson apparently saw the diversity of opinions on this relative risk issue, many of them stridently maintained, and thoroughly fallacious.

Judge Johnson ultimately did not base his decision, with respect to general or specific causation, on the magnitude of relative risk, or on the corresponding Bradford Hill factor of “strength of association.” Dr. Cramer appropriately acknowledged that his meta-analysis result, an odds ratio of 1.29, was “weak,” Carl at *19, and Judge Johnson was critical of Dr. Colditz for failing to address the weakness of the association, and for engaging in a constant refrain that the association was “significant,” which describes the precision, not the size, of the estimate. Carl at *17.
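The conflation is worth unpacking. Statistical significance speaks to the improbability of the data under the null hypothesis, that is, to precision, while the odds ratio speaks to magnitude; with enough subjects, even a weak odds ratio of 1.29 will be highly “significant.” A minimal sketch in Python, using invented cell counts chosen only so that the odds ratio comes out near 1.29 (these are not data from the talc record):

```python
# Hypothetical 2x2 counts, chosen so the odds ratio is about 1.29 --
# the figure discussed above. Not data from any talc study.
from scipy.stats import chi2_contingency

#            exposed  unexposed
table = [[2000, 6000],    # cases
         [1700, 6580]]    # controls

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)            # cross-product ratio, ~1.29

chi2, p_value, dof, expected = chi2_contingency(table)

print(f"odds ratio = {odds_ratio:.2f}")   # ~1.29: a weak association
print(f"p-value    = {p_value:.1e}")      # tiny: "significant," yet still weak
```

“Significant” here measures only how unlikely the data would be under the null; 1.29 measures how strong the association is. Running the two together is precisely the refrain Judge Johnson criticized.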

Aware of the difficulty that New Jersey appellate courts have had with the issues surrounding relative risks greater than two, Judge Johnson realistically steered clear of any specific judicial reliance on the small size of the relative risk. His Honor’s prudence is unfortunate, however, because ultimately small relative risks, even assuming that general causation is established, do nothing to support specific causation. Indeed, a relative risk of 1.29 (and odds ratios generally overstate the size of the underlying relative risk) would, on a stochastic model, support the conclusion that specific causation was less than 50% probable; see the worked arithmetic after the two points below. Critics have pointed out that risk may not be stochastically distributed, which is a great point, except that

(1) plaintiffs often have no idea how the risk, if real, is distributed in the observed sample, and

(2) the upshot of the point is that even for relative risks greater than 2.0, there is no warrant for inferring specific causation in a given case.
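The arithmetic behind the less-than-50% conclusion is straightforward. On a stochastic model of uniformly distributed risk, the probability that a given exposed case is attributable to the exposure is the attributable fraction, (RR − 1)/RR. A minimal sketch:

```python
# The attributable-fraction arithmetic assumed in the text: on a stochastic
# (uniformly distributed risk) model, the probability that a given exposed
# case was caused by the exposure is (RR - 1) / RR.

def probability_of_causation(rr: float) -> float:
    """Attributable fraction among the exposed, assuming uniform risk."""
    return (rr - 1.0) / rr

for rr in (1.29, 2.0, 3.0):
    print(f"RR = {rr:.2f} -> P(specific causation) = {probability_of_causation(rr):.1%}")

# RR 1.29 yields about 22.5% -- far short of "more likely than not."
# RR 2.0 yields exactly 50%; only risks above 2.0 clear the threshold.
```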

Judge Johnson did wade into the relative risk waters by noting that when relative risks were “significantly” less than two, establishing biological plausibility became essential. Carl at *11. This pronouncement is muddled on at least two fronts. First, the relative risk scale is a continuum, and there is no standard reference for what relative risks greater than 1.0 are “significantly” less than 2.0. Presumably, Judge Johnson thought that 1.29 was in the “significantly less than 2.0” range, but he did not say so; nor did he cite a source that supported this assessment. Perhaps he was suggesting that the upper bound of some meta-analysis was less than two. Second, and more troubling, the claim that biological plausibility becomes “essential” in the face of small relative risks is also unsupported. Judge Johnson did not cite any support for this claim, and I am not aware of any. Elsewhere in his opinion, Judge Johnson noted that

“When a scientific rationale doesn’t exist to explain logically the biological mechanism by which an agent causes a disease, courts may consider epidemiologic studies as an alternate [sic] means of proving general causation.”

Carl at *8. So it seems that biological plausibility is not essential after all.

This glitch in the Carl opinion is likely of no lasting consequence, however, because epidemiologists are rarely at a loss to posit some biologically plausible mechanism. As the Dictionary of Epidemiology explains the matter:

“The causal consideration that an observed, potentially causal association between an exposure and a health outcome may plausibly be attributed to causation on the basis of existing biomedical and epidemiological knowledge. On a schematic continuum including possible, plausible, compatible, and coherent, the term plausible is not a demanding or stringent requirement, given the many biological mechanisms that often can be hypothesized to underlie clinical and epidemiological observations; hence, in assessing causality, it may be logically more appropriate to require coherence (biological as well as clinical and epidemiological). Plausibility should hence be used cautiously, since it could impede development or acceptance of new knowledge that does not fit existing biological evidence, pathophysiological reasoning, or other evidence.”

Miquel Porta, et al., eds., “Biological plausibility,” in A Dictionary of Epidemiology at 24 (6th ed. 2014). Most capable epidemiologists have thought up half a dozen biologically plausible mechanisms each morning before they have had their first cup of coffee. But the most compelling reason that this judicial hiccup is inconsequential is that the plaintiffs’ expert witnesses’ postulated mechanism, inflammation, was demonstrably absent in the tissue of the specific plaintiffs. Carl at *13. The glib invocation of “inflammation” would seem bound to fail even the most liberal test of plausibility, given that talc has anti-cancer properties that result from its ability to inhibit new blood vessel formation, a necessity of solid tumor growth, and given the completely unexplained selectivity of the postulated effect for ovarian tissue, which leaves vaginal, endometrial, and fallopian tissues unaffected. Carl at *13-14. On at least two occasions, the United States Food and Drug Administration rejected “Citizen Petitions” for ovarian cancer warnings on talc products, advanced by the dubious Samuel S. Epstein for the Cancer Prevention Coalition, in large measure because of Epstein’s undue selectivity in citing epidemiologic studies and because a “cogent biological mechanism by which talc might lead to ovarian cancer is lacking… .” Carl at *15, citing Stephen M. Musser, FDA Director, Letter Denying Citizens’ Petition (April 1, 2014).

Large Studies

Judge Johnson quoted the Reference Manual on Scientific Evidence (3d ed. 2011) for his suggestion that establishing causation requires large studies. The quoted language, however, does not really support his suggestion:

“Common sense leads one to believe that a large enough sample of individuals must be studied if the study is to identify a relationship between exposure to an agent and disease that truly exists. Common sense also suggests that by enlarging the sample size (the size of the study group), researchers can form a more accurate conclusion and reduce the chance of random error in their results… With large numbers, the outcome of the test is less likely to be influenced by random error, and the researcher would have greater confidence in the inferences drawn from the data.”

Reference Manual at page 576. The Reference Manual simply calls for studies with “large enough” samples. How large is large enough depends upon the magnitude of the association to be detected, the length of follow-up, and the base rate or incidence of the outcome of interest. As far as “common sense” goes, the Reference Manual is correct only insofar as larger is better with respect to sampling error. Increasing sample size does nothing to address the internal or external validity of studies, and it may lead to erroneous interpretations by allowing results to achieve statistical significance at predetermined levels when the observed associations result from bias or confounding, and not from any underlying relationship between exposure and disease outcome.
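To make “large enough” concrete, consider a back-of-the-envelope two-proportion power calculation. The sketch below assumes a baseline risk of about 1.3 percent, roughly the commonly cited lifetime figure for ovarian cancer and used here purely for illustration, and asks how many cohort subjects per arm would be needed to detect a given relative risk:

```python
# A rough two-proportion sample-size sketch (normal approximation). The
# baseline risk and target relative risks are illustrative assumptions,
# not figures drawn from the talc record.
from scipy.stats import norm

def n_per_arm(p0: float, rr: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Cohort subjects needed per arm to detect p1 = rr * p0."""
    p1 = rr * p0
    z_alpha = norm.ppf(1 - alpha / 2)        # two-sided test
    z_beta = norm.ppf(power)
    variance = p0 * (1 - p0) + p1 * (1 - p1)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p0) ** 2

baseline = 0.013    # assumed ~1.3% baseline risk (illustration only)
for rr in (1.3, 2.0):
    print(f"RR {rr}: about {n_per_arm(baseline, rr):,.0f} subjects per arm")

# RR 1.3 needs roughly 15,000 subjects per arm; RR 2.0 needs under 2,000.
# The rarer the outcome and the weaker the association, the larger
# "large enough" becomes.
```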

There is a more disturbing implication in Judge Johnson’s criticism of Graham Colditz for relying upon the smaller number of subjects in the case-control studies than are found in the available cohort studies. Ovarian cancer is a relatively rare cancer (compared with breast and colon cancer), and case-control studies are more efficient than cohort studies at assessing increased risk for a rare outcome. The cases in a case-control study represent an implied source population many times larger than the study’s actual enrollment. If Judge Johnson had compared the widths of the confidence intervals for the “small” case-control studies with the interval widths of the cohort studies, he would have seen that “smaller” case-control studies (fewer cases, as well as fewer total subjects) can generate more statistical precision than the larger cohort studies (with many more cohort and control subjects). A more useful comparison would have been between the number of actual ovarian cancer cases in the meta-analyzed case-control studies and the number of actual ovarian cancer cases in the cohort studies. On this comparison, the cohort studies might not fare so well.

The size of the cohort for a rare outcome is thus fairly meaningless in terms of the statistical precision generated.  Smaller case-control studies will likely have much more power, and that should be reflected in the confidence intervals of the respective studies.
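Woolf’s formula makes the point concrete: the standard error of the log odds ratio is the square root of (1/a + 1/b + 1/c + 1/d), so precision is governed by the smallest cells of the 2×2 table, which for a rare disease means the number of cases, not the total enrollment. A minimal sketch with invented counts (not the talc studies’ actual data):

```python
# Invented counts, not the talc studies' data: why a "small" case-control
# study can be more precise than a cohort 100 times its size when the
# outcome is rare. Woolf's formula: SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d).
import math

def or_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table [[a, b], [c, d]]."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or), math.exp(log_or - z * se), math.exp(log_or + z * se)

# Case-control study: only 1,000 subjects, but 500 of them are cases.
cc = or_with_ci(200, 300,       # cases:    exposed, unexposed
                160, 340)       # controls: exposed, unexposed

# Cohort study: 100,000 subjects, but only 105 cases in all.
# (For a rare outcome, the odds ratio approximates the risk ratio.)
cohort = or_with_ci(45, 39_955,     # exposed:   45 cases among 40,000
                    60, 59_940)     # unexposed: 60 cases among 60,000

for name, (est, lo, hi) in (("case-control, n=1,000  ", cc),
                            ("cohort,       n=100,000", cohort)):
    print(f"{name}: OR {est:.2f}, 95% CI {lo:.2f}-{hi:.2f} (width {hi - lo:.2f})")
```

On these invented numbers, the 1,000-subject case-control study yields a narrower confidence interval than the 100,000-subject cohort, because it contains several times as many cases.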

The issue, as I understand the talc litigation, is not the size of the case-control versus cohort studies, but rather their analytical resolving power. Case-control studies for this sort of exposure and outcome will be plagued by recall and other biases, as well as by difficulty in selecting the right control group. And the odds ratio will tend to exaggerate the relative risk, in both directions away from the null. Cohort studies, with good, pre-morbid exposure assessments, would thus be much more rigorous and accurate in estimating the true rate ratios. In the final analysis, Judge Johnson was correct to be critical of Graham Colditz for dismissing the cohort studies, but his rationale for this criticism was, in a few places, confused and confusing. There was nothing subtle about the analytical gaps, ipse dixits, and cherry picking shown by these plaintiffs’ expert witnesses.


[1] See “Johnson of Accutane – Keeping the Gate in the Garden State” (Mar. 28, 2015).

Judge Bernstein’s Criticism of Rule 703 of the Federal Rules of Evidence

August 30th, 2016

Federal Rule of Evidence 703 addresses the bases of expert witness opinions, and it is a mess. The drafting of this Rule is particularly sloppy. The Rule tells us, among other things, that:

“[i]f experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject, they need not be admissible for the opinion to be admitted.”

This sentence of the Rule has a simple grammatical and logical structure:

If A, then B;

where A contains the concept of reasonable reliance, and B tells us the consequence that the relied upon material need not be itself admissible for the opinion to be admissible.

But what happens if the expert witness has not reasonably relied upon certain facts or data; i.e., ~A?  The conditional statement as given does not describe the outcome in this situation. We are not told what happens when an expert witness’s reliance in the particular field is unreasonable.  ~A does not necessarily imply ~B. Perhaps the drafters meant to write:

B if and only if A.

But the drafters did not give us the above rule, and they have left judges and lawyers to make sense of their poor grammar and bad logic.
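The logical gap can be displayed by brute force. Enumerating the truth table shows that the conditional, unlike the biconditional, is perfectly satisfied when reliance is unreasonable and the admissibility requirement is nonetheless excused. A minimal sketch:

```python
# Enumerate the truth table. A = "experts in the field would reasonably
# rely on these facts or data"; B = "the facts or data need not be
# admissible for the opinion to be admitted."
from itertools import product

def conditional(a: bool, b: bool) -> bool:
    """The rule as drafted: 'if A, then B.'"""
    return (not a) or b

def biconditional(a: bool, b: bool) -> bool:
    """The rule the drafters may have intended: 'B if and only if A.'"""
    return a == b

print(" A      B      if A then B   B iff A")
for a, b in product((True, False), repeat=2):
    print(f" {a!s:<6} {b!s:<6} {conditional(a, b)!s:<13} {biconditional(a, b)!s}")

# The row A=False, B=True satisfies the conditional but not the
# biconditional: as written, the rule says nothing that forbids excusing
# admissibility even where the reliance was unreasonable.
```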

And what happens when the reliance material is independently admissible, say as a business record, a government report, or a first-person observation? May an expert witness rely upon admissible facts or data, even when a reasonable expert would not do so? Again, it seems that the drafters were trying to limit expert witness reliance to some rule of reason, but by tying reliance to the admissibility of the reliance material, they managed to conflate two separate notions.

And why is reliance judged by the expert witness’s particular field? Fields of study and areas of science and technology overlap. In some fields, it is commonplace for putative experts to rely upon materials that would not be given the time of day in other fields. Should we judge the reasonableness of homeopathic healthcare providers’ reliance by the standards of reasonableness in homeopathy, such as it is, or should we judge it by the standards of medical science? The answer to this rhetorical question seems obvious, but the drafters of Rule 703 introduced a Balkanized conception of science and technology by invoking the expert witness’s “particular field.” The standard of Rule 702 is “knowledge” and “helpfulness,” neither of which concepts is constrained by “particular fields.”

And then Rule 703 leaves us in the dark about how to handle an expert witness’s reliance upon inadmissible facts or data. According to the Rule, “the proponent of the opinion may disclose [the inadmissible facts or data] to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.” And yet, disclosing inadmissible facts or data would always be highly prejudicial because they represent facts and data that the jury is forbidden to consider in reaching its verdict. Nonetheless, trial judges routinely tell juries that an expert witness’s opinion is no better than the facts and data on which the opinion is based. If the facts and data are inadmissible, the jury must disregard them in its fact finding; and if an expert witness’s opinion is based upon facts and data that are to be disregarded, then the expert witness’s opinion must be disregarded as well. Or so common sense and respect for the trial’s truth-finding function would suggest.

The drafters of Rule 703 do not shoulder all the blame for the illogic and bad results of the rule. The judicial interpretation of Rule 703 has been sloppy, as well. The Rule’s “plain language” tells us that “[a]n expert may base an opinion on facts or data in the case that the expert has been made aware of or personally observed.”  So expert witnesses should be arriving at their opinions through reliance upon facts and data, but many expert witnesses rely upon others’ opinions, and most courts seem to be fine with such reliance.  And the reliance is often blind, as when medical clinicians rely upon epidemiologic opinions, which in turn are based upon data from studies that the clinicians themselves are incompetent to interpret and critique.

The problem of reliance, as contained within Rule 703, is deep and pervasive in modern civil and criminal trials. In the trial of health effect claims, expert witnesses rely upon epidemiologic and toxicologic studies that contain multiple layers of hearsay, often with little or no validation of the trustworthiness of many of those factual layers. The inferential methodologies are often obscure, even to the expert witnesses, and trial counsel are frequently untrained and ill prepared to expose the ignorance and mistakes of the expert witnesses.

Back in February 2008, I presented at an ALI-ABA conference on expert witness evidence about the problems of Rule 703.[1] I laid out a critique of Rule 703, which showed that the Rule permitted expert witnesses to rely upon “castles in the air.” A distinguished panel of law professors and judges seemed to agree; at least no one offered a defense of Rule 703.

Shortly after I presented at the ALI-ABA conference, Professor Julie E. Seaman published an insightful law review article in which she framed the problems of Rule 703 as constitutional issues.[2] Encouraged by Professor Seaman’s work, I wrote up my comments on Rule 703 for an ABA publication,[3] and I have updated those comments in the light of subsequent judicial opinions,[4] as well as the failure of the Third Edition of the Reference Manual of Scientific Evidence to address the problems.[5]

===================

Judge Mark I. Bernstein is a trial court judge for the Philadelphia County Court of Common Pleas. I never tried a case before Judge Bernstein, who has announced his plans to leave the Philadelphia bench after 29 years of service,[6] but I had heard from some lawyers (on both sides of the bar) that he was a “pro-plaintiff” judge. Some years ago, I sat next to him on a CLE panel on trial evidence, at which he disparaged judicial gatekeeping,[7] which seemed to support his reputation. The reality seems to be more complex. Judge Bernstein has shown that he can be a critical consumer of complex scientific evidence, and an able gatekeeper under Pennsylvania’s crazy quilt-work pattern of expert witness law. For example, in a hotly contested birth defects case involving sertraline, Judge Bernstein held a pre-trial evidentiary hearing and looked carefully at the proffered testimony of Michael D. Freeman, a chiropractor and self-styled “forensic epidemiologist,” and Robert Cabrera, a teratologist. Applying a robust interpretation of Pennsylvania’s Frye rule, Judge Bernstein excluded Freeman’s and Cabrera’s proffered testimony, and entered summary judgment for defendant Pfizer, Inc. Porter v. Smithkline Beecham Corp., 2016 WL 614572 (Phila. Cty. Ct. Com. Pl.). See “Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case” (Oct. 6, 2015).

And Judge Bernstein has shown that he is one of the few judges who takes seriously Rule 705’s requirement that expert witnesses produce their relied-upon facts and data at trial, on cross-examination. In Hansen v. Wyeth, Inc., Dr. Harris Busch, a frequent testifier for plaintiffs, glibly opined about the defendant’s negligence. On cross-examination, he adverted to the volumes of depositions and documents he had reviewed, but when defense counsel pressed, the witness was unable to produce and show exactly what he had reviewed. After the jury returned a verdict for the plaintiff, Judge Bernstein set the verdict aside because of the expert witness’s failure to comply with Rule 705. Hansen v. Wyeth, Inc., 72 Pa. D. & C. 4th 225, 2005 WL 1114512, at *13, *19 (Phila. Ct. Common Pleas 2005) (granting new trial on post-trial motion), 77 Pa. D. & C. 4th 501, 2005 WL 3068256 (Phila. Ct. Common Pleas 2005) (opinion in support of affirmance after notice of appeal).

In a recent law review article, Judge Bernstein has issued a withering critique of Rule 703. See Hon. Mark I. Bernstein, “Jury Evaluation of Expert Testimony Under the Federal Rules,” 7 Drexel L. Rev. 239 (2015). Judge Bernstein is clearly dissatisfied with the current approach to expert witnesses in federal court, and he lays almost exclusive blame on Rule 703 and its permission to hide the crucial facts, data, and inferential processes from the jury. In his law review article, Judge Bernstein characterizes Rules 703 and 705 as empowering “the expert to hide personal credibility judgments, to quietly draw conclusions, to individually decide what is proper evidence, and worst of all, to offer opinions without even telling the jury the facts assumed.” Id. at 264. Judge Bernstein cautions that the subversion of the factual predicates for expert witnesses’ opinions under Rule 703 has significant, untoward consequences for the court system. Not only are lawyers allowed to hire professional advocates as expert witnesses, but the availability of such professional witnesses permits and encourages the filing of unnecessary litigation. Id. at 286. Hear, hear.

Rule 703’s practical consequence of eliminating the hypothetical question has enabled the expert witness qua advocate, and has up-regulated the trial as a contest of opinions and opiners rather than as an adversarial procedure that is designed to get at the truth. Id. at 266-67. Without having access to real, admissible facts and data, the jury is forced to rely upon proxies for the truth: qualifications, demeanor, and courtroom poise, all of which fail the jury and the system in the end.

As a veteran trial judge, Judge Bernstein makes a persuasive case that the non-disclosure permitted under Rule 703 is not really curable under Rule 705. Id. at 288.  If the cross-examination inquiry into reliance material results in the disclosure of inadmissible facts, then judges and the lawyers must deal with the charade of a judicial instruction that the identification of the inadmissible facts is somehow “not for the truth.” Judge Bernstein argues, as have many others, that this “not for the truth” business is an untenable fiction, either not understood or ignored by jurors.

Opposing counsel, of course, may ask for an elucidation of the facts and data relied upon, but when they consider the time and difficulty involved in cross-examining highly experienced, professional witnesses, opposing counsel usually choose to traverse the adverse opinion by presenting their own expert witness’s opinion rather than getting into nettlesome details and risking looking foolish in front of the jury, or even worse, allowing the highly trained adverse expert witness to run off at the mouth.

As powerful as Judge Bernstein’s critique of Rule 703 is, his analysis misses some important points. Lawyers and judges have other motives for not wanting to elicit underlying facts and data: they do not want to “get into the weeds,” and they want to avoid technical questions of valid inference and quality of data. Yet sometimes the truth is in the weeds. Their avoidance of addressing the nature of inference, as well as facts and data, often serves to make gatekeeping a sham.

And then there is the problem that arises from the lack of time, interest, and competence among judges and jurors to understand the technical details of the facts and data, and inferences therefrom, which underlie complex factual disputes in contemporary trials. Cross-examination is reduced to the attempt to elicit “sound bites” and “cheap shots,” which can be used in closing argument. This approach is common on both sides of the bar, in trials before judges and juries, and even at so-called Daubert hearings. See David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1, 32 (2015) (“Rule 703 is frequently ignored in Daubert analyses”).

The Rule 702 and 703 pretrial hearing is an opportunity to address the highly technical validity questions, but even then, the process is doomed to failure unless trial judges make adequate time and adopt an attitude of real intellectual curiosity to permit a proper exploration of the evidentiary issues. Trial lawyers often discover that a full exploration is technical and tedious, and that it pisses off the trial judge. As much as judges dislike having to serve as gatekeepers of expert witness opinion testimony, they dislike even more having to assess the reasonableness of individual expert witness’s reliance upon facts and data, especially when this inquiry requires a deep exploration of the methods and materials of each relied upon study.

Judge Bernstein’s critique also ignores a point in favor of something like Rule 703: some facts and data, reasonably relied upon by expert witnesses, will never be independently admissible. Epidemiologic studies, with their multiple layers of hearsay, come to mind.

Judge Bernstein, as a reformer, is wrong to suggest that the problem lies solely in hiding the facts and data from the jury. Rules 702 and 703 march together, and there are problems with both that require serious attention. See David E. Bernstein & Eric G. Lasker, “Defending Daubert: It’s Time to Amend Federal Rule of Evidence 702,” 57 William & Mary L. Rev. 1 (2015); see also “On Amending Rule 702 of the Federal Rules of Evidence” (Oct. 17, 2015).

And we should remember that the problem is not solely with juries and their need to see the underlying facts and data. Judges try cases too, and can butcher scientific inference without any help from a lay jury. Then there is the problem of relied-upon opinions, discussed above. And then there is the problem of unreasonable reliance of the sort that juries cannot discern even if they see the underlying, relied-upon facts and data.


[1] Schachtman, “Rule 703 – The Problem Child of Article VII”; and “The Effective Presentation of Defense Expert Witnesses and Cross-examination of Plaintiffs’ Expert Witnesses”; at the ALI-ABA Course on Opinion and Expert Witness Testimony in State and Federal Courts (February 14-15, 2008).

[2] See Julie E. Seaman, “Triangulating Testimonial Hearsay: The Constitutional Boundaries of Expert Opinion Testimony,” 96 Georgetown L.J. 827 (2008).

[3]  Nathan A. Schachtman, “Rule of Evidence 703—Problem Child of Article VII,” 17 Proof 3 (Spring 2009).

[4] “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 19, 2011).

[5] See “Giving Rule 703 the Cold Shoulder” (May 12, 2012); “New Reference Manual on Scientific Evidence Short Shrifts Rule 703” (Oct. 16, 2011).

[6] Max Mitchell, “Bernstein Announces Plan to Step Down as Judge,” The Legal Intelligencer (July 29, 2016).

[7] See Schachtman, “Court-Appointed Expert Witnesses,” for Mealey’s Judges & Lawyers in Complex Litigation, Class Actions, Mass Torts, MDL and the Monster Case Conference, in West Palm Beach, Florida (November 8-9, 1999). I don’t recall Judge Bernstein’s exact topic, but I remember he criticized the Pennsylvania Supreme Court’s decision in Blum v. Merrell Dow Pharmaceuticals, 534 Pa. 97, 626 A.2d 537 (1993), which reversed a judgment for plaintiffs, and adopted what Judge Bernstein derided as a blending of Frye and Daubert, which he called “Fraubert.” Judge Bernstein had presided over the Blum trial, which resulted in the verdict for plaintiffs.

Excited Utterance Podcast Series on Evidence Law

August 25th, 2016

As a graduate student, I was impressed by the extent to which scholars traveled to other schools to present draft papers and obtain feedback from other faculties and graduate students. For a student, these presentations were interesting opportunities to engage with leading scholars and to learn from their new ideas, as well as from their mistakes. Law school faculties back in the 1970s seemed like a much less collegial community of scholars, who rarely shared their ideas before publication, and thus did not receive the benefit of feedback from other scholars.

The isolation of legal scholarship has been mitigated in good law schools with the introduction of invited lectures and presentations, often at weekly seminars or luncheons. These meetings can be exciting and inspiring, but participation is obviously limited, and the financial and travel-time constraints can be burdensome.

Edward Cheng, who teaches evidence and related subjects at Vanderbilt Law School, has introduced an interesting idea: scholarly podcasts on legal topics in his field of interest. Professor Cheng’s stated hope is that he can produce and provide podcasts, on scholarly topics in the law of evidence, which replicate the faculty seminar for a broader audience.

To be sure, there have been podcasts about specific legal cases, such as the famously successful “Undisclosed” podcast on the Adnan Syed case, which can honestly share in the credit for helping to expose corruption and dishonesty in the prosecution of Mr. Syed, and for helping Mr. Syed obtain a new trial. Professor Cheng’s planned podcast series, “Excited Utterance: The Evidence and Proof Podcast,” will take up evidentiary topics of more interest to legal scholars, students, and practitioners. His stated goal is to focus on legal scholarship on evidence law and “to provide a weekly virtual workshop in the world of evidence throughout the academic year” to a broader audience, more efficiently than the sporadic visiting lectures that any one school can sponsor on evidentiary topics.

The project seems worth the effort in theory, and we will see what it produces in practice. The fall 2016 schedule for Cheng’s Excited Utterance podcasts is set out below; the first episode, by Daniel Capra, is already available at iTunes, and at the Excited Utterance website.

Daniel Capra, “Electronically Stored Information and the Ancient Documents Exception” (Aug. 22, 2016)

Michael Pardo, “Group Agency and Legal Proof, or Why the Jury Is An It” (Aug. 29, 2016)

Mary Fan, “Justice Visualized” (Sept. 5, 2016)

Sachin Pandya, “The Constitutional Accuracy of Legal Presumptions” (Sept. 12, 2016)

Christopher Slobogin, “Gatekeeping Science” (Sept. 19, 2016)

Mark Spottswood, “Unraveling the Conjunction Paradox” (Sept. 26, 2016)

Deryn Strange, “Memory Errors in Alibi Generation” (Oct. 3, 2016)

Sandra Guerra Thompson, “Cops in Lab Coats” (Oct. 10, 2016)

Maggie Wittlin, “Hindsight Evidence” (Oct. 17, 2016)

Stephanos Bibas, “Designing Plea Bargaining from the Ground Up” (Oct. 24, 2016)

Erin Murphy, “Inside the Cell: The Dark Side of Forensic DNA” (Oct. 31, 2016)

Pamela R. Metzger, “Confrontation as a Rule of Production” (Nov. 7, 2016)

Nancy S. Marder, “Juries and Lay Participation: American Perspectives and Global Trends” (Nov. 14, 2016)

Jay Koehler, “Testing for Accuracy in the Forensic Sciences” (Nov. 21, 2016)

Art Historian Expert Testimony

August 15th, 2016

Art appraisal and authentication is sometimes held out as a non-technical and non-scientific area of expertise, and as such, not subject to rigorous testing.[1] But to what extent is this simply excuse mongering for an immature field of study? The law has seen way too much of this sort of rationalization in criminal forensic studies.[2] If an entire field of learning suffers from unreliability because of its reliance upon subjective methodologies, lack of rigor, inability or unwillingness to use measurements, failure to eliminate biases through blinding, and the like, then do expert witnesses in this field receive a “pass” under Rule 702, simply because they are doing reasonably well compared with their professional colleagues?

In the movie Who the Fuck is Jackson Pollock, the late Thomas Hoving was interviewed about the authenticity of a painting claimed to have been “painted” by Jackson Pollock. Hoving “authoritatively,” and with his typical flamboyance, averred that the disputed painting was not a Pollock because the work “did not sing to me like a Pollock.” Hoving did not, however, attempt to record the notes he heard; nor did Hoving speak to what key Pollock usually painted in.

In a recent case of defamation and tortious interference with prospective business benefit, a plaintiff sued over the disparagement of a painting’s authenticity and provenance. As a result of the defendants’ statements that the painting at issue was not created by Peter M. Doig, auction houses refused to sell the painting held by plaintiff. In litigation, the plaintiff proffered an expert witness who opined that the painting was, in fact, created by Doig. The defendants challenged plaintiff’s expert witness as not reliable or relevant under Federal Rule of Evidence 702. Fletcher v. Doig, 13 C 3270, 2016 U.S. Dist. LEXIS 95081 (N.D. Ill. July 21, 2016).

Peter Bartlow, the plaintiff’s expert witness on authenticity, was short on academic credentials. He had gone to college, and finished only one year of graduate study in art history. Bartlow did, however, have 40 years of experience in appraisal and authentication. Fletcher, at *3-4. Beyond qualifications, the defendants complained that Bartlow’s method

(1) was invented for the case,

(2) was too “generic” to establish authenticity, and

(3) failed to show that any claimed generic feature was unique to the work of the artist in question, Peter M. Doig.

The trial court rebuffed this challenge by noting that Peter Bartlow did not have to be an expert specifically in Doig’s work. Fletcher at *7. Similarly, the trial court rejected the defendants’ suggestion that the disputed work must exhibit “unique” features of Doig’s oeuvre. Bartlow had made a legally sufficient case for his opinions based upon a qualitative analysis of 45 acknowledged works, using specific qualitative features of 11 known works. Id. at *10. Specifically, Bartlow compared types of paint, similarities in styles, shapes, and positioning, and “repeated lineatures,” by superimposing lines from known paintings onto the questioned one. Id. With respect to the last of these approaches, the trial court accepted Bartlow’s explanation that superimposing lines to show similarity was simply a refinement of methods commonly used by art appraisers.

By comparison with Thomas Hoving’s subjective auditory methodology, as explained in Who the Fuck, Bartlow’s approach was positively brilliant, even if the challenged methodologies left much to be desired. For instance, Bartlow compared one disputed painting with 45 or so paintings of accepted provenance. No one tested Bartlow’s ability, blinded to provenance, to identify true and false positives of Doig paintings. See “The Eleventh Circuit Confuses Adversarial and Methodological Bias, Manifestly Erroneously” (June 6, 2015); see generally Christopher Robertson & Aaron Kesselheim, Blinding as a Solution to Bias: Strengthening Biomedical Science, Forensic Science, and Law (2016).
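Such a blinded test is easy to describe, and its arithmetic is simple: present the authenticator with works of known provenance, withhold the attributions, and score the calls. A minimal sketch, with wholly invented counts (no such test of Bartlow, or of anyone else, appears in the record), shows the sensitivity and specificity such a validation would yield:

```python
# Scoring a hypothetical blinded authentication test. All counts are
# invented for illustration; no such test of Bartlow is in the record.

# Each trial pairs the true provenance with the authenticator's call
# (True = "genuine Doig"), made without knowledge of the answer.
trials = ([(True, True)] * 18 + [(True, False)] * 2 +     # 20 genuine works
          [(False, False)] * 15 + [(False, True)] * 5)    # 20 known non-Doigs

true_pos = sum(1 for truth, call in trials if truth and call)
false_neg = sum(1 for truth, call in trials if truth and not call)
true_neg = sum(1 for truth, call in trials if not truth and not call)
false_pos = sum(1 for truth, call in trials if not truth and call)

sensitivity = true_pos / (true_pos + false_neg)   # genuine works correctly called
specificity = true_neg / (true_neg + false_pos)   # non-Doigs correctly rejected

print(f"sensitivity = {sensitivity:.0%}")   # 90% on these invented numbers
print(f"specificity = {specificity:.0%}")   # 75% -- a known error rate, at last
```

Numbers of this kind, rather than credentials or courtroom poise, are what the Daubert factor of a “known or potential rate of error” contemplates.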

Interestingly, the Rule 702 challenges in Fletcher were in a case slated to be tried by the bench. The trial court thus toasted the chestnut that trial courts have even greater latitude in admitting expert witness opinion testimony in bench trials, in which “the usual concerns of [Rule 702] – keeping unreliable testimony from the jury – are not present.” Fletcher at *3 (citing Metavante Corp. v. Emigrant Savings Bank, 619 F.3d 648, 670 (7th Cir. 2010)). Citing Seventh Circuit precedent, the trial court, in Fletcher, asserted that the need to rule on admissibility before trial was lessened in a bench trial. Id. (citing In re Salem, 465 F.3d 767, 777 (7th Cir. 2006)). The courts that have taken this position have generally failed to explain why the standard for granting or denying a Rule 702 challenge should be different in a bench trial. Clearly, a bench trial can be just as much a waste of time, money, and energy as a jury trial. Even more clearly, judges can be, and are, snookered by misleading expert witness opinions, and they are also susceptible to their own cognitive biases and the false allure of unreliable opinion testimony, built upon invalid inferences. Men and women do not necessarily see more clearly when wearing black robes, but they can achieve some measure of objectivity by explaining and justifying their gatekeeping opinions in writing, subject to public review, comment, and criticism.


[1] See, e.g., Lees v. Carthage College, 714 F.3d 516, 525 (7th Cir. 2013) (holding that an expert witness’s testimony on premises security involved non-scientific expertise and knowledge that did “not easily admit of rigorous testing and replication”).

[2] See, e.g., National Academies of Science, Strengthening Forensic Science in the United States: A Path Forward (2009).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.