TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Beecher-Monas and the Attempt to Eviscerate Daubert from Within

November 23rd, 2010

Part 2, of a Critique of Evaluating Scientific Evidence, by Erica Beecher-Monas (EBM)

Giving advice to trial and appellate judges on how they should review scientific evidence can be a tricky business.  Such advice must reliably capture the nature of scientific reasoning in several different fields, such as epidemiology and toxicology, and show how such reasoning can and should be incorporated within a framework of statutes, rules, and common law.  Erica Beecher-Monas’ book, Evaluating Scientific Evidence, fails to accomplish these goals.  What she does accomplish is the confusion of regulatory assumptions and precautionary principles with the science of health effects in humans.

7.  “Empowering one type of information or one kind of study to the exclusion of another makes no scientific evidentiary sense.”  Id. at 59.

It is telling that Erica Beecher-Monas (EBM) does not mention either the systematic review or the technique of meta-analysis, which is based upon the systematic review.  Of course, these approaches, whether qualitative or quantitative, require a commitment to pre-specify a hierarchy of evidence, and inclusionary and exclusionary criteria for studies.  What EBM seems to hope to accomplish is the flattening of the hierarchy of evidence, and making all types of evidence comparable in probative value.  This is not science or scientific, but part of an agenda to turn Daubert into a standard of bare relevancy.  Systematic reviews do not literally exclude any “one kind” of study, but they recognize that not all study designs are equal.  The omission in EBM’s book speaks volumes.

8. “[T]he likelihood that someone whose health was adversely affected will have the courthouse doors slammed in his or her face,”  id. at 64, troubles EBM. 

EBM recognizes that inferences and scientific methodologies involve false positives and false negatives, but she appears disproportionately concerned by false negatives.  Of course, this solicitude begs the question whether we have reasonably good knowledge that the “someone” really was adversely affected.  A similar solicitude for the defendant who has had the courthouse door slammed on his head, in cases in which he has caused no harm, is missing.  This imbalance leads EBM to excuse and defend gaps in plaintiffs’ evidentiary displays on scientific issues.

9.  “Gaps in scientific knowledge are inevitable, not fatal flaws.”  Id. at 51 (citing a work on risk assessment).

The author also seems to turn a blind eye to the size of gaps.  Some gaps are simply too big to be bridged by assumptions.  Scientists have to be honest about their assumptions, and temper their desire to reach conclusions.  Expert witnesses often lack the requisite scientific temper to remain agnostic; they take positions when they should rightfully press for the gaps to be filled.  Expert witnesses outrun their headlights, yet EBM cites virtually no gatekeeping decision with approval.

Excusing gaps in risk assessment may make some sense given that risk assessment is guided by the precautionary principle.  The proofs in a toxic tort case are not.  EBM’s assertion about the inevitability of “gaps” skirts the key question:  When are gaps too large to countenance, and to support a judgment?  The Joiner case made clear that when the gaps are supported only by the ipse dixit of an expert witness, courts should look hard to determine whether the conclusion is reasonably, reliably supported by the empirical evidence.  The alternative, which EBM seems to invite, is intellectual anarchy.

8.  “Extrapolation from rodent studies to human cancer causation is universally accepted as valid (at least by scientists) because ‘virtually all of the specific chemicals known to be carcinogenic in humans are also positive in rodent bioassays, and sometimes even at comparable dose and with similar organ specificity’.” Id. at 71n.55 (quoting Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991)).

When it comes to urging the primacy and superiority of animal evidence, EBM’s brief is relentless and baseless.

Remarkably, in the sentence quoted above, EBM has committed the logical fallacy of affirming the consequent:  if every human carcinogen is a rat carcinogen, it does not follow that every rat carcinogen is a human carcinogen.  The argument form is invalid; from the conditional and its consequent, one cannot validly infer the antecedent.  And it is that inference which provides the desired, putative validity for extrapolating from rodent studies to humans.  Not only does EBM commit a non sequitur, she quotes Dr. Weinstein’s article out of context, because his article makes quite clear that not all rat carcinogens are accepted causes of cancer in human beings.

9.  “Post-Daubert courts often exclude expert testimony in toxic tort cases simply because the underlying tests relate to animals rather than humans.”  Id. at 71n.54.

Given EBM’s radical mission to “empower” animal evidence, we should not be too surprised that she is critical of Daubert decisions that have given lesser weight to animal evidence.  The above statement is another example of EBM’s over- and misstatement.  The cases cited, for instance the Hall decision by Judge Jones in the breast implant litigation, and the Texas Supreme Court in Havner, do not support the “simply because.”  Those cases represent complex evidentiary displays that involved animal, in vitro, chemical analysis, and epidemiologic studies. The Hall decision was based upon Rule 702, but it was followed by Judge Jack Weinstein, who, after conducting two weeks of hearings, entered summary judgment sua sponte against the plaintiffs (animal evidence and all).  Recently, Judge Weinstein characterized the expert witnesses who supported the plaintiffs’ claims as “charlatans.”  See Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation.” Cardozo Law Review De Novo at 14, http://www.cardozolawreview.com/content/denovo/WEINSTEIN_2009_1.pdf (“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) (emphasis added).

Given the widespread rejection of the junk science behind breast implant claims, by courts, scientists, court-appointed experts, and the Institute of Medicine, EBM’s insertion of “simply” in the sentence above speaks volumes about how she would evaluate the evidentiary display in Hall.  See also Evaluating Scientific Evidence at 81n.99 (arguing that Hall was mistaken).  If the gatekeeping in the silicone breast implant litigation was mistaken, as EBM argues, it is difficult to imagine what slop would be kept out by a gatekeeper who chose to apply EBM’s “intellectual due process.”

10.  “Animal studies are more persuasive than epidemiology for demonstrating small increases of risk.”  Id. at 70.

EBM offers no support for this contention, and there is none unless one is concerned to demonstrate small risks for animals.  Even for the furry beasts themselves, the studies do not “demonstrate” (a mathematical concept) small increased risks at low doses comparable to the doses experienced by human beings. 

EBM’s urging of “scientifically justifiable default assumptions” turns into advocacy for regulatory pronouncements of the precautionary principle, which have been consistently rejected by courts as not applicable to toxic tort litigation for personal injuries.

11.  “Nonthreshold effects, on the other hand, are characteristic of diseases (like some cancers) that are caused by genetic mutations.” Id. at 75.

EBM offers no support for this assertion, and she ignores the growing awareness that the dose-response curves for many substances are hormetic; that is, the substance often exercises a beneficial or therapeutic effect at low doses, but may be harmful at high doses.  Alcohol is a known human carcinogen, but at low doses, alcohol reduces cardiovascular mortality.  At moderate to high doses, alcohol causes female breast cancer, and liver cancer.  Alcohol, however, must be consumed at sufficiently high, prolonged doses to cause permanent fibrotic and architectural changes in the liver (cirrhosis) before it increases the risk of liver cancer.  These counterexamples, and others, show that thresholds are often important features of the dose-response curves of carcinogens.

Similarly, EBM incorrectly argues that the default assumption of a linear dose-response pattern is reasonable because it is, according to her, widely accepted.  Id. at 74n.65.  Her supporting citation is, however, to an EPA document on risk assessment, which has nothing to do with determinations of causality.  Risk assessments assume causality and attempt to place an upper bound on the magnitude of the hypothetical risk.  Again, EBM’s commitment to the precautionary principle and regulatory approaches preempts scientific thinking.  If EBM had considered the actual and postulated mechanisms of carcinogenesis, even in sources she cites, she would have to acknowledge that the linear no-threshold model makes no sense because it ignores the operation of multiple protective mechanisms that must be saturated and overwhelmed before carcinogenic exposures can actually induce clinically meaningful tumors in animals.  See, e.g., Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991) (mistakenly cited by EBM for the proposition that rodent carcinogens should be “assumed” to cause cancer in humans).

12.  “Under this assumption [of the EPA], demonstrating the development of lung cancer in mice would be admissible to show human causation in any organ.  Because we know so little about cancer causation, there is justification for this as a workable but questionable assumption with respect to cancer.”  Id. at 77.

Extrapolation, across species, across organs, and across disparate doses!  No gap is too wide, too deep to be traversed by EBM’s gatekeepers.  In arguing that extrapolation is a routine part of EPA risk assessment, EBM ignores that the extrapolation is not the basis for reaching scientific conclusions about health effects in human beings.  Regulatory science is “mandating certainty” — the opposite side of David Michaels’ caricature of industry’s “manufacturing doubt.”

13. “[T]he court in Hall was mistaken when it excluded the expert testimony because the studies relied on only showed that silicone could have caused the plaintiff’s diseases, not that it did.”  Id. at 81n.99.

Admittedly, it is difficult to tell whether EBM is discussing general or specific causation in this sentence, but it certainly seems as if she is criticizing the Hall decision, by Judge Jones, because the expert witnesses for the plaintiff were unable to say that silicone did, in fact, cause Hall’s illness.  EBM appears to be diluting specific causation to a “might have had some effect” standard. 

The readers who have actually read the Hall decision, or who are familiar with the record in Hall, will know that one key expert witness for plaintiffs, an epidemiologist, Dr. David Goldsmith, conceded that he could not say that silicone more likely than not caused autoimmune disease.  A few weeks after testifying in Hall, Goldsmith changed his testimony.  In October 1996, in Judge Weinstein’s courtroom, based upon an abstract of a study that he saw the night before testifying, Goldsmith asserted that he believed that silicone did cause autoimmune connective tissue disease, more likely than not.  Before Goldsmith left the stand, Judge Weinstein declared that he did not believe that Goldsmith’s testimony would be helpful to a jury.

So perhaps EBM is indeed claiming that testimony that purports to provide the causal conclusion need not be expressed to some degree of certainty other than possibility.  This interpretation is consistent with what appears to be EBM’s dilution of “intellectual due process” to permit virtually any testimony at all that has the slightest patina of scientific opinion.

14.  “The underlying reason that courts appear to founder in this area [toxic torts] is that causation – an essential element for liability – is highly uncertain, scientifically speaking, and courts do not deal well with this uncertainty.”  Id. at 57.

Regulation in the face of uncertainty makes sense as an application of the precautionary principle, but litigation requires expert witness opinion that rises to the level of “scientific knowledge.”  Rule 702.  EBM’s candid acknowledgment is the very reason that Daubert is an essential tool to strip out regulatory “science,” which may well support regulation against a potential, unproven hazard.  Regulations can be abrogated.  Judgments in litigation are forever.  The social goals and the evidentiary standards are different.

15.  “Causal inference is a matter of explanation.”  Id. at 43. 

Here and elsewhere, EBM talks of causality as though it were only about explanations, when in fact, the notion of causal inference includes an element of prediction, as well.  EBM seems to downplay the predictive nature of scientific theories, perhaps because this is where theories founder and confront their error rate.  Inherent in any statement of causal inference is a prediction that if the factual antecedents are the same, the result will be the same.  Causation is more than a narrative of why the effect followed the cause.

EBM’s work feeds the illusion that courts can act as gatekeepers, wrapped in the appearance of “intellectual due process,” but at the end of the day find just about any opinion to be admissible.  I could give further examples of the faux pas, ipse dixit, and non sequitur in EBM’s Evaluating Scientific Evidence, but the reader will appreciate the overall point.  Her topic is important, but there are better places for judges and lawyers to seek guidance in this difficult area.  The Federal Judicial Center’s Reference Manual on Scientific Evidence, although not perfect, is at least free of the sustained ideological noise that afflicts EBM’s text.

Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping

November 21st, 2010

Even though the principles of Daubert have been embraced by the entire Supreme Court, in a rare unanimous opinion, see Weisgram v. Marley Co., 528 U.S. 440 (2000) (Ginsburg, J., writing for a unanimous Court), and incorporated into a revised Rule 702, ratified by Congress, the enemies of Daubert abound.  Some advocates simply cannot let go of the notion that they have a constitutional right to bamboozle juries with unreliable evidence.

Daubert has some friends who would kill it by reinterpreting and diluting the reliability and relevance requirements so that anything goes, and everything is admissible.  Perhaps the best example of such a “friend” is Professor Erica Beecher-Monas, who has written a book-length roadmap on how to eviscerate the gatekeeping concept.  See E. Beecher-Monas, Evaluating Scientific Evidence:  An Interdisciplinary Framework for Intellectual Due Process (New York 2007).

Erica Beecher-Monas (EBM, not to be confused with evidence-based medicine) starts off with a trenchant defense of the epistemic approach of Daubert, and an explanation of why proxies for scientific reliability and validity are doomed to fail.  EBM proceeds to offer a five step program of “intellectual due process,” to help trial courts carry out their screening:

1.  evaluate the challenged expert witness’s theory and hypothesis for their ability and power to explain the data;

2.  evaluate the data that weighs in favor of, and against, the expert witness’s theory; the gatekeeper court must weigh all the evidence collectively.  The expert witness’s “theory” should explain and account for most of the evidence.  According to EBM, the “theory” should explain the data that appears to weigh against the theory, as well as the supporting evidence.

3.  invoke “supportable assumptions” to bridge the inevitable gaps between underlying data and theory; there are, according to the author, “scientifically justifiable default assumptions,” which should be honored to fill in the gaps in an expert witness’s reasoning and explanations.

4.  evaluate the testifying expert witness’s methodology; and

5.  evaluate the statistical and probabilistic inferences between underlying data and opinions.  The trial court must synthesize all the available information to evaluate how well the data, methodology, “default assumptions,” taken together support the proffered conclusions.  

Id. at 6, 46-47.

This program sounds encouraging in theory.  As EBM describes how this “framework for analysis,” should work, however, things go poorly, and the law and scientific method are misrepresented.  “Default assumptions” becomes the pretense to let in opinions that would gag the proverbial horsefly off the manure cart.

Not all is bad.  EBM offers some important insights into how courts should handle scientific evidence.  She defends the gatekeeping process because of the serious danger of a “dilution effect,” which arises when jurors are overwhelmed with evidence of varying quality.  She reminds us that there are standards of care for research science and for clinical medicine, and standards for evaluating whether experimental results can be “honestly” attributed to the data.  Id. at 53.  Courts must evaluate whether the data and method really “show” the conclusion that the expert witness claims for them.  Id.  She criticizes those commentators who confuse the burden of proof with the statistical standard used in hypothesis testing for individual studies.  Id. at 65.

The narrative becomes confused and convoluted in addressing how trial courts should function as gatekeepers.  EBM is critical of how trial courts have discharged their gatekeeping responsibilities.  In many instances, EBM is unhappy with how judges carry out their evaluations, and criticizes them on the basis of her own ipse dixit.  It turns out that intellectual due process, as conceived of by EBM, allows pretty much anything to be admissible in EBM’s ideal juridical world.

Some of EBM’s assertions about the law and the science are startling, and deeply flawed.  In this post, I discuss some of the flawed scholarship, which has the potential to confuse and mislead.

1.  “Daubert, which requires only a scintilla of scientifically valid and relevant evidence to survive an admissibility determination.”  Id. at 82.

This assertion is wrong on its face.  Justice Blackmun, in writing his opinion in Daubert, discussed “scintilla” of evidence, not in the context of making an admissibility determination of an expert witness’s opinion, but rather in the context of ruling on motions for directed verdicts or summary judgment:

“Additionally, in the event the trial court concludes that the scintilla of evidence presented supporting a position is insufficient to allow a reasonable juror to conclude that the position more likely than not is true, the court remains free to direct a judgment, Fed.Rule Civ.Proc. 50(a), and likewise to grant summary judgment, Fed.Rule Civ.Proc. 56.  Cf., e.g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F.2d 1349 (6th Cir.) (holding that scientific evidence that provided foundation for expert testimony, viewed in the light most favorable to plaintiffs, was not sufficient to allow a jury to find it more probable than not that defendant caused plaintiff’s injury), cert. denied, 506 U.S. 826 (1992); Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307 (5th Cir. 1989) (reversing judgment entered on jury verdict for plaintiffs because evidence regarding causation was insufficient), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990); Green 680-681 [Green, “Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation,” 86 Nw.U.L.Rev. 643 (1992)].”

Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 594 (1993) (emphasis added).

Justice Blackmun was emphasizing that Rule 702 is not the only tool on the trial judge’s workbench; he was not setting a standard for the quantum of evidence that must govern an admissibility determination of an expert witness’s opinion.   Even if Justice Blackmun were discussing scintilla of evidence in the context of addressing admissibility (rather than sufficiency), his citation to the Bendectin decisions in the Court of Appeals makes clear that the “scintilla” of evidence offered by the party suffering entry of judgment might be fairly extensive in terms of expert witnesses’ opinions and their relied upon studies.  Nonetheless, this “scintilla” could be, and was, insufficient to resist judgment in the face of evidence of higher quality and relevance. 

EBM’s scholarship here is thus flawed at two levels.  First, she conflates admissibility with sufficiency (which elsewhere she faults various courts for doing, calling the conflation “pernicious”; see id. at 83).  Second, she fails to realize or acknowledge that the scintilla must be weighed against the entire evidentiary display.  Sometimes, as in the Bendectin litigation, the “scintilla” might include a fair amount of evidence, which is trumped by evidence superior in quality and quantity; this trumping is what leads to the finding that opining witnesses had offered unreliable opinions, unhelpful to the jury.

2.  “[C]onsistency of the observed effect is a criterion most scientists would deem important, but it may be absent even where there is a strong causal link, such as the link between smoking and lung cancer, which, although strong, is not inevitably observed.  Although it might be persuasive to find that there was a consistent specific association between exposure and a particular disease, such association is rarely observed.”  Id. at 59.

First, EBM offers no citation for the claim that the “link” between smoking and lung cancer is “not inevitably observed.”  The association is virtually always found in modern epidemiologic studies, and it is almost always statistically significant in adequately powered studies.  The repeated finding of an association, not likely due to chance, in many studies, conducted by different investigators, in different populations, at different times, with different study designs is the important point about consistency.  EBM muddles her unsupported, unsupportable assertion by then noting that a “consistent specific association” is rarely observed, but here she has moved, confusingly, to a different consideration – namely the specificity of the association, not its consistency.  Admittedly, specificity is a weak factor in assessing the causality vel non of an association, but EBM’s reference to a “consistent specific association” seems designed to confuse and conflate two different factors in the analysis.

3.  “[A]nimal studies are superior to epidemiologic studies because of the lack of controls endemic to epidemiologic studies, the difficulty in designing and analyzing such studies, and their costliness.”  Id. at 70.

This is one of EBM’s more strident, stunning pronouncements.  Her book makes clear that as an apologist for animal evidence, EBM deprecates and misunderstands epidemiologic evidence at almost every turn.  It is perhaps possible to interpret EBM charitably by suggesting that the epidemiologic studies she is thinking of without controls are “descriptive studies,” such as case reports or case series.  Such an interpretation is unwarranted, however, given EBM’s failure to qualify “epidemiologic studies.”  She paints with a broad brush, in a deliberate attempt to upend the generally accepted hierarchy of evidence.  Even a casual reading of the cases she cites, and the Reference Manual on Scientific Evidence, shows that the epidemiologic studies that are important to real intellectual due process are precisely the ones that have appropriate controls.  Most of the world, even if not EBM, thinks of analytic epidemiologic studies when comparing and contrasting with animal studies.

EBM offers no support for the asserted difficulty in designing and analyzing epidemiologic studies.  Is she making a personal, subjective declaration of her own difficulties?  The difficulties of judges and lawyers?  Or the difficulties of expert witnesses themselves?  To be sure, some lawyers have such difficulties, but they may have made a good career choice in going to law school rather than medical school.  (Perhaps they would do better yet in real estate litigation rather than in torts.)  Many physicians have “difficulty in designing and analyzing such studies,” but that is because these activities are outside the scope of their expertise, and until recently study design and analysis were rarely taught in medical schools.  In my experience, these activities have not been beyond the abilities of appropriately qualified expert witnesses, whether engaged by plaintiffs or defendants in civil litigation.

As for the “costliness” of epidemiologic studies, many studies can be conducted expeditiously and inexpensively.  Case-control studies can often be done relatively quickly and easily because they work from identified cases back to past exposures.  Cohort studies can often be assembled from administrative medical databases maintained for other purposes.  In the United States, such databases are harder to find, but several exist as a result of Medicare, VA, National Center for Health Statistics, and other managed care programs.  In Scandinavia, the entire countries of Sweden and Denmark are ongoing epidemiologic studies because of their national healthcare systems.  Cohort and case-control studies have been quickly and inexpensively set up to study many important public health issues, ranging from MMR vaccines, thimerosal, and autism, to abortion and breast cancer, to welding and parkinsonism.  See, e.g., Lone Frank, “Epidemiology: When an Entire Country Is a Cohort,” 287 Science 2398-2399 (2000).  Plaintiffs’ counsel, often with more money at their disposal than the companies they sue, have organized and funded any number of epidemiologic studies.  EBM’s attempted excuses and justifications of why animal studies are “superior” to epidemiology fail.

Perhaps we should take a moment to have a small reality check:

Would we accept an FDA decision that approved a drug that was safe and efficacious in rats, without insisting on a clinical trial in human beings?  How many drugs show great therapeutic promise in animal models only to fail on safety or efficacy, or both, when tested in humans?  I believe that the answers are: “no,” and “sadly, too many.”

4. “Clinical double-blind studies are rarely, if ever, available for litigation purposes.”  Id. at 69.

EBM again cites no support for this assertion, and she is plainly wrong.  Clinical trials have been important sources of evidence relied upon by both plaintiffs’ and defendants’ expert witnesses in pharmaceutical litigation, which makes up a large, increasing portion of all products liability litigation.  Even in cases involving occupational or environmental exposures, for which randomization would be impractical or unethical, double-blinded human clinical studies of toxicokinetics, or metabolic distribution and fate, are often important to both sides involved in litigating claims of personal injury.

5.  “[B]ecause there are so few good epidemiologic studies available, animal studies are often the primary source of information regarding the impact of chemicals.”  Id. at 73.

The field of occupational and environmental epidemiology is perhaps a half a century old, with high quality studies addressing many if not most of the chemicals that are involved in important personal injury litigations.  EBM’s claims about the prevalence of “good” studies, as well as the implicit claim about what proportion of lawsuits involve chemicals for which there exists no epidemiologic data, are themselves devoid of any empirical support.

6.  “[S]cientific conclusions are couched in tentative phrases. ‘Association’ is preferred to ‘causation.’ Thus, failing to understand that causation, like other hypotheses, can never be proven true, courts may reject as unreliable even evidence that easily meets scientific criteria for validity.”  Id. at 55 (citing Hall, Havner, and Wright).

EBM writes that scientists prefer “association” to “causation,” but the law insists upon causation.  EBM fails to recognize that these are two separate, distinct concepts, and not a mere semantic preference for the intellectually timid.  An opinion about association is not an opinion about causation.  Scientists prefer to speak of association when the criteria for validly inferring the causal conclusion are not met; this preference thus has important epistemic implications for courts that must ensure that opinions are really reliable and relevant.  EBM sets up a straw man – hypotheses can never be proven to be true – in order to advocate for the acceptance and the admissibility of hypotheses masquerading as conclusions.  The fact is that, notwithstanding the mechanics of hypothesis testing, many hypotheses come to be accepted as scientific fact.  Indeed, EBM’s talk here, and elsewhere, of opinions “proven” or “demonstrated” to be true is a sloppy incorporation of mathematical language that is best avoided in evaluating empirical scientific claims.  Scientific findings are “shown” or “inferred,” not demonstrated.  Not all opinions stall at the “association” stage; many justifiably move to opinions about causation.  The hesitancy of scientists to assert that an association is causal usually means that they, like judges who are conscientious about their gatekeeping duties, recognize that there is an unacceptable error rate from indiscriminately treating all associations as causation.

(To Be Continued)

The “In” Thing To Do

November 11th, 2010

Most parents have confronted their children’s insistence on doing or having something based upon the popularity of that something, but we would not expect such behavior from scientists.

Or should we?

In this era of bashing authors for having taken a shekel or two from industry to support their work, how are we to identify and evaluate other non-financial biases that afflict science?  Anti-industry zealots write as though money were the only error-inducing incentive at play in the scientific arena, but they are wrong.  In addition to vanity, egotism, grant-mania, prestige, and academic advancement, scientists are subject to “group think”; they are prone to advancing scientific conclusions that are the “in thing” to espouse.  Call it herd-think, or Zeitgeist, or the occupational medicine mafia; error creeps in when scientists reach and defend conclusions because those conclusions are the “in” thing.

Finding examples and admissions of scientists falling into error to align themselves with popular voices on controversial issues is, however, not easy.  One of my favorites was reported in the context of the federal government’s predictions of the United States’ cancer toll expected from occupational use of asbestos.  In 1978, then Secretary of Health Education and Welfare, Joseph Califano, announced the results of a report, prepared by scientists at the National Cancer Institute, the NIEHS, and the NIOSH, which predicted that 17 percent of all future cancers would be caused by asbestos.  This prediction was based largely upon the work of Dr Irving Selikoff and colleagues, who studied heavily exposed asbestos insulators and factory workers.  Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Instit. 560, 560 (1984).

Within a few years of the report, the scientific community realized that it had been duped.  How did so many high-level governmental scientists fall into error?  Selikoff’s prestige was great.  (Califano’s speech occurred well before the scandal of Selikoff’s infamous seminar organized by plaintiffs’ lawyers to showcase plaintiffs’ expert witnesses for the “benefit” of key state and federal judges.  See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As to Preserve ‘The Appearance of Justice’ Under 28 U.S.C. 455: In re School Asbestos Litigation (1992),” 38 Vill. L. Rev. 1219 (1993).)  Scientists, however, should be evidence-based people, and not make important public pronouncements, likely to generate widespread public fear and concern, on the basis of someone’s prestige.  There was more to this error than the charm and reputation of Irving Selikoff.

By the time of the Califano report, the misdeeds of Johns-Manville had become well known among the scientific community.  Little attention was paid to the role of the U.S. government in promoting the use of asbestos, and to its failure to warn and to provide safe workplaces in its naval shipyards.  The imbalance in reporting led scientists to enjoy a “feel good” attitude about reaching conclusions that exaggerated and distorted the scientific data to the detriment of the so-called “asbestos industry.”  In 1992, the Journal of the National Cancer Institute reported the phenomenon as follows:

“Enterline [an epidemiologist who published several studies on asbestos factory workers and who interviewed for the story] said the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds.

‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement,’ he said. ‘At the time it looked like you were wearing a white hat if you made these wild estimates.  But I wasn’t sure whoever did that was doing all that much good’.”

Tom Reynolds, supra at 562.  The “in” thing to exaggerate; who would have thought that scientists ever did that, much less acknowledged it?  The Califano report caught its scientist authors red-handed, and there was not much they could do about it.  The report’s predictions were debunked by leading scientists, and the report’s authors confessed to having tortured the data.  R. Doll & R. Peto, “The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today,” 66 J. Nat’l Cancer Instit. 1191 – 1308 (1981).

What is memorable about the incident is that the report was motivated by the desire to “wear a white hat,” not by lucre.  The lesson is that the current focus on “conflicts of interest” is little better than an excuse for ad hominem attacks, and that everyone would be better off if the focus were on the evidence.

Causation, Torts, and Epidemiologic Evidence

November 6th, 2010

Tort law writers naturally focus on the changes in tort doctrine, such as the advent of strict liability and the demise of privity of contract, as catalysts for the development of mass tort law. There were, however, causes external to the world of tort law itself. One significant cause was the development of epidemiologic evidence and the acceptance of stochastic concepts of causation. Epidemiology is built upon statistical and probabilistic thinking, and the law struggled to accept such thinking in place of the comfortable mechanistic approaches to causation that dominated the law in the 19th and early 20th centuries. Although one can find examples of epidemiologic studies in the medical literature before 1940, the discipline of epidemiology was poorly developed, both in terms of its statistical tools, and in terms of its acceptance as a legitimate scientific approach, until after World War II. The U.S. Surgeon General’s acceptance of tobacco smoking as a cause of lung cancer, in the mid-1960’s, without any clear mechanistic model of causation, was a major turning point for both epidemiologic science and the law. Interestingly, this turning point occurred at the same time that the American Law Institute accepted the “strict liability” concept of tort liability for harms caused by defects in consumer products.

The epidemiologic study is a relative newcomer to the law of evidence, and many courts, commentators, and lawyers still talk of the admissibility (vel non) of such a study. Such talk is imprecise and inaccurate; rarely will a study itself be admissible. A typical observational epidemiologic study (or for that matter, a randomized clinical trial) involves many levels of hearsay, including statements of study participants, statements of the study investigators to the participants to elicit their self-reported symptoms and diagnoses, statements and conclusions of the investigators who assessed and characterized exposure and health outcomes, statements and conclusions of investigators who collected, analyzed, and reported the study data, the statements of the peer reviewers and editors who called for changes in how the study would be reported, and so on, and so forth.

Perhaps the initial layer of hearsay from study participants could be considered admissible under Rule 803(4), which creates an exception for:

“Statements for purposes of medical diagnosis or treatment.—
Statements made for purposes of medical diagnosis or
treatment and describing medical history, or past or present
symptoms, pain, or sensations, or the inception or general
character of the cause or external source thereof insofar as
reasonably pertinent to diagnosis or treatment.”

Statements made by study participants to study investigators, however, are typically made for neither diagnosis nor treatment. In a case-control study, for instance, the cases are already diagnosed, and the purpose of the study is not treatment. The control participants are selected because they have no diagnosis, and they certainly are not in need of any treatment. Rule 803(4) seems not to fit.

Perhaps Rule 803(6) would permit the records in the form of questionnaires, laboratory reports, and exposure assessments to be admitted as business records:

“Records of regularly conducted activity.—A memorandum,
report, record, or data compilation, in any form, of acts,
events, conditions, opinions, or diagnoses, made at or near the
time by, or from information transmitted by, a person with
knowledge, if kept in the course of a regularly conducted business
activity, and if it was the regular practice of that business
activity to make the memorandum, report, record or data
compilation, all as shown by the testimony of the custodian or
other qualified witness, or by certification that complies with
Rule 902(11), Rule 902(12), or a statute permitting certification,
unless the source of information or the method or circumstances
of preparation indicate lack of trustworthiness. The
term ‘business’ as used in this paragraph includes business,
institution, association, profession, occupation, and calling of
every kind, whether or not conducted for profit.”

Even if one or another layer of hearsay could be removed, it is difficult to imagine that all the layers could fit into exceptions that would allow the study itself to be admitted. Furthermore, even if the study were admitted, its language and statistical analyses could not be appropriately used as direct evidence without the explanatory input of an expert witness. Epidemiologic evidence is thus virtually never admissible evidence at all, but rather part of the “facts and data” upon which expert witnesses have relied to formulate their opinions. Epidemiologic studies are those otherwise inadmissible materials considered by expert witnesses, and because they are themselves largely inadmissible, Rules 703 and 705 govern whether, how, and when such studies will be disclosed to the trier of fact.

It is interesting to consider the admissibility of another research and investigatory tool, the survey, to help us understand the law’s consideration of epidemiologic studies. One frequently cited case provides a useful history and summary of what would be required to admit a survey and its results, when offered for the truth:

“The trustworthiness of surveys depends upon foundation evidence that
(1) the “universe” was properly defined,
(2) a representative sample of that universe was selected,
(3) the questions to be asked of interviewees were framed in a clear, precise and non-leading manner,
(4) sound interview procedures were followed by competent interviewers who had no knowledge of the litigation or the purpose for which the survey was conducted,
(5) the data gathered was accurately reported,
(6) the data was analyzed in accordance with accepted statistical principles and
(7) objectivity of the entire process was assured.”

Toys “R” Us, Inc. v. Canarsie Kiddie Shop, Inc., 559 F. Supp. 1189, 1205 (E.D.N.Y. 1983). The court noted that the foundation for admitting survey evidence should normally include the testimony of the survey director, supervisor, interviewers, and interviewees. The absence of one or more of the seven identified indicia of survey trustworthiness may require the exclusion of the survey. Id. (excluding the survey at issue for failing to satisfy the indicia of trustworthiness).

A further important lesson of Toys “R” Us is that an expert witness cannot save an otherwise untrustworthy survey under the cloak of Rule 703. The court, having excluded the survey as direct evidence, went on to exclude the expert witness opinion testimony based upon the survey. Id. (relying upon Rule 703 and case law and treatises interpreting this rule).

This case holds an important lesson for lawyers litigating mass tort cases that turn on epidemiologic evidence of harm. If a study is untrustworthy in the light of the seven Toys “R” Us criteria, then the study may not be sufficiently reliable for an expert witness to rely upon under Rule 703. A further lesson is that many of the criteria can be answered only by accessing the underlying data and materials from the study. Many lawyers seem to have lost track of the importance of Rule 703, after the Supreme Court placed its reliance upon Rule 702 to support a requirement of expert witness gatekeeping. Rule 702, however, addresses the “sufficiency” of facts and data, and the reliability of methods and principles, not the reasonableness of reliance upon the underlying facts and data themselves.

Haacking at the Truth — Part Two

October 31st, 2010

Part Two.  (Professor Haack presents six “irreconcilable differences” between science and the law.  In the first part, I looked at the first three of these six differences.  The remaining three are discussed below.)

* * * *

(iv) “Because of its adversarial character, the legal system tends to draw in as witnesses scientists who are in a sense marginal – more willing than most of their colleagues to give an opinion on the basis of less-than-overwhelming evidence; moreover, the more often he serves as an expert witness, the more unbudgeably confident a scientist may become in his opinion.”  Id. at 16.

Haack’s point appears unexceptional, although in my experience defendants typically cannot risk sponsoring “marginal” witnesses.  Plaintiffs’ counsel, however, do sponsor marginal witnesses because they know that the jury system gives them a sympathy boost from the emotions aroused in a serious injury case.

Haack provides examples of “marginal” science and witnesses that are disturbing for the biases and prejudices that she exhibits.  Haack focuses upon Dr Robert Brent, a toxicologist, who seems to pop into her mind as Merrell Dow’s expert witness “always ready to testify that Bendectin does not cause birth defects.”  Id. at 17.  Really?  Haack presents no evidence or suggestion that Brent was wrong, and indeed, Brent published widely on his views of the subject.  Wide publication does not necessarily mean Brent was right, but at least he was willing to subject himself to professional peer review, and post-publication, professional challenges.  Still, Haack is distressed that Dr Robert Brent opines with “unwarranted certainty” that Bendectin does not cause birth defects, but she offers no suggestion or support that his certainty was or is misplaced.

In stark contrast, Haack expresses no discomfort with Bendectin plaintiffs’ expert witness, Dr Done, and with the facile ease with which he opined with certainty that Bendectin does cause birth defects.  Here there really is a great deal of empirical evidence, and it has largely vindicated Dr Brent’s views on the safety and efficacy of Merrell Dow’s medication.  Dr Done’s subjective appreciation of “flaws” in some clinical trials does not transmute criticism into affirmative evidence in favor of the opinion that he so zealously, and overzealously, advocated in many Bendectin cases, for his own substantial pecuniary benefit.  What is remarkable about Haack’s article is that she singles out Dr Brent in the context of a discussion of “marginal” and “willing” testifying scientists, but she omits any mention of the plaintiffs’ cadre of ready, willing, and somewhat disreputable testifiers.  Perhaps even more remarkable is that Haack overlooks that Dr Done was essentially fired from his university for his entrepreneurial testimonial activities of dubious scientific worth, and that he may well have lied about his credentials.  See M. Green, Bendectin and Birth Defects:  The Challenges of Mass Toxic Substances Litigation 280 – 82 (Philadelphia 1996) (citing decisional law in which Done’s lack of veracity was judicially noted).

Haack offers the silicone breast implant litigation as another example of legal proceedings that may have been based upon adversarial posturing, but she equivocates by suggesting that the litigation may have been based upon a “(mis?)perception.”  Id. at 17. Haack’s question mark is telling.  Was the public’s (mis?)perception that silicone implants caused connective tissue diseases “generated in part by the legal system”? 

Here Haack is reluctant to come to terms with the reality that the public really was misled by the legal system’s willingness to enter judgments upon verdicts for plaintiffs, based upon weak and bogus science.  These verdicts were returned, of course, before the spirit of Daubert helped cleanse the courtroom of the plaintiffs’ expert witnesses recently described by Judge Jack Weinstein as “medical charlatans”:

“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”

Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” Cardozo Law Review De Novo 14 (emphasis added).

Haack’s brief narrative also misses the true origins of the silicone controversy.  The misleading started with scientists who had genuine “enthusiasm” for the causal hypothesis and exuberant, perhaps all-too-human, but unscientific excitement that an exogenous cause for autoimmune disease had been discovered.  The press and the Sidney Wolfes of the world then stirred the pot before the plaintiffs’ lawyers pounced on such an enticing opportunity.  Dr Kessler’s moratorium at FDA ultimately forced plaintiffs’ counsel to file cases (if for no other reason than to protect the clients against the statute of limitations).  Between Dr Kessler’s moratorium and the pronouncements of the IOM and Judge Pointer’s panels, there were jury verdicts in favor of plaintiffs (and many in favor of defendants), all signifying the waste of tremendous resources.

Haack’s observation that law relies on adversarial procedure is hardly newsworthy, at least in common-law countries.  This reliance, however, is not a strongly distinguishing feature between law and science.  Haack expresses a concern that some of our scientific knowledge base is developed by industry, which, even in the communist world, is motivated by an adversarial spirit to capture markets and profits.  Money, however, is but one motive and inducement to adversity.  Surely, university professors are often locked in heated, adversarial disputes and debates over arcane scholarly issues.  Are full professorships, tenure, endowments, and funding mere bagatelles?  Sure, there are paeans to sharing data and collaborative scientific enterprises, but what is the empirical evidence that these lofty sentiments are followed in practice?  Perhaps most persuasive is the testimony of scientists themselves, who acknowledge the presence and value of adversity in science:

“[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, need a few good enemies!”

George Olah, “My Search for Carbocations and Their Role in Chemistry,” Nobel Lecture (Dec. 8, 1994), quoting Georg von Békésy (Nobel Prize winner in medicine, 1961).

The differentiation between law and science in terms of adversity fails.  Indeed, Haack herself notes an “erosion in the ethos of science,” id. at 9 n.54, and notes that scientists, like all human beings, sometimes act from mixed or dubious motives.  Id. at 9.  This concession alone is enough to support the legal procedures of expert witness opinion gatekeeping.

(v) “Legal rules can make it impossible to bring potentially useful scientific information to light; and the legal penchant for rules, “indicia,” and the like sometimes transmutes scientific subtleties into formulaic legal shibboleths.”  Id. at 18.

One of Haack’s concerns is that a scientific conclusion may be built from many different pieces of evidence, and that the Daubert process “atomizes” the overall evidence by looking at one witness’s opinions at a time.  She points out that a conclusion may be based upon toxicology, epidemiology, or clinical medicine, none of which is alone sufficient to warrant a causal conclusion.  Id. at 18.  This concern is rarely realized in practice because the witnesses always remain in control of their opinions.  They need not articulate the “bottom-line” conclusion; they can limit their opinion to a foundational opinion, which another expert witness will incorporate into a conclusion.  Once a witness, however, voices the ultimate causal conclusion, that witness will have to identify all the pieces and lines of supporting evidence.  The Daubert process can then proceed to ask whether the epistemic warrant is present in that witness’s opinion.  Haack’s misplaced concern appears to arise out of her unfamiliarity with how expert witness opinions are tendered, challenged, and reviewed.

Haack’s concern also ignores that, regardless of the possibility of “interlocking pieces of evidence,” sometimes the evidence does not cohere sufficiently to warrant the conclusion.  “Science is built up with facts, as a house is with stones.  But a collection of facts is no more a science than a heap of stones is a house.”  Jules Henri Poincaré, La Science et l’hypothèse (1905).

(vi) “Both because of its concern for precedent, and because of the desideratum of finality, the legal system has a tendency to inertia, and sometimes lags behind science.”  Id. at 20.

Like many naïve commentators, Haack seems perplexed and perhaps disturbed that the Daubert decision, which held that Rule 702 did not incorporate the Frye rule, led to more restrictive judicial gatekeeping of scientific opinion testimony.  Id. at 6.  Haack simply is unaware of the judicial legacy of Frye, which led to exclusion of evidence based upon novel scientific devices, but which was often feckless against expert witnesses who outran their headlights.  Before Daubert, there often was no standard; the epistemic anarchy of Ferebee ordained that expert witnesses had free rein to reek and wreak.

Of course, “law lags science,” and for the good reasons explained by Judge Posner.  Rosen v. Ciba Geigy Corp., 78 F.3d 316, 319 (7th Cir. 1996).

Curiously, Haack expresses dismay that law is overly concerned with precedent, but at the same time notes that the admission and exclusion of expert witness opinion testimony is reviewed for abuse of discretion, and that in some instances, courts could rule either way and still be sustained on appeal.

Haack’s writings on science recognize that the core activity of science is inquiry, which is judged pragmatically as successful or not in terms of whether its answers have predictive and explanatory power.  For Haack, the legal system “could hardly be more different, with its emphasis on adversarial procedure, promptness and finality, case-specific fact finding, precedent, and policy considerations.”  Id. at 12.  As discussed above, Haack overstates and misstates the differences.

The key to Haack’s conception of the legal side of the “marriage” is her insistence that the legal system exists to resolve disputes by making determinations of liability, not to find out whether a defendant is really liable.  Id.  There are certainly judges who, unduly impressed with their own procedural efficiency and unconcerned with the truth-finding function of trials, would agree with Haack’s rhetoric, but many judges, lawyers, and scholars would disagree.  A trial is a search for the truth, even if conducted under time and procedural constraints.  The legal system suffers when judgments in court turn on scientific findings that diverge too much from good scientific practice.  This is the ultimate provenance of, and lesson from, the Supreme Court’s decision in Daubert.

Haacking at the Truth – Part One

October 31st, 2010

Professor Haack is a professor of philosophy and of law at the University of Miami, in Florida.  She has written widely on the philosophy of science, in the spirit of pragmatism and C.S. Peirce.  Much of what she has written has been a useful corrective to formalistic writings on “the scientific method,” and is worthy of study by lawyers interested in the intersection of science and the law.  We lawyers need to develop a better (more accurate, both in explaining and in predicting) theory of what science is, in order to accommodate our procedural rules to scientific inquiry.  Haack’s writings on science are helpful in that effort.

A recent article by Professor Haack provides a helpful précis of her views on science in the courtroom, but it also reveals robust biases and prejudices that should raise red flags about her objectivity in commenting on the legal process.  Susan Haack, “Irreconcilable Differences?  The Troubled Marriage of Science and Law,” 72 Law & Contemporary Problems 1 (2009).

Haack’s paper grew out of a presentation at the Fourth Coronado Conference, organized by SKAPP (The Project on Scientific Knowledge and Public Policy).  Haack provides no information about the provenance of SKAPP, and the funding sources for SKAPP have been suppressed by its principals.  At the time of the Conference, SKAPP was headed by Dr David Michaels, who was a hired expert witness for plaintiffs’ counsel in tort litigation.  Dr Michaels is now the head of OSHA.  Michaels founded SKAPP with funding by plaintiffs’ counsel from monies left over from the silicone gel breast implant litigation. 

Ironically, the litigation shut down by Daubert has given rise to enough “left over” walking-around money to fund anti-Daubert writings and activities.  As in most multi-district litigations, the plaintiffs’ counsel set up a common benefit fund, which received a fixed percentage of every settlement, ostensibly to cover the costs of developing the plaintiffs’ case against the defendants.  Perversely, plaintiffs’ counsel have sufficient money on their hands, years after the silicone litigation is over, to fund conferences that help develop their case against the sort of gatekeeping that shut down their litigation machine.  People who have taken SKAPP money might ask why the money has not been distributed to claimants.

SKAPP’s hostility to expert witness gatekeeping is fairly obvious, even if it had not been funded by the lawyers who sponsored such dubious evidence in the implant litigation.  See SKAPP A LOT (posted April 30, 2010).  I am not suggesting that Haack’s paper was slanted to please the behind-the-scenes financial sponsors of the Coronado Conference.  I am, however, suggesting that the money went to SKAPP because of its ideological proclivities, and that Haack may well have been selected, in part, because of her anti-industry views.  The drumbeat for transparency and disclosures from authors affiliated with industry should sound just as loudly for authors who want to speak out against that industry.  One can only imagine the hue and cry if a scholarly conference had been funded by an organization that had in turn been set up by, say, the tobacco industry.

In keeping with SKAPP’s priorities, Haack does not like the Daubert decision or its incorporation into statutory law by Federal Rule of Evidence 702.  Haack is critical of courts for excluding expert witnesses from testifying.  She anguishes for witnesses who have been “dauberted out,” and warns us that the consequences for such witnesses can be serious, and even disastrous.  Id. at 7 & n.48.  There is no such sympathy for the victims of unreliable expert witness testimony.

Haack sets out to characterize the differences between the scientific and legal enterprises, which make for a troubled “marriage.”  The sexual relational metaphor is Haack’s, and it fails.  Although Haack offers some important insights into science and scientific methodology, there are some significant problems, especially with her amateur marriage counseling.

Haack identifies six “irreconcilable differences” in a “not-so-tidy list.”  Id. at 15 – 21.  The differences ultimately, however, prove more insubstantial than Haack claims.

(i)  “Because its business is to resolve disputed issues, the law very often calls on those fields of science where the pressure of commercial interest is most severe.”  Id. at 15. 

True that, but Haack illustrates the pressure only with examples of industries that conduct research “for marketing purposes” or “with an eye to protecting itself against litigation.”  Id. Haack, for instance, gives an example of this difference in the form of Merck’s clinical trials of Vioxx.  Surely anyone familiar with the landscape of recent American tort law might think of examples from the claimants’ side.  The breast implant litigation spawned fraudulent studies on immunogenicity of silicone, by plaintiffs’ expert witnesses, who hoped to commercialize test kits for “silicone sensitivity.” Fenfluramine litigation showcased collateral litigation for fraud by plaintiffs’ counsel.  Silica litigation, based upon dubious medical screenings, resulted in fraudulent filings supported by fraudulent expert witness reports.  It would not have been difficult for Haack to find some examples of not just pressure, but criminal malfeasance, on the plaintiffs’ side of litigation, to illustrate where the “commercial interest is most severe.”  Perhaps Haack understood that the organizers of SKAPP wanted to keep the focus on industry.

(ii)  “Because the legal system aspires to resolve disputes promptly, the scientific questions to which it seeks answers will often be those for which the evidence is not yet in.”  Id. at 16.

This difference between science and law is real, although scientists themselves often overstate the certainty of their conclusions for which the evidence is not yet sufficiently complete.  This is hardly the stuff of an “irreconcilable difference,” because science, as Haack herself acknowledges, provides the reconciliation:

“Moreover, at any time there are many scientific questions to which there is no warranted answer, and to which scientists can only say, ‘at the moment, we just don’t know; we’re working on it, but we can’t tell you when we will have it figured out.’”

Id. at 12.  There are times that the law must await an answer as well. Expert witnesses are under no compulsion to offer an opinion that is not ready to be couched as a scientific conclusion.

(iii) “Because of its case-specificity, the legal system often demands answers of a kind science is not well-equipped to supply; for related reasons, the legal system constitutes virtually the entire market for certain fields of forensic science (or quasi-science), and for certain psychiatric specialties.” Id. at 16.

Although Haack’s characterization of the legal system’s demands is correct, she fails to explain why the legal system should countenance pseudo-science simply because real science is silent.  In looking at the Joiner litigation, which of course ended in the United States Supreme Court, Haack complains – no, whines – that specific causation of Mr. Joiner’s lung cancer was “an almost impossibly difficult question.”  Id. at 16, discussing General Elec. Co. v. Joiner, 522 U.S. 136, 139 – 40 (1997).  If so, then why should we allow a jury to speculate upon what is essentially a scientific issue?  Such speculative judgments are what led to the felt need for gatekeeping in the first place.

Haack goes on to complain further that the toxicity of PCBs is “well-established.”  Toxicity for what end point?  The Joiner case, however, involved challenges to the general causation question of lung cancer, on which the “well-established” toxicity of PCBs was quite irrelevant.  Good grief, water and oxygen are toxic at sufficiently large doses, but that does not mean we can attribute all diseases to them.

(to be continued)

The Thomas-Clown Affair

October 29th, 2010

All right; I know I said I wanted to write about the law of torts, but a tort is a civil wrong, and nothing could be more wrong than the behavior of the Thomas women.  First, Mrs. Thomas telephoned Anita Hill at work on a Saturday morning, and left her a voicemail message about how Hill should apologize for her testimony given almost two decades ago.  The news wires and blogosphere came alive; everyone asked what she was thinking.

Second, and even more bizarre than Mrs. Thomas’ lapse of judgment and civility, a former Thomas “girl friend,” Lillian McEwen, provoked even greater outrage.  After having “dumped” Clarence Thomas more than 20 years ago, Ms. McEwen decided the time had come to go public with her great personal insights, gleaned from the bedroom behavior of Thomas.  McEwen had no news of criminal activity, no news of abuse or mistreatment, no news of legal impropriety, and no personal observation of any event that could reconcile the contradictions in the Thomas-Hill testimony.  No, McEwen decided she could not go into senility without telling the world why she had dumped Thomas:

  • he gave up heavy drinking,
  • he became “asexual,”
  • he decided to get into better shape by running,
  • he wanted to have a serious relationship that involved cohabiting,
  • he decided that he needed to advance his career, and
  • he paid more attention to controlling and disciplining his son.

Wow!  To be sure, McEwen also told us that Thomas enjoyed pornography and women with ample mammaries, but that was while she was still enjoying Thomas.  The post-epiphany Thomas was much less fun, no doubt, for Ms. McEwen, who preferred the Thomas who “drank to excess,” who lacked ambition, who was unstable, who had trouble concentrating, and who lacked “intellectual curiosity.”  Tom Cohen, “Former girlfriend says Clarence Thomas was a binge drinker, porn user” <http://articles.cnn.com/2010-10-25/us/scotus.thomas.mcewen_1_anita-hill-pornography-binge-drinker?_s=PM:US>.

Perhaps Ms. McEwen’s loss is the Supreme Court’s gain.

This is the stuff of news wires and Larry King interviews.  Mr. King interviewed Ms. McEwen, and even he appeared to have been flummoxed by McEwen’s public revelation that a sober, hard-working, physically fit Thomas was unacceptable in the McEwen bedroom.  Talk about too much information!  Still, Mr. King failed at the cross-examination by avoiding any meaningful inquiry into the nature and basis for McEwen’s long-term relationship with Clarence Thomas, before his having cleaned up his act.  After all, McEwen had “opened the door,” as trial lawyers say, and she put her own peccadilloes, fetishes, and preferences into issue.  If she was willing to speak about Thomas, then surely fairness requires that she talk openly about herself as well.  After all, asexuality is in the eye of the beholder.  So many questions went unasked.  Inquiring minds want to know.

Mr. Justice Thomas declined comment, very appropriately.  I wish I had witnesses in my trials, who, like Lillian McEwen, impeached themselves so effectively and completely.  It is difficult to imagine that Ms. McEwen had been a prosecutor and administrative law judge, although it is only fair to consider that Ms. McEwen is accustomed to putting her “defendant” in the worst possible light.  The entire country now understands why then-Senator Biden controlled his own urges and did not call Ms. McEwen as a witness at the Thomas confirmation hearings.

Copycat – Further Thoughts on Plagiarism in the Law

October 24th, 2010

Lynn Gates, of Smith, Murphy & Schoepperle LLP, pointed me to a recent article in ABA Mobile about an Iowa lawyer who was reprimanded for submitting a brief in which large blocks of language and research were copied from a law review article.  Weiss, “Iowa Lawyer Reprimanded for Plagiarizing Bankruptcy Brief” (Oct 18, 2010).

The ABA article reports on a recent case, filed October 15, 2010, by the Supreme Court of Iowa, Iowa Supreme Court Attorney Disciplinary Board v. Cannon, which held that wholesale copying from a law review article was sanctionable and upheld a public reprimand of the offending lawyer for dishonesty or misrepresentation toward the court.

The gravamen of the complaint against the Iowa lawyer was that the lawyer copied large segments of a law review article into a brief he filed with the court.  The court preferred charges after finding that the lawyer’s briefs were of an “unusually high quality.”  The disciplinary board found that the lawyer’s conduct involved dishonesty and misrepresentation toward the court, and that the lawyer’s fee for writing the brief was unreasonable and excessive.  The lawyer had billed 25.5 hours for preparing the briefs.  The Grievance Commission agreed that the attorney had plagiarized, but not that the attorney had charged an excessive fee.  The Commission was apparently mollified by the lawyer’s having refunded his fee to the client, but nonetheless recommended a six-month suspension from practice. On review, the Iowa Supreme Court affirmed the finding of plagiarism, but concluded that a public reprimand was the appropriate sanction.

What are the lessons to be drawn from this interesting case?

First, and foremost, don’t plagiarize from well-written, well-researched law review articles.

Second, think twice about writing briefs of unusually high quality unless this is your regular practice and your clients can afford the quality.

Third, the board’s fixation on the excessive and unreasonable fee seems misplaced.  On appeal, the excessive fee charge evaporated in large measure because the lawyer had already refunded his fee.  There was no discussion whether the 25.5 hours billed was the actual time taken to find the relevant law review article, plagiarize it, and prepare the brief with the offending plagiarism.  If so, the time would have been honestly reported and not necessarily excessive.  We can only imagine how much more time the lawyer might have required to write the brief from scratch, and the client may well have benefitted substantially from the plagiarism.  The excessive fee charge seems not to fit the deed at all.

Fourth, copyright was never discussed.  Curiously, none of the tribunals involved appeared concerned about copyright infringement.  The Iowa Supreme Court never mentioned that passing off language and research as one’s own might be a copyright infringement, and that the lawyer should be sanctioned for violating federal copyright law. Surely the law review article that contributed such high quality to the lawyer’s brief had been infringed by the lawyer.

In my earlier post on plagiarism, I was not suggesting that law schools did not recognize and punish plagiarism.  Academic standards for plagiarism exist in law schools, although the definitions of plagiarism and the academic sanctions vary.  See Legal Writing Institute, “Law School Plagiarism v. Proper Attribution, A Publication of the Legal Writing Institute” (2003) (surveying law school policies and finding them often poorly defined, inconsistent, and contradictory); see also LeClercq, “Failure to Teach: Due Process and Law School Plagiarism,” 49 J. Leg. Educ. 236 (1999).

Plagiarism within law firms is another matter.  My own experience with “legal” plagiarism goes back to my work as a summer associate.  A partner for whom I was working asked me to research and write a manuscript on a topic of interest to him.  I told him that I would gladly do so, but that I expected to be noted as an author.  The partner told me that he would acknowledge my research contributions in a footnote, but that full authorship status was not appropriate for a student researcher.  The partner made it clear that the article would be for promotional purposes, and as a mere summer associate, my participation did not require authorship status.  I was stunned that the actual writer would not also be promoted as knowledgeable in the topic of the article, but my naiveté soon wore off.  I admit that my reaction was passive aggressive:  I put the research project at the end of my summer’s assignments, and somehow I never managed to get to do the research and writing for that partner.  In the long term, my reaction was more positive:  when I asked an associate to research a topic on which I wanted to write, I gave that associate authorship status if I used any part of the research or writing.  Still, I was surrounded by lawyers, and even some partners, who held out writings that were ghostwritten by associates, law clerks, interns, and the like.  There is a lot of such intellectual slavery in law firms.

Reasonable Degree of Medical Certainty

October 20th, 2010

The ritualistic words “reasonable degree of medical certainty” (RDMC) are intoned by medical expert witnesses in most state and federal courts.  In some liberal states, such as New Jersey, courts may dilute the typical formulation to require that expert witnesses opine with “reasonable degree of medical probability,” but the magic words are just as important.

Do the words have any meaning?

The words certainly have functional meaning in that their omission may lead to untoward consequences.  Although I have not seen many reported decisions on the issue, I have seen grown men cry when their adversaries pointed out that their expert witnesses failed to utter the magic words, and their trial judges seriously pondered striking the unadorned testimony.  In one case, my adversary begged me for a stipulation because his witness had failed to use the magic words, and had already fled the jurisdiction.  Because I (correctly) believed that the trial judge was going to grant a directed verdict on another ground, I cheerfully agreed to the stipulation that the witness, if he had been asked, would have stated that his opinions were all held and expressed to a RDMC.

Do the words have actual meaning besides the operational significance of being required by law?

David Faigman, a truly distinguished professor at the University of California Hastings College of Law, writes that the use of these words is an empty formalism.  The expression used in conjunction with a claim that X causes Y, or that X causes this particular case of Y, “has no empirical meaning and is simply a mantra repeated by experts for purposes of legal decision makers who similarly have no idea what it means.”  Faigman, “Evidentiary Incommensurability:  A Preliminary Exploration of the Problem of Reasoning from General Scientific Data to Individualized Decision-Making,” 75 Brooklyn Law Review 1115, 1134 (2010).  Faigman goes on to note that “less extreme versions” of RDMC attached to propositions about the causation of individual events are objectionable as well.  Faigman appears to take aim at both the RDMC qualifier and the assertion of some empirical propositions that are qualified by it.

In part, Professor Faigman’s concern about the lack of “empirical meaning” for some statements of individual causation is well taken.  He asks, for example, how can a witness say “more likely than not” that a given instance of cross-race identification is inaccurate.  “Experts’ case-specific conclusions appear to be based largely on an admixture of an unknown combination of knowledge of the subject, experience over the years, commitment to the client or cause, intuition, and blind-faith.  Science it is not.”  Id. at 1134 – 35.  Faigman gives other examples of the problem in the context of specific medical causation in personal injury cases, which illustrate that clinical training and practice often provide no basis for reliable attribution of causation in particular cases.  Id. at 1132 (“the core nature of clinical practice is at right angles to the crux of most legal inquiries”); id. at 1133 & n.45 (citing Henricksen v. Conoco-Phillips Co., 605 F.Supp. 2d 1142 (E.D. Wash. 2009) for the proposition that differential etiology is useless when there is a large percentage of idiopathic cases and no discriminating feature of toxic causation in plaintiff’s case).

To the extent that Faigman has identified an embarrassing “lacuna” in the use of scientific evidence in courtrooms, his article is, as his articles usually are, an astute commentary on the sad state of how science is applied in courtrooms.  Faigman, and a few other academic lawyers, have been willing to point to the naked judges and juries and boldly note that they are without clothes.

But is Faigman correct that the expression, RDMC, “is simply a mantra repeated by experts for purposes of legal decision makers who similarly have no idea what it means”?  Id. at 1134.

Faigman’s critique of RDMC appears to be aimed at expert witnesses who will utter the phrase (and at courts that will superficially accept the utterance) without understanding the phrase, or perhaps not really meaning or caring what they say.  See generally H. Frankfurt, On Bullshit (2005) (passim).  Surely, however, the phrase is not semantically empty.  “Certainty” has clear epistemic connotations and implications for the witness’s opinion, both in terms of his own state of mind, and in terms of the empirical support the witness has for his opinion in the form of reasonably relied upon data, and sound inferences to a reliable conclusion.  Subjectively, the witness who utters the phrase acknowledges that he is not speculating and that he believes that his opinion satisfies professional standards for claims of knowledge.  A witness who qualifies his opinion with these “magic words” communicates his willingness to put his professional reputation on the line, and to defend the opinion before his peers.  Objectively, the phrase conveys the notion of reliable knowledge.  To be sure, human beings may not enjoy “certainty” in their knowledge of empirical propositions, but the “reasonable” qualifier makes the entire phrase meaningful and important.  Even if judges and lawyers were to take the phrase as empty (because they are inured to bullshit in this setting), jurors are likely to take it as having a plain language meaning that adds epistemic and personal “heft” to the opinion.

Furthermore, Faigman’s comment about RDMC is inaccurate in some states that take the utterance very seriously.  In Pennsylvania, for instance:

“the expert has to testify, not that the condition of claimant might have, or even probably did, come from the accident, but that in his professional opinion the result in question came from the cause alleged. A less direct expression of opinion falls below the required standard of proof and does not constitute legally competent evidence.”

Menarde v. Philadelphia Transportation Co., 376 Pa. 497, 103 A.2d 681, 684 (1954).   This “formalistic” requirement in Pennsylvania is particularly important because the appellate courts have seriously eroded the gatekeeping function under Pennsylvania Rule of Evidence 702.  The epistemic requirements of RDMC are thus, for the time being, the only way to ensure that science adequately informs the verdicts and judgments of Pennsylvania courts.

Professor Faigman’s article raises an additional, “case-specific” concern.  For reasons that are unclear, Faigman uses the connection between asbestos and mesothelioma to serve as an example of an outcome that has a unique cause:

“An example of this is the relationship between asbestos exposure and mesothelioma. The unique cause of mesothelioma is exposure to asbestos, but not everyone exposed to asbestos develops mesothelioma.”  Id. at 1120.

and

“In the example of mesothelioma, a civil plaintiff who has this disease will be able to trace it back to asbestos exposure.”  Id. at 1121.

Not really.  Faigman offers no support for these startling assertions, and they are wrong.  Mesothelioma is known to be caused by erionite, a non-asbestos zeolite mineral, and the disease is probably caused by radiation as well.  Young adult cases among survivors of childhood Wilms’ tumor have been frequently described (after therapeutic radiation).  There is much that is known and unknown about mesothelioma causation.  Some forms of asbestos clearly cause mesothelioma, but there are few competent experts who will say, with RDMC, that all cases of mesothelioma are caused by asbestos.

Plagiarism in the Law

October 16th, 2010

Plagiarism is a serious academic sin.  Back in the day, my junior high teachers instilled a fear of this sin, and its dire consequences, in me.  Given that I had abandoned a religious worldview long ago, the Purple “P” was a much worse branding than the Scarlet “A,” for anyone who lives by the written word.

The Chronicle of Higher Education reported a story yesterday about a graduate student’s outing of a professor’s apparent plagiarism at Rutgers University (at one of its satellite campuses in Newark, New Jersey). See Bartlett, “Alan Sokal, the 1996 Hoaxer, Takes Aim at an Accused Plagiarist at Rutgers.” http://chronicle.com/article/Alan-Sokal-Takes-Aim-at-an/124969/

The protagonists in this morality play are Mr. Frank Fischer, a professor of political science at Rutgers, and Mr. Kresimir Petkovic, a graduate student in the field of political science.  Petkovic submitted an article to Critical Policy Studies; the paper was critical of Fischer’s work.  Fischer, an award-winning scholar, is an editor of Critical Policy Studies.  As you might imagine, Mr. Petkovic’s article did not fare too well.  According to Bartlett’s account, the journal initially told Petkovic that the paper might be published along with a response from Fischer.  Ultimately, Petkovic’s paper was rejected.

The rejection led Petkovic to investigate, perhaps peevishly, whether Fischer’s scholarly work, the subject of his critique, was original. With the advent of electronic search engines, and software for comparing documents, the process of identifying plagiarism has been simplified. Thinking that he had found “pay dirt,” Petkovic sought out help from the well-known debunker of social constructivism, Alan Sokal, who offered to help in the investigation. Fischer threatened to sue, but the Chronicle apparently took it upon itself to publish the Petkovic-Sokal report on Fischer’s work as a linked document to Bartlett’s article. http://chronicle.com/items/biz/pdf/plagiarism_fischer.pdf  Fischer defended himself against the charges of plagiarism by interposing a plea of mere sloppiness.

There are several interesting lessons from the Fischer-Petkovic affair.

First, the Fischer affair illustrates some of the failings of peer review.  It is a system run by human beings, and peer review is only as strong as the integrity of not just the reviewers, but of the editors as well.  Even if the peer reviewers were selected in a fair manner, they were selected by the editors of the journal conducting the review.  The reviewers may well be part of the clique that is being critiqued, and even if not, they are likely reviewers because they want to keep the option of someday publishing their work in the journal in question.  This does not seem like a good system to provide unbiased review, with meritorious inclusion and exclusion decisions.

This process surely takes place in medical publishing as well, where editorial boards and their friends are often possessed by various “enthusiasms” for and against certain lines of research.  There is an awful lot of “political” science in medicine, as well.  For parties who litigate medico-scientific issues, this weakness in peer review is often problematic.

Second, the Fischer affair illustrates the existence of a certain inbred arrogance among intellectual groups.  Fischer is an award-winning scholar in his circle.  Many academic intellectual circles are very “tight,” and they seem not to care about what those external to the circle think.  This phenomenon was seen in the 2005 award of the Sedgwick Memorial Medal by the American Public Health Association to Barry S. Levy.  The Sedgwick award is meant to recognize outstanding achievements in public health.  Shortly before receiving the award, Levy was awarded other epithets from a federal district judge.  In re Silica Products Liability Litigation, 398 F. Supp. 2d 563 (S.D. Tex. 2005) (expressing particular disappointment with Dr. Levy, who although not the worst offender of a bad lot of physicians, betrayed his “sterling credentials” in a questionable enterprise to manufacture diagnoses of silicosis for litigation).  See also Schachtman, Silica Litigation: Screening, Scheming & Suing, Washington Legal Foundation Critical Legal Issues Working Paper Series No. 135 (Dec. 2005) (exploring the ethical and legal implications of the entrepreneurial litigation in which Levy and others were so heavily involved); available at http://www.wlf.org/upload/1205WPSchachtman.pdf.  The Fischer affair is a reminder that qualifications do not substitute for indicia of reliability or integrity.

Third, the Fischer matter raises the interesting question for lawyers as to what is the permissible limit of plagiarism in the law? The law is built upon slavishly following what someone else did in the same or similar situation previously. That is “precedent.” Still, we would expect judges to attribute specific language to others when they use that language verbatim. Lawyers for litigants, however, may be all-too-happy to see their language in briefs appropriated wholesale by judges in their cases.

And what constraints operate upon lawyers themselves? Can they take, without attribution, language from another brief, for use in their most current case? Recently, I had the experience of circulating a draft appellate brief to my codefendants’ counsel for their review. My hope in doing so was to avoid unnecessary conflicts in our written submissions to the appellate court. Given the press of deadlines, I did not make much of not having my codefendants’ counsel return the favor in allowing me to see her draft brief. So imagine my consternation when I saw my codefendant’s brief, which used entire pages out of my brief! There appears to be no ethical canon, principle, or rule to address this issue.  Perhaps there should be.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.