TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Hypocrisy In Conflict Disclosure Rules

November 30th, 2010

In 2005, Sheldon Samuels, an advocate for the international labor movement, presented a paper at the annual meeting of the American Public Health Association, in which he lauded the late Irving Selikoff as “labor’s doctor.”  See “Sheldon Samuels, Irving Selikoff: The Legacy of Labor’s Doctor,” http://apha.confex.com/apha/133am/techprogram/paper_120225.htm

There is nothing particularly remarkable about this hagiographic effort, other than Samuels’ disclosure:

“I wish to disclose that I have NO financial interests or other relationship with the manufactures [sic] of commercial products, suppliers of commercial services or commercial supporters.”

Id. (emphasis in the original).  Presumably the organizers of the APHA thought this was a serious, sufficient disclosure of potential conflicts of interest.  Samuels obviously believed that the only conflicts of interest were financial ones, or relationships with manufacturers.  Samuels’ lifelong affiliation with organized labor and his advocacy for labor’s causes did not register, in his mind or in the minds of the APHA leadership, as a potential conflict.

Samuels and the APHA leaders probably did not believe that labeling Selikoff as “labor’s doctor” was at all pejorative.  They would have had something very different in mind if they had labeled someone as “industry’s doctor.”

To some extent, perhaps the asymmetry is superficially justified in that organized labor will have the health and welfare of its membership as a high priority.  Historically, however, organized labor has traded known occupational hazards for better pay, sometimes candidly called “dirty money.”  Industry might not appear to be as concerned with workers’ health, but few industries want the reputational damage of being seen as callous or indifferent to their workers.  In the end, the caricature of industry as only concerned about “profits” is as false and defamatory as the caricature of labor as only concerned about wages.  Somewhere in the caricaturing, the interest in craftsmanship and in selling products or services of value to people is lost.  Workers are more than their paychecks, and employers are more than their profits.

But to return to the conflict of interest issue, Samuels’ faint, failed attempt at a disclosure reveals the hypocrisy in the constant drumbeat over financial conflicts.  Samuels’ conflict is real and palpable.  Labor organizations have been in the forefront of pushing for compensation for injuries, and they have benefitted from scientific claims, which may have been exaggerated or false.

The asymmetry in the criticism of funding sources is pronounced.  Typical is an article published in the journal Occupational and Environmental Medicine:

“There are inherent problems with industry sponsored research in relation to intellectual property and ethical issues because industry funders and academic researchers work in different systems with different goals and means.”

S. Tong & J. Olsen, “The Threat to Scientific Integrity in Environmental and Occupational Medicine,” 62 Occup. Envt’l Med. 843, 843 (2005).  The authors recount anecdotes of industry influence on studies, and they call for reform:

“Research funding for public health should not come directly from the industry to the researcher; an independent, intermediate funding scheme should be established.”

Id. at 846.  Tong and Olsen are curiously silent about studies funded or sponsored by labor or by plaintiffs’ lawyers.  As we approach 100 bankrupt companies in the asbestos litigation, we might well ask whether studies conducted by “labor’s doctors” have exaggerated or misrepresented risks and causal claims.  In several litigations, plaintiffs’ counsel conspired with researchers to produce dubious studies, and then worked to hide their involvement.

Tong and Olsen, who also call for openness among public health professionals, acknowledge help from David Michaels.  The casual reader, however, would not know that Michaels had worked as a plaintiffs’ testifying expert witness, or that he had directed an organization known as SKAPP, which was surreptitiously funded by plaintiffs’ lawyers, and which worked to undermine judicial screening of unreliable expert witness evidence.  So much for openness on the home front.

The asymmetry, or hypocrisy, of the occupational medicine community can sometimes approach shrill hysteria.  Samuels’ hagiographic presentation is a fascinating contrast with Professor Peter Bartrip’s historical research into the mystery of Selikoff’s medical degrees.  P.W.J. Bartrip, “Irving John Selikoff and the Strange Case of the Missing Medical Degrees,” 58 J. History Med. 3 (2003).  Although Bartrip praises many of Selikoff’s accomplishments, he is critical of Selikoff’s dissembling over his actual medical training.  Bartrip offers a balanced account, and concedes on any number of occasions that the record is inconclusive.

Criticizing “labor’s doctor,” however, can be dangerous business.  Bartrip’s paper brought the goodfellas out in a gaggle.  “P.W.J. Bartrip’s Attack on Irving J. Selikoff.”  See 46 Am. J. Indus. Med. 151 (2004).  The authors of this letter apparently were turned down by the Journal of the History of Medicine, and so they published in a non-history journal.  It is not clear whether this path deprived Professor Bartrip of the opportunity to publish a reply.  The letter writers assail Bartrip for his “ad hominem” attacks on Selikoff, and praise Selikoff as the epitome of “the committed public health professional.”  Id. at 151.  The gaggle claim that “Selikoff’s antagonists came up with nothing to discredit him in his lifetime … .”  Id. at 152.

The letter is signed by David Egilman, Geoffrey Tweedale, Jock McCulloch, William Kovarik, Barry Castleman, William Longo, Stephen Levin, and Susanna Rankin Bohme.  Several of these signatories testify extensively in asbestos personal injury litigation for plaintiffs; others are well-known ideologues on asbestos policy issues.  No disclosure of conflicts is made in connection with this letter.  Why are we not surprised?  The letter challenges some of Bartrip’s findings, but ultimately it begs the question about the quality of Selikoff’s scientific contributions.

Ultimately, most of the rancor against conflicts of interest, and failures to disclose, is ad hominem.  Nothing in Bartrip’s historical piece makes Selikoff’s scientific work more or less true; nothing in the goodfellas’ ad hominem attacks on Bartrip makes Selikoff’s training, or his dissembling about it, more or less real.  And the gaggle’s invocation of absence of evidence (to be used as a substitute for evidence of absence) is hardly persuasive.  By the time Selikoff’s advocacy had become problematic to certain industry “antagonists,” they were beleaguered by overwhelming litigation and silenced by their own “conflicts of interest.”  The hagiographers, on the other hand, have carried the day because they claim to have no such conflicts.  These assumptions about Selikoff’s work, and about the “interestedness” of those who challenge and defend his work, cry out for re-examination.

Beecher-Monas and the Attempt to Eviscerate Daubert from Within

November 23rd, 2010

Part 2 of a Critique of Evaluating Scientific Evidence, by Erica Beecher-Monas (EBM)

Giving advice to trial and appellate judges on how they should review scientific evidence can be a tricky business.  Such advice must reliably capture the nature of scientific reasoning in several different fields, such as epidemiology and toxicology, and show how such reasoning can and should be incorporated within a framework of statutes, rules of evidence, and common law.  Erica Beecher-Monas’ book, Evaluating Scientific Evidence, fails to accomplish these goals.  What she does accomplish is the confusion of regulatory assumptions and precautionary principles with the science of health effects in humans.

7.  “Empowering one type of information or one kind of study to the exclusion of another makes no scientific evidentiary sense.”  Id. at 59.

It is telling that Erica Beecher-Monas (EBM) does not mention either the systematic review or the technique of meta-analysis, which is based upon the systematic review.  Of course, these approaches, whether qualitative or quantitative, require a commitment to pre-specify a hierarchy of evidence, and inclusionary and exclusionary criteria for studies.  What EBM seems to hope to accomplish is the flattening of the hierarchy of evidence, making all types of evidence comparable in probative value.  This is not science or scientific, but part of an agenda to turn Daubert into a standard of bare relevancy.  Systematic reviews do not literally exclude any “one kind” of study, but they recognize that not all study designs are equal.  The omission in EBM’s book speaks volumes.
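For readers unfamiliar with the technique that EBM omits, a minimal sketch of the quantitative half, a fixed-effect meta-analysis using inverse-variance weighting, may help.  All of the study names, relative risks, and confidence intervals below are invented for illustration; nothing here comes from EBM’s book or from any real review.

```python
import math

# Hypothetical studies from a systematic review: relative risk (RR)
# with 95% confidence intervals.  All numbers are invented; a real
# review would pre-specify inclusion and exclusion criteria.
studies = [
    ("Cohort A",       1.8, (1.2, 2.7)),
    ("Case-control B", 1.4, (0.9, 2.2)),
    ("Cohort C",       2.1, (1.3, 3.4)),
]

weights, weighted_logs = [], []
for name, rr, (lo, hi) in studies:
    log_rr = math.log(rr)
    # Standard error recovered from the 95% CI on the log scale.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1 / se ** 2                    # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lo, hi = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
print(f"Pooled RR = {math.exp(pooled_log):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The weighting is the point: studies with tighter confidence intervals, usually the larger and better designed ones, dominate the pooled estimate.  That is precisely the hierarchy of evidence that EBM’s flattening would erase.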

8. “[T]he likelihood that someone whose health was adversely affected will have the courthouse doors slammed in his or her face,”  id. at 64, troubles EBM. 

EBM recognizes that inferences and scientific methodologies involve false positives and false negatives, but she appears disproportionately concerned with false negatives.  Of course, this solicitude begs the question whether we have reasonably good knowledge that the claimant really was adversely affected.  A similar solicitude for the defendant who has had the courthouse door slammed on his head, in cases in which the claimed exposure caused no harm, is missing.  This imbalance leads EBM to excuse and defend gaps in plaintiffs’ evidentiary displays on scientific issues.

9.  “Gaps in scientific knowledge are inevitable, not fatal flaws.”  Id. at 51 (citing a work on risk assessment).

The author also seems to turn a blind eye to the size of gaps.  Some gaps are simply too big to be bridged by assumptions.  Scientists have to be honest about their assumptions, and temper their desire to reach conclusions.  Expert witnesses often lack the requisite scientific temper to remain agnostic; they take positions when they should rightfully press for the gaps to be filled.  Expert witnesses outrun their headlights, but EBM cites virtually no example of a gatekeeping decision with approval.

Excusing gaps in risk assessment may make some sense given that risk assessment is guided by the precautionary principle.  The proofs in a toxic tort case are not.  EBM’s assertion about the inevitability of “gaps” skirts the key question:  When are gaps too large to countenance, and to support a judgment?  The Joiner case made clear that when the gaps are supported only by the ipse dixit of an expert witness, courts should look hard to determine whether the conclusion is reasonably, reliably supported by the empirical evidence.  The alternative, which EBM seems to invite, is intellectual anarchy.

10.  “Extrapolation from rodent studies to human cancer causation is universally accepted as valid (at least by scientists) because ‘virtually all of the specific chemicals known to be carcinogenic in humans are also positive in rodent bioassays, and sometimes even at comparable dose and with similar organ specificity’.”  Id. at 71n.55 (quoting Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991)).

When it comes to urging the primacy and superiority of animal evidence, EBM’s brief is relentless and baseless.

Remarkably, in the sentence quoted above, EBM has committed the logical fallacy of affirming the consequent:  if all human carcinogens are rat carcinogens, it does not follow that all rat carcinogens are human carcinogens.  This argument form is invalid; the converse does not follow from the premise.  And it is the converse that provides the desired, putative validity for extrapolating from rodent studies to humans.  Not only does EBM commit a non sequitur, she quotes Dr. Weinstein’s article out of context, because his article makes quite clear that not all rat carcinogens are accepted causes of cancer in human beings.
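The invalid step can be set out formally.  In rough notation of my own, not EBM’s or Weinstein’s, with H(x) meaning “x is a human carcinogen” and R(x) meaning “x is a rodent carcinogen”:

```latex
% Premise (per Weinstein): virtually every human carcinogen is a rodent carcinogen
\forall x \,\bigl(H(x) \rightarrow R(x)\bigr) \qquad\text{i.e., } H \subseteq R
% Invalid conversion needed to justify rodent-to-human extrapolation
\forall x \,\bigl(R(x) \rightarrow H(x)\bigr) \qquad\text{i.e., } R \subseteq H
% What the premise validly yields is only the contrapositive
\forall x \,\bigl(\neg R(x) \rightarrow \neg H(x)\bigr)
```

At most, the premise would support treating a negative rodent bioassay as some evidence against human carcinogenicity; it says nothing to license the positive inference that EBM needs.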

11.  “Post-Daubert courts often exclude expert testimony in toxic tort cases simply because the underlying tests relate to animals rather than humans.”  Id. at 71n.54.

Given EBM’s radical mission to “empower” animal evidence, we should not be too surprised that she is critical of Daubert decisions that have given lesser weight to animal evidence.  The statement above is another example of EBM’s over- and misstatement.  The cases cited, for instance the Hall decision by Judge Jones in the breast implant litigation, and the Texas Supreme Court’s decision in Havner, do not support the “simply because.”  Those cases involved complex evidentiary displays of animal studies, in vitro studies, chemical analyses, and epidemiologic studies.  The Hall decision was based upon Rule 702, but it was followed by Judge Jack Weinstein, who, after conducting two weeks of hearings, entered summary judgment sua sponte against the plaintiffs (animal evidence and all).  Recently, Judge Weinstein characterized the expert witnesses who supported the plaintiffs’ claims as “charlatans.”  See Judge Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” Cardozo Law Review De Novo at 14, http://www.cardozolawreview.com/content/denovo/WEINSTEIN_2009_1.pdf (“[t]he breast implant litigation was largely based on a litigation fraud. …  Claims—supported by medical charlatans—that enormous damages to women’s systems resulted could not be supported.”) (emphasis added).

Given the widespread rejection of the junk science behind breast implant claims, by courts, scientists, court-appointed experts, and the Institute of Medicine, EBM’s insertion of “simply” in the sentence above speaks volumes about how she would evaluate the evidentiary display in Hall.  See also Evaluating Scientific Evidence at 81n.99 (arguing that Hall was mistaken).  If the gatekeeping in the silicone breast implant litigation was mistaken, as EBM argues, it is difficult to imagine what slop would be kept out by a gatekeeper who chose to apply EBM’s “intellectual due process.”

12.  “Animal studies are more persuasive than epidemiology for demonstrating small increases of risk.”  Id. at 70.

EBM offers no support for this contention, and there is none unless one is concerned to demonstrate small risks for animals.  Even for the furry beasts themselves, the studies do not “demonstrate” (a mathematical concept) small increased risks at low doses comparable to the doses experienced by human beings. 

EBM’s urging of “scientifically justifiable default assumptions” turns into advocacy for regulatory pronouncements of the precautionary principle, which courts have consistently rejected as not applicable to toxic tort litigation for personal injuries.

13.  “Nonthreshold effects, on the other hand, are characteristic of diseases (like some cancers) that are caused by genetic mutations.”  Id. at 75.

EBM offers no support for this assertion, and she ignores the growing awareness that the dose-response curves for many substances are hormetic; that is, the substance often exerts a beneficial or therapeutic effect at low doses, but may be harmful at high doses.  Alcohol is a known human carcinogen, but at low doses it reduces cardiovascular mortality.  At moderate to high doses, alcohol causes female breast cancer and liver cancer.  Liver cancer, however, arises only after sufficiently high, prolonged doses have caused permanent fibrotic and architectural changes in the liver (cirrhosis), which in turn increase the risk of cancer.  These counterexamples, and others, show that thresholds are often important features of the dose-response curves of carcinogens.

Similarly, EBM incorrectly argues that the default assumption of a linear dose-response pattern is reasonable because it is, according to her, widely accepted.  Id. at 74n.65.  Her supporting citation is, however, to an EPA document on risk assessment, which has nothing to do with determinations of causality.  Risk assessments assume causality and attempt to place an upper bound on the magnitude of the hypothetical risk.  Again, EBM’s commitment to the precautionary principle and regulatory approaches preempts scientific thinking.  If EBM had considered the actual and postulated mechanisms of carcinogenesis, even in sources she cites, she would have to acknowledge that the linear no-threshold model makes no sense because it ignores the operation of multiple protective mechanisms that must be saturated and overwhelmed before carcinogenic exposures can actually induce clinically meaningful tumors in animals.  See, e.g., Bernard Weinstein, “Mitogenesis is only one factor in carcinogenesis,” 251 Science 387, 388 (1991) (mistakenly cited by EBM for the proposition that rodent carcinogens should be “assumed” to cause cancer in humans).
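A toy comparison of the competing dose-response shapes may make the dispute concrete.  The functions and parameters below are arbitrary choices of mine, not fits to any real carcinogen data; the sketch illustrates only why the choice of model, rather than the evidence alone, drives low-dose risk claims.

```python
import math

# Three stylized models of excess risk as a function of dose d.
# All parameters are invented solely to illustrate the shapes.

def linear_no_threshold(d, slope=0.01):
    """LNT: every dose, however small, adds proportional excess risk."""
    return slope * d

def threshold(d, t=10.0, slope=0.01):
    """No excess risk until protective mechanisms are overwhelmed at t."""
    return 0.0 if d <= t else slope * (d - t)

def hormetic(d, benefit=0.05, k=0.2, slope=0.01, t=10.0):
    """J-shaped: beneficial at low doses, harmful past the threshold."""
    return -benefit * math.exp(-k * d) + (slope * (d - t) if d > t else 0.0)

for d in (0.1, 1.0, 5.0, 10.0, 20.0, 50.0):
    print(f"dose {d:>5}: LNT={linear_no_threshold(d):+.4f}  "
          f"threshold={threshold(d):+.4f}  hormetic={hormetic(d):+.4f}")
```

Under the linear no-threshold model, every exposure “causes” some risk, which is congenial to plaintiffs; under a threshold or hormetic model, low-dose claims fail for want of any excess risk at all.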

14.  “Under this assumption [of the EPA], demonstrating the development of lung cancer in mice would be admissible to show human causation in any organ.  Because we know so little about cancer causation, there is justification for this as a workable but questionable assumption with respect to cancer.”  Id. at 77.

Extrapolation, across species, across organs, and across disparate doses!  No gap is too wide, too deep to be traversed by EBM’s gatekeepers.  In arguing that extrapolation is a routine part of EPA risk assessment, EBM ignores that the extrapolation is not the basis for reaching scientific conclusions about health effects in human beings.  Regulatory science is “mandating certainty” — the opposite side of David Michaels’ caricature of industry’s “manufacturing doubt.”

15.  “[T]he court in Hall was mistaken when it excluded the expert testimony because the studies relied on only showed that silicone could have caused the plaintiff’s diseases, not that it did.”  Id. at 81n.99.

Admittedly, it is difficult to tell whether EBM is discussing general or specific causation in this sentence, but it certainly seems as if she is criticizing the Hall decision, by Judge Jones, because the expert witnesses for the plaintiff were unable to say that silicone did, in fact, cause Hall’s illness.  EBM appears to be diluting specific causation to a “might have had some effect” standard. 

Readers who have actually read the Hall decision, or who are familiar with the record in Hall, will know that one key expert witness for plaintiffs, an epidemiologist, Dr. David Goldsmith, conceded that he could not say that silicone more likely than not caused autoimmune disease.  A few weeks after testifying in Hall, Goldsmith changed his testimony.  In October 1996, in Judge Weinstein’s courtroom, based upon an abstract of a study that he had seen the night before testifying, Goldsmith asserted that he believed that silicone did cause autoimmune connective tissue disease, more likely than not.  Before Goldsmith left the stand, Judge Weinstein declared that he did not believe that Goldsmith’s testimony would be helpful to a jury.

So perhaps EBM is indeed claiming that testimony that purports to provide the causal conclusion need not be expressed to some degree of certainty other than possibility.  This interpretation is consistent with what appears to be EBM’s dilution of “intellectual due process” to permit virtually any testimony at all that has the slightest patina of scientific opinion.

16.  “The underlying reason that courts appear to founder in this area [toxic torts] is that causation – an essential element for liability – is highly uncertain, scientifically speaking, and courts do not deal well with this uncertainty.”  Id. at 57.

Regulation in the face of uncertainty makes sense as an application of the precautionary principle, but litigation requires expert witness opinion that rises to the level of “scientific knowledge.”  Rule 702.  EBM’s candid acknowledgment is the very reason that Daubert is an essential tool to strip out regulatory “science,” which may well support regulation against a potential, unproven hazard.  Regulations can be abrogated.  Judgments in litigation are forever.  The social goals and the evidentiary standards are different.

17.  “Causal inference is a matter of explanation.”  Id. at 43.

Here and elsewhere, EBM talks of causality as though it were only about explanations, when in fact, the notion of causal inference includes an element of prediction, as well.  EBM seems to downplay the predictive nature of scientific theories, perhaps because this is where theories founder and confront their error rate.  Inherent in any statement of causal inference is a prediction that if the factual antecedents are the same, the result will be the same.  Causation is more than a narrative of why the effect followed the cause.

EBM’s work feeds the illusion that courts can act as gatekeepers, wrapped in the appearance of “intellectual due process,” but at the end of the day find just about any opinion to be admissible.  I could give further examples of the faux pas, ipse dixit, and non sequitur in EBM’s Evaluating Scientific Evidence, but the reader will appreciate the overall point.  Her topic is important, but there are better places for judges and lawyers to seek guidance in this difficult area.  The Federal Judicial Center’s Reference Manual on Scientific Evidence, although not perfect, is at least free of the sustained ideological noise that afflicts EBM’s text.

Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping

November 21st, 2010

Even though the principles of Daubert have been embraced by the entire Supreme Court, in a rare unanimous opinion (see Weisgram v. Marley Co., 528 U.S. 440 (2000) (Ginsburg, J., writing for a unanimous court)), and incorporated into a revised Rule 702, ratified by Congress, the enemies of Daubert abound.  Some advocates simply cannot let go of the notion that they have a constitutional right to bamboozle juries with unreliable evidence.

Daubert has some friends who would kill it by reinterpreting and diluting the reliability and relevance requirements so that anything goes, and everything is admissible.  Perhaps the best example of such a “friend” is Professor Erica Beecher-Monas, who has written a book-length roadmap on how to eviscerate the gatekeeping concept.  See E. Beecher-Monas, Evaluating Scientific Evidence:  An Interdisciplinary Framework for Intellectual Due Process (New York 2007).

Erica Beecher-Monas (EBM, not to be confused with evidence-based medicine) starts off with a trenchant defense of the epistemic approach of Daubert, and an explanation of why proxies for scientific reliability and validity are doomed to fail.  EBM proceeds to offer a five-step program of “intellectual due process” to help trial courts carry out their screening:

1.  evaluate the challenged expert witness’s theory and hypothesis for their ability and power to explain the data;

2.  evaluate the data that weigh in favor of, and against, the expert witness’s theory; the gatekeeper court must weigh all the evidence collectively.  The expert witness’s “theory” should explain and account for most of the evidence; according to EBM, it should explain the data that appear to weigh against the theory as well as the supporting evidence.

3.  invoke “supportable assumptions” to bridge the inevitable gaps between underlying data and theory; there are, according to the author, “scientifically justifiable default assumptions,” which should be honored to fill in the gaps in an expert witness’s reasoning and explanations.

4.  evaluate the testifying expert witness’s methodology; and

5.  evaluate the statistical and probabilistic inferences between underlying data and opinions.  The trial court must synthesize all the available information to evaluate how well the data, methodology, and “default assumptions,” taken together, support the proffered conclusions.

Id. at 6, 46 – 47.

This program sounds encouraging in theory.  As EBM describes how this “framework for analysis” should work, however, things go poorly, and the law and scientific method are misrepresented.  “Default assumptions” becomes the pretense to let in opinions that would gag the proverbial horsefly off the manure cart.

Not all is bad.  EBM offers some important insights into how courts should handle scientific evidence.  She defends the gatekeeping process because of the serious danger of a “dilution effect,” in which jurors are overwhelmed with evidence of varying quality.  She reminds us that there are standards of care for research science and for clinical medicine, and standards for evaluating whether experimental results can be “honestly” attributed to the data.  Id. at 53.  Courts must evaluate whether the data and method really “show” the conclusion that the expert witness claims for them.  Id.  She criticizes those commentators who confuse the burden of proof with the statistical standard used in hypothesis testing for individual studies.  Id. at 65.
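That last conflation is worth spelling out, because it recurs throughout the case law.  In rough notation of my own, not EBM’s:

```latex
\underbrace{\Pr(\text{data at least this extreme} \mid H_0) < 0.05}_{\text{statistical significance of a single study}}
\;\neq\;
\underbrace{\Pr(\text{causation} \mid \text{all the evidence}) > 0.5}_{\text{preponderance-of-the-evidence standard}}
```

The left side is a long-run error rate, conditioned on the null hypothesis being true; the right side is a probability assigned to the ultimate fact, given everything in the record.  Neither implies the other.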

The narrative becomes confused and convoluted in addressing how trial courts should function as gatekeepers.  EBM is critical of how trial courts have discharged their gatekeeping responsibilities.  In many instances, EBM is unhappy with how judges carry out their evaluations, and criticizes them on the basis of her own ipse dixit.  It turns out that intellectual due process, as EBM conceives of it, allows pretty much anything to be admissible in her ideal juridical world.

Some of EBM’s assertions about the law and the science are startling, and deeply flawed.  In this post, I discuss some of the flawed scholarship, which has the potential to confuse and mislead.

1.  “Daubert, which requires only a scintilla of scientifically valid and relevant evidence to survive an admissibility determination.”  Id. at 82.

This assertion is wrong on its face.  Justice Blackmun, in writing his opinion in Daubert, discussed “scintilla” of evidence, not in the context of making an admissibility determination of an expert witness’s opinion, but rather in the context of ruling on motions for directed verdicts or summary judgment:

“Additionally, in the event the trial court concludes that the scintilla of evidence presented supporting a position is insufficient to allow a reasonable juror to conclude that the position more likely than not is true, the court remains free to direct a judgment, Fed.Rule Civ.Proc. 50(a), and likewise to grant summary judgment, Fed.Rule Civ.Proc. 56.  Cf., e.g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F.2d 1349 (6th Cir.) (holding that scientific evidence that provided foundation for expert testimony, viewed in the light most favorable to plaintiffs, was not sufficient to allow a jury to find it more probable than not that defendant caused plaintiff’s injury), cert. denied, 506 U.S. 826 (1992); Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F.2d 307 (5th Cir. 1989) (reversing judgment entered on jury verdict for plaintiffs because evidence regarding causation was insufficient), modified, 884 F.2d 166 (5th Cir. 1989), cert. denied, 494 U.S. 1046 (1990); Green 680-681 [Green, “Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation,” 86 Nw.U.L.Rev. 643 (1992)].”

Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 594 (1993) (emphasis added).

Justice Blackmun was emphasizing that Rule 702 is not the only tool on the trial judge’s workbench; he was not setting a standard for the quantum of evidence that must govern an admissibility determination of an expert witness’s opinion.   Even if Justice Blackmun were discussing scintilla of evidence in the context of addressing admissibility (rather than sufficiency), his citation to the Bendectin decisions in the Court of Appeals makes clear that the “scintilla” of evidence offered by the party suffering entry of judgment might be fairly extensive in terms of expert witnesses’ opinions and their relied upon studies.  Nonetheless, this “scintilla” could be, and was, insufficient to resist judgment in the face of evidence of higher quality and relevance. 

EBM’s scholarship here is thus flawed at two levels.  First, she conflates admissibility with sufficiency (which elsewhere she faults various courts for doing, calling the conflation “pernicious”; see id. at 83).  Second, she fails to realize or acknowledge that the scintilla must be weighed against the entire evidentiary display.  Sometimes, as in the Bendectin litigation, the “scintilla” may include a fair amount of evidence, which is nonetheless trumped by evidence superior in quality and quantity; this trumping is what leads to the finding that the opining witnesses had offered unreliable opinions, unhelpful to the jury.

2.  “[C]onsistency of the observed effect is a criterion most scientists would deem important, but it may be absent even where there is a strong causal link, such as the link between smoking and lung cancer, which, although strong, is not inevitably observed.  Although it might be persuasive to find that there was a consistent specific association between exposure and a particular disease, such association is rarely observed.”  Id. at 59.

First, EBM offers no citation for the claim that the “link” between smoking and lung cancer is “not inevitably observed.”  The association is virtually always found in modern epidemiologic studies, and it is almost always statistically significant in adequately powered studies.  The repeated finding of an association, not likely due to chance, in many studies, conducted by different investigators, in different populations, at different times, with different study designs is the important point about consistency.  EBM muddles her unsupported, unsupportable assertion by then noting that a “consistent specific association” is rarely observed, but here she has moved, confusingly, to a different consideration – namely the specificity of the association, not its consistency.  Admittedly, specificity is a weak factor in assessing the causality vel non of an association, but EBM’s reference to a “consistent specific association” seems designed to confuse and conflate two different factors in the analysis.
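The claim that a strong association is virtually always found in adequately powered studies is easy to check by simulation.  The inputs below (a tenfold relative risk, a baseline risk of 0.5 percent, two thousand subjects per arm) are assumptions of mine, chosen to be roughly in the neighborhood of the smoking and lung cancer literature, not data from any actual study.

```python
import math
import random

random.seed(1)

def one_study(n=2000, p_unexposed=0.005, rr=10.0):
    """Simulate one cohort study; return True when the association is
    statistically significant (two-sided z-test for proportions, alpha = 0.05)."""
    p_exposed = p_unexposed * rr
    a = sum(random.random() < p_exposed for _ in range(n))    # exposed cases
    b = sum(random.random() < p_unexposed for _ in range(n))  # unexposed cases
    p1, p0 = a / n, b / n
    p_pool = (a + b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    return se > 0 and abs(p1 - p0) / se > 1.96

trials = 1000
hits = sum(one_study() for _ in range(trials))
print(f"{hits}/{trials} simulated studies detect the association")
```

In this toy setup, essentially every study detects the association, which is the sense of “consistency” at issue; inconsistency signals weak effects or underpowered designs, not the absence of a strong one.  Specificity is a different criterion altogether.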

3.  “[A]nimal studies are superior to epidemiologic studies because of the lack of controls endemic to epidemiologic studies, the difficulty in designing and analyzing such studies, and their costliness.”  Id. at 70.

This is one of EBM’s more strident, stunning pronouncements.  Her book makes clear that, as an apologist for animal evidence, EBM deprecates and misunderstands epidemiologic evidence at almost every turn.  It is perhaps possible to interpret EBM charitably by suggesting that the uncontrolled epidemiologic studies she has in mind are “descriptive studies,” such as case reports or case series.  Such an interpretation is unwarranted, however, given EBM’s failure to qualify “epidemiologic studies.”  She paints with a broad brush, in a deliberate attempt to upend the generally accepted hierarchy of evidence.  Even a casual reading of the cases she cites, and of the Reference Manual on Scientific Evidence, shows that the epidemiologic studies that are important to real intellectual due process are precisely the ones that have appropriate controls.  Most of the world, even if not EBM, thinks of analytic epidemiologic studies when comparing and contrasting them with animal studies.

EBM offers no support for the asserted difficulty in designing and analyzing epidemiologic studies.  Is she making a personal, subjective declaration of her own difficulties?  The difficulties of judges and lawyers?  Or the difficulties of expert witnesses themselves?  To be sure, some lawyers have such difficulties, but they may have made a good career choice in going to law school rather than medical school.  (Perhaps they would do better yet in real estate litigation than in torts.)  Many physicians have “difficulty in designing and analyzing such studies,” but that is because these activities lie outside the scope of their expertise and, until recently, were rarely taught in medical schools.  In my experience, these activities have not been beyond the abilities of appropriately qualified expert witnesses, whether engaged by plaintiffs or defendants in civil litigation.

As for the “costliness” of epidemiologic studies, many studies can be conducted expeditiously and inexpensively.  Case-control studies can often be done relatively quickly and easily because they work from identified cases back to past exposures.  Cohort studies can often be assembled from administrative medical databases maintained for other purposes.  In the United States, such databases are harder to find, but several exist as a result of Medicare, the VA, the National Center for Health Statistics, and various managed care programs.  In Scandinavia, the entire countries of Sweden and Denmark are ongoing epidemiologic studies because of their national healthcare systems.  Cohort and case-control studies have been quickly and inexpensively set up to study many important public health issues, including MMR vaccines, thimerosal, and autism; abortion and breast cancer; and welding and parkinsonism.  See, e.g., Lone Frank, “Epidemiology: When an Entire Country Is a Cohort,” 287 Science 2398-2399 (2000).  Plaintiffs’ counsel, often with more money at their disposal than the companies they sue, have organized and funded any number of epidemiologic studies.  EBM’s attempted excuses and justifications of why animal studies are “superior” to epidemiology fail.
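Part of why case-control studies are quick and cheap is that the arithmetic at their core is simple: start from existing cases, sample controls, and compare the odds of past exposure.  A minimal sketch, with a 2x2 table of invented counts:

```python
# Hypothetical case-control data; all counts are invented.
cases    = {"exposed": 40, "unexposed": 60}
controls = {"exposed": 20, "unexposed": 80}

# Cross-product (odds) ratio: the odds of exposure among cases
# divided by the odds of exposure among controls.  For a rare
# disease, the odds ratio approximates the relative risk, even
# though a case-control design cannot measure incidence directly.
odds_ratio = (cases["exposed"] * controls["unexposed"]) / (
              cases["unexposed"] * controls["exposed"])
print(f"Odds ratio = {odds_ratio:.2f}")    # (40*80)/(60*20) = 2.67
```

No cohort must be followed for decades; the study is done as soon as the exposure histories are collected, which is why these designs can be mounted quickly when litigation or a public health controversy demands answers.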

Perhaps we should take a moment to have a small reality check:

Would we accept an FDA decision that approved a drug that was safe and efficacious in rats, without insisting on a clinical trial in human beings?  How many drugs show great therapeutic promise in animal models only to fail on safety or efficacy, or both, when tested in humans?  I believe that the answers are: “no,” and “sadly, too many.”

4. “Clinical double-blind studies are rarely, if ever, available for litigation purposes.”  Id. at 69.

EBM again cites no support for this assertion, and she is plainly wrong.  Clinical trials have been important sources of evidence relied upon by both plaintiffs’ and defendants’ expert witnesses in pharmaceutical litigation, which makes up a large, increasing portion of all products liability litigation.  Even in cases involving occupational or environmental exposures, for which randomization would be impractical or unethical, double-blinded human clinical studies of toxicokinetics, or metabolic distribution and fate, are often important to both sides involved in litigating claims of personal injury.

5.  “[B]ecause there are so few good epidemiologic studies available, animal studies are often the primary source of information regarding the impact of chemicals.”  Id. at 73.

The field of occupational and environmental epidemiology is perhaps half a century old, with high quality studies addressing many, if not most, of the chemicals involved in important personal injury litigations.  EBM’s claims about the prevalence of “good” studies, as well as the implicit claim about the proportion of lawsuits involving chemicals for which no epidemiologic data exist, are themselves devoid of any empirical support.

6.  “[S]cientific conclusions are couched in tentative phrases. ‘Association’ is preferred to ‘causation.’ Thus, failing to understand that causation, like other hypotheses, can never be proven true, courts may reject as unreliable even evidence that easily meets scientific criteria for validity.”  Id. at 55 (citing Hall, Havner, and Wright).

EBM writes that scientists prefer “association” to “causation,” but the law insists upon causation.  EBM fails to recognize that these are two separate, distinct concepts, and not a mere semantic preference for the intellectually timid.  An opinion about association is not an opinion about causation.  Scientists prefer to speak of association when the criteria for validly inferring the causal conclusion are not met; this preference thus has important epistemic implications for courts that must ensure that opinions are really reliable and relevant.  EBM sets up a straw man – hypotheses can never be proven to be true – in order to advocate for the acceptance and the admissibility of hypotheses masquerading as conclusions.  The fact is that, notwithstanding the mechanics of hypothesis testing, many hypotheses come to be accepted as scientific fact.  Indeed, EBM’s talk here, and elsewhere, of opinions “proven” or “demonstrated” to be true is a sloppy incorporation of mathematical language that is best avoided in evaluating empirical scientific claims.  Scientific findings are “shown” or “inferred,” not demonstrated.  Not all opinions stall at the “association” stage; many justifiably move to opinions about causation.  The hesitancy of scientists to assert that an association is causal usually means that they, like judges who are conscientious about their gatekeeping duties, recognize that there is an unacceptable error rate from indiscriminately treating all associations as causation.
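The unacceptable error rate from indiscriminately equating association with causation can be given a rough order of magnitude with a base-rate calculation.  Every number below is an assumption of mine for illustration, not an empirical estimate:

```python
# Toy false-discovery arithmetic; all inputs are assumed.
prior_causal = 0.10   # share of studied associations that are truly causal
power        = 0.80   # chance a real effect shows up as "significant"
alpha        = 0.05   # chance a null association does so anyway

true_hits   = prior_causal * power          # 0.080
false_hits  = (1 - prior_causal) * alpha    # 0.045
share_false = false_hits / (true_hits + false_hits)
print(f"{share_false:.0%} of 'significant' associations are non-causal")
# -> 36%: even on these charitable assumptions, treating every
#    association as causation misfires about a third of the time.
```

The exact figure is not the point; the point is that a gatekeeper who waves every association through as causation will be wrong at a rate no court should tolerate.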

(To Be Continued)

The “In” Thing To Do

November 11th, 2010

Most parents have confronted their children’s insistence upon doing or having something based upon the popularity of that something, but we would not expect such behavior from scientists.

Or should we?

In this era of bashing authors for having taken a shekel or two from industry to support their work, how are we to identify and evaluate the other, non-financial biases that afflict science?  Anti-industry zealots write as though money were the only error-inducing incentive at play in the scientific arena, but they are wrong.  In addition to vanity, egotism, grant-mania, prestige, and academic advancement, scientists are subject to “group think”; they are prone to advancing scientific conclusions that are the “in thing” to espouse.  Call it herd-think, or Zeitgeist, or the occupational medicine mafia; error creeps in when scientists reach and defend conclusions because those conclusions are the “in” thing.

Finding examples and admissions of scientists falling into error to align themselves with popular voices on controversial issues is, however, not easy.  One of my favorites was reported in the context of the federal government’s predictions of the United States’ cancer toll expected from occupational use of asbestos.  In 1978, then Secretary of Health, Education, and Welfare Joseph Califano announced the results of a report, prepared by scientists at the National Cancer Institute, the NIEHS, and NIOSH, which predicted that 17 percent of all future cancers would be caused by asbestos.  This prediction was based largely upon the work of Dr. Irving Selikoff and colleagues, who studied heavily exposed asbestos insulators and factory workers.  Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Instit. 560, 560 (1992).

Within a few years of the report, the scientific community realized that it had been duped.  How did so many high-level governmental scientists fall into error?  Selikoff’s prestige was great.  (Califano’s speech occurred well before the scandal of Selikoff’s infamous seminar organized by plaintiffs’ lawyers to showcase plaintiffs’ expert witnesses for the “benefit” of key state and federal judges.  See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As to Preserve ‘The Appearance of Justice’ Under 28 U.S.C. 455: In re School Asbestos Litigation (1992),” 38 Vill. L. Rev. 1219 (1993).)  Scientists, however, should be evidence-based people, and not make important public pronouncements, likely to generate widespread public fear and concern, on the basis of someone’s prestige.  There was more to this error than the charm and reputation of Irving Selikoff.

By the time of the Califano report, the misdeeds of Johns-Manville had become well known in the scientific community.  Little attention was paid to the role of the U.S. government in promoting the use of asbestos, and to its failure to warn and to provide safe workplaces in its naval shipyards.  The imbalance in reporting led scientists to enjoy a “feel good” attitude about reaching conclusions that exaggerated and distorted the scientific data to the detriment of the so-called “asbestos industry.”  In 1992, the Journal of the National Cancer Institute reported the phenomenon as follows:

“Enterline [an epidemiologist who published several studies on asbestos factory workers and who interviewed for the story] said the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds.

‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement,’ he said. ‘At the time it looked like you were wearing a white hat if you made these wild estimates.  But I wasn’t sure whoever did that was doing all that much good’.”

Tom Reynolds, supra at 562.  The “in” thing to exaggerate; who would have thought scientists ever did that, much less acknowledged it.  The Califano report caught its scientist authors red-handed, and there was not much they could do about it.  The report’s predictions were debunked by leading scientists, and the report’s authors confessed to having tortured the data.  R. Doll & R. Peto, “The causes of cancer: quantitative estimates of avoidable risks of cancer in the United States today,” 66 J. Nat’l Cancer Instit. 1191-1308 (1981).

What is memorable about the incident is that the report was motivated by the desire to “wear a white hat,” not by lucre.  The lesson is that the current focus on “conflicts of interest” is little better than an excuse for ad hominem attacks, and that everyone would be better off if the focus were on the evidence.

Causation, Torts, and Epidemiologic Evidence

November 6th, 2010

Tort law writers naturally focus on the changes in tort doctrine, such as the advent of strict liability and the demise of privity of contract, as catalysts for the development of mass tort law. There were, however, causes external to the world of tort law itself. One significant cause was the development of epidemiologic evidence and the acceptance of stochastic concepts of causation. Epidemiology is built upon statistical and probabilistic thinking, and the law struggled to accept such thinking in place of the comfortable mechanistic approaches to causation that dominated the law in the 19th and early 20th centuries. Although one can find examples of epidemiologic studies in the medical literature before 1940, the discipline of epidemiology was poorly developed, both in terms of its statistical tools, and in terms of its acceptance as a legitimate scientific approach, until after World War II. The U.S. Surgeon General’s acceptance of tobacco smoking as a cause of lung cancer, in the mid-1960’s, without any clear mechanistic model of causation, was a major turning point for both epidemiologic science and the law. Interestingly, this turning point occurred at the same time that the American Law Institute accepted the “strict liability” concept of tort liability for harms caused by defects in consumer products.

The epidemiologic study is a relative newcomer to the law of evidence, and many courts, commentators, and lawyers still talk of the admissibility (vel non) of such a study. Such talk is imprecise and inaccurate; rarely will a study itself be admissible. A typical observational epidemiologic study (or for that matter, a randomized clinical trial) involves many levels of hearsay, including statements of study participants, statements of the study investigators to the participants to elicit their self-reported symptoms and diagnoses, statements and conclusions of the investigators who assessed and characterized exposure and health outcomes, statements and conclusions of investigators who collected, analyzed, and reported the study data, the statements of the peer reviewers and editors who called for changes in how the study would be reported, and so on, and so forth.

Perhaps the initial layer of hearsay from study participants could be considered admissible under Rule 803(4), which creates an exception for:

“Statements for purposes of medical diagnosis or treatment.—Statements made for purposes of medical diagnosis or treatment and describing medical history, or past or present symptoms, pain, or sensations, or the inception or general character of the cause or external source thereof insofar as reasonably pertinent to diagnosis or treatment.”

Statements made by study participants to study investigators, however, are typically made for neither diagnosis nor treatment. In a case-control study, for instance, the cases are already diagnosed, and the purpose of the study is not treatment. The control participants are selected because they have no diagnosis, and they certainly are not in need of any treatment. Rule 803(4) seems not to fit.

Perhaps Rule 803(6) would permit the records in the form of questionnaires, laboratory reports, and exposure assessments to be admitted as business records:

“Records of regularly conducted activity.—A memorandum, report, record, or data compilation, in any form, of acts, events, conditions, opinions, or diagnoses, made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record or data compilation, all as shown by the testimony of the custodian or other qualified witness, or by certification that complies with Rule 902(11), Rule 902(12), or a statute permitting certification, unless the source of information or the method or circumstances of preparation indicate lack of trustworthiness. The term ‘business’ as used in this paragraph includes business, institution, association, profession, occupation, and calling of every kind, whether or not conducted for profit.”

Even if one or another layer of hearsay could be removed, it is difficult to imagine that all the layers could fit into exceptions that would allow the study itself to be admitted. Furthermore, even if the study were admitted, its language and statistical analyses could not appropriately be used as direct evidence without the explanatory input of an expert witness. Epidemiologic evidence is thus virtually never admissible evidence at all, but rather part of the “facts and data” upon which expert witnesses rely to formulate their opinions. Because epidemiologic studies are otherwise inadmissible materials considered by expert witnesses, Rules 703 and 705 govern whether, how, and when such studies will be disclosed to the trier of fact.

It is interesting to consider the admissibility of another investigatory research tool, the survey, to help us understand the law’s consideration of epidemiologic studies. One frequently cited case provides a useful history and summary of what would be required to admit a survey and its results, when offered for the truth:

“The trustworthiness of surveys depends upon foundation evidence that
(1) the “universe” was properly defined,
(2) a representative sample of that universe was selected,
(3) the questions to be asked of interviewees were framed in a clear, precise and non-leading manner,
(4) sound interview procedures were followed by competent interviewers who had no knowledge of the litigation or the purpose for which the survey was conducted,
(5) the data gathered was accurately reported,
(6) the data was analyzed in accordance with accepted statistical principles and
(7) objectivity of the entire process was assured.”

Toys “R” Us, Inc. v. Canarsie Kiddie Shop, Inc., 559 F. Supp. 1189, 1205 (E.D.N.Y. 1983). The court noted that the foundation for admitting survey evidence should normally include the testimony of the survey director, supervisor, interviewers, and interviewees. The absence of one or more of the seven identified indicia of survey trustworthiness may require the exclusion of the survey. Id. (excluding the survey at issue for failing to satisfy the indicia of trustworthiness).

A further important lesson of Toys “R” Us is that an expert witness cannot save an otherwise untrustworthy survey under the cloak of Rule 703. The court, having excluded the survey as direct evidence, went on to exclude the expert witness opinion testimony based upon the survey. Id. (relying upon Rule 703 and case law and treatises interpreting this rule).

This case holds an important lesson for lawyers litigating mass tort cases that turn on epidemiologic evidence of harm. If a study is untrustworthy in light of the seven Toys “R” Us criteria, then the study may not be sufficiently reliable for an expert witness to rely upon under Rule 703. A further lesson is that many of the criteria can be answered only by accessing the underlying data and materials from the study. Many lawyers seem to have lost track of the importance of Rule 703 after the Supreme Court placed its reliance upon Rule 702 to support a requirement of expert witness gatekeeping. Rule 702, however, addresses the “sufficiency” of facts and data, and the reliability of methods and principles, not the reasonableness of reliance upon the underlying facts and data themselves.