TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Broadbent on the Relative Risk > 2 Argument

October 31st, 2012

Alex Broadbent, of the University of Johannesburg, Department of Philosophy, has published a paper that contributes to the debate over whether a relative risk (RR) greater than (>) two is irrelevant, helpful, necessary, or sufficient in inferring that an exposure more likely than not caused an individual claimant’s disease. Alex Broadbent, “Epidemiological Evidence in Proof of Specific Causation,” 17 Legal Theory 237 (2011) [cited as Broadbent].  I am indebted to him for calling his paper to my attention. Professor Broadbent’s essay is clearly written, which is helpful in assessing the current use of the RR > 2 argument in judicial decisions.

General vs. Specific Causation

Broadbent carefully distinguishes between general and specific causation.  By focusing exclusively upon specific causation (and assuming that general causation is accepted), he avoids the frequent confusion over when RR > 2 might play a role in legal decisions. Broadbent also “sanitizes” his portrayal of RR by asking us to assume that “the RR is not due to anything other than the exposure.” Id. at 241. This is a BIG assumption and a tall order for observational epidemiologic evidence.  The study or studies that establish the RR we are reasoning from must be free of bias and confounding. Id.  Broadbent does not mention, however, the statistical stability of the RR, which virtually always will be based upon a sample, and thus subject to the play of random error.  He sidesteps the need for statistical significance in comparing two proportions, but the most charitable interpretation of his paper requires us to assume further that the hypothetical RR from which we are reasoning is sufficiently statistically stable that random error, along with bias and confounding, can also be ruled out as a likely explanation for the RR > 1.
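The statistical stability point is easy to make concrete.  Below is a minimal sketch, with hypothetical counts, of the standard log-transform (Katz) confidence interval for a relative risk computed from a two-by-two table; the numbers are illustrative assumptions, not data from any study Broadbent discusses:

```python
import math

def rr_confidence_interval(a, b, c, d, z=1.96):
    """95% CI for a relative risk from a 2x2 table.

    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    Uses the standard log-transform (Katz) method.
    """
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical counts: same point estimate (RR = 3), very different stability.
print(rr_confidence_interval(30, 70, 10, 90))      # small study
print(rr_confidence_interval(300, 700, 100, 900))  # tenfold larger study
```

Both hypothetical studies yield the same RR of 3, but only the larger one produces an interval that sits entirely above 2; in the smaller study, random error alone leaves values below 2 well within the interval.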

Broadbent sets out to show that RR > 2 may, in certain circumstances, suffice to show specific causation, but he argues that RR > 2 is never logically necessary, and must never be required to support a claim of specific causation.  Broadbent at 237.  On the same page in which he states that epidemiologic evidence of increased risk is a “last resort,” Broadbent contradicts himself by stating RR > 2 evidence “must never be required,” and then, in an apparent about-face, he argues:

“that far from being epistemically irrelevant, to achieve correct and just outcomes it is in fact mandatory to take (high-quality) epidemiological evidence into account in deciding specific causation. Failing to consider such evidence when it is available leads to error and injustice. The conclusion is that in certain circumstances epidemiological evidence of RR > 2 is not necessary to prove specific causation but that it is sufficient.”

Id. at 237 (emphasis added). I am not sure how epidemiologic evidence can be mandatory but never logically necessary, and something that we should never require.

Presumably, Broadbent is using “to prove” in its legal and colloquial sense, and not as a mathematician.  Let us also give Broadbent his assumptions of “high quality” epidemiologic studies, with established general causation, and ask why, and explore when and whether, RR > 2 is not necessary to show specific causation.

The Probability of Causation vs. The Fact of Causation

Broadbent notes that he is arguing against what he perceives to be Professor Haack’s rejection of probabilistic inference, which would suggest that epidemiologic evidence is “never sufficient to establish specific causation.” Id. at 239 & n.3 (citing Susan Haack, “Risky Business: Statistical Proof of Individual Causation,” in Causación y Atribucion de Responsabilidad (J. Beltran ed., forthcoming)). He correctly points out that sometimes the probabilistic inference is the only probative inference available to support specific causation.  His point, however, does not resolve the dispute; it suffices only to show that whether we allow the probabilistic inference may be outcome determinative in many lawsuits.  Broadbent characterizes Haack’s position as one of two “serious mistakes in judicial and academic literature on this topic.”  Broadbent at 239.  The other alleged mistake is the claim that RR > 2 is needed to show specific causation:

“What follows, I conclude, is that epidemiological evidence is relevant to the proof of specific causation. Epidemiological evidence says that a particular exposure causes a particular harm within a certain population. Importantly, it quantifies: it says how often the exposure causes the harm. However, its methods are limited: they measure only the net effect of the exposure, leaving open the possibility that the exposure is causing more harm than the epidemiological evidence suggests—but ruling out the possibility that it causes less. Accordingly I suggest that epidemiological evidence can be used to estimate a lower bound on the probability of causation but that no epidemiological measure can be required. Thus a relative risk (RR, defined in Section II) of greater than 2 can be used to prove causation when there is no other evidence; but RR < 2 does not disprove causation. Given high-quality epidemiological evidence, RR > 2 is sufficient for proof of specific causation when no other evidence is available but not necessary when other evidence is available.”

Some of this seems reasonable enough.  Contrary to the claims of authors such as Haack and Wright, Broadbent maintains that some RR evidence is relevant and indeed probative of specific causation.  In a tobacco lung cancer case, with a plaintiff who has smoked three packs a day for 50 years (and an RR > 50), we can confidently attribute the lung cancer to smoking, and rest assured that background cosmic radiation did not likely play a substantial role. The RR quantifies the strength of the association, and it does lead us to a measure of “attributable risk” (AR), also known as the attributable fraction (AF):

AR = 1 – 1/RR.

So far, so good.
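The arithmetic behind the RR > 2 threshold can be sketched in a few lines; the function below simply implements the AR formula above and shows where the “more likely than not” line falls:

```python
def attributable_fraction(rr):
    """Excess fraction among the exposed: AR = 1 - 1/RR (for RR >= 1)."""
    return 1.0 - 1.0 / rr

# AR crosses 0.5 exactly at RR = 2; above that, the attributable
# fraction exceeds one half, the "more likely than not" threshold.
for rr in (1.5, 2.0, 3.0, 50.0):
    print(rr, attributable_fraction(rr))
```

At RR = 2 the attributable fraction is exactly 0.5; at the heavy smoker’s RR > 50, it exceeds 0.98, which is why the attribution in that hypothetical is so comfortable.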

Among the perplexing statements above, however, Broadbent suggests that:

1. The methods of epidemiologic evidence measure only the net effect of the exposure.  Epidemiologic evidence (presumably the RR or other risk ratio) provides a lower bound on the probability of causation.  I take up this suggestion in discussing Broadbent’s distinction between the “excess fraction,” and the “etiologic fraction,” below.

2. A RR > 2 “can be used to prove causation when there is no other evidence; but RR < 2 does not disprove causation.” (My emphasis.) For an author who is usually careful with his qualifications and his language, it is distressing to see him comparing apples to oranges.  Note that RR > 2 suffices “when there is no other evidence,” but the parallel statement about RR < 2 is not similarly qualified, and the statement about RR < 2 is framed in terms of disproof of causation. Even if the RR < 2 did not “disprove” specific causation, when there was no other evidence, it would not prove causation.  And if there is no other evidence, judgment for the defense must result. Broadbent fails to provide a persuasive scenario in which a RR ≤ 2, with no other evidence, would support an inference of specific causation.

Etiological Fraction vs. Excess Fraction — Occam’s Disposable Razor

Broadbent warns that the expression “attributable risk” (AR or “attributable fraction,” AF) is potentially misleading.  The numerical calculation identifies the number of cases in excess of those “expected” from the base rate, and proceeds from there.  The AR thus identifies the “excess fraction,” and not the “etiological fraction,” which is the fraction of all cases in which exposure makes a contribution. Broadbent tells us that:

“Granted a sound causal inference, we can infer that all the excess cases are caused by the exposure. But we cannot infer that the remaining cases are not caused by the exposure. The etiologic fraction—the cases in which the exposure makes a causal contribution—could be larger. Roughly speaking, this is because, in the absence of substantive biological assumptions, it is possible that the exposure could contribute to cases that would have occurred12 even without the exposure.13 For example, it might be that smoking is a cause of lung cancer even among some of those who would have developed it anyway. The fact that a person would have developed lung cancer anyway does not offer automatic protection against the carcinogenic effects of cigarette smoke (a point we return to in Section IV).”

Id. at 241. In large measure here, Broadbent has adopted (and acknowledged) his borrowings from Professor Sander Greenland.  Id. at 242 n.11. The argument  still fails.  What Broadbent has interposed is a “theoretical possibility” that the exposure in question may contribute to those cases that would have occurred anyway.  Note that raising theoretical possibilities here now alters the hypothetical; Broadbent is no longer working from a hypothetical that we have a RR and no other evidence.  Even more important, we are left guessing what it means to say that an exposure causes some cases that would have occurred anyway.  If we accept the postulated new evidence at face value, we can say confidently that the exposure is not the “but for” cause of the case at issue.  Without sufficient evidence of “but for” causation, plaintiff will lose. Furthermore, we are being told to add a new fact to the hypothetical, namely that the non-excess cases are causally over-determined.  If this is the only additional new fact being added, a court might invoke the rule in Summers v. Tice, but even so, the defense will be entitled to a directed verdict if the RR < 2. (If the RR = 2, I suppose, the new fact, and the change in the controlling rule, might alter the result.)

Exposures that Cause Some and Prevent Some Cases of Disease

Broadbent raises yet another hypothetical possibility, which adds to, and materially alters, his original hypothetical.  If the exposure in question causes some cases and prevents others, then the RR ≤ 2 will not permit us to infer that a given case is less likely than not the result of the exposure.  (Broadbent might have given an example of what he had in mind, from well-established biological causal relationships; I am skeptical that he would have found one that would have satisfactorily made his argument.) The bimodal distribution of causal effects is certainly not typical of biological processes, but even if we indulge the “possibility,” we are now firmly in the realm of speculation.  This is a perfectly acceptable realm for philosophers, but in court, we want evidence.  Assuming that the claimant could present such evidence, finders of fact would still founder because the new evidence would leave them guessing whether the claimant was a person who would have gotten the disease anyway, or got it because of the exposure, or even got it in spite of the exposure.

Many commentators who urge a “probability of [specific] causation” approach equate the probability of causation (PC) with the AR.  Broadbent argues that because some biological model may result in the etiologic fraction exceeding the excess fraction, the usual equation, PC = AR, must be replaced with an inequality:

PC ≥ AR

While the point is logically unexceptional, Broadbent must concede that some other evidence, which supports and justifies the postulated biological model, is required to change the equality to an inequality.  If no other evidence besides the RR is available, we are left with the equality.  Broadbent tells us that the biological model “often” requires that the etiological fraction exceeds the excess fraction, but he never tells us how often, or how we would ascertain the margin of error.  Id. at 256.

Broadbent does not review any of the decided judicial cases to point out which ones involved biological models that invalidated the equality.  Doing so would be an important exercise because it might well show that even where PC ≥ AR, with a non-quantified upper bound, the plaintiff might still fail in presenting a prima facie case of specific causation.  Suppose the population RR for the exposure in question were 1.1, and we “know” (and are not merely speculating) that the etiological fraction > excess fraction.   Unless we know how much greater the etiological fraction is, such that we can recalculate the PC, we are left agnostic about specific causation.
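The RR = 1.1 hypothetical shows just how much work the unquantified inequality would have to do.  A quick sketch of the arithmetic (using the excess-fraction formula from above):

```python
def excess_fraction(rr):
    """Excess fraction: 1 - 1/RR, the lower bound Broadbent assigns to PC."""
    return 1.0 - 1.0 / rr

rr = 1.1
ar = excess_fraction(rr)   # roughly 0.09

# Factor by which the etiologic fraction would have to exceed the
# excess fraction before PC crosses the more-likely-than-not line:
needed = 0.5 / ar          # roughly 5.5-fold
print(ar, needed)
```

With RR = 1.1, the excess fraction is about 9 percent; the etiologic fraction would have to be more than five times larger before specific causation became more likely than not.  Nothing in the bare inequality PC ≥ AR supplies that multiplier.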

Broadbent treats us to several biological scenarios in which PC possibly is greater than AR.  All of these scenarios violate his starting premiss that we have a RR with no other evidence. For instance, Broadbent hypothesizes that exposure might accelerate onset of a disease.  Id. at 256. This biological model of acceleration can be established with the same epidemiologic evidence that established the RR for the population.  Epidemiologists will frequently look at time windows from onset of exposure to explore whether there is an acceleration of onset of cases in a younger age range that offsets a deficit later in the lives of the exposed population.  If there were firm evidence of such a phenomenon, then we would look to the RR within the relevant time window.  If the relevant RR ≤ 2, the biological model will have added nothing to the plaintiff’s case.

Broadbent cites Greenland for the proposition that PC > AR:

“We know of no cancer or other important chronic disease for which current biomedical knowledge allows one to exclude mechanisms that violate the assumptions needed to claim that PC = [AF].”

Id. at 259, quoting from Sander Greenland & James Robins, “Epidemiology, Justice, and the Probability of Causation,” 40 Jurimetrics J. 321, 325 (2000).  Here, not only has Broadbent postulated a mechanism that makes PC > AR, but he has shifted the burden of proof to the defense to exclude it!

The notion that the etiological fraction may exceed the excess fraction is an important caveat.  Courts and lawyers should take note.  It will not do, however, to wave hands and exclaim that the RR > 2 is not a “litmus test,” and proceed to let any RR > 1, or even RR ≤ 1, support a verdict.  The biological models that may push the etiological fraction higher than the excess fraction can be tested, and quantified, with the same epidemiologic approaches that provided a risk ratio in the first place.  Broadbent gives us an example of this sort of hand waving:

“Thus, for example, evidence that an exposure would be likely to aggravate an existing predisposition to the disease in question might suffice, along with RR between 1 and 2, to make it more likely than not that the claimant’s disease was caused by the exposure.”

Id. at 275. This is a remarkable, and unsupported claim.  The magnitude of the aggravation might still leave the RR ≤ 2.  What is needed is evidence that would allow quantification of the risk ratio in the scenario presented. Speculation will not do the trick; nor will speculation get the case to a jury, or support a verdict.

 

Call for Evidence-Based Medicine in Medical Expert Opinions

October 30th, 2012

Evidence-based medicine (EBM) seeks to put health care decision making on a firm epistemic foundation, rather than on the personal opinion of health care providers.  David Sackett, et al., “Evidence based medicine: what it is and what it isn’t,” 312 Brit. Med. J. 71 (1996).  EBM thus offers a therapeutic intervention, sometimes in the form of strong medicine, to the sloppy thinking, intuition, mothers’ hunches, and leveling of studies that remain prevalent in the Rule 702 gatekeeping of medical causation opinion testimony in courts.  There are some who have suggested that EBM addresses therapeutic interventions only, and not disease causation by exogenous substances or processes.  A very recent publication in the Tort Trial & Insurance Practice Law Journal provides a strong rebuttal to the naysayers and a clear articulation of the need now, more than ever, for greater acknowledgment of EBM in the evaluation of expert witness opinion testimony.  Terence M. Davidson & Christopher P. Guzelian, “Evidence-based Medicine (EBM): The (Only) Means for Distinguishing Knowledge of Medical Causation from Expert Opinion in the Courtroom,” 47 Tort Trial & Ins. Practice L. J. 741 (2012) [cited as Davidson].

Terence M. Davidson is a physician, a Professor of Surgery, and the Associate Dean for Continuing Medical Education at the University of California, San Diego School of Medicine.  Christopher P. Guzelian is an Assistant Professor of Law at Thomas Jefferson School of Law, in San Diego, California. Davidson and Guzelian bring the Rule 702 discussion and debate back to the need for epistemic warrant, not glitz, glamour, hunches, prestige, and the like.  Their article is a valuable contribution, and the authors’ presentation and defense of EBM in the gatekeeping process is commendable.

There are some minor dissents I would offer.  For instance, in applying EBM principles to causation of harm assessments, we should recognize that there are asymmetries between determining therapeutic benefit and environmental or occupational harm.  Physicians, even those practicing EBM, may well recommend removal from a potentially toxic exposure because the very nature of their clinical judgment is often precautionary.  Tamraz v. BOC Group Inc., No. 1:04-CV-18948, 2008 WL 2796726 (N.D. Ohio July 18, 2008) (denying Rule 702 challenge to treating physician’s causation opinion), rev’d sub nom., Tamraz v. Lincoln Elec. Co., 620 F.3d 665, 673 (6th Cir. 2010) (carefully reviewing record of trial testimony of plaintiffs’ treating physician; reversing judgment for plaintiff based in substantial part upon treating physician’s speculative causal assessment created by plaintiffs’ counsel; “Getting the diagnosis right matters greatly to a treating physician, as a bungled diagnosis can lead to unnecessary procedures at best and death at worst. But with etiology, the same physician may often follow a precautionary principle: If a particular factor might cause a disease, and the factor is readily avoidable, why not advise the patient to avoid it? Such advice—telling a welder, say, to use a respirator—can do little harm, and might do a lot of good. This low threshold for making a decision serves well in the clinic but not in the courtroom, where decision requires not just an educated hunch but at least a preponderance of the evidence.”) (internal citations omitted), cert. denied, ___ U.S. ___ , 131 S. Ct. 2454, 2011 WL 863879 (2011).

The wisdom of the Tamraz decision (in the 6th Circuit) lies in its recognition of the asymmetries involved in medical decision making.  For most diseases, physicians rarely have to identify an etiology to select efficacious treatment.  This asymmetry affects the general – specific causation distinction.  A physician will want some epistemic warrant for the judgment that a therapy or medication is efficacious.  In other words, the physician needs to know that there is efficacy, even though the intervention may not be efficacious in every case.  If the risk ratio for an intervention (where the risk is cure of the disease or disorder), is greater than 1.0, and chance, bias, and confounding are eliminated as explanations for the observed efficacy, then that intervention likely goes into the physician’s therapeutic armamentarium.  The risk ratio, of course, need not be greater than two for the intervention to remain clinically attractive.  Furthermore, if the therapy is provided, and the patient improves, the determination whether therapy itself was efficacious is often not a pressing clinical matter.  After all, if the risk ratio was greater than one, but two or less, then the improvement may have been spontaneous and unrelated to therapy.

Davidson and Guzelian do not fully recognize this asymmetry, which leads the authors into error.  They give an example in which a defense expert witness proffered a personal opinion about general causation of breast cancer by post-menopausal hormone replacement therapy, which opinion is undermined and contradicted by a judgment reached with EBM principles.  See Cross v. Wyeth Pharm., Inc., 2011 U.S. Dist. LEXIS 89078, at *10 (M.D. Fla. Aug. 10, 2011).  Fair enough, but Davidson and Guzelian then claim that the errant defense expert had no basis for claiming that there was no generally accepted basis for “diagnosing specific medical causation.” Davidson at 757.  The authors go even further and claim that the defense expert’s statement is “simply false.” Id.

I would suggest that the authors have gotten this dead wrong.  In this sort of case, the plaintiff’s expert witness is usually the one casting about for a basis to support specific attribution.  The authors offer no basis for their judgment that the defense expert witness is wrong, or lacks a basis for his specific causation judgment. The poor, pilloried defense expert was, in the cited case, opining that there was no way to attribute a particular patient’s breast cancer to her prior use of post-menopausal hormone replacement therapy.  Putting aside the possibility of long-term use (with risk ratio greater than 2.0), the expert’s opinion is reasonable. General causation does not logically or practically imply specific causation; they are separate and distinct determinations.  Perhaps a high risk ratio might justify a probabilistic inference that the medication caused the specific patient’s breast cancer, but for many HRT-use factual scenarios, the appropriate risk ratio is two or less.  If there is some other method Davidson and Guzelian have in mind, they should say so. The authors miss an important point, which is that EBM sets out to provide a proper foundation for judgments of causality (whether of therapeutic benefit or harm), but it often does not have the epistemic foundation to provide a resolution of the individual causation issue. In medicine, there often is simply no need to do so.

One other nit.  The authors briefly discuss statistical significance, citing the Supreme Court’s recent foray into statistical theory.  Davidson at 747 & n. 14 (citing Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309, 1321 (2011)).  In their explanatory parenthetical, however, the authors describe the case as “holding that a lack of statistical significance in a pharmaceutical company’s results does not exempt the company from material disclosure requirements for reporting adverse events during product testing.”  Id. 

Matrixx Initiatives held no such thing; the Supreme Court was faced with an adequacy-of-pleadings case. No evidence was ever offered; nor was there any ruling on the reliability or sufficiency of evidence of causation. Matrixx Initiative’s attempt to import Rule 702 principles of reliability into a motion to dismiss on the pleadings was seriously misguided. First, even assuming that statistical significance was necessary to causation, regulatory action did not require a showing of causality; therefore, statistical significance was never necessary for the plaintiffs’ case. Second, the company’s argument that the adverse event reports at issue were “not statistically significant” was fallacious because adverse event reports, standing alone, could not be “statistically significant” or “insignificant.” The company would need to know the expected base rate for anosmia among Zicam users, and it would need to frame the adverse event reports in terms of an observed rate, so that the expected and observed rates could be compared against an assumption of no difference. Third, the class plaintiffs had alleged considerably more than just the adverse events, and the allegations taken together deserved the attention of a reasonable investor.  Bottom line:  the comments that the Court made about the lack of necessity for statistical significance were pure obiter dictum.
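The observed-versus-expected comparison the company would have needed can be sketched with entirely hypothetical figures (the user count, background rate, and report count below are assumptions for illustration, not facts from the case): treat the expected number of cases as Poisson and ask how surprising the observed count is.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), from the exact pmf."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

# Hypothetical figures only -- the point is that adverse event reports
# cannot be "statistically significant" without an expected base rate.
users = 1_000_000                     # assumed number of product users
background_rate = 1e-5                # assumed background rate of anosmia
expected = users * background_rate    # about 10 expected cases
observed = 23                         # assumed number of reported cases

p = poisson_sf(observed, expected)
print(expected, p)
```

Without the denominator (the user population) and the background rate, the raw count of reports is uninterpretable; with them, an ordinary tail probability can be computed, which is precisely the comparison the company never framed.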

Highlighting these two issues in the Davidson & Guzelian article should not detract from the importance of the authors’ general enterprise. There is an aversion to examining the “epistemic warrant” behind opinion evidence in federal court gatekeeping.  Anything that treats that aversion, such as Davidson & Guzelian’s article, is good medicine.

Old-Fashioned Probabilism – Origins of Legal Probabilism

October 26th, 2012

In several posts, I have addressed Professor Haack’s attack on legal probabilism.  See “Haack Attack on Legal Probabilism” (May 6, 2012).

The probabilistic mode of reasoning is not a modern innovation; nor is the notion that the universe is entirely determined, although revealed to humans as a stochastic phenomenon:

“I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.”

Ecclesiastes 9:11 King James Bible (Cambridge ed.)

The Old Testament describes the “casting of lots,” some sort of dice rolling or coin flipping, in a wide variety of human decision making.  The practice is described repeatedly in the Old Testament, and half a dozen times in the New Testament.

Casting of lots figures more prominently in the Old Testament, in the making of important decisions, and in attempting to ascertain “God’s will.”  The Bible describes matters of inheritance, Numbers 34:13; Joshua 14:2, and division of property, Joshua 14-21, Numbers 26:55, as decided by lots.  Elections to important office, including offices and functions in the Temple, were determined by lot. 1 Chronicles 24:5, 31; 25:8-9; 26:13-14; Luke 1:9.

Casting lots was an early form of alternative dispute resolution – alternative to slaying and smiting.  Proverbs describes the lot as used as a method to resolve quarrels.  Proverbs 18:18.  Lot casting determined fault in a variety of situations.  Lots were cast to identify the culprit who had brought God’s wrath upon Jonah’s ship. Jonah 1:7 (“Come, let us cast lots, that we may know on whose account this evil has come upon us.”).

What we might take as a form of gambling appeared to have been understood by the Israelites as a method for receiving instruction from God. Proverbs 16:33 (“The lot is cast into the lap, but its every decision is from the Lord.”).  This Old Testament fortune cookie suggests that the Lord knows the outcome of the lot casting, but mere mortals must wager.  I like to think the passage means that events that appear to be stochastic to humans may have a divinely determined mechanism.  In any event, the Bible describes various occasions on which lots were cast to access the inscrutable intentions and desires of the Lord.  Numbers 26:55; 33:54; 34:13; 36:2; Joshua 18:6-10; 1 Chronicles 24:5,31; 1 Samuel 14:42; Leviticus 16:8-10 (distinguishing between sacrificial and scape goat).

In the New Testament, the Apostles cast lots to decide upon a replacement for Judas (Acts 1:26). Matthias was the winner.  Matthew, Mark, and John describe Roman soldiers casting lots for Jesus’ garments (Matthew 27:35; Mark 15:24; John 19:24).  See also Psalm 22:18.  This use of lots by the Roman soldiers seems to have taken some of the magic out of lot casting, which fell into disrepute and gave way to consultations with the Holy Spirit for guidance on important decisions.

The Talmud deals with probabilistic inference in more mundane settings.  The famous “Nine Shops” hypothetical poses 10 butcher shops in a town, nine of which sell kosher meat.  The hypothetical addresses whether the dietary laws permit eating a piece of meat found in town, when its butchering cannot be attributed to either the nine kosher shops or the one non-kosher shop:

“A typical question involves objects whose identity is not known and reference is made to the likelihood that they derive from a specific type of source in order to determine their legal status, i.e. whether they be permitted or forbidden, ritually clean or unclean, etc. Thus, only meat which has been slaughtered in the prescribed manner is kasher, permitted for food. If it is known that most of the meat available in a town is kasher, there being, say, nine shops selling kasher meat and only one that sells non-kasher meat, then it can be assumed when an unidentified piece of meat is found in the street that it came from the majority and is therefore permitted.”

Nachum L. Rabinovitch, “Studies in the History of Probability and Statistics.  XXII Probability in the Talmud,” 56 Biometrika 437, 437 (1969).  Rabinovitch goes on to describe the Talmud’s resolution of this earthly dilemma:  “follow the majority” or the most likely inference.

A small digression on this Talmudic hypothetical.  First, why not try to find out whether someone has lost this package of meat? Or turn the package in to the local “lost and found.” Second, how can it be kosher to eat a piece of meat found lying around in the town?  This is really not very appetizing, and it cannot be good hygiene.  Third, why not open the package and determine whether it’s a nice pork tenderloin or a piece of cow?  This alone could resolve the issue. Fourth, the hypothetical posed asks us to assume a 9:1 ratio of kosher to non-kosher shops, but what if the one non-kosher shop had a market share equal to the other nine? The majority rule could lead to an untoward result for those who wish to keep kosher.

The Talmud’s proposed resolution is, nevertheless, interesting in anticipating the controversy over the use of “naked statistical inferences” in deciding specific causation or discrimination cases.  Of course, the 9:1 ratio is sufficiently high that it might allow an inference about the “likely” source of the meat.  The more interesting case would have been a town with 11 butcher shops, six of which were kosher.  Would the rabbis of old have had the intestinal fortitude to eat lost & found meat, on the basis of a ratio of 6:5?

In the 12th century, Maimonides rejected probabilistic conclusions for assigning criminal liability, at least where the death penalty was at issue:

“The 290th Commandment is a prohibition to carry out punishment on a high probability, even close to certainty . . .No punishment [should] be carried out except where . . . the matter is established in certainty beyond any doubt, and , moreover, it cannot be explained otherwise in any manner.  If we do not punish on very strong probabilities, nothing can happen other than a sinner be freed; but if punishment be done on probability and opinion it is possible that one day we might kill an innocent man — and it is better and more desirable to free a thousand sinners, than ever kill one innocent.”

Stephen E. Fienberg, ed., The Evolving Role of Statistical Assessments as Evidence in the Courts 213 (N.Y. 1989), quoting from Nachum Rabinovitch, Probability and Statistical Inference in Ancient and Medieval Jewish Literature 111 (Toronto 1973).

Indiana Senate candidate and theocrat, Republican Richard E. Mourdock, recently opined that conception that results from rape was God’s will:

“I’ve struggled with it myself for a long time, but I came to realize that life is that gift from God.  And even when life begins in that horrible situation of rape, that it is something that God intended to happen.”

Jonathan Weisman, “Rape Remark Jolts a Senate Race, and the Presidential One, Too,” N.Y. Times (Oct. 25, 2012 ).

Mourdock’s comments about pregnancies resulting from rape representing God’s will show that stochastic events continue to be interpreted as determined mechanistic events at some “higher plane.” Magical thinking is still with us.

Origins of the Relative Risk of Two Argument for Specific Causation

October 20th, 2012

In an unpublished paper, which Professor Susan Haack has presented several times over the last few years, she has criticized the relative risk [RR] >2 argument.  In these presentations, Haack has argued that the use of RR to infer specific causation is an example of flawed “probabilism” in the law.  Susan Haack, “Risky Business:  Statistical Proof of Individual Causation,” in Jordi Ferrer Beltrán, ed., Casuación y atribución de responsibilidad (Madrid: Marcial Pons, forthcoming)[hereafter Risky Business]; Presentation at the Hastings Law School (Jan. 20, 2012);  Presentation at University of Girona (May 24, 2011).  Elsewhere, Haack has criticized the use of relative risks for inferring specific causation on logical grounds.  See, e.g., Susan Haack, “Warrant, Causation, and the Atomism of Evidence Law,” 5 Episteme 253, 261 (2008)[hereafter “Warrant“];  “Proving Causation: The Holism of Warrant and the Atomism of Daubert,” 4 J. Health & Biomedical Law 273, 304 (2008)[hereafter “Proving Causation“].  (See Schachtman, “On the Importance of Showing Relative Risks Greater Than Two – Haack’s Arguments” (May 23, 2012) (addressing errors in Haack’s analysis).

In “Risky Business,” Haack describes the RR > 2 argument as the creation of government lawyers from the litigation over claims of Guillain-Barré syndrome (GBS), by patients who had received swine flu vaccine.  Like her logical analyses, Haack’s historical description is erroneous.  The swine flu outbreak of 1976, indeed, had led to a federal governmental immunization program, which in turn generated claims that the flu vaccine caused GBS.  Litigation, of course, ensued.  The origins of the RR > 2 argument, however, predate this litigation.

GBS is an auto-immune disease of the nervous system.  The cause or causes of GBS are largely unknown. In the GBS vaccine cases, the government took the reasonable position that treating physicians or clinicians have little or nothing to contribute to understanding whether the swine-flu vaccine can cause GBS or whether the vaccine caused a particular patient’s case.  Cook v. United States, 545 F. Supp. 306 (N.D. Cal. 1982); Iglarsh v. United States, No. 79 C 2148, 1983 U.S. Dist. Lexis 10950 (N.D. Ill. Dec. 9, 1983).  The government did, however, concede that cases that arose within 10 weeks of vaccination were more likely than not related, on the basis of surveillance data from the Centers for Disease Control.  After 10 weeks, the relative risk dropped to two or less, and thus the plaintiffs who developed GBS 10 weeks or more after immunization were more likely than not idiopathic cases (or at least non-vaccine cases).  See Michael D. Green, “The Impact of Daubert on Statistically Based Evidence in the United States,” Am. Stat. Ass’n, Proc. Comm. Stat. Epidem. 35, 37-38 (1998) (describing use of probabilistic evidence in the GBS cases).

Haack’s narrative of the evolution of the RR > 2 argument creates the impression that the government lawyers developed their defense out of thin air.  This impression is false.  By the time the Cook and Iglarsh cases were litigated, the doubling-of-risk notion had been around for decades in the medical literature on radiation risks and effects.  Ionizing radiation had been shown to have genetic effects, including cancer risk, in the 1920s.  By the time of the Manhattan Project, radiation was a known cause of certain types of cancer. Although there was an obvious dose-response relationship between radiation and cancer, the nature of the relationship and the existence of thresholds were not well understood.  Medical scientists, aware that there were background mutations and genetic mistakes, thus resorted to the concept of a “doubling dose” to help isolate exposures that would likely be of concern.  See, e.g., Laurence L. Robbins, “Radiation Hazards:  III. Radiation Protection in Diagnostic Procedures,” 257 New Engl. J. Med. 922, 923 (1957) (discussing doubling dose in the context of the medical use of radiation).

By 1960, the connection between “doubling dose” and a legal “more likely than not” evidentiary standard was discussed in the law review literature.  See, e.g., Samuel D. Estep, “Radiation Injuries and Statistics: The Need for a New Approach to Injury Litigation,” 59 Mich. L. Rev. 259 (1960).  If the doubling dose concept was not obviously important for specific causation previously, Professor Estep made it so in his lengthy law review article.  By 1960, the prospect of litigation over radiation-induced cancers, which had a baseline prevalence in the population, was a real threat.  Estep described the implications of the doubling dose:

“This number is known technically as the doubling dose and has great legal significance under existing proof rules.”

Id. at 271.

* * *

“The more-probable-than-not test surely means simply that the trier of fact must find that the chances that defendant’s force caused the plaintiff’s injuries are at least slightly better than 50 percent; or, to put it the other way, that the chances that all other forces or causes together could have caused the injury are at least no greater than just short of 50 percent. Even if such an analysis is inapplicable to other types of cases, in those cases in which the only proof of causal connection is a statistical correlation between radiation dose and injury, the only just approach is to use a percentage formula. This is the case with all nonspecific injuries, including leukemia. Under existing rules the only fair place to draw the line is at 50 percent. These rules apply when the injury is already manifested as of the time of trial.”

Id. at 274.

The RR > 2 argument was also percolating through the biostatistical and epidemiologic communities before the Cook and Iglarsh cases.  For instance, Philip Enterline, a biostatistician at the University of Pittsburgh, specifically addressed the RR > 2 argument in a 1980 paper:

“The purpose of this paper is to illustrate how epidemiologic data can be used to make statements about causality in a particular case.” 

* * *

“In summary, while in a given instance we cannot attribute an individual case of disease to a particular occupational exposure, we can, based on epidemiologic observation, make a statement as to the probability that a particular occupational exposure was the cause.  Moreover, we can modify this probability by taking into consideration various aspects of a particular case.” 

Philip Enterline, “Attributability in the Face of Uncertainty,” 78 (Supp.) Chest 377, 377, 378 (1980).

About the time of the Cook case, the scientific media discussed Enterline’s suggestion for using epidemiologic data to infer specific causation.  See, e.g., Janet Raloff, “Compensating radiation victims,” 124 Science News 330 (1983).  Dr. David Lilienfeld, son of the well-known epidemiologist Abraham Lilienfeld, along with a lawyer, further popularized the use of attributable risk, derived from a relevant RR, to quantify the probability that an individual case is causally related to an exposure of interest.  See David Lilienfeld & Bert Black, “The Epidemiologist in Court,” 123 Am. J. Epidem. 961, 963 (1986) (describing how a relative risk of 1.5 allows an inference of an attributable risk of 33%, which means any individual case is less likely than not to be causally related to the exposure).

In the meantime, the RR > 2 argument picked up support from other professional epidemiologists.  In 1986, Dr. Otto Wong explained that for many common cancers, tied to multiple non-specific risk factors, probabilistic reasoning was the only way to make a specific attribution:

“In fact, all cancers have multiple causes. Furthermore, clinical features of cancer cases, caused by different risk factors, are seldom distinguishable from one another. Therefore, the only valid scientific way to address causation in a specific individual is through use of probability.”

Otto Wong, “Using Epidemiology to Determine Causation in Disease,” 3 Natural Resources & Env’t 20, 23 (1988).  The attributable risk [AR], derived from the RR, was the only rational link that could support attribution in many cases:

“For AR [attributable risk] to be greater than 50% (more likely than not), RR has to be greater than 2.  Thus, for any exposure with a RR of less than 2, the cancer cannot be attributed to that exposure according to the ‘more likely than not’ criterion.  That is, that cancer is ‘more likely than not’ a background case.”

***

“The epidemiologic measure for probability of causation is attributable risk, which can be used to determine whether a particular cause in an individual case meets the ‘more likely than not’ criterion.”

Id. at 24.

In 1988, three Canadian professional epidemiologists described the acceptance of the use of epidemiologic data to attribute bladder cancer cases in the aluminum industry. Ben Armstrong, Claude Tremblay, and Gilles Theriault, “Compensating Bladder Cancer Victims Employed in Aluminum Reduction Plants,” 30 J. Occup. Med. 771 (1988).

The use of the RR > 2 argument was not a phenomenon limited to defense counsel or defense-friendly expert witnesses.  In 1994, a significant textbook, edited by two occupational physicians who were then and are now associated with plaintiffs’ causes, explicitly embraced the RR argument. Mark R. Cullen & Linda Rosenstock, “Principles and Practice of Occupational and Environmental Medicine,” chap. 1, in Linda Rosenstock & Mark R. Cullen, eds., Textbook of Clinical Occupational and Environmental Medicine 1 (Phila. 1994) [Cullen & Rosenstock].

The editors of this textbook were also the authors of the introductory chapter, which discussed the RR > 2 argument.  The first editor-author, Mark R. Cullen, is now a Professor of Medicine in Stanford University’s School of Medicine.  He is a member of the Institute of Medicine (IOM). Professor Cullen has been involved in several litigations, almost always on the plaintiffs’ side.  In the welding fume litigation, Cullen worked on a plaintiff-sponsored study of Mississippi welders.  Linda Rosenstock was the director of the National Institute for Occupational Safety and Health (NIOSH) from 1994 through 2000. Dr. Rosenstock left NIOSH to become the dean of the University of California, Los Angeles School of Public Health.  She too is a member of the IOM.  Here is how Cullen and Rosenstock treat the RR > 2 argument in their textbook:

“In most workers’ compensation and legal settings, one of the physician’s roles in OEM [occupational and environmental medicine] practice is to establish whether or not it is probable (greater than 50% likelihood) that the patient’s injury or disease is occupationally or environmentally related. Physicians, whose standards of scientific certainty are usually considerably higher than those of the legal field (for example, often at the 95% level that an observed association did not occur by chance), need to appreciate that a disease may be deemed work related (i.e., in legal jargon, with medical certainty or more probable than not) even when there remains significant uncertainty (up to 50%) about this judgment.

Epidemiologic or population-based data may be used to provide evidence of both the causal relationship between an exposure and an outcome and the likelihood that the exposure is related to the outcome in an individual case. *** Although they are not fully conclusive, well-performed and interpreted epidemiologic studies can play an important role in determining the work-relatedness of disease in a person, using some of the additional guidelines below.”

***

“The concept of attributable fraction, known by many names, including attributable risk and etiologic fraction, has particular utility in determining the likelihood of importance of a hazardous exposure. Although these numbers refer to risks in groups, as shown in the following section, reasonable extrapolations from these numbers can often be made about risks in individuals.”

Cullen & Rosenstock at 13. Cullen & Rosenstock work through an easy example and discuss its implications:

“For example, if all the members of a population are exposed to a factor, and there is a RR of 5 of disease in relation to the factor, then the PAR = 80% (= (5 – 1)/5 X 100). If exposures and other population characteristics are similar in a second population, then it also can be assumed that this factor will account for 80% of cases of the disease. A short conceptual leap can be made to individual attribution:  if an affected individual is similar (e.g., in age and gender) to those in the population and is similarly exposed (e.g., similar duration, intensity, and latency), then there is an 80% likelihood that the factor caused the disease in that individual.”

***

“By this reasoning of assuming that all in a population are exposed and the relative risk is greater that [sic] 2, then the PAR [population attributable risk] is greater than 50% (where PAR = (2 – 1)/2 X 100%).  Accordingly, if an affected individual is similar to the population in a study that has demonstrated a RR ≥  2, then the legal test (that there is a greater than 50% likelihood that the factor caused disease) can be met.”

***

“In cases in which the relative risks are stable (i.e., very narrow confidence intervals) and the patient is typical of the population studied, one can state these individual attributable risks with some assurance that they are valid estimates. When the studies are of limited power or give varying results, or if the patient’s exposure cannot be easily related to the study population, caution in using this method is appropriate.”

Cullen & Rosenstock at 13-14. Cullen and Rosenstock embraced probabilistic evidence because they understood that antipathy to probabilistic inference meant that there could be no rational basis for supporting recoveries in the face of known hazards that carried relative risks that were low, though greater than two.  The “conceptual leap” these authors described is small compared to the unbridgeable analytical gaps that result from trying to infer specific causation from clinicians’ hunches.
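The attributable-fraction arithmetic quoted in the passages above reduces to a one-line formula, AR = (RR − 1)/RR. A minimal sketch (in Python, purely for illustration) reproduces the figures the authors quoted:

```python
def attributable_fraction(rr: float) -> float:
    """Attributable fraction among the exposed, AR = (RR - 1) / RR,
    the formula quoted from Wong and from Cullen & Rosenstock.
    The result exceeds 50% only when RR exceeds 2."""
    if rr <= 1.0:
        raise ValueError("formula applies only when RR > 1")
    return (rr - 1.0) / rr

# Relative risks discussed in the text:
#   RR = 1.5 -> ~33% (Lilienfeld & Black's example)
#   RR = 2   ->  50% (the 'more likely than not' threshold)
#   RR = 5   ->  80% (Cullen & Rosenstock's worked example)
for rr in (1.5, 2.0, 5.0):
    print(f"RR = {rr}: AR = {attributable_fraction(rr):.0%}")
```

As the loop shows, the 50% legal threshold falls exactly at RR = 2, which is the entire arithmetic basis of the RR > 2 argument.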

Egilman’s Allegations Against McDonald and His Epidemiologic Research Are Baseless

October 20th, 2012

Dr. David Egilman has been trash-talking the epidemiologic studies of Quebec asbestos miners and millers for so long that most sensible people have tuned out his diatribe.  The studies attacked by Egilman were done under the supervision of a capable epidemiologist, J. Corbett McDonald, Emeritus Professor of McGill University, in Montreal.  McDonald is now in his 90s, but remains active as an Honorary Professorial Research Fellow at the National Heart & Lung Institute, Imperial College London (UK).

McDonald’s studies showed a significant fiber-type differential in mesothelioma causation.  Even though his studies have been corroborated by the work of researchers from around the world, McDonald’s studies remain among the largest, and best-conducted.  As such, the McDonald work has always stuck in the craw of Selikoff and his co-conspirators who have tried to politicize the science of fiber-type differential.

Irving Selikoff died 20 years ago, but his political heirs have continued to prosecute the reputation of scientists (Doll, McDonald, Wagner, and others) who dared to disagree with Selikoff dogma.  Egilman has often led the charge against McDonald, in publications and ultimately in an ethics complaint to McDonald’s former employer, McGill University.  This complaint was then used by Egilman’s trial lawyer supporters to impugn the studies that are anathema to their mission to squeeze every last possible cent from the asbestos fiasco.  See, e.g., Shein Law Center, Ltd., “McGill University Asbestos Study under Attack” (Feb. 12, 2012) (republishing Egilman’s attack on McDonald’s studies).

In response to Egilman’s scurrilous attacks upon McDonald and his work, McGill University undertook a formal investigation of the allegations.  In a report prepared by the University’s Research Integrity Officer, Abraham Fuks, the Egilman allegations were found to be baseless and unsupported.  Consultation Report to Dean David Eidelman (Sept. 23, 2012). The Report’s Conclusions and Recommendations decisively rebuffed the Egilman complaint:

“Following review of the documentation presented, the data available in the published literature, and materials available at the University, I was unable to find any support for these allegations. The financial support from the industry was acknowledged in publications and there is no evidence to suggest that the sponsors influenced the data analyses or the conclusions. In fact, JCM [J.Corbett McDonald] noted an excess of lung cancers in asbestos workers in the earliest papers and reports and this could not have been a happy outcome for the asbestos companies. JCM’s findings and conclusions have been replicated by other groups and their robustness has endured many critical analyses and legal inquiries. In fact, the recent statement by the combined epidemiology societies notes the gradient of toxicities of different types of asbestos fibers and refers to this as the current consensus, thus corroborating what the McGill group foreshadowed almost forty years ago.

Thus, I find no warrant to initiate further investigations of the allegations that we have received.”

Id. at 13-14.

* * * * *

b. Did the asbestos industry launch its research programs with its own interests in mind?

To frame the question in those terms is to invite the obvious answer. Indeed, the documents made available during the many years of legal discovery make it clear that by supporting JCM and his group, and for that matter, the group at Mt Sinai led by Dr. Selikoff, the asbestos companies hoped to develop information that would vindicate their claims that asbestos, in certain forms and treated in certain ways, could continue to be used with safety. This is not surprising as such. One must acknowledge that other sources of support were not as readily available as they ought to have been and moreover, the researchers were aware of the pitfalls of the relationship they had accepted. It is all the more important to recognize that the research by JCM and other groups throughout the world generated the information that led to the near complete disappearance of the asbestos industry in the developed world and the universal recognition of the toxicity of the product. It is also clear that the industry attempted to misuse the research data to its own purposes in policy debates throughout the world and in setting standards for occupational exposures. However, it was these very same studies that permitted and permeated the litigation and policy statements clarifying the toxicity of the product.

c. Did McGill University collude with the asbestos industry in promoting the use of asbestos and in opposing the recommendations of the UICC?

These are amongst the allegations leveled at the University, albeit with no documentation or plausible evidence. The review of the materials described previously lends no credence to these allegations and claims.

Id. at 14.

Although the report falls into the trap of adopting the accusers’ loose language, such as condemning all companies through its use of the term “the industry,” the report exculpates Professor McDonald of the alleged “collusion,” and the “industry” of the charge that it manipulated his publications.  The advocacy uses or misuses of the Quebec studies by one company or another seem mild in comparison with the distortions of the anti-asbestos zealots and their trial lawyer friends.

“Trust Me” Rules of Evidence

October 18th, 2012

Stating what should be obvious, Judge Posner noted that the “[l]aw lags science, it does not lead it.” Rosen v. Ciba Geigy, 78 F.3d 316, 319 (7th Cir. 1996). Science as a method and a process has long ago moved away from authoritative pronouncements.  Since 1663, the Royal Society has sported the motto:  “Nullius in verba.”  When confronted with a pamphlet entitled “100 Authors against Einstein,” Albert Einstein quipped “if I were wrong, one would have been enough.”  See Remigio Russo, 18 Mathematical Problems in Elasticity 125 (1996) (quoting Einstein). Disputes in science are resolved with data, from high-quality, reproducible experimental or observational studies, not with appeals to the prestige of the speaker.

Almost 20 years ago, the Supreme Court, in Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993), redirected the course of the federal system of evidence, which had exalted expert witness opinion over knowledge.  The Court attempted to put expert witness testimony on the same footing as knowledge, or justified true belief, as required by the plain language of Rule 702.  The Court’s leadership culminated in today’s revised Federal Rule of Evidence 702.

Many rules of evidence, however, remain mired in the “trust me” authoritarian regime of subjective opinion.  Recently, the Committee on Rules of Practice and Procedure has approved draft amendments to three rules with built-in “trustworthiness” elements:

·       Rule 803(6) (Records of a Regularly Conducted Activity),

·       Rule 803(7) (Absence of a Record of a Regularly Conducted Activity), and

·       Rule 803(8) (Public Records).

Public comment on the draft rules closes on February 13, 2013. The amendments are designed to make clear that the party against whom the business or public record is offered must show the untrustworthiness of the record to keep the record out of evidence.  These exceptions to the rule against hearsay are problematic because medical records and governmental reports may be larded with subjective opinions that would never pass Rule 702 scrutiny.

There is something peculiar about this aspect of the federal rules and their insistence that a party, facing the admission of evidence, must show the absence of trustworthiness.  These exceptions to the rule against hearsay, dealing with public and business records, are not alone in employing trustworthiness of the source as a proxy for truth.  For many years, Rule 703 was viewed as an exception to the rule against hearsay, with the predicate for admissibility being the reliance by a party’s expert witness.  The changes wrought by Daubert made this interpretation of Rule 703 untenable, and today the text of the rule guards against this once-popular evidentiary fallacy.  In hindsight, the use of a party’s hired witness to provide the predicate for admissibility seems a fairly primitive move within the Federal Rules of Evidence.

This pending revision to the Federal Rules of Evidence ignores another trustworthiness-based rule, Rule 803(18), which creates limited admissibility for “statements in learned treatises, periodicals, or pamphlets.”  This rule does require the proponent to present expert witness testimony to qualify the source, or to seek judicial notice of “learnedness,” which has been interpreted to be a proxy for trustworthiness and knowledge.  As such, the rule represents a major gap in the requirement that the proponent of scientific testimony show its epistemic warrant.  Statements in treatises or periodicals are often made in conclusory fashion, without a complete explication of their bases. See Schachtman, “Further Unraveling of the Learned Treatise Exception” (Sept. 29, 2010); “The New Wigmore on Learned Treatises” (Sept. 2011); and “Unlearning The Learned Treatise Exception” (Aug. 21, 2010).

Even within the current framework of judicial decisions interpreting Rule 702, courts still struggle when faced with appeals to authority, especially in the field of clinical medicine.  Courts have a difficult time getting past: “Trust me, I am a physician.”  See, e.g., Mueller v. Auker, No. 11-35351, ___ F.3d ___, 2012 WL 3892960 at *8 (9th Cir. Sept. 10, 2012) (noting that “clinical instinct” is a generally accepted method of decision making by physicians).  The evidence-based worldview continues to challenge, confound, and confuse judges.

Manganese Meta-Analysis Further Undermines Reference Manual’s Toxicology Chapter

October 15th, 2012

Last October, when the ink was still wet on the Reference Manual on Scientific Evidence (3d 2011), I dipped into the toxicology chapter only to find the treatment of a number of key issues to be partial and biased.  See “Toxicology for Judges – The New Reference Manual on Scientific Evidence” (Oct. 5, 2011).

The chapter, “Reference Guide on Toxicology,” was written by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the law firm of Buchanan Ingersoll, P.C.  In particular, I noted the authors’ conflicts of interest, both financial and ideological, which may have resulted in an incomplete and tendentious presentation of important concepts in the chapter.  Important concepts in toxicology, such as hormesis, were omitted completely from the chapter.  See, e.g., Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (N.Y. 2009); Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”)(internal citations omitted); Philip Wexler, et al., eds., 2 Encyclopedia of Toxicology 96 (2005) (“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”).

The financial conflicts are perhaps more readily appreciated.  Goldstein has testified in any number of so-called toxic tort cases, including several in which courts had excluded his testimony as being methodologically unreliable.  These cases are not cited in the Manual.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005); Exxon Corp. v. Makofski, 116 S.W.3d 176 (Tex. App.–Houston [14th Dist.] 2003, pet. denied) (benzene and ALL claim).

One of the disappointments of the toxicology chapter was its failure to remain neutral in substantive disputes, unless of course it could document its position against adversarial claims.  Table 1 in the chapter presents, without documentation or citation, a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.” Although many of the agent/disease outcome relationships in the table are well accepted, one was curiously unsupported at the time; namely, the claim that manganese causes Parkinson’s disease (PD).  Reference Manual at 653.  This tendentious claim undermines the Manual’s attempt to remain disinterested in what was then an ongoing litigation effort.  Last year, I noted that Goldstein’s scholarship was questionable at the time of publication because PD is generally accepted to have no known cause.  Claims that manganese can cause PD had been addressed in several reviews. See, e.g., Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ Health Perspect. 1071, 1078 (2010) (“The available evidence from human and non­human primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD.”).

More recently, three neuro-epidemiologists have published a systematic review and meta-analysis of the available analytical epidemiologic studies.  What they found was an inverse association between welding, a trade that involves manganese fume exposure, and Parkinson’s disease. James Mortimer, Amy Borenstein, and Lorene Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012).
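A summary relative risk of the kind Mortimer and colleagues reported is conventionally obtained by inverse-variance weighting of the individual studies’ log relative risks. A minimal fixed-effect sketch in Python; the study estimates below are hypothetical illustrations, not figures from the published meta-analysis:

```python
import math

def pooled_rr(estimates):
    """Fixed-effect inverse-variance pooling of relative risks.

    `estimates` is a list of (rr, lower_ci, upper_ci) tuples with 95%
    confidence intervals; each study is weighted by 1/variance of log(RR).
    """
    num = den = 0.0
    for rr, lo, hi in estimates:
        log_rr = math.log(rr)
        # Standard error of log(RR) recovered from the 95% CI width
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2
        num += w * log_rr
        den += w
    return math.exp(num / den)

# Hypothetical study results (RR, 95% CI), for illustration only;
# estimates below 1.0 would indicate an inverse association.
studies = [(0.9, 0.7, 1.2), (0.8, 0.6, 1.1), (1.0, 0.8, 1.3)]
print(f"Pooled RR = {pooled_rr(studies):.2f}")
```

A pooled estimate below 1.0, as in this toy example, is what an “inverse association” between exposure and disease looks like numerically.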

[The summary figures from the published meta-analysis are not reproduced here.]

The Fourth Edition should aim at a better integration of toxicology into the evolving science of human health effects.

Union of Concerned Scientists on Criticism and Personal Attacks on Scientists

October 12th, 2012

The Union of Concerned Scientists (UCS) has produced a glossy pamphlet with a checklist of how scientists may respond to criticism and personal attacks.  See UCS – Science in an Age of Scrutiny: How Scientists Can Respond to Criticism and Personal Attacks (2012).

The rationale for this publication is described at the UCS website.  The UCS notes that scientists are under a great deal of scrutiny, and “attack,” especially when their research is at the center of contentious debate over public policy.  According to the UCS, scientists are “sometimes attacked by individuals who do not like the research results. These attacks can take multiple forms—emails, newspaper op-eds, blogs, open-records requests, even subpoenas—but the goals are the same: to discredit the research by discrediting the researcher.”

I am all for protecting scientists and researchers from personal attacks.  The UCS account, however, seems a bit narcissistic.  The UCS is making an ad hominem attack on the putative attackers for what they claim is an ad hominem attack on the researchers.  What if the so-called attackers don’t care a bit about discrediting the researchers, but only the research?

The “even subpoenas” got my attention.  Subpoenas have been propounded for good reason, and with good effect, on litigation-related research. See, e.g., In re Welding Fume Prods. Liab. Litig., MDL 1535, 2005 WL 5417815 (N.D. Ohio Aug. 8, 2005) (upholding defendants’ subpoena for documents and things from Dr. Racette, the author of a study on welding and parkinsonism). The UCS has thus attacked the motives of lawyers charged with protecting their clients from dubious scientific research; I suppose we could return the epithet and declare that the UCS goal is to discredit the process of compelling data sharing by discrediting the motives of the persons seeking data sharing.

Subpoenas served upon independent researchers, whose work bears on the issues in litigation, are a valid part of the litigation discovery process.  Litigants, especially defendants who are involuntarily before a tribunal by compulsory process, are entitled to “every man’s evidence.”

The Union of Concerned Scientists seems either unduly sensitive or cavalier and careless in its generalization about the goals of lawyers who propound subpoenas.  The goal is typically not to discredit the researcher.  The personality, reputation, and position of the researcher are irrelevant; it’s about the data.

The Federal Judicial Center’s Manual for Complex Litigation describes subpoenas for researchers’ underlying data and materials at some length.  See Fed. Jud. Center, Manual for Complex Litigation § 22.87 (4th ed. 2004).  The Manual acknowledges that the federal courts have protected unpublished research from discovery, but that courts permit discovery of underlying data and materials from studies that have been published.  Federal Rule of Civil Procedure 45(c)(3)(B)(ii) allows courts to enforce subpoenas against non-parties, on a showing of “substantial need for the testimony that cannot be otherwise met without undue hardship,” and on assurance that the subpoenaed third parties “will be reasonably compensated.” Manual at 444-45.  The federal courts have recognized litigants’ need to obtain, examine, and re-analyze data underlying research studies used against them by their adversaries.  Although the researchers have interests that should be protected in the discovery process, such as their claims “for protection of confidentiality, intellectual property rights, research privilege, and the integrity of the research,” these claims must be balanced against the necessity of the evidence in the litigation process.  Id.

Of course, when the research is sponsored by litigants, whether by financial assistance or by assisting in recruiting study participants, and is completed, “courts generally require production of all data; for pending studies, courts often require disclosure of the written protocol, the statistical plan, sample data entry forms, and a specific description of the progress of the study until it is completed.” Id.

Some have argued that the scientific enterprise should be immune from the rough and tumble of legal discovery because its essential collaborative nature is threatened by the adversarial interests at play in litigation.  Professor George Olah, in accepting his Nobel Prize in Chemistry, rebutted this sentiment:

“Intensive, critical studies of a controversial topic always help to eliminate the possibility of any errors. One of my favorite quotations is that by George von Bekessy (Nobel Prize in Medicine, 1961).

‘[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, need a few good enemies!’”

George A. Olah, “My Search for Carbocations and Their Role in Chemistry,” Nobel Lecture (Dec. 8, 1994), quoting George von Békésy, Experiments in Hearing 8 (N.Y. 1960).  The UCS should rejoice for its intellectual enemies.  “Out of life’s school of war: What does not destroy me, makes me stronger.”  Friedrich Nietzsche, The Twilight of the Idols Maxim 8 (1899).

Summary Judgment in Gushue – Attempted Differential Diagnosis for Idiopathic Diseases Rebuffed

October 10th, 2012

Parkinson’s disease (PD) in young women is a rare disease.  Exposure to manganese fumes from a pottery kiln is a rare exposure.  Plaintiff Kathleen Gushue, with the help of her expert witnesses, Drs. Paul Nausieda and Elan Louis, argued that the coincidence of both rare exposure and rare outcome must be probative of a causal relationship between the two.  Supreme Court Justice Jeffrey K. Oing, realizing that one in a million happens eight times a day here in New York City, excluded the proffered testimony of Drs. Nausieda and Louis, and granted defendants summary judgment.  Gushue v. Estate of Norman Levy, et al., Supreme Court of New York, New York County, Index No.: 106645/05, Decision & Order (Sept. 28, 2012).

Manganese in very high doses can cause a parkinsonism, but Justice Oing avoided the semantic traps set for him by the plaintiff.  Just because PD requires parkinsonism does not mean that manganese-induced parkinsonism can be equated with PD.  A dog is a carnivorous mammal, with fur, four legs, and a tail.  So is a cat, but a dog is not a cat.  Similarly, PD and the specific features of manganese-induced parkinsonism are different.  See Agency for Toxic Substances and Disease Registry, Draft Toxicological Profile for Manganese 16 (Draft 2008) (“While manganese neurotoxicity has clinical similarities to Parkinson’s disease, it can be clinically distinguished from Parkinson’s.”); id. at 66-67 (“Manganism and Parkinson’s disease also differ pathologically. * * *  It is likely that the terms Parkinson-like-disease and manganese-induced-Parkinsonism will continue to be used by those less knowledgeable about the significant differences between the two.”).

Plaintiff and her expert witnesses also attempted the differential diagnosis ploy, but Justice Oing followed prior New York law that requires a claimant, who is alleging toxic cause, to “reliably rule out reasonable alternative causes of [the alleged harm] or idiopathic causes.” Id., citing Barbaro v Eastman Kodak Co., 26 Misc. 3d 1124(A) (Sup. Ct., Nassau Cty. 2010).

Logically and legally, plaintiff could not rule out idiopathic causes that are responsible for the great majority of PD cases. Parkinson’s disease has no known causes other than a few uncommon genetic variants.  See John Hardy, “No Definitive Evidence for a Role for the Environment in the Etiology of Parkinson’s Disease,” 21 Movement Disorders 1790 (2006).  See also J. Mortimer, A. Borenstein, and L. Nelson, “Associations of welding and manganese exposure with Parkinson disease: Review and meta-analysis,” 79 Neurology 1174 (2012) (reporting a statistically significant decreased risk of Parkinson’s disease among welding tradesmen).

American Taliban and the Attack on Science

October 9th, 2012

Mostly I care about whether governmental policy is based upon facts, but discerning the facts requires intelligence.  In some areas of human endeavor, it involves something we call science.  Generally smart people are better at doing science than stupid people, but there may be the occasional idiot savant.

Political pundits focus on the dualism of America – rich and poor, but this is not the important divide.  The crucial distinction is between the smart and the stupid.

Rick Santorum says that smart people have no place in the Republican party.  Colleges and universities are the adversaries of the stupid.  Stupid people are the base.  See Kristen A. Lee, “Santorum complains to social conservatives about ‘smart people’” (Sept. 17, 2012).  Santorum accused President Obama of being a snob: “he wants everybody in America to go to college.” Santorum later backed away from his “What a snob” remark, acknowledging that his comment was “probably not the smartest thing.”  Of course, Santorum was really complimenting himself, and reaffirming his core values.

Shifting gears, just slightly.

Science flourished in the Islamic world until it didn’t.  Most historians appear to accept that the rise of clerics and superstition killed a rich tradition of science in Islam, about the same time that the Reformation and other social changes in Europe allowed science to emerge from the shadows of the Church. The American Taliban would have us align ourselves with the current Islamic hostility to science.

Who are the American Taliban?

Meet Congressman Paul Broun.  Broun serves on the House Science Committee.  According to Wikipedia, the font of all knowledge, Broun has a bachelor’s degree in chemistry from the University of Georgia, and an M.D. degree from the Medical College of Georgia in Augusta.  Broun calls himself a scientist.

Last month, at a church-sponsored event in Georgia, Broun declared that “all that stuff I was taught about evolution and embryology and the Big Bang theory” are “lies straight from the pit of hell.” And these lies are no casual fibs; according to Broun, the lies are part of a conspiracy “to try to keep me and all the folks who were taught that from understanding that they need a savior.” And Broun really needs a savior.

Broun is also an accomplished geologist:

“You see, there are a lot of scientific data that I’ve found out as a scientist that actually show that this is really a young Earth. I don’t believe that the earth’s but about 9,000 years old. I believe it was created in six days as we know them. That’s what the Bible says.”

Broun made his comments to constituents at the Sportsman’s Banquet at Liberty Baptist Church in Hartwell, Georgia.  In keeping with the Sportsman theme, members of the Bridge Project have been bird-dogging Broun.  Instead of shooting big game, they shot video of Broun’s speech, which they proudly distributed by YouTube, which of course is their right for now.

The House Science Committee apparently has become a safe haven for the American Taliban.  Fellow scientist and Congressman, Todd Akin, also serves on the Committee.  Akin gained fame for his definitive study, which showed that women who experience “legitimate rape” cannot become pregnant because their tubes shut down.

Not all bad science is practiced in the courts.