TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Johnson of Accutane – Keeping the Gate in the Garden State

March 28th, 2015

Nelson Johnson is the author of Boardwalk Empire: The Birth, High Times, and Corruption of Atlantic City (2010), a rattling good yarn, which formed the basis for a thinly fictionalized story of Atlantic City under the control of mob boss (and Republican politician) Enoch “Nucky” Johnson. HBO transformed Johnson’s book into a multi-season series, with Steve Buscemi playing Nucky Johnson (Thompson in the series). Robert Strauss, “Judge Nelson Johnson: Atlantic City’s Godfather — A Q&A with Judge Nelson Johnson,” New Jersey Monthly (Aug. 16, 2010).

Nelson Johnson is also known as the Honorable Nelson Johnson, a trial court judge in Atlantic County, New Jersey, where he inherited some of the mass tort docket of Judge Carol Higbee. Judge Higbee has since ascended to the Appellate Division of the New Jersey Superior Court. One of the litigations Judge Johnson presides over is the mosh pit of isotretinoin (Accutane) cases, involving claims that the acne medication causes inflammatory bowel disease (IBD) and Crohn’s disease (CD). Judge Johnson is not only an accomplished writer of history, but also an astute evaluator of the facts and data, and the accompanying lawyers’ rhetoric, thrown about in pharmaceutical products liability litigation.

Perhaps more than his predecessor ever displayed, Judge Johnson recently demonstrated his aptitude for facts and data in serving as a gatekeeper of scientific evidence, as required by the New Jersey Supreme Court in Kemp v. The State of New Jersey, 174 N.J. 412 (2002). Faced with a complex evidentiary display on the validity and reliability of the scientific evidence, Judge Johnson entertained extensive briefings, testimony, and oral argument. When the dust settled, the court ruled that the proffered testimony of Dr. Arthur Kornbluth and Dr. David Madigan did not meet the liberal New Jersey test for admissibility. In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015). And in settling the dust, Judge Johnson dispatched several bogus and misleading “lines of evidence,” which have become standard ploys to clog New Jersey and other courthouses.

Case Reports

As so often is the case when there is no serious scientific evidence of harm in pharmaceutical cases, plaintiffs in the Accutane litigation relied heavily upon case and adverse event reports. Id. at *11. Judge Johnson was duly unimpressed, and noted that:

“[u]nsystematic clinical observations or case reports and adverse event reports are at the bottom of the evidence hierarchy.”

Id. at *16.

Bootstrapped, Manufactured Evidence

With respect to case reports that are submitted to the FDA’s Adverse Event Reporting System (FAERS), Judge Johnson acknowledged the “serious limitations” of the hearsay anecdotes that make up such reports. Despite the value of AERs in generating signals for future investigation, Judge Johnson, citing FDA’s own description of the reporting system, concluded that the system’s anecdotal data are “not evidentiary in a court of law.” Id. at *14 (quoting FDA’s description of FAERS).

Judge Johnson took notice of another fact: the litigation industry creates evidence that it then uses to claim causal connections in the courtroom. Plaintiffs’ lawyers in pharmaceutical cases routinely file MedWatch adverse event reports, which inflate the very “signal” that they then claim shows harm from medication use. This evidentiary bootstrapping machine was hard at work in the isotretinoin litigation. See Derrick J. Stobaugh, Parakkal Deepak, and Eli D. Ehrenpreis, “Alleged Isotretinoin-Associated Inflammatory Bowel Disease: Disproportionate reporting by attorneys to the Food and Drug Administration Adverse Event Reporting System,” 69 J. Am. Acad. Dermatol. 398 (2013) (“Attorney-initiated reports inflate the pharmacovigilance signal of isotretinoin-associated IBD in the FAERS.”). Judge Johnson gave a wry hat tip to the industriousness of plaintiffs’ counsel in inflating this signal-generating process:

“The legal profession is a bulwark of our society, yet the courts should never underestimate the resourcefulness of some attorneys.”

In re Accutane, 2015 WL 753674, at *15.

Bias and Confounding

The epidemiologic studies referenced by the parties had identified a fairly wide range of “risk factors” for inflammatory bowel disease, including many factors prevalent in Westernized countries, such as prior appendectomy, breast-feeding as an infant, stress, Vitamin D deficiency, tobacco or alcohol use, refined sugars, dietary animal fat, and fast food. In re Accutane, 2015 WL 753674, at *9. The court also noted that there were four medications known to be risk factors for IBD: aspirin, nonsteroidal anti-inflammatory medications (NSAIDs), oral contraceptives, and antibiotics.

In reviewing the plaintiffs’ expert witnesses’ methodology, Judge Johnson found that they had been inordinately and inappropriately selective in the studies chosen for reliance. The challenged witnesses had discounted and discarded most of the available studies in favor of two studies that were small, biased, and not population based. Indeed, one of the studies evidenced substantial selection bias by using referrals to obtain study participants, a process deprecated by the trial court as “cherry picking the subjects.” Id. at *18. “The scientific literature does not support reliance upon such insignificant studies to arrive at conclusions.” Id.

Animal Studies

Both sides in the isotretinoin cases seemed to concede the relative unimportance of animal studies. The trial court discussed the limitations on animal studies, especially the absence of a compelling animal model of human inflammatory bowel disease. Id. at *18.

Cherry Picking and Other Crafty Stratagems

With respect to the complete scientific evidentiary display, plaintiffs asserted that their expert witnesses had considered everything, but then failed to account for most of the evidence. Judge Johnson found this approach deceptive and further evidence of a cherry-picking, pathological methodology:

“Finally, coursing through Plaintiffs’ presentation is a refrain that is a ruse. Repeatedly, counsel for the Plaintiffs and their witnesses spoke of ‘lines of evidence’, emphasizing that their experts examined ‘the same lines of evidence’ as did the experts for the Defense. Counsels’ sophistry is belied by the fact that the examination of the ‘lines of evidence’ by Plaintiffs’ experts was highly selective, looking no further than they wanted to—cherry picking the evidence—in order to find support for their conclusion-driven testimony in support of a hypothesis made of disparate pieces, all at the bottom of the medical evidence hierarchy.”

Id. at *21.

New Jersey Rule of Evidence 703

The New Jersey rules of evidence, like the Federal Rules, impose a reasonableness limit on what sorts of otherwise inadmissible evidence an expert witness may rely upon. See “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 9, 2011). Although Judge Johnson did not invoke Rule 703 specifically, he was clearly troubled by plaintiffs’ expert witnesses’ reliance upon an unadjusted odds ratio from an abstract, which did not address substantial confounding from a known causal risk factor – antibiotic use. Judge Johnson concluded that the reliance upon the higher, unadjusted risk figure, contrary to the authors’ own methods and conclusions, and without a cogent explanation for doing so, was “pure advocacy” on the part of the witnesses. In re Accutane, 2015 WL 753674, at *17; see also id. at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404, 417 (1992), for the proposition that “when an expert relies on such data as epidemiological studies, the trial court should review the studies, as well as other information proffered by the parties, to determine if they are of a kind on which such experts ordinarily rely.”).
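For readers who want to see how confounding by a co-exposure such as antibiotic use can inflate a crude odds ratio, here is a minimal Python sketch. The counts are entirely hypothetical, invented for illustration; they are not drawn from any Accutane study. Within each stratum of the confounder the drug-disease odds ratio is 1.0, yet the crude (collapsed) odds ratio comes out well above 1.0, while a Mantel-Haenszel stratified estimate recovers the null.

    # Hypothetical case-control counts, stratified by antibiotic use (the confounder).
    strata = [
        # (exposed cases, exposed controls, unexposed cases, unexposed controls)
        (30, 60, 20, 40),     # antibiotic users
        (5, 100, 25, 500),    # antibiotic non-users
    ]

    # Crude (unadjusted) odds ratio: collapse the strata and ignore the confounder.
    A = sum(s[0] for s in strata)
    B = sum(s[1] for s in strata)
    C = sum(s[2] for s in strata)
    D = sum(s[3] for s in strata)
    crude_or = (A * D) / (B * C)

    # Mantel-Haenszel odds ratio: pools the stratum-specific estimates,
    # thereby adjusting for the confounder.
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    mh_or = num / den

    print(f"crude OR = {crude_or:.2f}")        # about 2.6
    print(f"adjusted (M-H) OR = {mh_or:.2f}")  # 1.00

An expert witness who reads off the crude figure without addressing the stratified, adjusted analysis is, in effect, reporting the confounder’s association, not the drug’s.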

Discordance Between Courtroom and Professional Opinions

One of plaintiffs’ expert witnesses, Dr. Arthur Kornbluth, had actually studied the putative association between isotretinoin and CD before he became intensively involved in litigation as an expert witness. In re Accutane, 2015 WL 753674, at *7. Having an expert witness who is a real-world expert can be a plus, but not when that expert witness maintains a double standard for assessing causal connections. Back in 2009, Kornbluth published an article, “Ulcerative Colitis Practice Guidelines in Adults,” in The American Journal of Gastroenterology. Id. at *10. This positive achievement became a large demerit when cross-examination at the Kemp hearing revealed that Kornbluth had considered but rejected the urgings of a colleague, Dr. David Sachar, to comment on isotretinoin as a cause of inflammatory bowel disease. In front of Judge Johnson, Dr. Kornbluth felt no such scruples. Id. at *11. Dr. Kornbluth’s stature in the field of gastroenterology, along with his silence on the issue in his own field, created a striking contrast with his stridency about causation in the courtroom. The contrast raised the trial court’s level of scrutiny and skepticism about his causal opinions in the New Jersey litigation. Id. (citing and quoting Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434, 528 (W.D. Pa. 2003) (“Expert opinions generated as the result of litigation have less credibility than opinions generated as the result of academic research or other forms of ‘pure’ research.”) (“The expert’s motivation for his/her study and research is important. … We may not ignore the fact that a scientist’s normal work place is the lab or field, not the courtroom or the lawyer’s office.”)).

Meta-Analysis

Meta-analysis has become an important facet of pharmaceutical and other products liability litigation[1]. Fortunately for Judge Johnson, he had before him an extremely capable expert witness, Dr. Stephen Goodman, to explain meta-analysis generally, and two meta-analyses performed on isotretinoin and inflammatory bowel disease outcomes. In re Accutane, 2015 WL 753674, at *8. Dr. Goodman explained that:

“the strength of the meta-analysis is that no one feature, no one study, is determinant. You don’t throw out evidence except when you absolutely have to.”

Id. Dr. Goodman further explained that plaintiffs’ expert witnesses’ failure to perform a meta-analysis was telling, given that meta-analysis “can get us closer to the truth.” Id.
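Dr. Goodman’s point, that in a sound meta-analysis every study contributes according to its precision and no single study is determinative, can be illustrated with a minimal fixed-effect (inverse-variance) pooling sketch in Python. The study inputs below are made up for illustration; they are not the isotretinoin data.

    import math

    # Each hypothetical study contributes a log odds ratio and its standard error.
    studies = [
        (math.log(1.20), 0.30),
        (math.log(0.85), 0.20),
        (math.log(1.05), 0.15),
        (math.log(0.95), 0.25),
    ]

    # Inverse-variance weights: more precise studies count for more,
    # but no one study determines the result.
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled_log_or = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))

    or_hat = math.exp(pooled_log_or)
    ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
    ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
    print(f"pooled OR = {or_hat:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

Real meta-analyses add study-selection protocols, heterogeneity assessment, and often random-effects models, but the basic logic of precision-weighted pooling is as simple as the sketch above.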

Some Nitpicking

Specific Causation

After such a commanding judicial performance by Judge Johnson, nitpicking on specific causation might strike some as ungrateful. For some reason, however, Judge Johnson cited several cases on the appropriateness of expert witnesses’ reliance upon epidemiologic studies for assessing specific causation or for causal apportionment between two or more causes. In re Accutane, 2015 WL 753674, at *5 (citing Landrigan v. Celotex Corp., 127 N.J. 404 (1992), Caterinicchio v. Pittsburgh Corning, 127 N.J. 428 (1992), and Dafler v. Raymark Inc., 259 N.J. Super. 17, 36 (App. Div. 1992), aff’d. o.b. 132 N.J. 96 (1993)). Fair enough, but specific causation was not at issue in the Accutane Kemp hearing, and the Landrigan and Caterinicchio cases are irrelevant to general causation.

In both Landrigan and Caterinicchio, the defendants moved for directed verdicts by arguing that, assuming arguendo that asbestos causes colon cancer, the plaintiffs’ expert witnesses had not presented opinions sufficient to support the claim that Landrigan’s and Caterinicchio’s colon cancers were caused by asbestos. See “Landrigan v. The Celotex Corporation, Revisited” (June 4, 2013). General causation was thus never at issue, and the holdings never addressed the admissibility of the expert witnesses’ causation opinions. Only the sufficiency of the opinions that equated increased risks below 2.0 with specific causation was at issue in the directed verdicts, and in the appeals taken from the judgments entered on those verdicts.

Judge Johnson, in discussing previous case law, suggests that the New Jersey Supreme Court reversed and remanded the Landrigan case for trial, holding that “epidemiologists could help juries determine causation in toxic tort cases and rejected the proposition that epidemiological studies must show a relative risk factor of 2.0 before gaining acceptance by a court.” In re Accutane, 2015 WL 753674, at *5, citing Landrigan, 127 N.J. at 419. A close and fair reading of Landrigan, however, shows that it was about a directed verdict, 127 N.J. at 412, and not about a challenge to the use of epidemiologic studies generally, or to their use to show general causation.

Necessity of Precise Biological Mechanism

In the Accutane hearings, the plaintiffs’ counsel and their expert witnesses failed to provide a precise biological mechanism of the cause of IBD. Judge Johnson implied that any study asserting that Accutane caused IBD “would, of necessity, require an explication of a precise biological mechanism of the cause of IBD and no one has yet to venture more than alternate and speculative hypotheses on that question.” In re Accutane, 2015 WL 753674, at *8. Conclusions of causality, however, do not always come accompanied by understood biological mechanisms, and Judge Johnson demonstrated that the methods and evidence relied upon by plaintiffs’ expert witnesses could not, in any event, allow them to draw causal conclusions.

Interpreting Results Contrary to Publication Authors’ Interpretations

There is good authority, no less than the United States Supreme Court in Joiner, for the proposition that there is something suspect in expert witnesses’ interpreting a published study’s results contrary to the interpretation of the study’s own authors. Judge Johnson found that the plaintiffs’ expert witnesses in the Accutane litigation had inferred that two studies showed increased risk when the authors of those studies had concluded that their studies did not appear to show an increased risk. Id. at *17. There will be times, however, when a published study has incorrectly interpreted its own data, and when “real” expert witnesses can, and should, interpret the data appropriately. Accutane was not such a case. In In re Accutane, Judge Johnson carefully documented and explained how the plaintiffs’ expert witnesses’ supposed reinterpretation was little more than attempted obfuscation. His Honor concluded that the witnesses’ distortion of, and “reliance upon these two studies is fatal and reveals the lengths to which legal counsel and their experts are willing to contort the facts and torture the logic associated with Plaintiffs’ hypothesis.” Id. at *18.


[1] “The Treatment of Meta-Analysis in the Third Edition of the Reference Manual on Scientific Evidence” (Nov. 14, 2011) (The Reference Manual fails to come to grips with the prevalence and importance of meta-analysis in litigation, and fails to provide meaningful guidance to trial judges).

The Joiner Finale

March 23rd, 2015

“This is the end
Beautiful friend
This is the end
My only friend, the end”

Jim Morrison, “The End” (c. 1966)

 *          *          *          *           *          *          *          *          *          *  

The General Electric Co. v. Joiner, 522 U.S. 136 (1997), case was based only in part upon polychlorinated biphenyl (PCB) exposures. The PCB part did not hold up well legally in the Supreme Court; nor was the PCB lung cancer claim vindicated by later scientific evidence. See “How Have Important Rule 702 Holdings Held Up With Time?” (Mar. 20, 2015).

The Supreme Court in Joiner reversed and remanded the case to the 11th Circuit, which then remanded the case back to the district court to address claims that Mr. Joiner had been exposed to furans and dioxins, and that these other chemicals had caused, or contributed to, his lung cancer, as well. Joiner v. General Electric Co., 134 F.3d 1457 (11th Cir. 1998) (per curiam). Thus the dioxins were left in the case even after the Supreme Court ruled.

After the Supreme Court’s decision, Anthony Roisman argued that the Court had addressed an artificial question when asked about PCBs alone because the case was really about an alleged mixture of exposures, and he held out hope that the Joiners would do better on remand. Anthony Z. Roisman, “The Implications of G.E. v. Joiner for Admissibility of Expert Testimony,” 1 Res Communes 65 (1999).

Many Daubert observers (including me) were unaware of the legal fate of the Joiners’ claims on remand. In the only reference I could find, the commentator simply noted that the case resolved before trial.[1] I am indebted to Michael Risinger and Joseph Cecil for pointing me to documents from PACER, which shed some light upon the Joiner “endgame.”

In February 1998, Judge Orinda Evans, who had been the original trial judge, and who had sustained defendants’ Rule 702 challenges and granted their motions for summary judgment, received and reopened the case upon remand from the 11th Circuit. In March, Judge Evans directed the parties to submit a new pre-trial order by April 17, 1998. At a status conference in April 1998, Judge Evans permitted the plaintiffs additional discovery, to be completed by June 17, 1998. Five days before the expiration of their additional discovery period, the plaintiffs moved for additional time; defendants opposed the request. In July, Judge Evans granted the requested extension, and gave defendants until November 1, 1998, to file for summary judgment.

Meanwhile, in June 1998, new counsel entered their appearances for plaintiffs – William Sims Stone, Kevin R. Dean, Thomas Craig Earnest, and Stanley L. Merritt. The docket does not reflect much of anything about the new discovery other than a request for a protective order for an unpublished study. But by October 6, 1998, the new counsel, Earnest, Dean, and Stone (but not Merritt), had withdrawn as attorneys for the Joiners, and by the end of October 1998, Judge Evans entered an order dismissing the case without prejudice.

A few months later, in February 1999, the parties filed a stipulation, approved by the Clerk, dismissing the action with prejudice, and with each party to bear its own costs. Given the flight of plaintiffs’ counsel, and the dismissals without and then with prejudice, a settlement seems never to have been involved in the resolution of the Joiner case. In the end, the Joiners’ case fizzled, perhaps to avoid being Frye’d.

And what has happened since to the science of dioxins and lung cancer?

Not much.

In 2006, the National Research Council published a monograph on dioxin, which took the controversial approach of focusing on all cancer mortality rather than on the specific cancers that had been suggested as likely outcomes of interest. See David L. Eaton (Chairperson), Health Risks from Dioxin and Related Compounds – Evaluation of the EPA Reassessment (2006). The validity of this approach, and the committee’s conclusions, were challenged vigorously in subsequent publications. Paolo Boffetta, Kenneth A. Mundt, Hans-Olov Adami, Philip Cole, and Jack S. Mandel, “TCDD and cancer: A critical review of epidemiologic studies,” 41 Critical Rev. Toxicol. 622 (2011) (“In conclusion, recent epidemiological evidence falls far short of conclusively demonstrating a causal link between TCDD exposure and cancer risk in humans.”).

In 2013, the Industrial Injuries Advisory Council (IIAC), an independent scientific advisory body in the United Kingdom, published a review of lung cancer and dioxin. The Council found the epidemiologic studies mixed, and declined to endorse the compensability of lung cancer for dioxin-exposed industrial workers. Industrial Injuries Advisory Council – Information Note on Lung cancer and Dioxin (December 2013). See also Mann v. CSX Transp., Inc., 2009 WL 3766056, 2009 U.S. Dist. LEXIS 106433 (N.D. Ohio 2009) (Polster, J.) (dioxin exposure case) (“Plaintiffs’ medical expert, Dr. James Kornberg, has opined that numerous organizations have classified dioxins as a known human carcinogen. However, it is not appropriate for one set of experts to bring the conclusions of another set of experts into the courtroom and then testify merely that they ‘agree’ with that conclusion.”), citing Thorndike v. DaimlerChrysler Corp., 266 F. Supp. 2d 172 (D. Me. 2003) (court excluded expert who was “parroting” other experts’ conclusions).


[1] Morris S. Zedeck, Expert Witness in the Legal System: A Scientist’s Search for Justice 49 (2010) (noting that, after remand from the Supreme Court, Joiner v. General Electric resolved before trial).

How Have Important Rule 702 Holdings Held Up With Time?

March 20th, 2015

The Daubert case arose from claims of teratogenicity of Bendectin. The history of the evolving scientific record has not been kind to those claims. See “Bendectin, Diclegis & The Philosophy of Science” (Oct. 26, 2013); Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013). Twenty years later, the decisions in the Daubert appeals look sound, even if the reasoning was at times shaky. How have other notable Rule 702 exclusions stood up to evolving scientific records?

A recent publication of an epidemiologic study on lung cancer among workers exposed to polychlorinated biphenyls (PCBs) raised an interesting question about a gap in so-called Daubert scholarship. Clearly, there are some cases, like General Electric v. Joiner[1], in which plaintiffs lack sufficient, valid evidence to make out their causal claims. But are there cases of Type II injustices, for which, in the fullness of time, the insufficiency or invalidity of the available evidentiary display is “cured” by subsequently published studies?

In Joiner, Chief Justice Rehnquist noted that the district court had carefully analyzed the four epidemiologic studies claimed by plaintiff to support the association between PCB exposure and lung cancer. The first such study[2] involved workers at an Italian capacitor plant who had been exposed to PCBs.

The Chief Justice reported that the authors of the Italian capacitor study had noted that lung cancer deaths among former employees were more numerous than expected (without reporting whether there was any assessment of random error), but that they concluded that “there were apparently no grounds for associating lung cancer deaths (although increased above expectations) and exposure in the plant.”[3] The court frowned at the hired expert witnesses’ willingness to draw a causal inference when the authors of the Bertazzi study would not. As others have noted, this disapproval was beside the point of the Rule 702 inquiry. It might well be the case that Bertazzi and his co-authors could not or did not conduct a causal analysis, but that does not mean that the study’s evidence could not be part of a larger effort to synthesize the available evidence. In any event, the Bertazzi study was small and uninformative. Although all cancer mortality was increased (14 observed vs. 5.5 expected, based upon national rates; SMR = 253; 95% CI 144-415), the study was too small to be meaningful for lung cancer outcomes.
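As a rough illustration of the arithmetic behind the figures reported above, the short Python sketch below computes a standardized mortality ratio and an exact Poisson confidence interval from observed and expected counts. The interval it produces will not match the published 144-415 exactly, because the paper’s precise expected count and interval method are not reproduced here; the point is only to show why 14 deaths against roughly 5.5 expected yields an SMR in the neighborhood of 250 with a very wide interval.

    from scipy.stats import chi2

    observed = 14    # all-cancer deaths observed in the cohort
    expected = 5.5   # deaths expected from national rates

    smr = 100.0 * observed / expected

    # Exact Poisson limits on the observed count, via the chi-square relationship.
    lower_count = chi2.ppf(0.025, 2 * observed) / 2.0
    upper_count = chi2.ppf(0.975, 2 * (observed + 1)) / 2.0

    print(f"SMR = {smr:.0f}")  # about 255
    print(f"95% CI roughly {100 * lower_count / expected:.0f}"
          f" to {100 * upper_count / expected:.0f}")  # roughly 139 to 427

With only 14 deaths, the interval spans roughly a threefold range, which is the sense in which the study was too small to be informative about any single cancer site, such as the lung.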

The second cited study[4], from an unpublished report, followed workers at a Monsanto PCB production facility. The authors of the Monsanto study reported that the lung cancer mortality rate among exposed workers was “somewhat” higher than expected, but that the “increase, however, was not statistically significant and the authors of the study did not suggest a link between the increase in lung cancer deaths and the exposure to PCBs.” Again, the Court’s emphasis on what the authors stated is unfortunate. What is important remains obscured, because the Court never reproduced the data from this unpublished study.

The third study[5] cited by plaintiff’s hired expert witnesses was of “no help,” in that the study followed workers exposed to mineral oil, without any known exposure to PCBs. Although the workers exposed to this particular mineral oil had a statistically significantly elevated lung cancer mortality, the study made no reference to PCBs.

The fourth study[6] cited by plaintiffs’ expert witnesses followed a Japanese PCB-exposed group, which had a “statistically significant increase in lung cancer deaths.” The Court, however, was properly concerned that the cohort was exposed to numerous other potential carcinogens, including toxic rice oil by ingestion.

The paucity of this evidence led the Court to observe:

“Trained experts commonly extrapolate from existing data. But nothing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence which is connected to existing data only by the ipse dixit of the expert. A court may conclude that there is simply too great an analytical gap between the data and the opinion proffered. … That is what the District Court did here, and we hold that it did not abuse its discretion in so doing.”

Joiner, 522 U.S. at 146 (1997).

Interestingly omitted from the Supreme Court’s discussion was why the plaintiffs’ expert witnesses failed to rely upon all the available epidemiology. The excluded witnesses relied upon an unpublished Monsanto study, but apparently ignored an unpublished investigation by NIOSH researchers, who found that there were “no excess deaths from cancers of the … the lung,” among PCB-exposed workers at a Westinghouse Electric manufacturing facility[7]. Actually, NIOSH reported a statistically non-significant decrease in the lung cancer rate, with a fairly narrow confidence interval.

Two Swedish studies[8] were perhaps too small to add much to the mix of evidence, but lung cancer rates were not apparently increased in a North American study[9].

Joiner thus represents not only an analytical gap case, but a cherry-picking case as well. The Supreme Court was eminently correct to affirm the exclusion of the shoddy evidence proffered in the Joiner case.

But has the District Judge’s exclusion of Joiner’s expert witnesses (Dr. Arnold Schecter and Dr. (Rabbi) Daniel Teitelbaum) stood up to the evolving scientific record?

A couple of weeks ago, researchers published a large, updated cohort study, funded by General Electric, on the mortality experience of workers in a plant that manufactured capacitors with PCBs[10]. Although the Lobby and the Occupational Medicine Zealots will whine about the funding source, the study is a much stronger study than anything relied upon by Mr. Joiner’s expert witnesses, and its results are consistent with the NIOSH study available to, but ignored by, Joiner’s expert witnesses. And the results are not uniformly good for General Electric, but on the end point of lung cancer for men, the standardized mortality ratio was 81 (95% C.I., 68 – 96), nominally statistically significantly below the expected SMR of 100.


[1] General Electric v. Joiner, 522 U.S. 136 (1997).

[2] Bertazzi, Riboldi, Pesatori, Radice, & Zocchetti, “Cancer Mortality of Capacitor Manufacturing Workers,” 11 Am. J. Indus. Med. 165 (1987).

[3] Id. at 172.

[4] J. Zack & D. Munsch, Mortality of PCB Workers at the Monsanto Plant in Sauget, Illinois (Dec. 14, 1979) (unpublished report), 3 Rec., Doc. No. 11.

[5] Ronneberg, Andersen, Skyberg, “Mortality and Incidence of Cancer Among Oil-Exposed Workers in a Norwegian Cable Manufacturing Company,” 45 Br. J. Indus. Med. 595 (1988).

[6] Kuratsune, Nakamura, Ikeda, & Hirohata, “Analysis of Deaths Seen Among Patients with Yusho – A Preliminary Report,” 16 Chemosphere 2085 (1987).

[7] Thomas Sinks, Alexander B. Smith, Robert Rinsky, M. Kathy Watkins, and Ruth Shults, Health Hazard Evaluation Report, HETA 89-116-209 (Jan. 1991) (reporting lung cancer SMR = 0.7 (95%CI, 0.4 – 1.2). This unpublished study was published by the time the Joiner case was litigated. Thomas Sinks, G. Steele, Alexander B. Smith, and Ruth Shults, “Mortality among workers exposed to polychlorinated biphenyls,” 136 Am. J. Epidemiol. 389 (1992). A follow-up on this unpublished study confirmed the paucity of lung cancer in the cohort. See Avima M. Ruder, Misty J. Hein, Nancy Nilsen, Martha A. Waters, Patricia Laber, Karen Davis-King, Mary M. Prince, and Elizabeth Whelan, “Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update,” 114 Environmental Health Perspect. 18 (2006).

[8] P. Gustavsson, C. Hogstedt, and C. Rappe, “Short-term mortality and cancer incidence in capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 10 Am. J. Indus. Med. 341 (1986); P. Gustavsson & C. Hogstedt, “A cohort study of Swedish capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 32 Am. J. Indus. Med. 234 (1997) (cancer incidence for entire cohort, SIR = 86; 95% CI 51-137).

[9] David P. Brown, “Mortality of workers exposed to polychlorinated biphenyls–an update,” 42 Arch. Envt’l Health 333 (1987).

[10] See Renate D. Kimbrough, Constantine A. Krouskas, Wenjing Xu, and Peter G. Shields, “Mortality among capacitor workers exposed to polychlorinated biphenyls (PCBs), a long-term update,” 88 Internat’l Arch. Occup. & Envt’l Health 85 (2015).

The Mythology of Linear No-Threshold Cancer Causation

March 13th, 2015

“For the great enemy of the truth is very often not the lie—deliberate, contrived, and dishonest—but the myth—persistent, persuasive, and unrealistic. Too often we hold fast to the clichés of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought.”

John F. Kennedy, Yale University Commencement (June 11, 1962)

         *        *        *        *        *        *        *        *        *

The linear no-threshold model for risk assessment has its origins in a dubious attempt of scientists playing at policy making[1]. The model has survived as a political strategy to inject the precautionary principle into regulatory decision making, but it has turned into a malignant myth in litigation over low-dose exposures to putative carcinogens. Ignorance or uncertainty about low-dose exposures is turned into an affirmative opinion that the low-dose exposures are actually causative. Call it contrived, or dishonest, or call it a myth, the LNT model is an intellectual cliché.
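For readers unfamiliar with what the linear no-threshold model actually asserts, one conventional way to write the contrast between LNT and a threshold dose-response model is set out below. The notation is illustrative only, not drawn from any particular regulatory document:

\[ \text{LNT:} \qquad R(d) = R_0 + \beta\, d, \qquad d \ge 0 \]

\[ \text{Threshold:} \qquad R(d) = R_0 + \beta\, \max(0,\; d - d_0) \]

Here \(R(d)\) is the risk at dose \(d\), \(R_0\) is the background risk, \(\beta\) is a slope typically fitted to high-dose data, and \(d_0\) is a threshold dose below which no excess risk accrues. Under LNT, every dose, however small, is assigned some excess risk; under a threshold model, doses below \(d_0\) are assigned none. The litigation move criticized in this post is to treat the first equation as established biology rather than as a default regulatory assumption.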

The LNT cliché pervades American media as well as courtrooms. Earlier this week, the New York Times provided a lovely example of the myth taking center stage, without explanation or justification. Lumber Liquidators is under regulatory and litigation attack for having sold Chinese laminate wood flooring made with formaldehyde-containing materials. According to a “60 Minutes” investigation, the flooring off-gases formaldehyde at concentrations in excess of regulatory permissible levels. See Aaron M. Kessler & Rachel Abrams, “Homeowners Try to Assess Risks From Chemical in Floors,” New York Times (Mar. 10, 2015).

The Times reporters, in discussing whether a risk exists to people who live in houses and apartments with the Lumber Liquidators flooring, sought out and quoted the opinion of Marilyn Howarth:

“Any exposure to a carcinogen can increase your risk of cancer,” said Marilyn Howarth, a toxicologist at the University of Pennsylvania’s Perelman School of Medicine.

Id. Dr. Howarth, however, is not a toxicologist; she is an occupational and environmental physician, and serves as the Director of Occupational and Environmental Consultation Services at the Hospital of the University of Pennsylvania. She is also an adjunct associate professor of emergency medicine, and the Director of the Community Outreach and Engagement Core, Center of Excellence in Environmental Toxicology, at the University of Pennsylvania Perelman School of Medicine. Without detracting from Dr. Howarth’s fine credentials, the New York Times reporters might have noticed that Dr. Howarth’s publications are primarily on latex allergies, and not on the issue of the effect of low-dose exposure to carcinogens.

The point is not to diminish Dr. Howarth’s accomplishments, but to criticize the Times reporters for seeking out an opinion of a physician whose expertise is not well matched to the question they raise about risks, and then publishing that opinion even though it is demonstrably wrong. Clearly, there are some carcinogens, and perhaps all, that do not increase risk at “any exposure.” Consider ethanol, which is known to cause cancer of the larynx, liver, female breast, and perhaps other organs[2]. Despite known causation, no one would assert that “any exposure” to alcohol-containing food and drink increases the risk of these cancers. And the same could be said for most, if not all, carcinogens. The human body has defense mechanisms against carcinogens, including DNA repair mechanisms and programmed cell suicide, which work to prevent carcinogenesis from low-dose exposures.

The no-threshold hypothesis is, at best, a hypothesis, and there is affirmative evidence that it should be rejected for some cancers[3]. Treating LNT as established fact is itself a myth; LNT is an opinion, and a poorly supported opinion at that.

         *        *        *        *        *        *        *        *        *

“There are, in fact, two things: science and opinion. The former brings knowledge, the latter ignorance.”

Hippocrates of Cos


[1] See Edward J. Calabrese, “Cancer risk assessment foundation unraveling: New historical evidence reveals that the US National Academy of Sciences (US NAS), Biological Effects of Atomic Radiation (BEAR) Committee Genetics Panel falsified the research record to promote acceptance of the LNT,” 89 Arch. Toxicol. 649 (2015); Edward J. Calabrese & Michael K. O’Connor, “Estimating Risk of Low Radiation Doses – A Critical Review of the BEIR VII Report and its Use of the Linear No-Threshold (LNT) Hypothesis,” 182 Radiation Research 463 (2014); Edward J. Calabrese, “Origin of the linearity no threshold (LNT) dose–response concept,” 87 Arch. Toxicol. 1621 (2013); Edward J. Calabrese, “The road to linearity at low doses became the basis for carcinogen risk assessment,” 83 Arch. Toxicol. 203 (2009).

[2] See, e.g., IARC Monographs on the Evaluation of Carcinogenic Risks to Humans – Alcohol Consumption and Ethyl Carbamate; volume 96 (2010).

[3] See, e.g., Jerry M. Cuttler, “Commentary on Fukushima and Beneficial Effects of Low Radiation,” 11 Dose-Response 432 (2013); Jerry M. Cuttler, “Remedy for Radiation Fear – Discard the Politicized Science,” 12 Dose Response 170 (2014).

Don’t Double Dip Data

March 9th, 2015

Meta-analyses have become commonplace in epidemiology and in other sciences. When well conducted and transparently reported, meta-analyses can be extremely helpful. In several litigations, meta-analyses determined the outcome of the medical causation issues. In the silicone gel breast implant litigation, after defense expert witnesses proffered meta-analyses[1], court-appointed expert witnesses adopted the approach and featured meta-analyses in their reports to the MDL court[2].

In the welding fume litigation, plaintiffs’ expert witness offered a crude, non-quantified, “vote counting” exercise to argue that welding causes Parkinson’s disease[3]. In rebuttal, one of the defense expert witnesses offered a quantitative meta-analysis, which provided strong evidence against plaintiffs’ claim.[4] Although the welding fume MDL court excluded the defense expert’s meta-analysis from the pre-trial Rule 702 hearing as untimely, plaintiffs’ counsel soon thereafter initiated settlement discussions of the entire set of MDL cases. Subsequently, the defense expert witness, with his professional colleagues, published an expanded version of the meta-analysis.[5]

And last month, a meta-analysis proffered by a defense expert witness helped dispatch a long-festering litigation in New Jersey’s multi-county isotretinoin (Accutane) litigation. In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan).

Of course, when a meta-analysis is done improperly, the resulting analysis may be worse than none at all. Some methodological flaws involve arcane statistical concepts and procedures, and may be easily missed. Other flaws are flagrant and call for a gatekeeping bucket brigade.

When a merchant puts his hand on the scale at the check-out counter, we call that fraud. When George Costanza dipped his chip twice in the chip dip, he was properly called out for his boorish and unsanitary practice. When a statistician or epidemiologist produces a meta-analysis that double counts crucial data to inflate a summary estimate of association, or to create spurious precision in the estimate, we don’t need to crack open Modern Epidemiology or the Reference Manual on Scientific Evidence to know that something fishy has taken place.

In litigation involving claims that selective serotonin reuptake inhibitors cause birth defects, plaintiffs’ expert witness, a perinatal epidemiologist, relied upon two published meta-analyses[6]. In an examination before trial, this epidemiologist was confronted with the double counting (and other data entry errors) in the relied-upon meta-analyses, and she readily agreed that the meta-analyses were improperly done and that she had to abandon her reliance upon them.[7] The result of the expert witness’s deposition epiphany, however, was that she no longer had the illusory benefit of an aggregation of data, with an outcome supporting her opinion. The further consequence was that her opinion succumbed to a Rule 702 challenge. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Double counting of studies, or of subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advance training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies. Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009). Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Id. Senn urges analysts to “[b]e vigilant about double counting,” id. at *4, and recommends that journals “withdraw meta-analyses promptly when mistakes are found,” id. at *1.
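The spurious precision that Senn warns about is easy to demonstrate. In the hypothetical Python sketch below (fixed-effect, inverse-variance pooling of made-up log odds ratios), entering one study twice pulls the summary estimate toward the duplicated result and narrows the confidence interval, manufacturing certainty out of nothing new.

    import math

    def pool(studies):
        """Fixed-effect inverse-variance pooling of (log OR, SE) pairs."""
        weights = [1.0 / se ** 2 for _, se in studies]
        est = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
        se = 1.0 / math.sqrt(sum(weights))
        return math.exp(est), math.exp(est - 1.96 * se), math.exp(est + 1.96 * se)

    studies = [(math.log(1.4), 0.25), (math.log(1.1), 0.30), (math.log(0.9), 0.35)]

    or1, lo1, hi1 = pool(studies)
    print(f"correct pooling:      OR {or1:.2f} (95% CI {lo1:.2f}-{hi1:.2f})")

    # The first study entered twice, as if it were an independent result.
    or2, lo2, hi2 = pool([studies[0]] + studies)
    print(f"with double counting: OR {or2:.2f} (95% CI {lo2:.2f}-{hi2:.2f})")

No new subjects were studied, yet the double-counted analysis looks more precise; that is the statistical equivalent of a thumb on the scale.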

Similar advice abounds in books and journals[8]. Professor Sander Greenland addresses the issue in his chapter on meta-analysis in Modern Epidemiology:

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap. * * * * *”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added).

Just remember George Costanza; don’t double dip that chip, and don’t double dip in the data.


[1] See, e.g., Otto Wong, “A Critical Assessment of the Relationship between Silicone Breast Implants and Connective Tissue Diseases,” 23 Regulatory Toxicol. & Pharmacol. 74 (1996).

[2] See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction:  A Report by a National Science Panel to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998)”; Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000).

[3] Deposition of Dr. Juan Sanchez-Ramos, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

[4] Deposition of Dr. James Mortimer, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).

[5] James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012).

[6] Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar, and Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006).

[7] “Q So the question was: Have you read it carefully and do you understand everything that was done in the Nikfar meta-analysis?

A Yes, I think so.

* * *

Q And Nikfar stated that she included studies, correct, in the cardiac malformation meta-analysis?

A That’s what she says.

* * *

Q So if you look at the STATA output, the demonstrative, the — the forest plot, the second study is Kornum 2010. Do you see that?

A Am I —

Q You’re looking at figure four, the cardiac malformations.

A Okay.

Q And Kornum 2010, —

A Yes.

Q — that’s a study you relied upon.

A Mm-hmm.

Q Is that right?

A Yes.

Q And it’s on this forest plot, along with its odds ratio and confidence interval, correct?

A Yeah.

Q And if you look at the last study on the forest plot, it’s the same study, Kornum 2010, same odds ratio and same confidence interval, true?

A You’re right.

Q And to paraphrase My Cousin Vinny, no self-respecting epidemiologist would do a meta-analysis by including the same study twice, correct?

A Well, that was an error. Yeah, you’re right.

***

Q Instead of putting 2 out of 98, they extracted the data and put 9 out of 28.

A Yeah. You’re right.

Q So there’s a numerical transposition that generated a 25-fold increased risk; is that right?

A You’re correct.

Q And, again, to quote My Cousin Vinny, this is no way to do a meta-analysis, is it?

A You’re right.”

Testimony of Anick Bérard, Kuykendall v. Forest Labs, at 223:14-17; 238:17-20; 239:11-240:10; 245:5-12 (Cole County, Missouri; Nov. 15, 2013). According to a Google Scholar search, the Rahimi 2006 meta-analysis had been cited 90 times; the Nikfar 2012 meta-analysis, 11 times, as recently as this month. See, e.g., Etienne Weisskopf, Celine J. Fischer, Myriam Bickle Graz, Mathilde Morisod Harari, Jean-Francois Tolsa, Olivier Claris, Yvan Vial, Chin B. Eap, Chantal Csajka & Alice Panchaud, “Risk-benefit balance assessment of SSRI antidepressant use during pregnancy and lactation based on best available evidence,” 14 Expert Op. Drug Safety 413 (2015); Kimberly A. Yonkers, Katherine A. Blackwell & Ariadna Forray, “Antidepressant Use in Pregnant and Postpartum Women,” 10 Ann. Rev. Clin. Psychol. 369 (2014); Abbie D. Leino & Vicki L. Ellingrod, “SSRIs in pregnancy: What should you tell your depressed patient?” 12 Current Psychiatry 41 (2013).

[8] Julian Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008) (“7.2.2 Identifying multiple reports from the same study. Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramèr 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”); see also id. at 298 (Table 10.1.a “Definitions of some types of reporting biases”); id. at 304-05 (10.2.2.1 Duplicate (multiple) publication bias … “The inclusion of duplicated data may therefore lead to overestimation of intervention effects.”); Julian P.T. Higgins, Peter W. Lane, Betsy Anagnostelis, Judith Anzures-Cabrera, Nigel F. Baker, Joseph C. Cappelleri, Scott Haughie, Sally Hollis, Steff C. Lewis, Patrick Moneuse & Anne Whitehead, “A tool to assess the quality of a meta-analysis,” 4 Research Synthesis Methods 351, 363 (2013) (“A common error is to double-count individuals in a meta-analysis.”); Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann. Intern. Med. W-65, W-75 (2009) (“Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes).”) (internal citations omitted); Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R. Tramèr, “Different patterns of duplicate publication: an analysis of articles used in systematic reviews,” 291 J. Am. Med. Ass’n 974 (2004); John Andy Wood, “Methodology for Dealing With Duplicate Study Effects in a Meta-Analysis,” 11 Organizational Research Methods 79, 79 (2008) (“Dependent studies, duplicate study effects, nonindependent studies, and even covert duplicate publications are all terms that have been used to describe a threat to the validity of the meta-analytic process.”) (internal citations omitted); Martin R. Tramèr, D. John M. Reynolds, R. Andrew Moore, Henry J. McQuay, “Impact of covert duplicate publication on meta-analysis: a case study,” 315 Brit. Med. J. 635 (1997); Beverley J Shea, Jeremy M Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter, “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews,” 7(10) BMC Medical Research Methodology 2007 (systematic reviews must inquire whether there was “duplicate study selection and data extraction”).

The Legacy of Irving Selikoff & Wicked Wikipedia

March 7th, 2015

Earlier this year, January 15, 2015, would have been Irving J. Selikoff’s 100th birthday. Selikoff left a lifetime legacy of having improved public health awareness, with a shadow of some rather questionable opinions and conduct in the world of litigation[1]. Given Selikoff’s fame and prestige among public health advocates and labor union activists, it is remarkable that now, over twenty years since his death, there are no major biographies of Selikoff. Even Selikoff’s Wikipedia page[2] is skimpy and devoid of many details of his activities.

There are some comical aspects to the Selikoff Wikipedia page, some of which revolve around someone’s anonymous disparaging of my writing about Selikoff:

“Part of the contrary perspective was presented by a Nathan A. Schachtman, an adjunct lecturer at the Columbia Law School. He suggested that Selikoff and his supporters may have organized ‘a lopsided medical conference, arranged for the conference to feature defendant’s expert witnesses, set out to give short shrift to opposing points of view, invited key judges to attend the conference, and paid for the judges’ travel and hotel expenses’. This quote from Schachtman came from a web site he maintained, unlike the quote from McCulloch and Tweedale, whose comments were published only after being accepted by reviewers for a refereed academic journal.“Nathan A. Schachtman”. www.law.columbia.edu. Columbia Law School. Retrieved September 16, 2013.”

Make no mistake about it; I wasn’t “suggesting”; I was stating a fact. As for the reviewers who “refereed” the journal article by McCulloch and Tweedale, I have shown that this peer review was not worth a warm bucket of spit[3].

One of the disturbing aspects of Wikipedia is that contributors can hide behind I.P. addresses or pseudonyms. Whoever attempted to quote my blog posting distorted my meaning by selectively and incompletely quoting me to suggest that the conference featured defendants’ experts. I can understand that the dumbot wanted to remain anonymous to mislead in this way, but what I wrote was:

“One can only imagine the hue and cry that would arise if a defendant company had funded a lopsided medical conference, arranged for the conference to feature defendant’s expert witnesses, set out to give short shrift to opposing points of view, invited key judges to attend the conference, and paid for the judges’ travel and hotel expenses.”

The counterfactual point, obviously, was that if defense counsel had conspired with defense expert witnesses, to hold an ex parte conference with sitting judges, to feature the work of defense experts, there would have been acrimonious denunciations from the public health community about the evils of corporate influence. In the Wikipedia article, the only reference to Selikoff’s participation in the conspiracy with the litigation industry was an attack on my writing, and a distortion of my posting by incomplete citation. But the misquotation was welcome in one respect: it motivated me to register with Wikipedia to correct the misattribution.

There are two archives of Selikoff documents, one at Mt. Sinai Hospital in New York[4], and the other in St. Louis[5]. Jock McCulloch described Selikoff as having “avoided litigation” and having “fought to keep his papers away from the legal arena.”[6] The first part of McCulloch’s description is demonstrably wrong, but the efforts to suppress access to his papers, and data, are sadly all too true. The accusations of “cover up” flow so freely against industry, but why the cover up of Selikoff’s papers? And who would trust the Mt. Sinai custodians?

The Asbestos Disease Awareness Organization (ADAO) claims to be an “independent asbestos victims’ organization,” started in 2004. Its website points out that the ADAO is a registered 501(c)(3) nonprofit corporation, which “does not make legal referrals.” The ADAO posted a kind memorial to the late Dr. Selikoff: “Dr. Irving Selikoff: Clinician, Researcher, Public Health Advocate and Occupational Health Pioneer (1915 – 2015)” (Jan. 15, 2015).

For almost ten years, the ADAO has been recognizing “exceptional leaders” with the Dr. Irving Selikoff Lifetime Achievement Award, for the recipient’s efforts to increase awareness and prevention of asbestos-related diseases.

Remarkably, many of the “exceptional leaders,” in the eyes of the ADAO, are (or were before their deaths) regular testifiers for the litigation industry:

Paul Brodeur 2006

Yasunosuke Suzuki 2006

Michael Harbut 2008

Barry Castleman 2008

Stephen Levin 2009

Arthur Frank, 2012

Richard Lemen, 2012

Celeste Monforton 2013

David Egilman 2014

Brodeur, of course, did not testify; he wrote for the New Yorker, including a series that became the book, Outrageous Misconduct: The Asbestos Industry on Trial. This book became an important lobbying tool for plaintiffs’ counsel with judges and legislatures. His subsequent book, The Great Power-Line Cover-Up: How the Utilities and Government Are Trying to Hide the Cancer Hazard Posed by Electromagnetic Fields (1993), revealed his aptitude for overinterpreting studies and failing to appreciate validity concerns. See Sander Greenland, Asher R. Sheppard, William T. Kaune, Charles Poole, and Michael A. Kelsh, “A Pooled Analysis of Magnetic Fields, Wire Codes, and Childhood Leukemia,” 11 Epidemiology 624 (2000).

Harbut was the proponent, in the silicone gel breast implant litigation, of a half-baked theory about a role for platinum in causing autoimmune disease among claimants. The FDA and the Institute of Medicine easily dispatched Harbut’s theory. Suzuki, Castleman, Levin, Frank, and Lemen testify (or did testify when alive) with some frequency and regularity in asbestos litigation, on behalf of the litigation industry. Egilman, to his credit, is perhaps the lone recipient who has spoken out[7], on one or more occasions, against the depredations of the litigation industry’s unethical[8] and unlawful screenings, but he has openly acknowledged his bias against corporate industry (although not against the litigation industry). See David S. Egilman, “Corporate and Government Suppression of Research” (2004). And Monforton was one of the movers and shakers in establishing SKAPP[9], which misrepresented its funding sources, while lobbying against the legal requirements of reliability and validity for scientific expert witness opinion testimony.


[1] See “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010); “Selikoff and the Mystery of the Disappearing Asbestosis” (Dec. 6, 2010); “Selikoff and the Mystery of the Disappearing Amphiboles” (Dec. 10, 2010); “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011); “Irving Selikoff and the Right to Peaceful Dissembling” (June 5, 2013); “The Mt. Sinai Catechism” (June 7, 2013); “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “What Happens When Historians Have Bad Memories” (Mar. 15, 2014); “The Last Squirmish Between Irving Selikoff and Sir Richard Doll” (Sept. 9, 2014); “Irving Selikoff – Media Plodder to Media Zealot” (Sept. 9, 2014); “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014). See also Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself so as to Preserve the Appearance of Justice under 28 U.S.C. § 455,” 38 Vill. L. Rev. 1219 (1993); W.K.C. Morgan, “Asbestos and cancer: history and public policy,” 49 Br. J. Indus. Med. 451, 451 (1992).

[2] Wikipedia, “Irving Selikoff” (last visited March 6, 2015).

[3] “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014).

[4] 83 Am. J. Pub. Health 609, 609 (1993)(describing the Irving J. Selikoff Asbestos Archives and Research Center holdings of Dr. Selikoff’s research documents).

[5] http://beckerarchives.wustl.edu/?p=collections/controlcard&id=6725

[6] Jock McCulloch and Geoffrey Tweedale, Defending The Indefensible: The Global Asbestos Industry and its Fight for Survival 271 (Oxford 2008) (describing how even after his death, the Selikoff papers have still not been made generally available, but thanking Valerie Josephson, Philip Landrigan, and Stephen Levin, for helping McCulloch gain access to the papers).

[7] David Egilman & Susanna Rankin Bohme, “Attorney-Directed Screenings Can Be Hazardous,” 45 Am. J. Indus. Med. 305 (2004).

[8] Nathan A. Schachtman & Cynthia J. Rhodes, “Medico-Legal Issues in Occupational Lung Disease Litigation,” 27 Sem. Roentgenology 140 (1992).

[9] “SKAPP-A-LOT” (April 30, 2010); “Conflicted Public Interest Groups” (Nov. 3, 2013).

Bentham’s Legacy – Quantification of Fact Finding

March 1st, 2015

Jeremy Bentham, radical philosopher, was a source of many antic proposals. Perhaps his most antic proposal was to have himself stuffed, mounted, and displayed in the halls of University College of London, where he may still be observed during normal school hours. In ethical theory, Bentham advocated for an extreme ethical reductionism, known as utilitarianism. Bentham shared Edmund Burke’s opposition to the invocation of natural rights, but unlike Burke, Bentham was an ardent foe of the American Revolution.

Bentham was also a non-practicing lawyer who had an inexhaustible capacity for rationalistic revisions of legal practice. Among his revisionary schemes, Bentham proposed to reduce or translate qualitative beliefs to a numerical scale, like a thermometer. Jeremy Bentham, 6 The Works of Jeremy Bentham; Rationale of Evidence, Rationale of Judicial Evidence at 225 (1843); 1 Rationale of Judicial Evidence Specially Applied to Judicial Practice at 76 (1827). The legal profession, that is, lawyers who actually tried or judged cases, did not think much of Bentham’s proposal:

“The notions of those who have proposed that mere moral probabilities or relations could ever be represented by numbers or space, and thus be subjected to arithmetical analysis, cannot but be regarded as visionary and chimerical.”

Thomas Starkie, A Practical Treatise of the Law of Evidence 225 (2d ed. 1833). Having graduated from St. John’s College, Cambridge University, as senior wrangler, Starkie was no slouch in mathematics, and he was an accomplished lawyer and judge later in life.

Starkie’s pronouncement upon Bentham’s proposal was, in the legal profession, a final judgment. The idea of having witnesses provide a decigrade or centigrade scale of belief in facts never caught on in the law. No evidentiary code or set of rules allows for, or requires, such quantification, but on the fringes, Bentham’s ideas still resonate with some observers who would require juries or judges to quantify their findings of fact:

“Consequently statistical ideas should be used in court and have already been used in the analysis of forensic data. But there are other areas to explore. Thus I do not think a jury should be required to decide guilty or innocent; they should provide their probability of guilt. The judge can then apply MEU [maximised expected utility] by incorporating society’s utility. Hutton could usefully have used some probability. A lawyer and I wrote a paper on the evidential worth of failure to produce evidence.”

Lindley, “Bayesian Thoughts,” Significance 73, 74-75 (June 2004). Some might say that Lindley was trash picking in the dustbin of legal history.
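For readers curious about what Lindley’s “maximised expected utility” proposal would look like in symbols, a standard decision-theoretic rendering is sketched below; the utility labels are illustrative and are not taken from Lindley’s paper. If the jury reports a probability of guilt \(p\), and society assigns utilities to each combination of verdict and true state, the judge would convict whenever

\[ p\,U(\text{convict}\mid\text{guilty}) + (1-p)\,U(\text{convict}\mid\text{innocent}) \;>\; p\,U(\text{acquit}\mid\text{guilty}) + (1-p)\,U(\text{acquit}\mid\text{innocent}), \]

which reduces to a probability threshold

\[ p \;>\; \frac{b}{a+b}, \qquad a = U(\text{convict}\mid\text{guilty}) - U(\text{acquit}\mid\text{guilty}), \quad b = U(\text{acquit}\mid\text{innocent}) - U(\text{convict}\mid\text{innocent}). \]

The larger b is relative to a, that is, the more society dreads convicting the innocent, the higher the probability of guilt required before conviction maximizes expected utility. Whatever its formal elegance, this is precisely the sort of quantification that Starkie dismissed as visionary and chimerical.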