TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Joiner Finale

March 23rd, 2015

“This is the end
Beautiful friend
This is the end
My only friend, the end”

Jim Morrison, “The End” (c. 1966)

 *          *          *          *           *          *          *          *          *          *  

General Electric Co. v. Joiner, 522 U.S. 136 (1997), was based only in part upon polychlorinated biphenyl (PCB) exposures. The PCB part did not hold up well legally in the Supreme Court; nor was the PCB lung cancer claim vindicated by later scientific evidence. See “How Have Important Rule 702 Holdings Held Up With Time?” (Mar. 20, 2015).

The Supreme Court in Joiner reversed and remanded the case to the 11th Circuit, which in turn remanded to the district court to address claims that Mr. Joiner had been exposed to furans and dioxins as well, and that these other chemicals had caused, or contributed to, his lung cancer. Joiner v. General Electric Co., 134 F.3d 1457 (11th Cir. 1998) (per curiam). Thus the dioxins remained in the case even after the Supreme Court ruled.

After the Supreme Court’s decision, Anthony Roisman argued that the Court had addressed an artificial question when asked about PCBs alone because the case was really about an alleged mixture of exposures, and he held out hope that the Joiners would do better on remand. Anthony Z. Roisman, “The Implications of G.E. v. Joiner for Admissibility of Expert Testimony,” 1 Res Communes 65 (1999).

Many Daubert observers (including me) have been unaware of the legal fate of the Joiners’ claims on remand. In the only reference I could find, the commentator simply noted that the case resolved before trial.[1] I am indebted to Michael Risinger and Joseph Cecil for pointing me to documents from PACER, which shed some light upon the Joiner “endgame.”

In February 1998, Judge Orinda Evans, who had been the original trial judge, and who had sustained defendants’ Rule 702 challenges and granted their motions for summary judgment, received and reopened the case upon remand from the 11th Circuit. In March, Judge Evans directed the parties to submit a new pre-trial order by April 17, 1998. At a status conference in April 1998, Judge Evans permitted the plaintiffs additional discovery, to be completed by June 17, 1998. Five days before the expiration of their additional discovery period, the plaintiffs moved for additional time; defendants opposed the request. In July, Judge Evans granted the requested extension, and gave defendants until November 1, 1998, to file for summary judgment.

Meanwhile, in June 1998, new counsel entered their appearances for plaintiffs – William Sims Stone, Kevin R. Dean, Thomas Craig Earnest, and Stanley L. Merritt. The docket does not reflect much of anything about the new discovery other than a request for a protective order for an unpublished study. But by October 6, 1998, the new counsel, Earnest, Dean, and Stone (but not Merritt) withdrew as attorneys for the Joiners, and by the end of October 1998, Judge Evans entered an order to dismiss the case, without prejudice.

A few months later, in February 1999, the parties filed a stipulation, approved by the Clerk, dismissing the action with prejudice, and with each party to bear its own costs. Given the flight of plaintiffs’ counsel, and the dismissals without and then with prejudice, a settlement seems never to have been involved in the resolution of the Joiner case. In the end, the Joiners’ case fizzled, perhaps to avoid being Frye’d.

And what has happened since to the science of dioxins and lung cancer?

Not much.

In 2006, the National Research Council published a monograph on dioxin, which took the controversial approach of focusing on all cancer mortality rather than on the specific cancers that had been suggested as likely outcomes of interest. See David L. Eaton (Chairperson), Health Risks from Dioxin and Related Compounds – Evaluation of the EPA Reassessment (2006). The validity of this approach, and the committee’s conclusions, were challenged vigorously in subsequent publications. Paolo Boffetta, Kenneth A. Mundt, Hans-Olov Adami, Philip Cole, and Jack S. Mandel, “TCDD and cancer: A critical review of epidemiologic studies,” 41 Critical Rev. Toxicol. 622 (2011) (“In conclusion, recent epidemiological evidence falls far short of conclusively demonstrating a causal link between TCDD exposure and cancer risk in humans.”).

In 2013, the Industrial Injuries Advisory Council (IIAC), an independent scientific advisory body in the United Kingdom, published a review of lung cancer and dioxin. The Council found the epidemiologic studies mixed, and declined to endorse the compensability of lung cancer for dioxin-exposed industrial workers. Industrial Injuries Advisory Council – Information Note on Lung Cancer and Dioxin (December 2013). See also Mann v. CSX Transp., Inc., 2009 WL 3766056, 2009 U.S. Dist. LEXIS 106433 (N.D. Ohio 2009) (Polster, J.) (dioxin exposure case) (“Plaintiffs’ medical expert, Dr. James Kornberg, has opined that numerous organizations have classified dioxins as a known human carcinogen. However, it is not appropriate for one set of experts to bring the conclusions of another set of experts into the courtroom and then testify merely that they ‘agree’ with that conclusion.”), citing Thorndike v. DaimlerChrysler Corp., 266 F. Supp. 2d 172 (D. Me. 2003) (court excluded expert who was “parroting” other experts’ conclusions).


[1] Morris S. Zedeck, Expert Witness in the Legal System: A Scientist’s Search for Justice 49 (2010) (noting that, after remand from the Supreme Court, Joiner v. General Electric resolved before trial).

How Have Important Rule 702 Holdings Held Up With Time?

March 20th, 2015

The Daubert case arose from claims of teratogenicity of Bendectin. The history of the evolving scientific record has not been kind to those claims. See “Bendectin, Diclegis & The Philosophy of Science” (Oct. 26, 2013); Gideon Koren, “The Return to the USA of the Doxylamine-Pyridoxine Delayed Release Combination (Diclegis®) for Morning Sickness — A New Morning for American Women,” 20 J. Popul. Ther. Clin. Pharmacol. e161 (2013). Twenty years later, the decisions in the Daubert appeals look sound, even if the reasoning was at times shaky. How have other notable Rule 702 exclusions stood up to evolving scientific records?

A recent publication of an epidemiologic study on lung cancer among workers exposed to polychlorinated biphenyls (PCBs) raised an interesting question about a gap in so-called Daubert scholarship. Clearly, there are some cases, like General Electric v. Joiner[1], in which plaintiffs lack sufficient, valid evidence to make out their causal claims. But are there cases of Type II injustices, in which, in the fullness of time, the insufficiency or invalidity of the available evidentiary display is “cured” by subsequently published studies?

In Joiner, Chief Justice Rehnquist noted that the district court had carefully analyzed the four epidemiologic studies claimed by plaintiff to support the association between PCB exposure and lung cancer. The first such study[2] involved workers at an Italian capacitor plant who had been exposed to PCBs.

The Chief Justice reported that the authors of the Italian capacitor study had noted that lung cancer deaths among former employees were more numerous than expected (without reporting whether there was any assessment of random error), but that they concluded that “there were apparently no grounds for associating lung cancer deaths (although increased above expectations) and exposure in the plant.”[3] The court frowned at the hired expert witnesses’ willingness to draw a causal inference when the authors of the Bertazzi study would not. As others have noted, this disapproval was beside the point of the Rule 702 inquiry. It might well be the case that Bertazzi and his co-authors could not or did not conduct a causal analysis, but that does not mean that the study’s evidence could not be part of a larger effort to synthesize the available evidence. In any event, the Bertazzi study was small and uninformative. Although all cancer mortality was increased (14 observed vs. 5.5 expected, based upon national rates; SMR = 253; 95% CI 144-415), the study was too small to be meaningful for lung cancer outcomes.
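For readers who wish to check such figures, the SMR and its confidence interval follow directly from the observed and expected deaths. Here is a minimal sketch in Python, using the conventional exact Poisson (Garwood) interval; the helper name smr_ci is mine, and the published numbers differ slightly, presumably from rounding of the expected count or a different interval method:

    from scipy.stats import chi2

    def smr_ci(observed, expected, alpha=0.05):
        # Standardized mortality ratio (x 100) with an exact Poisson CI
        smr = 100.0 * observed / expected
        lower = 100.0 * chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
        upper = 100.0 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
        return round(smr), round(lower), round(upper)

    # Bertazzi's all-cancer figures: 14 deaths observed vs. 5.5 expected
    print(smr_ci(14, 5.5))  # (255, 139, 427); the paper reports 253 (144-415)

An interval spanning roughly a threefold range is the tell: with so few deaths, the study could say nothing precise about any particular cancer site, which is why it added so little to the lung cancer question.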

The second cited study[4], from an unpublished report, followed workers at a Monsanto PCB production facility. The authors of the Monsanto study reported that the lung cancer mortality rate among exposed workers was “somewhat” higher than expected, but that the “increase, however, was not statistically significant and the authors of the study did not suggest a link between the increase in lung cancer deaths and the exposure to PCBs.” Again, the Court’s emphasis on what the authors stated is unfortunate. What is important is obscured because the Court never reproduced the data from this unpublished study.

The third study[5] cited by plaintiff’s hired expert witnesses was of “no help,” in that the study followed workers exposed to mineral oil, without any known exposure to PCBs. Although the workers exposed to this particular mineral oil had a statistically significantly elevated lung cancer mortality, the study made no reference to PCBs.

The fourth study[6] cited by plaintiffs’ expert witnesses followed a Japanese PCB-exposed group, which had a “statistically significant increase in lung cancer deaths.” The Court, however, was properly concerned that the cohort was exposed to numerous other potential carcinogens, including toxic rice oil by ingestion.

The paucity of this evidence led the Court to observe:

“Trained experts commonly extrapolate from existing data. But nothing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence which is connected to existing data only by the ipse dixit of the expert. A court may conclude that there is simply too great an analytical gap between the data and the opinion proffered. … That is what the District Court did here, and we hold that it did not abuse its discretion in so doing.”

Joiner, 522 U.S. at 146 (1997).

Interestingly omitted from the Supreme Court’s discussion was why the plaintiffs’ expert witnesses failed to rely upon all the available epidemiology. The excluded witnesses relied upon an unpublished Monsanto study, but apparently ignored an unpublished investigation by NIOSH researchers, who found that there were “no excess deaths from cancers of the … the lung” among PCB-exposed workers at a Westinghouse Electric manufacturing facility[7]. Actually, NIOSH reported a statistically non-significant decrease in the lung cancer rate, with a fairly narrow confidence interval.

Two Swedish studies[8] were perhaps too small to add much to the mix of evidence, but lung cancer rates were apparently not increased in a North American study[9].

Joiner thus represents not only an analytical gap case, but a cherry-picking case as well. The Supreme Court was eminently correct to affirm the exclusion of the shoddy evidence proffered in the Joiner case.

But has the District Judge’s exclusion of Joiner’s expert witnesses (Dr. Arnold Schecter and Dr. (Rabbi) Daniel Teitelbaum) stood up to the evolving scientific record?

A couple of weeks ago, researchers published a large, updated cohort study, funded by General Electric, on the mortality experience of workers in a plant that manufactured capacitors with PCBs[10]. Although the Lobby and the Occupational Medicine Zealots will whine about the funding source, the study is much stronger than anything relied upon by Mr. Joiner’s expert witnesses, and its results are consistent with the NIOSH study available to, but ignored by, Joiner’s expert witnesses. The results are not uniformly favorable for General Electric, but on the end point of lung cancer for men, the standardized mortality ratio was 81 (95% C.I., 68 – 96), nominally statistically significantly below the expected SMR of 100.


[1] General Electric v. Joiner, 522 U.S. 136 (1997).

[2] Bertazzi, Riboldi, Pesatori, Radice, & Zocchetti, “Cancer Mortality of Capacitor Manufacturing Workers,” 11 Am. J. Indus. Med. 165 (1987).

[3] Id. at 172.

[4] J. Zack & D. Munsch, Mortality of PCB Workers at the Monsanto Plant in Sauget, Illinois (Dec. 14, 1979) (unpublished report), 3 Rec., Doc. No. 11.

[5] Ronneberg, Andersen, Skyberg, “Mortality and Incidence of Cancer Among Oil-Exposed Workers in a Norwegian Cable Manufacturing Company,” 45 Br. J. Indus. Med. 595 (1988).

[6] Kuratsune, Nakamura, Ikeda, & Hirohata, “Analysis of Deaths Seen Among Patients with Yusho – A Preliminary Report,” 16 Chemosphere 2085 (1987).

[7] Thomas Sinks, Alexander B. Smith, Robert Rinsky, M. Kathy Watkins, and Ruth Shults, Health Hazard Evaluation Report, HETA 89-116-209 (Jan. 1991) (reporting lung cancer SMR = 0.7 (95% CI, 0.4 – 1.2)). This study, unpublished in 1991, had been published by the time the Joiner case was litigated. Thomas Sinks, G. Steele, Alexander B. Smith, and Ruth Shults, “Mortality among workers exposed to polychlorinated biphenyls,” 136 Am. J. Epidemiol. 389 (1992). A follow-up study confirmed the paucity of lung cancer in the cohort. See Avima M. Ruder, Misty J. Hein, Nancy Nilsen, Martha A. Waters, Patricia Laber, Karen Davis-King, Mary M. Prince, and Elizabeth Whelan, “Mortality among Workers Exposed to Polychlorinated Biphenyls (PCBs) in an Electrical Capacitor Manufacturing Plant in Indiana: An Update,” 114 Environmental Health Perspect. 18 (2006).

[8] P. Gustavsson, C. Hogstedt, and C. Rappe, “Short-term mortality and cancer incidence in capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 10 Am. J. Indus. Med. 341 (1986); P. Gustavsson & C. Hogstedt, “A cohort study of Swedish capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs),” 32 Am. J. Indus. Med. 234 (1997) (cancer incidence for entire cohort, SIR = 86; 95% CI 51-137).

[9] David P. Brown, “Mortality of workers exposed to polychlorinated biphenyls–an update,” 42 Arch. Envt’l Health 333 (1987).

[10] See Renate D. Kimbrough, Constantine A. Krouskas, Wenjing Xu, and Peter G. Shields, “Mortality among capacitor workers exposed to polychlorinated biphenyls (PCBs), a long-term update,” 88 Internat’l Arch. Occup. & Envt’l Health 85 (2015).

The Mythology of Linear No-Threshold Cancer Causation

March 13th, 2015

“For the great enemy of the truth is very often not the lie—deliberate, contrived, and dishonest—but the myth—persistent, persuasive, and unrealistic. Too often we hold fast to the clichés of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought.”

John F. Kennedy, Yale University Commencement (June 11, 1962)

         *        *        *        *        *        *        *        *        *

The linear no-threshold (LNT) model for risk assessment has its origins in a dubious attempt by scientists to play at policy making[1]. The model has survived as a political strategy to inject the precautionary principle into regulatory decision making, but it has turned into a malignant myth in litigation over low-dose exposures to putative carcinogens. Ignorance or uncertainty about low-dose exposures is turned into an affirmative opinion that the low-dose exposures are actually causative. Call it contrived, or dishonest, or call it a myth; the LNT model is an intellectual cliché.
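In schematic form, the competing dose-response models can be stated simply. The following is a generic textbook formulation, not one drawn from any of the sources discussed here; R_0 denotes baseline risk, β a slope parameter, and d_0 a threshold dose:

    \[
    R_{\text{LNT}}(d) = R_0 + \beta d
    \qquad\text{versus}\qquad
    R_{\text{threshold}}(d) = R_0 + \beta\,\max(0,\, d - d_0)
    \]

Under the LNT form, every dose d > 0 adds risk by assumption; the threshold form allows the body to absorb low doses without added risk, which is precisely the point in contention.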

The LNT cliché pervades American media as well as courtrooms. Earlier this week, the New York Times provided a lovely example of the myth taking center stage, without explanation or justification. Lumber Liquidators is under regulatory and litigation attack for having sold Chinese laminate wood flooring made with formaldehyde-containing materials. According to a “60 Minutes” investigation, the flooring off-gases formaldehyde at concentrations in excess of permissible regulatory levels. See Aaron M. Kessler & Rachel Abrams, “Homeowners Try to Assess Risks From Chemical in Floors,” New York Times (Mar. 10, 2015).

The Times reporters, in discussing whether a risk exists to people who live in houses and apartments with the Lumber Liquidators flooring, sought out and quoted the opinion of Marilyn Howarth:

“Any exposure to a carcinogen can increase your risk of cancer,” said Marilyn Howarth, a toxicologist at the University of Pennsylvania’s Perelman School of Medicine.

Id. Dr. Howarth, however, is not a toxicologist; she is an occupational and environmental physician, who serves as the Director of Occupational and Environmental Consultation Services at the Hospital of the University of Pennsylvania. She is also an adjunct associate professor of emergency medicine, and the Director of the Community Outreach and Engagement Core, Center of Excellence in Environmental Toxicology, at the University of Pennsylvania Perelman School of Medicine. Without detracting from Dr. Howarth’s fine credentials, the New York Times reporters might have noticed that Dr. Howarth’s publications are primarily on latex allergies, not on the effects of low-dose exposures to carcinogens.

The point is not to diminish Dr. Howarth’s accomplishments, but to criticize the Times reporters for seeking out the opinion of a physician whose expertise is not well matched to the question they raised about risks, and then publishing that opinion even though it is demonstrably wrong. Clearly, some carcinogens, and perhaps all, do not increase risk at “any exposure.” Consider ethanol, which is known to cause cancer of the larynx, liver, female breast, and perhaps other organs[2]. Despite known causation, no one would assert that “any exposure” to alcohol-containing food and drink increases the risk of these cancers. And the same could be said for most, if not all, carcinogens. The human body has defense mechanisms against carcinogens, including DNA repair and programmed cell death, which work to prevent carcinogenesis from low-dose exposures.

The no-threshold hypothesis is, at best, just that, an hypothesis, and one with affirmative evidence showing that it should be rejected for some cancers[3]. LNT’s status as established fact is a myth; it is an opinion, and a poorly supported opinion at that.

         *        *        *        *        *        *        *        *        *

“There are, in fact, two things: science and opinion. The former brings knowledge, the latter ignorance.”

Hippocrates of Cos


[1] See Edward J. Calabrese, “Cancer risk assessment foundation unraveling: New historical evidence reveals that the US National Academy of Sciences (US NAS), Biological Effects of Atomic Radiation (BEAR) Committee Genetics Panel falsified the research record to promote acceptance of the LNT,” 89 Arch. Toxicol. 649 (2015); Edward J. Calabrese & Michael K. O’Connor, “Estimating Risk of Low Radiation Doses – A Critical Review of the BEIR VII Report and its Use of the Linear No-Threshold (LNT) Hypothesis,” 182 Radiation Research 463 (2014); Edward J. Calabrese, “Origin of the linearity no threshold (LNT) dose–response concept,” 87 Arch. Toxicol. 1621 (2013); Edward J. Calabrese, “The road to linearity at low doses became the basis for carcinogen risk assessment,” 83 Arch. Toxicol. 203 (2009).

[2] See, e.g., IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, vol. 96, Alcohol Consumption and Ethyl Carbamate (2010).

[3] See, e.g., Jerry M. Cuttler, “Commentary on Fukushima and Beneficial Effects of Low Radiation,” 11 Dose-Response 432 (2013); Jerry M. Cuttler, “Remedy for Radiation Fear – Discard the Politicized Science,” 12 Dose-Response 170 (2014).

Don’t Double Dip Data

March 9th, 2015

Meta-analyses have become commonplace in epidemiology and in other sciences. When well conducted and transparently reported, meta-analyses can be extremely helpful. In several litigations, meta-analyses determined the outcome of the medical causation issues. In the silicone gel breast implant litigation, after defense expert witnesses proffered meta-analyses[1], court-appointed expert witnesses adopted the approach and featured meta-analyses in their reports to the MDL court[2].

In the welding fume litigation, plaintiffs’ expert witness offered a crude, non-quantified, “vote counting” exercise to argue that welding causes Parkinson’s disease[3]. In rebuttal, one of the defense expert witnesses offered a quantitative meta-analysis, which provided strong evidence against plaintiffs’ claim.[4] Although the welding fume MDL court excluded the defense expert’s meta-analysis from the pre-trial Rule 702 hearing as untimely, plaintiffs’ counsel soon thereafter initiated settlement discussions of the entire set of MDL cases. Subsequently, the defense expert witness, with his professional colleagues, published an expanded version of the meta-analysis.[5]

And last month, a meta-analysis proffered by a defense expert witness helped dispatch a long-festering litigation in New Jersey’s multi-county isotretinoin (Accutane) litigation. In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan).

Of course, when a meta-analysis is done improperly, the resulting analysis may be worse than none at all. Some methodological flaws involve arcane statistical concepts and procedures, and may be easily missed. Other flaws are flagrant and call for a gatekeeping bucket brigade.

When a merchant puts his hand on the scale at the check-out counter, we call that fraud. When George Costanza dipped his chip twice into the chip dip, he was properly called out for his boorish and unsanitary practice. When a statistician or epidemiologist produces a meta-analysis that double counts crucial data to inflate a summary estimate of association, or to create spurious precision in the estimate, we don’t need to crack open Modern Epidemiology or the Reference Manual on Scientific Evidence to know that something fishy has taken place.
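The mischief is easy to demonstrate. Below is a minimal sketch, in Python, of fixed-effect, inverse-variance pooling of odds ratios; the function name pooled_or and the study numbers are mine, invented solely for illustration. Counting one study twice narrows the pooled confidence interval without adding a shred of new information:

    import math

    def pooled_or(studies):
        # studies: list of (odds_ratio, lower_95, upper_95) tuples
        total_weight = weighted_sum = 0.0
        for or_, lo, hi in studies:
            se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out the SE from the CI
            weight = 1.0 / se ** 2                           # inverse-variance weight
            total_weight += weight
            weighted_sum += weight * math.log(or_)
        center = weighted_sum / total_weight
        half_width = 1.96 * math.sqrt(1.0 / total_weight)
        return tuple(round(math.exp(x), 2)
                     for x in (center, center - half_width, center + half_width))

    studies = [(1.4, 0.9, 2.2), (1.1, 0.7, 1.8), (1.2, 0.8, 1.9)]  # hypothetical data
    print(pooled_or(studies))                  # honest pooling: CI includes 1.0
    print(pooled_or(studies + [studies[0]]))   # first study counted twice

With the duplicate included, the lower confidence bound creeps above 1.0, and a null result suddenly looks nominally significant, exactly the sort of artifact that careful readers, and careful gatekeepers, should catch.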

In litigation involving claims that selective serotonin reuptake inhibitors cause birth defects, plaintiffs’ expert witness, a perinatal epidemiologist, relied upon two published meta-analyses[6]. In an examination before trial, this epidemiologist was confronted with the double counting (and other data entry errors) in the relied-upon meta-analyses, and she readily agreed that the meta-analyses were improperly done and that she had to abandon her reliance upon them.[7] The result of the expert witness’s deposition epiphany, however, was that she no longer had the illusory benefit of an aggregation of data, with an outcome supporting her opinion. The further consequence was that her opinion succumbed to a Rule 702 challenge. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Double counting of studies, or of subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advanced training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies. Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009). Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Id. Senn urges analysts to “[b]e vigilant about double counting,” id. at *4, and recommends that journals withdraw meta-analyses promptly when mistakes are found. Id. at *1.

Similar advice abounds in books and journals[8]. Professor Sander Greenland addresses the issue in his chapter on meta-analysis in Modern Epidemiology:

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap. * * * * *”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added).

Just remember George Costanza; don’t double dip that chip, and don’t double dip in the data.


[1] See, e.g., Otto Wong, “A Critical Assessment of the Relationship between Silicone Breast Implants and Connective Tissue Diseases,” 23 Regulatory Toxicol. & Pharmacol. 74 (1996).

[2] See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction:  A Report by a National Science Panel to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998)”; Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000).

[3] Deposition of Dr. Juan Sanchez-Ramos, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

[4] Deposition of Dr. James Mortimer, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).

[5] James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012).

[6] Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar & Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006).

[7] “Q So the question was: Have you read it carefully and do you understand everything that was done in the Nikfar meta-analysis?

A Yes, I think so.

* * *

Q And Nikfar stated that she included studies, correct, in the cardiac malformation meta-analysis?

A That’s what she says.

* * *

Q So if you look at the STATA output, the demonstrative, the — the forest plot, the second study is Kornum 2010. Do you see that?

A Am I —

Q You’re looking at figure four, the cardiac malformations.

A Okay.

Q And Kornum 2010, —

A Yes.

Q — that’s a study you relied upon.

A Mm-hmm.

Q Is that right?

A Yes.

Q And it’s on this forest plot, along with its odds ratio and confidence interval, correct?

A Yeah.

Q And if you look at the last study on the forest plot, it’s the same study, Kornum 2010, same odds ratio and same confidence interval, true?

A You’re right.

Q And to paraphrase My Cousin Vinny, no self-respecting epidemiologist would do a meta-analysis by including the same study twice, correct?

A Well, that was an error. Yeah, you’re right.

***

Q Instead of putting 2 out of 98, they extracted the data and put 9 out of 28.

A Yeah. You’re right.

Q So there’s a numerical transposition that generated a 25-fold increased risk; is that right?

A You’re correct.

Q And, again, to quote My Cousin Vinny, this is no way to do a meta-analysis, is it?

A You’re right.”

Testimony of Anick Bérard, Kuykendall v. Forest Labs, at 223:14-17; 238:17-20; 239:11-240:10; 245:5-12 (Cole County, Missouri; Nov. 15, 2013). According to a Google Scholar search, the Rahimi 2006 meta-analysis had been cited 90 times; the Nikfar 2012 meta-analysis, 11 times, as recently as this month. See, e.g., Etienne Weisskopf, Celine J. Fischer, Myriam Bickle Graz, Mathilde Morisod Harari, Jean-Francois Tolsa, Olivier Claris, Yvan Vial, Chin B. Eap, Chantal Csajka & Alice Panchaud, “Risk-benefit balance assessment of SSRI antidepressant use during pregnancy and lactation based on best available evidence,” 14 Expert Op. Drug Safety 413 (2015); Kimberly A. Yonkers, Katherine A. Blackwell & Ariadna Forray, “Antidepressant Use in Pregnant and Postpartum Women,” 10 Ann. Rev. Clin. Psychol. 369 (2014); Abbie D. Leino & Vicki L. Ellingrod, “SSRIs in pregnancy: What should you tell your depressed patient?” 12 Current Psychiatry 41 (2013).

[8] Julian Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008) (“7.2.2 Identifying multiple reports from the same study. Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramèr 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”); see also id. at 298 (Table 10.1.a “Definitions of some types of reporting biases”); id. at 304-05 (10.2.2.1 Duplicate (multiple) publication bias … “The inclusion of duplicated data may therefore lead to overestimation of intervention effects.”); Julian P.T. Higgins, Peter W. Lane, Betsy Anagnostelis, Judith Anzures-Cabrera, Nigel F. Baker, Joseph C. Cappelleri, Scott Haughie, Sally Hollis, Steff C. Lewis, Patrick Moneuse & Anne Whitehead, “A tool to assess the quality of a meta-analysis,” 4 Research Synthesis Methods 351, 363 (2013) (“A common error is to double-count individuals in a meta-analysis.”); Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, P.J. Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann. Intern. Med. W-65, W-75 (2009) (“Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes).”) (internal citations omitted); Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R. Tramèr, “Different patterns of duplicate publication: an analysis of articles used in systematic reviews,” 291 J. Am. Med. Ass’n 974 (2004); John Andy Wood, “Methodology for Dealing With Duplicate Study Effects in a Meta-Analysis,” 11 Organizational Research Methods 79, 79 (2008) (“Dependent studies, duplicate study effects, nonindependent studies, and even covert duplicate publications are all terms that have been used to describe a threat to the validity of the meta-analytic process.”) (internal citations omitted); Martin R. Tramèr, D. John M. Reynolds, R. Andrew Moore, Henry J. McQuay, “Impact of covert duplicate publication on meta-analysis: a case study,” 315 Brit. Med. J. 635 (1997); Beverley J. Shea, Jeremy M. Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter, “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews,” 7 BMC Medical Research Methodology 10 (2007) (systematic reviews must inquire whether there was “duplicate study selection and data extraction”).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.