For your delectation and delight, desultory dicta on the law of delicts.

Daubert’s Silver Anniversary – Retrospective View of Its Friends and Enemies

October 21st, 2018

Science is inherently controversial because when done properly it has no respect for power or political correctness or dogma or entrenched superstition. We should thus not be surprised that the scientific process has many detractors in houses of worship, houses of representatives, and houses of the litigation industry. And we have more than a few “Dred Scott” decisions, in which courts have held that science has no criteria of validity that they are bound to follow.

To be sure, many judges have recognized a different danger in scientific opinion testimony, namely, its ability to overwhelm the analytical faculties of lay jurors. Fact-finders may view scientific expert witness opinion testimony as carrying a certainty and authority that swamp their ability to evaluate it.1

One errant judicial strategy for dealing with judges’ own difficulty in evaluating scientific evidence was to invent a fictitious divide between scientific and legal burdens of proof:2

“Petitioners demand sole reliance on scientific facts, on evidence that reputable scientific techniques certify as certain. Typically, a scientist will not so certify evidence unless the probability of error, by standard statistical measurement, is less than 5%. That is, scientific fact is at least 95% certain. Such certainty has never characterized the judicial or the administrative process. It may be that the ‘beyond a reasonable doubt’ standard of criminal law demands 95% certainty. Cf. McGill v. United States, 121 U.S.App. D.C. 179, 185 n.6, 348 F.2d 791, 797 n.6 (1965). But the standard of ordinary civil litigation, a preponderance of the evidence, demands only 51% certainty. A jury may weigh conflicting evidence and certify as adjudicative (although not scientific) fact that which it believes is more likely than not.”

By falsely elevating the scientific standard, judges see themselves as free to decide expeditiously and without constraint, because they are operating at a much lower epistemic level.
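
The court’s arithmetic conflates a significance threshold with a degree of certainty in a conclusion. A significance level is a conditional error rate, assuming the null hypothesis is true; it is not the probability that a “significant” finding is correct. A small sketch, with frankly hypothetical values for statistical power and the prior probability of the tested hypothesis, applies Bayes’ theorem to show that a result surviving a 5% significance test can be far from “95% certain”:

```python
# Hypothetical illustration: alpha is the probability of a false-positive
# finding WHEN THE NULL IS TRUE, not the probability that a significant
# result is correct. Power and prior below are assumed for illustration.
alpha = 0.05   # significance level (Type I error rate)
power = 0.80   # probability of detecting a true effect (1 - Type II error)
prior = 0.10   # assumed prior probability that the hypothesis is true

# Probability the hypothesis is true GIVEN a statistically significant
# result, by Bayes' theorem:
ppv = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(hypothesis true | significant result) = {ppv:.2f}")  # 0.64 here
```

On these assumed numbers, a “statistically significant” result corresponds to only about 64% confidence that the hypothesis is true, nowhere near the 95% figure the Ethyl Corp. footnote imagined.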

Another response advocated by “the Lobby,” scientists in service to the litigation industry, has been to deprecate gatekeeping altogether. Perhaps the most brazen anti-science response to the Supreme Court’s decision in Daubert was advanced by David Michaels and his Project on Scientific Knowledge and Public Policy (SKAPP). In its heyday, SKAPP organized meetings and conferences, and cranked out anti-gatekeeping propaganda to the delight of the litigation industry3, while obfuscating and equivocating about the source of its funding (from the litigation industry).4

SKAPP principal David Michaels was also behind the efforts of the American Public Health Association (APHA) to criticize the judicial move to scientific standards in gatekeeping. In 2004, Michaels and fellow litigation industrialists prevailed upon the APHA to adopt a policy statement that attacked evidence-based science and data transparency in the form of “Policy Number: 2004-11 Threats to Public Health Science.”5

SKAPP appears to have gone the way of the dodo, although the defunct organization still has a Wikipedia page with the misleading claim that a federal court had funded its operation, and the old link for this sketchy outfit now redirects to the website for the Union of Concerned Scientists. In 2009, David Michaels, fellow in the Collegium Ramazzini, and formerly the driving force of SKAPP, went on to become an under-secretary of Labor and OSHA administrator in the Obama administration.6

With the end of his regulatory work, Michaels is now back in the litigation saddle. In April 2018, Michaels participated in a ruse in which he allowed himself to be “subpoenaed” by Mark Lanier, to give testimony in a case involving claims that personal talc use caused ovarian cancers.7 Michaels had no real subject matter expertise, but he readily made himself available so that Mr. Lanier could inject Michaels’ favorite trope of “doubt is their product” into his trial.

Against this backdrop of special pleading from the litigation industry’s go-to expert witnesses, it is helpful to revisit the Daubert decision, which is now 25 years old. The decision followed the grant of the writ of certiorari by the Supreme Court, full briefing by the parties on the merits, oral argument, and twenty-two amicus briefs. Not all briefs are created equal, and this inequality is especially true of amicus briefs, for which the quality of argument, and the reputation of the interested third parties, can vary greatly. Given the shrill ideological ranting of SKAPP and the APHA, we might find some interest in what two leading scientific organizations, the American Association for the Advancement of Science (AAAS) and the National Academy of Sciences (NAS), contributed to the debate over the proper judicial role in policing expert witness opinion testimony.

The Amicus Brief of the AAAS and the NAS, filed in Daubert v. Merrell Dow Pharmaceuticals, Inc., U.S. Supreme Court No. 92-102 (Jan. 19, 1993), was submitted by Richard A. Meserve and Lars Noah, of Covington & Burling, and by Bert Black, of Weinberg & Green. Unfortunately, the brief does not appear to be available on Westlaw, but it was republished shortly after filing, at 12 Biotechnology Law Report 198 (No. 2, March-April 1993) [all citations below are to this republication].

The amici were and are well known to the scientific community. The AAAS is a not-for-profit scientific society, which publishes the prestigious journal Science, and engages in other activities to advance public understanding of science. The NAS was created by congressional charter in the administration of Abraham Lincoln, to examine scientific, medical, and technological issues of national significance. Brief at 208. Meserve, counsel of record for these Amici Curiae, is a member of the National Academy, a president emeritus of the Carnegie Institution for Science, and a former chair of the U.S. Nuclear Regulatory Commission. He received his doctorate in applied physics from Stanford University, and his law degree from Harvard. Noah is now a professor of law at the University of Florida, and Black is still a practicing lawyer, ironically for the litigation industry.

The brief of the AAAS and the NAS did not take a position on the merits of whether Bendectin can cause birth defects, but it had a great deal to say about the scientific process, and the need for courts to intervene to ensure that expert witness opinion testimony was developed and delivered with appropriate methodological rigor.

A Clear and Present Danger

The amici, AAAS and NAS, clearly recognized a threat to the integrity of scientific fact-finding in the regime of uncontrolled and unregulated expert witness testimony. The amici cited the notorious case of Wells v. Ortho Pharmaceutical Corp.8, which had provoked an outcry from the scientific community, and a particularly scathing article by two scientists from the National Institute of Child Health and Human Development.9

The amici also cited several judicial decisions on the need for robust gatekeeping, including the observations of Judge Jack Weinstein that

“[t]he uncertainty of the evidence in [toxic tort] cases, dependent as it is on speculative scientific hypotheses and epidemiological studies, creates a special need for robust screening of experts and gatekeeping under Rules 403 and 703 by the court.”10

The AAAS and the NAS saw the “obvious danger that research results generated solely for litigation may be skewed.” Brief at 217 & n.11.11 The AAAS and the NAS thus saw a real, substantial threat in countenancing expert witnesses who proffered “putatively scientific evidence that does not in fact reflect the application of scientific principles.” Brief at 208. The Supreme Court probably did not need the AAAS and the NAS to tell them that “[s]uch evidence can lead to incorrect decisions and can serve to discredit the contributions of science,” id., but it may have helped ensure that the Court articulated meaningful guidelines to trial judges to police their courtrooms against scientific conclusions that were not reached in accordance with scientific principles. The amici saw and stated that

“[t]he unique persuasive power of scientific evidence and its inherent limitations requires that courts engage special efforts to ensure that scientific evidence is valid and reliable before it is admitted. In performing that task, courts can look to the same criteria that scientists themselves use to evaluate scientific claims.”

Brief at 212.

It may seem quaint to the post-modernists at the APHA, but the AAAS and the NAS were actually concerned “to avoid outcomes that are at odds with reality,” and they were willing to urge that “courts must exercise special care to assure that such evidence is based on valid and reliable scientific methodologies.” Brief at 209 (emphasis added). The amici also urged caution in allowing opinion testimony that conflicted with existing learning, and which had not been presented to the scientific community for evaluation. Brief at 218-19. In the words of the amici:

“Courts should admit scientific evidence only if it conforms to scientific standards and is derived from methods that are generally accepted by the scientific community as valid and reliable. Such a test promotes sound judicial decisionmaking by providing workable means for screening and assessing the quality of scientific expert testimony in advance of trial.”

Brief at 233. After all, part of the scientific process itself is weeding out false ideas.

Authority for Judicial Control

The AAAS and the NAS and their lawyers gave their full support to Merrell Dow’s position that “courts have the authority and the responsibility to exclude expert testimony that is based upon unreliable or misapplied methodologies.” Brief at 209. The Federal Rules of Evidence, and Rules 702, 703, and 403 in particular, gave trial courts “ample authority for empowering courts to serve as gatekeepers.” Brief at 230. The amici argued what ultimately would become the law: in the spirit and text of the Federal Rules, “[t]hreshold determinations concerning the admissibility of scientific evidence are necessary to ensure accurate decisions and to avoid unnecessary expenditures of judicial resources on collateral issues.” Brief at 210. The AAAS and NAS further recommended that:

“Determinations concerning the admissibility of expert testimony based on scientific evidence should be made by a judge in advance of trial. Such judicial control is explicitly called for under Rule 104(a) of the Federal Rules of Evidence, and threshold admissibility determinations by a judge serve several important functions, including simplification of issues at trial (thereby increasing the speed of trial), improvement in the consistency and predictability of results, and clarification of the issues for purposes of appeal. Indeed, it is precisely because a judge can evaluate the evidence in a focused and careful manner, free from the prejudices that might infect deliberations by a jury, that the determination should be made as a threshold matter.”

Brief at 228 (internal citations omitted).

Criteria of Validity

The AAAS and NAS did not shrink from the obvious implications of their position. They insisted that “[i]n evaluating scientific evidence, courts should consider the same factors that scientists themselves employ to assess the validity and reliability of scientific assertions.” Brief at 209, 210. The amici may have exhibited an aspirational view of the ability of judges, but they shared their optimistic view that “judges can understand the fundamental characteristics that separate good science from bad.” Brief at 210. Under the gatekeeping regime contemplated by the AAAS and the NAS, judges would have to think and analyze, rather than delegating to juries. In carrying out their task, judges would not be starting with a blank slate:

“When faced with disputes about expert scientific testimony, judges should make full use of the scientific community’s criteria and quality-control mechanisms. To be admissible, scientific evidence should conform to scientific standards and should be based on methods that are generally accepted by the scientific community as valid and reliable.”

Brief at 210. Questions such as whether an hypothesis has survived repeated severe, rigorous tests, whether the hypothesis is consistent with other existing scientific theories, and whether the results of the tests have been presented to the scientific community need to be answered affirmatively before juries are permitted to weigh in with their verdicts. Brief at 216, 217.

The AAAS and the NAS acknowledged implicitly and explicitly that courtrooms were not good places to trot out novel hypotheses, which lacked severe testing and sufficient evidentiary support. New theories must survive repeated testing and often undergo substantial refinements before they can be accepted in the scientific community. The scientific method requires nothing less. Brief at 219. These organizational amici also acknowledged that there will occasionally be “truly revolutionary advances” in the form of an hypothesis not fully tested. The danger of injecting bad science into broader decisions (such as encouraging meritless litigation, or the abandonment of useful products) should cause courts to view unestablished hypotheses with “heightened skepticism pending further testing and review.” Brief at 229. In other words, some hypotheses simply have not matured to the point at which they can support tort or other litigation.

The AAAS and the NAS contemplated that the gatekeeping process could and should incorporate the entire apparatus of scientific validity determinations into Rule 104(a) adjudications. Nowhere in their remarkable amicus brief do they suggest that if there is some evidence (however weak) favoring a causal claim, with nothing yet available to weigh against it, expert witnesses can declare that they have the “weight of the evidence” on their side, and gain a ticket to the courthouse door. The scientists at SKAPP, or now those at the Union of Concerned Scientists, prefer to brand gatekeeping as a trick to sell “doubt.” What they fail to realize is that their propaganda threatens both universalism and organized skepticism, two of the four scientific institutional norms described by the sociologist of science Robert K. Merton.12

1 United States v. Brown, 557 F.2d 541, 556 (6th Cir. 1977) (“Because of its apparent objectivity, an opinion that claims a scientific basis is apt to carry undue weight with the trier of fact”); United States v. Addison, 498 F.2d 741, 744 (D.C. Cir. 1974) (“scientific proof may in some instances assume a posture of mystic infallibility in the eyes of a jury of laymen”). Some people say that our current political morass reflects poorly on the ability of United States citizens to assess and evaluate evidence and claims to the truth.

2 See, e.g., Ethyl Corp. v. EPA, 541 F.2d 1, 28 n.58 (D.C. Cir.), cert. denied, 426 U.S. 941 (1976). See also “Rhetorical Strategy in Characterizing Scientific Burdens of Proof” (Nov. 15, 2014).

3 See, e.g., Project on Scientific Knowledge and Public Policy, “Daubert: The Most Influential Supreme Court Ruling You’ve Never Heard Of” (2003).

4 See, e.g., “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).

7 Notes of Testimony by David Michaels, in Ingham v. Johnson & Johnson, Case No. 1522-CC10417-01, St. Louis Circuit Ct, Missouri (April 17, 2018).

8 788 F.2d 741, 744-45 (11th Cir.), cert. denied, 479 U.S. 950 (1986). Remarkably, consultants for the litigation industry have continued to try to “rehabilitate” the Wells decision. See “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018).

9 James L. Mills & Duane Alexander, “Teratogens and Litogens,” 315 New Engl. J. Med. 1234, 1235 (1986).

10 Brief at n.31, citing In re Agent Orange Product Liab. Litig., 611 F. Supp. 1267, 1269 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

11 Citing, among other cases, Perry v. United States, 755 F.2d 888, 892 (11th Cir. 1985) (“A scientist who has a formed opinion as to the answer he is going to find before he even begins his research may be less objective than he needs to be in order to produce reliable scientific results.”).

12 Robert K. Merton, “The Normative Structure of Science,” in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, chap. 13, at 267, 270 (1973).

David Egilman and Friends Circle the Wagons at the International Journal of Occupational & Environmental Health

May 4th, 2017

Andrew Maier is an associate professor in the Department of Environmental Health at the University of Cincinnati. Maier received his Ph.D. degree in toxicology, with a master’s degree in industrial health. He is a Certified Industrial Hygienist and has published widely on occupational health issues. Earlier this year, Maier was named the editor-in-chief of the International Journal of Occupational and Environmental Health (IJOEH). See Casey Allen, “Andy Maier Named Editor of Environmental Health Journal” (Jan. 18, 2017).

Before Maier’s appointment, the IJOEH was, for the last several years, the vanity press for former editor-in-chief David Egilman and “The Lobby,” the expert witness brigade of the lawsuit industry. Egilman’s replacement with Andrew Maier apparently took place after the IJOEH was acquired by the scientific publishing company Taylor & Francis, from the former publisher, Maney.

The new owner, however, left the former IJOEH editorial board, largely a gaggle of Egilman friends and fellow travelers, in place. Last week, the editorial board revoltingly wrote [contact information redacted] to Roger Horton, Chief Executive Officer of Taylor & Francis, to request that Egilman be restored to power, or that the current Editorial Board be empowered to choose Egilman’s successor. With Trump-like disdain for evidence, the Board characterized the new Editor as a “corporate consultant.” If Maier has consulted with corporations, his work appears to have rarely if ever landed him in a courtroom at the request of a corporate defendant. And with knickers tightly knotted, the Board also made several other demands for control over Board membership and journal content.

Andrew Watterson wrote to Horton on behalf of all current and former IJOEH Editorial Board members, a group heavily populated by plaintiffs’ litigation expert witnesses and “political” scientists, including among others:

Arthur Frank

Morris Greenberg

Barry S. Levy

David Madigan

Jock McCulloch

David Wegman

Barry Castleman

Peter Infante

Ron Melnick

Daniel Teitelbaum

None of the signatories apparently disclosed their affiliations as corporate consultants for the lawsuit industry.

Removing Egilman from control was bad enough, but the coup de grâce for the Lobby came earlier in April 2017, when Taylor & Francis notified Egilman that a paper that he had published in the IJOEH was being withdrawn. According to the petitioners, the paper, “The production of corporate research to manufacture doubt about the health hazards of products: an overview of the Exponent Bakelite simulation study,” was removed without explanation. See “Public health journal’s editorial board tells publisher they have ‘grave concerns’ over new editor,” Retraction Watch (April 27, 2017).

According to Taylor & Francis, the Egilman article was “published inadvertently, before the review process had been completed. On completing that review, it was decided the article was unsuitable for publication in the journal.” Id. Well, of course, Egilman’s article was unlikely to receive much analytical scrutiny at a journal where he was Editor-in-Chief, and where the Board was populated by his buddies. The same could be said for many articles published under Egilman’s tenure at the IJOEH. Certainly, the law department at Taylor & Francis should make sure that it does not give Egilman and his former Board of Editors grounds for litigation. They are, after all, tight with the lawsuit industry. More important, Taylor & Francis owes Dr. Egilman, as well as the scientific and legal community, a full explanation of why the article in question was unsuitable for publication in the IJOEH.

Don’t Double Dip Data

March 9th, 2015

Meta-analyses have become commonplace in epidemiology and in other sciences. When well conducted and transparently reported, meta-analyses can be extremely helpful. In several litigations, meta-analyses determined the outcome of the medical causation issues. In the silicone gel breast implant litigation, after defense expert witnesses proffered meta-analyses[1], court-appointed expert witnesses adopted the approach and featured meta-analyses in their reports to the MDL court[2].

In the welding fume litigation, plaintiffs’ expert witness offered a crude, non-quantified, “vote counting” exercise to argue that welding causes Parkinson’s disease[3]. In rebuttal, one of the defense expert witnesses offered a quantitative meta-analysis, which provided strong evidence against plaintiffs’ claim.[4] Although the welding fume MDL court excluded the defense expert’s meta-analysis from the pre-trial Rule 702 hearing as untimely, plaintiffs’ counsel soon thereafter initiated settlement discussions of the entire set of MDL cases. Subsequently, the defense expert witness, with his professional colleagues, published an expanded version of the meta-analysis.[5]

And last month, a meta-analysis proffered by a defense expert witness helped dispatch a long-festering litigation in New Jersey’s multi-county isotretinoin (Accutane) litigation. In re Accutane Litig., No. 271(MCL), 2015 WL 753674 (N.J. Super., Law Div., Atlantic Cty., Feb. 20, 2015) (excluding plaintiffs’ expert witness David Madigan).

Of course, when a meta-analysis is done improperly, the resulting analysis may be worse than none at all. Some methodological flaws involve arcane statistical concepts and procedures, and may be easily missed. Other flaws are flagrant and call for a gatekeeping bucket brigade.

When a merchant puts his hand on the scale at the check-out counter, we call that fraud. When George Costanza dipped his chip twice in the chip dip, he was properly called out for his boorish and unsanitary practice. When a statistician or epidemiologist produces a meta-analysis that double counts crucial data to inflate a summary estimate of association, or to create spurious precision in the estimate, we don’t need to crack open Modern Epidemiology or the Reference Manual on Scientific Evidence to know that something fishy has taken place.

In litigation involving claims that selective serotonin reuptake inhibitors cause birth defects, plaintiffs’ expert witness, a perinatal epidemiologist, relied upon two published meta-analyses[6]. In an examination before trial, this epidemiologist was confronted with the double counting (and other data entry errors) in the relied-upon meta-analyses, and she readily agreed that the meta-analyses were improperly done and that she had to abandon her reliance upon them.[7] The result of the expert witness’s deposition epiphany, however, was that she no longer had the illusory benefit of an aggregation of data, with an outcome supporting her opinion. The further consequence was that her opinion succumbed to a Rule 702 challenge. See In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL No. 2342; 12-md-2342, 2014 U.S. Dist. LEXIS 87592; 2014 WL 2921648 (E.D. Pa. June 27, 2014) (Rufe, J.).

Double counting of studies, or subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advance training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies. Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009). Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Id. Senn urges analysts to “[b]e vigilant about double counting,” id. at *4, and recommends that journals should “withdraw meta-analyses promptly when mistakes are found,” id. at *1.
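
Double counting is not merely an aesthetic lapse; it mechanically inflates the apparent precision of the summary estimate. The following sketch, using invented study data (the odds ratios and standard errors are hypothetical, chosen only for illustration) and standard inverse-variance fixed-effect pooling, shows how entering one study twice shifts the pooled odds ratio toward the duplicated study and spuriously narrows the confidence interval:

```python
import math

def fixed_effect_pool(log_ors, ses):
    """Inverse-variance fixed-effect pooling of log odds ratios.

    Returns the pooled odds ratio and its 95% confidence interval.
    """
    weights = [1.0 / se**2 for se in ses]          # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))      # SE of the pooled estimate
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci

# Three hypothetical studies: log odds ratios and standard errors
log_ors = [math.log(1.3), math.log(0.9), math.log(1.1)]
ses = [0.20, 0.25, 0.15]

or_ok, ci_ok = fixed_effect_pool(log_ors, ses)
# Now enter the first study twice, as in the flawed meta-analyses
or_bad, ci_bad = fixed_effect_pool(log_ors + [log_ors[0]], ses + [ses[0]])

print("correct:       OR =", round(or_ok, 3), "CI =", ci_ok)
print("double-counted: OR =", round(or_bad, 3), "CI =", ci_bad)
# The double-counted analysis pulls the estimate toward the duplicated
# study and shrinks the confidence interval without any new evidence.
```

On these assumed inputs, the duplicated study drags the pooled estimate upward and tightens the interval, manufacturing precision out of nothing, which is exactly the vice Senn describes.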

Similar advice abounds in books and journals[8]. Professor Sander Greenland addresses the issue in his chapter on meta-analysis in Modern Epidemiology:

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap. * * * * *”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added).
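
Greenland’s “nonoverlap” condition can be checked mechanically before any pooling is done. The sketch below, with a hypothetical extraction table (the study labels and numbers are invented for illustration), flags rows that appear more than once, the very defect exposed in the deposition excerpted in the footnote below:

```python
from collections import Counter

# Hypothetical extraction table for a meta-analysis:
# (study label, effect estimate, lower 95% CI, upper 95% CI)
studies = [
    ("Alwan 2007",  1.20, 0.90, 1.60),
    ("Kornum 2010", 1.40, 1.00, 1.90),
    ("Louik 2007",  0.95, 0.70, 1.30),
    ("Kornum 2010", 1.40, 1.00, 1.90),  # same study entered twice
]

# Flag identical (label, estimate, CI) rows before pooling; identical
# numbers appearing under different labels would also warrant scrutiny.
dupes = [row for row, n in Counter(studies).items() if n > 1]
for row in dupes:
    print("Possible double counting:", row)
```

A check this simple belongs in any meta-analytic protocol; as the deposition testimony below shows, the duplicate can sit in plain view on the forest plot, waiting for someone to look.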

Just remember George Costanza; don’t double dip that chip, and don’t double dip in the data.

[1] See, e.g., Otto Wong, “A Critical Assessment of the Relationship between Silicone Breast Implants and Connective Tissue Diseases,” 23 Regulatory Toxicol. & Pharmacol. 74 (1996).

[2] See Barbara Hulka, Betty Diamond, Nancy Kerkvliet & Peter Tugwell, “Silicone Breast Implants in Relation to Connective Tissue Diseases and Immunologic Dysfunction:  A Report by a National Science Panel to the Hon. Sam Pointer Jr., MDL 926 (Nov. 30, 1998)”; Barbara Hulka, Nancy Kerkvliet & Peter Tugwell, “Experience of a Scientific Panel Formed to Advise the Federal Judiciary on Silicone Breast Implants,” 342 New Engl. J. Med. 812 (2000).

[3] Deposition of Dr. Juan Sanchez-Ramos, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008514 (N.D. Ohio May 17, 2011).

[4] Deposition of Dr. James Mortimer, Street v. Lincoln Elec. Co., Case No. 1:06-cv-17026, 2011 WL 6008054 (N.D. Ohio June 29, 2011).

[5] James Mortimer, Amy Borenstein & Laurene Nelson, Associations of Welding and Manganese Exposure with Parkinson’s Disease: Review and Meta-Analysis, 79 Neurology 1174 (2012).

[6] Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar, and Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006).

[7] “Q So the question was: Have you read it carefully and do you understand everything that was done in the Nikfar meta-analysis?

A Yes, I think so.

* * *

Q And Nikfar stated that she included studies, correct, in the cardiac malformation meta-analysis?

A That’s what she says.

* * *

Q So if you look at the STATA output, the demonstrative, the — the forest plot, the second study is Kornum 2010. Do you see that?

A Am I —

Q You’re looking at figure four, the cardiac malformations.

A Okay.

Q And Kornum 2010, —

A Yes.

Q — that’s a study you relied upon.

A Mm-hmm.

Q Is that right?

A Yes.

Q And it’s on this forest plot, along with its odds ratio and confidence interval, correct?

A Yeah.

Q And if you look at the last study on the forest plot, it’s the same study, Kornum 2010, same odds ratio and same confidence interval, true?

A You’re right.

Q And to paraphrase My Cousin Vinny, no self-respecting epidemiologist would do a meta-analysis by including the same study twice, correct?

A Well, that was an error. Yeah, you’re right.

* * *

Q Instead of putting 2 out of 98, they extracted the data and put 9 out of 28.

A Yeah. You’re right.

Q So there’s a numerical transposition that generated a 25-fold increased risk; is that right?

A You’re correct.

Q And, again, to quote My Cousin Vinny, this is no way to do a meta-analysis, is it?

A You’re right.”

Testimony of Anick Bérard, Kuykendall v. Forest Labs, at 223:14-17; 238:17-20; 239:11-240:10; 245:5-12 (Cole County, Missouri; Nov. 15, 2013). According to a Google Scholar search, the Rahimi 2006 meta-analysis had been cited 90 times; the Nikfar 2012 meta-analysis, 11 times, as recently as this month. See, e.g., Etienne Weisskopf, Celine J. Fischer, Myriam Bickle Graz, Mathilde Morisod Harari, Jean-Francois Tolsa, Olivier Claris, Yvan Vial, Chin B. Eap, Chantal Csajka & Alice Panchaud, “Risk-benefit balance assessment of SSRI antidepressant use during pregnancy and lactation based on best available evidence,” 14 Expert Op. Drug Safety 413 (2015); Kimberly A. Yonkers, Katherine A. Blackwell & Ariadna Forray, “Antidepressant Use in Pregnant and Postpartum Women,” 10 Ann. Rev. Clin. Psychol. 369 (2014); Abbie D. Leino & Vicki L. Ellingrod, “SSRIs in pregnancy: What should you tell your depressed patient?” 12 Current Psychiatry 41 (2013).

[8] Julian Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008) (“7.2.2 Identifying multiple reports from the same study. Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramèr 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”); see also id. at 298 (Table 10.1.a “Definitions of some types of reporting biases”); id. at 304-05 (Duplicate (multiple) publication bias … “The inclusion of duplicated data may therefore lead to overestimation of intervention effects.”); Julian P.T. Higgins, Peter W. Lane, Betsy Anagnostelis, Judith Anzures-Cabrera, Nigel F. Baker, Joseph C. Cappelleri, Scott Haughie, Sally Hollis, Steff C. Lewis, Patrick Moneuse & Anne Whitehead, “A tool to assess the quality of a meta-analysis,” 4 Research Synthesis Methods 351, 363 (2013) (“A common error is to double-count individuals in a meta-analysis.”); Alessandro Liberati, Douglas G. Altman, Jennifer Tetzlaff, Cynthia Mulrow, Peter C. Gøtzsche, John P.A. Ioannidis, Mike Clarke, Devereaux, Jos Kleijnen, and David Moher, “The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration,” 151 Ann. Intern. Med. W-65, W-75 (2009) (“Some studies are published more than once. Duplicate publications may be difficult to ascertain, and their inclusion may introduce bias. We advise authors to describe any steps they used to avoid double counting and piece together data from multiple reports of the same study (e.g., juxtaposing author names, treatment comparisons, sample sizes, or outcomes).”) (internal citations omitted); Erik von Elm, Greta Poglia, Bernhard Walder, and Martin R. Tramèr, “Different patterns of duplicate publication: an analysis of articles used in systematic reviews,” 291 J. Am. Med. Ass’n 974 (2004); John Andy Wood, “Methodology for Dealing With Duplicate Study Effects in a Meta-Analysis,” 11 Organizational Research Methods 79, 79 (2008) (“Dependent studies, duplicate study effects, nonindependent studies, and even covert duplicate publications are all terms that have been used to describe a threat to the validity of the meta-analytic process.”) (internal citations omitted); Martin R. Tramèr, D. John M. Reynolds, R. Andrew Moore, Henry J. McQuay, “Impact of covert duplicate publication on meta-analysis: a case study,” 315 Brit. Med. J. 635 (1997); Beverley J. Shea, Jeremy M. Grimshaw, George A. Wells, Maarten Boers, Neil Andersson, Candyce Hamel, Ashley C. Porter, Peter Tugwell, David Moher, and Lex M. Bouter, “Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews,” 7 BMC Medical Research Methodology 10 (2007) (systematic reviews must inquire whether there was “duplicate study selection and data extraction”).

Beware the Academic-Publishing Complex!

January 11th, 2012

Today’s New York Times contains an important editorial on an attempt by some congressmen to undermine access to federally funded research. See Michael B. Eisen, “Research Bought, Then Paid For,” New York Times (January 11, 2012). Eisen’s editorial alerts us to this attempt to undo the federal requirement that federally funded medical research be made available, free of charge, on the National Library of Medicine’s (NLM) Web site.

As a founder of the Public Library of Science (PLoS), which is committed to promoting and implementing the free distribution of scientific research, Eisen might be regarded as an “interested” or biased commentator. Such a simple-minded ascription of bias would be wrong. PLoS has become an important distributor of research results in the world of science, and it competes with the publishing oligarchies: Elsevier, Springer, and others. Articles of the sort that PLoS makes available for free are sold by these publishers for $40 or more apiece, and subscriptions from these oligarchical sources are often priced in the thousands of dollars per year. Eisen’s simple and unassailable point is that the public, whether medical professionals, patients, citizens, students, or teachers, should be able to read the results of research funded with their tax monies.

“[I]f the taxpayers paid for it, they own it.”

If the United States government and its employees do not enjoy copyright protections for their creative work (and they do not), then neither should their contractors.

Public access is all the more important given that the mainstream media seems so reluctant or unable to cover scientific research in a thoughtful and incisive way.

The Bill goes beyond merely unraveling the requirement of making published papers available free of charge at the NLM. The language of the Bill, H.R. 3699, the Research Works Act, creates a false dichotomy between public and private sector research:


No Federal agency may adopt, implement, maintain, continue, or otherwise engage in any policy, program, or other activity that—

(1) causes, permits, or authorizes network dissemination of any private-sector research work without the prior consent of the publisher of such work … .

Work that is conducted in private or in state universities, but funded by the federal taxpayers, cannot be called “private” in any meaningful sense. The public’s access to this research, as well as to its underlying data, is especially important when the subject matter of the research involves issues material to public policy and litigation disputes.

Who is behind this bailout for the private-sector publishing industry? Congressman Darrell Issa (California) introduced the Bill on December 16, 2011. The Bill was cosponsored by Congresswoman Carolyn B. Maloney, the Democratic representative of New York’s 14th district. Oh Lord, Congresswoman Maloney represents me! NOT. How humiliating to be associated with this regressive measure.

This heavy-handed piece of legislation was referred to the House Committee on Oversight and Government Reform. Let us hope it dies a quick death in committee. See Michael Eisen, “Elsevier-funded NY Congresswoman Carolyn Maloney Wants to Deny Americans Access to Taxpayer Funded Research” (Jan. 5, 2012).