TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Role of Peer Review in Rule 702 and 703 Gatekeeping

November 19th, 2023

“There is no expedient to which man will not resort to avoid the real labor of thinking.”
              Sir Joshua Reynolds (1723-92)

Some courts appear to duck the real labor of thinking, and the duty to gatekeep expert witness opinions, by deferring to expert witnesses who advert to their reliance upon peer-reviewed published studies. Does the law really support such deference, especially when problems with the relied-upon studies are revealed in discovery? A careful reading of the Supreme Court’s decision in Daubert, and of the Reference Manual on Scientific Evidence, provides no support for admitting expert witness opinion testimony that relies upon peer-reviewed published studies, when the studies are invalid or are based upon questionable research practices.[1]

In Daubert v. Merrell Dow Pharmaceuticals, Inc.,[2] the Supreme Court suggested that peer review of studies relied upon by a challenged expert witness should be a factor in determining the admissibility of that expert witness’s opinion. In thinking about the role of peer-review publication in expert witness gatekeeping, it is helpful to remember the context of how and why the Supreme Court was talking about peer review in the first place. In the trial court, the Daubert plaintiff had proffered an expert witness opinion that featured reliance upon an unpublished reanalysis of published studies. On the defense motion, the trial court excluded the claimant’s witness,[3] and the Ninth Circuit affirmed.[4] The intermediate appellate court expressed its view that unpublished, non-peer-reviewed reanalyses were deviations from generally accepted scientific discourse, and that other appellate courts, considering the alleged risks of Bendectin, refused to admit opinions based upon unpublished, non-peer-reviewed reanalyses of epidemiologic studies.[5] The Circuit expressed its view that reanalyses are generally accepted by scientists when they have been verified and scrutinized by others in the field. Unpublished reanalyses done solely for litigation would be an insufficient foundation for expert witness opinion.[6]

The Supreme Court, in Daubert, evaded the difficult issues involved in evaluating a statistical analysis that has not been published by deciding the case on the ground that the lower courts had applied the wrong standard. The so-called Frye test, or what I call the “twilight zone” test, comes from the heralded 1923 case excluding opinion testimony based upon a lie detector:

“Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while the courts will go a long way in admitting expert testimony deduced from a well recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.”[7]

The Supreme Court, in Daubert, held that with the promulgation of the Federal Rules of Evidence in 1975, the twilight zone test was no longer legally valid. The guidance for admitting expert witness opinion testimony lay in Federal Rule of Evidence 702, which outlined an epistemic test for “knowledge,” which would be helpful to the trier of fact. The Court then proceeded to articulate several non-definitive factors for “good science,” which might guide trial courts in applying Rule 702, such as testability or falsifiability, and a showing of a known or potential error rate. General acceptance, carried over from Frye, remained another consideration.[8] Courts have continued to build on this foundation to identify other relevant considerations in gatekeeping.[9]

One of the Daubert Court’s pertinent considerations was “whether the theory or technique has been subjected to peer review and publication.”[10] The Court, speaking through Justice Blackmun, provided a reasonably cogent, but probably now outdated, discussion of peer review:

“Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, see S. Jasanoff, The Fifth Branch: Science Advisors as Policymakers 61-76 (1990), and in some instances well-grounded but innovative theories will not have been published, see Horrobin, “The Philosophical Basis of Peer Review and the Suppression of Innovation,” 263 JAMA 1438 (1990). Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of “good science,” in part because it increases the likelihood that substantive flaws in methodology will be detected. See J. Ziman, Reliable Knowledge: An Exploration of the Grounds for Belief in Science 130-133 (1978); Relman & Angell, “How Good Is Peer Review?,” 321 New Eng. J. Med. 827 (1989). The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.”[11]

To the extent that peer review was touted by Justice Blackmun, it was because the peer-review process advanced the ultimate consideration of the scientific validity of the opinion or claim under consideration. Validity was the thing; peer review was just a crude proxy.

If the Court were writing today, it might well have written that peer review is often a feature of bad science, advanced by scientists who know that peer-reviewed publication is the price of admission to the advocacy arena. And of course, the wild proliferation of journals, including the “pay-to-play” journals, facilitates the festschrift.

Reference Manual on Scientific Evidence

Certainly, judicial thinking has evolved since 1993 and the decision in Daubert. Other considerations for gatekeeping have been added. Importantly, Daubert involved the interpretation of a statute, and in 2000, the statute was amended.

Since the Daubert decision, the Federal Judicial Center and the National Academies of Sciences have weighed in with what is intended to be guidance for judges and lawyers litigating scientific and technical issues. The Reference Manual on Scientific Evidence is currently in its third edition, but a fourth edition is expected in 2024.

How does the third edition[12] treat peer review?

An introduction by the now-retired Associate Justice Stephen Breyer blandly reports the Daubert considerations, without elaboration.[13]

The most revealing and important chapter in the Reference Manual is the one on scientific method and procedure, and the sociology of science, “How Science Works,” by Professor David Goodstein.[14] This chapter’s treatment is not always consistent. In places, the discussion of peer review is trenchant; in other places, it can be misleading. Goodstein’s treatment, at first, appears to be a glib endorsement of peer review as a substitute for critical thinking about a relied-upon published study:

“In the competition among ideas, the institution of peer review plays a central role. Scientific articles submitted for publication and proposals for funding often are sent to anonymous experts in the field, in other words, to peers of the author, for review. Peer review works superbly to separate valid science from nonsense, or, in Kuhnian terms, to ensure that the current paradigm has been respected.11 It works less well as a means of choosing between competing valid ideas, in part because the peer doing the reviewing is often a competitor for the same resources (space in prestigious journals, funds from government agencies or private foundations) being sought by the authors. It works very poorly in catching cheating or fraud, because all scientists are socialized to believe that even their toughest competitor is rigorously honest in the reporting of scientific results, which makes it easy for a purposefully dishonest scientist to fool a referee. Despite all of this, peer review is one of the venerated pillars of the scientific edifice.”[15]

A more nuanced and critical view emerges in footnote 11, from the above-quoted passage, when Goodstein discusses how peer review was framed by some amici curiae in the Daubert case:

“The Supreme Court received differing views regarding the proper role of peer review. Compare Brief for Amici Curiae Daryl E. Chubin et al. at 10, Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993) (No. 92-102) (“peer review referees and editors limit their assessment of submitted articles to such matters as style, plausibility, and defensibility; they do not duplicate experiments from scratch or plow through reams of computer-generated data in order to guarantee accuracy or veracity or certainty”), with Brief for Amici Curiae New England Journal of Medicine, Journal of the American Medical Association, and Annals of Internal Medicine in Support of Respondent, Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993) (No. 92-102) (proposing that publication in a peer-reviewed journal be the primary criterion for admitting scientific evidence in the courtroom). See generally Daryl E. Chubin & Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (1990); Arnold S. Relman & Marcia Angell, How Good Is Peer Review? 321 New Eng. J. Med. 827–29 (1989). As a practicing scientist and frequent peer reviewer, I can testify that Chubin’s view is correct.”[16]

So, if, as Professor Goodstein attests, Chubin is correct that peer review does not “guarantee accuracy or veracity or certainty,” the basis for veneration is difficult to fathom.

Later in Goodstein’s chapter, in a section entitled “V. Some Myths and Facts about Science,” the gloves come off:[17]

“Myth: The institution of peer review assures that all published papers are sound and dependable.

Fact: Peer review generally will catch something that is completely out of step with majority thinking at the time, but it is practically useless for catching outright fraud, and it is not very good at dealing with truly novel ideas. Peer review mostly assures that all papers follow the current paradigm (see comments on Kuhn, above). It certainly does not ensure that the work has been fully vetted in terms of the data analysis and the proper application of research methods.”[18]

Goodstein is not a post-modern nihilist. He acknowledges that “real” science can be distinguished from “not real science.” But he can hardly be seen to have given a full-throated endorsement of peer review as satisfying the gatekeeper’s obligation to evaluate whether a study can reasonably be relied upon, whether reliance upon a particular peer-reviewed study can constitute sufficient evidence to render an expert witness’s opinion helpful, or whether the challenged opinion reflects the application of a reliable methodology.

Goodstein cites, with apparent approval, the amicus brief filed by the New England Journal of Medicine and other journals, which advised the Supreme Court that “good science” requires “a rigorous trilogy of publication, replication and verification before it is relied upon.”[19]

“Peer review’s ‘role is to promote the publication of well-conceived articles so that the most important review, the consideration of the reported results by the scientific community, may occur after publication.’”[20]

Outside of Professor Goodstein’s chapter, the Reference Manual devotes very little ink or analysis to the role of peer review in assessing Rule 702 or 703 challenges to witness opinions or specific studies. The engineering chapter acknowledges that “[t]he topic of peer review is often raised concerning scientific and technical literature,” and helpfully supports Goodstein’s observations by noting that peer review “does not ensure accuracy or validity.”[21]

The chapter on neuroscience is one of the few chapters in the Reference Manual, other than Professor Goodstein’s, to address the limitations of peer review. The absence of peer review is highly suspicious, but its presence is only the beginning of an evaluation process that continues after publication:

Daubert’s stress on the presence of peer review and publication corresponds nicely to scientists’ perceptions. If something is not published in a peer-reviewed journal, it scarcely counts. Scientists only begin to have confidence in findings after peers, both those involved in the editorial process and, more important, those who read the publication, have had a chance to dissect them and to search intensively for errors either in theory or in practice. It is crucial, however, to recognize that publication and peer review are not in themselves enough. The publications need to be compared carefully to the evidence that is proffered.[22]

The neuroscience chapter goes on to discuss peer review in the narrow context of functional magnetic resonance imaging (fMRI). The authors note that fMRI, as a medical procedure, has been the subject of thousands of peer-reviewed studies, but those studies do little to validate the use of fMRI as a high-tech lie detector.[23] The mental health chapter notes in a brief footnote that the science of memory is now well accepted and has been subjected to peer review, and that “[c]areful evaluators” use only tests that have had their “reliability and validity confirmed in peer-reviewed publications.”[24]

Echoing other chapters, the engineering chapter also mentions peer review briefly in connection with qualifying as an expert witness, and in validating the value of accrediting societies.[25] Finally, the chapter points out that engineering issues in litigation are often sufficiently novel that they have not been explored in peer-reviewed literature.[26]

Most of the other chapters of the Reference Manual, third edition, discuss peer review only in the context of qualifications and membership in professional societies.[27] The chapter on exposure science discusses peer review only in the narrow context of a claim that EPA guidance documents on exposure assessment are peer reviewed and are considered “authoritative.”[28]

Other chapters discuss peer review briefly and again only in very narrow contexts. For instance, the epidemiology chapter discusses peer review in connection with two very narrow issues peripheral to Rule 702 gatekeeping. First, the chapter raises the question (without providing a clear answer) whether non-peer-reviewed studies should be included in meta-analyses.[29] Second, the chapter asserts that “[c]ourts regularly affirm the legitimacy of employing differential diagnostic methodology,” to determine specific causation, on the basis of several factors, including the questionable claim that the methodology “has been subjected to peer review.”[30] There appears to be no discussion in this key chapter about whether, and to what extent, peer review of published studies can or should be considered in the gatekeeping of epidemiologic testimony. There is certainly nothing in the epidemiology chapter, or for that matter elsewhere in the Reference Manual, to suggest that reliance upon a peer-reviewed published study pretermits analysis of that study to determine whether it is indeed internally valid or reasonably relied upon by expert witnesses in the field.


[1] See Jop de Vrieze, “Large survey finds questionable research practices are common: Dutch study finds 8% of scientists have committed fraud,” 373 Science 265 (2021); Yu Xie, Kai Wang, and Yan Kong, “Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis,” 27 Science & Engineering Ethics 41 (2021).

[2] 509 U.S. 579 (1993).

[3] Daubert v. Merrell Dow Pharmaceuticals, Inc., 727 F. Supp. 570 (S.D. Cal. 1989).

[4] 951 F. 2d 1128 (9th Cir. 1991).

[5]  951 F. 2d, at 1130-31.

[6] Id. at 1131.

[7] Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923) (emphasis added).

[8]  Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 590 (1993).

[9] See, e.g., In re TMI Litig. II, 911 F. Supp. 775, 787 (M.D. Pa. 1995) (considering the relationship of the technique to methods that have been established to be reliable, the uses of the method in the actual scientific world, the logical or internal consistency and coherence of the claim, the consistency of the claim or hypothesis with accepted theories, and the precision of the claimed hypothesis or theory).

[10] Id. at 593.

[11] Id. at 593-94.

[12] National Research Council, Reference Manual on Scientific Evidence (3d ed. 2011) [RMSE].

[13] Id., “Introduction” at 1, 13.

[14] David Goodstein, “How Science Works,” RMSE 37.

[15] Id. at 44-45.

[16] Id. at 44-45 n. 11 (emphasis added).

[17] Id. at 48 (emphasis added).

[18] Id. at 49 n.16 (emphasis added).

[19] David Goodstein, “How Science Works,” RMSE 64 n.45 (citing Brief for the New England Journal of Medicine, et al., as Amici Curiae supporting Respondent, 1993 WL 13006387, at *2, in Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579 (1993)).

[20] Id. (citing Brief for the New England Journal of Medicine, et al., 1993 WL 13006387, at *3).

[21] Channing R. Robertson, John E. Moalli, David L. Black, “Reference Guide on Engineering,” RMSE 897, 938 (emphasis added).

[22] Henry T. Greely & Anthony D. Wagner, “Reference Guide on Neuroscience,” RMSE 747, 786.

[23] Id. at 776, 777.

[24] Paul S. Appelbaum, “Reference Guide on Mental Health Evidence,” RMSE 813, 866, 886.

[25] Channing R. Robertson, John E. Moalli, David L. Black, “Reference Guide on Engineering,” RMSE 897, 901, 931.

[26] Id. at 935.

[27] Daniel Rubinfeld, “Reference Guide on Multiple Regression,” RMSE 303, 328 (“[w]ho should be qualified as an expert?”); Shari Seidman Diamond, “Reference Guide on Survey Research,” RMSE 359, 375; Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” RMSE 633, 677, 678 (noting that membership in some toxicology societies turns in part on having published in peer-reviewed journals).

[28] Joseph V. Rodricks, “Reference Guide on Exposure Science,” RMSE 503, 508 (noting that EPA guidance documents on exposure assessment often are issued after peer review).

[29] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” RMSE 549, 608.

[30] Id. at 617-18 n.212.

Collegium Ramazzini & Its Fellows – The Lobby

November 19th, 2023

Back in 1997, Francis Douglas Kelly Liddell, a real scientist in the area of asbestos and disease, had had enough of the insinuations, slanders, and bad science from the minions of Irving John Selikoff.[1] Liddell broke with the norms of science and called out his detractors for what they were doing:

 “[A]n anti-asbestos lobby, based in the Mount Sinai School of Medicine of the City University of New York, promoted the fiction that asbestos was an all-pervading menace, and trumped up a number of asbestos myths for widespread dissemination, through media eager for bad news.”[2]

What Liddell did not realize is that the Lobby had become institutionalized in the form of an organization, the Collegium Ramazzini, started by Selikoff under false pretenses.[3] Although the Collegium operates with some degree of secrecy, the open and sketchy conduct of its members suggests that we could use the terms “the Lobby” and “the Collegium Ramazzini” interchangeably.

Ramazzini founder Irving Selikoff had an unfortunate track record for perverting the course of justice. Selikoff conspired with Ron Motley and others to bend judges with active asbestos litigation dockets by inviting them to a one-sided conference on asbestos science, and by paying for their travel and lodging. Presenters included key expert witnesses for plaintiffs; defense expert witnesses were conspicuously not invited to the conference. In his invitation to this ex parte soirée, Selikoff failed to mention that the funding came from plaintiffs’ counsel. Selikoff’s shenanigans led to the humiliation and disqualification of James McGirr Kelly,[4] the federal judge in charge of the asbestos school property damage litigation.

Neither Selikoff nor the co-conspirator counsel for plaintiffs ever apologized for their ruse. The disqualification did lead to a belated disclosure and mea culpa from the late Judge Jack Weinstein. Because of a trial in progress, Judge Weinstein did not attend the plaintiffs’ dog-and-pony show, Selikoff’s so-called “Third Wave” conference, but Judge Weinstein and a New York state trial judge, Justice Helen Freedman, attended an ex parte private luncheon meeting with Dr. Selikoff. Here is how Judge Weinstein described the event:

“But what I did may have been even worse [than Judge Kelly’s conduct that led to his disqualification]. A state judge and I were attempting to settle large numbers of asbestos cases. We had a private meeting with Dr. Irwin [sic] J. Selikoff at his hospital office to discuss the nature of his research. He had never testified and would never testify. Nevertheless, I now think that it was a mistake not to have informed all counsel in advance and, perhaps, to have had a court reporter present and to have put that meeting on the record.”[5]

Judge Weinstein’s false statement that Selikoff “had never testified”[6] not only reflects an incredible and uncharacteristic naiveté in a distinguished evidence law scholar; the false statement also appeared in Judicature, a journal that was, and is, widely circulated to state and federal judges. The source of the lie appears to have been Selikoff himself, in the ethically dodgy ex parte meeting with judges actively presiding over asbestos personal injury cases.

The point apparently weighed on Judge Weinstein’s conscience. He repeated his mea culpa almost verbatim, along with the false statement about Selikoff’s having never testified, in a law review article in 1994, and then incorporated the misrepresentation into a full-length book.[7] I have no doubt that Judge Weinstein did not intend to mislead anyone; like many others, he had been duped by Selikoff’s deception.

There is no evidence that Selikoff was acting as an authorized agent for the Collegium Ramazzini in conspiring to influence trial judges, or in lying to Judge Weinstein and Justice Freedman, but Selikoff was the founder of the Collegium, and his conduct seems to have set a norm for the organization. Furthermore, the Third-Wave Conference was sponsored by the Collegium. In 1993, two years after the Third Wave misconduct, the Collegium created an award in Selikoff’s name.[8] Perhaps the award was the Collegium’s ratification of Selikoff’s misdeeds. Two of the recipients, Stephen M. Levin and Yasunosuke Suzuki, were “regulars” as expert witnesses for plaintiffs in asbestos litigation. The Selikoff Award is funded by the Irving J. Selikoff Endowment of the Collegium Ramazzini. The Collegium can fairly be said to be the continuation of Selikoff’s work in the form of an advocacy organization.

Selikoff’s Third-Wave Conference and his lies to two key judges would not be the last of the efforts to pervert the course of justice. With the Selikoff imprimatur and template in hand, Fellows of the Collegium have carried on, by carrying on. Collegium Fellows Carl F. Cranor and Martyn T. Smith served as partisan paid expert witnesses in the notorious Milward case.[9]

After the trial court excluded the proffered opinions of Cranor and Smith, plaintiff appealed, with the help of an amicus brief filed by The Council for Education and Research on Toxics (CERT). The plaintiffs’ counsel, Cranor and Smith, CERT, and counsel for CERT all failed to disclose that CERT was founded by the two witnesses, Cranor and Smith, whose exclusion was at the heart of the appeal.[10] Among the 27 signatories to the CERT amicus brief, a majority (15) were fellows of the Collegium Ramazzini. Others may have been members but not fellows. Many of the signatories, whether or not members or fellows of the Collegium, were frequent testifiers for plaintiffs’ counsel.

None raised any ethical qualms about the obvious conflict of interest arising from how scrupulous gatekeeping might hurt their testimonial income, or about their (witting or unwitting) participation in CERT’s conspiracy to pervert the course of justice.[11]

The CERT amici signatories are listed below. The bold names are identified as Collegium fellows at its current website. The asterisks indicate those who have testified in tort litigation; please accept my apologies if I missed anyone.

Nicholas A. Ashford,
Nachman Brautbar,*
David C. Christiani,*
Richard W. Clapp,*
James Dahlgren,*
Devra Lee Davis,
Malin Roy Dollinger,*
Brian G. Durie,
David A. Eastmond,
Arthur L. Frank,*
Frank H. Gardner,
Peter L. Greenberg,
Robert J. Harrison,
Peter F. Infante,*
Philip J. Landrigan,
Barry S. Levy,*
Melissa A. McDiarmid,
Myron Mehlman,
Ronald L. Melnick,*
Mark Nicas,*
David Ozonoff,*
Stephen M. Rappaport,
David Rosner,*
Allan H. Smith,*
Daniel Thau Teitelbaum,*
Janet Weiss,* and
Luoping Zhang

This D & C (deception and charade) was repeated on other occasions when Collegium fellows and members signed amicus briefs without any disclosures of conflicts of interest. In Rost v. Ford Motor Co.,[12] for instance, an amicus brief was filed by “58 physicians and scientists,” many of whom were Collegium fellows.[13]

Ramazzini Fellows David Michaels and Celeste Monforton were both involved in the notorious Project on Scientific Knowledge and Public Policy (SKAPP) organization, which consistently misrepresented its funding from plaintiffs’ lawyers as having come from a “court fund.”[14]

Despite Selikoff’s palaver about how the Collegium would seek consensus and open discussions, it has become an echo-chamber for the rent-seeking mass-tort lawsuit industry, for the hyperbolic critics of any industry position, and for the credulous shills for any pro-labor position. In its statement about membership, the Collegium warns that

“Persons who have any type of links which may compromise the authenticity of their commitment to the mission of the Collegium Ramazzini do not qualify for Fellowship. Likewise, persons who have any conflict of interest that may negatively affect his or her impartiality as a researcher should not be nominated for Fellowship.”

This exclusionary criterion ensures lack of viewpoint diversity, and makes the Collegium an effective proxy for the law industry in the United States.

Among the Collegium’s current and past fellows, we can find many familiar names from the annals of tort litigation, all expert witnesses for plaintiffs, and virtually always only for plaintiffs. After over 40 years at the bar, I do not recognize a single name of anyone who has ever testified on behalf of a defendant in a tort case.

Henry A. Anderson

Barry I. Castleman      

Martin Cherniack

David Christiani 

Arthur Frank

Lennart Hardell 

David G. Hoel

Stephen M. Levin

Ronald L. Melnick

David Michaels

Celeste Monforton

Albert Miller

Nachman Brautbar

Christopher Portier

Steven B. Markowitz

Christine Oliver                 

Colin L. Soskolne

Yasunosuke Suzuki

Daniel Thau Teitelbaum

Laura Welch


[1] “The Lobby – Cut on the Bias” (July 6, 2020).

[2] F.D.K. Liddell, “Magic, Menace, Myth and Malice,” 41 Ann. Occup. Hyg. 3, 3 (1997).

[3] SeeThe Dodgy Origins of the Collegium Ramazzini” (Nov. 15, 2023).

[4] In re School Asbestos Litigation, 977 F.2d 764 (3d Cir. 1992). See Cathleen M. Devlin, “Disqualification of Federal Judges – Third Circuit Orders District Judge James McGirr Kelly to Disqualify Himself So As To Preserve ‘The Appearance of Justice’ Under 28 U.S.C. § 455 – In re School Asbestos Litigation (1992),” 38 Villanova L. Rev. 1219 (1993); Bruce A. Green, “May Judges Attend Privately Funded Educational Programs? Should Judicial Education Be Privatized?: Questions of Judicial Ethics and Policy,” 29 Fordham Urb. L.J. 941, 996-98 (2002).

[5] Jack B. Weinstein, “Learning, Speaking, and Acting: What Are the Limits for Judges?” 77 Judicature 322, 326 (May-June 1994) (emphasis added).

[6] “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010).

[7] See Jack B. Weinstein, “Limits on Judges’ Learning, Speaking and Acting – Part I- Tentative First Thoughts: How May Judges Learn?” 36 Ariz. L. Rev. 539, 560 (1994) (“He [Selikoff] had never testified and would never testify.”); Jack B. Weinstein, Individual Justice in Mass Tort Litigation: The Effect of Class Actions, Consolidations, and other Multi-Party Devices 117 (1995) (“A court should not coerce independent eminent scientists, such as the late Dr. Irving Selikoff, to testify if, like he, they prefer to publish their results only in scientific journals.”).

[8] See also “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011).

[9] Milward v. Acuity Specialty Products Group, Inc., 664 F.Supp.2d 137, 140 (D.Mass.2009), rev’d, 639 F. 3d 11 (1st Cir. 2011), cert. denied, 132 S.Ct. 1002 (2012).

[10]  See “The Council for Education and Research on Toxics” (July 9, 2013).

[11] “Carl Cranor’s Inference to the Best Explanation” (Dec. 12, 2021).

[12] Rost v. Ford Motor Co., 151 A.3d 1032, 1052 (Pa. 2016).

[13] “The Amicus Curious Brief” (Jan. 4, 2018).

[14] See, e.g., “SKAPP A LOT” (April 30, 2010); “Manufacturing Certainty” (Oct. 25, 2011); “David Michaels’ Public Relations Problem” (Dec. 2, 2011); “Conflicted Public Interest Groups” (Nov. 3, 2013).

The Dodgy Origins of the Collegium Ramazzini

November 15th, 2023

Or How Irving Selikoff and His Lobby (the Collegium Ramazzini) Fooled the Monsanto Corporation

Anyone who litigates occupational or environmental disease cases has heard of the Collegium Ramazzini. The group is named after a 17th century Italian physician, Bernardino Ramazzini, who is sometimes referred to as the father of occupational medicine.[1] His children have been an unruly lot. In Ramazzini’s honor, the Collegium was founded just over 40 years ago, to acclaim and promises of neutrality and consensus.

Back in May 1983, a United Press International reporter chronicled the high aspirations and the bipartisan origins of the Collegium.[2] The UPI reporter noted that the group was founded by the late Irving Selikoff, who is also well known in litigation circles. Selikoff held himself out as an authority on occupational and environmental medicine, but his actual training in medicine was dodgy. His training in epidemiology and statistics was non-existent.

Selikoff was, however, masterful at marketing and proselytizing. Selikoff would become known for misrepresenting his training, and for creating a mythology that he did not participate in litigation, that crocidolite was not used in products in the United States, and that asbestos would become a major cause of cancer in the United States, among other things.[3] It is thus no surprise that Selikoff successfully masked the intentions of the Ramazzini group, and was able to capture the support of two key legislators, Senators Charles Mathias (Rep., Maryland) and Frank Lautenberg (Dem., New Jersey), along with officials from both organized labor and industry.

Selikoff was able to snooker the Senators and officials with empty talk of a new organization that would work to obtain scientific consensus on occupational and environmental issues. It did not take long after its founding in 1983 for the Collegium to become a conclave of advocates and zealots.

The formation of the Collegium may have been one of Selikoff’s greatest deceptions. According to the UPI news report, Selikoff represented that the Collegium would not lobby or seek to initiate legislation, but rather would interpret scientific findings in accessible language, show the policy implications of these findings, and make recommendations. This representation was falsified fairly quickly, but certainly by 1999, when the Collegium called for legislation banning the use of asbestos. Selikoff had promised that the Collegium

“will advise on the adequacy of a standard, but will not lobby to have a standard set. Our function is not to condemn, but rather to be a conscience among scientists in occupational and environmental health.”

The Adventures of Pinocchio (1883); artwork by Enrico Mazzanti

Senator Mathias proclaimed the group to be “dedicated to the improvement of the human condition.” Perhaps no one was more snookered than the Monsanto Corporation, which helped fund the Collegium back in 1983. Monte Throdahl, a Monsanto senior vice president, reportedly expressed his hopes that the group would emphasize the considered judgments of disinterested scientists and not the advocacy and rent seeking of “reporters or public interest groups” on occupational medical issues. Forty years in, those hopes are long since gone. Recent Collegium meetings have been sponsored and funded by the National Institute of Environmental Health Sciences, Centers for Disease Control, National Cancer Institute, and Environmental Protection Agency. The time has come to cut off funding.


[1] Giuliano Franco & Francesca Franco, “Bernardino Ramazzini: The Father of Occupational Medicine,” 91 Am. J. Public Health 1382 (2001).

[2] Drew Von Bergen, “A group of international scientists, backed by two senators,” United Press International (May 10, 1983).

[3] “Selikoff Timeline & Asbestos Litigation History” (Feb. 26, 2023); “The Lobby – Cut on the Bias” (July 6, 2020); “The Legacy of Irving Selikoff & Wicked Wikipedia” (Mar. 1, 2015). See also “Hagiography of Selikoff” (Sept. 26, 2015); “Scientific Prestige, Reputation, Authority & The Creation of Scientific Dogmas” (Oct. 4, 2014); “Irving Selikoff – Media Plodder to Media Zealot” (Sept. 9, 2014); “Historians Should Verify Not Vilify or Abilify – The Difficult Case of Irving Selikoff” (Jan. 4, 2014); “Selikoff and the Mystery of the Disappearing Amphiboles” (Dec. 10, 2010); “Selikoff and the Mystery of the Disappearing Testimony” (Dec. 3, 2010).

Consensus is Not Science

November 8th, 2023

Ted Simon, a toxicologist and a fellow board member at the Center for Truth in Science, has posted an intriguing piece in which he labels scientific consensus a fool’s errand.[1] Ted begins his piece by channeling the late Michael Crichton, who famously derided consensus in science in his 2003 Caltech Michelin Lecture:

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science, consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

* * * *

There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”[2]

Crichton’s (and Simon’s) critique of consensus is worth remembering in the face of recent proposals by Professor Edward Cheng,[3] and others,[4] to make consensus the touchstone for the admissibility of scientific opinion testimony.

Consensus or general acceptance can be a proxy for conclusions drawn from valid inferences, within reliably applied methodologies, based upon sufficient evidence, quantitatively and qualitatively. When expert witnesses opine contrary to a consensus, they raise serious questions about how they came to their conclusions. Carl Sagan declaimed that “extraordinary claims require extraordinary evidence,” but his principle was hardly novel. Some authors quote the French polymath Pierre Simon Marquis de Laplace, who wrote in 1810: “[p]lus un fait est extraordinaire, plus il a besoin d’être appuyé de fortes preuves,”[5] but as the Quote Investigator documents,[6] the basic idea is much older, going back at least another century to a church rector who expressed his skepticism of a contemporary’s claim of direct communication with the almighty: “Sure, these Matters being very extraordinary, will require a very extraordinary Proof.”[7]

Ted Simon’s essay is also worth consulting because he notes that many sources of apparent consensus are really faux consensus, nothing more than self-appointed intellectual authoritarians who systematically have excluded some points of view, while turning a blind eye to their own positional conflicts.

Lawyers, courts, and academics should be concerned that Cheng’s “consensus principle” will change the focus from evidence, methodology, and inference, to a surrogate or proxy for validity. And the sociological notion of consensus will then require litigation of whether some group really has announced a consensus. Consensus statements in some areas abound, but inquiring minds may want to know whether they are the result of rigorous, systematic reviews of the pertinent studies, and whether the available studies can support the claimed consensus.

Professor Cheng is hard at work on a book-length explication of his proposal, and some criticism will have to await the event.[8] Perhaps Cheng will overcome the objections placed against his proposal.[9] Some of the examples Professor Cheng has given, however, do not inspire confidence, such as his errant, dramatic misreading of the American Statistical Association’s 2016 p-value consensus statement to mean, in Cheng’s words:

“[w]hile historically used as a rule of thumb, statisticians have now concluded that using the 0.05 [p-value] threshold is more distortive than helpful.”[10]

The 2016 Statement said no such thing, although a few statisticians attempted to distort the statement in the way that Cheng suggests. In 2021, a select committee of leading statisticians, appointed by the President of the ASA, issued a statement to make clear that the ASA had not embraced the Cheng misinterpretation.[11] This one example alone does not bode well for the viability of Cheng’s consensus principle.


[1] Ted Simon, “Scientific consensus is a fool’s errand made worse by IARC” (Oct. 2023).

[2] Michael Crichton, “Aliens Cause Global Warming,” Caltech Michelin Lecture (Jan. 17, 2003).

[3] Edward K. Cheng, “The Consensus Rule: A New Approach to Scientific Evidence,” 75 Vanderbilt L. Rev. 407 (2022) [Consensus Rule].

[4] See Norman J. Shachoy Symposium, The Consensus Rule: A New Approach to the Admissibility of Scientific Evidence (2022), 67 Villanova L. Rev. (2022); David S. Caudill, “The ‘Crisis of Expertise’ Reaches the Courtroom: An Introduction to the Symposium on, and a Response to, Edward Cheng’s Consensus Rule,” 67 Villanova L. Rev. 837 (2022); Harry Collins, “The Owls: Some Difficulties in Judging Scientific Consensus,” 67 Villanova L. Rev. 877 (2022); Robert Evans, “The Consensus Rule: Judges, Jurors, and Admissibility Hearings,” 67 Villanova L. Rev. 883 (2022); Martin Weinel, “The Adversity of Adversarialism: How the Consensus Rule Reproduces the Expert Paradox,” 67 Villanova L. Rev. 893 (2022); Wendy Wagner, “The Consensus Rule: Lessons from the Regulatory World,” 67 Villanova L. Rev. 907 (2022); Edward K. Cheng, Elodie O. Currier & Payton B. Hampton, “Embracing Deference,” 67 Villanova L. Rev. 855 (2022).

[5] Pierre-Simon Laplace, Théorie analytique des probabilités (1812) (“The more extraordinary a fact, the more it needs to be supported by strong proofs.”). See Tressoldi, “Extraordinary Claims Require Extraordinary Evidence: The Case of Non-Local Perception, a Classical and Bayesian Review of Evidences,” 2 Frontiers Psych. 117 (2011); Charles Coulston Gillispie, Pierre-Simon Laplace, 1749-1827: a life in exact science (1997).

[6] “Extraordinary Claims Require Extraordinary Evidence” (Dec. 5, 2021).

[7] Benjamin Bayly, An Essay on Inspiration 362, part 2 (2nd ed. 1708).

[8] The Consensus Principle, under contract with the University of Chicago Press.

[9] SeeCheng’s Proposed Consensus Rule for Expert Witnesses” (Sept. 15, 2022);
Further Thoughts on Cheng’s Consensus Rule” (Oct. 3, 2022); “Consensus Rule – Shadows of Validity” (Apr. 26, 2023).

[10] Consensus Rule at 424 (citing but not quoting Ronald L. Wasserstein & Nicole A. Lazar, “The ASA Statement on p-Values: Context, Process, and Purpose,” 70 Am. Statistician 129, 131 (2016)).

[11] Yoav Benjamini, Richard D. DeVeaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xuming He, Xiao-Li Meng, Nancy Reid, Stephen M. Stigler, Stephen B. Vardeman, Christopher K. Wikle, Tommy Wright, Linda J. Young, and Karen Kafadar, “The ASA President’s Task Force Statement on Statistical Significance and Replicability,” 15 Annals of Applied Statistics 1084 (2021); see also “A Proclamation from the Task Force on Statistical Significance” (June 21, 2021).