TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Science Bench Book for Judges

July 13th, 2019

On July 1st of this year, the National Judicial College and the Justice Speakers Institute, LLC released an online publication of the Science Bench Book for Judges [Bench Book]. The Bench Book sets out to cover much of the substantive material already covered by the Federal Judicial Center’s Reference Manual:

Acknowledgments

Table of Contents

  1. Introduction: Why This Bench Book?
  2. What is Science?
  3. Scientific Evidence
  4. Introduction to Research Terminology and Concepts
  5. Pre-Trial Civil
  6. Pre-trial Criminal
  7. Trial
  8. Juvenile Court
  9. The Expert Witness
  10. Evidence-Based Sentencing
  11. Post Sentencing Supervision
  12. Civil Post Trial Proceedings
  13. Conclusion: Judges—The Gatekeepers of Scientific Evidence

Appendix 1 – Frye/Daubert—State-by-State

Appendix 2 – Sample Orders for Criminal Discovery

Appendix 3 – Biographies

The Bench Book gives some good advice in very general terms about the need to consider study validity,[1] and to approach scientific evidence with care and “healthy skepticism.”[2] When the Bench Book attempts to instruct on what it represents to be the scientific method of hypothesis testing, the good advice unravels:

“A scientific hypothesis simply cannot be proved. Statisticians attempt to solve this dilemma by adopting an alternate [sic] hypothesis – the null hypothesis. The null hypothesis is the opposite of the scientific hypothesis. It assumes that the scientific hypothesis is not true. The researcher conducts a statistical analysis of the study data to see if the null hypothesis can be rejected. If the null hypothesis is found to be untrue, the data support the scientific hypothesis as true.”[3]

Even in experimental settings, a statistical analysis of the data does not lead to a conclusion that the null hypothesis is untrue, as opposed to a conclusion that the null hypothesis is not reasonably compatible with the study’s data. In observational studies, the statistical analysis must also acknowledge whether and to what extent the study has excluded bias and confounding. When the Bench Book turns to speak of statistical significance, more trouble ensues:

“The goal of an experiment, or observational study, is to achieve results that are statistically significant; that is, not occurring by chance.”[4]

In the world of result-oriented science, and scientific advocacy, it is perhaps true that scientists seek to achieve statistically significant results. Still, it seems crass to come right out and say so, as opposed to saying that the scientists are querying the data to see whether they are compatible with the null hypothesis. This first pass at statistical significance is only mildly astray compared with the Bench Book’s more serious attempts to define statistical significance and confidence intervals:

4.10 Statistical Significance

“The research field agrees that study outcomes must demonstrate they are not the result of random chance. Leaving room for an error of .05, the study must achieve a 95% level of confidence that the results were the product of the study. This is denoted as p ≤ 05. (or .01 or .1).”[5]

and

“The confidence interval is also a way to gauge the reliability of an estimate. The confidence interval predicts the parameters within which a sample value will fall. It looks at the distance from the mean a value will fall, and is measured by using standard deviations. For example, if all values fall within 2 standard deviations from the mean, about 95% of the values will be within that range.”[6]

Of course, the interval speaks to the precision of the estimate, not its reliability, but that is a small point. These definitions are virtually guaranteed to confuse judges into conflating statistical significance and the coefficient of confidence with the legal burden of proof probability.
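For readers who want to see the distinction in miniature, the following sketch, with invented numbers drawn from neither the Bench Book nor any case, computes a risk ratio, its 95 percent confidence interval, and a p-value. The 95 percent figure describes the long-run coverage of the interval-generating procedure; it is not the probability that the causal claim is true, and it is certainly not a legal burden of proof.

```python
# A minimal sketch, with hypothetical counts, of what a confidence interval and
# a p-value do and do not mean. Nothing here comes from the Bench Book or from
# any litigation; the numbers are invented for illustration only.
from math import exp, log, sqrt
from statistics import NormalDist

# Hypothetical cohort: 30 cases among 500 exposed; 20 cases among 500 unexposed.
a, n1 = 30, 500
c, n0 = 20, 500

rr = (a / n1) / (c / n0)                     # point estimate of the risk ratio
se = sqrt(1/a - 1/n1 + 1/c - 1/n0)           # standard error of log(RR)
lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)

# Two-sided p-value for the null hypothesis RR = 1, tested on the log scale.
z = log(rr) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"RR = {rr:.2f}; 95% CI {lo:.2f} to {hi:.2f}; p = {p:.2f}")
# The interval conveys the precision of the estimate: a larger study narrows it.
# A p-value above 0.05 does not show the null hypothesis is true, and a p-value
# below 0.05 does not prove the causal claim; neither speaks to bias or confounding.
```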

The Bench Book runs into problems in interpreting legal decisions, which would seem softer grist for the judicial mill. The authors present dictum from the Daubert decision as though it were a holding:[7]

“As noted in Daubert, ‘[t]he focus, of course, must be solely on principles and methodology, not on the conclusions they generate’.”

The authors fail to mention that this dictum was abandoned in Joiner, and that it is specifically rejected by statute, in the 2000 revision to Federal Rule of Evidence 702.

Early in the Bench Book, its authors present a subsection entitled “The Myth of Scientific Objectivity,” which they might have borrowed from Feyerabend or Derrida. The heading appears misleading because the text contradicts it:

“Scientists often develop emotional attachments to their work—it can be difficult to abandon an idea. Regardless of bias, the strongest intellectual argument, based on accepted scientific hypotheses, will always prevail, but the road to that conclusion may be fraught with scholarly cul-de-sacs.”[8]

In a similar vein, the authors misleadingly tell readers that “the forefront of science is rarely encountered in court,” and so “much of the science mentioned there shall be considered established….”[9] Of course, the reality is that many causal claims presented in court have already been rejected or held to be indeterminate by the scientific community. And just when readers may think themselves safe from the goblins of nihilism, the authors launch into a theory of naïve probabilism, according to which science is just a matter of placing subjective probabilities upon data, based upon preconceived biases and beliefs:

“All of these biases and beliefs play into the process of weighing data, a critical aspect of science. Placing weight on a result is the process of assigning a probability to an outcome. Everything in the universe can be expressed in probabilities.”[10]

So help the expert witness who honestly (and correctly) testifies that the causal claim or its rejection cannot be expressed as a probability statement!

Although I have not read all of the Bench Book closely, there appears to be no meaningful discussion of Rule 703, or of the need to access underlying data to ensure that the proffered scientific opinion under scrutiny has used appropriate methodologies at every step in its development. Even a 412-page text cannot address every issue, but this one does little to help the judicial reader find more in-depth help on statistical and scientific methodological issues that arise in occupational and environmental disease claims, and in pharmaceutical products litigation.

The organizations involved in this Bench Book appear to be honest brokers of remedial education for judges. The writing of the Bench Book was funded by the State Justice Institute (SJI), a creation of federal legislation enacted with the laudable goal of improving the quality of judging in state courts.[11] Despite its provenance in federal legislation, the SJI is a private, nonprofit corporation, governed by 11 directors appointed by the President and confirmed by the Senate. A majority of the directors (six) are state court judges; the remainder comprise one state court administrator and four members of the public (no more than two from any one political party). The function of the SJI is to award grants to improve judging in state courts.

The National Judicial College (NJC) originated in the early 1960s, from the efforts of the American Bar Association, the American Judicature Society, and the Institute of Judicial Administration, to provide education for judges. In 1977, the NJC became a Nevada not-for-profit 501(c)(3) educational corporation, with its campus at the University of Nevada, Reno, where judges could go for training and recreational activities.

The Justice Speakers Institute appears to be a for-profit company that provides educational resources for judges. A press release touts the Bench Book and follow-on webinars. Caveat emptor.

The rationale for this Bench Book is open to question. Unlike the Reference Manual on Scientific Evidence, which was co-produced by the Federal Judicial Center and the National Academies of Sciences, the Bench Book was written by lawyers and judges, without any subject-matter expertise. Unlike the Reference Manual, the Bench Book’s chapters have no scientist or statistician authors, and it shows. Remarkably, the Bench Book does not appear to cite to the Reference Manual or the Manual on Complex Litigation at any point in its discussion of the federal law of expert witnesses or of scientific or statistical method. Perhaps taxpayers would have been spared substantial expense if state judges were simply encouraged to read the Reference Manual.


[1]  Bench Book at 190.

[2]  Bench Book at 174 (“Given the large amount of statistical information contained in expert reports, as well as in the daily lives of the general society, the ability to be a competent consumer of scientific reports is challenging. Effective critical review of scientific information requires vigilance, and some healthy skepticism.”).

[3]  Bench Book at 137; see also id. at 162.

[4]  Bench Book at 148.

[5]  Bench Book at 160.

[6]  Bench Book at 152.

[7]  Bench Book at 233, quoting Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993).

[8]  Bench Book at 10.

[9]  Id. at 10.

[10]  Id. at 10.

[11] See State Justice Institute Act of 1984 (42 U.S.C. ch. 113, 42 U.S.C. § 10701 et seq.).

The Shmeta-Analysis in Paoli

July 11th, 2019

In the Paoli Railroad yard litigation, plaintiffs claimed injuries and increased risk of future cancers from environmental exposure to polychlorinated biphenyls (PCBs). This massive litigation showed up before federal district judge Hon. Robert F. Kelly,[1] in the Eastern District of Pennsylvania, who may well have been the first judge to grapple with a litigation attempt to use meta-analysis to show a causal association.

One of the plaintiffs’ expert witnesses was the late William J. Nicholson, who was a professor at Mt. Sinai School of Medicine, and a colleague of Irving Selikoff. Nicholson was trained in physics, and had no professional training in epidemiology. Nonetheless, Nicholson was Selikoff’s go-to colleague for performing epidemiologic studies. After Selikoff withdrew from active testifying for plaintiffs in tort litigation, Nicholson was one of his colleagues who jumped into the fray as a surrogate advocate for Selikoff.[2]

For his opinion that PCBs were causally associated with liver cancer in humans,[3] Nicholson relied upon a report he wrote for the Ontario Ministry of Labor. [cited here as “Report”].[4] Nicholson described his report as a “study of the data of all the PCB worker epidemiological studies that had been published,” from which he concluded that there was “substantial evidence for a causal association between excess risk of death from cancer of the liver, biliary tract, and gall bladder and exposure to PCBs.”[5]

The defense challenged the admissibility of Nicholson’s meta-analysis, on several grounds. The trial court decided the challenge based upon the Downing case, which was the law in the Third Circuit, before the Supreme Court decided Daubert.[6] The Downing case allowed some opportunity for consideration of reliability and validity concerns; there is, however, disappointingly little discussion of any actual validity concerns in the courts’ opinions.

The defense challenge to Nicholson’s proffered testimony on liver cancer turned on its characterization of meta-analysis as a “novel” and generally unreliable technique, and on its claim that Nicholson’s meta-analysis in particular was unreliable. None of the individual studies that contributed data showed any “connection” between PCBs and liver cancer; nor did any individual study conclude that there was a causal association.

Of course, the appropriate response to this situation, with no one study finding a statistically significant association, or concluding that there was a causal association, should have been “so what?” One of the reasons to do a meta-analysis is that no available study was sufficiently large to find a statistically significant association, if one were there. As for drawing conclusions of causal associations, it is not the role or place of an individual study to synthesize all the available evidence into a principled conclusion of causation.
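A back-of-the-envelope power calculation makes the point. The numbers below are purely hypothetical, but they illustrate how several cohorts, each too small to detect a modest association reliably, can have adequate power once combined.

```python
# A rough sketch, with hypothetical numbers, of why meta-analyses are performed
# at all: several small studies, none with adequate power by itself, can
# together have the power to detect a modest association, if one exists.
from statistics import NormalDist

Z = NormalDist()
alpha = 0.05
z_crit = Z.inv_cdf(1 - alpha / 2)

def power_two_proportions(p0, p1, n_per_group):
    """Approximate power of a two-sided z-test comparing two proportions."""
    se = ((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_group) ** 0.5
    z_effect = abs(p1 - p0) / se
    return 1 - Z.cdf(z_crit - z_effect) + Z.cdf(-z_crit - z_effect)

p0, p1 = 0.02, 0.03          # hypothetical baseline vs. exposed risk (RR = 1.5)
single_study_n = 400         # persons per group in one modest study
pooled_n = 5 * single_study_n  # five such studies combined

print(f"power, one study  : {power_two_proportions(p0, p1, single_study_n):.2f}")
print(f"power, five pooled: {power_two_proportions(p0, p1, pooled_n):.2f}")
```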

In any event, the trial court concluded that the proffered novel technique lacked sufficient reliability, that the meta-analysis would “overwhelm, confuse, or mislead the jury,” and that the proffered meta-analysis on liver cancer was not sufficiently relevant to the facts of the case (in which no plaintiff had developed, or had died of, liver cancer). The trial court noted that the Report had not been peer-reviewed, and that it had not been accepted or relied upon by the Ontario government for any finding or policy decision. The trial court also expressed its concern that the proffered testimony along the lines of the Report would possibly confuse the jury because it appeared to be “scientific” and because Nicholson appeared to be qualified.

The Appeal

The Court of Appeals for the Third Circuit, in an opinion by Judge Becker that is still sometimes cited, reversed Judge Kelly’s exclusion of the Nicholson Report, even though Downing is no longer good law in the Circuit or anywhere else.[7] The Court was ultimately not persuaded that the trial court had handled the exclusion of Nicholson’s Report and its meta-analysis correctly, and it remanded the case for a do-over analysis.

Judge Becker described Nicholson’s Report as a “meta-analysis,” which pooled or “combined the results of numerous epidemiologic surveys in order to achieve a larger sample size, adjusted the results for differences in testing techniques, and drew his own scientific conclusions.”[8] Through this method, Nicholson claimed to have shown that “exposure to PCBs can cause liver, gall bladder and biliary tract disorders … even though none of the individual surveys supports such a conclusion when considered in isolation.”[9]

Validity

The appellate court gave no weight to the possibility that a meta-analysis would confuse a jury, or that its “scientific nature” or Nicholson’s credentials would lead a jury to give it more weight than it deserved.[10] The Court of Appeals conceded, however, that exclusion would have been appropriate if the methodology used was itself invalid. The appellate opinion further acknowledged that the defense had offered opposition to Nicholson’s Report, in which it documented his failure to include data that were inconsistent with his conclusions, and charged that “Nicholson had produced a scientifically invalid study.”[11]

Judge Becker’s opinion for a panel of the Third Circuit provided no details about the cherry picking. The opinion never analyzed why this charge of cherry-picking and manipulation of the dataset did not invalidate the meta-analytic method generally, or Nicholson’s method as applied. The opinion gave no suggestion that this counter-affidavit was ever answered by the plaintiffs.

Generally, Judge Becker’s opinion dodged engagement with the specific threats to validity in Nicholson’s Report, and took refuge in the indisputable fact that hundreds of meta-analyses were published annually, and that the defense expert witnesses did not question the general reliability of meta-analysis.[12] These facts undermined the defense claim that meta-analysis was novel.[13] The reality, however, was that meta-analysis was in its infancy in bio-medical research.

When it came to the specific meta-analysis at issue, the court did not discuss or analyze a single pertinent detail of the Report. Despite its lack of engagement with the specifics of the Report’s meta-analysis, the court astutely observed that prevalent errors and flaws do not mean that a particular meta-analysis is “necessarily in error.”[14] Of course, without bothering to look, the court would not know whether the proffered meta-analysis was “actually in error.”

The appellate court would have given Nicholson’s Report a “pass” if it were an application of an accepted methodology; under that condition, the defense’s remedy would be cross-examination in front of the jury. If, on the other hand, Nicholson had altered an accepted methodology to skew its results, then the court’s gatekeeping responsibility under Downing would be invoked.

The appellate court went on to fault the trial court for failing to make sufficiently explicit findings as to whether the questioned meta-analysis was unreliable. From its perspective, the Court of Appeals saw the trial court as resolving the reliability issue upon the greater credibility of defense expert witnesses in branding the disputed meta-analysis as unreliable. Credibility determinations are for the jury, but the court left room for a challenge on reliability itself:[15]

“Assuming that Dr. Nicholson’s meta-analysis is the proper subject of Downing scrutiny, the district court’s decision is wanting, because it did not make explicit enough findings on the reliability of Dr. Nicholson’s meta-analysis to satisfy Downing. We decline to define the exact level at which a district court can exclude a technique as sufficiently unreliable. Reliability indicia vary so much from case to case that any attempt to define such a level would most likely be pointless. Downing itself lays down a flexible rule. What is not flexible under Downing is the requirement that there be a developed record and specific findings on reliability issues. Those are absent here. Thus, even if it may be possible to exclude Dr. Nicholson’s testimony under Downing, as an unreliable, skewed meta-analysis, we cannot make such a determination on the record as it now stands. Not only was there no hearing, in limine or otherwise, at which the bases for the opinions of the contesting experts could be evaluated, but the experts were also not even deposed. All of the expert evidence was based on affidavits.”

Peer Review

Understandably, the defense attacked Nicholson’s Report as not having been peer reviewed. Without any scrutiny of the scientific bona fides of the workers’ compensation agency, the appellate court acquiesced in Nicholson’s self-serving characterization of his Report as having been reviewed by “cooperating researchers” and the Panel of the Ontario Workers’ Compensation agency. Another partisan expert witness characterized Nicholson’s Report as a “balanced assessment,” and this seemed to appease the Third Circuit, which was wary of requiring peer review in the first place.[16]

Relevancy Prong

The defense had argued that Nicholson’s Report was irrelevant because no individual plaintiff claimed liver cancer.[17] The trial court largely accepted this argument, but the appellate court disagreed because of conclusory language in Nicholson’s affidavit, in which he asserted that “proof of an increased risk of liver cancer is probative of an increased risk of other forms of cancer.” The court seemed unfazed by the ipse dixit, asserted without any support. Indeed, Nicholson’s assertion was contradicted by his own Report, in which he reported that there were fewer cancers among PCB-exposed male capacitor manufacturing workers than expected,[18] and that the rate for all cancers for both men and women was lower than expected, with 132 observed and 139.40 expected.[19]

The trial court had also agreed with the defense’s suggestion that Nicholson’s report, and its conclusion of causality between PCB exposure and liver cancer, were irrelevant because the Report “could not be the basis for anyone to say with reasonable degree of scientific certainty that some particular person’s disease, not cancer of the liver, biliary tract or gall bladder, was caused by PCBs.”[20]

Analysis

It would likely have been lost on Judge Becker and his colleagues, but Nicholson presented SMRs (standardized mortality ratios) throughout his Report, and for the all cancers statistic, he gave an SMR of 95. What Nicholson clearly did in this, and in all other instances, was simply divide the observed number by the expected, and multiply by 100. This crude, simplistic calculation fails to present a standardized mortality ratio, which requires taking into account the age distribution of the exposed and the unexposed groups, and a weighting of the contribution of cases within each age stratum. Nicholson’s presentation of data was nothing short of false and misleading. And in case anyone remembers General Electric v. Joiner, Nicholson’s summary estimate of risk for lung cancer in men was below the expected rate.[21]
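For the curious, the sketch below, with invented numbers rather than Nicholson’s data, shows how the expected count in an indirectly standardized mortality ratio is built stratum by stratum from age-specific reference rates and the cohort’s person-years. A calculation that skips the standardization is not an SMR, whatever its author chooses to call it.

```python
# A hedged illustration, with invented numbers (not Nicholson's data), of how an
# indirectly standardized mortality ratio (SMR) is computed. The "expected"
# count is built stratum by stratum from age-specific reference rates and the
# cohort's person-years, so the cohort's age structure is taken into account.

# Age-specific reference (unexposed population) death rates per person-year.
reference_rates = {"40-49": 0.001, "50-59": 0.004, "60-69": 0.012}

# Hypothetical exposed cohort: person-years and observed deaths by age stratum.
cohort = {
    "40-49": {"person_years": 20_000, "observed": 30},
    "50-59": {"person_years": 5_000,  "observed": 25},
    "60-69": {"person_years": 1_000,  "observed": 15},
}

observed = sum(s["observed"] for s in cohort.values())
expected = sum(reference_rates[age] * s["person_years"] for age, s in cohort.items())

smr = 100 * observed / expected
print(f"observed = {observed}, expected = {expected:.1f}, SMR = {smr:.0f}")

# A "crude" calculation that ignores the age strata would instead compare the
# cohort's overall rate with the reference population's overall rate; when the
# cohort is much younger (or older) than the reference population, that crude
# ratio can be badly misleading.
```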

Nicholson’s Report was replete with many other methodological sins. He used a composite of three organs (liver, gall bladder, bile duct) without any biological rationale. His analysis combined male and female results, and even so, his analysis of the composite outcome was based upon only seven cases. Of those seven, some were not confirmed as primary liver cancer, and at least one was confirmed as not being a primary liver cancer.[22]

Nicholson failed to standardize the analysis for the age distribution of the observed and expected cases, and he failed to present meaningful analysis of random or systematic error. When he did present p-values, he presented one-tailed values, and he made no corrections for his many comparisons from the same set of data.
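The multiple-comparisons point can be put in numbers. The short, purely illustrative calculation below shows that when many outcomes from one dataset are each tested at the conventional 0.05 level, a few nominally “significant” findings are expected by chance alone.

```python
# A back-of-the-envelope sketch of the multiple-comparisons problem: if k
# independent outcomes are each tested at the 0.05 level and no real
# associations exist, the chance of at least one nominally "significant"
# finding grows quickly with k. (Illustrative only; outcomes drawn from one
# dataset are rarely fully independent.)
alpha = 0.05
for k in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    bonferroni_alpha = alpha / k   # a simple, conservative correction
    print(f"{k:2d} comparisons: P(>=1 false positive) = {p_at_least_one:.2f}; "
          f"Bonferroni per-test threshold = {bonferroni_alpha:.4f}")
```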

Finally, and most egregiously, Nicholson’s meta-analysis was meta-analysis in name only. What he had done was simply to add “observed” and “expected” events across studies to arrive at totals, and to recalculate a bogus risk ratio, which he fraudulently called a standardized mortality ratio. Adding events across studies is not a valid meta-analysis; indeed, it is a well-known example of how to generate a Simpson’s Paradox, which can change the direction or magnitude of any association.[23]
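A toy example, with invented numbers, shows the hazard. In each of two hypothetical studies the exposed group has twice the risk of the unexposed group, yet the naively summed totals point in the opposite direction, a textbook Simpson’s Paradox. A stratified pooling estimator, illustrated here with a Mantel-Haenszel risk ratio, combines the within-study contrasts and recovers the common association.

```python
# A toy example, with invented numbers, of why adding events across studies is
# not a meta-analysis. In each study the exposed group has twice the risk of
# the unexposed group, yet the naively pooled totals suggest the exposure is
# protective: Simpson's Paradox. A stratified estimator (Mantel-Haenszel)
# combines the within-study comparisons and recovers the common risk ratio.

# Each study: (exposed cases, exposed N, unexposed cases, unexposed N)
studies = [
    (10, 1000, 1, 200),     # low-risk setting, mostly exposed subjects
    (10, 100, 50, 1000),    # high-risk setting, mostly unexposed subjects
]

for i, (a, n1, c, n0) in enumerate(studies, 1):
    print(f"study {i}: RR = {(a / n1) / (c / n0):.2f}")

# Naive pooling: simply add the counts across studies.
A = sum(s[0] for s in studies)
N1 = sum(s[1] for s in studies)
C = sum(s[2] for s in studies)
N0 = sum(s[3] for s in studies)
print(f"naively pooled RR = {(A / N1) / (C / N0):.2f}")   # points the wrong way

# Mantel-Haenszel pooled risk ratio: weights each study's within-study contrast.
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in studies)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in studies)
print(f"Mantel-Haenszel RR = {num / den:.2f}")
```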

Some may be tempted to criticize the defense for having focused its challenge on the “novelty” of Nicholson’s approach in Paoli. The problem, of course, was the invalidity of Nicholson’s work, but both the trial court’s exclusion of Nicholson and the Court of Appeals’ reversal and remand of the exclusion decision illustrate the difficulty of getting judges, even well-respected judges, to accept their responsibility to engage with questioned scientific evidence.

Even in Paoli, no amount of ketchup could conceal the unsavoriness of Nicholson’s scrapple analysis. When the Paoli case reached the Court of Appeals again in 1994, Nicholson’s analysis was absent.[24] Apparently, the plaintiffs’ counsel had second thoughts about the whole matter. Today, under the revised Rule 702, there can be little doubt that Nicholson’s so-called meta-analysis should have been excluded.


[1]  Not to be confused with the Judge Kelly of the same district, who was unceremoniously disqualified after attending an ex parte conference with plaintiffs’ lawyers and expert witnesses, at the invitation of Dr. Irving Selikoff.

[2]  Pace Philip J. Landrigan & Myron A. Mehlman, “In Memoriam – William J. Nicholson,” 40 Am. J. Indus. Med. 231 (2001). Landrigan and Mehlman assert, without any support, that Nicholson was an epidemiologist. Their own description of his career, his undergraduate work at MIT, his doctorate in physics from the University of Washington, his employment at the Watson Laboratory, before becoming a staff member in Irving Selikoff’s department in 1969, all suggest that Nicholson brought little to no experience in epidemiology to his work on occupational and environmental exposure epidemiology.

[3]  In re Paoli RR Yard Litig., 706 F. Supp. 358, 372-73 (E.D. Pa. 1988).

[4]  William Nicholson, Report to the Workers’ Compensation Board on Occupational Exposure to PCBs and Various Cancers, for the Industrial Disease Standards Panel (ODP); IDSP Report No. 2 (Toronto, Ontario Dec. 1987).

[5]  Id. at 373.

[6]  United States v. Downing, 753 F.2d 1224 (3d Cir. 1985).

[7]  In re Paoli RR Yard PCB Litig., 916 F.2d 829 (3d Cir. 1990), cert. denied sub nom. General Elec. Co. v. Knight, 111 S.Ct. 1584 (1991).

[8]  Id. at 845.

[9]  Id.

[10]  Id. at 841, 848.

[11]  Id. at 845.

[12]  Id. at 847-48.

[13]  See, e.g., Robert Rosenthal, Judgment studies: Design, analysis, and meta-analysis (1987); Richard J. Light & David B. Pillemer, Summing Up: the Science of Reviewing Research (1984); Thomas A. Louis, Harvey V. Fineberg & Frederick Mosteller, “Findings for Public Health from Meta-Analyses,” 6 Ann. Rev. Public Health 1 (1985); Kristan A. L’abbé, Allan S. Detsky & Keith O’Rourke, “Meta-analysis in clinical research,” 107 Ann. Intern. Med. 224 (1987).

[14]  Id. at 857.

[15]  Id. at 858.

[16]  Id. at 858.

[17]  Id. at 845.

[18]  Report, Table 16.

[19]  Report, Table 18.

[20]  In re Paoli, 916 F.2d at 847.

[21]  See General Electric v. Joiner, 522 U.S. 136 (1997); NAS, “How Have Important Rule 702 Holdings Held Up With Time?” (March 20, 2015).

[22]  Report, Table 22.

[23]  James A. Hanley, Gilles Thériault, Ralf Reintjes and Annette de Boer, “Simpson’s Paradox in Meta-Analysis,” 11 Epidemiology 613 (2000); H. James Norton & George Divine, “Simpson’s paradox and how to avoid it,” Significance 40 (Aug. 2015); George Udny Yule, “Notes on the theory of association of attributes in Statistics,” 2 Biometrika 121 (1903).

[24]  In re Paoli RR Yard Litig., 35 F.3d 717 (3d Cir. 1994).

California Roasts Fear-Mongering Industry

June 16th, 2019

A year ago, California set out to create an exemption for coffee from its Proposition 65 regulations. The lawsuit industry, represented by the Council for Education and Research on Toxics (CERT), had been successfully deploying Prop 65’s private right of action provisions to pick the pockets of coffee vendors. Something had to give.

In 2010, Mr. Metzger, on behalf of CERT, sued Starbucks and 90 other coffee manufacturers and distributors, claiming they had failed to warn consumers about the cancer risks of acrylamide. CERT’s mission was to shake down the roasters and the vendors because coffee has minor amounts of acrylamide in it. Acrylamide in very high doses causes tumors in rats[1]; coffee consumption by humans is generally regarded as beneficial.

Earlier last year a Los Angeles Superior Court ordered the coffee companies to put cancer warnings on their beverages. In the upcoming damages phase of the case, Metzger sought as much as $2,500 in civil penalties for each cup of coffee the defendants sold over at least a decade. Suing companies for violating California’s Proposition 65 is like shooting fish in a barrel, but the State’s regulatory initiative to save California from the embarrassment of branding coffee a carcinogen was a major setback for CERT.

And so the Office of Environmental Health Hazard Assessment (OEHHA) began a rulemaking largely designed to protect the agency from the public relations nightmare created by the application of the governing statute and regulations to squeeze the coffee roasters and makers.[2] The California agency’s proposed regulation on acrylamide in coffee resulted in a stay of CERT’s enforcement action against Starbucks.[3] CERT’s lawyers were not pleased; they had already won a trial court’s judgment that they were owed damages, and only the amount needed to be set. In September 2018, CERT filed a lawsuit in Los Angeles Superior Court against the state of California challenging OEHHA’s proposed rule, saying it was being rammed through the agency on the order of the Office of the Governor in an effort to kill CERT’s suit against the coffee companies. Or maybe it was simply designed to allow people to drink their coffee without the Big Prop 65 warning.

Earlier this month, after reviewing voluminous submissions and holding a hearing, the OEHHA announced its ruling that Californians do not need to be warned that coffee causes cancer. Epistemically, coffee is not known to the State of California to be hazardous to human health.[4] According to Sam Delson, a spokesperson for the OEHHA, “Coffee is a complex mixture of hundreds of chemicals that includes both carcinogens and anti-carcinogens. … The overall effect of coffee consumption is not associated with any significant cancer risk.” The regulation saving coffee goes into effect in October 2019. CERT, no doubt, will press on in its litigation campaign against the State.

CERT is the ethically dodgy organization founded by C. Sterling Wolfe, a former environmental lawyer; Brad Lunn; Carl Cranor, a philosophy professor at University of California Riverside; and Martyn T. Smith, a toxicology professor at University of California Berkeley.[5] Metzger has been its lawyer for many years; indeed, Metzger and CERT share the same office. Smith has been the recipient of CERT’s largesse in funding toxicologic studies. Cranor and Smith have both testified for the lawsuit industry.

In the well-known Milward case,[6] both Cranor and Smith served as paid expert witnesses for plaintiff. When the trial court excluded their proffered testimonies as unhelpful and unreliable, their own organization, CERT, came to the rescue by filing an amicus brief in the First Circuit. Supported by a large cast of fellow travelers, CERT perverted the course of justice by failing to disclose the intimate relationship between the “amicus” CERT and the expert witnesses Cranor and Smith, whose opinions had been successfully challenged.[7]

The OEHHA coffee regulation shows that not all regulation is bad.


[1]  National Cancer Institute, “Acrylamide and Cancer Risk.”

[2]  See Sam Delson, “Press Release: Proposed OEHHA regulation clarifies that cancer warnings are not required for coffee under Proposition 65” (June 15, 2018).

[3]  Council for Education and Research on Toxics v. Starbucks Corp., case no. B292762, Court of Appeal of the State of California, Second Appellate District.

[4]  Associated Press, “Perk Up: California Says Coffee Cancer Risk Insignificant,” N.Y. Times (June 3, 2019); Sara Randazzo, “Coffee Doesn’t Warrant a Cancer Warning in California, Agency Says; Industry scores win following finding on chemical found in beverage,” W.S.J. (June 3, 2019); Editorial Board, “Coffee Doesn’t Kill After All: California has a moment of sanity, and a lawyer is furious,” Wall.St.J. (June 5, 2019).

[5]  Michael Waters, “The Secretive Non-Profit Gaming California’s Health Laws,” The Outline (June 18, 2018); Beth Mole, “The secretive nonprofit that made millions suing companies over cancer warnings,” Ars Technica (June 6, 2019); NAS, “Coffee with Cream, Sugar & a Dash of Acrylamide” (June 9, 2018); NAS, “The Council for Education & Research on Toxics” (July 9, 2013); NAS, “Sand in My Shoe – CERTainly” (June 17, 2014) (CERT briefs supported by fellow-travelers, testifying expert witnesses Jerrold Abraham, Richard W. Clapp, Ronald Crystal, David A. Eastmond, Arthur L. Frank, Robert J. Harrison, Ronald Melnick, Lee Newman, Stephen M. Rappaport, David Joseph Ross, and Janet Weiss, all without disclosing conflicts of interest).

[6]  Milward v. Acuity Specialty Products Group, Inc., 664 F. Supp. 2d 137, 148 (D.Mass. 2009), rev’d, 639 F.3d 11 (1st Cir. 2011), cert. den. sub nom. U.S. Steel Corp. v. Milward, 565 U.S. 1111 (2012), on remand, Milward v. Acuity Specialty Products Group, Inc., 969 F.Supp. 2d 101 (D.Mass. 2013) (excluding specific causation opinions as invalid; granting summary judgment), aff’d, 820 F.3d 469 (1st Cir. 2016).

[7]  NAS, “The Council for Education & Research on Toxics” (July 9, 2013) (CERT amicus brief filed without any disclosure of conflict of interest). The fellow travelers who knowingly or unknowingly aided CERT’s scheme to pervert the course of justice, included some well-known testifiers for the lawsuit industry: Nicholas A. Ashford, Nachman Brautbar, David C. Christiani, Richard W. Clapp, James Dahlgren, Devra Lee Davis, Malin Roy Dollinger, Brian G. Durie, David A. Eastmond, Arthur L. Frank, Frank H. Gardner, Peter L. Greenberg, Robert J. Harrison, Peter F. Infante, Philip J. Landrigan, Barry S. Levy, Melissa A. McDiarmid, Myron Mehlman, Ronald L. Melnick, Mark Nicas, David Ozonoff, Stephen M. Rappaport, David Rosner, Allan H. Smith, Daniel Thau Teitelbaum, Janet Weiss, and Luoping Zhang. See also NAS, “Carl Cranor’s Conflicted Jeremiad Against Daubert” (Sept. 23, 2018); Carl Cranor, “Milward v. Acuity Specialty Products: How the First Circuit Opened Courthouse Doors for Wronged Parties to Present Wider Range of Scientific Evidence” (July 25, 2011).


Creators of ToxicDocs Show Off Their Biases

June 7th, 2019

Columbia Magazine’s most recent issue includes a laudatory story about David Rosner, a professor of history at Columbia University.1 The “story” focuses on Rosner’s website, ToxicDocs, which has become his and Gerald Markowitz’s clearing house for what they assert are industry’s misdeeds in the realm of public health.

What the magazine’s story chooses not to discuss is the provenance of the ToxicDocs website in Rosner and Markowitz’s long collaboration with the lawsuit industry in a variety of litigation endeavors. And what you will not find on ToxicDocs are documents of the many misdeeds of the sponsoring lawsuit industry, such as unlawful and unethical screenings, evidentiary frauds, specious claiming, and misleading and incompetent medical advice to its clients. Nor will you find much in the way of context for the manufacturing industry’s documents.

Media coverage of ToxicDocs from last year provides some further insight into the provenance of the website.2 According to one account, Rosner and Markowitz (collectively Rosnowitz) bristled when they were attacked for their litigation work by historian Philip Scranton, a professor at Rutgers University. Scranton showed that Rosnowitz were guilty of a variety of professional sins, from “overgeneralization and failure to corroborate” to “selectively appropriat[ing] information.” Although the radical left came to Rosnowitz’s defense by labeling Scranton a “hired gun,” that charge rang rather hollow, given that Scranton was a well-regarded historian, and Rosnowitz were long-term hired guns for the lawsuit industry.3

And so these leftist historians felt the need to defend their long-term collaboration with the lawsuit industry by putting what they believed were incriminating documents online at their website, ToxicDocs.4 The problem, however, with Rosnowitz’s response to the Scranton critique is that their website suffers from all the undue selectivity, lack of context, and bias that afflict their courtroom work, and that validated Scranton’s report. Most important, the reader will not find anything on ToxicDocs that challenges the misdeeds of the lawsuit industry, which has employed them for so many years.

In February 2018, the Journal of Public Health Policy (vol. 39, no. 1) published a series of editorials lauding ToxicDocs.5 Remarkably, not a single paper by Rosnowitz or their associates, Robert Proctor, David Wegman, or Anthony Robbins, mentioned their service to the lawsuit industry or the extent of their income from that service. Sheldon Whitehouse wrote an editorial in which he disclosed his having served as Rhode Island’s Attorney General, but failed to disclose that he had worked in lockstep with the plaintiffs’ firm, Motley Rice, and that he had hired Rosnowitz, in Rhode Island’s lawsuit against major paint manufacturers. For those observers who are in a moral panic over “industry” conflicts of interest, please note the conflicts of the lawsuit industrial complex.


1 Carla Cantor, “ToxicDocs Exposes Industry Misdeeds,” Columbia Magazine (Summer 2019).

2 Tik Root, “In ToxicDocs.org, a Treasure Trove of Industry Secrets,” Undark (Jan. 10, 2018).

3 See, e.g., Jon Wiener, “Cancer, Chemicals and History: Companies try to discredit the experts,” The Nation (Jan. 20, 2005).

4 See “ToxicHistorians Sponsor ToxicDocs” (Feb. 1, 2018); “David Rosner’s Document Repository” (July 23, 2017).

5 Anthony Robbins & Phyllis Freeman, “ToxicDocs (www.ToxicDocs.org) goes live: A giant step toward leveling the playing field for efforts to combat toxic exposures,” 39 J. Pub. Health Policy 1 (2018); David Rosner, Gerald Markowitz, and Merlin Chowkwanyun, “ToxicDocs (www.ToxicDocs.org): from history buried in stacks of paper to open, searchable archives online,” 39 J. Pub. Health Policy 4 (2018); Stéphane Horel, “Browsing a corporation’s mind,” 39 J. Pub. Health Policy 12 (2018); Christer Hogstedt & David H. Wegman, “ToxicDocs and the fight against biased public health science worldwide,” 39 J. Pub. Health Policy 15 (2018); Jock McCulloch, “Archival sources on asbestos and silicosis in Southern Africa and Australia,” 39 J. Pub. Health Policy 18 (2018); Sheldon Whitehouse, “ToxicDocs: using the US legal system to confront industries’ systematic counterattacks against public health,” 39 J. Pub. Health Policy 22 (2018); Robert N. Proctor, “God is watching: history in the age of near-infinite digital archives,” 39 J. Pub. Health Policy 24 (2018); Elena N. Naumova, “The value of not being lost in our digital world,” 39 J. Pub. Health Policy 27 (2018); Nicholas Freudenberg, “ToxicDocs: a new resource for assessing the impact of corporate practices on health,” 39 J. Pub. Health Policy 30 (2018).

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.