TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Pritchard v. Dow Agro – Gatekeeping Exemplified

August 25th, 2014

Robert T. Pritchard was diagnosed with Non-Hodgkin’s Lymphoma (NHL) in August 2005; by fall 2005, his cancer was in remission. Mr. Pritchard had been a pesticide applicator, and so, of course, he and his wife sued the deepest pockets around, including Dow Agro Sciences, the manufacturer of Dursban. Pritchard v. Dow Agro Sciences, 705 F.Supp. 2d 471 (W.D.Pa. 2010).

The principal active ingredient of Dursban is chlorpyrifos, along with some solvents, such as xylene, cumene, and ethyltoluene. Id. at 474. Dursban was licensed for household insecticide use until 2000, when the EPA phased out certain residential applications. The EPA’s concern, however, was not carcinogenicity: the EPA categorizes chlorpyrifos as “Group E,” non-carcinogenic to humans. Id. at 474-75.

According to the American Cancer Society (ACS), the cause or causes of NHL cases are unknown.  Over 60,000 new cases are diagnosed annually, in people from all walks of life, occupations, and lifestyles. The ACS identifies some risk factors, such as age, gender, race, and ethnicity, but the ACS emphasizes that chemical exposures are not proven risk factors or causes of NHL.  See Pritchard, 705 F.Supp. 2d at 474.

The litigation industry does not need scientific conclusions of causal connections; its business is manufacturing certainty in courtrooms. Or at least the appearance of certainty. The Pritchards found their way to the litigation industry in Pittsburgh, Pennsylvania, in the form of Goldberg, Persky & White, P.C., which sued Dow Agro and put the Pritchards in touch with Dr. Bennet Omalu, to serve as their expert witness.

Alas, the Pritchards’ lawsuit ran into a wall, or at least a gate, in the form of Federal Rule of Evidence 702. In the capable hands of Judge Nora Barry Fischer, Rule 702 became an effective barrier against weak and poorly considered expert witness opinion testimony.

Dr. Omalu, no stranger to lost causes, was the medical examiner of San Joaquin County, California, at the time of his engagement in the Pritchard case. After careful consideration of the Pritchards’ claims, Omalu prepared a four-page report containing a single citation, to Harrison’s Principles of Internal Medicine. Id. at 477 & n.6. This research, however, sufficed for Omalu to conclude that Dursban caused Mr. Pritchard to develop NHL, as well as a host of ailments for which he had never even sued Dow Agro, including “neuropathy, fatigue, bipolar disorder, tremors, difficulty concentrating and liver disorder.” Id. at 478. Dr. Omalu did not cite or reference any studies, in his report, to support his opinion that Dursban caused Mr. Pritchard’s ailments. Id. at 480.

After defense counsel objected to Omalu’s report, plaintiffs’ counsel supplemented the report with some published articles, including the “Lee” study. See Won Jin Lee, Aaron Blair, Jane A. Hoppin, Jay H. Lubin, Jennifer A. Rusiecki, Dale P. Sandler, Mustafa Dosemeci, and Michael C. R. Alavanja, “Cancer Incidence Among Pesticide Applicators Exposed to Chlorpyrifos in the Agricultural Health Study,” 96 J. Nat’l Cancer Inst. 1781 (2004) [cited as Lee]. At his deposition, and in opposition to defendants’ 702 motion, Omalu became more forthcoming with actual data and argument. According to Omalu, “the 2004 Lee Study strongly supports a conclusion that high-level exposure to chlorpyrifos is associated with an increased risk of NHL.” Id. at 480.

This opinion put forward by Omalu bordered on scientific malpractice. No; it was malpractice. The Lee study looked at many different cancer end points, without adjustment for multiple comparisons. The lack of adjustment means, at the very least, that any interpretation of p-values or confidence intervals would have to be modified to acknowledge the higher rate of random error. For NHL, the overall relative risk (RR) for chlorpyrifos exposure was 1.03, with a 95% confidence interval of 0.62 to 1.70. Lee at 1783. In other words, the study that Omalu claimed supported his opinion was about as null a study as can be, with a reasonably tight confidence interval that made a doubling of the risk rather unlikely given the sample RR.
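For readers who want to check the arithmetic, the random error implied by a reported relative risk and its confidence interval can be recovered on the log scale. Here is a minimal sketch, assuming the usual log-normal approximation for relative risks; the numbers are those quoted above from the Lee study:

```python
import math

# Lee (2004), overall result for NHL: RR = 1.03, 95% CI 0.62 to 1.70
rr, lo, hi = 1.03, 0.62, 1.70

# Under a log-normal approximation, the 95% CI is
# exp(log(RR) +/- 1.96 * SE), so the SE is recoverable:
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Two-sided p-value for the null hypothesis RR = 1
z = math.log(rr) / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"SE(log RR) = {se:.2f}; z = {z:.2f}; p = {p:.2f}")   # p is about 0.9

# And without adjustment, the chance of at least one nominally
# "significant" result grows quickly with the number of independent
# endpoints examined:
for k in (1, 10, 20):
    print(k, "endpoints:", round(1 - 0.95 ** k, 2))
```

On these numbers, the observed association is almost exactly what chance alone would produce; and with the many cancer endpoints examined in Lee, a few nominally “significant” findings would be expected even if every null hypothesis were true.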

If the multiple endpoint testing were not sufficient to dissuade a scientist intent on supporting the Pritchards’ claims, then the exposure subgroup analysis would have scared any prudent scientist away from supporting the plaintiffs’ claims. The Lee study authors provided two different exposure-response analyses, one with lifetime exposure and the other with an intensity-weighted exposure, both in quartiles. Neither analysis revealed an exposure-response trend. For the lifetime exposure-response trend, the Lee study reported an NHL RR of 1.01 for the highest quartile of chlorpyrifos exposure. For the intensity-weighted analysis, for the highest quartile, the authors reported an RR of 1.61, with a 95% confidence interval of 0.74 to 3.53.

Although the defense and the district court did not call out Omalu on his fantasy statistical inference, the district judge certainly appreciated that Omalu had no statistically significant associations between chlorpyrifos and NHL to support his opinion. Given the weakness of relying upon a single epidemiologic study (and torturing the data therein), the district court believed that a showing of statistical significance was important to give some credibility to Omalu’s claims. 705 F.Supp. 2d at 486 (citing General Elec. Co. v. Joiner, 522 U.S. 136, 144-46 (1997); Soldo v. Sandoz Pharm. Corp., 244 F.Supp. 2d 434, 449-50 (W.D. Pa. 2003)).

[Figure 3, adapted from Lee]

What to do when there is really no evidence supporting a claim?  Make up stuff.  Here is how the trial court describes Omalu’s declaration opposing exclusion:

 “Dr. Omalu interprets and recalculates the findings in the 2004 Lee Study, finding that ‘an 80% confidence interval for the highly-exposed applicators in the 2004 Lee Study spans a relative risk range for NHL from slightly above 1.0 to slightly above 2.5.’ Dr. Omalu concludes that ‘this means that there is a 90% probability that the relative risk within the population studied is greater than 1.0’.”

705 F.Supp. 2d at 481 (internal citations omitted); see also id. at 488. The calculations and the rationale for an 80% confidence interval were not provided, but plaintiffs’ counsel assured Judge Fischer at oral argument that the calculation was done using high school math. Id. at 481 n.12. Judge Fischer seemed unimpressed, especially given that there was no record of the calculation.  Id. at 481, 488.
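Although no calculation appears in the record, the “high school math” can be guessed at. A hedged reconstruction, assuming Omalu started from the intensity-weighted highest-quartile result (RR = 1.61; 95% CI, 0.74 to 3.53) and simply recomputed the interval at the 80% level:

```python
import math

# Lee (2004), intensity-weighted analysis, highest quartile
rr, lo, hi = 1.61, 0.74, 3.53

# Recover the standard error of log(RR) from the reported 95% CI
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Recompute the interval at the 80% level (z = 1.2816)
z80 = 1.2816
lo80 = math.exp(math.log(rr) - z80 * se)
hi80 = math.exp(math.log(rr) + z80 * se)
print(f"80% CI: {lo80:.2f} to {hi80:.2f}")   # roughly 0.97 to 2.68
```

Even on this reconstruction, the interval’s lower bound sits slightly below 1.0, not “slightly above” it; and nothing in the exercise converts the interval’s 80% coverage into a 90% posterior probability that the true relative risk exceeds 1.0.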

The larger offense, however, was that Omalu’s interpretation of the 80% confidence interval as a probability statement of the true relative risk’s exceeding 1.0 was bogus. Dr. Omalu further displayed his lack of statistical competence when he attempted to defend his posterior probability, derived from his 80% confidence interval, by referring to a power calculation for a different disease in the Lee study:

“He [Omalu] further declares that ‘the authors of the 2004 Lee Study themselves endorse the probative value of a finding of elevated risk with less than a 95% confidence level when they point out that “this analysis had a 90% statistical power to detect a 1.5-fold increase in lung cancer incidence”.’”

Id. at 488 (court’s quotation of Omalu’s quotation from the Lee study). To quote Wolfgang Pauli, Omalu is so far off that he is “not even wrong.” Lee and colleagues were offering a pre-study power calculation, which they used to justify looking at the cohort for lung cancer outcomes, not NHL outcomes. Lee at 1787. The power calculation does not apply to the data observed for lung cancer, and the calculation has absolutely nothing to do with NHL. The power calculation certainly has nothing to do with Omalu’s misguided attempt to offer a calculation of a posterior probability for NHL based upon a subgroup confidence interval.
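Pre-study power is a design-stage quantity: it is computed for a specified alternative hypothesis (here, a 1.5-fold increase in lung cancer), given the anticipated precision of the study, before any data are seen. A minimal sketch, with an anticipated standard error chosen purely for illustration (the 0.125 figure is hypothetical, not taken from Lee):

```python
import math

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def pre_study_power(rr_alt, se_log_rr, z_crit=1.96):
    """Approximate power to detect a true relative risk of rr_alt,
    given the anticipated standard error of log(RR)."""
    return 1 - norm_cdf(z_crit - math.log(rr_alt) / se_log_rr)

# A hypothetical design: an anticipated SE(log RR) of 0.125 yields
# roughly 90% power for a 1.5-fold increase. Nothing in this
# calculation depends on, or says anything about, the observed data.
print(f"{pre_study_power(1.5, 0.125):.2f}")   # about 0.90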

Given that there were epidemiologic studies available, Judge Fischer noted that expert witnesses were obligated to factor such studies into their opinions. See 705 F.Supp. 2d at 483 (citing Soldo, 244 F.Supp. 2d at 532). Omalu’s sins against Rule 702 included his failure to consider any studies other than the Lee study, regardless of how unsupportive the Lee study was of his opinion. The defense experts pointed to several studies that found lower NHL rates among exposed workers than among controls, and Omalu completely failed to consider, and to explain his opinion in the face of, the contradictory evidence. See 705 F.Supp. 2d at 485 (citing Perry v. Novartis Pharm. Corp., 564 F.Supp. 2d 452, 465 (E.D. Pa. 2008)). In other words, Omalu was shown to have been a cherry picker. Id. at 489.

In addition to the abridged epidemiology, Omalu relied upon an analogy between benzene and the ethyltoluene and other solvents that contain benzene rings, to argue that these chemicals, supposedly like benzene, cause NHL. Id. at 487. The analogy was never supported by any citations to published studies, and, of course, the analogy is seriously flawed. Many chemicals, including chemicals made and used by the human body, have benzene rings, without the slightest propensity to cause NHL. Indeed, the evidence that benzene itself causes NHL is weak and inconsistent. See, e.g., Knight v. Kirby Inland Marine Inc., 482 F.3d 347 (5th Cir. 2007) (affirming the exclusion of Dr. B.S. Levy in a case involving benzene exposure and NHL).

Looking at all the evidence, Judge Fischer found Omalu’s general causation opinions unreliable.  Relying upon a single, statistically non-significant epidemiologic study (Lee), while ignoring contrary studies, was not sound science.  It was not even science; it was courtroom rhetoric.

Omalu’s approach to specific causation, the identification of what caused Mr. Pritchard’s NHL, was equally spurious. Omalu purportedly conducted a “differential diagnosis” or a “differential etiology,” but he never examined Mr. Pritchard; nor did he conduct a thorough evaluation of Mr. Pritchard’s medical records. 705 F.Supp. 2d at 491. Judge Fischer found that Omalu had not conducted a thorough differential diagnosis, and that he had made no attempt to rule out idiopathic or unknown causes of NHL, despite the general absence of known causes of NHL. Id. at 492. The one study identified by Omalu reported a non-statistically significant 60% increase in NHL risk, for a subgroup in one of two different exposure-response analyses.  Although Judge Fischer treated the relative risk less than two as a non-dispositive factor in her decision, she recognized that

“The threshold for concluding that an agent was more likely than not the cause of an individual’s disease is a relative risk greater than 2.0… . When the relative risk reaches 2.0, the agent is responsible for an equal number of cases of disease as all other background causes. Thus, a relative risk of 2.0 … implies a 50% likelihood that an exposed individual’s disease was caused by the agent. A relative risk greater than 2.0 would permit an inference that an individual plaintiff’s disease was more likely than not caused by the implicated agent.”

Id. at 485-86 (quoting from Reference Manual on Scientific Evidence at 384 (2d ed. 2000)).
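The arithmetic behind the relative-risk-of-two threshold is straightforward. On the Reference Manual’s assumptions (a valid, unconfounded, causal relative risk), the probability that the exposure caused a given case is the attributable fraction among the exposed:

```python
def attributable_fraction(rr):
    """Probability of causation under the standard assumptions:
    AF = (RR - 1) / RR."""
    return (rr - 1) / rr

print(attributable_fraction(2.0))    # 0.50 -- the break-even point
print(attributable_fraction(1.61))   # about 0.38 -- below "more likely
                                     # than not," even taking Lee's
                                     # subgroup RR at face value
```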

Left with nowhere to run, plaintiffs’ counsel swung for the bleachers by arguing that the federal court, sitting in diversity, was required to apply Pennsylvania law of evidence because the standards of Rule 702 constitute “substantive,” not procedural law. The argument, which had been previously rejected within the Third Circuit, was as legally persuasive as Omalu’s scientific opinions.  Judge Fischer excluded Omalu’s proffered opinions and granted summary judgment to the defendants. The Third Circuit affirmed in a per curiam decision. 430 Fed. Appx. 102, 2011 WL 2160456 (3d Cir. 2011).

Practical Evaluation of Scientific Claims

The evaluative process that took place in the Pritchard case missed some important details and some howlers committed by Dr. Omalu, but it was more than good enough for government work. The gatekeeping decision in Pritchard was nonetheless the target of criticism in a recent book.

Kristin Shrader-Frechette (S-F) is a philosopher of science who wants to teach us how to expose bad science. S-F has published, or will soon publish, a book that suggests that philosophy of science can help us expose “bad science.” See Kristin Shrader-Frechette, Tainted: How Philosophy of Science Can Expose Bad Science (Oxford U.P. 2014) [cited below as Tainted; selections available on Google Books]. S-F’s claim is intriguing, as is her move away from the demarcation problem to the difficult business of evaluation and synthesis of scientific claims.

In her introduction, S-F tells us that her book shows “how practical philosophy of science” can counteract biased studies done to promote special interests and PROFITS.  Tainted at 8. Refreshingly, S-F identifies special-interest science, done for profit, as including “individuals, industries, environmentalists, labor unions, or universities.” Id. The remainder of the book, however, appears to be a jeremiad against industry, with a blind eye towards the litigation industry (plaintiffs’ bar) and environmental zealots.

The book promises to address “public concerns” in practical, jargon-free prose. Id. at 9-10. Some of the aims of the book are to provide support for “rejecting demands for only human evidence to support hypotheses about human biology (chapter 3), avoiding using statistical-significance tests with observational data (chapter 12), and challenging use of pure-science default rules for scientific uncertainty when one is doing welfare-affecting science (chapter 14).”

Id. at 10. Hmmm.  Avoiding statistical significance tests for observational data?!?  If avoided, what does S-F hope to use to assess random error?

And then S-F refers to plaintiffs’ hired expert witness (from the Milward case), Carl Cranor, as providing “groundbreaking evaluations of causal inferences [that] have helped to improve courtroom verdicts about legal liability that otherwise put victims at risk.” Id. at 7. Whether someone is a “victim” and has been “at risk” turns on assessing causality. Cranor is not a scientist, and his philosophy of science turns on “weight of the evidence” (WOE), a subjective, speculative approach that is deaf, dumb, and blind to scientific validity.

There are other “teasers” in the introduction to Tainted. S-F advertises that her Chapter 5 will teach us that “[c]ontrary to popular belief, animal and not human data often provide superior evidence for human-biological hypotheses.” Tainted at 11. Chapter 6 will show that “[c]ontrary to many physicists’ claims, there is no threshold for harm from exposure to ionizing radiation.” Id. S-F tells us that her Chapter 7 will criticize “a common but questionable way of discovering hypotheses in epidemiology and medicine—looking at the magnitude of some effect in order to discover causes. The chapter shows instead that the likelihood, not the magnitude, of an effect is the better key to causal discovery.” Id. at 13. Discovering hypotheses — what is that about? You might have thought that hypotheses were framed from observations and then tested.

Which brings us to the trailer for Chapter 8, in which S-F promises to show that “[c]ontrary to standard statistical and medical practice, statistical-significance tests are not causally necessary to show medical and legal evidence of some effect.” Tainted at 11. Again, the teaser raises questions, such as what S-F could possibly mean when she says that statistical tests are not “causally necessary” to show an effect. Later in the introduction, S-F says that her chapter on statistics “evaluates the well-known statistical-significance rule for discovering hypotheses and shows that because scientists routinely misuse this rule, they can miss discovering important causal hypotheses.” Id. at 13. Discovering causal hypotheses is not what courts and regulators must worry about; their task is to establish such hypotheses with sufficient, valid evidence.

Paging through the book reveals a rhetoric that is thick and unremitting, with little philosophy of science or meaningful advice on how to evaluate scientific studies. The statistics chapter calls out, and lo, it features a discussion of the Pritchard case. See Tainted, Chapter 8, “Why Statistics Is Slippery: Easy Algorithms Fail in Biology.”

The chapter opens with an account of German scientist Fritz Haber’s development of organophosphate pesticides, and the Nazis’ use of related compounds as chemical weapons. Tainted at 99. Then, in a fevered non sequitur and rhetorical flourish, S-F states, with righteous indignation, that although the Nazi researchers “clearly understood the causal-neurotoxic effects of organophosphate pesticides and nerve gas,” chemical companies today “claim that the causal-carcinogenic effects of these pesticides are controversial.” Is S-F saying that a chemical that is neurotoxic must be carcinogenic for every kind of human cancer? So it seems.

Consider the Pritchard case.  Really, the Pritchard case?  Yup; S-F holds up the Pritchard case as her exemplar of what is wrong with civil adjudication of scientific claims.  Despite the promise of jargon-free language, S-F launches into a discussion of how the judges in Pritchard assumed that statistical significance was necessary “to hypothesize causal harm.”  Tainted at 100. In this vein, S-F tells us that she will show that:

“the statistical-significance rule is not a legitimate requirement for discovering causal hypotheses.”

Id. Again, the reader is left to puzzle why statistical significance is discussed in the context of hypothesis discovery, whatever that may be, as opposed to hypothesis testing or confirmation. And whatever it may be, we are warned that “unless the [statistical significance] rule is rejected as necessary for hypothesis-discovery, it will likely lead to false causal claims, questionable scientific theories, and massive harm to innocent victims like Robert Pritchard.”

Id. S-F is decidedly not adverting to Mr. Pritchard’s victimization by the litigation industry and the likes of Dr. Omalu, although she should. S-F not only believes that the judges in Pritchard bungled their gatekeeping; she knows that Dr. Omalu was correct, and the defense experts wrong, and that Pritchard was a victim of Dursban and of questionable scientific theories that were used to embarrass Omalu and his opinions.

S-F promised to teach her readers how to evaluate scientific claims and detect “tainted” science, but all she delivers here is an ipse dixit.  There is no discussion of the actual measurements, extent of random error, or threats to validity, for studies cited either by the plaintiffs or the defendants in Pritchard.  To be sure, S-F cites the Lee study in her endnotes, but she never provides any meaningful discussion of that study or any other that has any bearing on chlorpyrifos and NHL.  S-F also cited two review articles, the first of which provides no support for her ipse dixit:

“Although mutagenicity and chronic animal bioassays for carcinogenicity of chlorpyrifos were largely negative, a recent epidemiological study of pesticide applicators reported a significant exposure response trend between chlorpyrifos use and lung and rectal cancer. However, the positive association was based on small numbers of cases, i.e., for rectal cancer an excess of less than 10 cases in the 2 highest exposure groups. The lack of precision due to the small number of observations and uncertainty about actual levels of exposure warrants caution in concluding that the observed statistical association is consistent with a causal association. This association would need to be observed in more than one study before concluding that the association between lung or rectal cancer and chlorpyrifos was consistent with a causal relationship.

There is no evidence that chlorpyrifos is hepatotoxic, nephrotoxic, or immunotoxic at doses less than those that cause frank cholinesterase poisoning.”

David L. Eaton, Robert B. Daroff, Herman Autrup, James Bridges, Patricia Buffler, Lucio G. Costa, Joseph Coyle, Guy McKhann, William C. Mobley, Lynn Nadel, Diether Neubert, Rolf Schulte-Hermann, and Peter S. Spencer, “Review of the Toxicology of Chlorpyrifos With an Emphasis on Human Exposure and Neurodevelopment,” 38 Critical Reviews in Toxicology 1, 5-6 (2008).

The second cited review article was written by clinical ecology zealot[1], William J. Rea. William J. Rea, “Pesticides,” 6 Journal of Nutritional and Environmental Medicine 55 (1996). Rea’s article does not appear in Pubmed.

Shrader-Frechette’s Criticisms of Statistical Significance Testing

What is the statistical significance against which S-F rails? She offers several definitions, none of which is correct or consistent with the others.

“The statistical-significance level p is defined as the probability of the observed data, given that the null hypothesis is true.”8

Tainted at 101 (citing D. H. Johnson, “What Hypothesis Tests Are Not,” 16 Behavioral Ecology 325 (2004)). Well, not quite; the attained significance probability is the probability of the data observed, or data more extreme, given the null hypothesis. A Tainted definition.
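The “or more extreme” clause is not pedantry; it is what makes the p-value a tail probability computed under the null hypothesis, rather than a probability about the hypothesis. A toy example, using an exact binomial test of a hypothesized fair coin:

```python
from math import comb

def binom_two_sided_p(k, n, p0=0.5):
    """Exact two-sided p-value: the probability, computed under the
    null hypothesis, of an outcome at least as extreme (as improbable)
    as the one observed."""
    null_probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    return sum(p for p in null_probs if p <= null_probs[k])

# 60 heads in 100 tosses of a hypothesized fair coin:
print(round(binom_two_sided_p(60, 100), 3))   # about 0.057 -- the chance of
# data this extreme GIVEN a fair coin, not the chance that the coin is fair
```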

Later in Chapter 8, S-F discusses significance probability in a way that overtly commits the transposition fallacy, not a good thing to do in a book that sets out to teach how to evaluate scientific evidence:

“However, typically scientists view statistical significance as a measure of how confidently one might reject the null hypothesis. Traditionally they have used a 0.05 statistical-significance level, p < or = 0.05, and have viewed the probability of a false-positive (incorrectly rejecting a true null hypothesis), or type-1, error as 5 percent. Thus they assume that some finding is statistically significant and provides grounds for rejecting the null if it has at least a 95-percent probability of not being due to chance.”

Tainted at 101. Not only does the last sentence ignore the extent of error due to bias or confounding, it erroneously assigns a posterior probability that is the complement of the significance probability.  This error is not an isolated occurrence; here is another example:

“Thus, when scientists used the rule to examine the effectiveness of St. John’s Wort in relieving depression,14 or when they employed it to examine the efficacy of flutamide to treat prostate cancer,15 they concluded the treatments were ineffective because they were not statistically significant at the 0.05 level. Only at p < or = 0.14 were the results statistically significant. They had an 86-percent chance of not being due to chance.16”

Tainted at 101-02 (citing papers by Shelton (endnote 14)[2], by Eisenberger (endnote 15) [3], and Rothman’s text (endnote 16)[4]). Although Ken Rothman has criticized the use of statistical significance tests, his book surely does not interpret a p-value of 0.14 as an 86% chance that the results were not due to chance.
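A small simulation shows why the complement of the p-value is not the probability that a finding is real. That posterior probability depends upon how many of the tested hypotheses are true to begin with, a quantity the p-value does not know. The proportions below are made up for illustration:

```python
import random

random.seed(1)
trials = 100_000
share_real, power, alpha = 0.10, 0.80, 0.05   # hypothetical values

false_pos = true_pos = 0
for _ in range(trials):
    if random.random() < share_real:           # a real effect is present
        true_pos += random.random() < power    # detected with prob = power
    else:                                      # the null is true
        false_pos += random.random() < alpha   # "significant" by chance

# Among nominally "significant" results, the share that are false:
print(round(false_pos / (false_pos + true_pos), 2))   # about 0.36, not 0.05
```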

Although S-F previously stated that statistical significance is interpreted as the probability that the null is true, she actually goes on to correct the mistake, sort of:

“Requiring the statistical-significance rule for hypothesis-development also is arbitrary in presupposing a nonsensical distinction between a significant finding if p = 0.049, but a nonsignificant finding if p = 0.051.26 Besides, even when one uses a 90-percent (p < or = 0.10), an 85-percent (p < or = 0.15), or some other confidence level, it still may not include the null point. If not, these other p values also show the data are consistent with an effect. Statistical-significance proponents thus forget that both confidence levels and p values are measures of consistency between the data and the null hypothesis, not measures of the probability that the null is true. When results do not satisfy the rule, this means merely that the null cannot be rejected, not that the null is true.”

Tainted at 103.

S-F repeats some criticisms of significance testing, most of which rest upon misunderstandings of the concept. It hardly suffices to argue that evaluating the magnitude of random error is worthless because it does not measure the extent of bias and confounding. The flaw lies in those who would interpret the p-value as the sole measure of error involved in a measurement.

S-F takes the criticisms of significance probability to be sufficient to justify an alternative approach: evaluating causal hypotheses “on a preponderance of evidence,47 whether effects are more likely than not.”[5] Her citations, however, do not support the notion that an overall assessment of the causal hypothesis is a true alternative to statistical testing; such an assessment is rather a later step in the causal analysis, which presupposes the previous elimination of random variability as an explanation for the observed associations.

S-F compounds her confusion by claiming that this purported alternative is superior to significance testing or any evaluation of random variability, and by noting that juries in civil cases must decide causal claims on the preponderance of the evidence, not on attained significance probabilities:

“In welfare-affecting areas of science, a preponderance-of-evidence rule often is better than a statistical-significance rule because it could take account of evidence based on underlying mechanisms and theoretical support, even if evidence did not satisfy statistical significance. After all, even in US civil law, juries need not be 95 percent certain of a verdict, but only sure that a verdict is more likely than not. Another reason for requiring the preponderance-of-evidence rule, for welfare-related hypothesis development, is that statistical data often are difficult or expensive to obtain, for example, because of large sample-size requirements. Such difficulties limit statistical-significance applicability.”

Tainted at 105-06. S-F’s assertion that juries need not have 95% certainty in their verdicts is either a misunderstanding or a misrepresentation of the meaning of a confidence interval, and a conflation of two very different kinds of probability or certainty. S-F invites a reading that commits the transposition fallacy by confusing the probability involved in a confidence interval with that involved in a posterior probability. S-F’s claim that sample-size requirements often limit the ability to use statistical significance evaluations is obviously highly contingent upon the facts of the case, but in civil cases, such as Pritchard, this limitation is rarely at play. Of course, if the sample size is too small to evaluate the role of chance, then a scientist should probably declare the evidence too fragile to support a causal conclusion.

S-F also postulates that a posterior-probability approach, rather than a significance-probability approach, would “better counteract conflicts of interest that sometimes cause scientists to pay inadequate attention to public-welfare consequences of their work.” Tainted at 106. This claim is a remarkable assertion, which is not supported by any empirical evidence. The varieties of evidence that go into an overall assessment of a causal hypothesis are often quantitatively incommensurate. The so-called preponderance of the evidence described by S-F is often little more than a subjective overall assessment of the weight of the evidence. The approving citations to the work of Carl Cranor support interpreting S-F to endorse this subjective, anything-goes approach to the weight of the evidence. As for WOE eliminating inadequate attention to “public welfare,” S-F’s citations actually suggest the opposite. S-F’s citations to the 1961 reviews by Wynder and by Little illustrate how subjective narrative reviews can be, with diametrically opposed results. Rather than curbing conflicts of interest, these subjective, narrative reviews illustrate how contrary results may be obtained by the failure to pre-specify criteria of validity, and of inclusion and exclusion of admissible evidence. Still, S-F asserts that “up to 80 percent of welfare-related statistical studies have false-negative or type-II errors, failing to reject a false null.” Tainted at 106. The support for this assertion is a citation to a review article by David Resnik. See David Resnik, “Statistics, Ethics, and Research: An Agenda for Education and Reform,” 8 Accountability in Research 163, 183 (2000). Resnik’s paper is a review article, not an empirical study, but at the page cited by S-F, Resnik in turn cites well-known papers that present actual data:

“There is also evidence that many of the errors and biases in research are related to the misuses of statistics. For example, Williams et al. (1997) found that 80% of articles surveyed that used t-tests contained at least one test with a type II error. Freiman et al. (1978)  * * *  However, empirical research on statistical errors in science is scarce, and more work needs to be done in this area.”

Id. The papers cited by Resnik, Williams (1997)[6] and Freiman (1978)[7], did identify previously published studies that over-interpreted statistically non-significant results, but the identified type-II errors were potential errors, not ascertained errors, because the authors made no claim that every non-statistically significant result actually represented a missed true association. In other words, S-F is not entitled to say that these empirical reviews actually identified failures to reject false null hypotheses. Furthermore, the empirical analyses in the studies cited by Resnik, who was in turn cited by S-F, did not look at correlations between alleged conflicts of interest and statistical errors. The cited research calls for greater attention to proper interpretation of statistical tests, not for their abandonment.

In the end, at least in the chapter on statistics, S-F fails to deliver much, if anything, on her promise to show how to evaluate science from a philosophic perspective. Her discussion of the Pritchard case is not an analysis; it is a harangue. There are certainly more readable, accessible, scholarly, and accurate treatments of the scientific and statistical issues than this book provides. See, e.g., Michael B. Bracken, Risk, Chance, and Causation: Investigating the Origins and Treatment of Disease (2013).


[1] Not to be confused with the deceased federal judge by the same name, William J. Rea. William J. Rea, 1 Chemical Sensitivity – Principles and Mechanisms (1992); 2 Chemical Sensitivity – Sources of Total Body Load (1994); 3 Chemical Sensitivity – Clinical Manifestation of Pollutant Overload (1996); 4 Chemical Sensitivity – Tools of Diagnosis and Methods of Treatment (1998).

[2] R. C. Shelton, M. B. Keller, et al., “Effectiveness of St. John’s Wort in Major Depression,” 285 Journal of the American Medical Association 1978 (2001).

[3] M. A. Eisenberger, B. A. Blumenstein, et al., “Bilateral Orchiectomy With or Without Flutamide for Metastic [sic] Prostate Cancer,” 339 New England Journal of Medicine 1036 (1998).

[4] Kenneth J. Rothman, Epidemiology 123–127 (NY 2002).

[5] Endnote 47 references the following papers: E. Hammond, “Cause and Effect,” in E. Wynder, ed., The Biologic Effects of Tobacco 193–194 (Boston 1955); E. L. Wynder, “An Appraisal of the Smoking-Lung-Cancer Issue,”264  New England Journal of Medicine 1235 (1961); see C. Little, “Some Phases of the Problem of Smoking and Lung Cancer,” 264 New England Journal of Medicine 1241 (1961); J. R. Stutzman, C. A. Luongo, and S. A McLuckey, “Covalent and Non-Covalent Binding in the Ion/Ion Charge Inversion of Peptide Cations with Benzene-Disulfonic Acid Anions,” 47 Journal of Mass Spectrometry 669 (2012). Although the paper on ionic charges of peptide cations is unfamiliar, the other papers do not eschew traditional statistical significance testing techniques. By the time these early (1961) reviews were written, the association that was reported between smoking and lung cancer was clearly accepted as not likely explained by chance.  Discussion focused upon bias and potential confounding in the available studies, and the lack of animal evidence for the causal claim.

[6] J. L. Williams, C. A. Hathaway, K. L. Kloster, and B. H. Layne, “Low power, type II errors, and other statistical problems in recent cardiovascular research,” 42 Am. J. Physiology Heart & Circulation Physiology H487 (1997).

[7] Jennie A. Freiman, Thomas C. Chalmers, Harry Smith and Roy R. Kuebler, “The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial: survey of 71 ‛negative’ trials,” 299 New Engl. J. Med. 690 (1978).

Climategate on Appeal

August 17th, 2014

Michael Mann, a Professor of Meteorology at Penn State University, studies and writes about climate change. When the email servers of the University of East Anglia were hacked in 2009, Mann’s emails were among those used to suggest that there was a conspiracy to suppress evidence inconsistent with climate change.

Various committees investigated the allegations of scientific malfeasance in what has come to be known as “climategate”; none found evidence of scientific misconduct. Some of the committees, however, did urge that the investigators engage in greater sharing of their supporting data, methods, and materials.

In February 2010, Penn State issued a report of its investigation, which found there was “no credible evidence that Dr. Mann had or has ever engaged in, or participated in, directly or indirectly, any actions with an intent to suppress or to falsify data.” A Final Investigation Report, from Penn State in June 2010, further cleared Mann.

In the United Kingdom, a Parliamentary Committee on Science and Technology published a report, in March 2010, finding that the criticisms of the Climate Research Unit (CRU) at the University of East Anglia (UEA) were not well founded. A month later, the UEA issued a Report of an International Panel, which found no evidence of deliberate scientific malpractice. Another UEA report, the Independent Climate Change Email Review report, found no reason to doubt the honesty of the scientists involved. An official UK governmental report, in September 2010, similarly cleared the climate researchers of wrongdoing.

The view from this side of the Atlantic largely exonerated the climate researchers of any scientific misconduct. An EPA report, in July 2010, dismissed the email content as merely a “candid discussion” among scientists collaborating on complex data. An independent review by the Department of Commerce’s Inspector General found no “inappropriate” manipulation of data in the emails. The National Science Foundation reported, in August 2011, that it could discern no research misconduct in the climategate emails.

Rand Simberg, an adjunct scholar with the Competitive Enterprise Institute (CEI), wrote a blog post, “The Other Scandal in Unhappy Valley” (July 13, 2012), in which he referred to Mann and his research as “wrongdoing” and “hockey-stick deceptions.” Simberg describes the hacked UEA emails as having “revealed” that Mann “had been engaging in data manipulation to keep the blade on his famous hockey-stick graph.” Similarly, Simberg states that “many of the luminaries of the ‛climate science’ community were shown to have been behaving in a most unscientific manner.”

The current on-line version of Simberg’s blog post ends with a note[1]:

*Two inappropriate sentences that originally appeared in this post have been removed by the editor.

A post by Mark Steyn on the National Review online website called Mann’s hockey stick “fraudulent.” A subsequent National Review piece offered that in “common polemical usage, ‛fraudulent’ doesn’t mean honest-to-goodness criminal fraud. It means intellectually bogus and wrong.”

Legal counsel for Penn State wrote the Competitive Enterprise Institute, in August 2012, to request an apology from Simberg and the CEI, and a retraction of Simberg’s blog post. I am not sure what the two subsequently removed “inappropriate sentences” in Simberg’s piece said, or when the sentences were removed, but Dr. Mann, represented by Cozen O’Connor, went on to sue Mark Steyn, Rand Simberg, the CEI, and National Review, for libel, in October 2012, in the Superior Court of the District of Columbia. Further publications led to an Amended Complaint in 2013.

Mann obviously does not like being called the author of fraudulent and intellectually bogus work, and he claims that the publications by Simberg and Steyn are libelous as “allegations of academic fraud.”

The D.C. Superior Court denied defendants’ motion to dismiss, setting up interlocutory appeals to the D.C. Court of Appeals, which is the highest court for the District. The appellate court allowed an interlocutory appeal, with a schedule that calls for appellants’ briefs by August 4, 2014. Dr. Mann’s brief is due by September 3, 2014, and appellants’ reply briefs by September 24, 2014.  The Court set consideration of the appeal for its November calendar.

Defendants CEI and National Review filed their opening briefs last week. This week, on August 11, 2014, the Cato Institute, Reason Foundation, Individual Rights Foundation, and Goldwater Institute filed a brief in support of CEI and National Review. Other amici who filed in support of the defendants are Mark Steyn, the District of Columbia, the Alliance Defending Freedom, and the Electronic Frontier Foundation.

I am not sure that all the epithets point to academic fraud. Some of the adjectives, such as “bogus,” do not really connote scienter or intent to deceive. The use of the adjective “fraudulent,” however, does connote intentional falsity, designed to mislead. Deceit and intent to mislead seem to be at the heart of an accusation of fraud.

On appeal, the defendants and their amici predictably rely heavily upon the First Amendment to protect their speech, but surprisingly, they characterize labeling someone’s research as “fraudulent” as merely “hyperbolic” or “robust” debate and polemics.

Some of the defendants’ other arguments are even more surprising. For instance, Cato correctly points out that “Courts are ill-suited to officiate scientific debate to determine ‛truth’ or ‛falsity’.” True, but officiate they must in criminal fraud, intellectual property, product liability, and securities fraud cases, as well as in many other kinds of litigation. Cato admonishes that the “[e]volution of scientific thought over time highlights the danger of courts[’] determining ‛truth’ in public debate.” Dangerous indeed, but a commonplace in state and federal courts throughout the land.

Is this Think Tank Thuggery or robust free speech? The use of “fraudulent” seems to be an accusation, and it would have been much more “robust” to have documented carefully exactly what Professor Mann’s supposed deviation from a scientific standard of care was.


[1] The words “fraud” and “fraudulent” do not appear in the current on-line version of Simberg’s post.

Zoloft MDL Excludes Proffered Testimony of Anick Bérard, Ph.D.

June 27th, 2014

Anick Bérard is a Canadian perinatal epidemiologist at the Université de Montréal. Bérard was named by plaintiffs’ counsel in the Zoloft MDL to offer an opinion that selective serotonin reuptake inhibitor (SSRI) antidepressants as a class, and Zoloft (sertraline) specifically, cause a wide range of birth defects. Bérard previously testified against GSK in support of her claim that paroxetine, another SSRI antidepressant, is a teratogen.

Pfizer challenged Bérard’s proffered testimony under Federal Rules of Evidence 104(a), 702, 703, and 403.  Today, the Zoloft MDL transferee court handed down its decision to exclude Dr. Bérard’s testimony at the time of trial.  In re Zoloft (Sertraline Hydrochloride) Prods. Liab. Litig., MDL 2342, Document 979 (June 27, 2014).  The MDL court acknowledged the need to consider the selectivity (“cherry picking”) of studies upon which Dr. Bérard relied, as well as her failure to consider multiple comparisons, ascertainment bias, confounding by indication, and lack of replication of specific findings across the different SSRI medications, and across studies. Interestingly, the MDL court recognized that Dr. Bérard’s critique of studies as “underpowered” was undone by her failure to consider available meta-analyses or to conduct one of her own. The MDL court seemed especially impressed by Dr. Bérard’s having published several papers that rejected a class effect of teratogenicity for all SSRIs, as recently as 2012, while failing to identify anything that was published subsequently that could explain her dramatic change in opinion for litigation.

NIEHS Study – CHARGE Failure to Disclose Conflicts of Interest

June 23rd, 2014

At midnight, Environmental Health Perspectives (EHP) posted an “in-press” paper on autism and pesticides, slated for full publication in the next few weeks. Janie F. Shelton, Estella M. Geraghty, Daniel J. Tancredi, Lora D. Delwiche, Rebecca J. Schmidt, Beate Ritz, Robin L. Hansen, and Irva Hertz-Picciotto, “Neurodevelopmental Disorders and Prenatal Residential Proximity to Agricultural Pesticides: The CHARGE Study,” Envt’l Health Persp. (advance publication: June 23, 2014).

The paper was embargoed until midnight, but the principal investigator, Prof. Irva Hertz-Picciotto, violated that embargo by talking about the study’s results in a YouTube video, posted two weeks ago. See “Selective Leaking — Breaking Ingelfinger’s Rule” (June 20, 2014).

The paper is already attracting media attention. Predictably, the coverage trades on inaccurate and misleading terms, such as “links” and “increased risks.” See, e.g., Agence France-Presse, “Study finds link between pesticides and autism” (Yahoo news story claiming “link” in headline, but noting in text that the study findings “do not show cause-and-effect”); Arielle Duhaime-Ross (The Verge), “Study further confirms link between autism and pesticide exposure: Living near farms and fields can put a foetus at risk” (June 23, 2014, 12:01 am) (filed one minute after the embargo was officially lifted, and declaring that “neurotoxins, which include everything from pesticides, to mercury and diesel, are thought to alter brain development in foetuses. Now, a new study further confirms this link by showing that pregnant women who live within a mile of farms and fields where pesticides are employed see their risk of having a child with autism increase by 60 percent — and that risk actually doubles if the exposure occurs in the third trimester”); Zoë Schlanger (Newsweek), “Autism Risk Much Higher for Children of Pregnant Women Living Near Agricultural Pesticide Areas” (June 23, 2014).

There are few more incendiary issues than autism or brain damage and environmental exposures.  The media is unlikely to look very critically at this paper.  News reports talk of “links” and “increased risks,” but they do not look at methodological problems and limitations.  They should.

The media should also look at conflicts of interest (COIs). Well, in an ideal world, the media and everyone else would stop trying to use COIs as a proxy for interpreting study validity. The reality, however, is that much of the media treats corporate financial interests as sufficient reason to discount or disregard a study.  If the media want to avoid being hoisted with their own hypocritical petard, they will look closely at the undisclosed COIs in this new paper by Shelton, et al.

First, they will note that the authors disclose that they have no COIs:

“Competing financial interests: The authors have no competing financial interests.”

Second, the media will note that EHP provides explicit instructions to authors on COI disclosures:

Competing Financial Interests

“EHP has a policy of full disclosure. Authors must declare all actual or potential competing financial interests involving people or organizations that might reasonably be perceived as relevant. Disclosure of competing interests does not imply that the information in the article is questionable or that conclusions are biased. Decisions to publish or reject an article will not be based solely on a declaration of a competing interest.

***

Employment of any author by a for-profit or nonprofit foundation or advocacy group or work as a consultant also must be indicated on the CFID form.”

EHP Instructions to authors (2013).

Third, the media will ask whether the COI disclosure (“none”) was proper. The study is one in a series of papers that comes out of research funded by the federal government, THE CHARGE STUDY: CHILDHOOD AUTISM RISKS FROM GENETICS AND THE ENVIRONMENT (2R01ES015359-06). Journalists may want to look, in the first instance, to the principal investigator, Irva Hertz-Picciotto. Hertz-Picciotto is an epidemiologist at the University of California, Davis, where she is the chief of the Division of Environmental and Occupational Health, Department of Public Health Sciences.

Fourth, the media may want to ask whether Dr. Hertz-Picciotto’s COI disclosure complied with the journal’s requirements.  Recall that EHP requires authors to disclose work or consultancy for a “nonprofit foundation or advocacy group… .” Dr. Hertz-Picciotto sits on the advisory board of Autism Speaks, an advocacy group. More telling, Hertz-Picciotto also serves on the advisory board of the radically anti-chemical Healthy Child, Healthy World organization, located in California (12100 Wilshire Blvd. Suite 800, Los Angeles CA 90025).  According to its website, Healthy Child Healthy World is a California non-profit corporation that advocates to:

“• Demand corporate accountability
• Engage communities for collective action
• Support safer chemicals and products
• Influence legislative and regulatory reform.”

Both organizations would seem to come under the EHP COI disclosure policy, but these memberships are not disclosed in the on-line article. Certainly, these affiliations are every bit as potentially enlightening about the principal investigator’s motivations and methodological choices as corporate sponsorship. Of course, it is possible that Dr. Hertz-Picciotto made these disclosures, but the EHP editors chose not to make them public.  If so, shame on the editors.

Most important, the media should provide critical review of the substance of the Shelton paper, and certainly more than sound bites on COIs or “links.” For one thing, even a quick review shows that there are four exposure periods (pre-conception, and three trimesters of pregnancy), two outcome variables (autism spectrum disorder and developmental delay), five exposure substances, and three exposure proximities, for 120 comparisons. The statistical analysis in the paper uses an alpha of 0.05 for each comparison, without any adjustment for multiplicity, which leaves the study-wise Type I error rate uncontrolled and cannot be used to evaluate any one of the 120 comparisons. The paper’s use of “statistical significance” terminology should be taken with a grain of salt.[1] For another thing, many of the risk factors identified in other studies are not addressed here. See, e.g., Xin Zhang, Cong-Chao Lv, Jiang Tian, Ru-Juan Miao, Wei Xi, Irva Hertz-Picciotto, and Lihong Qi, “Prenatal and Perinatal Risk Factors for Autism in China,” 40 J. Autism Dev. Disord. 1311 (2010) (“In the adjusted analysis, nine risk factors showed significant association with autism: maternal second-hand smoke exposure, maternal chronic or acute medical conditions unrelated to pregnancy, maternal unhappy emotional state, gestational complications, edema, abnormal gestational age (<35 or >42 weeks), nuchal cord, gravidity >1, and advanced paternal age at delivery (>30 year-old).”). Ultimately, a more demanding inquiry may be required to investigate the extent to which anti-pesticide advocacy groups have actually created an apparent increase in autism rates by informational, political, and environmental campaigns.
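The multiplicity problem is easy to quantify. A short sketch of the study-wise error rate, treating the comparisons as independent (the comparisons in Shelton et al. are correlated, so this is only an approximation):

```python
periods, outcomes, substances, proximities = 4, 2, 5, 3
k = periods * outcomes * substances * proximities
print(k)                    # 120 comparisons
print(1 - 0.95 ** k)        # about 0.998: near-certainty of at least one
                            # nominally "significant" result even if every
                            # null hypothesis were true
```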


[1] See United States v. Harkonen, No. C 08–00164 MHP, 2010 WL 2985257, at *1 (N.D. Cal. July 27, 2010), aff’d, 510 F. App’x 633, 636 (9th Cir. Mar. 4, 2013)(affirming wire fraud conviction for author of press release who failed to disclose that endpoint was not prespecified, and failed to adjust for multiple comparisons), cert. denied, ___ U.S. ___ (Dec. 16, 2013).


Selective Leaking — Breaking Ingelfinger’s Rule

June 20th, 2014

The government wants us to believe that Snowden is a very evil man because he is a “leaker,” but the government leaks the information that it wants the world to have, and keeps confidential the rest.  The double standard is obvious.

Scientific publishing has its own double standard as well.  In 1969, Franz J. Ingelfinger, as editor of the prestigious New England Journal of Medicine (NEJM), set out two conditions for publication in the Journal:

(1) an embargo on articles and their content, when slated for publication in the NEJM, and

(2) a prohibition against duplicative publication or presentation of the substance of the article in any other journal or media source.

These conditions became known as the Ingelfinger rule, a rule that authors were willing to agree to in advance because they received a prestigious publication in the NEJM, and attention from the media because of that publication. The “rule” has been under relentless criticism, but has been defended in a modified form by subsequent NEJM editors. See Arnold S. Relman, “The Ingelfinger Rule,” 305 New Engl. J. Med. 824 (1981); Arnold S. Relman, “More on the Ingelfinger Rule,” 318 New Engl. J. Med. 1125 (1988); Marcia Angell & Jerome P. Kassirer, “The Ingelfinger Rule Revisited,” 325 New Engl. J. Med. 1371 (1991).  Most journals have followed the lead of the NEJM by implementing a similar set of conditions on publication.

The Ingelfinger Rule could be quite pointy when thrust into a researcher’s face.  I recall well how my cousin, conducting research on viral diseases, ran into the Rule, when his laboratory uncovered important information about the transmission of HIV. He and his colleagues wrote up their work, which was accepted by the NEJM, but the Centers for Disease Control felt that the information needed to be made public immediately.  The NEJM threatened to withdraw publication, and the intervention of a medical school dean was ultimately required to broker a compromise that allowed the authors and the NEJM to keep the publication.  See Joseph E. Fitzgibbon, Sunanda Gaur, Lawrence D. Frenkel, Fabienne Laraque, Brian R. Edlin, and Donald T. Dubin, “Transmission from one child to another of Human Immunodeficiency Virus type 1 with a zidovudine-resistance mutation,” 329 New Engl. J. Med. 1835 (1993).

In a recent blog post, scientist Dr. David Schwartz writes about a soon-to-be-published observational study of autism and environmental exposures. See David Schwartz, “New Study Expected to Impugn Pesticides, But Is It Legit?” (June 20, 2014). Dr. Schwartz explains:

“A new epidemiological study linking autism with pesticide exposure is expected to surface soon, according to our sources in the scientific community. The study was commissioned by Childhood Autism Risks from Genes and Environment (CHARGE). We anticipate that the mainstream media and some in the scientific community will latch onto the study because the rising rate of autism is alarming, and the public is understandably searching for answers. But we would caution the scientific community, the media, and the public to approach the study with skepticism instead of automatically buying into the results.

Speaking about the results, ahead of publication in a recent video, (posted on June 10, 2014) the senior author, Dr. Hertz-Picciotto, states: “there were associations with several classes of pesticides.” She goes on to state: “this is actually the third study to show some link with the organophosphates and autism risk.” I was surprised to see a study author talking publicly about the results of an embargoed study, since other scientists and journalists are precluded from talking about the study until after its publication.”

Dr. Irva Hertz-Picciotto is the principal investigator of the study, which is funded by the federal government, THE CHARGE STUDY: CHILDHOOD AUTISM RISKS FROM GENETICS AND THE ENVIRONMENT (2R01ES015359-06). The University of California at Davis is the funded institution. The study is due to be published in Environmental Health Perspectives (EHP), but according to Dr. Schwartz, the paper is still under embargo, and it does not appear on the EHP website, even on the advance-publication page. Unlike Dr. Hertz-Picciotto, Dr. Schwartz observed the embargo and did not comment upon the substance of the paper. What is regrettable is that EHP tolerates this selective leaking of the paper’s content, by its author, in apparent violation of its embargo policy. Perhaps everyone should join in and disregard the policy, to keep the discussion balanced and to permit all sides to be heard.

Autism is a serious concern, which has been the subject of a great deal of biased and confounded research, and of advocacy in courtrooms and elsewhere. The thimerosal-autism scare is still playing out in the Vaccine Court. Whatever the merits or demerits of Hertz-Picciotto’s soon-to-be-published study, what is disturbing is the cavalier breaking of the embargo by the principal investigator, on YouTube of all places. Given the anxiety and concern over autism, scrupulous adherence to the journal’s policies would have seemed prudent, to give serious scientific journalists a chance to comment critically on the paper at issue.

Supreme Court Denies Certiorari in Harkonen v. United States

December 16th, 2013

The Supreme Court took up Dr. Harkonen’s petition for certiorari on Friday, December 13, 2013.  The SCOTUS Blog made the petition its “Petition of the Day.”

Unfortunately, the blog’s attention did not carry over to the Court’s conference. The Court released its orders this morning, which included a denial of certiorari in Harkonen v. United States.

Although there was a good deal of noise about Dr. Harkonen’s intent, that issue is uniquely case specific. The real issue going forward would seem to be the reach of the government’s position, and now the Ninth Circuit’s position, that a claim of causation, made without disclosing multiple testing or deviation from a protocol, is demonstrably false or misleading. In Dr. Harkonen’s case, the causal inference was reasonably supported despite the non-protocol analysis. The verb “demonstrate,” however, is often used carelessly, and the Harkonen decision may well breathe life into Rule 702 gatekeeping.


Harkonen’s Appeal Updated

October 9th, 2013

The Solicitor General’s office has obtained yet another extension in which to file its opposition to Dr. Harkonen’s petition for a writ of certiorari. The new due date is November 8, 2013.

This week, Nature published a news article on the Harkonen case. See Ewen Callaway, “Uncertainty on trial,” 502 Nature 17 (2013). Mr. Callaway’s story accurately recounts that Thomas Fleming, a biostatistician at the University of Washington, chaired the data safety monitoring board for the InterMune trial, and that he had told Dr. Harkonen and others at InterMune that he, Fleming, believed that the press release was misleading. But this “fact” simply represents that Fleming disagreed with the causal inference of efficacy. His opinion might well have been correct, but it did not make Dr. Harkonen’s press release “demonstrably” false. Overstating confidence in a conclusion may be the occasion for disputing the evidentiary warrant for the conclusion, but it does not make the speaker a liar.

Callaway also reports that the government believed that documents suggested that there had been off-label promotion of interferon γ-1b, but of course, the jury acquitted Dr. Harkonen of misbranding. Callaway’s recitation of these discredited allegations, however, provides important context for why the federal government continues to overreach by pressing its opposition to Dr. Harkonen’s appeal on the conviction for criminal wire fraud.

Mr. Callaway notes that there were “statisticians, clinical researchers and legal scholars” who criticized the judgment of conviction on grounds that it rested upon misinterpretations and misunderstandings of statistics, and that it could criminalize much expert witness testimony, grant applications, and article submissions. But Mr. Callaway’s presentation is subtly biased. He fails to identify those “statisticians, clinical researchers and legal scholars,” other than a few whom he then impugns as having been “compensated” by the defense.

He quotes Stanford Professor Steven Goodman as filing a brief stating that:

“You don’t want to have on the books a conviction for a practice that many scientists do, and in fact think is critical to medical research… .”

Callaway errs in suggesting that Professor Goodman was a brief writer rather than an affiant.  Dr. Zibrak, a pulmonary physician, is quoted, with the note that he was compensated to tell other physicians about his clinical experience with interferon γ-1b in patients with idiopathic pulmonary fibrosis.  By playing the “compensation card,” Callaway tries to diminish the force of Goodman’s and Zibrak’s substantive arguments.  This sly attempt, however, is blunted by the significant number of legal scholars and scientists who filed amicus briefs without compensation.  More important, the attempt is irrelevant to the issues in the case.

MISLEADING REPORTING

Callaway described the trial as showing that only “slightly fewer” patients had died on interferon γ-1b than on placebo, but that the difference was not statistically significant “because the probability that it was not due to the drug was greater than 5%, a widely accepted statistical threshold.”  Well, the p-value was 0.08 on the intent-to-treat analysis, and 0.055 on the per-protocol analysis.  When the investigators published a more sophisticated time-to-event analysis in the New England Journal of Medicine, their reported “hazard ratio for death in the interferon gamma-1b group, as compared with the placebo group, was 0.3 (95 percent confidence interval, 0.1 to 0.9).” Ganesh Raghu, et al., “A Placebo-Controlled Trial of Interferon Gamma-1b in Patients with Idiopathic Pulmonary Fibrosis,” 350 N. Engl. J. Med. 125 (2004) (for the entire trial, not the “controversial” subgroup).  Callaway notes the publication of the results, but fails to inform Nature’s readers of this hazard ratio or its confidence interval.  Some might say that Mr. Callaway misled readers by inaptly describing this hazard ratio as “slightly fewer” deaths on interferon γ-1b than on placebo, and by failing to provide all the pertinent information.
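The arithmetic here is worth a moment. From a published ratio estimate and its 95% confidence interval, one can back-calculate an approximate p-value, using the method Altman and Bland described for extracting a p-value from a confidence interval. A minimal sketch in Python, using only the rounded figures the investigators published; the variable names are mine:

```python
# Back-calculate an approximate two-sided p-value from a published
# hazard ratio and its 95% confidence interval (Altman-Bland method).
# Rounding in the published interval makes the result approximate.
import math
from statistics import NormalDist

hr, lo, hi = 0.3, 0.1, 0.9  # hazard ratio and 95% CI, as published

se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of the log hazard ratio
z = math.log(hr) / se                            # z-statistic on the log scale
p = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided p-value

print(f"z = {z:.2f}; approximate two-sided p = {p:.3f}")
```

On these figures, the sketch yields a p-value of roughly 0.03; a 95% confidence interval that excludes 1.0 necessarily corresponds to p &lt; 0.05. In other words, the full-trial survival analysis that Callaway passed over was nominally statistically significant by the very threshold his article invokes.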

Oh, wait. Is a failure to present all the facts fraud?

TRANSPOSITION FALLACY

Perhaps more ironic is that Mr. Callaway’s own interpretation of statistical significance is wrong. The p-value is not “the probability that it was not due to the drug,” or the probability that the null hypothesis is true.  Good thing that Mr. Callaway does not live in the United States, where statistical errors of this sort can be a criminal offense.
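Stated in symbols, the fallacy consists of transposing the terms of a conditional probability:

\[
p \;=\; \Pr(\text{data at least as extreme as observed} \mid H_0) \;\neq\; \Pr(H_0 \mid \text{data}).
\]

Bayes’ theorem shows why the two quantities need not even be close:

\[
\Pr(H_0 \mid \text{data}) \;=\; \frac{\Pr(\text{data} \mid H_0)\,\Pr(H_0)}{\Pr(\text{data})},
\]

which depends upon the prior probability of the null hypothesis, a quantity that no p-value supplies.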

The Nature news story quoted Gordon Guyatt, from McMaster University, who thinks that Dr. Harkonen skewed the findings:

“This guy gave a very unbalanced presentation; whether it is sufficiently unbalanced that you should send him to jail, I don’t know… .”

But the data were all accurately presented; it was the use of the verb “demonstrate” that triggered the prosecution.  And it was hardly a “presentation”; it was a press release, which clearly communicated that a presentation was forthcoming within a couple of weeks at a scientific conference.

The story also cites Patricia Zettler, a former FDA attorney who now teaches at Stanford Law School, for her doubts that the case will matter to most scientists.  See also Patricia Zettler, “U.S. v. Harkonen: Should Scientists Worry About Being Prosecuted for How They Interpret Their Research Results?” (Oct. 7, 2013).  If her prediction is correct, then this is a sad commentary on the scientific community.  Ms. Zettler suggests that the Supreme Court is likely to deny the petition, leaving Dr. Harkonen’s conviction in place, and that the denial will not seriously affect scientific discourse.  If this suggestion is true, then the courts will have acquiesced in a very selective prosecution, given the widespread prevalence of the statistical reporting practices that were on trial here.

As much as everyone would like to see editors, scientists, governments, companies, and universities held to higher standards in science reporting, criminalizing the commonplace because the speaker is an unpopular scientist who has a commercial, as well as a scientific, interest is profoundly disturbing.  Ultimately, all scientists, from private or public sectors, from academic or non-academic institutions, have financial or reputational interests to be advanced in their communication of scientific results.

The irony is that many federal judges would not exclude an expert witness who would testify under oath to a conclusion based upon much weaker evidence than Dr. Harkonen presented in his press release, a release that itself announced a much fuller discussion at a scientific conference only weeks away.  If Ms. Zettler is correct, it will be much more difficult for federal and state trial judges to reject challenges to expert witness testimony based upon statistically “non-significant” results, with the old “goes to the weight, not the admissibility” excuse.

Trevor Ogden’s Challenge to the Lobby’s Hypocrisy

July 6th, 2013

Trevor Ogden, the editor of the Annals of Occupational Hygiene, addressed the sharing of underlying research data in an editorial a few years ago.  See Trevor Ogden, “Data Sharing, Federal Rule of Evidence 702, and the Lions in the Undergrowth,” 53 Ann. Occup. Hyg. 651 (2009). Ogden was responding to attacks on industry-sponsored research, and to demands that the exposure data from such studies be made available as a condition of publication in the Annals.

Ogden reported that he was sympathetic, to an extent, with the attack on industry bona fides, but that editorial board discussions had raised several issues with data sharing:

“(1) The researcher puts a lot of effort into getting good exposure data and may have plans for their further use; also access to the unpublished data can be an asset in getting further grants.

(2) It takes time and effort to prepare data for publication, and in the short term the people who do this to make their data available are not the ones who benefit by their availability.

(3) There may be problems with confidentiality and liability for the workplaces where the measurements were obtained.

(4) The data may be misused; in particular, they may be reinterpreted by those with a commercial interest in undermining the conclusions drawn by the original researchers.”

Id. at 652.

Had Ogden stopped there, he might have been spared the unceremonious attacks by members of “The Lobby,” but he went further, to point out that some of the accusers (David Michaels; McCulloch & Tweedale) were guilty of their own rhetorical excesses.

While acknowledging that industry has taken errant positions or distorted research data on occasions, Ogden thought it was important to note that:

“industry is not always wrong, and campaigners can overlook this because it is easier to identify the paymaster than judge the science.”

* * * *

“It is a mistake if we think that because the industry helped pay for the study and has exploited the findings in its propaganda, the results must necessarily be wrong—life, including science, is not this simple.”

Id. at 653-54.

Ogden offered, as a scientist would, further alternative explanations for why industry-sponsored scientific research appears to yield results favorable to the sponsor:

“It seems that an industry-sponsored study is much more likely to find results favourable to industry, but this may partly or wholly be because non-industry researchers find it harder to publish negative or inconclusive results. Scientific studies must be judged primarily on the quality of the evidence, not on who pays for them.”

Id. at 654. Ogden might also have opened his mind to the possibility that some government agency and academic scientists are biased in favor of outcomes that support greater agency regulation and control of occupational and environmental exposures. In any event, Ogden interpreted the situation to require skepticism of all positions, both pro- and anti-industry:

“This is not a very encouraging picture. It looks as if we cannot trust industry, and its critics are not very reliable either.”

Ogden would thus not let any side off the hook when it came to disclosures of potential conflicts of interest:

“Declarations of interest in publications are essential, especially if the authors are likely to be involved in legal testimony. Failure to offer this must be treated very seriously.”

Id. at 655.

Even Ogden’s more modest alternative explanation and his balanced comments provoked shouts of outrage from “the Lobby.” See “The Lobby Lives – Lobbyists Attack IARC for Conducting Scientific Research” (Feb. 19, 2013).  Ogden, however, gave them ample space in which to voice their disagreements. See Celeste Monforton, Colin Soskolne, John Last, Joseph Ladou, Daniel Teitelbaum & Kathleen Ruff, “Comment on: Ogden T (2009) ‘Data sharing, Federal Rule of Evidence 702, and the lions in the undergrowth’,” 54 Ann. Occup. Hyg. 365 (2010); Barry I. Castleman, Fernand Turcotte & Morris Greenberg, “Comment on: Ogden T (2009) ‘Data sharing, Federal Rule of Evidence 702, and the Lions in the Undergrowth’,” 54 Ann. Occup. Hyg. 360 (2010).

The remarkable thing about the Lobby’s letters to the editor is that they scolded industry for conflicts of interest, but failed to reveal their own.  Celeste Monforton, for instance, declared her academic affiliations, but overlooked her connection with an anti-Daubert advocacy organization funded by left-over common-benefit trust fund money from the silicone gel breast implant litigation. See “SKAPP A LOT” (April 30, 2010). Monforton and all her co-authors did, however, report their membership in the Rideau Institute on International Affairs, a Canadian “non-profit” organization established in 2007. They failed to disclose that the Rideau Institute engages in lobbying and advocacy efforts for trade unions and “non-profits.”  See Rideau Institute website (“The Rideau Institute is an independent research, advocacy, and consulting group based in Ottawa. It provides research, analysis and commentary on public policy issues to decision makers, opinion leaders and the public.”).  Several of Monforton’s co-authors have testified, some frequently, for the litigation industry (plaintiffs) in occupational and environmental exposure cases.  Daniel Thau Teitelbaum, for instance, was an early testifier in the silicone breast implant litigation, and his proffered testimony was the subject of analysis in General Electric Co. v. Joiner, 522 U.S. 136 (1997).

Barry Castleman’s letter is even more offensive to its own stated principles of extirpating conflicted science.  Castleman has been part of the litigation industry’s expert witness army in asbestos cases for over three decades.

Ogden’s statement of the problem was insightful, even if not definitive.  His suggestion that “hostile” analysts should be kept from access to underlying data ignores the intense need for such access in areas of science that inform litigation and regulation.  As George A. Olah pointed out in his Nobel Prize address, scientists need adversaries to keep them creative, focused, and accurate. Ogden’s call for disclosure of interests, “especially if the authors are likely to be involved in legal testimony,” ignores that litigants on both sides need access to scientific expertise on the issues that drive litigation and regulatory battles.  More distressingly, however, Ogden’s journal let his interlocutors slide on their obligation to disclose their own deep financial and positional conflicts of interest.

EPA Research on Ultrafine Particulate Matter

June 26th, 2013

White Hat Bias

Hyping environmental and so-called toxic risks has gone on so long that many Americans have no sense of the truth when it comes to the causal consequences of personal, occupational, and environmental exposures.  Recently, I listened to a lecture given by Judge Calabresi of the Second Circuit.  In the course of talking about regulatory prohibitions and tort-law incentives, he told of his visit to the late Professor Bickel, who had then just been diagnosed with brain cancer.  In his lecture, Judge Calabresi stated that he knew that Bickel’s brain cancer was caused by his smoking, and went on to muse whether banning smoking would have saved his friend’s life.  Judge Calabresi’s ruminations upon the nature of regulation and tort law were profound; his cursory hipshot about what caused his friend’s terminal illness, juvenile.

Some years ago, a science journalist published an account of how dire predictions of asbestos deaths had not come to pass.  Tom Reynolds, “Asbestos-Linked Cancer Rates Up Less Than Predicted,” 84 J. Nat’l Cancer Inst. 560 (1992).

Reynolds quoted one scientist as saying that:

“the government’s exaggeration of the asbestos danger reflects a 1970s’ Zeitgeist that developed partly in response to revelations of industry misdeeds.  ‘It was sort of the “in” thing to exaggerate … [because] that would be good for the environmental movement’….  ‘At the time it looked like you were wearing a white hat if you made these wild estimates. But I wasn’t sure whoever did that was doing all that much good.’”

Id. at 562.  Reynolds’ quote captures the nature of “white-hat” bias, a form of political correctness applied to issues that really depend upon scientific method and data for their resolution.  Perhaps the temptation to overstate the evidence against a toxic substance is unavoidable, but it diminishes the authority and credibility of regulators entrusted with promulgating and enforcing protective measures.

White-Hat Bias & Black-Hat Ethics

I recently came across a disturbing article in Environmental Health Perspectives, a peer-reviewed journal supported by the National Institute of Environmental Health Sciences, National Institutes of Health, United States Department of Health and Human Services.  The article was a case report of an individual woman experimentally exposed to fine particulate matter (PM 2.5) in a test chamber.  Andrew J. Ghio, Maryann Bassett, Tracey Montilla, Eugene H. Chung, Candice B. Smith, Wayne E. Cascio, and Martha Sue Carraway, “Case Report: Supraventricular Arrhythmia after Exposure to Concentrated Ambient Air Pollution Particles,” 120 Envt’l Health Perspect. 275 (2012) [Ghio article].  There were no controls.

The point of the case report was that a person exposed experimentally to PM 2.5 experienced a “cardiac event” (atrial fibrillation, or AFib), which resolved after cessation of exposure.  The experiment was conducted in a federal agency facility, the Environmental Public Health Division, National Health and Environmental Effects Research Laboratory, U.S. Environmental Protection Agency, in Chapel Hill, North Carolina.

The authors stated that the experiment had the approval of the University of North Carolina School of Medicine Committee on the Protection of the Rights of Human Subjects. Given that the EPA has made extraordinary claims about the harmfulness of PM 2.5, including heart disease, lung disease, and cancers, and that there was no imaginable benefit to the subject from participating in the experiment, this experiment seemed dubious indeed.  The narrative of the case report, however, reveals even more disturbing information about the potential improprieties of the human experiment.

The PM Chamber

The human guinea pig was 58 years old.  Her age is significant: although the authors claim that AFib is uncommon among those under 60, its incidence increases with age.  The human subject in the published experiment was close to the age at which AFib is no longer uncommon, and she was unwell to begin with.

The case report notes that the human subject had previously participated in the same exposure “protocol” without “complications.”  The report does not explain why this human subject was returning to the EPA center to be placed in a “chamber” and exposed sequentially to “filtered air and concentrated ambient particles (CAPs).” The implication is that, with repeated exposures in the PM 2.5 chamber, she was a likely candidate eventually to experience a “cardiac event.”

The “Subject”

The human subject was not well.  Although she was asymptomatic on the day of the experimental exposure to CAPs, she had a history of osteoarthritis and hypertension.  The latter condition was being treated with an angiotensin-converting enzyme inhibitor and a diuretic (10 mg lisinopril and 12.5 mg hydrochlorothiazide).  The subject had had surgeries for hernia repair, cholecystectomy, and knee arthroplasty. She was a little over 5 feet 6 inches tall and obese, weighing over 230 pounds, with a 45-inch waist.

In addition to her chronic hypertension, obesity, and musculoskeletal disease, the subject also had a family and personal history of heart disease.  Her father had died from a myocardial infarction at the age of 57.  Immediately before the experimental exposure, a Holter monitor showed evidence of increased supraventricular ectopy, with 157 ± 34 premature atrial contractions per hour.

The investigators do not tell us how the experimental exposure compared with typical urban exposures, or with EPA regulatory standards, guidelines, or recommendations.  About 23 minutes after the CAPs exposure began (filter weight, 112 μg/m³; particle number, 563,912/cc), the human subject developed a “nonsustained atrial fibrillation that quickly organized into atrial flutter.”  The woman remained asymptomatic, and her EKG showed that she spontaneously reverted to a normal sinus rhythm.

The investigators acknowledge that there are many risk factors for AFib, and that the subject had several of them: hypertension, obesity, and possibly family history.  The woman had a history of premature atrial contractions, which may have increased her risk for AFib.

Despite this rich clinical background, the authors claim, without apparently trying very hard, that there was no “obvious” explanation for the subject’s arrhythmia while in the chamber.  They argue, however, that the exposure to PM 2.5 was causal because the arrhythmia began in the chamber and resolved when the subject was removed from exposure.  The argument is rather weak, considering that the subject may have been stressed by the mere fact of placement in a chamber, or by being wired up to monitors.  See, e.g., Luana Colloca & Damien Finniss, “Nocebo Effects, Patient-Clinician Communication, and Therapeutic Outcomes,” 307 J. Am. Med. Ass’n 567, 567 (2012). The authors of the PM 2.5 case report acknowledge that “coincident atrial fibrillation cannot be excluded,” but they fail to deal with the potentially transient nature of AFib.

Human experimentation requires a strong rationale, in terms of benefit to the experimental participant.  What was the rationale for this human experiment?  Here is what the EPA investigators posited:

“Although epidemiologic data strongly support a relationship between exposure to air pollutants and cardiovascular disease, this methodology does not permit a description of the clinical presentation in an individual case. To our knowledge, this is the first case report of cardiovascular disease after exposure to elevated concentrations of any air pollutant.”

Ghio at 275. The authors seemed to be saying that we know that PM 2.5 causes cardiovascular disease, but we wanted to be able to describe a person in the throes of a cardiovascular event brought on by exposure. See also Andrew J. Ghio, Jon R. Sobus, Joachim D. Pleil, Michael C. Madden, “Controlled human exposures to diesel exhaust,” 142 Swiss Med. Weekly w13597 (2012).

The Whole Truth

The Ghio article mentions only the one woman who experienced the mild, transient AFib.  A reader might wonder whether she was the only test subject.  Why was she retested after a previously incident-free experience in the chamber? How many other people were subjected to this protocol?

What is remarkable is that the authors claim not only an “association,” but causality, in a totally uncontrolled experiment, and without ruling out chance, bias, or confounding.  The article is both deficient in scientific methodological rigor and dubious on ethical principles.

EPA – Hoisted With Its Own Petard

What I did not realize when I first read this experimental case report is that the article had been a cause célèbre of the anti-regulatory right.  The EPA had walked right onto an ethical landmine, first by sponsoring this research, and then by relying upon it to support a regulatory report. Steven Milloy editorialized about the EPA research, and followed up with a FOIA investigation. In September 2012, a regulatory watchdog group filed a lawsuit to strike an EPA report that was based in part upon the questionable research.

Milloy’s strategy was designed to impale the EPA on the horns of a dilemma:

“I accused EPA of either: (1) conducting unethical human experimentation or exaggerating the dangers of fine airborne particulate matter (PM2.5). It must be one or the other; it can’t be neither, according to EPA’s own documents.”

Steven Milloy, “Did Obama’s EPA relaunch Tuskegee experiments?” (April 24, 2012).  The EPA had branded diesel particulate, which was used in the experiments, as “carcinogenic” and “lethal,” even from short exposures.  Accordingly, if the EPA were sincere, it should never have conducted the experiment documented in Environmental Health Perspectives.  If the agency believed that PM 2.5 was innocuous, then it unethically exaggerated and overstated its dangers.

In the course of his FOIA initiative, Milloy did obtain answers to some of the questions I had from reading the Ghio article.  There were apparently about 40 human subjects, who were exposed to PM 2.5 in chambers fed by diesel exhaust or other sources.  The exposure levels were upwards of 20 times what the EPA has labeled a “permissible” level.  Of course, the EPA is positionally committed to a “linear, no-threshold” model of carcinogenesis, which makes any exposure to a substance the agency “knows” to cause cancer ethically improper.
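The ethical point follows directly from the model’s form. Under a linear, no-threshold dose-response, excess risk is taken to be proportional to dose, with no threshold below which the modeled risk vanishes:

\[
R(d) \;=\; \beta\, d, \qquad \beta > 0, \qquad \text{so } R(d) > 0 \text{ for every dose } d > 0.
\]

On the agency’s own assumptions, then, every chamber exposure to a “known” carcinogen imposes some increment of cancer risk on the test subject, and no exposure can be excused as trivially safe.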

The information from the FOIA requests puts the Ghio article in an extremely bad light.  The investigators had run about 40 other people through their chamber without ill effect, but failed to mention those cases in reporting on the one subject who developed an arrhythmia.  The consent forms and IRB documents show that the investigators were specifically interested in “vulnerable” patients, who had diabetes, asthma, and the like.  The fair inference is that the investigators wanted to provoke an anecdote that would support their causal narrative, which they believed had already been established by epidemiologic evidence.  This seems like a scientific hat trick: bad science, bad ethics, and bad publication practice.

The lawsuit did not fare well.  Predictably, it foundered on the lack of final agency action, and on the plaintiff’s lack of standing.  On January 31, 2013, Judge Anthony J. Trenga dismissed the complaint, after having previously denied a temporary restraining order.  American Tradition Institute Environmental Law Center v. U.S. EPA, Case No. 1:12-cv-01066-AJT-TCB (E.D. Va. 2013).  While legally correct, the opinion is blandly devoid of any sense of ethical concern.

From a brief search, there does not appear to be an appeal to the Fourth Circuit in the works.  Of course, there were no personal injuries alleged in the ATI lawsuit, and the human subject in the Ghio article appears not to have sued.  Despite the lack of legal recourse, the science “right” is up in arms over the EPA’s duplicity.  In a recent publication, Milloy and a co-author quote former EPA Administrator Lisa Jackson, at a congressional hearing:

“Particulate matter causes premature death. It’s directly causal to dying sooner than you should.”

When Representative, now Senator, Edward J. Markey asked, “How would you compare [the benefits of reducing airborne PM2.5] to the fight against cancer?” Jackson answered hyperbolically:

“If we could reduce particulate matter to healthy levels, it would have the same impact as finding a cure for cancer in our country.”

Steve Milloy & John Dale Dunn, “Environmental Protection Agency’s Air Pollution Research: Unethical and Illegal?” 17 J. Am. Phys. & Surg. 109 (2012) (quoting Jackson).  The FOIA requests and other background materials on this EPA research can be found on one of Milloy’s websites.

Sadly, I found answers to the questions raised by the Ghio article only because of the anti-regulatory activism of Milloy and the American Tradition Institute.  The white-hat bias remains a potent force in regulatory agencies, and in scientific laboratories.

Irving Selikoff and the Right to Peaceful Dissembling

June 5th, 2013

Among concerned writers on corporate conflicts of interest, it is a commonplace that industrial sponsors of epidemiologic and other research selectively publish the studies favorable to their positions in litigation and regulatory controversies.  In my experience, most companies are fairly scrupulous about publishing the studies they have funded.  If there is a correlation between industry funding and outcome, it is largely the result of corporate funding being directed to areas in which weak, corrupt, or politically motivated “public-interested” scientists have already published studies with dubious results.  Common sense suggests that a fair test of such claims will often yield exonerative results.

It is also a commonplace that academic and public-spirited researchers have no similar motives to suppress unfavorable results.  Again, in my experience, the opposite is true.  Consider that paragon of the public-interested, political scientist, the late Dr. Irving Selikoff. During the course of discovery in the Caterinicchio case, I obtained manuscripts of two studies that Selikoff and his colleague, Bill Nicholson, prepared but never published.  One study examined the mortality, and especially the cancer mortality, of workers at a Johns-Manville asbestos product manufacturing plant in New Jersey.  William J. Nicholson & Irving J. Selikoff, “Mortality experience of asbestos factory workers; effect of differing intensities of asbestos exposure” (circa 1988).

Selikoff’s failure to publish this manuscript on the Manville plant workers is curious, given his tireless and repeated republication of data from his insulator cohort.  For those familiar with Selikoff’s agenda, the failure to publish this paper appears to have had an obvious goal:  suppressing the nature and extent of Johns-Manville’s use of crocidolite asbestos in its products:

“[O]ther asbestos varieties (amosite, crocidolite, anthophyllite) were also used for some products. In general, chrysotile was used for textiles, roofing materials, asbestos cements, brake and friction products, fillers for plastics, etc.; chrysotile with or without amosite for insulation materials; chrysotile and crocidolite for a variety of asbestos cement products.”

Id.  The suppression of studies obviously takes place outside the world of commercial or industrial interests.  See “Selikoff and the Mystery of the Disappearing Amphiboles.”

There was yet another study never published by Selikoff: his work, again with Bill Nicholson, on the mortality of shipyard workers at the Electric Boat Company, in Groton, Connecticut. Irving Selikoff & William Nicholson, “Mortality Experience of 1,918 Employees of the Electric Boat Company, Groton, Connecticut, January 1, 1967 – June 30, 1978” (Jan. 27, 1984) [cited below as Electric Boat].

Many of the asbestos cases that worked their way through the legal system in the 1980s and 1990s were filed by shipyard workers.  Most of these shipyard workers were not insulators, but claimed bystander exposure to asbestos from work near insulators.  Invariably, the expert witnesses for these shipyard worker plaintiffs relied upon risk data from the Selikoff cohort of asbestos insulators, even though Selikoff himself cautioned against using the insulator data for non-insulators:

“These particular figures apply to the particular groups of asbestos workers in this study.  The net synergistic effect would not have been the same if their smoking habits had been different; and it probably would have been different if their lapsed time from first exposure to asbestos dust had been different or if the amount of asbestos dust they had inhaled had been different.”

Selikoff et al., “Asbestos Exposure, Cigarette Smoking and Death Rates,” 330 Ann. N.Y. Acad. Sci. at 487 (1979).

Having access to Selikoff’s shipyard worker data would have been extremely useful to the fact-finding process, because these data failed to support the cancer projections used by testifying expert witnesses.  Selikoff and Nicholson pointed out that about 50% of the Electric Boat shipyard workers had X-ray abnormalities.  Electric Boat at 2. (This finding must be interpreted in the darkness of Selikoff’s documented propensity to overread chest X-rays for asbestos findings.  See Rossiter, “Initial repeatability trials of the UICC/Cincinnati classification of the radiographic appearances of pneumoconioses,” 29 Brit. J. Indus. Med. 407 (1972) (reporting Selikoff’s readings as among the most extreme outliers in a panel of pulmonary and radiology physicians; Selikoff read films as showing an abnormal profusion of small, irregular densities up to twice as often as the most reliable readers in the study).)

Selikoff’s unpublished Electric Boat study cautioned that the mortality data reflected short duration and latency, and that the full extent of asbestos-related manifestations had not been reached.  Electric Boat at 3.  This assertion was not really borne out by the data.  Selikoff’s paper reported the following observed and expected data for lung cancer:

Years from onset of employment:   10-14   15-19   20-24   25-29   30+    TOTAL
Observed:                           4      23.3    15      3       4      35
Expected:                           1.3    17.7    8.1     4.7     5.1    25.9

The study is primitive even by the standards of its day.  There is no control for smoking, and there are no data on smoking habits.  There are no data on radiation exposure. (Electric Boat built nuclear submarines.) No p-values or confidence intervals are supplied; nor are any estimates of trend included.
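The omitted statistics are easy enough to supply, at least for the table’s totals. A minimal sketch follows, assuming the usual Poisson model for the observed count and the standard chi-square construction of an exact confidence interval for a standardized mortality ratio; the variable names are mine, not the study’s:

```python
# SMR and exact 95% Poisson confidence interval for the totals in the
# Selikoff/Nicholson lung cancer table (35 observed vs. 25.9 expected).
from scipy.stats import chi2

observed, expected = 35, 25.9  # totals from the table above

smr = observed / expected
lower = chi2.ppf(0.025, 2 * observed) / (2 * expected)        # exact lower bound
upper = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)  # exact upper bound

print(f"SMR = {smr:.2f}; 95% CI: {lower:.2f} to {upper:.2f}")
```

On these totals, the SMR is about 1.35, with an exact 95% interval running from roughly 0.94 to 1.88. The interval spans 1.0; even the aggregate lung cancer excess fails to reach conventional statistical significance.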

Despite Selikoff’s assertion that the follow-up period was not sufficiently long to capture asbestos-related malignancies, the data tell a different story.  The lung cancer observed-to-expected ratios are elevated at 10-14 years, and at 15-19 years, from first employment, which suggests that the cohort likely had non-asbestos-related risks for lung cancer at work before it entered the lagged period in which elevated asbestos-related risks might emerge.  Although the numbers are smaller for the intervals beyond 20 years from first employment, the observed numbers and risk ratios of lung cancers hardly suggest very much in the way of an occupational asbestos risk.

These data were obtained only because Bill Nicholson often served as an expert witness for plaintiffs in personal injury actions.  When he did so in New Jersey, he was subject to fairly broad discovery obligations, and thus I was able to obtain his unpublished studies.  Otherwise, the public and the scientific community learned only what Selikoff selectively disclosed in media interviews.  See Samuel G. Freedman, “Worker’s suit over asbestos at Groton shipyard to open,” New York Times (Jan. 19, 1982) (noting the 50% prevalence finding, but not the mortality data).