The recent issue of Environmental Health Perspectives contains several articles on scientific methodology that should interest lawyers who litigate claimed health effects.[1] The issue also contains a commentary that argues for greater transparency in science and science policy, which should be a good thing, and yet the commentary has the potential to obscure and confuse. Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik].
David B. Resnik has a Ph.D. in philosophy from the University of North Carolina and a law degree from the online Concord University School of Law. He is currently a bioethicist and the chairman of the NIEHS Institutional Review Board. Kevin Elliott received his doctorate in the History and Philosophy of Science (Notre Dame), and he is currently an Associate Professor at Michigan State University. Elliott and Resnik advance a plea for transparency that superficially is as appealing as motherhood and apple pie. The authors argue
“that society is better served when scientists strive to be as transparent as possible about the ways in which interests or values may influence their reasoning.”
The argument appears innocuous. Indeed, in addition to the usual calls for greater disclosure of conflicts of interest, the authors call for more data sharing and less tendentious data interpretation:
“When scientists are aware of important background assumptions or values that inform their work, it is valuable for them to make these considerations explicit. They can also make their data publicly available and strive to acknowledge the range of plausible interpretations of available scientific information, the limitations of their own conclusions, the prevalence of various interpretations across the scientific community, and the policy options supported by these different interpretations.”
Alas, we may as well wish for the Kingdom of Heaven on Earth! An ethos or a requirement of publicly sharing data would indeed advance the most important transparency: the transparency that would allow full exploration of the inferences and conclusions claimed in a particular study. Despite this high-mindedness, however, the authors’ argument becomes muddled when they conflate scientific objectivity with subjective values:
“In the past, scientists and philosophers have argued that the best way to maintain science’s objectivity and the public’s trust is to draw a sharp line between science and human values or policy (Longino 1990). However, it is not possible to maintain this distinction, both because values are crucial for assessing what counts as sufficient evidence and because ethical, political, economic, cultural, and religious factors unavoidably affect scientific judgment (Douglas 2009; Elliott 2011; Longino 1990; Resnik 2007, 2009).”
This argument confuses the pathology of science with what actually makes science valuable and enduring. The Nazis invoked cultural arguments, explicitly or implicitly, to reject “Jewish” science; religious groups in the United States invoke religious and political considerations to place creationism on an equal or superior footing with evolution; anti-vaccine advocacy groups embrace case reports over rigorous epidemiologic analyses. To be sure, these and other examples show that “ethical, political, economic, cultural, and religious factors unavoidably affect scientific judgment,” and yet science can and does transcend them. There is no Jewish or Nazi science; indeed, there is no science worthy of the name that comes from any revealed religion or cult. As Tim Minchin has pointed out, alternative medicine is either known not to work or not known to work, because if alternative medicine is known to work, then we call it “medicine.” The authors are correct that these subjective influences require awareness and understanding of prevalent beliefs, prejudices, and corrupting influences, but those influences do not, and should not, upset our commitment to an evidence-based world view.
Elliott and Resnik are focused on environmentalism and environmental policy, and they seem to want to substitute various presumptions, leaps of faith, and unproven extrapolations for actual evidence and valid inference, in the hope of improving the environment and reducing risk to life. The authors avoid the obvious resolution: value the environment, but acknowledge ignorance and uncertainty. Rather than allow precautionary policies to advance with a confession of ignorance, the authors want to retain their ability to claim knowledge even when they simply do not know, just because the potential stakes are high. The circularity becomes manifest in their ambiguous use of “risk,” which strictly means a known causal relationship between an exposure and some deleterious outcome. There is a much weaker usage, popularized by journalists and environmentalists, in which “risk” refers to something that might cause a deleterious outcome. The “might” here does not refer to a known probabilistic or stochastic relationship between the ex ante risk and the outcome, but rather to uncertainty whether the relationship exists at all. We can see the equivocation in how the authors attempt to defend the precautionary principle:
“Insisting that chemicals should be regulated only in response to evidence from human studies would help to prevent false positive conclusions about chemical toxicity, but it would also prevent society from taking effective action to minimize the risks of chemicals before they produce measurable adverse effects in humans. Moreover, insisting on human studies would result in failure to identify some human health risks because the diseases are rare, or the induction and latency periods are long, or the effects are subtle (Cranor 2011).”
Elliott & Resnik at 648.
If there is uncertainty about the causal relationship, then by calling some exposures “risks,” the authors prejudge whether there will be “adverse effects” at all. This is just muddled. If the relationship is uncertain, and false positive conclusions are possible, then we simply cannot claim to know that there will be such adverse effects, without assuming what we wish to prove.
The authors compound the muddle by introducing a sliding scale of “standards of evidence,” which appears to involve both variable posterior probabilities that the causal claim is correct and variable weighting of types of evidence. It is difficult to see how this will aid transparency and reduce confusion. Indeed, we can see how manipulative the authors’ so-called transparency becomes in the context of evaluating causal claims in pharmaceutical approvals versus tort claims:
“Very high standards of evidence are typically expected in order to infer causal relationships or to approve the marketing of new drugs. In other social contexts, such as tort law and chemical regulation, weaker standards of evidence are sometimes acceptable to protect the public (Cranor 2008).”
Remarkably, the authors cite no statute, no case law, and no legal treatise for the proposition that the tort law standard for causation is somehow lower than the standard for a claim of drug efficacy before the Food and Drug Administration. The one author they cite, Carl Cranor, is neither a scientist nor a lawyer, but a philosophy professor who has served as an expert witness for plaintiffs in tort litigation (usually without transparently disclosing his litigation work). As for the erroneous equation of tort and regulatory standards, there is, of course, much real legal authority to the contrary.[2]
The authors go on to suggest that demanding
“the very highest standards of evidence for chemical regulation—including, for example, human evidence, accompanying animal data, mechanistic evidence, and clear exposure data—would take very long periods of time and leave the public’s health at risk.”
Elliott & Resnik at 648.
Of course, the point is that until such data are developed, we really do not know whether the public’s health is at risk. Transparency would be aided not by some sliding and slippery scale of evidence, but by frank admissions that we do not know whether the public’s health is at risk, and that we choose to act anyway, imposing whatever costs, inconvenience, and further uncertainty follow, sometimes by promoting alternatives that carry even greater risk or uncertainty. Environmentalists rarely want to advance such wishy-washy proposals, devoid of claims of scientific knowledge that their regulations will avoid harm and promote health, but honesty and transparency require such admissions.
The authors advance another claim in their commentary: transparency in the form of more extensive disclosure of conflicts of interest will aid sound policy formulation. To their credit, the authors do not limit the need for disclosure to financial benefits; rather, they take an appropriately expansive view:
“Disclosures of competing financial interests and nonfinancial interests (such as professional or political allegiances) also provide opportunities for more transparent discussions of the impact of potentially implicit and subconscious values (Resnik and Elliott 2013).”
Elliott & Resnik at 649. Problematically, however, when the authors discuss some specific instances of apparent conflicts, they note industry “ties,” of the authors of an opinion piece on endocrine disruptors[3], but they are insensate to the ties of critics, such as David Ozonoff and Carl Cranor, to the litigation industry, and of others to advocacy groups that might exert much more substantial positional bias and control over those critics.
The authors go further in suggesting that women perceive greater risk than men do, and presumably we must know whether we are being presented with a feminist or a masculinist risk assessment. Will self-reported gender suffice, or must we have a karyotype? Perhaps we should have tax returns and a family pedigree as well? The call for transparency seems, at bottom, a call for radical subjectivism, infused with smug beliefs whose holders want to be excused from real epistemic standards.
[1] In addition to the Elliott and Resnik commentary, see Andrew A. Rooney, Abee L. Boyles, Mary S. Wolfe, John R. Bucher, and Kristina A. Thayer, “Systematic Review and Evidence Integration for Literature-Based Environmental Health Science Assessments,” 122 Envt’l Health Persp. 711 (2014); Janet Pelley, “Science and Policy: Understanding the Role of Value Judgments,” 122 Envt’l Health Persp. A192 (2014); Kristina A. Thayer, Mary S. Wolfe, Andrew A. Rooney, Abee L. Boyles, John R. Bucher, and Linda S. Birnbaum, “Intersection of Systematic Review Methodology with the NIH Reproducibility Initiative,” 122 Envt’l Health Persp. A176 (2014).
[2] Sutera v. The Perrier Group of America, 986 F. Supp. 655, 660 (D. Mass. 1997); In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (Weinstein, J.), aff’d, 818 F.2d 145 (2d Cir. 1987); Allen v. Pennsylvania Engineering Corp., 102 F.3d 194, 198 (5th Cir. 1996) (distinguishing regulatory pronouncements from causation in common law actions, which requires higher thresholds of proof); Glastetter v. Novartis Pharms. Corp., 107 F. Supp. 2d 1015, 1036 (E.D. Mo. 2000), aff’d, 252 F.3d 986 (8th Cir. 2001); Wright v. Willamette Indus., Inc., 91 F.3d 1105 (8th Cir. 1996); Siharath v. Sandoz Pharms. Corp., 131 F. Supp. 2d 1347, 1366 (N.D. Ga. 2001), aff’d, 295 F.3d 1194 (11th Cir. 2002).
[3] Daniel R. Dietrich, Sonja von Aulock, Hans Marquardt, Bas Blaauboer, Wolfgang Dekant, Jan Hengstler, James Kehrer, Abby Collier, Gio Batta Gori, Olavi Pelkonen, Frans P. Nijkamp, Florian Lang, Kerstin Stemmer, Albert Li, Kai Savolainen, A. Wallace Hayes, Nigel Gooderham, and Alan Harvey, “Scientifically unfounded precaution drives European Commission’s recommendations on EDC regulation, while defying common sense, well-established science and risk assessment principles,” 62 Food Chem. Toxicol. A1 (2013).