Transparency, Confusion, and Obscurantism

In NIEHS Transparency? We Can See Right Through You (July 10, 2014), I chastised authors Kevin C. Elliott and David B. Resnik for their confusing and confused arguments about standards of proof, the definition of risk, and conflicts of interest (COIs). See Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik]. In their focus on environmentalism and environmental policy, Elliott and Resnik seem intent upon substituting various presumptions, leaps of faith, and unproven extrapolations for actual evidence and valid inference, in the hope of improving the environment and reducing risk to life. But to get to their goal, Elliott and Resnik engage in various equivocations and ambiguities in their use of “risk,” and they compound the muddle by introducing a sliding scale of “standards of evidence” for legal, regulatory, and scientific conclusions.

Dr. David H. Schwartz is a scientist who received his doctoral degree in Neuroscience from Princeton University and his postdoctoral training in Neuropharmacology and Neurophysiology at the Center for Molecular and Behavioral Neuroscience at Rutgers University. Dr. Schwartz has since gone on to found one of the leading scientific consulting firms, Innovative Science Solutions (ISS), which supports both regulatory and litigation claims and defenses, as may be scientifically appropriate. Given his experience, Dr. Schwartz is well positioned to address the standards for scientific evidentiary conclusions across the regulatory, litigation, and scientific communities.

In this month’s issue of Environmental Health Perspectives (EHP), Dr. David Schwartz adds to the criticism of Elliott and Resnik’s tendentious editorial. David H. Schwartz, “Policy and the Transparency of Values in Science,” 122 Envt’l Health Persp. A291 (2014). Schwartz points out that “[a]lthough … different venues or contexts require different standards of evidence, it is important to emphasize that the actual scientific evidence remains constant.” Id.

Dr. Schwartz points out that transparency is needed in how standards and evidence are represented in scientific and legal discourse, and he takes Elliott and Resnik to task for arguing, from ignorance, that litigation burdens are different from scientific standards. At times, some writers misrepresent the nature of their evidence, or its weakness, and when challenged, attempt to excuse their laxness in standards by adverting to the regulatory or litigation contexts in which they are speaking. In some regulatory contexts, the burdens of proof are deliberately reduced, or shifted to the regulated industry. In litigation, however, the standard or burden of proof is rarely different from that of the scientific enterprise itself. As the United States Supreme Court made clear, trial courts must inquire whether an expert witness “employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). Expert witnesses who fail to exercise the same intellectual rigor in the courtroom as in the laboratory are eminently disposable or excludable from the legal process.

Schwartz also points out, as I had in my blog post, that “[w]hen using science to inform policy, transparency is critical. However, this transparency should include not only financial ties to industry but also ties to advocacy organizations and other strongly held points of view.”

In their Reply to Dr. Schwartz, Elliott and Resnik concede the importance of non-financial conflicts of interest, but they dig in on the supposedly lower standard of proof for scientific claims in tort litigation:

“we caution against equating the standards of evidence expected in tort law with those expected in more traditional scientific contexts. The tort system requires only a preponderance of evidence (> 50% likelihood) to win a case; this is much weaker evidence than scientists typically demand when presenting or publishing results, and confusion about these differing standards has led to significant legal controversies (Cranor 2006).”

Rather than citing any pertinent or persuasive legal authority, Elliott and Resnik cite an expert witness, Carl Cranor, neither a lawyer nor a scientist, who has worked steadfastly for the litigation industry (the plaintiffs’ bar) on various matters. The “caution” of Elliott and Resnik is directly contradicted by the Supreme Court’s pronouncement in Kumho Tire, and it is fueled by an ignoratio elenchi that rests upon a confusion between the coverage probability of confidence intervals under repeated sampling (usually 95%) and the posterior probability of a claim: namely, the probability of the claim given the admissible evidence. As the Reference Manual on Scientific Evidence makes clear, these are very different probabilities, which Cranor and others have consistently conflated. Elliott and Resnik ought to know better.
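A back-of-the-envelope Bayes calculation makes the distinction concrete. The sketch below is illustrative only; the assumed prior probability that a claimed effect is real (10%) and the assumed study power (80%) are hypothetical values, not drawn from Elliott and Resnik, Cranor, or the Reference Manual. It shows that a result declared “significant at the 95% confidence level” need not carry anything close to a 95% probability of being true, given the evidence.

```python
# Illustrative only: the prior and power below are hypothetical values chosen
# to show that the 95% attached to a confidence interval (or a 0.05
# significance level) describes the long-run behavior of the sampling
# procedure, not the probability that a given claim is true.

prior_true = 0.10   # assumed prior probability that the claimed effect is real
power = 0.80        # assumed probability that a real effect yields a "significant" result
alpha = 0.05        # false-positive rate: significance at the 95% confidence level

# Bayes' theorem: P(effect is real | significant result)
p_sig = power * prior_true + alpha * (1 - prior_true)
posterior_true = (power * prior_true) / p_sig

print(f"P(significant result)                  = {p_sig:.3f}")
print(f"P(effect is real | significant result) = {posterior_true:.3f}")
# Here the posterior works out to 0.64, not 0.95: the "95%" never was the
# probability of the claim given the evidence.
```

Change the assumed prior and the posterior changes with it, which is precisely the point: the 95% figure attaches to the sampling procedure, not to the truth of the claim the factfinder must assess.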