TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Role of the “Science Lawyer” in Modern Litigation

July 27th, 2014

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.”[1]

“Judges commonly are elderly men, and are more likely to hate at sight any analysis to which they are not accustomed, and which disturbs repose of mind … .”[2]

The emergence of complex scientific issues in post-World War II American litigation has challenged state and federal legal systems, both civil and criminal. Issues such as the validity of forensic science and biological causation have tested the competency of lawyers, both judges and counsel. This scientific complexity also raises the question whether lay juries can, or should, continue in their traditional role as fact finders.[3] Some commentators have suggested that complexity should be a basis for reallocating fact finding to judges.[4] Other commentators remain ideologically or constitutionally committed to the jury as fact finder.

The superiority of judges as fact finders in complex scientific cases remains to be shown. Juries often perform better than individual lay judges as scientific fact finders, but the time they can commit to a case is often inadequate. Furthermore, the jury decision process is typically a binary verdict, without any articulation of reasoning. The jury’s hidden reasoning process on important scientific issues violates basic tenets of transparency and due process. Many modern cases present such difficult scientific issues that some authors have argued for establishing science courts or instituting procedures for blue-ribbon juries.[5] The constitutionality of having scientists, or specially qualified judges, serve as fact finders has never been clearly addressed.[6] Other commentators have argued in favor of the existing set of judicial tools, such as appointment of testifying “neutral” expert witnesses and scientific advisors for trial judges.[7] These approaches have been generally available in federal and some state trial courts, but they have rarely been used. Examples of their deployment include the silicone gel breast implant litigation[8] and cases involving Parlodel[9] and Bendectin[10].

Over 20 years ago, in 1993, the United States Supreme Court handed down its Daubert decision.  In large measure, the Court’s insistence upon trial court gatekeeping of expert witnesses has obscured the discussion and debate about science courts and alternative procedures for addressing complex scientific issues in litigation.  With fear and trembling, and sometimes sickness not quite unto death, federal and state judges, and lawyers on both sides of the “v,” must now do more than attack, defend, and evaluate expert witnesses on simplistic surrogates for the truth, such as personal bias or qualifications.  Lord have mercy, judges and lawyers must now actually read and analyze the bases of expert witnesses’ opinions, assess validity of studies and conclusions, and present their challenges and evaluations in clear, non-technical language.

The notion that “law is an empty vessel” is an imperfect metaphor, but it serves to emphasize that lawyers may have to get their hands wet with the vessel’s contents now and then. In litigating scientific issues, lawyers and judges will necessarily have to engage with substantive matters. Lawyers without scientific training or aptitude are not likely to serve clients, whether plaintiffs or defendants, well in the post-Daubert litigation world. Untrained lawyers will choose the wrong theory, emphasize the wrong evidence, and advance the wrong conclusions. The stakes are higher now for all the players. An improvident claim or defense will become a blot on a lawyer’s escutcheon. A poor gatekeeping effort or judicial decision[11] can embarrass the entire judicial system[12].

Over a decade ago, Professor David Faigman asked whether science was different for lawyers. Professor Faigman emphatically answered his own question in the negative, and urged lawyers and courts to evolve from their pre-scientific world view.[13] Both lawyers and judges must learn about the culture, process, and content of science. Science is often idealized as a cooperative endeavor, when in fact much scientific work can be quite adversarial.[14] Some judges and commentators have argued that the scientific enterprise should be immune from the rough and tumble of legal discovery because the essential collaborative nature of science is threatened by the adversarial interests at play in litigation. In becoming better judges of science, judges (and lawyers) need to develop a sense of the history of science, with its findings, fanaticisms, feuds, and fraud. The good, the bad, and the ugly are all proper subjects for study. Professor George Olah, in accepting his Nobel Prize in Chemistry, rebutted the lofty sentiments about scientific collegiality and collaboration[15]:

“Intensive, critical studies of a controversial topic always help to eliminate the possibility of any errors. One of my favorite quotation is that by George von Bekessy (Nobel Prize in Medicine, 1961).

‘[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, needs a few good enemies!’”

In other words, peer review is a shabby substitute for cross-examination and an adversarial process.[16]  That adversarial process cannot always unfold fully and fairly in front of a jury.

Chief Justice Rehnquist no doubt spoke for most judges and lawyers in expressing his discomfort with the notion that courts would have to actually look at science (rather than qualifications, demeanor, and credibility)[17]:

“I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its ‘falsifiability’, and I suspect some of them will be, too.”

Before the Daubert decision, some commentators opined that judges and lawyers were simply too innumerate and too dull to be involved in litigating scientific issues.[18] Today, most federal judges at least would not wear their ignorance so proudly, largely because of the many educational efforts of the Federal Judicial Center. Judges (and lawyers) cannot and should not be scientifically or statistically illiterate.[19] If generalist judges think this is beyond their ken, then they should have the intellectual integrity to say so, and get out of the way. If the scientific tasks are beyond the ken of judges, then they are likely beyond the abilities of ordinary jurors as well, and we should start to think seriously once again about science courts and blue-ribbon juries.

Astute and diligent gatekeeping judges and qualified juries can get the job done.[20] Certainly, some judges have seen through expert witnesses’ evasions and errors in reasoning, and lawyers’ shystering on both sides of litigation. Evaluating scientific evidence and drawing inferences from scientific studies do require an understanding of basic statistical concepts, study design, and scientific apparatus, but at bottom, there is no esoteric “scientific method” that is not careful and skeptical reasoning about evidence in the context of all the facts.[21] Surely, judges with university educations, “20 years of schooling,” and at least ten years of exemplary professional practice should, in theory, be able to get the job done, even if a few judges are unqualified or ill-suited by aptitude or training. To take an example from the breast implant litigation, understanding a claim about immunology may well require some depth of understanding of what the immune system is, how it works, and how it may be altered. This understanding will require time, diligence, intellectual energy, and acumen. Surely, ordinary lay juries often do not have the time to devote to such a case to do justice to the issues. If judges do not have the interest or the time to render the required service, they should say so.

And what about lawyers?  Twenty years into the Daubert era is time enough for law schools to implement curricula that train lawyers to understand and to litigate scientific issues, in civil, criminal, and regulatory contexts.  Many law schools still fail to turn out graduates with basic competence to understand and advocate about scientific issues.[22]  Lawyers who want to litigate scientific issues owe their clients and themselves the commitment to understand the issues.  It has been a long time since C.P. Snow complained of the “two cultures,” science and the humanities, and the hostility he faced when he challenged colleagues about their ignorance of science.[23]  Law schools have a role in building a bridge between the two cultures. Just as tax programs in law schools require basic accounting, law schools should ensure that their graduates, destined to work on scientific litigation, legislation, and regulation, have had some education in statistics, probability, and scientific method.

Becoming or remaining scientifically literate is a daunting but essential task for busy lawyers who hope to serve clients well in technical litigation.  There are many resources available for the “scientific legal counselor.”  Dr. David Schwartz at Innovative Science Solutions LLC has published a resource guide, The Litigator’s Guide to Combating Junk Science, available at his firm’s website.[24] Sense About Science, a charitable trust, has worked hard to advance an evidence-based world view, and to help people, including lawyers, make sense of scientific and medical claims.  Its website offers some helpful publications,[25] one of the more interesting being its recent pamphlet, Making Sense of Uncertainty.[26]

The Sense About Science group’s annual lectures are important events; past lecturers have included:

  • Dr Fiona Godlee, the editor of the British Medical Journal, on “It’s time to stand up for science once more,” in 2010.[27]
  • Dr Olivia Judson, scientist and science journalist, on “Why Experiment?” in 2009.[28]
  • Professor Alan Sokal, physicist and famous perpetrator of the Sokal hoax[29] that helped deconstruct the post-modernist deconstructionists, on “What is Science and Why Should We Care?” in 2008.[30]
  • Sir John Krebs FRS, zoologist and son of Sir Hans Krebs, the discoverer of the Krebs cycle, on “Science Advice, Impartiality and Policy,” in 2006.[31]

In addition to keeping abreast of scientific developments, science lawyers need to understand both how science is “made” and how it goes bad.  The National Institutes of Health website has many resources about the grant process.  Science lawyers should understand the grant process and how to use the Freedom of Information Act to obtain information about research at all stages of development.  Contrary to the fairy-tale accounts of idealized science, the scientific research process goes astray with some frequency, so much so that a federal agency, the Office of Research Integrity (ORI), is tasked with investigating research misconduct. News of the ORI’s investigations and findings is readily available through its website and its informative blog.[32] Research misconduct has resulted in an ever-increasing rate of retractions in peer-reviewed journals.  The Retraction Watch[33] blog offers a fascinating window into the inadequacies of peer review, and the attempts, some successful, some not, of the scientific community to police itself.

Before Daubert was decided, the legal system emphasized credentials and qualifications of witnesses, “authoritative texts,” and general acceptance.  Science, however, is built upon an evidence-based foundation.  Since 1663, the Royal Society has sported the motto “Nullius in verba”: on no person’s authority.  When confronted with a pamphlet entitled “100 Authors against Einstein,” Albert Einstein quipped, “If I were wrong, one would have been enough.”[34]  Disputes in science are resolved with data, from high-quality, reproducible experimental or observational studies, not with appeals to the prestige of the speaker.  The shift from authority-based decision making to evidence-based inferences and conclusions has strained the legal system in the United States, where people are free to propagate cults and superstitions.  The notion that lawyers who spend most of their time litigating leveraged boxcar leases can appreciate the nuances, strengths, weaknesses, and flaws of scientific studies and causal inference is misguided.  Litigants, whether those who make claims or those who defend against claims, deserve legal counsel who are conversant with the language of their issues.  Law schools, courts, and bar associations must rise to the occasion to meet the social need.


[1] Oliver Wendell Holmes, Jr., “The Path of the Law,” 10 Harv. L. Rev. 457, 469 (1897).

[2] Oliver Wendell Holmes, Jr., Collected Legal Papers 230 (1921).

[3] An earlier version of this post appeared as a paper in David M. Cohen & Nathan A. Schachtman, eds., Daubert Practice 2013: How to Avoid Getting Sliced by Cutting-Edge Developments in Expert Witness Challenges (PLI 2013).

[4] See Ross v. Bernhard, 396 U.S. 531, 538 n.10 (1970) (suggesting that the right to a jury trial may be limited by considerations of complexity, and that practical abilities and limitations of juries are a factor in determining whether an issue is to be decided by the judge or the jury).  See also Note, “The Right to a Jury Trial in Complex Civil Litigation,” 92 Harv. L. Rev. 898 (1979) (considering the implications of footnote 10 in Ross for complex litigation; noting that the Ross test has been applied infrequently, and in the limited contexts of antitrust or securities cases); Peter Huber, “Comment, A Comment on Toward Incentive-Based Procedure: Three Approaches for Regulating Scientific Evidence by E. Donald Elliott,” 69 Boston Univ. L. Rev. 513, 514 (1989) (arguing that judges cannot make difficult scientific judgments).  See also George K. Chamberlin, “Annotation: Complexity of Civil Action as Affecting Seventh Amendment Right to Trial by Jury,” 54 A.L.R. Fed. 733, 737-44 (1981).

[5] James A. Martin, “The Proposed Science Court,” 75 Mich. L. Rev. 1058, 1058-91 (1977) (evaluating the need and feasibility of various proposals for establishing science courts); Troyen A. Brennan, “Helping Courts with Toxic Torts,” 51 U. Pitt. L. Rev. 1, 5 (1989) (arguing for administrative panels or boards to assist courts with scientific issues); Donald Elliott, “Toward Incentive-Based Procedure: Three Approaches for Regulating Scientific Evidence,” 69 Boston Univ. L. Rev. 487, 501-07 (1989) (discussing various models of ancillary peer review for courts); Paul K. Sidorenko, “Comment, Evidentiary Dilemmas in Establishing Causation: Are Courts Capable of Adjudicating Toxic Torts?” 7 Cooley L. Rev. 441, 442 (1990) (recommending administrative panels of scientific experts in toxic tort litigation); John W. Osborne, “Judicial Technical Assessment of Novel Scientific Evidence,” 1990 U. Ill. L. Rev. 497, 540-46 (arguing for procedures with specially qualified judges); Edward V. DiLello, “Note, Fighting Fire with Firefighters: A Proposal for Expert Judges at the Trial Level,” 93 Colum. L. Rev. 473, 473 (1993) (advocating establishment of specialist judicial assistants for judges).  For a bibliography of publications on the science court concept, see Jon R. Cavicchi, “The Science Court: A Bibliography” (1994) <http://ipmall.info/risk/vol4/spring/bibliography.htm>, last visited on July 27, 2014.

[6] See Ross v. Bernhard, 396 U.S. at 538 n.10.

[7] John W. Wesley, “Scientific Evidence and the Question of Judicial Capacity,” 25 William & Mary L. Rev. 675, 702-03 (1984).

[8] See Karen Butler Reisinger, “Expert Panels: A Comparison of Two Models,” 32 Indiana L.J. 225 (1998); Laural L. Hooper, Joe S. Cecil & Thomas E. Willging, Neutral Science Panels: Two Examples of Panels of Court-Appointed Experts in the Breast Implants Product Liability Litigation (Federal Judicial Center 2001), available at http://www.fjc.gov/public/pdf.nsf/lookup/neuscipa.pdf/$file/neuscipa.pdf.

[9] Soldo v. Sandoz Pharms. Corp., 244 F. Supp. 2d 434 (W.D. Pa. 2003); see also Joe S. Cecil, “Construing Science in the Quest for Ipse Dixit,” 33 Seton Hall L. Rev. 967 (2003).

[10] DePyper v. Navarro, No. 191949, 1998 WL 1988927 (Mich. Ct. App. Nov. 6, 1998).

[11] See, e.g., Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).

[12] See, e.g., James L. Mills and Duane Alexander, “Teratogens and ‘Litogens’,” 315 New Engl. J. Med. 1234 (1986); Samuel R. Gross, “Expert Evidence,” 1991 Wis. L. Rev. 1113, 1121-24 (1991) (“Unfortunately, Judge Shoob’s decision is absolutely wrong. There is no scientifically credible evidence that Ortho-Gynol Contraceptive Jelly ever causes birth defects.”). See also Editorial, “Federal Judges v. Science,” N.Y. Times, December 27, 1986, at A22 (unsigned editorial); David E. Bernstein, “Junk Science in the Courtroom,” Wall St. J. at A15 (Mar. 24, 1993) (pointing to Wells as a prominent example of how the federal judiciary had embarrassed the American judicial system with its careless, non-evidence-based approach to scientific evidence); Bert Black, Francisco J. Ayala & Carol Saffran-Brinks, “Science and the Law in the Wake of Daubert: A New Search for Scientific Knowledge,” 72 Texas L. Rev. 715, 733-34 (1994) (lawyers and a leading scientist noting that the district judge “found that the scientific studies relied upon by the plaintiffs’ expert were inconclusive, but nonetheless held his testimony sufficient to support a plaintiffs’ verdict. *** [T]he court explicitly based its decision on the demeanor, tone, motives, biases, and interests that might have influenced each expert’s opinion. Scientific validity apparently did not matter at all.”) (internal citations omitted); Troyen A. Brennan, “Untangling Causation Issues in Law and Medicine: Hazardous Substance Litigation,” 107 Ann. Intern. Med. 741, 744-45 (1987) (describing the result in Wells as arising from the difficulties created by the Ferebee case; “[t]he Wells case can be characterized as the court embracing the hypothesis when the epidemiologic study fails to show any effect”); Kenneth R. Foster, David E. Bernstein, and Peter W. Huber, eds., Phantom Risk: Scientific Inference and the Law 28-29, 138-39 (MIT Press 1993) (criticizing the Wells decision); Hans Zeisel & David Kaye, Prove It With Figures: Empirical Methods in Law and Litigation § 6.5, at 93 (1997) (noting the multiple comparisons in studies of birth defects among women who used spermicides, based upon the many reported categories of birth malformations, and the large potential for even more unreported categories); id. at § 6.5 n.3, at 271 (characterizing Wells as “notorious,” and noting that the case became a “lightning rod for the legal system’s ability to handle expert evidence.”).

[13] David L. Faigman, “Is Science Different for Lawyers?” 297 Science 339, 340 (2002) (“some courts are still in a prescientific age”).

[14] McMillan v. Togus Reg’l Office, 294 F. Supp. 2d 305, 317 (E.D.N.Y. 2003) (“As in political controversy, ‘science is, above all, an adversary process’.”) (internal citation omitted).

[15] George A. Olah, “My Search for Carbocations and Their Role in Chemistry,” Nobel Lecture (Dec. 8, 1994), quoting George von Békésy, Experiments in Hearing 8 (N.Y. 1960).

[16] Nietzsche expressed Olah’s sentiment more succinctly.  Friedrich Nietzsche, The Twilight of the Idols, Maxim 8 (1889) (“Out of life’s school of war: What does not destroy me, makes me stronger.”).

[17] Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579, 598 (1993) (Justice Rehnquist, C.J., concurring and dissenting).

[18] Joseph Nicol, “Symposium on Science and the Rules of Evidence,” 99 F.R.D. 188, 221 (1983) (claiming that lawyers and judges “simply are incapable by education, and all too often by inclination, to become sufficiently familiar with scientific evidence to discharge their responsibilities toward the administration of justice.”).

[19] Robert P. Merges, “The Nature and Necessity of Law and Science,” 38 J. Legal Educ. 315, 324-26 (1988) (arguing that lawyers can and should understand scientific concepts and issues).

[20] David L. Faigman, “To Have and Have Not: Assessing the Value of Social Science to the Law as Science and Policy,” 38 Emory L.J. 1005, 1014, 1030 (1989) (contending that judges can understand science and that lawyers must probe beyond conclusions into methods by which the conclusions were reached).

[21] David H. Kaye, “Proof in Law and Science,” 32 Jurimetrics J. 313, 318 (1992) (arguing that science and law share identical methods of establishing factual conclusions); Lee Loevinger, “Standards of Proof in Science and Law,” 32 Jurimetrics J. 323, 328 (1992) (“[T]he basic principles of reasoning or logic are no different in the field of law than science.”).

[22] Howell E. Jackson, “Analytical Methods for Lawyers,” 53 J. Legal Educ. 321 (2003); Steven B. Dow, “There’s Madness in the Method:  A Commentary on Law, Statistics, and the Nature of Legal Education,” 57 Okla. L. Rev. 579 (2004).

[23] C.P. Snow, The Two Cultures and the Scientific Revolution (Rede Lecture, 1959).

[24] Available with registration at http://www.innovativescience.net/complimentary-copy-of-our-junk-science-ebook/

[25] http://www.senseaboutscience.org/

[26] http://www.senseaboutscience.org/resources.php/127/making-sense-of-uncertainty

[27] http://www.senseaboutscience.org/pages/annual-lecture-2010.html

[28] http://www.senseaboutscience.org/pages/annual-lecture-2009.html

[29] Sokal’s famous parody of postmodern writers, the so-called “Sokal Hoax,” was his publication, in one of the postmodernists’ own journals, of an article arguing that gravity was nothing more than a social construct.  Alan D. Sokal, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” Social Text 217 (Nos. 46/47 1996) (“It has thus become increasingly apparent that physical ‘reality’, no less than social ‘reality’, is at bottom a social and linguistic construct; that scientific ‘knowledge’, far from being objective, reflects and encodes the dominant ideologies and power relations of the culture that produced it; that the truth claims of science are inherently theory-laden and self-referential; and consequently, that the discourse of the scientific community, for all its undeniable value, cannot assert a privileged epistemological status with respect to counter-hegemonic narratives emanating from dissident or marginalized communities.”).

[30] http://www.senseaboutscience.org/pages/annual-lecture-2008.html

[31] http://www.senseaboutscience.org/pages/annual-lecture-2006.htm

[32] http://ori.hhs.gov/blog/

[33] http://retractionwatch.wordpress.com/

[34] See Remigio Russo, 18 Mathematical Problems in Elasticity 125 (1996) (quoting Einstein).

Conflict of Interest Regulations Apply Symmetrically

July 25th, 2014

Last week, a federal judge ruled that serving as a plaintiffs’ expert witness in tobacco litigation, or consulting for pharmaceutical companies that sell tobacco-cessation medications, created conflicts of interest that invalidated an FDA report on menthol cigarettes. Lorillard Inc. et al. v. U.S. Food and Drug Administration, 1:11-cv-00440, D.D.C. (July 21, 2014).

The report was issued in 2011 by the agency’s Tobacco Products Scientific Advisory Committee (TPSAC).  Two tobacco companies sought an injunction against the FDA’s report because of the improper memberships on the TPSAC of Drs. Jonathan Samet and Neil Benowitz. Lorillard’s General Counsel also challenged agency pronouncements under the Data Quality Act.  Letter Request for Correction of Information Disseminated to the Public and the Tobacco Products Scientific Advisory Committee (March 16, 2011) (petition lodged under the Data Quality Act (DQA), 44 U.S.C. § 3516, and the related agency regulations of the Office of Management and Budget, Health and Human Services, and FDA).

Although the TPSAC report concluded that menthol cigarettes imparted no greater risk of lung cancer than the already deadly non-menthol cigarettes, the report claimed that studies show menthol flavoring increased usage among young people and African Americans.  The TPSAC recommended a ban on menthol cigarettes in the interest of public health. The district court held that the FDA’s decision that the disputed members had no conflict of interest violated the Administrative Procedure Act, and that the violation required the agency to reconstitute the TPSAC in compliance with the applicable ethics regulations.

What is remarkable about the case is its rejection of the delusion that advocacy for plaintiffs is not a potential conflict of interest while advocacy for a company is.  An amicus brief filed on behalf of several medical and public health groups, including Public Citizen, Inc., the American Cancer Society, the American Medical Association, the American Thoracic Society, and others, supported the agency’s motion to dismiss. Amici argued that balancing the scientific opinions of experts on the committee would impair the integrity of advisory committees and would be unmanageable for courts. Given the obvious adversarial bias arising from service as an expert witness in tobacco litigation, the district court rejected amici’s arguments. Although amici’s position is understandable as a reaction to the callousness of tobacco marketing, the brief’s indifference to the ethical implications of conflicts of interest is surprising.  One can only imagine the hue and cry if committee members had been engaged as expert witnesses for the tobacco companies.

The implications of the Lorillard decision are considerable, especially for FDA advisory committees. See Glenn Lammi, “FDA Advisory Committee Not Rife with Conflicts of Interest? — ‛Please!’ Quips Federal Judge” (July 24, 2014) (discussing the Lorillard case and detailing efforts to obtain FDA compliance with the Federal Advisory Committee Act). Industry representatives are typically non-voting members, but members who have been retained by the litigation industry have served in voting positions. In other contexts, expert witnesses for plaintiffs accuse scientists who testify for a defendant of “conflicts of interest,” but conveniently ignore and fail to disclose their own. See “More Hypocrisy Over Conflicts of Interest” (Dec. 4, 2010) (Arthur Frank, Richard Lemen, and Barry Castleman); James Coyne, “Lessons in Conflict of Interest: The Construction of the Martyrdom of David Healy and The Dilemma of Bioethics,” 5 Am. J. Bioethics W3 (2005). The Lorillard case teaches that “white hat” bias is as disqualifying as “black hat” bias.

Subgroups — Subpar Statistical Practice versus Fraud

July 24th, 2014

Several people have asked me why I do not enable comments on this blog.  Although some bloggers (e.g., Deborah Mayo’s Error Statistics site) have had great success in generating interesting and important discussions, I have seen too much spam on other websites, and I want to avoid having to police the untoward posts.  Still, I welcome comments and I try to respond to helpful criticism.  If and when I am wrong, I will gladly eat my words, which usually have been quite digestible.

Probably none of the posts here have generated more comments and criticisms than those written about the prosecution of Dr. Harkonen.  In general, critics have argued that defending Harkonen and his press release was tantamount to condoning bad statistical practice.  I have tried to show that Dr. Harkonen’s press release was much more revealing than abbreviated accounts of his case portrayed it, and that the evidentiary support for his claim of efficacy in a subgroup was deeper and broader than acknowledged. The criticism and condemnation of Dr. Harkonen’s press release, in the face of prevalent statistical practice among leading journals and practitioners, is nothing short of hypocrisy and bad faith. If Dr. Harkonen deserves prison time for a press release that promised a full analysis and discussion in upcoming conference calls and presentations at scientific meetings, then we can only imagine what criminal sanction awaits the scientists and journal editors who publish purportedly definitive accounts of clinical trials and epidemiologic studies, with subgroup analyses not prespecified and not labeled as post hoc.

The prevalence of the practice does not transform Dr. Harkonen’s press release into “best practice,” but some allowance must be made for offering a causal opinion in the informal context of a press release rather than in a manuscript submitted to a journal.  And those critics with prosecutorial temperaments must recognize that, when the study was presented at conferences, and when the manuscript was written up and submitted to the New England Journal of Medicine, the authors did reveal the post-hoc nature of the subgroup analysis.

The Harkonen case will remain important for several reasons. There is an important distinction in the Harkonen case, ignored and violated by the government’s position, between opinion and fact.  If Harkonen is guilty of wire fraud, then so is virtually every cleric, minister, priest, rabbi, imam, mullah, and other religious person who makes supernatural claims and predictions.  Add in all politicians, homeopaths, vaccine deniers, and others who reject evidence for superstition, all of whom are much more culpable than a scientist who accurately reports the actual data and p-value.

Then there is the disconnect between what expert witnesses are permitted to say and what resulted in Dr. Harkonen’s conviction. If any good could come from the government’s win, it would be the insistence upon “best practice” for gatekeeping of expert witness opinion testimony.

For better or worse, scientists often describe post-hoc subgroup findings as “demonstrated” effects. Although some scientists would disagree with this reporting, the practice is prevalent.  Some scientists would go further and contest the claim that pre-specified hypotheses are inherently more reliable than post-hoc hypotheses. See Timothy Lash & Jan Vandenbroucke, “Should Preregistration of Epidemiologic Study Protocols Become Compulsory?,” 23 Epidemiology 184 (2012).
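The statistical intuition behind skepticism of unplanned subgroups can be made concrete: under the null hypothesis, each additional post-hoc comparison adds roughly another 5 percent chance of a spurious “significant” finding. The following minimal Python sketch, with illustrative numbers only and not drawn from any study discussed here, shows how quickly that false-positive probability grows:

```python
import random

# Under the null hypothesis (no true effect in any subgroup), each test
# has a 5% chance of yielding p < 0.05. Testing many unplanned subgroups
# inflates the chance of at least one "significant" result.

def familywise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent tests."""
    return 1 - (1 - alpha) ** k

def simulate_fwer(k, alpha=0.05, trials=20_000, seed=1):
    """Monte Carlo check: under the null, each p-value is uniform on [0, 1)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if any(rng.random() < alpha for _ in range(k))
    )
    return hits / trials

for k in (1, 5, 10, 20):
    print(k, round(familywise_error_rate(k), 3))
# With 10 unplanned subgroup tests, the chance of at least one spurious
# "significant" subgroup is about 40%; with 20, about 64%.
```

Pre-specification does not change this arithmetic, but it prevents authors from silently searching across many subgroups and reporting only the one that happened to cross the threshold.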

One survey compared grant applications with later published papers and found that subgroup analyses were pre-specified in only a minority of cases; in a substantial majority (77%) of the subgroup analyses in the published papers, the analyses were not characterized as either pre-specified or post hoc. Chantal W. B. Boonacker, Arno W. Hoes, Karen van Liere-Visser, Anne G. M. Schilder, and Maroeska M. Rovers, “A Comparison of Subgroup Analyses in Grant Applications and Publications,” 174 Am. J. Epidem. 291, 291 (2011).  Indeed, this survey’s comparison between grant applications and published papers revealed that most of the published subgroup analyses were post hoc, and that the authors of the published papers rarely reported justifications for their post-hoc subgroups. Id.

Again, for better or worse, the practice of presenting unplanned subgroup analyses is common in the biomedical literature. Several years ago, the New England Journal of Medicine reported a survey of publication practice in its own pages, with findings similar to those of Boonacker and colleagues. Rui Wang, Stephen W. Lagakos, James H. Ware, David J. Hunter, and Jeffrey M. Drazen, “Statistics in Medicine — Reporting of Subgroup Analyses in Clinical Trials,” 357 New Eng. J. Med. 2189 (2007).  In general, Wang et al. were unable to determine the total number of subgroup analyses performed; and in the majority (68%) of trials discussed, Wang could not determine whether the subgroup analyses were prespecified. Id. at 2192. Although Wang proposed guidelines for identifying subgroup analyses as prespecified or post hoc, she emphasized that the proposals were not “rules” that could be rigidly prescribed. Id. at 2194.

The Wang study is hardly unique; the Journal of the American Medical Association reported a similar set of results. An-Wen Chan, Asbjørn Hrobjartsson, Mette T. Haahr, Peter C. Gøtzsche, and Douglas G. Altman, “Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles,” 291 J. Am. Med. Ass’n 2457 (2004).  Chan and colleagues set out to document and analyze “outcome reporting bias” in studies; that is, the extent to which publications fail to report accurately the pre-specified outcomes in published studies of randomized clinical trials.  The authors compared and analyzed protocols and published reports of randomized clinical trials conducted in Denmark in 1994 and 1995. Their findings document a large discrepancy between the idealized notion of pre-specification of study design, outcomes, and analyses, and the actual practice revealed by later publication.

Chan identified 102 clinical trials, with 3,736 outcomes, and found that 50% of efficacy outcomes and 65% of harm outcomes were incompletely reported. Statistically significant outcomes were significantly more likely to be fully reported than statistically insignificant results (pooled odds ratio for efficacy outcomes = 2.4; 95% confidence interval, 1.4 – 4.0; pooled odds ratio for harm outcomes = 4.7; 95% confidence interval, 1.8 – 12.0). The comparison of protocols with later published articles revealed that a majority of trials (62%) had at least one primary outcome that was changed, omitted, or introduced in the published version. The authors concluded that published accounts of clinical trials were frequently incomplete, biased, and inconsistent with protocols.
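
For readers unfamiliar with the statistic, an odds ratio of this kind compares the odds of full reporting for significant versus insignificant outcomes. A minimal sketch of the computation, using hypothetical 2×2 counts (not Chan’s actual data, which are not reproduced here) and a standard Wald-type confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = significant outcomes fully / incompletely reported;
    c, d = insignificant outcomes fully / incompletely reported."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(30, 20, 15, 35)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # OR = 3.5, 95% CI 1.5-8.0
```

A confidence interval with a lower bound above 1.0, as in Chan’s pooled estimates, is what makes the reporting disparity “statistically significant.”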

This week, an international group of scientists published their analysis of agreement vel non between protocols and corresponding later publications of randomized clinical trials. Matthias Briel and the DISCO study group, “Subgroup analyses in randomised controlled trials: cohort study on trial protocols and journal publications,” 349 Brit. Med. J. g4539 (published 16 July 2014). Predictably, the authors found a good deal of sloppy practice, or worse.  Of the 515 journal articles identified, about half (246, or 47.8%) reported one or more subgroup analyses. Of the articles that reported subgroup analyses, 81 (32.9%) stated that the subgroup analyses were prespecified, but in 28 of these articles (34.6%), the corresponding protocols did not identify the subgroup analysis.

In 86 of the publications surveyed, the authors found that the articles claimed a subgroup “effect,” but only 36 of the corresponding protocols reported a planned subgroup analysis.  Briel and the DISCO study group concluded that protocols of randomized clinical trials insufficiently describe subgroup analyses. In over one-third of publications, the articles reported subgroup analyses not pre-specified in earlier protocols. The DISCO study group called for access to protocols and statistical analysis plans for all randomized clinical trials.
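
The nested percentages in the DISCO survey are easy to misread, since each denominator shrinks. Using the counts reported above, the proportions can be recomputed directly:

```python
# Arithmetic check of the proportions reported in the Briel/DISCO survey,
# using the counts described above. Each percentage uses the prior subset
# as its denominator.
articles = 515              # journal articles identified
with_subgroups = 246        # articles reporting one or more subgroup analyses
claimed_prespecified = 81   # of those, articles stating the analyses were prespecified
missing_from_protocol = 28  # of those, articles whose protocols lacked the analysis

print(f"{with_subgroups / articles:.1%}")             # 47.8%
print(f"{claimed_prespecified / with_subgroups:.1%}") # 32.9%
print(f"{missing_from_protocol / claimed_prespecified:.1%}")  # 34.6%
```

So the striking 34.6% figure describes roughly one in three of the articles that *claimed* prespecification, not one in three of all published trials.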

In view of these empirical data, the government’s claims against Dr. Harkonen stand out, at best, as vindictive, selective prosecution.

Careless Scholarship About Silica History

July 21st, 2014

David Egilman is the Editor in Chief of the International Journal of Occupational and Environmental Health (IJOEH). A YouTube “selfie” interview provides some insight into Dr. Egilman’s motivations and editorial agenda.  Previous posts have chronicled Egilman’s testimonial adventures because of his propensity to surface in litigations of interest. See, e.g., “David Egilman’s Methodology for Divining Causation” (Sept. 6, 2012); “Egilman Petitions the Supreme Court for Review of His Own Exclusion in Newkirk v. Conagra Foods” (Dec. 13, 2012).

Dr. Egilman has used his editorial role at the IJOEH to disseminate his litigation positions.  Several of his articles are little more than his litigation reports, filed in various cases, ranging from occupational dust disease claims to pharmaceutical off-target effect claims. A recent issue of the IJOEH has yet another article of this ilk, which scatters invective across several litigations. David Egilman, Tess Bird[1], and Caroline Lee[2], “Dust diseases and the legacy of corporate manipulation of science and law,” 20 Internat’l J. Occup. & Envt’l Health 115 (2014).

The article mostly concerns Egilman’s allegations that companies influenced the scientific, medical, and governmental understanding and perception of asbestos hazards.  I will defer to others to address his allegations with respect to asbestos. The article, however, in its Abstract, takes broader aim at other exposures, in particular, silica:

“Knowledge that asbestos and silica were hazardous to health became public several decades after the industry knew of the health concerns. This delay was largely influenced by the interests of Metropolitan Life Insurance Company (MetLife) and other asbestos mining and product manufacturing companies.”

Egilman at 115, Abstract (emphasis added).

In their Abstract, the authors further proclaim their purpose

“To understand the ongoing corporate influence on the science and politics of asbestos and silica exposure, including litigation defense strategies related to historical manipulation of science.”

Egilman at 115. I demur for the time being with respect to asbestos, but the authors’ claims about silica are never supported in their article. A brief review of two monographs by Frederick L. Hoffman should be sufficient to consign the authors’ carelessness to the dustbin of occupational history. Frederick L. Hoffman, Mortality from Respiratory Diseases in the Dusty Trades (Dep’t of Labor, Bureau of Labor Statistics 1918); Frederick L. Hoffman, The Problem of Dust Phthisis in the Granite Stone Industry (Dep’t of Labor, Bureau of Labor Statistics 1922).  The bibliographies in both of these monographs document the widespread interest in, and awareness of, the occupational hazards of silica dusts, going back into the 19th century, among the media, the labor movement, and the non-industrial scientific community.

Not surprisingly, the authors’ conclusions are stated only in terms of asbestos hazards, knowledge, and company conduct:

“Conclusions: Asbestos product companies would like the public to believe that there was a legitimate debate surrounding the dangers of asbestos during the twentieth century, particularly regarding the link to cancer, which delayed adequate regulation. The asbestos–cancer link was not a legitimate contestation of science; rather the companies directly manipulated the scientific literature. There is evidence that industry manipulation of scientific literature remains a continuing problem today, resulting in inadequate regulation and compensation and perpetuating otherwise preventable worker and consumer injuries and deaths.”

The authors note that Rutherford Johnstone’s 1960 “seminal” textbook relied upon a study (Braun and Truan), which Egilman attacks as corrupted by industry influence. Rutherford Johnstone & Seward E. Miller, Occupational Diseases and Industrial Medicine 328 (Philadelphia 1960). According to Egilman, Rutherford Johnstone was the American Medical Association’s official consultant for occupational disease questions, which explains why he was providing answers to questions submitted to the Journal of the American Medical Association on silica and asbestos issues. The authors note that Johnstone, in 1961, asserted that there was no epidemiologic evidence that asbestos causes lung cancer among American workers, a view that reflects Johnstone’s reliance upon the Braun-Truan study. The authors fail, however, to note that Johnstone also opined that

“There is no epidemiological evidence that silicosis, resulting from undue exposure to free silica produces cancer of the lung.”

Rutherford T. Johnstone, “Silicosis and Cancer,” 176 J. Am. Med. Ass’n 81, 81 (1961). Neither the authors nor anyone else has ever shown that Johnstone was misled by any industry group with respect to his silica/lung cancer opinion.

Some of Egilman’s scholarship is quite careless.  For instance, he and his employees assert that

“By the mid 1940s, the international scientific community had recognized the link between asbestos and cancer.10–18”

Readers should review all the endnotes, 10 – 18, but endnote 12 is especially interesting:

“12 Macklin MT, Macklin CC. Does chronic irritation cause primary carcinoma of the human lung? Arch Path. 1940;30:924–55.”

As I have noted before, the Macklins, and especially Dr. Madge Macklin, brought a great deal of rigor and skepticism to broad claims about the causation of lung cancer. See “Silicosis, Lung Cancer, and Evidence-Based Medicine in North America” (July 4, 2014).  This citation and others do not appear to support the sweep of the claim made by Egilman and his student authors.

The next mention of silica occurs in the context of an allegation that corporations (presumably not plaintiffs’ lawyers’ law firm corporations) have worked to “disguise” health concerns and influence governmental policy about several products and materials, including silica:

“During the last several decades, researchers in a wide spectrum of fields have documented the direct and purposeful efforts of corporations to disguise public health concerns and affect government policies, particularly in the tobacco, alcohol, silica, and asbestos industries, and more recently, the pharmaceutical, chemical, and ultra-processed food and drink industries.79,73

Egilman at 121.

The authors’ citations, however, do not support any such allegation about silica. Endnote 73[3] is an article by Egilman and others on Vioxx; and endnote 79[4] is an article about alcohol, tobacco, and foods. In the very next sentence, the authors further claim that:

“Corporate-funded ‘objective science’ leading to the corruption of scientific literature remains a major problem.65,68,69,71,73,75,80–86

Once again, none of the endnotes (65, 68, 69, 71, 75, and 80–86) supports the authors’ claim that anyone involved in the mining, milling, or marketing of crystalline silica has funded science in a way that led to the corruption of the scientific literature. Not surprisingly, the authors ignore the frauds perpetrated by litigation industry players. See, e.g., In re Silica Products Liab. Lit., 398 F. Supp. 2d 563 (S.D. Tex. 2005) (federal trial judge rebukes the litigation industry for fraudulent claiming in MDL 1553).


[1] The article acknowledges that Ms. Bird and Ms. Lee were employees of Dr. Egilman.  Ms. Bird appears now to be a student in the U.K., studying medical anthropology.  Both Ms. Bird and Ms. Lee appeared as co-authors on earlier works by Egilman.  See, e.g., David S Egilman, Tess Bird, and Caroline Lee, “MetLife and its corporate allies: dust diseases and the manipulation of science,” 19 Internat’l J. Occup. & Envt’l Health 287 (2013); David Steven Egilman, Emily Laura Ardolino, Samantha Howe, and Tess Bird, “Deconstructing a state-of-the-art review of the asbestos brake industry,” 21 New Solutions 545 (2011).

[2] Ms. Lee appears to have been employed by Egilman’s litigation consulting firm, Never Again Consulting, from 2011 until August 2013, when she entered the University of Maryland law school.

[3] Krumholz HM, Ross JS, Presler AH, Egilman DS. What have we learnt from Vioxx. Br Med J. 2007;334(7585):120–3.

[4] Moodie R, Stuckler D, Montiero C, Sheron N, Neal B, Thamarangsi T, et al. Profits and pandemics: prevention of harmful effects of tobacco, alcohol, and ultra-processed food and drink industries. Lancet. 2013;381:670–79.

Discovery of Litigation Financing – The Jackpot Justice Finance Corporation

July 16th, 2014

Over two years ago, I wrote that courts and counsel have not done enough to adapt to the litigation industry’s use of third-party financing. See “Litigation Funding” (May 8, 2012). A few days ago, Byron Stier, at the Mass Tort Litigation Blog, posted a short news item about a recent effort to modify discovery rules to take into account the litigation industry’s business model of seeking third-party litigation funding.  See “Industry Groups Seek Amendment of Rule 26 to Require Disclosure of Third Party Litigation Financing” (July 13, 2014).

Stier reported that the U.S. Chamber of Commerce Institute for Legal Reform, American Insurance Association, American Tort Reform Association, Lawyers for Civil Justice, and National Association of Manufacturers, back in April 2014, wrote a letter to the Committee on Rules of Practice and Procedure of the Administrative Office of the federal courts, to propose an amendment to Federal Rule of Civil Procedure 26(a)(1)(A). Their proposed new language is underscored, and follows 26(a)(1)(A)(i)-(iv):

“(v) for inspection and copying as under Rule 34, any agreement under which any person, other than an attorney permitted to charge a contingent fee representing a party, has a right to receive compensation that is contingent on, and sourced from, any proceeds of the civil action, by settlement, judgment or otherwise.”

This proposal is important and necessary to ensure that defendants can inquire about financial bias in jury voir dire, as well as identify bias among judges, special masters, and witnesses.  Other procedural rule reforms will be needed as well. Appellate briefs should disclose financiers that have a stake in the litigation. When courts limit counsel and parties’ communications to the media about a case, such limits should apply as well to insurers and third-party financiers of the litigation efforts. Third-party financiers provide a convenient way for the litigation industry to lobby regulators and legislators, and more expansive disclosure rules are needed to capture the activities of the litigation financiers.  Funding of litigation-related research by third-party financiers should be anticipated by journal editors with more expansive disclosure rules; journal editors should be alert to evolving financial markets that may influence research agendas and publications. Lawyers’ scrutiny of new clients for conflicts now requires inquiry into the veiled interests created by litigation financing.

Too Many Narratives – Historians in the Dock

July 13th, 2014

History Associates Inc. (HAI) is a commercial vendor of historical services, including litigation services. Understandably, this firm, like the academic historians who service the litigation industry, takes a broad view of the desirability of historian expert witness testimony.  An article in one of HAI’s newsletters lays out lawyers’ strategies for proving historical facts.  Lawyers can present percipient witnesses, or they

“can present the story themselves, but in the end, arguments by advocates can raise questions of bias that obscure, rather than clarify, the historical facts at issue.”

Mike Reis and Dave Wiseman, “Introducing and interpreting facts-in-evidence: the historian’s role as expert witness,” HAIpoints 1 (Summer 2010)[1]. These commercial historians recommend that advocacy bias, so clear in lawyers’ narratives, be defused or obscured by having a professional historian present the “story.”  They tout the research skills of historians: “Historians know how to find critical historical information.” And to be sure, historians, whether academic or for-hire, may offer important bibliographic services, as well as help in translating, authenticating, and contextualizing documents.  But these historians from HAI want a role on center stage, or at least in the witness box.  They tell us that:

“Historians synthesize information into well-documented, compelling stories.”

Ah yes, compelling stories, as in “the guiltless gust of a rattling good yarn[2].” The legal system should take a pass on such stories.

*     *     *     *     *     *

A recent law review article attempts to provide a less commercial defense of historian expert witness testimony.  See Alvaro Hasani, “Putting history on the stand: a closer look at the legitimacy of criticisms levied against historians who testify as expert witnesses,” 34 Whittier L. Rev. 343 (2013) [Hasani].  Hasani argues that historians strive to provide objective historical “interpretation,” by selecting reliable sources, and by reliably reading and interpreting those sources to create a reliable “narrative.” Hasani at 355. Hasani points to some courts that have thrown up their hands and declared Daubert reliability factors inapplicable to non-scientific historian testimony. See, e.g., United States v. Paracha, No. 03 Cr. 1197 (SHS), 2006 WL 12768, at *19 (S.D.N.Y. Jan. 3, 2006) (noting that Daubert is not designed for gatekeeping of a non-scientific, historian expert witness’s methodology); Saginaw Chippewa Indian Tribe of Michigan v. Granholm, 690 F. Supp. 2d 622, 634 (E.D. Mich. 2010) (noting that “[t]here is no way to ‘test’ whether the experts’ testimony concerning the historical understanding of the treaties is correct. Nor is it possible to establish an ‘error rate’ for historical experts.”).

Not all testifying historians agree, however, that their research and findings are non-scientific.  Here is how one plaintiffs’ expert witness characterized historical thinking:

“Q. Do you believe that historical thinking is a form of scientific thinking?

A. I do. I think that history is sometimes classed with the humanities, sometimes classed with the social sciences, but I think there is a good deal of historical research and writing that is a form of social science.”

Examination Before Trial of Gerald Markowitz, in Mendez v. American Optical, District Court for Tarrant County, Texas (342d Judicial District), at 44:13-20 (July 19, 2005). Professor Susan Haack, and others, have made a persuasive case that the epistemic warrants for claims of knowledge, whether denominated scientific or non-scientific, are not different in kind. If historian testimony is not about knowledge of the past, then it clearly has no role in a trial. Furthermore, Professor Markowitz is correct that some historical opinions are scientific in the sense that they can be tested. If a labor historian asserts that workers are exploited and subjected to unsafe work conditions due to the very nature of capitalism and the profit motive, then that historian’s opinion will be substantially embarrassed by the record of widespread occupational disease under European and Asian communist regimes.

When Deborah Lipstadt described historian David Irving as a holocaust denier[3], Irving sued Lipstadt for defamation.  In defending against the claim, Lipstadt successfully carried the burden of proving the truth of her accusation.  The trial court’s judgment, quoted by Hasani, reads like a so-called Daubert exclusion of plaintiff Irving’s putative historical writing. Irving v. Penguin Books Ltd., No. 1996-1-1113, 2000 WL 362478, at ¶¶ 1.1, 13.140 (Q.B. Apr. 11, 2000)(finding that “Irving ha[d] misstated historical evidence; adopted positions which run counter to the weight of the evidence; given credence to unreliable evidence and disregarded or dismissed credible evidence.”).

The need for gatekeeping of historian testimony should be obvious.  Historian testimony is often narrative of historical fact that is not beyond the ken of an ordinary fact finder, once the predicate facts are placed into evidence.  Such narratives of historical fact present a serious threat to the integrity of fact finding by creating the conditions for delegation and deferring fact finding responsibility to the historian witness, with an abdication of responsibility by the fact finder. See Ronald J. Allen, “The Conceptual Challenge of Expert Evidence,” 14 Discusiones Filosóficas 41, 50-53 (2013).

Some historians clearly believe that they are empowered by the witness chair to preach or advocate. Allan M. Brandt, who has testified on many occasions for plaintiffs in tobacco cases, unapologetically described the liberties he has taken thus:

“It seems to me now, after the hopes and disappointments of the courtroom battle, that we have a role to play in determining the future of the tobacco pandemic. If we occasionally cross the boundary between analysis and advocacy, so be it. The stakes are high, and there is much work yet to do.”

Allan M. Brandt, The Cigarette Century: The Rise, Fall, and Deadly Persistence of the Product That Defined America 505 (2007).

Hasani never comes to grips with the delegation problem or with Brandt’s attitude, which is quite prevalent in the product liability arena. The problem is more than merely “occasional.” The overreaching by historian witnesses reflects the nature of their discipline, the lack of necessity for their testimony, and the failure of courts to exercise their gatekeeping responsibility. The problem with Brandt’s excuse-making is that neither his analysis nor his advocacy is needed or desired: advocacy is the responsibility of counsel, as is the kind of analysis involved in much of historian testimony.  For instance, when historians offer testimony about the so-called “state of the art,” they are drawing inferences from published and unpublished sources about what people knew or should have known, and about their motivations.  Although their bibliographic and historical researches can be helpful to the fact finder’s effort to understand who was writing what about the issue in times past, historians have no real expertise, beyond that of the lay fact finder, in discerning intentions, motivations, and belief states.

Hasani concludes that the prevalence of historian expert witness testimony is growing. Hasani at 364.  He cites, however, only four cases for the proposition, three of which pre-date Daubert.  The fourth is a Native American rights case. Hasani at 364 n.139. There is little or no evidence that historian expert witness testimony is becoming more prevalent, although it continues in product liability litigation, where state of the art — who knew what, and when — remains an issue in strict liability and negligence. Mack v. Stryker Corp., 893 F. Supp. 2d 976 (D. Minn. 2012), aff’d, 748 F.3d 845 (8th Cir. 2014). There remains a need for judicial vigilance in policing such state-of-the-art testimony.


[1] Mike Reis is the Vice President and Director of Litigation Research at History Associates Inc. Mr. Reis received his bachelor’s degree from Loyola College, and his master’s degree from George Washington University, both in history. David Wiseman, an erstwhile trial attorney, conducts historical research for History Associates.

[2] Attributed to Anthony Burgess.

[3] Deborah E. Lipstadt, Denying the Holocaust: The Growing Assault on Truth and Memory 8 (1993).

 

NIEHS Transparency? We Can See Right Through You

July 10th, 2014

The recent issue of Environmental Health Perspectives contains several articles on scientific methodology of interest to lawyers who litigate claimed health effects.[1] The issue also contains a commentary that argues for greater transparency in science and science policy, which should be a good thing, but the commentary has the potential to obscure and confuse. Kevin C. Elliott and David B. Resnik, “Science, Policy, and the Transparency of Values,” 122 Envt’l Health Persp. 647 (2014) [Elliott & Resnik].

David B. Resnik has a Ph.D. in philosophy from the University of North Carolina, and a law degree from the on-line Concord University School of Law.  He is currently a bioethicist and the chairman of the NIEHS Institutional Review Board. Kevin Elliott received his doctorate in the History and Philosophy of Science (Notre Dame), and he is currently an Associate Professor at Michigan State University. Elliott and Resnik advance a plea for transparency that is superficially as appealing as motherhood and apple pie. The authors argue

“that society is better served when scientists strive to be as transparent as possible about the ways in which interests or values may influence their reasoning.”

The argument appears superficially innocuous.  Indeed, in addition to the usual calls for greater disclosure of conflicts of interest, the authors call for more data sharing and less tendentious data interpretation:

“When scientists are aware of important background assumptions or values that inform their work, it is valuable for them to make these considerations explicit. They can also make their data publicly available and strive to acknowledge the range of plausible interpretations of available scientific information, the limitations of their own conclusions, the prevalence of various interpretations across the scientific community, and the policy options supported by these different interpretations.”

Alas, we may as well wish for the Kingdom of Heaven on Earth!  An ethos or a requirement of publicly sharing data would indeed advance the most important transparency: the transparency that would allow full exploration of the inferences and conclusions claimed in a particular study.  Despite their high-mindedness, however, the authors’ argument becomes muddled when it conflates scientific objectivity with subjective values:

“In the past, scientists and philosophers have argued that the best way to maintain science’s objectivity and the public’s trust is to draw a sharp line between science and human values or policy (Longino 1990). However, it is not possible to maintain this distinction, both because values are crucial for assessing what counts as sufficient evidence and because ethical, political, economic, cultural, and religious factors unavoidably affect scientific judgment (Douglas 2009; Elliott 2011; Longino 1990; Resnik 2007, 2009).”

This argument confuses the pathology of science with what actually makes science valuable and enduring.  The Nazis invoked cultural arguments, explicitly or implicitly, to reject “Jewish” science; religious groups in the United States invoke religious and political considerations to place creationism on an equal or superior footing with evolution; anti-vaccine advocacy groups embrace case reports over rigorous epidemiologic analyses. To be sure, these and other examples show that “ethical, political, economic, cultural, and religious factors unavoidably affect scientific judgment,” and yet science can and does transcend them.  There is no Jewish or Nazi science; indeed, there is no science worthy of the name that comes from any revealed religion or cult.  As Tim Minchin has pointed out, alternative medicine is either known not to work or not known to work, because if alternative medicine is known to work, then we call it “medicine.” The authors are correct that these subjective influences require awareness and understanding of prevalent beliefs, prejudices, and corrupting influences, but they do not, and should not, upset our commitment to an evidence-based world view.

Elliott and Resnik are focused on environmentalism and environmental policy, and they seem to want to substitute various presumptions, leaps of faith, and unproven extrapolations for actual evidence and valid inference, in the hope of improving the environment and reducing risk to life.  The authors avoid the obvious resolution: value the environment, but acknowledge ignorance and uncertainty.  Rather than allow precautionary policies to advance with a confession of ignorance, the authors want to retain their ability to claim knowledge even when they simply do not know, just because the potential stakes are high. The circularity becomes manifest in their ambiguous use of “risk,” which strictly means a known causal relationship between the “risk” and some deleterious outcome.  There is a much weaker usage, popularized by journalists and environmentalists, in which “risk” refers to something that might cause a deleterious outcome.  The “might” here does not refer to a known probabilistic or stochastic relationship between the ex ante risk and the outcome, but rather to uncertainty whether the relationship exists at all. We can see the equivocation in how the authors attempt to defend the precautionary principle:

“Insisting that chemicals should be regulated only in response to evidence from human studies would help to prevent false positive conclusions about chemical toxicity, but it would also prevent society from taking effective action to minimize the risks of chemicals before they produce measurable adverse effects in humans. Moreover, insisting on human studies would result in failure to identify some human health risks because the diseases are rare, or the induction and latency periods are long, or the effects are subtle (Cranor 2011).”

Elliott & Resnik at 648.

If there is uncertainty about the causal relationship, then by calling some exposures a “risk,” the authors prejudge whether there will be “adverse effects” at all. This is just muddled.  If the relationship is uncertain, and false positive conclusions are possible, then we simply cannot claim to know that there will be such adverse effects, without assuming what we wish to prove.

The authors compound the muddle by introducing a sliding scale of “standards of evidence,” which appears to involve both variable posterior probabilities that the causal claim is correct, as well as variable weighting of types of evidence.  It is difficult to see how this will aid transparency and reduce confusion. Indeed, we can see how manipulative the authors’ so-called transparency becomes in the context of evaluating causal claims in pharmaceutical approvals versus tort claims:

“Very high standards of evidence are typically expected in order to infer causal relationships or to approve the marketing of new drugs. In other social contexts, such as tort law and chemical regulation, weaker standards of evidence are sometimes acceptable to protect the public (Cranor 2008).”

Remarkably, the authors cite no statute, no case law, and no legal treatise writer for the proposition that the tort-law standard for causation is somehow lower than the standard for a claim of drug efficacy before the Food and Drug Administration.  The one author they do cite, Carl Cranor, is neither a scientist nor a lawyer, but a philosophy professor who has served as an expert witness for plaintiffs in tort litigation (usually without transparently disclosing his litigation work). As for the erroneous identification of tort and regulatory standards, there is, of course, much real legal authority to the contrary[2].

The authors go on to suggest that demanding

“the very highest standards of evidence for chemical regulation—including, for example, human evidence, accompanying animal data, mechanistic evidence, and clear exposure data—would take very long periods of time and leave the public’s health at risk.”

Elliott & Resnik at 648.

Of course, the point is that until such data are developed, we really do not know whether the public’s health is at risk.  Transparency would be aided not by some sliding and slippery scale of evidence, but by frank admissions that we do not know whether the public’s health is at risk, but that we choose to act anyway, and to impose whatever costs, inconvenience, and further uncertainty may follow from promoting alternatives that are accompanied by even greater risk or uncertainty.  Environmentalists rarely want to advance such wishy-washy proposals, devoid of claims of scientific knowledge that their regulations will avoid harm and promote health, but honesty and transparency require such admissions.

The authors advance another claim in their Commentary:  transparency in the form of more extensive disclosure of conflicts of interest will aid sound policy formulation.  To their credit, the authors do not limit the need for disclosure to financial benefits; rather they take an appropriately expansive view:

“Disclosures of competing financial interests and nonfinancial interests (such as professional or political allegiances) also provide opportunities for more transparent discussions of the impact of potentially implicit and subconscious values (Resnik and Elliott 2013).”

Elliott & Resnik at 649.  Problematically, however, when the authors discuss specific instances of apparent conflicts, they note the industry “ties” of the authors of an opinion piece on endocrine disruptors[3], but they are insensate to the ties of critics, such as David Ozonoff and Carl Cranor, to the litigation industry, and of others to advocacy groups that might exert much more substantial positional bias and control over those critics.

The authors go further in suggesting that women have greater perceptions of risk than men, and presumably we must know whether we are being presented with a feminist or a masculinist risk assessment. Will self-reported gender suffice, or must we have a karyotype? Perhaps we should have tax returns and a family pedigree as well? The call for transparency seems at bottom a call for radical subjectivism, infused with smug beliefs that want to be excused from real epistemic standards.



[1] In addition to the Elliott and Resnik commentary, see Andrew A. Rooney, Abee L. Boyles, Mary S. Wolfe, John R. Bucher, and Kristina A. Thayer, “Systematic Review and Evidence Integration for Literature-Based Environmental Health Science Assessments,” 122 Envt’l Health Persp. 711 (2014); Janet Pelley, “Science and Policy: Understanding the Role of Value Judgments,” 122 Envt’l Health Persp. A192 (2014); Kristina A. Thayer, Mary S. Wolfe, Andrew A. Rooney, Abee L. Boyles, John R. Bucher, and Linda S. Birnbaum, “Intersection of Systematic Review Methodology with the NIH Reproducibility Initiative,” 122 Envt’l Health Persp. A176 (2014).

[2] Sutera v. The Perrier Group of America, 986 F. Supp. 655, 660 (D. Mass. 1997); In re Agent Orange Product Liab. Litig., 597 F. Supp. 740, 781 (E.D.N.Y. 1984) (Weinstein, J.), aff’d, 818 F.2d 145 (2d Cir. 1987); Allen v. Pennsylvania Engineering Corp., 102 F.3d 194, 198 (5th Cir. 1996) (distinguishing regulatory pronouncements from causation in common law actions, which requires higher thresholds of proof); Glastetter v. Novartis Pharms. Corp., 107 F. Supp. 2d 1015, 1036 (E.D. Mo. 2000), aff’d, 252 F.3d 986 (8th Cir. 2001); Wright v. Willamette Indus., Inc., 91 F.3d 1105 (8th Cir. 1996); Siharath v. Sandoz Pharms. Corp., 131 F. Supp. 2d 1347, 1366 (N.D. Ga. 2001), aff’d, 295 F.3d 1194 (11th Cir. 2002).

[3] Daniel R. Dietrich, Sonja von Aulock, Hans Marquardt, Bas Blaauboer, Wolfgang Dekant, Jan Hengstler, James Kehrer, Abby Collier, Gio Batta Gori, Olavi Pelkonen, Frans P. Nijkamp, Florian Lang, Kerstin Stemmer, Albert Li, Kai Savolainen, A. Wallace Hayes, Nigel Gooderham, and Alan Harvey, “Scientifically unfounded precaution drives European Commission’s recommendations on EDC regulation, while defying common sense, well-established science and risk assessment principles,” 62 Food Chem. Toxicol. A1 (2013).

 

Twerski’s Defense of Daubert

July 6th, 2014

Professor Aaron D. Twerski teaches torts and products liability at the Brooklyn Law School.  Along with a graduating student, Lior Sapir, Twerski has published an article in which the authors mistakenly asseverate that “[t]his is not another article about Daubert.” Aaron D. Twerski & Lior Sapir, “Sufficiency of the Evidence Does Not Meet Daubert Standards: A Critique of the Green-Sanders Proposal,” 23 Widener L.J. 641, 641 (2014) [Twerski & Sapir].

A few other comments.

1. The title of the article.  True, true, and immaterial. As Professor David Bernstein has pointed out many times, Daubert is no longer the law; Federal Rule of Evidence 702, a statute, is the law.  Just as the original Rule 702 superseded Frye in 1975, the revised Rule 702 superseded Daubert in 2000. See David E. Bernstein, “The Misbegotten Judicial Resistance to the Daubert Revolution,” 89 Notre Dame L. Rev. 27 (2013).

2. Twerski and Sapir have taken aim at a draft paper by Professors Green and Sanders, who also presented similar ideas at a workshop in March 2012, in Spain. The Green-Sanders manuscript is available online. Michael D. Green & Joseph Sanders, “Admissibility Versus Sufficiency: Controlling the Quality of Expert Witness Testimony in the United States,” (March 5, 2012) <downloaded on March 25, 2012>. This article appears to have matured since spring 2012, but it has never progressed to parturition.  Professor Green’s website suggests a mutated version is in the works:  “The Daubert Sleight of Hand: Substituting Reliability, Methodology, and Reasoning for an Old Fashioned Sufficiency of the Evidence Test.”

Indeed, the draft paper is a worthwhile target. See “Admissibility versus Sufficiency of Expert Witness Evidence” (April 18, 2012).  Green and Sanders pursue a reductionist approach to Rule 702, which is unfaithful to the letter and spirit of the law.

3. In their critique of Green and Sanders, Twerski and Sapir get some issues wrong. First, they insist upon talking about Daubert criteria.  The “criteria” were never really criteria, and as Bernstein’s scholarship establishes, it is time to move past Daubert.

4. Twerski and Sapir assert that Daubert imposes a substantial or heavy burden of proof upon the proponent of expert witness opinion testimony:

“The Daubert trilogy was intended to set a formidable standard for admissibility before one entered the thicket of evaluating whether it was sufficient to serve as grounds for recovery.”

Twerski & Sapir at 648.

Daubert instituted a “high threshold of reliability”.

Twerski & Sapir at 649.

“But, the message from the Daubert trilogy is unmistakable: a court must have a high degree of confidence in the integrity of scientific evidence before it qualifies for consideration in any formal test to be utilized in litigation.”

Twerski & Sapir at 650.

“The Daubert standard is anything but minimal.”

Twerski & Sapir at 651.

Twerski and Sapir never explain whence comes “high,” “formidable,” and “anything but minimal.” To be sure, the Supreme Court noted that “[s]ince Daubert . . . parties relying on expert evidence have had notice of the exacting standards of reliability such evidence must meet.” Weisgram v. Marley Co., 528 U.S. 440, 455 (2000) (emphasis added). An exacting standard, however, is not necessarily a heavy burden.  It may be that the exacting standard is infrequently satisfied because the necessary evidence and inferences, of sufficient quality and validity, are often missing. The truth is that science is often in the no-man’s land of the indeterminate, the inconclusive, and the incomplete. Nevertheless, Twerski and Sapir play into the hands of the reductionist Green-Sanders thesis by talking about what appears to be a [heavy] burden of proof and the “weight of evidence” needed to sustain the burden.

5. Twerski and Sapir obviously recognize that reliability is different from sufficiency, but they miss the multi-dimensional aspect of expert witness opinion testimony.  Consider their assertion that:

“[t]he Court of Appeals for the Eleventh Circuit in Joiner had not lost its senses when it relied on animal studies to prove that PCBs cause lung cancer. If the question was whether any evidence viewed in the light most favorable to plaintiff supported liability, the answer was probably yes.”

Twerski & Sapir at 649; see Joiner v. Gen. Electric Co., 78 F.3d 524, 532 (11th Cir. 1996), rev’d, 522 U.S. 136 (1997).

The imprecision in thinking about expert witness testimony obscures what happened in Joiner, and what must happen under the structure of the evidence statutes (or case law).  The Court of Appeals never relied upon animal studies; nor did the district court below.  Expert witnesses relied upon animal studies, and other studies, and then offered an opinion that these studies “prove” PCBs cause human lung cancer, and Mr. Joiner’s lung cancer in particular.  Those opinions, which the Eleventh Circuit would have taken at face value, would be sufficient to support submitting the case to a jury.  Indeed, courts that evade the gatekeeping requirements of Rule 702 routinely tout the credentials of the expert witnesses, recite that they have used science in some sense, and declare that criticisms of their opinions “go to the weight not the admissibility” of the opinions.  These are, of course, evasions used to dodge Daubert and Rule 702. They are evasions because the science recited is at a very high level of abstraction (“I relied upon epidemiology”), because credentials are irrelevant, and because “weight not the admissibility” is a conclusion, not a reason.

Some of the issues obscured by the reductionist weight-of-the-evidence approach are the internal and external validity of the studies cited, whether the inferences drawn from the studies cited are valid and accurate, and whether the method of synthesizing a conclusion from disparate studies is appropriate. These various aspects of an evidentiary display cannot be reduced to a unidimensional “weight.” Consider how many observational studies suggested, some would say demonstrated, that beta carotene supplements reduced the risk of lung cancer, only to be pushed aside by one or two randomized clinical trials.

6. Twerski and Sapir illustrate the crucial point that gatekeeping judges must press beyond conclusory opinions by exploring the legal controversy over Parlodel and post-partum strokes.  Twerski & Sapir at 652. Their exploration takes them into some of the same issues that confronted the Supreme Court in Joiner:  extrapolations or “leaps of faith” between different indications, different species, and different study outcomes; between surrogate end points and the end point of interest; and between very high and relatively low therapeutic doses. Twerski and Sapir correctly discern that these various issues cannot be simply subsumed under weight or sufficiency.

7. Professors Green and Sanders have published a brief reply, in which they continue their “weight of the evidence” reductionist argument. Michael D. Green & Joseph Sanders, “In Defense of Sufficiency: A Reply to Professor Twerski and Mr. Sapir,” 23 Widener L.J. 663 (2014). Green and Sanders restate their position that courts can, should, and do sweep all the nuances of evidence and inference validity into a single metric – weight and sufficiency – to adjudicate so-called Daubert challenges.  What Twerski and Sapir seem to have stumbled upon is that Green and Sanders are not engaged in a descriptive enterprise; they are prescribing a standard that abridges and distorts the law and best practice in order to ensure that dubious causal claims are submitted to the finder of fact.

Silicosis, Lung Cancer, and Evidence-Based Medicine in North America

July 4th, 2014

According to her biographies[1], Madge Thurlow Macklin excelled in mathematics, graduated from Goucher College, received a fellowship to study physiology at Johns Hopkins University, and then went on to graduate with honors from the Johns Hopkins Medical School, in 1919.  Along the way, she acquired a husband, Charles C. Macklin, an associate professor of anatomy at Hopkins, and had her first child.

In 1921, the Macklins moved to London, Ontario, to take positions at the University of Western Ontario.  Charles received an appointment as a professor of histology and embryology, and went on to distinguish himself in pulmonary pathology. Madge Macklin received an appointment as a part-time instructor at Western, but faced decades of resistance because of her sex and her marriage to a professor. She was never promoted beyond part-time assistant professor at Western.

Despite the hostile work environment, Madge Macklin published and lectured on statistical and medical genetics.  Her papers made substantial contributions to the inheritable aspects of human cancer and other diseases.

Macklin advocated tirelessly for the inclusion of medical genetics in the American medical school curriculum. See, e.g., Madge T. Macklin, “Should The Teaching Of Genetics As Applied To Medicine Have A Place In The Medical Curriculum?” 7 J. Ass’n Am. Med. Coll. 368 (1932); “The Teaching of Inheritance of Disease to Medical Students: A Proposed Course in Medical Genetics,” 6 Ann. Intern. Med. 1335 (1933). Her advocacy largely succeeded, both in medical education and in the recognition of the importance of genetics for human diseases.

Macklin’s commitment to medical genetics led her to believe that physicians had a social responsibility to engage in sensible genetic counseling, and to offer reasonable guidance on procreation and birth control. In 1930, Macklin helped found the Eugenics Society of Canada, and went on to serve as its Director in 1935. Her writings show none of the grandiosity or pretension of creating a master race; their concern was, rather, with avoiding procreation among the mentally unfit. See, e.g., Madge Macklin, “Genetical Aspects of Sterilization of the Mentally Unfit,” 30 Can. Med. Ass’n J. 190 (1934).

Some of her biographers suggest that Macklin lost her position at Western due to her views on eugenics, and others suggest that her trenchant criticisms of the inequity of the University’s sexism led her to go to Ohio State University in 1946, as a cancer researcher, funded by the National Research Council. Macklin taught genetics at Ohio State, something that Western never permitted her to do. In 1959, three years before her death, Macklin was elected president of the American Society for Human Genetics.

By all accounts, Macklin was an extraordinary woman and a gifted scientist, but my interest in her work stems from her recognition, in the 1930s and 1940s, of the need for greater rigor in drawing etiological inferences in medical science.  Well ahead of her North American colleagues, Macklin emphasized the need to rule out bias, confounding, and chance before accepting apparent associations as causal. She wrote with unusual clarity and strength on the subject, decades before Sir Austin Bradford Hill. Her early mathematical prowess served her well in rebutting case reports and associations that were often embraced uncritically.

 *  *  *  *  *  *  *

In 1939, Professor Max Klotz, of the University of Toronto, reported a very crude analysis from which he inferred a putative association between silicosis and lung cancer. Max O. Klotz, “The Association of Silicosis and Carcinoma of the Lung,” 35 Am. J. Cancer 38 (1939). Klotz was a pathologist, and he worked with autopsy series, without statistical tools or understanding, as was common at the time. Macklin wrote a thorough refutation, which amply illustrates her abilities and her clear thinking:

“Another type of improper control for analysing cancer data arises through ignoring the fact that every cancer has a specific age incidence, and sex predilection. I have already mentioned breast, uterine and prostatic cancers, but other types of cancer, not of the generative organs,  have marked sex predilection. Cancer of the lung is a good example. It occurs four times as frequently in the male as in the female. If we desire to make any study of causative factors in lung cancer we must be sure that our control group is comparable to our experimental group. Again I will take an example from the literature. A worker was investigating the possible role of silicosis in inducing lung cancer. He compared the incidence of lung cancer in a group of 50 cases of silicosis, and in a large necropsy group of 4500 ‘unselected’ cases from a general hospital. He found that lung cancer was 7 times as frequent in the silicosis group as in the unselected necropsies. This is an excellent example of misunderstanding as to what is meant by ‘random’ sample. Because the 4500 necropsies were ‘unselected’ the worker thought that he had a good control group. As a matter of fact, in order to have a good control, he needed to select very carefully from these 4500 necropsies, those which he was to use as his standard. He forgot two things:

(1) that lung cancer is 4 times as common in the male as in the female and that all his silicosis cases were males, therefore his unselected necropsies should have been highly selected to contain only males. Assuming that half of his 4500 necropsies were females, and that among them one fifth of the lung cancers occurred, one can easily show that had his control group been all males as was his silicosis group, lung cancer would have been only 4.8 times as common among the silicosis patients as among the general necropsy group instead of 7 times as he found it.

(2) The second thing he forgot is that silicosis does not develop until 15 or 20 years of exposure have passed by. That placed all his silicosis patients in the late forties or early fifties, just when lung cancer becomes most common. Many of his general necropsy group were in the age range below 45, hence not in the lung cancer age. He should have selected only those males from the necropsy group who matched the age distribution of his silicosis patients. If he then found a significantly higher percentage of lung cancer among his silicosis patients he could have suggested a relationship between the two. Until that control group is properly studied, his results are valueless.”

****

SUMMARY

* * *

“The second point to be noted is that the control group should correspond as nearly as possible in all respects with the group under investigation, with the single exception of the etiologic factor being investigated. If silicosis is being considered as a causative agent in lung cancer, the control group should be as nearly like the experimental or observed group as possible in sex, age distribution, race, facilities for diagnosis, other possible carcinogenic factors, etc. The only point in which the control group should differ in an ideal study would be that they were not exposed to free silica, whereas the experimental group was. The incidence of lung cancer could then be compared in the two groups of patients.

This necessity is often ignored; and a ‘random’ control group is obtained for comparison on the assumption that any group taken at random is a good group for comparison. Fallacious results based on such studies are discussed briefly.”

Madge Thurlow Macklin, “Pitfalls in Dealing with Cancer Statistics, Especially as Related to Cancer of the Lung,” 14 Diseases Chest 525, 532-33, 529-30 (1948).

The recognition that uncontrolled, or improperly controlled, research was worthless was a great advance in thinking about medical causation.  In the 1940s, Macklin was ahead of her time; indeed, if she were alive today, she would be ahead of many contemporary epidemiologists.
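Macklin’s recomputation in point (1) of the quoted passage can be checked with a short sketch. Under her stated assumptions (half of the 4,500 necropsies female, and one fifth of the lung cancers occurring among the women), restricting the control group to males raises the control rate by a factor of 1.6 and shrinks Klotz’s crude sevenfold ratio accordingly. The absolute cancer count below is an illustrative assumption only; the ratios do not depend on it. Note that the sketch yields roughly 4.4 rather than Macklin’s reported 4.8, a small gap presumably attributable to rounding or to slightly different assumptions in her original computation.

```python
# Sketch of Macklin's sex-adjustment of Klotz's crude comparison.
# Illustrative numbers: the total count of lung cancers in the necropsy
# series is assumed; only the ratios matter, and they do not depend on it.

necropsies = 4500                 # Klotz's 'unselected' control necropsies
male_necropsies = 2250            # Macklin's assumption: half were male
cancers = 100                     # assumed total lung cancers in the series
male_cancers = cancers * 4 / 5    # lung cancer ~4x as common in males

crude_ratio = 7.0                 # Klotz: silicosis rate / unselected rate
silicosis_rate = crude_ratio * (cancers / necropsies)

# Restrict the control group to males, as the silicosis series was all male.
male_control_rate = male_cancers / male_necropsies
adjusted_ratio = silicosis_rate / male_control_rate

print(round(adjusted_ratio, 2))   # ~4.38, down from the crude 7.0
```

The same logic underlies her point (2): matching the control group on age would further deflate the comparison, since the silicosis decedents were concentrated in the peak lung-cancer ages.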

——

[1] Barry Mehler, “Madge Thurlow Macklin,” in Barbara Sicherman and Carol Hurd Green, eds., Notable American Women: The Modern Period 451-52 (1980); Laura Lynn Windsor, Women in Medicine: An Encyclopedia 134 (2002).
