TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

National Academies Press Publications Are Now Free

June 3rd, 2011

Publications of the National Research Council, as well as those of its constitutive organizations, the National Academy of Sciences, the Institute of Medicine, and the National Academy of Engineering, are often important resources for lawyers who litigate scientific and technical issues.  Right or wrong, these publications become forces in their own right in the courtroom, where they command serious attention from trial and appellate judges.

According to the National Academies Press’s website, all electronic versions of its books, in portable document format (pdf), will be available at its website, for free:

“As of June 2, 2011, all PDF versions of books published by the National Academies Press (NAP) will be downloadable to anyone free of charge.

That’s more than 4,000 books plus future reports produced by NAP – publisher for the National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council.”

Important works on forensic evidence, asbestos, dioxin, beryllium, research ethics, and data sharing, published by the NAP for the IOM or NRC, are now available for free.  Previously, the NAP charged upwards of $40 or $50 for some of these books.

This summer, the NRC’s Committee on Science, Technology and Law will release the Third Edition of the Reference Manual on Scientific Evidence, previously prepared by the Federal Judicial Center.  See http://sites.nationalacademies.org/PGA/stl/development_manual/index.htm

Statistical Power in the Academy

June 1st, 2011

Previously I have written about the concept of statistical power and how it is used and abused in the courts.  See here and here.

Statistical power was discussed in both the statistics and epidemiology chapters of the Second Edition of The Reference Manual on Scientific Evidence. In my earlier posts, I pointed out that the chapter on epidemiology provided some misleading, outdated guidance on the use of power.  See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” in Federal Judicial Center, The Reference Manual on Scientific Evidence 333, 362-63 (2d ed. 2000) (recommending use of power curves to assess whether failure to achieve statistical significance is exonerative of the exposure in question).  This chapter suggests that “[t]he concept of power can be helpful in evaluating whether a study’s outcome is exonerative or inconclusive.” Id.; see also David H. Kaye and David A. Freedman, “Reference Guide on Statistics,” in Federal Judicial Center, The Reference Manual on Scientific Evidence 83, 125-26 (2d ed. 2000).

The fact of the matter is that power curves are rarely if ever used in contemporary epidemiology, and post-hoc power calculations have long been discouraged and severely criticized. After the data are collected, the appropriate method to evaluate the “resolving power” of a study is to examine the confidence interval around the study’s estimate of the risk.  These confidence intervals allow a concerned reader to evaluate what can reasonably be ruled out (on the basis of random variation only) by the data in a given study. Post-hoc power calculations fail to provide a meaningful evaluation because they ignore the data actually obtained and require an arbitrarily specified alternative hypothesis.
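To make the point concrete, here is a minimal sketch, in Python, of how a confidence interval conveys a study’s resolving power.  The 2×2 counts are made up for illustration; they come from no study discussed here, and the Woolf (logit) approximation shown is simply the most familiar textbook method:

    import math

    # Hypothetical case-control counts (made up for illustration):
    #              exposed   unexposed
    # cases          a = 2      b = 50
    # controls       c = 1      d = 200
    a, b, c, d = 2, 50, 1, 200

    odds_ratio = (a * d) / (b * c)                  # 8.0
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf (logit) approximation
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.1f}, 95% CI ({lower:.2f}, {upper:.1f})")
    # OR = 8.0, 95% CI (0.71, 90.0): not "statistically significant" because
    # the interval includes 1.0, yet the data plainly cannot rule out a very
    # large risk.  The width of the interval, not a post-hoc power figure,
    # shows the study's limited resolving power.

An elevated but non-significant odds ratio with a wide interval, as in the Parlodel case discussed below, is thus better described as inconclusive than as exonerative.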

Twenty-five years ago, the use of post-hoc power was thoughtfully consigned to the dustbin of statistical techniques in a leading clinical medical journal:

“Although power is a useful concept for initially planning the size of a medical study, it is less relevant for interpreting studies at the end.  This is because power takes no account of the actual results obtained.”

***

“[I]n general, confidence intervals are more appropriate than power figures for interpreting results.”

Richard Simon, “Confidence intervals for reporting results of clinical trials,” 105 Ann. Intern. Med. 429, 433 (1986) (internal citation omitted).

An accompanying editorial by Ken Rothman reinforced the guidance given by Simon:

“[Simon] rightly dismisses calculations of power as a weak substitute for confidence intervals, because power calculations address only the qualitative issue of statistical significance and do not take account of the results already in hand.”

Kenneth J. Rothman, “Significance Questing,” 105 Ann. Intern. Med. 445, 446 (1986).

These two papers must be added to the 20 consensus statements, textbooks, and articles I previously cited.  See Schachtman, Power in the Courts, Part Two (2011).

The danger of the Reference Manual’s misleading advice is illustrated in a recent law review article by Professor Gold, of the Rutgers Law School, who asks “[w]hat if, as is frequently the case, such study is possible but of limited statistical power?”  Steve C. Gold, “The ‘Reshapement’ of the False Negative Asymmetry in Toxic Tort Causation,” 37 William Mitchell L. Rev. 101, 117 (2011) (available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1797826).

Never mind for the moment that Professor Gold offers no empirical evidence to support his assertion that studies of limited statistical power are “frequently” used in litigation.  Gold critically points to Dunn v. Sandoz Pharmaceuticals Corp., 275 F. Supp. 2d 672, 677–81, 684 (M.D.N.C. 2003), a Parlodel case in which the plaintiff relied upon a single case-control study that found an elevated odds ratio (8.4), which was not statistically significant.  Gold at 117.  Gold complains that “a study’s limited statistical power, rather than the absence of a genuine association, may lead to statistically insignificant results that courts treat as disproof of causation, particularly in situations without the large study samples that result from mass exposures.” Id.  Gold goes on to applaud two cases for emphasizing consideration of post-hoc power.  Id. at 117 & nn. 80–81 (citing Smith v. Wyeth-Ayerst Labs. Co., 278 F. Supp. 2d 684, 692–93 (W.D.N.C. 2003) (“[T]he concept of power is key because it’s helpful in evaluating whether the study‘s outcome . . . is exonerative or inconclusive.”), and Cooley v. Lincoln Elec. Co., 693 F. Supp. 2d 767, 774 (N.D. Ohio 2010) (prohibiting expert witness from opining that epidemiologic studies are evidence of no association unless the witness “has performed a methodologically reliable analysis of the studies’ statistical power to support that conclusion”)).

What of Professor Gold’s suggestion that power should be considered in evaluating studies that do not have statistically significant outcomes of interest?  See id. at 117. Not only is Gold’s endorsement at odds with sound scientific and statistical advice, but his approach reveals a potential hypocrisy when considered in light of his criticisms of significance testing.  Post-hoc power tests ignore the results obtained, including the variance of the actual study results, and they are calculated based upon a predetermined, arbitrary measure of Type I error (alpha) that is the focus of so much of Gold’s discomfort with statistical evidence.  Of course, power calculations also are made on the basis of arbitrarily selected alternative hypotheses, but this level of arbitrariness seems not to disturb Gold so much.
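The arbitrariness is easy to demonstrate.  Here is a minimal sketch, with a made-up standard error rather than any real study’s, showing that a post-hoc “power” figure is driven entirely by the analyst’s choice of alternative hypothesis:

    import math
    from scipy.stats import norm

    # Hypothetical standard error of a study's log odds ratio (made up):
    se = 0.5
    # Approximate power of a two-sided test (alpha = 0.05) against various
    # assumed alternatives, ignoring the negligible lower-tail contribution.
    for assumed_or in (1.5, 2.0, 3.0):
        z_alt = math.log(assumed_or) / se
        power = norm.sf(1.96 - z_alt)
        print(f"assumed OR {assumed_or}: power = {power:.2f}")
    # assumed OR 1.5: power = 0.13
    # assumed OR 2.0: power = 0.28
    # assumed OR 3.0: power = 0.59
    # Same data, three different "powers" -- the number reflects the chosen
    # alternative, not the results actually obtained.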

Where does the Third Edition of the Reference Manual on Scientific Evidence come out on this issue?  The Third Edition is not yet published, but Professor David Kaye has posted his chapter on statistics on the internet.  David H. Kaye & David A. Freedman, “Reference Guide on Statistics,” chapter 5.  http://www.personal.psu.edu/dhk3/pubs/11-FJC-Ch5-Stat.pdf (David Freedman died in 2008, after the chapter was submitted to the National Academy of Sciences for review; only Professor Kaye responded to the Academy’s reviews).

The chapter essentially continues the Second Edition’s advice:

“When a study with low power fails to show a significant effect, the results may therefore be more fairly described as inconclusive than negative. The proof is weak because power is low. On the other hand, when studies have a good chance of detecting a meaningful association, failure to obtain significance can be persuasive evidence that there is nothing much to be found.”

Chapter 5, at 44-46 (citations and footnotes omitted).

The chapter’s advice is not, of course, limited to epidemiologic studies, where a risk ratio or a risk difference is typically reported with an appropriate confidence interval.  As general guidance for all statistical tests, including those that report neither a measure of “effect size” nor the variability of the sample statistic, the chapter’s advice is fine.  But, as we can see from Professor Gold’s discussion and case review, the advice runs into trouble when measured against the methodological standards for evaluating an epidemiologic study’s results when confidence intervals are available.  Gold’s assessment of the cases is considerably skewed by his failure to recognize the inappropriateness of post-hoc power assessments of epidemiologic studies.

Manufactured Certainty

May 27th, 2011

With the help of Selikoff’s Lobby, the anti-asbestos zealots have created a false, manufactured certainty about various asbestos issues.  The manufacturing of faux certainty has taken place with respect to the history of knowledge about asbestos, as well as to the current state of knowledge about asbestos hazards.

The Selikoff Lobby exercised a great deal of influence on regulators and scientists.  The Lobby was able to bully many scientists and policy makers into adopting a position that held all asbestos mineral fiber types to be roughly equal in their potency to cause disease.  The Lobby accomplished this by suppressing evidence of past use of amphibole asbestos, and by overstating the hazards of chrysotile asbestos.

In the past, I have marshaled evidence of Selikoff’s activities as a crocidolite denier.  But was there really a controversy among honest scientists outside the Lobby?

Of course, there was and there is, but the Lobby has done a good job of branding the contrarians as tools of industry.  It is important, therefore, to come to terms with evidence that scientists without connections to industry took similar positions.

For many years, starting in the late 1970s, Dr. Gerrit Schepers was a mainstay of the plaintiffs’ state-of-the-art case against asbestos mining and manufacturing companies in asbestos personal injury litigation.  Dr. Schepers testified as a hired expert witness for plaintiffs near and far.  I encountered and cross-examined Dr. Schepers on several occasions, for different clients.  He was a fascinating witness, filled with contradictions and mixed motives.  In one particularly horrible mesothelioma case (Hill v. Carey Canada), I confronted Dr. Schepers with his own publication, from 1973, in which he largely exonerated chrysotile as a carcinogen.  Dr. Schepers twisted and turned, but he really had nowhere to go to avoid the full force of his own statements.  This publication is worth revisiting as an historical document, to show that there was a good deal of dissent from the Lobby’s positions, at least until the asbestos personal injury and property damage litigations mushroomed out of control in the early 1980s.

Here is what Dr. Schepers wrote, in 1973, while an employee of the United States government (Chief of the Medical Service, Veterans Administration, Lebanon, Pa.):

“There are marked differences between the capacities of the individual classes of silicate minerals to provoke responses in human and animal tissues. There also are major misconceptions as to what these substances can do when inhaled by man or other mammals. Two of the most extreme of these are (1) that all siliceous minerals are equally pathogenic and (2) that there is even the least semblance between the effects of the asbestiform and the non-asbestiform silicates.”

Gerrit W. H. Schepers, M.D., D.Sc., “The Biological Action of Talc and Other Silicate Minerals,” at 54, in Aurel Goodwin, Proceedings of the Symposium on Talc: U.S. Bureau of Mines Information Circular 8639 (1974) [available at http://www.scribd.com/doc/56461314].  The symposium was sponsored by the United States Department of the Interior, in May 1973. Recall that the dispute over non-asbestiform amphibole health effects was very much at issue in the Reserve Mining case, and the trial proceedings were about to start when Dr. Schepers delivered his paper, in 1973. Members of the Lobby, from Selikoff on down, were very much involved in the Reserve Mining case.  See U.S. Environmental Protection Agency v. Reserve Mining Co., 514 F.2d 492 (8th Cir. 1975) (en banc).

“Is chrysotile a carcinogen? This is a very perplexing question. A crescendo of popular opinion has sought to incriminate chrysotile. This author remains unconvinced.  The main premise for carcinogenicity stems from epidemiological observation of employees of the insulation and shipbuilding industries. In both these industries there has been in the past considerable exposure of pipe laggers to asbestos dust. Only in recent decades, however, have these insulation bats been composed predominantly of chrysotile. In former years crocidolite and amosite were important components.

***

Finally, it should be pointed out that the role of cigarette smoking has not been satisfactorily discounted in the referenced epidemiological studies of lung cancer among insulation workers. In some groups reported, an excess prevalence of lung cancer was not demonstrable when cigarette smoking was taken into consideration. Epidemiological surveys of chrysotile workers in Quebec showed no excess of lung cancer. A review of pleural mesotheliomatosis in Canada also failed to focus attention on Quebec or any other center where chrysotile industries are concentrated.”

Id. at 70.

That was in 1973, but within a few years, Dr. Schepers was co-opted by the asbestos plaintiff industry, which manufactured lawsuits and epistemic certainty about the hazards of all asbestos minerals.  The rest is “history.”

Interestingly, another would-be historian in the asbestos litigation, Dr. David S. Egilman, has written a paper, highly critical of W.R. Grace, based in part on another presentation given at the 1973 symposium, referenced above.  David Egilman, Wes Wallace, and Candace Hom, “Corporate corruption of medical literature: Asbestos studies concealed by W. R. Grace & Co.,” 6 Accountability in Research 127 (1998) (citing a paper in the same volume by Dr. William E. Smith, “Experimental studies on biological effects of tremolite talc on hamsters”).  Egilman’s paper was available at his website, http://www.egilman.com/Documents/publications/Wr_Grace.pdf.  The paper by Dr. Schepers no doubt escaped Egilman’s attention, even though it follows immediately after Dr. Smith’s contribution.

Sub-group Analyses in Epidemiologic Studies — Dangers of Statistical Significance as a Bright-Line Test

May 17th, 2011

Both aggregation and disaggregation of outcomes pose difficult problems for statistical analysis, and for epidemiology.  If outcomes are bundled into a single composite outcome, there has to be some basis for the bundling to make sense.  Even so, a composite outcome, such as all cardiovascular disease events, could easily hide an association in a component outcome.  For instance, studies of a drug under scrutiny may show no increased risk for all cardiovascular events, but closer inspection may show an increased risk for heart attacks while also showing a decreased risk for strokes.

The opposite problem arises when studies report multiple subgroups.  The opportunity for post hoc data mining runs rampant, and the existence of multiple subgroups means that the usual level of statistical significance becomes ineffective for ruling out chance as an explanation for an increased or decreased risk in a subgroup.  This problem is well known and extensively explored in the epidemiology literature, but it receives no attention in the Federal Judicial Center’s current Reference Manual on Scientific Evidence.  I hope that the authors of the Third Edition, which is due out in a few months, give some attention to the problem of subgroup analysis in epidemiology.  This seems to be an area where judges need a good deal of assistance, and where the Reference Manual lets them down.

Litigation tends to be a fertile field for data dredging, or the Texas Sharpshooter’s approach to epidemiology. (The Texas Sharpshooter shoots first and draws the target later.) When studies look at many outcomes, or many subgroups, chance alone will lead to results that have p-values less than the usual level for statistical significance (p < 0.05).  Accepting a result as “significant” when there is a multiplicity of testing or comparisons resulting from subgroup analyses is a form of “data torturing.” James L. Mills, “Data Torturing,” 329 New Engl. J. Med. 1196, 1196 (1993) (“If you torture the data long enough, they will confess.”).
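A minimal arithmetic sketch, assuming independent tests (my simplification; real subgroups are rarely fully independent), shows how quickly multiplicity erodes the nominal 5% error rate:

    # Family-wise chance of at least one false-positive "significant" result
    # among k independent tests at significance level alpha, assuming no
    # real associations exist.
    def familywise_error(k: int, alpha: float = 0.05) -> float:
        return 1 - (1 - alpha) ** k

    for k in (1, 10, 20):
        print(f"{k:2d} tests: {familywise_error(k):.2f}")
    #  1 tests: 0.05
    # 10 tests: 0.40  <- the 40% figure Stewart and Parmar give below
    # 20 tests: 0.64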

The multiple testing or comparison issue arises in both cohort and case-control studies.  Cohort studies can look at cancer morbidity or mortality in 20 different organs, with multiple histological subtypes for each cancer.  There are hundreds of diseases, by World Health Organization disease codes, that can be possible outcomes in a cohort study.  The odds are very good that several disease outcomes will be significantly elevated or decreased by chance alone.  Similarly, in a case-control study, participants with the outcome of interest can be questioned about hundreds of lifestyle and exposure variables.  Again, the finding of a statistically significant “risk factor” is not very compelling under these circumstances.

The problem of subgroup analyses is exacerbated by defense counsel’s emphasis on statistical significance as a “bright-line” test.  When subgroup analyses yield a statistically significant result, at the usual p < 0.05, which they will often do by chance alone, plaintiffs’ counsel obtain a “gotcha” moment.  Having built up the importance of statistical significance, defense counsel are hard pressed to dismiss the “significant” finding, even though study design makes it highly questionable if not downright meaningless.

Although the Reference Manual ignores this recurrent problem, several authors have issued stern warnings about it. For instance, Lisa Bero, who writes frequently on science and law issues, admonishes:

“Specifying subgroup analysis after data collection for the review has already begun can be a ‘fishing expedition’ or “data dredging” for statistically significant results and is not appropriate.”

L. Bero, “Evaluating Systematic Reviews and Meta-Analyses,” J. L. & Policy 569, 576 (2006).

Egger and Davey Smith, two well-respected authors who write about methodological issues in epidemiology, warn:

“Similarly, unplanned data-driven subgroup analyses are likely to produce spurious results.”

Matthias Egger & George Davey Smith, “Principles of and procedures for systematic reviews,” 24 chap. 2, in M. Egger, G. Davey Smith, D. Altman, eds., Systematic Reviews in Health Care:  Meta-Analysis in Context (2d ed. 2001).

Stewart and Parmar explain the genesis of the problem and the result of diluting the protection that statistical significance usually provides against Type I errors:

“In general, the results of these subgroup analyses can be very misleading owing to the very high probability that any observed difference is due solely to chance. For example, if 10 subgroup analyses are carried out, there is a 40% chance of finding at least one significant false-positive effect (5% significance level).  Further, when the results of subgroup analyses are reported, often only those that have yielded a significant result are presented, without noting that many other analyses have been performed.”

Stewart and Parmar, “Bias in the Analysis and Reporting of Randomized Controlled Trials,” 12 Internat’l J. Tech. Assessment in Health Care 264, 271 (1996).

“Such data dredging must be avoided and subgroup analyses should be limited to those that are specified a priori in the trial protocol.”

Id. at 272.

“Readers and reviewers should be aware that subgroup analyses, exploratory or otherwise, are likely to be particularly unreliable in situations where no overall effect of treatment has been observed.  In this case, if one subgroup exhibits a particularly positive effect of treatment, then another subgroup has to have a counteracting negative effect.”

* * *

“Consequently, perhaps the most sensible advice to readers and reviewers is to be very skeptical about the results of subgroup analyses.”

Id.  See also Sleight, “Subgroup analyses in clinical trials – fun to look at, but don’t believe them,” 1 Curr. Control Trials Cardiovasc. Med. 25 (2000) (“Analysis of subgroup results in a clinical trial is surprisingly unreliable, even in a large trial.  This is the result of a combination of reduced statistical power, increased variance and the play of chance.  Reliance on such analyses is more likely to be erroneous, and hence harmful, than application of the overall proportional (or relative) result in the whole trial to the estimate of absolute risk in that subgroup.  Plausible explanations can usually be found for effects that are, in reality, simply due to the play of chance.  When clinicians believe such subgroup analyses, there is a real danger of harm to the individual patient.”).

These warnings and admonitions are important caveats to statistical significance.  In emphasizing the importance of statistical significance in evaluating statistical evidence, defense lawyers are sometimes unwittingly hoisted with their own petard, in the form of studies with results that meet the usual p-value threshold of less than 5%.  Courts see these defense lawyers as engaged in special pleading when counsel argue that study multiplicity requires changing the p-value threshold to preserve the desired rate of Type I error, but that is exactly what must be done.
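What “changing the p-value threshold” looks like in its simplest form is the Bonferroni correction, sketched below with made-up p-values (mine, not drawn from any case or study discussed here); more refined adjustments exist, but the principle is the same:

    # Bonferroni correction: to hold the family-wise Type I error rate at
    # alpha across k comparisons, test each comparison at alpha / k.
    alpha, k = 0.05, 10
    threshold = alpha / k                      # 0.005
    p_values = [0.004, 0.03, 0.049]            # hypothetical subgroup results
    for p in p_values:
        verdict = "significant" if p < threshold else "not significant after adjustment"
        print(f"p = {p}: {verdict}")
    # Only p = 0.004 survives; the nominal "gotcha" findings at 0.03 and
    # 0.049 do not.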

A few years ago, the New England Journal of Medicine published an article that detailed the problem and promulgated guidelines for avoiding the worst abuses.  R. Wang, S. Lagakos, J. H. Ware, et al., “Statistics in Medicine — Reporting of Subgroup Analyses in Clinical Trials,” 357 New Engl. J. Med. 2189 (2007).  Wang and colleagues provide some important insights for how subgroup analyses can lead to increased rates of Type I errors, and they provide guidelines for authors on appropriate descriptions of subgroup analyses:

“However, subgroup analyses also introduce analytic challenges and can lead to overstated and misleading results.”

Id. at 2189a.

“When multiple subgroup analyses are performed, the probability of a false positive finding can be substantial.”

Id. at 2190a.

“There are several methods for addressing multiplicity that are based on the use of more stringent criteria for statistical significance than the customary P < 0.05.”

Id. at 2190b.

“A pre-specified subgroup analysis is one that is planned and documented before any examination of the data, preferably in the study protocol.”

Id. at 2190b.

“Post hoc analyses refer to those in which the hypotheses being tested are not specified before any examination of the data. Such analyses are of particular concern because it is often unclear how many were undertaken and whether some were motivated by inspection of the data. However, both pre-specified and post hoc subgroup analyses are subject to inflated false positive rates arising from multiple testing. Investigators should avoid the tendency to pre-specify many subgroup analyses in the mistaken belief that these analyses are free of the multiplicity problem.”

Id. at 2190b.

“When properly planned, reported, and interpreted, subgroup analyses can provide valuable information.”

Id. at 2193b.

Although Wang and colleagues take their primary aim at the abuse of subgroup analyses in randomized clinical trials, they make clear that the abuse is equally present in observational studies:

“In other settings, including observational studies, we encourage complete and thorough reporting of the subgroup analyses in the spirit of the guidelines listed.”

Id. at 2193b.

Wang and colleagues provide some very specific guidelines for reporting subgroup analyses.  These guidelines should help courts make sober assessments of results from subgroup analyses.

Recently, another guideline initiative in the field of observational epidemiology, STROBE, provided similar guidance to authors and journals for reporting subgroup analyses:

“[M]any debate the use and value of analyses restricted to subgroups of the study population. Subgroup analyses are nevertheless often done. Readers need to know which subgroup analyses were planned in advance, and which arose while analyzing the data. Also, it is important to explain what methods were used to examine whether effects or associations differed across groups … .”

Jan P. Vandenbroucke, Erik von Elm, Douglas G. Altman, Peter C. Gøtzsche, Cynthia D. Mulrow, Stuart J. Pocock, Charles Poole, James J. Schlesselman, and Matthias Egger, for the STROBE Initiative, “Strengthening the Reporting of Observational Studies in Epidemiology (STROBE):  Explanation and Elaboration,” 18 Epidemiology 805, 817 (2007).

“There is debate about the dangers associated with subgroup analyses, and multiplicity of analyses in general.  In our opinion, there is too great a tendency to look for evidence of subgroup-specific associations, or effect-measure modification, when overall results appear to suggest little or no effect. On the other hand, there is value in exploring whether an overall association appears consistent across several, preferably pre-specified subgroups, especially when a study is large enough to have sufficient data in each subgroup. A second area of debate is about interesting subgroups that arose during the data analysis. They might be important findings, but might also arise by chance. Some argue that it is neither possible nor necessary to inform the reader about all subgroup analyses done as future analyses of other data will tell to what extent the early exciting findings stand the test of time. We advise authors to report which analyses were planned, and which were not … . This will allow readers to judge the implications of multiplicity, taking into account the study’s position on the continuum from discovery to verification or refutation.”

Id. at 826-27.

Bibliography

E. Akl, M. Briel, J.J. You, et al., “LOST to follow-up Information in Trials (LOST-IT): a protocol on the potential impact,” 10 Trials 40 (2009).

Susan Assmann, Stuart Pocock, Laura Enos, and Linda Kasten, “Subgroup analysis and other (mis)uses of baseline data in clinical trials,” 355 Lancet 1064 (2000).

M. Bhandari, P.J. Devereaux, P. Li, et al., “Misuse of baseline comparison tests and subgroup analyses in surgical trials,” 447 Clin. Orthoped. Relat. Res. 247 (2006).

S. T. Brookes, E. Whitely, M. Egger, et al., “Subgroup analyses in randomized trials: risks of subgroup-specific analyses; power and sample size for the interaction test,” 57 J. Clin. Epid. 229 (2004).

A-W Chan, A. Hrobjartsson, K.J. Jorgensen, et al., “Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols,” 337 Brit. Med. J. a2299 (2008).

L. Cui, H.M. Hung, S.J. Wang, et al., “Issues related to subgroup analysis in clinical trials,” 12 J. Biopharm. Stat. 347 (2002).

Matthias Egger & George Davey Smith, “Principles of and procedures for systematic reviews,” chap. 2, in M. Egger, G. Davey Smith, D. Altman, eds., Systematic Reviews in Health Care:  Meta-Analysis in Context (2d ed. 2001).

J. Fletcher, “Subgroup analyses: how to avoid being misled,” 335 Brit. Med. J. 96 (2007).

Nick Freemantle, “Interpreting the results of secondary end points and subgroup analyses in clinical trials: should we lock the crazy aunt in the attic?” 322 Brit. Med. J. 989 (2001).

G. Guyatt, P.C. Wyer, J. Ioannidis, “When to Believe a Subgroup Analysis,” in G. Guyatt, et al., eds., User’s Guide to the Medical Literature: A Manual for Evidence-Based Clinical Practice 571-83 (2008).

J. Hasford, P. Bramlage, G. Koch, W. Lehmacher, K. Einhäupl, and P.M. Rothwell, “Inconsistent trial assessments by the National Institute for Health and Clinical Excellence and IQWiG: standards for the performance and interpretation of subgroup analyses are needed,” 63 J. Clin. Epidem. 1298 (2010).

J. Hasford, P. Bramlage, G. Koch, W. Lehmacher, K. Einhäupl, and P.M. Rothwell, “Standards for subgroup analyses are needed? We couldn’t agree more,”  64 J. Clin. Epidem. 451 (2011).

R. Hatala, S. Keitz, P. Wyer, et al., “Tips for learners of evidence-based medicine: 4. Assessing heterogeneity of primary studies in systematic reviews and whether to combine their results,” 172 Can. Med. Ass’n J. 661 (2005).

A.V. Hernandez, E.W. Steyerberg, G.S. Taylor, et al., “Subgroup analysis and covariate adjustment in randomized clinical trials of traumatic brain injury: a systematic review,” 57 Neurosurgery 1244 (2005).

A.V. Hernandez, E. Boersma, G.D. Murray, et al., “Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading?” 151 Am. Heart J. 257 (2006).

K. Hirji & M. Fagerland, “Outcome based subgroup analysis: a neglected concern,” 10 Trials 33 (2009).

Stephen W. Lagakos, “The Challenge of Subgroup Analyses — Reporting without Distorting,” 354 New Engl. J. Med. 1667 (2006).

C.M. Martin, G. Guyatt, V. M. Montori, “The sirens are singing: the perils of trusting trials stopped early and subgroup analyses,” 33 Crit. Care Med. 1870 (2005).

D. Moher, K. Schulz, D. Altman, et al., “The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials,” 357 Lancet 1191 (2001).

V.M. Montori, R. Jaeschke, H.J. Schunemann, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Brit. Med. J. 1093 (2004).

A.D. Oxman & G.H. Guyatt, “A consumer’s guide to subgroup analyses,” 116 Ann. Intern. Med. 78 (1992).

A. Oxman, G. Guyatt, L. Green, et al., “When to believe a subgroup analysis,” in G. Guyatt, et al., eds., User’s Guide to the Medical Literature: A Manual for Evidence-Based Clinical Practice 553-65 (2008).

S. Pocock, M. D. Hughes, R.J. Lee, “Statistical problems in the reporting of clinical trials:  A survey of three medical journals,” 317 New Engl. J. Med. 426 (1987).

S. Pocock, S. Assmann, L. Enos, et al., “Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems,” 21 Statistics in Medicine 2917 (2002).

Peter Rothwell, “Subgroup analysis in randomised controlled trials:  importance, indications, and interpretation,” 365 Lancet 176 (2005).

Kenneth Schulz & David Grimes, “Multiplicity in randomised trials II: subgroup and interim analyses,” 365 Lancet 1657 (2005).

Sleight, “Subgroup analyses in clinical trials – fun to look at, but don’t believe them,” 1 Curr. Control Trials Cardiovasc. Med. 25 (2000).

Reuel Stallones, “The Use and Abuse of Subgroup Analysis in Epidemiological Research,” 16 Prev. Med. 183 (1987).

Stewart & Parmar, “Bias in the Analysis and Reporting of Randomized Controlled Trials,” 12 Internat’l J. Tech. Assessment in Health Care 264, 271 (1996).

Xin Sun, Matthias Briel, Jason Busse, Elie A. Akl, John J. You, Filip Mejza, Malgorzata Bala, Natalia Diaz-Granados, Dirk Bassler, Dominik Mertz, Sadeesh K. Srinathan, Per Olav Vandvik, German Malaga, Mohamed Alshurafa, Philipp Dahm, Pablo Alonso-Coello, Diane M. Heels-Ansdell, Neera Bhatnagar, Bradley C. Johnston, Li Wang, Stephen D. Walter, Douglas G. Altman, and Gordon Guyatt, “Subgroup Analysis of Trials Is Rarely Easy (SATIRE): a study protocol for a systematic review to characterize the analysis, reporting, and claim of subgroup effects in randomized trials,” 10 Trials 101 (2009).

A. Trevor & G. Sheldon, “Criteria for the Implementation of Research Evidence in Policy and Practice,” in A. Haines, ed., Getting Research Findings Into Practice 11 (2d ed. 2008).

Jan P. Vandenbroucke, Erik von Elm, Douglas G. Altman, Peter C. Gøtzsche, Cynthia D. Mulrow, Stuart J. Pocock, Charles Poole, James J. Schlesselman, and Matthias Egger, for the STROBE Initiative, “Strengthening the Reporting of Observational Studies in Epidemiology (STROBE):  Explanation and Elaboration,” 18 Epidemiology 805–835 (2007).

Erik von Elm & Matthias Egger, “The scandal of poor epidemiological research Reporting guidelines are needed for observational epidemiology,” 329 Brit. Med. J. 868 (2004).

R. Wang, S. Lagakos, J. H. Ware, et al., “Statistics in Medicine — Reporting of Subgroup Analyses in Clinical Trials,” 357 New Engl. J. Med. 2189 (2007).

S. Yusuf, J. Wittes, J. Probstfield, et al., “Analysis and interpretation of treatment effects in subgroups of patients in randomized clinical trials,” 266 J. Am. Med. Ass’n 93 (1991).

The Law’s Obsession with Warnings

May 11th, 2011

Beth J. Rosenberg is an Assistant Professor in the Department of Public Health & Community Medicine at Tufts University School of Medicine, in Boston, Massachusetts.  Rosenberg is an unabashed activist.  She is driven by concerns that humans are ruining the environment and poisoning themselves.  She is a champion of workers’ safety and workers’ rights.  So when she writes about her personal experience with the lack of interest among workers in the hazards of silica, we all can learn something about whether the law’s obsession with warnings makes sense.

In 2003, Rosenberg wrote an article about her experiences in trying to have silica added to the list of substances regulated under the Massachusetts Toxics Use Reduction Act (TURA).  Beth Rosenberg, “Second Thoughts About Silicosis,” 13 New Solutions 223 (2003) (http://www.ncbi.nlm.nih.gov/pubmed/17208725).  Working with support from the Environmental League of Massachusetts and the Massachusetts Public Health Association, Rosenberg petitioned to have silica added to the TURA list of substances, in part out of a desire to help fuel a ban on abrasive blasting with silica in Massachusetts.  She figured that by piggybacking on the environmental movement, or riding “the green wave,” as she put it, the state’s environmental laws could be used to help control occupational exposures.

Rosenberg’s ideals and aspirations ran into the wall of worker expectations and needs.  They did not want abrasive blasting banned; they wanted stronger enforcement from OSHA, and better respirators.  Rosenberg admits that the workers were pursuing a path that was not her goal, and she learned that, at legislative hearings, she needed “to take tighter control of the scripts of any hearings that I’m orchestrating.”  Id. at 227.

Rosenberg worked with the Painters’ union to study substitutes for silica in abrasive blasting.  Motivated by a recognition that “[s]ilica-related disease is completely preventable,” id. at 224, she hoped to move the union towards supporting a ban on silica for abrasive blasting. After several years of this work, however, Rosenberg decided to give up on her silica mission.  Her experience is instructive for correcting the misapplication of “failure to warn” products liability law to the use of a raw material such as silica in the workplace:

“The main point here is that the men I’ve interviewed are not terribly concerned about silica dust. They care about being treated decently and respectfully by their bosses. They’re concerned about being encouraged to work too fast to work safely. They care about lead dust, particularly bringing it home to their families, so they get really angry when the foreman wants to lock up the yard at five o’clock and doesn’t leave them enough time to shower and change their clothes. They feel that they are expendable. And although most are fully aware of silica’s dangers, silica is not a top priority for them. The silica agenda was set by some physicians and health professionals who are outraged that anybody is still dying of this 100 percent preventable disease. This is understandable, and I am one of those people, but I’m not sure this is the best way to be of service. I see that there are other, more pressing issues than silica.

I’ve chosen to serve working people, and yet they’ve had little or no role in setting the research agenda. Not only is this unrewarding for me, but it’s also a bad political strategy because you need a lot of support and collaboration to accomplish anything—even when everyone agrees that action is required—and interest in silica is tepid among the people most affected. This may not be true in other trades or in other countries, but it is true with abrasive blasters. And I stress that is not an awareness problem; they know breathing dust is bad for them, but it’s just not their top concern, and I can see why. So, henceforth, I’m going to let the community I choose to serve set the research agenda, and I will offer my assistance in their battles. That to me is the best way to do public health.”

Id. at 229 (emphasis added).  Rosenberg’s epiphany should lead to some thoughtful re-evaluation of how the law of products liability is applied to the use of a natural material such as crystalline silica.  While Professor Rosenberg was working with the Painters’ Union, and having her “Second Thoughts about Silicosis,” plaintiffs’ lawyers were screening, scheming, and suing for silicosis among the same union’s members.  If only plaintiffs’ law firms took heed of Professor Rosenberg’s lessons, and stopped signing up sand blasters under the paternalistic pretense that the law must provide a remedy for the alleged failure to warn.  The faux historians of silicosis, with their conspiratorial theories, could learn a great deal from Professor Rosenberg, as well.

De-Zincing the Matrixx

April 12th, 2011

Although the plaintiffs in Matrixx Initiatives, Inc. v. Siracusano generally were more accurate in defining statistical significance than the defendant, or than the so-called “statistical expert” amici (Ziliak and McCloskey), the plaintiffs’ brief went off the rails when it turned to discussing the requirements for proving causation.  Of course, the admissibility and sufficiency of evidence to show causation were not at issue in the case, but the plaintiffs got pulled down the rabbit hole dug by the defendant, in its bid to establish a legal bright-line rule about pleading.

Differential Diagnosis

In an effort to persuade the Court that statistical significance is not required, the plaintiffs/respondents threw science and legal principles to the wind.  They contended that statistical significance is not at all necessary to causal determinations because

“[c]ourts have recognized that a physician’s differential diagnosis (which identifies a likely cause of certain symptoms after ruling out other possibilities) can be reliable evidence of causation.”

Respondents’ Brief at 49.  Perhaps this is simply the Respondents’ naiveté, but it seems to suggest scienter to deceive. Differential diagnosis is not about etiology; it is about diagnosis, which rarely incorporates an assessment of etiology.  Even if the differentials were etiologies and not diagnoses, the putative causes in the differential must already be shown, independently, to be capable of causing the outcome in question. See, e.g., Tamraz v. Lincoln Electric Co., 620 F.3d 665 (6th Cir. 2010).  A physician cannot rule in an etiology in a specific person simply by positing it among the differentials, without independent, reliable evidence that the ruled-in “specific cause” can cause the outcome in question, under the circumstances of the plaintiff’s exposure.  Furthermore, differential diagnosis or etiology is nothing more than a process of elimination to select a specific cause; it has nothing to do with statistical significance because it has nothing to do with general causation.

This error in the Respondent’s brief about differential diagnosis unfortunately finds its way into Justice Sotomayor’s opinion.

Daubert Denial and the Recrudescence of Ferebee

In their zeal, the Respondents go further than advancing a confusion between general and specific causation, and an erroneous view of what must be shown before a putative cause can be inserted in a set of differential (specific) causes.  They cite one of the most discredited cases in 20th century American law of expert witnesses:

Ferebee v. Chevron Chem. Co., 736 F.2d 1529, 1536 (D.C. Cir. 1984) (“products liability law does not preclude recovery until a ‘statistically significant’ number of people have been injured”).

Respondents’ Brief at 50.  This is not a personal, subjective opinion about this 1984 pre-Daubert decision.  Ferebee was wrongly decided when announced, and it was soon abandoned by the very court that issued the opinion.  It has been a derelict on the sea of evidence law for over a quarter of a century.  Citing to Ferebee, without acknowledging its clearly overruled status, raises an interesting issue about candor to the Court, and the responsibilities of counsel in trash picking in the dustbin of expert witness law.

Along with its apparent rejection of statistical significance, Ferebee is known for articulating an “anything goes” philosophy toward the admissibility and sufficiency of expert witnesses:

“Judges, both trial and appellate, have no special competence to resolve the complex and refractory causal issues raised by the attempt to link low-level exposure to toxic chemicals with human disease.  On questions such as these, which stand at the frontier of current medical and epidemiological inquiry, if experts are willing to testify that such a link exists, it is for the jury to decide whether to credit such testimony.”

Ferebee v. Chevron Chemical Co., 736 F.2d 1529, 1534 (D.C. Cir.), cert. denied, 469 U.S. 1062 (1984).  Within a few years, the nihilism of Ferebee was severely limited by the court that decided the case:

“The question whether Bendectin causes limb reduction defects is scientific in nature, and it is to the scientific community that the law must look for the answer.  For this reason, expert witnesses are indispensable in a case such as this.  But that is not to say that the court’s hands are inexorably tied, or that it must accept uncritically any sort of opinion espoused by an expert merely because his credentials render him qualified to testify… . Whether an expert’s opinion has an adequate basis and whether without it an evidentiary burden has been met, are matters of law for the court to decide.”

Richardson v. Richardson-Merrell, Inc., 857 F.2d 823, 829 (D.C. Cir. 1988).

Of course, several important decisions intervened between Ferebee and Richardson.  In 1986, the Fifth Circuit expressed a clear message to trial judges that it would no longer continue to tolerate the anything-goes approach to expert witness opinions:

“We adhere to the deferential standard for review of decisions regarding the admission of testimony by experts.  Nevertheless, we … caution that the standard leaves appellate judges with a considerable task.  We will turn to that task with a sharp eye, particularly in those instances, hopefully few, where the record makes it evident that the decision to receive expert testimony was simply tossed off to the jury under a ‘let it all in’ philosophy.  Our message to our able trial colleagues:  it is time to take hold of expert testimony in federal trials.”

In re Air Crash Disaster, 795 F.2d 1230, 1234 (5th Cir. 1986) (emphasis added).

In the same intervening period between Ferebee and Richardson, Judge Jack Weinstein, a respected evidence scholar and well-known liberal judge, announced:

“The expert is assumed, if he meets the test of Rule 702, to have the skill to properly evaluate the hearsay, giving it probative force appropriate to the circumstances.  Nevertheless, the court may not abdicate its independent responsibilities to decide if the bases meet minimum standards of reliability as a condition of admissibility.  See Fed. Rule Ev. 104(a).  If the underlying data are so lacking in probative force and reliability that no reasonable expert could base an opinion on them, an opinion which rests entirely upon them must be excluded.”

In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223, 1245 (E.D.N.Y. 1985)(excluding plaintiffs’ expert witnesses), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, 487 U.S. 1234 (1988).

The notion that technical decisions had to be evidence based, not opinion based, emerged elsewhere as well. For example, in the context of applying statistics, the federal courts pronounced that the ipse dixit of parties and witnesses did not count for much:

“When a litigant seeks to prove his point exclusively through the use of statistics, he is borrowing the principles of another discipline, mathematics, and applying these principles to the law. In borrowing from another discipline, a litigant cannot be selective in which principles are applied. He must employ a standard mathematical analysis. Any other requirement defies logic to the point of being unjust. Statisticians do not simply look at two statistics, such as the actual and expected percentage of blacks on a grand jury, and make a subjective conclusion that the statistics are significantly different. Rather, statisticians compare figures through an objective process known as hypothesis testing.”

Moultrie v. Martin, 690 F.2d 1078, 1082 (4th Cir. 1982) (citations omitted).

Of course, several years after the District of Columbia Circuit decided Ferebee, the Supreme Court decided Daubert, in 1993, followed by decisions in Joiner, Kumho Tire, and Weisgram.  In 2000, Congress approved a new Rule of Evidence 702, which incorporated the learning and experience in judicial gatekeeping from a wide range of cases and principles.

Do the Respondents have a defense to having cited an overruled, superseded, discredited precedent in the highest federal Court?  Perhaps they would argue that they are in pari delicto with courts (Daubert-Deniers), which remarkably have ignored the status of Ferebee, and cited it.  See, e.g., Betz v. Pneumo Abex LLC, 998 A.2d 962, 981 (Pa. Super. 2010); McCarrell v. Hoffman-La Roche, Inc., 2009 WL 614484, *23 (N.J.Super.A.D. 2009).  See also Rubanick v. Witco Chemical Corp., 125 N.J. 421, 438-39 (1991)(quoting Ferebee before it was overruled by the Supreme Court, but after it was disregarded by the D.C. Circuit in Richardson).

Matrixx Galvanized – More Errors, More Comedy About Statistics

April 9th, 2011

Matrixx Initiatives is a rich case – rich in irony, comedy, tragedy, and error.  It is well worth further exploration, especially in terms of how this 9-0 decision was reached, what it means, and how it should be applied.

It pains me that the Respondents (plaintiffs) generally did a better job in explaining significance testing than did the Petitioner (defendant).

At least some of the Respondents’ definitional efforts are unexceptionable.  For instance:

“Researchers use the term ‘statistical significance’ to characterize a result from a test that satisfies a particular kind of test designed to show that the result is unlikely to have occurred by random chance.  See David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Reference Manual on Scientific Evidence 83, 122 (Fed. Judicial Ctr., 2d ed. 2000) (“Reference Manual”).”

Brief for Respondents, at 38–39 (Nov. 5, 2010).

“The purpose of significance testing in this context is to assess whether two events (here, taking Zicam and developing anosmia) occur together often enough to make it sufficiently implausible that no actual underlying relationship exists between them.”

Id. at 39.   These definitions seem acceptable as far as they go, as long as we realize that the relationship that remains, when chance is excluded, may not be causal, and indeed, it may well be a false-positive relationship that results from bias or confounding.

Rather than giving one good, clear definition, the Respondents felt obligated to repeat and restate their definitions, and thus wandered into error:

“To test for significance, the researcher typically develops a ‘null hypothesis’ – e.g., that there is no relationship between using intranasal Zicam and the onset of burning pain and subsequent anosmia. The researcher then selects a threshold (the ‘significance level’) that reflects an acceptably low probability of rejecting a true null hypothesis – e.g., of concluding that a relationship between Zicam and anosmia exists based on observations that in fact reflect random chance.”

Id. at 39.  Perhaps the Respondents were using the “cooking frogs” approach.  As the practical wisdom has it, dropping a frog into boiling water risks having the frog jump out, but if you put a frog into a pot of warm water, and gradually bring the pot to a boil, you will have a cooked frog.  Here the Respondents repeat and morph their definition of statistical significance until they have brought it around to their rhetorical goal of confusing statistical significance with causation.  Note that now the definition is muddled, and the Respondents are edging closer towards claiming that statistical significance signals the existence of a “relationship” between Zicam and anosmia, when in fact, the statistical significance simply means that chance is not a likely explanation for the observations.  Whether a “relationship” exists requires further analysis, and usually a good deal more evidence.

“The researcher then calculates a value (referred to as p) that reflects the probability that the observed data could have occurred even if the null hypothesis were in fact true.”

Id. at 39-40 (emphasis in original). Well, this is almost true.  It’s not “even if,” but simply “if”; that is, the p-value is based upon the assumption that the null hypothesis is correct.  The “if” is not an incidental qualifier; it is essential to the definition of statistical significance. “Even” adds nothing but a slightly misleading rhetorical flourish.  And the p-value is not the probability that the observed data are correct; it is the probability of observing the data obtained, or data more extreme, assuming the null hypothesis is true.
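In symbols (my notation, not the brief’s), the p-value is the conditional probability

    p = \Pr\bigl(\text{data as extreme as, or more extreme than, those observed} \mid H_0\bigr),

which is emphatically not \Pr(H_0 \mid \text{data}), the probability that the null hypothesis is true given the data.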

The Respondents’/plaintiffs’ efforts at serious explication ultimately succumb to their hyperbolic rhetoric.  They explained that statistical significance may not be “practical significance,” which is true enough.  There are, of course, instances in which a statistically significant difference is not particularly interesting.  A large clinical trial, testing two cancer medications head to head, may show that one extends life expectancy by a week or two, but has a worse side-effect profile.  The statistically significant “better” drug may be refused a license from regulatory agencies, or be rejected by knowledgeable oncologists and sensible patients, who are more concerned about quality of life issues.

The Respondents are also correct that invoking statistical significance does not provide the simple, bright-line test the Petitioner desired.  Someone would still have to specify the level of alpha, the acceptable level of Type I error, and this would further require a specification of either a one-sided or two-sided test.  To be sure, the two-sided test, with an alpha of 5%, is generally accepted in the world of biostatistics and biomedical research.  Regulatory agencies, including the FDA, however, lower the standard to implement their precautionary principles and goals.  Furthermore, evaluation of statistical significance requires additional analysis to determine whether the observed deviation from expected is due to bias or confounding, or whether the statistical test has been unduly diluted by multiple comparisons, subgroup analyses, or data mining techniques.
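A small sketch of the one-sided versus two-sided point, using a hypothetical z-statistic of my own choosing: the same data can be “significant” or not at the 5% level depending solely on that specification:

    from scipy.stats import norm

    z = 1.80                           # hypothetical test statistic
    p_one_sided = norm.sf(z)           # ~0.036: "significant" at alpha = 0.05
    p_two_sided = 2 * norm.sf(abs(z))  # ~0.072: not significant
    print(f"one-sided p = {p_one_sided:.3f}, two-sided p = {p_two_sided:.3f}")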

Of course, statistical significance today usually occurs in conjunction with an assessment of “effect size,” usually through an analysis of a confidence interval around a point estimate of a risk ratio.  The Respondents’ complaint that the p-value does not convey the magnitude of the association is a bit off the mark, but not completely illegitimate.  For instance, if there were a statistically significant finding of anosmia from Zicam use, in the form of an elevated risk that was itself small, the FDA might well decide that the risk was manageable with a warning to users to discontinue the medication if they experienced a burning sensation upon use.

The Respondents, along with their two would-be “statistical expert” amici, misrepresent the substance of many of the objections to statistical significance in the medical literature.  A telling example is the Respondents’ citation to an article by Professor David Savitz:

David A. Savitz, “Is Statistical Significance Testing Useful in Interpreting Data?” 7 Reproductive Toxicology 95, 96 (1993) (“[S]tatistical significance testing is not useful in the analysis or interpretation of scientific research.”).

Id. at 52, n. 40.

More complete quotations from Professor Savitz’s article, however, reveal a more nuanced, and rather different, message:

“Although P values and statistical significance testing have become entrenched in the practice of biomedical research, their usefulness and drawbacks should be reconsidered, particularly in observational epidemiology. The central role for the null hypothesis, assuming an infinite number of replications, and the dichotomization of results as positive or negative are argued to be detrimental to the proper design and evaluation of research. As an alternative, confidence intervals for estimated parameters convey some information about random variation without several of these limitations. Elimination of statistical significance testing as a decision rule would encourage those who present and evaluate research to more comprehensively consider the methodologic features that may yield inaccurate results and shift the focus from the potential influence of random error to a broader consideration of possible reasons for erroneous results.”

Savitz, 7 Reproductive Toxicology at 95.  Respondents’ case would hardly have been helped by replacing the call for statistical significance with a call for using confidence intervals, along with careful scrutiny of possible reasons for erroneous results.

“Regardless of what is taught in statistics courses or advocated by editorials, including the recent one in this journal, statistical tests are still routinely invoked as the primary criterion for assessing whether the hypothesized phenomenon has occurred.”

7 Reproductive Toxicology at 96 (internal citation omitted).

“No matter how carefully worded, “statistically significant” misleadingly conveys notions of causality and importance.”

Id. at 99.  This last quotation really unravels the Respondents’ fatuous use of citations.  Of course, the Savitz article is quite inconsistent generally with the message that the Respondents wished to convey to the Supreme Court, but intellectual honesty required a fuller acknowledgement of Prof. Savitz’s thinking about the matter.

Finally, there are some limited cases, in which the failure to obtain a conventionally statistically significant result is not fatal to an assessment of causality.  Such cases usually involve instances in which it is extremely difficult to find observational or experimental data to analyze for statistical significance, but other lines of evidence support the conclusion in a way that scientists accept.  Although these cases are much rarer than Respondents imagine, they may well exist, but they do not detract much from Sir Ronald Fisher’s original conception of statistical significance:

“In the investigation of living beings by biological methods statistical tests of significance are essential. Their function is to prevent us being deceived by accidental occurrences, due not to the causes we wish to study, or are trying to detect, but to a combination of the many other circumstances which we cannot control. An observation is judged significant, if it would rarely have been produced, in the absence of a real cause of the kind we are seeking. It is a common practice to judge a result significant, if it is of such a magnitude that it would have been produced by chance not more frequently than once in twenty trials. This is an arbitrary, but convenient, level of significance for the practical investigator, but it does not mean that he allows himself to be deceived once in every twenty experiments. The test of significance only tells him what to ignore, namely all experiments in which significant results are not obtained. He should only claim that a phenomenon is experimentally demonstrable when he knows how to design an experiment so that it will rarely fail to give a significant result. Consequently, isolated significant results which he does not know how to reproduce are left in suspense pending further investigation.”

Ronald A. Fisher, “The Statistical Method in Psychical Research,” 39 Proceedings of the Society for Psychical Research 189, 191 (1929). Note that Fisher was talking about experiments, not observational studies, and that he hardly was advocating a mechanical, thoughtless criterion of significance.

The Supreme Court’s decision in Castaneda illustrates how misleading statistical significance can be.  In a five-to-four decision, the Court held that a prima facie case of ethnic discrimination could be made out on the basis of statistical significance alone.  In dictum, the Court suggested that statistical evidence alone sufficed when the observed outcome was more than two or three standard deviations from the expected outcome.  Castaneda v. Partida, 430 U.S. 482, 496 n. 17 (1977).  The facts of Castaneda illustrate a compelling case in which the statistical significance observed was likely the result of confounding effects of reduced civic participation by poor, itinerant minorities, in a Texas county in which the ethnic minority controlled political power, and made up a majority of the petit jury that convicted Mr. Partida.
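For readers curious about the arithmetic behind the “two or three standard deviations” dictum, here is a minimal sketch under a binomial model. The numbers are purely illustrative, not the actual Castaneda figures; and note what the computation cannot do – it says nothing about confounding.

```python
# A minimal sketch of the "two or three standard deviations" rule under
# a binomial model. All numbers are illustrative, not the Castaneda data.
import math

p = 0.79        # illustrative: group's share of the eligible population
n = 870         # illustrative: persons summoned for jury service
observed = 339  # illustrative: group members actually summoned

expected = n * p                   # expected count under the null
sd = math.sqrt(n * p * (1 - p))    # binomial standard deviation
z = (observed - expected) / sd

print(f"expected {expected:.0f}, observed {observed}: "
      f"{z:.1f} standard deviations from expectation")
# A z-score this extreme rules out chance, but it cannot distinguish
# discrimination from confounders such as differential civic participation.
```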

The Matrixx – A Comedy of Errors

April 6th, 2011

1. Incubi Curiae

As I noted in “Matrixx Unloaded,” Justice Sotomayor’s scholarship, in discussing case law under Federal Rule of Evidence 702, was seriously off base.  Of course, Matrixx Initiatives was only a pleading case, and so there was no real reason to consider rules of admissibility or sufficiency, such as Rule 702.

Fortunately, Justice Sotomayor avoided further embarrassment by not discussing the fine details of significance or hypothesis testing.  Not so the two so-called “statistics experts” who submitted an amicus brief.

Consider the following statement by McCloskey and Ziliak, about adverse event reports (AER) and statistical significance.

“Suppose that a p-value for a particular test comes in at 9 percent.  Should this p-value be considered “insignificant” in practical, human, or economic terms? We respectfully answer, “No.” For a p-value of .09, the odds of observing the AER is 91 percent divided by 9 percent. Put differently, there are 10-to-1 odds that the adverse effect is “real” (or about a 1 in 10 chance that it is not).”

Brief of Amici Curiae Statistics Experts Professors Deirdre N. McCloskey and Stephen T. Ziliak in Support of Respondents, at 18 (Nov. 18, 2010), 2010 WL 4657930 (U.S.) (emphasis added).

Of course, the whole enterprise of using statistical significance to evaluate AERs is suspect because there is no rate, either expected or observed.  A rate could be estimated from the number of AERs reported per total number of persons using the medication in some unit of time (see the sketch following the list below).  Pharmacoepidemiologists sometimes do engage in such speculative, blue-sky enterprises to determine whether a “signal” may have been generated by the AERs.  Even if a denominator were implied, and significance testing used, it would be incorrect to treat the association as causal.  Our statistics experts here have committed several serious mistakes; they have

  • treated the AERs as a rate, when they are simply counts;
  • treated the AERs as an observed rate that can be evaluated against a null hypothesis of no increase in rate, when there is no expected rate for the events in question; and
  • treated the pseudo-statistical analysis as if it provided a basis for causal assessment, when at best it would be a very weak observational study that raised an hypothesis for further study.
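To make the first two bullet points concrete, here is a minimal sketch of what a rate-based analysis would even require. Every number below is hypothetical; the denominator and the expected background rate are precisely what the amici never had.

```python
# Hypothetical sketch: converting raw AER counts into a reporting rate
# requires a denominator (person-time of use) and an expected background
# rate. All figures below are made up for illustration.
aer_count = 23          # hypothetical number of adverse event reports
users = 1_000_000       # hypothetical number of persons using the drug
years = 2               # hypothetical observation window, in years

person_years = users * years
reporting_rate = aer_count / person_years    # events per person-year
background_rate = 10 / 100_000               # hypothetical expected rate

print(f"reporting rate:  {reporting_rate:.2e} per person-year")
print(f"background rate: {background_rate:.2e} per person-year")
# Even with both numbers in hand, under-reporting and the absence of any
# sampling design leave a significance test on AERs uninterpretable, and
# nothing in this arithmetic speaks to causation.
```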

Now that would be, and should be, enough error for any two “statistics experts” in a given day, and we might have hoped that these putative experts would have thought through their ideas before imposing themselves upon a very busy Court.  But there is another mistake, which is even more stunning for having come from self-styled “statistics experts.”  Their derivation of a probability (or an odds statement) that the null hypothesis of no increased rate of AERs is false is statistically incorrect.  A p-value is calculated on the assumption that the null hypothesis is true, and it measures the probability of obtaining data as extreme as, or more extreme than, the data actually observed in the study.  The p-value is thus a conditional probability statement of the probability of the data given the hypothesis.  As every first-year statistics student learns, you cannot reverse the order of the conditional probability statement without committing a transpositional fallacy.  In other words, you cannot obtain a statement of the probability of the hypothesis given the data from the probability of the data given the hypothesis.  Bayesians, of course, point to this limitation as a “failing” of frequentist statistics, but the limitation cannot be overcome by semantic fiat.
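The fallacy is easy to demonstrate with a little arithmetic. The sketch below, using made-up priors and an assumed power, treats the p-value as a rejection threshold – itself a simplification – and applies Bayes’ theorem. The “odds that the effect is real” swing wildly with the prior, a quantity the p-value alone never supplies.

```python
# Sketch of why a p-value is not the probability that the null is false.
# Inputs (prior probability of the null, power of the test) are made up;
# the p-value is treated, simplistically, as a rejection threshold.
def posterior_null(alpha, prior_null, power):
    """P(null | rejection), by Bayes' theorem."""
    false_positives = alpha * prior_null
    true_positives = power * (1 - prior_null)
    return false_positives / (false_positives + true_positives)

for prior in (0.5, 0.9):
    post = posterior_null(alpha=0.09, prior_null=prior, power=0.8)
    print(f"prior P(null) = {prior}: posterior P(null) = {post:.2f}")
# prior 0.5 -> posterior ~0.10; prior 0.9 -> posterior ~0.50.
# Nothing close to a fixed "10-to-1 odds that the effect is real."
```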

No Confidence in Defendant’s Confidence Intervals

Lest anyone think I am picking on the “statistics experts,” consider the brief filed by Matrixx Initiatives.  In addition to the whole crazy business of relying upon statistical significance in the absence of a study that used a statistical test, there are the following two howlers.  You would think that a company putting forward a “no statistical significance” defense would want to state statistical concepts clearly, but take a look at the Petitioner’s brief:

“Various analytical methods can be used to determine whether data reflect a statistically significant result. One such method, calculating confidence intervals, is especially useful for epidemiological analysis of drug safety, because it allows the researcher to estimate the relative risk associated with taking a drug by comparing the incidence rate of an adverse event among a sample of persons who took a drug with the background incidence rate among those who did not. Dividing the former figure by the latter produces a relative risk figure (e.g., a relative risk of 2.0 indicates a 50% greater risk among the exposed population). The researcher then calculates the confidence interval surrounding the observed risk, based on the preset confidence level, to reflect the degree of certainty that the “true” risk falls within the calculated interval. If the lower end of the interval dips below 1.0—the point at which the observed risk of an adverse event matches the background incidence rate—then the result is not statistically significant, because it is equally probable that the actual rate of adverse events following product use is identical to (or even less than) the background incidence rate. Green et al., Reference Guide on Epidemiology, at 360-61. For further discussion, see id. at 348-61.”

Matrixx Initiatives Brief at p. 36 n. 18 (emphasis added). Both emphasized passages are wrong, and the Federal Judicial Center’s Reference Manual does not support them. A relative risk of 2.0 represents a 100% increase in risk, not 50%, although Matrixx Initiatives may have been thinking of a very different risk metric – the attributable risk, which would be 50% when the relative risk is 2.0.
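The arithmetic is easy enough to verify. A minimal sketch, with purely illustrative incidence figures:

```python
# Illustrative check: a relative risk of 2.0 is a 100% increase in risk;
# the attributable fraction among the exposed is then 50%.
incidence_exposed = 20 / 1000     # illustrative incidence among the exposed
incidence_unexposed = 10 / 1000   # illustrative background incidence

rr = incidence_exposed / incidence_unexposed
percent_increase = (rr - 1) * 100
attributable_fraction = (rr - 1) / rr

print(f"relative risk:         {rr:.1f}")                       # 2.0
print(f"increase in risk:      {percent_increase:.0f}%")        # 100%
print(f"attributable fraction: {attributable_fraction:.0%}")    # 50%
```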

The second emphasized statement is much worse, because no choice of words could rescue the brief’s understanding of a confidence interval (CI). The CI does not permit us to make a direct probability statement about the truth of any point within the interval. Although the interval does provide some insight into the true value of the parameter, the meaning of the confidence interval must be understood operationally.  For a 95% interval, if 100 samples were taken and a 95% CI constructed around each sample estimate, we would expect about 95 of the intervals to cover, or include, the true value of the parameter.  (With α set at 0.05, the confidence level is 100(1 − α)% = 95%; α is our measure of Type I error, or the probability of false positives.)
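The operational meaning is readily illustrated by simulation. In the sketch below – with an assumed true mean and a known standard deviation, both made up – roughly 95% of the intervals constructed across repeated samples cover the true value; the “95%” belongs to the procedure, not to any single interval.

```python
# Simulation of confidence interval coverage. The true mean and sigma
# are assumed (made up); the point is the long-run coverage frequency.
import math
import random
import statistics

TRUE_MEAN, SIGMA, N, TRIALS = 10.0, 2.0, 50, 10_000
random.seed(1)

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.mean(sample)
    half_width = 1.96 * SIGMA / math.sqrt(N)   # known-sigma 95% interval
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(f"coverage across {TRIALS} samples: {covered / TRIALS:.3f}")  # ~0.95
```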

To realize how wrong the Petitioner’s brief is, consider the following example.  The observed relative risk is 10, but it is not statistically significant on a two-tailed test of significance, with α set at 0.05.  Suppose further that the two-sided 95% confidence interval around the observed relative risk is (0.9 to 18).  Matrixx Initiatives asserts:

“If the lower end of the interval dips below 1.0—the point at which the observed risk of an adverse event matches the background incidence rate—then the result is not statistically significant, because it is equally probable that the actual rate of adverse events following product use is identical to (or even less than) the background incidence rate.”

The Petitioner would thus have the Court believe that, given the example of a relative risk of 10 with the CI noted above, the result should be interpreted to mean that it is equally probable that the true value is 1.0 or less.  This is statistical silliness.

I have collected below some statements about the CI from well-known statisticians, as an aid to avoiding the sort of distortions of statistical concepts that we see in the Matrixx briefs.


“It would be more useful to the thoughtful reader to acknowledge the great differences that exist among the p-values corresponding to the parameter values that lie within a confidence interval …”

Charles Poole, “Confidence Intervals Exclude Nothing,” 77 Am. J. Pub. Health 492, 493 (1987)
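Poole’s point can be made concrete with made-up numbers. Suppose an estimate of 2.0 with a standard error of 0.5, so that the 95% interval runs from about 1.02 to 2.98; the two-sided p-values for parameter values inside that single interval range from marginal to maximal:

```python
# Two-sided p-values for parameter values lying inside one 95% CI.
# The estimate and standard error are made up for illustration.
from statistics import NormalDist

estimate, se = 2.0, 0.5
norm = NormalDist()

for value in (1.05, 1.5, 2.0, 2.5, 2.95):
    z = (estimate - value) / se
    p = 2 * (1 - norm.cdf(abs(z)))
    print(f"parameter {value:.2f}: two-sided p = {p:.2f}")
# p runs from ~0.06 near the interval's edges to 1.00 at the point
# estimate -- hardly "equally compatible" values.
```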

“Nevertheless, the difference between population means is much more likely to be near to the middle of the confidence interval than towards the extremes. Although the confidence interval is wide, the best estimate of the population difference is 6.0 mm Hg, the difference between the sample means.

* * *

“The two extremes of a confidence interval are sometimes presented as confidence limits. However, the word “limits” suggests that there is no going beyond and may be misunderstood because, of course, the population value will not always lie within the confidence interval. Moreover, there is a danger that one or other of the “limits” will be quoted in isolation from the rest of the results, with misleading consequences. For example, concentrating only on the upper figure and ignoring the rest of the confidence interval would misrepresent the finding by exaggerating the study difference. Conversely, quoting only the lower limit would incorrectly underestimate the difference. The confidence interval is thus preferable because it focuses on the range of values.”

Martin Gardner & Douglas Altman, “Confidence intervals rather than P values: estimation rather than hypothesis testing,” 292 Brit. Med. J. 746, 748 (1986)

“The main purpose of confidence intervals is to indicate the (im)precision of the sample study estimates as population values. Consider the following points for example: a difference of 20% between the percentages improving in two groups of 80 patients having treatments A and B was reported, with a 95% confidence interval of 6% to 34%. Firstly, a possible difference in treatment effectiveness of less than 6% or of more than 34% is not excluded by such values being outside the confidence interval – they are simply less likely than those inside the confidence interval. Secondly, the middle half of the confidence interval (13% to 27%) is more likely to contain the population value than the extreme two quarters (6% to 13% and 27% to 34%) – in fact the middle half forms a 67% confidence interval. Thirdly, regardless of the width of the confidence interval, the sample estimate is the best indicator of the population value – in this case a 20% difference in treatment response.”

Martin Gardner & Douglas Altman, “Estimating with confidence,” 296 Brit. Med. J. 1210 (1988)
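Gardner and Altman’s arithmetic can be checked directly from the figures they quote: the 95% interval implies a standard error of about 7.14, and the middle half of the interval then corresponds to roughly a 67% confidence interval.

```python
# Verifying Gardner & Altman's example: a 20% difference with a 95% CI
# of 6% to 34% implies SE = (34 - 6) / (2 * 1.96); the middle half of
# the interval (13% to 27%) then forms roughly a 67% CI.
from statistics import NormalDist

norm = NormalDist()
estimate, lower, upper = 20.0, 6.0, 34.0

se = (upper - lower) / (2 * 1.96)    # ~7.14
z_half = (estimate - 13.0) / se      # half-width of the middle half, in SEs
confidence = 2 * norm.cdf(z_half) - 1

print(f"implied SE = {se:.2f}")
print(f"middle half (13% to 27%) = {confidence:.0%} confidence interval")
```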

“Although a single confidence interval can be much more informative than a single P-value, it is subject to the misinterpretation that values inside the interval are equally compatible with the data, and all values outside it are equally incompatible.”

“A given confidence interval is only one of an infinite number of ranges nested within one another. Points nearer the center of these ranges are more compatible with the data than points farther away from the center.”

Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, Modern Epidemiology 158 (3d ed. 2008)

“A popular interpretation of a confidence interval is that it provides values for the unknown population proportion that are ‘compatible’ with the observed data.  But we must be careful not to fall into the trap of assuming that each value in the interval is equally compatible.”

Nicholas P. Jewell, Statistics for Epidemiology 23 (2004)

The Matrixx Oversold

April 4th, 2011

“Now their view is the rule of law: Statistical significance is neither necessary nor sufficient for proving a commercial or scientific result.”

Statistics Experts

The perverse rhetorical distortions of the Matrixx case have begun.  The quote above, from the website of one of the amicus brief authors, will probably not be the last distortion or perversion of scientific method, or of the holding of Matrixx Initiatives, Inc. v. Siracusano, 2011 WL 977060 (U.S. Mar. 22, 2011).  Still, the distortion of the holding raises some interesting questions about who these would-be friends of the Court are, and why they would misrepresent the case in a way that any first-year law student would recognize as incorrect.  What is the agenda of these authors?

I had never heard of Deirdre N. McCloskey or Stephen T. Ziliak before the Matrixx case.  After the decision was delivered on March 22, 2011, I started to look at the amicus briefs.  McCloskey and Ziliak filed one such brief, on behalf of the respondents.  Their brief was styled “Brief of Amici Curiae Statistics Experts Professors Deirdre N. McCloskey and Stephen T. Ziliak in Support of Respondents.”  The more I considered this amicus brief, the more troubling I found it, both procedurally and substantively.

1. No statistical organization (such as the American Statistical Association) joined this amicus brief, and none of the many statistician-lawyers who frequently contribute amicus briefs on quantitative issues was associated with their effort.  This was the first peculiarity of the McCloskey-Ziliak brief, which attracted my attention only after the Supreme Court issued its opinion in the Matrixx case.

2. The second remarkable fact about these amici is that they are not statisticians or statistics professors, despite titling their brief as that of “statistics experts.”  According to his website, Stephen T. Ziliak is a professor of economics at Roosevelt University (Chicago); his doctorate was in economics.  Deirdre N. McCloskey is a professor of economics, history, English, and communication, at the University of Illinois (Chicago).  Of course, this is not to say that these professors lack expertise in statistics; both have written on the history of statistics.  But the title of their brief seems a bit misleading.  Why would they not say that they were economists?  I, for one, found this ruse peculiarly misleading for a brief filed in our highest Court.

3. The third curious fact is the incestuous nature of the brief’s authorship.  McCloskey was Ziliak’s doctoral supervisor. Again, there is nothing wrong with a mentor and his or her student joining together in a project such as this, but the pairing suggests an intellectual inbreeding, which was, well, peculiar in that no one else with putative substantive expertise was involved in the amicus brief.

4.  Some of the McCloskey-Ziliak brief is unexceptional exposition about the meaning of Type I and Type II errors, and hypothesis testing.  The Supreme Court really did not need this information, which could readily be found in the Federal Judicial Center’s Reference Manual on Scientific Evidence.  Some of the brief, however, is peculiarly tendentious nonsense, which I will explore in follow-up posts.

5. The Supreme Court, in its opinion, did not dignify this amicus brief with a citation, but the amici nonetheless appear to have a delusionally inflated view of their influence.  Now there is nothing at all peculiar about such delusions in academia.  A short trip to Ziliak’s and McCloskey’s websites revealed many references to their efforts on the brief, including their (inflated) assessment of their influence. McCloskey’s website goes further, with what appears to be a press release, in which she claims, without citation or support, that “their book and some of their articles did affect the case.”

6. The press release ends with the harrumphing, noted above:

“Now their [McCloskey and Ziliak’s] view is the rule of law: ‘Statistical significance is neither necessary nor sufficient for proving a commercial or scientific result.’”

This statement, of course, is not the rule of law; nor is it the holding of the case.  The statement is so clearly wrong that the reader has to wonder about the authors’ academic pretenses, qualifications, and claimed disinterest in the proceedings.  Rhetorical excess is no stranger in the halls of academia, but our learned professors appear to have jumped the rhetorical shark.

This amicus brief certainly got my attention, and it raises serious questions about who files amicus briefs, and whether they distort the appellate process.  In a follow-up post, I will look at some of the substantive opinions put forward by McCloskey and Ziliak.  Like the curious presentation of their credentials, the misleading assessment of their own influence, and the erroneous conclusion about the Matrixx holding, the substantive claims and statements in their amicus brief are dubious.  Those claims are worth exploring as a road map to how other irresponsible advocates may use and misuse the Matrixx.

Matrixx Unloaded

March 29th, 2011

In writing for a unanimous Court in Matrixx Initiatives, Inc. v. Siracusano, Justice Sotomayor wandered far afield from the world of pleading rules to flyblow the world of expert witness jurisprudence.  How and why did this happen?  Why did Matrixx invoke the concept of statistical significance to counter case reports of adverse events? Did Matrixx oversell its scientific position, thereby handing Justice Sotomayor an opportunity to unravel decades of evolution of law on the admissibility of expert witness opinion testimony?  Inquiring minds want to know.

Still, whatever the occasion for the obiter dicta, the Court’s pronouncements on expert witnesses are stunning for their irrelevance and questionable scholarship:

“We note that courts frequently permit expert testimony on causation based on evidence other than statistical significance. See, e.g., Best v. Lowe’s Home Centers, Inc., 563 F. 3d 171, 178 (6th Cir 2009); Westberry v. Gislaved Gummi AB, 178 F. 3d 257, 263–264 (4th Cir. 1999) (citing cases); Wells v. Ortho Pharmaceutical Corp., 788 F. 2d 741, 744–745 (11th Cir. 1986). We need not consider whether the expert testimony was properly admitted in those cases, and we do not attempt to define here what constitutes reliable evidence of causation.”

Id. at 12.  What is remarkable about this passage is that the first two cases cited involved differential etiology or diagnosis to assess specific causation, not general causation.  As most courts have recognized, this assessment strategy requires that general causation has already been established. See, e.g., Hall v. Baxter Healthcare, 947 F. Supp. 1387 (D. Ore. 1996).

The citation to the third case, Wells, is noteworthy because the case has nothing to do with adverse event reports or statistical significance.  Wells involved a claim of birth defects caused by the use of a spermicidal contraceptive jelly, which had been the subject of several studies, at least one of which yielded a statistically significant increase in detected birth defects over what was expected.  Wells v. Ortho Pharmaceutical Corp., 615 F. Supp. 262 (N.D. Ga. 1985), aff’d and rev’d in part on other grounds, 788 F.2d 741 (11th Cir.), cert. denied, 479 U.S. 950 (1986).  Wells could thus hardly be an example of a case in which there was a judgment of causation based upon a scientific study that lacked statistical significance in its findings. Of course, finding statistical significance is just the beginning of assessing the causality of an association; Wells was notorious for its poor assessment of all the determinants of scientific causation.

The citation to Wells is thus remarkable because the Wells decision was rightly and widely criticized for its failure to evaluate the entire evidentiary display, as well as for its failure to rule out bias and confounding in the studies relied upon by the plaintiff.  See, e.g., James L. Mills and Duane Alexander, “Teratogens and ‘Litogens’,” 315 New Engl. J. Med. 1234 (1986); Samuel R. Gross, “Expert Evidence,” 1991 Wis. L. Rev. 1113, 1121-24 (1991) (“Unfortunately, Judge Shoob’s decision is absolutely wrong. There is no scientifically credible evidence that Ortho-Gynol Contraceptive Jelly ever causes birth defects.”). See also Editorial, “Federal Judges v. Science,” N.Y. Times, December 27, 1986, at A22 (unsigned editorial); David E. Bernstein, “Junk Science in the Courtroom,” Wall St. J. at A15 (Mar. 24, 1993) (pointing to Wells as a prominent example of how the federal judiciary had embarrassed the American judicial system with its careless, non-evidence-based approach to scientific evidence). A few years later, another case in the same judicial district, against the same defendant, for the same product, resulted in the grant of summary judgment.  Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561 (N.D. Ga. 1991) (supposedly distinguishing Wells on the basis of more recent studies).

Perhaps the most remarkable aspect of the Court’s citation to Wells is that the case, and all it stands for, was overruled sub silentio by the Supreme Court’s own decisions in Daubert, Joiner, Kumho Tire, and Weisgram.  And if that did not kill the concept, then there was the simple matter of a supervening enactment: the 2000 amendment of Rule 702 of the Federal Rules of Evidence.

Citing a case as jurisprudentially dead and discredited as Wells could have been sloppy scholarship and lawyering.  The principle of charity, however, suggests it was purposeful, and that is a frightful prospect.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.