TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Systematic Reviews and Meta-Analyses in Litigation, Part 2

February 11th, 2016

Daubert in Knee’d

In a recent federal court case, adjudicating a plaintiff’s Rule 702 challenge to defense expert witnesses, the trial judge considered plaintiff’s claim that the challenged witness had deviated from the PRISMA guidelines[1] for systematic reviews, and thus presumably had deviated from the standard of care required of expert witnesses giving opinions about causal conclusions.

Batty v. Zimmer, Inc., MDL No. 2272, Master Docket No. 11 C 5468, No. 12 C 6279, 2015 WL 5050214 (N.D. Ill. Aug. 25, 2015) [cited as Batty I]. The trial judge, the Hon. Rebecca R. Pallmeyer, denied plaintiff’s motion to exclude the allegedly deviant witness, but appeared to accept the premise of the plaintiff’s argument that an expert witness’s opinion should be reached in the manner of a carefully constructed systematic review.[2] The trial court’s careful review of the challenged witness’s report and deposition testimony revealed that there had been no meaningful departure from the standards put forward for systematic reviews. See “Systematic Reviews and Meta-Analyses in Litigation” (Feb. 5, 2016).

Two days later, the same federal judge addressed a different set of objections by the same plaintiff to two other expert witnesses proffered by the defendant, Zimmer, Inc.: Dr. Stuart Goodman and Dr. Timothy Wright. Batty v. Zimmer, Inc., MDL No. 2272, Master Docket No. 11 C 5468, No. 12 C 6279, 2015 WL 5095727 (N.D. Ill. Aug. 27, 2015) [cited as Batty II]. Once again, plaintiff Batty argued for the necessity of adherence to systematic review principles. According to Batty, Dr. Wright’s opinion, based upon his review of the clinical literature, was scientifically and legally unreliable because he had not conducted a proper systematic review. Plaintiff alleged that Dr. Wright’s review selectively “cherry picked” favorable studies to buttress his opinion, in violation of systematic review guidelines. The trial court, which had assumed that a systematic review was the appropriate “methodology” for Dr. Vitale, in Batty I, refused to sustain the plaintiff’s challenge in Batty II, in large part because the challenged witness, Dr. Wright, had not claimed to have performed a systematic or comprehensive review, and so his failure to follow the standard methodology did not require the exclusion of his opinion at trial. Batty II at *3.

The plaintiff never argued that Dr. Wright misinterpreted any of the selected studies upon which he relied, and the trial judge thus suggested that Dr. Wright’s discussion of the studies, even if a partial, selected group of studies, would be helpful to the jury. The trial court left the plaintiff to her cross-examination to highlight Dr. Wright’s selectivity and lack of comprehensiveness. Apparently, in the trial judge’s view, this expert witness’s failure to address contrary studies did not render his testimony unreliable under “Daubert scrutiny.” Batty II at *3.

Of course, it is no longer the Daubert judicial decision that mandates scrutiny of expert witness opinion testimony, but Federal Rule of Evidence 702. Perhaps it was telling that when the trial court backed away from its assumption, made in Batty I, that guidelines or standards for systematic reviews should inform a Rule 702 analysis, the court cited Daubert, a judicial opinion superseded by an Act of Congress, in 2000. The trial judge’s approach, in Batty II, threatens to make gatekeeping meaningless by deferring to the expert witness’s invocation of personal, idiosyncratic, non-scientific standards. Furthermore, the Batty II approach threatens to eviscerate gatekeeping for clinical practitioners who remain blithely unaware of advances in epidemiology and evidence-based medicine. The upshot of Batty I and Batty II combined seems to be that systematic review principles apply to clinical expert witnesses only if those witnesses choose to be bound by such principles. If this is indeed what the trial court intended, then it is jurisprudential nonsense.

The trial court, in Batty II, exercised a more searching approach, however, to Dr. Wright’s own implant failure analysis, which he relied upon in an attempt to rebut plaintiff’s claim of defective design. The plaintiff claimed that the load-bearing polymer surfaces of the artificial knee implant experienced undue deformation. Dr. Wright’s study found little or no deformation on the load bearing polymer surfaces of the eight retrieved artificial joints. Batty II at *4.

Dr. Wright assessed deformation qualitatively, not quantitatively, through the use of a “colorimetric map of deformation” of the polymer surface. Dr. Wright, however, provided no scale to define or assess how much deformation was represented by the different colors in his study. Notwithstanding the lack of any metric, Dr. Wright concluded that his findings, based upon eight retrieved implants, “suggested” that the kind of surface failure claimed by plaintiff was a “rare event.”

The trial court had little difficulty in concluding that Dr. Wright’s evidentiary base was insufficient, as was his presentation of the study’s data and inferences. The challenged witness failed to explain how his conclusions followed from his data, and thus his proffered testimony fell into the “ipse dixit” category of inadmissible opinion testimony. General Electric Co. v. Joiner, 522 U.S. 136, 146 (1997). In the face of the challenge to his opinions, Dr. Wright supplemented his retrieval study with additional scans of surficial implant wear patterns, but he failed again to show the similarity of previous use and failure conditions between the patients from whom these implants were retrieved and the plaintiff’s case (which supposedly involved aseptic loosening). Furthermore, Dr. Wright’s interpretation of his own retrieval study was inadequate in the trial court’s view because he had failed to rule out other modes of implant failure, in which the polyethylene surface would have been preserved. Because, even as supplemented, Dr. Wright’s study failed to support his proffered opinions, the court held that his opinions, based upon his retrieval study, had to be excluded under Rule 702. The trial court did not address the Rule 703 implications of Dr. Wright’s reliance upon a study that was poorly designed and explained, and which lacked the ability to support his contention that the claimed mode of implant failure was a “rare” event. Batty II at *4-5.


[1] See David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, & The PRISMA Group, “Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement,” 6 PLoS Med e1000097 (2009) [PRISMA].

[2] Batty v. Zimmer, Inc., MDL No. 2272, Master Docket No. 11 C 5468, No. 12 C 6279, 2015 WL 5050214 (N.D. Ill. Aug. 25, 2015).

Systematic Reviews and Meta-Analyses in Litigation

February 5th, 2016

Kathy Batty is a bellwether plaintiff in a multi-district litigation[1] (MDL) against Zimmer, Inc., in which hundreds of plaintiffs claim that the femoral and tibial elements of Zimmer’s NexGen Flex implants are prone to premature aseptic loosening (loosening independent of any infection). Batty v. Zimmer, Inc., MDL No. 2272, Master Docket No. 11 C 5468, No. 12 C 6279, 2015 WL 5050214 (N.D. Ill. Aug. 25, 2015) [cited as Batty].

PRISMA Guidelines for Systematic Reviews

Zimmer proffered Dr. Michael G. Vitale, an orthopedic surgeon, with a master’s degree in public health, to testify that, in his opinion, Batty’s causal claims were unfounded. Batty at *4. Dr. Vitale prepared a Rule 26 report that presented a formal, systematic review of the pertinent literature. Batty at *3. Plaintiff Batty challenged the admissibility of Dr. Vitale’s opinion on grounds that his purportedly “formal systematic literature review,” done for litigation, was biased and unreliable, and not conducted according to generally accepted principles for such reviews. The challenge was framed, cleverly, in terms of Dr. Vitale’s failure to comply with a published set of principles outlined in the “PRISMA” guidelines (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which enjoy widespread general acceptance among the clinical journals. See David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, & The PRISMA Group, “Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement,” 6 PLoS Med e1000097 (2009) [PRISMA]. Batty at *5. The trial judge, Hon. Rebecca R. Pallmeyer, denied plaintiff’s motion to exclude Dr. Vitale, but in doing so accepted, arguendo, the plaintiff’s implicit premise that an expert witness’s opinion should be reached in the manner of a carefully constructed systematic review.

The plaintiff’s invocation of the PRISMA guidelines presented several difficult problems for her challenge and for the court. PRISMA provides a checklist of 27 items for journal editors to assess the quality and completeness of systematic reviews that are submitted for publication. Plaintiff Batty focused on several claimed deviations from the guidelines:

  • “failing to explicitly state his study question,
  • failing to acknowledge the limitations of his review,
  • failing to present his findings graphically, and
  • failing to reproduce his search results.”

Batty’s challenge to Dr. Vitale thus turned on whether Zimmer’s expert witness had failed to deploy the “same level of intellectual rigor” as someone in the world of clinical medicine would [should] have in conducting a similar systematic review. Batty at *6.

Zimmer deflected the challenge, in part by arguing that PRISMA’s guidelines are for the reporting of systematic reviews, and are not necessarily criteria for valid reviews. The trial court accepted this rebuttal, Batty at *7, but missed the point that some of the guidelines call for methods that are essential for rigorous, systematic reviews, in any forum, and do not merely specify “publishability.” To be sure, PRISMA itself does not always distinguish between what is essential for journal publication, as opposed to what is needed for a sufficiently valid systematic review. The guidelines, for instance, call for graphical displays, but in litigation, charts, graphs, and other demonstratives are often not produced until the eve of trial, when case management orders call for the parties to exchange such materials. In any event, Dr. Vitale’s omission of graphical representations of his findings was consistent with his finding that the studies were too clinically heterogeneous in study design, follow-up time, and pre-specified outcomes to permit nice, graphical summaries. Batty at *7-8.

Similarly, the PRISMA guidelines call for a careful specification of the clinical question to be answered, but in litigation, the plaintiff’s causal claims frame the issue to be addressed by the defense expert witness’s literature review. The trial court readily found that Dr. Vitale’s research question was easily discerned from the context of his report in the particular litigation. Batty at *7.

Plaintiff Batty’s challenge pointed to Dr. Vitale’s failure to acknowledge explicitly the limitations of his systematic review, an omission that virtually defines expert witness reports in litigation. Given the availability of discovery tools, such as a deposition of Dr. Vitale (at which he readily conceded the limitations of his review), and the right of confrontation and cross-examination (which are not available, alas, for published articles), the trial court found that this alleged deviation was not particularly relevant to the plaintiff’s Rule 702 challenge. Batty at *8.

Batty further charged that Dr. Vitale had not “reproduced” his own systematic review. Arguing that a systematic review’s results must be “transparent and reproducible,” Batty claimed that Zimmer’s expert witness’s failure to compile a list of studies that were originally retrieved from his literature search deprived her, and the trial court, of the ability to determine whether the search was complete and unbiased. Batty at *8. Dr. Vitale’s search protocol and inclusionary and exclusionary criteria were, however, stated, explained, and reproducible, even though Dr. Vitale did not explain the application of his criteria to each individual published paper. In the final analysis, the trial court was unmoved by Batty’s critique, especially given that her expert witnesses failed to identify any relevant studies omitted from Dr. Vitale’s review. Batty at *8.

Lumping or Commingling of Heterogeneous Studies

The plaintiff pointed to Dr. Vitale’s “commingling” of studies, heterogeneous in terms of “study length, follow-up, size, design, power, outcome, range of motion, component type” and other clinical features, as a deep flaw in the challenged expert witness’s methodology. Batty at *9. Batty’s own retained expert witness, Dr. Kocher, supported Batty’s charge by adverting to the clinical variability in studies included in Dr. Vitale’s review, and suggesting that “[h]igh levels of heterogeneity preclude combining study results and making conclusions based on combining studies.” Dr. Kocher’s argument was rather beside the point because Dr. Vitale had not impermissibly combined clinically or statistically heterogeneous outcomes.[2] Similarly, the plaintiff’s complaint that Dr. Vitale had used inconsistent criteria of knee implant survival rates was dismissed by the trial court, which easily found Dr. Vitale’s survival criteria both pre-specified and consistent across his review of studies, and relevant to the specific failure alleged by Ms. Batty. Batty at *9.
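Heterogeneity of the kind Dr. Kocher invoked is not merely rhetorical; epidemiologists routinely quantify it before deciding whether pooling is appropriate. As a minimal sketch of the standard approach, the Python snippet below computes Cochran’s Q and Higgins’ I² for a handful of invented study results; the numbers are illustrative only and are not drawn from the Batty record.

```python
# Hypothetical log relative risks and standard errors for five studies;
# invented numbers, not taken from the Batty record.
log_rr = [0.10, 0.85, -0.20, 0.55, 0.05]
se = [0.20, 0.30, 0.25, 0.35, 0.15]

weights = [1 / s ** 2 for s in se]  # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_rr))
df = len(log_rr) - 1
i_squared = max(0.0, (q - df) / q) * 100  # Higgins' I-squared, in percent

print(f"pooled log RR = {pooled:.3f}")
print(f"Q = {q:.2f} on {df} df; I^2 = {i_squared:.0f}%")
# A high I^2 (conventionally above roughly 50-75%) signals that a single
# pooled estimate may be inappropriate for clinically heterogeneous studies.
```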

Cherry Picking

The trial court readily agreed with Plaintiff’s premise that an expert witness who used inconsistent inclusionary and exclusionary criteria would have to be excluded under Rule 702. Batty at *10, citing In re Zoloft, 26 F. Supp. 3d 449, 460–61 (E.D. Pa. 2014) (excluding epidemiologist Dr. Anick Bérard’s proffered testimony because of her biased cherry picking and selection of studies to support her opinions, and her failure to account for contradictory evidence). The trial court, however, did not find that Dr. Vitale’s review was corrupted by the kind of biased cherry picking that Judge Rufe found to have been committed by Dr. Anick Bérard, in the Zoloft MDL.

Duplicitous Duplication

Plaintiff’s challenge of Dr. Vitale did manage to spotlight an error in Dr. Vitale’s inclusion of two studies that were duplicate analyses of the same cohort. Apparently, Dr. Vitale had mistaken the two papers as reporting on different cohorts because they reported different sample sizes. Dr. Vitale admitted that his double counting of the same cohort “got by the peer-review process and it got by my filter as well.” Batty at *11, citing Vitale Dep. 284:3–12. The trial court judged Dr. Vitale’s error to have been:

“an inadvertent oversight, not an attempt to distort the data. It is also easily correctable by removing one of the studies from the Group 1 analysis so that instead of 28 out of 35 studies reporting 100% survival rates, only 27 out of 34 do so.”

Batty at *11.

The error of double counting studies in quantitative reviews and meta-analyses has become a prevalent problem in both published studies[3] and in litigation reports. Epidemiologic studies are sometimes updated and extended with additional follow up. The prohibition against double counting data is so obvious that it often is not even identified on checklists, such as PRISMA. Furthermore, double counting of studies, or subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advanced training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies.[4] Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Senn urges analysts to “[b]e vigilant about double counting,” and recommends that journals withdraw meta-analyses promptly when mistakes are found.[5]
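The statistical reason double counting is pernicious is easy to demonstrate. Below is a minimal sketch, with invented numbers, of fixed-effect, inverse-variance pooling; including the same cohort twice, as when an original report and its later update are both retained, spuriously narrows the pooled confidence interval and overstates the precision of the summary estimate.

```python
import math

def pooled(log_rrs, ses):
    """Fixed-effect, inverse-variance pooled log relative risk and 95% CI."""
    w = [1 / s ** 2 for s in ses]
    est = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Invented studies: (log relative risk, standard error).
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.18)]
print("correct pooling:     ", pooled(*zip(*studies)))

# Double count the first cohort, as when an original report and its later
# update are both included: the pooled interval narrows spuriously.
duplicated = studies + [studies[0]]
print("with double counting:", pooled(*zip(*duplicated)))
```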

An expert witness who wished to skate over the replication and consistency requirement might be tempted, as was Dr. Michael Freeman, to count the earlier and later iterations of the same basic study as “replication.” Proper methodology, however, prohibits such double dipping of data, which treats the later study that subsumes the earlier one as a “replication”:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology. Dr. Freeman claims the Alwan and Reefhuis studies demonstrate replication. However, the population Alwan studied is only a subset of the Reefhuis population and therefore they are effectively the same.”

Porter v. SmithKline Beecham Corp., No. 03275, 2015 WL 5970639, at *9 (Phila. Cty. Ct. C.P., Pa., Oct. 5, 2015) (Mark I. Bernstein, J.).

Conclusions

The PRISMA and similar guidelines do not necessarily map the requisites of admissible expert witness opinion testimony, but they are a source of some important considerations for the validity of any conclusion about causality. On the other hand, by specifying the requisites of a good publication, some PRISMA guidelines are irrelevant to litigation reports and testimony of expert witnesses. Although Plaintiff Batty’s challenge overreached and failed, the premise of her challenge is noteworthy, as is the trial court’s having taken the premise seriously. Ultimately, the challenge to Dr. Vitale’s opinion failed because the specified PRISMA guidelines, supposedly violated, were either irrelevant or satisfied.


[1] Zimmer NexGen Knee Implant Products Liability Litigation.

[2] Dr. Vitale’s review is thus easily distinguished from what has become commonplace in litigation of birth defect claims, where, for instance, some well-known statisticians [names available upon request] have conducted qualitative reviews and quantitative meta-analyses of highly disparate outcomes, such as any and all cardiovascular congenital anomalies. In one such case, a statistician expert witness hired by plaintiffs presented a meta-analysis that included study results for any nervous system defect, any central nervous system defect, and any neural tube defect, without any consideration of clinical heterogeneity or even overlap among study results.

[3] See, e.g., Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar, and Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006); Anick Bérard, Noha Iessa, Sonia Chaabane, Flory T. Muanda, Takoua Boukhris, and Jin-Ping Zhao, “The risk of major cardiac malformations associated with paroxetine use during the first trimester of pregnancy: A systematic review and meta-analysis,” 81 Brit. J. Clin. Pharmacol. (2016), in press, available at doi: 10.1111/bcp.12849.

[4] Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10, at *1 (2009).

[5] Id. at *1, *4.


DOUBLE-DIP APPENDIX

Some papers and textbooks, in addition to Stephen Senn’s paper cited above, note the impermissible method of double counting data or studies in quantitative reviews.

Aaron Blair, Jeanne Burg, Jeffrey Foran, Herman Gibb, Sander Greenland, Robert Morris, Gerhard Raabe, David Savitz, Jane Teta, Dan Wartenberg, Otto Wong, and Rae Zimmerman, “Guidelines for Application of Meta-analysis in Environmental Epidemiology,” 22 Regulatory Toxicol. & Pharmacol. 189, 190 (1995).

“II. Desirable and Undesirable Attributes of Meta-Analysis

* * *

Redundant information: When more than one study has been conducted on the same cohort, the later or updated version should be included and the earlier study excluded, provided that later versions supply adequate information for the meta-analysis. Exclusion of, or in rare cases, carefully adjusting for overlapping or duplicated studies will prevent overweighting of the results by one study. This is a critical issue where the same cohort is reexamined or updated several times. Where duplication exists, decision criteria should be developed to determine which of the studies are to be included and which excluded.”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added)

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap.

* * *”

Matthias Egger, George Davey Smith, and Douglas G. Altman, Systematic Reviews in Health Care: Meta-Analysis in Context 59 – 60 (2001).

Duplicate (multiple) publication bias

***

“The production of multiple publications from single studies can lead to bias in a number of ways.85 Most importantly, studies with significant results are more likely to lead to multiple publications and presentations,45 which makes it more likely that they will be located and included in a meta-analysis. The inclusion of duplicated data may therefore lead to overestimation of treatment effects, as recently demonstrated for trials of the efficacy of ondansetron to prevent postoperative nausea and vomiting.86”

Khalid Khan, Regina Kunz, Joseph Kleijnen, and Gerd Antes, Systematic Reviews to Support Evidence-Based Medicine: How to Review and Apply Findings of Healthcare Research 35 (2d ed. 2011)

“2.3.5 Selecting studies with duplicate publication

Reviewers often encounter multiple publications of the same study. Sometimes these will be exact duplications, but at other times they might be serial publications with the more recent papers reporting increasing numbers of participants or lengths of follow-up. Inclusion of duplicated data would inevitably bias the data synthesis in the review, particularly because studies with more positive results are more likely to be duplicated. However, the examination of multiple reports of the same study may provide useful information about its quality and other characteristics not captured by a single report. Therefore, all such reports should be examined. However, the data should only be counted once using the largest, most complete report with the longest follow-up.”

Julia H. Littell, Jacqueline Corcoran, and Vijayan Pillai, Systematic Reviews and Meta-Analysis 62-63 (2008)

Duplicate and Multiple Reports

***

It is a bit more difficult to identify multiple reports that emanate from a single study. Sometimes these reports will have the same authors, sample sizes, program descriptions, and methodological details. However, author lines and sample sizes may vary, especially when there are reports on subsamples taken from the original study (e.g., preliminary results or special reports). Care must be taken to ensure that we know which reports are based on the same samples or on overlapping samples—in meta-analysis these should be considered multiple reports from a single study. When there are multiple reports on a single study, we put all of the citations for that study together in summary information on the study.”

Kay Dickersin, “Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm,” Chapter 2, in Hannah R. Rothstein, Alexander J. Sutton & Michael Borenstein, Publication Bias in Meta-Analysis – Prevention, Assessment and Adjustments 11, 26 (2005)

“Positive results appear to be published more often in duplicate, which can lead to overestimates of a treatment effect (Timmer et al., 2002).”

Julian P.T. Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008)

“7.2.2 Identifying multiple reports from the same study

Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramer 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”

Lipitor MDL Takes The Fat Out Of Dose Extrapolations

December 2nd, 2015

Philippus Aureolus Theophrastus Bombastus von Hohenheim thankfully went by the simple moniker Paracelsus, sort of the Cher of the 1500s. Paracelsus’ astrological research is graciously overlooked today, but his 16th century dictum, in the German vernacular, has created a lasting impression on linguistic conventions and toxicology:

“Alle Ding’ sind Gift, und nichts ohn’ Gift; allein die Dosis macht, dass ein Ding kein Gift ist.”

(All things are poison and nothing is without poison, only the dose permits something not to be poisonous.)

or more simply

“Die Dosis macht das Gift.”

Paracelsus, “Die dritte Defension wegen des Schreibens der neuen Rezepte,” Septem Defensiones (1538), in 2 Werke 510 (Darmstadt 1965). Today, his notion that the “dose is the poison” is a basic principle of modern toxicology,[1] which can be found in virtually every textbook on the subject.[2]

Paracelsus’ dictum has also permeated the juridical world, and become a commonplace in legal commentary and judicial decisions. The Reference Manual on Scientific Evidence is replete with supportive statements on the general acceptance of Paracelsus’ dictum. The chapter on epidemiology notes:

“The idea that the ‘dose makes the poison’ is a central tenet of toxicology and attributed to Paracelsus, in the sixteenth century… [T]his dictum reflects only the idea that there is a safe dose below which an agent does not cause any toxic effect.”

Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 603 & n.160, in Reference Manual on Scientific Evidence (3d ed. 2011). Citing an unpublished, non-scientific advocacy piece written for a regulatory agency, the chapter does, however, claim that “[t]he question whether there is a no-effect threshold dose is a controversial one in a variety of toxic substances areas.”[3] The epidemiology chapter thus appears to confuse two logically distinct propositions: that there is no threshold dose and that there is no demonstrated threshold dose.

The Reference Manual’s chapter on toxicology also weighs in on Paracelsus:

“There are three central tenets of toxicology. First, “the dose makes the poison”; this implies that all chemical agents are intrinsically hazardous—whether they cause harm is only a question of dose. Even water, if consumed in large quantities, can be toxic.”

Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” 633, 636, in Reference Manual on Scientific Evidence (3d ed. 2011) (internal citations omitted).

Recently, Judge Richard Mark Gergel had the opportunity to explore the relevance of dose-response to plaintiffs’ claims that atorvastatin causes diabetes. In re Lipitor (Atorvastatin Calcium) Marketing, Sales Practices & Prod. Liab. Litig., MDL No. 2:14–mn–02502–RMG, Case Mgmt. Order 49, 2015 WL 6941132 (D.S.C. Oct. 22, 2015) [Lipitor]. Plaintiffs’ expert witnesses insisted that they could disregard dose once they had concluded that there was a causal association between atorvastatin at some dose and diabetes. On Rule 702 challenges to plaintiffs’ expert witnesses, the court held that, when there is a dose-response relationship and there is an absence of association at low doses, then plaintiffs must show, through expert witness testimony, that the medication is capable of causing the alleged harm at particular doses. The court permitted the plaintiffs’ expert witnesses to submit supplemental reports to address the dose issue, and the defendants to relodge their Rule 702 challenge after discovery on the new reports. Lipitor at *6.

The Lipitor court’s holding built upon Judge Breyer’s treatment of dose in In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1174-75 (N.D. Cal. 2007). Judge Breyer, Justice Breyer’s kid brother, denied defendants’ Rule 702 challenges to plaintiffs’ expert witnesses who opined that Bextra and Celebrex can cause heart attacks and strokes, at 400 mg/day. For plaintiffs who ingested 200 mg/day, however, Judge Breyer held that the lower dose had to be analyzed separately, and he granted the motions to exclude plaintiffs’ expert witnesses’ opinions about the alleged harms caused by the lower dose. Lipitor at *1-2. The plaintiffs’ expert witnesses reached their causation opinions about 200 mg by cherry picking from the observational studies, and disregarding the randomized trials and meta-analyses of observational studies that failed to find an association between 200 mg/day and cardiovascular risk. Id. at *2. Given the lack of support for an association at 200 mg/day, the court rejected the plaintiffs’ speculative downward extrapolation.

Because of dose-response gradients, and the potential for a threshold, a risk estimate based upon greater doses or exposure does not apply to a person exposed at lower doses or exposure levels. See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 613, in Reference Manual on Scientific Evidence (3d ed. 2011) (“[A] risk estimate from a study that involved a greater exposure is not applicable to an individual exposed to a lower dose.”).

In the Lipitor case, as in the Celebrex case, multiple studies reported no statistically significant associations between the lower doses and the claimed adverse outcome. This absence, combined with a putative dose-response relationship, made plaintiffs’ downward extrapolation impermissibly speculative. See, e.g., McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1241 (11th Cir. 2005) (reversing admission of expert witness’s testimony when the witness conceded a dose-response, but failed to address the dose of the medication needed to cause the claimed harm).
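A toy model makes the point concrete. The sketch below posits a purely hypothetical threshold dose-response; none of the parameters comes from the Lipitor record or any actual study. Relative risk is elevated at the high dose yet unity at lower doses, so carrying the high-dose risk estimate downward would rest on assumption rather than evidence.

```python
# Purely hypothetical threshold dose-response model; no parameter below
# comes from the Lipitor record or any actual study.
BACKGROUND = 0.01  # baseline risk without exposure
THRESHOLD = 40.0   # hypothetical no-effect threshold (mg/day)
SLOPE = 0.0005     # excess risk per mg/day above the threshold

def risk(dose_mg: float) -> float:
    """Absolute risk at a given daily dose."""
    return BACKGROUND + max(0.0, dose_mg - THRESHOLD) * SLOPE

for dose in (10, 20, 40, 80):
    rr = risk(dose) / BACKGROUND
    print(f"{dose:>3} mg/day: risk = {risk(dose):.4f}, relative risk = {rr:.1f}")

# 80 mg/day shows a relative risk of 3.0, while 10-40 mg/day all show 1.0;
# extrapolating the high-dose relative risk downward would be assumption,
# not evidence.
```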

Courts sometimes confuse thresholds with dose-response relationships. The concepts are logically independent. There can be a dose-response relationship with or without a threshold. And there can be an absence of a dose-response relationship with a threshold, as in cases in which the effect is binary: positive at or above some threshold level, and negative below. A causal claim can go awry because it ignores the possible existence of a threshold, or the existence of a dose-response relationship. The latter error is commonplace in litigation and regulatory contexts, when scientific or legal advocates attempt to evaluate risk using data that are based upon higher exposures or doses. The late Irving Selikoff, no stranger to exaggerated claims, warned against this basic error, when he wrote that his asbestos insulator cancer data were inapposite for describing risks of other tradesmen:

“These particular figures apply to the particular groups of asbestos workers in this study. The net synergistic effect would not have been the same if their smoking habits had been different; and it probably would have been different if their lapsed time from first exposure to asbestos dust had been different or if the amount of asbestos dust they had inhaled had been different.”

E. Cuyler Hammond, Irving J. Selikoff, and Herbert Seidman, “Asbestos Exposure, Cigarette Smoking and Death Rates,” 330 Ann. N.Y. Acad. Sci. 473, 487 (1979).[4] Given that the dose-response between asbestos exposure and disease outcomes was an important tenet of Selikoff’s work, it is demonstrably incorrect for expert witnesses to invoke relative risks for heavily exposed asbestos insulators and apply them to less exposed workers, as though the risks were the same and there were no thresholds.

The legal insufficiency of equating high and low dose risk assessments has been noted by many courts. In Texas Independent Ginners Ass’n v. Marshall, 630 F.2d 398 (5th Cir. 1980), the Fifth Circuit reviewed an OSHA regulation promulgated to protect cotton gin operators from the dangers of byssinosis. OSHA based its risk assessments on cotton dust exposures experienced by workers in the fabric manufacturing industry, but the group of workers to be regulated had intermittent exposures, at different levels from those of the workers in the relied-upon studies. Because of the exposure-level disconnect, the Court of Appeals struck the OSHA regulation. Id. at 409. OSHA’s extrapolation from high to low doses was based upon an assumption, not evidence, and the regulation could not survive even the deferential standard required for judicial review of federal agency action. Id.[5]

The fallacy of “extrapolation down” often turns on the glib assumption that an individual claimant must have experienced a “causative exposure” because he has the disease that can result at some, higher level of exposure. Reasoning backwards from untoward outcome to sufficient dose, when dose is at issue, is a petitio principii, as recognized by several astute judges:

“The fallacy of the ‘extrapolation down’ argument is plainly illustrated by common sense and common experience. Large amounts of alcohol can intoxicate, larger amounts can kill; a very small amount, however, can do neither. Large amounts of nitroglycerine or arsenic can injure, larger amounts can kill; small amounts, however, are medicinal. Great volumes of water may be harmful, greater volumes or an extended absence of water can be lethal; moderate amounts of water, however, are healthful. In short, the poison is in the dose.”

In re Toxic Substances Cases, No. A.D. 03-319, No. GD 02-018135, 05-010028, 05-004662, 04-010451, 2006 WL 2404008, at *6-7 (Allegheny Cty. Ct. C.P. Aug. 17, 2006) (Colville, J.) (“Drs. Maddox and Laman attempt to ‘extrapolate down,’ reasoning that if high dose exposure is bad for you, then surely low dose exposure (indeed, no matter how low) must still be bad for you.”) (“simple logical error”), rev’d sub nom. Betz v. Pneumo Abex LLC, 998 A.2d 962 (Pa. Super. 2010), rev’d, 615 Pa. 504, 44 A.3d 27 (2012).

Exposure Quantification

An obvious corollary of the fallacy of downward extrapolation is that claimants must have a reasonable estimate of their dose or exposure in order to place themselves on the dose-response curve, to estimate in turn what their level of risk was before they developed the claimed harm. For example, in Mateer v. U.S. Aluminum Co., 1989 U.S. Dist. LEXIS 6323 (E.D. Pa. 1989), the court, applying Pennsylvania law, dismissed plaintiffs’ claim for personal injuries in a ground-water contamination case. Although the plaintiffs had proffered sufficient evidence of contamination, their expert witnesses failed to quantify the plaintiffs’ actual exposures. Without an estimate of the claimants’ actual exposure, the challenged expert witnesses could not give reliable, reasonably based opinions and conclusions about whether plaintiffs were injured by the alleged exposures. Id. at *9-11.[6]

Science and law are simpatico; dose or exposure matters, in pharmaceutical, occupational, and environmental cases.


[1] Joseph F. Borzelleca, “Paracelsus: Herald of Modern Toxicology,” 53 Toxicol. Sci. 2 (2000); David L. Eaton, “Scientific Judgment and Toxic Torts – A Primer in Toxicology for Judges and Lawyers,” 12 J.L. & Pol’y 5, 15 (2003); Ellen K. Silbergeld, “The Role of Toxicology in Causation: A Scientific Perspective,” 1 Cts. Health Sci. & L. 374, 378 (1991). Of course, the claims of endocrine disruption have challenged the generally accepted principle. See, e.g., Dan Fagin, “Toxicology: The learning curve,” Nature (24 October 2012) (misrepresenting Paracelsus’ dictum as meaning that dose responses will be predictably linear).

[2] See, e.g., Curtis D. Klaassen, “Principles of Toxicology and Treatment of Poisoning,” in Goodman and Gilman’s The Pharmacological Basis of Therapeutics 1739 (11th ed. 2008); Michael A. Gallo, “History and Scope of Toxicology,” in Curtis D. Klaassen, ed., Casarett and Doull’s Toxicology: The Basic Science of Poisons 1, 4–5 (7th ed. 2008).

[3] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 603 & n.160, in Reference Manual on Scientific Evidence (3d ed. 2011) (citing Irving J. Selikoff, Disability Compensation for Asbestos-Associated Disease in the United States: Report to the U.S. Department of Labor 181–220 (1981)). The chapter also cites two judicial decisions that clearly were influenced by advocacy science and regulatory assumptions. Ferebee v. Chevron Chemical Co., 736 F.2d 1529, 1536 (D.C. Cir. 1984) (commenting that low exposure effects are “one of the most sharply contested questions currently being debated in the medical community”); In re TMI Litig. Consol. Proc., 927 F. Supp. 834, 844–45 (M.D. Pa. 1996) (considering extrapolations from high radiation exposure to low exposure for inferences of causality).

[4] See also United States v. Reserve Mining Co., 380 F. Supp. 11, 52-53 (D. Minn. 1974) (questioning the appropriateness of comparing community asbestos exposures to occupational and industrial exposures). Risk assessment modesty was uncharacteristic of Irving Selikoff, who used insulator risk figures, which were biased high, to issue risk projections for total predicted asbestos-related mortality.

[5] See also In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223, 1250 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988) (noting that the plaintiffs’ expert witnesses relied upon studies that involved heavier exposures than those experienced by plaintiffs; the failure to address the claimants’ actual exposures rendered the witnesses’ proposed testimony legally irrelevant); Gulf South Insulation v. United States Consumer Products Safety Comm’n, 701 F.2d 1137, 1148 (5th Cir. 1983) (invalidating CPSC’s regulatory ban on urea formaldehyde foam insulation, as not supported by substantial evidence, when the agency based its ban upon high-exposure level studies and failed to quantify putative risks at actual exposure levels; criticizing extrapolations from high to low doses); Graham v. Wyeth Laboratories, 906 F.2d 1399, 1415 (10th Cir.) (holding that trial court abused its discretion in failing to grant new trial upon a later-discovered finding that plaintiff’s expert misstated the level of toxicity of defendant’s DTP vaccine by an order of magnitude), cert. denied, 111 S.Ct. 511 (1990).

Two dubious decisions that fail to acknowledge the fallacy of extrapolating down from high-exposure risk data have come out of the Fourth Circuit. See City of Greenville v. W.R. Grace & Co., 827 F.2d 975 (4th Cir. 1987) (affirming judgment based upon expert testimony that identified risk at low levels of asbestos exposure based upon studies at high levels of exposure); Smith v. Wyeth-Ayerst Labs Co., 278 F. Supp. 2d 684, 695 (W.D.N.C. 2003)(suggesting that expert witnesses may extrapolate down to lower doses, and even to extrapolate to different time window of latency).

[6] See also Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1111, 1113-14 (5th Cir. 1991) (en banc) (per curiam) (trial court may exclude opinion of expert witness whose opinion is based upon incomplete or inaccurate exposure data), cert. denied, 112 S. Ct. 1280 (1992); Wills v. Amerada Hess Corp., 2002 WL 140542, at *10 (S.D.N.Y. Jan. 31, 2002) (noting that the plaintiff’s expert witness failed to quantify the decedent’s exposure, but was nevertheless “ready to form a conclusion first, without any basis, and then try to justify it” by claiming that the decedent’s development of cancer was itself sufficient evidence that he had had intensive exposure to the alleged carcinogen).

Vaccine Court Inoculated Against Pathological Science

October 25th, 2015

Richard I. Kelley, M.D., Ph.D., is the Director of the Division of Metabolism, Kennedy Krieger Institute, and a member of the Department of Pediatrics at Johns Hopkins University. The National Library of Medicine’s PubMed database shows that Dr. Kelley has written dozens of articles on mitochondrial disease, but none that concludes that thimerosal or the measles, mumps, and rubella vaccine plays a causal role in autism by inducing or aggravating mitochondrial disease. In one article, Kelley opines:

“Large, population-based studies will be needed to identify a possible relationship of vaccination with autistic regression in persons with mitochondrial cytopathies.”

Jacqueline R. Weissman, Richard I. Kelley, Margaret L. Bauman, Bruce H. Cohen, Katherine F. Murray, Rebecca L. Mitchell, Rebecca L. Kern, and Marvin R. Natowicz, “Mitochondrial Disease in Autism Spectrum Disorder Patients: A Cohort Analysis,” 3 PLoS One e3815 (Nov. 26, 2008). The large-scale, population-based studies needed to support the speculation of Kelley and his colleagues have not materialized since 2008, and meta-analyses and systematic reviews have dampened the enthusiasm for Kelley’s hypothesis.[1]

Special Master Denise K. Vowell, in the United States Court of Federal Claims, has now further dampened the enthusiasm for Dr. Kelley’s mitochondrial theories, in a 115-page opinion, written in support of rejecting Kelley’s testimony and theories that the MMR vaccine caused a child’s autism. Madariaga v. Sec’y Dep’t H.H.S., No. 02-1237V (Fed. Cl. Spec. Mstr. Sept. 26, 2015), slip op. [cited as Madariaga].

Special Master Vowell recounts at length the history of vaccine litigation, in which the plaintiffs have presented theories that the combination of thimerosal-containing vaccines and the MMR vaccine causes autism, or that thimerosal-containing vaccines alone cause autism. Madariaga at 3. Both theories were tested in the crucible of litigation and cross-examination in a series of test cases. The first theory resulted in verdicts against the claimants, which were affirmed on appeal.[2] Similarly, the trials on the thimerosal-only claims uniformly resulted in decisions from the Special Masters against the claims.[3] The three Special Masters, hearing the cases, found that the vaccine-causation claims were not close cases, and were based upon unreliable evidence.[4] Madariaga at 4.[5]

In Madariaga, Special Master Vowell noted that Doctor Kelley had conceded the “absence of an association between the MMR vaccine and autism in large epidemiological studies.” Madariaga at 61. Kelley attempted to evade the force of this lack of evidence by retreating into a claim that “autistic regressions caused by the live attenuated MMR vaccine are rare events,” and an assertion that there are many inflammatory factors that can induce autistic regression. Madariaga at 61.

The Special Master described the whole of Kelley’s testimony as “meandering, confusing, and completely unpersuasive elaboration of his unique insights and methods.” Madariaga at 66. Although it is clear from the Special Master’s opinion that Kelley was unbridled in his over-interpretation of studies, and perhaps undisciplined in his interpretation of test results, the lengthy opinion provides only a high-altitude view of Kelley’s errors. There are tantalizing comments and notes in the Special Master’s decision, such as one reporting that a study may have been over-interpreted by Kelley because he ignored the authors’ comment that their findings could be consistent with chance, given their multiple comparisons, and another noting a paper that failed to show statistical significance. Madariaga at 90 & n.160.

The unreliability of Kelley’s testimony went beyond hand waving in the absence of evidence. He compared the child’s results against reference values for a four-hour fasting test, when the child had not fasted for four hours. When pressed about this maneuver, Kelley claimed that he had made calculations to bring the child’s results “back to some standard.” Madariaga at 66 & n.115.

Although the Special Master’s opinion itself was ultimately persuasive, the tome left me eager to know more about Dr. Kelley’s epistemic screw-ups, and less about vaccine court procedure.


[1] See Vittorio Demicheli, Alessandro Rivetti, Maria Grazia Debalini, and Carlo Di Pietrantonj, “Vaccines for measles, mumps and rubella in children,” Cochrane Database Syst. Rev., Issue 2. Art. No. CD004407, DOI:10.1002/14651858.CD004407.pub3 (2012) (“Exposure to the MMR vaccine was unlikely to be associated with autism … .”); Luke E. Taylor, Amy L. Swerdfeger, and Guy D. Eslick, “Vaccines are not associated with autism: An evidence-based meta-analysis of case-control and cohort studies,” 32 Vaccine 3623 (2014) (“Findings of this meta-analysis suggest that vaccinations are not associated with the development of autism or autism spectrum disorder. Furthermore, the components of the vaccines (thimerosal or mercury) or multiple vaccines (MMR) are not associated with the development of autism or autism spectrum disorder.”).

[2] Cedillo v. Sec’y, HHS, No. 98-916V, 2009 WL 331968 (Fed. Cl. Spec. Mstr. Feb. 12, 2009), aff’d, 89 Fed. Cl. 158 (2009), aff’d, 617 F.3d 1328 (Fed. Cir. 2010); Hazlehurst v. Sec’y, HHS, No. 03-654V, 2009 WL 332306 (Fed. Cl. Spec. Mstr. Feb. 12, 2009), aff’d, 88 Fed. Cl. 473 (2009), aff’d, 604 F.3d 1343 (Fed. Cir. 2010); Snyder v. Sec’y, HHS, No. 01-162V, 2009 WL 332044 (Fed. Cl. Spec. Mstr. Feb. 12, 2009), aff’d, 88 Fed. Cl. 706 (2009).

[3] Dwyer v. Sec’y, HHS, 2010 WL 892250; King v. Sec’y, HHS, No. 03-584V, 2010 WL 892296 (Fed. Cl. Spec. Mstr. Mar. 12, 2010); Mead v. Sec’y, HHS, 2010 WL 892248.

[4] See, e.g., King, 2010 WL 892296, at *90 (emphasis in original); Snyder, 2009 WL 332044, at *198.

[5] The Federal Rules of Evidence technically do not control the vaccine court proceedings, but the Special Masters are bound by the requirement of Daubert v. Merrell Dow Pharm., 509 U.S. 579, 590 (1993), to find that expert witness opinion testimony is reliable before they consider it. Knudsen v. Sec’y, HHS, 35 F.3d 543, 548-49 (Fed. Cir. 1994). Madariaga at 7.

Demonstration of Frye Gatekeeping in Pennsylvania Birth Defects Case

October 6th, 2015

Michael D. Freeman is a chiropractor and self-styled “forensic epidemiologist,” affiliated with the Departments of Public Health & Preventive Medicine and Psychiatry, Oregon Health & Science University School of Medicine, in Portland, Oregon. His C.V. can be found here. Freeman has an interesting publication in press on his views of forensic epidemiology. Michael D. Freeman & Maurice Zeegers, “Principles and applications of forensic epidemiology in the medico-legal setting,” Law, Probability and Risk (2015); doi:10.1093/lpr/mgv010. Freeman’s views on epidemiology did not, however, pass muster in the courtroom. Porter v. SmithKline Beecham Corp., Phila. Cty. Ct. C.P., Sept. Term 2007, No. 03275, slip op. (Oct. 5, 2015).

In Porter, plaintiffs sued Pfizer, the manufacturer of the SSRI antidepressant Zoloft. Plaintiffs claimed the mother plaintiff’s use of Zoloft during pregnancy caused her child to be born with omphalocele, a serious defect that occurs when the child’s intestines develop outside his body. Pfizer moved to exclude plaintiffs’ medical causation expert witnesses, Dr. Cabrera and Dr. Freeman. The trial judge was the Hon. Mark I. Bernstein, who has written and presented frequently on expert witness evidence.[1] Judge Bernstein held a two day hearing in September 2015, and last week, His Honor ruled that the plaintiffs’ expert witnesses failed to meet Pennsylvania’s Frye standard for admissibility. Judge Bernstein’s opinion reads a bit like a Berenstain Bear book on how not to use epidemiology.

GENERAL CAUSATION SCREW UPS

Proper Epidemiologic Method

First, Find An Association

Dr. Freeman has a methodologic map that includes the Bradford Hill criteria at the back end of the procedure. Dr. Freeman, however, impetuously forgot that before you get to the back end, you must traverse the front end:

“Dr. Freemen agrees that he must, and claims he has, applied the Bradford Hill Criteria to support his opinion. However, the starting procedure of any Bradford-Hill analysis is ‘an association between two variables’ that is ‘perfectly clear-cut and beyond what we would care to attribute to the play of chance’.35 Dr. Freeman testified that generally accepted methodology requires a determination, first, that there’s evidence of an association and, second, whether chance, bias and confounding have been accounted for, before application of the Bradford-Hill criteria.36 Because no such association has been properly demonstrated, the Bradford Hill criteria could not have been properly applied.”

Slip op. at 12-13. In other words, don’t go rushing to the Bradford Hill factors until and unless you have first shown an association; second, you have shown that it is “clear cut,” and not likely the result of bias or confounding; and third, you have ruled out the play of chance or random variability in explaining the difference between the observed and expected rates of disease.

Proper epidemiologic method requires surveying the pertinent published studies that investigate whether there is an association between use of the medication and the claimed harm. The expert witnesses must, however, do more than write a bibliography; they must assess any putative associations for “chance, confounding or bias”:

“Proper epidemiological methodology begins with published study results which demonstrate an association between a drug and an unfortunate effect. Once an association has been found, a judgment as [to] whether a real causal relationship between exposure to a drug and a particular birth defect really exists must be made. This judgment requires a critical analysis of the relevant literature applying proper epidemiologic principles and methods. It must be determined whether the observed results are due to a real association or merely the result of chance. Appropriate scientific studies must be analyzed for the possibility that the apparent associations were the result of chance, confounding or bias. It must also be considered whether the results have been replicated.”

Slip op. at 7.

Then Rule Out Chance

So if there is something that appears to be an association in a study, the expert epidemiologist must assess whether it is likely consistent with a chance association. If we flip a fair coin 10 times, we “expect” 5 heads and 5 tails, but actually the probability of not getting the expected result is roughly three times the probability of obtaining it. If on one series of 10 tosses we obtain 6 heads and 4 tails, we would certainly not reject a starting assumption that the expected outcome was 5 heads/5 tails. Indeed, the probability of obtaining 6 heads/4 tails or 4 heads/6 tails is about two-thirds greater than the probability of obtaining the expected outcome of equal numbers of heads and tails.
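The binomial arithmetic is easy to verify; here is a minimal sketch in Python:

```python
from math import comb  # Python 3.8+

# Exact binomial probabilities for 10 tosses of a fair coin.
p = lambda heads: comb(10, heads) / 2 ** 10

print(f"P(exactly 5 heads) = {p(5):.3f}")         # ~0.246
print(f"P(not 5/5)         = {1 - p(5):.3f}")     # ~0.754, roughly 3x P(5/5)
print(f"P(6/4 or 4/6)      = {p(6) + p(4):.3f}")  # ~0.410, about 2/3 greater than P(5/5)
```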

As it turned out in the Porter case, Dr. Freeman relied rather heavily upon one study, the Louik study, for his claim that Zoloft causes the birth defect in question. See Carol Louik, Angela E. Lin, Martha M. Werler, Sonia Hernández-Díaz, and Allen A. Mitchell, “First-Trimester Use of Selective Serotonin-Reuptake Inhibitors and the Risk of Birth Defects,” 356 New Engl. J. Med. 2675 (2007). The authors of the Louik study were quite clear that they were not able to rule out chance as a sufficient explanation for the observed data in their study:

“The previously unreported associations we identified warrant particularly cautious interpretation. In the absence of preexisting hypotheses and the presence of multiple comparisons, distinguishing random variation from true elevations in risk is difficult. Despite the large size of our study overall, we had limited numbers to evaluate associations between rare outcomes and rare exposures. We included results based on small numbers of exposed subjects in order to allow other researchers to compare their observations with ours, but we caution that these estimates should not be interpreted as strong evidence of increased risks.24

Slip op. at 10 (quoting from the Louik study).

Judge Bernstein thus criticized Freeman for failing to account for chance in explaining his putative association between maternal Zoloft use and infant omphalocele. The appropriate and generally accepted methodology for accomplishing this step of evaluating a putative association is to consider whether the association is statistically significant at the conventional level.

In relying heavily upon the Louik study, Dr. Freeman opened himself up to serious methodological criticism. Judge Bernstein’s opinion stands for the important proposition that courts should not be unduly impressed with nominal statistical significance in the presence of multiple comparisons and very broad confidence intervals:

“The Louik study is the only study to report a statistically significant association between Zoloft and omphalocele. Louik’s confidence interval which ranges between 1.6 and 20.7 is exceptionally broad. … The Louik study had only 3 exposed subjects who developed omphalocele thus limiting its statistical power. Studies that rely on a very small number of cases can present a random statistically unstable clustering pattern that may not replicate the reality of a larger population. The Louik authors were unable to rule out confounding or chance. The results have never been replicated concerning omphalocele. Dr. Freeman’s testimony does not explain, or seemingly even consider these serious limitations.”

Slip op. at 8. Assessing the statistical precision of the point estimate of risk, including whether the authors conducted multiple comparisons and whether the observed confidence intervals were very broad, is part of the generally accepted epidemiologic methodology, which Freeman flouted:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”

Slip op. at 9. The studies that Freeman cited and apparently relied upon failed to report statistically significant associations between sertraline (Zoloft) and omphalocele. Judge Bernstein found this lack to be a serious problem for Freeman and his epidemiologic opinion:

“While non-significant results can be of some use, despite a multitude of subsequent studies which isolated omphalocele, there is no study which replicates or supports Dr. Freeman’s conclusions.”

Slip op. at 10. The lack of statistical significance, in the context of repeated attempts to find it, helped sink Freeman’s proffered testimony.
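The sparse-data problem the court identified in the Louik study is also easy to demonstrate. The sketch below applies the standard Woolf (log) approximation for an odds-ratio confidence interval to an invented 2×2 table with only three exposed cases; the counts are hypothetical, chosen only to show how so few exposed cases produce an interval spanning an order of magnitude, much like the 1.6-to-20.7 interval the court found exceptionally broad.

```python
import math

def woolf_or_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI by the standard Woolf (log) approximation.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_hat) + s * z * se) for s in (-1, 1))
    return or_hat, lo, hi

# Invented 2x2 counts with only three exposed cases; the 1/a term dominates
# the variance, so the interval spans an order of magnitude.
or_hat, lo, hi = woolf_or_ci(a=3, b=97, c=30, d=4970)
print(f"OR = {or_hat:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```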

Then Rule Out Bias and Confounding

As noted, Freeman relied heavily upon the Louik study, which was the only study to report a nominally statistically significant risk ratio for maternal Zoloft use and infant omphalocele. The Louik study, by its design, however, could not exclude chance or confounding as a full explanation for the apparent association, and Judge Bernstein chastised Dr. Freeman for overselling the study as support for the plaintiffs’ causal claim:

“The Louik authors were unable to rule out confounding or chance. The results have never been replicated concerning omphalocele. Dr. Freeman’s testimony does not explain, or seemingly even consider these serious limitations.”

Slip op. at 8.

And Only Then Consider the Bradford Hill Factors

Even when an association is clear cut, and beyond what we can likely attribute to chance, generally accepted methodology requires the epidemiologist to consider the Bradford Hill factors before reaching a conclusion of causation. As Judge Bernstein explained:

“As the Bradford-Hill factors are properly considered, causality becomes a matter of the epidemiologist’s professional judgment.”

Slip op. at 7.

Consistency or Replication

The nine Hill factors are well known to lawyers because they have been stated and discussed extensively in Hill’s original article, and in references such as the Reference Manual on Scientific Evidence. Not all the Hill factors are equally important, or important at all, but one important factor is consistency or concordance of results among the available epidemiologic studies. Stated alternatively, a clear cut association unlikely to be explained by chance is certainly interesting and probative, but it raises an important methodological question: can the result be replicated? Judge Bernstein restated this important Hill factor as an important determinant of whether a challenged expert witness employed a generally accepted method:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”

Slip op. at 10.

“More significantly neither Reefhuis nor Alwan reported statistically significant associations between Zoloft and omphalocele. While non-significant results can be of some use, despite a multitude of subsequent studies which isolated omphalocele, there is no study which replicates or supports Dr. Freeman’s conclusions.”

Slip op. at 10.

Replication But Without Double Dipping the Data

Epidemiologic studies are sometimes updated and extended with additional follow up. An expert witness who wished to skate over the replication and consistency requirement might be tempted, as was Dr. Freeman, to treat the earlier and later iterations of the same basic study as “replication.” The Louik study was indeed updated and extended this year in a published paper by Jennita Reefhuis and colleagues.[2] Proper methodology, however, prohibits double dipping the data by counting a later study that subsumes the earlier one as a “replication”:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology. Dr. Freeman claims the Alwan and Reefhuis studies demonstrate replication. However, the population Alwan studied is only a subset of the Reefhuis population and therefore they are effectively the same.”

Slip op. at 10.

The Lumping Fallacy

Analyzing the health outcome of interest at the right level of specificity can sometimes be a puzzle and a challenge, but Freeman generally got it wrong by opportunistically “lumping” disparate outcomes together when doing so helped him reach a result that he liked. Judge Bernstein admonished:

“Proper methodology further requires that one not fall victim to the … the ‘Lumping Fallacy’. … Different birth defects should not be grouped together unless they a part of the same body system, share a common pathogenesis or there is a specific valid justification or necessity for an association20 and chance, bias, and confounding have been eliminated.”

Slip op. at 7. Dr. Freeman lumped a lot, but Judge Bernstein saw through the methodological ruse, pointing out:

“Dr. Freeman’s analysis improperly conflates three types of data: Zoloft and omphalocele, SSRI’s generally and omphalocele, and SSRI’s and gastrointestinal and abdominal malformations.”

Slip op. at 8. Freeman’s approach, which sadly is seen frequently in pharmaceutical and other products liability cases, is methodologically improper:

“Generally accepted causation criteria must be based on the data applicable to the specific birth defect at issue. Dr. Freeman improperly lumps together disparate birth defects.”

Slip op. at 11.

Class Effect Fallacy

Another kind of improper lumping results from treating all SSRI antidepressants as interchangeable, either to lump them together, or to pick and choose, from among all the SSRIs, the data points supportive of the plaintiffs’ claims (while ignoring the SSRI data points not supportive of those claims). To be sure, the SSRI antidepressants do form a “class,” in that they all have a similar pharmacologic effect. The SSRIs, however, do not all achieve their effect in serotonergic neurons in the same way; nor do they all have the same “off-target” effects. Treating all the SSRIs as interchangeable for a claimed adverse effect, without independent support for this treatment, is known as the class effect fallacy. In Judge Bernstein’s words:

“Proper methodology further requires that one not fall victim to the ‘Class Effect Fallacy’ … . A class effect cannot be assumed. The causation conclusion must be drug specific.”

Slip op. at 7. Dr. Freeman’s analysis improperly conflated Zoloft data with SSRI data generally. Slip op. at 8. Assuming what you set out to demonstrate is, of course, a fine way to go methodologically into the ditch:

“Without significant independent scientific justification it is contrary to generally accepted methodology to assume the existence of a class effect. Dr. Freeman lumps all SSRI drug results together and assumes a class effect.”

Slip op. at 10.

SPECIFIC CAUSATION SCREW UPS

Dr. Freeman was also offered by plaintiffs to provide a specific causation opinion – that Mrs. Porter’s use of Zoloft in pregnancy caused her child’s omphalocele. Freeman claimed to have performed a differential diagnosis or etiology or something to rule out alternative causes.

Genetics

In the field of birth defects, one possible cause looming in any given case is an inherited or spontaneous genetic mutation. Freeman purported to have considered and ruled out genetic causes, which he acknowledged make up a substantial percentage of all omphalocele cases. Bo Porter, Mrs. Porter’s son, was tested for known genetic causes, and Freeman argued that this testing allowed him to “rule out” genetic causes. But the current state of the art in genetic testing allows for identifying only a small number of possible genetic causes, and Freeman failed to explain how he might have ruled out the as-yet unidentified genetic causes of birth defects:

“Dr. Freeman fails to properly rule out genetic causes. Dr. Freeman opines that 45-49% of omphalocele cases are due to genetic factors and that the remaining 50-55% of cases are due to non-genetic factors. Dr. Freeman relies on Bo Porter’s genetic testing which did not identify a specific genetic cause for his injury. However, minor plaintiff has not been tested for all known genetic causes. Unknown genetic causes of course cannot yet be tested. Dr. Freeman has made no analysis at all, only unwarranted assumptions.”

Slip op. at 15-16. Judge Bernstein reviewed Freeman’s attempted analysis and ruling out of potential causes, and found that it departed from the generally accepted methodology in conducting differential etiology. Slip op. at 17.

Timing Errors

One feature of putative teratogenicity is that an embryonic exposure must take place at a specific time in gestational development in order to have its claimed deleterious effect. As Judge Bernstein pointed out, omphalocele results from an incomplete folding of the abdominal wall during the third to fifth weeks of gestation. Mrs. Porter, however, did not begin taking Zoloft until her seventh week of pregnancy, which left Dr. Freeman opinion-less as to how Zoloft contributed to the causation of the minor plaintiff’s birth defect. Slip op. at 14. This aspect of Freeman’s specific causation analysis was glaringly defective, and clearly not the generally accepted methodology for attributing a birth defect to a teratogen.

******************************************************

All in all, Judge Bernstein’s opinion is a tour de force demonstration of how a state court judge, in a so-called Frye jurisdiction, can show that failure to employ generally accepted methods renders an expert witness’s opinions inadmissible. There is one small problem in statistical terminology.

Statistical Power

Judge Bernstein states, at different places, that the Louik study was, and was not, statistically significant for Zoloft and omphalocele. The court’s opinion ultimately does explain that the nominal statistical significance was vitiated by multiple comparisons and an extremely broad confidence interval, which more than justified its statement that the study was not truly statistically significant. At other points, however, for some reason, Judge Bernstein chose to explain the problem with the Louik study as a lack of statistical power:

“Equally significant is the lack of power concerning the omphalocele results. The Louik study had only 3 exposed subjects who developed omphalocele thus limiting its statistical power.”

Slip op. at 8. The adjusted odds ratio for Zoloft and omphalocele was 5.7, with a 95% confidence interval of 1.6 – 20.7. Power was not the issue: if the odds ratio were otherwise credible, free from bias, confounding, and chance, the study had the power to observe an increased risk of close to 500%, which met the pre-stated level of significance. The problems, rather, were multiple testing, fragile and imprecise results, and the inability to evaluate the odds ratio fully for bias and confounding.
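The fragility is visible from the published numbers alone. On the usual Wald approach, the 95% interval for an odds ratio is exp(ln OR ± 1.96 × SE), so the standard error of ln(OR) can be recovered from the reported endpoints; and under Woolf’s approximation the variance of ln(OR) is roughly the sum of the reciprocals of the four cell counts, so a cell with only 3 exposed cases contributes 1/3 all by itself. A minimal sketch (the full Louik 2×2 cell counts are not reproduced here):

```python
import math

# Recover the standard error of ln(OR) from the published 95% interval
# (OR 5.7, 95% CI 1.6 - 20.7), using the usual Wald approximation:
# 95% CI = exp(ln(OR) +/- 1.96 * SE).
or_hat, lo, hi = 5.7, 1.6, 20.7
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"SE of ln(OR) ~= {se:.3f}, variance ~= {se**2:.3f}")

# Woolf's approximation: var(ln OR) ~= 1/a + 1/b + 1/c + 1/d.
# With only a = 3 exposed cases, that single cell contributes 1/3 = 0.333,
# whatever the other three cells turn out to be.
print(f"share of variance from the 3-case cell alone: {(1/3)/se**2:.0%}")
```

On these numbers, the three-case cell alone accounts for roughly three quarters of the recovered variance: an imprecision problem, not a power problem.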


 

[1] Mark I. Bernstein, “Expert Testimony in Pennsylvania,” 68 Temple L. Rev. 699 (1995); Mark I. Bernstein, “Jury Evaluation of Expert Testimony under the Federal Rules,” 7 Drexel L. Rev. 239 (2014-2015).

[2] Jennita Reefhuis, Owen Devine, Jan M Friedman, Carol Louik, Margaret A Honein, “Specific SSRIs and birth defects: bayesian analysis to interpret new data in the context of previous reports,” 351 Brit. Med. J. (2015).

The C-8 (Perfluorooctanoic Acid) Litigation Against DuPont, part 1

September 27th, 2015

The first plaintiff has begun her trial against E.I. Du Pont De Nemours & Company (DuPont), for alleged harm from environmental exposure to perfluorooctanoic acid or its salts (PFOA). Ms. Carla Bartlett is claiming that she developed kidney cancer as a result of drinking water allegedly contaminated with PFOA by DuPont. Nicole Hong, “Chemical-Discharge Case Against DuPont Goes to Trial: Outcome could affect thousands of claims filed by other U.S. residents,” Wall St. J. (Sept. 13, 2015). The case is pending before Chief Judge Edmund A. Sargus, Jr., in the Southern District of Ohio.

PFOA is not classified as a carcinogen in the Integrated Risk Information System (IRIS), of the U.S. Environmental Protection Agency (EPA). In 2005, the EPA Office of Pollution Prevention and Toxics submitted a “Draft Risk Assessment of the Potential Human Health Effects Associated With Exposure to Perfluorooctanoic Acid and Its Salts (PFOA),” which is available at the EPA’s website. The draft report, which is based upon some epidemiology and mostly animal toxicology studies, stated that there was “suggestive evidence of carcinogenicity, but not sufficient to assess human carcinogenic potential.”

In 2013, the Health Council of the Netherlands evaluated the PFOA cancer issue, and found the data unsupportive of a causal conclusion. The Health Council of the Netherlands, “Perfluorooctanoic acid and its salts: Evaluation of the carcinogenicity and genotoxicity” (2013) (“The Committee is of the opinion that the available data on perfluorooctanoic acid and its salts are insufficient to evaluate the carcinogenic properties (category 3)”).

Last year, the World Health Organization (WHO), through its International Agency for Research on Cancer (IARC), reviewed the evidence on the alleged carcinogenicity of PFOA. The IARC, which has fostered much inflation with respect to carcinogenicity evaluations, classified PFOA as only possibly carcinogenic. See News, “Carcinogenicity of perfluorooctanoic acid, tetrafluoroethylene, dichloromethane, 1,2-dichloropropane, and 1,3-propane sultone,” 15 The Lancet Oncology 924 (2014).

Most independent reviews also find the animal and epidemiologic evidence unsupportive of a causal connection between PFOA and any human cancer. See, e.g., Thorsten Stahl, Daniela Mattern, and Hubertus Brunn, “Toxicology of perfluorinated compounds,” 23 Environmental Sciences Europe 38 (2011).

So you might wonder how DuPont lost its Rule 702 challenges in such a case, which it surely did. In re E. I. du Pont de Nemours & Co. C-8 Pers. Injury Litig., Civil Action 2:13-md-2433, 2015 U.S. Dist. LEXIS 98788 (S.D. Ohio July 21, 2015). That is a story for another day.

David Faigman’s Critique of G2i Inferences at Weinstein Symposium

September 25th, 2015

The DePaul Law Review’s 20th Annual Clifford Symposium on Tort Law and Social Policy is an 800-plus page tribute in honor of Judge Jack Weinstein. 64 DePaul L. Rev. (Winter 2015). There are many notable, thought-provoking articles, but my attention was commanded by the contribution on Judge Weinstein’s approach to expert witness opinion evidence. David L. Faigman & Claire Lesikar, “Organized Common Sense: Some Lessons from Judge Jack Weinstein’s Uncommonly Sensible Approach to Expert Evidence,” 64 DePaul L. Rev. 421 (2015) [cited as Faigman].

Professor Faigman praises Judge Jack Weinstein for his substantial contributions to expert witness jurisprudence, while acknowledging that Judge Weinstein has been a sometimes reluctant participant and supporter of judicial gatekeeping of expert witness testimony. Professor Faigman also uses the occasion to restate his own views about the so-called “G2i” problem, the problem of translating general knowledge that pertains to groups to individual cases. In the law of torts, the G2i problem arises from the law’s requirement that plaintiffs show that they were harmed by defendants’ products or environmental exposures. In the context of modern biological “sufficient” causal set principles, this “proof” requirement entails showing that the product or exposure can cause the specified harms in human beings generally (“general causation”), and that the product or exposure actually played a causal role in bringing about plaintiffs’ specific harms (“specific causation”).

Faigman makes the helpful point that courts initially and incorrectly invoked “differential diagnosis,” as the generally accepted methodology for attributing causation. In doing so, the courts extrapolated from the general acceptance of differential diagnosis in the medical community to the courtroom testimony about etiology. The extrapolation often glossed over the methodological weaknesses of the differential approach to etiology. Not until 1995 did a court awaken to the realization that what was being proffered was a “differential etiology,” and not a differential diagnosis. McCullock v. H.B. Fuller Co., 61 F.3d 1038, 1043 (2d Cir. 1995). This realization, however, did not necessarily stimulate the courts’ analytical faculties, and for the most part, they treated the methodology of specific causal attribution as generally accepted and uncontroversial. Faigman’s point that the courts need to pay attention to the methodological challenges to differential etiological analysis is well taken.

Faigman also claims, however, that in advancing “differential etiologies,” expert witnesses were inventing wholesale an approach that had no foundation or acceptance in their scientific disciplines:

 “Differential etiology is ostensibly a scientific methodology, but one not developed by, or even recognized by, physicians or scientists. As described, it is entirely logical, but has no scientific methods or principles underlying it. It is a legal invention and, as such, has analytical heft, but it is entirely bereft of empirical grounding. Courts and commentators have so far merely described the logic of differential etiology; they have yet to define what that methodology is.”

Faigman at 444.[1] Faigman is correct that courts often have left unarticulated exactly what the methodology is, but he does not quite make sense when he writes that the method of differential etiology is “entirely logical,” but has no “scientific methods or principles underlying it.” After all, Faigman starts off his essay with a quotation from Thomas Huxley that “science is nothing but trained and organized common sense.”[2] As I have written elsewhere, the form of reasoning involved in differential diagnosis is nothing other than the iterative disjunctive syllogism.[3] Either-or reasoning occurs throughout the physical and biological sciences; it is not clear why Faigman declares it un- or extra-scientific.
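The logical skeleton is easy to exhibit. Here is a toy sketch of the iterative disjunctive syllogism behind differential reasoning, with purely hypothetical candidate causes; the conclusion is only as good as the completeness of the starting disjunction and the warrant for each elimination, which are precisely the points on which courtroom “differential etiologies” usually founder:

```python
# Toy rendering of the iterative disjunctive syllogism:
# A or B or C or D; not-A; not-B; what remains follows -- but only if the
# starting list was complete and each elimination was warranted.

candidates = {"genetic", "infection", "chemical exposure", "idiopathic"}
ruled_out = {"genetic", "infection"}  # hypothetical eliminations

remaining = candidates - ruled_out
if len(remaining) == 1:
    print("conclusion:", remaining.pop())
else:
    # More than one candidate survives: the syllogism licenses no conclusion.
    print("no conclusion; survivors:", sorted(remaining))
```

In this toy run two candidates survive, including “idiopathic,” so the syllogism yields no conclusion; that is the usual posture of a contested specific causation case.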

The strength of Faigman’s claim about the made-up nature of differential etiology appears to be undermined and contradicted by an example that he provides from clinical allergy and immunology:

“Allergists, for example, attempt to identify the etiology of allergic reactions in order to treat them (or to advise the patient to avoid what caused them), though it might still be possible to treat the allergic reactions without knowing their etiology.”

Faigman at 437. Of course, not only allergists try to determine the cause of an individual patient’s disease. Psychiatrists, in the psychoanalytic tradition, certainly do so as well. Physicians who use predictive regression models use group data, in multivariate analyses, to predict outcomes, risk, and mortality in individual patients. Faigman’s claim is similarly undermined by the existence of a few diseases (other than infectious diseases) that are defined by the causative exposure. Silicosis and manganism have played a large role in often bogus litigation, but they represent instances in which a differential diagnostic puzzle may also be an etiological puzzle. Of course, to the extent that a disease is defined in terms of causative exposures, there may be serious and even intractable problems caused by the lack of specificity and accuracy in the diagnostic criteria for the supposedly pathognomonic disease.

As for whether the concept of “differential etiology” is ever used in the sciences themselves, a few citations for consideration follow.

Kløve & D. Doehring, “MMPI in epileptic groups with differential etiology,” 18 J. Clin. Psychol. 149 (1962)

Kløve & C. Matthews, “Psychometric and adaptive abilities in epilepsy with differential etiology,” 7 Epilepsia 330 (1966)

Teuber & K. Usadel, “Immunosuppression in juvenile diabetes mellitus? Critical viewpoint on the treatment with cyclosporin A with consideration of the differential etiology,” 103 Fortschr. Med. 707 (1985)

G. May & W. May, “Detection of serum IgA antibodies to varicella zoster virus (VZV) – differential etiology of peripheral facial paralysis. A case report,” 74 Laryngorhinootologie 553 (1995)

Alan Roberts, “Psychiatric Comorbidity in White and African-American Illicit Substance Abusers: Evidence for Differential Etiology,” 20 Clinical Psych. Rev. 667 (2000)

Mark E. Mullins, Michael H. Lev, Dawid Schellingerhout, Gilberto Gonzalez, and Pamela W. Schaefer, “Intracranial Hemorrhage Complicating Acute Stroke: How Common Is Hemorrhagic Stroke on Initial Head CT Scan and How Often Is Initial Clinical Diagnosis of Acute Stroke Eventually Confirmed?” 26 Am. J. Neuroradiology 2207 (2005)

Qiang Fu, et al., “Differential Etiology of Posttraumatic Stress Disorder with Conduct Disorder and Major Depression in Male Veterans,” 62 Biological Psychiatry 1088 (2007)

Jesse L. Hawke, et al., “Etiology of reading difficulties as a function of gender and severity,” 20 Reading and Writing 13 (2007)

Mastrangelo, “A rare occupation causing mesothelioma: mechanisms and differential etiology,” 105 Med. Lav. 337 (2014)


[1] See also Faigman at 448 (“courts have invented a methodology – differential etiology – that purports to resolve the G2i problem. Unfortunately, this method has only so far been described; it has not been defined with any precision. For now, it remains a highly ambiguous idea, sound in principle, but profoundly underdefined.”).

[2] Thomas H. Huxley, “On the Educational Value of the Natural History Sciences” (1854), in Lay Sermons, Addresses and Reviews 77 (1915).

[3] See, e.g.,Differential Etiology and Other Courtroom Magic” (June 23, 2014) (collecting cases); “Differential Diagnosis in Milward v. Acuity Specialty Products Group” (Sept. 26, 2013).

Beecher-Monas Proposes to Abandon Common Sense, Science, and Expert Witnesses for Specific Causation

September 11th, 2015

Law reviews are not peer reviewed, not that peer review is a strong guarantor of credibility, accuracy, and truth. Most law reviews have no regular provision for letters to the editor; nor is there a PubPeer that permits readers to point out errors for the benefit of the legal community. Nonetheless, law review articles are cited by lawyers and judges, often at face value, for claims and statements made by article authors. Law review articles are thus a potent source of misleading, erroneous, and mischievous ideas and claims.

Erica Beecher-Monas is a law professor at Wayne State University Law School, or Wayne Law, which considers itself “the premier public-interest law school in the Midwest.” Beware of anyone or any institution that describes itself as working for the public interest. That claim alone should put us on guard about whose interests are being included in, and excluded from, the legitimate “public” interest.

Back in 2006, Professor Beecher-Monas published a book on evaluating scientific evidence in court, which had a few good points in a sea of error and nonsense. See Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process (2006)[1]. More recently, Beecher-Monas has published a law review article, which from its abstract suggests that she might have something to say about this difficult area of the law:

“Scientists and jurists may appear to speak the same language, but they often mean very different things. The use of statistics is basic to scientific endeavors. But judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony. The way scientists understand causal inference in their writings and practice, for example, differs radically from the testimony jurists require to prove causation in court. The result is a disconnect between science as it is practiced and understood by scientists, and its legal use in the courtroom. Nowhere is this more evident than in the language of statistical reasoning.

Unacknowledged difficulties in reasoning from group data to the individual case (in civil cases) and the absence of group data in making assertions about the individual (in criminal cases) beset the courts. Although nominally speaking the same language, scientists and jurists often appear to be in dire need of translators. Since expert testimony has become a mainstay of both civil and criminal litigation, this failure to communicate creates a conundrum in which jurists insist on testimony that experts are not capable of giving, and scientists attempt to conform their testimony to what the courts demand, often well beyond the limits of their expertise.”

Beecher-Monas, “Lost in Translation: Statistical Inference in Court,” 46 Arizona St. L.J. 1057, 1057 (2014) [cited as BM].

A close read of the article shows, however, that Beecher-Monas continues to promulgate misunderstanding, error, and misdirection on statistical and scientific evidence.

Individual or Specific Causation

The key thesis of this law review article is that expert witnesses have no scientific or epistemic warrant upon which to opine about individual or specific causation.

“But what statistics cannot do—nor can the fields employing statistics, like epidemiology and toxicology, and DNA identification, to name a few—is to ascribe individual causation.”

BM at 1057-58.

Beecher-Monas tells us that expert witnesses are quite willing to opine on specific causation, but that they have no scientific or statistical warrant for doing so:

“Statistics is the law of large numbers. It can tell us much about populations. It can tell us, for example, that so-and-so is a member of a group that has a particular chance of developing cancer. It can tell us that exposure to a chemical or drug increases the risk to that group by a certain percentage. What statistics cannot do is tell which exposed person with cancer developed it because of exposure. This creates a conundrum for the courts, because nearly always the legal question is about the individual rather than the group to which the individual belongs.”

BM at 1057. Clinical medicine and science come in for particular chastisement by Beecher-Monas, who acknowledges the medical profession’s legitimate role in diagnosing and treating disease. Physicians use a process of differential diagnosis to arrive at the most likely diagnosis of disease, but the etiology of the disease is not part of their normal practice. Beecher-Monas leaps beyond the generalization that physicians infrequently ascertain specific causation to the sweeping claim that ascertaining the cause of a patient’s disease is beyond the clinician’s competence and scientific justification. Beecher-Monas thus tells us, in apodictic terms, that science has nothing to say about individual or specific causation. BM at 1064, 1075.

In a variety of contexts, but especially in the toxic tort arena, expert witness testimony is not reliable with respect to the inference of specific causation, which, Beecher-Monas writes, usually without qualification, is “unsupported by science.” BM at 1061. The solution for Beecher-Monas is clear. Admitting baseless expert witness testimony is “pernicious” because the whole purpose of having expert witnesses is to help the fact finder, jury or judge, who lack the background understanding and knowledge to assess the data, interpret all the evidence, and evaluate the epistemic warrant for the claims in the case. BM at 1061-62. Beecher-Monas would thus allow the expert witnesses to testify about what they legitimately know, and let the jury draw the inference about which expert witnesses in the field cannot and should not opine. BM at 1101. In other words, Beecher-Monas is perfectly fine with juries and judges guessing their way to a verdict on an issue that science cannot answer. If her book danced around this recommendation, now her law review article has come out into the open, declaring an open season to permit juries and judges to be unfettered in their specific causation judgments. What is touching is that Beecher-Monas is sufficiently committed to gatekeeping of expert witness opinion testimony that she proposes a solution to take a complex area away from expert witnesses altogether rather than confront the reality that there is often simply no good way to connect general and specific causation in a given person.

Causal Pies

Beecher-Monas relies heavily upon Professor Rothman’s notion of causal pies or sets to describe the factors that may combine to bring about a particular outcome. In doing so, she commits a non-sequitur:

“Indeed, epidemiologists speak in terms of causal pies rather than a single cause. It is simply not possible to infer logically whether a specific factor caused a particular illness.”[2]

BM at 1063. But the question under her adopted model of causation is not whether any specific factor was “the” cause, but whether it was one of the multiple slices in the pie. Rothman’s statement, which she cites, that “it is not possible to infer logically whether a specific factor was the cause of an observed event,” does not address the problem that faces factfinders in court cases.

With respect to differential etiology, Beecher-Monas claims that “‘ruling in’ all potential causes cannot be done.” BM at 1075. But why not? While it is true that disease diagnosis is often based upon signs and symptoms, BM at 1076, sometimes physicians are involved in trying to identify causes in individuals. Psychiatrists of course are frequently involved in trying to identify sources of anxiety and depression in their patients. It is not all about putting a DSM-V diagnosis on the chart, and prescribing medication. And there are times when physicians can say quite confidently that a disease has a particular genetic cause, as in a man with a BRCA1 or BRCA2 mutation and breast cancer, or in certain forms of neurodegenerative disease, or in an infant with a clearly genetically determined birth defect.

Beecher-Monas confuses “the” cause with “a” cause, and wanders away from both law and science into her own twilight zone. Here is an example of how Beecher-Monas’ confusion plays out. She asserts that:

“For any individual case of lung cancer, however, smoking is no more important than any of the other component causes, some of which may be unknown.”

BM at 1078. This ignores the magnitude of the risk factor and its likely contribution to a given case. Putting aside synergistic co-exposures, for most lung cancers, smoking is the “but for” cause of individual smokers’ lung cancers. Beecher-Monas sets up a strawman argument by telling us that it is logically impossible to infer “whether a specific factor in a causal pie was the cause of an observed event.” BM at 1079. But we are usually interested in whether a specific factor was “a substantial contributing factor,” without which the disease would not have occurred. This is hardly illogical or impracticable for a given case of mesothelioma in a patient who worked for years in a crocidolite asbestos factory, or for a case of lung cancer in a patient who smoked heavily for many years right up to the time of his lung cancer diagnosis. I doubt that many people would hesitate, on either logical or scientific grounds, to attribute a child’s phocomelia birth defects to his mother’s ingestion of thalidomide during an appropriate gestational window in her pregnancy.

Unhelpfully, Beecher-Monas insists upon playing this word game by telling us that:

“Looking backward from an individual case of lung cancer, in a person exposed to both asbestos and smoking, to try to determine the cause, we cannot separate which factor was primarily responsible.”

BM at 1080. And yet that issue of “primary responsibility” is not found in any jury instruction for causation in any state of the Union, to my knowledge.

From her extreme skepticism, Beecher-Monas swings to the other extreme that asserts that anything that could have been in the causal set or pie was in the causal set:

“Nothing in relative risk analysis, in statistical analysis, nor anything in medical training, permits an inference of specific causation in the individual case. No expert can tell whether a particular exposed individual’s cancer was caused by unknown factors (was idiopathic), linked to a particular gene, or caused by the individual’s chemical exposure. If all three are present, and general causation has been established for the chemical exposure, one can only infer that they all caused the disease.115 Courts demanding that experts make a contrary inference, that one of the factors was the primary cause, are asking to be misled. Experts who have tried to point that out, however, have had a difficult time getting their testimony admitted.”

BM at 1080. There is no support for Beecher-Monas’ extreme statement. She cites, in her footnote 115, Kenneth Rothman’s introductory book on epidemiology, but what he says at the cited page is quite different. Rothman explains that “every component cause that played a role was necessary to the occurrence of that case.” In other words, for every component cause that actually participated in bringing about this case, its presence was necessary to the occurrence of the case. What Rothman clearly does not say is that for a given individual’s case, the fact that a factor can cause a person’s disease means that it must have caused it. In Beecher-Monas’ hypothetical of three factors – idiopathic, particular gene, and chemical exposure – all three, or any two, or only one of the three may have been part of a given individual’s causal set. Beecher-Monas has carelessly or intentionally misrepresented Rothman’s actual discussion.

Physicians and epidemiologists do apply group risk figures to individuals, through the lens of predictive regression equations. The Gail Model for 5 Year Risk of Breast Cancer, for instance, is a predictive equation that comes up with a prediction for an individual patient by refining the subgroup within which the patient fits. Similarly, there are prediction models for heart attack, such as the Risk Assessment Tool for Estimating Your 10-year Risk of Having a Heart Attack. Beecher-Monas might complain that these regression equations still turn on subgroup average risk, but the point is that they can be made increasingly precise as knowledge accumulates. And the regression equations can generate confidence intervals and prediction intervals for the individual’s constellation of risk factors.
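The mechanics are not mysterious. Here is a minimal sketch of how a fitted logistic regression converts an individual’s covariates into an individualized predicted risk; the intercept and coefficients below are invented placeholders, not the Gail model’s parameters or those of any published equation:

```python
import math

# Sketch of individualized risk prediction from group data via logistic
# regression. All coefficients are hypothetical placeholders, NOT the Gail
# model or any published risk equation.
coef = {"intercept": -5.0, "age_per_year": 0.04, "relatives": 0.5, "smoker": 0.7}

def predicted_risk(age, n_affected_relatives, smoker):
    # Linear predictor from the individual's covariates.
    z = (coef["intercept"]
         + coef["age_per_year"] * age
         + coef["relatives"] * n_affected_relatives
         + coef["smoker"] * (1 if smoker else 0))
    return 1 / (1 + math.exp(-z))  # logistic link maps z onto (0, 1)

# Two individuals with different covariate patterns get different predicted
# risks, even though the coefficients were estimated from group data.
print(f"lower-risk profile:  {predicted_risk(45, 0, False):.3f}")
print(f"higher-risk profile: {predicted_risk(65, 2, True):.3f}")
```

The group data supply the coefficients; the individual supplies the covariates; the model returns a risk tailored to that individual’s subgroup, which is precisely the practice Beecher-Monas’ thesis cannot accommodate.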

Significance Probability and Statistical Significance

The discussion of significance probability and significance testing in Beecher-Monas’ book was frequently in error,[3] and this new law review article is not much improved. Beecher-Monas tells us that “judges frequently misunderstand the terminology and reasoning of the statistics used in scientific testimony,” BM at 1057, which is true enough, but this article does little to ameliorate the situation. Beecher-Monas offers the following definition of the p-value:

“The P- value is the probability, assuming the null hypothesis (of no effect) is true (and the study is free of bias) of observing as strong an association as was observed.”

BM at 1064-65. This definition misses that the p-value is a cumulative tail probability, and can be one-sided or two-sided. More seriously in error, however, is the suggestion that the null hypothesis is one of no effect, when it is merely a pre-specified expected value that is the subject of the test. Of course, the null hypothesis is often one of no disparity between the observed and the expected, but the definition should not mislead on this crucial point.
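The tail-probability point is concrete. In a simple binomial example (hypothetical data: 60 successes in 100 trials, under a null hypothesis of one half), the p-value is the cumulative probability of the observed result and everything at least as extreme, not the point probability of the observed result alone, and it differs depending upon whether one or both tails are counted:

```python
from scipy.stats import binom

# Hypothetical example: 60 successes in 100 trials, null hypothesis p = 0.5.
n, k, p0 = 100, 60, 0.5

point_prob = binom.pmf(k, n, p0)        # probability of exactly 60
upper_tail = binom.sf(k - 1, n, p0)     # P(X >= 60): one-sided p-value
two_sided = 2 * upper_tail              # simple symmetric two-sided p-value

print(f"P(X = 60)  = {point_prob:.4f}")  # ~0.011: NOT the p-value
print(f"P(X >= 60) = {upper_tail:.4f}")  # ~0.028: one-sided p
print(f"two-sided  ~ {two_sided:.4f}")   # ~0.057: crosses the 0.05 line
```

The same data are “significant” one-sided and not significant two-sided at the 5% level, which is why a definition that omits the cumulative, tail-based character of the p-value misleads.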

For some reason, Beecher-Monas persists in describing the conventional level of statistical significance as 95%, which substitutes the coefficient of confidence for the complement of the frequently pre-specified p-value for significance. Annoying but decipherable. See, e.g., BM at 1062, 1064, 1065. She misleadingly states that:

“The investigator will thus choose the significance level based on the size of the study, the size of the effect, and the trade-off between Type I (incorrect rejection of the null hypothesis) and Type II (incorrect failure to reject the null hypothesis) errors.”

BM at 1066. While this statement is occasionally true, it mostly is not. A quick review of the last several years of the New England Journal of Medicine will document the error. Invariably, researchers use the conventional level of alpha, at 5%, unless there is multiple testing, such as in a genetic association study.

Beecher-Monas admonishes us that “[u]sing statistical significance as a screening device is thus mistaken on many levels,” citing cases that do not provide support for this proposition.[4] BM at 1066. The Food and Drug Administration’s scientists, who review clinical trials for efficacy and safety, will no doubt be astonished to hear this admonition.

Beecher-Monas argues that courts should not factor statistical significance or confidence intervals into their gatekeeping of expert witnesses, but that they should “admit studies,” and leave it to the lawyers and expert witnesses to explain the strengths and weaknesses of the studies relied upon. BM at 1071. Of course, studies themselves are rarely admitted because they represent many levels of hearsay by unknown declarants. Given Beecher-Monas’ acknowledgment of how poorly judges and lawyers understand statistical significance, this argument is cynical indeed.

Remarkably, Beecher-Monas declares, without citation, that

“the purpose of epidemiologists’ use of statistical concepts like relative risk, confidence intervals, and statistical significance are intended to describe studies, not to weed out the invalid from the valid.”

BM at 1095. She thus excludes by ipse dixit any inferential purposes these statistical tools have. She goes further and gives us a concrete example:

“If the methodology is otherwise sound, small studies that fail to meet a P-level of 5 [sic], say, or have a relative risk of 1.3 for example, or a confidence level that includes 1 at 95% confidence, but relative risk greater than 1 at 90% confidence ought to be admissible. And understanding that statistics in context means that data from many sources need to be considered in the causation assessment means courts should not dismiss non-epidemiological evidence out of hand.”

BM at 1095. Well, again, studies are not admissible; the issue is whether they may be reasonably relied upon, and whether reliance upon them may support an opinion claiming causality. And a “P-level” of 5 is, well, let us hope, a serious typographical error. Beecher-Monas’ advice is especially misleading when there is only one study, or only one study in a constellation of exonerative studies. See, e.g., In re Accutane, No. 271(MCL), 2015 WL 753674, 2015 BL 59277 (N.J. Super. Law Div. Atlantic Cty. Feb. 20, 2015) (excluding Professor David Madigan for cherry picking studies to rely upon).

Confidence Intervals

Beecher-Monas’ book provided a good deal of erroneous information on confidence intervals.[5] The current article improves on the definitions, but still manages to go astray:

“The rationale courts often give for the categorical exclusion of studies with confidence intervals including the relative risk of one is that such studies lack statistical significance.62 Well, yes and no. The problem here is the courts’ use of a dichotomous meaning for statistical significance (significant or not).63 This is not a correct understanding of statistical significance.”

BM at 1069. Well, yes and no; this interpretation of a confidence interval, say with a coefficient of confidence of 95%, is a reasonable interpretation of whether the point estimate is statistically significant at an alpha of 5%. If Beecher-Monas does not like strict significance testing, that is fine, but she cannot mandate its abandonment by scientists or the courts. Certainly the cited interpretation is one proper interpretation among several.
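The duality the courts invoke is exact on the Wald approach: a 95% interval for an odds ratio excludes 1.0 precisely when the two-sided p-value falls below 0.05. A minimal sketch, with a hypothetical 2×2 table, shows the correspondence, and also shows how the same data can produce a 95% interval that covers 1.0 while the 90% interval does not (compare the example discussed in footnote 7 below):

```python
import math
from scipy.stats import norm

# Hypothetical 2x2 table: a, b = exposed cases/non-cases; c, d = unexposed.
a, b, c, d = 25, 975, 14, 986

or_hat = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's approximation for ln(OR)
z = math.log(or_hat) / se
p_two_sided = 2 * norm.sf(abs(z))

for conf, zcrit in ((0.95, 1.959964), (0.90, 1.644854)):
    lo = math.exp(math.log(or_hat) - zcrit * se)
    hi = math.exp(math.log(or_hat) + zcrit * se)
    print(f"{conf:.0%} CI: {lo:.2f} - {hi:.2f} (includes 1.0: {lo <= 1 <= hi})")

# The 95% interval excludes 1.0 exactly when p < 0.05 (two-sided, Wald).
print(f"two-sided p = {p_two_sided:.3f}")
```

Here the two-sided p-value is about 0.08, so the 95% interval covers 1.0 while the 90% interval does not: the dichotomy the courts use is simply the interval restating the test at a chosen alpha.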

Power

There were several misleading references to statistical power in Beecher-Monas’ book, but the new law review tops them by giving a new, bogus definition:

“Power, the probability that the study in which the hypothesis is being tested will reject the alterative [sic] hypothesis when it is false, increases with the size of the study.”

BM at 1065. For this definition, Beecher-Monas cites to the Reference Manual on Scientific Evidence, but butchers the correct definition given by the late David Freedman and by David Kaye.[6] All of which is very disturbing.
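For the record, power is the probability that a test will reject the null hypothesis when a specified alternative is true, and it grows with sample size. A quick simulation sketch, with a hypothetical effect (true risks of 2% versus 1%) and hypothetical sample sizes, makes both points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Power: probability of rejecting the null (no difference) when the
# alternative is true. True risks (0.02 vs 0.01) and sample sizes are
# hypothetical; the test is a two-proportion z-test at two-sided alpha 0.05.
p_exposed, p_unexposed, z_crit = 0.02, 0.01, 1.959964

for n in (500, 2000, 8000):
    rejections = 0
    for _ in range(2000):
        x1 = rng.binomial(n, p_exposed)
        x0 = rng.binomial(n, p_unexposed)
        pbar = (x1 + x0) / (2 * n)
        se = np.sqrt(2 * pbar * (1 - pbar) / n)  # pooled SE under the null
        if se > 0 and abs(x1 - x0) / n / se > z_crit:
            rejections += 1
    print(f"n = {n:5d} per arm: simulated power ~ {rejections / 2000:.2f}")
```

The simulated power climbs from roughly a quarter to near certainty as the per-arm sample size grows, which is the entire content of the correct definition that the law review mangles.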

Relative Risks and Other Risk Measures

Beecher-Monas begins badly by misdefining the concept of relative risk:

“as the percentage of risk in the exposed population attributable to the agent under investigation.”

BM at 1068. Perhaps this percentage can be derived from the relative risk, if we know the relative risk to be the true measure with some certainty, through a calculation of attributable risk; but a law review article that takes the entire medical profession to task, and most of the judiciary to boot, should not confuse and conflate attributable risk with relative risk.

Then Beecher-Monas tells us that the “[r]elative risk is a statistical test that (like statistical significance) depends on the size of the population being tested.” BM at 1068. Well, actually not; the calculation of the RR is unaffected by the sample size. The variance of course will vary with the sample size, but Beecher-Monas seems intent on ignoring random variability.
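The point is mechanical: a risk ratio is a ratio of two proportions, so scaling every cell of the 2×2 table by ten leaves the estimate untouched while narrowing its confidence interval. A short sketch with hypothetical counts, using the standard Katz log-interval:

```python
import math

def rr_with_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    # Katz log method for the variance of ln(RR)
    se = math.sqrt(1/cases_exp - 1/n_exp + 1/cases_unexp - 1/n_unexp)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Same proportions (2% vs 1%), two study sizes (hypothetical counts).
for scale in (1, 10):
    rr, lo, hi = rr_with_ci(10 * scale, 500 * scale, 5 * scale, 500 * scale)
    print(f"scale x{scale:2d}: RR = {rr:.2f}, 95% CI {lo:.2f} - {hi:.2f}")
```

The point estimate is 2.0 in both runs; only the interval changes, from one that spans 1.0 (about 0.69 to 5.81) to one that excludes it (about 1.43 to 2.80). Sample size affects precision, not the relative risk itself.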

Perhaps most egregious is Beecher-Monas’ assertion that:

“Any increase above a relative risk of one indicates that there is some effect.”

BM at 1067. So much for ruling out chance, bias, and confounding! Or looking at an entire body of epidemiologic research for strength, consistency, coherence, exposure-response, etc. Beecher-Monas has thus moved beyond a liberal, to a libertine, position. In case the reader has any doubts of the idiosyncrasy of her views, she repeats herself:

“As long as there is a relative risk greater than 1.0, there is some association, and experts should be permitted to base their causal explanations on such studies.”

BM at 1067-68. This is evidentiary nihilism in full glory. Beecher-Monas has endorsed relying upon studies irrespective of their study design or validity, their individual confidence intervals, their aggregate summary point estimates and confidence intervals, or the absence of important Bradford Hill considerations, such as consistency, strength, and dose-response. So an expert witness may opine about general causation from reliance upon a single study with a relative risk of 1.05, say with a 95% confidence interval of 0.8 – 1.4?[7] For this startling proposition, Beecher-Monas cites the work of Sander Greenland, a wild and wooly plaintiffs’ expert witness in various toxic tort litigations, including vaccine autism and silicone autoimmune cases.

RR > 2

Beecher-Monas’ discussion of inferring specific causation from relative risks greater than two devolves into a muddle because of her failure to distinguish general from specific causation. BM at 1067. The relevance of a relative risk greater than two differs for general and for specific causation, and depends upon context, such as whether the evidence comes from clinical trials or epidemiologic studies, and upon how many studies are available. Ultimately, she adds little to the discussion and debate about this issue, or any other.
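For completeness, the arithmetic that drives the doubling-of-the-risk heuristic: on strong assumptions (a valid causal relative risk, no bias or confounding, homogeneous individual risks), the fraction of exposed cases attributable to the exposure is (RR − 1)/RR, which exceeds one half exactly when RR exceeds two. A one-line sketch with illustrative values:

```python
# The "relative risk greater than 2" heuristic: under strong assumptions
# (valid causal RR, no bias or confounding, homogeneous risk), the fraction
# of exposed cases attributable to exposure is (RR - 1) / RR, which passes
# 50% exactly when RR passes 2.
for rr in (1.3, 2.0, 3.0, 4.0):
    af = (rr - 1) / rr
    print(f"RR = {rr:3.1f}: attributable fraction among exposed = {af:.0%}")
```

Whether that algebra licenses a more-likely-than-not inference for any particular plaintiff is, of course, exactly the contested question, and the assumptions do the real work.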


[1] See previous comments on the book at “Beecher-Monas and the Attempt to Eviscerate Daubert from Within”; “Friendly Fire Takes Aim at Daubert – Beecher-Monas And The Undue Attack on Expert Witness Gatekeeping”; and “Confidence in Intervals and Diffidence in the Courts.”

[2] Kenneth J. Rothman, Epidemiology: An Introduction 250 (2d ed. 2012).

[3] Erica Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process 42 n. 30, 61 (2007) (“Another way of explaining this is that it describes the probability that the procedure produced the observed effect by chance.”) (“Statistical significance is a statement about the frequency with which a particular finding is likely to arise by chance.”).

[4] See BM at 1066 & n. 44, citing “See, e.g., In re Breast Implant Litig., 11 F. Supp. 2d 1217, 1226–27 (D. Colo. 1998); Haggerty v. Upjohn Co., 950 F. Supp. 1160, 1164 (S.D. Fla. 1996), aff’d, 158 F.3d 588 (11th Cir. 1998) (“[S]cientifically valid cause and effect determinations depend on controlled clinical trials and epidemiological studies.”).”

 

[5] See, e.g., Erica Beecher-Monas, Evaluating Scientific Evidence 58, 67 (N.Y. 2007) (“No matter how persuasive epidemiological or toxicological studies may be, they could not show individual causation, although they might enable a (probabilistic) judgment about the association of a particular chemical exposure to human disease in general.”) (“While significance testing characterizes the probability that the relative risk would be the same as found in the study as if the results were due to chance, a relative risk of 2 is the threshold for a greater than 50 percent chance that the effect was caused by the agent in question.”)(incorrectly describing significance probability as a point probability as opposed to tail probabilities).

[6] David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Federal Jud. Ctr., Reference Manual on Scientific Evidence 211, 253–54 (3d ed. 2011) (discussing the statistical concept of power).

[7] BM at 1070 (pointing to a passage in the FJC’s Reference Manual on Scientific Evidence that provides an example of one 95% confidence interval that includes 1.0, but which shrinks, when calculated as a 90% interval, to 1.1 to 2.2, which values “demonstrate some effect with confidence interval set at 90%”). This is nonsense in the context of observational studies.

Seventh Circuit Affirms Exclusion of Expert Witnesses in Vinyl Chloride Case

August 30th, 2015

Last week, the Seventh Circuit affirmed a federal district court’s exclusion of plaintiffs’ expert witnesses in an environmental vinyl chloride exposure case. Wood v. Textron, Inc., No. 3:10 CV 87, 2014 U.S. Dist. LEXIS 34938 (N.D. Ind. Mar. 17, 2014); 2014 U.S. Dist. LEXIS 141593, at *11 (N.D. Ind. Oct. 3, 2014), aff’d, Slip op., No. 14-3448, 2015 U.S. App. LEXIS 15076 (7th Cir. Aug. 26, 2015). Plaintiffs, children C.W. and E.W., claimed exposure from Textron’s manufacturing facility in Rochester, Indiana, which released vinyl chloride as a gas that seeped into ground water, and into neighborhood residential water wells. Slip op. at 2-3. Plaintiffs claimed present injuries in the form of “gastrointestinal issues (vomiting, bloody stools), immunological issues, and neurological issues,” as well as future increased risk of cancer. Importantly, the appellate court explicitly approved the trial court’s careful reading of relied upon studies to determine whether they really did support the scientific causal claims made by the expert witnesses. Given the reluctance of some federal district judges to engage with the studies actually cited, this holding is noteworthy.

To support their claims, plaintiffs offered the testimony from three familiar expert witnesses:

(1) Dr. James G. Dahlgren;

(2) Dr. Vera S. Byers; and

(3) Dr. Jill E. Ryer-Powder.

Slip op. at 5. This gaggle offered well-rehearsed but scientifically unsound arguments in place of actual evidence that the children were hurt, or would be afflicted, as a result of their claimed exposures:

(a) extrapolation from high dose animal and human studies;

(b) assertions of children’s heightened vulnerability;

(c) differential etiology;

(d) temporality; and

(e) regulatory exposure limits.

On appeal, a panel of the Seventh Circuit held that the district court had properly conducted “an in-depth review of the relevant studies that the experts relied upon to generate their differential etiology,” and their general causation opinions. Slip op. at 13-14 (distinguishing other Seventh Circuit decisions that reversed district court Rule 702 rulings, and noting that the court below followed Joiner’s lead by analyzing the relied-upon studies to assess analytical gaps and extrapolations). The plaintiffs’ expert witnesses simply failed in analytical gap bridging, and dot connecting.

Extrapolation

The Circuit agreed with the district court that the extrapolations asserted were extreme, and that they represented “analytical gaps” too wide to be permitted in a courtroom. Slip op. at 15. The challenged expert witnesses extrapolated between species, between exposure levels, between exposure duration, between exposure circumstances, and between disease outcomes.

The district court faulted Dahlgren for relying upon articles that “fail to establish that [vinyl chloride] at the dose and duration present in this case could cause the problems that the [p]laintiffs have experienced or claim that they are likely to experience.” C.W. v. Textron, 2014 U.S. Dist. LEXIS 34938, at *53, *45 (N.D. Ind. Mar. 17, 2014) (finding that the analytical gap between the cited studies and Dahlgren’s purpose in citing the studies was an unbridged gap, which Dahlgren had failed to explain). Slip op. at 8.

Byers, for instance, cited one study[1] that involved exposure for five years, at an average level that was over 1,000 times higher than the children’s alleged exposure levels; the children’s exposures lasted less than 17 months and less than 7 months, respectively. Perhaps even more extreme were the plaintiffs’ expert witnesses’ attempted extrapolations from animal studies, which the district court recognized as “too attenuated” from plaintiffs’ case. Slip op. at 14. The Seventh Circuit rejected plaintiffs’ claim of error that the district court had imposed a requirement of “absolute precision,” in holding that the plaintiffs’ expert witnesses’ analytical gaps (and slips) were too wide to be bridged. The Circuit provided a colorful example of a study on laboratory rodents, pressed into service for a long-term carcinogenicity assay, which found no statistically significant increase in tumors in rodents fed 0.03 milligrams of vinyl chloride per kilogram of bodyweight (0.03 mg/kg), 4 to 5 days each week, for 59 weeks, compared to control rodents fed olive oil.[2] Slip op. at 14-15. The exposure level in this study, 0.03 mg/kg, was over 10 times the children’s exposure, as estimated by Ryer-Powder. And the 59 weeks of study exposure covered the great majority of the rodents’ adult lives, which greatly exceeded the children’s exposure, which took place over several months of their lives. Slip op. at 15.

The Circuit held that the district court was within its discretion in evaluating the analytical gaps, and that the district court was correct to look at the study details to exercise its role as a gatekeeper under Rule 702. Slip op. at 15-17. The plaintiffs’ expert witnesses failed to explain their extrapolations, which made their opinions suspect. As the Circuit court noted, there is a methodology by which scientists sometimes attempt to model human risks from animal evidence. Slip op. at 16-17, citing Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” in Federal Judicial Center, Reference Manual on Scientific Evidence 646 (3d ed. 2011) (“The mathematical depiction of the process by which an external dose moves through various compartments in the body until it reaches the target organ is often called physiologically based pharmacokinetics or toxicokinetics.”). Given the abject failures of plaintiffs’ expert witnesses to explain their leaps of faith, the appellate court had no occasion to explore the limits of risk assessment outside regulatory contexts.

Children’s Vulnerability

Plaintiffs’ expert witnesses asserted that children are much more susceptible than adult workers, and even than laboratory rats. As is typical in such cases, these expert witnesses had no evidence to support their assertions, and they made no effort even to invoke models that attempt reasonable assessments of children’s risks.

Differential Etiology

Dahlgren and Byers both claimed that they reached individual or specific causation conclusions based upon their conduct of a “differential etiology.” The trial and appellate court both faulted them for failing to “rule in” vinyl chloride for plaintiffs’ specific ailments before going about the business of ruling out competing or alternative causes. Slip op. at 6-7; 9-10; 20-21.

The courts also rejected Dahlgren’s claim that he could rule out all potential alternative causes by noting that the children’s treating physicians had failed to identify any cause for their ailments. So after postulating a limited universe of alternative causes of “inheritance, allergy, infection or another poison,” Dahlgren ruled all of them out of the case, because these putative causes “would have been detected by [the appellants’] doctors and treated accordingly.” Slip op. at 7, 18. As the Circuit court saw the matter:

“[T]his approach is not the stuff of science. It is based on faith in his fellow physicians—nothing more. The district court did not abuse its discretion in rejecting it.”

Slip op. at 18. Of course, the court might well have noted that physicians are often concerned exclusively with identifying effective therapy, and have little or nothing to offer on actual causation.

The Seventh Circuit panel did fuss with dicta in the trial court’s opinion that suggested differential etiology “cannot be used to support general causation.” C.W. v. Textron, 2014 U.S. Dist. LEXIS 141593, at *11 (N.D. Ind. Oct. 3, 2014). Elsewhere, the trial court wrote, in a footnote, that “[d]ifferential [etiology] is admissible only insofar as it supports specific causation, which is secondary to general causation … .” Id. at *12 n.3. Curiously, the appellate court characterized these statements as “holdings” of the trial court, but disproved its own characterization by affirming the judgment below. The Circuit court countered with its own dicta that

“there may be a case where a rigorous differential etiology is sufficient to help prove, if not prove altogether, both general and specific causation.”

Slip op. at 20 (citing, in turn, improvident dicta from the Second Circuit, in Ruggiero v. Warner-Lambert Co., 424 F.3d 249, 254 (2d Cir. 2005) (“There may be instances where, because of the rigor of differential diagnosis performed, the expert’s training and experience, the type of illness or injury at issue, or some other … circumstance, a differential diagnosis is sufficient to support an expert’s opinion in support of both general and specific causation.”).

Regulatory Pronouncements

Dahlgren based his opinions upon the children’s water supply containing vinyl chloride in excess of regulatory levels set by state and federal agencies, including the U.S. Environmental Protection Agency (E.P.A.). Slip op. at 6. Similarly, Ryer-Powder relied upon exposure levels’ exceeding regulatory permissible limits for her causation opinions. Slip op. at 10.

The district court, now with the approval of the Seventh Circuit, would have none of this nonsense. Exceeding governmental regulatory exposure limits does not prove causation. The non-compliance does not help the fact finder without knowing “the specific dangers” that led the agency to set the permissible level, and thus the regulations are not relevant at all without this information. Even with respect to specific causation, the regulatory infraction may be weak or null evidence for causation. Slip op. at 18-19 (citing Cunningham v. Masterwear Corp., 569 F.3d 673, 674–75 (7th Cir. 2009)).

Temporality

Byers and Dahlgren also emphasized that the children’s symptoms began after exposure and abated after removal from exposure. Slip op. at 9, 6-7. Both the trial and appellate courts were duly unimpressed by the post hoc ergo propter hoc argument. Slip op. at 19, citing Ervin v. Johnson & Johnson, 492 F.3d 901, 904-05 (7th Cir. 2007) (“The mere existence of a temporal relationship between taking a medication and the onset of symptoms does not show a sufficient causal relationship.”).

Increased Risk of Cancer

The plaintiffs’ expert witnesses offered opinions about the children’s future risk of cancer that were truly over the top. Dahlgren testified that the children were “highly likely” to develop cancer in the future. Slip op. at 6. Ryer-Powder claimed that the children’s exposures were “sufficient to present an unacceptable risk of cancer in the future.” Slip op. at 10. With no competent evidence to support their claims of present or past injury, these opinions about future cancer were no longer relevant. The Circuit thus missed an opportunity to comment on how meaningless these opinions were. Most people will develop a cancer at some point in their lifetime, and we might all agree that any risk is unacceptable, which is why medical research continues into the causes, prevention, and cure of cancer. An unquantified risk of cancer, however, cannot support an award of damages even if it were a proper item of damages. See, e.g., Sutcliffe v. G.A.F. Corp., 15 Phila. 339, 1986 Phila. Cty. Rptr. LEXIS 22, 1986 WL 501554 (1986). See also “Back to Baselines – Litigating Increased Risks” (Dec. 21, 2010).


[1] Steven J. Smith, et al., “Molecular Epidemiology of p53 Protein Mutations in Workers Exposed to Vinyl Chloride,” 147 Am. J. Epidemiology 302 (1998) (average level of workers’ exposure was 3,735 parts per million; children were supposedly exposed at 3 ppb). This study looked only at a putative biomarker for angiosarcoma of the liver, not at cancer risk.

[2] Cesare Maltoni, et al., “Carcinogenicity Bioassays of Vinyl Chloride Monomer: A Model of Risk Assessment on an Experimental Basis,” 41 Envt’l Health Persp. 3 (1981).

Events, Outcomes, and Effects – Media Responsibility to Be Accurate

July 29th, 2015

Thanks to Dr. David Schwartz for the pointer to a story, by a Reuters health reporter, on a JAMA online-first article on drug “side effects.” See David Schwartz, “Lack of compliance on ADR Reporting: Some serious drug side effects not told to FDA within 15 days” (July 29, 2015).

The reporter, Lisa Rapaport, wrote about an in-press article in JAMA Internal Medicine on delays in drug companies’ mandatory reporting. Lisa Rapaport, “Some serious drug side effects not told to FDA within 15 days,” (July 27, 2015). The article that gave rise to this media coverage, however, was not about side effects, or direct effects, for that matter; it was about adverse events. See Paul Ma, Iván Marinovic, and Pinar Karaca-Mandic, “Drug Manufacturers’ Delayed Disclosure of Serious and Unexpected Adverse Events to the US Food and Drug Administration,” JAMA Intern. Med. (published online July 27, 2015) (doi:10.1001/jamainternmed.2015.3565).

The word “effect[s]” occurs 10 times in Rapaport’s news item; and yet, that word does not appear at all in the JAMA article, except in a footnote that points to a popular media article. And Reuters is the source of the footnoted popular media article.[1] Apparently, Reuters’ reporters are unaware of the difference between an event and an effect. The companies’ delayed reports apparently made up 10% of all adverse event reports, but spinning the story as though it were about adverse effects makes the story seem more important and the delays more nefarious.

Why would a reporter covering a medical journal article not be familiar with the basic terminology and concepts at issue? The FDA’s description of its adverse event system makes clear that adverse events have nothing to do with “effects.” The governing regulations for post-marketing reporting of adverse drug experiences are even more clear that adverse events or experiences are not admissions or conclusions of causality. 21 C.F.R. 314.80(a), (k). See also ICH Harmonised Tripartite Guideline for Good Clinical Practice E6(R1) (10 June 1996).

Perhaps this is an issue with which Sense about Science USA can help? Located in the brain basket of America – Brooklyn, NY – Sense about Science is:

“a non-profit, non-partisan American branch of the British charitable trust, Sense About Science, which was founded in 2003 and which grew to play a pivotal role in promoting scientific understanding and defending scientific integrity in the UK and Europe.”

One of the organization’s activities is offering the media help in understanding scientific and statistical issues. Let’s hope that reporters take the help being offered.


[1] S. Heavey, “FDA warns Pfizer for not reporting side effects” (June 10, 2010).