TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Systematic Reviews and Meta-Analyses in Litigation

February 5th, 2016

Kathy Batty is a bellwether plaintiff in a multi-district litigation[1] (MDL) against Zimmer, Inc., in which hundreds of plaintiffs claim that Zimmer’s NexGen Flex implants are prone to premature aseptic loosening of their femoral and tibial components (loosening independent of any infection). Batty v. Zimmer, Inc., MDL No. 2272, Master Docket No. 11 C 5468, No. 12 C 6279, 2015 WL 5050214 (N.D. Ill. Aug. 25, 2015) [cited as Batty].

PRISMA Guidelines for Systematic Reviews

Zimmer proffered Dr. Michael G. Vitale, an orthopedic surgeon with a master’s degree in public health, to testify that, in his opinion, Batty’s causal claims were unfounded. Batty at *4. Dr. Vitale prepared a Rule 26 report that presented a formal, systematic review of the pertinent literature. Batty at *3. Plaintiff Batty challenged the admissibility of Dr. Vitale’s opinion on grounds that his purportedly “formal systematic literature review,” done for litigation, was biased and unreliable, and not conducted according to generally accepted principles for such reviews. The challenge was framed, cleverly, in terms of Dr. Vitale’s failure to comply with a published set of principles outlined in the “PRISMA” guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), which enjoy widespread general acceptance among the clinical journals. See David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, & The PRISMA Group, “Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement,” 6 PLoS Med e1000097 (2009) [PRISMA]. Batty at *5. The trial judge, Hon. Rebecca R. Pallmeyer, denied plaintiff’s motion to exclude Dr. Vitale, but in doing so accepted, arguendo, the plaintiff’s implicit premise that an expert witness’s opinion should be reached in the manner of a carefully constructed systematic review.

The plaintiff’s invocation of the PRISMA guidelines presented several difficult problems for her challenge and for the court. PRISMA provides a checklist of 27 items for journal editors to assess the quality and completeness of systematic reviews that are submitted for publication. Plaintiff Batty focused on several claimed deviations from the guidelines:

  • “failing to explicitly state his study question,
  • failing to acknowledge the limitations of his review,
  • failing to present his findings graphically, and
  • failing to reproduce his search results.”

Batty’s challenge to Dr. Vitale thus turned on whether Zimmer’s expert witness had failed to deploy the “same level of intellectual rigor” as someone in the world of clinical medicine would [should] have in conducting a similar systematic review. Batty at *6.

Zimmer deflected the challenge, in part by arguing that PRISMA’s guidelines are for the reporting of systematic reviews, and they are not necessarily criteria for valid reviews. The trial court accepted this rebuttal, Batty at *7, but missed the point that some of the guidelines call for methods that are essential for rigorous, systematic reviews, in any forum, and do not merely specify “publishability.” To be sure, PRISMA itself does not always distinguish between what is essential for journal publication, as opposed to what is needed for a sufficiently valid systematic review. The guidelines, for instance, call for graphical displays, but in litigation, charts, graphs, and other demonstratives are often not produced until the eve of trial, when case management orders call for the parties to exchange such materials. In any event, Dr. Vitale’s omission of graphical representations of his findings was consistent with his finding that the studies were too clinically heterogeneous in study design, follow-up time, and pre-specified outcomes, to permit nice, graphical summaries. Batty at *7-8.

Similarly, the PRISMA guidelines call for a careful specification of the clinical question to be answered, but in litigation, the plaintiff’s causal claims frame the issue to be addressed by the defense expert witness’s literature review. The trial court readily found that Dr. Vitale’s research question was easily discerned from the context of his report in the particular litigation. Batty at *7.

Plaintiff Batty’s challenge pointed to Dr. Vitale’s failure to acknowledge explicitly the limitations of his systematic review, an omission that virtually defines expert witness reports in litigation. Given the availability of discovery tools, such as a deposition of Dr. Vitale (at which he readily conceded the limitations of his review), and the right of confrontation and cross-examination (which are not available, alas, for published articles), the trial court found that this alleged deviation was not particularly relevant to the plaintiff’s Rule 702 challenge. Batty at *8.

Batty further charged that Dr. Vitale had not “reproduced” his own systematic review. Arguing that a systematic review’s results must be “transparent and reproducible,” Batty claimed that Zimmer’s expert witness’s failure to compile a list of studies that were originally retrieved from his literature search deprived her, and the trial court, of the ability to determine whether the search was complete and unbiased. Batty at *8. Dr. Vitale’s search protocol and inclusionary and exclusionary criteria were, however, stated, explained, and reproducible, even though Dr. Vitale did not explain the application of his criteria to each individual published paper. In the final analysis, the trial court was unmoved by Batty’s critique, especially given that her expert witnesses failed to identify any relevant studies omitted from Dr. Vitale’s review. Batty at *8.

Lumping or Commingling of Heterogeneous Studies

The plaintiff pointed to Dr. Vitale’s “commingling” of studies, heterogeneous in terms of “study length, follow-up, size, design, power, outcome, range of motion, component type” and other clinical features, as a deep flaw in the challenged expert witness’s methodology. Batty at *9. Batty’s own retained expert witness, Dr. Kocher, supported Batty’s charge by adverting to the clinical variability in studies included in Dr. Vitale’s review, and suggesting that “[h]igh levels of heterogeneity preclude combining study results and making conclusions based on combining studies.” Dr. Kocher’s argument was rather beside the point because Dr. Vitale had not impermissibly combined clinically or statistically heterogeneous outcomes.[2] Similarly, the plaintiff’s complaint that Dr. Vitale had used inconsistent criteria of knee implant survival rates was dismissed by the trial court, which easily found Dr. Vitale’s survival criteria both pre-specified and consistent across his review of studies, and relevant to the specific failure alleged by Ms. Batty. Batty at *9.
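Dr. Kocher’s general point about heterogeneity can be made concrete. In a quantitative synthesis, disagreement among studies beyond what chance would produce is conventionally summarized with Cochran’s Q and the I² statistic. The sketch below uses invented effect estimates and standard errors (none drawn from the Batty record) to show how high heterogeneity is detected:

```python
import math

# Hypothetical log hazard ratios and standard errors for four studies.
# These numbers are invented for illustration; none come from the
# studies in Dr. Vitale's review.
estimates = [0.8, -0.1, 0.5, -0.4]
std_errors = [0.2, 0.25, 0.3, 0.2]

weights = [1.0 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
df = len(estimates) - 1

# I-squared: the share of variability attributable to real
# between-study differences rather than chance.
i_squared = max(0.0, (q - df) / q) * 100.0
print(f"Q = {q:.1f} on {df} df; I^2 = {i_squared:.0f}%")
```

With these invented inputs, I² lands well above the conventional 75% mark for “high” heterogeneity, the situation in which pooling a single summary estimate becomes hard to defend.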

Cherry Picking

The trial court readily agreed with Plaintiff’s premise that an expert witness who used inconsistent inclusionary and exclusionary criteria would have to be excluded under Rule 702. Batty at *10, citing In re Zoloft, 26 F. Supp. 3d 449, 460–61 (E.D. Pa. 2014) (excluding epidemiologist Dr. Anick Bérard’s proffered testimony because of her biased cherry picking of studies to support her opinions, and her failure to account for contradictory evidence). The trial court, however, did not find that Dr. Vitale’s review was corrupted by the kind of biased cherry picking that Judge Rufe found to have been committed by Dr. Anick Bérard, in the Zoloft MDL.

Duplicitous Duplication

Plaintiff’s challenge of Dr. Vitale did manage to spotlight an error in Dr. Vitale’s inclusion of two studies that were duplicate analyses of the same cohort. Apparently, Dr. Vitale had confused the studies as not being of the same cohort because the two papers reported different sample sizes. Dr. Vitale admitted that his double counting the same cohort “got by the peer-review process and it got by my filter as well.” Batty at *11, citing Vitale Dep. 284:3–12. The trial court judged Dr. Vitale’s error to have been:

“an inadvertent oversight, not an attempt to distort the data. It is also easily correctable by removing one of the studies from the Group 1 analysis so that instead of 28 out of 35 studies reporting 100% survival rates, only 27 out of 34 do so.”

Batty at *11.

The error of double counting studies in quantitative reviews and meta-analyses has become a prevalent problem in both published studies[3] and in litigation reports. Epidemiologic studies are sometimes updated and extended with additional follow up. The prohibition against double counting data is so obvious that it often is not even identified on checklists, such as PRISMA. Furthermore, double counting of studies, or subgroups within studies, is a flaw that most careful readers can identify in a meta-analysis, without advanced training. According to statistician Stephen Senn, double counting of evidence is a serious problem in published meta-analytical studies.[4] Senn observes that he had little difficulty in finding examples of meta-analyses gone wrong, including meta-analyses with double counting of studies or data, in some of the leading clinical medical journals. Senn urges analysts to “[b]e vigilant about double counting,” and recommends that journals should withdraw meta-analyses promptly when mistakes are found.[5]
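Senn’s point is easy to demonstrate numerically. The sketch below pools hypothetical log odds ratios (invented for illustration, not taken from any study in the case) by fixed-effect, inverse-variance weighting; including both the original report and its update of the same cohort overweights that cohort and spuriously narrows the pooled estimate’s standard error:

```python
import math

# Hypothetical log odds ratios and standard errors. "cohort_A_update"
# re-analyzes the same cohort as "cohort_A" with longer follow-up, so
# only the update belongs in the pooled analysis. All numbers are
# invented for illustration.
studies = {
    "cohort_A":        (0.40, 0.20),
    "cohort_A_update": (0.35, 0.15),
    "cohort_B":        (0.10, 0.25),
    "cohort_C":        (0.05, 0.30),
}

def pool(names):
    """Fixed-effect, inverse-variance pooled estimate and its standard error."""
    weights = {n: 1.0 / studies[n][1] ** 2 for n in names}
    total = sum(weights.values())
    estimate = sum(weights[n] * studies[n][0] for n in names) / total
    return estimate, math.sqrt(1.0 / total)

# Correct review: the shared cohort is counted once, via its latest report.
correct, se_correct = pool(["cohort_A_update", "cohort_B", "cohort_C"])

# Flawed review: cohort A enters twice, once per publication.
flawed, se_flawed = pool(list(studies))

# Double counting overweights cohort A's result and spuriously
# narrows the apparent precision of the pooled estimate.
print(f"correct: {correct:+.3f} (SE {se_correct:.3f})")
print(f"flawed:  {flawed:+.3f} (SE {se_flawed:.3f})")
```

The flawed pooling both shifts the estimate toward the twice-counted cohort and overstates its precision, which is exactly why Senn treats double counting as “overstating the evidence.”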

An expert witness who wished to skate over the replication and consistency requirement might be tempted, as was Dr. Michael Freeman, to count the earlier and later iterations of the same basic study as “replication.” Proper methodology, however, prohibits double dipping data to count the later study that subsumes the early one as a “replication”:

“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology. Dr. Freeman claims the Alwan and Reefhuis studies demonstrate replication. However, the population Alwan studied is only a subset of the Reefhuis population and therefore they are effectively the same.”

Porter v. SmithKline Beecham Corp., No. 03275, 2015 WL 5970639, at *9 (Phila. Cty. Pennsylvania, Ct. C.P. October 5, 2015) (Mark I. Bernstein, J.)

Conclusions

The PRISMA and similar guidelines do not necessarily map the requisites of admissible expert witness opinion testimony, but they are a source of some important considerations for the validity of any conclusion about causality. On the other hand, by specifying the requisites of a good publication, some PRISMA guidelines are irrelevant to litigation reports and testimony of expert witnesses. Although Plaintiff Batty’s challenge overreached and failed, the premise of her challenge is noteworthy, as is the trial court’s having taken the premise seriously. Ultimately, the challenge to Dr. Vitale’s opinion failed because the specified PRISMA guidelines, supposedly violated, were either irrelevant or satisfied.


[1] Zimmer Nexgen Knee Implant Products Liability Litigation.

[2] Dr. Vitale’s review is thus easily distinguished from what has become commonplace in litigation of birth defect claims, where, for instance, some well-known statisticians [names available upon request] have conducted qualitative reviews and quantitative meta-analyses of highly disparate outcomes, such as any and all cardiovascular congenital anomalies. In one such case, a statistician expert witness hired by plaintiffs presented a meta-analysis that included study results for any nervous system defect, any central nervous system defect, and any neural tube defect, without any consideration of clinical heterogeneity or even overlap among study results.

[3] See, e.g., Shekoufeh Nikfar, Roja Rahimi, Narjes Hendoiee, and Mohammad Abdollahi, “Increasing the risk of spontaneous abortion and major malformations in newborns following use of serotonin reuptake inhibitors during pregnancy: A systematic review and updated meta-analysis,” 20 DARU J. Pharm. Sci. 75 (2012); Roja Rahimi, Shekoufeh Nikfar, Mohammad Abdollahi, “Pregnancy outcomes following exposure to serotonin reuptake inhibitors: a meta-analysis of clinical trials,” 22 Reproductive Toxicol. 571 (2006); Anick Bérard, Noha Iessa, Sonia Chaabane, Flory T. Muanda, Takoua Boukhris, and Jin-Ping Zhao, “The risk of major cardiac malformations associated with paroxetine use during the first trimester of pregnancy: A systematic review and meta-analysis,” 81 Brit. J. Clin. Pharmacol. (2016), in press, available at doi: 10.1111/bcp.12849.

[4] Stephen J. Senn, “Overstating the evidence – double counting in meta-analysis and related problems,” 9 BMC Medical Research Methodology 10 (2009).

[5] Id. at *1, *4.


DOUBLE-DIP APPENDIX

Some papers and textbooks, in addition to Stephen Senn’s paper cited above, note the impermissible method of double counting data or studies in quantitative reviews:

Aaron Blair, Jeanne Burg, Jeffrey Foran, Herman Gibb, Sander Greenland, Robert Morris, Gerhard Raabe, David Savitz, Jane Teta, Dan Wartenberg, Otto Wong, and Rae Zimmerman, “Guidelines for Application of Meta-analysis in Environmental Epidemiology,” 22 Regulatory Toxicol. & Pharmacol. 189, 190 (1995).

“II. Desirable and Undesirable Attributes of Meta-Analysis

* * *

Redundant information: When more than one study has been conducted on the same cohort, the later or updated version should be included and the earlier study excluded, provided that later versions supply adequate information for the meta-analysis. Exclusion of, or in rare cases, carefully adjusting for overlapping or duplicated studies will prevent overweighting of the results by one study. This is a critical issue where the same cohort is reexamined or updated several times. Where duplication exists, decision criteria should be developed to determine which of the studies are to be included and which excluded.”

Sander Greenland & Keith O’Rourke, “Meta-Analysis – Chapter 33,” in Kenneth J. Rothman, Sander Greenland, Timothy L. Lash, Modern Epidemiology 652, 655 (3d ed. 2008) (emphasis added)

Conducting a Sound and Credible Meta-Analysis

Like any scientific study, an ideal meta-analysis would follow an explicit protocol that is fully replicable by others. This ideal can be hard to attain, but meeting certain conditions can enhance soundness (validity) and credibility (believability). Among these conditions we include the following:

  • A clearly defined set of research questions to address.

  • An explicit and detailed working protocol.

  • A replicable literature-search strategy.

  • Explicit study inclusion and exclusion criteria, with a rationale for each.

  • Nonoverlap of included studies (use of separate subjects in different included studies), or use of statistical methods that account for overlap.* * * * *”

Matthias Egger, George Davey Smith, and Douglas G. Altman, Systematic Reviews in Health Care: Meta-Analysis in Context 59 – 60 (2001).

Duplicate (multiple) publication bias

***

The production of multiple publications from single studies can lead to bias in a number of ways. Most importantly, studies with significant results are more likely to lead to multiple publications and presentations, which makes it more likely that they will be located and included in a meta-analysis. The inclusion of duplicated data may therefore lead to overestimation of treatment effects, as recently demonstrated for trials of the efficacy of ondansetron to prevent postoperative nausea and vomiting.”

Khalid Khan, Regina Kunz, Joseph Kleijnen, and Gerd Antes, Systematic Reviews to Support Evidence-Based Medicine: How to Review and Apply Findings of Healthcare Research 35 (2d ed. 2011)

“2.3.5 Selecting studies with duplicate publication

Reviewers often encounter multiple publications of the same study. Sometimes these will be exact duplications, but at other times they might be serial publications with the more recent papers reporting increasing numbers of participants or lengths of follow-up. Inclusion of duplicated data would inevitably bias the data synthesis in the review, particularly because studies with more positive results are more likely to be duplicated. However, the examination of multiple reports of the same study may provide useful information about its quality and other characteristics not captured by a single report. Therefore, all such reports should be examined. However, the data should only be counted once using the largest, most complete report with the longest follow-up.”

Julia H. Littell, Jacqueline Corcoran, and Vijayan Pillai, Systematic Reviews and Meta-Analysis 62-63 (2008)

Duplicate and Multiple Reports

***

It is a bit more difficult to identify multiple reports that emanate from a single study. Sometimes these reports will have the same authors, sample sizes, program descriptions, and methodological details. However, author lines and sample sizes may vary, especially when there are reports on subsamples taken from the original study (e.g., preliminary results or special reports). Care must be taken to ensure that we know which reports are based on the same samples or on overlapping samples—in meta-analysis these should be considered multiple reports from a single study. When there are multiple reports on a single study, we put all of the citations for that study together in summary information on the study.”

Kay Dickersin, “Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm,” Chapter 2, in Hannah R. Rothstein, Alexander J. Sutton & Michael Borenstein, Publication Bias in Meta-Analysis – Prevention, Assessment and Adjustments 11, 26 (2005)

“Positive results appear to be published more often in duplicate, which can lead to overestimates of a treatment effect (Timmer et al., 2002).”

Julian P.T. Higgins & Sally Green, eds., Cochrane Handbook for Systematic Reviews of Interventions 152 (2008)

“7.2.2 Identifying multiple reports from the same study

Duplicate publication can introduce substantial biases if studies are inadvertently included more than once in a meta-analysis (Tramer 1997). Duplicate publication can take various forms, ranging from identical manuscripts to reports describing different numbers of participants and different outcomes (von Elm 2004). It can be difficult to detect duplicate publication, and some ‘detective work’ by the review authors may be required.”

Lawyers as Historians

February 2nd, 2016

“It has been said that though God cannot alter the past, historians can; it is perhaps because they can be useful to Him in this respect that He tolerates their existence.”     Samuel Butler

The negligence standard does not require omniscience by the defendant; rather, in products liability law, the manufacturer is expected to know what experts in the relevant field know, at the time of making and marketing the allegedly offending product. In long-tail litigation, involving harms that occur, if at all, only after a long latency period, the inquiry thus becomes an historical one, sometimes reaching back decades. Combine this aspect of products liability law with the propensity of plaintiffs to ascribe long-standing, often fantastic, secret conspiracies and cabals to manufacturers, and the historical aspect of many products cases becomes essential. The law leaves much uncertainty about how litigants are supposed to deal with uncertainty among experts at the relevant point in time. Plaintiffs typically find one or a few experts who were “out there,” at the time of the marketing, with good intuitions, but poor evidentiary bases, in asserting a causal connection. Defendants may take the opposite tack, but the important point is that the standard is epistemic and the Gettier problem[1] seriously afflicts most discussions in the legal state-of-art defenses.

Scott Kozak, in a recent article, calls attention to the exercised writings of David Rosner and Gerald Markowitz, who attempt to privilege their for-pay, for-plaintiffs, testimonial adventures, while deprecating similar work by defense expert witnesses and defense counsel.[2] Kozak’s article is a helpful reminder of how Markowitz and Rosner misunderstand and misrepresent the role of lawyers, while aggressively marketing their Marxist historiography in service of the Litigation Industry. Although Rosnowitz’s approach has been debunked on many occasions,[3] their biases and errors remain important, especially given how frequently they have shown up as highly partisan, paid expert witnesses in litigation. As I have noted on many occasions, historians can play an important scholarly role in identifying sources, connections, and interpretations of evidence, but the work of drawing and arguing those inferences in court belongs to lawyers, who are subject to rules of procedure, evidence, and ethics.

Of course, lawyers, using the same set of skills of factual research and analysis as historians, have made important contributions to historical scholarship. A recent article[4] in the Wall Street Journal pointed out the historical contributions made by William Henry Herndon, Abraham Lincoln’s law partner, to our understanding of the Lincoln presidency.[5] The examples could be multiplied.

Recently, I set out to research some issues in my own family history, surrounding its immigration and adjustment to life in the United States. I found some interesting points of corroboration between the oral and the documentary history, but what was most remarkable was what was omitted from the oral history, and rediscovered among ancient documents. The information omitted could have been by accident or by design. The embarrassing, the scandalous, the unpleasant, the mistakes, and the inane seem destined to be forgotten or suppressed, and thus left out of the narrative. The passage of time cloaked past events in a shroud of mystery. And then there was false memory and inaccurate recall. The Rashomon effect is in full bloom in family histories, as are all the cognitive biases, and unwarranted exceptionalist propaganda.

From all this, you might think that family histories are as intellectually corrupt and barren as national histories. Perhaps, but there is some documentary evidence that is likely to be mostly correct. Sometimes the documents even corroborate the oral history. Every fact documented, however, raises multiple new questions. Often, we are left with the black box of our ancestors’ motivation and intent, even when we can establish some basic historical facts.

In conducting this bit of family research, I was delighted to learn that there are standards for what constitutes reasonably supportable conclusions in family histories. The elements of the “genealogical proof standard,” set out in various places,[6] are generally regarded as consisting of:

 

  • reasonably exhaustive search
  • complete and accurate citation to sources
  • analysis and correlation of collected information
  • resolution of conflicting evidence
  • soundly reasoned, coherently written conclusion

If only all historians abided by this standard! There are standards for professional conduct of historians,[7] but curiously they are not as demanding as what the genealogical community has accepted as guiding and governing genealogical research. The Genealogy Standards is worth consulting as a set of methodological principles that historians of all stripes should heed, and whose disregard should lead to exclusion from the courtroom.


[1] Edmund L. Gettier, “Is Justified True Belief Knowledge?” 23 Analysis 121 (1963).

[2] Scott Kozak, “Use and Abuse of ‘Historical Experts’ in Toxic Tort Cases,” in Toxic & Hazardous Substances Litigation (March 2015), available at < >.

[3] For a sampling of Rosnowitz deconstruction, see “Counter Narratives for Hire”; “Historians Noir”; “Too Many Narratives – Historians in the Dock”; “Courting Clio: Historians Under Oath – Part 2”; “Courting Clio: Historians Under Oath – Part 1”; “Courting Clio: Historians and Their Testimony in Products Liability Litigation”; “How testifying historians are like lawn-mowing dogs” (May 2010); “What Happens When Historians Have Bad Memories”; “Narratives & Historians for Hire”; “A Walk on the Wild Side” (July 16, 2010).

[4] David S. Reynolds, “Abraham Lincoln and Friends,” Wall St. J. (Jan. 29, 2016).

[5] Douglas L. Wilson & Rodney O. Davis, eds., Herndon on Lincoln: Letters (2016).

[6] See generally Board for Certification of Genealogists, Genealogy Standards (50th Anniversary ed. 2014).

[7] See, e.g., American Historical Ass’n, Statement on Standards of Professional Conduct, 2005 Edition, available at <http://www.historians.org/pubs/Free/ProfessionalStandards.cfm> (last revised January 2011). For histories that live up to high standards, see Annette Gordon-Reed, The Hemingses of Monticello: An American Family (2009); Timothy Snyder, Black Earth: The Holocaust as History and Warning (2015). But see David Rosner & Gerald Markowitz, Deadly Dust: Silicosis and the On-Going Struggle to Protect Workers’ Health (2006).

Lipitor MDL Takes The Fat Out Of Dose Extrapolations

December 2nd, 2015

Philippus Aureolus Theophrastus Bombastus von Hohenheim thankfully went by the simple moniker Paracelsus, sort of the Cher of the 1500s. Paracelsus’ astrological research is graciously overlooked today, but his 16th century dictum, in the German vernacular, has created a lasting impression on linguistic conventions and toxicology:

“Alle Ding’ sind Gift, und nichts ohn’ Gift; allein die Dosis macht, dass ein Ding kein Gift ist.”

(All things are poison and nothing is without poison, only the dose permits something not to be poisonous.)

or more simply

“Die Dosis macht das Gift.”

Paracelsus, “Die dritte Defension wegen des Schreibens der neuen Rezepte,” Septem Defensiones (1538), in 2 Werke 510 (Darmstadt 1965). Today, his notion that the “dose is the poison” is a basic principle of modern toxicology,[1] which can be found in virtually every textbook on the subject.[2]

Paracelsus’ dictum has also permeated the juridical world, and become a commonplace in legal commentary and judicial decisions. The Reference Manual on Scientific Evidence is replete with supportive statements on the general acceptance of Paracelsus’ dictum. The chapter on epidemiology notes:

“The idea that the ‘dose makes the poison’ is a central tenet of toxicology and attributed to Paracelsus, in the sixteenth century… [T]his dictum reflects only the idea that there is a safe dose below which an agent does not cause any toxic effect.”

Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 603 & n.160, in Reference Manual on Scientific Evidence (3d ed. 2011). Citing an unpublished, non-scientific advocacy piece written for a regulatory agency, the chapter does, however, claim that “[t]he question whether there is a no-effect threshold dose is a controversial one in a variety of toxic substances areas.”[3] The epidemiology chapter thus appears to confuse two logically distinct propositions: that there is no threshold dose and that there is no demonstrated threshold dose.

The Reference Manual’s chapter on toxicology also weighs in on Paracelsus:

“There are three central tenets of toxicology. First, “the dose makes the poison”; this implies that all chemical agents are intrinsically hazardous—whether they cause harm is only a question of dose. Even water, if consumed in large quantities, can be toxic.”

Bernard D. Goldstein & Mary Sue Henifin, “Reference Guide on Toxicology,” 633, 636, in Reference Manual on Scientific Evidence (3d ed. 2011) (internal citations omitted).

Recently, Judge Richard Mark Gergel had the opportunity to explore the relevance of dose-response to plaintiffs’ claims that atorvastatin causes diabetes. In re Lipitor (Atorvastatin Calcium) Marketing, Sales Practices & Prod. Liab. Litig., MDL No. 2:14–mn–02502–RMG, Case Mgmt. Order 49, 2015 WL 6941132 (D.S.C. Oct. 22, 2015) [Lipitor]. Plaintiffs’ expert witnesses insisted that they could disregard dose once they had concluded that there was a causal association between atorvastatin at some dose and diabetes. On Rule 702 challenges to plaintiffs’ expert witnesses, the court held that, when there is a dose-response relationship and there is an absence of association at low doses, then plaintiffs must show, through expert witness testimony, that the medication is capable of causing the alleged harm at particular doses. The court permitted the plaintiffs’ expert witnesses to submit supplemental reports to address the dose issue, and the defendants to relodge their Rule 702 challenge after discovery on the new reports. Lipitor at *6.

The Lipitor court’s holding built on Judge Breyer’s treatment of dose in In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 524 F. Supp. 2d 1166, 1174-75 (N.D. Cal. 2007). Judge Breyer, Justice Breyer’s kid brother, denied defendants’ Rule 702 challenges to plaintiffs’ expert witnesses who opined that Bextra and Celebrex can cause heart attacks and strokes at 400 mg/day. For plaintiffs who ingested 200 mg/day, however, Judge Breyer held that the lower dose had to be analyzed separately, and he granted the motions to exclude plaintiffs’ expert witnesses’ opinions about the alleged harms caused by the lower dose. Lipitor at *1-2. The plaintiffs’ expert witnesses reached their causation opinions about 200 mg by cherry picking from the observational studies, and disregarding the randomized trials and meta-analyses of observational studies that failed to find an association between 200 mg/day and cardiovascular risk. Id. at *2. Given the lack of support for an association at 200 mg/day, the court rejected the speculative downward extrapolation the plaintiffs asserted.

Because of dose-response gradients, and the potential for a threshold, a risk estimate based upon greater doses or exposure does not apply to a person exposed at lower doses or exposure levels. See Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 613, in Reference Manual on Scientific Evidence (3d ed. 2011) (“[A] risk estimate from a study that involved a greater exposure is not applicable to an individual exposed to a lower dose.”).

In the Lipitor case, as in the Celebrex case, multiple studies reported no statistically significant associations between the lower doses and the claimed adverse outcome. This absence, combined with a putative dose-response relationship, made plaintiffs’ downward extrapolation impermissibly speculative. See, e.g., McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1241 (11th Cir. 2005) (reversing admission of expert witness’s testimony when the witness conceded a dose-response, but failed to address the dose of the medication needed to cause the claimed harm).

Courts sometimes confuse thresholds with dose-response relationships. The concepts are logically independent. There can be a dose-response relationship with or without a threshold. And there can be an absence of a dose-response relationship with a threshold, as in cases in which the effect is binary: positive at or above some threshold level, and negative below. A causal claim can go awry because it ignores the possible existence of a threshold, or the existence of a dose-response relationship. The latter error is commonplace in litigation and regulatory contexts, when scientific or legal advocates attempt to evaluate risk using data that are based upon higher exposures or doses. The late Irving Selikoff, no stranger to exaggerated claims, warned against this basic error, when he wrote that his asbestos insulator cancer data were inapposite for describing risks of other tradesmen:

“These particular figures apply to the particular groups of asbestos workers in this study. The net synergistic effect would not have been the same if their smoking habits had been different; and it probably would have been different if their lapsed time from first exposure to asbestos dust had been different or if the amount of asbestos dust they had inhaled had been different.”

E. Cuyler Hammond, Irving J. Selikoff, and Herbert Seidman, “Asbestos Exposure, Cigarette Smoking and Death Rates,” 330 Ann. N.Y. Acad. Sci. 473, 487 (1979).[4] Given that the dose-response between asbestos exposure and disease outcomes was an important tenet of Selikoff’s work, it is demonstrably incorrect for expert witnesses to invoke relative risks for heavily exposed asbestos insulators and apply them to less exposed workers, as though the risks were the same and there were no thresholds.

The legal insufficiency of equating high and low dose risk assessments has been noted by many courts. In Texas Independent Ginners Ass’n v. Marshall, 630 F.2d 398 (5th Cir. 1980), the Fifth Circuit reviewed an OSHA regulation promulgated to protect cotton gin operators from the dangers of byssinosis. OSHA based its risk assessments on cotton dust exposures experienced by workers in the fabric manufacturing industry, but the group of workers to be regulated had intermittent exposures, at different levels, from those of the workers in the studies relied upon. Because of this exposure-level disconnect, the Court of Appeals struck down the OSHA regulation. Id. at 409. OSHA’s extrapolation from high to low doses was based upon an assumption, not evidence, and the regulation could not survive even the deferential standard required for judicial review of federal agency action. Id.[5]

The fallacy of “extrapolation down” often turns on the glib assumption that an individual claimant must have experienced a “causative exposure” because he has the disease that can result at some higher level of exposure. Reasoning backwards from untoward outcome to sufficient dose, when dose is at issue, is a petitio principii, as recognized by several astute judges:

“The fallacy of the ‘extrapolation down’ argument is plainly illustrated by common sense and common experience. Large amounts of alcohol can intoxicate, larger amounts can kill; a very small amount, however, can do neither. Large amounts of nitroglycerine or arsenic can injure, larger amounts can kill; small amounts, however, are medicinal. Great volumes of water may be harmful, greater volumes or an extended absence of water can be lethal; moderate amounts of water, however, are healthful. In short, the poison is in the dose.”

In re Toxic Substances Cases, No. A.D. 03-319, No. GD 02-018135, 05-010028, 05-004662, 04-010451, 2006 WL 2404008, at *6-7 (Allegheny Cty. Ct. C.P. Aug. 17, 2006) (Colville, J.) (“Drs. Maddox and Laman attempt to ‘extrapolate down,’ reasoning that if high dose exposure is bad for you, then surely low dose exposure (indeed, no matter how low) must still be bad for you.”) (“simple logical error”), rev’d sub nom. Betz v. Pneumo Abex LLC, 998 A.2d 962 (Pa. Super. 2010), rev’d 615 Pa. 504, 44 A.3d 27 (2012).

Exposure Quantification

An obvious corollary of the fallacy of downward extrapolation is that claimants must have a reasonable estimate of their dose or exposure in order to place themselves on the dose-response curve, and to estimate in turn what their level of risk was before they developed the claimed harm. For example, in Mateer v. U.S. Aluminum Co., 1989 U.S. Dist. LEXIS 6323 (E.D. Pa. 1989), the court, applying Pennsylvania law, dismissed plaintiffs’ claim for personal injuries in a ground-water contamination case. Although the plaintiffs had proffered sufficient evidence of contamination, their expert witnesses failed to quantify the plaintiffs’ actual exposures. Without an estimate of the claimants’ actual exposure, the challenged expert witnesses could not give reliable, reasonably based opinions and conclusions about whether plaintiffs were injured from the alleged exposures. Id. at *9-11.[6]

Science and law are simpatico; dose or exposure matters, in pharmaceutical, occupational, and environmental cases.


[1] Joseph F. Borzelleca, “Paracelsus: Herald of Modern Toxicology,” 53 Toxicol. Sci. 2 (2000); David L. Eaton, “Scientific Judgment and Toxic Torts – A Primer in Toxicology for Judges and Lawyers,” 12 J.L. & Pol’y 5, 15 (2003); Ellen K. Silbergeld, “The Role of Toxicology in Causation: A Scientific Perspective,” 1 Cts. Health Sci. & L. 374, 378 (1991). Of course, the claims of endocrine disruption have challenged the generally accepted principle. See, e.g., Dan Fagin, “Toxicology: The learning curve,” Nature (24 October 2012) (misrepresenting Paracelsus’ dictum as meaning that dose responses will be predictably linear).

[2] See, e.g., Curtis D. Klaassen, “Principles of Toxicology and Treatment of Poisoning,” in Goodman and Gilman’s The Pharmacological Basis of Therapeutics 1739 (11th ed. 2008); Michael A Gallo, “History and Scope of Toxicology,” in Curtis D. Klaassen, ed., Casarett and Doull’s Toxicology: The Basic Science of Poisons 1, 4–5 (7th ed. 2008).

[3] Michael D. Green, D. Michal Freedman, and Leon Gordis, “Reference Guide on Epidemiology,” 549, 603 & n.160, in Reference Manual on Scientific Evidence (3d ed. 2011) (citing Irving J. Selikoff, Disability Compensation for Asbestos-Associated Disease in the United States: Report to the U.S. Department of Labor 181–220 (1981)). The chapter also cites two judicial decisions that clearly were influenced by advocacy science and regulatory assumptions. Ferebee v. Chevron Chemical Co., 736 F.2d 1529, 1536 (D.C. Cir. 1984) (commenting that low exposure effects are “one of the most sharply contested questions currently being debated in the medical community”); In re TMI Litig. Consol. Proc., 927 F. Supp. 834, 844–45 (M.D. Pa. 1996) (considering extrapolations from high radiation exposure to low exposure for inferences of causality).

[4] See also United States v. Reserve Mining Co., 380 F. Supp. 11, 52-53 (D. Minn. 1974) (questioning the appropriateness of comparing community asbestos exposures to occupational and industrial exposures). Risk assessment modesty was uncharacteristic of Irving Selikoff, who used insulator risk figures, which were biased high, to issue risk projections for total predicted asbestos-related mortality.

[5] See also In re “Agent Orange” Prod. Liab. Litig., 611 F. Supp. 1223, 1250 (E.D.N.Y. 1985), aff’d, 818 F.2d 187 (2d Cir. 1987), cert. denied, Pinkney v. Dow Chemical Co., 487 U.S. 1234 (1988) (noting that the plaintiffs’ expert witnesses relied upon studies that involved heavier exposures than those experienced by plaintiffs; the failure to address the claimants’ actual exposures rendered the witnesses’ proposed testimony legally irrelevant); Gulf South Insulation v. United States Consumer Products Safety Comm’n, 701 F.2d 1137, 1148 (5th Cir. 1983) (invalidating CPSC’s regulatory ban on urea formaldehyde foam insulation, as not supported by substantial evidence, when the agency based its ban upon high-exposure level studies and failed to quantify putative risks at actual exposure levels; criticizing extrapolations from high to low doses); Graham v. Wyeth Laboratories, 906 F.2d 1399, 1415 (10th Cir.) (holding that trial court abused its discretion in failing to grant new trial upon a later-discovered finding that plaintiff’s expert misstated the level of toxicity of defendant’s DTP vaccine by an order of magnitude), cert. denied, 111 S.Ct. 511 (1990).

Two dubious decisions that fail to acknowledge the fallacy of extrapolating down from high-exposure risk data have come out of the Fourth Circuit. See City of Greenville v. W.R. Grace & Co., 827 F.2d 975 (4th Cir. 1987) (affirming judgment based upon expert testimony that identified risk at low levels of asbestos exposure based upon studies at high levels of exposure); Smith v. Wyeth-Ayerst Labs Co., 278 F. Supp. 2d 684, 695 (W.D.N.C. 2003) (suggesting that expert witnesses may extrapolate down to lower doses, and even extrapolate to a different time window of latency).

[6] See also Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1111, 1113-14 (5th Cir. 1991) (en banc) (per curiam) (trial court may exclude opinion of expert witness whose opinion is based upon incomplete or inaccurate exposure data), cert. denied, 112 S. Ct. 1280 (1992); Wills v. Amerada Hess Corp., 2002 WL 140542, at *10 (S.D.N.Y. Jan. 31, 2002) (noting that the plaintiff’s expert witness failed to quantify the decedent’s exposure, but was nevertheless “ready to form a conclusion first, without any basis, and then try to justify it” by claiming that the decedent’s development of cancer was itself sufficient evidence that he had had intensive exposure to the alleged carcinogen).

Putting the Liability Spotlight on Employers

November 30th, 2015

In 2013, the Pennsylvania Supreme Court held that employers could be directly liable to employees for injuries that become manifest outside the time limits (300 weeks) of the Commonwealth’s workman’s compensation statute. Tooey v. AK Steel Corp., 81 A.3d 851 (Pa. 2013). The implications for so-called long latency, toxic tort claims were obvious, and they generated some commentary. See “Pennsylvania Workers Regain Their Right of Action in Tort against Employers for Latent Occupational Diseases” (Feb. 14, 2014); “The Erosion of Employer Immunity in Tort Litigation” (Jan. 20, 2015).

The Legal Intelligencer has now reported the first “cashing in” on the change in Pennsylvania law. Plaintiff’s lawyer, Benjamin Shein, took an employer to trial on claims that the employer was responsible for alleged asbestos exposure that caused John F. Busbey to develop mesothelioma. Bobbie R. Bailey of Leader & Berkon, in Los Angeles, defended. The case was tried before Philadelphia Judge Lisette Shirdan-Harris and a jury. After a three-week trial, on November 10, the jury returned a verdict in favor of plaintiff, against the employer defendant, in the amount of $1.7 million. Busbey v. ESAB Group, Phila. Court of Common Pleas, No. 120503046. Max Mitchell, “Employer Found Liable In Asbestos Verdict: Busbey v. ESAB Group $1.7 Million Verdict,” The Legal Intelligencer (Dec. 1, 2015).

As witnesses, Shein called frequent litigation-industry testifiers, Dr. Steven Markowitz on occupational disease, and Dr. Daniel Dupont, a local pulmonary physician. Shein also called one of the pink panther historians, Gerald Markowitz. See “Narratives & Historians for Hire” (Dec. 15, 2010). The employer defendant called an industrial hygienist, Delno D. Malzahn.

According to Ben Shein, the verdict represented the first trial win in Pennsylvania for an asbestos claim against an employer since the Pennsylvania Supreme Court decided Tooey in 2013. From the Legal Intelligencer’s account, and the line-up of litigation-industry witnesses, the plaintiff’s trial evidence on exposure and standard of care seems shaky, and the ultimate winner may not be discernible until appellate review is concluded.

In Illinois, an intermediate appellate court held out the prospect of a legal change similar to Tooey. In 2014, the Illinois Court of Appeals held that workman’s compensation petitioners whose claims fell outside the Illinois statute were not barred by the exclusive remedy provisions that gave employers immunity from civil suit. Folta v. Ferro Engineering, 2014 IL App (1st) 123219. See Patrick W. Stufflebeam, “Folta v. Ferro Engineering: A Shift in Illinois Workers’ Compensation Protection for Illinois Employers in Asbestos Cases,” News & Press: IDC Quarterly (Mar. 11, 2015).

The Illinois Supreme Court allowed an appeal, as well as extensive amicus briefings from the Illinois Trial Lawyers Association, the Asbestos Disease Awareness Organization, the Illinois AFL-CIO, the Illinois Self-Insurers’ Association, the Illinois Defense Trial Counsel, a joint brief from insurers,[1] and a joint brief from various manufacturing companies.[2]

Earlier this month, the Illinois Supreme Court reversed and held that even though claims fell outside the Illinois workman’s compensation statute, those claims were still barred by the Act’s exclusive remedy provisions that gave employers immunity from civil suit. Folta v. Ferro Engineering, 2015 IL 118070 (November 4, 2015).


[1] The American Insurance Association, Property Casualty Insurers Association of America, and the Travelers Indemnity Company.

[2] Caterpillar Inc., Aurora Pump Company, Innophos, Inc., Rockwell Automation, Inc., United States Steel Corporation, F.H. Leinweber Company, Inc., Driv-Lok, Inc., Ford Motor Company, and ExxonMobil Oil Corporation.

The opinions, statements, and asseverations expressed on Tortini are my own, or those of invited guests, and these writings do not necessarily represent the views of clients, friends, or family, even when supported by good and sufficient reason.