TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

Reanalysis of Epidemiologic Studies – Not Intrinsically WOEful

December 27th, 2012

A recent student law review article discusses reanalyses of epidemiologic studies, an important and overlooked topic in the jurisprudence of scientific evidence.  Alexander J. Bandza, “Epidemiological-Study Reanalyses and Daubert: A Modest Proposal to Level the Playing Field in Toxic Tort Litigation,” 39 Ecology L. Q. 247 (2012).

In the Daubert case itself, the Ninth Circuit, speaking through Judge Kozinski, avoided the methodological issues raised by Shanna Swan’s reanalysis of Bendectin epidemiologic studies, by assuming arguendo its validity, and holding that the small relative risk yielded by the reanalysis would not support a jury verdict of specific causation. Daubert v. Merrell Dow Pharm., Inc., 43 F.3d 1311, 1317–18 (9th Cir. 1995).

There is much that can, and should, be said about reanalyses in litigation and in the scientific process, but Bandza never really gets down to the business at hand. His 36-page article curiously does not begin to address reanalysis until the bottom of its 20th page. The first half of the article, and then some, reviews time-worn insights and factoids about scientific evidence. Finally, at page 266, the author introduces and defines reanalysis:

“Reanalysis occurs ‘when a person other than the original investigator obtains an epidemiologic data set and conducts analyses to evaluate the quality, reliability or validity of the dataset, methods, results or conclusions reported by the original investigator’.”

Bandza at 266 (quoting Raymond Neutra et al., “Toward Guidelines for the Ethical Reanalysis and Reinterpretation of Another’s Research,” 17 Epidemiology 335, 335 (2006)).

Bandza correctly identifies some of the bases for judicial hostility to re-analyses. For instance, some courts are troubled or confused when expert witnesses disagree with, or reevaluate, the conclusions of a published article. The witnesses’ conclusions may not be published or peer reviewed, and thus the proffered testimony fails one of the Daubert factors.  Bandza correctly notes that peer review is greatly overrated by judges. Bandza at 270. I would add that peer review is an inappropriate proxy for validity, a “test” that reflects a distrust of the unpublished.  Unfortunately, this judicial factor ignores the poor quality of much of what is published, and the extreme variability of the peer-review process. Judges overrate peer review because they are desperate for a proxy for the validity of the studies relied upon, one that will allow them to pass their gatekeeping responsibility on to the jury. Furthermore, the authors’ own conclusions are hearsay, and their qualifications are often not fully before the court.  What is important is the opinion of the expert witness, who can be cross-examined and challenged.  See “FOLLOW THE DATA, NOT THE DISCUSSION.” What counts is the validity of the expert witness’s reasoning and inferences.

Bandza’s article, which by its title advertises itself to be about re-analyses, gives only a few examples of re-analyses, without much detail.  He notes concerns that reanalyses may impugn the reputations of published scientists, and burden them with defending their data.  Who would have it any other way? After this short discussion, the article careens into a discussion of “weight of the evidence” (WOE) methodology. Bandza tells us that the rejection of re-analyses in judicial proceedings “implicitly rules out using the weight-of-the-evidence methodology often appropriate for, or even necessary to, scientific analysis of potentially toxic substances.” Bandza at 270.  This argument, however, is one sustained non sequitur.  WOE is defined in several ways, but none of the definitions requires or suggests the incorporation of re-analyses. Re-analyses raise reliability and validity issues regardless of whether an expert witness incorporates them into a WOE assessment. Yet Bandza tells us that the rejection of re-analyses “Implicitly Ignores the Weight-of-the-Evidence Methodology Appropriate for the Scientific Analysis of Potentially Toxic Substances.” Bandza at 274. This conclusion simply does not follow from the nature of WOE methodology or of reanalyses.

Bandza’s ipse dixit raises the independent issue of whether WOE methodology is appropriate for scientific analysis. WOE is described as embraced or used by regulatory agencies, but that description hardly recommends the methodology as the basis for a scientific, as opposed to a regulatory, conclusion.  Furthermore, Bandza ignores the ambiguity and variability of WOE by referring to it as a methodology, when in reality WOE describes a wide variety of methods of reasoning to a conclusion. Bandza cites Douglas Weed’s article on WOE, but fails to come to grips with the serious objections Weed raises to the use of WOE methodologies.  Douglas Weed, “Weight of Evidence: A Review of Concept and Methods,” 25 Risk Analysis 1545, 1546–52 (2005) (describing the vagueness and imprecision of WOE methodologies). See also “WOE-fully Inadequate Methodology – An Ipse Dixit By Another Name.”

Bandza concludes his article with a hymn to the First Circuit’s decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11 (1st Cir. 2011). Plaintiffs’ expert witness, Dr. Martyn Smith, claimed to have performed a WOE analysis, which in turn was based upon a re-analysis of several epidemiologic studies. True, true, and immaterial.  The re-analyses were not inherently a part of a WOE approach. Presumably, Smith re-analyzed some of the epidemiologic studies because he felt that the data as presented did not support his desired conclusion.  Given the motivations at work, the district court in Milward was correct to look skeptically and critically at the re-analyses.

Bandza notes that there are procedural and evidentiary safeguards in federal court against unreliable or invalid re-analyses of epidemiologic studies.  Bandza at 277. Yes, there are safeguards but they help only when they are actually used. The First Circuit in Milward reversed the district court for looking too closely at the re-analyses, spouting the chestnut that the objections went to the weight not the admissibility of the evidence.  Bandza embraces the rhetoric of the Circuit, but he offers no description or analysis of the liberties that Martyn Smith took with the data, or the reasonableness of Smith’s reliance upon the re-analyzed data.

There is no necessary connection between WOE methodologies and re-analyses of epidemiologic studies.  Re-analyses can be done properly to support or deconstruct the conclusions of published papers.  As Bandza points out, some re-analyses may go on to be peer reviewed and published themselves.  Validity is the key, and WOE methodologies have little to do with the process of evaluating the original or the re-analyzed study.


Wells v. Ortho Pharmaceutical Corp. Reconsidered – Part 6

November 21st, 2012

In 1984, before Judge Shoob gave his verdict in the Wells case, another firm filed a birth defects case against Ortho for failure to warn in connection with its non-ionic surfactant spermicides, in the same federal district court, the Northern District of Georgia. The mother in Smith used Ortho’s product about the same time as the mother in Wells (in 1980).  The case was assigned to Judge Shoob, who recused himself.  Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561, 1562 n.1 (N.D. Ga. 1991) (no reasons for the recusal provided).  The Smith case was reassigned to Judge Horace Ward, who entertained Ortho’s motion for summary judgment in July 1988.  Two and one-half years later, Judge Ward granted summary judgment to Ortho on grounds that the plaintiffs’ expert witnesses’ testimony was not based upon the type of data reasonably relied upon by experts in the field, and was thus inadmissible under Federal Rule of Evidence 703. 770 F. Supp. at 1581.

A prevalent interpretation of the split between Wells and Smith is that the scientific evidence developed with new studies, and that the scientific community’s views matured in the five years between the two district court opinions. The discussion in Modern Scientific Evidence is typical:

“As epidemiological evidence develops over time, courts may change their view as to whether testimony based on other evidence is admissible. In this regard it is worth comparing Wells v. Ortho Pharmaceutical Corp., 788 F.2d 741 (11th Cir. 1986), with Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561 (N.D. Ga. 1991). Both involve allegations that the use of spermicide caused a birth defect. At the time of the Wells case there was limited epidemiological evidence and this type of claim was relatively novel.  In a bench trial the court found for the plaintiff.  *** The Smith court, writing five years later, noted that, ‘The issue of causation with respect to spermicide and birth defects has been extensively researched since the Wells decision.’ Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561, 1563 (N.D. Ga. 1991).”

1 David L. Faigman, Michael J. Saks, Joseph Sanders, and Edward K. Cheng, Modern Scientific Evidence:  The Law and Science of Expert Testimony, “Chapter 23 – Epidemiology,” § 23:4, at 213 n.12 (West 2011) (internal citations omitted).

Although Judge Ward was being charitable to his judicial colleague, this attempt to reconcile Wells and Smith does a disservice to Judge Ward’s hard work in Smith, and obscures Judge Shoob’s errors in Wells.

Even a casual reading of Smith and Wells reveals that the injuries were completely different.  Plaintiff Crystal Smith was born with a chromosomal defect known as Trisomy-18; Plaintiff Katie Wells was born with limb reduction deficits.  Some studies relevant to one injury had no information about the other.  Other studies, which addressed both injuries, yielded different results for the different injuries.  Although some additional studies were available to Judge Ward in 1988, the newer evidence is hardly the compelling difference between the two cases.

Perhaps the most important difference between the cases is that in Smith, the biological plausibility that spermicides could cause Trisomy-18 was completely absent.  The chromosomal defect arises from meiotic nondisjunction, an error in meiosis, which is part of the process in which germ cells are formed.  Simply put, spermicides arrive on the scene too late to cause Trisomy-18.  Notwithstanding the profound differences between the injuries involved in Wells and Smith, the Smith plaintiffs sought the application of collateral estoppel.  Judge Ward refused this motion, on the basis of the factual differences in the cases, as well as the availability of new evidence.  770 F.Supp. at 1562.

The difference in injuries, however, was not the only important difference between these two cases.  Wells was actually tried, apparently without any challenge under Frye, or Rules 702 or 703, to the admissibility of expert witness testimony.  There is little to no discussion of scientific validity of studies, or analysis of the requisites for evaluating associations for causality.  It is difficult to escape the conclusion that Judge Shoob decided the Wells case on the basis of superficial appearances, and that he frequently ignored validity concerns in drawing invidious distinctions between plaintiffs’ and defendant’s expert witnesses and their “credibility.”  Smith, on the other hand, was never tried.  Judge Ward entertained and granted dispositive motions for summary judgment, on grounds that the plaintiffs’ expert witnesses’ testimony was inadmissible. Legally, the cases are light years apart.

In Smith, Judge Ward evaluated the same FDA reports and decisions seen by Judge Shoob.  Judge Ward did not, however, dismiss these agency materials simply because one or two of dozens of independent scientists involved had some fleeting connection with industry. 770 F.Supp. at 1563-64.

Judge Ward engaged with the structure and bases of the expert witnesses’ opinions, under Rules 702 and 703.  The Smith case thus turned on whether expert witness opinions were admissible, an issue not considered or discussed in Wells.  As was often the case before the Supreme Court decided Daubert in 1993, Judge Ward paid little attention to Rule 702’s requirement of helpfulness or knowledge.  The court’s 702 analysis was limited to qualifications.  Id. at 1566-67.  The qualifications of the plaintiffs’ witnesses were rather marginal.  They relied upon genetic and epidemiologic studies, but they had little training or experience in these disciplines. Finding the plaintiffs’ expert witnesses to meet the low threshold for qualification to offer an opinion in court, Judge Ward focused on Rule 703’s requirement that expert witnesses reasonably rely upon facts and data that are not otherwise admissible.

The trial court in Smith struggled with how it should analyze the underpinnings of plaintiffs’ witnesses’ proffered testimony.  The court acknowledged that conflicts between expert witnesses typically raise questions of weight, not admissibility.  Id. at 1569.  Ortho had, however, challenged plaintiffs’ witnesses for having given opinions that lacked a “sound underlying methodology.” Id.  The trial court found at least one Fifth Circuit case that suggested that Rule 703 requires trial courts to evaluate the reliability of expert witnesses’ sources.  Id. (citing Soden v. Freightliner Corp., 714 F.2d 498, 505 (5th Cir. 1983)). Elsewhere, the trial court also found precedent from Judge Weinstein’s opinion in Agent Orange, as well as Court of Appeals decisions involving Bendectin, all of which turned to Rule 703 as the legal basis for reviewing, and in some cases limiting or excluding, expert witness opinion testimony.  Id.

The defendant’s argument under Rule 703 was strained; Ortho argued that the plaintiffs’

“experts’ selection and use of the epidemiological data is faulty and thus provides an insufficient basis upon which experts in the field of diagnosing the source of birth defects normally form their opinions. The defendant also contends that the plaintiffs’ experts’ data on genetics is not of the kind reasonably relied upon by experts in field of determining causation of birth defects.”

Id. at 1572.  Nothing in Rule 703 addresses the completeness or thoroughness of expert witnesses in their consideration of facts and data; nor does Rule 703 address the sufficiency of data or the validity vel non of inferences drawn from facts and data considered.  Nonetheless, the trial court in Smith took Rule 703 as its legal basis for exploring the epistemic warrant for plaintiffs’ witnesses’ causation opinions.

Although plaintiffs’ expert witnesses stated that they had relied upon epidemiologic studies and methods, the trial court in Smith went beyond their asseverations, exploring the credibility of these witnesses at a deeper level.  The court reviewed and discussed the basic structure of epidemiologic studies, and noted that the objective of such studies is to provide a statistical analysis:

“The objective of both case-control and cohort studies is to determine whether the difference observed in the two groups, if any, is ‘statistically significant’, (that is whether the difference found in the particular study did not occur by chance alone).40 However, statistical methods alone, or the finding of a statistically significant association in one study, do not establish a causal relationship.41 As one authority states:

‘Statistical methods alone cannot establish proof of a causal relationship in an association’.42

As a result, once a statistical association is found in an epidemiological study, that data must then be evaluated in a systematic manner to determine causation. If such an association is present, then the researcher looks for ‘bias’ in the study.  Bias refers to the existence of factors in the design of a study or in the manner in which the study was carried out which might distort the result.43

If a statistically significant association is found and there is no apparent ‘bias’, an inference is created that there may be a cause-and-effect relationship between the agent and the medical effect. To confirm or rebut that inference, an epidemiologist must apply five criteria in making judgments as to whether the associations found reflect a cause-and-effect relationship.44 The five criteria are:

1. The consistency of the association;

2. The strength of the association;

3. The specificity of the association;

4. The temporal relationship of the association; and,

5. The coherence of the association.

Assuming there is some statistical association, it is these five criteria that provide the generally accepted method of establishing causation between drugs or chemicals and birth defects.45”

The Smith court acknowledged that there were differences of opinion over the weighting of these five factors, but noted that some of them were very important to drawing a reliable inference of causality.  Id. at 1575.
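
The passage the court quoted compresses a good deal of statistics into a few sentences. As a minimal sketch of what a “statistically significant association” looks like in practice (all counts below are invented for illustration; they come from no actual study), one can compute an odds ratio and its 95% confidence interval from a 2×2 case-control table:

```python
import math

# Hypothetical 2x2 case-control table (invented numbers):
#                 cases   controls
# exposed           30        20
# unexposed         70        80
a, b, c, d = 30, 20, 70, 80

odds_ratio = (a * d) / (b * c)                # (30*80)/(20*70) ~ 1.71
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method for SE(ln OR)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")
# OR = 1.71, 95% CI (0.89, 3.29): the interval includes 1.0, so this
# hypothetical association would not be statistically significant at
# the conventional 0.05 level.
```

Even when the interval excludes 1.0, on the court’s account the researcher must still screen the study for bias and weigh the five criteria before drawing any inference of causation.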

A major paradigm shift thus separates Wells and Smith.  The trial court in Wells contented itself with superficial and subjective indicia of witnesses’ personal credibility; the trial court in Smith delved into the methodology of drawing an appropriate scientific conclusion about causation.  Telling was the Smith court’s citation to Moultrie v. Martin, 690 F.2d 1078, 1082 (4th Cir. 1982) (“In borrowing from another discipline, a litigant cannot be selective in which principles are applied.”).  770 F.Supp. at 1575 & n.45.  Gone is the Wells retreat from engagement with science, and the dodge that the court must make a legal, not a scientific, decision.

Applying the relevant principles, the Smith court found that the plaintiffs’ expert witnesses had deviated from the scientific standards of reasoning and analysis:

“It is apparent to the court that the testimony of Doctors Bussey and Holbrook is insufficiently grounded in any reliable evidence. * * * The conclusions Doctors Bussey and Holbrook reach are also insufficient as a basis for a finding of causality because they fail to consider critical information, such as the most relevant epidemiologic studies and the other possible causes of disease.81

The court finds that the opinions of plaintiffs’ experts are not based upon the type of data reasonably relied upon by experts in determining the cause of birth defects. Experts in determining birth defects rely upon a consensus in genetic or epidemiological investigations or specific generally accepted studies in these fields. While a consensus in genetics or epidemiology is not a prerequisite to a finding of causation in any and all birth defect cases, Rule 703 requires some reliable evidence for the basis of an expert’s opinion.

Experts in determining birth defects also utilize methodologies and protocols not followed by plaintiffs’ experts. Without a well-founded methodology, opinions which run contrary to the consensus of the scientific community and are not supported by any reliable data are necessarily speculative and lacking in the type of foundation necessary to be admissible.

For the foregoing reasons, the court finds that plaintiffs have failed to produce admissible evidence sufficient to show that defendant’s product caused Crystal’s birth defects.”

Id. at 1581.  Rule 703 was forced into service to filter out methodologically specious opinions.

Not all was smooth sailing for Judge Ward.  Like Judge Shoob, Judge Ward seemed to think that a physical examination of the plaintiff provided helpful, relevant evidence, but he never articulated the basis for that view. (His Honor did note that the parties agreed that the physical examination offered no probative evidence about causation.  Id. at 1572 n.32.) No harm came of this opinion.  Judge Ward wrestled with the lack of peer review of some unpublished studies, and the existence of a study only in abstract form.  See, e.g., id. at 1579 (“a scientific study not subject to peer review has little probative value”); id. at 1578 (insightfully noting that an abstract had insufficient data to permit a reader to evaluate its conclusions).  The Smith court recognized the importance of statistical analysis, but it confused Bayesian posterior probabilities with significance probabilities:

“Because epidemiology involves evidence on causation derived from group based information, rather than specific conclusions regarding causation in an individual case, epidemiology will not conclusively prove or disprove that an agent or chemical causes a particular birth defect. Instead, its probative value lies in the statistical likelihood of a specific agent causing a specific defect. If the statistical likelihood is negligible, it establishes a reasonable degree of medical certainty that there is no cause-and-effect relationship absent some other evidence.”

The confusion here is hardly unique, but ultimately it did not prevent Judge Ward from reaching a sound result in Smith.
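
The distinction deserves a concrete illustration. A significance probability (p-value) is the probability of seeing data at least as extreme as those observed, assuming there is no effect; the quantity the court describes, the probability that a specific agent causes a specific defect, is a Bayesian posterior probability, which also depends on the prior probability of an effect and the study’s power. A minimal numerical sketch, with all numbers invented for illustration:

```python
# Invented numbers, for illustration only.  The p-value is treated
# loosely as P(data | no effect); the posterior P(no effect | data)
# comes from Bayes' theorem and also depends on the prior and power.

prior_no_effect = 0.9   # assumed prior probability that there is no effect
p_value = 0.04          # P(data | no effect)
power = 0.8             # P(data | real effect)

posterior_no_effect = (p_value * prior_no_effect) / (
    p_value * prior_no_effect + power * (1 - prior_no_effect))

print(f"significance probability:      {p_value:.2f}")
print(f"posterior P(no effect | data): {posterior_no_effect:.2f}")  # 0.31
# A "significant" p-value of 0.04 coexists here with a posterior
# probability of about 0.31 that there is no effect at all; the two
# probabilities answer different questions.
```

The court’s phrase “statistical likelihood of a specific agent causing a specific defect” describes the posterior probability, not the significance probability that epidemiologic studies actually report.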

What intervened between Wells and Smith was not any major change in the scientific evidence on spermicides and birth defects; the sea change came in the form of judicial attitudes toward the judge’s role in evaluating expert witness opinion testimony.  In 1986, for instance, after the Court of Appeals affirmed the judgment in Wells, Judge Higginbotham, speaking for a panel of the Fifth Circuit, declared:

“Our message to our able trial colleagues: it is time to take hold of expert testimony in federal trials.”

In re Air Crash Disaster at New Orleans, 795 F.2d 1230, 1234 (5th Cir. 1986).  By the time the motion for summary judgment in Smith was decided, that time had come.

Union of Concerned Scientists on Criticism and Personal Attacks on Scientists

October 12th, 2012

The Union of Concerned Scientists (UCS)  has produced a glossy pamphlet with a checklist of how scientists may respond to criticism and personal attacks.  See UCS – Science in an Age of Scrutiny: How Scientists Can Respond to Criticism and Personal Attacks (2012).

The rationale for this publication is described at the UCS website.  The UCS notes that scientists are under a great deal of scrutiny, and “attack,” especially when their research is at the center of contentious debate over public policy.  According to the UCS, scientists are “sometimes attacked by individuals who do not like the research results. These attacks can take multiple forms—emails, newspaper op-eds, blogs, open-records requests, even subpoenas—but the goals are the same: to discredit the research by discrediting the researcher.”

I am all for protecting scientists and researchers from personal attacks.  The UCS account, however, seems a bit narcissistic.  The UCS is making an ad hominem attack on the putative attackers for what they claim is an ad hominem attack on the researchers.  What if the so-called attackers don’t care a bit about discrediting the researchers, but only the research?

The “even subpoenas” got my attention.  Subpoenas have been propounded for good reason, and with good effect, on litigation-related research. See, e.g., In re Welding Fume Prods. Liab. Litig., MDL 1535, 2005 WL 5417815 (N.D. Ohio Aug. 8, 2005) (upholding defendants’ subpoena for documents and things from Dr. Racette, the author of a study on welding and parkinsonism). The UCS has thus attacked the motives of lawyers charged with protecting their clients from dubious scientific research; I suppose we could return the epithet and declare that the UCS goal is to discredit the process of compelling data sharing by discrediting the motives of the persons seeking data sharing.

Subpoenas served upon independent researchers, whose work bears on the issues in litigation, are a valid part of the litigation discovery process.  Litigants, especially defendants who are involuntarily before a tribunal by compulsory process, are entitled to “every man’s evidence.”

The Union of Concerned Scientists seems either unduly sensitive, or cavalier and careless, in its generalization about the goals of lawyers who propound subpoenas.  The goal is typically not to discredit the researcher.  The personality, reputation, and position of the researcher are irrelevant; it’s about the data.

The Federal Judicial Center’s Manual for Complex Litigation describes subpoenas for researchers’ underlying data and materials at some length.  See Fed. Jud. Center, Manual for Complex Litigation § 22.87 (4th ed. 2004).  The Manual acknowledges that the federal courts have protected unpublished research from discovery, but that courts permit discovery of underlying data and materials from studies that have been published.  Federal Rule of Civil Procedure 45(c)(3)(B)(ii) allows courts to enforce subpoenas against non-parties, on a showing of “substantial need for the testimony that cannot be otherwise met without undue hardship,” and on assurance that the subpoenaed third parties “will be reasonably compensated.” Manual at 444-45.  The federal courts have recognized litigants’ need to obtain, examine, and re-analyze the data underlying research studies used by their adversaries against them.  Although researchers have interests that should be protected in the discovery process, such as their claims “for protection of confidentiality, intellectual property rights, research privilege, and the integrity of the research,” these claims must be balanced against the necessity of the evidence in the litigation process.  Id.

Of course, when the research is sponsored by litigants, whether by financial assistance or by assistance in recruiting study participants, and is completed, “courts generally require production of all data; for pending studies, courts often require disclosure of the written protocol, the statistical plan, sample data entry forms, and a specific description of the progress of the study until it is completed.” Id.

Some have argued that the scientific enterprise should be immune from the rough and tumble of legal discovery because its essential collaborative nature is threatened by the adversarial interests at play in litigation.  Professor George Olah, in accepting his Nobel Prize in Chemistry, rebutted this sentiment:

“Intensive, critical studies of a controversial topic always help to eliminate the possibility of any errors. One of my favorite quotation is that by George von Bekessy (Nobel Prize in Medicine, 1961).

‘[One] way of dealing with errors is to have friends who are willing to spend the time necessary to carry out a critical examination of the experimental design beforehand and the results after the experiments have been completed. An even better way is to have an enemy. An enemy is willing to devote a vast amount of time and brain power to ferreting out errors both large and small, and this without any compensation. The trouble is that really capable enemies are scarce; most of them are only ordinary. Another trouble with enemies is that they sometimes develop into friends and lose a good deal of their zeal. It was in this way the writer lost his three best enemies. Everyone, not just scientists, need a few good enemies!’”

George A. Olah, “My Search for Carbocations and Their Role in Chemistry,” Nobel Lecture (Dec. 8, 1994), quoting George von Békésy, Experiments in Hearing 8 (N.Y. 1960).  The UCS should rejoice for its intellectual enemies.  “Out of life’s school of war: What does not destroy me, makes me stronger.”  Friedrich Nietzsche, The Twilight of the Idols, Maxim 8 (1899).

Viagra, Part II — MDL Court Sees The Light – Bad Data Trump Nuances of Statistical Inference

July 8th, 2012

In the Viagra vision loss MDL, the first Daubert hearing did not end well for the defense.  Judge Magnuson refused to go beyond conclusory statements by the plaintiffs’ expert witness, Gerald McGwin, and to examine the qualitative and quantitative evaluative errors invoked to support plaintiffs’ health claims.  The weakness of McGwin’s evidence, however, appeared to encourage Judge Magnuson to authorize extensive discovery into McGwin’s study.  In re Viagra Products Liab. Litig., 572 F. Supp. 2d 1071, 1090 (D. Minn. 2008).

The discovery into McGwin’s study had already been underway, with subpoenas to him and to his academic institution.  As it turned out, defendant’s discovery into the data and documents underlying McGwin’s study won the day.  Although Judge Magnuson struggled with inferential statistics, he understood the direct attack on the integrity of McGwin’s data.  Over a year after denying defendant’s Rule 702 motion to exclude Gerald McGwin, the MDL court reconsidered and granted the motion.  In re Viagra Products Liab. Litig., 658 F. Supp. 2d 936, 945 (D. Minn. 2009).

The basic data on prior exposures and risk factors for the McGwin study were collected by telephone surveys, from which the information was coded into an electronic dataset.  In analyzing the data, McGwin used the electronic dataset and not the survey forms.  Id. at 939.  The transfer from survey forms to electronic dataset did not go smoothly; about 11 patients were miscoded as “exposed” when their use of Viagra post-dated the onset of NAION. Id. at 942.  Furthermore, the published article incorrectly stated personal history of heart attack as a “risk factor”; the survey had inquired about family, not personal, history of heart attack. Id. at 944.
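
The practical effect of such miscoding is easy to see in a small sketch (the 2×2 counts below are invented for illustration, not taken from the study): moving roughly a dozen cases from the unexposed to the exposed column can inflate an odds ratio dramatically in a small dataset.

```python
# Invented counts, for illustration only -- not McGwin's actual data.
# Shows how coding 11 unexposed cases as "exposed" can inflate the
# odds ratio in a small case-control dataset.

def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    return ((exposed_cases * unexposed_controls) /
            (exposed_controls * unexposed_cases))

print(f"as miscoded: OR = {odds_ratio(21, 10, 59, 90):.2f}")  # 3.20
print(f"corrected:   OR = {odds_ratio(10, 10, 70, 90):.2f}")  # 1.29
```

The actual magnitude of the error in McGwin 2006 would depend on the real counts; the point is only that a handful of miscoded subjects can drive an apparent association in sparse data.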

The plaintiffs threw several bombs in response, but without legal effect.  First, the plaintiffs claimed that the study participants had been recontacted and the database had been corrected, but they were unable to document this process or the alleged corrections.  Id. at 943.  Furthermore, the plaintiffs could not explain how, if their contention had been true, McGwin would not have committed serious violations of his university’s institutional review board’s regulations with respect to deviations from the original protocol.  Id. at 943 n.7.

Second, the plaintiffs argued that the underlying survey forms were “inadmissible” and thus the defense could not use them to impeach the McGwin study.  Some might think this a duplicitous argument, utterly at odds with Rule 703 – rely upon a study but prevent use of the underlying data and documents to explain that the study does not show what it purports to show.  The MDL court spared the plaintiffs the embarrassment of ruling that the documents on which McGwin had based his study were inadmissible, and found that the forms were business records, admissible under Federal Rule of Evidence 803(6).  The court could have gone further to point out that McGwin’s reliance upon hearsay in the form of his study, McGwin 2006, opened the door to impeaching the hearsay relied upon with other hearsay.  See Rule 806.

When defense counsel sat down with McGwin in a deposition, they found that he had not undertaken any new analyses of corrected data.  Plaintiffs’ counsel had directed him not to do so.  Id. at 940-41.  But then, after the deposition was over, McGwin submitted a letter to the journal to report a corrected analysis.  Pfizer’s counsel obtained the letter in response to their subpoena to McGwin’s university, the University of Alabama, Birmingham.  Mirabile dictu; now the increased risk appeared limited only to the defendant’s medication, Viagra!

The trial court was not amused.  First, the new analysis was not peer reviewed, and the court had placed a great deal of emphasis on peer review in denying the first challenge to McGwin.  Second, the new analysis was no longer that of an independent scientist; it was conducted, and submitted as a letter to the editor, while McGwin was working for plaintiffs’ counsel.  Third, the plaintiffs and McGwin conceded that the data were not accurate.  Last, but not least, the trial court clearly was not pleased that the plaintiffs’ counsel had deliberately delayed McGwin’s further analyses until after the deposition, and then tried to submit yet another supplemental report with those further analyses. In sum:

“the Court finds good reason to vacate its original Daubert Order permitting Dr. McGwin to testify as a general causation expert based on the McGwin Study as published. Almost every indicia of reliability the Court relied on in its previous Daubert Order regarding the McGwin Study has been shown now to be unreliable.  Peer review and publication mean little if a study is not based on accurate underlying data. Likewise, the known rate of error is also meaningless if it is based on inaccurate data. Even if the McGwin Study as published was conducted according to generally accepted epidemiologic research and did not result from post-litigation research, the fact that the McGwin Study appears to have been based on data that cannot now be documented or supported renders it inadmissibly unreliable. The Court concludes that under Daubert, Dr. McGwin’s opinion, to the extent that it is based on the McGwin Study as published, lacks sufficient indicia of reliability to be admitted as a general causation opinion.”

Id. at 945-46.  The remaining evidence was the Margo & French study, but McGwin had previously criticized that study as lacking data that ensured that Viagra use preceded onset of NAION.  In the end, McGwin was left with bupkes, and the plaintiffs were left with even less.

*******************

McGwin 2006 Was Also A Pain in the Rear End for McGwin

The Rule 702 motions and hearings on McGwin’s proposed testimony had consequences in the scientific world itself.  In 2011, the British Journal of Ophthalmology retracted McGwin’s 2006 paper.  “Retraction: Non-arteritic anterior ischaemic optic neuropathy and the treatment of erectile dysfunction,” 95 Brit. J. Ophthalmol. 595 (2011).

Interestingly, the retraction was reported in the Retraction Watch blog, “Retractile dysfunction? Author says journal yanked paper linking Viagra, Cialis to vision problem after legal threats.”  The blog treated the retraction as routine except for the hint of “legal pressure”:

“One of the authors of the paper, a researcher at the University of Alabama named Gerald McGwin Jr., told us that the journal retracted the article because it had become a tool in a lawsuit involving Pfizer, which makes Viagra, and, presumably, men who’d developed blindness after taking the drug:

‘The article just became too much of a pain in the rear end. It became one of those things where we couldn’t provide all the relevant documentation [to the university, which had to provide records for attorneys].’

Ultimately, however, McGwin said that the BJO pulled the plug on the paper.”

Id. The legal threat is hard to discern other than the fact that lawyers wanted to see something that peer reviewers almost never see – the documentation underlying the published paper.  So now, the study that formed the basis for the original ruling against Pfizer floats aimlessly as a derelict on the sea of science.  McGwin is, however, still at his craft.  In a study he published in 2010, he claimed that Viagra but not Cialis use was associated with hearing impairment.  Gerald McGwin, Jr, “Phosphodiesterase Type 5 Inhibitor Use and Hearing Impairment,” 136 Arch. Otolaryngol. Head & Neck Surgery 488 (2010).

Where are Senator Grassley and Congressman Waxman when you need them?

Ethics and Statistics

January 21st, 2012

Chance magazine has started a new feature, the “Ethics and Statistics” column, which is likely to be of interest to lawyers and to statisticians who work on litigation issues.  The column is edited by Andrew Gelman.  Judging from Gelman’s first column, I think that the column may well become a valuable forum for important scientific and legal issues arising from studies used in public policy formulation, and in reaching conclusions that are the bases for scientific expert witnesses’ testimony in court.

Andrew Gelman is a professor of statistics and political science at Columbia University.  He is also the director of the University’s Applied Statistics Center.  Gelman’s inaugural column touches on an issue of great importance to legal counsel who litigate scientific issues:  access to the underlying data in the studies that are the bases for expert witness opinions.  See Andrew Gelman, “Open Data and Open Methods,” 24 Chance 51 (2011).

Gelman acknowledges that conflicts are not only driven by monetary gain; they can be potently raised by positions or causes espoused by the writer:

“An ethics problem arises when you are considering an action that

(a) benefits you or some cause you support,

(b) hurts or reduces benefits to others, and

(c) violates some rule.”

Id. at 51a.

Positional conflicts among scientists whose studies touch upon policy issues give rise to “the ethical imperative to share data.”  Id. at 51c.  Naming names, Professor Gelman relates an incident involving an EPA scientist, Carl Blackman, who had presented a study on the supposed health effects of EMF radiation.  Skeptical of how Blackman had analyzed his data, Gelman wrote to him to request the data in order to carry out additional, alternative statistical analyses.  Blackman answered that he did not think these other analyses were needed, and he declined to share his data.

This sort of refusal is all too common, and typical of the arrogance of scientists who do not want others to be able to take a hard look at how they arrived at their conclusions.  Gelman reminds us that:

“Refusing to share your data is improper… .”

* * * *

“[S]haring data is central to scientific ethics.  If you really believe your results, you should want your data out in the open. If, on the other hand, you have a sneaking suspicion that maybe there’s something there you don’t want to see, and then you keep your raw data hidden, it’s a problem.”

* * * *

“Especially for high-stakes policy questions (such as the risks of electric power lines), transparency is important, and we support initiatives for automatically making data public upon publication of results so researchers can share data without it being a burden.”

Id. at 53.

To be sure, there are some problems with sharing data, but none that is insuperable, and none that should be an excuse for withholding data.  The logistical, ethical, and practical problems of data sharing should now be anticipated long before publication and the requests for data sharing arrive.

Indeed, the National Institutes of Health requires data-sharing plans to be part of a protocol for a federally funded study.  See Final NIH Statement on Sharing Research Data (Feb. 26, 2003). Unfortunately, the NIH’s implementation and enforcement of its data-sharing policy is as spotty as a Damien Hirst painting.  See “Seeing Spots,” The New Yorker (Jan. 23, 2012).

Beware the Academic-Publishing Complex!

January 11th, 2012

Today’s New York Times contains an important editorial on an attempt by some congressmen to undermine access to federally funded research.  See Michael B. Eisen, “Research Bought, Then Paid For,” New York Times (January 11, 2012).  Eisen’s editorial alerts us to this attempt to undo a federal legal requirement that federally funded medical research be made available, for free, on the National Library of Medicine’s Web site (NLM).

As a founder of the Public Library of Science (PLoS), which is committed to promoting and implementing the free distribution of scientific research, Eisen may be regarded as an “interested” or a biased commentator.  Such a simple-minded ascription of bias would be wrong. The PLoS has become an important distribution source of research results in the world of science, and competes with the publishing oligarchies:  Elsevier, Springer, and others.  Articles of the sort that PLoS makes available for free are sold by publishers for $40 or more.  Subscriptions from these oligarchical sources are often priced in the thousands of dollars per year. Eisen’s simple and unassailable point is that the public, whether the medical profession, patients and citizens, or students and teachers, should be able to read about the results of research funded with their tax monies.

“[I]f the taxpayers paid for it, they own it.”

If the United States government and its employees do not enjoy copyright protections for their creative work (and they do not), then neither should their contractors.

Public access is all the more important given that the mainstream media seems so reluctant or unable to cover scientific research in a thoughtful and incisive way.

The Bill goes beyond merely unraveling the requirement of making published papers available free of charge at the NLM.  The language of the Bill, H.R. 3699, the Research Works Act, creates a false dichotomy between public and private sector research:

 “SEC. 2. LIMITATION ON FEDERAL AGENCY ACTION.

No Federal agency may adopt, implement, maintain, continue, or otherwise engage in any policy, program, or other activity that—

(1) causes, permits, or authorizes network dissemination of any private-sector research work without the prior consent of the publisher of such work … .”

Work that is conducted in private or in state universities, but funded by the federal taxpayers, cannot be said to be “private” in any meaningful sense.  The public’s access to this research, as well as its underlying data, is especially important when the subject matter of the research involves issues that are material to public policy and litigation disputes.

Who is behind this bailout for the private-sector publishing industry?  Congressman Darrell Issa (California) introduced the Bill on December 16, 2011.  The Bill was cosponsored by Congresswoman Carolyn B. Maloney, the Democratic representative of New York’s 14th district.  Oh Lord, Congresswoman Maloney represents me!  NOT.  How humiliating to be associated with this regressive measure.

This heavy-handed piece of legislation was referred to the House Committee on Oversight and Government Reform.  Let us hope it dies a quick death in committee.  See Michael Eisen, “Elsevier-funded NY Congresswoman Carolyn Maloney Wants to Deny Americans Access to Taxpayer Funded Research” (Jan. 5, 2012).

Lording the Data – Scientific Fraud

November 10th, 2011

Last week, the New York Times published a news story about psychologist Diederik Stapel, of the Netherlands.  Tilburg University accused him of having committed research fraud in several dozen published papers, including papers in Science, the official journal of the AAAS.  See Benedict Carey, “Fraud Case Seen as a Red Flag for Psychology Research: Noted Dutch Psychologist, Stapel, Accused of Research Fraud,” New York Times (Nov. 2, 2011).  The Times expressed surprise over the suggestion that psychology is plagued by fraud and sloppy research.  The real surprise is that there are not more stories in the lay media about the poor quality of scientific research.  Readers of Retraction Watch and of the Office of Research Integrity’s blog will recognize how commonplace Stapel’s fraud is.

Stapel’s fraud has wide-ranging implications for the doctoral students, whose dissertations he supervised, and for colleagues, with whom he collaborated.  Stapel apologized and expressed his regret, but his conduct leaves a large body of his work, and that of others, under a cloud of suspicion.

Lording the Data

The University committee reported that Stapel had escaped detection for a long time because he was “lord of the data,” refusing to disclose and share the data.

“Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.”

Benedict Carey, “Fraud Case,” New York Times (Nov. 2, 2011).  Data sharing is preached but rarely practiced.

In a recent publication, Dr. Wicherts and his colleagues at the University of Amsterdam reported that two-thirds of their sample of Dutch research psychologists refused to share their data, in contravention of the established ethical rules of the discipline. Remarkably, many of the refuseniks had explicit contractual obligations with their publishing journals to provide data.  Jelte Wicherts, Marjan Bakker & Dylan Molenaar, “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results,” PLoS ONE 6(11): e26828 (Nov. 2, 2011).

Scientific fraud seems no more common among scientists with industry ties, which are so often the subject of ad hominem conflict-of-interest claims.  Instead, fraudfeasors such as Stapel or Hwang Woo-suk are more often simply egotistical, narcissistic, self-aggrandizing, self-promoting, or delusional.  In the United States, litigation occasionally has brought out charlatans, but it has also resulted in high-quality studies that have provided strong evidence for or against litigation claims.  Compare Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation,” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud) with Committee on the Safety of Silicone Breast Implants, Institute of Medicine, Safety of Silicone Breast Implants (Wash. D.C. 1999) (reviewing studies, many of which were commissioned by litigation defendants, and which collectively showed a lack of association between silicone and autoimmune diseases).

The relation between litigation and research is one that has typically been approached by self-righteous voices, such as David Michaels and David Egilman, and others who have their own deep conflicts of interest.  What is clear is that all litigants, as well as the public, would benefit from enforcing data-sharing requirements.  See “Litigation and Research” (April 15, 2007) (science should not be built upon blind trust of scientists: “Nullius in verba.”).

The Times article emphasized Wicherts’ research about lack of data sharing, and suggested that data sharing could improve the quality of scientific publications.  The time may have come, however, for sterner measures of civil and criminal penalties for scientists who abuse and waste governmental funding, or who aid and abet fraudulent litigation.