TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Continuing Saga of Bad-Faith Assertions of Conflicts of Interest

December 28th, 2011

Conflicts of interest (COI), real or potential, have become a weapon used to silence the manufacturing industry in scientific debates and discussions.  Other equally “interested” parties, labor unions, advocacy groups, and consultants to the other industry, the litigation industry, have wielded conflict and ethics claims to mute manufacturing voices while engaging in unfettered false scientific speech of their own.  The public, unwilling to examine evidence on the merits and untrained to do so, has been conditioned to accept an allegation of COI as the end of any discussion of a scientific issue.

Recently, journalist Shannon Brownlee criticized the FDA for its suggestion that the agency was having difficulty in finding experts who cleared the agency’s conflict-of-interest prohibitions.  Brownlee explicitly contended that she could easily find “unbiased” scientists who could advise the agency on drug and device issues.

Shannon Brownlee, “Is There an Independent Unbiased Expert in the House” (Aug. 3, 2011).

Indeed, Brownlee sent FDA Commissioner Margaret Hamburg a list of allegedly neutral experts who could advise the agency.  Brownlee gave everyone on her list a clean bill of ethical health, and has published the list on multiple occasions, both on the website Healthnewsreview.org, and a few years ago, in the British Medical Journal:  Jeanne Lenzer & Shannon Brownlee, “Is there an (unbiased) doctor in the house?” 337 Brit. Med. J. 206 (2008).

Brownlee tells us that journalists from respectable print media, including the New York Times and the Wall Street Journal, have requested the list, apparently to contact the “unbiased” experts to help investigate news stories about drugs and medical devices.  What the gullible may not appreciate is that the list is fallaciously based upon a single exclusionary criterion:  having consulted for the pharmaceutical industry.  The list omits other important COI exclusionary criteria, such as having consulted for the litigation industry, or having taken erroneous, unwarranted, and ideologically driven positions on scientific issues.

What litigation industry?  Brownlee may have missed the fact that plaintiffs’ lawyers represent a huge financial interest in obtaining compensation for others, with 40 percent of the proceeds going to themselves.  This litigation industry thrives, even with Dickie Scruggs in prison, and Stanley Chesley in disrepute.

In today’s litigation environment, with aggregation of claims in federal multi-district cases, plaintiffs’ counsel stand to profit in the billions from scientific positions espoused by their expert witnesses.

Who are the litigation industry expert witnesses on Brownlee’s list?  Here are some obvious candidates:

Peter R. Breggin, MD, psychiatrist, clinical psychopharmacologist, independent author and scientist; Founder and Director Emeritus, International Center for the Study of Psychiatry and Psychology

Adriane Fugh-Berman, MD, Professor, Department of Physiology and Biophysics, Georgetown University Medical Center; Director, PharmedOut.org

Curt Furberg, MD, PhD, Professor of Public Health Sciences, Wake Forest University School of Medicine

Joseph Glenmullen, MD, Clinical instructor in psychiatry, Harvard Medical School

Bruce Psaty, MD, PhD, Professor, Medicine & Epidemiology, University of Washington Cardiovascular Health Research Unit

Also on the list were well-known anti-industry zealots, who focus almost exclusively on the manufacturing industry, while ignoring or endorsing the excesses and unwarranted claims of the litigation industry:

Lisa Bero, PhD, Professor, University of California, San Francisco

Sheldon Krimsky, PhD, Tufts University & Council for Responsible Genetics

Sidney Wolfe, MD, Director, Health Research Group of Public Citizen.

Now some people may claim that the litigation industry consultants, and the anti-industry zealots, take their positions not to please their sponsors, or to pursue lucrative opportunity, but because they fervently believe the positions that they take. But then why not give the pharmaceutical industry consultants the same benefit of the doubt?  Indeed, why not move beyond COI allegations to creating lists of scientists and physicians who have demonstrated proficiency in advancing evidence-based judgments that have withstood the test of time?

This anti-industry hypocrisy manifests not only in assertions of conflicts of interest, but also in calls for industry to disclose all underlying data from industry-funded or sponsored studies, while taking a protectionist stance on all other underlying data.

Let’s hope that in 2012, industry fights back, and evidence regains its primary role in resolving scientific disputes.

A Rule of Completeness for Statistical Evidence

December 23rd, 2011

Witnesses swear to tell the “whole” truth, but lawyers are allowed to deal in half truths.  Given this qualification on lawyers’ obligation of truthfulness, the law prudently modifies the rules of admissibility for writings to permit an adverse party to insist that written statements not be yanked out of context.  Waiting days, if not weeks, in a trial to restore the context is an inadequate remedy for these “half truths.”  If a party introduces all or part of a writing or recorded statement, an adverse party may “require the introduction, at that time, of any other part — or any other writing or recorded statement — that in fairness ought to be considered at the same time.”  Fed. R. Evid. 106 (Remainder of or Related Writings or Recorded Statements).  See also Fed. R. Civ. P. 32(a)(4) (rule of completeness for depositions).

This “rule of completeness” has its roots in the common law, and in the tradition of narrative testimony.  The Advisory Committee notes to Rule 106 comment that the rule is limited to “writings and recorded statements and does not apply to conversations.”  The Rule and the notes ignore that the problematic incompleteness might take the form of mathematical or statistical evidence.

Confidence Intervals

Consider sampling estimates of means or proportions.  The Reference Manual on Scientific Evidence (2d ed. 2000) urges that:

“[w]henever possible, an estimate should be accompanied by its standard error.”

RMSE 2d ed. at 117-18.

The new third edition dilutes this clear prescription, but still conveys the basic message:

“What is the standard error? The confidence interval?

An estimate based on a sample is likely to be off the mark, at least by a small amount, because of random error. The standard error gives the likely magnitude of this random error, with smaller standard errors indicating better estimates.”

RMSE 3d ed. at 243.

The evidentiary point is that the standard error, or the confidence interval (C.I.), is an important component of the sample statistic, without which the sample estimate is virtually meaningless.  Just as a narrative statement should not be truncated, a statistical or numerical expression should not be unduly abridged.

Of course, the 95 percent confidence interval is the estimate (the risk ratio, the point estimate) plus or minus 1.96 standard errors.  By analogy to Rule 106, lawyers should insist that the confidence interval, or some similar expression of the size of the standard error, be provided at the time that the examiner asks about, or the witness gives, the sample estimate.  There are any number of consensus position papers, as well as guidelines for authors of papers, which specify that risk ratios should be accompanied by confidence intervals.  Courts should heed those recommendations, and require parties to present the complete statistical idea – estimate and random error – at one time.
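The log-scale arithmetic is easy to sketch.  The cohort numbers below are assumed purely for illustration; the point is that the same point estimate of 1.5 arrives with an interval that spans 1.0, information the bare estimate conceals:

```python
import math

# Hypothetical 2x2 cohort data (numbers assumed for illustration only):
a, n1 = 30, 1000   # cases among the exposed, exposed total
c, n0 = 20, 1000   # cases among the unexposed, unexposed total

rr = (a / n1) / (c / n0)                      # point estimate of the risk ratio
se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log)   # lower 95% bound
hi = math.exp(math.log(rr) + 1.96 * se_log)   # upper 95% bound
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Here the complete statistical idea, a risk ratio of 1.50 with a 95 percent interval of roughly (0.86, 2.62), tells a very different story from “RR = 1.5” standing alone.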

One disreputable lawyer trick is to present incomplete confidence intervals.  Plaintiffs’ counsel, for instance, may inquire into the upper bound of a confidence interval, and attempt to silence witnesses when they respond with both the lower and upper bounds.  “Just answer the question, and stop volunteering information not asked.”  Indeed, some unscrupulous lawyers have been known to cut off witnesses from providing the information about both bounds of the interval, on the claim that the witness was being “unresponsive.”  Judges who are impatient with technical statistical testimony may even admonish witnesses who are trying to make sure that they present the “whole truth.”  Here again, the completeness rule should protect the integrity of the fact finding by allowing, and requiring, that the full information be presented at once, in context.

Although I have seen courts permit the partial, incomplete presentation of statistical evidence, I have yet to see a court acknowledge the harm from failing to apply Rule 106 to quantitative, statistical evidence.  One court, however, did address the inherent error of permitting a party to emphasize the extreme values within a confidence interval as “consistent” with the data sample.  Marder v. G.D. Searle & Co., 630 F.Supp. 1087 (D.Md. 1986), aff’d mem. on other grounds sub nom. Wheelahan v. G.D.Searle & Co., 814 F.2d 655 (4th Cir. 1987)(per curiam).

In Marder, the plaintiff claimed pelvic inflammatory disease from an IUD.  The jury was deadlocked on causation, and the trial court decided to grant the defendant’s motion for directed verdict, on grounds that the relative risk involved was less than two. Id. at 1092. (“In epidemiological terms, a two-fold increased risk is an important showing for plaintiffs to make because it is the equivalent of the required legal burden of proof—a showing of causation by the preponderance of the evidence or, in other words, a probability of greater than 50%.”)
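The court’s two-fold threshold reflects simple arithmetic: on the usual interpretation, the probability that a given case is attributable to the exposure is (RR - 1)/RR, which crosses 50 percent exactly at a relative risk of two.  A minimal sketch:

```python
def attributable_probability(rr: float) -> float:
    """Probability that an exposed case is attributable to the exposure,
    i.e., the attributable fraction among the exposed: (RR - 1) / RR."""
    return (rr - 1.0) / rr

print(attributable_probability(1.5))   # 1/3: odds of about 2 to 1 against causation
print(attributable_probability(2.0))   # 0.5: exactly the preponderance threshold
print(attributable_probability(3.0))   # 2/3: more likely than not
```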

The plaintiff sought to resist entry of judgment by arguing that, although the relative risk was less than two, the court should consider the upper bound of the confidence interval, which ranged from 0.9 to 4.0.  Id.  In other words, the plaintiff argued that she was entitled to have the jury consider and determine that the true value was as high as 4.0.

The court, fairly decisively, rejected this attempt to isolate the upper bound of the confidence interval:

“The upper range of the confidence intervals signify the outer realm of possibilities, and plaintiffs cannot reasonably rely on these numbers as evidence of the probability of a greater than two fold risk.  Their argument reaches new heights of speculation and has no scientific basis.”

The Marder court could have gone further by pointing out that the confidence interval does not provide a probability for any value within the interval.
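A further arithmetic observation, on the usual assumption that a ratio’s confidence interval is computed symmetrically on the log scale: the very bounds the plaintiff cited imply a point estimate near the geometric mean of 0.9 and 4.0, itself below the two-fold threshold:

```python
import math

# With a CI symmetric on the log scale (the ordinary method for ratios),
# the implied point estimate is the geometric mean of the reported bounds:
lower, upper = 0.9, 4.0
implied_rr = math.sqrt(lower * upper)   # about 1.90
print(round(implied_rr, 2))
```

And, of course, an interval stretching down to 0.9 was equally “consistent” with no increased risk at all.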

Multiple Testing

In some situations, completeness may require more than the presentation of the size of the random error, or the width of the confidence interval.  When the sample estimate arises from a study with multiple testing, presenting the sample estimate with the confidence interval, or p-value, can be highly misleading if the p-value is used for hypothesis testing.  The fact of multiple testing will inflate the false-positive error rate.
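The inflation is easy to quantify under the simplifying assumption of independent tests, each conducted at the conventional 0.05 level:

```python
# Familywise false-positive rate: the chance of at least one "significant"
# result by chance alone, across k independent tests at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 20, 100):
    familywise_rate = 1 - (1 - alpha) ** k
    print(f"{k:>3} tests: {familywise_rate:.3f}")
```

With twenty comparisons, the nominal 5 percent error rate has quietly grown to about 64 percent.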

Here is the relevant language from Kaye and Freedman’s chapter on statistics, in the Reference Manual (3d ed.):

4. How many tests have been done?

Repeated testing complicates the interpretation of significance levels. If enough comparisons are made, random error almost guarantees that some will yield ‘significant’ findings, even when there is no real effect. To illustrate the point, consider the problem of deciding whether a coin is biased. The probability that a fair coin will produce 10 heads when tossed 10 times is (1/2)^10 = 1/1024. Observing 10 heads in the first 10 tosses, therefore, would be strong evidence that the coin is biased. Nonetheless, if a fair coin is tossed a few thousand times, it is likely that at least one string of ten consecutive heads will appear. Ten heads in the first ten tosses means one thing; a run of ten heads somewhere along the way to a few thousand tosses of a coin means quite another. A test—looking for a run of ten heads—can be repeated too often.

Artifacts from multiple testing are commonplace. Because research that fails to uncover significance often is not published, reviews of the literature may produce an unduly large number of studies finding statistical significance. Even a single researcher may examine so many different relationships that a few will achieve statistical significance by mere happenstance. Almost any large dataset—even pages from a table of random digits—will contain some unusual pattern that can be uncovered by diligent search. Having detected the pattern, the analyst can perform a statistical test for it, blandly ignoring the search effort. Statistical significance is bound to follow.

There are statistical methods for dealing with multiple looks at the data, which permit the calculation of meaningful p-values in certain cases. However, no general solution is available. … In these situations, courts should not be overly impressed with claims that estimates are significant. …

RMSE 3d ed. at 256-57.
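The Manual’s coin-tossing illustration can be verified by simulation.  The sketch below uses an arbitrary seed for reproducibility; nothing about the point depends on the particular numbers chosen:

```python
import random

random.seed(7)  # arbitrary seed, for reproducibility

def has_run_of_heads(n_tosses: int, run: int = 10) -> bool:
    """True if a streak of `run` consecutive heads appears in n_tosses flips."""
    streak = 0
    for _ in range(n_tosses):
        if random.random() < 0.5:    # heads
            streak += 1
            if streak >= run:
                return True
        else:
            streak = 0
    return False

# Ten heads in ten tosses is rare: (1/2)**10 = 1/1024.
# But a run of ten heads somewhere in 10,000 tosses is commonplace:
trials = 100
hits = sum(has_run_of_heads(10_000) for _ in range(trials))
print(f"{hits} of {trials} sequences of 10,000 tosses contained 10 straight heads")
```

In a typical run, nearly every sequence of 10,000 tosses contains a run of ten heads, even though the chance of ten heads in the first ten tosses is under one in a thousand.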

When a lawyer asks a witness whether a sample statistic is “statistically significant,” there is the danger that the answer will be interpreted or argued as a Type I error rate, or worse yet, as a posterior probability for the null hypothesis.  When the sample statistic has a p-value below 0.05, in the context of multiple testing, completeness requires the presentation of the information about the number of tests and the distorting effect of multiple testing on preserving a pre-specified Type I error rate.  Even a nominally statistically significant finding must be understood in the full context of the study.

Many texts and journals recommend that the Type I error rate not be modified in the paper, as long as readers can observe the number of multiple comparisons that took place and make the adjustment for themselves.  Most jurors and judges are not sufficiently knowledgeable to make the adjustment without expert assistance, and so the fact of multiple testing, and its implication, are additional examples of how the rule of completeness may require the presentation of appropriate qualifications and explanations at the same time as the information about “statistical significance.”
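The adjustment left to the reader is mechanical enough to sketch.  A Bonferroni correction, the simplest and most conservative method, multiplies each p-value by the number of comparisons; the p-values below are hypothetical:

```python
# Hypothetical p-values from a study that ran twenty comparisons
# (only five of the twenty are listed; all numbers are illustrative):
p_values = [0.002, 0.03, 0.04, 0.20, 0.60]
n_tests = 20

# Bonferroni: multiply each p-value by the number of tests, capped at 1.0
adjusted = [min(round(p * n_tests, 4), 1.0) for p in p_values]
survivors = [p for p, adj in zip(p_values, adjusted) if adj < 0.05]
print(adjusted)
print(survivors)   # only the 0.002 finding survives the correction
```

The correction is conservative, but the broader point stands: a nominal p < 0.05 from one of twenty looks at the data is not the same evidence as p < 0.05 from a single pre-specified test.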

The Integrity of Facts in Judicial Decisions

December 21st, 2011

One of the usual tasks of an appellate judge’s law clerk is to read the record – the entire record.  In my clerking experience, the law clerk who had the assignment for a case in which the judge was writing an opinion was responsible for knowing every detail of the record.  The judge believed that fidelity to the factual record was an absolute.

Not so for other appellate judges.  See, e.g., Jacoby, “Judicial Opinions as ‘Minefields of Misinformation’: Antecedents, Consequences and Remedies,” N.Y.U. Public Law and Legal Theory Working Papers, Paper 35 (2006).

Some important cases turn on facts misunderstood or misrepresented by appellate courts.  A few days ago, Kyle Graham blogged about a startling discovery in the Summers v. Tice case, which is covered in every first-year torts class.  Kyle Graham, “Summers v. Tice: The Rest of the Story” (Dec. 1, 2011).

Summers v. Tice, 33 Cal.2d 80, 199 P.2d 1 (1948), is a leading California tort law case that shifted the burden of proof on causation to the two defendants.  The rationale for shifting the burden was the gross negligence of both defendants, and the plaintiff’s faultless inability to identify which of the two defendants, Simonson or Tice, was responsible for shooting the plaintiff with a shotgun in their ill-fated quail hunt.

Professor Graham did something unusual:  he actually read the record of the bench trial.  It turns out that the facts were different from, and much more interesting than, those presented by the California Supreme Court.  Simonson admitted shooting Summers, and implicated Tice.  Tice denied shooting.  The trial judge resolved the credibility issue against Tice, although it seems to have been a close call.

More important, Tice testified that his gun was loaded with No. 6 shot, whereas Simonson had used No. 7.5 shot.  Summers admitted that the pellets had been given to him after his medical treatment, but he could not find them at the time of trial.  Had he kept the pellets, Summers would have been able to distinguish between the gunfeasors.

Spoliation anyone?  Missing evidence?  Adverse inference?

Even if the trial judge was unimpressed with Tice’s denial of having discharged his shotgun, Tice’s lack of credibility could not turn into affirmative evidence that he had used number 7.5 shot, as had Simonson.  This was a contested issue, on which the plaintiff could have adduced evidence.  The plaintiff’s failure to do so was the result of his own post-accident carelessness (or worse) in not keeping important evidence.  Tice’s testimony on the size of the shot in his gun was undisputed, even if the trial court thought that he was not a credible witness.

Thus, on the real facts, the shifting of the burden of proof, on the rationale that the plaintiff was without fault for his inability to produce evidence against Simonson or Tice, was quite unjustified.  The plaintiff was culpable for the failure of proof, and there was no affirmative evidence that the two potential causative agents were indistinguishable. The defendants were not in a better position than the plaintiff to identify who had caused the plaintiff’s wounds.

The trial court’s credibility assessment of Tice, for having denied a role in shooting, did not turn the absence of evidence into affirmative evidence that both defendants used the same size pellets in their shotguns.  What makes for a great law school professor’s hypothetical was the result of an obviously fallacious inference, and a factual fabrication, borne of sloppy judicial decision making.

We can see a similar scenario play out in the New Jersey decisions that reversed directed verdicts in asbestos colorectal cancer cases.  Landrigan v. Celotex Corp., 127 N.J. 404, 605 A.2d 1079 (1992); Caterinicchio v. Pittsburgh Corning Corp., 127 N.J. 428, 605 A.2d 1092 (1992). In both cases, the trial courts directed verdicts, assuming arguendo that asbestos can cause colorectal cancer (a dubious proposition), on the ground that the low relative risk cited by plaintiffs’ expert witnesses (about 1.5) was factually insufficient to support a verdict for plaintiffs on specific causation.  Indeed, the relative risk suggested that the odds were about 2 to 1 in defendants’ favor that the plaintiffs’ colorectal cancers were not caused by asbestos.

The intermediate appellate courts affirmed the directed verdicts, but the New Jersey Supreme Court reversed and remanded both judgments on curious grounds.  According to the Court, there were other probative factors that the juries could have used to make out specific causation:

“Dr. Wagoner did not rely exclusively on epidemiological studies in addressing that issue.   In addition to relying on such studies, he, like Dr. Sokolowski, reviewed specific evidence about decedent’s medical and occupational histories.   Both witnesses also excluded certain known risk factors for colon cancer, such as excessive alcohol consumption, a high-fat diet, and a positive family history.   From statistical population studies to the conclusion of causation in an individual, however, is a broad leap, particularly for a witness whose training, unlike that of a physician, is oriented toward the study of groups and not of individuals.   Nonetheless, proof of causation in toxic-tort cases depends largely on inferences derived from statistics about groups.”

Landrigan, 127 N.J. at 422.  The NJ Supreme Court held that the plaintiffs’ failure to show a relative risk in excess of 2.0 was not fatal to their cases, when there was other evidence that the jury could consider, in addition to the relative risks.

Well, actually there was no expert witness support for the assertion.  Completely absent from the evidentiary displays in both the Landrigan and Caterinicchio cases was any evidence, apart from plaintiffs’ expert witnesses’ hand waving, that a higher relative risk existed among the subcohort of asbestos insulators who had had heavier exposure or who had concomitant pulmonary disease.  There was no evidence that those exposed workers who lacked “excessive alcohol consumption, a high-fat diet, and a positive family history” had any increased risk.  Indeed, the Selikoff study relied upon extensively by plaintiffs’ expert witnesses failed to make any adjustment for the noted risk factors, as well as for the greater prevalence of smoking histories among the insulators than among the unexposed comparator population.  The Court turned the absence of evidence into the factual predicate for its holding that defendants were not entitled to judgment.

Now that’s judicial activism.

COURTING CLIO: HISTORIANS UNDER OATH – Part 2

December 17th, 2011

Continued from Part 1:

Court-Appointed Historians

One lawyer, Jonathan Martin, trained in historical scholarship at Princeton University, has argued that historian expert witness opinion testimony is both unavoidable and refractory to the protections of judicial gatekeeping.  Martin, “Historians at the Gate:  Accommodating Expert Historical Testimony in Federal Courts,” 78 N.Y.U. L. Rev. 1518 (2003).  Mr. Martin acknowledges that historians are beholden to an objective methodology, but when they are in the employ of lawyers, historians abridge or abrogate their commitment to objectivity:

Just as scientific testimony must adhere to the scientific method so too must historical testimony adhere to the historical method.  Unfortunately, historians often neglect the conventional method of their craft when offering expert testimony.  Outside the courtroom, historians generally expect one another to formulate complex, nuanced, and balanced arguments that take into account all available evidence, including any countervailing evidence.  At trial, however, the pressures of the adversary system routinely push historians toward interpretations of the past that are compressed and categorical . . .  .  As a result, historians now frequently offer unreliable evidence.

Id. at 1521.  Mr. Martin proposes to remedy the frequent, unreliable testimony from historians by the routine appointment of court-appointed expert witnesses.

In passing, Mr. Martin notes that others have urged judicial gatekeeping, under Daubert or Frye, to address unreliable historian testimony, but he rejects gatekeeping of adversarial expert witnesses as insufficient.  Id. at 1522 n.23.  Given the dearth of reported cases of such gatekeeping, this rejection seems premature.  Perhaps more important, Mr. Martin, in his rush to advocate court-appointed historians, fails to address how and why historians’ opinions are different from the opinions of experts in other fields, which are successfully subjected to cross-examination and to reliability analysis.  Historians are not alone, certainly, in succumbing to the temptation to stray from objective methodology.  Mr. Martin is correct, however, in his implicit acknowledgment that historian opinion testimony warrants increased judicial scrutiny.

One way historians differ from other fields of objective study is that historical scholarship is perfused with argument.  In biomedical and physical sciences, the presentation of research is carefully and routinely segregated into hypothesis, materials and methods, findings, and discussion.  Research findings are neatly presented without inferences to conclusions.  If conclusions can be reliably reached from the research or experiment, the investigators present their conclusions, with appropriate qualifications and caveats, in the discussion sections of their writings.  Readers understand that the discussion section is often the least important part of a published article.

Lawyering is similarly segregated into proofs and argument.  The trial lawyers’ evidence, whether real, documentary, or testimonial, is confined to a portion of the trial open for proof of facts in issue.  The trial court has the responsibility to prevent argument, argumentative questioning, and argumentative testimony in the proof-phase of the trial.  Only in closing argument, may the trial lawyers urge inferences and conclusions that assist the trier of fact to resolve the factual disputes in the case.  To be sure, trial lawyers try and sometimes succeed in advancing their argument in the proof phase of trial, either by clever juxtaposition in presenting facts, by adducing opinions in carefully defined exceptions (such as character evidence), or by successfully evading the trial court’s supervision.

Historians, in their scholarship, may acknowledge an objective method in their fact-finding, but they are under no professional constraint to separate their fact-finding and argument.  Both popular and academic historical scholarship blend fact and opinion in a manner antithetical to the sciences.  The strength and persuasiveness of historical scholarship often turns on how well the historian creates a complex narrative of fact, inference, argument, and opinion.  And the greatest art is that which conceals itself.

The pervasive role of argument is a relatively small problem compared to the dominance and legitimacy of subjective perspective in historical narrative.  Historians write from a point of view.  Openly and honestly, they narrate historical facts and events from a Marxist, labor, feminist, free-market, religious, or other point of view.  Sometimes, their point of view is covert, but it still colors the narrative.  Importantly, the point of view is often not scientific in that the scholars would likely refuse to count any empirical evidence as refuting the “truth” of their narrative.

The problems and excesses of historian opinion testimony are thus not likely to be remedied by having a court-appointed historian weigh in on the issues.  Such a court-appointed historian would present a challenge to the parties, who would need to cross examine vigorously, and to the court, which would be obligated to review and pass on the reliability of its own expert witness.  The prestige and imprimatur of court appointment would just as likely thwart as promote the truth-finding function of trial.  The argumentativeness of historical narrative would escape meaningful detection and confrontation.  Court appointments of historian witnesses might well have the effect of ending the dispute, but not in a way that advances the just resolution of the parties’ claims.

Appointment of “neutral” expert witnesses may appear to be an attractive judicial strategy to a trial court faced with party expert witnesses that are “too extreme.”  Trial judges, especially in federal Multidistrict Litigation (MDL), hear capable advocates present highly credentialed expert witnesses.  Often the opinions of the parties’ expert witnesses are diametrically opposed in ways that do not let the trial court gauge their competing claims to truth.  If trial courts find assessment of these expert witnesses’ opinions to be difficult, juries are not likely to fare better.  In perplexity, judges may try to align themselves in the middle, and comfort themselves with the belief that the truth must lie somewhere between the parties’ polar views of the world.

In the silicone gel breast implant litigation, MDL 926, Judge Sam Pointer found himself in the “middle.”  He had refused Daubert challenges to plaintiffs’ expert witnesses, and stated that the parties’ expert witnesses were too extreme.  After Judge Jack Weinstein sua sponte raised the issue of court-appointed experts in breast implant cases, plaintiffs’ counsel petitioned Judge Pointer to appoint expert witnesses in all the federal cases.  Over defendants’ objections, Judge Pointer appointed a toxicologist, a rheumatologist, an immunologist, and an epidemiologist to address the plaintiffs’ claims that silicone causes systemic autoimmune and connective tissue diseases.  After a lengthy, expensive, complex proceeding, the MDL court-appointed expert witnesses filed reports and gave testimony that rejected plaintiffs’ claims.  Much to Judge Pointer’s surprise, but not the scientific community’s, the Court’s expert witnesses opined that plaintiffs’ claims were not supported by sound scientific evidence.  Subsequently, a committee of the Institute of Medicine, of the National Academy of Sciences, reached the same exculpatory conclusion.

In MDL 926, the resort to court-appointed witnesses was necessitated by that trial court’s refusal or failure to engage in meaningful gatekeeping.  Remarkably, before the MDL Court even embarked upon the expensive detour of four Rule 706 witnesses, another federal court, employing expert witness advisors, reached the same conclusion in Daubert proceedings.  Hall v. Baxter, 947 F.Supp. 1387 (D. Or. 1996).  Judge Weinstein, sitting on all federal cases in the Eastern and Southern District of New York, had already granted partial summary judgment to defendants on plaintiffs’ systemic injury claims.  In re Breast Implant Cases, 942 F. Supp. 958 (E. & S.D.N.Y. 1996).  Rule 706 was used by plaintiffs’ counsel to prolong and protract the federal proceedings, in the hope that they would be saved by research that they were sponsoring through their expert witnesses.

In looking at disputes of historical scholarship, we can easily imagine that judges will see the parties’ expert witnesses as too extreme.  The time-consuming, expensive resort to court-appointed witnesses, however, will not likely advance the resolution of issues of historical scholarship.  Unlike the selection process in MDL 926, where Judge Pointer could relatively quickly find his way to well-qualified, credible, and disinterested witnesses, the selection of an historian would stumble over the disinterestedness criterion.  Historians, by the nature of their craft, are permitted, and are encouraged, to advance a point of view that is out of place in the judicial process.

Historian Witnesses on State-of-the-Art in Tort Cases

In products liability litigation over designs or warnings, a supplier or manufacturer is typically held to the knowledge and expertise of an expert in the field.  Unfortunately, the law offers little help in answering the obvious question of which expert, of all the experts in the world, sets the appropriate standard.  In litigation over the quality of medical care, the law in many states resolves this issue by providing a defense under the “Two Schools of Thought Doctrine.”  See, e.g., “Two Schools of Thought and Informed Consent Doctrine in Pennsylvania,” 98 Dickinson L. Rev. 713 (1994).  A physician does not deviate from the standard of care simply because many or most physicians reject the approach he or she took to the patient’s problem.  As long as a substantial minority of physicians would have concurred in the judgment of the defendant physician, the claim of malpractice fails.  The Two Schools of Thought Doctrine has obvious implications for the standard of design or warning in products cases.

What is clear in products liability cases is that the standard of expertise must be assessed at a given time, when the product or material enters the stream of commerce.  In silicosis cases, which may involve long latency periods between exposure and manifestation of claimed disease, the parties may face historical issues of what experts knew at the legally relevant time of the sale.  Intellectual historians may indeed provide helpful insights into what was actually believed by experts in the past, but such historical data about past “beliefs” can answer the state-of-the-art inquiry only in part.  Knowledge requires at least true, justified belief.  Robert Nozick, Philosophical Explanations 167-288 (Cambridge 1981).  Hunches, suspicions, and hypotheses, even when published in respected books or journals, do not rise to the level of scientific knowledge that can be charged to the manufacturer or the supplier defendants.  Historians, unless adequately trained and expert in scientific method and research, will be inadequate to the task of explaining whether a given belief was justified and true.  Historians, motivated by politics or ideology, may try to advance their causes by trumpeting some past scientific findings, but in the last analysis, scientific theories cannot be chosen the way one chooses to be a Democrat or a Republican.  Proof of “state of the art,” or who knew what when, will require substantial expertise in science and medicine.  Historians may have to emote on the sidelines of these debates.