TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

What Happens When Historians Have Bad Memories

March 15th, 2014

When patients have poor recall of their medical treatments, signs, and symptoms, physicians say that they are “poor historians.” Can one say that about Barry Castleman, plaintiffs’ standard-bearer on asbestos state-of-the-art issues?

Back in March 2011, I wrote about a memorandum, dated November 5, 1979, apparently written by Castleman to Dr. Irving Selikoff, “Defense Attorneys’ Efforts to Use Background Files of Selikoff-Hammond Studies to Avert Liability.” See “The Selikoff – Castleman Conspiracy” (Mar. 13, 2011). A year later, defense counsel, in a Delaware jury trial before Judge John Parkins, Jr., confronted Castleman with the memorandum.  The exchange was short:

“Q. So, between 1971 and 1992, you’ve had many exchanges with Dr. Selikoff; is that correct?

A. Yes.

Q. And you once asked him to conceal some of the research that he might have done on the 1964 study; is that correct?

A. No. What you’re referring to is a memorandum that doesn’t have any signature and it doesn’t have any letterhead, and was produced in cross-examination about two years ago in a trial. And I have no memory of this document.”

Carlton v. Crane Co., et al., No. 10C-08-216, Delaware Superior Court, New Castle Cty., at p. 152 (June 11, 2012).

After Castleman testified, Judge Parkins issued an order to show cause whether the examining defense counsel violated the rules of professional conduct in his examination of Castleman.  Defense counsel filed a thorough rebuttal to the suggestion that he lacked a good-faith basis for having asked questions about the 1979 memorandum. See Defendant Crane Co.’s Response to Order to Show Cause, Transaction ID 44889066 (June 19, 2012).

In responding to the Order to Show Cause, defense counsel marshaled past testimony given by Castleman about the memorandum.  In the following 2010 testimony, Castleman acknowledged that he might well have written the memorandum, and that the memorandum reflected contemporaneous concerns of plaintiffs’ counsel Ron Motley and Motley’s requests to Castleman to communicate with Selikoff:

“Q. And you actually wrote a letter to Dr. Selikoff in 1979 wherein you told him Ron Motley, the plaintiffs’ lawyer I work for, knows that you got some information about insulators who said they knew about the hazards of asbestos in the ’40s and ’50s, please don’t let that get out?

A. That is a gross mischaracterization of what I wrote to Dr. Selikoff.

Q. Tell me what the letter said.

A. The memo showed up last summer for the first time. I hadn’t seen this thing or didn’t even remember it. It showed up in cross-examination at some trial last summer. It’s dated 1979 and it — it’s not on any letterheads and not signed, but it looks like something I might have written. I had testified a total of one time at the time I wrote this and I conveyed to Dr. Selikoff one of the plaintiffs’ lawyers with whom I had been in contact, this guy, Motley, was concerned that Selikoff’s medical research records might contain a questionnaire that would include information asking the workers when they first heard that asbestos work was dangerous. And Motley — I conveyed to Selikoff — I am basically conveying Motley’s concern and was saying that if such a thing was turned over to defense counsel, they would use this to get people’s cases dismissed. . . .”

See Transcript of Castleman Testimony, at 753-55, in Farag v. Advance Auto Parts, No. 431525, California Superior Court, Los Angeles Cty. (Dec. 1, 2010).  Castleman’s testimony further supports the authenticity of the memorandum and his authorship, when he explained that he had agreed to communicate with Selikoff because Motley was experiencing a “paranoid fit” over the possibility of the defendants’ obtaining information that would support their defenses of contributory negligence and assumption of risk. Id. at 756.

In a Madison County, Illinois, case in 2010, Castleman testified at deposition in a way that appeared to accept his authorship of the memorandum, and his active collaboration with Motley to suppress defendants’ access to discovery of information about the insulators’ knowledge of asbestos hazards:

“Q: Okay. Now, obviously, but you asked Dr. Selikoff, you said, [i]t strikes me as most important to hold these files confidential and resist efforts to get them released to the defendants. Isn’t that true?

A: Yes. I felt that medical research was not something that should just be – – I mean, again, the date of this memo is 1979. I had testified in a total of one trial in my whole life by that time. I was not at all familiar with the legal system. I was very concerned about what Motley told me, because I thought it would jeopardize Selikoff’s ability to do epidemiology studies on workers and identify occupational health hazards, not just with asbestos but with all kinds of things.”

Castleman Deposition Transcript at 26, in Luna v. A.W. Chesterton, Inc., et al., No. 08-L-619, Circuit Court of Madison County, Illinois (July 12, 2010). See also Transcript of Testimony of Barry Castleman at 377, in Benton v. John Crane, Inc., No. 109661/02, Supreme Court of the State of New York, New York County (Oct. 14, 2011) (testifying in response to questions about the memorandum that “I go on to say in the next sentence that it might impair Selikoff’s ability to obtain the cooperation of unions and workers in other studies. . . .”).

In response to this offer of proof for the good-faith basis to inquire about the memorandum, Judge Parkins withdrew the rule to show cause.

==============================

I have been to Madison County, Illinois, only a couple of times.  Some years ago, I had a deposition in Granite City, a double misnomer; it is neither a city, nor does it have any granite.  Some might say that the county court house, not far away, in Edwardsville, Illinois, has also been a misnomer at times.

A recent trial suggests that the truth will sometimes come out in a Madison County trial.  Local media coverage of the trial reported that Barry Castleman testified early in the proceedings, and that he denied writing the conspiratorial 1979 memorandum.  See Heather Isringhausen Gvillo, “Plaintiffs expert denies writing letter to asbestos researcher during Madison County trial” (Feb. 21, 2014) (reporting on Brian King, individually and as special administrator of the estate of Tom King vs. Crane Co.). Actually, the text of the article makes clear that Castleman did not deny writing the memorandum; rather, he testified that he had no memory of having written it. “I have no memory of writing this and I don’t recognize it.” Id. On February 28, 2014, distancing itself from Castleman’s poor memory for his own writings, the jury in the King case rejected the plaintiffs’ claims. Gvillo, “Defense verdict reached in asbestos trial” (Mar. 3, 2014).

Castleman’s lapse of memory is perhaps convenient, and maybe even a disability in someone who aspires to be an historian.  In addition to being a “poor historian” of his own career, which was financed by plaintiffs’ counsel, Castleman appears to have taken direction from Ron Motley and his partners, on where to look, and where not to look, for historical support for the plaintiffs’ version of the state of the art. See “Discovery into the Origin of Historian Expert Witnesses’ Opinions” (Jan. 30, 2012).

There are steps that could be taken to shore up the authenticity of the Castleman-Selikoff memorandum.  A subpoena to the Selikoff document archive might be in order. Since everyone loves a conspiracy, why not convene a grand jury to inquire into an ongoing conspiracy to suppress evidence?

A Black Swan Case – Bayesian Analysis on Medical Causation

March 15th, 2014

Last month, I posted about an article that Professor Greenland wrote several years ago about his experience as a plaintiffs’ expert witness in a fenfluramine case. See “The Infrequency of Bayesian Analyses in Non-Forensic Court Decisions” (Feb. 16, 2014). Greenland chided a defense expert for having declared that Bayesian analyses are rarely or never used in analyzing clinical trials or in assessments of pharmaco-epidemiologic data.  Greenland’s accusation of ludicrousness appeared mostly to blow back on him, but his stridency for Bayesian analyses did raise the question whether such analyses have ever moved beyond random-match probability analyses in forensic evidence (DNA, fingerprint, paternity, etc.) or in screening and profiling cases.  I searched Google Scholar and Westlaw for counter-examples and found none, but I did solicit references to “Black Swan” cases. Shortly after I posted about the infrequency of Bayesian analyses, I came across a website dedicated to collecting legal citations of cases in which Bayesian analyses were important, but that website appeared to confirm my initial research.

Some months ago, Professor Brian Baigrie, of the Jackman Humanities Institute, at the University of Toronto, invited me to attend a meeting of an Institute working group on The Reliability of Evidence in Science and the Law.  The Institute fosters interdisciplinary scholarship, and this particular working group has a mission statement close to my interests:

The object of this series of workshops is to formulate a clear set of markers governing the reliability of evidence in the life sciences. The notion of evidence is a staple in epistemology and the philosophy of science; the notion of this group will be the way the notion of ‘evidence’ is understood in scientific contexts, especially in the life sciences, and in judicial form as something that ensures the objectivity of scientific results and the institutions that produce these results.

The Reliability of Evidence in Science and the Law. The faculty on the working group represent the disciplines of medicine (Andrew Baines), philosophy (James R. Brown, Brian Baigrie), and law (Helena Likwornik, Hamish Stewart), with graduate students in environmental science (Amy Lemay), history & philosophy of science and technology (Karolyn Koestler, Gwyndaf Garbutt), and computer science (Maya Kovats).

Coincidentally, in preparation for the meeting, Professor Baigrie sent me links to a Canadian case, Goodman v. Viljoen, which turned out to be a black swan case! The trial court’s decision in this medical malpractice case focused mostly on a disputed claim of medical causation, in which the plaintiffs’ expert witnesses sponsored a Bayesian analysis of the available epidemiologic evidence; the defense experts maintained that causation was not shown, and they countered that the proffered Bayesian analysis was unreliable. The trial court resolved the causation dispute in favor of the plaintiffs and their witnesses’ Bayesian approach. Goodman v. Viljoen, 2011 ONSC 821 (CanLII), aff’d, 2012 ONCA 896 (CanLII).  The Court of Appeals’ affirmance was issued over a lengthy, thoughtful dissent. The Canadian Supreme Court denied leave to appeal.

Goodman was a medical malpractice case. Mrs. Goodman alleged that her obstetrician deviated from the standard of care by failing to prescribe corticosteroids sufficiently early in advance of delivery to avoid or diminish the risk of cerebral palsy in her twins.  Damages were stipulated, and the breach of duty turned on a claim that Mrs. Goodman, in distress, called her obstetrician.  Given the decade that passed between the event and the lawsuit, the obstetrician was unable to document a response.  Duty and breach were disputed, but were not the focus of the trial.

The medical causation claim, in Goodman, turned upon a claim that the phone call to the obstetrician should have led to an earlier admission to the hospital, and the administration of antenatal corticosteroids.  According to the plaintiffs, the corticosteroids would have, more probably than not, prevented the twins from developing cerebral palsy, or would have diminished the severity of their condition.  The plaintiffs’ expert witnesses relied upon studies that suggested a 40% reduction in risk, and a probabilistic argument that they could infer from this risk ratio that the plaintiffs’ condition would have been avoided.  The case thus raises the issue whether evidence of risk can substitute for evidence of causation.  The Canadian court held that risk sufficed, and it went further, contrary to the majority of courts in the United States, to hold that a 40% reduction in risk sufficed to satisfy the more-likely-than-not standard.  See, e.g., Samaan v. St. Joseph Hosp., 670 F.3d 21 (1st Cir. 2012) (excluding expert witness testimony based upon risk ratios too small to support opinion that failure to administer intravenous tissue plasminogen activator (t-PA) to a patient caused serious stroke sequelae); see also “Federal Rule of Evidence 702 Requires Perscrutations — Samaan v. St. Joseph Hospital (2012)” (Feb. 4, 2012).
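
The arithmetic at the heart of that dispute is simple enough to set out. What follows is a minimal sketch, not the parties’ actual calculation, using the 40% figure from the opinions and the simplifying (and contestable) assumption that a population-level risk reduction can be applied directly to an individual plaintiff:

    # Sketch only: assumes the claimed relative risk applies directly to the
    # individual case, which is itself one of the disputed inferential steps.
    risk_untreated = 1.0      # normalize the untreated risk of cerebral palsy to 1
    relative_risk = 0.6       # plaintiffs' claimed 40% reduction with corticosteroids

    # Fraction of untreated cases that timely treatment would have prevented:
    preventable_fraction = (risk_untreated - relative_risk) / risk_untreated
    print(f"Preventable fraction: {preventable_fraction:.0%}")   # 40%

    # The more-likely-than-not standard asks whether this exceeds 50%:
    print("Exceeds 50%?", preventable_fraction > 0.5)            # False

On this straightforward view, a 40% reduction in risk falls short of the balance of probabilities, which is why the Goodman holding runs contrary to the majority approach exemplified by Samaan; the plaintiffs’ experts sought to bridge that gap with their Bayesian analysis rather than with this simple arithmetic.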

The Goodman courts, including the dissenting justice on the Ontario Court of Appeals, wrestled with a range of issues that warrant further consideration.  Here are some that come to mind from my preliminary read of the opinions:

1. Does evidence of risk suffice to show causation in a particular case?

2. If evidence of risk can show causation in a particular case, are there requirements that the magnitude of risk be quantified and of a sufficient magnitude to support the inference of causation in a particular case?

3. The judges and lawyers spoke of scientific “proof.”  When, if ever, is it appropriate to speak of scientific proof of a medical causal association?

4. Did the judges incorrectly dichotomize legal and scientific standards of causation?

5. Did the judges, by rejecting the need for “conclusive proof,” fail to articulate a meaningful standard for scientific evidence in any context, including judicial contexts?

6. What exactly does the “the balance of probabilities” mean, especially in the face of non-quantitative evidence?

7. What is the relationship between “but for” and “substantial factor” standards of causation?

8. Can judges ever manage to define “statistical significance” correctly?

9. What is the role of “common sense” in drawing inferences by judges and expert witnesses in biological causal reasoning?  Is it really a matter of common sense that if a drug did not fully avert the onset of a disease, it would surely have led to a less severe case of the disease?

10. What is the difference between “effect size” and the measure of random or sampling error?

11. Is scientific certainty really a matter of being 95% certain, or is this just another manifestation of the transposition fallacy?

12. Are Bayesian analyses acceptable in judicial settings, and if so, what information about prior probabilities must be documented before posterior probabilities can be given by expert witnesses and accepted by courts? (A minimal numerical sketch of this point follows the list.)

13. Are secular or ecological trends sufficiently reliable data for expert witnesses to rely upon in court proceedings?

14. Is the ability to identify biological plausibility sufficient to excuse the lack of statistical significance and other factors that are typically needed to support the causality of a putative association?

15. What are the indicia of reliability of meta-analyses used in judicial proceedings?

16. Should courts give full citations to scientific articles that are heavily relied upon as part of the requirement that they publicly explain and justify their decisions?
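
On question 12, a bare-bones numerical sketch may help show why documentation of the prior matters. The numbers below are entirely hypothetical; the point is only that the same evidence yields very different posterior probabilities depending on the prior an expert witness starts with:

    # Hypothetical numbers, for illustration only.
    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        """Bayes' theorem for a simple yes/no causation hypothesis."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1 - prior)
        return numerator / denominator

    # Same assumed likelihoods for the epidemiologic evidence, two different priors:
    print(round(posterior(0.5, 0.8, 0.3), 2))   # neutral prior   -> about 0.73
    print(round(posterior(0.2, 0.8, 0.3), 2))   # skeptical prior -> about 0.40

One posterior clears the balance of probabilities and the other does not, even though nothing about the evidence itself has changed.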

These are some of the questions that come to mind from my first read of the Goodman case.  The trial judge attempted to explain her decision in a fairly lengthy opinion. Unfortunately, the two judges of the Ontario Court of Appeals who voted to affirm did not write at length. Justice Doherty wrote a thoughtful dissent, but the Supreme Court denied leave to appeal.  Many of the issues are not fully understandable from the opinions, but I hope to be able to read the underlying testimony before commenting.

Thanks to Professor Baigrie for the reference to this case.

The Rise and Rise of Junk Science

March 8th, 2014

Many authors attribute the term “junk science” to Peter Huber and his use of the term in his book, Galileo’s Revenge: Junk Science In The Courtroom (1991). As important as Huber’s book was to raising judicial consciousness about what was going on in courtrooms around the United States, the phrase “junk science” clearly predates Huber’s book.

Lawrence Hubert and Howard Wainer note that the phrase appears to have been in use by the early 1980s, and suggest that the first use of the pejorative phrase occurred in a Reagan administration white paper.  Lawrence Hubert and Howard Wainer, A Statistical Guide for the Ethically Perplexed 460 (Boca Raton 2013). The document cited by Hubert and Wainer notes:

“Another way in which causation often is undermined – also an increasingly serious problem in toxic tort cases – is the reliance by  judges and juries on non-credible scientific or medical testimony, studies or opinions. It has become all too common for “experts” or “studies” on the fringes of, or even well beyond the outer parameters of mainstream scientific or medical views, to be presented to juries as valid evidence from which conclusions may be drawn. The use of such invalid scientific evidence (commonly referred to as “junk science”) has resulted in findings of causation which simply cannot be justified or understood from the standpoint of the current state of credible scientific and medical knowledge. Most importantly, this development has led to a deep and growing cynicism about the ability of tort law to deal with difficult scientific and medical concepts in a principled and rational way.”

United States Dep’t of Justice, Tort Policy Working Group, Report of the Tort Policy Working Group on the causes, extent and policy implications of the current crisis in insurance availability and affordability at 35 (Report No. 027-000-01251-5) (Wash.DC 1986).  So according to the Justice Department authors, “junk science” was already in common use by 1986.  We really would not expect linguistic creativity in such a document.

Whence comes the phrase “junk science”?  Clearly, the phrase is an analogue of “junk food,” food that fills but fails to nourish.  Here is the Google Ngram of the emergence of the phrase “junk food,” which shows the phrase took off in common use shortly before 1970:

[Google Ngram chart: “junk food”]

What then about junk science?

[Google Ngram chart: “junk science” (The Rise of Junk Science – Google Ngram Viewer)]

With a little tweaking of Google’s smoothing function, this search can be run to reveal more about the low end of the curve.

[Google Ngram chart: “junk science,” without smoothing]

This chart suggests that there was a very small flurry of usage in the first half of the 1970s, with a re-emergence around 1982 or so, and then a re-introduction in 1985, with a steady increase ever since.

Here is how “junk science” compares to “junk food” (with more smoothing to the curve added):

[Google Ngram chart: “junk science” vs. “junk food”]

Junk science seems to have overtaken junk food in books, at any rate.
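
For readers who want to rerun or tweak these comparisons, the Ngram Viewer’s chart is controlled by a handful of URL parameters, including the smoothing setting mentioned above. The parameter names and the corpus identifier below are assumptions drawn from the viewer’s own URLs and may change without notice; this is a sketch, not a documented API:

    from urllib.parse import urlencode

    # Build a Google Ngram Viewer URL comparing the two phrases, with smoothing
    # turned off (smoothing=0) to expose the low end of the curves.
    params = {
        "content": "junk science,junk food",
        "year_start": 1960,
        "year_end": 2008,
        "corpus": 15,       # assumed identifier for the English (2012) corpus
        "smoothing": 0,
    }
    print("https://books.google.com/ngrams/graph?" + urlencode(params))

Raising the smoothing value back to 3 or more reproduces the smoother curves shown in the comparison above.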

Of course, “junk science” is an epithet to be hurled at science that the speaker dislikes.  It has an emotive content, but its persistence reflects that it has an epistemic content as well.  “Junk science” is science that lacks an epistemic warrant, and pretends to be something that it is not.  Honest scientists, engaged in hypothesis-generating work, should not be defamed as junk scientists, but the boundary between hypothesis generation and conclusion mongering is often blurred by advocate scientists of all political persuasions.

There are many synonyms for junk science, which has been with us ever since science gained prestige and persuasiveness over religious pronouncements about the real world.  To avoid the politicization of the term “junk science,” here are some alternatives:

advertising

alternative medicine

alchemy

anti-vaccination movements

aroma therapy

astrology

baloney

blackguardism

blood letting

bogus science

bullshit

bunk

bunkum

cargo cult science

clinical ecology

cold fusion

creationism

cult

denialism

dodgy data

error

faith-based science

flim-flam

flotsam and jetsam

fraud

free-energy devices

fuzzy thinking

homeopathy

ignorance

intelligent design

junk science

Lysenkoism

magical thinking

magnetic therapy

miracles

misrepresentation

New Age science

nonsense on stilts

N rays

nostrums

not even wrong

paranormal phenomena

pathological science

phrenology

political science

propaganda

pseudoscience

pseudosymmetry

quackademic

parapsychology

quackery

rubbish

shamanism

spiritualism

voodoo science

Judicial Notice of Untruths

March 3rd, 2014

Judicial notice is a procedure for admitting facts the truth of which is beyond dispute. A special kind of magical thinking occurs when judges take judicial notice of falsehoods, myths, or lies.

In the federal judicial system, Federal Rule of Evidence 201 addresses judicial notice of adjudicative facts, and provides:

(b) Kinds of Facts That May Be Judicially Noticed. The court may judicially notice a fact that is not subject to reasonable dispute because it:

(1) is generally known within the trial court’s territorial jurisdiction; or

(2) can be accurately and readily determined from sources whose accuracy cannot reasonably be questioned.

Procedurally, Rule 201 provides that a court must take judicial notice upon the request of a party who has supplied any needed basis for the fact to be noticed.  A court may also take notice sua sponte.  Rule 201(c)(1), (2).

In the Chantix litigation, counsel for Pfizer challenged plaintiffs’ expert witness, Curt Furberg, on Rule 702 grounds.  According to the MDL judge, the Hon. Inge Prytz Johnson, Pfizer attacked Furberg’s proffered testimony in part because the FDA had approved Chantix as safe and effective. In re Chantix (Varenicline) Prods. Liab. Litig., 889 F. Supp. 2d 1272, 1285 n.8 (N.D. Ala. 2012).  Citing no authority or text, Judge Johnson announced that “[a]pproval by the FDA is not evidence of the safety of a medication.” Id.

To be sure, safety issues can sometimes arise after initial approval, but before the FDA or the manufacturer and sponsor of the medication can react to the new safety data.  The sweeping statement, however, that the FDA’s approval is not any evidence of safety seems bereft of factual support and common sense.

Judge Johnson went on, however, to invent supporting evidence out of thin air:

“The court takes judicial notice of such things as that at one time, thalidomide was used for morning sickness in pregnant women. Unfortunately, 10,000 children were born with birth defects from it before it was banned. And 50  years elapsed before doctors understood why thalidomide caused limbs to disappear. See e.g. http://www.nytimes.com/2010/03/16/science/16limb.html?pagewanted=all. Similarly, the fact that the FDA at one time approved Vioxx did not prevent the same being removed from the market due to growing concerns that it increased the risk of heart attacks and strokes. http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/ucm103420.htm. Hence, initial approval by the FDA is not proof of the safety of a medication.”

The point about the FDA’s approval not constituting evidence of safety may simply be sloppy writing and reasoning.  In the quote above, perhaps Her Honor merely meant to say that initial approval is not evidence that a medication is safe in view of later obtained data that were not available to the FDA on its review of the new drug application.  If so, fair enough, but the sweeping statement that the initial approval is no evidence of safety ignores the considerable time, cost, and energy that go into the FDA’s review of safety before the agency approves marketing.

More egregious, however, is Judge Johnson’s taking judicial notice of the marketing of thalidomide as though it had some relevancy and probative value for her claim about the inefficacy of the FDA’s safety reviews.[1]  Consider the recent review of the FDA’s handling of thalidomide by Margaret Hamburg, M.D., Commissioner of the U. S. Food and Drug Administration:

“Fifty years ago, the vigilance of FDA medical officer Dr. Frances Kelsey prevented a public health tragedy of enormous proportion by ensuring that the sedative thalidomide was never approved in the United States.  As many remember, in the early 1960’s, reports were coming in from around the world of countless women who were giving birth to children with extremely deformed limbs and other severe birth defects.  They had taken thalidomide. Although it was being used in many countries, Dr. Kelsey discovered that it hadn’t even been tested on pregnant animals.”

Margaret Hamburg, “50 Years after Thalidomide: Why Regulation Matters” (Feb. 7, 2012).

Judge Johnson took judicial notice of a non-fact. The FDA never approved thalidomide for use in the United States, back in the 1950s or 1960s.[2]



[1] Judge Johnson’s fantastical history of the FDA was recently cited by plaintiffs’ counsel in the Zoloft birth defects litigation.  See Plaintiffs’ Opposition to Defendants’ Motion to Exclude the Testimony of Anick Berard, Ph.D., at 13 (Filed Feb. 24, 2014), in In re Zoloft (sertraline hydrochloride) Prods. Liab. Litig., Case 2:12-md-02342-CMR Document 713.

[2] Judge Johnson’s errant history may have resulted from her European perspective on the thalidomide tragedy.  Judge Inge Prytz Johnson immigrated from Denmark, where she was born and educated. She became a U.S. citizen in 1978, and a state court judge one year later.  In 1998, she was nominated by President Clinton to the Northern District of Alabama.  In October 2012, Judge Johnson assumed senior status. See Kent Faulk, “U.S. District Judge Inge Johnson goes into semi-retirement” (Oct. 19, 2012) (quoting Judge Johnson as saying that “One thing I like about my job is I don’t have to take sides.”).

“Judges and other lawyers must learn how to deal with scientific evidence and inference.”

March 1st, 2014

Late last year, a panel of the Seventh Circuit reversed an Administrative Law Judge (ALJ) who had upheld a citation and fine against Caterpillar Logistics, Inc. (Cat).  The panel, in a wonderfully succinct but meaty decision by Judge Easterbrook, wrote of the importance of judges’ and lawyers’ learning to deal with scientific and statistical evidence. Caterpillar Logistics, Inc. v. Perez, 737 F.3d 1117 (7th Cir. 2013).

Pseudonymous MK, a worker in Cat’s packing department, developed epicondylitis (tennis elbow).  Id. at 1118. OSHA regulations require employers to report injuries when “the work environment either caused or contributed to the resulting condition”. 29 C.F.R. § 1904.5(a). MK’s work required her to remove items from containers and place items in shipping cartons. The work was repetitive, but MK acknowledged that the work involved little or no impact or force.  Apparently, Cat gave some rather careful consideration to whether MK’s epicondylitis was work related; it assembled a panel of three specialists in musculoskeletal disorders and two generalists to consider the matter.  The panel, relying upon NIOSH and AMA guidelines, rejected MK’s claim of work relatedness.  Both the NIOSH and the AMA guidelines conclude that repetitive motion in the absence of weight or impact does not cause epicondylitis. Id.

MK called an expert witness, Dr. Robert Harrison, a clinical professor of medicine at the University of California, San Francisco.  Id. at 1118-1119.  Harrison unequivocally attributed MK’s condition to her work at Cat, but he failed to explain why no one else in Cat’s packing department ever developed the condition.  Id. at 1119.

Harrison acknowledged that epidemiologic evidence could confirm his opinion, but he denied that such evidence could disconfirm it.  The ALJ echoed Dr. Harrison in holding epidemiologic evidence to be irrelevant:

“none of these [other] people are [sic] MK. Similar to the concept of the ‘eggshell skull’ plaintiff in civil litigation, you take your workers as they are.”

Id. at 1119-20, citing the ALJ’s decision, 2012 OSAHRC LEXIS 118, at *32.

Judge Easterbrook found this attempt to disqualify any opposing evidence to lie beyond the pale:

“Judges and other lawyers must learn how to deal with scientific evidence and inference.”

Id. (citing Jackson v. Pollion, 733 F.3d 786 (7th Cir. 2013)).

Judge Easterbrook called out the ALJ for misunderstanding the nature of epidemiology and the role of statistics, in the examination of causation of health outcomes that have a baseline incidence or prevalence in the population:

“The way to test whether Harrison is correct is to look at data from thousands of workers in hundreds of workplaces—or at least to look at data about hundreds of worker-years in Caterpillar’s own workplace. Any given worker may have idiosyncratic susceptibility, though there’s no evidence that MK does. But the antecedent question is whether Harrison’s framework is sound, and short of new discoveries about human physiology only statistical analysis will reveal the answer. Any large sample of workers will contain people with idiosyncratic susceptibilities; the Law of Large Numbers ensures that their experience is accounted for. If studies of large numbers of workers show that the incidence of epicondylitis on jobs that entail repetitive motion but not force is no higher than for people who do not work in jobs requiring repetitive motion, then Harrison’s view has been refuted.”

Id. at 1120.

Judge Easterbrook acknowledged that Cat’s workplace evidence may have been a sample too small from which to draw a valid statistical inference, given the low base rate of epicondylitis in the general population.  Dr. Harrison’s and the ALJ’s stubborn refusal, however, to consider any disconfirming evidence obviated the need to consider sample size and statistical power issues.
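
A rough power calculation makes the sample-size point concrete. Everything in this sketch is hypothetical: the baseline incidence, the supposed doubling of risk, and the numbers of worker-years are guesses chosen only to illustrate why a single workplace may be too small to detect an effect on a rare condition, while pooled data across many workplaces would not be:

    from math import sqrt
    from statistics import NormalDist

    def power_two_proportions(p0, p1, n_per_group, alpha=0.05):
        """Approximate power of a two-sided test comparing two incidence proportions."""
        z_crit = NormalDist().inv_cdf(1 - alpha / 2)
        p_bar = (p0 + p1) / 2
        se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)
        se_alt = sqrt(p0 * (1 - p0) / n_per_group + p1 * (1 - p1) / n_per_group)
        return NormalDist().cdf((abs(p1 - p0) - z_crit * se_null) / se_alt)

    # Hypothetical: a 1% baseline incidence of epicondylitis, doubled to 2% among
    # packers, with either 200 or 5,000 worker-years observed in each group.
    print(round(power_two_proportions(0.01, 0.02, 200), 2))    # about 0.13
    print(round(power_two_proportions(0.01, 0.02, 5000), 2))   # about 0.98

With a couple of hundred worker-years, even a doubled rate would usually go undetected; with thousands, it would almost certainly be seen, which is why data “from thousands of workers in hundreds of workplaces” can answer the question that one employer’s records cannot.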

Finally,  Judge Easterbrook chastised the ALJ for dismissing Cat’s experience as irrelevant because many other employers will not have sufficient workforces or record keeping to offer similar evidence.  In Judge Easterbrook’s words:

“This is irrational. If the camera in a police car captures the events of a highspeed chase, the judiciary would not ignore that video just because other police cars lack cameras; likewise, if the police record an interrogation, courts will consider that information rather than wait for the day when all interrogations are recorded.”

Id. This decision illustrates why some commentators at places such as the Center for Progressive Reform get their knickers in a knot over the prospect of applying the strictures of Rule 702 to agency fact finding; they know it will make a difference.

As for the “idiosyncratic gambit,” this argument is made all too frequently in tort cases, with a similar lack of predicate.  Plaintiffs claim that there may be a genetic or epigenetic susceptibility in a very small subset of the population, and that epidemiologic studies may miss this small, sequestered risk.  Right, and the light in the refrigerator may stay on when you close the door.  Prove it!