TORTINI

For your delectation and delight, desultory dicta on the law of delicts.

The Continuing Saga of Bad-Faith Assertions of Conflicts of Interest

December 28th, 2011

Conflicts of interest (COI), real or potential, have become a weapon used to silence the manufacturing industry in various scientific debates and discussions.  Other equally “interested” parties, such as labor unions, advocacy groups, and consultants to the other industry – the litigation industry – have used conflicts and ethical claims to mute manufacturing-industry voices while engaging in unfettered false scientific speech of their own.  The public, unwilling and untrained to look at evidence on the merits, is conditioned to accept an allegation of COI as the end of the discussion on scientific issues.

Recently, journalist Shannon Brownlee criticized the FDA for its suggestion that the agency was having difficulty in finding experts who cleared the agency’s conflict-of-interest prohibitions.  Brownlee explicitly contended that she could easily find “unbiased” scientists who could advise the agency on drug and device issues.

Shannon Brownlee, “Is There an Independent Unbiased Expert in the House” (Aug. 3, 2011).

Indeed, Brownlee sent FDA Commissioner Margaret Hamburg a list of allegedly neutral experts who could advise the agency.  Brownlee gave everyone on her list a clean bill of ethical health, and has published the list on multiple occasions, both on the website Healthnewsreview.org, and a few years ago, in the British Medical Journal:  Jeanne Lenzer & Shannon Brownlee, “Is there an (unbiased) doctor in the house?” 337 Brit. Med. J. 206 (2008).

Brownlee tells us that journalists from respectable print media, including the New York Times and the Wall Street Journal, have requested the list, apparently to contact the “unbiased” experts to help investigate news stories about drugs and medical devices.  What the gullible may not appreciate is that the list is fallaciously based upon a single exclusionary criterion:  having consulted for the pharmaceutical industry.  The list omits other important COI exclusionary criteria, such as having consulted for the litigation industry, or having taken erroneous, unwarranted, and ideologically driven positions on scientific issues.

What litigation industry?  Brownlee may have missed the fact that plaintiffs’ lawyers represent a huge financial interest in obtaining compensation for others, with 40 percent of the proceeds going to themselves.  This litigation industry thrives, even with Dickie Scruggs in prison and Stanley Chesley in disrepute.

In today’s litigation environment, with aggregation of claims in federal multi-district cases, plaintiffs’ counsel stand to profit in the billions from scientific positions espoused by their expert witnesses.

Who are the litigation industry expert witnesses on Brownlee’s list?  Here are some obvious candidates:

Peter R. Breggin, MD, psychiatrist, clinical psychopharmacologist, independent author and scientist; Founder and Director Emeritus, International Center for the Study of Psychiatry and Psychology

Adriane Fugh-Berman, MD, Professor, Department of Physiology and Biophysics, Georgetown University Medical Center; Director, PharmedOut.org

Curt Furberg, MD, PhD, Professor of Public Health Sciences, Wake Forest University School of Medicine

Joseph Glenmullen, MD, Clinical instructor in psychiatry, Harvard Medical School

Bruce Psaty, MD, PhD, Professor, Medicine & Epidemiology, University of Washington Cardiovascular Health Research Unit

Also on the list were well-known anti-industry zealots, who focus almost exclusively on the manufacturing industry, while ignoring or endorsing the excesses and unwarranted claims of the litigation industry:

Lisa Bero, PhD, Professor, University of California, San Francisco

Sheldon Krimsky, PhD, Tufts University & Council for Responsible Genetics

Sidney Wolfe, MD, Director, Health Research Group of Public Citizen.

Now some people may claim that the litigation industry consultants, and the anti-industry zealots, take their positions not to please their sponsors, or to pursue lucrative opportunity, but because they fervently believe the positions that they take. But then why not give the pharmaceutical industry consultants the same benefit of the doubt?  Indeed, why not move beyond COI allegations to creating lists of scientists and physicians who have demonstrated proficiency in advancing evidence-based judgments that have withstood the test of time?

This anti-industry hypocrisy manifests not only in assertions of conflicts of interest, but also in calls for industry to disclose all underlying data from industry-funded or sponsored studies, while taking a protectionist stance on all other underlying data.

Let’s hope that in 2012, industry fights back, and evidence regains its primary role in resolving scientific disputes.

Silica Science – Junk Science is Not Limited to The Courts

December 12th, 2011

“Clowns to the left of me; Jokers to the right; here I am, stuck in the middle with you.”


Back in October, David Michaels, the head of OSHA, testified at a House congressional oversight hearing, “Workplace Safety: Ensuring a Responsible Regulatory Environment.” The Congressmen were inquiring into OSHA’s enforcement and regulatory initiatives on several fronts, including silica exposures.

This is the same David Michaels who used to be a hired expert witness for plaintiffs in toxic tort cases. See “David Michaels’ Public Relations Problem” (Dec. 2, 2011).

Not surprisingly, when the questioning turned to silica, Michaels played the cancer card:  crystalline silica is a “known” human carcinogen.

Republican Congressman Larry Bucshon (R-IN), a surgeon when he is not holding forth in Congress, found the talk of cancer to be provocative.  Bucshon scolded Michaels:

“I don’t like it when people use buzz words that try to get people’s attention, and cancer is one of those.”

* * * * *

“…I’m a thoracic surgeon, so I want to focus a little bit on what you said earlier as it relates to silica dust. I’m curious about your comment about silica-dust related lung cancer, because I’ve been a thoracic surgeon for 15 years and I’ve done a lot of lung cancer surgery, and I haven’t seen one patient that’s got it from silica dust.”

A fascinating exchange for several reasons.

First, we could expect Michaels to play the cancer card, just as he has in his role as a plaintiffs’ expert witness.  As we will see, his cancer evidence is not far-fetched, although it is also not particularly convincing.

Second, the junk science from Congressman Bucshon is distressing.  As a physician, he should know better:  his anecdotal experience in surgery has no relevance at all to the question whether crystalline silica can cause lung cancer.

Back in 1996, a working group of the World Health Organization’s International Agency for Research on Cancer (IARC) voted to reclassify crystalline silica, the most ubiquitous mineral on the face of Planet Earth, as a known human carcinogen.  Michaels recited this “evidence,” but he failed to mention that the evidence was conflicting, as were the votes of the working group.  The response of the scientific community to the IARC pronouncement was highly critical.  See Patrick A. Hessel, John F. Gamble, J. Bernard L. Gee, Graham Gibbs, Francis H.Y. Green, W. Keith C. Morgan, and Brooke T. Mossman, “Silica, Silicosis, and Lung Cancer: A Response to a Recent Working Group Report,” 42 J. Occup. Envt’l Med. 704 (2000).

The vote of the working group was very close; indeed, the swing of a single vote would have changed the outcome. One of the working group members later wrote:

“Some equally expert panel of scientists presented with the same information on another occasion could of course have reached a different verdict. The evidence was conflicting and difficult to assess and such judgments are essentially subjective.”

Corbett McDonald & Nicola Cherry, “Crystalline Silica and Lung Cancer:  The Problem of Conflicting Evidence,” 8 Indoor Built Environment 121, 121 (1999).  Remarkably, this panel member explained his decision to vote for reclassification as follows:

“The basic problem was that the evidence for carcinogenicity was conflicting – generally absent in situations of high and widespread exposure and strong only in a few rather special occupations.  The advice by the IARC to consider hazard rather than risk did much to resolve the difficulty.”

Id. at 125.  I suspect that the evidence for a difference in meaning between “hazard” and “risk” is even more tenuous and conflicting than the evidence in favor of carcinogenicity.

IARC classifications, however, take on a life of their own.  They are an invitation to stop thinking, and to stop analyzing the evidence.  Federal bureaucrats and staff scientists love them for exactly this reason:  they can hide behind the authority of the WHO without having to work on reviewing the evidence, or updating their judgment when new studies come out.

It should not be surprising, therefore, that the National Institutes of Health’s National Toxicology Program (NTP), working off the WHO decision, recognized crystalline silica as a human carcinogen.  Other agencies and medical groups followed in lockstep.

What you will not hear from Michaels or his followers is that when the National Institute for Occupational Safety and Health conducted the largest mortality study on the issue, it found a decreased lung cancer risk among men who actually had sufficient silica exposure to develop silicosis. See Geoffrey Calvert, et al., “Occupational silica exposure and risk of various diseases:  an analysis using death certificates from 27 states of the United States,” 60 Occup. Envt’l Med. 122 (2003).  Cf. “Congressman tells OSHA chief not to use “buzz” words like cancer.” (Oct. 10, 2011).

To give the devil his due, at least Michaels had “some” evidence to support his pronouncement, even if the evidence was incomplete and contradicted by other important evidence.  Congressman Bucshon’s recitation of his experience as a surgeon was completely off the mark.  His staffers obviously failed him in their research, and Bucshon’s reliance upon his own anecdotal experience was quite inappropriate to rebut the dubious judgment of the OSHA Administrator.

Some people might describe the exchange between Bucshon and Michaels as resembling two monkeys playing chess.  I think of it as exemplifying the scientific illiteracy in all three branches of our government.

David Michaels’ Public Relations Problem

December 2nd, 2011

OSHA requires strong, credible leadership from someone who will not outrun his scientific headlights, while at the same time enforcing standards that protect workers. President Obama made a serious error in appointing David Michaels, whose scientific and enforcement bona fides are weak.

Michaels has made a career out of targeting industry for perceived ethical lapses, yet he has routinely failed to make adequate disclosures himself, and some of his disclosures are downright deceptive.  This hypocrisy might be shrugged off as part of the politicization of occupational and environmental medicine, except that Michaels is now an Undersecretary of Labor.  When his agency starts handing out legal opinion letters to his former employers in the United States litigation industry, Michaels’ hypocrisy becomes something of a public nuisance and a scandal.  See “Manufacturing Certainty” (Oct. 25, 2011).  The Department of Labor’s “Dear Mr. Wodka” letter can now be found online at OSHA’s website.

Well before David Michaels became head of OSHA, his hypocrisy over conflicts of interest was noteworthy.  See “Hypocrisy In Conflict Disclosure Rules.”  In his book, Doubt is Their Product: How Industry’s War on Science Threatens Your Health (2008), Michaels provides no disclosure of his prior activities and testimonial adventures on behalf of the litigation industry.  There is, among his acknowledgments, a tip of the hat to friends and colleagues, such as Steven Wodka.  Wodka is a plaintiffs’ lawyer who retained and paid Michaels in various litigations, but you will not learn that from reading Doubt is Their Product.  Not surprisingly, this book is waved around by plaintiffs’ counsel in cross-examinations in courtrooms all across the United States.

Michaels does reveal that his organization, The Project on Scientific Knowledge and Public Policy (SKAPP), accepted funding from “the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability Litigation.”  This revelation is, however, quite misleading.  The “Trust” is a fund for plaintiffs’ counsel in the silicone gel breast implant litigation, which was diverted to help support Michaels, and others who would advocate against evidence-based limitations to expert witness opinions.

Michaels insists that SKAPP accepts only unrestricted funding, but this insistence is also misleading.  Plaintiffs’ counsel could feel safe putting “their” money into the coffers of SKAPP, which was openly committed to undermining the implementation of evidence-based standards for causation opinion testimony in federal and state courts.  If the manufacturing industry, as opposed to the litigation industry, funded a not-for-profit, headed up by one of its testifying expert witnesses, most folks would call this maneuver “money laundering.”  Dirty money is dirty money, regardless of whose ox is gored.  See also David Michaels & Celeste Monforton, “Scientific Evidence and the Regulatory System: Manufacturing Uncertainty and the Demise of the Formal Regulatory System,” 18 J. Law & Policy 17 (2005) (“Major support for SKAPP is provided by the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability Litigation.”).

Other anemic or absent conflict of interest disclosures abound in Michaels’ publications.

Michaels has been involved in at least four different mass tort litigations, involving alleged injuries from exposures to asbestos, ortho-toluidine, beryllium, and vinyl chloride.  He has collaborated with Wodka in three of these litigations, by serving as Wodka’s expert witness.  This litigation collaboration should raise serious questions about the “Dear Mr. Wodka” letter.

Asbestos Litigation

Michaels has written several publications about health outcomes in sheet metal workers.  The premise of these papers is that the workers were exposed to asbestos, and they might have greater than expected cancer mortality as a result.  Most of Michaels’ papers fail to reveal that he consulted and testified for asbestos claimants.  See, e.g., David Michaels & Stephen Zoloth, “Asbestos Disease in Sheet Metal Workers: Proportional Mortality Update,” 13 Am. J. Indus. Med. 731-734 (1988).

One of Michaels’ publications on asbestos exposure and health outcomes does contain a disclosure, which even reveals on which side of asbestos litigation he worked:

“This work was supported by the Sheet Metal Occupational Health Institute Trust. Drs. Welch, Michaels, and Dement have worked as consultants for law firms representing individuals with asbestos-related disease. None of the authors have a financial interest in any organization that could profit from the research presented here.”

Laura Welch, Elizabeth Haile, John Dement, and David Michaels, “Change in Prevalence of Asbestos-Related Disease Among Sheet Metal Workers 1986 to 2004,” 131 Chest 863, 863 (2007).  Note the advocacy even in the disclosure.  Law firms that represent only individuals with asbestos-related disease!  Do we infer from this that Michaels did not consult for any law firms that represented individuals who claimed asbestos-related disease, but where the truth in God’s eye would have it that their claims were erroneous?  Perhaps the Principle of Charity requires us to infer that Michaels meant to disclose that he consulted for firms that represented persons claiming asbestos-related disease.  Having read Michaels’ litigation testimony, however, I think he really meant to say what appears in the article.

There have been many thousands of asbestos cases, most of which have been settled or dismissed.  It is thus difficult to know exactly how many asbestos cases have seen the consulting work of David Michaels.  Clearly, however, some of Michaels’ asbestos testimony was given at the request of Steve Wodka, for Wodka’s clients.  See David Michaels deposition testimony at p. 41,  in Nicastro v. Aceto Corp., New Jersey Superior Court, Law Division for Monmouth County, Docket No. L-3062-08 (Sept. 2, 2009).

Ortho-Toluidine Litigation

According to federal Magistrate Judge H. Kenneth Schroeder, Jr., Steve Wodka represents numerous plaintiffs who claim to have been harmed by exposure to ortho-toluidine.  David Michaels is a common fixture in these cases brought by Wodka.  See Pardee v. E.I. DuPont Nemours & Co., Case 1:07-cv-00268-WMS-HKS Document 29 (W.D.N.Y. March 31, 2008).  Faced with losing his expert witness to OSHA, Wodka noticed a trial deposition de bene esse of David Michaels in several cases.

Michaels was permitted to give his testimony, before moving into his OSHA position, in the following cases:

Pardee v. E.I. DuPont Nemours & Co., W.D.N.Y., No. 07-CV-0268S(Sr)

Band v. E.I. DuPont Nemours & Co., W.D.N.Y., No. 07-CV-0267S(Sr)

Weist v. E.I. DuPont Nemours & Co., W.D.N.Y., No. 05-CV-0534A(Sr)

Nicastro v. Aceto Corp., New Jersey Superior Court, Law Division for Monmouth County, Docket No. L-3062-08

Polyvinyl Chloride Litigation

David Michaels served as a plaintiffs’ expert witness in at least one PVC case, Lattin v. Borden Chemical Co., New Jersey Superior Court, Law Div. Mercer Cty. Docket No. L-3850-01.  Mr. Wodka was the attorney for plaintiff.

Beryllium Litigation

One of David Michaels’ publications criticized the beryllium industry, on grounds that it advanced weak scientific data and arguments against changes in permissible exposure limits. David Michaels & Celeste Monforton, “Beryllium’s Public Relations Problem: Protecting Workers When There Is No Safe Exposure Level,” 123 Public Health Reports 79 (2008).  In this article, Michaels acknowledges that he “served as an expert witness in a civil suit involving chronic beryllium disease.”  Apparently, Michaels forgot to point out that he was paid for his services, and that the payor was the claimant, whose interests he was advancing in his paper.

Marc Kolanz, writing for one of the companies sued over beryllium health claims, noted in rebuttal that:

“Dr. Michaels is a paid expert witness in beryllium litigation.  Dr. Michaels’ has not published beryllium industrial hygiene or medical research; however, he has provided litigation support serving as a paid expert witness for plaintiffs in beryllium litigation. Consistent with this role, as a hired advocate for plaintiff’s counsel, he has sought to ‘manufacture certainty’ by applying a hindsight approach to criticize the good works of dedicated beryllium researchers.”

Marc Kolanz, “Beryllium History and Public Policy,” 123 Public Health Reports 423, 427 (2008).

Michaels was an expert witness for Philadelphia plaintiffs’ attorney Ed Reeves in the Lonnie Pierce case, in Pennsylvania.

*   *   *   *

There is nothing ignoble or disreputable in serving as an expert witness.  Indeed, real experts may well have an obligation to make their expertise available to the civil and criminal justice system.  What is unseemly is the incessant hypocrisy in accusing the manufacturing industry of conflicts of interest, while hiding and misrepresenting litigation industry conflicts.  David Michaels has been in the forefront of this hypocrisy.  The “Dear Mr. Wodka” letter deserves more scrutiny under the principles that Michaels has advocated for the manufacturing industry.

Lording the Data – Scientific Fraud

November 10th, 2011

Last week, the New York Times published a news story about psychologist Diederik Stapel, of the Netherlands.  Tilburg University accused him of having committed research fraud in several dozen published papers, including some in the journal Science, the official journal of the AAAS.  See Benedict Carey, “Fraud Case Seen as a Red Flag for Psychology Research: Noted Dutch Psychologist, Stapel, Accused of Research Fraud,” New York Times (Nov. 2, 2011).  The Times expressed surprise over the suggestion that psychology is plagued by fraud and sloppy research.  The real surprise is that there are not more stories in the lay media over the poor quality of scientific research.  Readers of Retraction Watch and of the Office of Research Integrity’s blog will recognize how commonplace Stapel’s fraud is.

Stapel’s fraud has wide-ranging implications for the doctoral students, whose dissertations he supervised, and for colleagues, with whom he collaborated.  Stapel apologized and expressed his regret, but his conduct leaves a large body of his work, and that of others, under a cloud of suspicion.

Lording the Data

The University committee reported that Stapel had escaped detection for a long time because he was “lord of the data,” refusing to disclose or share the data.

“Outright fraud may be rare, these experts say, but they contend that Dr. Stapel took advantage of a system that allows researchers to operate in near secrecy and massage data to find what they want to find, without much fear of being challenged.”

Benedict Carey, “Fraud Case,” New York Times (Nov. 2, 2011).  Data sharing is preached but rarely practiced.

In a recent publication, Dr. Wicherts and his colleagues, at the University of Amsterdam, reported that two-thirds of their sample of Dutch research psychologists refused to share their data, in contravention of the established ethical rules of the discipline.  Remarkably, many of the refuseniks had explicit contractual obligations with their publishing journals to provide data.  Jelte Wicherts, Marjan Bakker, Dylan Molenaar, “Willingness to Share Research Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results,” PLoS ONE 6(11): e26828 (Nov. 2, 2011).

Scientific fraud seems no more common among scientists with industry ties, which are so often the subject of ad hominem conflict of interest claims, than among scientists without such ties.  Instead, fraudfeasors such as Stapel or Hwang Woo-suk are more often simply egotistical, narcissistic, self-aggrandizing, self-promoting, or delusional.  In the United States, litigation occasionally has brought out charlatans, but it has also resulted in high-quality studies that have provided strong evidence for or against litigation claims.  Compare Hon. Jack B. Weinstein, “Preliminary Reflections on Administration of Complex Litigation” 2009 Cardozo L. Rev. de novo 1, 14 (2009) (describing plaintiffs’ expert witnesses in silicone litigation as “charlatans” and the litigation as largely based upon fraud) with Committee on the Safety of Silicone Breast Implants, Institute of Medicine, Safety of Silicone Breast Implants (Wash. D.C. 1999) (reviewing studies, many of which were commissioned by litigation defendants, and which collectively showed lack of association between silicone and autoimmune diseases).

The relation between litigation and research is one that has typically been approached by self-righteous voices, such as David Michaels and David Egilman, and others who have their own deep conflicts of interest.  What is clear is that all litigants, as well as the public, would benefit from enforcing data sharing requirements.  See “Litigation and Research” (April 15, 2007) (science should not be built upon blind trust of scientists: “Nullius in verba.”).

The Times article emphasized Wicherts’ research about lack of data sharing, and suggested that data sharing could improve the quality of scientific publications.  The time may have come, however, for sterner measures, such as civil and criminal penalties for scientists who abuse and waste governmental funding, or who aid and abet fraudulent litigation.

Manufacturing Certainty

October 25th, 2011

Steven Wodka is a plaintiffs’ lawyer, based in New Jersey, who has worked closely, for many years, with Dr. David Michaels, as his paid expert witness.  Yes, the David Michaels who is now the head of the Occupational Safety and Health Administration (OSHA).

When Michaels was nominated for his current post, the Democratic majority leaders in the Senate protected him from hearings, which would have revealed Michaels’ deep and disturbing conflicts of interest.  The Democratic Senators succeeded in their efforts, and Michaels was confirmed as undersecretary of the Department of Labor, on a voice vote, without hearings.

Mr. Wodka may have lost his friend, colleague, and expert witness to OSHA, but at the same time he gained an ally in his litigation efforts on behalf of plaintiffs.  Wodka, who litigates in New Jersey and elsewhere, was troubled by court decisions holding that OSHA’s Hazard Communication regulations preempted his state-law tort claims.  See, e.g., Bass v. Air Products, 2006 WL 1419375 (N.J. App. Div. 2006) (holding that OSHA’s hazard communication standard was a comprehensive regulatory scheme that preempted state tort failure-to-warn claims for warnings that complied with federal regulations).

Wodka may have lost his expert witness (for a while), but he gained an inside track to the Department of Labor.  Disappointed by New Jersey’s appellate court, Wodka sought an advisory opinion from the Department of Labor on the preemptive effect of HazCom.  See David Schwartz, “Solicitor Says Hazard Communication Rule Does Not Preempt Failure-to-Warn Lawsuits,” BNA (October 20, 2011).

The Department of Labor, now under the control of his friend and paid expert witness, Dr. Michaels, did not disappoint.  Solicitor of Labor M. Patricia Smith, in a letter dated October 18, 2011, wrote Mr. Wodka that, notwithstanding what the appellate courts may have told him, he was correct after all.  OSHA’s Hazard Communication Standard, 29 C.F.R. § 1910.1200(a)(2), does not, according to the Department, preempt state tort claims alleging failures to warn.

The solicitor relied upon Section 4(b)(4) of the OSH Act, which states that nothing in the Act is intended to “enlarge or diminish or affect in any other manner the common law or statutory rights, duties or liabilities of employers and employees under any law with respect to injuries, diseases, or death arising out of, or in the course of, employment.”  The OSH Act, however, in making this disclaimer, was focused on the employer-employee relationship, with its attendant duties, rights, and obligations.  Failure-to-warn claims arise out of laws, whether statutory or common law, designed to protect consumers.  The solicitor’s analysis misses the key point that a comprehensive scheme, such as the HazCom standard and its regulations, applies to strangers to the employer-employee relationship, and constrains the nature and content of warning communications to the employees of purchasers of chemical products and raw materials.

The solicitor was clear that “a definitive determination of conflict can only be made based on the particulars of each case.”  Smith Letter, at footnote 4.  This slight speedbump did not slow down Mr. Wodka, who was quoted by the BNA as saying that “[t]his letter makes the question clear,” and “I’m already going to move for reconsideration of one of my cases based on this letter.”

It is good to have friends in powerful places.

Of course, there is a good deal of irony involved in this story.  David Michaels has made a career out of scolding industry over conflicts of interest.  Michaels’ book, Doubt is Their Product, gets waved around in courtrooms, when defense expert witnesses testify that the plaintiffs’ evidence fails to show that a product causes harm, or has caused plaintiff’s harm.  Some people may find this scolding a little irritating, especially from someone, like Michaels, who fails to disclose his own significant conflicts of interest, from monies received as a testifying and consulting expert witness, and from running an organization,  The Project on Scientific Knowledge and Public Policy (SKAPP),  bankrolled by the plaintiffs’ counsel in the silicone gel breast implant litigation.

Doubt is not such a bad thing in the face of uncertain and inconclusive evidence.  We could use more doubt, and open-minded thought.  As Bertrand Russell wrote some years ago:

“The biggest cause of trouble in the world today is that the stupid people are so sure about things and the intelligent folks are so full of doubts.”

Reference Manual on Scientific Evidence v3.0 – Disregarding Study Validity in Favor of the “Whole Gamish”

October 14th, 2011

There is much to digest in the new Reference Manual on Scientific Evidence, third edition (RMSE 3d).  Much of it is solid information on the individual scientific and technical disciplines covered.  Although the information is easily available from other sources, there is some value in collecting the material in a single volume for the convenience of judges.  Of course, given that this information is provided to judges from an ostensibly neutral, credible source, lawyers will naturally focus on what is doubtful or controversial in the RMSE.

I have already noted some preliminary concerns, however, with some of the comments in the Preface, by Judge Kessler and Dr. Kassirer.  See “New Reference Manual’s Uneven Treatment of Conflicts of Interest.”  In addition, there is a good deal of overlap among the chapters on statistics, epidemiology, and medical testimony.  This overlap is at first blush troubling because the RMSE has the potential to confuse and obscure issues by having multiple authors address them inconsistently.  This is an area where reviewers should pay close attention.

From first looks at the RMSE 3d, there is a good deal of equivocation between encouraging judges to look at scientific validity, and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.  (As I have pointed out, the new RMSE did not do quite so well in addressing its own conflicts of interest.  See “Toxicology for Judges – The New Reference Manual on Scientific Evidence (2011).”)

The strengths of the chapter on statistical evidence, updated from the second edition, remain, as do some of the strengths and flaws of the chapter on epidemiology.  I hope to write more about each of these important chapters at a later date.

The late Professor Margaret Berger has an updated version of her chapter from the second edition, “The Admissibility of Expert Testimony,” RMSE 3d 11 (2011).  Berger’s chapter has a section criticizing “atomization,” a process she describes pejoratively as a “slicing-and-dicing” approach.  Id. at 19.  Drawing on the publications of Daubert-critic Susan Haack, Berger rejects the notion that courts should examine the reliability of each study independently. Id. at 20 & n. 51 (citing Susan Haack, “An Epistemologist in the Bramble-Bush: At the Supreme Court with Mr. Joiner,” 26 J. Health Pol. Pol’y & L. 217–37 (1999)).  Berger contends that the “proper” scientific method, as evidenced by works of the International Agency for Research on Cancer, the Institute of Medicine, the National Institutes of Health, the National Research Council, and the National Institute for Environmental Health Sciences, “is to consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.” Id. at 19-20 & n.52.  This contention, however, is profoundly misleading.  Of course, scientists undertaking a systematic review should identify all the relevant studies, but some of the “relevant” studies may well be insufficiently reliable (because of internal or external validity issues) to answer the research question at hand. All the cited agencies, and other research organizations and researchers, exclude studies that are fundamentally flawed, whether as a result of bias, confounding, erroneous data analyses, or related problems.  Berger cites no support for the remarkable suggestion that scientists do not make “reliability” judgments about available studies when assessing the “totality of the evidence.”

Professor Berger, who had a distinguished career as a law professor and evidence scholar, died in November 2010.  She was no friend of Daubert, but remarkably her antipathy has outlived her.  Her critical discussion of “atomization” cites the notorious decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), which was decided four months after her passing. Id. at 20 n.51. (The editors note that the published chapter was Berger’s last revision, with “a few edits to respond to suggestions by reviewers.”)

Professor Berger’s contention about the need to avoid assessments of individual studies in favor of the whole gamish must also be rejected because Federal Rule of Evidence 703 requires that each study considered by an expert witness “qualify” for reasonable reliance by virtue of the study’s containing facts or data that are “of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.”  One of the deeply troubling aspects of the Milward decision is that it reversed the trial court’s sensible decision to exclude a toxicologist, Dr. Martyn Smith, who outran his headlights on issues having to do with a field in which he was clearly inexperienced – epidemiology.

Scientific studies, and especially epidemiologic studies, involve multiple levels of hearsay.  A typical epidemiologic study may contain hearsay leaps from patient to clinician, to laboratory technicians, to specialists interpreting test results, back to the clinician for a diagnosis, to a nosologist for disease coding, to a national or hospital database, to a researcher querying the database, to a statistician analyzing the data, to a manuscript that details data, analyses, and results, to editors and peer reviewers, back to study authors, and on to publication.  Those leaps do not mean that the final results are untrustworthy, only that the study itself is not likely admissible in evidence.

The inadmissibility of scientific studies is not problematic because Rule 703 permits testifying expert witnesses to formulate opinions based upon facts and data, which are not themselves admissible in evidence. The distinction between relied upon, and admissible, studies is codified in the Federal Rules of Evidence, and in virtually every state’s evidence law.

Referring to studies, without qualification, as admissible in themselves is wrong as a matter of evidence law.  The error has the potential to encourage carelessness in gatekeeping expert witnesses’ opinions for their reliance upon inadmissible studies.  The error is doubly wrong if this approach to expert witness gatekeeping is taken as license to permit expert witnesses to rely upon any marginally relevant study of their choosing.  It is therefore disconcerting that the new Reference Manual on Scientific Evidence (RMSE 3d) fails to make the appropriate distinction between admissibility of studies and admissibility of expert witness opinion that has reasonably relied upon appropriate studies.

Consider the following statement from the chapter on epidemiology:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible,184 as it tends to make an issue in dispute more or less likely.185

RMSE 3d at 610.  Curiously, the authors of this chapter have ignored Professor Berger’s caution against slicing and dicing, and speak to a single study’s ability to justify a conclusion. The authors of the epidemiology chapter seem to be stressing that scientifically valid studies should be admissible.  The footnote emphasizes the point:

See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990); cf. Kehm v. Procter & Gamble Co., 580 F. Supp. 890, 902 (N.D. Iowa 1982) (“These [epidemiologic] studies were highly probative on the issue of causation—they all concluded that an association between tampon use and menstrually related TSS [toxic shock syndrome] cases exists.”), aff’d, 724 F.2d 613 (8th Cir. 1984). Hearsay concerns may limit the independent admissibility of the study, but the study could be relied on by an expert in forming an opinion and may be admissible pursuant to Fed. R. Evid. 703 as part of the underlying facts or data relied on by the expert. In Ellis v. International Playtex, Inc., 745 F.2d 292, 303 (4th Cir. 1984), the court concluded that certain epidemiologic studies were admissible despite criticism of the methodology used in the studies. The court held that the claims of bias went to the studies’ weight rather than their admissibility. Cf. Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1109 (5th Cir. 1991) (“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility. . . .”).”

RMSE 3d at 610 n.184 (emphasis in bold, added).  This statement, that studies relied upon by an expert in forming an opinion may be admissible pursuant to Rule 703, is unsupported by Rule 703 and the overwhelming weight of case law interpreting and applying the rule.  (Interestingly, the authors of this chapter seem to abandon their suggestion that studies relied upon “might qualify for the learned treatise exception to the hearsay rule, Fed. R. Evid. 803(18), or possibly the catchall exceptions, Fed. R. Evid. 803(24) & 804(5),” which was part of their argument in the Second Edition of the RMSE.  RMSE 2d at 335 (2000).)  See also RMSE 3d at 214 (discussing statistical studies as generally “admissible,” but acknowledging that admissibility may be no more than permission to explain the basis for an expert’s opinion).

The cases cited by the epidemiology chapter, Kehm and Ellis, both involved “factual findings” in public investigative or evaluative reports, which were independently admissible under Federal Rule of Evidence 803(8)(C).  See Ellis, 745 F.2d at 299-303; Kehm, 724 F.2d at 617-18.  As such, the cases hardly support the chapter’s suggestion that Rule 703 is a rule of admissibility for epidemiologic studies.

Here the RMSE, in one sentence, confuses Rule 703 with an exception to the rule against hearsay, which would prevent the statistical studies from being received in evidence.  The point is reasonably clear, however, that the studies “may be offered” to explain an expert witness’s opinion.  Under Rule 705, that offer may also be refused. The offer, however, is to “explain,” not to have the studies admitted in evidence.

The RMSE is certainly not alone in advancing this notion that studies are themselves admissible.  Other well-respected evidence scholars lapse into this position:

“Well conducted studies are uniformly admitted.”

David L. Faigman, et al., Modern Scientific Evidence:  The Law and Science of Expert Testimony v.1, § 23:1, at 206 (2009).

Evidence scholars should not conflate admissibility of the epidemiologic (or other) studies with the ability of an expert witness to advert to a study to explain his or her opinion.  The testifying expert witness really has no need to become a conduit for off-hand comments and opinions in the introduction or discussion section of relied upon articles, and the wholesale admission of such hearsay opinions undermines the court’s control over opinion evidence.  Rule 703 authorizes reasonable reliance upon “facts and data,” not every opinion that creeps into the published literature.

New Reference Manual’s Uneven Treatment of Conflicts of Interest

October 12th, 2011

The new, third edition of the Reference Manual on Scientific Evidence (RMSE) appears to get off to a good start in the Preface by Judge Kessler and Dr. Kassirer, when they note that the Supreme Court mandated federal courts to

“examine the scientific basis of expert testimony to ensure that it meets the same rigorous standard employed by scientific researchers and practitioners outside the courtroom.”

RMSE at xiii.  The preface falters, however, on two key issues, causation and conflicts of interest, which are taken up as an introduction to the new volume.

1. CAUSATION

The authors tell us in squishy terms that causal assessments are judgments:

“Fundamentally, the task is an inferential process of weighing evidence and using judgment to conclude whether or not an effect is the result of some stimulus. Judgment is required even when using sophisticated statistical methods. Such methods can provide powerful evidence of associations between variables, but they cannot prove that a causal relationship exists. Theories of causation (evolution, for example) lose their designation as theories only if the scientific community has rejected alternative theories and accepted the causal relationship as fact. Elements that are often considered in helping to establish a causal relationship include predisposing factors, proximity of a stimulus to its putative outcome, the strength of the stimulus, and the strength of the events in a causal chain.”

RMSE at xiv.

The authors leave the inferential process as a matter of “weighing evidence,” but without saying anything about how the scientific community does its “weighing.”  Language about “proving” causation is also unclear because “proof” in scientific parlance connotes a demonstration, which we typically find in logic or in mathematics.  Proving empirical propositions suggests a bar set so high that the courts must inevitably lower it considerably.  The question is, of course, how low judges will go to admit evidence.

The authors thus introduce hand waving and excuses for why evidence can be weighed differently in court proceedings from the world of science:

“Unfortunately, judges may be in a less favorable position than scientists to make causal assessments. Scientists may delay their decision while they or others gather more data. Judges, on the other hand, must rule on causation based on existing information. Concepts of causation familiar to scientists (no matter what stripe) may not resonate with judges who are asked to rule on general causation (i.e., is a particular stimulus known to produce a particular reaction) or specific causation (i.e., did a particular stimulus cause a particular consequence in a specific instance). In the final analysis, a judge does not have the option of suspending judgment until more information is available, but must decide after considering the best available science.”

RMSE at xiv.  But the “best available science” may be pretty crummy, and the temptation to turn desperation into evidence (“well, it’s the best we have now”) is often severe.  The authors of the Preface signal that “inconclusive” is not a judgment open to judges charged with expert witness gatekeeping.  If the authors truly mean to suggest that judges should go with whatever is dished out as “the best available science,” then they have overlooked the obvious:  Rule 702 opens the door to “scientific, technical, or other specialized knowledge,” not to hunches, suggestive but inconclusive evidence, and wishful thinking about how the science may turn out when further along.  Courts have a choice to exclude expert witness opinion testimony that is based upon incomplete or inconclusive evidence.

2. CONFLICTS OF INTEREST

Surprisingly, given the scope of the scientific areas covered in the RMSE, the authors discuss conflicts of interest (COI) at some length.  Conflicts of interest are a fact of life in all endeavors, and it is understandable to counsel judges and juries to try to identify, assess, and control them.  COIs, however, are weak proxies for unreliability.  The emphasis given here is undue because federal judges are misled into thinking that they can discern unreliability from COI, when they should be focused on the data and the analysis.

The authors of the Preface set about to use COI as a basis for giving litigation plaintiffs a pass, and for holding back studies sponsored by corporate defendants.

“Conflict of interest manifests as bias, and given the high stakes and adversarial nature of many courtroom proceedings, bias can have a major influence on evidence, testimony, and decisionmaking. Conflicts of interest take many forms and can be based on religious, social, political, or other personal convictions. The biases that these convictions can induce may range from serious to extreme, but these intrinsic influences and the biases they can induce are difficult to identify. Even individuals with such prejudices may not appreciate that they have them, nor may they realize that their interpretations of scientific issues may be biased by them. Because of these limitations, we consider here only financial conflicts of interest; such conflicts are discoverable. Nonetheless, even though financial conflicts can be identified, having such a conflict, even one involving huge sums of money, does not necessarily mean that a given individual will be biased. Having a financial relationship with a commercial entity produces a conflict of interest, but it does not inevitably evoke bias. In science, financial conflict of interest is often accompanied by disclosure of the relationship, leaving to the public the decision whether the interpretation might be tainted. Needless to say, such an assessment may be difficult. The problem is compounded in scientific publications by obscure ways in which the conflicts are reported and by a lack of disclosure of dollar amounts.

Judges and juries, however, must consider financial conflicts of interest when assessing scientific testimony. The threshold for pursuing the possibility of bias must be low. In some instances, judges have been frustrated in identifying expert witnesses who are free of conflict of interest because entire fields of science seem to be co-opted by payments from industry. Judges must also be aware that the research methods of studies funded specifically for purposes of litigation could favor one of the parties. Though awareness of such financial conflicts in itself is not necessarily predictive of bias, such information should be sought and evaluated as part of the deliberations.”

RMSE at xiv-xv.  All in all, rather misleading advice.  Financial conflicts are not the only conflicts that can be “discovered.”  Often expert witnesses will have political and organizational alignments, which reveal deep-seated ideological sympathies with the party for which they are testifying.  For instance, in one silicosis case, an expert witness in the field of history of medicine testified, at an examination before trial, that his father suffered from a silica-related disease.  This witness’s alignment with Marxist historians and his identification with radical labor movements made his non-financial conflicts obvious, although these COI would not necessarily have been apparent from his scholarly publications alone.

How low will the bar be set for discovering COI?  If testifying expert witnesses are relying upon textbooks, articles, essays, will federal courts open the authors/hearsay declarants up to searching discovery of their finances?

Also misleading is the suggestion that “entire fields of science seem to be co-opted by payments from industry.”  Do the authors mean to exclude the plaintiffs’ lawyer litigation industry, which has grown so large and politically powerful in this country?  In litigations in which I have been involved, I have certainly seen plaintiffs’ counsel, or their proxies – labor unions or “victim support groups” – provide substantial funding for studies.  The Preface authors themselves show an untoward bias by their pointing out industry payments without giving balanced attention to other interested parties’ funding of scientific studies.

The attention to COI is also surprising given that one of the key chapters, for toxic tort practitioners, was written by Dr. Bernard D. Goldstein, who has testified in toxic tort cases, mostly (but not exclusively) for plaintiffs.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006); Exxon Corp. v. Makofski, 116 SW 3d 176 (Tex. Ct. App. 2003).  The Makofski case is particularly interesting because Dr. Goldstein was forced to explain why he was willing to opine that benzene caused acute lymphocytic leukemia, despite the plethora of published studies finding no statistically significant relationship.  Dr. Goldstein resorted to the inaccurate notion that scientific “proof” of causation requires 95 percent certainty, whereas he imposed only a 51 percent certainty for his medico-legal testimonial adventures.  Dr. Goldstein also attempted to justify the discrepancy from the published literature by adverting to the lower standards used by federal regulatory agencies and treating physicians.  Id.

These explanations are particularly concerning because they reflect basic errors in statistics and in causal reasoning.  The 95 percent derives from the use of the same percentage in confidence intervals, but the probability involved there is not the probability of the association’s being correct, and it has nothing to do with the degree of belief that an association is real or causal.  (Thankfully the RMSE chapter on statistics gets this right, but my fear is that judges will skip over the more demanding chapter on statistics and place undue weight on the toxicology chapter, written by Dr. Goldstein.)  The reference to federal agencies (OSHA, EPA, etc.) and to treating physicians was meant, no doubt, to invoke precautionary principle concepts as a justification for some vague, ill-defined, lower standard of causal assessment.
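The point can be made concrete with a short simulation.  The sketch below, in Python, is my own illustration, not anything drawn from the Reference Manual, from Dr. Goldstein’s testimony, or from the cited cases; the sample size, effect size, and number of simulated studies are all hypothetical.  It shows that the “95 percent” in a 95 percent confidence interval describes how often the interval-constructing procedure captures the true value over many repetitions, which is not the probability that any particular reported association is real, much less causal.

import random
import statistics

random.seed(1)

TRUE_MEAN = 0.0      # hypothetical true effect (here, no effect at all)
SAMPLE_SIZE = 50     # hypothetical study size
N_STUDIES = 10_000   # number of simulated replications
Z = 1.96             # approximate 97.5th percentile of the standard normal

covered = 0
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(SAMPLE_SIZE)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / SAMPLE_SIZE ** 0.5
    lower, upper = mean - Z * se, mean + Z * se
    if lower <= TRUE_MEAN <= upper:
        covered += 1

# Roughly 95 percent of the simulated intervals cover the true value: the
# "95 percent" describes the long-run performance of the interval-building
# procedure over repeated sampling.  It is not the probability that any one
# study's association is correct, and it says nothing about causation.
print(f"Coverage over {N_STUDIES} simulated studies: {covered / N_STUDIES:.3f}")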

The Preface authors might well have taken their own counsel and conducted a more searching assessment of COI among authors of Reference Manual.  Better yet, the authors might have focused the judiciary on the data and the analysis.

Toxicology for Judges – The New Reference Manual on Scientific Evidence (2011)

October 5th, 2011

I have begun to dip into the massive third edition of the Reference Manual on Scientific Evidence.  To date, there have been only a couple of acknowledgments of this new work, which was released to the public on September 28, 2011.  See “A New Day – A New Edition of the Reference Manual of Scientific Evidence”; and David Kaye, “Prometheus Unbound: Releasing the New Edition of the FJC Reference Manual on Scientific Evidence.”

As in previous editions, the substantive scientific areas are covered in discrete chapters, written by subject matter specialists, often along with a lawyer who addresses the legal implications and judicial treatment of that subject matter.  From my perspective, the chapters on statistics, epidemiology, and toxicology are the most important in my practice and in teaching, and I decided to start with the toxicology chapter.  That chapter, “Reference Guide on Toxicology,” is written in the third edition by Professor Bernard D. Goldstein, of the University of Pittsburgh Graduate School of Public Health, and Mary Sue Henifin, a partner in the law firm of Buchanan Ingersoll, P.C.

CONFLICTS OF INTEREST

At the question and answer session of the public release ceremony, one gentleman rose to note that some of the authors were lawyers with big firm affiliations, which he supposed must mean that they represent mostly defendants.  Based upon his premise, he asked what the review committee had done to ensure that conflicts of interest did not skew or distort the discussions in the affected chapters.  Dr. Kassirer and Judge Kessler responded by pointing out that the chapters were peer reviewed by outside reviewers, and reviewed by members of the supervising review committee.  The questioner seemed reassured, but now that I have looked at the toxicology chapter, I am not so sure.

The questioner’s premise that a member of a large firm will represent mostly defendants and thus have a pro-defense  bias is probably a common perception among unsophisticated lay observers.  What is missing from their analysis is the realization that although gatekeeping helps the defense lawyers’ clients, it takes away legal work from firms that represent defendants in the litigations that are pretermitted by effective judicial gatekeeping.  Erosion of gatekeeping concepts, however, inures to the benefit of plaintiffs, their counsel, as well as the expert witnesses engaged on behalf of plaintiffs in litigation.

The questioner’s supposition in the case of the toxicology chapter, however, is doubly flawed.  If he had known more about the authors, he would probably not have asked his question.  First, the lawyer author, Ms. Henifin, is known for having taken virulently anti-manufacturer positions.  See Richard M. Lynch and Mary S. Henifin, “Causation in Occupational Disease: Balancing Epidemiology, Law and Manufacturer Conduct,” 9 Risk: Health, Safety & Environment 259, 269 (1998) (conflating distinct causal and liability concepts, and arguing that legal and scientific causal criteria should be abrogated when manufacturing defendant has breached a duty of care).

As for the scientist author of the toxicology chapter, Professor Goldstein, the casual reader of the chapter may want to know that he has testified in any number of toxic tort cases, almost invariably on the plaintiffs’ side.  Unlike the defense lawyer, who loses business revenue when courts shut down unreliable claims, plaintiffs’ testifying or consulting expert witnesses stand to gain by minimalist expert witness opinion gatekeeping.  Given the economic asymmetries, the reader will thus want to know that Prof. Goldstein was excluded as an expert witness in some high-profile toxic tort cases.  See, e.g., Parker v. Mobil Oil Corp., 7 N.Y.3d 434, 857 N.E.2d 1114, 824 N.Y.S.2d 584 (2006) (dismissing leukemia (AML) claim based upon claimed low-level benzene exposure from gasoline), aff’g 16 A.D.3d 648 (App. Div. 2d Dep’t 2005).  No; you will not find the Parker case cited in the Manual’s chapter on toxicology.  (Parker is, however, cited in the chapter on exposure science.)

I have searched but I could not find any disclosure of Professor Goldstein’s conflicts of interests in this new edition of the Reference Manual.  I would welcome a correction if I am wrong.  Having pointed out this conflict, I would note that financial conflicts of interest are nothing really compared to ideological conflicts of interest, which often propel scientists into service as expert witnesses.

HORMESIS

One way that ideological conflicts might be revealed is to look for imbalances in the presentation of toxicologic concepts.  Most lawyers who litigate cases that involve exposure-response issues are familiar with the “linear no threshold” (LNT) concept that is used frequently in regulatory risk assessments, and which has metastasized to toxic tort litigation, where LNT often has no proper place.

LNT is a dubious assumption because it claims to “know” the dose response at very low exposure levels in the absence of data.  There is a thin plausibility for genotoxic chemicals claimed to be carcinogens, but even that plausibility evaporates when one realizes that there are defense and repair mechanisms to genotoxicity, which must first be saturated before there can be a carcinogenic response.  Hormesis is today an accepted concept that describes a dose-response relationship that shows a benefit at low doses, but harm at high doses.
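The difference between the two dose-response models can be made concrete with a toy calculation.  The sketch below, in Python, is my own illustration; the functional forms and parameter values are hypothetical, chosen only to display the shapes, and are not taken from the Reference Manual or from any of the toxicology texts cited here.

def lnt_response(dose, slope=1.0):
    """LNT: excess response assumed proportional to dose, all the way down to zero."""
    return slope * dose

def hormetic_response(dose, benefit=0.5, harm=1.0):
    """Biphasic (J-shaped) curve: net benefit at low doses, net harm at higher doses."""
    return -benefit * dose + harm * dose ** 2

print(f"{'dose':>6} {'LNT':>8} {'hormetic':>10}")
for dose in [0.0, 0.1, 0.25, 0.5, 1.0, 2.0]:
    print(f"{dose:6.2f} {lnt_response(dose):8.3f} {hormetic_response(dose):10.3f}")

# The LNT column rises from zero in strict proportion to dose; the hormetic
# column is negative (beneficial) at low doses and positive (harmful) at
# higher doses, crossing baseline at dose 0.5 in this toy parameterization.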

The toxicology chapter in the Reference Manual has several references to LNT but none to hormesis.  That font of all knowledge, Wikipedia, reports that hormesis is controversial, but so is LNT.  This is the sort of imbalance that may well reflect an ideological bias.

One of the leading textbooks on toxicology describes hormesis:

“There is considerable evidence to suggest that some non-nutritional toxic substances may also impart beneficial or stimulatory effects at low doses but that, at higher doses, they produce adverse effects. This concept of “hormesis” was first described for radiation effects but may also pertain to most chemical responses.”

Curtis D. Klaassen, Casarett & Doull’s Toxicology: The Basic Science of Poisons 23 (7th ed. 2008) (internal citations omitted).

Similarly, the Encyclopedia of Toxicology describes hormesis as an important phenomenon in toxicologic science:

“This type of dose–response relationship is observed in a phenomenon known as hormesis, with one explanation being that exposure to small amounts of a material can actually confer resistance to the agent before frank toxicity begins to appear following exposures to larger amounts.  However, analysis of the available mechanistic studies indicates that there is no single hormetic mechanism. In fact, there are numerous ways for biological systems to show hormetic-like biphasic dose–response relationship. Hormetic dose–response has emerged in recent years as a dose–response phenomenon of great interest in toxicology and risk assessment.”

Philip Wexler et al., eds., 2 Encyclopedia of Toxicology 96 (2005).  One might think that hormesis would also be of great interest to federal judges, but they will not learn about it from reading the Reference Manual.

Hormesis research has come into its own.  The International Dose-Response Society, which “focus[es] on the dose-response in the low-dose zone,” publishes a journal, Dose-Response, and a newsletter, BELLE:  Biological Effects of Low Level Exposure.  In 2009, two leading researchers in the area of hormesis published a collection of important papers:  Mark P. Mattson and Edward J. Calabrese, eds., Hormesis: A Revolution in Biology, Toxicology and Medicine (N.Y. 2009).

A check in PubMed shows that LNT has more “hits” than “hormesis” or “hormetic,” but the latter terms still return over 1,267 references, hardly an insubstantial number.  In actuality, there are many more hormetic relationships identified in the scientific literature, which often fails to label the relationship with the term hormesis or hormetic.  See Edward J. Calabrese and Robyn B. Blain, “The hormesis database: The occurrence of hormetic dose responses in the toxicological literature,” 61 Regulatory Toxicology and Pharmacology 73 (2011) (reviewing about 9,000 dose-response relationships for hormesis, to create a database of various aspects of hormesis).  See also Edward J. Calabrese and Robyn B. Blain, “The occurrence of hormetic dose responses in the toxicological literature, the hormesis database: An overview,” 202 Toxicol. & Applied Pharmacol. 289 (2005) (earlier effort to establish a hormesis database).

The Reference Manual’s omission of hormesis is regrettable.  Its inclusion of references to LNT but not to hormesis appears to result from an ideological bias.

QUESTIONABLE SUBSTANTIVE OPINIONS

One would hope that the toxicology chapter would not put forward partisan substantive positions on issues that are currently the subject of active litigation.  Fondly we would hope that any substantive position advanced would at least be well documented.

For at least one issue, the toxicology chapter dashes our fondest hopes.  Table 1 in the chapter presents a “Sample of Selected Toxicological End Points and Examples of Agents of Concern in Humans.”  No documentation or citations are provided for this table.  Most of the exposure agent/disease outcome relationships in the table are well accepted, but curiously at least one agent–disease pair, which is the subject of current litigation, is wildly off the mark:

Parkinson’s disease and manganese

Reference Manual at 653.  If the chapter’s authors had looked, they would have found that Parkinson’s disease is almost universally accepted to have no known cause, except among a few plaintiffs’ litigation expert witnesses.  They would also have found that the issue has been addressed carefully, and that the claimed relationship or “concern” has been rejected by the leading researchers in the field (who have no litigation ties).  See, e.g., Karin Wirdefeldt, Hans-Olov Adami, Philip Cole, Dimitrios Trichopoulos, and Jack Mandel, “Epidemiology and etiology of Parkinson’s disease: a review of the evidence,” 26 European J. Epidemiol. S1, S20-21 (2011); Tomas R. Guilarte, “Manganese and Parkinson’s Disease: A Critical Review and New Findings,” 118 Environ. Health Perspect. 1071, 1078 (2010) (“The available evidence from human and nonhuman primate studies using behavioral, neuroimaging, neurochemical, and neuropathological end points provides strong support to the hypothesis that, although excess levels of [manganese] accumulation in the brain results in an atypical form of parkinsonism, this clinical outcome is not associated with the degeneration of nigrostriatal dopaminergic neurons as is the case in PD.”).

WHEN ALL YOU HAVE IS A HAMMER, EVERYTHING LOOKS LIKE A NAIL

The substantive specialist author, Professor Goldstein, is not a physician; nor is he an epidemiologist.  His professional focus on animal and cell research shows, and biases the opinions offered in this chapter.

“In qualitative extrapolation, one can usually rely on the fact that a compound causing an effect in one mammalian species will cause it in another species. This is a basic principle of toxicology and pharmacology.  If a heavy metal, such as mercury, causes kidney toxicity in laboratory animals, it is highly likely to do so at some dose in humans.”

Reference Manual at 646.

Such extrapolations may make sense in regulatory contexts, where precautionary judgments are of interest, but they can hardly be said to be generally accepted in controversies in civil actions over actual causation.  Crystalline silica, for instance, causes something resembling lung cancer in rats, but not in mice, guinea pigs, or hamsters.  It hardly makes sense to ask juries to decide whether the plaintiff is more like a rat than a mouse.

For a sober second opinion to the toxicology chapter, one may consider the views of some well-known authors:

“Whereas the concordance was high between cancer-causing agents initially discovered in humans and positive results in animal studies (Tomatis et al., 1989; Wilbourn et al., 1984), the same could not be said for the reverse relationship: carcinogenic effects in animals frequently lacked concordance with overall patterns in human cancer incidence (Pastoor and Stevens, 2005).”

Hans-Olov Adami, Sir Colin L. Berry, Charles B. Breckenridge, Lewis L. Smith, James A. Swenberg, Dimitrios Trichopoulos, Noel S. Weiss, and Timothy P. Pastoor, “Toxicology and Epidemiology: Improving the Science with a Framework for Combining Toxicological and Epidemiological Evidence to Establish Causal Inference,” 122 Toxicological Sciences 223, 224 (2011).

Once again, there is a sense that the scholarship of the toxicology chapter is not as complete or thorough as we would hope.

Diluting “Reasonable Degree of Medical Certainty” – An AAJ-Restatement “Tool” to Help Plaintiffs

October 3rd, 2011

In “the Top Reason that the ALI’s Restatement of Torts Should Steer Clear of Partisan Conflicts,” I pointed out the inappropriateness of advertising the ALI’s Restatement of Torts to the organized plaintiffs’ bar, much as the plaintiffs’ bar advertises potential huge recoveries for the latest tort du jour.  See Michael D. Green & Larry S. Stewart, “The New Restatement’s Top 10 Tort Tools,” Trial 44 (April 2010).

Some of the authors’ tort tool kit may be unexceptionable.  Among these authors’ top ten tort tools, however, is the new Restatement’s edict that “reasonable degree of medical certainty” means, or should mean, nothing more than saying “more likely than not.”  The authors criticize the reasonable certainty standard with an abbreviated rendition of the Restatement’s critique:

“Many courts hold that expert opinion must be expressed in terms of ‘medical or scientific certainty’. Requiring certainty seems to impose a criminal law-like burden of proof that is inconsistent with civil burdens of preponderance of the evidence to establish a fact. Such a requirement is also problematic at best because medical and scientific communities have no such ‘reasonable certainty’ standard. The standard then becomes whatever the attorney who hired the expert tells the expert it means or, absent that, whatever the expert imagines it means. Section 28, comment e, of the Restatement criticizes this standard and makes clear that the same preponderance standard (or ‘more likely than not’ standard), which is universally applied in all aspects of civil cases, also applies to expert testimony.”

Id. at 46-47.

Well, the more likely than not standard is not “universally applied in all aspects of civil cases,” because several states require exemplary damages to be proven by “clear and convincing” or greater evidence.  In some states, the burden of proof in fraud cases is higher than a mere preponderance of the evidence. This premise of the authors’ article is incorrect.

But even if the authors were correct that the preponderance standard applied “in all aspects” of civil cases, their scholarship would remain suspect, as others and I have previously pointed out.  See “Reasonable Degree of Medical Certainty” and “More Uncertainty About Reasonable Degree of Medical Certainty.”

1. The Restatement’s Treatment of Expert Witness Evidentiary Rules Exceeded the Scope of the Tort Restatement.

The most peculiar aspect of this “top tool” is that it has nothing to do with the law of torts.  The level of certitude required of an expert witness is an evidentiary and a procedural issue.  Of course the issue comes up in tort cases, which frequently involve medical and scientific causation opinions, as well as other expert witness opinions.  The issue, however, comes up in all cases that involve expert witnesses:  trusts and estates, regulatory, environmental, securities fraud, commercial, and other cases.

The Restatement of Torts weakly acknowledges its frolic and detour in treating a procedural issue concerning the admissibility of expert witness opinion testimony, by noting that it does “not address any other requirements for the admissibility of an expert witness’s testimony, including qualifications, expertise, investigation, methodology, or reasoning.” Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 28, cmt. e (2010).  The certitude issue has nothing special to do with the substantive law of torts, and should not have been addressed in the torts restatement.

2. The Restatement’s Treatment of “Reasonable Degree of Medical Certainty” Has No Relevance to the Burden of Proof in Tort Cases.

The expert witness certitude issue has nothing to do with the burden of proof, and the Restatement should not have confused and conflated the burden of proof with the standard of certitude for expert witnesses.  The clear but unacceptable implication is that expert witnesses in criminal cases must testify to certitude “beyond a reasonable doubt,” and in claims for equitable relief, expert witnesses may share only opinions that are made, in their minds, by “clear and convincing evidence.”  There is no support in law or logic for the identification of witness certitude with parties’ burdens of proof.

Comment e states the critique more fully:

“If courts do interpret the reasonable-certainty standard to require a level of certitude greater than the preponderance-of-the-evidence standard requires, this creates a troubling inconsistency between standards for the admissibility of evidence and the threshold required for sufficiency of proof. The threshold for admissibility should not be higher than the threshold to sufficiency.  Moreover, the reasonable-certainty standard provides no assurance of the quality of the expert’s qualifications, expertise, investigation, methodology, or reasoning.  Thus, the Section adopts the same preponderance standard that is universally adopted in civil cases.  Direct and cross-examination can be employed to flesh out the degree of certainty with which an expert’s opinion is held and to identify opinions that are speculative and therefore inadmissible.”

Id.  The critique badly misfires because there is no inconsistency and no trouble in having different standards for the admissibility of opinion evidence and the burden of proof.  As noted, expert witnesses testify on causation and other issues in criminal, equity, and tort cases, all with different burdens of proof.  Juries in criminal and tort cases must apply instructions on burdens of proof to an entire evidentiary display, not just the expert witnesses’ opinions.  In logic and law, there ultimately must be different standards for the admissibility of expert witness testimony and for the sufficiency of a party’s proofs.

3. The Restatement’s Treatment of “Reasonable Degree of Medical Certainty” Incoherently Confuses Two Different Standards.

We can see that Comment e’s approach to legislating an equivalence between expert witness certitude and the burden of proof must fail even on its own terms.  Consider the legal consequences of tort claimants, with the burden of proof, who produce expert witnesses to opine about key elements (e.g., causation) of torts by stating that their opinions were held by a mere “preponderance of the evidence.”

If this probability is understood to be only infinitesimally greater than 50%, then courts would have to direct verdicts in many (and perhaps most) cases.

Courts must ensure that a rational jury can find for the party with the burden of proof.  Juries must evaluate the credibility and reliability of expert witnesses and their opinions, as well as the predicate facts for those opinions.  If those expert witness opinions were barely greater than 50% probable on an essential element, then, unless the witnesses had perfect credibility and all predicate facts were as probable as the witnesses claimed, juries would frequently have to reject the witnesses’ opinions.  The bare preponderance of the expert witnesses’ opinions would result in an overall probability of the essential element of less than 50%.
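The arithmetic behind this point can be made concrete.  What follows is a minimal sketch, with illustrative numbers of my own choosing, and with the simplifying assumption that the jury’s assessments of the expert’s certitude, the expert’s credibility, and the predicate facts combine multiplicatively, as independent probabilities:

# Illustrative figures only; none of these numbers comes from the Restatement
# or from any case, and the independence assumption is a simplification.
p_opinion = 0.51      # expert's certitude: barely "more likely than not"
p_credibility = 0.90  # jury's assessment of the witness's credibility
p_predicates = 0.90   # probability that the facts the opinion assumes are true

p_element = p_opinion * p_credibility * p_predicates
print(f"overall probability of the essential element: {p_element:.3f}")
# prints roughly 0.413 -- below the 0.5 that a preponderance burden requires

Even with a generously credible witness and well-supported predicate facts, a bare 51% opinion leaves the essential element short of a preponderance on this arithmetic.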

4. The Restatement Incorrectly Implies that Expert Witnesses Can Quantify Their Opinions in Probabilistic Terms.

There are even more far-reaching problems with simply substituting “more likely than not” for “reasonable degree of medical certainty” as a threshold requirement of expert witness testimony.  Comment e implies that expert witnesses can discern the difference between an opinion that they believe is “more likely than not” and another that is “as likely as not.”  On some occasions, there may be opinions that derive from quantitative reasoning, for which an expert witness could truly say, with some level of certainty, that his or her opinion is “more likely than not.”  On most occasions, however, an expert witness’s degree of certainty is a qualitative opinion that simply does not admit of a quantitative characterization.  The Restatement’s comment perpetuates this confusion by casting the reasonable certainty standard as a bare probability.

Comment e further suggests that expert witnesses are themselves expert in assessing their own level of certainty, and that they have the training and experience to distinguish an opinion that is 50.1% likely from another that is only 50% likely. The assignment of precise mathematical probabilities to personal, subjective beliefs is a doubtful exercise, at best. See, e.g., Daniel Kahneman and Amos Tversky, “Judgment under Uncertainty: Heuristics and Biases,” 185 Science 1124 (1974).

5. The Restatement Incorrectly Labels “Reasonable Degree of Medical Certainty” As An Empty Formalism.

Comment e ignores the epistemic content of reasonable certainty, which bears an uncanny resemblance to the knowledge requirement of Rule 702.  The “mantra” is helpful to the extent it imposes an objective epistemic standard, especially in states that have failed to impose, or that have abrogated, expert witness gatekeeping.  In some states, there is no meaningful expert witness gatekeeping under either the Frye standard or Rule 702. See, e.g., “Expert Evidence Free-for-All in Washington State.”  See also Joseph Sanders, “Science, Law, and the Expert Witness,” 72 Law & Contemporary Problems 63, 87 & n. 118 (2009) (noting that the meaning of “reasonable degree of scientific certainty” is unclear, but that it can be understood as an alternative formulation of Kumho’s “same intellectual rigor” test).

Some of these “top” tools may be defective.  The authors may need good defense counsel.

The Populist Attack on Scientific Free Speech

July 18th, 2011

Siddhartha Mukherjee’s opinion piece in Sunday’s New York Times illustrates the populist efforts to muzzle and minimize industry’s efforts to communicate about scientific issues that affect public policy.  Mukherjee, “Opinion:  Patrolling Cancer’s Borderlands,” New York Times, Sunday Review, p. 8 (July 17, 2011).

Mukherjee, an assistant professor of medicine at Columbia University, is the author of The Emperor of All Maladies: A Biography of Cancer, and a frequent commentator on public health issues.  In his recent article, Mukherjee notes how difficult it is to identify a carcinogen with reasonable certainty.  Tobacco as a cause of lung cancer was relatively easy to identify because of the very strong associations shown by observational studies.  Scientists are dealing with smaller candidate risks now, and with cancers that are less common and therefore with more expected variability in population samples.  Mukherjee seems to acknowledge these considerations, but he appears much less concerned with scientific accuracy than with what he perceives as industrial lobbying against the labeling of certain chemicals as carcinogens.

There is much that is objectionable in this populist attack on scientific speech and the right to petition the government.  Putting aside scientific inaccuracies such as referring to epidemiologic studies as “trials,” let me focus on what emerges as the dominant theme of the opinion article.  Three times in his short editorial, Mukherjee uses the term “lobbying” to describe scientific speech and analyses submitted by industrial representatives:

“Second: in mid-June, the National Toxicology Program, countering years of lobbying by certain industries, finally classified formaldehyde (used in plywood manufacturing and embalming) as a carcinogen.”

* * *

“The second challenge facing cancer control agencies is political. The formaldehyde case illustrates this. Unlike phone radiation, formaldehyde has a well-established mechanism to cause cancer: it is a strikingly reactive chemical that can directly attack DNA. Experiments performed in the 1970s demonstrated that the chemical causes cancer in mice and rats. Following this data, sophisticated trials [sic] showed that men and women exposed to formaldehyde — morticians, for instance — had higher rates of leukemia than unexposed people.

But some of these studies were performed three decades ago. Why have 30 years elapsed between them and the National Toxicology Program announcement? In part, because of active lobbying by various industries, in particular, plywood manufacturers, who have tried to thwart this classification.”

* * *

“Identifying a carcinogen, in short, isn’t sufficient. Beyond the science — which, as the cellphone example shows, can be hard enough — cancer-control agencies need to bolster political support, and neutralize lobbying interests, before a culprit carcinogen can be revealed to the public.”

Mukherjee, supra.  Now, the references to lobbying over scientific interests suggest an image of industrial gladhanders plying agency scientists and bureaucrats with expensive gifts, meals, and travel.  If that were so, then the decried “lobbying” might well be offensive, but what Mukherjee is talking about is nothing more or less than scientific free speech.  Industrial concerns and associations submit discussions that call attention to inadequacies in the data and evidence that regulators seek to rely upon in their zealous attempts to protect the public health.  The issue, of course, is a scientific one of the accuracy of the regulators’ interpretation of the data.  By using the term “lobbying,” with its pejorative connotations, Mukherjee is playing to the Zeitgeist’s impatience with the facts when they embarrass regulatory or tort law attempts to condemn aspects of our industrialized society.  The exhibited hostility to scientific speech is at odds with our core political, constitutional values of both free speech and the right to petition the government.  The dismissive attitude is also contrary to a good deal of scientific evidence.  See, e.g., C. Bosetti, J. McLaughlin, et al., “Formaldehyde and cancer risk: a quantitative review of cohort studies through 2006,” 19 Ann. Oncol. 29 (2008).  The Times and Mukherjee know that most readers will not be familiar with the factual dispute underlying the classification of formaldehyde, and this editorial is nothing less than a cynical attempt to mold public opinion by the use of ad hominem attacks on industry.

Note that Citizens for Science in the Public Interest, the Center for Regulatory Reform, SKAPP, and dozens of other organizations submit their views on issues of carcinogenicity, or other health concerns, but they are not labeled as “lobbyists.”  Note also that Mukherjee urges cancer-control agencies “to bolster political support,” as well as “neutralize lobbying interests.”  The identification of carcinogens is a scientific issue, not a political one.  Society can certainly decide to err on the side of precaution, but agencies such as the National Toxicology Program, or the International Agency for Research on Cancer, hold themselves out to be scientific agencies, not political organizations.  These agencies should act scientifically, and they should be amenable to scientific evidence and evaluation, marshaled by any stakeholder in the discussion over putative carcinogens.  Mukherjee’s rhetoric and propaganda should be rejected in a free society.