Expert Information and Expert Evidence: A Preliminary Taxonomy

47 Pages Posted: 8 Dec 2003

Samuel R. Gross

University of Michigan Law School

Jennifer Mnookin

University of Wisconsin-Madison

Abstract

In the wake of the trio of recent Supreme Court cases examining the admissibility of expert evidence - Daubert v. Merrell Dow Pharmaceuticals, Inc., General Electric Co. v. Joiner, and Kumho Tire Co. v. Carmichael - many thousands of pages have been written about both the proper criteria for evaluating the reliability of expert evidence and the institutional competence of judges to evaluate scientific reliability. We believe that the monocular focus on issues of scientific method and reliability has obscured some broader points; therefore, in this Article, we try to step back (gingerly) and take a broader view. Instead of beginning with the problems of reliability, we start by briefly detailing the array of informational issues facing a consumer of expert evidence, thereby putting the attention-getting problems of reliability into a broader context. We then attempt to review and classify the full range of expert testimony. We offer first a brief taxonomy, an outline with examples. We then develop our classification scheme by exploring each kind of expert statement in more detail. Our taxonomy is functional rather than methodological: we look at the purpose for which the statement is introduced - for example, description, instruction, or assessment - rather than distinguishing among categories of scientific and non-scientific expert evidence.

Our taxonomy has four purposes. First, we want to emphasize the extraordinary range of information that is presented in court by expert witnesses. Some of this information is "scientific" and some of it is "non-scientific"; but often, whichever category it falls in, it is wholly unproblematic. Many if not most expert witnesses testify without objection and present information that may be critical for the factfinder but is not in dispute. Second, for many categories of expert evidence, even when there may be some degree of controversy or disagreement, there really is no Daubert or Kumho Tire problem. For certain categories of expert information, a focus on the credentials of the expert is generally sufficient, and courts need not (and typically do not) make any additional elaborate inquiry into validity. Third, our taxonomy reminds us of the continuing importance, within the evaluation of expert testimony, of credibility issues more mundane than "reliability" in the Daubert sense: specifically, bias, competence, and lack of clarity. Finally, our discussion reveals the sometimes overlooked importance of paying attention not only to what the expert says, but to how she says it. We suggest that one of the central problems with much expert testimony introduced in court, scientific and non-scientific alike, is that experts claim as matters of fact or probability opinions that should be couched in more cautious terms, as possibilities or hypotheses. Often, whether testimony is based on scientific study or more casual forms of observation, what makes an expert's conclusion unreliable is that it is expressed with a confidence not warranted by the evidence. The clinical observations of a physician or an engineer or a mechanic or a fingerprint examiner may be quite appropriate as the basis for testimony, but the degree of certainty expressed by the witness should reflect both knowledge and its limits, both what is known and what is not.

Suggested Citation

Gross, Samuel R. and Mnookin, Jennifer L., Expert Information and Expert Evidence: A Preliminary Taxonomy. Available at SSRN: https://ssrn.com/abstract=477202 or http://dx.doi.org/10.2139/ssrn.477202

Samuel R. Gross

University of Michigan Law School

625 South State Street
Ann Arbor, MI 48109-1215
United States
734-764-1519 (Phone)
734-764-8309 (Fax)

Jennifer L. Mnookin (Contact Author)

University of Wisconsin-Madison

975 Bascom Mall
Madison, WI 53706
United States
