Toolmark-Comparison Testimony: A Report to the Texas Forensic Science Commission
33 Pages
Posted: 18 May 2022
Date Written: May 2, 2022
Abstract
To assist the Texas Forensic Science Commission in a pending review of traditional toolmark-comparison testimony, the Yale Law School Forensic Science Standards Practicum submitted this report on the range of approaches that courts, legal commentators, and scientists have proposed for presenting toolmark-comparison evidence in trial settings. The report addresses four major topics: (1) the case law on source attributions for firearms and other tools; (2) consensus standards from the forensic-science community; (3) expressing uncertainty when presenting source attributions; and (4) alternative modes of interpreting the microscopic similarities and differences in compared items.
On the first topic, the report concludes that judicial opinions barring certain phrases and limiting how strongly an examiner may testify to a source attribution still leave legal factfinders without the knowledge required to appreciate the probative value of the limited conclusions that remain.
Second, the report observes that voluntary consensus standards from the forensic-science community have not filled this gap.
Third, it maintains that if testimony includes source attributions, then estimated measures of accuracy derived from pertinent studies of examiner performance should accompany them, with the recognition that these figures are averages across examiners and across the toolmarks compared in the studies. For this purpose, when computing false-positive "error rates" from experiments, judgments of "inconclusive" should be excluded from the analysis. The report illustrates this procedure with data from a recent study comparing examiner performance under the conventional yes-no-inconclusive scale with performance under a proposed scale offering a richer set of reporting categories. Furthermore, the report notes that the best measure of probative value is not the false-positive probability, as courts have assumed. Rather, it is the likelihood ratio involving the true- and false-positive proportions. Consequently, the report makes suggestions on how such numbers could be presented and perhaps supplemented by particular words.
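To make the arithmetic concrete, consider a purely hypothetical illustration (the figures below are invented for exposition and do not come from the report or any study it cites): suppose that, with inconclusive responses excluded, examiners in a validation study reported an identification for 95% of same-source pairs and for 1% of different-source pairs. The corresponding likelihood ratio would be

\[
LR \;=\; \frac{\Pr(\text{identification}\mid\text{same source})}{\Pr(\text{identification}\mid\text{different source})} \;=\; \frac{0.95}{0.01} \;=\; 95,
\]

indicating that an identification is 95 times more probable when the compared items truly share a source than when they do not. The 1% false-positive figure alone, which courts have tended to emphasize, conveys only the denominator of this ratio.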
Finally, the report questions the assumption that examiners should classify the pairings of impressions under comparison. It notes cogent and longstanding arguments for shifting the locus of the examination and testimony away from conclusions about the truth of the same-source hypothesis and toward direct statements of the degree of support for that hypothesis. In the absence of validated likelihood ratios from automated comparison systems, examiners could provide personal likelihood ratios, with suitable explanations of their basis and meaning.
None of the approaches canvassed in the report is a panacea, but the report suggests that there are viable alternatives to traditional firearms- and toolmark-comparison testimony and discusses some of the more detailed issues involved in implementing these alternatives.