How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate

Accepted for publication, Colorado Technology Law Journal, Volume 17, Issue 1. http://ctlj.colorado.edu

37 Pages. Posted: 19 Apr 2019

Date Written: February 4, 2019

Abstract

The United States optimizes the efficiency of its growing criminal justice system with algorithms; however, legal scholars have overlooked how to frame courtroom debates about algorithmic predictions. In State v. Loomis, the defense argued that the court’s consideration of a risk assessment at sentencing violated due process because the accuracy of the algorithmic prediction could not be verified. The Wisconsin Supreme Court upheld the consideration of predictive risk at sentencing because the assessment was disclosed and the defendant could challenge the prediction by verifying the accuracy of the data fed into the algorithm.

Was the court correct about how to argue with an algorithm?

The Loomis court ignored the computational procedures that processed the data within the algorithm. How an algorithm calculates data is as important as the quality of the data it calculates. The arguments in Loomis revealed a need for new forms of reasoning to justify the logic of evidence-based tools. A “data science reasoning” could provide ways to dispute the integrity of predictive algorithms with arguments grounded in how the technology works.

This article’s contribution is a series of arguments that could support due process claims concerning predictive algorithms, specifically the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) risk assessment. As a comprehensive treatment, this article outlines the due process arguments in Loomis, analyzes arguments in an ongoing academic debate about COMPAS, and proposes alternative arguments based on the algorithm’s organizational context.

Risk assessment has dominated one of the first wide-ranging academic debates within the emerging field of data science. ProPublica investigative journalists claimed that the COMPAS algorithm is biased and released their findings as open data sets. The ProPublica data started a prolific and mathematically specific conversation about risk assessment, as well as a broader conversation about the social impact of algorithms. The ProPublica-COMPAS debate repeatedly considered three main themes: mathematical definitions of fairness, explainable interpretation of models, and the importance of population comparison groups.
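The competing fairness definitions at the heart of the debate can be made concrete. The sketch below is a minimal illustration, not a reproduction of the published analyses: it assumes a table loosely shaped like ProPublica’s released data, using hypothetical column names (group, high_risk, reoffended) and a tiny synthetic sample rather than actual records. For each comparison group it computes the false positive rate, which grounded ProPublica’s claim of bias, and the precision of the high-risk label, the calibration-style measure emphasized in the tool’s defense.

# Illustrative sketch of two fairness metrics from the ProPublica-COMPAS
# debate, computed per comparison group. Column names are hypothetical:
#   'group'      - population comparison group
#   'high_risk'  - 1 if the algorithm labeled the person high risk
#   'reoffended' - 1 if the person actually recidivated
import pandas as pd

# Small synthetic sample standing in for the open data set.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   0,   1,   0,   1,   1,   0,   0],
    "reoffended": [1,   0,   0,   0,   1,   0,   1,   0],
})

for name, g in df.groupby("group"):
    non_reoffenders = g[g["reoffended"] == 0]
    flagged = g[g["high_risk"] == 1]

    # False positive rate: share of non-reoffenders labeled high risk.
    # Differences in this rate across groups drove ProPublica's claim.
    fpr = non_reoffenders["high_risk"].mean()

    # Predictive parity (calibration-style): share of high-risk labels
    # that were correct. Parity in this rate grounded the rebuttal.
    ppv = flagged["reoffended"].mean()

    print(f"group {name}: false positive rate={fpr:.2f}, "
          f"precision of high-risk label={ppv:.2f}")

When the groups recidivate at different base rates, these two measures generally cannot be equalized at the same time, which is why the academic debate turned less on arithmetic than on which definition of fairness should govern.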

While the Loomis decision addressed the permissible use of a risk assessment at sentencing, a deeper understanding of daily practice within the organization could extend debates about algorithms to questions about procurement, implementation, or training. The criminal justice organization that purchased the risk assessment is in the best position to justify how one individual’s assessment matches the algorithm designed for its administrative needs. People subject to a risk assessment cannot even conjecture how the algorithm ranked them without knowing why they were classified within a certain group and what criteria control the rankings. The controversy over risk assessment algorithms raises the question of whether procedural due process is the cost of automating a criminal justice system operating at administrative capacity.

Keywords: COMPAS, risk assessment algorithms, argument, due process

Suggested Citation

Washington, Anne, How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate (February 4, 2019). Colorado Technology Law Journal, Volume 17, Issue 1 (forthcoming), http://ctlj.colorado.edu. Available at SSRN: https://ssrn.com/abstract=3357874

Anne Washington (Contact Author)

NYU Steinhardt

New York University
Steinhardt School
New York, NY 10003-711
United States
