Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing

25 Pages · Posted: 12 Oct 2019 · Last revised: 8 Nov 2019


Nina Grgic-Hlaca

Max Planck Institute for Software Systems

Christoph Engel

Max Planck Institute for Research on Collective Goods; University of Bonn - Faculty of Law & Economics; Erasmus University Rotterdam (EUR), Erasmus School of Law, Rotterdam Institute of Law and Economics, Students; Universität Osnabrück - Faculty of Law

Krishna P. Gummadi

Max Planck Institute for Software Systems

Date Written: 2019

Abstract

Much of political debate focuses on the concern that machines might take over. Yet in many domains it is far more plausible that the ultimate choice and responsibility remain with a human decision-maker who is provided with machine advice. A quintessential illustration is a judge's decision to bail or jail a defendant. In multiple US jurisdictions, judges have access to a machine prediction of a defendant's recidivism risk. In our study, we explore how receiving machine advice influences people's bail decisions.

We run a vignette experiment with laypersons, whom we test on a subsample of cases from the database used by such a prediction tool. In study 1, we ask them to predict whether defendants will recidivate before being tried, and manipulate whether they have access to machine advice. We find that receiving machine advice has a small effect, biased in the direction of predicting no recidivism.

In the field, human decision-makers sometimes have a chance, after the fact, to learn whether the machine has given good advice. In study 2, we therefore inform participants of the ground truth after each trial. This does not make them more likely to follow the advice, even though the machine is, on average, slightly more accurate than real judges. The result also holds if the advice is initially mostly correct, or if it initially predicts mostly recidivism or mostly no recidivism.

Real judges know that their decisions affect defendants' lives. They may also be concerned about reelection or promotion. Hence a lot is at stake. In study 3, we emulate high stakes by giving participants a financial incentive. An incentive to find the ground truth, or to avoid false positives or false negatives, does not make participants more sensitive to machine advice. But an incentive to follow the advice is effective.

Keywords: Machine-Assisted Decision Making; Human-Centered Machine Learning; Algorithmic Decision Making; Algorithmic Fairness, Accountability and Transparency

Suggested Citation

Grgic-Hlaca, Nina and Engel, Christoph and Gummadi, Krishna P., Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing (2019). Available at SSRN: https://ssrn.com/abstract=3465622 or http://dx.doi.org/10.2139/ssrn.3465622

Nina Grgic-Hlaca

Max Planck Institute for Software Systems ( email )

Germany
+49 681 9303 8636 (Phone)

HOME PAGE: https://people.mpi-sws.org/~nghlaca/

Christoph Engel (Contact Author)

Max Planck Institute for Research on Collective Goods ( email )

Kurt-Schumacher-Str. 10
D-53113 Bonn
Germany
+49 228 914160 (Phone)
+49 228 9141655 (Fax)

HOME PAGE: http://www.coll.mpg.de/engel.html

University of Bonn - Faculty of Law & Economics

Postfach 2220
D-53012 Bonn
Germany

Erasmus University Rotterdam (EUR), Erasmus School of Law, Rotterdam Institute of Law and Economics, Students ( email )

Burgemeester Oudlaan 50
PO Box 1738
Rotterdam
Netherlands

Universität Osnabrück - Faculty of Law

Osnabrück, D-49069
Germany

Krishna P. Gummadi

Max Planck Institute for Software Systems ( email )

Campus E1 5
Saarbrücken, Saarland 66123
Germany


Paper statistics

Downloads: 301 · Abstract Views: 1,739 · Rank: 185,709