Exclusivity and Paternalism in the Public Governance of Explainable AI
King's College London Law School Research Paper Forthcoming
Computer Law & Security Review (2020 Forthcoming)
7 Pages Posted: 9 Nov 2020
Date Written: September 9, 2020
Abstract
In this comment, we address the apparent exclusivity and paternalism of goal- and standard-setting for explainable AI and its implications for the public governance of AI. We argue that the widening use of AI decision-making, including the development of autonomous systems, not only poses widely discussed risks to human autonomy in itself, but is also the subject of a standard-setting process that is remarkably closed to effective public contestation. The implications of this turn in governance for democratic decision-making in Britain have also yet to be fully appreciated. As the governance of AI gathers pace, one of the major tasks will be to ensure not only that AI systems are technically ‘explainable’ but that, in a fuller sense, the relevant standards and rules are contestable and that governing institutions and processes are open to democratic contestability.
Keywords: artificial intelligence, transparency, accountability, paternalism