The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations

54 Pages Posted: 24 Apr 2020 Last revised: 2 Oct 2021

Yifat Nahmias

University of Haifa - Faculty of Law

Maayan Perel (Filmar)

Netanya Academic College

Date Written: February 13, 2020

Abstract

In a world in which artificial intelligence (AI) systems are increasingly shaping our environment, as well as our access to and exclusion from opportunities and resources, it is essential to ensure some form of AI oversight. Such oversight will help to maintain the rule of law, to protect individual rights, and to ensure the protection of core democratic values. Nevertheless, achieving AI oversight is challenging due to the dynamic and opaque nature of such systems. Recently, in an attempt to increase oversight and accountability for AI systems, the proposed US Algorithmic Accountability Act introduced a mandatory impact assessment for private entities that deploy automated decision-making systems. Impact assessment as a means to enhance oversight was likewise recently adopted under the EU's General Data Protection Regulation. Taken together, these initiatives mark the latest development in AI oversight policy.

In this paper, we question the merits of impact assessment as a tool for promoting oversight of AI systems. Using the case of AI content moderation systems, we highlight the strengths and weaknesses of this oversight tool and propose how to improve it. Additionally, we argue that even an improved impact assessment does not address equally well the oversight challenges raised by different AI systems. In particular, impact assessments might be insufficient to oversee AI systems deployed for purposes that could be classified as public, such as making our online public sphere safer. Meaningful oversight of AI systems that impose costs on society as a whole, like AI content moderation systems, cannot be pursued through mechanisms of self-assessment alone. Therefore, as we suggest in this paper, such systems should additionally be subjected to objective mechanisms of external oversight.

Keywords: AI, Impact Assessment, Content Moderation, Oversight, Algorithmic Accountability, GDPR

Suggested Citation

Nahmias, Yifat and Perel (Filmar), Maayan, The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations (February 13, 2020). Harvard Journal on Legislation, Vol. 58, No. 1, 2021, Available at SSRN: https://ssrn.com/abstract=3565025

Yifat Nahmias (Contact Author)

University of Haifa - Faculty of Law ( email )

Mount Carmel
Haifa, 31905
Israel

Maayan Perel (Filmar)

Netanya Academic College ( email )

1 University Street
Netanya, 31905
Israel
