Algorithmic Content Moderation on Social Media in EU Law: Illusion of Perfect Enforcement

University of Illinois Journal of Law, Technology & Policy (JLTP), Forthcoming

32 Pages · Posted: 6 May 2020

Céline Castets-Renard

Civil Law Faculty; University of Toulouse 1; ANITI (Artificial and Natural Intelligence Toulouse Institute); Institut Universitaire de France; University of Ottawa

Date Written: February 9, 2020

Abstract

Intermediaries today do much more than passively distribute user content and facilitate user interactions. They now exercise near-total control over users’ online experience, including content moderation. Even though these service providers benefit from the same liability exemption regime as technical intermediaries (E-Commerce Directive, Art. 14), they have unique characteristics that must be addressed. Consequently, debates are ongoing over whether platforms should be more strictly regulated.

Platforms are required to remove illegal content under notice-and-take-down procedures built on automated processing, and they are equally encouraged to take proactive, automated measures to detect and remove such content. Algorithmic decision-making helps manage the massive task of content moderation at scale. It might therefore seem that algorithmic decision-making is the most effective way to provide perfect enforcement.

However, this is an illusion. A first difficulty arises in deciding what, precisely, is illegal. Platforms manage the removal of illegal content automatically, which makes it particularly challenging to verify that the law is being respected. The automated decision-making systems are opaque, and many scholars have shown that the main problem is the chilling effect produced by over-removal. Moreover, content removal is a task that, in many circumstances, should not be automated, as it depends on an appreciation of both the context and the rule of law.

To address this multi-faceted issue, I offer solutions to improve algorithmic accountability and to increase transparency around automated decision-making. In particular, platform users should be granted new rights, which in turn would provide stronger guarantees of judicial and non-judicial redress in the event of over-removal.

Keywords: Artificial Intelligence (AI), Automated Decision Systems (ADS), Content Moderation, Platforms, Liability of Internet Intermediaries, Platforms and EU Law

Suggested Citation

Castets-Renard, Céline, Algorithmic Content Moderation on Social Media in EU Law: Illusion of Perfect Enforcement (February 9, 2020). University of Illinois Journal of Law, Technology & Policy (JLTP), Forthcoming. Available at SSRN: https://ssrn.com/abstract=3535107 or http://dx.doi.org/10.2139/ssrn.3535107

Céline Castets-Renard (Contact Author)

Civil Law Faculty ( email )

57 Louis Pasteur Street
Ottawa, Ontario K1N 6N5
Canada

HOME PAGE: https://droitcivil.uottawa.ca/fr

University of Toulouse 1 ( email )

2 rue du Doyen Gabriel Marty
Toulouse, 31000
France

ANITI (Artificial and Natural Intelligence Toulouse Institute) ( email )

41 Allées Jules Guesde - CS 61321
Toulouse
France

Institut Universitaire de France ( email )

103, boulevard Saint-Michel
75005 Paris
France

University of Ottawa ( email )

2292 Edwin Crescent
Ottawa, Ontario K2C 1H7
Canada

Paper statistics

Downloads: 441
Abstract Views: 1,607
Rank: 120,725