Algorithmic Content Moderation on Social Media in EU Law: Illusion of Perfect Enforcement
University of Illinois Journal of Law, Technology & Policy (JLTP), Forthcoming
32 Pages Posted: 6 May 2020
Date Written: February 9, 2020
Abstract
Intermediaries today do much more than passively distribute user content and facilitate user interactions. They now exercise near-total control over users’ online experience and over the moderation of the content they host. Even though these service providers benefit from the same liability exemption regime as technical intermediaries (E-Commerce Directive, Art. 14), they have unique characteristics that must be addressed. Consequently, debate is ongoing over whether platforms should be regulated more strictly.
Platforms are required to remove illegal content under notice-and-take-down procedures built on automated processing, and they are equally encouraged to take proactive, automated measures to detect and remove such content. Algorithmic decision-making helps platforms cope with the massive scale of content moderation. It would therefore seem that algorithmic decision-making is the most effective way to achieve perfect enforcement.
However, this is an illusion. A first difficulty arises in deciding what, precisely, is illegal. Platforms manage the removal of illegal content automatically, which makes it particularly challenging to verify that the law is being respected. These automated decision-making systems are opaque, and many scholars have shown that the main problem is the chilling effect produced by over-removal. Moreover, content removal is a task which, in many circumstances, should not be automated, as it depends on an appreciation of both the context and the rule of law.
To address this multi-faceted issue, I offer solutions to improve algorithmic accountability and to increase transparency around automated decision-making. Improvements can be made, in particular, by granting platform users new rights, which in turn will provide stronger guarantees of judicial and non-judicial redress in the event of over-removal.
Keywords: Artificial Intelligence (AI), Automated Decision Systems (ADS), Content Moderation, Platforms, Liability of Internet Intermediaries, Platforms and EU Law