Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns

25 Pages. Posted: 2 Jun 2020


Kate Saslow

Stiftung Neue Verantwortung

Philippe Lorenz

Stiftung Neue Verantwortung

Date Written: September 30, 2019

Abstract

AI has been a catalyst for automation and efficiency in numerous ways, but it has also had harmful consequences: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting tool that showed bias against women; open questions of accountability and liability when an autonomous vehicle injures or kills, as seen in the fatal crash involving an Uber self-driving car; and challenges to the very notion of democracy, as the technology enables authoritarian and democratic states alike, such as China and the United States, to practice surveillance at an unprecedented scale.

The risks, as well as the need for some form of basic rules, have not gone unnoticed, and governments, tech companies, research consortia, and advocacy groups have broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen in new legislation banning facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by the question of how to make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI, in numerous expert groups and ad hoc committees, among them the European Commission’s High-Level Expert Group on AI, the expert group on AI in Society of the Organisation for Economic Co-operation and Development (OECD), and the Select Committee on Artificial Intelligence of the United Kingdom House of Lords.

This may sound like a solid approach to tackling the dangers that AI poses, but to have real impact, these discussions must be grounded in rhetoric that is focused and actionable. Not only are the principles defined differently depending on the stakeholder, but there are also stark differences in how they are interpreted and what requirements would be necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, each propagating its own idea of ethical AI, which may in many cases conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but they also play starkly different roles in developing AI. A further problem is that when ethical guidelines are discussed, suggestions often come from tech companies themselves, while the voices of citizens and even governments are marginalized.

Self-regulation around ethical principles is too weak to address the far-reaching implications that AI technologies have had. Ethical principles lack clarity and enforcement mechanisms. We must stop focusing the discourse on ethical principles and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies that fail to protect individuals by propagating AI technologies that carry risks. Leadership must be defined not by actors who come up with new iterations of ethical guidelines, but by those who develop legal obligations regarding AI that are anchored in and derived from a human rights perspective.

A way to do this would be to reaffirm the human-centric nature of AI development and deployment, following actionable standards of human rights law. The human rights legal framework has been in place for decades and has been instrumental in pressuring states to change domestic laws. Nelson Mandela invoked the duties spelled out in the Universal Declaration of Human Rights while fighting to end apartheid in South Africa; in 1973, with Roe v. Wade, the United States Supreme Court followed a larger global trend of recognizing women’s human rights by protecting individuals from undue governmental interference in private affairs and giving women the ability to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right, essential to freedom of opinion, expression, association, and assembly, and instrumental in mobilizing populations to call for equality, justice, and accountability in order to advance global respect for human rights. These examples show how human rights standards have been applied to a diverse set of domestic and international rules. That these standards are actionable and enforceable shows that they are well suited to regulate the cross-border nature of AI technologies. AI systems must be scrutinized from a human rights perspective so that current and future harms created or exacerbated by AI can be analyzed and addressed.

The adoption of AI technologies has spread across borders and has had diverse effects on societies all over the world. A globalized technology needs international obligations to mitigate societal problems that now arise at greater speed and scale. Companies and states should strive to develop AI technologies that uphold human rights. Centering the AI discourse on human rights rather than simply ethics is one way of providing a clearer legal basis for the development and deployment of AI technologies. The international community must raise awareness, build consensus, analyze thoroughly how AI technologies violate human rights in different contexts, and develop paths to effective legal remedies. Focusing the discourse on human rights rather than ethical principles can provide stronger accountability measures and more concrete obligations for state and private actors, and can anchor the debate in consistent and widely accepted legal principles developed over decades.

Keywords: Artificial Intelligence, Human Rights, Foreign Policy, Digital Rights, Machine Learning, Ethical AI, International Relations

Suggested Citation

Saslow, Kate and Lorenz, Philippe, Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns (September 30, 2019). Available at SSRN: https://ssrn.com/abstract=3589473 or http://dx.doi.org/10.2139/ssrn.3589473

Kate Saslow (Contact Author)

Stiftung Neue Verantwortung

Berliner Freiheit 2
Berlin, 10785
Germany

HOME PAGE: https://www.stiftung-nv.de/de/person/kate-saslow

Philippe Lorenz

Stiftung Neue Verantwortung

Berliner Freiheit 2
Berlin, 10785
Germany

HOME PAGE: https://www.stiftung-nv.de/en/person/philippe-lorenz

