Affectively Mistaken? How Human Augmentation and Information Transparency Offset Algorithmic Failures in Emotion Recognition AI

Posted: 10 Dec 2019 Last revised: 28 Oct 2021

Lauren Rhue

University of Maryland - Robert H. Smith School of Business

Date Written: November 22, 2019

Abstract

Affective AI, or emotion-recognition artificial intelligence, is increasingly adopted to heighten organizational capabilities. Like other machine learning systems, affective AI may be susceptible to algorithmic failures that lead to unfair and/or biased outcomes. The objective of this paper is to explore the effectiveness of information transparency and human augmentation in offsetting these types of algorithmic failures, particularly in light of human cognitive biases such as the anchoring effect. This study scores two datasets for emotions with three commercially available affective AI tools. Labelers then score the emotions and facial expressions in the images with varying access to information about the affective AI models’ outputs and average demographic parity. The study yields several interesting findings. First, human augmentation is effective at counterbalancing some inference inconsistencies, e.g., when the affective AI identifies a particular facial expression but does not infer the concomitant emotion. Second, facial expression uncertainty, i.e., disagreement among affective AI models about an image’s facial expression, is associated with demographic-based differences in the emotions recognized by humans and by affective AI models. Third, information transparency, i.e., reporting average demographic parity, affects human emotion scores but often leads to spillovers across all images, not just images from a particular population. This paper contributes to our understanding of affective AI, information transparency, and human augmentation for algorithmic failures, especially for AI whose fairness is difficult to quantify, such as affective AI.
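The abstract refers to "average demographic parity" as the transparency information shown to labelers. As a rough illustration only, the sketch below computes a demographic-parity gap as the difference in mean model-assigned emotion scores between demographic groups; the function name, group labels, and data are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: treat "average demographic parity" as the gap in mean
# positive-emotion scores between a reference group and each other group.
# All names and numbers here are illustrative, not from the study.

def demographic_parity_gap(scores, groups, reference="A"):
    """Mean score of the reference group minus the mean score of each other group."""
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    means = {g: sum(v) / len(v) for g, v in by_group.items()}
    return {g: means[reference] - m for g, m in means.items() if g != reference}

# Example: model-assigned "happiness" scores for images from two groups.
scores = [0.9, 0.8, 0.7, 0.4, 0.5, 0.6]
groups = ["A", "A", "A", "B", "B", "B"]
print(demographic_parity_gap(scores, groups))  # gap of ~0.3 for group "B"
```

A gap near zero would indicate that the model assigns similar average emotion scores across groups; the paper's third finding suggests that disclosing such a statistic influences human scores broadly, not only for the disadvantaged group.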

Keywords: Algorithmic Bias, Ethics in AI, Discrimination, Artificial Intelligence, Facial Recognition, Affective AI, Emotion Recognition

Suggested Citation

Rhue, Lauren, Affectively Mistaken? How Human Augmentation and Information Transparency Offset Algorithmic Failures in Emotion Recognition AI (November 22, 2019). Available at SSRN: https://ssrn.com/abstract=3492129 or http://dx.doi.org/10.2139/ssrn.3492129

Lauren Rhue (Contact Author)

University of Maryland - Robert H. Smith School of Business ( email )

College Park, MD 20742-1815
United States
