How Explainability Contributes to Trust in AI

2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22)

10 Pages · Posted: 30 Jan 2022 · Last revised: 9 May 2022

Andrea Ferrario

University of Zurich; ETH Zürich

Michele Loi

AlgorithmWatch

Date Written: January 28, 2022

Abstract

We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, making a case for expressions such as “explainability fosters trust in AI” that commonly appear in the literature. This explanation relates the justification of the trustworthiness of an AI to the need to monitor it during its use. We discuss the latter by referencing an account of trust, called “trust as anti-monitoring,” that various authors have contributed to developing. We focus our analysis on the case of medical AI systems, noting that our proposal is compatible with internalist and externalist justifications of the trustworthiness of medical AI and with recent accounts of warranted contractual trust. We propose that “explainability fosters trust in AI” if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to reliance on the AI in the absence of monitoring. We argue that our proposed approach can capture the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs about the trustworthiness of the medical AI and exercise varying degrees of monitoring over it. Finally, we apply our account to a user’s trust in AI, where, we argue, explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, we argue, it is possible for explainability to contribute to trust. Our account can explain the apparent paradox that, in order to trust AI, we must trust AI users not to trust AI completely. Summing up, we can explain how explainability contributes to justified trust in AI without leaving a reliabilist framework, but only by redefining the trusted entity as an AI-user dyad.

Keywords: Artificial Intelligence, Explainable Artificial Intelligence, Trust, Healthcare, Trustworthiness, Ethics of Artificial Intelligence

Suggested Citation

Ferrario, Andrea and Loi, Michele, How Explainability Contributes to Trust in AI (January 28, 2022). 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), Available at SSRN: https://ssrn.com/abstract=4020557 or http://dx.doi.org/10.2139/ssrn.4020557

Andrea Ferrario (Contact Author)

University of Zurich ( email )

Rämistrasse 71
Zürich, CH-8006
Switzerland

ETH Zürich ( email )

Zürichbergstrasse 18
Zürich, CH-8092
Switzerland

Michele Loi

AlgorithmWatch ( email )

Berlin
Germany

