When AI Fails, Who Do We Blame? Attributing Responsibility in Human-AI Interactions

Date

2024-01-10

Journal Title

IEEE Transactions on Technology and Society

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Abstract

While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures, rated the perceived responsibility and trustworthiness of the agents involved, and indicated their preferred type of explanation. Participants’ cumulative responsibility ratings for three agents (operators, developers, and the AI) exceeded 100%, implying that participants did not attribute responsibility in a wholly rational manner and that trust in the AI might serve as a proxy for trust in the human software developer. A dissociation between responsibility and trustworthiness judgments suggested that participants relied on different cues for each, with the type of technology and its perceived autonomy affecting their judgments. Finally, we found that the type of explanation participants used to understand a situation differed depending on whether the AI succeeded or failed.

Keywords

Artificial intelligence, Task analysis, Software, Software algorithms, Ethics, Automation, Decision making

Citation

Schoenherr, Jordan Richard, and Robert Thomson. "When AI Fails, Who Do We Blame? Attributing Responsibility in Human-AI Interactions." IEEE Transactions on Technology and Society (2024).