When AI Fails, Who Do We Blame? Attributing Responsibility in Human-AI Interactions

Authors

Schoenherr, Jordan Richard
Thomson, Robert

Issue Date

2024-01-10

Type

Journal articles

Keywords

Artificial intelligence, Task analysis, Software, Software algorithms, Ethics, Automation, Decision making

Abstract

While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures, rated the responsibility and trustworthiness of each agent, and selected a preferred explanation type. Participants’ cumulative responsibility ratings for the three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner and that trust in the AI might serve as a proxy for trust in the human software developer. A dissociation between responsibility and trustworthiness suggested that participants used different cues, with the kind of technology and its perceived autonomy affecting judgments. Finally, we found that the kind of explanation participants used to understand a situation differed depending on whether the AI succeeded or failed.

Citation

Schoenherr, Jordan Richard, and Robert Thomson. "When AI Fails, Who Do We Blame? Attributing Responsibility in Human-AI Interactions." IEEE Transactions on Technology and Society (2024).

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

ISSN

2637-6415
