Cognitive-Level Salience for Explainable Artificial Intelligence

dc.contributor.authorSomers, Sterling
dc.contributor.authorMitsopoulos, Konstantinos
dc.contributor.authorLebiere, Christian
dc.contributor.authorThomson, Robert
dc.date.accessioned2024-10-10T15:23:22Z
dc.date.available2024-10-10T15:23:22Z
dc.date.issued2019-07
dc.description.abstractWe present a general-purpose method for determining the salience of features in the action decisions of artificially intelligent agents. Our method does not depend on a specific AI implementation (e.g., deep learning, symbolic AI), and it accommodates features at different levels of abstraction. We present three implementations of our salience technique: two directed at explainable artificial intelligence (deep reinforcement learning agents), and a third directed at risk assessment.
dc.description.sponsorshipDARPA; BS&L; EECS; Army Cyber Institute
dc.identifier.citationSomers, Sterling, Konstantinos Mitsopoulos, Christian Lebiere, and Robert Thomson. "Cognitive-Level Salience for Explainable Artificial Intelligence." In Proceedings of the 17th Annual Meeting of the International Conference on Cognitive Modeling, pp. 19-22. 2019.
dc.identifier.urihttps://hdl.handle.net/20.500.14216/1590
dc.publisherInternational Conference on Cognitive Modeling
dc.subjectcomputational model
dc.subjectsalience
dc.subjectartificial intelligence
dc.subjectreinforcement learning
dc.titleCognitive-Level Salience for Explainable Artificial Intelligence
dc.typeConference presentations, papers, posters
local.USMAemailrobert.thomson@westpoint.edu
local.peerReviewedYes
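
The abstract above describes the technique only at a high level, and the paper itself is not reproduced in this record. As a rough, hypothetical illustration of what implementation-agnostic feature salience can look like (an assumed perturbation-based sketch, not the authors' actual method), one can nudge each input feature in turn and take the resulting shift in the agent's action preferences as that feature's salience. The policy function, feature vector, and toy weights below are all illustrative assumptions.

import numpy as np

def perturbation_salience(policy, features, eps=0.1):
    """Estimate per-feature salience for one action decision.

    Hypothetical sketch (not the paper's method): perturb each
    feature by eps and take the L1 shift in the policy's action
    scores as that feature's salience.
    """
    base = np.asarray(policy(features), dtype=float)
    salience = np.zeros(len(features))
    for i in range(len(features)):
        perturbed = np.array(features, dtype=float)
        perturbed[i] += eps                    # nudge one feature
        shifted = np.asarray(policy(perturbed), dtype=float)
        salience[i] = np.abs(shifted - base).sum()
    return salience

# Toy usage with an assumed linear "policy" (2 actions, 2 features);
# the first feature moves the action scores more, so it comes out
# as more salient.
weights = np.array([[0.8, 0.1],
                    [0.1, 0.2]])
toy_policy = lambda x: weights @ np.asarray(x, dtype=float)
print(perturbation_salience(toy_policy, [1.0, 0.5]))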

Files

Original bundle

Name: ICCM2019_paper_53.pdf
Size: 1.95 MB
Format: Adobe Portable Document Format