Knowledge-to-Information Translation Training (KITT): An adaptive approach to explainable artificial intelligence
Authors
Schoenherr, Jordan Richard
Thomson, Robert
Issue Date
2020-07-10
Type
Conference presentations, papers, posters
Language
Keywords
explainable AI , Artificial Intelligence , machine learning
Alternative Title
Abstract
Modern black-box artificial intelligence algorithms are computationally powerful yet fallible in unpredictable ways. While much research has gone into developing techniques to interpret these algorithms, fewer studies have addressed the requirement to understand an algorithm as a function of its training data. In addition, few have examined the human requirements for explainability, so that interpretations provide the right quantity and quality of information to each user. We argue that Explainable Artificial Intelligence (XAI) frameworks need to account for the expertise and goals of the user in order to gain widespread adoption. We describe the Knowledge-to-Information Translation Training (KITT) framework, an approach to XAI that considers a number of possible explanatory models that can be used to facilitate users’ understanding of artificial intelligence. Following a review of algorithms, we provide a taxonomy of explanation types and outline how adaptive instructional systems can facilitate knowledge translation between developers and users. Finally, we describe the limitations of our approach and directions for future research.
Description
Citation
Thomson, Robert, and Jordan Richard Schoenherr. "Knowledge-to-Information Translation Training (KITT): An Adaptive Approach to Explainable Artificial Intelligence." In Adaptive Instructional Systems: Second International Conference, AIS 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, pp. 187-204. Springer International Publishing, 2020.
Publisher
Springer
