Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities

Abstract

Artificial intelligence (AI) and machine learning (ML) have become increasingly vital to the development of novel defense and intelligence capabilities across all domains of warfare. Adversarial AI (A2I) and adversarial ML (AML) attacks seek to deceive and manipulate AI/ML models, so it is imperative that these models can defend against such attacks. A2I/AML defenses will help provide the assurance required of advanced capabilities that rely on AI/ML models. The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities via new A2I/AML defenses by fostering a collaborative environment across the U.S. Department of Defense and U.S. Intelligence Community. The A2IWG aims to identify specific challenges it can help solve or address more directly, focusing initially on three topics: AI Trusted Robustness, AI System Security, and AI/ML Architecture Vulnerabilities.
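
To make concrete what an AML attack looks like in practice, the following is a minimal sketch of the fast gradient sign method (FGSM), a widely studied evasion attack of the kind such defenses must withstand; the toy model, random inputs, and epsilon value are illustrative placeholders and are not drawn from this paper.

    # Minimal FGSM sketch: perturb inputs within an epsilon bound to push a
    # classifier's prediction away from the true label. Model and data below
    # are placeholders for illustration only.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return an adversarially perturbed copy of x against label y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon,
        # and keep the result in the valid input range [0, 1].
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Hypothetical toy classifier and random "image" batch.
        model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        x = torch.rand(4, 1, 28, 28)       # placeholder inputs in [0, 1]
        y = torch.randint(0, 10, (4,))     # placeholder labels
        x_adv = fgsm_attack(model, x, y)
        print((x_adv - x).abs().max())     # perturbation magnitude <= epsilon

A small, bounded perturbation of this kind can be enough to change a model's prediction, which is the sort of deception and manipulation the defenses described above aim to counter.
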

Description

Presented at AAAI FSS-20: Artificial Intelligence in Government and Public Sector, Washington, DC, USA

Keywords

AI Trusted Robustness, AI System Security, AI/ML Architecture Vulnerabilities

Citation

Tyler J. Shipp, Daniel J. Clouse, Michael J. De Lucia, Metin B. Ahiskali, Kai Steverson, Jonathan M. Mullin, and Nathaniel D. Bastian. "Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities." In AAAI FSS-20: Artificial Intelligence in Government and Public Sector, Washington, DC, USA, 2020.