Army Cyber Institute
Browsing Army Cyber Institute by Subject "Adversarial machine learning"
Item
Constrained optimization based adversarial example generation for transfer attacks in network intrusion detection systems (Optimization Letters, 2023)
Chalé, Marc; Cox, Bruce; Weir, Jeffery; Bastian, Nathaniel D.
Deep learning has enabled network intrusion detection rates as high as 99.9% for malicious network packets without requiring feature engineering. Adversarial machine learning methods have been used to evade classifiers in the computer vision domain; however, existing methods do not translate well into the constrained cyber domain because they tend to produce non-functional network packets. This research views the payload of a network packet as code composed of many functional units. A meta-heuristic-based generative model is developed to maximize the classification loss of packet payloads with respect to a surrogate model by repeatedly substituting units of code with functionally equivalent counterparts. The perturbed packets are then transferred and tested against three network intrusion detection system (NIDS) test classifiers, with evasion rates that vary by classifier and malicious packet type. When the test classifier shares the surrogate model's architecture, near-optimal adversarial examples penetrate the test model for 69% of packets, whereas the raw examples succeed for only 5% of packets. These results confirm the hypothesis that NIDS classifiers are vulnerable to adversarial attacks, motivating research in robust learning for cyber.

Item
Counter-AI Tool System Design for AI System Adversarial Testing and Evaluation (Proceedings of the Annual General Donald R. Keith Memorial Conference, 2022)
Byington, Nathan; Davis, Carter; Meehan, Matthew; Vincent, Caroline; Woodward, David; Bastian, Nathaniel D.
This work consists of the initial recommendations and conclusions from soliciting functional requirements for the research, design, and development of a Counter-AI Tool for conducting adversarial testing and evaluation of artificial intelligence (AI) systems.
The report includes a literature review of relevant AI concepts and extensive research within the adversarial AI domain. An intensive stakeholder analysis, consisting of requirement elicitation from over twenty governmental and non-governmental organizations, helped determine which functional requirements should be included in the system design of a Counter-AI Tool. The resulting system architecture diagram takes user input, tests for various types of adversarial AI attacks, and outputs the vulnerabilities of the AI model. Prior to operationalization, partner organizations will conduct iterative experimentation as the next step in the development and deployment of this Counter-AI Tool.
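The architecture described in the second abstract (user-supplied model in, a battery of adversarial tests run against it, a list of vulnerabilities out) can be sketched in a few lines of Python. This is a hypothetical illustration only, not the actual Counter-AI Tool design: the probe names, the toy classifier, and the test inputs are all invented for the example, and a real harness would query trained models with genuine adversarial attack implementations.

```python
from typing import Callable, Dict, List, Sequence

# A "model" here is any callable classifier mapping a feature vector
# to an integer class label (a hypothetical simplification).
Model = Callable[[Sequence[float]], int]


def evasion_probe(model: Model) -> bool:
    """Hypothetical evasion test: perturb an input slightly and check
    whether the predicted label flips."""
    base = [0.50, 0.50]
    perturbed = [0.55, 0.45]
    return model(base) != model(perturbed)


def degenerate_input_probe(model: Model) -> bool:
    """Hypothetical brittleness test: flag the model if two clearly
    different degenerate inputs receive the same label."""
    return model([0.0, 0.0]) == model([1.0, 1.0])


# Registry of attack classes the harness tests for; a real tool would
# hold genuine adversarial attack implementations here.
PROBES: Dict[str, Callable[[Model], bool]] = {
    "evasion": evasion_probe,
    "degenerate-input": degenerate_input_probe,
}


def evaluate(model: Model) -> List[str]:
    """Run every registered probe against the user-supplied model and
    return the names of the attack classes it is vulnerable to."""
    return [name for name, probe in PROBES.items() if probe(model)]


# A deliberately brittle toy classifier: thresholds the first feature.
brittle = lambda x: int(x[0] > 0.52)
report = evaluate(brittle)
```

For the toy classifier above, the small perturbation crosses the 0.52 threshold and flips the label, so the harness reports an "evasion" vulnerability; the two degenerate inputs receive different labels, so that probe passes.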