Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection

Authors

Schneider, Madeleine
Aspinall, David
Bastian, Nathaniel D.

Issue Date

2021-12

Type

proceedings-article

Language

en_US

Abstract

Adversarial machine learning, a technique that seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant in critical ML implementations such as those found in Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML-based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, along with four defense methods. These methods are tested on six ML models with four adversarial sample generation techniques. Our results show that, across different models and adversarial generation techniques, there is limited consistency in which features are vulnerable or in the effectiveness of the defense methods.
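
The abstract does not name the four adversarial sample generation techniques used in the paper. As a concrete illustration of how such samples are typically produced, the sketch below applies the widely used Fast Gradient Sign Method (FGSM) to a toy classifier; the model architecture, feature count, and epsilon are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a NIDS classifier: 10 flow features, 2 classes
# (benign vs. malicious). The actual models and features are described in the paper.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm_attack(model, x, y, epsilon=0.1):
    """FGSM: perturb each feature by epsilon in the direction of the sign
    of the loss gradient, pushing samples toward misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Dummy flow records and labels, for illustration only.
x = torch.randn(8, 10)
y = torch.randint(0, 2, (8,))
x_adv = fgsm_attack(model, x, y)
print("prediction agreement after attack:",
      (model(x).argmax(1) == model(x_adv).argmax(1)).float().mean().item())
```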

Citation

M. Schneider, D. Aspinall and N. D. Bastian, "Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection," 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 2021, pp. 3343-3352, doi: 10.1109/BigData52589.2021.9671580.
