Adversarial machine learning in Network Intrusion Detection Systems
Authors
Alhajjar, Elie
Maxwell, Paul
Bastian, Nathaniel D.
Issue Date
2021-12-30
Type
journal-article
Language
en_US
Abstract
Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. These examples have achieved a great deal of success in several domains such as image recognition, speech recognition and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, namely the NSL-KDD and UNSW-NB15, and we contrast them with a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbation.
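
To illustrate the kind of evasion search the abstract describes, the following is a minimal sketch of a genetic-algorithm-style perturbation loop against a fitted tabular classifier. It is not the authors' implementation; the data, feature ranges, fitness weighting, and hyperparameters are hypothetical placeholders chosen only to make the example self-contained and runnable.

# Illustrative sketch only: a toy genetic-algorithm perturbation loop against a
# fitted scikit-learn classifier on tabular features. All values are hypothetical
# and are not taken from the paper or its data sets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 1000 "flows", 20 numeric features scaled to [0, 1].
X = rng.random((1000, 20))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)          # 1 = "malicious", 0 = "benign"
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def evade(x, model, pop_size=40, generations=30, step=0.05):
    """Search for a small perturbation of x that the model labels benign (class 0)."""
    pop = np.clip(x + rng.normal(0.0, step, size=(pop_size, x.size)), 0.0, 1.0)
    for _ in range(generations):
        # Fitness: benign-class probability, penalized by distance from the original flow.
        p_benign = model.predict_proba(pop)[:, 0]
        fitness = p_benign - 0.1 * np.linalg.norm(pop - x, axis=1)
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]
        # Crossover: mix feature values from random pairs of parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, x.size)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Mutation: small Gaussian noise on a random subset of features.
        children += rng.normal(0.0, step, size=children.shape) * (rng.random(children.shape) < 0.2)
        pop = np.clip(children, 0.0, 1.0)
    return pop[np.argmax(model.predict_proba(pop)[:, 0])]

x_attack = X[y == 1][0]                # a flow the model should flag as malicious
x_adv = evade(x_attack, model)
print("original label:", model.predict(x_attack.reshape(1, -1))[0])
print("adversarial label:", model.predict(x_adv.reshape(1, -1))[0])

A real NIDS evasion attack would additionally constrain the perturbation so that the modified features still correspond to valid, functional network traffic; the distortion penalty above is only a crude stand-in for such constraints.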
Citation
Elie Alhajjar, Paul Maxwell, Nathaniel Bastian, Adversarial machine learning in Network Intrusion Detection Systems, Expert Systems with Applications, Volume 186, 2021, 115782, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2021.115782.
Journal
Expert Systems with Applications
Volume
186
ISSN
0957-4174
