Army Cyber Institute
Browsing Army Cyber Institute by Title
Now showing 1 - 20 of 269
Item: 1000 Cuts (Army Cyber Institute, 2018) Johnson, Brian David
Science Fiction Prototypes are science fiction stories based on future trends, technologies, economics, and cultural change. The story you are about to read is based on threatcasting research from the Army Cyber Institute at West Point and Arizona State University’s Threatcasting Lab. Our story does not shy away from a dystopian vision of tomorrow. Exploring these dark regions inspires us to build a better, stronger, and more secure future for our Armed Forces.

Item: 11-25-2027 (Army Cyber Institute, 2018) Johnson, Brian David
Science Fiction Prototypes are science fiction stories based on future trends, technologies, economics, and cultural change. The story you are about to read is based on threatcasting research from the Army Cyber Institute at West Point and Arizona State University’s Threatcasting Lab. Our story does not shy away from a dystopian vision of tomorrow. Exploring these dark regions inspires us to build a better, stronger, and more secure future for our Armed Forces. Once a year, Americans sit down to a Thanksgiving meal that unites us in gratitude for our safety and security. As many follow the celebration with a football game or an after-dinner nap, our defense automated supply chain never sleeps. Our economy is becoming more and more automated. Between global supply chains and high-frequency trading, our national and economic security is increasingly dependent on automation and AI. But what safeguards monitor the machines that we depend upon? On Thanksgiving Day 2027, robots and algorithms will hyper-efficiently run our supply chains, but are these systems themselves secure?

Item: 5 assumptions that should change how we think about hackbacks (C4ISRNET, 2018) Kallberg, Jan
The demand for legalizing corporate hackbacks is growing, and there is significant interest among private corporations in hacking back if the technique were lawful.
If private companies were able to obtain the right to hack back legally, the risk of blowback would likely outweigh the opportunity and potential gains from private hackbacks. The proponents of private hackback tend to build their case on a set of assumptions. But if those assumptions are not valid, private hackback could become a federal problem through uncontrolled escalation and spillover from these private counterstrikes.

Item: 6G Systems and the Future of Multidimensional Attack Planes (Army Cyber Institute, 2023) Palochak, Joshua; Brown, Jason C.; Johnson, Brian David; Marx, John; Aranda, Annette
In the coming decade, future threats from attacks on 6G communications systems appear in four categories or groups. The threats are specific to government, military, and critical infrastructure. In many ways, 6G promises to serve as the catalyst for the utopian world that science fiction has conditioned Western society to envision and pursue. As such, we describe futures in 2030 where 6G has fully permeated human lives. However, a paradox emerges: the journey to achieve this connected state further isolates humans and comes at a cost of privacy and increased vulnerability. More astounding is society’s apparent ignorance of these risks. In short, while 6G may not generate unimaginable technologies, there is a toxic side of humanity that is not being recognized even as humanity simultaneously surrenders its power.

Item: A Capstone Design Project for Teaching Cybersecurity to Non-technical Users (ACM, 2016) Estes, Tanya T.; Finocchiaro, James; Blair, Jean R.S.; Robison, Jonathan; Dalme, Justin; Emana, Michael; Jenkins, Luke; Sobiesk, Edward
This paper presents a multi-year undergraduate computing capstone project that holistically contributes to the development of cybersecurity knowledge and skills in non-computing high school and college students.
We describe the student-built Vulnerable Web Server application, a system that packages instructional materials and pre-built virtual machines to provide cybersecurity lessons to non-technical students. The Vulnerable Web Server learning materials have been piloted at several high schools and are now integrated into multiple security lessons in an intermediate, general-education information technology course at the United States Military Academy. Our paper interweaves a description of the Vulnerable Web Server materials with the senior capstone design process that allowed it to be built by undergraduate information technology and computer science students, resulting in a valuable capstone learning experience. Throughout the paper, a call is made for greater emphasis on educating the non-technical user.

Item: A control measure framework to limit collateral damage and propagation of cyber weapons (IEEE, 2013) Raymond, David; Conti, Gregory; Cross, Tom; Fanelli, Robert
With the recognition of cyberspace as a warfighting domain by the U.S. Department of Defense, we anticipate increased use of malicious software as weapons during hostilities between nation-states. Such conflict could occur solely on computer networks, but cyber weapons will increasingly be used in conjunction with traditional kinetic attack, or even to eliminate the need for kinetic attack. In either context, precise targeting and effective limiting of collateral damage from cyber weaponry are desired goals of any nation seeking to comply with the law of war. Since at least the Morris Worm, malicious software found in the wild has frequently contained mechanisms to target effectively, limit propagation, allow self-destruction, and minimize consumption of host resources to prevent detection and damage. This paper surveys major variants of malicious software from 1982 to the present and synthesizes the control measures they contain that might limit collateral damage in future cyber weapons.
As part of this work, we provide a framework for critical analysis of such measures. Our results indicate that studying these measures yields a compelling analytical framework, one that allows classification of new forms of malware and provides insight into future technical mechanisms for limiting collateral damage.

Item: A Lot More than a Pen Register, and Less than a Wiretap: What the Stingray Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities (Yale J.L. & Tech, 2013) Pell, Stephanie K.; Soghoian, Christopher
In June 2013, through an unauthorized disclosure to the media by ex-NSA contractor Edward Snowden, the public learned that the NSA, since 2006, had been collecting nearly all domestic phone call detail records and other telephony metadata pursuant to a controversial, classified interpretation of Section 215 of the USA PATRIOT Act. Prior to the Snowden disclosure, the existence of this intelligence program had been kept secret from the general public, though some members of Congress knew both of its existence and of the statutory interpretation the government was using to justify the bulk collection. Unfortunately, the classified nature of the Section 215 metadata program prevented them from alerting the public directly, so they were left to convey their criticisms of the program to certain federal agencies as part of a non-public oversight process. The efficacy of an oversight regime burdened by such strict secrecy is now the subject of justifiably intense debate. In the context of that debate, this Article examines a very different surveillance technology, one that has been used by federal, state, and local law enforcement agencies for more than two decades without invoking even the muted scrutiny Congress applied to the Section 215 metadata program.
During that time, this technology has steadily and significantly expanded the government’s surveillance capabilities in a manner, and to a degree, that has so far gone largely unnoticed and unregulated. Indeed, it has never been explicitly authorized by Congress for law enforcement use. This technology, commonly called the StingRay after the best-known brand name in a family of surveillance devices, enables the government, directly and in real time, to intercept communications data and detailed location information of cellular phones, data that it would otherwise be unable to obtain without the assistance of a wireless carrier. Drawing from the lessons of the StingRay, this Article argues that if statutory authorities regulating law enforcement surveillance technologies and methods are to have any hope of keeping pace with technology, some formalized mechanism must be established through which complete, reliable, and timely information about new government surveillance methods and technologies can be brought to the attention of Congress.

Item: A Modeling Framework for Studying Quantum Key Distribution System Implementation Nonidealities (IEEE Access, 2015) Mailloux, Logan O.; Morris, Jeffrey D.; Grimaila, Michael R.; Hodson, Douglas D.; Jacques, David R.; Colombi, John M.; Mclaughlin, Colin V.; Holes, Jennifer A.
Quantum key distribution (QKD) is an innovative technology that exploits the laws of quantum mechanics to generate and distribute unconditionally secure shared keys for use in cryptographic applications. However, QKD is a relatively nascent technology whose real-world system implementations differ significantly from their ideal theoretical representations. In this paper, we introduce a modeling framework, built upon the OMNeT++ discrete event simulation framework, to study the impact of implementation nonidealities on QKD system performance and security.
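For intuition only, the kind of nonideality such a simulation framework can capture may be sketched in a few lines. This is not the paper's OMNeT++ model; the misalignment parameter and its value are invented for illustration:

```python
import random

def bb84_sift(n_bits, misalignment, seed=0):
    """Toy BB84 prepare-and-measure round with a single nonideality:
    with probability `misalignment`, a matched-basis measurement still
    flips the bit (e.g., imperfect polarization alignment)."""
    rng = random.Random(seed)
    alice, bob = [], []
    for _ in range(n_bits):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        basis_a = rng.randint(0, 1)      # 0 = rectilinear, 1 = diagonal
        basis_b = rng.randint(0, 1)      # Bob measures in a random basis
        if basis_a != basis_b:
            continue                     # sifting discards mismatched bases
        measured = bit ^ (rng.random() < misalignment)  # nonideality
        alice.append(bit)
        bob.append(measured)
    errors = sum(a != b for a, b in zip(alice, bob))
    qber = errors / len(bob) if bob else 0.0
    return len(bob), qber

sifted_len, qber = bb84_sift(10_000, misalignment=0.03)
# Roughly half the raw bits survive sifting; the quantum bit error rate
# (QBER) tracks the injected misalignment rate.
```

In an ideal BB84 run the QBER on sifted bits is zero, so a nonzero QBER is exactly the kind of implementation artifact that separates a real system from its theoretical representation.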
Specifically, we demonstrate the capability to study device imperfections and practical engineering limitations through the modeling and simulation of a polarization-based, prepare-and-measure BB84 QKD reference architecture. The reference architecture allows users to model and study complex interactions between physical phenomena and system-level behaviors representative of real-world design and implementation trade-offs. Our results demonstrate the flexibility of the framework to simulate and evaluate current, future, and notional QKD protocols and components.

Item: A New Mindset for the Army: Silent Running (C4ISRNET, 2019) Kallberg, Jan; Hamilton, Stephen S.
In the past two decades, the U.S. Army has continually added new technology to the battlefield. While this technology has enhanced the ability to fight, it has also greatly increased an adversary’s ability to detect and potentially interrupt or intercept operations.

Item: A ranked solution for social media fact checking using epidemic spread modeling (2022-01) Smith, John H.; Bastian, Nathaniel D.
Within the past decade, social media has become a primary platform for consumption of information and current events. Unlike with traditional news sources, however, social media posts do not have to go through a rigorous validation process prior to publication. The 2019 Mueller Report illustrates how malicious actors have taken advantage of these lax requirements to sway public opinion on topics from the #blacklivesmatter movement to the 2016 U.S. Presidential election. Currently, social media companies rely primarily on communal policing of misinformation, but it is unlikely that such policing will happen with regularity. To counteract this, other literature on the topic focuses on using deep learning models to separate accurate from misleading content; however, the rapidly evolving nature of misinformation means that these models must be retrained and redeployed on an iterative and time-consuming basis.
This work, therefore, proposes a novel approach to the problem: treating misinformation as a virus. Specifically, we propose a ranking system that third-party fact checkers can use to prioritize posts for checking. The algorithm is then tested against multiple data sets with strong positive results, decreasing viral spread in a matter of minutes.

Item: A Sensitivity Analysis of Poisoning and Evasion Attacks in Network Intrusion Detection System Machine Learning Models (2021-12-30) Talty, Kevin; Stockdale, John; Bastian, Nathaniel D.
As the demand for data has increased, we have witnessed a surge in the use of machine learning to help industry and government make sense of massive amounts of data and, subsequently, make predictions and decisions. For the military, this surge has manifested itself in the Internet of Battlefield Things. The pervasive nature of data on today’s battlefield will allow machine learning models to increase soldier lethality and survivability. However, machine learning models are predicated upon the assumptions that the data on which they are trained is truthful and that the models themselves are not compromised. These assumptions about the quality of data and models cannot remain the status quo as attackers establish novel methods to exploit machine learning models for their benefit. These novel attack methods are described as adversarial machine learning (AML). Such attacks allow an attacker to covertly alter a machine learning model before or after training in order to degrade the model’s ability to detect malicious activity. In this paper, we show how AML, by poisoning data sets and evading well-trained models, affects machine learning models’ ability to function as Network Intrusion Detection Systems (NIDS).
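A minimal sketch can convey the evasion setting described above. The detector, feature names, and weights below are invented for illustration, not taken from the paper:

```python
# Toy linear "NIDS": flows scoring above 0 are flagged as malicious.
WEIGHTS = {"duration": -0.2, "bytes_per_s": 0.004, "syn_ratio": 3.0}
BIAS = -1.0

def score(flow):
    return BIAS + sum(WEIGHTS[k] * flow[k] for k in WEIGHTS)

def evade(flow, rel_step=0.05, max_iter=500):
    """Gradient-sign-style evasion: nudge each feature a small relative
    step against the sign of its weight until the score drops below the
    detection threshold. Feasibility constraints (features an attacker
    cannot actually change) are ignored in this toy."""
    x = dict(flow)
    for _ in range(max_iter):
        if score(x) <= 0:
            break
        for k, w in WEIGHTS.items():
            direction = 1.0 if w > 0 else -1.0
            x[k] -= rel_step * direction * max(abs(x[k]), 1.0)
    return x

malicious = {"duration": 2.0, "bytes_per_s": 800.0, "syn_ratio": 0.9}
adversarial = evade(malicious)
# score(malicious) is positive (detected); score(adversarial) is not.
```

The same idea scales to real models: the attacker perturbs traffic features along directions the model is sensitive to, which is why evasion is effective against detectors trained only on clean data.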
Finally, we highlight why evasion attacks are especially effective in this setting and discuss some of the causes of this degradation in model effectiveness.

Item: A Widening Attack Plain (Army Cyber Institute, 2016) Johnson, Brian David
The Army Cyber Institute at West Point is the Army’s and the nation’s think tank for cyber warfare and the cyber domain. The ACI creates knowledge, builds public- and private-sector partnerships, and operates an entrepreneurial and innovation laboratory to focus investments. Positioned to establish and maintain relationships with the nation’s economic center of gravity in New York City, the ACI directs and synchronizes efforts across the U.S. Military Academy in the cyber domain. The ACI collaborates with U.S. Army Cyber Command and the U.S. Army Cyber Center of Excellence to prevent strategic surprise and ensure the Army’s dominance of the cyber domain.

Item: A Year of Cyber Professional Development (Cyber Defense Review, 2015) Vanatta, Natalie
“The nation that will insist upon drawing a broad line of demarcation between the fighting man and the thinking man is liable to find its fighting done by fools and its thinking by cowards.” – Sir William Francis Butler, 19th-century British Lieutenant General
After more than a decade at war, the Army is not the same institution that I joined before the 9/11 terrorist attacks. Traditions that bound generations of service members together have been forgotten, and institutional knowledge has vanished. Developing leaders in a fiscally constrained environment is one of the key skills that has been lost. With military budgets shrinking, the art of developing leaders prepared to handle diverse situations seems a daunting challenge. We have relied on mobile training teams, scripted rotations in the box, and deployments to sustained bases to train Soldiers and Leaders to handle typical scenarios.
All of these incur expenses that are no longer sustainable, and none of them truly focuses on stretching leaders’ skills and capabilities to handle the unknown.

Item: ACI Threat Trends and Predictions 2017 Report (Army Cyber Institute, 2017) Rhoades, Quincey; Twist, Jim
“The expanding attack surface enabled by technology innovations such as cloud computing and IoT devices, a global shortage of cyber-security talent, and regulatory pressures continue to be significant drivers of cyber-threats. The pace of these changes is unprecedented, resulting in a critical tipping point as the impact of cyber-attacks are felt well beyond their intended victims in personal, political, and business consequences. Going forward, the need for accountability at multiple levels is urgent and real affecting vendors, governments, and consumers alike. Without swift action, there is a real risk of disrupting the progress of the global digital economy.” – Derek Manky, Global Security Strategist at Fortinet

Item: Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities (AAAI FSS-20, 2020) Shipp, Tyler J.; Clouse, Daniel J.; De Lucia, Michael J.; Ahiskali, Metin B.; Steverson, Kai; Mullin, Jonathan; Bastian, Nathaniel D.
Artificial intelligence (AI) and machine learning (ML) have become increasingly vital in the development of novel defense and intelligence capabilities across all domains of warfare. An adversarial AI (A2I) and adversarial ML (AML) attack seeks to deceive and manipulate AI/ML models; it is imperative that AI/ML models can defend against these attacks. A2I/AML defenses will help provide the necessary assurance of advanced capabilities that use AI/ML models. The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities via new A2I/AML defenses by fostering a collaborative environment across the U.S. Department of Defense and U.S. Intelligence Community.
The A2IWG aims to identify specific challenges that it can help solve or address more directly, with an initial focus on three topics: AI Trusted Robustness, AI System Security, and AI/ML Architecture Vulnerabilities.

Item: Adversarial machine learning in Network Intrusion Detection Systems (2021-12-30) Alhajjar, Elie; Maxwell, Paul; Bastian, Nathaniel D.
Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. Such examples have achieved a great deal of success in domains such as image recognition, speech recognition, and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithms) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, NSL-KDD and UNSW-NB15, and contrast them with a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbation.

Item: Aesop’s wolves: the deceptive appearance of espionage and attacks in cyberspace (Intelligence and National Security, 2015) Brantly, Aaron F.
Appearances in cyberspace are deceptive and problematic. Deception in the cyber domain makes it immensely difficult for states to differentiate between espionage activities in cyberspace and cyber attacks.
The inability to distinguish between these cyber activities places US cyber infrastructure in a perilous position and increases the possibility of a disproportionate or inadequate response to cyber incidents. This paper uses case analysis to examine the characteristics of the tools and decisions associated with cyber espionage and cyber attacks, developing a framework for distinguishing between them that leverages epidemiological models for combating disease.

Item: After a cyberattack, the waiting is the hardest part (C4ISRNET, 2019) Kallberg, Jan
We tend to see vulnerabilities and concerns about cyber threats to critical infrastructure from our own viewpoint. But an adversary will assess where and how a cyberattack on America will benefit the adversary’s strategy. I am not convinced attacks on critical infrastructure, in general, have the payoff that an adversary seeks.

Item: Algorithm selection framework for cyber attack detection (2020-07) Chalé, Marc; Bastian, Nathaniel D.; Weir, Jeffery
The number of cyber threats against both wired and wireless computer systems and other components of the Internet of Things continues to increase annually. In this work, an algorithm selection framework is employed on the NSL-KDD data set, and a novel paradigm of machine learning taxonomy is presented. The framework uses a combination of user input and meta-features to select the best algorithm to detect cyber attacks on a network. Performance is compared between a rule-of-thumb strategy and a meta-learning strategy. The framework removes the guesswork of the common trial-and-error approach to algorithm selection. The framework recommends five algorithms from the taxonomy. Both strategies recommend a high-performing algorithm, though not the best-performing one.
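The meta-feature idea behind such selection frameworks can be sketched briefly. The meta-features, history records, and algorithm names below are hypothetical illustrations, not the paper's framework:

```python
import math

# Records of previously solved detection problems: meta-features of the
# data set and the algorithm that performed best on it (all invented).
HISTORY = [
    ((4.0, 10, 0.50), "decision_tree"),
    ((6.0, 40, 0.05), "random_forest"),
    ((7.0, 120, 0.01), "neural_net"),
]

def meta_features(n_rows, n_features, positive_rate):
    """Cheap data-set descriptors: log10 size, dimensionality, class balance."""
    return (math.log10(n_rows), n_features, positive_rate)

def recommend(mf):
    """Nearest-neighbour lookup over past performance records.
    (Meta-features are left unscaled for brevity; a real selector would
    normalize them so no single descriptor dominates the distance.)"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(HISTORY, key=lambda rec: dist(rec[0], mf))[1]

choice = recommend(meta_features(1_000_000, 41, 0.04))
# → "random_forest": the new data set most resembles the second record.
```

The payoff is replacing trial-and-error: instead of training every candidate algorithm, the selector maps cheap descriptors of a new data set to the algorithm that worked on similar past problems.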
The work demonstrates the close connection between algorithm selection and the taxonomy on which it is premised.

Item: An Account of Interference in Associative Memory: Learning the Fan Effect (Topics in Cognitive Science, 2017) Thomson, Robert; Harrison, Anthony M.; Trafton, J. Gregory; Hiatt, Laura M.
Associative learning is an essential feature of human cognition, accounting for the influence of priming and interference effects on memory recall. Here, we extend our account of associative learning, which learns asymmetric item-to-item associations over time via experience (Thomson, Pyke, Trafton, & Hiatt, 2015), by including link maturation to balance longer-term stability of associations while still accounting for short-term variability. This account, combined with an existing account of activation strengthening and decay, predicts both human response times and error rates for the fan effect (Anderson, 1974; Anderson & Reder, 1999) for both target and foil stimuli.
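For intuition, the fan effect's activation logic can be sketched in the ACT-R tradition the abstract cites. The parameter values and the response-time mapping below are illustrative assumptions, not the authors' model:

```python
import math

S_MAX = 2.0   # maximum associative strength
BASE = 0.5    # base-level activation (decay is ignored in this sketch)
W = 0.5       # attentional weight given to each cue in the probe

def activation(fans):
    """Activation of a fact probed by cues with the given fans, where a
    cue's fan is the number of facts it appears in. Associative strength
    falls with fan: S_ji = S_MAX - ln(fan_j)."""
    return BASE + sum(W * (S_MAX - math.log(f)) for f in fans)

def response_time(a, F=1.0):
    return F * math.exp(-a)   # higher activation -> faster retrieval

rt_low_fan = response_time(activation([1, 1]))   # cues unique to one fact
rt_high_fan = response_time(activation([3, 3]))  # each cue shared by three facts
# rt_high_fan > rt_low_fan: the classic fan effect on recall latency.
```

Because each additional fact a cue participates in divides the cue's associative strength, facts built from "busy" cues receive less activation and are retrieved more slowly, which is the interference pattern the paper models.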