This NIST AI report is intended to be a step toward developing a taxonomy and terminology of adversarial machine learning (AML), which in turn may aid in securing applications of artificial intelligence (AI) against adversarial manipulations of AI systems. The components of an AI system include, at a minimum, the data, the model, the processes for training, testing, and deploying the machine learning (ML) models, and the infrastructure required for using them. The data-driven approach of ML introduces security and privacy challenges in different phases of ML operations beyond the classical security and privacy threats faced by most operational systems. These challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities that degrades the performance of ML classification and regression, and even malicious manipulation, modification, or mere interaction with models to exfiltrate sensitive information about people represented in the data or about the model itself. Such attacks have been demonstrated under real-world conditions, and their sophistication and potential impact have been increasing steadily. AML is concerned with studying the capabilities of attackers and their goals, as well as the design of attack methods that exploit the vulnerabilities of ML during the development, training, and deployment phases of the ML life cycle. AML is also concerned with the design of ML algorithms that can withstand these security and privacy challenges. When attacks are launched with malevolent intent, the robustness of ML refers to mitigations intended to manage the consequences of such attacks.
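
As a concrete illustration of the evasion-style exploitation of model vulnerabilities mentioned above, the sketch below implements the fast gradient sign method (FGSM), one canonical attack from the AML literature rather than a method prescribed by this report. The toy model, the fgsm_perturb helper, and the epsilon budget are all hypothetical choices made for illustration.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any trained model (hypothetical, for illustration only).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.1):
    """Fast gradient sign method: a canonical evasion attack.

    Nudges the input in the direction that maximally increases the loss,
    subject to an L-infinity budget of `epsilon`.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient to increase the loss.
    return (x + epsilon * x.grad.sign()).detach()

x = torch.randn(1, 20)       # a clean test input
y = torch.tensor([0])        # its true label
x_adv = fgsm_perturb(x, y)   # an adversarial example within the epsilon ball
```

The design point is that the perturbation is bounded (here, in the L-infinity norm), so the adversarial input remains close to the original while moving the model's loss as much as a single gradient step allows.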

This report adopts the notions of security, resilience, and robustness of ML systems from the NIST AI Risk Management Framework [170]. Security, resilience, and robustness are gauged by risk, which is a measure of the extent to which an entity (e.g., a system) is threatened by a potential circumstance or event (e.g., an attack) and of the severity of the outcome should such an event occur. However, this report does not make recommendations on risk tolerance (the level of risk that is acceptable to organizations or society) because it is highly contextual and application/use-case specific. This general notion of risk offers a useful approach for assessing and managing the security, resilience, and robustness of AI system components, although quantifying the likelihood of such events is beyond the scope of this document. Correspondingly, the taxonomy of AML is defined with respect to the following four dimensions of AML risk assessment: (i) the learning method and the stage of the ML life cycle at which the attack is mounted, (ii) the attacker's goals and objectives, (iii) the attacker's capabilities, and (iv) the attacker's knowledge of the learning process and beyond.
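
To make these four dimensions concrete, the sketch below encodes them as a small Python data structure that could be used to tag attacks surveyed in the literature. The class and member names are illustrative shorthand of ours, not terms fixed by this report, although the goals of availability, integrity, and privacy and the white-box/black-box distinction are standard in AML.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encodings of the four AML risk-assessment dimensions.
# Member names are hypothetical shorthand, not terminology fixed by the report.

class LifeCycleStage(Enum):
    TRAINING = "training"
    TESTING = "testing"
    DEPLOYMENT = "deployment"

class AttackerGoal(Enum):
    AVAILABILITY = "availability breakdown"
    INTEGRITY = "integrity violation"
    PRIVACY = "privacy compromise"

class AttackerCapability(Enum):
    TRAINING_DATA_CONTROL = "can modify part of the training data"
    QUERY_ACCESS = "can query the deployed model"

class AttackerKnowledge(Enum):
    WHITE_BOX = "full knowledge of model and training"
    BLACK_BOX = "query access only"

@dataclass
class AttackProfile:
    stage: LifeCycleStage
    goal: AttackerGoal
    capability: AttackerCapability
    knowledge: AttackerKnowledge

# Example: data poisoning mounted at training time by an attacker who
# controls part of the training set but cannot see the model internals.
poisoning = AttackProfile(
    stage=LifeCycleStage.TRAINING,
    goal=AttackerGoal.INTEGRITY,
    capability=AttackerCapability.TRAINING_DATA_CONTROL,
    knowledge=AttackerKnowledge.BLACK_BOX,
)
```

One motivation for such a profile is that characterizing an attack along all four axes at once makes it easier to match mitigations to the underlying threat model rather than to the attack's surface details.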

The spectrum of effective attacks against ML is wide, rapidly evolving, and covers all phases of the ML life cycle, from design and implementation to training, testing, and finally, deployment in the real world. These attacks differ in nature and power and can exploit not just vulnerabilities of the ML models but also weaknesses of the infrastructure in which the AI systems are deployed. Although AI system components may also be adversely affected by various unintentional factors, such as design and implementation flaws and data or algorithm biases, these factors are not intentional attacks. Even though they might be exploited by an adversary, they are not within the scope of the literature on AML or this report.

This document defines a taxonomy of attacks and introduces terminology in the field of AML. The taxonomy is built on a survey of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods and life cycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides corresponding methods for mitigating and managing the consequences of attacks and points out relevant open challenges to take into account in the life cycle of AI systems. The terminology used in the report is consistent with the literature on AML and is complemented by a glossary that defines key terms associated with the security of AI systems in order to assist non-expert readers. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language and understanding for the rapidly developing AML landscape. Like the taxonomy, the terminology and definitions are not intended to be exhaustive but rather to aid in understanding key concepts that have emerged in the AML literature.
