
NIST: Better Defenses are Needed for AI Systems – Source: securityboulevard.com


Source: securityboulevard.com – Author: Jeffrey Burt

The accelerating development and expanding deployment of AI systems is creating significant security and privacy risks that aren’t being mitigated by modern solutions, according to a research paper from the U.S. National Institute of Standards and Technology (NIST).

Predictive and generative AI systems and machine learning operations rely on massive amounts of data that open them up to a range of attacks and data leaks, and there is no “silver bullet” for protecting against them, according to the 106-page paper written by NIST in conjunction with Northeastern University and Robust Intelligence, a San Francisco company that offers an AI risk detection and mitigation platform.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” NIST computer scientist Apostol Vassilev, one of the publication’s authors, said in a statement. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks.”

The cybersecurity community needs to develop better defenses, Vassilev said.

A Look at AML

The paper, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” is designed to better define adversarial machine learning (AML) in hopes of helping developers create solutions for securing AI applications and systems against manipulation by bad actors.

The report outlines that predictive and generative AI systems include data, machine learning models, and processes for training, testing, and deploying the models, along with the necessary infrastructure. In addition, generative AI systems also can be linked to corporate documents and databases.

“The data-driven approach of ML introduces additional security and privacy challenges in different phases of ML operations besides the classical security and privacy threats faced by most operational systems,” the researchers wrote. “These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data.”

These kinds of attacks already are happening, and their sophistication and potential impact are increasing.

Four Types of Attacks

The report describes four types of attacks that can occur on AI systems. In evasion attacks, a bad actor tries to alter an input to change how a system responds to it, such as adding markings that make autonomous vehicles misinterpret road signs. Poisoning attacks involve adding corrupted data to a training dataset, and privacy attacks occur when threat actors try to access sensitive information about the AI or the data it was trained on in hopes of misusing it.
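
As a rough illustration of the evasion idea (not an example from the NIST paper), the sketch below uses a toy linear classifier and a hypothetical FGSM-style signed perturbation: a small, targeted change to the input flips the model's prediction even though the input barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "model": a fixed linear decision rule sign(w . x + b)
w = rng.normal(size=20)
b = 0.0

def predict(x):
    return int(np.sign(w @ x + b))

# A benign input; flip its sign if needed so the model labels it +1
x = rng.normal(size=20)
if predict(x) != 1:
    x = -x

# Evasion: repeatedly nudge the input with a small signed step against the
# model's score (an FGSM-style perturbation) until the predicted label flips
epsilon = 0.05
x_adv = x.copy()
for _ in range(200):
    if predict(x_adv) != 1:
        break
    x_adv -= epsilon * np.sign(w)

print("original label:", predict(x))
print("adversarial label:", predict(x_adv))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```

Real evasion attacks target far more complex models, but the mechanic is the same: find a perturbation small enough to look benign yet large enough to cross the decision boundary.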

Abuse attacks involve inserting incorrect information into a source, like a legitimate webpage or online document, that an AI system pulls in.

“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” paper co-author Alina Oprea, a professor at Northeastern, said in a statement. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
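
To make that point concrete, here is a minimal, hypothetical label-flipping sketch (not from the NIST paper): flipping the labels of a few dozen training samples before fitting a simple scikit-learn classifier, then comparing test accuracy against a model trained on the clean data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification data standing in for a real training set
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: flip the labels of 40 training samples the clean model is most
# confident about -- a very small percentage of the training set
confidence = np.abs(clean_model.decision_function(X_train))
targets = np.argsort(confidence)[-40:]
y_poisoned = y_train.copy()
y_poisoned[targets] = 1 - y_poisoned[targets]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned test accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```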

Watch Out for ‘Snake Oil’

In the report, the researchers talk about the size of the large language models (LLMs) used to create generative AI and the large datasets being used to train them. A challenge is that the datasets are too large for individuals to monitor and filter properly, so there are no foolproof methods for protecting AI from misdirection, they wrote.

The report outlines some ways to mitigate the security and privacy threats, though current defenses are “thus far incomplete at best,” the authors wrote. It’s important that developers and organizations that want to deploy and use AI technologies are aware of the limitations, according to Vassilev.

“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” he said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”

Putting AI Threats Front and Center

The paper is part of the White House’s whole-of-government approach to dealing with the growing threat presented by the rapid innovation around AI. NIST last year unveiled its AI Risk Management Framework and is seeking comments through February 2 on its efforts to create trustworthy ways of developing and using AI.

Other agencies also are taking on the challenge of securing AI development. The Cybersecurity and Infrastructure Security Agency (CISA) in August 2023 advised developers that AI applications – like all software – need to have security designed into them. The same month, the U.S. Defense Advanced Research Projects Agency (DARPA) unveiled the AI Cyber Challenge to urge cybersecurity and AI specialists to create ways to automatically detect and fix software flaws and protect critical infrastructure.

Also, high-profile companies like Google, Microsoft, OpenAI, and Meta are working with the White House to address risks posed by AI, and Google, Microsoft, OpenAI, and Anthropic in July 2023 announced the Frontier Model Forum, an industry group developing ways to ensure the safe development of foundation AI models.

In November, the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC) announced separate efforts to protect consumers against scammers using AI-enabled voice technologies in fraud and other schemes, with the FTC this month asking for submissions on ways to address the malicious use of voice-cloning technologies.

The FTC also is hosting a virtual summit on January 25 to discuss the emerging AI market and its potential impacts.


Original Post URL: https://securityboulevard.com/2024/01/nist-better-defenses-are-needed-for-ai-systems/

Category & Tags: Cybersecurity,Data Privacy,Data Security,DevOps,Featured,Governance, Risk & Compliance,Industry Spotlight,IoT & ICS Security,Network Security,News,Security Awareness,Security Boulevard (Original),Social – Facebook,Social – LinkedIn,Social – X,Spotlight,Threat Intelligence,Generative AI risks,NIST – Cybersecurity,Data Privacy,Data Security,DevOps,Featured,Governance, Risk & Compliance,Industry Spotlight,IoT & ICS Security,Network Security,News,Security Awareness,Security Boulevard (Original),Social – Facebook,Social – LinkedIn,Social – X,Spotlight,Threat Intelligence,Generative AI risks,NIST

