AI Experts: Account for AI/ML Resilience & Risk While There’s Still Time

Source: www.darkreading.com – Author: Ericka Chickowski, Contributing Writer, Dark Reading

RSA CONFERENCE 2023 – San Francisco – As enterprises and government agencies increasingly weave artificial intelligence (AI) and machine learning (ML) into their broader set of systems, they’ll need to account for a range of risk and resilience issues that start with cybersecurity concerns but spread far beyond them.

A panel of distinguished AI and security researchers at RSA Conference 2023 on April 24 examined the problem space of AI resilience, which includes weighty issues like adversarial AI attacks, AI bias, and the ethical application of AI modeling.

Cybersecurity professionals need to start tackling these issues both within their organizations and as collaborators with government and industry groups, panelists noted.

“Many organizations will look to integrate AI/ML capabilities into their core business functions, but in doing so will increase their own attack surface,” explained panel moderator Bryan Vorndran, assistant director at the FBI Cyber Division. “Attacks can occur at every stage of the AI and ML development and deployment cycle; models, training data, and APIs can all be targeted.”

The good news is that there is time to ramp up these efforts if the community begins work now.

“We have a really unique opportunity here as a community,” said Neil Serebryany, CEO of CalypsoAI. “We’re aware of the fact that there is a threat, we’re seeing early incidents of this threat, and the threat is not full-blown yet.”

The ‘yet’ is the operative word, he emphasized, and his fellow panelists agreed. The field of risk management is in a place with AI similar to where cybersecurity was with the Internet in the 1980s, said Bob Lawton, chief of mission capabilities for the Office of the Director of National Intelligence Science and Technology Group.

“Imagine if it’s 1985 and you knew the challenges that we were going to face in the cyber domain now. What would we, as a community, as an industry, have done differently 35 years ago? That’s exactly where we’re at with AI right now,” said Lawton. “We have the time and space to get it right.”

When it comes specifically to direct attacks against AI systems by adversaries, the threats are still very rudimentary, but that’s only because attackers are putting in only the work they need to achieve their objectives right now, said Christina Liaghati, AI strategy execution and operations manager for MITRE Corporation.

“I think we’re going to see many more of the malicious actors having a higher level of sophistication of these attacks, but right now they don’t have to, which I think is what’s really interesting about this space,” she told the audience.

Nevertheless, she warned that organizations can’t treat the risks lightly. Threat actors’ interest in building sophistication and knowledge of AI models will only grow as AI is embedded into systems they can profitably attack. And this is just as true of smaller organizations using simple ML models in financial systems as it is of government agencies using AI in an intelligence capacity.

Everyone Is at Risk

“If you’re deploying AI in any environment where any actor might want to misuse or evade or attack that system, your system is vulnerable,” she said. “So, it’s not just super advanced tech giants or anybody that’s deploying AI in a massive way. If your system is in any kind of consequential environment and then you incorporate AI and machine learning into that broader system of systems context, you could be exposing it in new ways that you’re probably not thinking about or necessarily prepared for.”

The challenge with AI for many cybersecurity executives is that addressing these risks will require that they and their teams gain a whole new set of knowledge and parlance around AI and data science.

“I don’t think that AI assurance at its core is a traditional infosec problem,” Serebryany said. “It’s a machine learning problem that we’re trying to figure out how to translate into the infosec community.”

For example, hardening the models requires an understanding of key data science metrics like recall, precision, accuracy, and F1 scores, he said.
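For readers unfamiliar with those terms, all four metrics derive from a model’s confusion matrix. The following Python sketch is purely illustrative, not from the panel; the function name and the example counts are hypothetical:

```python
# Illustrative sketch: the four evaluation metrics cited above, computed
# from a binary classifier's confusion-matrix counts (values are hypothetical).

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total       # share of all predictions that are correct
    precision = tp / (tp + fp)         # share of flagged positives that are real
    recall = tp / (tp + fn)            # share of real positives the model catches
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example: 80 true positives, 10 false positives, 20 false negatives, 890 true negatives
print(classification_metrics(tp=80, fp=10, fn=20, tn=890))
# {'accuracy': 0.97, 'precision': 0.888..., 'recall': 0.8, 'f1': 0.842...}
```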

“So I think it’s kind of incumbent upon us to be able to figure out how to take these underlying ML concepts and research and translate the parlance, the concepts, and the standard operating procedures within a soft context that makes sense within the infosec community,” he said.

At the same time, Liaghati said not to discount the security basics, as AI/ML models and systems will be deployed in the context of other systems for which security teams have decades of experience managing risk. The principles of data security, application security, and network security are still extremely relevant, as are standard risk management and OpSec best practices.

“So many of those are just good practices. It’s not just a big, fancy adversarial layer or being able to patch a data set. It’s not necessarily that complicated,” she said. “Many of the ways that you can mitigate these threats are just thinking about the amount of information that you’re putting out within public domain on what models you’re using, what data you’re using, where it’s coming from, [and] what the broader system context looks like around that AI system.”

Original Post URL: https://www.darkreading.com/vulnerabilities-threats/ai-experts-account-ai-ml-resilience-risk-time
