AI Supply Chain Attack Method Demonstrated Against Google, Microsoft Products – Source: www.securityweek.com

Source: www.securityweek.com – Author: Eduard Kovacs

Researchers at Palo Alto Networks have uncovered a new attack method that could pose a significant AI supply chain risk, and they demonstrated its impact against Microsoft and Google products, as well as the potential threat for open source projects.

Named ‘Model Namespace Reuse’, the AI supply chain attack method involves threat actors registering names associated with deleted or transferred models that are fetched by developers from platforms such as Hugging Face. 

A successful attack can enable threat actors to deploy malicious AI models and achieve arbitrary code execution, Palo Alto Networks said in a blog post describing Model Namespace Reuse.

Hugging Face is a popular platform for hosting and sharing pre-trained models, datasets, and AI applications. When developers want to use a model, they can reference or pull it by combining the author’s name and the model’s name in the format ‘Author/ModelName’.
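
This naming scheme is visible anywhere a model is loaded programmatically. As a minimal sketch using the Hugging Face transformers library (the model identifier below is a placeholder, not a real project), a developer might pull a model like this:

```python
from transformers import AutoModel, AutoTokenizer

# The model is resolved purely by its 'Author/ModelName' identifier;
# nothing here verifies who currently controls the 'Author' namespace.
model_id = "SomeAuthor/some-model"  # placeholder identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
```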

In a Model Namespace Reuse attack, the attacker searches for models whose owner has deleted their account or transferred it to a new name, leaving the old name available for registration. 

The attacker can register an account with the targeted developer’s name, and create a malicious model with a name that is likely to be referenced by many — or by specifically targeted — projects.
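
The precondition for the attack is a referenced name that no longer resolves on the Hub. A rough defender-side sketch using the huggingface_hub client shows how such orphaned references could be surfaced (the model IDs are placeholders, and a transferred repository may still redirect, so this is an approximation):

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

# Model IDs referenced by a codebase -- placeholders for illustration
referenced_models = ["SomeAuthor/some-model", "OtherAuthor/other-model"]

api = HfApi()
for repo_id in referenced_models:
    try:
        info = api.model_info(repo_id)
        print(f"{repo_id}: still resolves (revision {info.sha})")
    except RepositoryNotFoundError:
        # A deleted or abandoned namespace: exactly what an attacker
        # could re-register to serve a malicious model under this name.
        print(f"{repo_id}: does NOT resolve -- namespace may be claimable")
```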

Palo Alto Networks researchers demonstrated the potential risks against Google’s Vertex AI managed machine learning platform, specifically its Model Garden repository for pre-trained models.

Model Garden supports the direct deployment of models from Hugging Face. The researchers showed that an attacker could abuse this to conduct a Model Namespace Reuse attack by registering the name of a deleted Hugging Face account whose model was still listed, and marked as verified, in Vertex AI.

“To demonstrate the potential impact of such a technique, we embedded a payload in the model that initiates a reverse shell from the machine running the deployment back to our servers. Once Vertex AI deployed the model, we gained access to the underlying infrastructure hosting the model — specifically, the endpoint environment,” the researchers explained.
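
The researchers’ payload is not published, but the underlying mechanism is well known: model formats built on Python pickle execute code at load time. A deliberately harmless illustration of that mechanism (the payload here just echoes a message, standing in for whatever an attacker would actually run):

```python
import os
import pickle

class MaliciousArtifact:
    # pickle calls __reduce__ to decide how to rebuild the object;
    # the callable it returns runs as soon as the bytes are loaded.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran at model load time'",))

blob = pickle.dumps(MaliciousArtifact())
pickle.loads(blob)  # loading alone triggers execution -- no model API needed
```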

The attack was also demonstrated against Microsoft’s Azure AI Foundry, a platform for developing ML and gen-AI applications. Azure AI Foundry also allows users to deploy models directly from Hugging Face, which makes it susceptible to the same type of attack.

“By exploiting this attack vector, we obtained permissions that corresponded to those of the Azure endpoint. This provided us with an initial access point into the user’s Azure environment,” the researchers said.

In addition to demonstrating the attack against the Google and Microsoft cloud platforms, the Palo Alto Networks researchers examined open source repositories that could be susceptible to attacks because they reference Hugging Face models using ‘Author/ModelName’ identifiers.

“This investigation revealed thousands of susceptible repositories, among them several well-known and highly starred projects,” the researchers reported. “These projects include both deleted models and transferred models with the original author removed, causing users to remain unaware of the threat as these projects continue to function normally.”
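
Finding such references in a codebase is largely a pattern-matching exercise. A crude sketch of the kind of scan involved (the regex is an assumption for illustration and will miss cases and produce false positives):

```python
import re
from pathlib import Path

# Naive pattern for 'Author/ModelName' string literals passed to
# from_pretrained(); an illustrative assumption, not an exhaustive rule.
PATTERN = re.compile(r"""from_pretrained\(\s*["']([\w.-]+/[\w.-]+)["']""")

for path in Path(".").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for match in PATTERN.finditer(text):
        print(f"{path}: references {match.group(1)}")
```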

Google, Microsoft and Hugging Face have been notified about the risks, and Google has since started to perform daily scans for orphaned models to prevent abuse.

However, Palo Alto pointed out that “the core issue remains a threat to any organization that pulls models by name alone. This discovery proves that trusting models based solely on their names is insufficient and necessitates a critical reevaluation of security in the entire AI ecosystem.”

In order to mitigate the risks associated with Model Namespace Reuse, the security firm recommends pinning models to a specific commit to prevent unexpected changes in behavior, cloning models and storing them in a trusted location rather than fetching them from a third-party service, and proactively scanning code for model references that could pose a risk.
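
The first two mitigations map onto standard Hugging Face tooling. A hedged sketch, where the model ID and commit hash are placeholders:

```python
from huggingface_hub import snapshot_download
from transformers import AutoModel

model_id = "SomeAuthor/some-model"    # placeholder identifier
pinned_commit = "abc123def456"        # placeholder commit SHA

# Mitigation 1: pin the exact revision so a re-registered namespace
# cannot silently serve different bytes under the same name.
model = AutoModel.from_pretrained(model_id, revision=pinned_commit)

# Mitigation 2: snapshot the model once and serve it from storage
# you control, rather than re-fetching from the Hub at deploy time.
local_path = snapshot_download(model_id, revision=pinned_commit)
model = AutoModel.from_pretrained(local_path)
```

Pinning fails closed: a re-registered namespace will not contain the pinned commit, so the fetch errors out instead of pulling attacker-controlled weights.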

Related: Hackers Weaponize Trust with AI-Crafted Emails to Deploy ScreenConnect

Related: PromptLock: First AI-Powered Ransomware Emerges

Related: Beyond the Prompt: Building Trustworthy Agent Systems

Original Post URL: https://www.securityweek.com/ai-supply-chain-attack-method-demonstrated-against-google-microsoft-products/

Category & Tags: Artificial Intelligence, AI, Hugging Face, model namespace reuse
