ChatGPT Spreads Malicious Packages in AI Package Hallucination Attack – Source: securityboulevard.com

Source: securityboulevard.com – Author: Teri Robinson

A newly discovered ChatGPT-based attack technique, dubbed AI package hallucination, lets attackers publish malicious packages under names that the AI recommends but that have never actually been published. In this way, attackers can mount supply chain attacks by planting malicious libraries in well-known repositories.

The technique plays on the fact that generative AI platforms like ChatGPT hallucinate sources, links, blogs and stats when answering questions, and will also generate “questionable fixes to CVEs and offer links to coding libraries that don’t exist,” the Vulcan Cyber Voyager18 research team wrote in a blog post detailing a proof-of-concept (PoC).
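To make the failure mode concrete, here is a minimal defensive sketch (a hypothetical pre-install check, not part of Vulcan's PoC) that asks the public PyPI JSON API whether a recommended package name actually exists; a 404 response means the name is a likely hallucination, and also a name an attacker could register first:

    import json
    import sys
    import urllib.error
    import urllib.request

    PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

    def check_package(name: str) -> None:
        """Vet an AI-recommended package name before running 'pip install'."""
        try:
            with urllib.request.urlopen(PYPI_URL.format(name=name)) as resp:
                meta = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                print(f"'{name}' is not on PyPI: a likely hallucination.")
                return
            raise
        releases = meta.get("releases", {})
        print(f"'{name}' exists with {len(releases)} release(s).")
        if len(releases) <= 1:
            print("Warning: brand-new or single-release package; review before installing.")

    if __name__ == "__main__":
        check_package(sys.argv[1])

Existence alone is not proof of safety, of course: after the attack described here, a once-hallucinated name may exist precisely because an attacker published it, which is why the release-history warning matters.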

“In our research, we have discovered that attackers can easily use ChatGPT to help them spread malicious packages into developers’ environments,” researchers Bar Lanyado, Ortal Keizman and Yair Divinsky wrote. The researchers said they wanted to send an early warning “given the widespread, rapid proliferation of AI tech for essentially every business use case, the nature of software supply chains and the broad adoption of open source code libraries.”

“This is yet another example of the arms race that exists between threat actors and defenders,” said Bud Broomhead, CEO at Viakoo. “Ideally, security researchers and software publishers can also leverage generative AI to make software distribution more secure.”

In these “early innings” of generative AI being used for both cybersecurity offense and defense, Broomhead credited Vulcan researchers and other organizations for detecting new threats and alerting cybersecurity professionals and organizations in time for the large language models (LLMs) to be tuned and, hopefully, prevent this form of exploit. “Remember, it was only a few months ago I could ask ChatGPT to create a new piece of malware and it would; now it takes very specific and directed guidance for it to inadvertently create it—and, hopefully soon, even that approach will be prevented by the AI engines.”

Vulcan Cyber Voyager18 researchers explained that “if ChatGPT is fabricating code libraries (packages), attackers could use these hallucinations to spread malicious packages without using familiar techniques like typosquatting or masquerading,” techniques that are both suspicious and detectable. “But if an attacker can create a package to replace the ‘fake’ packages recommended by ChatGPT, they might be able to get a victim to download and use it.”

The impact is perhaps most obvious among developers who “had been searching for coding solutions online (for example, on Stack Overflow); many have now turned to ChatGPT for answers, creating a major opportunity for attackers,” the researchers said.

Craig Jones, vice president of security operations at Ontinue, said that “in the context of this attack technique, here’s how it could work:

    1. An attacker asks ChatGPT for coding help for common tasks.
    2. ChatGPT might provide a recommendation for a package that doesn’t exist or isn’t published yet (a “hallucination”).
    3. The attacker then creates a malicious version of this suggested package and publishes it.
    4. When other developers ask ChatGPT similar questions, ChatGPT might recommend the same (now existing but malicious) package to them.”
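A rough reconstruction of the first two steps in Python might look like the sketch below; ask_model is a stand-in for whatever LLM client is used, and the 'pip install' regex is an illustrative heuristic, neither taken from Vulcan's actual PoC:

    import re

    def ask_model(prompt: str) -> str:
        """Placeholder for a real LLM client call; swap in an actual SDK here."""
        raise NotImplementedError

    def find_hallucinated_packages(prompt: str, registry_index: set[str]) -> set[str]:
        """Return package names the model suggests that are absent from the
        registry index: the names an attacker could publish first (step 3)."""
        answer = ask_model(prompt)
        # Naive extraction: harvest names from 'pip install <name>' suggestions.
        suggested = set(re.findall(r"pip install ([A-Za-z0-9._-]+)", answer))
        return suggested - registry_index

Run across a large set of common coding questions, a loop like this would surface the names the model recommends most consistently, which are the most valuable ones for an attacker to squat and the ones defenders most need to monitor.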

Developers—and other potential victims—should take care and follow basic security hygiene rules. “You should never download and execute code you don’t understand and haven’t tested by just grabbing it from a random source—such as open source GitHub repos or ChatGPT’s recommendations,” said Melissa Bischoping, director of endpoint security research at Tanium. “Any code you intend to run should be evaluated for security, and you should have private copies of it. Do not import directly from public repositories such as those used in the example attack.”
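One way to put that advice into practice in a Python environment is to vendor reviewed copies of dependencies and verify them against checked-in digests before installation. The sketch below assumes a vendor/ directory and a manifest.json of known-good SHA-256 hashes; both names are illustrative:

    import hashlib
    import json
    from pathlib import Path

    def verify_vendored(vendor_dir: str = "vendor",
                        manifest: str = "vendor/manifest.json") -> bool:
        """Check every vendored artifact against a reviewed SHA-256 digest.

        The manifest maps file names to hex digests recorded at review time,
        e.g. {"requests-2.31.0-py3-none-any.whl": "<sha256 hex>"}.
        """
        expected = json.loads(Path(manifest).read_text())
        ok = True
        for name, digest in expected.items():
            actual = hashlib.sha256((Path(vendor_dir) / name).read_bytes()).hexdigest()
            if actual != digest:
                print(f"DIGEST MISMATCH: {name}")
                ok = False
        return ok

pip’s own hash-checking mode (pinned requirements installed with --require-hashes) enforces the same guarantee inside the installer itself.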

While the use of ChatGPT as a delivery mechanism is novel, Bischoping said the technique of compromising the supply chain through shared or imported third-party libraries is not. “Use of this strategy will continue, and the best defense is to employ secure coding practices and thoroughly test and review code—especially code developed by a third party—intended for use in production environments,” she explained. “Don’t blindly trust every library or package you find on the internet—or in a chat with an AI.”

Original Post URL: https://securityboulevard.com/2023/06/chatgpt-spreads-malicious-packages-in-ai-package-hallucination-attack/

Category & Tags: Analytics & Intelligence, Application Security, Cybersecurity, Featured, Incident Response, Malware, News, Security Boulevard (Original), Spotlight, Threat Intelligence, Threats & Breaches, Vulnerabilities, AI hallucination, ChatGPT, malicious packages, software supply chain
