
ChatGPT Hallucinations Can Be Exploited to Distribute Malicious Code Packages – Source: www.securityweek.com


Source: www.securityweek.com – Author: Eduard Kovacs

It’s possible for threat actors to manipulate artificial intelligence chatbots such as ChatGPT to help them distribute malicious code packages to software developers, according to vulnerability and risk management company Vulcan Cyber. 

The issue is related to hallucinations, which occur when AI, specifically a large language model (LLM) such as ChatGPT, generates factually incorrect or nonsensical information that may look plausible. 

In Vulcan’s analysis, the company’s researchers noticed that ChatGPT — possibly due to its use of older data for training — recommended code libraries that currently do not exist. 

The researchers warned that threat actors could collect the names of such non-existent packages and create malicious versions that developers could download based on ChatGPT’s recommendations.

Specifically, Vulcan researchers analyzed popular questions on the Stack Overflow coding platform and asked ChatGPT those questions in the context of Python and Node.js. 

ChatGPT was asked more than 400 questions and roughly 100 of its responses included references to at least one Python or Node.js package that does not actually exist. In total, ChatGPT’s responses mentioned more than 150 non-existent packages.
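For readers who want to reproduce this kind of check, the sketch below queries the public PyPI and npm registry endpoints to see whether a recommended package name actually exists. The endpoints are the registries' standard public APIs; the package names in the example are placeholders, not names from Vulcan's study.

```python
# Sketch: check whether package names mentioned in an LLM response actually
# exist on PyPI or in the npm registry. A 404 means the registry has never
# seen the name -- a candidate hallucination worth flagging.
import urllib.error
import urllib.request

REGISTRY_URLS = {
    "python": "https://pypi.org/pypi/{name}/json",
    "node": "https://registry.npmjs.org/{name}",
}

def package_exists(name: str, ecosystem: str) -> bool:
    """Return True if the registry knows the package, False on a 404."""
    url = REGISTRY_URLS[ecosystem].format(name=name)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limiting, network) need separate handling

# Example: flag suspicious recommendations for manual review
for pkg in ["requests", "some-hallucinated-helper"]:
    if not package_exists(pkg, "python"):
        print(f"'{pkg}' does not exist on PyPI -- possible hallucination")
```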

An attacker can collect the names of the packages recommended by ChatGPT and create malicious versions. Since the AI is likely to recommend the same packages to others asking similar questions, unsuspecting developers may look for and install the malicious version uploaded by the attacker to popular repositories. 

Vulcan Cyber demonstrated how this method would work in the wild by creating a package that can steal system information from a device and uploading it to the NPM Registry.  

“It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work, or uses additional techniques such as making a trojan package that is actually functional,” the company explained. 

“Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate. This is even more important with suggestions from tools like ChatGPT which may recommend packages that don’t actually exist, or didn’t before a threat actor created them,” it added.
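As a starting point for that vetting, the sketch below pulls a package's release history from the public PyPI JSON API and flags packages that either do not exist or were first published only very recently, since a brand-new package matching a previously hallucinated name is a warning sign. The 90-day threshold is an illustrative assumption, not guidance from Vulcan Cyber.

```python
# Sketch: basic vetting signals for a Python package before installing it,
# using the public PyPI JSON API. Thresholds are illustrative assumptions.
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

def first_published(name: str):
    """Return the earliest upload time across all releases, or None if the
    package does not exist on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # package has never been published
        raise
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return min(uploads) if uploads else None

def looks_suspicious(name: str, min_age_days: int = 90) -> bool:
    """Flag packages that do not exist, have no releases, or are very new."""
    published = first_published(name)
    if published is None:
        return True
    age = datetime.now(timezone.utc) - published
    return age.days < min_age_days

if looks_suspicious("example-package-from-a-chatbot"):
    print("New or missing package -- verify the source before installing")
```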

Related: Malicious Prompt Engineering With ChatGPT

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Vulnerability Could Have Been Exploited for ‘Unlimited’ Free Credit on OpenAI Accounts

Original Post URL: https://www.securityweek.com/chatgpt-hallucinations-can-be-exploited-to-distribute-malicious-code-packages/

Category & Tags: Artificial Intelligence,AI,ChatGPT,hallucination – Artificial Intelligence,AI,ChatGPT,hallucination
