
WormGPT, the generative AI tool to launch sophisticated BEC attacks – Source: securityaffairs.com


Source: securityaffairs.com – Author: Pierluigi Paganini

The WormGPT case: how generative artificial intelligence (AI) can improve the capabilities of cybercriminals and allow them to launch sophisticated attacks.

Researchers from SlashNext warn of the dangers related to a new generative AI cybercrime tool dubbed WormGPT. Since chatbots like ChatGPT made headlines, cybersecurity experts have warned that generative artificial intelligence (AI) could be exploited by cybercriminals to launch sophisticated attacks.

Generative AI is a type of machine learning that can produce text, video, images, and other types of content. It is a subset of artificial intelligence (AI) that focuses on creating new data rather than simply analyzing existing data.

WormGPT is advertised on underground forums as a perfect tool to carry out sophisticated phishing campaigns and business email compromise (BEC) attacks.

The benefits of using generative AI for BEC attacks are numerous: the generated messages feature impeccable grammar, and the entry threshold for creating BEC campaigns is lowered.

Threat actors can use WormGPT to automate the creation of highly convincing fake emails tailored to the recipient.


“Our team recently gained access to a tool known as “WormGPT” through a prominent online forum that’s often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.” reads the post published by SlashNext. “WormGPT is an AI module based on the GPTJ language model, which was developed in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities.”

Unlike ChatGPT, this generative AI-based tool allows crooks to carry out a broad range of illegal activities and doesn’t impose any limitations.

SlashNext experts also highlighted that on underground forums threat actors offer “jailbreaks” for interfaces like ChatGPT. These “jailbreaks” are specialized prompts designed to manipulate popular chatbot interfaces, bypassing the measures implemented to prevent them from disclosing sensitive information, producing inappropriate content, or executing harmful code.

According to a recent analysis published by Check Point that compared the anti-abuse restrictions implemented by ChatGPT and Google Bard, Bard’s restrictions are significantly weaker than those of ChatGPT. This means that threat actors can more easily use Bard to generate malicious content.

Below are the key findings of the report:

  1. Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT. Consequently, it is much easier to generate malicious content using Bard’s capabilities.
  2. Bard imposes almost no restrictions on the creation of phishing emails, leaving room for potential misuse and exploitation of this technology.
  3. With minimal manipulations, Bard can be utilized to develop malware keyloggers, which poses a security concern.
  4. Our experimentation revealed that it is possible to create basic ransomware using Bard’s capabilities.

The author of WormGPT states that the model was trained on a diverse array of data sources, with a particular focus on malware-related data.

SlashNext tested the tool and found the results unsettling: WormGPT produced an email that was remarkably persuasive and strategically cunning.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations. This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.” concludes the report.

The researchers provided the following recommendations to mitigate AI-driven BEC attacks:

  • BEC-Specific Training;
  • Enhanced Email Verification Measures (a minimal illustrative sketch follows below).
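To make the second recommendation concrete, below is a minimal sketch of how an inbound message could be screened for common BEC indicators before delivery. It is an assumption-laden illustration, not the method described in the SlashNext report: the `flag_bec_indicators` function name, the `BEC_KEYWORDS` list, the `internal_domain` parameter, and the sample addresses are all hypothetical. It uses only Python's standard `email` library.

```python
# A minimal sketch of BEC screening, assuming inbound mail is available as raw bytes.
# Names, keywords, and thresholds below are illustrative, not from the SlashNext report.
import email
from email import policy

# Hypothetical keyword list: terms frequently linked to BEC-style payment lures.
BEC_KEYWORDS = {"urgent", "wire transfer", "payment", "confidential", "gift card"}


def flag_bec_indicators(raw_message: bytes, internal_domain: str = "example.com") -> list[str]:
    """Return reasons why a message deserves extra verification before delivery."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    reasons = []

    # 1. Sender address outside the organization's domain.
    from_header = (msg.get("From") or "").lower()
    if internal_domain not in from_header:
        reasons.append("sender address is outside the internal domain")

    # 2. Reply-To differs from From, a common redirection trick in BEC.
    reply_to = (msg.get("Reply-To") or "").lower()
    if reply_to and reply_to != from_header:
        reasons.append("Reply-To header differs from From header")

    # 3. Body contains keywords commonly associated with payment-fraud requests.
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content().lower() if body else ""
    hits = [kw for kw in BEC_KEYWORDS if kw in text]
    if hits:
        reasons.append("body contains BEC-related keywords: " + ", ".join(hits))

    return reasons


# Example: a spoofed "urgent wire transfer" request from an external address.
sample = (b"From: CEO <ceo@ceo-example.net>\r\n"
          b"Reply-To: attacker@evil.example\r\n"
          b"Subject: Urgent request\r\n"
          b"Content-Type: text/plain\r\n\r\n"
          b"Please process this wire transfer today. It is urgent.\r\n")
print(flag_bec_indicators(sample, internal_domain="example.com"))
```

A message that returns one or more reasons would be routed for manual review or verified out of band (for example, by phoning the purported sender) rather than delivered automatically.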

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, WormGPT)




Original Post URL: https://securityaffairs.com/148504/cyber-crime/wormgpt-bec-attacks.html

Category & Tags: Breaking News,Cyber Crime,Hacking,Artificial Intelligence,BEC,ChatGPT,Cybercrime,generative AI,information security news,IT Information Security,phishing,Pierluigi Paganini,Security Affairs,Security News,WormGPT


