FraudGPT Follows WormGPT as Next Threat to Enterprises – Source: securityboulevard.com

Source: securityboulevard.com – Author: Jeffrey Burt

Less than two weeks after WormGPT hit the scene as threat actors’ alternative to the wildly popular ChatGPT generative AI chatbot, a similar tool called FraudGPT is making the rounds on the dark web. FraudGPT offers cybercriminals more effective ways to launch phishing attacks and create malicious code.

FraudGPT has been circulating on Telegram channels since July 22, Rakesh Krishnan, senior threat analyst with cybersecurity company Netenrich, wrote in a report today.

“This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.,” Krishnan wrote. “The tool is currently being sold on various dark web marketplaces and the Telegram platform.”

The tool is being offered on a subscription basis, with prices ranging from $200 per month to $1,700 per year. Much of Netenrich’s report focused on how bad actors can use FraudGPT for phishing-based business email compromise (BEC) campaigns against enterprises.

That includes enabling an attacker to create an email that is more likely to convince a targeted victim to click on a malicious link and helping the attacker pick their targets.

But that’s not all. FraudGPT also can make it easier to write malicious code, create undetectable malware, create hacking tools and identify leaks and vulnerabilities in organizations’ technologies, according to Krishnan. It also can teach wannabe bad actors to code and hack.

According to Netenrich, there have been more than 3,000 confirmed sales and reviews, and the operators behind FraudGPT are offering round-the-clock escrow capabilities.

Krishnan wrote that FraudGPT is similar to WormGPT, which launched July 13.

“These kinds of malicious alternatives to ChatGPT continue to attract criminals and less tech-savvy ne’er-do-wells to target victims for financial gain,” he wrote.

ChatGPT and Security

Such tools have concerned cybersecurity pros as innovation around AI technologies has accelerated over the past several years, and even more so with the rapid emergence of generative AI. ChatGPT, the chatbot developed by startup OpenAI and heavily promoted by Microsoft, took off almost immediately after it launched in November 2022, reaching 100 million monthly active users in January to become the fastest-growing consumer application of all time.

At least until Meta’s Threads social media network eclipsed that record in July, needing only about a week to get past 100 million users.

However, the introduction of ChatGPT brought a range of cybersecurity concerns. It’s easy to use and, as WormGPT and now FraudGPT have shown, relatively easy for threat actors to adapt. It also enables less-skilled attackers to launch campaigns more easily and can be used for more than generating text.

“Generative AI need not write only in human language; it can also write in code,” John Bambenek, principal threat hunter at Netenrich, told Security Boulevard. “Almost every sophisticated attack uses PowerShell somewhere in the chain of events. One could use generative AI to write better PowerShell tooling or write many PowerShell tools quickly. At its core, this is a generative AI tool without the ethical safeguards in ChatGPT.”

Immediate Threat of FraudGPT Is Unclear

That said, some security pros are questioning how effective FraudGPT, WormGPT or other AI-based threat tools are, at least for now. Melissa Bischoping, director of endpoint security research at Tanium, told Security Boulevard that the features FraudGPT offers aren’t much different from what attackers can already do with ChatGPT, minus some workarounds to get past its built-in safeguards.

“I would even challenge whether you’re doing them ‘better, faster,’ because we all know GPT-generated code is error-prone and there’s not yet a ton of conclusive, well-designed research on whether GPT-generated phishing lures are more effective than human-generated ones,” Bischoping said. “This seems like a lot of hot air to scam script kiddies out of cash and capitalize on the surge in interest around LLM [large-language model]-based attacker tools.”

Timothy Morris, chief security advisor at Tanium, agreed that it could be a scam and that enterprises should keep using proven security technologies, from threat hunting and strong security controls to multifactor authentication and user training.

“What does FraudGPT allow attackers to do that they couldn’t do before?” Morris said. “Unlike ChatGPT or any other LLM GPT, it allows would-be miscreants to use FraudGPT without guardrails. Meaning the abuse filters aren’t there, so almost anything is fair game since misuse isn’t being checked for.”

Pyry Avist, co-founder and CTO at security firm Hoxhunt, said “black hat GPT models” like FraudGPT are “bad news,” but that they’re essentially ChatGPT without the security and ethical restrictions.

“They’re more emblematic of a larger trend than proof of a new and darkly innovative form of malicious technology. It’s basically generative AI jailbreaking for dummies,” Avist told Security Boulevard. “You can’t just tell ChatGPT to create a convincing phishing email and credential harvesting template sent from your CEO. But you can pretend to be the CEO and easily draft an urgent email to the finance team demanding them to alter an invoice payment.”

Getting a Line on the Attacker Behind FraudGPT

According to Netenrich’s Krishnan, the threat actor behind FraudGPT created his Telegram channel on June 23 and claims to be a verified vendor on dark web marketplaces such as Empire, Torrez, AlphaBay and Versus. Exit scams are common in such marketplaces, so, Krishnan said, the FraudGPT operator moved to Telegram to offer his services without the threat of such scams.

Netenrich traced the hacker’s identity through a dark web forum, even finding their email address.

Malicious versions of ChatGPT can be a problem, but more attention should be paid to the automation of multi-step attacks, Hoxhunt’s Avist said.

“We’ve seen attacks using chatbots for successful BECs, where the malicious actors often must interact with the victim to obtain credentials or bypass MFA,” he said. “You can also leverage chatbots with deepfake technology to have a convincing conversation with a human voice and face. These models could do highly sophisticated attack campaigns at scale and make malware and BEC even more of a problem.”

Original Post URL: https://securityboulevard.com/2023/07/fraudgpt-follows-wormgpt-as-next-threat-to-enterprises/

Category & Tags: Cybersecurity,Featured,News,Security Boulevard (Original),Spotlight,Threats & Breaches,ChatGPT,FraudGPT,generative AI,identity – Cybersecurity,Featured,News,Security Boulevard (Original),Spotlight,Threats & Breaches,ChatGPT,FraudGPT,generative AI,identity
