
Inside the Rise of ‘Dark’ AI Tools – Scary, But Effective? – Source: www.govinfosecurity.com


Source: www.govinfosecurity.com – Author: Mathew J. Schwartz

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development

WormGPT, DarkGPT and Their Ilk Underdelivered – or Were Scams, Researchers Report

Mathew J. Schwartz (euroinfosec)

August 17, 2023

Advertisement for WormGPT (Image: SlashNext)

When it comes to “dark” generative artificial intelligence tools designed to help criminals more quickly and easily amass victims, let the buyer beware.


Numerous new tools this year have purported to provide an evil alternative to existing large language models such as OpenAI’s ChatGPT and Google’s Bard. These tools often claim to be customized for criminals’ particular malicious requirements – writing malware, hacking into remote networks and more – and promise to operate without any ethical safeguards.

As with so many things involving emerging technology, hype hasn’t lived up to reality.

The first tool to hit the market, WormGPT, debuted in June and was being sold on a dedicated Telegram channel by an individual using the handle “Last.” WormGPT was based on the GPT-J-6B LLM, first released in 2021, and subscriptions started at $90 per month.

The service quickly claimed to have hundreds of users, and email security vendor SlashNext reported it could craft a convincing-sounding phishing email. Beyond that, reviewers suggested the tool underdelivered.

In late July, a number of rival offerings debuted, including DarkGPT, FraudGPT, DarkBARD and DarkBERT, all of which appeared to be marketed by someone using the handle CanadianKingpin12, says a report from Margarita Del Val, a senior researcher with Outpost24’s threat intelligence division, Kraken Labs.

In an unexpected turn, CanadianKingpin12 apparently pulled the plug on all four services on Aug. 3. Around Aug. 9, WormGPT’s seller and core developer, Last, followed suit, saying there was too much public attention on his service.

WormGPT’s closure coincided with cybersecurity journalist Brian Krebs publishing an interview with the man allegedly behind the Last handle – Portugal-based Rafael Morais.

‘Wrapper Services’ or AI-Branded Scams?

While WormGPT appeared to be a real, customized LLM, the four rival services might have been either outright scams or “wrapper services” that queried legitimate services using stolen accounts, VPN connections and jailbreak prompts, Trend Micro researchers David Sancho and Vincenzo Ciancaglini said in a report.
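To make the “wrapper service” idea concrete, here is a minimal, purely illustrative sketch of the pattern the researchers describe: the supposed “custom LLM” is nothing more than a thin proxy that prepends a circulating jailbreak prompt to each buyer’s query and relays it to a legitimate hosted model. The endpoint, credentials and response format below are hypothetical placeholders, not any real vendor’s API.

    import json
    import urllib.request

    UPSTREAM_URL = "https://api.example-llm.com/v1/chat"   # hypothetical endpoint, not a real vendor API
    UPSTREAM_KEY = "bearer-token-from-a-stolen-account"    # per the report: stolen accounts, reached via VPN

    JAILBREAK_PREAMBLE = "<publicly circulating jailbreak prompt would go here>"

    def handle_customer_prompt(prompt: str) -> str:
        """Relay a buyer's prompt to the legitimate upstream model, wrapped in a jailbreak preamble."""
        payload = json.dumps({"prompt": f"{JAILBREAK_PREAMBLE}\n\n{prompt}"}).encode()
        request = urllib.request.Request(
            UPSTREAM_URL,
            data=payload,
            headers={
                "Authorization": f"Bearer {UPSTREAM_KEY}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["text"]  # assumes a {"text": ...} reply shape

Because all of the actual language modeling happens upstream, a seller running such a proxy would bear none of the training or hosting costs a genuine custom model requires – which is also why such offerings are hard to distinguish from outright scams.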

“Despite all the announcements, we could not find any concrete proof that these systems worked,” the report states. “Even for FraudGPT, the most well-known of the four LLMs, only promotional material or demo videos from the seller can be found in other forums.”

This shouldn’t be surprising, since building LLMs is an intensive endeavor. “As what WormGPT showed, even with a dedicated team of people, it would take months to develop just one customized language model,” Sancho and Ciancaglini said in the report. Once a product launched, service providers would need to fund not just ongoing refinements but also the cloud computing power required to support users’ queries.

Another challenge for would-be malicious chatbot developers is that widely available legitimate tools can already be put to illicit use. Underground forums abound with posts from users detailing fresh “jailbreaks” for the likes of ChatGPT – prompts crafted to evade providers’ restrictions, which are meant to prevent the tools from responding to queries about unethical or illegal topics.

In his WormGPT signoff earlier this month, Last made the same point, noting that his service was “nothing more than an unrestricted ChatGPT,” and that “anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT.”

“These restriction bypasses are a constant game of cat and mouse: as new updates are deployed to the LLM, jailbreaks are disabled,” Trend Micro’s Sancho and Ciancaglini said. “Meanwhile, the criminal community reacts and attempts to stay one step ahead by creating new ones.”

Royal’s Likely Use and Abuse of AI

More evidence that criminals don’t need evil-branded takes on existing tools to streamline their workflow comes in the form of a faux press release from the Royal ransomware group. The statement reads as if it had been produced by instructing ChatGPT to rebrand a Russian ransomware group as a reputable outfit, after the group stole data from a Boston-area school district and attempted to extort the victim.

On July 19, Royal posted to its data leak site a statement “for immediate release” claiming that “due to a miscommunication, some data was temporarily exposed, for a very short time,” concerning Braintree Public Schools.

The statement makes repeated reference to “Brian Tree Schools” and exhorts anyone who may have downloaded the leaked data to delete it, telling them: “Do not be a cheap Twitter vulture and delete anything you downloaded immediately.”

Royal’s statement concludes with language perhaps never before seen on a Russian-speaking cybercrime group’s data leak site: “As we make this decision we want to reaffirm our commitment to trust, respect and transparency which are the bedrock principles upon which Royal Data Services operates.”

The only thing seemingly missing from this twisted brand-management exercise is that hoary breach notification boilerplate claiming “the security of our customers’ data is our top concern.”

Royal’s message was “quite possibly an AI, as this helps them with language and translation,” said Yelisey Bohuslavskiy, chief research officer at threat intelligence firm RedSense. The timing of the group’s July promise to delete stolen data is notable because it arrived several weeks before the White House on Aug. 7 announced a slew of activities designed to bolster cyber resilience – not least against ransomware – among tens of thousands of K-12 school districts ahead of students’ return.

“They clearly monitor the political situation very well, as they made the statement before the summit,” Bohuslavskiy said of Royal.

Whether the group is rebranding as Royal Data Services, or whether that name is an AI hallucination the group simply decided to leave in place, remains unclear.

So far, this is the state of “evil AI” and crime: not remaking criminal operations as we know them, but perhaps supporting them in unexpected, oftentimes banal ways.

Original Post URL: https://www.govinfosecurity.com/blogs/inside-rise-dark-ai-tools-scary-but-effective-p-3496


