Source: securityboulevard.com – Author: Jeffrey Burt
The rapidly evolving “pig butchering” ecosystem is adding another – and unsurprising – tool to its malicious arsenal: generative AI.
Operators behind what cybersecurity firm Sophos dubs “CryptoRom” scams are now using AI chatbots like OpenAI’s ChatGPT or Google’s Bard to craft the messages they use to make initial approaches to potential victims on iPhones and Android phones.
Such a message sent to a target through the Tandem language exchange app had all the signs of being generated by a generative AI chatbot, Sophos researchers Jagadeesh Chandraiah and Sean Gallagher said in a report. They noted that in one part of the text, the message read: “Thank you very much for your kind words! As a language model of ‘me’ I don’t have feelings or emotions like humans do.”
“The combination of this edited block of text amongst otherwise grammatically awkward text was an artifact from a generative AI tool being used by the scammers,” Chandraiah and Gallagher wrote. “The text was likely copied and pasted into the conversation with the targeted user to be more grammatically correct, with a more extensive vocabulary and more in line with the recipient’s expectations, based on the location the scammer was pretending to be from (in this case, New York).”
Even though the language in the message raised the suspicions of the intended target, the use of generative AI is a new step for such scammers. The researchers noted that “use of a generative AI tool could not only make the conversations more convincing but also reduce the workload on scammers interacting with multiple victims.”
Adapting with the Times
Pig butchering scams are not new. They originated in China and were given the name “shā zhū pán” – translated as “butcher plate,” according to Sophos – because the aim is to take everything from the victim. In the case of CryptoRom scams, that means stealing all of their cryptocurrency.
Sophos began tracking the CryptoRom version of the pig butchering schemes in 2020. In these scams, the bad actors typically approach their targets through dating apps or other social media and feign an interest in starting a romantic relationship. In the most recent case, the approach came through Tandem, an app that is used to pair people with native speakers of a language they’re learning.
After connecting through the initial app, the scammers move the conversation to a private chat app, like WhatsApp or Telegram, and then bring up the topic of cryptocurrency, talking up a fake crypto trading app and offering to help the victim install it and move their crypto into it. After that, they tell the victim they must pay a tax or fee before accessing their nonexistent profits. They steal the crypto and ditch the victims.
These scams are run by large cybercrime organizations that tend to enlist groups of low-level “keyboarders” who interact with targets, an arrangement that creates problems a generative AI chatbot could help overcome.
“In order to be convincing as romantic interests, scammers often have to overcome issues with those keyboarders communicating with targets in a non-native language in order to present a believable persona,” Chandraiah and Gallagher wrote.
The ability to leverage generative AI tools would also help threat groups that are interacting with large numbers of potential victims.
Evolving and Adapting
The shift toward AI – which essentially every organization, whether legitimate or criminal, is making – is only the latest evolution in the CryptoRom business. The researchers said that, until last year, scammers were staying away from official platform app stores like Apple’s App Store and Google Play, opting instead for third-party stores.
However, they’ve found ways to sneak their CryptoRom apps past the approval steps in Apple’s and Google’s platforms.
“We have also seen scammers using new tactics to extract more money from victims even after they pay the ‘tax’— including the fake hacking of their accounts,” they wrote.
In a recent case, after the target paid a 20% tax in hopes of getting their money back, the scammers demanded another 20% deposit, alleging “insecure behavior” in the account and claiming the funds had been hacked and could not be retrieved.
“Undoubtedly, the scammers would have continued to come up with even more reasons for the victim to pay more, if they had continued to deposit funds,” Chandraiah and Gallagher wrote. “The only endgame for the scams is when victims become disillusioned or run out of resources they can tap in an attempt to get their funds back.”
Original Post URL: https://securityboulevard.com/2023/08/pig-butchering-scammers-now-using-ai-chatbots-to-lure-victims/
Category & Tags: Cybersecurity, Data Security, Featured, Mobile Security, News, Security Boulevard (Original), Social Engineering, Spotlight, cryptocurrency, generative AI, sophos