Source: go.theregister.com – Author: Iain Thomson
The FBI has warned that fraudsters are impersonating “senior US officials” using deepfakes as part of a major fraud campaign.
According to the agency, the campaign has been running since April and most of the messages target former and current US government officials. The attackers are after login details for official accounts, which they then use to compromise other government systems and try to harvest financial account information.
“The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts,” the warning reads.
“If you receive a message claiming to be from a senior US official, do not assume it is authentic.”
The deepfake voices and SMS messages encourage targets to move to a separate messaging platform. The FBI didn’t identify that platform or say which government officials have been deepfaked.
The agency advises that recipients of these messages call back using the official number of the relevant department, rather than the one provided. They should also listen out for verbal tics, or for words the purported caller would be unlikely to use, as those could indicate a deepfake in operation.
“AI-generated content has advanced to the point that it is often difficult to identify,” the FBI advised. “When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.”
The use of deepfakes has increased as the technology to create them improves and costs fall. In this case, the attackers appear to have used AI simply to generate a message using available voice samples, rather than using generative AI to fake real-time interactions.
Attackers have used this approach for over five years. The technology needed to run such attacks is so commonplace and cheap that it’s an easy attack vector. Deepfake videos have been around for a similar period, although they were initially much harder and more expensive to do convincingly.
Real-time text deepfaking is now relatively commonplace and has revolutionized scams to the point that conversations opening with a random message offering romance or a crypto investment probably see victims talking to a computer.
Interactive deepfakes that can impersonate humans in their own voices remain harder and more expensive to create. OpenAI last year claimed its Voice Engine could power a real-time deepfake chatbot, but the biz restricted access to it – presumably either because it’s not very good or because of the risks it poses.
Interactive video deepfakes may soon be technically possible, and a Hong Kong trader claimed they wired $25 million overseas after a deepfake fooled them into making the transfer. However, Chester Wisniewski, global field CISO of British security biz Sophos, told The Register this was most likely an excuse, and that the technology is probably beyond anyone without the kind of budget only a government or multinational business would possess.
“Right now, based on discussions I’ve had, it would probably take $30 million to do it, so maybe if you’re the NSA it’s possible,” he opined. “But if we’re following the same trajectory of audio then it’s a few years away before your wacky uncle will be making them as a joke.” ®
Original Post URL: https://go.theregister.com/feed/www.theregister.com/2025/05/16/fbi_deepfake_us_government_warning/