
Uncle Sam warns deepfakes are coming for your brand and bank account – Source: go.theregister.com


Source: go.theregister.com – Author: Team Register

Deepfakes are coming for your brand, bank accounts, and corporate IP, according to a warning from US law enforcement and cyber agencies.

In a report published on Tuesday, the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) warned that “synthetic media” poses a growing threat.

The Feds note this specifically includes the military, government employees, first responders, users of national security systems, defense industrial base firms, and critical infrastructure owners and operators.

“Synthetic media” is just what it sounds like — fake info and communications spanning text, video, audio, and images. 

As technology improves, it’s getting more difficult to tell the real deal from deepfake media that uses artificial intelligence and machine learning to produce highly realistic, believable messages and content. 

“The most substantial threats from the abuse of synthetic media include techniques that threaten an organization’s brand, impersonate leaders and financial officers, and use fraudulent communications to enable access to an organization’s networks, communications, and sensitive information,” Uncle Sam warned in the new Cybersecurity Information Sheet [PDF].

While the Feds say there’s only “limited indication” that state-sponsored criminals are using deepfakes, they caution that the increasing availability of free deep-learning tools makes it easier and cheaper to mass-produce fake media.

To this point, the government agencies cite the Eurasia Group’s list of top risks for 2023, which puts generative AI in the No. 3 spot. It’s a chilling read: “Resulting technological advances in artificial intelligence (AI) will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets.”

The US government’s concerns about synthetic media also include disinformation operations designed to sow false information about political, social, military, and economic issues, causing unrest and uncertainty.

We’ve seen examples of this already this year, both in America and abroad. 

In May, a fake image of an explosion near the Pentagon went viral after being shared by multiple verified Twitter accounts. In addition to causing general confusion, the AI-generated photo also prompted a short dip in the stock market.

A month later, several Russian TV channels and radio stations were hacked and aired a deepfake video of Russian President Vladimir Putin declaring martial law. Of course, phony images and social media posts are also favored by Putin’s goons.

Criminals are also increasingly using fake media in attempts to defraud organizations for financial gain, according to the alert. These typically deploy a combination of social engineering along with manipulated audio, video, or text to trick employees into transferring funds to attacker-controlled bank accounts.

Beware of CEOs asking for money

The FBI and friends cite two examples from May. In one, miscreants used synthetic visual and audio media to impersonate a company’s CEO on a WhatsApp call with a product line manager.

“The voice sounded like the CEO and the image and background used likely matched an existing image from several years before and the home background belonging to the CEO,” the deepfake threat report says.

In another example, also from May, criminals used a combo of fake audio, video and text messages to impersonate a company exec, first over WhatsApp and then moving to a Teams meeting that appeared to show the executive in their office. “The connection was very poor, so the actor recommended switching to text and proceeded to urge the target to wire them money,” the Feds wrote. “The target became very suspicious and terminated the communication at this point.”

The Cybersecurity Information Sheet also includes several recommendations for spotting deepfakes and avoiding falling victim to these schemes. Safeguards include using deepfake detection and real-time verification technologies, and taking preventative measures such as making a copy of media and hashing both the original and the copy, so the copy can later be verified against the original.
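That hashing step needs nothing exotic. As a minimal sketch (not the agencies' prescribed tooling), the snippet below uses Python's standard-library hashlib to compute SHA-256 digests of an original media file and an archived copy and compare them; the file names are hypothetical, for illustration only.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to handle large media."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for illustration only.
original = Path("ceo_statement_original.mp4")
archived_copy = Path("archive/ceo_statement_copy.mp4")

# Record the original's digest when the media is created, then later
# verify the archived copy against it.
original_hash = sha256_of_file(original)
copy_hash = sha256_of_file(archived_copy)

if original_hash == copy_hash:
    print("Copy matches the original:", original_hash)
else:
    print("Mismatch: the copy may have been altered or replaced.")
```

Storing the original's digest somewhere tamper-resistant (or signing it) is what makes the later comparison meaningful; a hash kept alongside the copy can be swapped out just as easily as the media itself.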

As always, verify the source and make sure that the message or media is coming from a reputable — and real — organization or person.

It’s also a good idea to have a plan in place to respond to and minimize potential damages caused by deepfakes. Create an incident response plan that details how security and other teams should respond to a variety of these techniques, and then run tabletop exercises to rehearse the plan. ®

Original Post URL: https://go.theregister.com/feed/www.theregister.com/2023/09/13/us_agencies_deepfake_threat/

Category & Tags: –

LinkedIn
Twitter
Facebook
WhatsApp
Email
