Source: www.securityweek.com – Author: Eduard Kovacs
Several US government agencies on Tuesday published a cybersecurity information sheet focusing on the threat posed by deepfakes and how organizations can identify and respond to them.
Deepfake is a term used to describe synthetic media, typically fake images and videos. Deepfakes have been around for years, but advancements in artificial intelligence (AI) and machine learning (ML) have made it easier and less costly to create highly realistic ones.
Deepfakes can be useful for propaganda and misinformation operations. For example, deepfakes of both Russia’s president, Vladimir Putin, and his Ukrainian counterpart, Volodymyr Zelensky, have emerged since the start of the war.
However, in their new report, the FBI, NSA and CISA warn that deepfakes can also pose a significant threat to organizations, including government, national security, defense, and critical infrastructure organizations.
“Organizations and their employees may be vulnerable to deepfake tradecraft and techniques which may include fake online accounts used in social engineering attempts, fraudulent text and voice messages used to avoid technical defenses, faked videos used to spread disinformation, and other techniques,” the agencies said. “Many organizations are attractive targets for advanced actors and criminals interested in executive impersonation, financial fraud, and illegitimate access to internal communications and operations.”
Malicious actors could, for instance, create video and audio content impersonating executives to manipulate a brand or influence stock prices.
Another example involves cybercriminals using deepfakes for social engineering. This can include business email compromise (BEC) attacks and cryptocurrency scams.
Deepfakes could also be leveraged to impersonate someone in an effort to gain access to a user account or valuable data, such as proprietary information, internal security details, or financial information.
To show that deepfake threats are not just theoretical, the agencies provided two real-world examples of attacks that occurred in May 2023. In one of the attacks, a malicious actor used synthetic audio and visual media techniques to impersonate a CEO and target the company’s product line manager.
In the second incident, profit-driven cybercriminals used a combination of audio, video and text message deepfakes to impersonate an executive and attempt to convince an employee to wire money to the attackers.
The report provides a summary of current efforts to detect deepfakes and authenticate media (for example, watermarks). The list includes initiatives from DARPA, DeepMedia, Microsoft, Intel, Google, and Adobe.
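To make the watermarking idea concrete, here is a deliberately simplified sketch that hides and recovers a short bit pattern in an image's least-significant bits. It assumes Pillow and NumPy are available, the file names are hypothetical, and it is not the approach used by the DARPA, DeepMedia, Microsoft, Intel, Google, or Adobe efforts mentioned in the report.

```python
# Illustrative least-significant-bit (LSB) watermark, assuming Pillow and NumPy.
# Toy scheme for explanation only; real media-authentication efforts use far
# more robust techniques than raw LSB embedding.
import numpy as np
from PIL import Image

def embed_pattern(in_path: str, out_path: str, pattern: bytes) -> None:
    """Hide `pattern` in the least-significant bits of the red channel."""
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(pattern, dtype=np.uint8))
    red = img[..., 0].flatten()
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    img[..., 0] = red.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # lossless, keeps the bits

def extract_pattern(path: str, length: int) -> bytes:
    """Recover `length` bytes previously embedded by embed_pattern()."""
    img = np.array(Image.open(path).convert("RGB"))
    bits = img[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Hypothetical usage:
# embed_pattern("original.png", "marked.png", b"ACME-2023")
# assert extract_pattern("marked.png", 9) == b"ACME-2023"
```

A naive LSB mark does not survive re-encoding or editing; building watermarks and detectors that do is precisely what the listed initiatives aim for.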
The agencies have made a series of recommendations for implementing technology to detect deepfakes and demonstrate media provenance. In addition, they urge organizations to protect the data of important individuals who may be targeted: deepfakes are more realistic when the attacker possesses the target's personal information and has large amounts of unwatermarked media content to feed into deepfake creation software.
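As a minimal sketch of what demonstrating media provenance can mean in practice, the example below publishes a keyed hash of a file when it is released and re-checks it on receipt. It uses only Python's standard library; the file names and key handling are hypothetical, and a production scheme would rely on digital signatures tied to the publisher's identity and signed metadata rather than a shared secret.

```python
# Minimal provenance check, assuming a shared HMAC key between publisher and
# verifier. Hypothetical sketch only; not the scheme described in the report.
import hashlib
import hmac

def publish_tag(path: str, key: bytes) -> str:
    """Hash the released media file and return a tag to publish alongside it."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_tag(path: str, key: bytes, published_tag: str) -> bool:
    """Recompute the tag for a received file and compare it to the published one."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    candidate = hmac.new(key, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, published_tag)

# Hypothetical usage:
# tag = publish_tag("executive_statement.mp4", key)  # published with the video
# ok = verify_tag("received_copy.mp4", key, tag)     # False if the file was altered
```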
Organizations are also advised to implement measures that can help minimize the impact of deepfakes. These include creating a response plan for cases where executives are targeted (and rehearsing it through tabletop exercises), sharing experiences with the US government, and training personnel to spot deepfakes.
Related: Pre-Deepfake Campaign Targets Putin Critics
Related: The Growing Threat of Deepfake Videos
Related: Defeating the Deepfake Danger
Related: Deepfakes – Significant or Hyped Threat?
Related: Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol
Original Post URL: https://www.securityweek.com/us-agencies-publish-cybersecurity-report-on-deepfake-threats/
Category & Tags: Fraud & Identity Theft, Management & Strategy, CISA, Deepfake, guidance