
Increasing Threat of Deepfake Identities by Homeland Security


Abstract

Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened.
Many applications of synthetic media represent innocent forms of entertainment, but others carry risk.
The threat of deepfakes and synthetic media comes not from the technology used to create them, but from people’s natural inclination to believe what they see. As a result, deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation.
Based on numerous interviews conducted with experts in the field, it is apparent that the severity and urgency of the current threat from synthetic media depend on the exposure, perspective, and position of whom you ask. The spectrum of concerns ranged from “an urgent threat” to “don’t panic, just be prepared.”
To help customers understand how a potential threat might arise, and what that threat might look like, we considered a number of scenarios specific to the arenas of commerce, society, and national security.
The likelihood of any one of these scenarios occurring and succeeding will undoubtedly increase as the cost and other resources needed to produce usable deepfakes decrease – just as synthetic media became easier to create when non-AI/ML techniques became more readily available.
In line with the multifaceted nature of the problem, there is no single, universal solution, though elements of technological innovation, education, and regulation must form part of any detection and mitigation measures.
Success will require significant cooperation among stakeholders in the private and public sectors to overcome current obstacles such as “stovepiping” and, ultimately, to protect ourselves from these emerging threats while preserving civil liberties.

Introduction
In late 2017, Motherboard reported on a video that had appeared on the Internet in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted.1 Despite being a fake, the video quality was good enough that a casual viewer might be convinced – or might not care.
An anonymous user of the social media platform Reddit, who referred to himself as “deepfakes,” claimed to be the creator of this video.2
The term “deepfakes” is derived from the fact that the technology involved in creating this particular style of manipulated content (or “fakes”) relies on deep learning techniques. Deep learning is a subset of machine learning, which is itself a subset of artificial intelligence. In machine learning, an algorithm uses training data to develop a model for a specific task; the more robust and complete the training data, the better the model becomes. In deep learning, a model is able to automatically discover representations of features in the data that permit classification or parsing of the data. Such models are effectively trained at a “deeper” level.3
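To make the training process concrete, the sketch below implements, in PyTorch, the shared-encoder/twin-decoder autoencoder arrangement commonly described for early face-swap deepfakes. Every specific here (the layer sizes, the 64x64 crops, the random stand-in batches) is an illustrative assumption rather than the pipeline of any particular tool; a real system would add face detection, alignment, and usually adversarial training.

    # A minimal PyTorch sketch of the classic face-swap deepfake setup:
    # one shared encoder learns features common to both identities, while
    # each identity gets its own decoder. Shapes, sizes, and the random
    # stand-in data are illustrative assumptions.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
                nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z)

    encoder = Encoder()    # shared across both identities
    decoder_a = Decoder()  # reconstructs faces of person A
    decoder_b = Decoder()  # reconstructs faces of person B

    params = (list(encoder.parameters()) + list(decoder_a.parameters())
              + list(decoder_b.parameters()))
    optimizer = torch.optim.Adam(params, lr=1e-4)
    loss_fn = nn.MSELoss()

    # Random tensors standing in for aligned 64x64 face crops of each person.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)

    for step in range(100):
        optimizer.zero_grad()
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        loss.backward()
        optimizer.step()

    # The "swap": encode person A's frame, decode with B's decoder, yielding
    # B's face wearing A's pose and expression.
    with torch.no_grad():
        swapped = decoder_b(encoder(faces_a))

The shared encoder is forced to learn identity-agnostic features such as pose, lighting, and expression, which is precisely what allows the opposite decoder to repaint those features with the other person’s face.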
The data which can be examined using deep learning is not restricted to images and videos of people. It can include images and videos of anything, as well as audio and text. In 2020, Dave Gershgorn, a reporter for OneZero, reported on the release of “new” music by famous artists on the OpenAI website.4 Using existing tracks from well-known artists, living and dead, programmers were able to create realistic tracks of new songs by Elvis, Frank Sinatra, and Jay-Z. Jay-Z’s company, Roc Nation LLC, sued YouTube to take the tracks down.5
AI-generated text is another type of deepfake that is a growing challenge. Whereas researchers have identified a number of weaknesses in image, video, and audio deepfakes as means of detecting them, deepfake text is not so easy to detect.6 It is not out of the question that a user’s texting style, which can often be informal, could be replicated using deepfake technology.
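To illustrate how low the barrier to generating such text has become, the snippet below uses an off-the-shelf language model (GPT-2, via the Hugging Face transformers library) to continue a casual, text-message-style prompt. The prompt, sampling settings, and model choice are illustrative assumptions; the same few lines work with far more capable models.

    # A minimal sketch of machine-generated text using an off-the-shelf
    # model. The informal prompt is an illustrative assumption meant to
    # mimic a casual texting style.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "hey, running late again, can you"
    outputs = generator(prompt, max_new_tokens=30, do_sample=True,
                        num_return_sequences=3)
    for out in outputs:
        print(out["generated_text"])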
All of these types of deepfake media – image, video, audio, and text – could be used to simulate or alter a specific individual or the representation of that individual. This is the primary threat of deepfakes. However, this threat is not restricted to deepfakes alone; it extends to the entire field of “synthetic media” and its use in disinformation.
More than just “deepfakes” – “Synthetic Media” and Disinformation
Deepfakes actually represent a subset of the general category of “synthetic media” or “synthetic content.” Many popular articles on the subject7,8 define synthetic media as any media which has been created or modified through the use of artificial intelligence/machine learning (AI/ML), especially if done in an automated fashion. From a practical standpoint, however, within the law enforcement and intelligence communities, synthetic media is generally defined to encompass all media which has either been created through digital or artificial means (think computer-generated people) or media which has been modified or otherwise manipulated through the use of technology, whether analog or digital. For example, physical audio tape can be manually cut and spliced to remove words or sentences and alter the overall meaning of a recording’s content. “Cheapfakes” are another version of synthetic media in which simple digital techniques are applied to content to alter the observer’s perception of an event. Cheapfake examples described elsewhere in this paper demonstrate speech being slowed and video being accelerated.
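Because cheapfakes require no AI at all, stock tooling suffices. The sketch below drives the widely used ffmpeg utility from Python to slow a speaker’s audio in one clip and accelerate the video in another; the file names and speed factors are placeholder assumptions.

    # A cheapfake needs no machine learning: two stock ffmpeg filters,
    # driven from Python, are enough. File names and factors are
    # placeholders for illustration.
    import subprocess

    # Slow the speech in a clip to 75% speed with the audio `atempo` filter
    # (the style of edit behind "slurred speech" cheapfakes).
    subprocess.run([
        "ffmpeg", "-y", "-i", "speech.mp4",
        "-filter:a", "atempo=0.75",
        "slowed_speech.mp4",
    ], check=True)

    # Double the video speed by halving each frame's presentation timestamp
    # (audio is dropped with -an to keep the example simple).
    subprocess.run([
        "ffmpeg", "-y", "-i", "event.mp4",
        "-filter:v", "setpts=0.5*PTS", "-an",
        "accelerated_event.mp4",
    ], check=True)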
Science and technology are constantly advancing. Deepfakes, along with automated content creation and modification techniques, merely represent the latest mechanisms developed to alter or create visual, audio, and text content. The key difference they represent, however, is the ease with which they can be made – and made well. In the past, casual viewers (or listeners) could easily detect fraudulent content. This may no longer always be the case and may allow any adversary interested in sowing misinformation or disinformation to leverage far more realistic image, video, audio, and text content in their campaigns than ever before.
How are deepfakes made and how might they be used?
Since the first deepfake in 2017, there have been many developments in deepfake and related synthetic media technologies. The timeline below lists some of the most well-known and representative examples of deepfakes, as well as some “cheapfakes” and one instance in which deepfakes were initially implicated but never proven to have been used. An addendum to this report, which provides summaries of these examples and links for further information, is also available.
