
Deepfakes being used in ‘sextortion’ scams, FBI warns – Source: go.theregister.com


Source: go.theregister.com – Author: Team Register

Miscreants are using AI to create faked images of a sexual nature, which they then employ in sextortion schemes.

Scams of this sort used to see crims steal intimate images – or convince victims to share them – before demanding payment to prevent their wide release.

But scammers are now accessing publicly available and benign images from social media sites or other sources and using AI techniques to render explicit videos or pictures, then demanding money – even though the material is not real.

The FBI this week issued an advisory about the threat, warning people to be cautious when posting or sending any images of themselves, or identifying information, over social media, dating apps, or other online sites.

The agency said there has been an “uptick” in reports since April of deepfakes being used in sextortion scams, with the images or videos shared online to harass victims alongside demands for money, gift cards, or other payments. The scammers may also demand that victims send real explicit content, according to federal investigators.

“Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else,” the FBI wrote, saying victims often learn about the images from the attackers or by finding them on the internet themselves. “Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet.”

Easier access to AI tools

Sextortion is a depressingly common tactic – as one El Reg hack discovered – but AI has added a dark twist to such schemes.

In the advisory, the FBI noted the rapid advancements in AI technologies and increased availability of tools that allow creation of deepfake material. For example, cloud giant Tencent recently announced a deepfake-as-a-service for just $145 a time.

Such ease of access and use will continue to be a challenge, according to Ricardo Amper, founder and CEO of identity verification and authentication firm Incode. Amper defended the use of such technology, while warning of its dangers.

“We’ve replaced photo editing tools with face swap apps and filters on the app store,” Amper told The Register. “Neural networks are being synthesized for anyone to access incredibly powerful deepfake software at a low entry point.”

Many deepfakes are “lighthearted,” Amper added, but also warned “we’ve democratized access to technology that requires a mere 30 seconds or a handful, rather than thousands, of images. Everyone’s identities, even those with a small digital footprint, are now at risk of impersonation and fraud.

“The capacity for abuse and disinformation compounds with deep learning’s effect on deepfake development as bad actors spread convincing lies or make compelling, defamatory impersonations.”

Deepening concerns about deepfakes

There have been ongoing worries about the effect of deepfakes on society for years. They caught national attention in 2017 with the release of a deceptively real-looking video of former President Barack Obama, fueling concerns about the accelerated dissemination of disinformation.

Such concerns surfaced again this week when a deepfake video aired on Russian TV purportedly depicted Russian President Vladimir Putin declaring martial law against the backdrop of the country’s ongoing illegal invasion of Ukraine.

Now deepfakes are being used to perpetrate sex crimes. Over the past six months the FBI has issued at least two alerts about the threat of sextortion scams – especially against children and teens.

Internet users were introduced to how deep-learning techniques and neural networks could create realistic videos from a person’s image in 2017, when the face of actress Gal Gadot was superimposed onto an existing adult video, according to a report [PDF] from the US Department of Homeland Security.

“Despite being a fake, the video quality was good enough that a casual viewer might be convinced – or might not care,” the report read.

What is to be done?

As with large language models and generative AI tools like ChatGPT, efforts are underway to develop technologies that can detect AI-generated text and images. Color amplification tools that visualize blood flow, or machine learning algorithms trained on spectral analysis, can help detect and vet such manipulated material, Incode’s Amper said.
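To make the spectral-analysis idea concrete, here is a minimal sketch of one common approach: generative models often leave periodic upsampling artifacts in an image’s frequency spectrum, so a simple classifier over the radial power spectrum can separate many fakes from real photos. Everything here is illustrative – the `radial_power_spectrum` helper, the logistic-regression choice, and the placeholder data are all assumptions, not any vendor’s actual pipeline.

```python
# Sketch: classify images as real/fake from their radial power spectrum.
# GAN upsampling often leaves periodic frequency-domain artifacts.
import numpy as np
from sklearn.linear_model import LogisticRegression

def radial_power_spectrum(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(f) ** 2)
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)          # radius from spectrum center
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)           # mean power per radius bin

# Placeholder data standing in for grayscale face crops: the "fake" set
# carries a faint periodic pattern mimicking upsampling artifacts.
rng = np.random.default_rng(0)
real = [rng.normal(size=(128, 128)) for _ in range(50)]
fake = [rng.normal(size=(128, 128)) + 0.5 * np.sin(np.arange(128) * 2)
        for _ in range(50)]

X = np.array([radial_power_spectrum(im) for im in real + fake])
y = np.array([0] * len(real) + [1] * len(fake))
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice such detectors are trained on large labeled datasets of real and generated faces; the toy data above only demonstrates the feature-extraction step.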

Intel claimed in April it had developed an AI model that could detect a deepfake in milliseconds by looking for subtle changes in color that real humans display but that are too fine-grained for AI to render.
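The blood-flow cue Intel and Amper describe can be illustrated with a toy remote-photoplethysmography (rPPG) measurement: in genuine video, the average skin color of a face pulses faintly at the heart rate, while synthesized faces tend to lack a coherent pulse. The sketch below shows the principle only – it is not Intel’s actual detector, and the `pulse_strength` function and synthetic clips are invented for the demo.

```python
# Toy rPPG check: measure how much of the face region's green-channel
# signal power falls inside the human heart-rate band (~0.7-4 Hz).
import numpy as np

def pulse_strength(frames: np.ndarray, fps: float) -> float:
    """Fraction of signal power in the heart-rate band.

    frames: (T, H, W, 3) RGB video of a face region, values in [0, 1].
    """
    green = frames[..., 1].mean(axis=(1, 2))        # mean green per frame
    green = green - green.mean()                    # remove DC component
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # roughly 42-240 bpm
    return spectrum[band].sum() / max(spectrum.sum(), 1e-12)

# Synthetic demo: a "real" clip with a faint 1.2 Hz pulse vs. noise-only "fake".
rng = np.random.default_rng(1)
t = np.arange(300) / 30.0                           # 10 seconds at 30 fps
pulse = 0.005 * np.sin(2 * np.pi * 1.2 * t)         # heartbeat-like signal
real = 0.5 + pulse[:, None, None, None] + 0.01 * rng.normal(size=(300, 8, 8, 3))
fake = 0.5 + 0.01 * rng.normal(size=(300, 8, 8, 3))
print("real clip pulse strength:", pulse_strength(real, fps=30))
print("fake clip pulse strength:", pulse_strength(fake, fps=30))
```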

Viakoo CEO Bud Broomhead told The Register that there is “a race between AI being used to create deepfakes and AI being used to detect them. Tools such as Deeptrace, Sensity, and Truepic can be used to automate detecting deepfakes, but their effectiveness will vary depending on whether new methods are being used to create deepfakes that these tools may not have seen before.”

That said, much of the responsibility will continue to fall on the shoulders of individuals. Beyond the recommendations noted above, the FBI also suggests people take such steps as running frequent online searches of themselves and their children, applying privacy settings on social media accounts, and using discretion when dealing with people online.

John Bambenek, principal threat hunter at cybersecurity firm Netenrich, believes AI innovation could advance to the point at which detecting deepfakes becomes impossible. Even with the tools available now, most of those targeted by deepfake sextortion schemes are in a difficult position.

“The primary victim isn’t high-profile,” Bambenek told The Register. “These [attacks] are most often used for creating synthetic revenge where the victims have no real ability to respond or protect themselves from the harassment generated.” ®

Original Post URL: https://go.theregister.com/feed/www.theregister.com/2023/06/08/ai_deepfakes_sextortion_fbi/
