AI Deepfakes Rising as Risk for APAC Organisations – Source: www.techrepublic.com

Source: www.techrepublic.com – Author: Ben Abbott

AI deepfakes were not on the risk radar of organisations just a short time ago, but in 2024, they are rising up the ranks. With AI deepfakes’ potential to cause anything from a share price tumble to a loss of brand trust through misinformation, they are likely to feature as a risk for some time.

Robert Huber, chief security officer and head of research at cyber security firm Tenable, argued in an interview with TechRepublic that AI deepfakes could be used by a range of malicious actors. While detection tools are still maturing, APAC enterprises can prepare by adding deepfakes to their risk assessments and better protecting their own content.

Ultimately, more protection for organisations is likely when international norms converge around AI. Huber called on larger tech platform players to step up with stronger and clearer identification of AI-generated content, rather than leaving this to non-expert individual users.

AI deepfakes are a rising risk for society and businesses

The risk of AI-generated misinformation and disinformation is emerging as a global risk. Following the launch of a wave of generative AI tools in 2023, the category as a whole ranked as the second largest risk in the World Economic Forum’s Global Risks Report 2024 (Figure A).

Figure A

AI misinformation has the potential to be “a material crisis on a global scale” in 2024, according to the Global Risks Report 2024. Image: World Economic Forum

Over half (53%) of respondents, who were from business, academia, government and civil society, named AI-generated misinformation and disinformation, which includes deepfakes, as a risk. Misinformation was also named the biggest risk factor over the next two years (Figure B).

Figure B

The risk of misinformation and disinformation is expected to be high in the short term and to remain in the top five over 10 years. Image: World Economic Forum

Enterprises have not been so quick to consider AI deepfake risk. Aon’s Global Risk Management Survey, for example, does not mention it, though organisations are concerned about business interruption or damage to their brand and reputation, which could be caused by AI.

Huber said the risk of AI deepfakes is still emergent and is morphing as AI changes at a rapid pace. However, he said it is a risk that APAC organisations should be factoring in. “This is not necessarily a cyber risk. It’s an enterprise risk,” he said.

AI deepfakes provide a new tool for almost any threat actor

AI deepfakes are expected to be another option for any adversary or threat actor to use to achieve their aims. Huber said this could include nation states with geopolitical aims and activist groups with idealistic agendas, with motivations including financial gain and influence.

“You will be running the full gamut here, from nation state groups to a group that’s environmentally aware to hackers who just want to monetise deepfakes. I think it is another tool in the toolbox for any malicious actor,” Huber explained.

SEE: How generative AI could increase the global threat from ransomware

The low cost of deepfakes means low barriers to entry for malicious actors

The ease of use of AI tools and the low cost of producing AI material mean there is little standing in the way of malicious actors wishing to make use of new tools. Huber said one difference from the past is the level of quality now at the fingertips of threat actors.

“A few years ago, the [cost] barrier to entry was low, but the quality was also poor,” Huber said. “Now the bar is still low, but [with generative AI] the quality is greatly improved. So for most people to identify a deepfake on their own with no additional cues, it is getting difficult to do.”

What are the risks to organisations from AI deepfakes?

The risks of AI deepfakes are “so emergent,” Huber said, that they are not on APAC organisational risk assessment agendas. However, referencing the recent state-sponsored cyber attack on Microsoft, which Microsoft itself reported, he invited people to ask: What if it were a deepfake?

“Whether it would be misinformation or influence, Microsoft is bidding for large contracts for their enterprise with different governments and regions around the world. That would speak to the trustworthiness of an enterprise like Microsoft, or apply that to any large tech organisation.”

Loss of enterprise contracts

For-profit enterprises of any type could be impacted by AI deepfake material. For example, deepfake-driven misinformation could raise questions about an organisation or cost it contracts around the world, or provoke social reactions against it that damage its prospects.

Physical security risks

AI deepfakes could add a new dimension to the key risk of business disruption. For instance, AI-sourced misinformation could incite a riot, or even the perception of one, creating real danger to people and operations, or merely the perception of danger.

Brand and reputation impacts

Forrester released a list of potential deepfake scams. These include risks to an organisation’s reputation and brand, as well as to employee experience and HR. One risk was amplification, where AI deepfakes are used to spread other AI deepfakes, reaching a broader audience.

Financial impacts

Financial risks include the ability to use AI deepfakes to manipulate stock prices and the risk of financial fraud. Recently, a finance employee at a multinational firm in Hong Kong was tricked into paying criminals US $25 million (AUD $40 million) after they used a sophisticated AI deepfake scam to pose as the firm’s chief financial officer in a video conference call.

Individual judgment is no deepfake solution for organisations

The big problem for APAC organisations is that AI deepfake detection is difficult for everyone. While regulators and technology platforms adjust to the growth of AI, much of the responsibility for identifying deepfakes is falling to individual users rather than intermediaries.

This could see the beliefs of individuals and crowds impact organisations. Individuals are being asked to decide in real-time whether a damaging story about a brand or employee may be true, or deepfaked, in an environment that could include media and social media misinformation.

Individual users are not equipped to sort fact from fiction

Huber said expecting individuals to discern what is an AI-generated deepfake and what is not is “problematic.” At present, AI deepfakes can be difficult to discern even for tech professionals, he argued, and individuals with little experience identifying AI deepfakes will struggle.

“It’s like saying, ‘We’re going to train everybody to understand cyber security.’ Now, the ACSC (Australian Cyber Security Centre) puts out a lot of great guidance for cyber security, but who really reads that beyond the people who are actually in the cybersecurity space?” he asked.

Bias is also a factor. “If you’re viewing material important to you, you bring bias with you; you’re less likely to focus on the nuances of movements or gestures, or whether the image is 3D. You are not using those spidey senses and looking for anomalies if it’s content you’re interested in.”

Tools for detecting AI deepfakes are playing catch-up

Tech companies are moving to provide tools to meet the rise in AI deepfakes. For example, Intel’s real-time FakeCatcher tool is designed to identify deepfakes by using video pixels to assess the blood flow of the human beings on screen, identifying fakes via “what makes us human.”
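
FakeCatcher’s actual implementation is proprietary, but the underlying idea it draws on, remote photoplethysmography (rPPG), can be illustrated in a few lines. The sketch below is a toy, not Intel’s method: it assumes a pre-cropped face region supplied as a NumPy array and simply checks whether the green-channel signal shows a dominant frequency in the normal human heart-rate band.

```python
# Illustrative sketch of the rPPG idea only -- NOT FakeCatcher's code.
# A live human face shows faint, periodic colour changes driven by
# blood flow; a deepfaked face often lacks this coherent pulse signal.
import numpy as np

def rppg_heartbeat_score(frames: np.ndarray, fps: float) -> float:
    """Fraction of spectral power in the heart-rate band (~0.7-4 Hz).

    `frames` is assumed to be a (T, H, W, 3) uint8 array of a
    cropped face region sampled at `fps` frames per second.
    """
    # Mean green-channel intensity per frame; green carries the
    # strongest pulse signal in the rPPG literature.
    signal = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
    signal = signal - signal.mean()  # remove the DC offset

    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)  # roughly 42-240 bpm
    total = spectrum[1:].sum()              # ignore the zero-frequency bin
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```

A clip scoring low on such a measure might be flagged for closer review; production systems combine many cues (face tracking, multiple colour spaces, learned classifiers) and are far more robust than this single-signal toy.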

Huber said the capabilities of tools to detect and identify AI deepfakes are still emerging. After canvassing some tools available on the market, he said that there was nothing he would recommend in particular at the moment because “the space is moving too fast.”

What will help organisations fight AI deepfake risks?

The rise of AI deepfakes is likely to lead to a “cat and mouse” game between malicious actors generating deepfakes and those trying to detect and thwart them, Huber said. For this reason, the tools and capabilities that aid the detection of AI deepfakes are likely to change fast, as the “arms race” creates a war for reality.

There are some defences organisations may have at their disposal.

The formation of international AI regulatory norms

Australia is one jurisdiction looking at regulating AI content through measures like watermarking. As other jurisdictions around the world move towards consensus on governing AI, there is likely to be convergence about best practice approaches to support better identification of AI content.
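
As a hedged illustration of what watermarking-adjacent content provenance could look like in code, the sketch below signs a content hash plus metadata so a third party can later verify origin and integrity. It is a simplified stand-in, not Australia’s proposed scheme or a standard like C2PA, and it uses a shared HMAC key for brevity where real schemes use public-key signatures and watermarks robust to re-encoding.

```python
# Minimal provenance sketch (hypothetical, not any specific standard):
# the publisher binds a content hash and metadata to a keyed tag that
# verifiers holding the key can check later.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key material

def sign_content(content: bytes, metadata: dict) -> dict:
    manifest = {"sha256": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "tag": tag}

def verify_content(content: bytes, record: dict) -> bool:
    # Reject if the content no longer matches the signed hash.
    if hashlib.sha256(content).hexdigest() != record["manifest"]["sha256"]:
        return False
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = sign_content(b"official video bytes",
                      {"publisher": "ExampleCorp", "ai_generated": False})
assert verify_content(b"official video bytes", record)
assert not verify_content(b"tampered video bytes", record)
```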

Huber said that while this is very important, there are classes of actors that will not follow international norms. “There has to be an implicit understanding there will still be people who are going to do this regardless of what regulations we put in place or how we try to minimise it.”

SEE: A summary of the EU’s new rules governing artificial intelligence

Large tech platforms identifying AI deepfakes

A key step would be for large social media and tech platforms like Meta and Google to better fight AI deepfake content and more clearly identify it for users on their platforms. Taking on more of this responsibility would mean that non-expert end users like organisations, employees and the public have less work to do in trying to identify if something is a deepfake themselves.

Huber said this would also assist IT teams. If large technology platforms identified AI deepfakes on the front foot and armed users with more information or tools, the task would shift away from organisations, reducing the IT investment required to pay for and manage deepfake detection tools and the security resources allocated to managing them.

Adding AI deepfakes to risk assessments

APAC organisations may soon need to make the risks associated with AI deepfakes part of regular risk assessment procedures. For example, Huber said organisations may need to be much more proactive about controlling and protecting the content they produce, both internally and externally, as well as documenting these measures for third parties.

“Most mature security companies do third party risk assessments of vendors. I’ve never seen any class of questions related to how they are protecting their digital content,” he said. Huber expects that third-party risk assessments conducted by technology companies may soon need to include questions relating to the minimisation of risks arising out of deepfakes.
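
As a purely hypothetical illustration of how such questions might slot into an existing third-party questionnaire, the sketch below encodes a handful of deepfake-related controls with arbitrary weights and computes a coverage score. The questions and weights are invented for the example, not drawn from Tenable or any published framework.

```python
# Hypothetical deepfake-control questions for a vendor questionnaire.
# Questions and weights are illustrative, not from any real framework.
DEEPFAKE_QUESTIONS = [
    ("Do you watermark or sign official audio/video content?", 3),
    ("Do you monitor for impersonation of your brand or executives?", 2),
    ("Do you verify payment requests made over video calls out-of-band?", 3),
    ("Does your incident-response plan cover synthetic media?", 2),
]

def coverage(answers: dict[str, bool]) -> float:
    """Weighted share of deepfake controls the vendor has in place."""
    total = sum(weight for _, weight in DEEPFAKE_QUESTIONS)
    met = sum(w for q, w in DEEPFAKE_QUESTIONS if answers.get(q, False))
    return met / total

# Example: a vendor meeting only the first two controls scores 50%.
answers = {q: True for q, _ in DEEPFAKE_QUESTIONS[:2]}
print(f"Deepfake-control coverage: {coverage(answers):.0%}")
```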

Original Post URL: https://www.techrepublic.com/article/ai-deepfake-risks-enterprises-apac/

Category & Tags: Artificial Intelligence, Australia, CXO, International, Security, ai, ai deepfakes, ai risks, apac, artificial intelligence, cyber security, risk management, tenable
