Source: www.infosecurity-magazine.com
Malicious actors are using AI tools to fine-tune cyber-attacks, even as governments race to encourage AI investment.
National programs to bolster AI expertise and R&D should be seen in the context of the growing use of AI tools by criminal hackers, advised Brett Taylor, UK sales engineering director at SentinelOne, in his talk at Infosecurity Europe 2025.
Just as enterprises and public-sector bodies are looking to AI to improve productivity and drive economic growth, so criminal groups are using AI-based tools to develop malware and find vulnerabilities. Additionally, hackers are actively looking for any weak spots in AI deployments.
According to Taylor, the UK is “up there” with the world leaders when it comes to AI investments.
Although the US and, increasingly, China dominate the AI market, the UK has committed £14bn of investment to AI. The technology is at the heart of the UK’s industrial strategy and is expected to create over 13,000 jobs.
However, this is overshadowed by US plans to spend more than $500bn over four years on the Stargate AI initiative.
Although the scale of AI investment in China is less well documented, DeepSeek, the Chinese open-source AI model, recently topped the US charts as the most downloaded free app.
Threat Actors Investing in AI
However, Taylor warns, threat actors are also investing in AI tools.
“Threat actors are investing and innovating too,” he said.
“They see the opportunity to have their own market.”
Tools such as WormGPT, EvilGPT and FraudGPT are making it easier than ever to create malware or carry out online crime. Criminal groups are even using generative AI tools to improve the effectiveness of their attacks.
“AI is transforming security,” Taylor told Infosecurity, following his talk.
“AI is being used to pre-prove attacks before they are launched. Attacks have been through a GAN [generative adversarial network]. They will only launch the ones that have gone through the GAN and look to have the highest probability of success.”
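To make the “pre-proving” idea concrete, here is a minimal Python sketch of that selection step. The discriminator_score function, the threshold and the example lures are all hypothetical stand-ins for a trained GAN discriminator, not details from the talk.

```python
# Hypothetical sketch of the selection step Taylor describes: candidate
# attacks are scored and only the highest-probability ones are "launched".
# discriminator_score() stands in for a trained GAN discriminator.

def discriminator_score(payload: str) -> float:
    """Placeholder scorer; a real attacker would query a trained model."""
    return min(1.0, len(set(payload.lower())) / 30)  # toy heuristic only

def preprove(candidates: list[str], threshold: float = 0.6) -> list[str]:
    """Keep only candidates the model rates at or above the threshold."""
    return [c for c in candidates if discriminator_score(c) >= threshold]

lures = [
    "Your invoice is attached",
    "Urgent: verify your payroll details before 5pm via the staff portal",
]
print(preprove(lures))  # only the higher-scoring lure survives
```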
Attack groups are also using AI to “amplify the traditional attack vectors,” such as phishing and password breaches.
“If you look at password breaches, in the old world it would be firing a brute force attack at you,” Taylor said.
“Now they research your interests and see if there are any leaked passwords for you, and put that into a model that understands how you structure your passwords.”
This reduces the number of attempts needed to successfully breach an account to just a few hundred.
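A rough Python sketch of that targeted approach follows; the pattern mutations, the leaked password and the interests are illustrative assumptions, not specifics Taylor gave.

```python
# Illustrative sketch, not detail from the talk: mutate a known leaked
# password with common structural patterns and researched interests to
# build a few hundred targeted guesses instead of brute-forcing.

from itertools import product

def candidates(leaked: str, interests: list[str], years: list[str]) -> list[str]:
    """Combine base words with common suffix patterns."""
    bases = {leaked, leaked.capitalize(), *interests}
    suffixes = ["", "!", "123", *years]
    return sorted({base + suffix for base, suffix in product(bases, suffixes)})

guesses = candidates("sunshine07", ["Arsenal", "fishing"], ["2024", "2025"])
print(len(guesses), guesses[:3])  # a small, targeted list, not millions
```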
Threat actors are also turning to dedicated, malicious AI tools as mainstream AI vendors step up their protections against misuse. Attackers are not concerned about responsible use of AI.
“Adversaries are less constrained,” said Taylor.
“Their motivation is to gain profit, intelligence or intellectual property.”
Step Change for Defense
As a result, there needs to be a “step change” in how organizations and governments defend against this new generation of attacks.
Fortunately, AI is being deployed by defenders too. CISOs are turning to automation to counter a growing number of threats, as well as to make up for skills shortages.
“Generative AI allows you to democratize access to security analysis,” Taylor said.
Using natural language interfaces will make it easier to work with sophisticated security tools, and agentic AI tools can carry out some investigation and threat hunting tasks.
“You can ask if Scattered Spider is in your network, and it will look for indicators of compromise,” Taylor added.
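Behind such a natural-language query, the agent’s work reduces to something like the following hypothetical Python sketch; the indicator feed, log format and hunt function are assumptions for illustration, not SentinelOne’s implementation.

```python
# Minimal sketch of the lookup an agentic tool might run behind a query
# like "Is Scattered Spider in our network?". The indicator feed and log
# records below are hypothetical placeholders, not real intelligence.

IOC_FEED = {
    "Scattered Spider": {"203.0.113.10", "badhost.example.com"},
}

def hunt(actor: str, logs: list[dict]) -> list[dict]:
    """Return log entries whose destination matches a known indicator."""
    iocs = IOC_FEED.get(actor, set())
    return [entry for entry in logs if entry.get("dest") in iocs]

logs = [
    {"host": "ws-042", "dest": "203.0.113.10"},
    {"host": "ws-007", "dest": "update.example.org"},
]
print(hunt("Scattered Spider", logs))  # flags ws-042 for investigation
```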
This is part of a move towards greater automation in security defenses.
Read more from #Infosec2025: Concern Grows Over Agentic AI Security Risks
Security operations centers (SOCs) are increasingly automated, with automation seen as the only way to counter the growing volume and speed of attacks. Over time, vendors such as SentinelOne believe, a fully autonomous SOC is possible.
“Analysts have said the autonomous SOC is a pipe dream,” said Taylor.
“But in the new generative AI world, that is no longer true.”
The SOC processes most likely to benefit from AI and automation include monitoring, evidence collection, investigations, triage of incidents, response and remediation, and reporting, according to SentinelOne.
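As a rough illustration of one of those steps, the hypothetical Python sketch below shows automated triage: scoring an alert and routing it either to auto-closure or to a human analyst. All fields and thresholds are assumed, not drawn from SentinelOne’s products.

```python
# Hypothetical sketch of one automatable SOC step named above: score an
# alert, auto-close the low-risk ones and escalate the rest to a human.
# The fields and thresholds are illustrative assumptions.

def triage(alert: dict) -> str:
    """Route an alert based on a simple risk score."""
    score = 3 if alert.get("severity") == "high" else 1
    score += 2 if alert.get("asset_critical") else 0
    return "escalate" if score >= 4 else "auto-close"

alerts = [
    {"id": 1, "severity": "high", "asset_critical": True},   # escalate
    {"id": 2, "severity": "low", "asset_critical": False},   # auto-close
]
for alert in alerts:
    print(alert["id"], triage(alert))
```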
“An autonomous SOC allows scale and precision that human-driven SOCs struggle to maintain,” Taylor explained.
Human-AI partnering, Taylor suggested, will provide more intelligent decision-making while preventing human SOC analysts from becoming overwhelmed and burning out. Currently, analysts risk becoming “snow blind and unable to respond to threats effectively.”
Human analysts are set to move to a more supervisory role, with AI and automation handling the immediate response to an incident and potentially even remediation. Threat actors are well funded and innovative, can choose where and when to attack, and which “weaponized payloads” to use.
“We need to step up how we defend against that rising tide of attacks,” Taylor said.
Original Post URL: https://www.infosecurity-magazine.com/news/infosec2025-arms-race-ai/