Source: www.securityweek.com – Author: Ryan Naraine
The cyber threat intelligence business has struggled to become a major market category, hampered by stale data, limited information sharing, and the high costs of traditional detection and response tools.
But artificial intelligence (AI) may be poised to change that. Tech giants like Microsoft, Google, and OpenAI are quietly transforming into early warning systems, using AI to track malicious actors — sometimes down to the individual level — before they launch malware campaigns.
By monitoring attempts to abuse their platforms, these companies are uncovering fresh, actionable intelligence in real time, offering a glimpse of how AI-driven platforms could finally deliver the timely, cost-effective threat detection the cybersecurity industry has been chasing for years.
Just recently, Google Threat Intelligence Group (GTIG) shared data on how it caught nation-state hackers linked to Iran, China, North Korea, and Russia attempting to misuse its Gemini gen-AI tool for activities ranging from reconnaissance on U.S. defense networks to drafting malicious scripts aimed at bypassing corporate security measures.
According to Google, Iranian government-backed hackers were among the heaviest users of Gemini, probing vulnerabilities and exploring phishing techniques designed to compromise government and defense entities. Chinese groups, including multiple PRC-backed APTs, similarly leveraged the AI model for scripting tasks, Active Directory maneuvers, and stealthy lateral movement within target networks. North Korean operatives used it to explore free hosting providers, craft malicious code, and draft cover letters and job proposals to embed clandestine IT workers inside Western companies.
By watching Gemini’s queries, Google says, it can anticipate an attacker’s next steps, an advantage that effectively turns the platform into an early-warning system for cyber campaigns. It also puts AI providers in an unfamiliar role: policing who gets to use their technology and for what ends, with the legal and ethical questions still unsettled.
Like Google, software giant Microsoft is trumpeting its ability to capture evidence of foreign hacking teams using OpenAI’s ChatGPT to automate vulnerability research, target reconnaissance, and malware creation tasks.
In one case, Redmond’s threat hunters saw the Russian APT known as Forest Blizzard (APT28/Fancy Bear) using LLMs to research satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as to conduct generic research in support of its cyber operations.
In another case, Microsoft said it caught the notorious North Korean APT Emerald Sleet (aka Kimsuky) using LLMs to generate content likely destined for spear-phishing campaigns. The Pyongyang hackers were also caught using LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and get help with various web technologies.
OpenAI, too, has publicly shared stories of catching Iranian APTs using ChatGPT to plan ICS attacks, and says it has disrupted more than 20 cyber operations and covert nation-state influence campaigns.
With these early success stories, there’s a general feeling that ‘Big AI’ might be the game-changer for threat intelligence. The logic is simple: if professional hackers are running a new phishing scheme through ChatGPT or Google’s Gemini, the provider can flag those queries in real time, disrupt the activity, and help set traps for the coming malware campaign.
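To make the mechanics concrete, here is a minimal sketch of what real-time query flagging could look like. Everything in it, the indicator patterns, the account ID, the threshold, is a hypothetical illustration; actual providers rely on trained abuse classifiers and account-level signals rather than keyword lists.

```python
import re
from dataclasses import dataclass

# Hypothetical abuse indicators; real providers use trained classifiers
# and account-level signals, not simple keyword matching.
SUSPICIOUS_PATTERNS = [
    r"bypass (edr|antivirus|defender)",
    r"phishing (template|pretext|email)",
    r"powershell .*(obfuscat|encode)",
    r"lateral movement",
]

@dataclass
class PromptEvent:
    account_id: str
    text: str

def flag_prompt(event: PromptEvent, threshold: int = 1) -> bool:
    """Return True when a prompt matches enough indicators to raise an alert."""
    hits = sum(bool(re.search(p, event.text, re.IGNORECASE))
               for p in SUSPICIOUS_PATTERNS)
    return hits >= threshold

# A query probing for a security-bypass script is flagged at submission
# time, before any campaign is launched.
event = PromptEvent("acct-42", "Write a PowerShell script to bypass Defender")
if flag_prompt(event):
    print(f"ALERT: {event.account_id} matched known abuse indicators")
```

Even this toy version illustrates the structural advantage: the signal arrives at query time, before any payload reaches a victim.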
There’s also an element of real-time espionage at play: AI platforms learn how multiple campaigns connect, which malicious tools get repeated, and how often threat actors pivot to new malicious infrastructure and domains. That kind of cross-campaign insight is gold for defenders, especially when the data is available in real time.
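The cross-campaign piece can be sketched just as simply. Assuming each flagged session records the infrastructure indicators seen in its queries (the session data below is invented for illustration), shared indicators tie otherwise separate accounts into a single campaign cluster:

```python
from collections import defaultdict

# Invented session data: each flagged session carries the infrastructure
# indicators (domains, tooling) observed in that account's queries.
flagged_sessions = [
    {"account": "acct-1", "indicators": {"evil-cdn[.]net", "cobalt-strike"}},
    {"account": "acct-2", "indicators": {"evil-cdn[.]net", "mimikatz"}},
    {"account": "acct-3", "indicators": {"fresh-host[.]io"}},
]

# Invert the mapping: accounts touching the same indicator are likely
# pieces of the same campaign.
indicator_to_accounts = defaultdict(set)
for session in flagged_sessions:
    for indicator in session["indicators"]:
        indicator_to_accounts[indicator].add(session["account"])

for indicator, accounts in indicator_to_accounts.items():
    if len(accounts) > 1:
        print(f"Campaign link via {indicator}: {sorted(accounts)}")
```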
Of course, adversaries won’t line up to feed their best secrets to OpenAI, Microsoft, or Google AI platforms. Some hacker groups prefer open-source models, hosting them on private servers where there’s zero chance of being monitored. As these open-source models gain sophistication, criminals can test or refine their attacks without Big Tech breathing down their necks. Still, the lure of advanced hosted models with powerful capabilities will be hard to resist.
Even as security experts remain bullish on the power of AI to save threat intel, there are adversarial concerns at play. Some warn that attackers can poison AI systems, manipulate data to produce false negatives, or exploit generative models to write their own malicious scripts.
But as it stands, the big AI platforms already see more malicious signals in a day than any single cybersecurity vendor sees in a year. That scale is exactly what’s been missing from threat intelligence. For all the talk about “community sharing” and open exchanges, the practice has always been a tangled mess. But if these AI powerhouses act as near-instant radar, funneling actionable intel to defenders, we might actually see a leap forward, with attacks intercepted early and at a fraction of the cost defenders are used to spending on legacy detection tools.
It’s never wise to suggest a single technology can save an entire market category, but if anything has the potential to jumpstart threat intel, it’s this AI-driven early warning approach. The real question: will industry and governments support it, and will the threat actors simply adapt faster?
Many are watching closely to see if AI can finally deliver on a goal that’s eluded the industry for far too long.
Related: Mastercard to Acquire Threat Intelligence Firm Recorded Future for $2.6 Billion
Related: Mandiant Offers Clues to Catching North Korean Fake IT Workers
Related: OpenAI Says Iranian Hackers Used ChatGPT to Plan ICS Attacks
Related: Microsoft Puts ChatGPT to Work on Automating Cybersecurity
Related: Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
Original Post URL: https://www.securityweek.com/can-ai-early-warning-systems-reboot-the-threat-intel-industry/
Category & Tags: Artificial Intelligence,Malware & Threats,Threat Intelligence,APT,China,Featured,google,Iran,Microsoft,North Korea,OpenAI,threat intelligence – Artificial Intelligence,Malware & Threats,Threat Intelligence,APT,China,Featured,google,Iran,Microsoft,North Korea,OpenAI,threat intelligence