The Hidden Cybersecurity Crisis: How GenAI is Fueling the Growth of Unchecked Non-Human Identities – Source: securityboulevard.com

Source: securityboulevard.com – Author: John D. Boyle

Generative AI continues to promise a revolution, transforming everything from customer service to software development. Behind the excitement, organizations are still struggling to identify and validate GenAI use cases that deliver real business benefits. At the same time, industry leaders and social pundits keep beating their loud revolutionary GenAI drums, urging employees to experiment frantically with every new AI tool or risk losing professional and personal relevance. In doing so, those employees unknowingly grant access to APIs, service accounts, tokens and other non-human identities (NHIs), which in turn give these applications extensive permissions and access across the ecosystem. The result is a growing cybersecurity crisis that many organizations are unprepared to handle: GenAI is accelerating the unchecked growth of NHIs, fueling a new wave of cyberthreats.

AI applications rely on machine-to-machine communication to function. Whether it’s an AI-powered chatbot, an automation script, or an advanced data processing tool, each requires its own NHI to access and interact with corporate systems. These identities often exist without proper security oversight. The ungoverned proliferation of NHIs means that organizations are losing visibility into who — or what — is accessing critical systems and data. The result is an explosion of new NHIs that spawn fresh and attractive targets for threat actors at an accelerating rate. 
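To make the abstraction concrete, here is a minimal, hypothetical sketch in Python of what one such NHI looks like in practice: an automation script authenticating to an internal API with a long-lived service-account token. The endpoint, environment variable name and permissions are illustrative assumptions, not taken from any specific product.

import os

import requests  # third-party HTTP client

# Hypothetical example: an automation script authenticating to an internal
# API with a long-lived service-account token. The token is a non-human
# identity (NHI): no person logs in, yet it carries real permissions.
API_BASE = "https://api.internal.example.com"      # placeholder endpoint
SERVICE_TOKEN = os.environ["REPORTING_BOT_TOKEN"]  # assumed NHI credential

def fetch_customer_records():
    """Pull records on behalf of the bot identity, not a human user."""
    resp = requests.get(
        f"{API_BASE}/v1/customers",
        headers={"Authorization": f"Bearer {SERVICE_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(len(fetch_customer_records()), "records fetched by the bot identity")

If that token leaks or is over-permissioned, nothing in the flow above distinguishes the attacker from the bot, which is exactly the visibility gap described here.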

Automated decision-making by AI compounds the problem. Unlike human users, AI-driven automation can operate at speeds that make containment difficult. If an attacker compromises an AI-generated NHI, they can execute unauthorized actions faster than security teams can respond. A threat actor gaining control of an AI service account could manipulate financial transactions, alter customer records, or delete sensitive system data in seconds. AI-powered bots interacting with APIs may execute unauthorized commands, causing operational disruptions before defenders even detect a breach. Ransomware operators could leverage AI-generated NHIs to create self-replicating attack patterns. 

To prevent AI from creating security chaos, organizations must first gain visibility into how many NHIs exist in their environments. A full audit of AI-powered tools and integrations is necessary to identify API tokens, service accounts and automation scripts operating without oversight. If security teams do not know where their non-human identities are, they cannot secure them. 
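As one concrete starting point, the sketch below (Python with boto3) inventories a single common class of NHI, AWS IAM access keys, and records when each was last used. It assumes an AWS environment and credentials with IAM read access; other platforms would need their own equivalent enumeration.

import boto3  # AWS SDK; assumes credentials with IAM read access

# Minimal inventory sketch: enumerate IAM users and their access keys
# (one common class of NHI) and note when each key was last used.
iam = boto3.client("iam")

def inventory_access_keys():
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"]
                findings.append({
                    "user": user["UserName"],
                    "key_id": key["AccessKeyId"],
                    "status": key["Status"],
                    "created": key["CreateDate"].isoformat(),
                    "last_used": last_used.get("LastUsedDate"),  # None if never used
                })
    return findings

if __name__ == "__main__":
    for finding in inventory_access_keys():
        print(finding)

A real audit would cover far more than IAM keys (OAuth app registrations, CI/CD tokens, vendor API keys), but even this narrow pass usually surfaces credentials no one remembers creating.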

Applying zero-trust principles is critical to securing AI-generated NHIs. Organizations must enforce just-in-time access controls, requiring NHIs to authenticate just like human users. Multi-factor authentication should not be reserved for employees alone but extended to service accounts and automated processes as well. Security teams must monitor AI-driven transactions in real time, tracking anomalies that could indicate credential misuse or unauthorized access.
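One hedged illustration of just-in-time access for an NHI, again assuming an AWS environment: rather than holding a standing key, the automation requests short-lived credentials scoped to a single role. The role ARN and session name below are placeholders.

import boto3  # AWS SDK; assumes an existing role ARN the caller may assume

# Sketch of just-in-time access for an NHI: the automation requests
# temporary credentials scoped to one role, expiring after 15 minutes.
sts = boto3.client("sts")

def short_lived_credentials(
    role_arn: str = "arn:aws:iam::123456789012:role/ai-bot-readonly",  # placeholder
):
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="genai-bot-session",
        DurationSeconds=900,  # minimum allowed; credentials self-expire
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

if __name__ == "__main__":
    creds = short_lived_credentials()
    print("Temporary credentials expire at", creds["Expiration"])

The design point is that a stolen credential of this kind is worth minutes, not months, and every issuance leaves an auditable trail that anomaly monitoring can consume.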

Governance must also evolve to meet the demands of AI adoption. IT leaders should require security reviews before new AI tools are deployed, ensuring that all NHIs are accounted for and properly secured. Organizations should set expiration policies for API tokens and automate the detection and revocation of unused credentials. Without governance, the adoption of AI becomes an open invitation for cyberthreats.
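A possible automation of such an expiration policy, sketched against AWS IAM and assuming a 90-day maximum key age (the threshold is an assumption, not a standard): flag active access keys older than the cutoff and, outside of dry-run mode, deactivate them.

from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK; assumes credentials with IAM write permissions

MAX_AGE = timedelta(days=90)  # assumed expiration policy, not a universal default
iam = boto3.client("iam")

def revoke_stale_keys(dry_run: bool = True):
    """Deactivate access keys older than MAX_AGE; one way to automate
    an expiration-and-revocation policy for this class of NHI."""
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
                if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                    print(f"Stale key {key['AccessKeyId']} for {user['UserName']}")
                    if not dry_run:
                        iam.update_access_key(
                            UserName=user["UserName"],
                            AccessKeyId=key["AccessKeyId"],
                            Status="Inactive",
                        )

if __name__ == "__main__":
    revoke_stale_keys(dry_run=True)

Running such a job on a schedule, with dry-run output routed to the team that owns each credential, turns the governance policy from a document into an enforced control.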

As businesses rush to adopt AI, they are blindly granting these applications broad access to their IT environments, rapidly expanding the attack surface and exposing themselves to security risks. Organizations must gain complete visibility into AI-generated NHIs, apply zero-trust principles, enforce governance policies and educate employees about AI security risks. The AI-driven future is here, but without security it is a disaster waiting to happen. Proper GenAI governance will control and manage the risks associated with NHI growth, bringing security and AI innovation into balance across IT ecosystems. Businesses that fail to secure their NHIs today will guarantee their place in the breach headlines of tomorrow.

Original Post URL: https://securityboulevard.com/2025/02/the-hidden-cybersecurity-crisis-how-genai-is-fueling-the-growth-of-unchecked-non-human-identities/

Category & Tags: AI and Machine Learning in Security,AI and ML in Security,Cybersecurity,Featured,Identity & Access,Security Boulevard (Original),Social – Facebook,Social – LinkedIn,Social – X,Spotlight,artifical intelligence,Attack Vectors,GenAI,Machine Identity,NHI,non-human identity,Ransomware,risk management,zero trust
