Source: securityboulevard.com – Author: Kevin Sapp
Artificial intelligence (AI) agents are starting to do more than generate text. They perform actions – reading from databases, writing to internal systems, triggering webhooks, and updating tickets. Anthropic recently warned that fully autonomous AI “employees” may be only a year away, accelerating the need to rethink security for these new actors.
What’s new is how they’re doing it: not by following hardcoded workflows, but by making decisions at runtime.
This new pattern is showing up everywhere, from internal support bots to automated research assistants to developer productivity tools. In some cases, LLMs write and execute SQL queries. In others, they connect systems that weren’t designed with agentic AI use cases in mind.
What we’re seeing is the rise of self-assembling systems, where an LLM-powered agent interprets a goal and builds its own integration logic on the fly. And while this is incredibly powerful, it comes with serious challenges, especially around security, identity, and access.
Self-Assembly in AI: Code at Runtime
In traditional software, developers design integrations by wiring together APIs, contracts, and credentials. These systems typically rely on a service mesh or API gateway, along with a logic pipeline that has been reviewed, versioned, and tested.
In agentic AI, that wiring happens at runtime.
This pattern, sometimes referred to as “self-assembly,” emerges when an LLM agent dynamically determines which tools or APIs to use to complete a task. There’s no predefined flow, no hardcoded sequence – just a prompt, a plan generated by the model, and a set of actions executed based on its own reasoning.
Here’s an example:
An LLM agent is instructed to “Find any new high-priority support tickets created in the last 48 hours for our top 10 customers and summarize them in an email to the on-call manager.”
To fulfill that task, the agent might:
- Authenticate to Zendesk and run a filtered query.
- Look up customer tier in Salesforce.
- Compose a Markdown report.
- Email the report via Google Workspace APIs.
This all happens without the developer manually wiring those systems together. Instead, the agent figures it out in real time, using whatever tools and access it has been given.
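To make that concrete, here is a minimal sketch of what such an agent’s wiring can look like, assuming a simple tool-registry design. Every name in it (zendesk_search, salesforce_tier, send_email, plan_next_step) is a hypothetical placeholder rather than any vendor’s API; the point is that the tools are registered up front, but which ones run, in what order, and with what arguments is decided by the model at runtime.

```python
# A minimal sketch of runtime "self-assembly". All function names are
# hypothetical placeholders, not a real vendor API.
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a callable the agent is allowed to invoke."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def zendesk_search(query: str) -> list[dict]:
    """Query Zendesk for matching tickets (stub)."""
    return []

@tool
def salesforce_tier(account_id: str) -> str:
    """Look up a customer's tier in Salesforce (stub)."""
    return "unknown"

@tool
def send_email(to: str, subject: str, body: str) -> None:
    """Send a report via the Google Workspace APIs (stub)."""

def plan_next_step(goal: str, history: list[dict], tool_names: list[str]) -> dict:
    """Stand-in for the LLM call that chooses the next action.

    A real agent would send the goal, the history so far, and the tool
    schemas to the model, then parse its reply into
    {"action": <tool name>, "args": {...}} or {"action": "done"}.
    """
    return {"action": "done", "args": {}}

def run_agent(goal: str, max_steps: int = 10) -> list[dict]:
    """No predefined flow: the loop just executes whatever the model plans."""
    history: list[dict] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history, list(TOOLS))
        if step["action"] == "done":
            break
        result = TOOLS[step["action"]](**step["args"])
        history.append({"step": step, "result": result})
    return history

run_agent("Summarize new high-priority tickets for our top 10 customers "
          "and email the on-call manager.")
```

Notice where the credentials would have to live: every stub above needs access to a live system before it can do anything useful.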
And that’s the issue: What access does it have?
Each Integration Is an Access Point
Every action – fetching tickets, accessing CRM data, sending emails – requires an identity and permissions. The agent needs some form of credential to access each system. In a human-driven workflow, that might mean logging in and clicking “Authorize.” In an agent-driven one, that handshake needs to happen automatically.
Here’s what this looks like in practice:
- Developers passing API keys into environment variables.
- Secrets hardcoded in YAML files or scripts.
- Access tokens shared across tools because “We’re just prototyping.”
- Agents inheriting the identity of the dev environment or CI job that launched them.
These patterns have moved beyond edge cases and are now common in open source projects, internal automations, and commercial tools – many of them fragile or risky.
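A condensed, deliberately bad illustration of those patterns is below. The variable names and values are made up; the point is that the agent ends up holding broad, long-lived, shared secrets with no identity of its own.

```python
# Anti-pattern sketch: how agent credentials often get wired up today.
# Names and values are illustrative -- do not copy this approach.
import os

AGENT_CONFIG = {
    # Long-lived API key dropped into an environment variable at launch.
    "zendesk_token": os.environ.get("ZENDESK_API_TOKEN"),
    # Secret hardcoded in source or a YAML file checked into the repo.
    "salesforce_password": "hunter2",
}

# One broad token reused across tools "because we're just prototyping".
AGENT_CONFIG["gmail_token"] = AGENT_CONFIG["zendesk_token"]

# And if none of the above are set, the agent silently inherits whatever
# identity the dev shell or CI job that launched it happened to have.
```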
Systems Not Meant to Talk
One of the biggest shifts in this new architecture is the coordination among systems that weren’t designed to coordinate. In legacy environments, each system was protected by its own access model. GitHub has OAuth, Snowflake has signed JWTs, and Google Workspace has service accounts and scopes.
These systems were never built to be accessed in sequence by a semi-autonomous agent interpreting a prompt. And certainly not by an agent blending human and machine identity.
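For a sense of what that mismatch looks like from the agent’s side, here is a rough sketch of authenticating to the three systems mentioned above. The libraries used (requests, PyJWT, google-auth) are real, but the tokens, key paths, claim values, and scopes are assumptions for illustration, not a complete or recommended integration.

```python
# Three systems, three unrelated ways of proving who you are.
import time

import jwt                                  # PyJWT: used here for Snowflake key-pair auth
import requests
from google.oauth2 import service_account   # Google service-account credentials

def github_whoami(token: str) -> dict:
    """GitHub: an OAuth or personal-access token sent as a bearer header."""
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return resp.json()

def snowflake_keypair_jwt(issuer: str, subject: str, private_key_pem: str) -> str:
    """Snowflake: key-pair auth, i.e. a short-lived RS256-signed JWT.
    The claim values a real deployment needs are assumptions here."""
    now = int(time.time())
    claims = {"iss": issuer, "sub": subject, "iat": now, "exp": now + 3600}
    return jwt.encode(claims, private_key_pem, algorithm="RS256")

def gmail_send_credentials(key_file: str) -> service_account.Credentials:
    """Google Workspace: a service-account JSON key, scoped per API."""
    return service_account.Credentials.from_service_account_file(
        key_file, scopes=["https://www.googleapis.com/auth/gmail.send"]
    )

# Nothing above shares a notion of who "the agent" is: one call rides on a
# user's token, one on a database user's key pair, one on a service account.
```

Each credential was provisioned for a different kind of principal, which is why stitching them together under a single semi-autonomous agent leaves so little to anchor policy or auditing to.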
Original Post URL: https://securityboulevard.com/2025/05/self-assembling-ai-and-the-security-gaps-it-leaves-behind/?utm_source=rss&utm_medium=rss&utm_campaign=self-assembling-ai-and-the-security-gaps-it-leaves-behind
Category & Tags: Security Bloggers Network,AI,Automation,identities,Industry Insights,workloads – Security Bloggers Network,AI,Automation,identities,Industry Insights,workloads