Source: go.theregister.com – Author: Guy Matthews
Sponsored feature The cyberthreat landscape is evolving fast, with highly organized bad actors launching ever more devastating and sophisticated attacks against often ill-prepared targets.
In this worrying scenario, the time taken to discover and respond to an attack has become critical. Providers of managed detection and response (MDR) services, whose job it is to protect their customers against this malicious onslaught, now have a potent new weapon they can build into their platforms: agentic AI.
While generative AI relies on prompts for simple tasks, agentic AI breaks down complex tasks into simpler ones that it can then complete autonomously. A set of such agents will often work together, each using its own specialized training to carry out part of a more complex task. It looks set to be a game changer, with Gartner forecasting that a third of AI use cases will rely on agentic AI by 2028.
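To make the distinction concrete, here is a minimal sketch of that decomposition pattern in Python. The triage, enrichment and reporting agents and the coordinator are purely hypothetical illustrations of the idea, not any vendor's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical specialized agents: each one handles a single narrow subtask.
@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]

def triage(alert: str) -> str:
    return f"triage: classified '{alert}' as suspicious"

def enrich(alert: str) -> str:
    return f"enrichment: pulled threat intel for '{alert}'"

def report(alert: str) -> str:
    return f"reporting: compiled findings for '{alert}'"

# The coordinator breaks one complex task into simpler steps, then routes
# each step to the agent specialized for it, with no human prompt per step.
PIPELINE = [Agent("triage", triage), Agent("enrichment", enrich), Agent("reporting", report)]

def investigate(alert: str) -> list[str]:
    return [agent.handle(alert) for agent in PIPELINE]

for line in investigate("powershell spawned by winword.exe"):
    print(line)
```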
Agentic AI is already recasting the entire MDR market, taking it beyond generative AI models and towards something much more autonomous. MDR providers can use various forms of agentic AI to speed up the kinds of tasks traditionally handled by human security operations center (SOC) analysts.
This kind of automation has the power not only to accelerate detection and response, but to reduce human error. It also promises to help address the skills shortage affecting the market for experienced security professionals.
Powered by agentic AI, so the theory goes, MDR platforms should continuously adapt and learn from real-time threats. They will provide more potent responses and prompt remediation to contain cyber attacks well before they can disrupt essential operations.
Choose wisely
But there’s a catch that senior security professionals need to be alert to before they appoint an MDR provider: not all providers are using agentic AI in the same way, and some are not using it at all. So customers must take great care when navigating a crowded market to find the best choice of partner.
Organizations need to understand how a potential security partner is using agentic AI, how they measure the outcomes from this technology, and how transparent and collaborative they are in sharing these insights. Are they using the sort of basic AI that does little more than filter out possible threats from a sea of data? Or more sophisticated AI models that build reasoning and multi-stage task-solving into the mix? And how do they use human experience alongside their AI deployment?
To better understand how the market should be navigated, The Register spoke to Dustin Hillard, CTO of threat detection and response specialist eSentire. An experienced data scientist, Hillard has spent the past 15 years focused on automating security and understanding network behavior through machine learning.
How much automation is too much?
He points out that some MDR providers position agentic AI as automating all of the duties of a regular SOC analyst, eliminating humans from the process altogether. The danger here, he says, is that such agents might fail to classify cyber activity appropriately, causing false positives and negatives that add to a security team’s workload rather than reducing it. That could create unfortunate consequences an experienced human security analyst would avoid. Not all MDR providers are equally transparent when it comes to sharing insights, either.
“Many MDR providers are talking about how they can take away the human role,” he warns. “But at eSentire we are trying to take the expertise of humans and use AI to amplify it. Humans are still in the loop and making the critical decisions, and we argue that should always be the case.”
Agentic AI’s potential value is indisputable, especially when combined with human experience. “Within minutes of a security signal being received by our SOC, our agentic AI system, Atlas AI, kicks off a pre-investigation. While the investigation would take human SOC analysts at least five hours on their own, Atlas AI takes seven minutes,” he notes. “It can assess the situation and collect all the essential data before putting it in front of an analyst for the final call: whether to escalate the investigation or close it out.”
Agentic AI done right
eSentire’s approach to using AI in its SOC operations is based around an agentic workflow that mimics a SOC analyst’s investigation process. Its Atlas AI engine poses security questions, pulls research, and compiles the findings into a report. This can help determine whether an employee’s computer has been compromised, assigning a confidence score from 1 to 10. It then shares that report and all the pertinent data with the analysts.
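As a rough illustration of that report-and-score pattern, the hypothetical sketch below models an investigation report carrying a 1-to-10 confidence score and a suggested (never final) escalation flag. The class, fields and threshold are illustrative assumptions, not eSentire's Atlas AI code:

```python
from dataclasses import dataclass, field

@dataclass
class InvestigationReport:
    host: str
    questions: list[str]   # security questions the workflow posed
    findings: list[str]    # research pulled for each question
    confidence: int        # 1 (almost certainly benign) .. 10 (almost certainly compromised)
    escalate: bool = field(init=False)

    def __post_init__(self):
        if not 1 <= self.confidence <= 10:
            raise ValueError("confidence must be between 1 and 10")
        # The report only *suggests* escalation; a human analyst makes the final call.
        self.escalate = self.confidence >= 7  # hypothetical threshold

report = InvestigationReport(
    host="laptop-042",
    questions=["Any persistence mechanisms?", "Outbound C2 traffic?"],
    findings=["New run-key registry entry", "Beaconing to a low-reputation domain"],
    confidence=8,
)
print(f"{report.host}: confidence {report.confidence}/10, suggested escalation: {report.escalate}")
```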
“eSentire’s SOC analysts always have the final say as to whether a threat is truly a threat, and they will decide on next steps no matter the report’s assessment,” explains Hillard.
“Atlas AI means our analysts are faster out of the investigation starting blocks, so they can validate threats with confidence, and act in record time,” he adds. “We track our first host isolation rate as a measure of how successful our service is at protecting customers. A 99.3% first-host isolation rate prevents lateral spread of a threat with minimal delay.”
eSentire’s approach to using AI for detection and response stems from years of experience. In 2018 it acquired leading AI solution developer Versive, along with Hillard, who was its CTO. It integrated Versive’s innovative IP throughout its Atlas XDR platform and SOCs. eSentire continued to build out its AI capabilities, launching the Atlas AI Investigator in 2023, an AI-powered tool that provides access to investigation, response and remediation tools through simple natural language interaction.
“Our approach is different from others,” Hillard says. “The first layer of our agentic AI is a data mesh that collects information from all our customers’ environments and brings all of this rich data into one place. This allows our human agents to learn from that data in a scalable fashion.”
The next layer handles orchestration, combining telemetry and threat intelligence data. Customers can tweak the automation dial here to involve human analysts as much or as little as they like.
Then it’s about having a thorough understanding of the customer’s environment, preferences, and business practices. The agentic system uses an LLM to assess the security situation and present hypotheses. If Atlas AI determines the investigation needs to be escalated, it will suggest remediation and containment actions in the context of all the information that’s been gathered. Human analysts then decide whether to escalate the investigation or close it out.
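Those layers can be caricatured as a simple pipeline, sketched below under loose assumptions: the function names, the automation-dial semantics and the stubbed-out LLM call are all hypothetical, not eSentire's architecture:

```python
from typing import Literal

def data_mesh(customers: dict[str, list[str]]) -> list[str]:
    # Layer 1: pool telemetry from every customer environment into one place.
    return [event for events in customers.values() for event in events]

def orchestrate(events: list[str], automation_dial: float) -> list[str]:
    # Layer 2: combine telemetry with threat intelligence; the dial (0.0-1.0)
    # controls how much is handled automatically before a human steps in.
    enriched = [f"{e} [intel: known-bad]" for e in events if "malware" in e]
    cutoff = int(len(enriched) * automation_dial)
    return enriched[:cutoff]

def hypothesize(event: str) -> str:
    # Layer 3: stand-in for the LLM that assesses the situation and proposes
    # a hypothesis plus containment actions; a real system would call a model here.
    return f"hypothesis: host in '{event}' looks compromised; suggest isolation"

def analyst_decision(hypothesis: str) -> Literal["escalate", "close"]:
    # Final layer: the analyst, not the AI, decides whether to escalate.
    return "escalate" if "isolation" in hypothesis else "close"

telemetry = {"acme": ["malware beacon on host-1"], "globex": ["routine login on host-9"]}
for event in orchestrate(data_mesh(telemetry), automation_dial=1.0):
    print(analyst_decision(hypothesize(event)), "->", event)
```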
The speedy and accurate results that agentic AI enables mean that an organization can demonstrate full compliance with data regulation requirements in the immediate aftermath of an incident. Decision trails, model oversight, and alignment with standards and regulations such as SOC 2 and GDPR help keep SOCs audit-ready. It is possible to produce a detailed report for regulatory scrutiny based on data from the very early stages of an attack.
“All in all, eSentire is about generating expert-level investigation and response actions while other MDR providers focus on filtering out the noise and carrying out less important remedial tasks,” Hillard concludes. “We’re more about delivering deep investigation when and where it matters the most, and we don’t take human expertise off the table to do that.”
Your agentic MDR vendor checklist
MDR providers are rushing to integrate agentic AI into their offerings. Some implementations are highly sophisticated, while others are still at the development stage. Here are nine questions to ask an MDR provider to help you decide whether they are really going to accelerate your security operations reliably and effectively.
- How real is it? Is your agentic AI solution fully operational or still at beta stage, and are there active deployments?
- How much can you tweak the level of automation? Which actions does the agentic AI system take autonomously and which require human validation? It should be possible to define exactly where AI-driven management stops and human decision-making takes over.
- How transparent and auditable is the system? Regulators will expect to see reports detailing which threats were investigated, what mitigation steps were triggered, and where AI improved efficiency by reducing manual effort.
- What sort of adversarial testing program do you run? Agentic AI must be continuously validated against ever-evolving threats. What scenario-based evaluations are carried out?
- What AI models and data sources do you use? Ask for evidence of third-party data sources and upstream services. Query the roles these components play in detection and response processes. How does the system handle service degradation or disruption?
- How do you measure the performance of your agentic AI? Unproven claims are easy to make. Ask your MDR provider for firm data on things like Mean Time to Respond (MTTR), alert volume reduction, false-positive suppression and analyst engagement.
- What assurances can you give that data won’t leak into multi-tenant training environments? Agentic AI relies on continuous learning from a wide pool of data. But telemetry must be segmented to prevent cross-contamination with other customers’ data.
- How do you handle collaboration between our internal security team and your analysts? Effective incident response is all about seamless coordination, especially in the high-pressure eventuality of a major cyber attack.
- Does your AI meet my organization’s security and compliance standards? A proper AI deployment should generate audit-ready data.
Sponsored by eSentire.
Original Post URL: https://go.theregister.com/feed/www.theregister.com/2025/08/07/could_agentic_ai_save/