By Byron V. Acohido
LAS VEGAS — A decade ago, the rise of public cloud brought with it a familiar pattern: runaway innovation on one side and, on the other, a scramble to retrofit security practices never designed for the new terrain.
Related: GenAI workflow risks
Shadow IT flourished. S3 buckets leaked. CISOs were left to piece together fragmented visibility after the fact.
Something similar—but more profound—is happening again. The enterprise rush to GenAI is triggering a structural shift in how software is built, how decisions get made, and where the risk lives. Yet the foundational tools and habits of enterprise security—built around endpoints, firewalls, and user identities—aren’t equipped to secure what’s happening inside the large language models (LLMs) now embedded across critical workflows.
This is not just a new attack surface. It’s a systemic exposure—poorly understood and dangerously under-addressed.
The newly published IBM 2025 Cost of a Data Breach Report highlights a widening chasm between AI adoption and governance. It reveals that 13% of organizations suffered breaches involving AI models or applications, and among these, a staggering 97% lacked proper AI access controls.
Encouragingly, a new generation of AI-native security vendors is quietly charting the contours of this gap. Among them: Straiker, DataKrypto, and PointGuard AI.
I encountered all three here in Las Vegas at Black Hat 2025 — and their candid insights helped crystallize what I now see as a systemic failure hiding in plain sight.
Each startup is tackling a different facet of GenAI’s attack surface. None claim to offer a silver bullet. But taken together, they hint at what an AI-native security stack might eventually require.
AI-powered tools are flooding enterprise workflows at every level. From marketing copy to software development, GenAI is now threaded into production processes with startling speed. But the underlying engines—LLMs—operate using unfamiliar logic, drawing conclusions and taking actions in ways security teams aren’t trained to inspect or control.
Shadow AI is more than an abstract concern. Research from Menlo Security shows a 68% increase in shadow GenAI usage in 2025 alone, with 57% of employees admitting they’ve input corporate data into unsanctioned AI tools. The rise of AI web traffic—up 50% to 10.5 billion visits—signals how widespread this risk has become, even in browser-only usage contexts.
Ankur Shah, CEO of Straiker, put it bluntly: “If you’re not watching what your AI agent is doing in real time, you’re blind.” Straiker focuses on what happens when GenAI becomes agentic—when it starts chaining reasoning steps, invoking tools, or making decisions based on inferred context.
In this mode, traditional AppSec and data loss prevention tools fall flat. Straiker’s Ascend AI and Defend AI offerings are designed to red-team these behaviors and enforce runtime policy guardrails. Their insight: the attack surface is no longer just the prompt. It’s the behavior of the agent.
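For illustration only, here is a minimal sketch, with hypothetical names throughout, of what a runtime policy guardrail around an agent’s tool calls might look like. It is not Straiker’s implementation; it simply shows the general idea of intercepting, vetting, and logging agent behavior at the moment it happens:

```python
# Minimal sketch of a runtime guardrail for an agentic workflow.
# Hypothetical names throughout; this is not Straiker's product, just an
# illustration of enforcing policy at the point where an agent invokes tools.
import re
import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

ALLOWED_TOOLS = {"search_docs", "summarize"}          # tools the policy permits
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|ssn)", re.IGNORECASE)

class GuardedToolRunner:
    """Wraps an agent's tool registry and enforces policy at call time."""

    def __init__(self, tools: Dict[str, Callable[..., Any]]):
        self.tools = tools

    def invoke(self, tool_name: str, **kwargs: Any) -> Any:
        # 1. Block tools outside the allowlist.
        if tool_name not in ALLOWED_TOOLS:
            log.warning("Blocked unsanctioned tool call: %s", tool_name)
            raise PermissionError(f"Tool '{tool_name}' is not permitted")
        # 2. Block arguments that look like they carry secrets.
        for key, value in kwargs.items():
            if isinstance(value, str) and SECRET_PATTERN.search(value):
                log.warning("Blocked %s: argument %r looks sensitive", tool_name, key)
                raise PermissionError("Sensitive data detected in tool arguments")
        # 3. Log and execute the call so behavior is observable in real time.
        log.info("Agent invoked %s with %s", tool_name, kwargs)
        return self.tools[tool_name](**kwargs)

# Usage: the agent's planner calls runner.invoke(...) instead of the tool directly.
runner = GuardedToolRunner({"search_docs": lambda query: f"results for {query}",
                            "summarize": lambda text: text[:100]})
print(runner.invoke("search_docs", query="quarterly revenue"))
```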
If Straiker focuses on the “what,” then DataKrypto focuses on the “where.” Specifically: where does GenAI process and store its most sensitive data? The answer, according to DataKrypto founder Luigi Caramico, is both simple and alarming: in cleartext, inside RAM.
“All the data—the model weights, the training materials, even user prompts—are held unencrypted in memory,” Caramico observes. “If you have access to the machine, you have access to everything.”
This exposure isn’t hypothetical. As more companies fine-tune LLMs with proprietary IP, the risk of theft or leakage escalates dramatically. Caramico likens LLMs to the largest lossy compression engines ever built—compressing terabytes of training data into billions of vulnerable parameters.
DataKrypto’s response is a product called Phenom for AI: a secure SDK that encrypts model data in memory using homomorphic encryption, integrated with trusted execution environments (TEEs). This protects both the model itself and the sensitive data flowing into and out of it—without degrading performance. “Encryption at rest and in motion aren’t enough,” Caramico said. “This is encryption in use.”
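To illustrate the principle (this is not DataKrypto’s Phenom product), here is a minimal sketch using the open-source TenSEAL homomorphic encryption library: a toy linear scoring step computed directly on ciphertext, so the sensitive input never sits in memory as cleartext on the machine doing the math.

```python
# Minimal sketch of "encryption in use" with the open-source TenSEAL library.
# Not DataKrypto's Phenom for AI; it only illustrates that a computation
# (here a toy linear scoring step) can run on ciphertext, so the sensitive
# input is never exposed as cleartext in the serving machine's RAM.
import tenseal as ts

# CKKS context for approximate arithmetic on real numbers.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()   # needed for the rotations behind dot products

user_features = [0.2, 0.9, 0.4]          # sensitive user data (client side)
model_weights = [0.7, -0.3, 1.1]         # toy "model" held by the server

enc_features = ts.ckks_vector(context, user_features)   # encrypt before sending
enc_score = enc_features.dot(model_weights)              # computed on ciphertext

# Only the holder of the secret key can read the result.
print(round(enc_score.decrypt()[0], 3))   # ~ 0.7*0.2 - 0.3*0.9 + 1.1*0.4 = 0.31
```

In production, approaches like the one Caramico describes pair this kind of encrypted computation with trusted execution environments to keep performance acceptable; the sketch above only shows the core idea.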
The third leg of the emerging GenAI security stool comes from PointGuard AI, which focuses on discovery and governance. As AI code generation and prompt engineering proliferate, organizations are losing track of what AI tools are being used where, and by whom. Willy Leichter, PointGuard’s Chief Security Officer, frames it as a shadow IT problem on steroids.
“AI is the fastest-growing development platform we’ve ever seen,” he noted. “Developers are pulling in open-source models, auto-generating code, and building apps without any oversight from security teams.”
PointGuard scans code repos, runtime environments, and MLOps pipelines to surface unsanctioned AI use, detect prompt injection exposures, and score AI posture. It builds a bridge between AppSec and data governance teams who increasingly find themselves on the same front lines.
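As a rough illustration of the discovery piece, the sketch below (hypothetical, and far simpler than what PointGuard describes) walks a repository and flags files that import well-known AI SDKs, giving a security team a starting inventory of unsanctioned AI use:

```python
# Minimal sketch of AI-usage discovery in a code repository. Hypothetical and
# greatly simplified: it just flags Python files that import well-known AI SDKs
# so a security team can begin building an inventory.
import re
from pathlib import Path

# Import patterns that suggest GenAI usage in Python source.
AI_SDK_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|transformers|langchain)\b",
    re.MULTILINE,
)

def scan_repo(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, sdk) pairs for every AI SDK import found under repo_root."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AI_SDK_PATTERN.finditer(text):
            findings.append((str(path), match.group(1)))
    return findings

if __name__ == "__main__":
    for file_path, sdk in scan_repo("."):
        print(f"{file_path}: uses {sdk}")
```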
While their approaches differ, these companies are all converging on the same conclusion: the current security model isn’t just incomplete—it’s obsolete. Straiker brings behavioral monitoring into the spotlight. DataKrypto protects the compute layer itself. PointGuard restores visibility and governance to a world of AI-driven code and logic. Their respective visions are drawing the early contours of what a security-first foundation for GenAI might look like.
There is now, in fact, an OWASP Top 10 list of LLM vulnerabilities. But it is still early days, and there are few universal frameworks or agreed-upon best practices for how to integrate these new risks into traditional security operations. CISOs face a landscape that is both fragmented and urgent, where model misuse, shadow deployments, and memory scraping represent three fundamentally different risks—each requiring new tools and mental models.
To keep pace, security itself must evolve. That means understanding AI not just as a tool, but as a new kind of software logic that demands purpose-built protection. It means building systems that can interpret autonomous behavior, encrypt active memory, and continuously surface hidden AI integrations. Most of all, it means learning to think less like compliance officers and more like language models—probabilistic, context-aware, and relentlessly adaptive.
“Security can’t just follow the playbook anymore,” Leichter observed. “We have to match the speed and shape of the thing we’re trying to protect.”
That, in the end, may be the most important shift of all.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own — drawn from lived experience and editorial judgment honed over decades of investigative reporting.)
August 7th, 2025