Source: www.csoonline.com
In a market reacting intensely to the risks posed by large language models (LLMs), CISOs need not panic. Here are four common-sense security fundamentals to support AI-enabled business operations across the enterprise.
From risks in AI applications such as poisoned training data and hallucinations, to AI-enabled security, to deepfakes, user error, and novel AI-generated attack techniques, the cybersecurity industry is abuzz with dire security threats that are overwhelming CISOs.
For example, during and after the RSA Conference in April 2025, attendees posted vociferously about the overload of AI fear, uncertainty, and doubt (FUD), particularly on the part of vendors.
One of them is Netflix staff information risk engineer Tony Martin-Vegue, who, in a post-RSAC interview, tells CSO that there's no stopping this AI train, but there are ways to cut through the hype and apply basic controls where they matter most.
First, he says, focus on why the organization deploys AI. “The way I see it is there is a risk of not using AI even though there is a lot of over-hype and promise about its capability. That said, organizations that don’t use AI will get left behind. The risk of using AI is where all the FUD is.”
In terms of applying controls, rinse, wash, and repeat the processes you followed when adopting cloud, BYOD, and other powerful technologies, he says. Start with knowing where and how AI is used, by whom and for what purpose. Then, focus on securing the data employees are sharing with the tools.
Get to know your AI
“AI is a fundamental change that is going to permeate society in a way that might even eclipse the internet. But this change is happening at such a rapid rate that the ability to distinguish the blur effect is hard to comprehend for a lot of people,” explains Rob T. Lee, chief of research, AI and emerging threats, at SANS Institute. “Now, every single part of the organization is going to be utilizing AI in different forms. You need a way to reduce risk fundamentally for implementation. And that means seeing where people use it, and under what business use cases, across the organization.”
Lee, who’s helping SANS develop a community-consensus AI security guidelines checklist, spends 30 minutes a day using advanced AI agents for various business purposes and encourages other cybersecurity and executive leaders to do the same. Once they become familiar with the programs and their capabilities, he says, they can get down to selecting controls.
As an example, Lee points to Moderna, which announced in May 2025 that it merged human resources and IT under a new role, chief people and digital technology officer. “The work is no longer just about humans, but about managing both humans and AI agents,” Lee explains. “This requires HR and IT to collaborate in new ways.”
Revisit security fundamentals
That’s not to say that because AI is so new, current security fundamentals don’t count. They most certainly do.
Chris Hetner, senior cyber risk advisor at the National Association of Corporate Directors (NACD), explains: “The cybersecurity industry often operates in an echo chamber and is calibrated to be highly reactive. The echo chamber spins up the machine by talking about Agentic AI [AI agents], AI drift, and other risks. And a whole new set of vendors then overwhelms the CISO portfolio,” he explains. “AI is merely an extension of existing technology. It serves as another lens through which we can bring our focus back to the essentials.”
When Hetner speaks of the essentials, he highlights the importance of understanding the business profile, pinpointing threats within the digital landscape, and discerning the interconnections among business units. From there, security leaders should assess the operational, legal, regulatory, and financial repercussions that could arise in the event of a breach or exposure. Then they should aggregate this information into a comprehensive risk profile to present to the executive team and board so they can determine what risks they’re willing to accept, mitigate, and transfer.
Protect the data
Given how AI is used to analyze financial, sales, HR, product development, customer relationship, and other sensitive data, Martin-Vegue feels that data protection should be at the top of the risk manager’s list of specific controls. This points back to knowing how employees use AI, for what functions, and the type of data they feed into the AI-enabled application.
Or, as a May 2025 joint memo on AI data security from security agencies across Australia, New Zealand, the UK, and the US explains: Know what your data is, where it is, and where it’s going.
Of course, this is easier said than done, given that most organizations don’t know where all their sensitive data is, let alone how to control it, according to multiple surveys. Yet, as with other new technologies, protecting data used in LLMs boils down to user education and data governance, including traditional controls such as scanning and encryption.
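As a rough illustration of what that kind of scanning can look like in practice, the hypothetical Python sketch below checks a prompt for a few common sensitive-data patterns and masks them before the text reaches an external AI tool. The patterns, function names, and redaction policy are illustrative placeholders, not anything prescribed by the agencies' memo or the experts quoted here; real programs would rely on the organization's own DLP tooling and data classification rules.

```python
import re

# Hypothetical pre-prompt check: scan text for a few common sensitive-data
# patterns and mask them before the prompt is sent to an external AI tool.
# Patterns and policy here are placeholders for illustration only.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with matches masked, plus labels of what was found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

cleaned, found = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
print(found)    # ['email', 'us_ssn']
print(cleaned)  # Contact [EMAIL REDACTED], SSN [US_SSN REDACTED].
```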
“Your users may not understand the best ways to use these AI solutions, so cybersecurity and governance leaders need to help architect use cases and deployments that work for them and your risk management team,” says Diana Kelley, long-time cybersecurity analyst and CISO at Protect AI.
Protect the model
Kelley points out the differences in risk between various AI adoption and deployment models. Free, public versions of AI like ChatGPT, where the user plugs data into a web-based chat prompt, provide the least control over what happens with data that employees share with the interface. Paying for the professional version and bringing AI in-house gives enterprises more control, but enterprise licenses and self-hosting costs are often out of reach for small businesses. Another option involves running foundation models on managed cloud platforms like Amazon Bedrock and other securely configured cloud services, where the data is processed and analyzed within the account holder’s protected environment.
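For teams weighing that last option, the hedged Python sketch below shows roughly what calling a foundation model through Amazon Bedrock from an organization's own AWS account can look like using the boto3 SDK. The region, model ID, and prompt are placeholder values, the account must already have access to the chosen model enabled, and the exact request format should be confirmed against AWS's current documentation.

```python
import boto3

# Sketch only: invoke a foundation model via Amazon Bedrock so prompts and
# responses are processed within the organization's own AWS account and its
# existing IAM, logging, and network controls. Values below are placeholders.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarize this quarter's sales notes."}]}
    ],
    inferenceConfig={"maxTokens": 300},
)

print(response["output"]["message"]["content"][0]["text"])
```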
“This is not magic or little sparkles, even though AI is often represented that way in your applications. It’s math. It’s software. We know how to protect software. However, AI is a new kind of software that requires new types of security approaches and tools,” Kelley adds. “A model file is a different type of file, so you need a purpose-built scanner designed for their unique structure.”
A model file is a set of weights and biases, she continues, and when it is deserialized, the organization is effectively running untrusted code. This makes models a primary target for model serialization attacks (MSAs) by cybercriminals wanting to manipulate target systems.
In addition to MSA risks, AI models, especially those pulled from open source repositories, can fall victim to typosquatting attacks that mimic the names of trusted files but contain malware. They’re also susceptible to neural backdoors and other supply chain vulnerabilities, which is why Kelley recommends scanning AI models before approving them for development and deployment.
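To make the deserialization risk concrete, the sketch below uses Python's standard-library pickletools module to flag opcodes in a pickle-based model file that can import and execute arbitrary code when the file is loaded. It is a minimal illustration rather than a production scanner (legitimate PyTorch checkpoints also use some of these opcodes, so a hit is a prompt for review, not proof of malice), and the file name is hypothetical.

```python
import pickletools

# Minimal illustration: list pickle opcodes that can trigger imports or code
# execution during deserialization -- the mechanism behind model serialization
# attacks. A real scanner would also inspect which objects are being imported.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    hits = scan_pickle("model.pkl")  # hypothetical file name
    print("\n".join(hits) or "no import/execute opcodes found")
```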
Because the LLMs supporting AI applications are different from traditional software, the need for different types of scanning and monitoring has led to a flood of specialized solutions. But signs point to this market contracting, as traditional security vendors start to pull in specialty tools, such as with Palo Alto Networks’ pending acquisition of Protect AI.
“Understand how the AI tech works, know how your employees are using it, and build in controls,” Kelley reiterates. “Yes, there is a lot of work involved, but it doesn’t have to be scary, and you don’t need to believe the FUD. It’s the way we do risk management.”
Original Post url: https://www.csoonline.com/article/4006436/llms-hype-versus-reality-what-cisos-should-focus-on.html
Category & Tags: Data and Information Security, Generative AI