Source: www.csoonline.com
Report shows that organizations use an average of 6.6 high-risk generative AI applications.
Employees in every organization use an average of 6.6 high-risk generative AI applications, including some unknown to CISOs, says Palo Alto Networks in a new study.
But, an expert says, that estimate is low. “I think it’s probably worse,” said Joseph Steinberg, a cybersecurity and AI expert. “In a major company it’s got to be higher than that.”
In fact, he predicts the number of risky AI apps in the enterprise is only going to grow.
That means that CISOs need to do a risk assessment of every genAI app employees are using, he said in an interview, and then set policies and procedures staff have to follow.
He warned CISOs and CEOs against following 'the Ostrich algorithm': pretending the danger doesn't exist by ignoring, or even rewarding, employees' shadow use of AI, whether in the office or at home.
“There’s no question there’s a tremendous amount of use of generative AI apps being used in ways that are highly problematic for the organization,” he said. “Remember, I can use a genAI app from my personal computer that my company has no control over, and still leak a tremendous amount of data just from what I’m asking – and it may not be only what I’m asking, but what others are also asking, and the generative AI learns from the pattern of questions.
“It’s hard to block that, because the risk can’t be completely controlled by the organization, because someone can do it on their own time from their own machine.”
And organizations sometimes deliberately or inadvertently reward employees for using unapproved genAI apps, he added, for example, by applauding a report that’s just too good.
“Let’s be honest,” he said. “Many of the companies that ban generative AI are rewarding their employees [for using it]. They’ll never admit it. But if you’re getting reviewed based on your performance, and your performance is enhanced by using shadow IT or AI on your own machine on your own time, if you’re not being punished, you’re not going to stop.”
Steinberg was commenting on a study released Thursday by Palo Alto Networks (PAN) on the popularity of genAI in organizations. It analyzed traffic logs from just over 7,000 PAN customers throughout 2024 to detect the use of software-as-a-service apps such as ChatGPT, Microsoft Copilot, Amazon Bedrock, and more. It also included a separate look at anonymized data from its customers' data loss prevention (DLP) incidents from the first three months of this year.
It observed:
- on average, organizations see a total of 66 genAI apps in their environments. Among PAN customers, the bulk were writing assistants (34% of the sample; the biggest in this category was Grammarly), conversational agents (just under 29%; apps such as Microsoft Copilot, ChatGPT, and Google Gemini), enterprise search apps (just over 10%), and developer platform apps (just over 10%). These four categories alone make up 84% of the genAI apps seen;
- 10% of genAI apps are classed as high-risk because, according to customer telemetry, access to them was restricted or blocked by customers at some point during the study period (a simplified sketch of this classification follows the list);
- DLP incidents involving genAI detected by PAN more than doubled this year compared to 2024.
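The report's high-risk label is behavioral rather than intrinsic: an app earns it when customers themselves restrict or block it. Here is a minimal sketch of that classification, assuming a simplified, hypothetical log format (the "app" and "action" field names are illustrative, not PAN's actual telemetry schema):

```python
# Flag a genAI app as high-risk when telemetry shows it was
# restricted or blocked at any point during the study window.
# The log format is an illustrative assumption, not PAN's schema.
def classify_apps(traffic_logs):
    seen, flagged = set(), set()
    for entry in traffic_logs:
        seen.add(entry["app"])
        if entry["action"] in ("block", "restrict"):
            flagged.add(entry["app"])
    return {app: "high-risk" if app in flagged else "unclassified"
            for app in seen}

logs = [
    {"app": "WriterBot", "action": "allow"},
    {"app": "WriterBot", "action": "block"},   # blocked once -> high-risk
    {"app": "ChatHelper", "action": "allow"},
]
print(classify_apps(logs))
# e.g. {'WriterBot': 'high-risk', 'ChatHelper': 'unclassified'}
```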
Writing assistants aren’t applications to be taken lightly, the report warns. “If an AI writing assistant is integrated into an organization’s systems without proper security controls, it could become a vector for cyberattacks. Hackers could exploit weaknesses in the genAI app to gain access to internal systems or sensitive data.”
“As genAI adoption grows, so do its risks,” it says. “Without visibility into genAI apps, and their broader AI ecosystems, businesses can risk exposing sensitive data, violating regulations, and losing control of intellectual property. Monitoring AI interactions is no longer optional. It’s critical for helping prevent shadow AI adoption, enforcing security policies, and enabling responsible AI use.”
The report identifies these genAI security best practices for CISOs:
- understand genAI usage in the enterprise and control what is allowed. Implement conditional access management to limit access to genAI platforms, apps, and plugins based on users and/or groups, location, application risk, device compliance, and legitimate business rationale (see the conditional-access sketch after this list);
- guard sensitive data from unauthorized access and leakage through real-time content inspection, with centralized policy enforcement across the infrastructure and within data security workflows (a minimal content-inspection sketch also follows the list);
- defend against modern AI-based cyberthreats with a zero trust security framework that identifies and blocks highly sophisticated, evasive, and stealthy malware and threats within genAI responses.
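To make the first recommendation concrete, here is a minimal sketch of conditional-access evaluation, assuming hypothetical policy fields (allowed_groups, allowed_locations, max_app_risk, require_compliant_device). It shows the decision logic only, not any vendor's API:

```python
# Minimal conditional-access check for genAI apps. Every policy
# field here is an illustrative assumption, not a product feature.
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    allowed_groups: set = field(default_factory=set)
    allowed_locations: set = field(default_factory=set)
    max_app_risk: int = 3              # hypothetical scale: 1 (low) to 5 (high)
    require_compliant_device: bool = True

def is_access_allowed(policy, user_groups, location, app_risk, device_compliant):
    """Allow access only when every conditional-access check passes."""
    if not user_groups & policy.allowed_groups:
        return False                   # no qualifying group membership
    if location not in policy.allowed_locations:
        return False                   # request from a disallowed location
    if app_risk > policy.max_app_risk:
        return False                   # app is riskier than policy tolerates
    if policy.require_compliant_device and not device_compliant:
        return False                   # unmanaged or non-compliant device
    return True

policy = AccessPolicy(allowed_groups={"engineering"},
                      allowed_locations={"US", "CA"})
print(is_access_allowed(policy, {"engineering"}, "US", 2, True))   # True
print(is_access_allowed(policy, {"marketing"}, "US", 2, True))     # False
```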
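And here is a sketch of the second recommendation's real-time content inspection, reduced to its simplest form: match outbound prompt text against patterns for sensitive data before it reaches a genAI provider. The patterns and category names are illustrative stand-ins; production DLP relies on far richer detection than a few regexes, and the same inspection can run on genAI responses coming back in:

```python
# Toy real-time content inspection for genAI prompts. The patterns
# below are illustrative stand-ins for a real DLP rule set.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this record: SSN 123-45-6789, card 4111 1111 1111 1111"
hits = inspect_prompt(prompt)
if hits:
    print("Blocked: prompt contains", ", ".join(hits))   # ssn, credit_card
else:
    print("Prompt allowed")
```

In practice a check like this sits inline, for example in a forward proxy, so the block-or-allow decision happens before the prompt ever leaves the network.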
Original post URL: https://www.csoonline.com/article/4002103/cisos-beware-genai-use-is-outpacing-security-controls.html
Category & Tags: Artificial Intelligence, Generative AI, Risk Management