Source: www.csoonline.com
GenAI simplifies work – for companies as well as for attackers and malicious insiders. CISOs need to be prepared.
It’s every company’s nightmare: a competitor is targeting the company’s own customers with campaigns so precisely tailored that it can’t be a coincidence. The reasonable assumption is that the competitor has somehow gained access to this sensitive customer data.
The source of the data breach: a former employee used an AI assistant to access an internal database full of account data, copied sensitive details such as customer sales and product usage, and took them to his new employer.
This example illustrates a rapidly growing problem: the surging use of generative AI tools will inevitably lead to more data breaches. According to a recent Gartner survey, the most common AI use cases include GenAI-based applications such as Microsoft 365 Copilot and Salesforce. While these tools are an excellent way for companies to increase productivity, they also pose a major challenge for data security.
Data risks
Research shows that almost 99 percent of granted permissions go unused, and more than half of them are high-risk. Unused and overly broad access rights are a data security problem in their own right; artificial intelligence exacerbates the situation many times over. When a user asks an AI assistant a question, the tool formulates a natural-language answer based on internet content and company data using graph technology. Microsoft Copilot, for example, can access all the data the user can access, even data the user is not aware they have access to. Accordingly, Copilot can easily disclose sensitive information.
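The permissions problem described above can be made concrete with a small audit script. The sketch below is purely illustrative (the resource names, the sensitivity labels, and the data structures are assumptions, not taken from any specific product): it compares a list of permission grants against a log of actually used access and flags grants that are unused, marking those on sensitive resources as high-risk.

```python
# Hypothetical sketch: flag unused and high-risk permission grants.
# All names and data structures here are illustrative assumptions.

SENSITIVE = {"crm_accounts", "payroll", "source_code"}  # assumed sensitivity labels

def audit_grants(grants, access_log):
    """grants: list of (user, resource) pairs that are permitted.
    access_log: set of (user, resource) pairs actually used recently.
    Returns findings for grants that were never exercised."""
    findings = []
    for user, resource in grants:
        used = (user, resource) in access_log
        risky = resource in SENSITIVE
        if not used:
            label = "unused" + ("+high-risk" if risky else "")
            findings.append((user, resource, label))
    return findings

grants = [("alice", "crm_accounts"), ("alice", "wiki"), ("bob", "payroll")]
log = {("alice", "wiki")}
for finding in audit_grants(grants, log):
    print(finding)  # flags alice/crm_accounts and bob/payroll as unused+high-risk
```

In a real environment, the grant list would come from the identity provider or file-system ACLs and the access log from audit telemetry; the point is simply that unused grants on sensitive data are the ones an AI assistant silently inherits.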
AI lowers the barriers for attackers
AI has made the days when attackers had to “hack” systems and slowly, carefully scout out the environment a thing of the past. Now they can simply ask an AI assistant for sensitive information or for credentials to move laterally within the environment.
The biggest challenges for cybersecurity posed by AI are:
- Employees have access to too much data
- Sensitive data is often not marked or is marked incorrectly
- Insiders can quickly find and exfiltrate data using natural language
- Attackers can find secrets for privilege escalation and lateral movement
- It’s impossible to manually set the right level of access
- GenAI quickly generates new sensitive data
These data security challenges are not new. However, the speed and ease with which AI can surface information make these weaknesses easier than ever for attackers to exploit.
Protective measures against the AI risk
The first step in mitigating the risks associated with AI is doing the homework. Before deploying tools as powerful as Copilot, CISOs need to know where all their sensitive data is located. They also need to be able to analyze threats and risks, close security gaps, and fix misconfigurations efficiently.
Only when CISOs have a firm grip on data security in their environment and the right processes are in place is the company ready to introduce AI assistants. Even after installation, security managers should continuously monitor the following three areas:
- Access rights. It’s important to ensure that employee permissions are properly sized and that the AI tool’s access matches those permissions.
- Classification. As soon as CISOs know what sensitive data the company has, they can label it to effectively enforce DLP rules.
- Human activity. The use of AI assistants must be monitored and any suspicious behavior detected. Analyzing the prompts and the files that are accessed is crucial to prevent the misuse of artificial intelligence.
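The third area, monitoring human activity, can be sketched as a simple prompt screen. The patterns below are illustrative assumptions, not a real product's rule set; a production system would use much richer detection (classifiers, entity recognition, behavioral baselines) rather than two regexes. The sketch flags prompts that hunt for secrets or request bulk exports of customer data.

```python
import re

# Illustrative detection rules (assumptions, not a real DLP rule set):
# one for secret-hunting prompts, one for bulk-export requests.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"\b(password|api[_ ]?key|secret|credential)s?\b", re.I),
     "secret-hunting"),
    (re.compile(r"\b(all|every|export|dump)\b.*\b(customers?|accounts?|records?)\b", re.I),
     "bulk-export"),
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the labels of all rules an AI-assistant prompt triggers."""
    return [label for pattern, label in SUSPICIOUS_PATTERNS if pattern.search(prompt)]

print(screen_prompt("Export all customer accounts with revenue data"))  # ['bulk-export']
print(screen_prompt("Summarize last week's meeting notes"))             # []
```

Flagged prompts would not necessarily be blocked; routing them to a review queue alongside the files the assistant actually touched gives security teams the context the article calls for.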
Volker Sommer has worked in the software sector for more than 25 years, the last eight of them in cybersecurity. Since early 2024, he has been responsible for the German-speaking region and eastern Europe as regional sales director at Varonis Systems. Previously, he worked for VMware/Carbon Black, SailPoint and Palo Alto Networks, among others.
Original Post url: https://www.csoonline.com/article/3827114/how-to-prevent-ai-based-data-incidents.html
Category & Tags: Data Breach, Generative AI, Security