Source: securityboulevard.com – Author: Michael Vizard
Netskope today published a report that finds source code is posted to ChatGPT more often than any other type of sensitive data, at a rate of 158 incidents per 10,000 users per month.
At the same time, Netskope, a provider of a secure access service edge (SASE) platform, revealed it has added artificial intelligence (AI) capabilities to its platform that, in addition to accurately identifying more potential threats, include data classification and classifier technology that can be trained to recognize new types of data.
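As a rough illustration of what a trainable data classifier involves, the Python sketch below fits a simple text model on labeled examples and can be retrained as new data categories emerge. Everything here is hypothetical, including the labels and training snippets; it is a minimal sketch of the general technique, not a description of Netskope's proprietary classifiers.

```python
# Minimal sketch of a trainable data classifier (hypothetical example,
# not Netskope's technology). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: text snippets labeled by data category.
samples = [
    ("def transfer(account, amount): ...", "source_code"),
    ("import os\nos.environ['AWS_SECRET_ACCESS_KEY']", "source_code"),
    ("patient DOB 1984-03-12, diagnosis: hypertension", "health_data"),
    ("SSN 123-45-6789 on file for employee", "pii"),
    ("quarterly marketing newsletter draft", "benign"),
    ("meeting notes: discuss Q3 roadmap", "benign"),
]
texts, labels = zip(*samples)

# Character n-grams handle code tokens and identifiers reasonably well.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Retraining on freshly labeled examples is how such a classifier is
# "trained to recognize new data" categories over time.
print(clf.predict(["class PaymentGateway { private String apiKey; }"]))
```

A production system would of course train on far larger labeled corpora and use more sophisticated models, but the retrain-on-new-labels loop is the same idea.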
Naveen Palavalli, vice president of product go-to-market (GTM) strategy for Netskope, said Netskope AI is a superset of AI technologies the company is now applying via its cloud-based SASE platform. Netskope is, in effect, extending the data loss prevention (DLP) capabilities it already provides, using AI and machine learning algorithms to monitor network traffic, improve performance and identify threats, he added.
DLP is becoming a bigger concern as end users increasingly load sensitive data, such as intellectual property like source code, health care data and other personally identifiable information, into publicly available services such as ChatGPT, said Palavalli. Unless the right restrictions are placed on that data, it can be used to train large language models (LLMs), which can result in the data being made available to anyone.
Some organizations are banning the use of platforms such as ChatGPT because of those concerns, but on a practical level, it makes more sense to employ a DLP capability that alerts end users when they are about to share sensitive data, said Palavalli. Otherwise, end users will simply find ways to surreptitiously use a generative AI platform without any guidance, he added.
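To give a sense of what such a pre-submission alert involves, the Python sketch below scans text before it is sent to a generative AI service and flags anything that looks like source code or other sensitive data. The patterns and function names are hypothetical stand-ins; a real DLP product such as Netskope's relies on far more sophisticated detection than a handful of regexes.

```python
# Minimal sketch of a DLP-style check run before text is shared with a
# generative AI service (illustrative patterns only, not Netskope's logic).
import re

SENSITIVE_PATTERNS = {
    "source code": re.compile(r"(\bdef |\bclass |\bimport |#include|\bfunction\s*\()"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def check_outgoing_text(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Can you review this? def transfer(account, amount): ..."
hits = check_outgoing_text(prompt)
if hits:
    # A real DLP product would alert the user or block the request here;
    # this sketch just prints a warning.
    print(f"Warning: possible sensitive data detected: {', '.join(hits)}")
```

The point of alerting rather than blocking outright is exactly the one Palavalli makes: give users guidance at the moment of risk instead of driving them to unsanctioned workarounds.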
In general, as AI continues to evolve, it’s become increasingly apparent that organizations will need to rely on a cloud-based platform to store and analyze the massive amount of data needed to train AI models, said Palavalli. Most organizations will not have the IT resources required to collect, store and analyze that volume of data on their own, he noted.
In fact, like it or not, organizations are now engaged in a cybersecurity AI arms race. That race can only be won by relying on vendors with the resources to keep pace with the investments cybercriminals are making in generative AI to create, for example, phishing attacks that use multimedia images to mimic a targeted individual's boss.
There is, of course, no shortage of cybersecurity vendors investing in AI. The issue organizations need to come to terms with is that most cybersecurity professionals won't want to work for employers that don't provide the tools they need to succeed. The simple truth is that organizations without access to cybersecurity AI platforms will find themselves easy targets for cybercriminals who are already applying AI across a wide range of attack vectors.
Arguably, the only thing left to determine is how much damage those attacks will inflict before organizations can shore up their defenses.
Original Post URL: https://securityboulevard.com/2023/07/netskope-sees-lots-of-source-code-pushed-in-chatgpt/
Category & Tags: Analytics & Intelligence,Application Security,Cybersecurity,Data Security,Featured,News,Security Boulevard (Original),Spotlight,Threat Intelligence,Threats & Breaches,Vulnerabilities,AppSec,ChatGPT,LLMs,Netskope,Source Code – Analytics & Intelligence,Application Security,Cybersecurity,Data Security,Featured,News,Security Boulevard (Original),Spotlight,Threat Intelligence,Threats & Breaches,Vulnerabilities,AppSec,ChatGPT,LLMs,Netskope,Source Code