Aim Security to Limit Exposure of Sensitive Data to Generative AI Services – Source: securityboulevard.com

Source: securityboulevard.com – Author: Michael Vizard

Aim Security this week emerged from stealth to launch a platform that leverages large language models (LLMs) to prevent end users from inadvertently sharing sensitive data or intellectual property with a generative artificial intelligence (AI) platform such as ChatGPT.

Fresh from raising $10 million in funding, Aim Security CEO Matan Getz said the company’s platform provides data loss prevention (DLP) capabilities, leveraging LLMs to identify sensitive data that might exist in the prompts end users create to invoke generative AI platforms.

In effect, Aim Security is employing generative AI to better secure publicly available generative AI platforms. That approach is required because existing DLP tools are not able to identify sensitive data in the prompts used to invoke a generative AI service, said Getz.
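Aim Security has not published implementation details, but the core idea of inspecting a prompt before it leaves the organization can be illustrated with a minimal sketch. The regex patterns below are a crude stand-in for the LLM-based classification the article describes; the pattern and function names are hypothetical, not Aim Security's API.

```python
import re

# Hypothetical illustration only: a real prompt-level DLP system would use
# an LLM classifier rather than these simple regex patterns.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Block the prompt if it appears to contain sensitive data."""
    findings = scan_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)} detected")
    return prompt  # safe to forward to the generative AI service

# Example: this prompt would be blocked before reaching the AI service.
# guard_prompt("Summarize the account for jane.doe@example.com, SSN 123-45-6789")
```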

This has become a pressing issue because organizations are concerned that whatever data is used to create a prompt might one day be used to train an AI model and ultimately make that data accessible to anyone, noted Getz. The overall goal is to provide a platform that enables cybersecurity teams to holistically apply policies to limit what types of data can be shared with a generative AI platform, he added.
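The article does not say how such policies are expressed. As a purely illustrative sketch building on the hypothetical scanner above, a per-service policy table might map detected data categories to actions, with the strictest matching action winning:

```python
# Hypothetical policy table: which detected data categories are permitted
# for which generative AI service. Names and structure are illustrative,
# not Aim Security's actual configuration format.
POLICIES = {
    "chatgpt": {"email": "redact", "ssn": "block", "credit_card": "block"},
    "internal-llm": {"email": "allow", "ssn": "redact", "credit_card": "block"},
}

def decide(service: str, findings: list[str]) -> str:
    """Return the strictest action required by the service's policy."""
    severity = {"allow": 0, "redact": 1, "block": 2}
    policy = POLICIES.get(service, {})
    actions = [policy.get(f, "block") for f in findings]  # unknown data => block
    return max(actions, key=severity.get, default="allow")

# Example: decide("chatgpt", ["email", "ssn"]) -> "block"
```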

In effect, publicly available generative AI platforms such as ChatGPT are yet another instance of shadow IT services. Some organizations have attempted to ban the usage of these services because of this concern, but practically, that may be impossible to enforce. Instead, organizations need to find ways to enact policies that enable end users to safely invoke these services, said Getz.

The proverbial generative AI genie is already out of the bottle, so there is no going back. Providers of these platforms generally promise not to use the data included in prompts to train future iterations of their models, but many organizations are going to prefer to exercise more control over their assets. It’s also only a matter of time before fines are levied for violating existing mandates because, for example, personally identifiable information (PII) was exposed to a generative AI service.

It’s still not clear whether cybersecurity or governance, risk and compliance (GRC) teams will ultimately assume responsibility for ensuring that data is not leaked via a generative AI service. However, as these functions increasingly converge, many organizations are centralizing access control over cloud services, noted Getz.

Like most innovations, generative AI can be a force for good or evil. Cybercriminals will undoubtedly attempt to employ stolen credentials to gain access to any data that might be shared with these platforms. The challenge is that many organizations have lax data management policies in place, which means they don’t typically have a firm handle on what data might be stored in a cloud service or in a spreadsheet residing on a PC. The one thing that is certain, however, is that regardless of where that data winds up, organizations will still be held accountable for how it is used.

Original Post URL: https://securityboulevard.com/2024/02/aim-security-to-limit-exposure-of-sensitive-data-to-generative-ai-services/

Category & Tags: Analytics & Intelligence, Application Security, Cybersecurity, Data Privacy, Data Security, Featured, Identity & Access, News, Security Boulevard (Original), Social – X, Spotlight, Threat Intelligence, Threats & Breaches, Vulnerabilities, AI models, Aim Security, data protection, generative AI, LLMs
