Source: securityboulevard.com – Author: Michael Vizard
Sentra this week introduced a tool that automatically redacts personally identifiable information (PII) from prompts used to share data with generative artificial intelligence (AI) platforms such as ChatGPT and Google Bard.
Sentra CTO Ron Reiter said Sentra ChatDLP Anonymizer would enable organizations to strike a balance between an outright ban on the use of these platforms and the need to make sure sensitive data isn’t made publicly available.
There have already been several instances—including at Samsung, for example—where employees shared their organization’s proprietary data with a generative AI platform in a bid to improve their productivity. The issue is that once data is shared via a prompt with a generative AI platform, it can be recorded and used to expand the corpus of data that refines the underlying models. Proprietary data shared with a generative AI engine could therefore later surface in a query result for anyone to read.
Reiter said ChatDLP is an extension of the data security posture management (DSPM) platform Sentra already provides, which enables organizations to identify sensitive data that violates cybersecurity or compliance policies. ChatDLP leverages named entity recognition (NER) models and regular expressions, built on generative AI technologies the company has added to the platform, to filter sensitive information—including names, email addresses, credit card numbers and phone numbers—out of any prompt.
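To illustrate the regular-expression half of that approach, the sketch below redacts a few common PII types from a prompt before it would be sent to an external service. This is a minimal, hypothetical example—the pattern names and placeholder format are assumptions, not Sentra’s implementation, and a production system like ChatDLP would pair patterns such as these with NER models to catch entities (like personal names) that regexes cannot reliably match.

```python
import re

# Illustrative patterns for a few common PII types. Real deployments
# would use far more robust patterns plus NER models; these are
# simplified for demonstration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace any matched PII with a typed placeholder so the
    prompt can be shared with a generative AI platform."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Redacting in place, rather than blocking the prompt outright, is what preserves most of the prompt’s utility while keeping the sensitive values out of the model provider’s hands.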
That approach enables employees to take advantage of the productivity benefits of generative AI with minimal effect on the prompts being used, said Reiter.
It’s still early days as far as enterprise adoption of generative AI is concerned, but there is little doubt these platforms are already widely used. There is also little doubt that data privacy and other compliance requirements will soon be extended to cover how data is shared with generative AI platforms. Privacy regulations such as the European Union’s General Data Protection Regulation (GDPR), which gives individuals the right to be forgotten, are going to be especially problematic for any organization that discovers personal data has been shared with a generative AI platform.
One way or another, the rise of generative AI platforms is going to force more organizations to revisit how they manage and secure data. Many organizations, despite any number of regulations, are still a little too cavalier when it comes to who can access and share data. Platforms such as ChatGPT—and similar services that are starting to proliferate—will force that issue.
In the meantime, it will fall to cybersecurity teams to make sure policies are adhered to in the least disruptive way possible. Banning access to platforms such as ChatGPT won’t prove effective, given enforcement challenges. The Sentra approach makes it possible for cybersecurity teams to apply a level of control without resorting to an outright ban.
Original Post URL: https://securityboulevard.com/2023/06/sentra-adds-tool-for-redacting-generative-ai-prompts/
Category & Tags: Application Security, Cybersecurity, Data Security, Featured, Governance, Risk & Compliance, Identity & Access, News, Security Boulevard (Original), Spotlight, ChatGPT, Data Privacy, data protection, GDPR, generative AI, Sentra