Broader support for confidential AI use cases provides safeguards for machine learning and AI models to execute on encrypted data inside trusted execution environments.

Opaque Systems has announced new features in its confidential computing platform to protect the confidentiality of organizational data during large language model (LLM) use. Through new privacy-preserving generative AI and zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing, Opaque said it now also enables organizations to securely analyze their combined confidential data without sharing or revealing the underlying raw data. Meanwhile, broader support for confidential AI use cases provides safeguards for machine learning and AI models to use encrypted data inside trusted execution environments (TEEs), preventing exposure to unauthorized parties, according to Opaque.

LLM use can expose businesses to significant security and privacy risks

The potential risks of sharing sensitive business information with generative AI algorithms are well documented, as are vulnerabilities known to impact LLM applications. While some generative AI models such as ChatGPT are trained on public data, the usefulness of LLMs can skyrocket if they are trained on an organization’s confidential data without risk of exposure, according to Opaque. However, if an LLM provider has visibility into the queries sent by its users, access to highly sensitive queries – like proprietary code – becomes a significant security and privacy issue, and the risk of hacking increases dramatically, Jay Harel, VP of product at Opaque Systems, tells CSO. Protecting the confidentiality of sensitive data like personally identifiable information (PII) or internal data, such as sales figures, is critical for enabling the expanded use of LLMs in an enterprise setting, he adds.

“Organizations want to fine-tune their models on company data, but in order to do so, they must either give the LLM provider access to their data or allow the provider to deploy the proprietary model within the customer organization,” Harel says. “Additionally, when training AI models, the training data is retained regardless of how confidential or sensitive it is. If the host system’s security is compromised, it may lead to the data leaking or landing in the wrong hands.”

Opaque platform leverages multiple layers of protection for sensitive data

By running LLMs within Opaque’s confidential computing platform, customers can ensure that their queries and data remain private and protected – never exposed to the model or service provider, never used in unauthorized ways, and accessible only to authorized parties, Opaque claimed. “The Opaque platform utilizes privacy-preserving technologies to secure LLMs, leveraging multiple layers of protection for sensitive data against potential cyber-attacks and data breaches through a powerful combination of secure hardware enclaves and cryptographic fortification,” Harel says.
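The layered pattern Harel describes – encryption of data before it leaves the client, combined with hardware-backed attestation of where it may be decrypted – can be illustrated with a minimal, purely hypothetical sketch. This is not Opaque’s SDK or API; the function names (verify_attestation, encrypt_prompt, release_key_if_attested) are invented for illustration, the attestation check is stubbed out, and only the widely used Python cryptography library is assumed.

```python
# Illustrative sketch only: a generic "encrypt before it leaves the client" pattern,
# not Opaque's actual platform. Attestation verification is stubbed out.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def verify_attestation(quote: bytes) -> bool:
    """Placeholder: a real deployment would validate a hardware attestation
    quote (e.g., from an Azure confidential VM) against expected enclave
    measurements before releasing any key."""
    return bool(quote)  # assumption: real verification happens elsewhere


def encrypt_prompt(prompt: str) -> tuple[bytes, bytes, bytes]:
    """Encrypt the prompt client-side so the LLM host never sees plaintext
    unless it runs inside a verified TEE that receives the key."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, prompt.encode(), None)
    return key, nonce, ciphertext


def release_key_if_attested(key: bytes, quote: bytes) -> bytes | None:
    """Hand the data key only to a service that proves it is running
    inside the expected trusted execution environment."""
    return key if verify_attestation(quote) else None


if __name__ == "__main__":
    key, nonce, ct = encrypt_prompt("Summarize our Q3 sales figures.")
    released = release_key_if_attested(key, quote=b"attestation-quote-bytes")
    print("key released to enclave:", released is not None)
```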

For example, the solution allows generative AI models to run inference inside confidential virtual machines (CVMs), he adds. “This enables the creation of secure chatbots that allow organizations to meet regulatory compliance requirements.”
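The enclave-side half of that flow would, conceptually, decrypt the query only inside the CVM, run inference, and re-encrypt the answer before it leaves the enclave. Again, this is a hypothetical sketch: handle_query and run_model are invented names, the model call is mocked, and nothing here reflects Opaque’s actual implementation.

```python
# Hypothetical CVM-side counterpart: decrypt inside the enclave, run inference,
# and return the response encrypted under the client-provided key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def run_model(prompt: str) -> str:
    """Stand-in for the actual LLM inference running inside the CVM."""
    return f"[model response to: {prompt}]"


def handle_query(key: bytes, nonce: bytes, ciphertext: bytes) -> tuple[bytes, bytes]:
    """Decrypt the query only inside the enclave, answer it, and encrypt
    the response so plaintext never leaves the trusted boundary."""
    aesgcm = AESGCM(key)
    prompt = aesgcm.decrypt(nonce, ciphertext, None).decode()
    answer = run_model(prompt)
    out_nonce = os.urandom(12)
    return out_nonce, aesgcm.encrypt(out_nonce, answer.encode(), None)
```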

Michael Hill is the UK editor of CSO Online. He has spent the past five-plus years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security.
