At Google, we recognize that the potential of artificial intelligence (AI), especially generative AI, is immense.
However, as we pursue progress on these new frontiers of innovation, we believe it is equally important to establish clear industry security standards for building and deploying this technology boldly and responsibly. A framework spanning the public and private sectors is essential to ensure that responsible actors safeguard the technology supporting AI advancements, so that when AI models are implemented, they are secure by design.
That’s why last month, we introduced the Secure AI Framework (SAIF), a conceptual framework for secure AI systems. SAIF is inspired by the security best practices — like reviewing, testing and controlling the supply chain — that we’ve applied to software development, while incorporating our understanding of security mega-trends and risks specific to AI systems. It is designed to start addressing AI-specific risks such as stealing the model, poisoning the training data, injecting malicious inputs through prompt injection, and extracting confidential information from the training data.


















































