
Google Introduces SAIF, a Framework for Secure AI Development and Use – Source: www.securityweek.com


Source: www.securityweek.com – Author: Kevin Townsend

The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

All new technologies bring new opportunities, threats, and risks. As businesses concentrate on harnessing the opportunities, the threats and risks can be overlooked. With AI, that oversight could be disastrous for businesses, their customers, and people in general. SAIF offers six core elements to ensure maximum security in AI.

Expand strong security foundations to the AI ecosystem

Many existing security controls can be expanded and/or focused on AI risks. A simple example is protection against injection techniques, such as SQL injection. “Organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks,” suggests SAIF.
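As a purely illustrative sketch of that advice, the Python snippet below length-limits untrusted input and screens it against a small deny-list of phrases common in prompt-injection attempts. The character limit, patterns, and function name are assumptions made for the example, not part of SAIF, and real defenses would layer many more controls (instruction/data separation, allow-lists, output filtering).

```python
import re

# Illustrative only: naive input limiting and sanitization ahead of an LLM call.
MAX_PROMPT_CHARS = 2000  # assumed limit for the example, not a SAIF-mandated value

# Assumed deny-list of phrases often seen in prompt-injection attempts.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"reveal your (system|hidden) prompt",
]

def sanitize_prompt(user_input: str) -> str:
    """Limit the length of untrusted text and reject obvious injection phrasing."""
    text = user_input.strip()[:MAX_PROMPT_CHARS]   # limiting
    for pattern in SUSPICIOUS_PATTERNS:            # sanitization
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("Possible prompt-injection attempt blocked")
    return text

if __name__ == "__main__":
    print(sanitize_prompt("Summarize this quarterly report in three bullet points."))
```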

Traditional security controls will often be relevant to AI defense but may need to be strengthened or expanded. Data governance and protection become critical to preserving the integrity of the training data used by AI systems. The old concept of ‘rubbish in, rubbish out’ is magnified manyfold by AI, and it becomes critical where decisions about businesses and people are based on that rubbish.

Extend detection and response to bring AI into an organization’s threat universe

Threat intelligence must now also include an understanding and awareness of threats relevant to organizations’ own AI usage, including the consequences of a breach. If a data pool is poisoned without knowledge of that poisoning, AI outputs will be adversely and possibly invisibly affected. 

It will be necessary to monitor AI output to detect algorithmic errors and adversarial input. “Organizations that use AI systems must have a plan for detecting and responding to security incidents and mitigate the risks of AI systems making harmful or biased decisions,” says Google.
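As a hedged illustration of what such output monitoring might look like, the sketch below flags model predictions whose confidence drops well below a rolling baseline, a crude stand-in for detecting algorithmic errors and adversarial input; the window size, threshold, and class names are assumptions, not Google guidance.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Prediction:
    label: str
    confidence: float

class OutputMonitor:
    """Toy monitor: flag outputs whose confidence falls far below recent history."""

    def __init__(self, window: int = 100, drop_threshold: float = 0.3):
        self.window = window                  # assumed rolling-window size
        self.drop_threshold = drop_threshold  # assumed alerting margin
        self.history: list[float] = []

    def check(self, pred: Prediction) -> bool:
        """Return True if the prediction looks anomalous and should be reviewed."""
        baseline = mean(self.history) if self.history else pred.confidence
        anomalous = pred.confidence < baseline - self.drop_threshold
        self.history = (self.history + [pred.confidence])[-self.window:]
        return anomalous

if __name__ == "__main__":
    monitor = OutputMonitor()
    for conf in (0.92, 0.95, 0.91, 0.35):  # the sudden drop simulates suspect output
        print(monitor.check(Prediction("approve", conf)))
```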

Automate defenses to keep pace with existing and new threats

This is the most common advice used in the face of AI-based attacks – automate defenses with AI to counter the increasing speed and magnitude of adversarial AI-based attacks. But Google warns that humans must be kept in the loop for important decisions, such as determining what constitutes a threat and how to respond to it.

The human element is important for both detection and response. “This is because AI systems can be biased or make mistakes, and human oversight is necessary to ensure that AI systems are used ethically and responsibly,” says Google.

AI-based automation goes beyond the automated detection of threats and can also be used to decrease the workload and increase the efficiency of the security team. Secure scripts could be generated through no-code systems to control and automate security processes. Reverse engineering a malicious binary could be automated, with a YARA rule then generated automatically to look for evidence of related activity.
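To make the second idea concrete, here is a toy sketch that turns strings recovered from a reverse-engineered binary into a YARA rule. The rule name, indicator strings, and match threshold are hypothetical examples for illustration; a real pipeline would draw its indicators from actual analysis tooling rather than a hard-coded list.

```python
def build_yara_rule(rule_name: str, indicators: list[str], min_matches: int = 2) -> str:
    """Assemble a minimal YARA rule from a list of string indicators."""
    lines = [f"rule {rule_name}", "{", "    strings:"]
    for i, indicator in enumerate(indicators):
        lines.append(f'        $s{i} = "{indicator}" ascii wide')
    lines += ["    condition:", f"        {min_matches} of ($s*)", "}"]
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical strings that reverse engineering (automated or manual) might surface.
    print(build_yara_rule("Suspected_Loader", ["evil-c2.example.com", "persist.dll"]))
```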

Harmonize platform level controls to ensure consistent security across the organization

As the use of AI grows, it is important to have periodic reviews to identify and mitigate associated risks. This should include the AI models used and the data used to train them, together with the security measures implemented, and the AI security risk awareness and training for all employees.

Reduce overlapping frameworks for security and compliance controls to help reduce fragmentation. Fragmentation increases complexity, costs, and inefficiencies. Reducing fragmentation will, suggests Google, “provide a ‘right fit’ approach to controls to mitigate risk.”

Adapt controls to adjust mitigations and create faster feedback loops for AI deployment

This involves continuously testing and evolving systems in use, including techniques such as reinforcement learning based on incidents and user feedback. The training data needs to be monitored and updated as necessary, and the models fine-tuned to respond to attacks.

It involves staying continuously aware of new attack techniques, including prompt injection, data poisoning, and evasion attacks. “By staying up to date on the latest attack methods, organizations can take steps to mitigate these risks,” says Google. Red teaming can also help organizations identify and mitigate security risks before they can be exploited by malicious actors.

An effective feedback loop is required to ensure that everything learned is put to good use, whether that is to improve defenses or improve the AI model itself.

Contextualize AI system risks in surrounding business processes

This involves a thorough understanding of how AI will be used within business processes, and it requires a complete inventory of the AI models in use. Their risk profiles should be assessed based on the specific use cases, data sensitivity, and the shared responsibility that comes with leveraging third-party solutions and services.
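As an assumed sketch of what one record in such an inventory might capture, the snippet below models an entry with the factors the article names (use case, data sensitivity, third-party dependencies); the field names and risk scale are illustrative, not defined by SAIF.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One hypothetical entry in an organization's AI model inventory."""
    name: str
    owner: str                                # business use case owner
    use_cases: list[str]
    data_sensitivity: str                     # e.g. "public", "internal", "restricted"
    third_party_dependencies: list[str] = field(default_factory=list)
    risk_rating: str = "unassessed"           # e.g. "low", "medium", "high"

if __name__ == "__main__":
    entry = ModelInventoryEntry(
        name="support-ticket-classifier",
        owner="customer-support",
        use_cases=["ticket triage"],
        data_sensitivity="internal",
        third_party_dependencies=["hosted LLM API"],
        risk_rating="medium",
    )
    print(entry)
```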

“Implement data privacy, cyber risk, and third-party risk policies, protocols and controls throughout the ML model lifecycle to guide the model development, implementation, monitoring, and validation,” says Google.

Throughout this, it is important to assemble a strong AI security team. “AI systems are often complex and opaque, have a large number of moving parts, rely on large amounts of data, are resource intensive, can be used to apply judgment-based decisions, and can generate novel content that may be offensive, harmful, or can perpetuate stereotypes and social biases,” warns Google.

For many organizations, this will mean expanding the expertise available to the security team to include business use case owners, security, cloud engineering, risk and audit teams, privacy, legal, data science, development, and responsible AI and ethics.

Google has based its SAIF framework on 10 years of experience developing and using AI in its own products. The company hopes that making its experience public will lay the groundwork for secure AI – just as its BeyondCorp access model led to the zero trust principles that are an industry standard today.

Related: Insider Q&A: Artificial Intelligence and Cybersecurity In Military Tech

Related: ChatGPT’s Chief Testifies Before Congress, Calls for New Agency to Regulate Artificial Intelligence

Related: Harris to Meet With CEOs About Artificial Intelligence Risks

Related: Cyber Insights 2023 | Artificial Intelligence

Original Post URL: https://www.securityweek.com/google-introduces-saif-a-framework-for-secure-ai-development-and-use/

Category & Tags: Artificial Intelligence,AI,google – Artificial Intelligence,AI,google

