web analytics

How Microsoft Secures Generative AI – Source: www.databreachtoday.com

Rate this post

Source: www.databreachtoday.com – Author: 1

Artificial Intelligence & Machine Learning
,
Next-Generation Technologies & Secure Development

Enabling Safety in the Age of Generative AI

Microsoft


May 23, 2024    

How Microsoft Secures Generative AI

Trustworthy. Responsible. Foundational. Now.

See Also: Webinar | Mythbusting MDR

We are at the start of an exciting new age: an age of limitless possibilities, an age of generative AI, or GAI. Like nothing else in human history, gen AI has emerged as an engine for innovation, with vast applications and new use cases that continue to reveal themselves every day.

Because the technology is rapidly revolutionizing business and the world at large, and its unbridled potential has captured the popular imagination like nothing else in recent memory, we feel a responsibility to share what we have learned about using this technology safely.

We take this responsibility seriously. Microsoft is committed to security, privacy and compliance across everything we do, and our approach to AI has been no different. In 2016, we began our work on responsible AI. By 2018, we had identified our Responsible AI Principles, and by 2019, we became the first major cloud provider to create a permanent Office of Responsible AI to both govern our AI program and provide actionable guidance for engineering teams building AI systems. During our decadelong focus on delivering AI to our customers, we have developed standards and best practices to address them – standards and best practices we share openly.

No matter which AI solutions you choose – one of Microsoft’s Copilot offerings, your own AI application built on the Azure AI platform, or an AI system offered elsewhere – we want to help you use the awesome power of AI safely and responsibly across systems in a way that keeps your data secure and private.

We believe the possibilities for generative AI are limitless. To respect this awesome power, we take a holistic approach to generative AI security that considers the technology, its users and society at large across four areas of protection: data privacy and ownership, transparency and accountability, user guidance and policy, and secure by design.

Data privacy and ownership

Microsoft applies the same privacy commitments to GAI as we do our software products, including our Copilots. That means customer data remains private. We empower customers to retain control of their information, which will never be used to train foundational models or be shared with OpenAI or other Microsoft customers without permission.

Transparency and accountability

Generative AI, just like human beings, sometimes gets it wrong. To make sure the content created by generative AI is as credible as it appears, it’s essential for the AI to use authoritative data sources to foster accuracy, showcase reasoning and sources to maintain transparency, and encourage an open dialog with the provision for feedback – an avenue that permits users to contribute substantially to the enhancement of AI results.

User guidance and policy

AI is an awesome force. To mitigate potential overreliance, we try to temper users’ trust while encouraging them to think critically about the information served by citing sources and using carefully considered language. We also consider hostile misuse, where users try to engage the AI in harmful actions, like generating dangerous code or instructions to build a weapon. To shield against this kind of misuse, we layer deep safety protocols into the system, setting clear boundaries on what AI can and cannot do to maintain a safe and responsible usage environment.

Secure by design

To prepare for generative AI threat vectors, we’ve added new steps to our Security Development Lifecycle, including updating the Threat Modeling SDL requirement to account for AI and machine learning-specific threats and mandating that teams adhere to the Responsible AI Standard. We also
continually monitor and log our large language model, or LLM, interactions for threats and implement strict input validation and sanitization of user-provided prompts. Finally, we put all of our GAI products through multiple rounds of AI red teaming to look for vulnerabilities and ensure we have proper mitigation strategies in place.

AI Red Team

Instituted in 2018, Microsoft’s AI Red Team mirrors the tactics and techniques of potential adversaries to find and fix vulnerabilities. The team’s charge extends beyond just securing against potential threats; it encompasses a critical examination of other system failures, including the generation of potentially harmful content, providing us with a comprehensive picture of the system’s integrity, confidentiality, and availability. The world of AI is always in flux, and as such, our red teaming efforts are relentless and adaptive, embracing an ongoing cycle of testing both before and after product release.

AI adoption rates are increasing fast – frequently without the knowledge or oversight of management – and demand for its applications continues to rise exponentially. As a business decision-maker, embracing GAI is a strategic move that can revolutionize your organization.

Is your organization ready? Here’s how to get started:

Step 1: Implement a Zero Trust Security Model

Zero trust is the cornerstone of any resilience plan, limiting the impact on an organization. Instead of assuming everything behind the corporate firewall is safe, the zero trust model assumes breach and verifies each request as though it originates from an open network.

Learn more about Zero Trust.

Step 2: Adopt Cyber Hygiene Standards

Basic security hygiene still protects against 99% of attacks. Meeting the minimum standards for cyber hygiene is essential for protecting against cyberthreats, minimizing risk and ensuring the ongoing viability of the business.

Learn more about cyber hygiene.

Step 3: Establish a Data Security and Protection Plan

For today’s environment, a defense-in-depth approach offers the best protection to fortify your data security. There are five components to this strategy, all of which can be enacted in whatever order suits your organization’s unique needs and possible regulatory requirements.

Learn more about data security and protection.

Step 4: Establish an AI Governance Structure

AI-ready organizations will have implemented processes, controls, and accountability frameworks that govern data privacy, security, and development of their AI systems, including the implementation of Responsible AI Standards.

Learn more about AI governance.

Original Post url: https://www.databreachtoday.com/blogs/how-microsoft-secures-generative-ai-p-3629

Category & Tags: –

Views: 0

LinkedIn
Twitter
Facebook
WhatsApp
Email

advisor pick´S post

More Latest Published Posts