Artificial intelligence (AI) technologies have significant potential to transform society and
people’s lives – from commerce and health to transportation and cybersecurity to the environment
and our planet. AI technologies can drive inclusive economic growth and support
scientific advancements that improve the conditions of our world. AI technologies, however,
also pose risks that can negatively impact individuals, groups, organizations, communities,
society, the environment, and the planet. Like risks for other types of technology, AI
risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or
low-probability, systemic or localized, and high- or low-impact.
While there are myriad standards and best practices to help organizations mitigate the risks
of traditional software or information-based systems, the risks posed by AI systems are in
many ways unique (see Appendix B). AI systems, for example, may be trained on data that
can change over time, sometimes significantly and unexpectedly, affecting system functionality
and trustworthiness in ways that are hard to understand. AI systems and the contexts
in which they are deployed are frequently complex, making it difficult to detect and respond
to failures when they occur. AI systems are inherently socio-technical in nature, meaning
they are influenced by societal dynamics and human behavior. AI risks – and benefits –
can emerge from the interplay of technical aspects combined with societal factors related
to how a system is used, its interactions with other AI systems, who operates it, and the
social context in which it is deployed.
These risks make AI a uniquely challenging technology to deploy and utilize both for organizations
and within society. Without proper controls, AI systems can amplify, perpetuate,
or exacerbate inequitable or undesirable outcomes for individuals and communities. With
proper controls, AI systems can mitigate and manage inequitable outcomes.
AI risk management is a key component of responsible development and use of AI systems.
Responsible AI practices can help align the decisions about AI system design, development,
and uses with intended aims and values. Core concepts in responsible AI emphasize
human centricity, social responsibility, and sustainability. AI risk management can
drive responsible uses and practices by prompting organizations and their internal teams
who design, develop, and deploy AI to think more critically about context and potential
or unexpected negative and positive impacts. Understanding and managing the risks of AI
systems will help to enhance trustworthiness, and in turn, cultivate public trust.