Artificial intelligence (AI) technologies are increasingly used by individuals and by public and private institutions to enhance the speed and quality of decision-making. Yet the rise of AI has introduced new risks, including the potential for bias and for violations of individuals' rights. AI risk and impact assessments offer a formalized, structured means to characterize the risks arising from the use of AI systems and to identify proportionate risk mitigation measures. These assessments may be used by both public and private entities seeking to develop and deploy trustworthy AI systems, and they are broadly considered a promising tool for AI governance and accountability.
The paper illustrates how AI risk and impact assessments may help mitigate the harms arising from specific AI systems, while also highlighting some of the limitations associated with their use. It provides an overview of five AI risk and impact assessments that have been implemented or proposed by governments around the world (in Canada, New Zealand, Germany, the European Union, and San Francisco) and includes a comparative analysis of how they can help assess and mitigate risks. The paper treats risk assessments and impact assessments together: although the two terms differ, they are often used interchangeably, and analyzing them side by side highlights their meaningful overlap and enables comparison.
The paper then turns to the United States, focusing on efforts underway at the National Institute of Standards and Technology (NIST) to develop an AI risk management framework. NIST has been tasked by the United States Congress with developing a voluntary AI risk management framework that organizations can use to promote trustworthy AI development and use. The paper examines the risk management frameworks NIST has previously developed for cybersecurity and privacy, and identifies novel considerations raised by AI that may not map cleanly onto those earlier frameworks.
In addition, the paper offers recommendations to help NIST and other interested entities develop AI risk and impact assessments that effectively safeguard the wider interests of society. These recommendations include:
- Certain risk mitigation measures are emphasized across all of the surveyed frameworks and should be treated as an essential starting point. These include human oversight, external review and engagement, documentation, testing for and mitigation of bias, notifying those affected by an AI system of its use, and regular monitoring and evaluation.
- In addition to assessing impacts on safety and rights, it is important to account for impacts on inclusiveness and sustainability in order to protect the wider interests of society and ensure that marginalized communities are not left behind.
- Individuals and communities affected by the use of AI systems should be included in the process of designing risk and impact assessments to help co-construct the criteria featured in the framework.
- Risk and impact assessments should provide for banning specific AI systems that present unacceptable risks, to ensure that fundamental values and safety are not compromised.
- Periodic reassessments should be required so that continuous-learning AI systems, which may change notably over time, continue to meet the required standards.
- Risk and impact assessments should be tied to procurement and purchase decisions to incentivize the use of voluntary frameworks.
The widespread use of AI risk and impact assessments will help ensure that we can gauge the risks of AI systems as they are developed and deployed in society, and that we are informed enough to take appropriate steps to mitigate potential harms. In turn, this will promote public confidence in AI and enable society to enjoy the potential benefits of AI systems.