MITIGATING ARTIFICIAL INTELLIGENCE (AI) RISK: Safety and Security Guidelines for Critical Infrastructure Owners and Operators

The U.S. Department of Homeland Security (DHS) was tasked in Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to develop safety and security guidelines for use by critical infrastructure owners and operators. DHS developed these guidelines in coordination with the Department of Commerce, the Sector Risk Management Agencies (SRMAs) for the 16 critical infrastructure sectors, and relevant independent regulatory agencies.
The guidelines begin with insights learned from the Cybersecurity and Infrastructure Security Agency's (CISA) cross-sector analysis of sector-specific AI risk assessments completed by SRMAs and relevant independent regulatory agencies in January 2024. The CISA analysis includes a profile of cross-sector AI use cases and patterns in adoption and establishes a foundational analysis of cross-sector AI risks across three distinct types: 1) Attacks Using AI, 2) Attacks Targeting AI Systems, and 3) Failures in AI Design and Implementation. DHS drew upon this analysis, as well as analysis from existing U.S. government policy, to develop specific safety and security guidelines to mitigate the identified cross-sector AI risks to critical infrastructure. The guidelines incorporate the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), including its four functions that help organizations address the risks of AI systems: Govern, Map, Measure, and Manage. While the guidelines in this document are written broadly so they are applicable across critical infrastructure sectors, DHS encourages owners and operators of critical infrastructure to consider sector-specific and context-specific AI risks and mitigations.
