In April 2021, the European Commission published a proposal for an artificial intelligence (AI) regulation (1). The proposal focuses on high-risk AI systems, for which requirements include adequate levels of robustness, accuracy and cybersecurity. The proposed regulation requires the use of technical standards during the design and development of high-risk AI systems to ensure a consistent and high level of protection of public interests, such as health, safety and fundamental rights. Work on AI-related standards has already begun; however, standards development takes a long time, and the standards will most likely not be ready before the regulation enters into force. Until then, a collection of good practices would benefit AI ecosystem stakeholders.
To this end, ENISA has already published two studies on cybersecurity for AI: AI Cybersecurity Challenges – Threat landscape for artificial intelligence (2) and Securing Machine Learning Algorithms (3), which provide guidance on cybersecurity within the machine learning (ML) life cycle of AI. However, these studies do not fully cover the entire AI life cycle (from concept to decommissioning), the associated infrastructure or all the elements of the AI supply chain.
The importance of identifying good cybersecurity practices for AI that go beyond ML has also been noticed by the Commission, which requested ENISA's assistance not only in identifying existing cybersecurity practices for AI, but also in gathering information on the current state of cybersecurity requirements for AI at the EU and national levels, along with the monitoring and enforcement of these requirements by national competent authorities (NCAs).
In this report, we present a scalable framework to guide NCAs and AI stakeholders on the steps they need to follow to secure their AI systems, operations and processes by using existing knowledge and best practices and by identifying missing elements. The framework consists of three layers (cybersecurity foundations, AI-specific cybersecurity and sector-specific cybersecurity for AI) and provides a step-by-step approach to following good cybersecurity practices, helping stakeholders build trustworthiness in their AI activities.
We gathered information from the NCAs (whether AI-specific or cybersecurity-related) through a survey based on the framework presented in this report and on the main principles of the proposed Artificial Intelligence Act (AI Act) and the coordinated plan on AI (4). We analysed the current state of cybersecurity requirements and the monitoring and enforcement practices that the NCAs have adopted (or plan to develop) to ensure that national AI stakeholders address cybersecurity requirements. The survey results revealed that the readiness level of NCAs is low and that further measures are needed. The report also points out the additional research efforts needed to develop such AI-specific cybersecurity practices.
The main recommendation is to treat the cybersecurity of AI systems as an additional effort on top of existing practices for securing organisations' information and communications technology (ICT). Existing cybersecurity practices need to be complemented with AI-specific practices, which address, among other things, the dynamic socio-technical nature of AI systems. Examples of additional practices include dynamic, measurable risk assessments of technical threats to AI (e.g., data poisoning) and of social threats (e.g., bias, lack of fairness), as well as continuous risk management (RM) throughout the AI system life cycle. The operational environment (e.g., the energy sector) and usage (e.g., monitoring of smart meters) of the AI system need to be considered for the realistic and accurate mitigation of sectoral threats.