Cybersecurity of Artificial Intelligence in the AI Act

The EU AI Act (European Commission 2021b) represents a significant milestone in the regulation of artificial intelligence (AI) technologies. This legislation aims to establish a horizontal framework for trustworthy AI systems that are safe, secure and compliant with European fundamental rights and values. At its core, the AI Act is designed to address risks to health, safety and fundamental rights specifically associated with AI technologies by setting legally binding requirements for high-risk AI systems. Prior to deployment on the EU market, a provider of a high-risk AI application must adopt the organisational and technical measures necessary to achieve conformity with these requirements. Behind this approach is the intention to build public trust in AI as a transformative technology and to ensure that it benefits society as a whole. Note that at the time of publication the AI Act is still a legislative proposal and awaits final adoption. By “AI Act”, this report refers to the original European Commission proposal (European Commission 2021b).

In line with the well-established EU system of product safety regulation (European Parliament and Council of the European Union 2012), harmonised standards are one of the main means of achieving compliance and conformity with the legislative requirements. Harmonised standards will be developed by the European Standardisation Organisations following a standardisation request of the European Commission published in May 2023 (European Commission 2023). It is important to note that the standardisation request makes clear that the reach of AI standardisation is determined by the technological maturity of AI technologies, explicitly referring to the state of the art (SOTA) in technology as the basis of standardisation.

AI is a rapidly evolving field, with novel and emergent approaches, models and tools being introduced on an increasingly frequent basis (OECD 2023; 2022). This pace has accelerated dramatically in recent months, driven by new developments and products based on large-scale AI models (Bommasani et al. 2021). In the current AI scientific ecosystem, a wide variety of techniques and approaches at different levels of maturity coexist. Whilst some AI models are based on well-established techniques that have been used for decades, many techniques, in particular those driving current innovative developments, have only been in use for a few years or even months. Additionally, research has focused primarily on improving the accuracy of models, and the shift towards trustworthiness requirements has only gained momentum over the past few years, in the light of the potential negative consequences of the use of AI in society (High Level Expert Group on Artificial Intelligence 2019). Therefore, considerations such as robustness, explainability or cybersecurity of AI models are often at earlier stages of research and development.

This report focuses on the requirement of cybersecurity for high-risk AI systems, as set out in Article 15 of the proposed AI Act. The requirements of cybersecurity, accuracy and robustness are connected to the technical dimension of AI systems and require a deep understanding of the inner workings of AI systems, established technical practices and standards.

Even though established standards and practices in cybersecurity may apply to AI systems as they do to other software systems, AI-specific technological challenges exist that have not yet been addressed by established security practices or specific standards. However, increasing work is being dedicated to the topic in the form of reports, studies and first international standardisation work items (Tabassi et al. 2019; Berghoff et al. 2021; Malatras, Agrafiotis, and Adamczyk 2021; The MITRE Corporation 2022). Currently, for security engineering purposes such as the practical implementation of processes and techniques to secure systems, many AI-specific security approaches and tools may not be mature enough to be used directly for properly securing certain AI models individually (Berghoff et al. 2021). The cybersecurity of AI (or AI cybersecurity) is an emerging field that aims to fill this gap, and it relies strongly on ongoing research activities in fields such as security engineering and adversarial machine learning (Papernot et al. 2016). In fact, the main purpose of the AI Act is to address the AI-specific risks of AI technology, as discussed in detail in the AI Act impact assessment (European Commission 2021a). As such, it can be expected that considerations going beyond established practice in software security are needed to address its requirements.

In this report, these considerations are elaborated and guidance is provided for standardisation bodies and AI providers seeking to comply with the cybersecurity requirement of the proposed AI Act. The results are summarised in four key messages and recommendations. The report was produced as part of an ongoing collaboration between the JRC and DG CONNECT, providing scientific and technical support to the development of the AI Act and related standardisation activities.
