EXECUTIVE SUMMARY
The overall objective of the present document is to provide an overview of standards (existing,
being drafted, under consideration and planned) related to the cybersecurity of artificial
intelligence (AI), assess their coverage and identify gaps in standardisation. It does so by
considering the specificities of AI, and in particular machine learning, and by adopting a broad
view of cybersecurity, encompassing both the ‘traditional’ confidentiality–integrity–availability
paradigm and the broader concept of AI trustworthiness. Finally, the report examines how
standardisation can support the implementation of the cybersecurity aspects embedded in the
proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206
final) (draft AI Act).
The report describes the standardisation landscape covering AI by depicting the activities of the
main Standards-Developing Organisations (SDOs). These activities appear to be driven by the
concern that knowledge of how existing techniques can be applied to counter the threats and
vulnerabilities arising from AI is still insufficient, a concern that has resulted in the ongoing
development of ad hoc reports, guidance and standards.
The report argues that existing general-purpose technical and organisational standards (such as
ISO/IEC 27001 and ISO 9001) can contribute to mitigating some of the risks faced by AI
with the help of specific guidance on how they can be applied in an AI context. This
consideration stems from the fact that, in essence, AI is software and therefore software
security measures can be transposed to the AI domain.
The report also specifies that this approach is not exhaustive and that it has some limitations.
For example, while the report focuses on software aspects, the notion of AI can include both
technical and organisational elements beyond software, such as hardware or infrastructure.
Other limitations include the fact that determining appropriate security measures relies on a
system-specific analysis, and the fact that some aspects of cybersecurity are still the subject of
research and development and might therefore not be mature enough to be standardised
exhaustively. In addition, existing standards do not seem to address specific aspects such as the
traceability and lineage of both data and AI components, or metrics on, for example,
robustness.
The report also looks beyond the mere protection of assets, as cybersecurity can be considered
instrumental to the correct implementation of the trustworthiness features of AI and, conversely,
the correct implementation of trustworthiness features is key to ensuring cybersecurity. In this
context, it is noted that there is a risk that trustworthiness will be handled separately within
AI-specific and cybersecurity-specific standardisation initiatives. One example of an area where
this might happen is conformity assessment.
Last but not least, the report complements the observations above by extending the analysis to
the draft AI Act. Firstly, the report stresses the importance of the inclusion of cybersecurity
aspects in the risk assessment of high-risk systems in order to determine the cybersecurity risks
that are specific to the intended use of each system. Secondly, the report highlights the lack of
standards covering the competences and tools of the actors performing conformity
assessments. Thirdly, it notes that the governance systems drawn up by the draft AI Act and the
Cybersecurity Act (CSA) should work in harmony to avoid duplication of efforts at national
level.
Finally, the report concludes that some standardisation gaps might become apparent only as
AI technologies advance and with further study of how standardisation can support
cybersecurity.