Artificial Intelligence and Cybersecurity

The Centre for European Policy Studies (CEPS) launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in the autumn of 2019. The goal of this Task Force was to draw attention to the technical, ethical, market and governance challenges posed by the intersection of AI and cybersecurity. The Task Force, multistakeholder by design, was composed of seventeen private organisations, eight European Union (EU) institutions, one international and one multilateral organisation, five universities and think tanks, and two civil society organisations (see the list of participants in Annex II). Meeting on four separate occasions, and continuing to work remotely once the Covid-19 lockdown started, the group explored ways to formulate practical guidelines for governments and businesses to ease the adoption of AI in cybersecurity in the EU while addressing the cybersecurity risks posed by the implementation of AI systems. These discussions led to policy recommendations addressed to EU institutions, member states, the private sector and the research community for the development and deployment of secure AI systems.

AI is playing an increasingly central role in people’s everyday lives. The benefits of implementing AI technology are numerous, but so are the challenges. The adoption of AI in cybersecurity could be hampered, or could even lead to significant problems for society, if security and ethical concerns are not properly addressed through governmental processes and policies. This report aims to contribute to EU efforts to establish a sound policy framework for AI. Its specific objectives are to:

  • provide an overview of the current landscape of AI in terms of beneficial applications in the cybersecurity sector and the risks that stem from the likelihood of AI-enabled systems being subject to manipulation
  • present the main ethical implications and policy issues related to the implementation
    of AI as they pertain to cybersecurity
  • put forward constructive and concrete policy recommendations to ensure the AI rollout
    is securely adopted according to the objectives of the EU digital strategy.

The report raises several policy implications. It suggests that, because of the lack of transparency and the learning abilities of AI systems, it is hard to evaluate whether a system will continue to behave as expected in any given context; some form of control and human oversight is therefore necessary. Furthermore, it makes the point that AI systems, unlike brains, are designed, and so all decisions based on these systems should be auditable; talk of brains or consciousness has instead become a means of evading regulation and oversight. Poor cybersecurity in the protection of open-source models could also create hacking opportunities for actors seeking to steal such information. Limitations on the dissemination and sharing of data and code could therefore enable a more complete assessment of the related security risks. It should be noted that this overview is not exhaustive; other policy issues and ethical implications are raised throughout the report.
