
U.K. and U.S. Agree to Collaborate on the Development of Safety Tests for AI Models


Source: www.techrepublic.com – Author: Fiona Jackson

The U.K. government has formally agreed to work with the U.S. in developing tests for advanced artificial intelligence models. A Memorandum of Understanding, which is a non-legally binding agreement, was signed on April 1, 2024, by U.K. Technology Secretary Michelle Donelan and U.S. Commerce Secretary Gina Raimondo (Figure A).

Figure A

U.S. Commerce Secretary Gina Raimondo (left) and U.K. Technology Secretary Michelle Donelan (right). Image: U.K. government

Both countries will now “align their scientific approaches” and work together to “accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents.” This action is being taken to uphold the commitments established at the first global AI Safety Summit last November, where governments from around the world accepted their role in safety testing the next generation of AI models.

What AI initiatives have been agreed upon by the U.K. and U.S.?

With the MoU, the U.K. and U.S. have agreed on how they will build a common approach to AI safety testing and share their developments with each other. Specifically, this will involve:

  • Developing a shared process to evaluate the safety of AI models.
  • Performing at least one joint testing exercise on a publicly accessible model.
  • Collaborating on technical AI safety research, both to advance the collective knowledge of AI models and to ensure any new policies are aligned.
  • Exchanging personnel between respective institutes.
  • Sharing information on all activities undertaken at the respective institutes.
  • Working with other governments on developing AI standards, including safety.

“Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance,” Secretary Raimondo said in a statement.

SEE: Learn How to Use AI for Your Business (TechRepublic Academy)

The MoU primarily relates to moving forward on plans made by the AI Safety Institutes in the U.K. and U.S. The U.K.’s research facility was launched at the AI Safety Summit with the three primary goals of evaluating existing AI systems, performing foundational AI safety research and sharing information with other national and international actors. Firms including OpenAI, Meta and Microsoft have agreed to have their latest generative AI models independently reviewed by the U.K. AISI.

Similarly, the U.S. AISI, formally established by NIST in February 2024, was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these actions include developing standards for the safety and security of AI systems. The U.S.’s AISI is supported by an AI Safety Institute Consortium, whose members consist of Meta, OpenAI, NVIDIA, Google, Amazon and Microsoft.

Will this lead to the regulation of AI companies?

While neither the U.K. nor the U.S. AISI is a regulatory body, the results of their combined research are likely to inform future policy changes. According to the U.K. government, its AISI “will provide foundational insights to our governance regime,” while the U.S. facility will “develop technical guidance that will be used by regulators.”

The European Union is arguably still one step ahead, as its landmark AI Act was voted into law on March 13, 2024. The legislation outlines measures designed to ensure that AI is used safely and ethically, alongside other rules covering the use of AI for facial recognition and requirements for transparency.

SEE: Most Cybersecurity Professionals Expect AI to Impact Their Jobs

The majority of the big tech players, including OpenAI, Google, Microsoft and Anthropic, are based in the U.S., where there are currently no hardline regulations in place that could curtail their AI activities. October’s EO does provide guidance on the use and regulation of AI, and positive steps have been taken since it was signed; however, an executive order is not legislation. The AI Risk Management Framework, finalized by NIST in January 2023, is also voluntary.

In fact, these major tech companies are mostly in charge of regulating themselves, and last year launched the Frontier Model Forum to establish their own “guardrails” to mitigate the risks of AI.

What do AI and legal experts think of the safety testing?

AI regulation should be a priority

The formation of the U.K. AISI was not a universally popular way of holding the reins on AI in the country. In February, the chief executive of Faculty AI — a company involved with the institute — said that developing robust standards would be a more prudent use of government resources than trying to vet every AI model.

“I think it’s important that it sets standards for the wider world, rather than trying to do everything itself,” Marc Warner told The Guardian.

A similar viewpoint is held by experts in tech law when it comes to this week’s MoU. “Ideally, the countries’ efforts would be far better spent on developing hardline regulations rather than research,” Aron Solomon, legal analyst and chief strategy officer at legal marketing agency Amplify, told TechRepublic in an email.

“But the problem is this: few legislators — I would say, especially in the US Congress — have anywhere near the depth of understanding of AI to regulate it.”

Solomon added: “We should be leaving rather than entering a period of necessary deep study, where lawmakers really wrap their collective mind around how AI works and how it will be used in the future. But, as highlighted by the recent U.S. debacle where lawmakers are trying to outlaw TikTok, they, as a group, don’t understand technology, so they aren’t well-positioned to intelligently regulate it.

“This leaves us in the hard place we are today. AI is evolving far faster than regulators can regulate. But deferring regulation in favor of anything else at this point is delaying the inevitable.”

Indeed, as the capabilities of AI models are constantly changing and expanding, safety tests performed by the two institutes will need to do the same. “Some bad actors may attempt to circumvent tests or misapply dual-use AI capabilities,” Christoph Cemper, the chief executive officer of prompt management platform AIPRM, told TechRepublic in an email. Dual-use refers to technologies which can be used for both peaceful and hostile purposes.

Cemper said: “While testing can flag technical safety concerns, it does not replace the need for guidelines on ethical, policy and governance questions… Ideally, the two governments will view testing as the initial phase in an ongoing, collaborative process.”

SEE: Generative AI may increase the global ransomware threat, according to a National Cyber Security Centre study

Research is needed for effective AI regulation

While voluntary guidelines may not prove enough to incite any real change in the activities of the tech giants, hardline legislation could stifle progress in AI if not properly considered, according to Dr. Kjell Carlsson.

The former ML/AI analyst and current head of strategy at Domino Data Lab told TechRepublic in an email: “There are AI-related areas today where harm is a real and growing threat. These are areas like fraud and cybercrime, where regulation usually exists but is ineffective.

“Unfortunately, few of the proposed AI regulations, such as the EU AI Act, are designed to effectively tackle these threats as they mostly focus on commercial AI offerings that criminals do not use. As such, many of these regulatory efforts will damage innovation and increase costs, while doing little to improve actual safety.”

Many experts therefore think that prioritizing research and collaboration is more effective than rushing into regulation in the U.K. and U.S.

Dr. Carlsson said: “Regulation works when it comes to preventing established harm from known use cases. Today, however, most of the use cases for AI have yet to be discovered and nearly all the harm is hypothetical. In contrast, there is an incredible need for research on how to effectively test, mitigate risk and ensure safety of AI models.

“As such, the establishment and funding of these new AI Safety Institutes, and these international collaboration efforts, are an excellent public investment, not just for ensuring safety, but also for fostering the competitiveness of firms in the US and the UK.”

Original Post URL: https://www.techrepublic.com/article/uk-us-agreement-ai-safety-testing/

Category & Tags: Artificial Intelligence,International,Security,United Kingdom,ai,ai safety,artificial intelligence,emea,policy,uk,usa – Artificial Intelligence,International,Security,United Kingdom,ai,ai safety,artificial intelligence,emea,policy,uk,usa

