
US Body to Assess OpenAI and Anthropic Models Before Release – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: Rashmi Ramesh

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development

The AI Safety Institute Will Evaluate Safety and Suggest Improvements

Rashmi Ramesh (rashmiramesh_) • August 30, 2024

The U.S. AI Safety Institute will evaluate OpenAI and Anthropic models for safety. (Image: Shutterstock)

Leading artificial intelligence companies OpenAI and Anthropic have agreed to give a U.S. federal body early access to their major new models for safety evaluations.


The memorandum of understanding with the U.S. Artificial Intelligence Safety Institute, part of the Department of Commerce’s National Institute of Standards and Technology, also allows the participants to collaborate on research into how to evaluate models for safety and into risk mitigation methods.

The agreements are “just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said U.S. AI Safety Institute Director Elizabeth Kelly, adding that safety was “essential” to fuel breakthrough technological innovation.

The news comes weeks after OpenAI chief Sam Altman announced the agreement on social media platform X, saying that the deal would “push forward the science of AI evaluations” (see: US AI Safety Body to Get Early Access to OpenAI’s Next Model).

The AI Safety Institute was set up in February as part of the Biden administration’s AI executive order to develop testing methodologies and testbeds for research on large language models, and to operationalize use cases for the federal government.

As part of the latest deal, the agency will have access to new OpenAI and Anthropic models both before and after their release. The institute will suggest safety improvements to the companies and plans to work with its U.K. counterpart to shape the recommendations.

The United States and the United Kingdom partnered earlier this year to develop safety tests, addressing growing shared concerns about the security of AI systems at a time when federal and state legislatures are weighing guardrails that do not stifle innovation.

Altman said in a social media post that for “many reasons,” it was important for AI regulation to happen “at the national level. U.S. needs to continue to lead!” His remarks come a day after California state lawmakers sent to the desk of Gov. Gavin Newsom a bill establishing first-in-the-nation safety standards for advanced AI models – a piece of legislation that OpenAI opposes and Anthropic cautiously supports.

“Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Anthropic co-founder and head of policy Jack Clark.

The NIST announcement said the partnerships with OpenAI and Anthropic are the “first of their kind” between the U.S. government and the tech industry. Both OpenAI and Anthropic already share their models with the United Kingdom.

Both companies are also among the 16 signatories who have made voluntary commitments to develop and use AI responsibly. Several of them have also committed to invest in cybersecurity and work on labeling AI-generated content via watermarking.

Original Post url: https://www.databreachtoday.com/us-body-to-assess-openai-anthropic-models-before-release-a-26177



