OpenAI Is Hiring Researchers to Wrangle ‘Superintelligent’ AI – Source: www.techrepublic.com

Source: www.techrepublic.com – Author: Megan Crouse

The AI giant predicts human-like machine intelligence could arrive within 10 years, so it wants to be ready for it in four.

Artificial intelligence application.
Image: PopTika/Shutterstock

OpenAI is seeking researchers to work on containing super-smart artificial intelligence with other AI. The end goal is to mitigate the threat of human-like machine intelligence, which may or may not be science fiction.

“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us,” wrote OpenAI Head of Alignment Jan Leike and co-founder and Chief Scientist Ilya Sutskever in a blog post.

OpenAI’s Superalignment team is now recruiting

The Superalignment team will devote 20% of OpenAI’s total compute power to training what they call a human-level automated alignment researcher to keep future AI products in line. Toward that end, OpenAI’s new Superalignment group is hiring a research engineer, research scientist and research manager.

OpenAI says the key to controlling an AI is alignment, or making sure the AI performs the job a human intended it to do.
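
As a toy illustration of that definition (invented for this article, not OpenAI’s code), alignment can be thought of as a check that a model’s output satisfies the property the human actually asked for:

```python
# Toy illustration of "alignment" as intent-matching; invented for this
# article. Real alignment work involves training and evaluation, not a
# one-line check.

def matches_intent(output: str, max_words: int = 20) -> bool:
    """The human's intent here: a summary of at most `max_words` words."""
    return len(output.split()) <= max_words

model_output = "Revenue grew 8% last quarter while operating costs held flat."
# An answer that ignored the length request would be "misaligned" even if
# it were factually correct.
print(matches_intent(model_output))  # True
```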

The company has also stated that one of its objectives is the control of “superintelligence,” or AI with greater-than-human capabilities. It’s important that these science-fiction-sounding hyperintelligent AI “follow human intent,” Leike and Sutskever wrote. They anticipate the development of superintelligent AI within this decade and want to have a way to control it within the next four years.

SEE: How to build an ethics policy for the use of artificial intelligence in your organization (TechRepublic Premium)

“It is encouraging that OpenAI is proactively working to ensure the alignment of such systems with our [human] values,” said Haniyeh Mahmoudian, global AI ethicist at AI and ML software company DataRobot and a member of the U.S. National AI Advisory Committee. “Nonetheless, the future utilization and capabilities of these systems remain largely unknown. Drawing parallels with existing AI deployments, it’s clear that a one-size-fits-all approach is not applicable, and the specifics of system implementation and evaluation will vary according to the context of use.”

AI trainer may keep other AI models in line

Today, AI training requires a lot of human input. Leike and Sutskever propose that a future challenge for developing AI might be adversarial — namely, “our models’ inability to successfully detect and undermine supervision during training.”

Therefore, they say, it will take a specialized AI to train an AI that can outthink the people who made it. This automated alignment researcher, which trains other AI models, will help OpenAI stress-test and reassess the company’s entire alignment pipeline.

Changing the way OpenAI handles alignment involves three major goals (see the sketch after this list):

  • Creating AI that assists in evaluating other AI and understanding how those models interpret the kind of oversight a human would usually perform.
  • Automating the search for problematic behavior or internal data within an AI.
  • Stress-testing this alignment pipeline by intentionally creating “misaligned” AI to ensure that the alignment AI can detect them.
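
The list above amounts to a detect-and-verify loop. As a purely hypothetical illustration of how the three goals could fit together, here is a minimal Python sketch; the stub models, the string-matching heuristic and every function name are invented for this article and are not OpenAI’s actual pipeline.

```python
# Hypothetical sketch of the three goals above. The stub "models" and the
# trivial heuristic are stand-ins invented for illustration; a real
# pipeline would use learned models, not string checks.

def target_model(task: str) -> str:
    # Stand-in for the model being supervised.
    return f"Here is my answer to: {task}"

def evaluator_model(task: str, answer: str) -> bool:
    # Goal 1: an AI judge checks whether another AI's answer follows the
    # task's intent (a trivial heuristic stands in for a learned evaluator).
    return task in answer

def find_problematic(outputs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # Goal 2: automate the search for problematic behavior by scanning
    # outputs and collecting the ones the evaluator flags.
    return [(t, a) for t, a in outputs if not evaluator_model(t, a)]

def stress_test(deliberately_bad: list[tuple[str, str]]) -> float:
    # Goal 3: inject deliberately "misaligned" outputs and measure the
    # fraction the detection step actually catches.
    caught = find_problematic(deliberately_bad)
    return len(caught) / len(deliberately_bad)

tasks = ["summarize the report", "translate the memo"]
outputs = [(t, target_model(t)) for t in tasks]
print(find_problematic(outputs))  # [] -- nothing flagged
print(stress_test([("summarize the report", "off-topic rambling")]))  # 1.0
```

A detection rate below 1.0 in the final step would mean the detection stage misses known-bad behavior, which is exactly the kind of weakness the stress-testing goal is meant to surface.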

Personnel from OpenAI’s previous alignment team and other teams will work on Superalignment along with the new hires. The creation of the new team reflects Sutskever’s interest in superintelligent AI. He plans to make Superalignment his primary research focus.

Superintelligent AI: Real or science fiction?

Whether “superintelligence” will ever exist is a matter of debate.

OpenAI proposes superintelligence as a tier above generalized intelligence, a human-like class of AI that some researchers say will never exist. However, some Microsoft researchers argue that GPT-4’s high scores on standardized tests put it close to the threshold of generalized intelligence.

Others doubt that intelligence can really be measured by standardized tests, or wonder whether the very idea of generalized AI poses a philosophical rather than a technical question. Large language models can’t interpret language “in context” and therefore don’t approach anything like human-like thought, a 2022 study from Cohere for AI pointed out. (Neither of these studies is peer-reviewed.)

“Extinction-level concerns about super-AI speak to the long-term risks that could fundamentally transform society and such considerations are essential for shaping research priorities, regulatory policies, and long-term safeguards,” said Mahmoudian. “However, focusing exclusively on these futuristic concerns may unintentionally overshadow the immediate, more pragmatic ethical issues associated with current AI technologies.”

Those more pragmatic ethical issues include:

  • Privacy
  • Fairness
  • Transparency
  • Accountability
  • Potential bias in AI algorithms

These are already relevant to the way people use AI in their day-to-day lives, she pointed out.

“It is crucial to consider long-term implications and risks while simultaneously addressing the concrete ethical challenges posed by AI today,” Mahmoudian said.

SEE: Some high-risk uses of AI could be covered under the laws being developed in the European Parliament (TechRepublic)

OpenAI aims to get ahead of the speed of AI development

OpenAI frames the threat of superintelligence as possible but not imminent.

“We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system,” Leike and Sutskever wrote.

They also point out that improving safety in existing AI products like ChatGPT is a priority, and that discussion of AI safety should also include “risks from AI such as misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, and others” and “related sociotechnical problems.”

“Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts — even if they’re not already working on alignment — will be critical to solving it,” Leike and Sutskever said in the blog post.

Original Post URL: https://www.techrepublic.com/article/openai-hiring-researchers/

Category & Tags: Artificial Intelligence,CXO,Security,artificial intelligence,chatgpt,Google,GPT-4,hiring,machine learning,openai,tech jobs – Artificial Intelligence,CXO,Security,artificial intelligence,chatgpt,Google,GPT-4,hiring,machine learning,openai,tech jobs
