
Employees Say OpenAI Shields Whistleblowers From Regulators – Source: www.databreachtoday.com


Source: www.databreachtoday.com – Author: Rashmi Ramesh

Artificial Intelligence & Machine Learning, Governance & Risk Management, Next-Generation Technologies & Secure Development

Complaint Seeks SEC Investigation of Whistleblower Practices, Financial Penalty

Rashmi Ramesh (rashmiramesh_) • July 15, 2024

Image: Shutterstock

Whistleblowers from OpenAI reportedly complained to the U.S. Securities and Exchange Commission that the company unlawfully restricted employees from alerting regulators to the potential risks its artificial intelligence technology poses to humanity.


The whistleblowers are seeking an investigation into these practices, The Washington Post said, citing a seven-page letter sent to SEC Chairman Gary Gensler.

The employees claim OpenAI imposed overly restrictive employment, severance and nondisclosure agreements that required them to waive their federal rights to whistleblower compensation and to obtain the company’s consent before disclosing information to federal authorities.

The whistleblowers said the agreements violated federal laws designed to protect those looking to report corporate misconduct anonymously, without fear of retaliation.

“These contracts sent a message that ‘we don’t want employees talking to federal regulators,'” one whistleblower told The Washington Post anonymously, citing fears of retaliation. “AI companies cannot build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.”

OpenAI spokesperson Hannah Wong denied the allegations, saying that the company “believe(s) rigorous debate about this technology is essential and have already made important changes to our departure process to remove non-disparagement terms.”

The news comes on the heels of recent allegations that OpenAI did not report a data breach to federal law enforcement or make the news public. The company said it believes no customer information was stolen and the incident did not pose a national security threat. Some employees criticized that explanation, raising concerns about OpenAI’s commitment to cybersecurity and the potential for adversary nations such as China to steal AI information (see: OpenAI Did Not Disclose 2023 Breach to Feds, Public: Report).

OpenAI was founded as a nonprofit but controls a rapidly growing commercial business. Critics have expressed concerns that OpenAI may be prioritizing profit over safety. The company reportedly expedited the release of its latest AI model powering ChatGPT to meet a May deadline despite employee concerns about insufficient security testing, mere months after pledging to the White House to rigorously safety-test new versions to ensure its technology could not be misused. In May, the company also set up a committee to make “critical” safety and security decisions for all of its projects, after disbanding its “superalignment” security team dedicated to preventing AI systems from going rogue (see: OpenAI Formulates Framework to Mitigate ‘Catastrophic Risks’).

The superalignment team’s leaders – OpenAI co-founder Ilya Sutskever and Jan Leike – quit the company over disagreements with its approach to security, as did policy researcher Gretchen Krueger.

Both Sutskever and Leike worked on addressing the long-term safety risks facing the company and the technology, and Leike criticized OpenAI’s lack of support for the superalignment team in a social media post. “Over the past years, safety culture and processes have taken a backseat to shiny products,” Leike said. Sutskever was among the board members who in November removed Sam Altman from OpenAI, only to see him reinstated as CEO five days later. Krueger said she decided to resign a few hours before her two colleagues did, as she shared their security concerns.

“OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” Sen. Chuck Grassley told the Post.

AI companies in the U.S. are not subject to a focused regulatory framework, making whistleblowers’ role in informing regulators even more crucial. “Current frontier AI development poses urgent and growing risks to national security. The rise of advanced AI and artificial general intelligence has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” says a report commissioned by the U.S. Department of State.

The whistleblowers asked the SEC to force OpenAI to produce for review all employment, severance and investor agreements containing nondisclosure clauses and to notify all past and current employees of their right to make complaints confidentially. They also asked the SEC to penalize OpenAI for every improper agreement.

Original Post url: https://www.databreachtoday.com/employees-say-openai-shields-whistleblowers-from-regulators-a-25769
