Best in Technology
OpenAI Employees Warn of a Culture of Risk and Retaliation

By News Room | 4 June 2024 | 4 Mins Read

A group of current and former OpenAI employees has issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk, without sufficient oversight, and while muzzling employees who might witness irresponsible activities.

“These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

The letter calls for not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls for companies to establish “verifiable” ways for workers to provide anonymous feedback on their activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

OpenAI came under criticism last month after a Vox article revealed that the company had threatened to claw back employees’ equity if they did not sign non-disparagement agreements that forbid them from criticizing the company or even mentioning the existence of such an agreement. OpenAI’s CEO, Sam Altman, said on X recently that he was unaware of such arrangements and that the company had never clawed back anyone’s equity. Altman also said the clause would be removed, freeing employees to speak out. OpenAI did not respond to a request for comment by the time of publication.

OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent figures left and the remaining members of the team were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

Last November, Altman was fired by OpenAI’s board for allegedly failing to disclose information and deliberately misleading them. After a very public tussle, Altman returned to the company and most of the board was ousted.

The letter’s signatories include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers who currently work at rival AI companies. It was also endorsed by several big-name AI researchers, including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for pioneering AI research, and Stuart Russell, a leading expert on AI safety.

Former employees who signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at OpenAI.

“The public at large is currently underestimating the pace at which this technology is developing,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and who left the company more than a year ago to pursue a new research opportunity. Hilton says that although companies like OpenAI commit to building AI safely, there is little oversight to ensure that is the case. “The protections that we’re asking for, they’re intended to apply to all frontier AI companies, not just OpenAI,” he says.

“I left because I lost confidence that OpenAI would behave responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI. “There are things that happened that I think should have been disclosed to the public,” he adds, declining to provide specifics.

Kokotajlo says the letter’s proposal would provide greater transparency, and he believes there’s a good chance that OpenAI and others will reform their policies given the negative reaction to news of the non-disparagement agreements. He also says that AI is advancing with worrying speed. “The stakes are going to get much, much, much higher in the next few years,” he says, “at least so I believe.”

© 2025 Best in Technology. All Rights Reserved.