OpenAI’s Superalignment team, charged with mitigating the existential dangers posed by superhuman AI systems, has reportedly been disbanded, according to Wired on Friday. The news comes just days after the team’s co-leads, Ilya Sutskever and Jan Leike, quit the company in quick succession.
Wired reports that OpenAI’s Superalignment team, launched in July 2023 to prevent future superhuman AI systems from going rogue, is no more. According to the report, the group’s work will be absorbed into OpenAI’s other research efforts, and research on the risks associated with more powerful AI models will now be led by OpenAI cofounder John Schulman. Sutskever and Leike were among OpenAI’s top scientists focused on AI risks.
Leike posted a long thread on X Friday explaining, in broad terms, why he left OpenAI. He said he had been clashing with OpenAI leadership over core values for some time but reached a breaking point this week. Leike noted that the Superalignment team had been “sailing against the wind,” struggling to get enough compute for crucial research. He believes OpenAI needs to focus more on security, safety, and alignment.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” wrote the Superalignment team in the OpenAI blog post announcing its launch in July 2023. “But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”
It’s now unclear whether the same attention will be paid to those technical breakthroughs. There are certainly other teams at OpenAI focused on safety: Schulman’s team, which is reportedly absorbing Superalignment’s responsibilities, currently handles fine-tuning AI models after training. But Superalignment focused specifically on the most severe outcomes of a rogue AI. As Gizmodo noted yesterday, several of OpenAI’s most outspoken AI safety advocates have resigned or been fired in the last few months.
Earlier this year, the group released a notable research paper on controlling large AI models with smaller ones, considered a first step toward controlling superintelligent AI systems. It’s unclear who at OpenAI will take the next steps on these projects.
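The setup behind that research, often described as weak-to-strong supervision, can be illustrated with a toy sketch: a small “weak” model produces imperfect labels, and a larger “strong” model is trained only on those labels, then scored against ground truth it never saw. The example below is a minimal, hypothetical stand-in using scikit-learn classifiers, not anything from OpenAI’s actual research code.

```python
# Toy illustration of weak-to-strong supervision (an assumption-laden sketch,
# not OpenAI's code): a handicapped "weak" model labels data, and a more
# capable "strong" model is trained only on those imperfect labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for a real alignment objective.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=15, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(
    X, y, train_size=0.1, random_state=0)
X_student, X_test, y_student, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# The weak supervisor sees only a small slice of ground truth and is
# handicapped to 5 of the 20 features, so its labels are noisy.
weak = LogisticRegression(max_iter=1000).fit(X_weak[:, :5], y_weak)
weak_labels = weak.predict(X_student[:, :5])

# The strong student never sees ground truth -- only the weak model's labels.
strong = GradientBoostingClassifier(random_state=0).fit(X_student, weak_labels)

print(f"weak supervisor accuracy: {weak.score(X_test[:, :5], y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

In this framing, the question is how much of the stronger model’s capability survives training on the weaker model’s noisy supervision, a stand-in for humans trying to supervise AI systems smarter than themselves.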
OpenAI did not immediately respond to Gizmodo’s request for comment.
Sam Altman’s AI startup kicked off the week by unveiling GPT-4o (“o” for “omni”), the company’s latest frontier model, which featured ultra-low-latency responses that sounded more human than ever. Many OpenAI staffers remarked on how close the latest AI model felt to something out of science fiction, specifically the movie Her.