Jan Leike, one of the lead safety researchers at OpenAI who resigned from the artificial intelligence company earlier this month, said on Tuesday that he has joined rival AI startup Anthropic.
Leike announced his resignation from OpenAI early on May 15, days before the company dissolved the superalignment group that he co-led. That team, formed in 2023, focused on long-term AI risks. OpenAI co-founder Ilya Sutskever announced his departure in a post on X on May 14.
“I’m excited to join @AnthropicAI to continue the superalignment mission,” Leike wrote on X on Tuesday. “My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.”
In a post following his departure from OpenAI, Leike wrote, “Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.”
AI safety has gained rapid importance across the tech sector since OpenAI introduced ChatGPT in late 2022, ushering in a boom in generative AI products and investments. Some in the industry have expressed concern that companies are moving too quickly in releasing powerful AI products to the public without adequately considering potential societal harm.
Microsoft-backed OpenAI said Tuesday that it created a new safety and security committee led by senior executives, including CEO Sam Altman. The committee will recommend “safety and security decisions for OpenAI projects and operations” to the company’s board.
Anthropic, founded in 2021 by siblings Dario Amodei and Daniela Amodei along with other former OpenAI employees, launched its ChatGPT rival Claude 3 in March. The company has received funding from Google, Salesforce and Zoom, as well as Amazon.