OpenAI's Superalignment unit, tasked with controlling the potential risks of superhuman AI systems, has reportedly been disbanded.
The team's responsibilities have reportedly been shifted to other research efforts within OpenAI, led by OpenAI co-founder John Schulman.
Additionally, Jan Leike, the Superalignment lead, left the company, citing disagreements with OpenAI's leadership over the company's core principles and a lack of computational resources for his team's critical research. He also stated that OpenAI needs to focus more on security, safety, and alignment.
Analyst QuickTake: Several of the company's outspoken advocates for AI safety have left or been let go in recent months. Just last week, Ilya Sutskever, OpenAI's co-founder and chief scientist, announced his departure from the company, citing personal endeavors as the reason.