All Updates

May 18, 2024

  • Management news: OpenAI reportedly ends Superalignment unit handling superhuman AI risks (Foundation Models)

This week:

  • Funding: OpenAI secures USD 4 billion revolving line of credit (Generative AI Applications, Yesterday)
  • Product updates: Oracle adds GenAI capabilities to HeatWave to improve performance of transactional applications (Generative AI Infrastructure, Yesterday)
  • Product updates: Cohere updates fine-tuning service for AI language models and enhances offering (Generative AI Infrastructure, Yesterday)
  • Product updates: Dataiku launches LLM Guard Services to manage enterprise GenAI deployments (Generative AI Infrastructure, Yesterday)
  • Product updates: Black Forest Labs launches Flux 1.1 Pro and new API (Foundation Models, Yesterday)
  • Product updates: Cohere updates fine-tuning service for AI language models (Foundation Models, Yesterday)
  • Product updates: Appy Pie launches AI model 'Flawless Text' for image generation (Foundation Models, Yesterday)
  • Product updates: Appy Pie launches AI model 'Flawless Text' for image generation (No-code Software, Yesterday)
  • Partnerships: Terra Quantum partners with RWTH Aachen University to enhance molecular conformer search in drug discovery (Quantum Computing, Yesterday)
  • Product updates: GenoPalate launches upgraded Food Index Report for personalized nutritional guidance (Functional Nutrition, Yesterday)
Foundation Models

May 18, 2024

OpenAI reportedly ends Superalignment unit handling superhuman AI risks

Management news

  • OpenAI's Superalignment unit, responsible for managing the potential risks of superhuman AI systems, has reportedly been disbanded.

  • The team's responsibilities have reportedly been shifted to other OpenAI research projects, led by co-founder John Schulman.

  • Additionally, Jan Leike, the Superalignment lead, left the company, citing disagreements with OpenAI's leadership over the company's core principles and a lack of computational resources for his team's research. He also stated that OpenAI needs to concentrate more on security, safety, and alignment.

  • Analyst QuickTake: Several of the company's outspoken advocates for AI safety have left or been let go in the last few months. Just last week, Ilya Sutskever, OpenAI's co-founder and chief scientist, announced his departure from the company, citing personal endeavors.
