OpenAI has formed a Safety and Security Committee, likely prompted by concerns raised by key AI researchers who resigned this month.
The committee's initial task is to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, after which it will share its recommendations with the full board. Following the board's review, the company will publicly share an update on the adopted recommendations, in a manner it says is consistent with safety and security.
The committee will be led by directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO). OpenAI technical and policy experts Aleksander Madry (head of preparedness), Lilian Weng (head of safety systems), John Schulman (head of alignment science), Matt Knight (head of security), and Jakub Pachocki (chief scientist) will also serve on the committee.
The company has also begun training its next frontier model, which it expects will advance its capabilities on the path to AGI.