Scale AI, a data platform for training and validating AI applications, announced the launch of its Safety, Evaluations, and Alignment Lab (SEAL). SEAL develops automated rating systems based on large language models (LLMs), conducts research on potential AI harms, and uses red-teaming methods to test the safety and reliability of AI software.
The lab also helps enterprises and governments comply with forthcoming AI standards and regulations. By proactively addressing weaknesses in LLMs, SEAL aims to advance the capabilities of AI technology while ensuring responsible and safe deployment.