OpenAI has launched a new multimodal content moderation model called "omni-moderation-latest." Built on GPT-4o, the model detects harmful content in both text and images and is available through OpenAI's free Moderation API.
The new model supports moderation of both text and image inputs and can handle non-English content. It evaluates categories such as violence, self-harm, and sexual content, and adds two new text-moderation categories covering illicit activity and illicit violent content.
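As a rough illustration of how a developer might combine text and image inputs in a single moderation request, the sketch below builds the request payload for the Moderation API endpoint. The payload shape (a list of typed `text` and `image_url` items) follows OpenAI's documented multimodal input format; the helper function and example values are hypothetical, and an actual call would additionally require an API key and an HTTP client.

```python
import json

def build_moderation_payload(text, image_url=None):
    """Build a JSON body for a POST to OpenAI's /v1/moderations endpoint.

    Hypothetical helper: combines a text item and an optional image item
    into the multimodal input list accepted by omni-moderation-latest.
    """
    inputs = [{"type": "text", "text": text}]
    if image_url:
        inputs.append({"type": "image_url", "image_url": {"url": image_url}})
    return {"model": "omni-moderation-latest", "input": inputs}

# Example: moderate a caption together with the image it accompanies.
payload = build_moderation_payload(
    "example caption to check",
    image_url="https://example.com/image.png",
)
print(json.dumps(payload, indent=2))
```

The API's response scores each input against the supported categories (violence, self-harm, sexual content, and so on), so a single request can flag either the text, the image, or their combination.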
OpenAI claims the new model improves detection accuracy by 42% over its previous moderation model across 40 languages, with particularly strong gains in low-resource languages, helping developers build safer online experiences for users.