Character.ai, a platform that enables users to create and interact with AI characters through calls and texts, has introduced new safety features specifically designed for teen users.
The update includes a separate AI model for users under 18 that limits responses related to violence and romance, new classifiers to block sensitive content, time-spent notifications after 60-minute sessions, and prominent disclaimers reminding users that characters are not real people. The platform also prevents users from editing bot responses and displays suicide prevention resources when concerning language is detected.
Analyst QuickTake: The platform, which serves over 20 million monthly users, is implementing these changes amid ongoing lawsuits and user criticism. The company claims the changes will create a safer environment for teens while maintaining engagement. Parental controls, launching in Q1 2025, will give parents insight into their children's platform usage and character interactions. Character.ai is also partnering with online safety experts such as ConnectSafely to ensure its under-18 experience prioritizes safety.