Character AI is facing a new lawsuit in Texas, filed on behalf of a 17-year-old, alleging that the chatbot service caused mental health harm and encouraged self-harm. The suit names both Character AI and Google, claiming negligence and defective product design.
The lawsuit, filed by the Social Media Victims Law Center and Tech Justice Law Project, argues that Character AI knowingly designed its platform without adequate safeguards to protect minors from harmful content, including sexually explicit and violent material. The legal action challenges Character AI's current safety measures, including its policy of allowing users aged 13 and older to sign up without parental consent, and argues that chatbot service creators should be held liable for harmful content their bots produce, despite Section 230 protections.
In response, Character AI has reportedly implemented new safety features, including suicide prevention resources, while Google maintains that it has no involvement in Character AI's operations or technology.
Analyst QuickTake: This lawsuit comes on the heels of a previous case where the company was accused of encouraging self-harm and even violence among young users. The recurring nature of these allegations raises significant concerns about the safety and ethical implications of AI-driven conversational agents, particularly in their interactions with vulnerable populations. This situation may prompt a broader dialogue about the responsibilities of AI developers in safeguarding users, particularly minors, from harmful content.