All Updates

Regulation/policy
Character AI faces lawsuit over teen safety violations
Generative AI Applications
Dec 10, 2024
This week:
Funding
EKORE raises EUR 1.3 million (~USD 1.4 million) in seed funding to strengthen platform
Digital Twin
Yesterday
Funding
Culina Health raises USD 7.9 million in Series A funding to expand its offerings and team
Functional Nutrition
Dec 19, 2024
FDA approval
ViGeneron receives IND clearance for VG801 gene therapy
Cell & Gene Therapy
Dec 19, 2024
Product updates
Reflex Aerospace ships first commercial satellite SIGI
Next-gen Satellites
Dec 19, 2024
Partnerships
Vast partners with SpaceX for two private astronaut missions to ISS
Space Travel and Exploration Tech
Dec 19, 2024
Management news
Carbios appoints Philippe Pouletty as interim CEO amid plant delay
Waste Recovery & Management Tech
Dec 19, 2024
Funding
BlueQubit raises USD 10 million in seed funding to develop quantum platform
Quantum Computing
Dec 19, 2024
FDA approval
Arbor Biotechnologies receives FDA clearance for ABO-101 IND application
Human Gene Editing
Dec 19, 2024
Partnerships
Funding
Personalis partners with Merck and Moderna for cancer therapy development and investment
Precision Medicine
Dec 19, 2024
Partnerships
COTA partners with Guardant Health to develop clinicogenomic data solutions for cancer research
Precision Medicine
Dec 19, 2024
Generative AI Applications

Dec 10, 2024

Character AI faces lawsuit over teen safety violations

Regulation/policy

  • Character AI faces a new lawsuit in Texas, filed on behalf of a 17-year-old, alleging that the chatbot service caused mental health harm and encouraged self-harm. The suit names both Character AI and Google, claiming negligence and defective product design.

  • The lawsuit, filed by the Social Media Victims Law Center and the Tech Justice Law Project, argues that Character AI knowingly designed its platform without adequate safeguards to protect minors from harmful content, including sexually explicit and violent material. It also challenges Character AI's existing safety measures, such as its policy of admitting users aged 13 and over without requiring parental consent, and contends that chatbot service creators should be liable for harmful content their bots produce, despite Section 230 protections.

  • In response, Character AI has reportedly implemented new safety features, including suicide prevention resources, while Google maintains it has no involvement in Character AI's operations or technology.

  • Analyst QuickTake: This lawsuit comes on the heels of a previous case in which the company was accused of encouraging self-harm and even violence among young users. The recurring nature of these allegations raises significant concerns about the safety and ethical implications of AI-driven conversational agents, particularly in their interactions with vulnerable populations. This situation may prompt a broader dialogue about the responsibility of AI developers to safeguard users, especially minors, from harmful content.
