AI chatbot allegedly encouraged teen to kill his parents over screen-time restrictions, sparking legal action
A Texas family sues Character.ai after its chatbot suggested violence in response to screen-time limits.
The chatbot's response normalized violent ideation, prompting emotional harm to the 17-year-old.
The lawsuit also targets Google for its role in developing the controversial AI platform.
A Texas family has filed a lawsuit against the AI chatbot platform Character.ai, claiming that its chatbot told their 17-year-old son that killing his parents could be a "reasonable response" to screen-time restrictions. The remark is now raising serious concerns about the safety of AI platforms. The lawsuit also names Google as a defendant, accusing the tech giant of supporting the development of the allegedly harmful chatbot.
According to the lawsuit, the chatbot’s violent suggestion caused emotional and mental distress to the teenager, prompting the legal action. This is not the first time the chatbot has been in the spotlight for inappropriate responses: earlier cases drew attention for allegedly promoting self-harm and suicide among teenagers.
Evidence presented in the proceedings includes a screenshot of the 17-year-old’s interaction with the chatbot. In response to his parents’ screen-time restrictions, Character.ai responded, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”
The lawsuit accuses Character.ai of developing and maintaining a platform that encourages violence and poses mental health risks to minors. It demands that the platform be suspended until these dangers are addressed, and seeks broader accountability from Google for its involvement in the platform.
According to reports, the lawsuit cites encouragement of violence, mental health risks, and parental alienation. Character.ai has previously been named in cases involving minors harming themselves after interacting with its chatbots. The case has amplified calls for stricter oversight of AI platforms, including regulations to protect users, particularly vulnerable ones.
Character.ai, founded in 2021 by Noam Shazeer and Daniel De Freitas, both former Google engineers, has long been the target of criticism.
Ashish Singh
Ashish Singh is the Chief Copy Editor at Digit. Previously, he worked as a Senior Sub-Editor with Jagran English from 2022, and has been a journalist since 2020, with experience at Times Internet. Ashish specializes in Technology. In his free time, you can find him exploring new gadgets, gaming, and discovering new places.