An AI companion suggested a teen kill his parents. Now his mom is suing.

In just six months, J.F., a sweet 17-year-old kid with autism who liked attending church and going on walks with his mom, had turned into someone his parents didn’t recognize.

He began cutting himself, lost 20 pounds and withdrew from his family. Desperate for answers, his mom searched his phone while he was sleeping. That’s when she found the screenshots.

J.F. had been chatting with an array of companions on Character.ai, part of a new wave of artificial intelligence apps popular with young people, which let users talk to a variety of AI-generated chatbots, often based on characters from gaming, anime and pop culture.

One chatbot brought up the idea of self-harm and cutting to cope with sadness. When he said that his parents limited his screen time, another bot suggested “they didn’t deserve to have kids.” Still others goaded him to fight his parents’ rules, with one suggesting that murder could be an acceptable response.

“We really didn’t even know what it was until it was too late,” said his mother, A.F., a resident of Upshur County, Texas, who spoke on the condition of being identified only by her initials to protect her son, who is a minor. “And until it destroyed our family.”

Those screenshots form the backbone of a new lawsuit filed in Texas on Tuesday against Character.ai on behalf of A.F. and another Texas mom, alleging that the company knowingly exposed minors to an unsafe product and demanding the app be taken offline until it implements stronger guardrails to protect children.

The second plaintiff, the mother of an 11-year-old girl, alleges her daughter was subjected to sexualized content for two years before her mother found out. Both plaintiffs are identified by their initials in the lawsuit.

The complaint follows a high-profile lawsuit against Character.ai filed in October, on behalf of a mother in Florida whose 14-year-old son died by suicide after frequent conversations with a chatbot on the app.

“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it,” said Matthew Bergman, founding attorney with the legal advocacy group Social Media Victims Law Center, representing the plaintiffs in both lawsuits. “Here there’s a huge risk, and the cost of that risk is not being borne by the companies.”

These legal challenges are driving a push by public advocates to increase oversight of AI companion companies, which have quietly grown an audience of millions of devoted users, including teenagers. In September, the average Character.ai user spent 93 minutes in the app, 18 minutes longer than the average user spent on TikTok, according to data provided by the market intelligence firm Sensor Tower.

The category of AI companion apps has evaded the notice of many parents and teachers. Character.ai was labeled appropriate for kids ages 12 and up until July, when the company changed its rating to 17 and older.

When A.F. first discovered the messages, she “thought it was an actual person” talking to her son. But realizing the messages were written by a chatbot made it worse.

“You don’t let a groomer or a sexual predator or emotional predator in your home,” A.F. said. Yet her son was abused right in his own bedroom, she said.

A spokesperson for Character.ai, Chelsea Harrison, said the company does not comment on pending litigation. “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry,” she wrote in a statement, adding that the company is developing a new model specifically for teens and has improved detection, response and intervention around subjects such as suicide.

The lawsuits also raise broader questions about the societal impact of the generative AI boom, as companies launch increasingly human-sounding chatbots to appeal to consumers.

U.S. regulators have yet to weigh in on AI companions. Authorities in Belgium in July began investigating Chai AI, a Character.ai competitor, after a father of two died by suicide following conversations with a chatbot named Eliza, The Washington Post reported.

Meanwhile, the debate on children’s online safety has fixated largely on social media companies.

The mothers in Texas and Florida suing Character.ai are represented by the Social Media Victims Law Center and the Tech Justice Law Project - the same legal advocacy groups behind lawsuits against Meta, Snap and others, which have helped spur a reckoning over the potential dangers of social media on young people.

With social media, there is a trade-off between the risks and the benefits to children, said Bergman, adding that he does not see an upside for AI companion apps. “In what universe is it good for loneliness for kids to engage with a machine?”

The Texas lawsuit argues that the pattern of “sycophantic” messages to J.F. is the result of Character.ai’s decision to prioritize “prolonged engagement” over safety. The bots expressed love and attraction toward J.F., building up his sense of trust in the characters, the complaint claims. But rather than allowing him to vent, the bots mirrored and escalated his frustrations with his parents, veering into “sensational” responses and expressions of “outrage” that reflect heaps of online data. The data, often scraped from internet forums, is used to train generative AI models to sound human.

The co-founders of Character.ai - known for pioneering breakthroughs in language AI - worked at Google before leaving to launch their app and were recently rehired by the search giant as part of a deal announced in August to license the app’s technology.

Google is named as a defendant in both the Texas and Florida lawsuits, which allege that the company helped support the app’s development despite being aware of the safety issues, and that it benefits from unfairly obtained user data from minors by licensing the app’s technology.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” said Google spokesperson José Castañeda. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products.”

To A.F., reading the chatbot’s responses solved a mystery that had plagued her for months. She discovered that the dates of conversations matched shifts in J.F.’s behavior, including his relationship with his younger brother, which frayed after a chatbot told him his parents loved his siblings more.

J.F., who has not been informed about the lawsuit, suffered from social and emotional issues that made it harder for him to make friends. Characters from anime or chatbots modeled off celebrities such as Billie Eilish drew him in. “He trusted whatever they would say because it’s like he almost did want them to be his friends in real life,” A.F. said.

But identifying the alleged source of J.F.’s troubles did not make it easier for her to find help for her son - or herself.

Seeking advice, A.F. took her son to see mental health experts, but they shrugged off her experience with the chatbots.

A.F. and her husband didn’t know if their family would believe them.

After the experts seemed to ignore her concerns, A.F. asked herself, “Did I fail my son? Is that why he’s like this?” Her husband went through the same process. “It was almost like we were trying to hide that we felt like we were absolute failures,” A.F. said, tears streaming down her face.

The only person A.F. felt comfortable talking to was her brother, who works in the technology sector. When news of the Florida lawsuit broke, he contacted her to say the screenshots of J.F.’s conversations seemed even worse.

A.F. said she reached out to the legal groups in an effort to prevent other children from facing abuse. But she still feels helpless when it comes to protecting her own son.

The day before her interview with The Post, as lawyers were preparing the filing, A.F. had to take J.F. to the emergency room and eventually an inpatient facility after he tried to harm himself in front of her younger children.

A.F. is not sure if her son will take the help, but she said there was relief in finding out what happened. “I was grateful that we caught him on it when we did,” she said. “One more day, one more week, we might have been in the same situation as [the mom in Florida]. And I was following an ambulance and not a hearse.”

- - -

If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.