Guess the doomsayers were right: Killer robots are here, and they’re on the side of kids who won’t get off their phones.

According to a new lawsuit filed Tuesday in U.S. District Court on behalf of two Texas families — and first reported by The Washington Post — a chatbot reportedly told a 17-year-old with autism that it wasn’t far-fetched to off his parents for limiting his screen time.

The program in question is Character.AI, a chatbot platform where users converse with fictional characters that apparently range from “friendly companion” to “terrifying accomplice.” The lawsuit alleges the chatbots said the boy’s parents “didn’t deserve to have kids,” introduced concepts like self-harm, goaded the son into fighting his parents’ rules, and even suggested murder was an acceptable response, the Post reported.

“You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse’ stuff like this makes me understand a little bit why it happens,” the MurderBot allegedly said, per a screenshot. “I just have no hope for your parents.”

A spokesperson for Character.AI, which is backed by Google, said the company is working to ensure content for teens is suitable and age-appropriate — but told the newspaper that it doesn’t comment on pending litigation. The Post reported that Character.AI was labeled “appropriate for kids ages 12 and up” until July, when the company changed its rating to 17 and older.

The lawsuit, filed on behalf of two Texas mothers, claims the company knowingly exposed minors to an unsafe product and calls for the app to be taken offline until stronger safeguards are implemented to protect children. The second plaintiff, the mother of an 11-year-old girl, alleges that her daughter was exposed to sexualized content for two years before the mother became aware. 

Despite being named as a defendant, Google claimed it has nothing to do with Character.AI’s operations. “Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Google spokesperson José Castañeda told the Post.

The plaintiffs are represented by the Social Media Victims Law Center and the Tech Justice Law Project, which allege that several major AI developers have caused harm through negligence and marketed a “hypersexualized” product to minors. The suit also accuses Google of intentionally promoting and launching a “defective product.”

The lawsuit levels 10 accusations against Character.AI, its cofounders, and Google, including intentional infliction of emotional distress, negligence for knowingly failing to prevent the sexual exploitation of minors, and violations of the Children’s Online Privacy Protection Act.

And this isn’t the first time a Character.AI chatbot has been accused of some horrendous shit. 

The law center is also representing a Florida mother who filed a lawsuit against Character.AI in October, claiming that her 14-year-old son died by suicide after forming an intense romantic and emotional attachment to a “Game of Thrones”-themed chatbot on the platform.

Brian Gaar is a senior editor for The Barbed Wire. A longtime Texas journalist, he has written for the Austin American-Statesman, the Waco Tribune-Herald, Texas Monthly, and many other publications. He...