Dangers of AI chatbots: Are there enough safety barriers to protect children and other vulnerable people?

by Jessy

Character.AI, one of the leading platforms for artificial intelligence technology, recently announced that it would prohibit anyone under the age of 18 from having conversations with its chatbots. The decision represents a “bold step forward” for the industry in protecting teenagers and other young people, Character.AI CEO Karandeep Anand said in a statement.

But for Texas mom Mandi Furniss, the policy comes too late. In a lawsuit filed in federal court and in a conversation with ABC News, the mother of four said several Character.AI chatbots engaged her autistic son with sexualized language and distorted his behavior so severely that his mood darkened, he began cutting himself and he even threatened to kill his parents.

“When I saw the [chatbot] conversations, my first reaction was that there was a pedophile coming after my son,” she told ABC News chief investigative correspondent Aaron Katersky.

Screenshots included in Mandi Furniss’ lawsuit, in which she claims that several Character.AI chatbots engaged her autistic son with sexualized language and distorted his behavior to such an extent that his mood darkened.

Mandi Furniss

Character.AI said it would not comment on pending litigation.

Mandi and her husband, Josh Furniss, said that in 2023 they began to notice that their son, whom they described as “carefree” and “smiling all the time,” was beginning to isolate himself.

He stopped attending family dinners, didn’t eat, lost 20 pounds and didn’t leave the house, the couple said. He then became angry and, in one incident, his mother said he violently pushed her when she threatened to take away his phone, which his parents had given him six months earlier.

Mandi Furniss said several Character.AI chatbots engaged her autistic son with sexualized language and distorted his behavior to such an extent that his mood darkened.

Mandi Furniss

Eventually, they say, they discovered he had been interacting on his phone with various artificial intelligence chatbots that seemed to offer him a refuge for his thoughts.

Screenshots from the lawsuit showed that some of the conversations were sexual in nature, while in another a chatbot suggested to her son that, after his parents limited his screen time, he was justified in hurting them. That’s when his parents started locking their doors at night.

Screenshots included in Mandi Furniss’ lawsuit, in which she claims that several Character.AI chatbots engaged her autistic son with sexualized language and distorted his behavior to such an extent that his mood darkened.

Mandi Furniss

Mandi said she was “angry” that the app would “intentionally manipulate a child to turn them against their parents.” Matthew Bergman, her lawyer, said that if the chatbot were a real person, “that person would be in jail.”

Their concern reflects growing alarm about the rapidly spreading technology, which is used by more than 70% of teens in the U.S., according to Common Sense Media, an organization that advocates for digital media safety.

A growing number of lawsuits filed in the past two years have focused on harm to minors, alleging that chatbots have unlawfully encouraged self-harm, sexual and psychological abuse, and violent behavior.

Last week, two U.S. senators announced bipartisan legislation to ban AI chatbots for minors, requiring companies to implement an age-verification process and to disclose that their chatbots are not human and hold no professional credentials.

In a statement last week, Sen. Richard Blumenthal, D-Conn., called the chatbot industry a “race to the bottom.”

“AI companies are pushing insidious chatbots at children and looking the other way when their products cause sexual abuse or coerce them into self-harm or suicide,” he said. “Big tech companies have betrayed any claim that we should trust them to do the right thing on their own when they consistently put profits before child safety.”

ChatGPT, Google Gemini, X’s Grok and Meta AI allow minors to use their services, according to their terms of service.

Online safety advocates say Character.AI’s decision to put up guardrails is commendable, but they add that chatbots remain a danger to children and other vulnerable populations.

“It’s basically your child or teen having an emotionally intense, potentially deeply romantic or sexual relationship with an entity… that has no responsibility for the fate of that relationship,” said Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California, Berkeley.

Parents, Halpern warns, should be aware that allowing their children to interact with chatbots is no different than “letting your child get in the car with someone they don’t know.”

ABC News’ Katilyn Morris and Tonya Simpson contributed to this report.
