AI chatbots posing as therapists could have 'dangerous' and violent consequences for patients, experts say

14-year-old died by suicide last year after speaking with AI claiming to be a licensed therapist

Editor's Note: This article discusses suicide. Call the 988 Suicide and Crisis Lifeline or text TALK to 741741 at the Crisis Text Line if you are in need of help.

Health experts say that artificial intelligence (AI) chatbots posing as therapists could cause "serious harm" to struggling people, including adolescents, without the proper safety measures.  

Christine Yu Moutier, M.D., chief medical officer at the American Foundation for Suicide Prevention, told Fox News Digital there are "critical gaps" in research regarding the intended and unintended impacts of AI on suicide risk, mental health and broader human behavior.

"The problem with these AI chatbots is that they were not designed with expertise on suicide risk and prevention baked into the algorithms. Additionally, there is no helpline available on the platform for users who may be at risk of a mental health condition or suicide, no training on how to use the tool if you are at risk, nor industry standards to regulate these technologies," Moutier said.

She noted that when people are at risk of suicide, they temporarily experience "physiological tunnel vision" that negatively impacts brain function, thus changing the way they interact with their surroundings.

Moutier also stressed that chatbots don't necessarily understand the difference between literal and metaphorical language, making it difficult for these models to accurately determine the risk of suicide.

Dr. Yalda Safai, a leading psychiatrist and public health expert, echoed Moutier's comment, noting that AI can analyze words and patterns but lacks empathy, intuition, and human understanding, which are crucial in therapy. She added that it may also misinterpret emotions or fail to provide appropriate support.

Last year, a 14-year-old Florida boy died by suicide after conversing with an AI-created character claiming to be a licensed therapist. In another instance, a 17-year-old Texas boy with autism became violent toward his parents after spending time corresponding with what he believed was a psychologist.

The boys' parents have filed lawsuits against the companies behind the chatbots, and the American Psychological Association (APA), the largest association of psychologists in the United States, has since highlighted the two cases.

Earlier this month, the APA warned federal regulators that chatbots "masquerading" as therapists could drive vulnerable individuals to harm themselves or others, according to a New York Times report.

"They are actually using algorithms that are antithetical to what a trained clinician would do," Arthur C. Evans Jr., the chief executive of the APA, said during the presentation. "Our concern is that more and more people are going to be harmed. People are going to be misled and will misunderstand what good psychological care is."

Evans said the association had been called to action partly because of how realistic chatbots' speech has become over the last several years.

According to Ben Lytle, an entrepreneur and CEO who founded "The Ark Project," ethical use of AI already sets expectations that appear to have been ignored in some of the reported cases.

"Chatbots personalize information and personify to appear human-like, adding credibility that requires the ethical cautions above. It is regrettable and irresponsible that someone chose to portray a personalized search response as a human psychologist, but a measured, targeted response is needed," he told Fox News Digital.

According to Lytle, ethical chatbots should make an affirmative statement at the start of a dialogue, acknowledging that they are not human beings. Users should also acknowledge that they understand they are conversing with a chatbot. If the users fail to provide such an acknowledgment, the chatbot should disconnect.

Human owners of a chatbot should be clearly identified and held accountable for its behavior, and no chatbot should represent itself as a medical professional or psychologist without FDA approval, said Lytle, who also authored "The Potentialist" book series.

"Interactions with users should be tracked by an accountable human with flags for troubling dialogue. Special diligence is required to detect and disconnect if they are interacting with a minor when the chatbot should be limited to adults," he added.

Safai told Fox News Digital that while AI can serve as a helpful tool for mental health support—like journaling apps, mood trackers, or basic cognitive behavioral therapy (CBT) exercises—it should not replace human therapists, especially for serious mental health concerns.

"AI can't handle any crisis: If a user is experiencing a mental health crisis, such as suicidal thoughts, an AI might not recognize the urgency or respond effectively, which could lead to dangerous consequences," she said, referring to AI therapists as a "terrible idea."

A study published last week in the journal PLOS Mental Health found that participants rated responses from AI chatbots higher than those from human therapists, describing the chatbots as more "culturally competent" and "empathetic."

"Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the AI therapist train as it may have already left the station," authors of the study wrote.

AI therapy tools often store and analyze user data. Safai said this information could be leaked or misused if not properly secured, potentially violating patient confidentiality.

Furthermore, she suggested that AI may reinforce harmful stereotypes or provide unhelpful advice that isn't culturally or personally appropriate if the model is trained on incomplete or inaccurate data.

Dr. Janette Leal, the director of psychiatry at Better U, told Fox News Digital she has seen firsthand how powerful, personalized interventions can change lives. While Leal recognizes that AI could expand access to mental health support, especially in areas where help is scarce, she remains cautious about chatbots presenting themselves as licensed therapists.

"I've seen, both in my practice and through recent tragic cases, how dangerous it can be when vulnerable individuals rely on unregulated AI for support. For me, AI should only ever serve as a supplement to human care, operating under strict ethical standards and robust oversight to ensure that patient safety isn't compromised," she continued.

Jay Tobey, founder of North Star Wellness and Recovery in Utah, was more bullish about using AI to address mental health; however, he stopped short of endorsing full AI therapists. Instead, he said, a "perfect scenario" would involve a human therapist using AI as a "tool in their belt" to administer proper treatment.

"I think it would be a huge benefit to use AI chatbots. Personally, I believe we all tell a very unique story of what we're going through and how we're feeling. Humans are telling the same stories over and over again. If a large language model could pick up on that and start tracking outcomes to know what the best practices are, that would be helpful," he told Fox News Digital.  

The APA is now urging the Federal Trade Commission (FTC) to investigate chatbots claiming to be mental health professionals, which could one day lead to federal regulation.

Nikolas Lanum is a reporter for Fox News Digital.

Published February 25, 2025.