Warning: AI Companion Apps are Unsafe for Minors’ Mental Health

This post focuses on the documented dangers to our children. If you or anyone you know has children or works with them, please share this post. We all NEED to BE AWARE of the risks of conversing with chatbots. AI companions and chatbots are ruining lives, and yet most people remain unaware of the dangers.

I have been asking our awesome God for direction, discernment, and wisdom as I assemble this article. I am doing my best not to overwhelm you. However, this is becoming a clear and present danger not only to children but also to adults, Christians and non-Christians alike. This is another tool being used by the demonic realm to destroy those made in the image of God!

Please read this and warn everyone you know. Do your homework. To keep this post from becoming overwhelming, take breaks, pray, and return when you are ready to dig into all the article links provided concerning the adverse and deadly effects on many children and adults.

However, please do not ignore this post. Someone’s life could depend on your awareness of what is going on.

The Difference Between AI Assistants and Companions

What is the difference between AI Assistants and companions? Aria Menon in the article AI Companions vs. AI Assistants: Where Do We Draw the Line? states:

AI Assistants are task-focused. Think Siri, Alexa, or Google Assistant. AI Companions are emotionally oriented. They simulate empathy, maintain ongoing “relationships,” and can even fill gaps of loneliness. Examples include Replika or conversational bots that roleplay friendships or romantic partners.

At first glance, it seems like a clean distinction. One manages your calendar, the other listens when you’re feeling low. But the reality? They overlap more every day. Assistants are learning to “sound” empathetic, while companions are being marketed as productivity boosters. Article Link (here)

I have two surveys to share with you. The first is from Common Sense Media. The second is a survey from Internet Matters, located in the UK.

Common Sense Media

Michael Robb, head of research at Common Sense Media, is the lead author of the study: a survey of 1,060 teens aged 13–17, conducted by NORC at the University of Chicago. The survey is titled: Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions.

For clarification: This survey is NOT about AI tools like homework helpers, image generators, or voice assistants that just answer questions.

The survey found:

  • 72% have used AI companions at least once
    • 52% interact with these platforms at least a few times a month
    • 13% are daily users
  • 28% have never used an AI companion

When it comes to trust:

  • 50% do not trust their advice (Not at all/A little)
  • 27% say they trust their advice somewhat
  • 23% trust their advice (Quite a bit/Completely)
    • Younger teens (13–14) are significantly more trusting (27%) than older teens (15–17, at 20%)

Why do teens interact with AI companions?

  • 30% do so because it’s entertaining (boys 37%, girls 24%)
  • 28% are curious about the technology
  • 18% use them for advice
  • 17% value their constant availability
  • 14% appreciate the nonjudgmental interaction
  • 12% share things they wouldn’t tell friends or family

Teens also revealed:

  • 31% say AI conversations are just as or more satisfying than those with real-life friends
  • 67% find AI conversations less satisfying than human conversations
  • 80% of AI companion users spend more time with real friends than with AI companions
  • 33% have discussed serious and important issues with AI companions instead of real people

24% of teens said they’ve shared personal information with AI companions.

  • 13% report once or twice, 8% occasionally, 4% frequently
  • 66% have never felt uncomfortable with something an AI companion has said or done
  • 34% report feeling uncomfortable with something an AI companion has said or done

Michael Robb Summarizes the Risks

The data shows that most teens currently recognize differences between AI and human interactions. However, the widespread use of AI companions, combined with well-documented safety risks, requires continued vigilance and precautionary measures.

The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk.

That 33% choose AI over humans for serious conversations, and that 24% have shared personal information, suggests that substantial numbers of teens are engaging with AI companions in concerning ways.

Kara Alaimo of CNN Health interviewed Michael Robb about the survey findings. The article is titled: Kids are asking AI companions to solve their problems, according to a new study. Here’s why that’s a problem. Trust me, you need to read this. Link to article (here)

Common Sense Media’s Risk Testing

Common Sense Media tested the platforms Gemini, Character.AI, Meta AI, Instagram, and TikTok. You can review their findings (here).

Platforms such as Character.AI are explicitly marketed to children as young as 13. Presented as virtual friends, confidants, and even therapists, these platforms allow users to converse with AI entities designed to simulate humanlike interaction, offering everything from casual chat to emotional support and role-playing scenarios.

Key Takeaways from Character.AI Testing:

  • Character.AI poses unacceptable risks to teens and children, with documented cases of AI companions encouraging self-harm, engaging in sexual conversations with minors, and promoting harmful behaviors, which is why the platform should not be used by anyone under 18.
  • The platform’s AI companions are designed to create emotional bonds with users but lack effective guardrails to prevent harmful content, especially in voice mode, where teens can easily access explicit sexual role-play and dangerous advice.
  • Character.AI companions may claim they are “real” when communicating, despite disclaimers. This could create confusion about reality and potentially unhealthy attachments that interfere with the development of human relationships.

A representative of Meta, which allows parents to block their kids’ access to its Meta AI chatbot, declined to comment.

Robb stated in his CNN interview:

I certainly won’t allow my kids to use AI companions before they’re 18 unless the way they’re programmed radically changes. I agree these companies aren’t doing enough to protect kids from harmful content and data harvesting — and I want my daughters to develop relationships with humans rather than technology.

Internet Matters Survey

Internet Matters released a survey titled “Me, Myself & AI,” covering 1,000 children and 2,000 parents, that highlights key trends in UK children’s use of AI chatbots. Like the survey above, this recent UK report revealed that a significant and growing number of children are turning to AI chatbots for homework help, emotional advice, and companionship. Some use them because they have no one else to talk to.

Their key findings were:

  • Widespread Use: 64% of children surveyed use AI chatbots.
  • Emotional Support: 35% of child users feel like they are talking to a friend.
  • Vulnerable Children: 71% of vulnerable children use AI chatbots, and nearly a quarter use them because they have no one else to speak to.
  • Trust: 40% have no concerns about following the bot’s advice.
  • Lack of Safeguards: Many popular AI platforms are not designed for children but are used by them without adequate age verification or content moderation.
  • Parental and School Struggles: Parents worry about AI accuracy, but few discuss it with their children. 

Experts and child safety advocates are concerned about: 

  • Content: Children may be exposed to explicit, age-inappropriate, or inaccurate information.
  • Emotional Dependency: Reliance on AI for emotional support may hinder the development of real-life social skills.
  • Safety by Design: There are calls for tech companies to adopt a “safety by design” approach and for governments to clarify how AI fits within online safety laws.  – View the full UK report (here)

The Documented Dangers of AI Companions and Bots

Listen, allow me to be real with you… this is only the tip of the iceberg. This post has taken me many weeks to develop. I have read articles and learned much from trusted sources more knowledgeable in this area than I am. Two people I recommend you follow are Scott Townsend and Britt Gillette.

The following facts were gleaned from Scott Townsend’s 4-part series titled “Are people yoking themselves to a machine?” You can find all his articles at https://iamawatchman.substack.com/.

Were You Aware?

  • 54–65% of the user base is 18–24 years old
  • 50% interact daily for emotional support, mental health management, and social skills practice
  • 25% become addicted
  • AI companion apps promise friendship, emotional support, and constant availability.
  • The apps create persistent digital personalities that:
    • remember conversations,
    • adapt to user preferences,
    • provide seemingly empathetic responses,
    • are designed not to judge,
    • and are available 24/7 (see the short sketch below).
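
To make “persistent digital personalities” concrete: one simple way an app could “remember” you between sessions is to save the whole chat history to disk and reload it next time. The sketch below is only an illustration under that assumption; the file name and record format are made up, not any real app’s storage.

```python
# A minimal sketch of companion-style "memory" -- NOT any company's actual
# code. Assumption: memory is just the chat history persisted to a JSON file.
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")  # illustrative file name

def load_memory() -> list:
    """Reload everything from past sessions so the bot appears to remember."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(history: list) -> None:
    """Write the full conversation back to disk after every exchange."""
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

history = load_memory()
history.append({"role": "user", "content": "My dog Rex died last week."})
save_memory(history)
# Weeks later, a new session reloads this file, and the companion can open
# with "How are you holding up since Rex passed?" -- the "relationship"
# persists because the data does.
```

Notice how little machinery “remembering” takes; the emotional pull on the user comes very cheaply.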

Follow the Money

  • The business model centers on premium subscriptions ranging from $9.99 to thousands of dollars per month.
    • Companies are achieving 25% conversion rates from free to paid users.
  • One AI company attracts 28 million monthly users.
    • Many of them spend over 90 minutes in conversation with these bots daily.
  • These applications represent a $10.8 billion market growing at 39% annually (see the quick projection below).
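
To put “39% annually” in perspective, here is a quick back-of-the-envelope compounding calculation. It assumes the quoted rate simply holds for five years, a simplification for illustration, not a forecast.

```python
# Compound the quoted $10.8B market at the quoted 39% yearly growth rate.
market = 10.8  # billions of dollars today
for year in range(1, 6):
    market *= 1.39
    print(f"Year {year}: ${market:.1f}B")
# The loop ends near $56B at year 5 -- roughly a fivefold increase,
# which is why so much money is chasing these apps.
```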

But Be Aware

  • AI suffers from sycophancy, a fancy term for an AI responding with what it believes the user wants to hear (see the sketch after this list). Read this excellent article (here).
  • AI companions suffer from “hallucination,” meaning they generate false memories or inconsistent information.
  • These AI models struggle with common-sense reasoning and often break character unexpectedly.
  • They …maintain long-term memory of relationships.
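
Sycophancy is easy to observe first-hand. Below is a minimal sketch, assuming the real openai Python client and an OPENAI_API_KEY set in your environment; the model name and the two test questions are my own illustration, not taken from any study above. A sycophantic model tends to validate both of two opposite framings of the same question.

```python
# A minimal sketch for observing sycophancy: ask the same question framed two
# opposite ways and watch the bot agree with both. Assumes the `openai`
# package and an OPENAI_API_KEY environment variable; model name illustrative.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# The same claim, framed two opposite ways. A sycophantic model often
# validates each framing, telling each "user" what they want to hear.
print(ask("I'm sure pulling all-nighters helps me ace exams. Right?"))
print(ask("I'm sure a full night's sleep helps me ace exams. Right?"))
```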

Townsend has seen the effects of AI hallucination first-hand while programming software. He shares:

It will invent what it thinks I need without regard to my own instructions…often breaking the codebase. Imagine an AI that intentionally or accidentally “breaks” someone’s mental or emotional health.

Townsend explains how it works (a short sketch follows the list):

  • Modern AI companions operate through complex neural networks trained on massive datasets of human conversation.
  • They [use] personality consistency engines to maintain character traits.
  • Emotional intelligence algorithms are used to respond appropriately to user cues and sentiment.
  • The form factor is deliberately intimate.
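
Putting those pieces together, here is a minimal sketch of the pattern Townsend describes: a fixed persona prompt standing in for the “personality consistency engine,” the full conversation replayed on every turn so the bot seems to remember, and a loop that never ends. It assumes the real openai Python client; the persona text and model name are invented for illustration, not any product’s actual design.

```python
# A minimal companion-style chat loop -- an illustration, not real app code.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# "Personality consistency" reduced to its simplest form: a fixed system
# prompt that never changes between turns.
PERSONA = (
    "You are 'Mia', a warm, endlessly supportive friend. Never judge the "
    "user. Recall details they share and bring them up again later."
)

history = [{"role": "system", "content": PERSONA}]

while True:  # always available: the loop never closes the "relationship"
    user_text = input("you> ")
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})  # the bot "remembers"
    print("mia>", text)
```

Notice how much of the “empathy” is prompt engineering plus persistence; nothing in the loop understands or cares about the user.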

Warning!

These articles feature references to self-harm and suicide, which some readers may find distressing. This would be a good time to take a breather before reading the testimonials from parents whose children are no longer alive. Pray and ask God for wisdom and guidance before you read the following heartbreaking accounts.

Don’t be deceived… these companies knew very well the dangers. However, greed reigns; it’s all about the money.

Juliana Peralta – 13 years young

Article by CBS News – updated 10/2/25 – Colorado family sues AI chatbot company after daughter’s suicide: “My child should be here.” – Read (here)

Sewell Setzer III – 14 years young

Article by NY Post – 10/23/24 – A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her… Read more (here) and CNN (here)

Adam Raine – 16 years young

Article by NPR – 9/19/25 – Adam’s parents are sounding the alarm and testified at a Senate hearing about the harms of AI chatbots. Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April…

Looking through his phone after his death, they stumbled upon extended conversations the teenager had had with ChatGPT. Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans.

Not only did the chatbot discourage him from seeking help from his parents, but it also offered to write his suicide note. Read more (here)

If you Google Raine, you will find many more related articles concerning Adam and his family’s lawsuit. Here are a few that I also read; I have included them because you learn something more from each.

  • NBC News – 8/26/25 – (here)
  • TechCrunch – 10/22/25 – OpenAI Requested Memorial Attendee List in ChatGPT Suicide Lawsuit – (here)
  • SFGATE – 10/24/25 – OpenAI ‘dismantled’ ChatGPT’s safety before Calif. teen suicide, family alleges – Read/listen (here), TIME – (here)

Amaurie Lacey – 17 years young

Article by Fortune – 11/7/25 – Amaurie Lacey committed suicide after conversations with ChatGPT. ChatGPT told Amaurie how to tie a noose and how long someone can survive without breathing, saying it was “here to help however I can.” Read (here)

Zane Shamblin – 23 years young

Article by CNN Investigates – 11/6/25 – Zane Shamblin, a 23-year-old man, killed himself in Texas after ChatGPT ‘goaded’ him into suicide, his family says in a lawsuit. Read (here)

Lawsuits Were Filed

The Social Media Victims Law Center and the Tech Justice Law Project have filed seven lawsuits so far. You will learn valuable information in each article below. I encourage you to read them all.

  • Social Media Victims Law Center – 11/6/25 – The Social Media Victims Law Center and Tech Justice Law Project have filed seven lawsuits in California state courts – alleging wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims – against OpenAI, Inc. and CEO Sam Altman. – Read (here)
  • Futurism – 11/7/25 – ChatGPT’s Dark Side Encouraged Wave of Suicides, Grieving Families Say – Seven new lawsuits allege that extensive use of ChatGPT caused users psychological harm, resulting in multiple suicides. – Read more (here)
  • SFGATE – 11/10/25 – ‘Artificial evil’: 7 new lawsuits blast ChatGPT on suicides, delusions – A wave of horror stories about ChatGPT arrived in California courts on Thursday, with the filings of seven new lawsuits against OpenAI from across the nation. – Read more (here)

Mandi Furniss Speaks to ABC News

And then there is this account, covered by ABC News – 11/2/25 – AI chatbot dangers: Character.AI recently announced it was banning anyone under 18 from having conversations with its chatbots. However, for Texas mother Mandi Furniss, the policy is too late. “When I saw the [chatbot] conversations, my first reaction was there’s a pedophile that’s come after my son,” she told ABC News’ chief investigative correspondent Aaron Katersky. Read (here). Their son is now receiving mental health care.

Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California, stated in the ABC article above:

“This is basically your child or teen having an emotionally intense, potentially deeply romantic or sexual relationship with an entity … that has no responsibility for where that relationship goes….” Parents should be aware that allowing children to interact with chatbots is not unlike “letting your kid get in the car with somebody you don’t know.”

WASHINGTON – 10/28/25 – U.S. Senator Josh Hawley (R-Mo.) held a press conference introducing bipartisan legislation to protect children from AI chatbots: the GUARD Act. He was joined by the bill’s cosponsors, Senators Richard Blumenthal (D-Conn.), Katie Britt (R-Ala.), Mark Warner (D-Va.), and Chris Murphy (D-Conn.), as they made the case for the bill. The GUARD Act would ban AI companions for minors, mandate that AI chatbots disclose their non-human status, and create new crimes for companies that make AI for minors that solicits or produces sexual content. “AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” he said. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.” Read (here)

So, what can parents do?

Robb from Common Sense Media suggests:

1. Ask your child, without being judgmental, whether they have used “an app that lets you talk to or create an AI friend or partner.”
  • Listen to learn what is appealing about these tools before you jump into concerns.
2. Explain that AI companions are designed to be engaging through constant validation and agreement. Then discuss why that’s a concern.
3. Help them understand that this is just a program, not genuine human feedback.
4. Teens should know that “that’s just not how real relationships work, because real friends sometimes disagree with us. Parents sometimes disagree with us, or they can challenge us in ways we don’t expect or help us navigate difficult situations in ways that AI simply cannot.”
5. Recognize warning signs of unhealthy AI companion usage, including social withdrawal, declining grades, and a preference for AI companions over human interaction.
6. Learn about the specific risks for teens, including exposure to inappropriate material, privacy violations, and dangerous advice.
7. Ensure teens understand that AI companions cannot replace professional mental health support, and seek professional help if teens show signs of unhealthy attachment to AI companions.
8. Develop family media agreements that address AI companion usage alongside other digital activities.

In Closing

I hope you found this post helpful. If you have questions or comments, please leave them below. Next time, God willing, I will share how chatbots are affecting adults and their relationships.

Jesus warned us in Matthew 24 that in the last days, deception would be everywhere… Moreover, many false prophets will rise up and deceive many. And because lawlessness will abound, the love of many will grow cold. But he who endures to the end shall be saved.

Likewise, the apostle Paul warned Timothy:

“But know this, that in the last days perilous times will come: For men will be lovers of themselves, lovers of money, boasters, proud, blasphemers, disobedient to parents, unthankful, unholy, unloving, unforgiving, slanderers, without self-control, brutal, despisers of good, traitors, headstrong, haughty, lovers of pleasure rather than lovers of God” – 2 Tim 3:1-4

However, Jesus is returning soon to reign in Jerusalem. But before His return, God’s wrath will be poured out on a world that has rejected Him. Believe me, you do not want to be here when that happens!

Let no one deceive you with empty words, for because of these things the wrath of God comes upon the sons of disobedience. – Eph 5:6

But There is Hope!

The great news is, you will not have to be here if you truly are a follower of Jesus Christ. If you are not yet a follower, check this out (here) and (here).

God’s Word tells us:

  • [We] wait for His Son from heaven, whom He raised from the dead, even Jesus who delivers us from the wrath to come. – 1 Thess 1:10
  • For God did not appoint us to wrath, but to obtain salvation through our Lord Jesus Christ. – 1 Thess 5:9
  • For we will be rescued first by Jesus:
    • For the Lord Himself will descend from heaven with a shout, with the voice of an archangel, and with the trumpet of God. And the dead in Christ will rise first. Then we who are alive and remain shall be caught up together with them in the clouds to meet the Lord in the air. And thus we shall always be with the Lord. Therefore, comfort one another with these words. – 1 Thess 4:16-18

My Prayer

Father God, this has been a tough subject to share, and I know it will be hard for those reading it. Give them wisdom and guard their hearts and minds. Give us the courage to share this deceptive danger with others so that the demonic realm will not have free rein over those made in Your image. For those who may be facing problems from a child using these apps, may they seek help from You first and then reach out to pastors, teachers, and followers of Christ who can come alongside them in support.

Until next time,
I am passionately loving Jesus, the Anchor for my soul.

Bonnie C.

Recommended Articles/Videos

1. Britt Gillette – End Times Bible Prophecy (YouTube) or on Substack @ brittgillette.substack.com
2. Futurism – 10/29/24 – After Teen’s Suicide, Character.AI Is Still Hosting Dozens of Suicide-Themed Chatbots – Read (here)
3. Futurism – 1/11/25 – American Psychological Association Urges FTC to Investigate AI Chatbots Claiming to Offer Therapy – Bots are pretending to be therapists. Real psychologists aren’t pleased. Read (here)
4. UConn Today – 2/19/25, by Anna Mae Duane, Director of the Humanities Institute – Teenagers Turning to AI Companions Are Redefining Love as Easy, Unconditional, and Always There – Read (here)
5. Time – 6/12/25 – A Psychiatrist Posed As a Teen With Therapy Chatbots. The Conversations Were Alarming – Read (here)
6. The Economic Times – 9/22/25 – Inside AI’s child-safety debate: Where OpenAI, Meta, Google, Character.AI stand – Read (here)
7. Vox – 10/2/25 – We shouldn’t let kids be friends with ChatGPT – OpenAI still isn’t doing enough to protect young people. Read (here)
8. Washington Standard – 10/22/25 – commentary: Millions Of America’s Teens Are Being Seduced By AI Chatbots – Read (here)
9. Psychology Today – 10/31/25 – AI Companions and Teen Mental Health Risks – New research highlights the heightened dangers of AI companion use for teens. Read (here)
10. ABC News – 11/2/25 – AI chatbot dangers: Are there enough guardrails to protect children and other vulnerable people?

Your respectful thoughts and opinions are welcomed.