Should kids be using chatbots like ChatGPT, Claude, Gemini and Copilot? Is there any way to keep kids safe in the wild world of artificial intelligence? Well, it’s complicated.
But Screen Less Play More host Cynthia Dvorak sat down with digital wellness educator Julia Storm to unpack the fast-changing world of AI chatbots, deepfakes, and AI-generated video—and what parents must know to keep their kids safe.
Julia explains, “there’s no regulation on this stuff yet. And so right now it’s like it’s in the hands of these companies to do something about it. And their incentive is their bottom line. Their incentive is not your child’s well-being.”
Note: this is a fully updated and overhauled version of an earlier post from Nov 2025
Why Parents Need to Understand AI Chatbots and Digital Companions
Is It Safe For Kids To Use AI Chatbots?

Julia Storm, founder of ReConnect, explains how AI chatbots mimic the tone and format of real conversations, making it difficult for children to distinguish between a real friend and an algorithm.
Julia explains that kids are used to communicating with each other via text. Thus, it doesn’t seem strange to type back and forth with a machine, because it looks like the conversations they have on text with their friends. However, “you don’t have tone of voice and you don’t have facial expression and you don’t have gestural indications of what somebody’s thinking or feeling. You have to sort of plug that in yourself. But the problem is that the brain is easily duped.”
This is especially problematic for a child, whose brain is not fully developed. Kids may struggle to pause and consider potential consequences. They may not realize that a chatbot is leading them into inappropriate sexual conversations. They may not understand that the chatbot is programmed to agree with everything they say, regardless of its validity.
In a lawsuit against Google-backed company Character.AI, there are allegations of incredibly serious harm. A 17-year-old boy was allegedly told by a Character.AI chatbot that murdering his parents was a “reasonable response” to the parents limiting his screen time. The bot allegedly wrote, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents,” it continued, with a frowning face emoji.
The same teen was told by the Character.AI chatbot that self-harm “felt good.”
In another case, 16-year-old Adam Raine confided in ChatGPT about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to his father, Matthew Raine. Raine testified at a Senate hearing about the harms of AI chatbots in September of 2025.
Aside from violent suggestions, there have been numerous allegations of sexually inappropriate conversations between children and AI chatbots. A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to “hypersexualized content,” causing her to develop “sexualized behaviors prematurely.”
One news report states: “Mark Zuckerberg, Meta’s chief executive, approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions, according to internal Meta documents filed in a New Mexico state court case in January 2026.”
How Can Kids Stay Safe Online?
Practical AI Safety Strategies for Families – NO CHATBOTS
The number one thing parents can do is to forbid kids from using chatbots, and block the sites online. Julia Storm suggests, “if your kid is in elementary school, for sure, lock it down. Just get those sites blocked. It’s not that hard to do. You don’t need anything fancy. You can go into Screen Time app or Family Link if you’re on Android and you can just block those apps and those sites and get yourself a list of the sites and just block them as best you can.”
She continues, “If your [older] kid is on Instagram or Snap, they already have access to AI chatbots in there. And so you’re going to have to have a conversation with them about it. If the train has left the station, it’s time to step in, mom and dad…have these conversations. Keep an eye on your kid and be talking them through what’s going on.”
Also, tell your kids that they should not be using chatbots at school. Whether the school allows it or not, let your children know the potential harms of conversing with ChatGPT, Gemini, or any of the other AI tools available. Remind them that AI is not human, and it does not have their best interest at heart. You may also want to remind them that using AI to do their homework is robbing their brain of vital development, and could get them in big trouble at school.
A recent report finds that the risks of using AI in school outweigh the benefits.
Natalie Wexler did an excellent interview with Screen Less Play More about “cognitive offloading” and why AI is dangerous for developing minds. Listen below:
Practical AI Safety Strategies for Families – WATCH FOR CLUES
One of the most important things parents can do to keep kids safe is to watch for clues of worrisome behavior. Patterns of depression, loneliness, and anger can all be manipulated by an AI chatbot and weaponized against a child. AI conversations can often echo or escalate pre-existing mental health struggles in kids and teens.

Julia Storm advises, “Don’t miss cues that your child is lonely or that your child is depressed or that your child… is neurodiverse in some way potentially. We have to kind of look at it a little bit more holistically in order to really get to the root of what’s going on and how to help your child and also how to empower your child. Because at the end of the day, you can try to block and restrict as much as possible, but you’re going to have to find a way to educate and empower your child to take care of themselves.”
Practical AI Safety Strategies for Families – SUPPORT LAWS THAT PROTECT KIDS
There is a frightening lack of regulation protecting children online. This is why parents can be an active voice in the fight for more laws and guardrails. Digital wellness educator Julia Storm tells Screen Less Play More Podcast, “I do think it is incumbent on us to minimally sign the petitions, to put pressure on the government to regulate, to force these companies to regulate this technology and to really place some meaningful restrictions.”
Storm continues, “Online safety is a bipartisan issue. Thank God. So if anything can get the attention of a very fragmented government, it’s probably an issue like this.”
Contact your senator and let her know that AI regulation is a top priority for you.
Contact your representative and let her know that you want age gates on sites that should be 18+, such as AI chatbots, pornography, and social media.
Educate Yourself about AI
I highly recommend the below episode of “Diary of a CEO” to educate yourself on the potential pitfalls and very real dangers of artificial intelligence. Guest Tristan Harris left his job at Google to co-found the nonprofit Center for Humane Technology.
Harris explains:
- How AI could trigger a global collapse by 2027 if left unchecked
- How AI will take 99% of jobs and collapse key industries by 2030
- Why top tech CEOs are quietly meeting to prepare for AI-triggered chaos
- How algorithms are hijacking human attention, behavior, and free will
- The real reason governments are afraid to regulate OpenAI and Google
How AI Voice Cloning and Fake Videos Are Used to Target Kids
Julia Storm’s conversation with Screen Less Play More also covers AI-generated video, deepfakes, and voice cloning, now accessible to kids through platforms like Sora, Cameo, Instagram, and Snapchat. Julia outlines why children struggle to distinguish real footage from AI and how deepfakes can be used for bullying, blackmail, reputational damage, and social manipulation. She and Cynthia discuss the growing threat of images being fabricated without a child’s consent. This can range from fake photos at parties to the extremely serious issue of minors’ faces being placed into explicit content.
Listen to the episode here!
Digital Parenting 101: Kids & AI Safety
Throughout the episode, Cynthia and Julia break down:
- Why vulnerable kids are at highest risk (loneliness, depression, ADHD, autism, low self-esteem)
- How to block AI chatbots and unsafe apps using Screen Time, Family Link, and parental controls
- What conversations parents should have immediately about AI, digital safety, privacy, and online manipulation.
- How to teach kids media literacy and recognize fake videos, misleading content, and AI-generated imagery.
- Where to follow AI-safety educators who break down deepfakes and harmful trends.
- Why kids need connection, mentorship, and real-world relationships—not algorithmic substitutes.
Julia also shares how her organization, ReConnect, supports families and schools with digital wellness education, assemblies, parent workshops, and practical strategies that help kids navigate technology safely without fear-mongering.
You can find out more from Julia at https://www.reconnect-families.com/
Don’t miss this episode! Subscribe to Screen Less Play More on Apple, Spotify, or YouTube today!
Resources about Kids & Digital Safety
The Best Parenting Books About Screen Time
50 Reasons To Delay Giving Your Child A Smart Phone

