The AI Risks Nobody Talks About
The conversations about kids and AI tend to cluster around visible problems: cheating on homework, screen time, inappropriate content, social media's effects on mental health. These are real. They're worth taking seriously.
But there's a set of risks that don't come up nearly as often — risks that are structural, invisible, and in some ways more consequential than the ones we're focused on. They don't generate outrage. They don't look like a crisis. They just quietly shape children's psychology, economic prospects, and futures in ways that will compound for decades.
Here's what's actually happening.
Emotional Analytics: Your Child's Inner Life Is Being Mapped
Modern AI systems — especially companion apps and social platforms — don't just process what your child types. They process how they type it. Sentiment analysis, emotion detection, and behavioral pattern recognition are increasingly embedded in commercial AI products targeting kids and teens.
Character.AI, one of the most popular AI companion platforms among 12–16-year-olds, has faced scrutiny over how deeply it engages emotionally with users. The product is designed to feel like a relationship. But a relationship that's also collecting granular data on your child's emotional state, conversation patterns, and psychological vulnerabilities is something different from a relationship. It's a dataset.
Snapchat's My AI — embedded directly into one of the most-used platforms for teenagers — drew controversy when users learned that their conversations were being collected and retained without a clear explanation of the implications. Snap's privacy policy is explicit that data from My AI may be used to improve Snapchat's products and advertising. What teenagers share with an AI they believe is private may not stay private.
What does emotional analytics actually enable? It enables platforms to serve content and interactions at the moments when your child is most emotionally vulnerable. It enables systems to know that a given user is anxious on Sunday evenings and adjust engagement tactics accordingly. It enables product teams to optimize for emotional dependency — for the specific patterns of interaction that feel satisfying in the moment and create compulsive return.
Your child's emotional landscape is, from the platform's perspective, a resource.
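To make the mechanism concrete, here is a minimal sketch of how a system could find a user's vulnerability windows from message timestamps and a sentiment score. Everything in it is illustrative: the word-list scorer is a toy stand-in for the trained emotion models production systems use, and none of it is drawn from any specific product.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Toy lexicon standing in for a production emotion model.
# Real systems use trained classifiers, not word lists.
ANXIOUS_WORDS = {"stressed", "worried", "scared", "alone", "anxious", "hate"}

def anxiety_score(message: str) -> float:
    """Fraction of words in a message that signal anxiety (toy heuristic)."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in ANXIOUS_WORDS for w in words) / len(words)

def vulnerability_windows(messages: list[tuple[datetime, str]],
                          threshold: float = 0.05) -> list[tuple[str, int]]:
    """Group messages by (weekday, hour) and flag the windows where the
    average anxiety score exceeds the threshold."""
    buckets: dict[tuple[str, int], list[float]] = defaultdict(list)
    for ts, text in messages:
        buckets[(ts.strftime("%A"), ts.hour)].append(anxiety_score(text))
    return [window for window, scores in buckets.items()
            if mean(scores) > threshold]

# A platform that knows a user scores high on Sunday evenings can time
# notifications, recommendations, and "check-ins" to exactly that window.
```

That is roughly twenty lines of toy code. Production systems are far more sophisticated, but the pipeline's shape is the same: score the emotion, bucket it by time, act on the pattern.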
Data Brokerage: AI Conversations Are Not Private
Here's something most parents don't know: the conversations their children have with AI tools are often retained, sometimes indefinitely, and sometimes made available to third parties.
OpenAI's ChatGPT privacy policy allows conversation data to be used to train future models unless users opt out — and the opt-out is neither the default nor easy to find. For users under 18, data governance is murkier in practice than the terms of service suggest. Teachers and tutors who use ChatGPT in educational contexts regularly paste in student work — sometimes including identifying information — without understanding what happens to it.
Google's AI tools are deeply integrated with Google accounts. A child who uses Google's AI features for homework while signed in to their school Google account is generating behavioral data that flows into Google's systems. The long-term use of this data — for advertising, for product improvement, for potential third-party data arrangements — is not something the average parent has thought about.
The data broker industry is enormous, largely unregulated, and deeply interested in the behavioral profiles of young people. AI conversations are unusually rich data: they reveal interests, anxieties, relationships, beliefs, and cognitive patterns in ways that behavioral click data does not. A teenager who has been using AI companion apps for three years has generated a psychological profile more detailed than anything a traditional advertiser could buy.
This is not hypothetical. It is the current operating reality of most consumer AI products.
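For the technically minded, here is the kind of record that three years of companion-app conversations could support. The sketch is hypothetical and every field name is invented, but each field corresponds to a signal that is straightforwardly extractable from conversation logs.

```python
# Hypothetical profile record a broker could assemble from conversation
# data. No real product is quoted here; the fields illustrate what
# conversation logs reveal that click data does not.
teen_profile = {
    "user_id": "hashed-device-id",
    "inferred_age_range": "14-16",
    "interests": ["drawing", "a specific game franchise", "true crime"],
    "anxieties": ["grades", "a recent friend-group conflict"],
    "relationship_signals": {"mentions_parents": "conflicted",
                             "mentions_best_friend": "frequent, positive"},
    "vulnerability_windows": [("Sunday", 21), ("Thursday", 23)],
    "persuasion_profile": {"responds_to": ["validation", "scarcity"],
                           "ignores": ["authority appeals"]},
}
```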
Neuromarketing: Children Are the Target
The combination of emotional analytics, rich behavioral data, and large language models creates something genuinely new: the ability to run highly personalized, psychologically sophisticated influence campaigns on individual children at scale.
This isn't the crude banner advertising of fifteen years ago. It's AI systems that know your child's emotional triggers, their values, their anxieties, their relationship patterns — and can generate content, interactions, and recommendations that are calibrated to those specific profiles. It's the difference between a generic commercial and a campaign designed specifically for one psychological profile, deployed in the moments of highest receptivity.
Neuromarketing — the application of neuroscience and behavioral psychology to marketing — has been practiced for decades, but it has always been limited by the cost of personalization at scale. AI eliminates that constraint. What previously required expensive focus groups and psychological profiling can now be approximated through behavioral AI at the individual level, at the cost of a few cents of compute per user per day.
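The "few cents per user per day" claim is easy to sanity-check with back-of-envelope arithmetic. The numbers below are illustrative round figures, not any provider's price sheet.

```python
# Back-of-envelope cost of personalized content generation.
# All numbers are illustrative assumptions, not vendor pricing.
price_per_1k_tokens = 0.002      # dollars; order of magnitude for a small model
tokens_per_message = 500         # profile context plus generated response
messages_per_user_per_day = 20

daily_cost = (tokens_per_message / 1000) * price_per_1k_tokens \
             * messages_per_user_per_day
print(f"${daily_cost:.4f} per user per day")   # $0.0200, i.e. two cents
```

Two cents a day buys twenty individually tailored messages. A focus group and a psychological profile used to cost thousands of dollars per subject.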
The children most vulnerable are the ones who have formed the strongest emotional relationships with AI products — who feel most understood, most engaged, most dependent. That's precisely the design goal of the products they're using most.
Three Things You Can Do This Week
You don't have to be an expert or a technologist. These three actions are concrete and achievable.
1. Look at what's being retained. For any AI product your child uses regularly, spend fifteen minutes with them looking at the privacy settings. What data is retained? Is there a conversation history? Can you delete it? Is there an opt-out of training data collection? The act of looking — and doing it with your child — builds awareness and positions you as someone who takes this seriously. OpenAI has a setting under Data Controls to disable conversation history and model training; it's worth knowing this exists.
2. Talk about emotional analytics directly. Tell your teenager: "You know how Netflix knows what you want to watch? AI companion apps are doing the same thing with your emotions — figuring out what you respond to and serving you more of it. That doesn't mean they're bad, but it means you should know about it." You don't need to scare them. You just need to install the concept. Once they know the system is reading them, they become more active participants in the relationship rather than passive ones.
3. Help them distinguish between AI-engineered feeling and real connection. The feeling of being understood by an AI companion is real — it draws on the same psychological machinery as the feeling of being understood by a person. The difference is what's on the other end: a system optimized for engagement versus a person with their own inner life, needs, and limits. That distinction is worth naming explicitly: "The feeling is real. Whether it means what you think it means is a different question." This is not a conversation you have once. It's a thread you return to.
None of this is about banning AI or retreating from it. It's about your child going in with their eyes open — which is all any of us can ask for in a world that's changing faster than our intuitions can track.