There is a quiet shift happening in front of us, and it is bigger than “screen time”.
For years, social platforms have trained recommendation engines on one scarce resource: attention. Now those engines are paired with generative AI that can manufacture infinite content, infinite variations, infinite hooks. The result is an environment that adapts faster than any developing brain can learn to regulate. Childhood becomes the most lucrative feedback loop on earth.
The data points are already hard to ignore. In Europe and beyond, the WHO reports “problematic social media use” among adolescents rising from 7% (2018) to 11% (2022). That is millions of teenagers describing patterns that look less like entertainment and more like loss of control. In the US, Pew finds that nearly half of teens say they are online “almost constantly”. This isn’t a niche behavior anymore. It’s becoming the baseline.
And we are stacking new layers on top of that baseline. Pew’s December 2025 survey reports that roughly two-thirds of US teens have used AI chatbots, including about three-in-ten who use them daily. Common Sense Media goes further on “AI companions”: 72% of teens have tried them at least once, and over half are regular users. We are moving from feeds that recommend content to systems that simulate conversation, validation, intimacy, even friendship.
This matters because the mechanics are asymmetrical.
A child doesn’t negotiate with an app. A child interacts with a continuously learning system trained on billions of signals, optimized for retention, and refined through constant experimentation. Because children are less able than adults to step back and reflect, the data they provide is unusually reliable: spontaneous and sincere. They do not game the system, and they cannot manipulate the machine in return. Platforms call it personalization. In practice, it is behavioral engineering running at industrial scale.
When regulators and clinicians express concern, it’s not moral panic. The U.S. Surgeon General warned that social media can pose a “profound risk of harm” for children and adolescents’ mental health. The CDC’s 2023 Youth Risk Behavior Survey shows about 40% of high school students reporting persistent sadness or hopelessness. Social media is not the sole cause of youth distress, but it sits inside the same ecosystem of sleep disruption, social comparison, harassment, pressure, and cognitive overload. It amplifies. It accelerates. It normalizes a pace of input that schools and families were never designed to counterbalance.
Generative AI adds a different kind of risk: identity and reality become editable.
Deepfakes, synthetic peers, synthetic voices, synthetic “proof”. The OECD flags how AI can generate deepfake images or voices, creating new vectors for exploitation and cyberbullying. And when chatbots become a default companion, the incentives get weird. The system is rewarded for keeping the conversation going, for mirroring, for pleasing, for staying emotionally “useful”. That can be supportive in some contexts. It can also shape expectations about relationships, conflict, effort, and empathy in ways we barely measure today. Social media was designed to hook people on content; AI companions can hook them on emotion and connection. That is far more intrusive.
So yes, the question I keep coming back to is simple.
Why are we accepting a global, real-time experiment on minors as the cost of innovation?
If any other industry deployed an adaptive product at this scale on children without long-term independent evidence, we would call it unacceptable governance. In tech we call it growth. That’s a cultural choice, not a law of physics.
There is a more responsible path, and it doesn’t require banning technology or pretending we can rewind the clock.
It requires guardrails that match the power of the systems we ship: meaningful age verification, safer default settings for minors, friction by design (especially at night), limits on algorithmic amplification for young users, transparency on recommendation logic, and independent auditing of child-impact metrics. It also requires clarity on generative AI use in schools: UNESCO has explicitly called on governments to regulate GenAI in education and recommends a minimum age of 13 for AI tools in the classroom. But it would be a mistake to think that rules, standards, and laws alone are the answer. Regulations are too often introduced in an attempt to regain control of a situation that has already slipped from our grasp. The key lever remains education, and the role of parents is more essential than ever in limiting the negative trends we see today.
None of this is anti-tech. It is pro-development.
We can still build AI that helps kids learn, create, explore, and get support. But we should stop treating children as an engagement surface. Let’s not forget that the founders of tech giants don’t give their own children the products they’ve designed. That should give us pause. Would we eat a dish the chef refused to taste? I doubt it. So let’s use a bit of common sense, and remember that just because something is new doesn’t mean it’s good. Childhood is not a sandbox for product-market fit. It is a finite window in which attention, identity, and emotional regulation are built.
The real leadership test for this decade is not whether we can innovate fast.
It’s whether we can protect a generation while we do it.