On July 30, 2025, YouTube announced a sweeping new update: it will now use artificial intelligence to estimate whether a user is over or under 18 based on their online behavior. If the AI thinks you’re a minor — even if you’re not — it will automatically change your settings, restrict some content, and ask you to verify your age using ID or a selfie.
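To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the decision flow the announcement describes: a model scores your behavior, and if the score falls below a threshold, restrictions apply until you verify. YouTube has not published its model, so every signal, function name, and threshold below is an assumption for illustration only.

```python
# Hypothetical sketch of a behavioral age-estimation pipeline.
# YouTube has not disclosed its actual system; every feature,
# threshold, and name here is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class BehaviorSignals:
    # Assumed inputs: the kinds of signals a platform *might* use.
    watch_categories: list[str]   # e.g. ["gaming", "study-help"]
    account_age_days: int
    search_terms: list[str]

def estimate_adult_probability(signals: BehaviorSignals) -> float:
    """Toy stand-in for an ML classifier: returns P(user is 18+).

    A real system would be a trained model; this placeholder
    heuristic exists only so the decision flow below runs.
    """
    score = 0.5
    if signals.account_age_days > 365 * 5:
        score += 0.2
    if "study-help" in signals.watch_categories:
        score -= 0.1  # note how easily interests become proxies for age
    return max(0.0, min(1.0, score))

def apply_age_policy(signals: BehaviorSignals) -> str:
    # The flow described in the announcement: if the model thinks
    # you are under 18, settings change automatically and you must
    # verify with ID or a selfie to undo it.
    if estimate_adult_probability(signals) < 0.5:  # assumed threshold
        return "restricted: teen defaults applied; ID or selfie required to appeal"
    return "unrestricted"

# An adult with "young-seeming" interests gets flagged anyway.
print(apply_age_policy(
    BehaviorSignals(["study-help"], account_age_days=400, search_terms=[])
))
```

Even this toy version surfaces the core worry: interests stand in for age, and the burden of proof falls on the person who was misread.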
It might sound simple. Maybe even helpful. But for many users, especially content creators and neurodivergent individuals, this kind of algorithmic profiling can carry a deeper psychological weight.
Imagine being misjudged by a machine — told, in essence, “We don’t believe you know who you are.” For adults who’ve struggled with being taken seriously, or for creators who have painstakingly built their brand only to have it hidden from parts of their audience, the emotional impact can be profound.
This goes beyond inconvenience. It can feel like:
- Being erased or infantilized: Whether you're a young-looking adult or someone whose interests don't match their age demographic, being labeled "too young" can feel invalidating, especially for marginalized people who've already had to fight to be seen.
- Losing control over your identity: When an AI decides how old you "seem" based on your clicks or searches, it reduces your digital identity to a behavioral stereotype. That loss of agency can trigger feelings of powerlessness, especially for those with histories of anxiety, trauma, or control-related issues.
- Fear of being locked out: For creators who rely on YouTube to express themselves, build community, or even earn income, the possibility of being wrongly flagged means the fear of losing connection, something that's already hard to come by in a fragmented, algorithm-driven world.
- Increased stress and burnout: The need to constantly prove you're "real" or "mature enough" can compound existing stress, especially for people dealing with imposter syndrome, digital fatigue, or mental health conditions that make bureaucratic systems feel overwhelming.
And for teens and younger users — the very people these systems are supposed to protect — the mental health impact of having their behavior constantly watched, labeled, and filtered through AI remains largely unexamined.
Yes, safety matters. But so does mental and emotional well-being. And when those in charge of platforms make decisions without involving the communities they affect — creators, neurodivergent folks, marginalized voices — the result is often isolation dressed as protection.
We need more than filters. We need dialogue, empathy, and mental health–centered design that considers not just what we see online, but how it makes us feel.
