To ban or not to ban? Let’s focus on what’s at stake
Across Europe, calls to restrict children’s access to social media are growing. But the choice is not simply between banning and not banning; that framing obscures the real issue. What is at stake is whether it is acceptable to have a digital environment built around attention capture, data extraction and profit, despite avoidable harms to children, or whether Europe insists on platforms that are safe by default, accountable by design and compatible with children’s rights.
Society already sets age limits when activities pose meaningful risks to children’s health, safety or development. The same logic can apply online: the issue is not whether every child is affected in the same way but whether the overall risk profile of certain mainstream services, combined with their scale and design, justifies protective safeguards. At the same time, if age restrictions become the only answer, they risk creating a false sense of security: children may move to other platforms, use services without registering or bypass restrictions while the underlying harms remain.
Those harms are tied to business models and product choices. As evidenced by whistleblowers and wider public reporting, major platforms are engineered to maximise engagement and convert attention into revenue. Infinite scroll, autoplay, persistent notifications, algorithmic amplification and highly personalised feeds are commercial decisions designed to keep users online for longer. This matters because adolescence is a sensitive developmental window: teenagers are especially responsive to reward and social comparison, which makes highly reinforcing environments hard to disengage from.
The “brain rot” phenomenon, an informal label for the cognitive fatigue and reduced focus people report after heavy scrolling, is increasingly echoed in research. For instance, a 2025 systematic review and meta-analysis, pooling 71 studies and around 98,000 participants, found that heavier short-form video use was associated with poorer cognitive outcomes, with the strongest links for attention and inhibitory or impulse control.
Risk is not only about harmful content online but about how recommender systems steer children towards it and keep them there. A study by Amnesty International found that accounts registered as 13-year-olds were pushed into depressive and self-harm “rabbit holes,” with feeds becoming dominated by mental-health and suicide-related content. Harms also spread through what platforms actively recommend: engagement-based ranking can amplify emotionally charged and polarising content, and there is evidence that some systems have weighed reaction emojis (including “angry”) more heavily than “likes”. In England and Wales, a large survey of 13–17-year-olds found that 70% had encountered real-life violence online in the past year; among those exposed, a quarter said they saw it because platforms promoted it through feeds and recommendations.
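To make that mechanism concrete, here is a minimal sketch of engagement-weighted ranking. The function, post structure and weight values are hypothetical illustrations, not any platform’s actual formula; the point is simply that once reactions are weighted above likes, a divisive post can outrank a more popular but calmer one.

```python
# Minimal illustration of engagement-weighted ranking.
# The weights below are hypothetical: public reporting suggests some
# platforms have weighted emoji reactions several times higher than likes.

REACTION_WEIGHTS = {"like": 1.0, "angry": 5.0, "comment": 4.0, "share": 8.0}

def engagement_score(post: dict) -> float:
    """Score a post by summing its weighted interaction counts."""
    return sum(REACTION_WEIGHTS.get(reaction, 0.0) * count
               for reaction, count in post["reactions"].items())

# A divisive post with fewer total interactions can outrank a popular one:
# 'calm' scores 300, while 'divisive' scores 40 + 300 + 40 = 380.
posts = [
    {"id": "calm",     "reactions": {"like": 300}},
    {"id": "divisive", "reactions": {"like": 40, "angry": 60, "comment": 10}},
]
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # -> ['divisive', 'calm']
```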
A Dublin City University study found it took about 23–26 minutes of watching for new young male accounts to be recommended toxic or misogynistic content on some social media platforms. Related research argues that recommender dynamics contribute not only to exposure to discriminatory content but also to its normalisation, with manosphere material, including misogynistic narratives, framed as entertainment, advice or “self-help”.
Image-based platforms can sharpen the focus on appearance, especially when edited images are treated as everyday norms and paired with public metrics such as likes and follower counts. When recommender systems prioritise engagement over reliability and synthetic content blurs the line between evidence and fabrication, children’s ability to seek, receive and trust information is also undermined. Current evidence suggests that girls are disproportionately exposed to appearance-focused content linked to thinness and dieting, while boys are increasingly exposed to content promoting muscularity and performance ideals.
Taken together, these findings point to one conclusion: platform accountability is essential, whatever the policy choice on age access. If restrictions are introduced, they should be necessary and proportionate while being accompanied by broader protections and realistic alternatives for children’s participation and support.
They must never become an excuse to avoid regulating design. Core safeguards should include safe-by-default settings, limits on engagement-maximising features, transparency around recommender systems, robust risk assessment and independent scrutiny, as well as effective enforcement of existing European Union rules, including the Digital Services Act, the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act, alongside new measures to tackle dark patterns and exploitative personalisation. Even where age-based access conditions exist, platforms must be designed to uphold the rights of all children under 18.
Social media can also offer connection, expression, creativity and participation. But this also points to an uncomfortable reality: platforms have filled gaps that should never have been left open. When children turn to social media for support or a sense of belonging, it may reflect the absence of safe community spaces, youth services, mental health support and opportunities to be heard elsewhere. Governments are responsible for ensuring these conditions and protecting children’s rights under the United Nations Convention on the Rights of the Child, and should not outsource this to companies.
So yes, it is reasonable to scrutinise whether some services should be accessible to children in the same way they are now. But that scrutiny should lead to a structural shift away from business models that profit from children’s vulnerabilities and towards a digital environment that protects children’s rights by design. If the European Union is serious about children’s safety online, it has to move beyond the ban debate. The real challenge is not simply deciding who should be allowed onto social media but what kind of social media should be allowed at all.
Francesca Pisanu is the European Union advocacy officer at Eurochild, where she informs and influences European policy and legislation to advance children’s rights across Europe. Her work focuses on children’s rights in the digital environment, alongside broader human rights and social issues affecting children, with a particular emphasis on those in vulnerable situations. She holds an advanced Master of Laws in International Children’s Rights from Leiden University and has more than five years of experience in supporting human rights in Europe and beyond.
For more information, see Eurochild’s position on age restrictions on social media, linked under More Analysis below.
This blog post appeared on Social Media Ban for Kids, an interactive website managed by The Lisbon Council, a Brussels-based think tank, to gather available evidence and data points on the social media ban for children. Its website is https://socialmediaban.lisboncouncil.net/.
More Analysis
- Eurochild’s position on age restrictions on social media
- Feeds, feelings, and focus: A systematic review and meta-analysis examining the cognitive and mental health correlates of short-form video use
- Summary Report: Recommending Toxicity: The role of algorithmic recommender functions on TikTok in promoting male supremacist influencers, Dublin City University
- France: Dragged into the rabbit hole: New evidence of TikTok’s risks to children’s mental health, Amnesty International
- Children, violence and vulnerability 2024: What role does social media play in violence affecting young people?