The Evidence Gap at the Heart of the Ban Debate

Caroline De Cock   

The political momentum behind age-based social media bans has acquired a quality that should give evidence-focused policymakers pause: it is largely immune to evidence. Jurisdictions are moving at legislative speed while the underlying research remains, at best, contested. That gap between political confidence and scientific consensus is the central problem addressed by this contribution.

We are witnessing a rapid global shift from frameworks that manage data to frameworks that prohibit access. Australia’s ban on under-16-year-olds took effect in December 2025. France is advancing a proposal for under-15-year-olds. Spain, Denmark, Portugal and Norway are moving in the same direction, and Austria has joined the chorus, though with a threshold of under-14-year-olds. Beyond Europe, age-based access restrictions are working their way through legislatures and courts across North America. At Forum Europa in February 2026, European Commissioner Michael McGrath acknowledged that while consensus exists on the necessity of child protection, a profound disagreement persists within the European Union on whether strict bans or technologically enabled controls are the right instrument. The choice made in this matter goes to the heart of what kind of digital public sphere Europe wants to build.

Bestsellers Make for Compelling Reading, Not for Compelling Evidence

The intellectual architecture of the current wave of bans often refers to the same popularising work: Jonathan Haidt’s The Anxious Generation, published in March 2024. The book’s central claim, that the transition to phone-based childhood has triggered a mental health epidemic, is compelling and politically resonant. It is also disputed by leading researchers in the field. Candice Odgers of UC Irvine, writing in Nature, concluded that the book’s core thesis is not supported by the scientific literature. She noted that where individual-level studies do find associations, they more frequently suggest that young people already experiencing mental health difficulties are heavier platform users, not that platform use is causing those difficulties. That is a question of causal direction, and it matters enormously for policy design.

Andrew Przybylski and Amy Orben’s landmark analysis, published in Nature Human Behaviour and drawing on data from over 355,000 individuals, found that digital technology use explains at most 0.4% of the variation in adolescent wellbeing. Eating potatoes or wearing glasses statistically carries a similar or larger negative effect on adolescent mental health. European suicide rates fell 17% between 2012 and 2021 despite equivalent smartphone penetration across the continent, further undermining the claim of a universal, device-driven crisis. A separate systematic review points to a U-shaped relationship between social media use and well-being: while excessive use correlates with depression, children with very low or zero use also show higher rates of depression than moderate users, likely due to social isolation. Total exclusion, in other words, carries its own risks. Haidt himself acknowledged to Science editor Holden Thorp that he is “promoting a social change program before the scientific community has reached full agreement.” When a spouse’s reading of a popular book can set the terms of a nation’s internet policy, as was explicitly the case in Australia, we have traded deliberative policymaking for reactionary politics. Haidt was also invited to address UK government officials directly at the request of UK Health Secretary Wes Streeting, and visited the European Parliament as indicated by MEP Yon-Courtin.

What the Australian Experience Reveals

Australia provides the most advanced case study, and the first official verdict is not encouraging. The eSafety Commissioner’s initial compliance report, published in March 2026, found that circumvention of the ban by children is more widespread than early anecdotal evidence had suggested, that reported levels of online abuse among teenagers remain unchanged, and that age verification technology is not performing as well as its proponents had anticipated. This is the first rigorous data from a policy that Prime Minister Albanese had pre-emptively declared a success. The gap between that declaration and the evidence warrants attention.

As Professor Amanda Third of Western Sydney University sets out in her analysis for this Hub, the legislation contains a structural irony: accounts are among the primary mechanisms through which platforms direct safety measures at child users. Removing accounts may strip away a layer of child-specific protection rather than add one. Professor Third also warns that if platforms are legally absolved of responsibility for hosting children, they may have reduced incentive to invest in safety-by-design altogether, thereby jeopardising hard-won features such as disabled autoplay and private-by-default accounts for minors.

The circumvention picture that emerges from the compliance data is consistent with what was already visible on the ground. Teenagers responded to the ban by migrating to alternatives, including Lemon8 and Yope, using virtual private networks (VPNs), or accessing social media through older siblings’ credentials. Some moved into less moderated spaces entirely, where the absence of safety infrastructure increases exposure to grooming and radicalisation. The ban did not reduce exposure to online risk; it redistributed it. Australian mental health group headspace has reported that 10% of teenagers calling mental health helplines cited the ban itself as a primary stressor. Professor Third aptly describes the enforcement dynamic as a game of whack-a-marsupial: a perpetual reactive cycle of identifying newly popular platforms and extending legislation to cover them. The compliance report suggests that cycle is already well underway.

Who Is Not in the Room: The Teens

Article 12 of the United Nations Convention on the Rights of the Child (UNCRC) guarantees children’s right to be heard in matters that affect them. Across the jurisdictions now adopting or considering bans, that right is receiving minimal practical effect. The Australian Senate inquiry allowed public submissions for approximately one day. At a Canadian parliamentary hearing in early 2026, a 12-year-old witness told the committee that adults rarely ask children what it is actually like to grow up online.

When young people’s perspectives are gathered, they complicate the prevailing narrative significantly. ReachOut, Australia’s leading youth mental health service, found that 73% of young people who accessed mental health support did so through social media. For lesbian, gay, bisexual, transgender, queer, intersex, and asexual (LGBTQIA+) youth, the organisation Minus18 found that 95.7% relied on social media for peer connection and emotional support, with 82% reporting that a ban would leave them more isolated. Australia’s peak mental health organisations collectively opposed the ban on the grounds that it would stop young people from accessing essential support. As Gabriele Battimelli argues in his contribution to this hub, for LGBTQIA+ teenagers in small towns, young people with disabilities, and those in geographically isolated communities, social media is frequently not a peripheral indulgence but a primary channel for connection and support unavailable offline. A blanket ban does not protect these young people. It targets them, and drives them toward less regulated platforms where protections are weaker still.

Two fifteen-year-olds, Noah Jones and Macy Neyland, have launched a challenge in the High Court of Australia that cuts to the heart of a category error running through this entire debate. Proponents of bans treat social media like alcohol or tobacco: physical products with linear, measurable harm. But social media is a medium of expression and, increasingly, the primary infrastructure through which young people engage in civic and democratic life. As Katarzyna Szkuta and Giulia Grandin observe on this hub, countries like Austria, Belgium, and Germany allow sixteen-year-olds to vote, yet under a social media ban, those same teenagers would be locked out of the digital town square until the day they reach the voting booth. Restricting access to social media is not the equivalent of banning a drink. It is the equivalent of banning a conversation.

Addressing Features, Not Platforms

As Francesca Pisanu of Eurochild argues on this hub, the framing of “to ban or not to ban” obscures the real issue: what kind of digital environment Europe is willing to insist on. Platform harms such as infinite scroll, algorithmic amplification of emotionally charged content, and profiling-based advertising targeted at minors are tied to business models and product choices, not to the presence of children on platforms. Removing children does not change those models.

The prevailing position of UNICEF, Eurochild, Save the Children, the 5Rights Foundation, and the World Health Organisation (WHO) is that blanket age-based exclusion is the wrong instrument. These organisations converge on a platform-centred model: mandatory elimination of addictive design features, prohibition on profiling-based advertising targeted at minors, age-appropriate design codes with tiered protections rather than total exclusion, mandatory data access for independent researchers, and sustained investment in digital literacy.

The tools to deliver this already exist. The Digital Services Act (DSA), the Artificial Intelligence Act (AI Act), and the Audiovisual Media Services Directive (AVMSD), properly enforced, already provide regulators with the authority to mandate safe defaults, restrict behavioural advertising targeting minors, compel algorithmic transparency, and require child impact assessments before attention-maximising systems are deployed against under-18s. The European Commission’s February 2026 probe into TikTok’s addictive design features, including infinite scroll and autoplay, signals that the DSA has considerable enforcement potential that regulators have yet to deploy. If that probe is conclusive, it will demonstrate that targeted, design-focused regulation can achieve what blunt access bans cannot. What has been missing, as Battimelli notes, is not legal authority but political will.

As Europe shapes its own approach, the evidence gathered on this hub serves a specific purpose: to ensure that the complexity visible to researchers, practitioners, and young people themselves is not dissolved by the simplicity of a political moment. The children these policies are designed to protect deserve a process that takes evidence and their voices as seriously as it takes adult anxiety.

Caroline De Cock is Senior Fellow at the Lisbon Council and Head of Research at Information Labs. She is the author of AI Tools, Not Gods (BTF Press, 2026) and participated in the Lisbon Council’s High-Level Working Breakfast “To Ban or Not to Ban: Kids’ Safety on Social Media” in February 2026.


This blog post appeared on Social Media Ban for Kids, an interactive website managed by The Lisbon Council, a Brussels-based think tank, to gather available evidence and data points on the social media ban for children. Its website is https://socialmediaban.lisboncouncil.net/.
