Social media ban makes a good headline, but does it make good policy?

Katarzyna Szkuta   Giulia Grandin   

What began as a regional experiment in Australia, which implemented a world-first social media ban for under-16s in late 2025, has rapidly crossed the oceans. The Australian model demonstrated that radical regulation can be legislated when the political will is sufficient, but also that the political appetite for moderate solutions is waning.

The catalyst for this shift is a perceived acceleration of the youth mental health crisis. Public sentiment has reached a breaking point: 93% of Europeans now call for urgent public authority intervention regarding the negative impact of social media on children’s mental health, with 67% categorising the issue as ‘very urgent’ and 26% as ‘fairly urgent’. This is no longer a niche conversation; it is a rare policy initiative that commands approval across the entire political spectrum.

It is an ambition shared by leaders like Emmanuel Macron, who recently declared in his Sorbonne speech: ‘We need to take back control of the lives of our children and teenagers in Europe and impose the digital majority at the age of 15, not before, and require platforms to moderate or shut down certain websites,’ and Pedro Sánchez, who has vowed to protect minors from what he calls the ‘failed state’ and ‘Wild West’ of the digital world. 

In Brussels, Commission President Ursula von der Leyen has placed the protection of minors at the core of her current term, while the European Parliament has signalled support for a harmonised digital majority age of 16, with a strict minimum age of 13. The Coalition of the Digital Willing – member states moving ahead of European regulation – and countries beyond Europe are watching the European Commission’s Expert Group, which is set to deliver its definitive recommendations by June 2026.

While governments speed toward regulation, child rights advocates and sociologists remain cautious, given inconclusive evidence on whether social media is a toxic substance to be banned or a vehicle that simply needs a better safety belt.

A Cigarette Moment or a Safety Belt?

Policy circles often describe this as social media’s cigarette moment: the moment when a government decides that a product or service is a harm to be eliminated rather than a risk to be managed. But is social media really a toxic substance like alcohol or tobacco, to be banned completely for young people, or merely a vehicle that needs a safety belt to make the ride safer? Unlike tobacco, social media, while a risky activity, is also the primary infrastructure for modern social autonomy and community building.

Academic research tends to present a very mixed picture of the link between social media and young people’s mental health. While a correlation exists (co-existence of mental health problems and social media use), causation (social media use causing mental health problems) is more difficult to prove. 

A 2024 WHO study found that 11% of European adolescents show signs of problematic social media use, while non-problematic active users report stronger peer support and social connections. A systematic data review reveals a U-shaped curve: while excessive use (3+ hours/day) correlates with depression, children with very low or zero use also show higher rates of depression than moderate users, likely due to social isolation and the lack of a support group. The social modalities of the internet make it an important support system, particularly for LGBTIQ+ youth, neurodivergent children, and those in geographical isolation.

Research published in Nature Communications (Orben et al., 2022) suggests that a one-size-fits-all age limit for social media may fail to protect children effectively because girls and boys experience peak vulnerability at different ages. The study identified specific “windows of developmental sensitivity” where high social media use most strongly predicts a drop in life satisfaction one year later: for girls, this occurs between ages 11 and 13, while for boys, it happens later, between ages 14 and 15. For policy makers, this evidence suggests that digital safety strategies may need to be more nuanced, potentially focusing on gender-specific educational interventions that reflect these distinct biological and social developmental timelines.

Looking at the issue from a children’s rights and privacy perspective, civil society organisations, including Eurochild, the 5Rights Foundation, European Digital Rights (EDRi), and Save the Children, argue that these measures may violate the UN Convention on the Rights of the Child (UNCRC), which guarantees children the right to freedom of expression, information, and participation. This exposes a friction between Europe’s democratic ambitions and its digital anxieties. As countries like Austria, Belgium, and Germany allow 16-year-olds to vote, we are witnessing a strange policy collision: we are handing teenagers the keys to democracy while simultaneously locking them out of the digital town square.

By the time a teenager reaches the voting booth, they are expected to be an ‘informed citizen’. Yet, under a social media ban, their first encounter with AI-powered election interference and deepfakes will not happen under school guidance or parental supervision; it will happen the moment they turn 16 and ‘plug in’ for the first time. Ultimately, this creates another vulnerability.

The main question is whether a ban removes the incentive for platforms to build safer spaces for younger users and, more broadly, for all users. An online sphere that is safe for children is an online sphere that is safer for every user, regardless of their age, background or level of digital literacy.

Lessons from the Australian Precedent 


The Australian model shifts the legal burden onto providers rather than families. Professor Amanda Third warns that governments risk playing whack-a-mole, listing one by one the applications and spaces that have social media features. Mandatory ID checks raise profound privacy concerns, and early data suggests teens are already migrating to darker corners of the internet and to unregulated, decentralised platforms where safety-by-design features are non-existent. Anecdotal reports also point to a sharp increase in VPN use and in identity sharing and trading among teens.

A socio-economic gamble

The digital majority approach is a significant policy gamble, but it is also political gold because it promises a quick fix to a complex socio-economic situation facing a generation of digital-first citizens. It taps directly into a shifting public mood, one heard in the chatter of neighbourhood cafés and reflected in recent polling data from concerned parents.

Some policymakers ride the parental assumption that removing the screen will spontaneously return kids to an analogue childhood. This ignores a hard truth: where physical infrastructure like parks, cultural centres, and safe streets is lacking, children will still seek engagement through screens. Data shows that screen use is significantly higher among lower socioeconomic groups, often a symptom of parents struggling with work-family balance and a lack of affordable alternatives. Without addressing these socio-economic factors, we risk treating an important symptom while leaving the root causes of declining youth well-being untouched. Social natives, by definition, build their networks in hybrid ways, both offline and online.

Other regulators view a ban not as a perfect wall but as vital friction. Even an imperfect ban acts as a speed bump that protects the most vulnerable, shifts the compliance burden onto tech giants, and gives parents the breathing room needed to regain control.

Yet, treating social media exactly like tobacco carries a hidden cost. We risk introducing a generation to a world full of smoke the moment they turn 16, without the digital literacy required to navigate it. True protection requires enforcing safety-by-design for all users, including the removal of addictive features.

The New Digital Village

While the Digital Services Act has been in place since early 2024, only recently have we witnessed actions that go beyond content removal. A recent European Commission probe (February 2026) into TikTok’s addictive features, such as infinite scroll and autoplay, suggests the DSA has far more untapped potential than we have seen so far: the possibility of requiring structural design changes, including an end to infinite scrolling, compulsory screen-time breaks and a ban on engagement-based recommenders for children. It is the first time a regulator has attempted to set a legal standard for the addictiveness of platform design, and, tellingly, the justification is precisely the risk to the mental health of users, particularly children. If the probe is conclusive, it will prove the efficacy of the safety-belt approach: a targeted, risk-based approach to social media.

As Europe seeks a unified approach, we must avoid the fragmented interpretations that initially plagued the GDPR. 

It is often said that it takes a village to raise a child. In 2026, that village expanded. It now includes the regulator, the developer, and the algorithmic architect.


Katarzyna Szkuta is the director of strategy at The Lisbon Council.

Giulia Grandin is a policy analyst and equality officer at The Lisbon Council.


This blog post appeared on Social Media Ban for Kids, an interactive website managed by The Lisbon Council, a Brussels-based think tank, to gather available evidence and data points on the social media ban for children. Its website is https://socialmediaban.lisboncouncil.net/.