Beyond the Ban. Part 1: What Australia’s Social Media Age Restrictions Can Teach European Policymakers

Amanda Third   

Australia made history in late 2024 when it passed the Online Safety Amendment (Social Media Minimum Age) Act — national legislation prohibiting children under 16 from holding social media accounts. The legislation has been closely watched by countries around the world, inspiring a wave of similar regulatory initiatives. From Denmark to the United Kingdom, from France to Malaysia, the momentum to keep children off social media platforms is growing, and shows no signs of abating.

The rush to ban is understandable. People are frightened by the possibility that social media is negatively impacting children’s mental health, safety, and development. We have read the headlines, been shocked by the statistics, and heard devastating stories of children harmed through online bullying, predatory behaviour, and exposure to harmful content. Many of us have navigated the arguments around the dinner table about screen time, algorithmic rabbit holes, and the opacity of platform design. And the power of social media companies — their data practices, their opacity, and their reach — is legitimately a matter of urgent public concern.

There is no question that stronger regulation is required to protect children across the full range of digital products and services, including social media. The digital world was never designed for children, or even with them in mind. Consequently, too many children face risks of harm that are not incidental but structurally embedded in the design decisions of platforms, which optimise for engagement rather than wellbeing. Factors such as children’s intensified reliance on technology during the COVID-19 pandemic, and the acceleration of generative artificial intelligence, with its attendant new vectors of risk, have further amplified both the harms children encounter and the anxieties that attend them.

Children deserve better. Much better. And urgently.

The conversation about how to regulate in ways that genuinely respect, protect, and fulfil children’s rights in relation to the digital environment is long overdue. The question is not whether but how to act effectively in the interests of children.

In the context of the urgent demand for decisive and protective responses, the idea of excluding children wholesale from social media is incredibly seductive. Bans promise to make the problems disappear by enacting a kind of digital cordon sanitaire — a clean legislative line between children and the platforms perceived to be harming them. But good intentions do not always translate into good policy. In October 2024, more than 140 Australian and international experts urged the Australian government to reconsider its approach. They argued that age restrictions will not necessarily make children safer online, and that the costs to children’s rights, wellbeing, and digital futures are potentially far greater than the public conversation has acknowledged.

The central question that European decision makers must now carefully consider is this: will age-based restrictions — like the ‘digital age of majority’ proposals currently under consideration across Europe — actually make children safer online?

We will not have definitive answers to this question until the Australian Government releases the findings of the independent evaluation of the legislation’s implementation, commissioned by the Australian eSafety Commissioner and led by the Stanford Social Media Lab with the support of a high-calibre international expert advisory board. The evaluation will investigate both intended and unintended consequences, with the explicit aim of sharing its findings with regulators, policymakers, and technology platforms internationally. In the meantime, all energies in Australia are rightly directed at implementing the legislation as effectively as possible, prioritising children’s safety and supporting their families. It is in that spirit — not of opposition but of rigorous scrutiny — that I reflect here on the dilemmas the Australian experience has surfaced, in the hope that European legislators might draw on them as they chart their own regulatory course.

Some Dilemmas Presented by the Australian Legislation

First, let’s be precise about what the Australian legislation does and does not do.

The legislation prevents children under 16 from holding a social media account. On most platforms, however, children can still access a wide variety of content without an account. This creates an immediate and significant limitation: the legislation addresses account-holding but not content access, leaving children exposed to many of the risks of harm it was designed to mitigate. More striking still is one of the legislation’s deepest structural ironies: accounts are among the primary mechanisms through which platforms can identify child users and direct safety measures toward them. By removing accounts, the legislation arguably strips away a key layer of child-specific protection rather than adding one — a consequence that deserves far more scrutiny than it has received in the public debate.

While messaging and educational platforms are exempt from the law, the application of the Australian legislation to all other platforms remains at the discretion of the Minister for Communications. Notably, the exemptions framework — which might have provided a powerful incentive for technology platforms to proactively redesign their products and services in order to qualify for exemption from the regulation — was dropped from the Bill the night before it passed through Parliament. This last-minute removal foreclosed a significant opportunity to use the regulation to drive genuine platform accountability.

The circumvention phenomenon compounds these concerns considerably. Driven by the desire to connect with peers — a developmental imperative, not a frivolous preference — Australian children were reportedly migrating to lesser-known platforms not covered by the legislation in the lead-up to its implementation in December 2025. This dynamic is likely to continue, drawing the government into a perpetual reactive cycle of identifying newly popular platforms and extending the legislation’s reach to cover them — what one commentator aptly described as a game of whack-a-marsupial. When children migrate to less regulated and less visible corners of the internet, they do not leave behind the risks the legislation sought to address; they encounter those risks in environments where protective infrastructure is thinner, enforcement is weaker, and adult oversight is more limited.

The legislation also raises a series of unresolved privacy concerns. While the Australian government has asserted that age verification technologies need not be applied to all users, it remains unclear how children can be reliably identified without some form of age assessment across the broader user population. The government has stipulated that age verification, estimation, assurance, and inference technologies should not require the storage of personal information — a sensible principle but one whose practical implementation will need to be closely monitored. The age assurance industry faces significant technical and ethical challenges in meeting this standard, and the privacy implications for the millions of adult users whose age may need to be assessed in order to identify underage ones are not yet clear.

Finally, the legislation’s relationship to parental authority is more complicated than its proponents have acknowledged. While some parents — particularly those less confident in their own technology use — have welcomed the legislation’s support in managing their children’s digital interactions, others regard it as an unwarranted intrusion on their autonomy to make parenting decisions tailored to their individual children’s needs. Research commissioned by the Australian government in the lead-up to implementation found that one-third of parents would actively support their underage children in accessing social media. That signals not parental irresponsibility but the genuine complexity of a regulatory intervention that treats all children under 16 as categorically equivalent.

Potential Unintended Consequences

Robust evidence on the unintended consequences of digital age of majority initiatives will take time to accumulate. But the early anecdotal indicators from Australia warrant serious attention in European policy deliberations.

The circumvention challenge is perhaps the most immediate. Children are not passive recipients of regulation. They are resourceful, digitally literate, and motivated by the deeply human desire to connect with one another. VPNs, borrowed accounts, and migration to unregulated platforms are already emerging as predictable responses to the legislation. But the concern here extends well beyond the practical difficulty of enforcement. When children circumvent age restrictions, they do not merely access social media in defiance of the law — they do so in spaces where they are less visible to those who might protect them, less likely to encounter platform-level safety interventions, and, critically, less likely to seek help when things go wrong, because doing so would mean disclosing that they have been breaking the law.

There is also a deeper civic question at stake. Laws derive their authority, in part, from their reasonableness in the eyes of those who are expected to abide by them. Legislation that is widely circumvented — particularly by an entire generation of young people who experience it as arbitrary, disproportionate and disconnected from the realities of their lives — risks eroding precisely the respect for legal authority that democratic societies depend upon. This is not a trivial concern. It is a question about what widespread, normalised circumvention of a specific law teaches the next generation about their obligations as citizens more broadly.

The benefits dimension of this debate has been consistently underweighted, and the consequences of that underweighting may prove significant. Research I have led with children and young people across more than 80 countries — including the consultations to inform the UN Committee on the Rights of the Child’s General Comment No. 25 on children’s rights in the digital environment — demonstrates consistently that children engage with digital environments primarily for communication, connection, and sharing, followed by access to information, formal and informal learning, and creative expression. Children report that their digital participation supports their health and wellbeing, their education, their skill development, their access to information and resources, and their capacity to contribute to their communities. Crucially, the evidence suggests these benefits are protective, augmenting children’s capacity to navigate and recover from adverse online experiences. A regulatory framework that removes children from digital spaces does not simply reduce their exposure to risk — it simultaneously removes the resources, connections, and capacities that help them manage the risks of harm they encounter.

These benefits are especially pronounced for children who experience heightened vulnerability. For young people living with disability, those who identify as LGBTQIA+, those managing mental health challenges, and those who are geographically or socially isolated, social media is frequently not a peripheral indulgence but a primary channel for connection, community, identity exploration, and access to support that may be unavailable offline. It is telling, and should be deeply instructive for European policymakers, that Australia’s leading youth mental health organisations opposed the social media age restrictions legislation on the grounds that it would undermine children’s access to mental health information and peer support networks, and erode their capacity to develop the digital health literacies they need to navigate the online world.

The implications for democratic citizenship also deserve sustained attention. Social media has become the primary infrastructure through which children learn about, organise, and take action on the issues that matter to them — from climate change and mental health to social justice and electoral politics. At a time of declining trust in democratic institutions internationally, when children increasingly report feeling alienated from formal political processes, and when the challenges that will most profoundly shape the futures of today’s children — climate disruption, economic inequality, geopolitical instability — demand their informed and active participation, the question of how age-based restrictions interact with children’s right to participation under Article 12 of the UN Convention on the Rights of the Child is not a secondary consideration. It is a governance question of the first order.

Understanding these dilemmas is only the first step; the urgent task now is to define a regulatory course that fulfils children’s rights without compromising their digital futures. Continue reading in Part 2, also authored by Professor Amanda Third: Beyond the Ban. Part 2: A Child-Centred Approach to Online Protection Regulation.


Professor Amanda Third is Professorial Research Fellow and Co-Director of the Young and Resilient Research Centre in the Institute for Culture and Society at Western Sydney University and Faculty Associate in the Berkman Klein Center for Internet and Society at Harvard University. She is the lead author of Young People in Digital Society: Control/Shift (Palgrave, 2019) and was a lead researcher on the consultations informing UNCRC General Comment No. 25 on children’s rights in the digital environment.


This blog post appeared on Social Media Ban for Kids, an interactive website managed by The Lisbon Council, a Brussels-based think tank, to gather available evidence and data points on the social media ban for children. Its website is https://socialmediaban.lisboncouncil.net/.