Australia made history in late 2024 when it passed the Online Safety Amendment (Social Media Minimum Age) Act — national legislation prohibiting children under 16 from holding social media accounts. The legislation has been closely watched by countries around the world, inspiring a wave of similar regulatory initiatives. From Denmark to the United Kingdom, from France to Malaysia, the momentum to keep children off social media platforms is growing, and shows no signs of abating.
The rush to ban is understandable. People are frightened by the possibility that social media is harming children’s mental health, safety, and development. We have read the headlines, been shocked by the statistics, and heard devastating stories of children harmed through online bullying, predatory behaviour, and exposure to harmful content. Many of us have navigated the arguments around the dinner table about screen time, algorithmic rabbit holes, and the opacity of platform design. And the power of social media companies — their data practices, their secrecy, and their reach — is legitimately a matter of urgent public concern.
There is no question that stronger regulation is required to protect children across the full range of digital products and services, including social media. The digital world was never designed for children, or even with them in mind. Consequently, too many children face risks of harm that are not incidental but structurally embedded in the design decisions of platforms, which optimise for engagement rather than wellbeing. Factors such as children’s intensified reliance on technology during the COVID-19 pandemic, and the acceleration of generative artificial intelligence with its attendant new vectors of risk, have further amplified both the harms children encounter and the anxieties that attend them.
Children deserve better. Much better. And urgently.