Commentary

Australia’s Under-16 Social Media Ban Has Sinister Intentions

The public is told that the harms are obvious, the evidence is settled, and the trade-offs are necessary. None of that is true.

Australia has introduced one of the most sweeping sets of online age-restriction policies ever seen in the democratic world. As of the 10th of December 2025, the restrictions block anyone under sixteen from accessing a number of major social-media platforms.

Platforms must prevent under-16s from creating accounts or evading the new restrictions, or face fines of up to A$49.5 million (US$32 million). Each company may determine an acceptable margin of error when estimating a user’s age, depending on the scale of its user base. So long as a platform can demonstrate it has taken “reasonable steps” to prevent children from accessing its services, it will avoid sanctions, although what constitutes “reasonable steps” remains vague, giving regulators substantial discretion. Children and parents will not be punished if minors are found to be breaking the rules.

Restricted platforms include Facebook, Instagram, TikTok, Snapchat, X (formerly Twitter), YouTube, Reddit, Twitch, Kick, and Threads. Whether further platforms will be included in future remains unclear, but the government has stated that Bluesky, Discord, Roblox, WhatsApp, Facebook Messenger, YouTube Kids, Pinterest, Google Classroom, Steam, and LinkedIn will remain unaffected for the time being.

There is one conspicuous inconsistency in this approach: the left-leaning platform Bluesky is exempt, while Elon Musk’s right-leaning X is not. The two platforms are functionally equivalent, and restricting one while exempting the other inevitably gives the appearance of political favouritism. The formal justification is that Bluesky has a comparatively small Australian under-16 user base of around 50,000 and was therefore assessed as low risk. Nevertheless, Bluesky has announced it will comply with the under-16 ban voluntarily, despite not being compelled to do so.

The legislation also fails to target platforms that have developed reputations for presenting some of the most significant risks to children. Discord and Roblox, both widely used by minors and repeatedly associated with grooming and predatory behaviour, appear to fall outside the immediate scope of the crackdown. Last month, Roblox blocked children from talking to adults after twenty-eight lawsuits in Florida alleged that the platform enabled the “systemic predation of minors”. Discord is equally well known for hosting groups that engage in blackmail, stalking, extortion, coercion of minors into producing child sexual abuse material, encouragement of self-harm, and exposure to harmful content. Any legislation seeking to protect children would logically begin with platforms where clear and documented harm already exists. Legislators have signalled that the current approach is only the beginning, but the exclusion of these platforms represents a significant oversight on the part of regulators.

Regulators face an uphill struggle, as many minors are already migrating to alternative platforms such as “Yope” and “Lemon8”, which were propelled to the top of the app-store charts ahead of the ban’s implementation. The Chinese-owned, TikTok-like service Rednote and the US-based Coverstar also saw rapid growth. Virtual private networks, or VPNs, have become increasingly popular, allowing users to access restricted platforms via servers outside Australian jurisdiction. Regulators will need to keep pace with the emergence of alternative platforms that serve the same functions as those banned but are not yet captured by the legislation. The pace at which minors migrate between platforms far exceeds the pace at which governments can regulate them. Although there remain many viable ways to circumvent the restrictions today, it is likely that loopholes will close over time. Regulating the internet, given its scale, dynamism, and decentralised architecture, is extraordinarily difficult. Nevertheless, the current measures will unquestionably reduce under-16 social-media usage nationwide.

The most consequential issue for all Australians is that mandatory age verification will erode the last remnants of online anonymity. Any system robust enough to reliably identify minors must collect sensitive identity data from the entire population, including adults who have legitimate reasons to engage anonymously. Journalists, whistleblowers, dissidents, and vulnerable individuals will be forced to choose between surrendering their privacy or losing access to social media.

Current plans outline three categories of age checking: age verification, age estimation, and age inference.

Age verification relies on government-issued documents, but platforms are not permitted to rely solely on this method.

Age estimation involves biometric analysis of a user’s face, voice, or other physical traits.

Age inference relies on monitoring a user’s language, browsing history, behavioural habits, or friendship networks.

In practice, this means that, under the auspices of protecting children, Australian adults must now provide government identification, images of their face, or recordings of their voice, or else submit to continuous behavioural monitoring, simply to participate in ordinary social-media use. All three approaches require extensive collection and analysis of personal data.
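For readers who want the distinction made concrete, the following is a minimal sketch, written in Python, of how a platform’s age gate might chain the three approaches together. Every name, threshold, and signal here is a hypothetical illustration, not any platform’s actual implementation; the point is simply that each branch presupposes the prior collection of sensitive personal data.

```python
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16              # the statutory threshold
ESTIMATION_MARGIN = 2.0   # hypothetical per-platform "acceptable margin of error"

@dataclass
class AgeSignals:
    """Data a platform would need to hold for each user; all of it is sensitive."""
    dob_year: Optional[int] = None          # from a government-issued document
    biometric_age: Optional[float] = None   # estimated from a face or voice scan
    inferred_age: Optional[float] = None    # inferred from language, habits, networks

def passes_age_gate(signals: AgeSignals, current_year: int = 2025) -> bool:
    # 1. Age verification: a government document (not permitted as the sole method).
    if signals.dob_year is not None and current_year - signals.dob_year >= MIN_AGE:
        return True
    # 2. Age estimation: biometrics, discounted by the platform's margin of error.
    if (signals.biometric_age is not None
            and signals.biometric_age - ESTIMATION_MARGIN >= MIN_AGE):
        return True
    # 3. Age inference: behavioural monitoring of language, history, and friendships.
    if signals.inferred_age is not None and signals.inferred_age >= MIN_AGE:
        return True
    # No signal at all: the account is treated as under-16 and blocked.
    return False
```

Whatever the real implementations look like, the structure is the same: a document, a biometric reading, or a behavioural profile must already exist before the check can run.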

The scientific evidence underpinning the widespread claims that social media is harmful to minors is far less conclusive than much of the rhetoric justifying the ban suggests. Advocates point to online bullying, competitive pressures, and addictive design features, but the empirical research simply does not support a narrative of widespread psychological damage caused by social-media exposure. A 2024 meta-analysis of 143 studies, covering more than one million adolescents, highlighted a critical gap in research on minors with diagnosed clinical mental health conditions, one that “hinders our ability to evaluate and compare the link between social media use and mental health”. If policymakers wish to understand whether social media contributes to severe mental health symptoms, rather than to the typical emotional turbulence of adolescence, this population needs to be examined more closely. Without that data, the generalisability of existing research remains limited.

Even in studies that do find links between social media and mental health issues, the effect sizes are so small that they could be regarded as negligible or as falling within the margin of error. This point is further reinforced by a major Oxford University study of nearly a million participants across seventy-two countries over a twelve-year period. Its conclusion was unambiguous: there is no evidence that global social-media adoption has produced widespread psychological harm. Taken together, the evidence suggests that the societal panic surrounding online platforms is not supported by data. A cynic might conclude that this is an artificial moral panic, created to manufacture consent for more expansive control over the internet.

All of these measures would be unnecessary if two far more straightforward approaches had been taken seriously. The modern internet has almost entirely abandoned the idea of child-specific platforms. In earlier eras of the web, digital spaces were more clearly segregated by age. Today, we simply accept that platforms designed for adults to socialise with one another will also be populated by minors. This is an odd and unhealthy normalisation. Adults and young children should not be socialising together in the same digital environments, just as they would not be expected to do so offline.

Mixed-age platforms expose children to potentially malicious adults and equally force adults to censor themselves to accommodate minors. This includes limiting coarse language, avoiding graphic or war-related journalism, toning down political satire, softening dark humour, and inhibiting artistic works that explore emotionally challenging themes. Such compromises benefit neither group. It would be better for minors to inhabit platforms specifically designed to be age-appropriate, and for adults to maintain spaces where mature discussion can take place without censorship.

It was once widely understood that the people most responsible for children are their parents. That notion of parental responsibility has deteriorated, and increasingly there is an expectation that the government should assume the role instead. Children should not be navigating the internet unsupervised, and teenagers should have at least some degree of monitored usage until they possess the cognitive maturity required to evaluate content independently. Parents should warn their children about risks, teach them how to avoid harmful content, and remain attentive to what they do online. No regulatory regime can compensate for absent or disengaged parenting, and parental oversight deserves far more emphasis than it currently receives.

Australia’s scheme frames itself as a child-protection measure, but it risks becoming a broad mechanism for population-wide identity tracking while sidestepping some of the platforms where genuine risks are concentrated. It attempts to address a problem whose existence current research has not convincingly demonstrated, and it does so by imposing significant burdens on everyday users and on the open, anonymous culture that made the internet valuable in the first place. The public is being told that the harms are obvious, the evidence is settled, and the trade-offs are necessary. None of that is true.
