The Trans-Tasman Teen Social Media Challenge: Compliance and Consequences

The world’s largest social media companies have confirmed they will comply with a strict new Australian law. This landmark legislation, passed in late 2024, bans social media access for users under the age of 16. It forces companies like Meta, TikTok, and Snap to take immediate action to remove hundreds of thousands of accounts.

This development is significant. It represents one of the world’s most forceful regulatory actions against the largest platforms. The move shifts the responsibility for age policing away from parents and onto the tech companies themselves. They now face substantial financial penalties for failing to comply with the rules.

The law takes effect from December 10, 2025. This date marks the deadline for platforms to implement “reasonable steps” to prevent Australians under 16 from having an account. While the companies have expressed strong reservations about the effectiveness and practicality of the law, they have stated they will follow it.

The position from the platforms is clear. Meta (the parent company of Facebook and Instagram), TikTok, and Snap (the owner of Snapchat) all told a recent Senate hearing that they do not agree with the legislation. However, they acknowledged they must abide by Australian law. This stance highlights the growing tension between global technology operations and national attempts to protect children in the digital space.

For New Zealand, the Australian decision is not just a neighbour’s problem. Our government has already signalled similar intentions. This means the technical and legal difficulties facing Meta and TikTok across the Tasman could very soon become a core problem for digital marketing services here too. The outcome of the Australian experiment will directly inform New Zealand’s next regulatory steps. The digital safety of young people has become a central policy issue for governments everywhere.

The Details of the Australian Mandate

The Australian law amends the existing Online Safety Act 2021. It introduces the Social Media Minimum Age framework, or SMMA. This framework requires “age-restricted social media platforms” to take reasonable steps to prevent users under 16 from holding accounts.

The definition of an age-restricted platform is broad. It includes any service whose main purpose is to enable social interaction between two or more users, where users can link to others and post material. This broad scope catches the major players: Facebook, Instagram, TikTok, Snapchat, X, and YouTube.

The Australian government has been careful to provide clear exemptions. Services that primarily function as communication tools are exempt. This includes email, instant messaging, voice calling, and video calling services like WhatsApp. Professional networking sites, education services, health services, and online games are also excluded. This distinction attempts to target services primarily built for open social interaction and content sharing.

The consequences for failure to comply are considerable. Companies that do not take reasonable steps to detect and deactivate underage accounts face heavy fines, which can reach A$49.5 million for corporations. Crucially, the law places the burden entirely on the platforms: children or parents who find ways to bypass the age restriction face no penalties under the law.

The legislation passed the Australian Parliament in late 2024 with strong support. In the House of Representatives, 96 voted for the bill, with only 6 voting against. The Senate vote was 34 in favour and 19 against. This cross-party support shows a clear political consensus that the social media companies must be held accountable for harms to minors.

The eSafety Commissioner in Australia is the body tasked with oversight. It released regulatory guidance in September 2025 outlining what “reasonable steps” means in practice. Platforms are expected to need a combination of automated behaviour tracking, parental controls, and, potentially, new age assurance technology to demonstrate compliance.

Industry Objections and The Compliance Challenge

Despite the legal mandate, the platforms have not held back in their criticism of the law. The general consensus from the tech industry is that the law is problematic, vague, and difficult to enforce.

At the Senate hearing in late October 2025, representatives from the major companies outlined their concerns. Mia Garlick, Meta’s policy director for Australia and New Zealand, spoke about the “numerous challenges” involved. She noted that identifying and removing hundreds of thousands of underage accounts before the December deadline poses “significant new engineering and age assurance challenges.”

Data provided by the companies illustrates the scale of the clean-up required. Meta stated it would soon contact the holders of about 450,000 Instagram and Facebook accounts in Australia confirmed to belong to under-16s. TikTok reported approximately 200,000 under-16 accounts in Australia, and Snap, the owner of Snapchat, about 440,000. In total, these three platforms alone must deal with over one million accounts in a matter of weeks.

The platforms worry that the law is too “blunt” an instrument. Ella Woods-Joyce, TikTok’s Australia policy lead, cautioned that an outright ban could have unintended negative consequences. The main concern raised by the industry is that by forcing young people off mainstream platforms, they will simply migrate to less regulated, or “darker corners of the Internet,” where no safety protections exist. These sites often lack content moderation and robust reporting tools.

Jennifer Stout, Snap’s senior vice president of global policy, summed up the industry’s conflicted position: “We don’t agree, but we accept and we will abide by the law.” The platforms are therefore proceeding with compliance. They have stated they will use automated behaviour tracking software to help identify users claiming to be over 16 who may be underage. They will also reach out to underage account holders to give them choices, such as deleting their photos and data or having them stored until they turn 16.
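
The platforms have not published how this behaviour tracking works, so the sketch below is purely illustrative: the signal names, weights, and threshold are hypothetical, chosen only to show how a self-declared age might be cross-checked against behavioural evidence before an account is queued for review.

    # Hypothetical sketch of behaviour-based age flagging. Signal names and
    # weights are illustrative only; real platform systems are proprietary.
    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        declared_age: int          # age the user claimed at sign-up
        school_year_in_bio: bool   # e.g. "Year 9" in profile text
        peer_median_age: float     # median declared age of frequent contacts
        birthday_post_age: int     # age inferred from "happy 14th!" posts, 0 if none

    def underage_risk_score(s: AccountSignals) -> float:
        """Combine weak signals into a single risk score in [0, 1]."""
        score = 0.0
        if s.school_year_in_bio:
            score += 0.4
        if s.peer_median_age < 16:
            score += 0.3
        if 0 < s.birthday_post_age < 16:
            score += 0.5
        return min(score, 1.0)

    # An account claiming 16+ but scoring highly would be queued for an
    # age assurance check rather than removed outright.
    signals = AccountSignals(declared_age=19, school_year_in_bio=True,
                             peer_median_age=14.5, birthday_post_age=15)
    if signals.declared_age >= 16 and underage_risk_score(signals) >= 0.6:
        print("queue account for age assurance review")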

The practicalities of age verification are complex and privacy-intrusive. The Australian government has run trials of age assurance technology, including methods that match uploaded photos against official ID or use AI-powered facial estimation. The problem is that such methods raise significant privacy concerns for all users, not just minors. Australians have already shown low trust in digital platforms to handle personal data securely, and any system requiring deeper personal data for age verification faces public scepticism.
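
One common way to soften this trade-off is a tiered flow: a low-friction estimate first, escalating to document checks only for borderline cases. The following minimal sketch shows that gating logic only; the two helpers, estimate_age_from_selfie and verify_age_with_id, are placeholders returning dummy values and do not represent any real vendor’s API.

    # Hypothetical tiered age-assurance flow. Both helpers are stand-ins
    # for whatever vendor or in-house system a platform might use.
    MIN_AGE = 16
    BUFFER = 3  # facial estimation is imprecise, so borderline results escalate

    def estimate_age_from_selfie(image_bytes: bytes) -> float:
        """Placeholder for an AI age-estimation model."""
        return 17.0  # dummy value for illustration

    def verify_age_with_id(document_bytes: bytes) -> int:
        """Placeholder for an official ID check; returns the verified age."""
        return 17  # dummy value for illustration

    def age_check(selfie: bytes, id_document: bytes | None) -> str:
        estimated = estimate_age_from_selfie(selfie)
        if estimated >= MIN_AGE + BUFFER:
            return "allowed"      # clearly over the threshold, no ID needed
        if id_document is None:
            return "id_required"  # borderline or under: ask for stronger proof
        # Retaining only the verified age, not the document itself, limits
        # the privacy exposure described above.
        return "allowed" if verify_age_with_id(id_document) >= MIN_AGE else "blocked"

    print(age_check(b"selfie-bytes", None))  # -> "id_required"

The point of the buffer is that most adults clear the estimate comfortably and never upload a document, which is where the privacy benefit of the tiered design lies.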

The New Zealand Perspective and Regulatory Pressure

The situation in Australia holds direct importance for New Zealand businesses and parents. New Zealand’s government is actively investigating similar measures. Prime Minister Christopher Luxon has repeatedly expressed concern about the harm social media causes young New Zealanders. He has stated that restricting access for under-16s would help protect children from bullying, harmful content, and social media addiction.

A draft Bill, the Social Media (Age Restricted Users) Bill, has been under consideration. This proposed law is closely modelled on the Australian legislation. It aims to force digital platforms to verify the age of users or face fines of up to NZ$2 million for non-compliance.

This potential shift is driven by local evidence of harm. Research in New Zealand shows that young people are exposed to significant risks online. A 2022 survey of over 3,600 young people aged 14 to 20 found that 97% reported using the internet several times a day or almost constantly. They used an average of five platforms regularly, with TikTok, Instagram, Snapchat, and YouTube being the most common.

The constant exposure carries consequences. Participants in studies reported significant exposure to marketing for unhealthy products, including vape and alcohol advertising. More than half of respondents in one youth survey recognised that excessive social media use negatively affects their wellbeing. Social media addiction also ranked as a top issue affecting their overall wellbeing.

The New Zealand government is under pressure to act. The Mental Health Foundation has called for urgent action on youth mental health, noting that more than one in four young New Zealanders struggle with their mental health. It points to the need for regulation to hold providers accountable for online harm.

However, the proposed New Zealand ban, just like the Australian one, faces opposition from rights groups. Rights Aotearoa, a human rights organisation, warns that the Bill could violate young people’s rights to freedom of expression under the New Zealand Bill of Rights Act 1990. They argue the ban restricts children’s access to information and their ability to participate in digital civic life without clear proof the restrictions are necessary. They suggest the government should focus on regulating the platforms’ algorithms and data practices, not restricting the users.

The debate in New Zealand highlights the difficult balancing act. On one side is the desire to protect children from proven harm, cyberbullying, and addiction. On the other side is the concern about privacy, potential isolation for vulnerable youth, and restricting access to networks that can be crucial for support, especially for LGBTQ+ young people in unsupportive home environments. The lessons learned from the Australian implementation, both the successes and the failures, will be essential for New Zealand policy makers to consider before any ban becomes law here.

The Effect on Digital Marketing Services

The regulatory changes in Australia, and the potential for similar laws in New Zealand, force a major rethink for businesses that rely on social media marketing to reach customers. The ban on under 16s is not a minor adjustment. It changes the fundamental makeup of the audience available on these platforms.

The primary impact is on audience targeting. For brands whose core customer is in the 13 to 25 age bracket, removing the 13 to 15 cohort means a significant drop in verified audience data. Even if accounts are not immediately deleted, the platforms are now legally obliged to stop targeting ads at users they know or suspect are under 16. The pool of young people addressable through social media marketing campaigns therefore shrinks.

Businesses can no longer rely on the sheer volume of young users to drive awareness. They must be more precise with their spending and more creative with their digital marketing services.

First, marketing teams must ensure they are using legally compliant age filters. Relying on self-reported age during ad setup is no longer sufficient. Businesses need assurance that the platforms they use are indeed taking the “reasonable steps” required to verify age. Any failure by the platform could still harm the advertiser by association, or simply waste ad budget on an unverified audience that is legally restricted.
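
As a concrete illustration, an advertiser can set an explicit age floor in the campaign’s targeting spec rather than trusting platform defaults. The sketch below uses the Meta Marketing API’s documented ad set fields (age_min, geo_locations); the access token, ad account ID, and campaign ID are placeholders, and details should be verified against the current API version.

    # Minimal sketch: creating an ad set with an explicit age floor via the
    # Meta Marketing API. Credentials and IDs are placeholders.
    import json
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
    AD_ACCOUNT_ID = "act_1234567890"

    targeting = {
        "geo_locations": {"countries": ["AU"]},
        "age_min": 18,  # explicit floor keeps spend away from any under-16 inventory
    }

    response = requests.post(
        f"https://graph.facebook.com/v19.0/{AD_ACCOUNT_ID}/adsets",
        data={
            "name": "AU compliant campaign",
            "campaign_id": "YOUR_CAMPAIGN_ID",
            "daily_budget": "2000",  # in cents of the account currency
            "billing_event": "IMPRESSIONS",
            "optimization_goal": "REACH",
            "targeting": json.dumps(targeting),
            "status": "PAUSED",
            "access_token": ACCESS_TOKEN,
        },
    )
    print(response.json())

Setting the floor at 18 rather than 16 is a deliberately conservative choice: it keeps campaigns clear of any age band the platforms may still be cleaning up.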

Second, the focus must shift to where the audience moves next. If young people are forced off Facebook and Instagram, they may move to exempt platforms like certain messaging apps, or to gaming sites like Roblox. They may also move to less regulated corners of the internet. For brands wanting to connect with this demographic, their social media marketing strategy must consider these secondary channels, often requiring different content and engagement rules.

Third, there is the issue of digital marketing services being used to bypass the ban. Advertisers must be careful not to create campaigns that obviously target the restricted age group, even if the ad platform’s filters technically allow it because of incomplete age data. The Australian government has firmly moved the ethical line, and businesses need a clear digital marketing strategy that prioritises ethical compliance and transparency.

This new regulatory environment makes professional guidance more important than ever. Businesses need expert help to navigate compliance, retargeting strategies, and the technical changes platforms will make to age assurance systems. A business must ensure its social media marketing is not only effective but also legally compliant with changing national laws. For New Zealand businesses, understanding these new compliance requirements and adapting their digital strategies is now a matter of urgency. The rules are changing, and marketers must change with them. Our team helps businesses understand the legal changes and adapt their marketing efforts responsibly. You can find out more about our complete range of services at our digital marketing services page.

Implementation Challenges and The Privacy Trade-off

The difficulty of enforcement remains a central point of discussion. The eSafety Commissioner does not require companies to verify the age of every user. Instead, the focus is on “reasonable steps.” This flexibility is meant to allow for evolving technology, but it also creates ambiguity. What one court considers reasonable, another may not.

The key challenge lies in the sheer volume of users who lied about their age when signing up. With millions of unverified accounts globally, and over a million in Australia alone for the main platforms, cleaning up this historical data is a massive technical undertaking. Platforms have spent years avoiding mandatory age verification, arguing it harms user privacy and freedom of access. Now they must perform it retrospectively.

The methods proposed for age checking present a privacy trade-off. Systems that require users to upload official photo identification or provide bank details are highly accurate, but they introduce a significant risk of data breaches. Given the global concern about platforms’ handling of personal data, imposing new, more intrusive data collection requirements on all users to police a minority creates a widespread privacy risk.

This dilemma is already a concern in New Zealand. Rights groups here have warned that age verification systems threaten everyone’s privacy, creating surveillance infrastructure that could be misused. They argue that age checking becomes a “backdoor to digital identity requirements and data harvesting.”

This conflict forces a choice between two forms of protection. Should the government protect the mental health of minors by restricting access, even if that means requiring platforms to gather more intrusive data on all users? Or should the government prioritise the privacy of the whole population, accepting that it will be harder to verify the age of minors? Australia has clearly chosen the former, placing the burden of responsibility and the technical challenge squarely on the tech giants.

The ongoing conversation about effective social media marketing also involves this privacy angle. As platforms are forced to reduce the data they hold on younger users, the precise targeting capabilities marketers have come to rely on will degrade. This pushes marketers toward privacy-respecting methods, focusing more on contextual advertising and broad content campaigns rather than specific demographic data mining.
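
Contextual targeting can be illustrated in a few lines: the ad is chosen from what is on the page, not from anything known about the reader. The topics, keywords, and ad copy below are invented purely for illustration.

    # Illustrative contextual matching: the ad follows the content being
    # viewed, so no reader profile data is needed. Examples are invented.
    ADS_BY_TOPIC = {
        "cycling": "Spring bike service - book online",
        "cooking": "Local produce boxes delivered weekly",
    }

    TOPIC_KEYWORDS = {
        "cycling": {"bike", "cycling", "commute", "helmet"},
        "cooking": {"recipe", "ingredients", "dinner", "oven"},
    }

    def pick_ad(page_text: str) -> str | None:
        words = set(page_text.lower().split())
        # Choose the topic whose keywords overlap most with the page.
        best = max(TOPIC_KEYWORDS, key=lambda t: len(TOPIC_KEYWORDS[t] & words))
        return ADS_BY_TOPIC[best] if TOPIC_KEYWORDS[best] & words else None

    print(pick_ad("A simple weeknight dinner recipe with five ingredients"))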

Marketers who want to succeed in this new era must start by creating content that adds value for the customer, rather than trying to track them aggressively. They need to find compliant and creative ways to reach their intended audience. We help businesses create these ethical and compliant strategies. You can learn more about how we structure winning campaigns on our dedicated social media marketing page.

Global Precedent and The Path Ahead

Australia’s ban is not occurring in isolation. It is part of a growing global movement. Individual states in the United States have attempted to pass similar restrictions, and Denmark’s prime minister has declared her country is looking at a ban for under-15s. The global concern over youth mental health and social media is reaching a tipping point, moving from discussion to enforced regulation.

Australia, being the first democracy to implement such a sweeping restriction, acts as a live testing ground. The world’s regulators are watching closely to see two things: whether the ban is actually enforceable, and whether it achieves the stated goal of improving youth wellbeing.

If the platforms can successfully remove over a million accounts and prevent new underage sign-ups, it will prove that such regulation is possible. This success would give great confidence to the New Zealand government and other nations considering similar laws. However, if enforcement fails, or if the ban simply pushes vulnerable youth onto more dangerous, unregulated sites, then the Australian law will be seen as a policy failure.

The consensus from the platforms is that while they will comply, the law will not fulfil its promise of making kids safer online. This highlights the risk that regulators are treating a complex social and psychological issue with a simple technical fix. Social media is integral to how young people socialise, learn, and engage with the world. A sudden, blunt prohibition risks isolating some of the most vulnerable young people, who rely on these online networks for support they may not find elsewhere.

The long-term solution likely involves a combination of platform regulation, parental guidance, and education that builds digital literacy in children. Regulation must focus on making platforms safer by design, rather than just banning access. This means making algorithms less addictive and changing business models that rely on maximum user engagement, often at the expense of mental health.

Ultimately, the Australian law represents a significant step. It tells global tech companies that national governments are willing to use their legal power and financial penalty mechanisms to enforce changes aimed at child safety. For New Zealand, this is a clear indication that similar changes are coming. Businesses, parents, and marketers must prepare for a future where access to digital audiences is tightly controlled and legally mandated, requiring smarter, cleaner, and more ethical social media marketing strategies than ever before.

The current political discussion in New Zealand is focused on youth access to social media. The report “Social media ban for young people to be investigated, Luxon says” provides further context on the New Zealand government’s investigation into similar restrictions.
