Protecting teenagers in the digital world or prohibiting them from it?

  • Children and Young People
  • Online Safety
  • Privacy and Data Protection

By Rys Farthing, Director of Policy & Research

The realisation that something is ‘wrong’ when it comes to young people and social media is now relatively widespread. From detailed research investigating correlations between the functionalities of social media platforms and disordered eating to discussions about the impact of Andrew Tate content on classroom behaviours, the public discourse is awash with conversations about the risks the digital world presents to the young. However, while there is an emerging consensus that there is a ‘problem’, there is significant divergence when it comes to ‘solutions’, including policy solutions.

For a few years now, there has been a growing call for ‘systemic’ policy solutions that make digital platforms responsible for the risks they pose to younger users and for how they design and build their systems. This has led to significant policy changes internationally, such as the UK’s Age Appropriate Design Code, the introduction of a duty of care model in the UK’s Online Safety Act, and the requirements to risk-assess systems underpinning the EU’s Digital Services Act. There have also been nascent calls in Australia for a systemic approach in children’s data privacy and online safety frameworks. These sorts of systemic approaches aim to make platforms redesign and reshape themselves so that they are safer and more private for young users in the first instance.

However, more recently there have been calls to simply ‘ban’ young people from social media, or even mobile phone usage, until they are older. These calls have predominantly emerged from Republican states in the US, such as Florida, Utah, Georgia, Louisiana and Texas, where, for various political reasons, the idea of regulating for safety is unappealing. However, despite being peculiarly American, these policy arguments to ban young people from social media have now travelled across the Pacific and landed in Australian public policy discourse. A number of States appear to have adopted this conservative stance and are exploring bans on either social media or smartphones. Backed by celebrity campaigns, they appear to be gaining traction at the Federal level as well.

The emergence of ‘bans’ as the policy approach to protecting children online in red states, as opposed to the systemic protections emerging in Europe and Democratic states, is no accident. In a particular brand of American legal thinking, the idea of requiring systemic safety protections for children — such as ensuring an algorithm would not promote suicide content to minors — violates the principles of free speech, and such protections have been challenged in the courts.

Banning teenagers from the digital world, simple though it may seem, may not be the most productive regulatory response. Young people have the right to go online and they also have the right to be safe online: these are not mutually exclusive rights. As the UN Committee on the Rights of the Child stated, “national policies should be aimed at providing children with the opportunity to benefit from engaging with the digital environment and ensuring their safe access to it” (emphasis added). In the Australian context, this requires a brave focus on online safety regulations that place a duty of care on tech companies and require them to make proactive and preventative changes to their systems and processes to ensure young users’ safety. Australia’s online safety framework is currently being reviewed to deliver just this sort of bold, proactive approach. Banning children seems to run counter to this and serves to limit their ability to access the digital world.

The world is becoming more digital every day, and bans won’t serve Australia well. As the Department for Education simply put it, “the world is changing around us. Digital technology has become a core part of our everyday lives”. Simply banning teenagers from social media, rather than reforming social media, sets a worrying precedent. If we set off down a policy path of bans, we may inadvertently create a precedent of narrowing digital opportunities for young people at the exact time when we are talking about expanding digital literacies and growing STEM skills among the young to meet an emerging digital skills shortage. It cannot be ‘beyond the wit’ of Australians to work out how to safeguard our digital architecture for young people. If we want our young people to thrive in the future, we may be better placed developing regulation that protects teenagers in the rich diversity of the digital world rather than regulation that prohibits them from it.

Beyond limiting opportunities, prohibitions embed an inappropriate responsibilisation into policy. The overt responsibilisation that e-safety curricula often imply has long been critiqued, and the idea of bans extends this logic of holding young people and parents responsible for online risks. On top of placing responsibility on teenagers and parents to ‘make smart choices online’ or to ‘think before you click’, responsibility would fall on them to ‘stay off’ the technology. Still, no responsibility would fall to digital platforms to fix their safety issues in the first instance.

Currently and historically, many commercial products have posed risks to teenagers, from cars to home swimming pools, and rarely have we responded to these technologies with such an individualised focus. As a society, we have always responded to such new technologies by ensuring that they were safe for young people, rather than banning young people from them. We implement requirements for seat belts or pool fences, rather than banning teenagers from cars or backyard pools. It is a particular form of tech exceptionalism to believe that managing the risks of digital technologies requires such individualised over-responsibilisation.

Tech companies should have the same safety obligations as other sectors, and should be required to make themselves safe for teenagers. Banning young people, rather than requiring tech companies to make necessary safety changes, lets tech companies off the hook and doesn’t fix the problem. If we simply ban kids until they are 14, 15 or 16, these products will still be unsafe when young people finally can join. We may keep them out of the frying pan, but they’ll still be jumping into the fire; a better approach would be regulations that tamp down the flames in the first instance.

Aside from these inherent issues, as decision-makers grapple with this debate, a number of implementation issues probably warrant deeper scrutiny. Specifically:

  • Bans push the problem onto parents, without making platforms safe. If we look at how social media bans have been implemented in the US states that have adopted them, they have largely been introduced by requiring parental consent for young people to have social media accounts. In this sense, they are less ‘bans’ and more ‘only with your parents’ say-so’ restrictions. But parental consent does not fix unsafe products. For many reasons (lack of knowledge, lack of energy to fight with their child about yet another issue, personal circumstances), some parents will consent and leave their teenagers accessing unsafe products. For parents who do resist, this simply escalates a whole world of intra-family conflict. I doubt there are many parents of teenagers who aren’t already having an ongoing or regular debate with their kids about how much, or which, technologies they are able to use – often leading to conflict. Introducing additional consent requirements for social media just pushes more pressure onto parents. Besides causing extra fighting and upset, it still doesn’t actually fix anything.

  • Defining what we are banning young people from isn’t simple. It seems easy to say ‘ban kids from social media’, but when it comes down to it we need to be mindful of what exactly we are banning young people from. Are we banning young people from social media as defined by functionality, such as digital products that enable you to create a public account, post content or send individual or group chats? If so, we might be taking them off YouTube, which is now a go-to for all sorts of life tutorials, Roblox, where they learn coding, or iMessage, where they stay in touch with family. Or are we blocking them by company and preventing them from using products created by ‘social media companies’, like Meta and Alphabet? If so, we need to think about other services run by these parent companies, like Google Maps and WhatsApp. While we may be clear in our minds what we want to block teenagers from, the deep vertical integration of the digital world means translating this into law isn’t always easy. Australia’s Online Safety Act has a range of industry classifications, including social media services but also relevant electronic services, designated internet services, hosting services, internet service providers, internet carriage services… The lines between them are not always clear and will continue to blur as digital functionalities evolve and vertical integration continues.

  • Implementing a ban without a focus on systems is impossible, as the technology to check ages and parental consent is not yet fully functional. There is no simple way for a digital service to check a user’s age—known as ‘age assurance’—and there are even less effective ways for services to verify parental consent. Issues with age assurance are currently plaguing attempts to restrict children’s access to pornography, and a $6.5m trial is taking place in Australia to see if we can get it right. However, the technology needed to prove someone is over 18 and can access pornography is not the same as the technology needed to prove you’re over 14 or 16, because 14 and 16 year olds largely do not have the same types of ID, and ‘biometric’ technologies like facial scanning currently have error bands of around 1.5 years – which is a lot for a 14 year old. An EU-wide investigation into parental consent technologies demonstrated similar difficulties. These challenges do not mean we cannot or should not try, but again, we’re talking about improving systems and functionality for age assurance. A focus on systems is still needed, and it seems peculiar to insist on improvements to age assurance systems rather than improvements to safety systems.


It comes down to what Australian decision-makers see the problem as, and how they want to address it. Is the problem that billion-dollar companies are creating unsafe digital products, or is the problem that some of the people who use them are teenagers?