More children are going online, more frequently, via more devices and services, at ever-younger ages. The internet offers huge opportunities for children, but it also poses major risks to minors’ safety, well-being and rights.
There is growing evidence of harm to children online, but addressing the risks is complex: the harms are often highly sensitive, their causes vary, long-term effects are hard to anticipate, and solutions require multiple stakeholders.
Against this backdrop, the UK government appointed Ofcom (aka the Office of Communications) as an internet watchdog, giving it the ability to fine social media companies that do not protect users from harmful content. Ofcom is the government-approved regulatory and competition authority for the broadcasting, telecommunications and postal industries of the United Kingdom.
What will Ofcom have the power to do?
Ofcom will oversee two broad areas: illegal content and harmful content. For the former, it will make sure companies quickly take down illegal content, with a particular focus on terrorism and child abuse imagery, and prevent much of it from being posted in the first place.
For the latter, Ofcom will primarily make sure social networks enforce their own terms and conditions. That means if a social network says, for instance, that material promoting self-harm is banned, it will be required to take action to enforce that.
Why is the government censoring the internet?
The government argues that both of these areas suffer from a lack of regulation. For illegal content, social networks currently face an all-or-nothing approach to liability, where they are free from all penalties provided they are not seen to be actively supporting the content. The government wants the ability to use penalties to encourage speedy enforcement, and to discourage companies from deliberately turning a blind eye to what happens on their own platforms.
For “harmful but not illegal content,” the government says it needs to act to protect children online, and wants to create a legal duty of care on the part of social networks to ensure they face penalties for harms their platforms cause.
It’s worth noting that age-restricted spaces cover a lot of territory. They obviously include social media sites, dating sites, streaming services, gambling sites and other sites whose purpose is to create an online community. But they also necessarily include websites that sell age-restricted products, such as adult products, alcohol, tobacco, chemical products, fireworks and even financial services.
Not surprisingly, the methods these sites use to perform age verification are at the heart of this matter. We wanted to better understand those methods, and the rationale for choosing them, among organisations operating in these age-restricted spaces, so we commissioned some research.
The research, conducted by Vitreous World, questioned 200 UK-based tech decision-makers within organisations that sell or offer age-restricted products or services. From their findings, we created this report, Protecting Minors Online: Methods and mindsets of businesses operating in age-restricted industries, which reveals the perspectives of UK businesses and what steps are being taken to prevent underage access.
The report finds that:
- An overwhelming majority (95%) believe it is important that minors do not access their company’s age-restricted product or service.
- But many (56%) are failing at the first hurdle by relying on inherently weak, more anonymous age-verification processes
- Over a quarter (26%) depend on a self-assessment form, 20% use document verification and a further 10% rely on a credit check report
- Those selling products, such as alcohol or fireworks, are less likely (50%) to depend on weak age-verification methods than those offering a service, like pornography (71%), where the perceived harm is somewhat less tangible
But it’s also clear from the research, and from the subsequent interviews we conducted with key players in these spaces, that a one-size-fits-all approach makes little sense.
While it’s completely appropriate to hold any organisation that profits from selling age-restricted products and services accountable for the potential harms caused by its platform, not all organisations that operate in age-restricted spaces have the same risk profile.
Many of the companies surveyed that sell potentially harmful products are already using more robust (less anonymous) forms of age verification, but others value more anonymous (less robust) methods. For example, it’s not surprising that the porn industry doesn’t adopt more robust forms of identity verification: doing so would crush conversion rates, because many visitors simply will not share a picture of their ID or a selfie. It’s just not in those companies’ self-interest, and for many visitors the need for anonymity trumps the desire to access those sites.
Dating sites have a similar need for anonymity. But they would argue that there are more sensible alternatives to verifying the ages of all new users. Instead, some dating sites are exploring a verification seal or badge that lets people know a user is authentic and has been verified using more robust forms of age and identity verification. This way, users can make their own decision about whether they want to date anyone who doesn’t have that badge of authenticity.
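To make that badge idea concrete, here is a minimal sketch of how an optional verification badge might be modelled on a profile. The names (Profile, VerificationMethod) and the set of methods are assumptions for illustration only, not any particular site’s implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class VerificationMethod(Enum):
    """Illustrative methods, roughly ordered from most to least anonymous."""
    NONE = auto()
    SELF_DECLARATION = auto()     # user simply states a date of birth
    CREDIT_CHECK = auto()         # check against credit-reference data
    DOCUMENT_AND_SELFIE = auto()  # ID document matched to a live selfie


@dataclass
class Profile:
    """A hypothetical dating-site profile carrying an optional verification badge."""
    display_name: str
    verification: VerificationMethod = VerificationMethod.NONE

    @property
    def verified_badge(self) -> bool:
        # Only the most robust, least anonymous method earns the visible badge;
        # other users can then decide whether to engage with unbadged profiles.
        return self.verification is VerificationMethod.DOCUMENT_AND_SELFIE


alice = Profile("alice", VerificationMethod.DOCUMENT_AND_SELFIE)
bob = Profile("bob")
print(alice.verified_badge, bob.verified_badge)  # True False
```

The design choice here is that verification stays optional: the platform surfaces the signal rather than gating access, which preserves anonymity for users who want it.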
Consequently, Ofcom needs to take a risk-based approach to age verification that depends on the industry and the likely harm of onboarding a bad actor: the greater the likelihood of social harm, the greater the need for robust, non-anonymous age verification.
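As a rough illustration of that risk-based idea, the sketch below maps an assumed harm tier to the least robust verification method a regulator might accept. The tiers, method names and sector examples are assumptions made for the sake of the example, not Ofcom policy.

```python
def minimum_verification(risk_tier: str) -> str:
    """Return an illustrative minimum age-verification method for a harm tier.

    The mapping is a sketch: higher assessed harm demands a more robust,
    less anonymous method.
    """
    tiers = {
        "low": "self-declared date of birth",        # e.g. age-gated marketing content
        "medium": "credit or database check",        # e.g. alcohol or fireworks retail
        "high": "ID document plus selfie match",     # e.g. gambling or adult content
    }
    return tiers[risk_tier]


print(minimum_verification("high"))  # ID document plus selfie match
```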