Selfie-based age checks boom as gov’ts push for online controls

LONDON, United Kingdom — A technological revolution is quietly transforming how digital platforms enforce age restrictions. As governments worldwide implement stringent regulations targeting social media and adult content websites, artificial intelligence-driven age verification systems are experiencing unprecedented growth.

The emerging technology offers a seemingly seamless answer to legislative mandates such as Australia’s forthcoming ban on social media for users under 16, due to take effect December 10. The process is strikingly simple: users take a selfie with their device’s camera and receive an automated assessment within 60 seconds.

At Yoti, a London-based technology startup, testing involves mannequin heads fitted with wigs and masks. The company’s algorithms consistently flag these deception attempts, with test platforms promptly rejecting non-authentic faces. Chief Executive Robin Tombs explained: ‘Through continuous refinement, our algorithm has developed exceptional pattern recognition capabilities, enabling accurate age estimation by analyzing facial characteristics.’

The commercial side is growing just as fast. Yoti currently processes approximately one million verifications a day for prominent clients including Meta, TikTok, Sony, and Pinterest. The company achieved profitability this year on £20 million ($26 million) in revenue for the twelve months to March, and projects a 50% sales increase for the current fiscal year.

This expansion reflects broader industry trends. The Age Verification Providers Association (AVPA) counts 34 specialized companies among its members, including Persona, Kids Web Services, K-id, and VerifyMy. A 2021 AVPA forecast projected nearly $10 billion in annual revenue across OECD nations at some point between 2031 and 2036, though updated projections remain pending.

AVPA director Iain Corby emphasizes the sector’s dynamic nature, noting: ‘The landscape evolves rapidly with simultaneous advancements in regulation and technology making precise predictions challenging.’

Despite the technology’s promise, significant concerns persist over algorithmic bias and privacy. Professor Olivier Blazy, cybersecurity expert at France’s prestigious Polytechnique engineering school, warns that these tools ‘may potentially intrude upon personal privacy realms.’ He also suggests regulators may recalibrate the current AI-dominated ecosystem to strengthen user protection.

Technical limitations remain. Researchers note that ordinary makeup can alter a person’s apparent age, while independent studies have found accuracy gaps when the systems process non-white faces. An Australian regulatory assessment acknowledged ongoing challenges with Indigenous population representation, while noting vendor efforts to close these gaps.

Yoti’s CEO acknowledges historical data gaps for certain demographic groups but maintains the system can detect cosmetic alterations and accessories designed to deceive it. On privacy, Tombs says images are deleted immediately after analysis.

Platforms implementing these tools typically build in safety buffers, often requiring a perceived age above the actual legal limit (for instance, judging visitors as over 21 for an 18+ restriction). Users in the marginal range may still have to fall back on traditional verification, such as official identification documents.
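The buffer-and-fallback rule described above can be sketched as a simple threshold check. This is a minimal illustration under assumed values; the thresholds, function name, and outcome labels are hypothetical, not any vendor’s actual implementation:

```python
# Illustrative sketch of a "safety buffer" decision rule for AI age estimates.
# LEGAL_AGE and BUFFER_YEARS are assumed example values, not real platform settings.

LEGAL_AGE = 18      # age the restriction legally requires
BUFFER_YEARS = 3    # extra margin the platform demands from the estimator

def access_decision(estimated_age: float) -> str:
    """Map an AI age estimate to an access outcome using a safety buffer."""
    if estimated_age >= LEGAL_AGE + BUFFER_YEARS:
        return "allow"              # clearly above the buffered threshold
    elif estimated_age < LEGAL_AGE:
        return "deny"               # clearly under the legal age
    else:
        return "fallback_id_check"  # marginal range: request official ID

print(access_decision(25.0))  # allow
print(access_decision(19.5))  # fallback_id_check
print(access_decision(15.0))  # deny
```

The buffer trades convenience for safety: the wider it is, the fewer underage users slip through, but the more adults near the limit are pushed into the slower document-based check.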