Understanding age assurance vs. age verification vs. age signals
and their impact on children and developers
For companies operating online, safeguarding kids in a digital world means navigating complex data protection rules and a growing list of compliance challenges.
In the absence of federal legislation, individual states have passed their own regulations, creating a fast-growing patchwork of age-verification laws across the country. In several U.S. states, such as Florida, Texas, Louisiana, and Utah, among others, “age gating” for adult content is now or will soon become mandatory. In addition, many social media apps and app stores are voluntarily “age gating” to meet privacy compliance requirements or reduce liability for AI-generated content. These laws and requirements vary substantially in their scope, applicable age thresholds, definitions of covered platforms, and enforcement frameworks.
For companies operating nationwide, that inconsistency is a major compliance obstacle, with implications reaching well beyond the protection of children online. Clear divisions have emerged between those who regard age verification as a necessary safeguard and critics who warn it could normalize a surveilled and censored internet. The consequences for privacy, speech, and digital rights would be profound, affecting every American regardless of age.
A legal battleground is taking shape around age assurance, age verification and age signals. The motivations behind these laws are generally positive. Lawmakers want to (i) prevent children from accessing pornographic or other harmful content; (ii) provide age-appropriate content and guardrails regarding suicide, self-harm, and addictive content; and (iii) provide more parental controls around children’s data.
The right way to implement these policy goals, however, is a lot messier.
Here’s the key question everyone needs to consider: How much information are we going to ask people to hand over to “know” their age?
Before wading into this quagmire, let’s at least agree on some key definitions:
Age Assurance – These are techniques to determine a person’s age and can be as low-tech as getting a user to self-report their age, or as high-tech as using AI techniques to “guess” a person’s age based on facial estimation, behavioral analysis or an analysis of data broker information.
Age Verification – This is a subset of age assurance, where there is a high level of proof concerning a person’s age. This includes turning over a driver’s license or other types of digital IDs to access a service.
Age Signals – This is a signal from a device, operating system, or browser that can be based on age assurance or age verification techniques.
California’s Digital Age Assurance Act (AB 1043), set to take effect January 1, 2027, requires operating systems and app stores to collect a user’s age at account creation and then send an age bracket to developers via an age signal. Developers cannot use this data, or share it, for purposes other than identifying a user’s age.
Compared to other age verification laws, which may require many individual websites or services to collect vast amounts of personal data, this seems like a balanced approach: it limits the number of parties that collect sensitive data while still providing some level of age assurance for developers. In addition, we would strongly encourage app stores and operating systems to use on-device storage and processing to further protect sensitive data.
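To make the developer side of this concrete, here is a minimal sketch of how an app might consume an age-bracket signal and map it to content settings. This is purely illustrative: AB 1043 does not define an API, and the bracket names, function names, and policy flags below are all hypothetical assumptions, not anything specified by the law or by any operating system.

```python
from enum import Enum

class AgeBracket(Enum):
    # Hypothetical bracket values; the actual brackets are defined
    # by AB 1043 and delivered by the OS or app store, not by the app.
    UNDER_13 = "under_13"
    TEEN = "13_17"
    ADULT = "18_plus"

def apply_age_gate(bracket: AgeBracket) -> dict:
    """Map an OS-provided age-bracket signal to in-app policy.

    Consistent with the law's intent, the signal is used only to
    determine age-appropriate treatment; it is not stored or shared
    for any other purpose.
    """
    if bracket in (AgeBracket.UNDER_13, AgeBracket.TEEN):
        return {
            "adult_content": False,
            "personalized_ads": False,
            "parental_controls": True,
        }
    return {
        "adult_content": True,
        "personalized_ads": True,
        "parental_controls": False,
    }
```

The key design point the sketch illustrates is that the app never sees a birth date or an ID document, only a coarse bracket, which is why this model collects less sensitive data than site-by-site verification.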
So what do you think? Do you agree with the California approach, and should this approach be adopted nationally?