Mounting age verification legislation with unclear technical requirements has created opportunities for private vendors to emerge as de facto guides as companies navigate legal and reputational risk.
In the US alone, 25 states, including Texas and New York, have some form of age-verification law. Most of the regulations require only that age-verification technology be “commercially reasonable,” but they vary widely in spelling out required levels of accuracy, privacy, and security.
Dozens of providers, many of which got their start doing “know your customer” services for financial institutions, have surfaced to meet the new demands faced by industries including social media, adult content, and websites that serve both kids and adults.
“I think that regulators have started off from the position of, ‘We’ve got to stimulate innovation, we’ve got to sort of encourage the market,’” said Julie Dawson, chief policy and regulatory officer at Yoti, a UK-based firm that specializes in AI that estimates a user’s age range by analyzing biometrics. “But actually, a lot of this innovation has already been tested in other sectors.”
Companies don’t suffer from a lack of options—Yoti’s method is just one in a wide range of approaches, including processing government IDs or collecting age signals at the device level. What is murkier is which technologies regulators will consider compliant.
“We’re at a stage where, I think, we need more granularity from regulators, and they need to look across each of those methods with those lenses—circumvention, privacy, fairness, security—and actually give some parameters,” Dawson said.
One Size Doesn’t Fit All
The new approach to online age-gating marks a significant shift from requirements under federal children’s privacy law, which focuses on parental notice and verifiable consent for users under 13. New age verification laws cover a much broader set of restrictions, from keeping users under 18 off pornographic websites to barring minors from social media altogether.
Vendors of age-assurance technology say there is no one-size-fits-all solution.
“It’s very different to say that someone’s over 18 talking with a chatbot versus that you’re talking to another human being who may be pretending to be somebody else,” said Rick Song, co-founder and CEO of Persona, an online identity-verification and fraud-prevention company. “Our recommendation is optimize for flexibility. When there’s a tremendous amount of uncertainty, the best thing you can do is optimize to make sure that you can cover every region differently.”
Friction Factor
Vendors said not every use case demands the same level of certainty.
“There’s trade-offs all over the place,” said Roman Karachinsky, chief product officer at financial-services-backed identity-verification firm Incode. “How effective is that particular way of assuring age going to be versus how privacy-protective? Generally what we see companies do is try to find some balance.”
In some cases, companies may intentionally choose a more visible age check to signal they are taking a stricter approach. In others, a user interface that makes it too difficult or frustrating to complete identity verification could cause friction, driving users away. Providers said one of the hardest questions is determining where stronger checks are necessary and where they are not.
“Sometimes you want to apply friction because you want people to be concerned or aware that this company is uber vigilant around something. And that’s a business decision. Maybe you want to apply more friction in the beginning and much less after that,” said Rivka Gewirtz Little, chief growth officer at Socure, an AI-powered identity-verification platform that also serves government clients. “The point is about making decisions where and how you apply friction, but what you don’t want to do is apply friction where it’s not necessary.”
Those decisions can have an enormous impact on public reception to the technology. Discord cut ties with Persona after backlash over the company’s use of off-device age verification. Persona said it immediately deleted the data.
Song noted that data security is a top priority for many of the company’s partners.
“They’re really concerned about, ‘Do we have the right retention policies? Do we have audits about how this data is being retained and redacted?’” Song said.
Another challenge for both vendors and companies is interoperability: making sure different methods can work together so users can rely on credentials across platforms instead of completing a new age check for every service.
One industry-backed concept is “age keys,” which would allow users to verify their age once and receive a reusable credential. Meta, Discord, Snap, Socure, Incode, and Persona are among the companies that have joined the OpenAge Initiative backing the technology.
The technology could become increasingly important as more laws require repeated and consistent age verification, such as Tennessee’s Protect Tennessee Minors Act.
The concept of a reusable credential faces the same challenge as the broader age-verification market: There is still no globally recognized standard approved by regulators, which could make adoption difficult, said Dawson.
More clarity may also emerge at the state level. Proposed rules under New York’s SAFE for Kids Act, which include minimum accuracy and detection-rate thresholds, could provide a model for other states. The rules are not finalized.
The Federal Trade Commission recently issued a policy statement requiring companies to take “reasonable steps” to audit their age-verification solutions for accuracy if they want to qualify for protections tied to use by children under 13. An agency official said at a conference last month that further updates to its children’s privacy rule are forthcoming.