Age Verification Is a Biometric Privacy Minefield: What Discord, IEEE, and Texas HB 1181 Actually Require
Age verification is four legal regimes, not one. What BIPA, Texas CUBI, Washington MHMDA, and Free Speech Coalition v. Paxton actually require of platforms verifying user age in 2026 — plus a build-or-buy matrix.
If you are building age verification into a product in 2026 — a Discord-style social app, an adult-content platform, a gaming service with minor users — you are standing in the middle of four legal regimes that push in opposite directions. State mandates require you to verify. State biometric statutes penalize how you verify. Federal statute has not landed. And the Supreme Court's Free Speech Coalition v. Paxton ruling settled some of the constitutional ground under adult-content verification while leaving social-media mandates in active litigation. This article walks through what the statutes actually require, what the 2025 ruling actually decided, and which of the four common implementation patterns survives where.
The statutory and constitutional landscape — why 'age verification' is not one regime
If you are building age verification into a product in 2026, the first mistake is treating it as a single compliance problem. It is not. Four distinct legal sources stack on top of each other, and your obligations are the union — not the intersection — of all four.
The four regimes, in the order they tend to bite a product team:
- Biometric privacy statutes — Illinois's BIPA, Texas's CUBI, Washington's MHMDA, and a growing list of state analogs. These govern how you verify if your method touches a face scan, voiceprint, or other biometric identifier.
- State age-verification mandates — as of mid-2025, at least 20 states had enacted age-gating laws for sites where one-third or more of content is sexual material harmful to minors, with nine of those laws taking effect in 2025 alone. A parallel wave targets minors on general-purpose social media. These dictate when verification is required.
- COPPA and its state descendants — the federal children's privacy floor, now layered with California's AB 1394 and similar CSAM and minor-protection duties.
- First Amendment doctrine — reshaped by Free Speech Coalition v. Paxton, decided June 27, 2025, in which a 6-3 majority applied intermediate scrutiny to Texas HB 1181 and upheld it as regulating only speech obscene to minors. That ruling is the current constitutional ceiling, and its reasoning reaches well beyond adult content into any statute that conditions access on age.
One regime that is not on this list: federal statute. KOSA has not been enacted. As of early 2026 it sits in the 119th Congress as S. 1748 and H.R. 6484, with a rolled-up vehicle (the KIDS Act, H.R. 7757) still in committee. Plan around the state patchwork, because that is the law that actually governs you today.
What 'age verification' actually means — four implementation patterns, four different legal profiles
'Age verification' is a specification, not a product. The IEEE 2089.1-2024 standard formalizes this by defining four confidence levels — asserted, standard, enhanced, and strict — each corresponding to a different implementation pattern. The confidence level you pick dictates what data leaves the user's device, what you retain, and which biometric statute you just walked into.
Pattern 1 — Self-attestation (the birthdate gate). The user types a date. Nothing biometric is collected, nothing leaves the device except a boolean. Biometric statutes like BIPA and CUBI are not triggered because no identifier is captured. The tradeoff: this maps to IEEE's 'asserted' tier, which most state age-verification mandates treat as insufficient for content deemed harmful to minors. Useful as a first layer; not a defense on its own.
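A minimal sketch of this pattern: the client computes the age comparison locally and only the boolean crosses the wire. The function name and shape are illustrative, not any vendor's API.

```python
from datetime import date

def is_at_least(dob: date, min_age: int, today: date) -> bool:
    """Return only a boolean; the birthdate itself never needs to leave the client."""
    years = today.year - dob.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years >= min_age

# The only value transmitted off-device is the boolean itself.
print(is_at_least(date(2008, 6, 15), 18, date(2026, 4, 1)))  # False: turns 18 in June 2026
print(is_at_least(date(2005, 3, 1), 18, date(2026, 4, 1)))   # True
```

Because nothing but the boolean leaves the device, there is no identifier for BIPA or CUBI to attach to; the weakness is exactly what the IEEE 'asserted' tier implies, namely that the input is unverified.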
Pattern 2 — ID-document verification (Persona, Veratad, Jumio). The user uploads a government ID, often paired with a selfie for liveness. The document image, a face template, and verification logs leave the device and sit with the vendor. Retention is the core exposure — IEEE Spectrum has called age verification a 'trap' precisely because platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators, turning each record into a breach target. Vendor risk is not hypothetical here: after Persona's government-dashboard code was found exposed on a public endpoint in February 2026, Discord and Persona confirmed the partnership had already dissolved by the time the exposure became public, and Discord publicly stated that it 'will not be proceeding with Persona for identity verification.' BIPA, CUBI, and MHMDA all apply to the underlying pattern.
Pattern 3 — Facial age estimation from a selfie (Yoti, Incode). The user takes a selfie; a model returns an estimated age. In Yoti's implementation, the image is transmitted to the backend, and Yoti represents that the selfie is deleted as soon as the estimate is returned, a process the ICO and KPMG have audited. The template leaves the device even if it is not retained, which is enough to trigger BIPA's written-consent requirement and CUBI's notice-and-consent rule. A second exposure: peer-reviewed work shows facial age-estimation models fail unevenly across age, gender, smiling versus neutral faces, and race, which matters under disparate-impact and UDAP scrutiny when false negatives lock adults out of lawful content.
Pattern 4 — On-device or zero-knowledge attestation (k-ID-style flows, OS-level age signals from Apple and Google). The computation happens on the device; what leaves is a signed token asserting 'over 18' or 'over 13' with no underlying biometric payload. If the implementation genuinely keeps templates local, biometric statutes are largely side-stepped because no identifier is captured or possessed by the platform. The caveat worth stating plainly: most claims of fully on-device processing sit in vendor marketing rather than independent audit, so diligence the attestation architecture before you treat it as a BIPA safe harbor. This is the pattern Discord has moved toward for global teen-default settings via its k-ID integration, while layering Yoti for UK Online Safety Act compliance — a multi-vendor posture that treats pattern choice as ongoing risk management, not a one-time build decision.
BIPA as the binding constraint — the only biometric statute with a private right of action that has produced nine-figure verdicts
If your age-verification flow touches a face template and any user is in Illinois, the Illinois Biometric Information Privacy Act is the statute that will decide whether your product is viable. BIPA is the only biometric privacy law in the United States with a private right of action that has actually produced nine-figure settlements, and its structure maps uncomfortably well onto the operational reality of a selfie-based or ID-plus-liveness verification flow.
The operative section is 740 ILCS 14/15, and four subsections matter in sequence. Section 15(a) requires you to publish a written retention schedule and destruction guidelines before you possess any biometric identifier. Section 15(b) forbids collection, capture, or receipt of a biometric identifier unless you have (1) given written notice that the information is being collected, (2) given written notice of the specific purpose and length of term of collection and storage, and (3) obtained a written release. Section 15(c) bars sale or profit from biometric identifiers, full stop. Section 15(d) bars disclosure to third parties without separate consent. Section 20 supplies the private right of action, and that is where the teeth sit.
Three decisions define how those teeth bite. First, in Rosenbach v. Six Flags, the Illinois Supreme Court held in 2019 that a plaintiff need not plead any actual injury beyond the bare procedural violation — denial of the statutory right is itself the injury. Second, in Cothron v. White Castle, a 4-3 majority held in 2023 that a separate claim accrues on every scan and every third-party transmission, which pushed estimated class-wide exposure in that one case above $17 billion. Third, the legislature blinked. Public Act 103-0769 (SB 2979), effective August 2, 2024, caps repeated collections from the same person using the same method at a single violation with at most one recovery, and explicitly permits electronic signatures on the written release.
The most recent development narrows exposure further. On April 1, 2026, the Seventh Circuit held in Clay v. Union Pacific that the 2024 amendment is remedial and applies retroactively to pending cases, capping repeated-scan claims at one recovery per person even for collections that occurred before the amendment's effective date. Cothron's per-scan math is effectively dead.
What BIPA is not, however, is dormant. More than 100 new BIPA class actions were filed in Illinois in 2025, with Clearview AI settling for $51.75 million and additional multi-million-dollar settlements across the tech and retail sectors. The per-person cap narrowed the ceiling; it did not extinguish the cause of action, and the plaintiffs' bar has not moved on.
For an age-verification deployment, the operational rule is simple and unforgiving. Before any Illinois user's face template is captured — whether for liveness on an ID upload or for facial age estimation — your flow must present written notice of collection, written notice of purpose and retention term, and obtain a written release, which can now be an electronic signature. Your public-facing retention schedule must be posted before the first scan, not bolted on after launch. Your vendor contract must prohibit sale and restrict third-party disclosure to match Sections 15(c) and 15(d), because BIPA liability follows possession, and a vendor's transmission is your transmission. One caveat worth stating plainly: no reported BIPA judgment or settlement as of April 2026 has turned specifically on age verification as the use case, so the contours of how courts will treat age-gating biometrics are a foreseeable next wave rather than settled precedent. Build as if you are the test case.
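The consent-before-capture rule is easier to defend when it is a hard gate in code rather than a policy document. A sketch, with field names as illustrative labels for the statutory prerequisites rather than terms of art:

```python
from dataclasses import dataclass

@dataclass
class ConsentState:
    # Flags loosely mapping to 740 ILCS 14/15(a)-(b); names are illustrative.
    retention_schedule_published: bool  # 15(a): public schedule posted before first scan
    notice_of_collection: bool          # 15(b)(1): written notice of collection
    notice_of_purpose_and_term: bool    # 15(b)(2): purpose and retention term disclosed
    written_release_signed: bool        # 15(b)(3): release, electronic signature permitted

def may_capture_biometric(state: ConsentState, user_in_illinois: bool) -> bool:
    """Gate the capture call itself: if any prerequisite is missing, no scan happens."""
    if not user_in_illinois:
        return True  # other statutes may still apply; this guard models BIPA only
    return all([
        state.retention_schedule_published,
        state.notice_of_collection,
        state.notice_of_purpose_and_term,
        state.written_release_signed,
    ])

ready = ConsentState(True, True, True, True)
missing_release = ConsentState(True, True, True, False)
print(may_capture_biometric(ready, user_in_illinois=True))            # True
print(may_capture_biometric(missing_release, user_in_illinois=True))  # False
```

Wiring the guard in front of the camera call, rather than logging consent after the fact, matches the statute's sequencing: notice and release come before capture, not alongside it.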
The state patchwork — Texas CUBI and HB 1181, Washington MHMDA, and the mandate-vs.-consent tension
BIPA sets the damages ceiling, but the operational tension in age verification lives one state over. Texas Business and Commerce Code § 503.001, known as CUBI, prohibits capturing a biometric identifier — retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry — for a commercial purpose without first informing the individual and obtaining consent. Civil penalties run up to $25,000 per violation, and enforcement is reserved exclusively to the Attorney General. There is no private right of action, which for years made CUBI look toothless next to BIPA.
That read ended on July 30, 2024. The Texas OAG announced a $1.4 billion settlement with Meta over its 'Tag Suggestions' facial-recognition feature — the first lawsuit ever brought under CUBI and the largest single-state settlement in U.S. history, paid over five years. The lesson for product teams is narrow and specific: face geometry derived from a selfie, including the template an age-estimation vendor generates, sits squarely inside CUBI's scope, and the Texas AG has now demonstrated both the appetite and the budget to price that capture at scale.
Now layer Texas HB 1181 (codified at Tex. Civ. Prac. & Rem. Code ch. 129B), effective September 1, 2023 and upheld by the Supreme Court in Free Speech Coalition v. Paxton. HB 1181 requires commercial entities whose content is more than one-third sexual material harmful to minors to verify user age via government-issued ID or transactional data. Penalties include $10,000 per day of non-compliance, up to $250,000 if a minor accesses harmful material as a result, and — critically — $10,000 for each instance of retaining the identifying information used to verify. The statute mandates capture and prohibits retention in the same breath.
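The penalty figures quoted above compound quickly. A back-of-envelope sketch, treating each component as additive worst case — whether penalties actually stack this way in an enforcement action is a legal question, so read this as exposure arithmetic, not a damages model:

```python
def hb1181_exposure(days_noncompliant: int, retained_records: int, minor_accessed: bool) -> int:
    """Worst-case sum of the penalty figures quoted above (illustrative only)."""
    total = 10_000 * days_noncompliant  # per day of non-compliance
    total += 10_000 * retained_records  # per instance of retaining verification data
    if minor_accessed:
        total += 250_000                # statutory maximum for the minor-access component
    return total

# 30 days out of compliance, 1,000 retained verification records, one minor access:
print(hb1181_exposure(30, 1_000, True))  # 10,550,000
```

Note which term dominates: at 1,000 retained records the retention component is $10 million against $300,000 of daily penalties, which is the arithmetic behind the statute's verify-but-do-not-retain design.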
Washington's My Health My Data Act (Chapter 19.373 RCW) adds the private-plaintiff dimension. MHMDA took effect March 31, 2024 for regulated entities and June 30, 2024 for small businesses, and makes any violation a per se violation of the Washington Consumer Protection Act — unlocking injunctive relief, actual damages, attorneys' fees, and treble damages capped at $25,000 per plaintiff. MHMDA's definition of 'consumer health data' is broad enough that a facial scan routed through an age-assurance vendor arguably qualifies. No MHMDA case has yet turned specifically on an age-verification selfie as of April 2026, but the theory is available and the plaintiffs' bar is watching.
California's AB 1394, effective January 1, 2025, pushes from the other direction. It does not mandate age verification, but it imposes statutory damages of $1 million to $4 million per act on social-media platforms that knowingly facilitate commercial sexual exploitation, with a safe harbor tied to periodic safety audits. The practical effect is to make age-assurance workflows a risk-control measure even where no statute directly requires them.
The tension is now explicit. HB 1181 and AB 1394 push you toward capturing ID and biometric signal to prove a user's age or shield the platform. CUBI and MHMDA penalize that same capture when notice and consent fall short, and HB 1181 itself penalizes retention of the very data it just required you to collect. The only architecture that satisfies both sides is one that verifies, does not retain, and can prove non-retention on audit — which is what the next section builds out.
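One way to model that verify-but-do-not-retain architecture in code: the verification evidence exists only inside the function scope, and the record that survives carries a digest rather than the data. Whether a digest itself satisfies a given statute's definitions is a question for counsel; this sketch models the data flow only, and `verify_fn` stands in for whatever vendor call you actually make.

```python
import hashlib
import time

def verify_and_discard(id_image: bytes, verify_fn) -> dict:
    """Run verification, then keep only an evidence digest, never the evidence."""
    outcome = verify_fn(id_image)  # vendor call; the image lives only in this scope
    record = {
        "ts": time.time(),
        "outcome": outcome,
        # A digest lets you later show which evidence backed the decision
        # without retaining the evidence itself.
        "evidence_sha256": hashlib.sha256(id_image).hexdigest(),
    }
    del id_image  # nothing outside this function ever holds the raw bytes
    return record

rec = verify_and_discard(b"fake-id-bytes", lambda img: "over_18")
print(sorted(rec))  # ['evidence_sha256', 'outcome', 'ts']
```

The audit story follows from the shape of the record: you can prove what you decided and when, and you can prove the raw capture was never persisted, because there is no field in which it could have been.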
First Amendment scrutiny after Free Speech Coalition v. Paxton — what the 2025 ruling actually held
The headline of Free Speech Coalition v. Paxton is narrower than the commentary around it. On June 27, 2025, a 6-3 majority authored by Justice Thomas upheld Texas HB 1181 and held that intermediate — not strict — scrutiny governs age-verification laws regulating only speech that is obscene to minors, on the theory that such laws impose no more than an incidental burden on adults' access to protected speech. The Court found HB 1181 survived that test because adults have no First Amendment right to avoid age verification when the regulated material is obscene to minors. Think of the ruling as splitting the field into two tracks. Track 1, statutes confined to speech obscene to minors, is settled: HB 1181 and the roughly 20 state clones that mirror its one-third-sexual-material trigger stand.
Track 2 is not. Justice Kagan's dissent, joined by Justices Sotomayor and Jackson, argued that HB 1181 is content-based and that Reno v. ACLU and Ashcroft v. ACLU required strict scrutiny for any content-based burden on adults' access to protected speech. The 6-3 split maps the doctrinal pressure points future challengers will exploit — and the lower courts are already exploiting them against broader minor-protection laws.
A year earlier, in Moody v. NetChoice, 603 U.S. 707 (2024), a unanimous Court vacated Fifth and Eleventh Circuit rulings on the Florida and Texas social-media content-moderation laws and re-anchored facial-challenge doctrine, holding that courts must analyze the full scope of a statute's applications. Age-verification mandates that sweep beyond obscene-to-minors content sit under Moody's rubric, not FSC's.
The post-Moody district-court record is one-sided. On April 16, 2025, the Southern District of Ohio permanently enjoined Ohio's Parental Notification by Social Media Operators Act as a content-based restriction implicating the First Amendment, and the District of Utah preliminarily enjoined Utah's Minor Protection in Social Media Act (SB 194/HB 464) in September 2024. The Ninth Circuit's August 16, 2024 ruling in NetChoice v. Bonta upheld the injunction against California's AADCA as to the DPIA requirement on compelled-speech grounds, while vacating and remanding on the default-privacy-setting provisions.
The operational takeaway for a product lead deciding whether to implement a social-media age gate in 2026: FSC is not the green light it reads as in press coverage. Its reasoning is cabined to content obscene to minors. Every broader regime — Ohio's, Utah's, California's AADCA — remains either enjoined, partially enjoined, or exposed to Track 2 challenge. Build for the narrower HB 1181-style obligation where it applies. Do not assume the constitutional ground under social-media age mandates will hold.
The build-or-buy decision matrix — which implementation pattern survives where, in 2026
The preceding sections should make one thing clear: there is no single right answer. The pattern that survives depends on which jurisdiction's users you touch and what business you are actually in. The matrix below is how we triage the question in implementation reviews.
By jurisdiction:
- Illinois — If any user is in Illinois and your flow captures a face template, Pattern 3 (server-side facial estimation) and Pattern 2 (ID plus liveness) both require 740 ILCS 14/15-compliant written notice, purpose-and-retention disclosure, and a written release before capture. Pattern 4 (on-device attestation) is the only path that meaningfully narrows BIPA exposure, and only if you can prove the template never left the device.
- Texas — CUBI's $25,000-per-violation ceiling and the $1.4 billion Meta settlement mean that any selfie-derived face geometry is AG-enforcement surface. If you are also subject to HB 1181, you are required to verify and prohibited from retaining in the same statute. Pattern 2 with contractual prompt-deletion, or Pattern 3 with audited immediate deletion, are the only postures that reconcile the two.
- Washington — MHMDA's consumer-health-data definition is broad enough to reach a facial scan routed through an age-assurance vendor. Any pattern that transmits biometric signal off-device needs MHMDA-grade consent language and a vendor contract that forecloses secondary use.
- California — AB 1394 does not mandate verification but prices the failure to deploy reasonable age-assurance at $1 million to $4 million per act where commercial sexual exploitation is facilitated. Pattern 1 alone is indefensible; Patterns 3 or 4 paired with periodic safety audits track the statutory safe harbor.
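The jurisdiction rows above can be crudely encoded as a routing table. Everything below is an illustrative simplification of this article's matrix, not a legal conclusion: pattern numbers follow the taxonomy from the implementation-patterns section, and "consent flow" collapses each state's distinct notice-and-consent requirements into one flag.

```python
# Pattern numbers: 1 self-attestation, 2 ID-document, 3 facial estimation, 4 on-device.
MATRIX = {
    "IL": {"allowed": {1, 4}, "conditional": {2, 3}},  # 2/3 need BIPA notice + release
    "TX": {"allowed": {1, 4}, "conditional": {2, 3}},  # 2/3 need audited prompt deletion
    "WA": {"allowed": {1, 4}, "conditional": {2, 3}},  # 2/3 need MHMDA-grade consent
    "CA": {"allowed": {3, 4}, "conditional": set()},   # Pattern 1 alone indefensible (AB 1394)
}

def permitted_patterns(state: str, has_consent_flow: bool) -> set[int]:
    entry = MATRIX.get(state)
    if entry is None:
        return {1, 2, 3, 4}  # no restriction modeled here; real per-state analysis needed
    patterns = set(entry["allowed"])
    if has_consent_flow:
        patterns |= entry["conditional"]
    return patterns

print(sorted(permitted_patterns("IL", has_consent_flow=False)))  # [1, 4]
print(sorted(permitted_patterns("IL", has_consent_flow=True)))   # [1, 2, 3, 4]
```

The useful property of writing the matrix down, even this crudely, is that the union across your user base falls out mechanically: intersect `permitted_patterns` over every state you touch and you get the patterns that survive everywhere.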
By business model:
- Adult content under HB 1181 — You must verify via government ID or transactional data and you may not retain identifying information used to verify. Pattern 2 with contractually bound prompt-deletion, or transactional-data verification (credit card plus billing name), are the two workable options. Self-attestation is not.
- Social platform with an age-mixed audience — Free Speech Coalition v. Paxton does not cover you; Moody and the Ohio and Utah injunctions do. Pattern 4 on-device attestation, layered with Pattern 3 only where a jurisdiction compels a higher confidence tier, is the posture that survives both the First Amendment challenge and the biometric-statute exposure. Discord's multi-vendor k-ID-plus-Yoti architecture is the template.
- Gaming or edtech with COPPA exposure — For verifiable parental consent, 16 CFR § 312.5 enumerates the permitted methods. The face-match method — government ID plus a contemporaneous selfie compared via facial recognition, with prompt deletion after match — is federally blessed, which makes it defensible even in BIPA and CUBI jurisdictions provided your notice and consent language is in place before capture. Demographic-performance data on the face-match method in FTC guidance remains thin, so document your vendor's accuracy metrics across age, gender, and skin-tone buckets as part of diligence.
Vendor contract checklist. Whichever pattern you choose, if you are buying rather than building, the contract is where liability actually moves. Five provisions are non-negotiable:
- No retention beyond the verification event. Images, templates, and ID scans deleted on confirmation, with logs that prove non-retention. Require independent audit of deletion, as Yoti's ICO- and KPMG-audited posture demonstrates is achievable.
- Indemnity for BIPA § 15 liability. Possession follows transmission, so the vendor must backstop the statutory subsections — 15(a) retention schedule, 15(b) notice and release, 15(c) sale prohibition, 15(d) third-party disclosure — not just generic privacy claims.
- Data residency controls. Contractually fixed processing regions, with a kill switch if the vendor migrates infrastructure.
- Audit right. Annual third-party audit against IEEE 2089.1-2024 and the Age Check Certification Scheme, or equivalent. Conformance becomes your standard of care.
- Right to switch vendors without re-enrolling existing users. The February 2026 Persona exposure showed why this matters: when a vendor fails, you need an exit that does not force every user through a second verification event.
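A deletion log only proves non-retention if the log itself is tamper-evident. A common construction is a hash chain: each entry commits to the digest of its predecessor, so an auditor can replay the chain from the start and detect any edit, removal, or insertion. A minimal sketch, with event fields chosen for illustration:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append a deletion event chained to the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "event": event, "digest": digest})

def verify_chain(log: list[dict]) -> bool:
    """Replay the chain; any edited, dropped, or inserted entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

log: list[dict] = []
append_entry(log, {"user": "u1", "action": "verified_then_deleted", "ts": 1})
append_entry(log, {"user": "u2", "action": "verified_then_deleted", "ts": 2})
print(verify_chain(log))  # True
log[0]["event"]["ts"] = 99  # retroactive tampering breaks every later link
print(verify_chain(log))  # False
```

Anchoring the latest digest somewhere the vendor cannot rewrite, such as a periodic attestation delivered to you under the audit right above, turns the vendor's deletion log into evidence you can actually rely on.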
If you want a second set of eyes on which pattern fits your user base and which contract terms your current vendor is actually giving you, book an implementation review at promise.legal; one session is usually enough to surface the retention clause that needs to change before your next release.