The TRAIGA Safe Harbor: Why the NIST AI RMF Is Now a Business Decision
TRAIGA went live on January 1, 2026, with $200K-per-violation Texas AG enforcement and an affirmative defense for substantial NIST AI RMF compliance. That converts NIST adoption from a governance preference into a documented business decision.
The Inflection Point: TRAIGA Goes Live
The Texas Responsible Artificial Intelligence Governance Act took effect on January 1, 2026, and with it, AI governance moved from a policy preference to a statutory exposure for Texas enterprises. TRAIGA is enforced solely by the Texas Attorney General, with civil penalties of up to $200,000 per violation and per-day penalties available for continuing, uncured conduct. For an in-scope deployer or in-scope developer operating multiple AI systems across business lines, that penalty structure compounds quickly enough that a CFO will treat it as a material liability rather than a regulatory footnote. The statutory text sits in HB 149 of the 89th Legislature, and the enforcement architecture is laid out clearly in Norton Rose Fulbright's analysis of the act.
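The compounding a CFO sees can be put in arithmetic. The sketch below is a back-of-envelope ceiling, not a statutory computation: the $200,000 per-violation figure comes from the act, while the system count, violation count, and $40,000 per-day accrual for continuing uncured conduct are illustrative assumptions.

```python
# Back-of-envelope exposure sketch under TRAIGA's penalty structure.
# The per-violation cap is from the act; the per-day figure and the
# scenario counts below are illustrative assumptions, not findings.
PER_VIOLATION_CAP = 200_000

def exposure(violations: int, continuing_days: int = 0, per_day: int = 40_000) -> int:
    """Ceiling on civil penalties for a set of violations, plus a
    hypothetical per-day accrual while conduct continues uncured."""
    return violations * PER_VIOLATION_CAP + continuing_days * per_day

# Five in-scope systems, one alleged violation each, uncured for 30 days:
print(f"${exposure(violations=5, continuing_days=30):,}")  # $2,200,000
```

Even this conservative scenario lands in seven figures, which is why the penalty structure reads as a material liability rather than a regulatory footnote.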
The countervailing mechanic is what makes TRAIGA structurally different from a pure penalty regime. The act provides an affirmative defense for companies that can demonstrate “substantial compliance” with the latest NIST AI Risk Management Framework or another nationally recognized AI risk-management framework, as Latham & Watkins has documented. In practice, that points enterprises to two specific reference texts: the NIST AI RMF (2023) and NIST AI 600-1, the Generative AI Profile published in 2024. Both are voluntary frameworks on their face. Under TRAIGA, they are the documented path to a statutory defense.
That changes the question general counsel and chief AI officers should be asking. The decision is no longer whether to adopt the NIST AI RMF as a matter of governance hygiene. It is what substantial compliance with NIST AI RMF and NIST AI 600-1 actually costs to build and maintain, measured against the penalty exposure the affirmative defense is designed to neutralize. That cost-versus-exposure calculation is the subject of what follows.
What Substantial Compliance Actually Requires
The affirmative defense is not a generalized good-faith standard. As Ropes & Gray has detailed, a deployer or developer invoking the defense must show substantial compliance with the most recent NIST AI Risk Management Framework, including the Generative AI Profile at NIST AI 600-1, or another nationally recognized AI risk-management framework, paired with internal-discovery and cure conditions on the underlying conduct. The reference text is therefore not optional or substitutable in spirit; it is the document the Texas Attorney General will measure a program against.
The NIST AI RMF organizes that program around four functions. Govern establishes the policies, roles, and accountability structures that sit above any individual AI system. Map identifies the context, purpose, and risk surface of each system in scope. Measure applies quantitative and qualitative testing to those identified risks. Manage allocates resources to treat, monitor, and respond to risk over the system's lifecycle. The four functions are designed to operate as a cycle, not a checklist completed once at deployment.
For generative AI specifically, the NIST AI 600-1 Generative AI Profile, published July 26, 2024, overlays additional expectations onto those four functions. It addresses governance tailored to generative systems, content provenance, pre-deployment testing, and incident disclosure as distinct workstreams. An enterprise deploying a foundation-model-based product without addressing those four overlays has not substantially complied with the framework TRAIGA references, even if its broader RMF program is mature.
What “substantial” means in this posture is not yet defined by Texas case law, and enterprises should treat predictions about enforcement as hedged rather than settled. The closest available analogy is documentation-driven federal compliance regimes. SEC guidance on FCPA affirmative defenses turns on whether entries were properly approved and contemporaneously documented, and on whether a written compliance program existed at the time of the conduct. The Attorney General and Texas courts are likely to read substantial compliance through a similar lens: a program that is documented, dated, reviewed, and defensible on its face, rather than one that is merely described after the fact. Promise Legal has separately critiqued the prevailing legal-market read of TRAIGA in Why Your Lawyer Must Actually Understand Technology (and What TRAIGA Gets Wrong), which argues that the statute's definitional reach is broader and less technically coherent than most firms acknowledge — a critique that bears directly on how “substantial compliance” will be argued and litigated in practice.
Two carve-outs deserve emphasis. As Greenberg Traurig has noted, TRAIGA's prohibited-practices list and its consumer-disclosure requirements operate independently of the affirmative defense. Substantial compliance with the NIST AI RMF does not cure a prohibited use, and it does not substitute for the disclosures the statute requires when consumers interact with an AI system. The safe harbor sits on top of those obligations; it does not absorb them. What the documentation looks like in practice is the next question.
The Documentary Spine: Eight Artifacts That Carry the Defense
The dispositive question in AI-failure litigation has shifted. After Marchand v. Barnhill and In re McDonald's, the operative inquiry is no longer whether the company had a policy — it is whether the company can produce contemporaneous, dated artifacts proving the policy was followed. Substantial compliance with the NIST AI RMF is established through documents, not declarations. Eight artifacts carry the defense.
- AI policy stack. Acceptable use policy, AI governance charter, and a written risk-classification methodology — the structural backbone aligned to ISO/IEC 42001:2023.
- AI inventory with risk classification. A written, dated register of every AI system in use, refreshed on a defined cadence and tied to the classification methodology.
- AI Council with charter and contemporaneous minutes. A standing governance body with a written charter, defined membership, and minutes that show the council actually met and decided.
- Signed Acceptable Use Policies for users. Per-user attestations, dated and retained, that close the gap between policy publication and policy adoption.
- Model and system cards. Per-system documentation capturing intended use, training-data provenance, known limitations, and evaluation results.
- Vendor onboarding artifacts. AI bill-of-materials disclosures and lawful-training-corpus warranties from each vendor — the post-Bartz diligence baseline.
- Training records and verification protocols. Dated rosters of who completed AI training, plus written verification procedures requiring independent review of GenAI output. ABA Formal Opinion 512 elevates this artifact from best practice to a professional-responsibility obligation for legal teams.
- Incident-response playbook with tabletop documentation. A written playbook plus dated records of tabletop exercises showing the playbook was rehearsed, not merely drafted.
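One way to make the "dated, refreshed on a defined cadence" discipline concrete is to track the eight artifacts as a review register that flags anything past its refresh window. The sketch below is illustrative only: the owners, review dates, and cadences are assumptions, not requirements drawn from the statute or the framework.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Artifact:
    """One entry in the documentary spine: what it is, who owns it,
    when it was last reviewed, and how often it must be refreshed."""
    name: str
    owner: str                 # accountable role (illustrative)
    last_reviewed: date
    review_cadence_days: int   # assumed refresh cadence

    def is_current(self, today: date) -> bool:
        return today <= self.last_reviewed + timedelta(days=self.review_cadence_days)

# Illustrative register of the eight artifacts (dates and cadences invented).
spine = [
    Artifact("AI policy stack", "GC", date(2026, 1, 15), 365),
    Artifact("AI inventory with risk classification", "CAIO", date(2026, 1, 20), 90),
    Artifact("AI Council charter and minutes", "AI Council", date(2026, 2, 1), 30),
    Artifact("Signed user AUP attestations", "HR", date(2026, 1, 10), 365),
    Artifact("Model and system cards", "CAIO", date(2026, 2, 10), 90),
    Artifact("Vendor onboarding artifacts", "Procurement", date(2026, 1, 25), 180),
    Artifact("Training records and verification protocols", "GC", date(2026, 2, 5), 180),
    Artifact("Incident-response playbook and tabletops", "CISO", date(2026, 2, 15), 90),
]

# Anything stale on a given date is the gap a records request surfaces first.
stale = [a.name for a in spine if not a.is_current(date(2026, 6, 1))]
print(stale)
```

The point of the structure is the `is_current` check: a register that can answer "what is overdue for review today" is the difference between a described program and a dated, defensible one.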
The Delaware Supreme Court held in Marchand v. Barnhill that the absence of a documented monitoring system over a mission-critical risk was itself a breach of fiduciary duty. The Court of Chancery extended that logic in In re McDonald's, holding for the first time that corporate officers — not just directors — owe a duty of oversight within their areas of responsibility, which extends naturally to a CAIO or GC sitting over an enterprise's AI footprint. For Texas enterprises, AI is a mission-critical risk under TRAIGA, and the eight artifacts above are the contemporaneous evidence that converts an in-scope deployer's substantial-compliance posture from argument into record. The question becomes how to assemble them on a workable timeline.
Implementation Sequence: 90 Days to Defensible
A 90-day program is not ISO 42001 certification. Full ISO/IEC 42001 deployment typically runs 12 to 18 months with one to two dedicated FTEs, with certification valid for three years subject to annual surveillance audits. The TRAIGA-defensible posture is a deliberately narrower target: produce the eight-artifact documentary spine in working order, mapped to the NIST AI RMF functions, before the next Attorney General civil investigative demand lands. Practitioner consensus, including Baker Botts, converges on a four-phase sequence that an in-scope deployer can execute in roughly a quarter.
- Weeks 1-3 — Discovery. Build the enterprise AI inventory, classify each system by risk tier, and run a gap analysis against the NIST AI RMF Govern, Map, Measure, and Manage functions. Norton Rose Fulbright treats inventory and risk classification as the highest-leverage opening move, and the reasoning is structural: without a defined scope, downstream artifacts cannot be evaluated for substantial compliance because there is nothing concrete to measure them against.
- Weeks 4-6 — Governance. Stand up the AI Council with documented charter and decision rights, draft and approve the policy stack, and roll out the acceptable use policy (AUP) to employees with attestation tracking.
- Weeks 7-9 — Vendor and contracting. Issue the vendor onboarding template, identify high-risk vendors from the inventory, and begin contract reformation to push NIST-aligned representations, audit rights, and incident-notification terms into the paper.
- Weeks 10-12 — Operational. Capture training records, complete model cards for in-scope systems, run an incident-response tabletop against the GenAI Profile scenarios, and deliver the board memo documenting the program and residual risk.
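The Weeks 1-3 gap analysis can be sketched as a simple scoring of each inventoried system against the four NIST AI RMF functions. The function names come from the framework; the system names, risk tiers, and evidence flags below are invented examples, not a prescribed schema.

```python
# Hypothetical discovery-phase gap analysis: for each inventoried system,
# record whether documented evidence exists under each NIST AI RMF function.
FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

inventory = {
    # system: (risk tier, {function: documented evidence exists?})
    "claims-triage-llm": ("high", {"Govern": True, "Map": True, "Measure": False, "Manage": False}),
    "marketing-copy-genai": ("medium", {"Govern": True, "Map": False, "Measure": False, "Manage": False}),
    "invoice-ocr": ("low", {"Govern": True, "Map": True, "Measure": True, "Manage": True}),
}

def gaps(inventory):
    """Return {system: [missing functions]}, highest risk tier first."""
    tier_rank = {"high": 0, "medium": 1, "low": 2}
    out = {}
    for system, (tier, evidence) in sorted(
        inventory.items(), key=lambda kv: tier_rank[kv[1][0]]
    ):
        missing = [f for f in FUNCTIONS if not evidence.get(f, False)]
        if missing:
            out[system] = missing
    return out

print(gaps(inventory))
# {'claims-triage-llm': ['Measure', 'Manage'],
#  'marketing-copy-genai': ['Map', 'Measure', 'Manage']}
```

Sorting the gaps by risk tier is what turns the inventory into a work plan: the Weeks 4-12 phases close the missing functions in the order the classification methodology dictates.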
The sequence is iterative, not waterfall — governance drafting begins while discovery is still closing, and vendor work surfaces inventory gaps that loop back to Phase 1. The same questions the inventory and contracting workstreams now answer internally are the questions every in-scope developer and downstream vendor will be asked next.
The Procurement Question: Why TRAIGA Restructures Vendor Diligence
TRAIGA's documentation discipline does not stop at the enterprise perimeter. Indemnity provisions cascade the underlying risk onto vendors, which means a Texas buyer's AI inventory, AI BOM, and policy stack now do double duty — as the affirmative-defense record before the AG, and as discovery exhibits in any vendor breach claim. Procurement questionnaires in 2026 ask what AI a vendor uses, how it is governed, and what risk framework sits behind it. Buyers expect documented answers, not assurances.
The legal pressure on those answers comes from two 2025 decisions. In Bartz v. Anthropic, Judge Alsup held that training on lawfully acquired books is “quintessentially transformative” fair use, but that Anthropic's pirated central library was infringing — Anthropic settled for $1.5 billion and agreed to destroy the original pirated files, the largest copyright settlement in US history. In Kadrey v. Meta, Judge Chhabria found Meta's training use fair on the record but cautioned that the ruling did not establish that Meta's training was lawful — only that those plaintiffs failed to develop a market-dilution record. Together, the cases push lawful-training-corpus warranties to the top of every 2026 AI vendor negotiation. Pirated corpora generate infringement exposure that, absent a negotiated cap or carve-out, flows upstream through indemnity.
This is now structured industry practice. The IAPP 2026 AI Governance Vendor Report sorts the governance services enterprises demand from vendors into four categories — policy and compliance tools, technical assessments, assurance and auditing, and advisory. Volume is climbing: Gartner projects that by 2028, 70% of organizations and vendors will use GenAI on both sides of the third-party-risk questionnaire. The buyer's documentation stack is no longer a defensive artifact; it is the same evidence pattern the AG examines under TRAIGA, and the same record vendors will be measured against. TRAIGA is not the only regime asking for this stack.
Beyond TRAIGA: Why This Stack Travels
TRAIGA's NIST safe harbor is also the on-ramp to a multi-regime posture. The same documentary spine that earns the affirmative defense in Texas absorbs Colorado obligations, EU AI Act file requirements, and whatever federal posture eventually settles. The strictest-rule strategy fails here because the regimes diverge by mechanism, not just by severity.
Three fronts shape the exposure. Colorado is a moving target: the current Colorado AI Act takes effect June 30, 2026, but the Polis work group proposed a replacement framework on March 17, 2026, targeting a January 1, 2027 effective date for the successor regime (Mayer Brown). The EU AI Act's high-risk obligations are scheduled to apply August 2, 2026, with extraterritorial reach to non-EU providers and penalties up to €35M or 7% of worldwide turnover for prohibited practices and €15M or 3% for other infringements (Orrick). The European Commission's November 19, 2025 Digital Omnibus proposes to defer high-risk obligations to December 2, 2027, but until adopted the August 2026 deadline controls (DLA Piper). On the federal side, the December 11, 2025 preemption executive order directs agencies to challenge state AI laws and conditions certain federal funding on states avoiding “onerous” AI rules (White House EO). Practitioner consensus is that, absent a federal statute, the EO is unlikely to halt operative state regimes including TRAIGA (Goodwin).
The mechanisms differ. Colorado's draft framework centers on algorithmic discrimination in consequential decisions; the EU AI Act runs through a risk-class taxonomy with conformity assessments; TRAIGA pivots on intent and the NIST-aligned safe harbor. A program built to the strictest single rule cannot satisfy obligations that ask different questions. A modular NIST AI RMF and ISO 42001 spine can: build the artifacts once, then route regime-specific obligations through the same evidentiary base. That is the parallel-pipelines posture, and it is the only stable answer while timelines remain unsettled. Promise Legal's AI governance framework guide sketches the same architecture for counsel sitting in front of these decisions.
Key Implications for Practice
TRAIGA is more than a Texas statute. It is a forcing function that consolidates four trends — credibility pressure on legal departments, the rise of the Chief AI Officer, officer-level fiduciary exposure, and the operational reality of multi-regime AI compliance — into a single board-level decision.
First, TRAIGA reframes NIST adoption as a CFO-legible cost/benefit rather than a GC judgment call. Thomson Reuters' 2026 report shows 86% of GCs believe their department significantly contributes while only 17% of C-Suites agree — a 69-point credibility gap. The safe harbor lets legal translate compliance spend into a quantified liability ceiling.
Second, Promise Legal's view is that the window to be a Texas-jurisdiction-credentialed implementation shop is roughly six to nine months before AmLaw firms productize the same offering. Early-mover advantage compounds with each documented deployment.
Third, officer-level personal exposure is now real. Marchand set the director-level duty; In re McDonald's extended a duty of oversight to corporate officers within their respective areas of responsibility — meaning a CAIO or GC named to govern AI personally bears the obligation to establish good-faith monitoring of that domain. With 60% of enterprises now staffing a Chief AI Officer, the named individuals carry the documentation burden directly.
Fourth, the strategic move is to implement once, document continuously, and refresh quarterly — treating documentation discipline as the core deliverable, not a byproduct of deployment. The eight-artifact spine is built to be maintained, not archived. For Texas enterprises mapping the NIST safe harbor against their actual AI footprint, the work cannot wait until the first AG inquiry arrives.
Mapping the NIST safe harbor against your actual AI footprint is more straightforward with counsel who have built the eight-artifact spine before. Talk with our team about your TRAIGA exposure.