A 90-Day TRAIGA Compliance Plan for Texas Tech Companies

TRAIGA takes effect Jan 1, 2026 with civil penalties up to $200K per violation. Section 546.103 makes substantial NIST AI RMF compliance an affirmative defense. This guide lays out a 90-day, four-phase workplan: Discovery, Governance, Vendor and Contracting, and Operationalization.


What TRAIGA Demands of Texas Enterprises

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) takes effect January 1, 2026, imposing AI governance duties on government agencies and private entities operating in Texas. Enforcement sits exclusively with the Texas Attorney General, who must extend a 60-day cure period before assessing penalties. As Norton Rose Fulbright details, curable violations carry civil penalties of $10,000 to $12,000; uncurable violations escalate to $80,000 to $200,000, with continuing violations accruing at $2,000 to $40,000 per day.

The statute also offers a planning hook that should reshape enterprise compliance posture. Section 546.103 establishes that substantial compliance with the NIST AI Risk Management Framework operates as an affirmative defense in TRAIGA enforcement proceedings, provided the enterprise can produce documented evidence across all four RMF functions: Govern, Map, Measure, and Manage. The operative NIST artifact for generative systems is NIST AI 600-1, the Generative AI Profile, released July 26, 2024, which lays out more than 200 suggested actions across 12 risk categories spanning governance, content provenance, pre-deployment testing, and incident disclosure.

This guide is the operational companion to the firm's TRAIGA Safe Harbor analysis, which sets out the doctrinal case for treating substantial NIST AI RMF compliance as the dominant strategic posture. The pages that follow translate that posture into a four-phase, 90-day plan: Discovery, Governance, Vendor and Contracting, and Operationalization. Each phase produces the documented artifacts the safe harbor requires.

Phase 1 (Weeks 1-3): Discovery

Discovery sits at the front of the 90-day plan because every downstream substantial-compliance artifact — risk assessments, governance policies, adversarial testing, vendor diligence — depends on a defined scope. A compliance program built on an incomplete inventory will fail under AG scrutiny no matter how polished the documentation looks. Norton Rose Fulbright frames the same sequencing point: companies should first determine TRAIGA applicability, then build the policy and control infrastructure — policies, technical controls, audit trails — on top of that scope. Weeks 1 through 3 produce three deliverables that together define what the rest of the program must cover.

The first deliverable is an enterprise AI inventory. ISACA frames the baseline as cataloging every approved model, integration, and API with its purpose, data classification, and owner. Promise Legal layers a four-tier risk classification on top: consumer-facing systems, employment and HR systems, regulated-workflow systems, and internal productivity tools. Each tier carries a different TRAIGA exposure profile and a different evidentiary burden under the NIST AI RMF defense.
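To make the first deliverable concrete, here is a minimal sketch of one inventory record as a Python data model. The field names and tier labels are illustrative conventions, not terms mandated by TRAIGA or the NIST AI RMF.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Four-tier classification following the Promise Legal rubric."""
    CONSUMER_FACING = 1        # highest TRAIGA exposure
    EMPLOYMENT_HR = 2
    REGULATED_WORKFLOW = 3
    INTERNAL_PRODUCTIVITY = 4  # lowest exposure

@dataclass
class AISystemRecord:
    """One row in the enterprise AI inventory (ISACA baseline fields)."""
    name: str                  # approved model, integration, or API
    purpose: str               # business purpose in plain language
    data_classification: str   # e.g. "public", "confidential", "regulated"
    owner: str                 # accountable business owner
    risk_tier: RiskTier        # drives the evidentiary burden under the RMF defense

inventory = [
    AISystemRecord(
        name="resume-screening-saas",
        purpose="Rank inbound applicants for recruiter review",
        data_classification="regulated",
        owner="HR Operations",
        risk_tier=RiskTier.EMPLOYMENT_HR,
    ),
]
```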

The second deliverable is a vendor map. Every third-party AI vendor, every underlying model the vendor uses, and every dataset that touches the system gets recorded. Baker Botts notes that NIST AI RMF alignment requires third-party vendor management alongside adversarial testing and incident response — none of which is possible without a vendor map. The third deliverable is a NIST AI RMF gap analysis that scores current-state controls against the four core functions — Govern, Map, Measure, Manage — and the GenAI Profile in NIST AI 600-1.
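The gap analysis can live in the same data model. The sketch below scores current-state controls against the four RMF functions; the 0-to-3 maturity scale and the target threshold are illustrative conventions, not part of NIST AI 600-1.

```python
# Hypothetical maturity scale: 0 = absent, 1 = ad hoc, 2 = documented, 3 = evidenced
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def gap_report(scores: dict[str, int], target: int = 2) -> list[str]:
    """Return the RMF functions whose current maturity falls below target."""
    return [fn for fn in RMF_FUNCTIONS if scores.get(fn, 0) < target]

current_state = {"Govern": 1, "Map": 2, "Measure": 0, "Manage": 1}
print(gap_report(current_state))  # ['Govern', 'Measure', 'Manage']
```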

The predictable discovery surprise is shadow AI. ISACA defines it as unauthorized use of AI tools to perform job tasks, the direct parallel to shadow IT. Marketing teams running copywriting models, sales teams piping CRM data into prospecting assistants, engineering teams using code-generation tools, and HR teams testing resume-screening software all show up once discovery begins in earnest. Nudge Security observes that effective shadow-AI discovery spans network, SaaS, endpoint, browser, and identity layers, and that email-based discovery can surface historical AI use within minutes to hours. A single-channel sweep will miss material exposure.
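The core matching step in multi-channel discovery is simple even if the tooling around it is not. A minimal sketch, assuming exported network or proxy logs and a curated list of known AI tool domains; both the log format and the domain list here are hypothetical, and production programs typically rely on commercial discovery tooling of the kind Nudge Security describes.

```python
# Illustrative domain list; a real program maintains a curated, updated catalog.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_lines: list[str], approved: set[str]) -> set[str]:
    """Return AI tool domains seen in traffic but absent from the approved list."""
    seen = {d for line in log_lines for d in KNOWN_AI_DOMAINS if d in line}
    return seen - approved

# Hypothetical proxy-log excerpt
logs = [
    "2026-01-05T09:14:02 user=jdoe dest=chat.openai.com",
    "2026-01-05T09:15:11 user=asmith dest=claude.ai",
]
print(flag_shadow_ai(logs, approved={"claude.ai"}))  # {'chat.openai.com'}
```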

Discovery scope determines governance scope. Whatever surfaces in Weeks 1 through 3 becomes the universe that Phase 2 must govern, document, and test.

Phase 2 (Weeks 4-6): Governance

Phase 2 produces three deliverables: a chartered AI Council, an approved policy stack, and an Acceptable Use Policy rolled out with per-employee attestation. These artifacts move the program from inventory to institutional control. Each one is dated, version-controlled, and built to survive forensic review by Texas AG investigators or downstream litigants.

Stand up the AI Council

The AI Council is a multidisciplinary body, not a Legal-only or IT-only committee. The IAPP 2026 AI Governance Vendor Report emphasizes that effective AI governance requires multidisciplinary oversight drawing on privacy, cybersecurity, IT, ethics, and legal expertise. Council seats should include Legal, Risk, Security, Engineering or Data Science, HR, and Operations. ISO/IEC 42001:2023 establishes the cross-functional steering pattern: a group with real decision rights, a defined meeting cadence, and a rotating chair to prevent any single function from dominating outcomes.

The Council Charter must specify scope, decision rights, escalation channels to the CEO and Audit Committee, meeting cadence (monthly minimum during the compliance build-out), rotating chair across functions, and minutes discipline. Deloitte's ISO 42001 guidance frames AI as an enterprise-wide capability requiring leadership oversight, clear accountability, and lifecycle controls. The charter is the document that makes those concepts operational.

Approve the policy stack

Three policies form the core stack:

  • AI Acceptable Use Policy (AUP) — employee-facing rules covering approved tools, prohibited inputs (regulated data, client confidences), disclosure obligations, and incident reporting.
  • AI Governance Charter — board-level document defining the Council, executive sponsorship, and reporting lines to the Audit Committee.
  • Risk Classification Methodology — the four-tier rubric the Council applies to every AI system in the inventory; a mechanical sketch follows this list.
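To show how the rubric could be applied mechanically at intake, here is a minimal sketch. The precedence order is an assumption for illustration; actual classification decisions belong to the Council, not a script.

```python
def classify(consumer_facing: bool, employment_decision: bool,
             regulated_workflow: bool) -> str:
    """Apply the four-tier rubric in descending order of TRAIGA exposure.
    The precedence shown is illustrative; the Council sets the real rules."""
    if consumer_facing:
        return "Tier 1: consumer-facing"
    if employment_decision:
        return "Tier 2: employment/HR"
    if regulated_workflow:
        return "Tier 3: regulated workflow"
    return "Tier 4: internal productivity"

print(classify(consumer_facing=False, employment_decision=True,
               regulated_workflow=False))  # Tier 2: employment/HR
```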

Roll out the AUP with attestation

Publication is not adoption. Each employee must complete a tracked attestation acknowledging the AUP, with completion logged per user and per version. Baker Botts' TRAIGA guidance emphasizes that organizations should maintain detailed documentation of AI system purposes, intended use cases, testing protocols, and clear policies restricting deployment to lawful purposes. Dated attestations turn the AUP from a PDF into evidence — the exact form the affirmative-defense record must take heading into Phase 3 implementation.
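A minimal sketch of what "logged per user and per version" can mean in practice, assuming an append-only record; the field names are illustrative, and any HR or LMS platform that produces equivalent dated records serves the same evidentiary purpose.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Attestation:
    """One dated, per-user, per-version acknowledgment of the AUP."""
    employee_id: str
    policy_version: str   # e.g. "AUP v1.2"
    attested_at: datetime

log: list[Attestation] = []

def record_attestation(employee_id: str, policy_version: str) -> None:
    """Append a timestamped attestation; prior-version records are never overwritten."""
    log.append(Attestation(employee_id, policy_version, datetime.now(timezone.utc)))

def outstanding(all_employees: set[str], version: str) -> set[str]:
    """Employees who have not yet attested to the given AUP version."""
    done = {a.employee_id for a in log if a.policy_version == version}
    return all_employees - done
```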

Phase 3 (Weeks 7-9): Vendor and Contracting

Phase 3 produces two artifacts: a vendor onboarding template every new AI procurement runs through, and a contract reformation queue for the high-risk vendors already in production. Attempting to renegotiate every existing AI contract simultaneously is a losing strategy. The realistic move is to update the master template now so every new agreement and every renewal inherits the upgraded language, then reform existing contracts on a risk-tiered schedule.

The substantive content of that template is already documented in the eight-clause architecture for modern AI vendor contracts: lawful-training-corpus warranty, AI BOM disclosure, model-card delivery, audit rights, incident reporting, data-use limits, AI-specific indemnity, and AI-specific termination triggers. The AI Council should treat that eight-clause set as the floor for any vendor handling regulated workflows.
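As a sketch of how the Council might enforce that floor at intake, each contract review can reduce to a checklist comparison. The clause identifiers below simply restate the eight-clause architecture; the function name and structure are illustrative.

```python
REQUIRED_CLAUSES = {
    "lawful_training_corpus_warranty", "ai_bom_disclosure", "model_card_delivery",
    "audit_rights", "incident_reporting", "data_use_limits",
    "ai_specific_indemnity", "ai_specific_termination",
}

def missing_clauses(present: set[str]) -> set[str]:
    """Return the clauses a draft still lacks before it can clear Council intake."""
    return REQUIRED_CLAUSES - present

draft = {"audit_rights", "data_use_limits", "incident_reporting"}
print(sorted(missing_clauses(draft)))
```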

Two external standards anchor the AI BOM disclosure clause and should be cited by reference in the contract. The OWASP AIBOM Project defines an AI BOM as a standardized, auditable record of the datasets, weights, and methodologies underpinning a model. Linux Foundation SPDX 3.0 extends the SBOM concept to AI artifacts with machine-readable support for datasets, model metadata, pipelines, and runtime environment. Naming both gives the disclosure obligation a concrete, enforceable shape.
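To illustrate the kind of record such a clause could require, here is a minimal AI BOM stub that echoes the OWASP AIBOM elements (datasets, weights, methodologies). The field names are not the SPDX 3.0 schema; a conforming deliverable would use the SPDX 3.0 AI and dataset profiles directly.

```python
# Illustrative AI BOM record; field names are NOT the SPDX 3.0 schema.
ai_bom = {
    "model": {
        "name": "vendor-summarizer",          # hypothetical vendor model
        "version": "2.3.1",
        "weights_digest": "sha256:<digest>",  # integrity reference, elided here
    },
    "datasets": [
        {"name": "vendor-curated-corpus", "license": "proprietary",
         "provenance": "vendor-attested"},
    ],
    "methodology": {
        "training": "supervised fine-tune",
        "base_model": "third-party foundation model",
    },
}
```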

Reformation order follows risk tier. High-risk AI deployments come first: regulated workflows, employment decisions, and consumer-facing systems where TRAIGA exposure is greatest. Medium-risk vendors follow, then low-risk tools on natural renewal. The rationale for the cascade is structural. As Baker Botts notes, NIST AI RMF alignment includes third-party vendor management, and TRAIGA exposure flows upstream through vendor indemnity. The buyer's onboarding documentation becomes evidence in any downstream vendor dispute, so the artifacts produced in this phase do double duty: they prove substantial compliance to the AG and they preserve recourse against the vendor whose model produced the harm.

Phase 4 (Weeks 10-12): Operationalization and Closing the Loop

Phase 4 converts the governance scaffolding from prior phases into operational artifacts that a regulator, plaintiff, or board would recognize as substantial compliance. Four deliverables anchor the final three weeks. First, dated training rosters confirming that personnel using in-scope AI systems have completed role-specific instruction on the AUP, escalation paths, and the AI Council intake process. For legal teams, training should incorporate the ABA Formal Opinion 512 verification overlay, which directs lawyers to independently verify generative AI outputs to a degree calibrated to the task and tool.

Second, model and system cards for every in-scope system, aligned to the NIST AI 600-1 GenAI Profile action calling for plain-language documentation of intended use, limitations, and system operation. Third, at least one dated incident-response tabletop exercise that stress-tests the playbook against GenAI-specific scenarios — model manipulation, prompt injection, misinformation cascades — which traditional incident-response frameworks were not designed to manage. Fourth, a board memo documenting the program, residual risk, and the company's substantial-compliance posture under TRAIGA's affirmative defense.
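A minimal sketch of the plain-language system card that documentation action contemplates, rendered as a structured stub; the headings and the example system are illustrative, not a NIST template.

```python
system_card = {
    "system": "internal-contract-summarizer",   # hypothetical in-scope system
    "intended_use": "Summarize inbound vendor contracts for attorney review",
    "out_of_scope": "Final legal advice; any consumer-facing output",
    "limitations": ("May omit or misstate clause-level detail; outputs require "
                    "attorney verification per ABA Formal Opinion 512"),
    "operation": "Hosted LLM behind SSO; prompts and outputs logged",
    "last_reviewed": "2026-03-15",  # dated artifact for the compliance record
}
```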

Actionable Next Steps

  1. Schedule the Phase 1 discovery kickoff this week. The 90-day clock is tight; delays compound across phases.
  2. Identify the AI Council chair and members. The chair is typically the General Counsel or Chief AI Officer, with cross-functional representation from engineering, security, and product.
  3. Engage external counsel or a specialist if internal capacity is tight. Substantial compliance is a defensible posture only if the underlying work is rigorous.
  4. Begin a personal AI governance log if you are a CAIO or GC with named responsibility for the AI domain. As Promise Legal has detailed in its analysis of officer liability after the McDonald's decision, a contemporaneous log is a defensive artifact for officers facing oversight-failure claims.
  5. Set the 90-day calendar with named owners for each deliverable across Phases 1-4. Unowned deliverables do not ship.

For Texas tech companies that need a partner to operationalize this plan, the next step is a structured engagement.

90 days is achievable when the discovery kickoff happens this week, the AI Council has named owners, and the documentary artifacts are dated as they are produced. Talk with our team about scoping the plan for your enterprise.

Start the conversation