[COMMENTARY] Capriglione's Texas Responsible AI Governance Act — Actually, Pretty Good


If you've been following AI closely, you'll have noticed that there are almost as many misses as there are swings in the regulatory space. The whack-a-mole practice of separating self-proclaimed AI experts from the real deal is exhausting, and it contributes heavily to misguided policies and practices, even at the highest levels of a given profession.

Texas Rep. Giovanni Capriglione has released a draft AI bill that has some good meat on the bone, so let's dive in!

All indicated edits are mine—additions appear in bold, deletions in strikethrough. The original draft is preserved below, or you can view the PDF using this link.

551.001
(1) "Algorithmic discrimination" means any condition in which an artificial intelligence system when deployed creates or perpetuates an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected classification in violation of provided for by the laws of this state or federal law.

🖋️
Algorithmic discrimination usually isn't the creation of a new form of discrimination. Rather, it's the exacerbation of an existing problem that quickly scales to a large scope. An AI system can still effect a disparate impact even if it didn't "create" the underlying disparate treatment.

Adding "when deployed" is redundant, since the existence of an AI system that technically could perform an unlawful differential treatment does not, in itself, make the AI system unlawful. It requires implementation for there to be any disparate impact, and the "deployment" language simply introduces room for arguing that deployment is an element of the offense. If the AI is performing an unlawful treatment, it seems unlikely that Rep. Capriglione wants to add a hurdle by requiring that prosecutors to prove a "deployment" element.

(2) “Artificial intelligence system” means a machine-based system capable of:

(A) perceiving an environment through data acquisition and **autonomously or systematically** processing and interpreting the derived information to ~~take~~ **self-determine** an action or actions or to imitate intelligent behavior given a specific goal; and

(B) **contributing data to, or implementing,** learning and adapting behavior **mechanisms** ~~by~~ **based on** analyzing how ~~the environment is~~ **outcomes are** affected by prior actions.

🖋️
Artificial intelligence system definitions are hard to pin down—it's a fast-moving industry, and there are multiple methods by which the systems can be developed. However, binding the definition to systems that "perceive an environment through data acquisition" adds another hurdle to actual enforcement of this Act. Many AI tools that can unlawfully and disparately impact individuals do not need to "perceive an environment." A prime example would be credit determinations driven by AI models—they have no need to interact with the environment, nor do they need to perceive it: they simply process the credit applicant's data based on observations of past data and outcomes. In other words, AI tools predict what the outcome of a given course of action might be based on historical data. Additionally, while AI models are created through learning mechanisms, the processing of data by an AI does not require the system to self-improve contemporaneously (or at all).
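
To make the credit example concrete, here is a minimal sketch—with made-up applicant data—of a model that makes consequential predictions without ever perceiving an environment or learning after deployment:

```python
# Minimal sketch of the credit-determination example: a model fit on
# historical applicant records scores new applicants. It never "perceives
# an environment" and never self-improves at prediction time.
# All data and feature names are hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical records: [income_in_thousands, debt_to_income_pct, years_of_credit_history]
past_applicants = [
    [35, 45, 1],
    [85, 20, 12],
    [52, 38, 4],
    [95, 15, 20],
    [28, 55, 2],
    [70, 25, 9],
]
past_outcomes = [0, 1, 0, 1, 0, 1]  # 0 = defaulted, 1 = repaid

model = LogisticRegression().fit(past_applicants, past_outcomes)

# A new applicant is scored purely from their submitted data.
new_applicant = [[60, 30, 6]]
print(model.predict_proba(new_applicant))  # [probability of default, probability of repayment]
```

A system like this plainly can produce a disparate impact, yet it arguably falls outside a definition that requires perceiving an environment.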

551.002
APPLICABILITY OF CHAPTER. This chapter applies only to a person that ~~is not a small business as defined by the United States Small Business Administration, and~~:

🖋️
The ubiquity of AI solutions and the simplicity with which an AI system can be developed mean the opportunity for abuse is within reach of even the smallest businesses. By volume, small businesses are the most likely to misuse AI, whether intentionally or accidentally, simply because of the sheer number of them active in the state, and giving them a pass to mislead or manipulate consumers is likely to encourage reckless implementations of AI tools with a large impact on the public.

(1) conducts business, promotes, or advertises in this state or produces a product or service consumed by residents of this state; or

(2) engages in the development, distribution, or deployment of a high-risk artificial intelligence system in this state**, excepting contributions to technical repositories where the contributor is not contributing for a commercial purpose**.

**Notwithstanding the foregoing, Subchapter B and its enforcement mechanisms apply to all people subject to the laws of this state.**

🖋️
A significant subset of AI development occurs in open-source and open-source-adjacent contexts. Open-source development makes a project publicly accessible so that others can contribute. Often, these contributors are not employed by the entity responsible for creating the software and may have little to no connection to its creators beyond the software itself. Contributors often provide development support for personal reasons, such as belief in the public good provided by a solution, the utility of the project across a number of use cases, or as a personal challenge or portfolio item.

Open-source projects can have hundreds or thousands of contributors, each providing granular improvements without necessarily having insight into the big picture or ultimate control of the software. Failing to provide exceptions for such contributors will severely limit the development of the state of the art, because each contributor might otherwise be subject to penalties despite their limited control over how the software is used and their lack of financial interest in its deployment.

551.010
DIGITAL SERVICE PROVIDER AND SOCIAL MEDIA PLATFORM DUTIES REGARDING ARTIFICIAL INTELLIGENCE SYSTEMS. A digital service provider as defined by Section 509.001(2), Business & Commerce Code or a social media platform as defined by Section 120.001(1), Business & Commerce Code, shall make a commercially reasonable effort to prevent advertisers on the service or platform from deploying a high-risk artificial intelligence system on the service or platform that could expose the users of the service or platform to algorithmic discrimination.

🖋️
No change.

I'm slightly puzzled as to the goal here, which makes it hard to add any meaningful commentary. My best guess is that they are addressing a potential future advertising schema where AI bots interact with users through the ad, which is fair enough. The "commercially reasonable effort" language is ambiguous enough to obviate the section. I'd rather see something like an outright ban on the use of interactive AI in advertisements—especially if the chapter's small-business carveout remains—since that is a ripe area for abuse and poor implementation.

If they are simply trying to speak to the advertisement of services that are served via AI interfaces, then I'd suggest some clarifying language.

Sec. 551.051
MANIPULATION OF HUMAN BEHAVIOR TO CIRCUMVENT INFORMED DECISION-MAKING. An artificial intelligence system shall not be developed or deployed that uses subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by ~~appreciably impairing~~ **interfering with** their ability to make an informed decision, ~~thereby~~ **and** causing a person to ~~make a decision that the person would not have otherwise made,~~ **take a course of action adverse to their interests** in a manner that causes or is likely to cause significant **[physical/financial/emotional/other]** harm to that person or another person or group of persons.

🖋️
The problem with identifying the specifically forbidden techniques as an element of the violation is that the current state of the technology does not make insight into the underlying mechanisms of an AI model possible. In practice, this means that even if it can be shown that a given model has the objective or effect of distorting a person's behavior, enforcement will likely be precluded by the inability to definitively say that the system is specifically using subliminal, purposefully manipulative, or purposefully deceptive techniques.

The rest of the edits in this section similarly seek to pin down, with specificity, the behavior the section seemingly wants to prevent. "Appreciably impairing," "make a decision that the person would not have otherwise made," and "significant harm" are wobbly enough phrases that courts are likely to try different approaches to defining them; these terms will introduce confusion as to the goal of the text. The suggested edits simplify and specify measurable harms that are easier to identify in practice.

Sec. 551.052
SOCIAL SCORING. An artificial intelligence system shall not be developed or deployed for the evaluation or classification of natural persons or groups of natural persons based on their ~~social~~ behavior or known, inferred, or predicted personal characteristics with the intent to determine a ~~social score or similar~~ categorical estimation or valuation of a person or groups of persons.

🖋️
Tasking the courts with parsing social behavior from other types of behavior will introduce yet more confusion. The goal of the section makes sense, but unless the terms "social" and "social score" are defined in the bill, I think being more direct and simplified here will be beneficial.

Sec. 551.053
CAPTURE OF BIOMETRIC IDENTIFIERS USING ARTIFICIAL INTELLIGENCE. An artificial intelligence system shall not be developed or deployed with the purpose or **intended** capability of capturing, through the targeted or untargeted gathering of images or other media from the internet or any other publicly available source, a biometric identifier of an individual **without their informed consent**. ~~An individual is not considered to be informed nor to have provided consent pursuant to Section 503.001(b), Business and Commerce Code, based solely upon the existence on the internet, or other publicly available source, of an image or other media containing one or more biometric identifiers.~~

🖋️
The primary edit here is cleaning up the language around informed consent. Unless there is a use case here that they intend to exempt, cutting the explanatory example actually expands the bill's language to simply forbid capturing biometric identifiers without informed consent; based on the rest of the bill, I think it's safe to say that this is the intent anyway.

However, I added the word "intended" to modify the capability of capturing biometric identifiers. The capability of capturing biometric identifiers is present in every AI system in one way or another. For example, automated license plate readers are AI systems intended to capture license plate data from passing vehicles, but they naturally capture images of the entire environment, including drivers and passengers. Adding the word "intended" would exempt systems such as these, which are only incidentally swept up into the category.

Sec. 551.054
CATEGORIZATION BASED ON SENSITIVE ATTRIBUTES. An artificial intelligence system shall not be developed or deployed that **is intended to support the capability to** infer~~s~~ or interpret~~s, or is capable of inferring or interpreting,~~ sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.

🖋️
Again, intentionality is really the most practical line to draw; otherwise, more systems will be in violation of this provision than is likely intended. If the intentionality standard is too lax, then as an alternative I'd suggest a "knowing" standard—but probably not a "reasonably should know" standard. This would be slightly stronger than an intentionality standard, even if it creates some space within which a developer may build a system that is too permissive in interpreting sensitive personal attributes.

Combining strict liability with the simple capability of a system to interpret sensitive personal attributes misses the fact that most models are "capable" of interpreting any given set of data. It's more a question of degree of accuracy when comparing a model likely to violate the intent of this section with one that has a benign purpose, such as identifying the colors in a picture.

Sec. 551.055
UTILIZATION OF PERSONAL ATTRIBUTES FOR HARM. An artificial intelligence system shall **only be deployed if reasonably strong protections are implemented that prevent the system from** ~~not utilize~~ **utilizing** characteristics of a person or a specific group of persons based on their race, color, disability, religion, sex, national origin, age, or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.

🖋️
It is difficult, if not impossible, to fully prevent a system from misuse by a sufficiently motivated bad actor. "Jailbreaking" AI models is already a trend among the public as well as researchers, and, much like cybersecurity, it's a constant back-and-forth between security engineers and bad actors. Expecting perfection is a high burden, but placing an obligation on deployers of AI systems to protect against such behavior is appropriate.
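
For a sense of what a deployer-side protection can look like in practice, here is a deliberately simplified sketch of a prompt filter; real deployments layer many such controls, and every pattern below is a hypothetical illustration:

```python
# Minimal sketch of one deployer-side guardrail: screening inputs before
# they reach the model. This is one layer among many in real systems,
# and the patterns are purely illustrative.
import re

BLOCKED_PATTERNS = [
    r"target (ads|offers|pricing) (by|based on) (race|religion|disability)",
    r"exclude .* because of (their )?(race|sex|national origin|age)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused before reaching the model."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

if screen_prompt("Exclude applicants because of their age"):
    print("Refused: request appears to target a protected characteristic.")
```

Filters like this are trivially incomplete—rephrasings slip past them—which is why "reasonably strong protections" is the right standard rather than a guarantee of prevention.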

Sec. 551.056
EMOTION RECOGNITION. ~~Regardless of the intended use or purpose, a~~**A**n artificial intelligence system shall not be developed or deployed that **is intentionally developed to have the capacity to** infer~~s, or is capable of inferring,~~ the emotions of a natural person without the express consent of the natural person.

🖋️
Here is another example of the problem with "capable of" language. Generative AI systems like ChatGPT are not purpose-built for inferring a person's emotions. Rather, they are synthesis tools that predict what output tailored to the given prompt should look like. If I were to keep asking follow-up questions about the emotional state of the person in the image (below), the quality of the responses would gradually degrade; or if I were to contradict the AI, it would backpedal. This is because the AI is not intended to make determinations about the state of mind of the person in the image; it is intended to ingest a user's query (including any shared media) and make a prediction about the content most likely to match that query. It doesn't try to be accurate about any particular fact.

However, under the language of the statute as written, ChatGPT would be in violation.
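
To see how easily a general-purpose model trips the "capable of" wire, here is a minimal sketch; the model name and prompt are my assumptions, and the same behavior falls out of essentially any chat-completion endpoint (an API key is assumed to be set in the environment):

```python
# Minimal sketch of the "capable of inferring emotions" problem: a general
# text predictor will produce an emotional read if asked, even though that
# is not what it was built to do. The model choice is an arbitrary assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any general-purpose chat model behaves similarly
    messages=[{
        "role": "user",
        "content": "My coworker slammed the door and hasn't spoken all day. "
                   "How do they probably feel?",
    }],
)
# The reply will be a plausible-sounding inference of emotion -- arguably a
# violation under "capable of inferring" -- from a tool with no
# emotion-recognition purpose at all.
print(response.choices[0].message.content)
```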

Sec. 551.057
CERTAIN SEXUALLY EXPLICIT VIDEOS, IMAGES, AND CHILD PORNOGRAPHY. An artificial intelligence system shall not be developed or deployed that produces, assists, or aids in producing, or ~~is capable of producing~~ **is trained or developed using,** unlawful visual material in violation of Section 43.26, Penal Code ~~or an unlawful deep fake video or image in violation of~~ **or** Section 21.165, Penal Code.

🖋️
The "capable of" language is again, problematic. However, I think there are better fixes here than simply addressing that issue. Prohibiting the training of AI systems on unlawful visual material should do some extra work to outlaw illicit material without placing good-actor companies at extraneous risk.

Likely, an exception should be made to this section (and the rest of the chapter) for AI systems intended to identify and flag illegal content.

Sec. 551.101
CONSTRUCTION AND APPLICATION. (a) This chapter shall be broadly construed and applied to promote its underlying purposes, which are:

(1) to facilitate and advance the responsible development and use of artificial intelligence systems;

(2) to protect individuals and groups of individuals from known, and unknown but reasonably foreseeable, risks, including algorithmic discrimination, of the intentional or unintentional use of artificial intelligence systems;

(3) to provide transparency regarding those risks in the development, deployment, or use of artificial intelligence systems; and

(4) to provide reasonable notice regarding the use or considered use of artificial intelligence systems by state agencies.

(b) this Act does not apply to the developer of an artificial intelligence system who has released the system under a free and open-source license, provided that:

(1) the system is not deployed as a high-risk artificial intelligence system and the developer has taken reasonable steps to ensure that the system cannot be used as a high-risk artificial intelligence system without substantial modifications; and

(2) the weights and technical architecture of the system are made publicly available**; and**

**(3) if applicable, the data on which the system was trained is identified and released under the same free and open-source license as the system.**


Final Thoughts

That's it for the substantive portions of the bill related to technical issues in the AI space. There are a couple of fundamental changes I would make to ensure the bill's enforceability and precision.

First and foremost, the bill should target transparency in the data used to train AI systems. The data used to create these systems has the greatest impact on a given model's capabilities, and mandating specific disclosures about the data fed to these models would surface potential violations of the chapter without the need to wait for harms to be observed. Additionally, while requiring AI developers, distributors, and deployers to disclose the types of data used for training is an important measure, targeting those who sell or deliver data to AI creators offers a straightforward choke point for enforcement efforts.

For example, requiring public disclosures from data aggregators on the source and nature of the data they provide, and to whom, will provide transparency and better research opportunities for identifying potentially harmful data collection practices (which are, themselves, responsible for the creation of harmful models).
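
As a rough illustration of the kind of provenance record such a disclosure requirement could produce, here is a minimal sketch; every field name is hypothetical rather than drawn from the bill:

```python
# Minimal sketch of a data-aggregator disclosure record. The schema is a
# hypothetical illustration of the provenance metadata the chapter could
# require, not language from the bill.
from dataclasses import dataclass, field

@dataclass
class DataDisclosure:
    aggregator: str                 # who sold or delivered the data
    recipient: str                  # the AI developer it was delivered to
    source_description: str         # where the data originated
    categories: list[str] = field(default_factory=list)  # types of data included
    contains_biometrics: bool = False
    contains_protected_attributes: bool = False

record = DataDisclosure(
    aggregator="Example Data Co.",
    recipient="Example Model Labs",
    source_description="Public court records, 2015-2023",
    categories=["criminal case outcomes", "demographics"],
    contains_protected_attributes=True,
)
print(record)
```

Records like these, published and queryable, would let researchers trace a harmful model back to the data practices that produced it.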

Ultimate responsibility for the behavior of AI systems falls to their creators, but smart legislation will enlist the aid of data collectors to provide some insight into the black box that AI systems represent.


***DRAFT BILL***

By: Capriglione ___.B. No. _____


A BILL TO BE ENTITLED

AN ACT

relating to the regulation and reporting on the use of

artificial intelligence systems by certain business entities and

state agencies; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1. This Act may be cited as the Texas Responsible

AI Governance Act

SECTION 2. Title 11, Business & Commerce Code, is amended

by adding Subtitle D to read as follows:

SUBTITLE D. ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551. ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A. GENERAL PROVISIONS

Sec. 551.001. DEFINITIONS. In this chapter:

(1) "Algorithmic discrimination" means any condition in

which an artificial intelligence system when deployed creates an

unlawful differential treatment or impact that disfavors an

individual or group of individuals on the basis of their actual or

perceived age, color, disability, ethnicity, genetic information,

national origin, race, religion, sex, veteran status, or other

protected classification in violation of the laws of this state or

federal law.

(A) "Algorithmic discrimination" does not include

the offer, license, or use of a high-risk artificial intelligence

system by a developer or deployer for the sole purpose of the

developer's or deployer's self-testing to identify, mitigate, or

prevent discrimination or otherwise ensure compliance with state

and federal law.

(2) “Artificial intelligence system” means a machine-

based system capable of:

(A) perceiving an environment through data

acquisition and processing and interpreting the derived

information to take an action or actions or to imitate intelligent

behavior given a specific goal; and

(B) learning and adapting behavior by analyzing how

the environment is affected by prior actions.

(3) "Council" means the Artificial Intelligence Council

established under Chapter 553.

(4) "Consequential decision" means a decision that has

a material legal, or similarly significant, effect on a consumer’s

access to, cost of, or terms of:

(A) a criminal case assessment, a sentencing or

plea agreement analysis, or a pardon, parole, probation, or release

decision;

(B) education enrollment or an education

opportunity;

(C) employment or an employment opportunity;

(D) a financial service;

(E) an essential government service;

(F) electricity services;

(G) food;

(H) a health-care service;

(I) housing;

(J) insurance;

(K) a legal service;

(L) a transportation service;

(M) surveillance or monitoring systems; or

(N) water.

(m) elections

(5) “Consumer” means an individual who is a resident of

this state.

(6) "Contributing factor" means a factor intended:

(A) to be considered solely or with other criteria;

or

(B) to overrule conclusions from other factors in

making a consequential decision or altering the outcome of a

consequential decision.

(7) "Deploy" means to put into effect or commercialize.

(8) “Deployer” means a person doing business in this

state that deploys a high-risk artificial intelligence system.

(9) "Developer" means a person doing business in this

state that develops a high-risk artificial intelligence system or

substantially or intentionally modifies an artificial intelligence

system.

(10) “Digital service” and “Digital service provider”

have the meanings assigned by Section 509.001, Business & Commerce

Code.

(11) “Distributor” means a person, other than the

Developer, that makes an artificial intelligence system available

in the market.

(12) “Generative artificial intelligence” means

artificial intelligence models that can emulate the structure and

characteristics of input data in order to generate derived

synthetic content. This can include images, videos, audio, text,

and other digital content.

(13) "High-risk artificial intelligence system" means

any artificial intelligence system that, when deployed, makes, or

is a contributing factor in making, a consequential decision. The

term does not include:

(A) an artificial intelligence system if the

artificial intelligence system is intended to detect decision-

making patterns or deviations from prior decision-making patterns

and is not intended to replace or influence a previously completed

human assessment without sufficient human review;

(B) an artificial intelligence system that violates

a provision of Subchapter B; or

(C) the following technologies, unless the

technologies, when deployed, make, or are a contributing factor in

making, a consequential decision:

(i) anti-malware;

(ii) anti-virus;

(iii) calculators;

(iv) cybersecurity;

(v) databases;

(vi) data storage;

(vii) firewall;

(viii) internet domain registration;

(ix) internet website loading;

(x) networking;

(xi) spam- and robocall-filtering;

(xii) spell-checking;

(xiii) spreadsheets;

(xiv) web caching;

(xv) web hosting or any similar technology; or

(xvi) any technology that solely communicates

in natural language for the sole purpose of providing users with

information, making referrals or recommendations, and answering

questions and is subject to an accepted use policy that prohibits

generating content that is discriminatory or harmful, as long as

the system does not violate any provision listed in Subchapter B.

(14) “Personal data” has the meaning assigned to it by

Section 541.001, Business and Commerce Code.

(15) “Risk” means the composite measure of an event’s

probability of occurring and the magnitude or degree of the

consequences of the corresponding event.

(16) “Sensitive personal attribute” means race,

political opinions, religious or philosophical beliefs, or sex.

The term does not include conduct that would be classified as an

offense under Chapter 21, Penal Code.

(17) “Social media platform” has the meaning assigned by

Section 120.001, Business and Commerce Code.

(18) “Intentional and substantial modification" or

“Substantial Modification” means a deliberate change made to an

artificial intelligence system that results in any new reasonably

foreseeable risk of algorithmic discrimination.

Sec. 551.002. APPLICABILITY OF CHAPTER. This chapter applies

only to a person that is not a small business as defined by the

United States Small Business Administration, and:

(1) conducts business, promotes, or advertises in this

state or produces a product or service consumed by residents of

this state; or

(2) engages in the development, distribution, or

deployment of a high-risk artificial intelligence system in this

state.

Sec. 551.003. DEVELOPER DUTIES. (a) A developer of a high-

risk artificial intelligence system shall use reasonable care to

protect consumers from any known or reasonably foreseeable risks

of algorithmic discrimination arising from the intended and

contracted uses of the high-risk artificial intelligence system.

(b) Prior to providing a high-risk artificial intelligence

system to a deployer, a developer shall provide to the deployer,

in writing, a High-Risk Report that consists of:

(1) a statement describing how the high-risk artificial

intelligence system should be used, not be used, and be monitored

by an individual when the high-risk artificial intelligence system

is used to make, or is a substantial factor in making, a

consequential decision;

(2) any known limitations of the system, the metrics

used to measure the system’s performance, and how the system

performs under those metrics in its intended use contexts;

(3) any known or reasonably foreseeable risks of

algorithmic discrimination, unlawful use or disclosure of personal

data, or deceptive manipulation or coercion of human behavior

arising from its intended or likely use;

(4) a description of the type of data used to program or

train the high-risk artificial intelligence system;

(5) the data governance measures used to cover the

training datasets and their collection, the measures used to

examine the suitability of data sources, possible unlawful

discriminatory biases, and appropriate mitigation; and

(6) appropriate principles, processes, and personnel for

the deployers’ risk management policy.

(c) If a high-risk artificial intelligence system is

intentionally or substantially modified after a developer provides

it to a deployer, a developer shall provide a new High-Risk Report

in writing within 30 days of the modification.

(d) If a developer of a high-risk artificial intelligence

system considers or has reason to consider that a high-risk

artificial intelligence system that it has placed in the market or

put into service is not in compliance with any requirement in this

chapter, it shall immediately take the necessary corrective

actions to bring that system into compliance, to withdraw it, to

disable it, or to recall it, as appropriate. They shall inform the

distributors of the high-risk artificial intelligence system

concerned and, where applicable, the deployers.

(e) Where the high-risk artificial intelligence system

presents risks of algorithmic discrimination, unlawful use or

disclosure of personal data, or deceptive manipulation or coercion

of human behavior and the developer becomes aware or should

reasonably be aware of that risk, it shall immediately investigate

the causes, in collaboration with the deployer, where applicable,

and inform the attorney general of the nature of the non-compliance

and of any relevant corrective action taken.

(f) Developers shall keep detailed records of any generative

artificial intelligence training dataset used to develop a

generative artificial intelligence system or service. Record

keeping shall follow the suggested actions under GV-1.2-007 of the

current version of the Artificial Intelligence Risk Management

Framework: Generative Artificial Intelligence Profile by the

National Institute of Standards and Technology.

Sec. 551.004. DISTRIBUTOR DUTIES. A distributor of a high-

risk artificial intelligence system shall use reasonable care to

protect consumers from any known or reasonably foreseeable risks

of algorithmic discrimination. If a distributor of a high-risk

artificial intelligence system considers or has reason to consider

that a high-risk artificial intelligence system is not in

compliance with any requirement in this chapter, it shall

immediately withdraw, disable, recall as appropriate, the high-

risk artificial intelligence system from the market until the

system has been brought into compliance with the requirements of

this chapter. The distributor shall inform the developers of the

high-risk artificial intelligence system concerned and, where

applicable, the deployers.

Sec. 551.005. DEPLOYER DUTIES. A deployer of a high-risk

artificial intelligence system shall use reasonable care to

protect consumers from any known or reasonably foreseeable risks

of algorithmic discrimination. If a deployer of a high-risk

artificial intelligence system considers or has reason to consider

that a high-risk artificial intelligence system is not in

compliance with any requirement in this chapter, it shall

immediately suspend the use of the high-risk artificial

intelligence system from the market until the system has been

brought into compliance with the requirements of this chapter. The

deployer shall inform the developers of the high-risk artificial

intelligence system concerned and, where applicable, the

distributors. Deployers of a high-risk artificial intelligence

system shall assign human oversight, by persons who have the

necessary competence, training and authority, as well as the

necessary support, to oversee consequential decisions made by the

use of a high-risk artificial intelligence system.

Sec. 551.006. IMPACT ASSESSMENTS. (a) A deployer that deploys

a high-risk artificial intelligence system shall complete an

impact assessment for the high-risk artificial intelligence system

semiannually and within ninety days after any intentional and

substantial modification to the high-risk artificial intelligence

system is made available. An impact assessment must include, at a

minimum, and to the extent reasonably known by or available to the

deployer:

(1) a statement by the deployer disclosing the purpose,

intended use cases, and deployment context of, and benefits

afforded by, the high-risk artificial intelligence system;

(2) an analysis of whether the deployment of the high-

risk artificial intelligence system poses any known or reasonably

foreseeable risks of algorithmic discrimination and, if so, the

nature of the algorithmic discrimination and the steps that have

been taken to mitigate the risks;

(3) a description of the categories of data the high-

risk artificial intelligence system processes as inputs and the

outputs the high-risk artificial intelligence system produces;

(4) if the deployer used data to customize the high-risk

artificial intelligence system, an overview of the categories of

data the deployer used to customize the high-risk artificial

intelligence system;

(5) any metrics used to evaluate the performance and

known limitations of the high-risk artificial intelligence system;

(6) a description of any transparency measures taken

concerning the high-risk artificial intelligence system, including

any measures taken to disclose to a consumer that the high-risk

artificial intelligence system is in use when the high-risk

artificial intelligence system is in use;

(7) a description of the post-deployment monitoring and

user safeguards provided concerning the high-risk artificial

intelligence system, including the oversight, use, and learning

process established by the deployer to address issues arising from

the deployment of the high-risk artificial intelligence system;

and

(8) a description of cybersecurity measures and threat

modeling conducted on the system.

(b) Following an intentional and substantial modification to

a high-risk artificial intelligence system, a deployer must

disclose the extent to which the high-risk artificial intelligence

system was used in a manner that was consistent with, or varied

from, the developer's intended uses of the high-risk artificial

intelligence system.

(c) A single impact assessment may address a comparable set

of high-risk artificial intelligence systems deployed by a

deployer.

(d) A deployer shall maintain the most recently completed

impact assessment for a high-risk artificial intelligence system,

all records concerning each impact assessment, and all prior impact

assessments, if any, for at least three years following the final

deployment of the high-risk artificial intelligence system.

(e) At least annually, a deployer must review the deployment

of each high-risk artificial intelligence system deployed by the

deployer to ensure that the high-risk artificial intelligence

system is not causing algorithmic discrimination.

(f) A deployer may redact or omit any trade secrets as defined

by Section 541.001(33), Business & Commerce Code or information

protected from disclosure by state or federal law.

(g) Except as provided in subsection (e) of this section, a

developer that makes a high-risk artificial intelligence system

available to a deployer shall make available to the deployer the

documentation and information necessary for a deployer to complete

an impact assessment pursuant to this section.

(h) A developer that also serves as a deployer for a high-risk

artificial intelligence system is not required to generate and

store an impact assessment unless the high-risk artificial

intelligence system is provided to an unaffiliated deployer.

Sec. 551.007. DISCLOSURE OF A HIGH-RISK ARTIFICIAL

INTELLIGENCE SYSTEM TO CONSUMERS. (a) A deployer or developer that

deploys, offers, sells, leases, licenses, gives, or otherwise

makes available a high-risk artificial intelligence system that is

intended to interact with consumers shall disclose to each

consumer, before or at the time of interaction:

(1) that the consumer is interacting with an artificial

intelligence system;

(2) the purpose of the system;

(3) that the system may or will make a consequential

decision affecting the consumer;

(4) the nature of any consequential decision in which

the system is or may be a contributing factor;

(5) the factors to be used in making any consequential

decisions;

(6) contact information of the deployer;

(7) a description of:

(A) any human components of the system;

(B) any automated components of the system; and

(C) how human and automated components are used to

inform a consequential decision; and

(8) a declaration of the consumer’s rights under Section

551.107.

(b) Disclosure is required under subsection (a) of this

section regardless of whether it would be obvious to a reasonable

person that the person is interacting with an artificial

intelligence system.

(c) All disclosures under subsection (a) shall be conspicuous

and written in plain language.

Sec. 551.008. RISK IDENTIFICATION AND MANAGEMENT POLICY. (a)

A developer or deployer of a high-risk artificial intelligence

system shall, prior to deployment, identify potential risks of

algorithmic discrimination and implement a risk management policy

to govern the development or deployment of the high-risk artificial

intelligence system. The risk management policy shall:

(1) specify and incorporate the principles and processes

that the developer or deployer uses to identify, document, and

mitigate, in the development or deployment of a high-risk

artificial intelligence system:

(A) known or reasonably foreseeable risks of

algorithmic discrimination;

(B) prohibited uses and unacceptable risks under

Subchapter B; and

(C) potential systemic risks of other unintended or

harmful impacts; and

(2) be reasonable in size, scope, and breadth,

considering:

(A) guidance and standards set forth in the current

“Artificial Intelligence Risk Management Framework” published by

the National Institute of Standards and Technology;

(B) any existing risk management guidance,

standards or framework applicable to artificial intelligence

systems designated by the Banking Commissioner or Insurance

Commissioner, if the developer or deployer is regulated by the

Department of Banking or Department of Insurance;

(C) the size and complexity of the developer or

deployer;

(D) the nature, scope, and intended use of the high-

risk artificial intelligence systems developed or deployed; and

(E) the sensitivity and volume of data processed in

connection with the high-risk artificial intelligence systems.

(b) A risk management policy implemented pursuant to this

section may apply to more than one high-risk artificial

intelligence system developed or deployed, so long as the developer

or deployer complies with all of the forgoing requirements and

considerations in adopting and implementing the risk management

policy with respect to each high-risk artificial intelligence

system covered by the policy.

Sec. 551.009. RELATIONSHIPS BETWEEN ARTIFICIAL INTELLIGENCE

PARTIES. Any distributor, deployer, or other third-party shall be

considered to be a developer of a high-risk artificial intelligence

system for the purposes of this chapter and shall be subject to

the obligations and duties of a developer under this chapter in

any of the following circumstances:

(1) they put their name or trademark on a high-risk

artificial intelligence system already placed in the market or put

into service, without prejudice to contractual arrangements

stipulating that the obligations are otherwise allocated;

(2) they modify a high-risk artificial intelligence

system that has already been placed in the market or has already

been put into service in such a way that it remains a high-risk

artificial intelligence system under this chapter;

(3) they modify the intended purpose of an artificial

intelligence system, including a general-purpose artificial

intelligence system, which has not been classified as high-risk

and has already been placed in the market or put into service in

such a way that the artificial intelligence system concerned

becomes a high-risk artificial intelligence system in accordance

with this chapter of a high-risk artificial intelligence system.

Sec. 551.010. DIGITAL SERVICE PROVIDER AND SOCIAL MEDIA

PLATFORM DUTIES REGARDING ARTIFICIAL INTELLIGENCE SYSTEMS. A

digital service provider as defined by Section 509.001(2),

Business & Commerce Code or a social media platform as defined by

Section 120.001(1), Business & Commerce Code, shall make a

commercially reasonable effort to prevent advertisers on the

service or platform from deploying a high-risk artificial

intelligence system on the service or platform that could expose

the users of the service or platform to algorithmic discrimination.

Sec. 551.011. REPORTING REQUIREMENTS. (a) A deployer must

notify, in writing, the council, the attorney general, or the

director of the appropriate state agency that regulates the

deployer’s industry, and affected consumers as soon as practicable

and not later than the 10th day after the date on which the deployer

discovers or is made aware that a deployed high-risk artificial

intelligence system has caused or is likely to result in:

(1) algorithmic discrimination of an individual or

group of individuals; or

(2) an inappropriate or discriminatory consequential

decision.

(b) If a developer discovers or is made aware that a deployed

high-risk artificial intelligence system is using inputs or

providing outputs that constitute a violation of Subchapter B, the

deployer must cease operation of the offending system as soon as

technically feasible and provide notice to the council and the

attorney general as soon as practicable and not later than the

10th day after the date on which the developer discovers or is

made aware of the unacceptable risk.

Sec. 551.012. SANDBOX PROGRAM EXCEPTION. (a) Excluding

violations of Subchapter B, this chapter does not apply to the

development of an artificial intelligence system that is used

exclusively for research, training, testing, or other pre-

deployment activities performed by active participants of the

sandbox program in compliance with Chapter 552.

SUBCHAPTER B. PROHIBITED USES AND UNACCEPTABLE RISK

Sec. 551.051. MANIPULATION OF HUMAN BEHAVIOR TO CIRCUMVENT

INFORMED DECISION-MAKING. An artificial intelligence system shall

not be developed or deployed that uses subliminal techniques beyond

a person’s consciousness, or purposefully manipulative or

deceptive techniques, with the objective or the effect of

materially distorting the behavior of a person or a group of

persons by appreciably impairing their ability to make an informed

decision, thereby causing a person to make a decision that the

person would not have otherwise made, in a manner that causes or

is likely to cause significant harm to that person or another

person or group of persons.

Sec. 551.052. SOCIAL SCORING. An artificial intelligence

system shall not be developed or deployed for the evaluation or

classification of natural persons or groups of natural persons

based on their social behavior or known, inferred, or predicted

personal characteristics with the intent to determine a social

score or similar categorical estimation or valuation of a person

or groups of persons.

Sec. 551.053. CAPTURE OF BIOMETRIC IDENTIFIERS USING

ARTIFICIAL INTELLIGENCE. An artificial intelligence system shall

not be developed or deployed with the purpose or capability of

capturing, through the targeted or untargeted gathering of images

or other media from the internet or any other publicly available

source, a biometric identifier of an individual. An individual is

not considered to be informed nor to have provided consent pursuant

to Section 503.001(b), Business and Commerce Code, based solely

upon the existence on the internet, or other publicly available

source, of an image or other media containing one or more biometric

identifiers.

Sec. 551.054. CATEGORIZATION BASED ON SENSITIVE ATTRIBUTES.

An artificial intelligence system shall not be developed or

deployed that infers or interprets, or is capable of inferring or

interpreting, sensitive personal attributes of a person or group

of persons using biometric identifiers, except for the labeling or

filtering of lawfully acquired biometric identifier data.

Sec. 551.055. UTILIZATION OF PERSONAL ATTRIBUTES FOR HARM. An

artificial intelligence system shall not utilize characteristics

of a person or a specific group of persons based on their race,

color, disability, religion, sex, national origin, age, or a

specific social or economic situation, with the objective, or the

effect, of materially distorting the behavior of that person or a

person belonging to that group in a manner that causes or is

reasonably likely to cause that person or another person

significant harm.

Sec. 551.056. EMOTION RECOGNITION. Regardless of the intended

use or purpose, an artificial intelligence system shall not be

developed or deployed that infers, or is capable of inferring, the

emotions of a natural person without the express consent of the

natural person.

Sec. 551.057. CERTAIN SEXUALLY EXPLICIT VIDEOS, IMAGES, AND

CHILD PORNOGRAPHY. An artificial intelligence system shall not be

developed or deployed that produces, assists, or aids in producing,

or is capable of producing unlawful visual material in violation

of Section 43.26, Penal Code or an unlawful deep fake video or

image in violation of Section 21.165, Penal Code.

SUBCHAPTER C. ENFORCEMENT AND CONSUMER PROTECTIONS

Sec. 551.101. CONSTRUCTION AND APPLICATION. (a) This chapter

shall be broadly construed and applied to promote its underlying

purposes, which are:

(1) to facilitate and advance the responsible

development and use of artificial intelligence systems;

(2) to protect individuals and groups of individuals

from known, and unknown but reasonably foreseeable, risks,

including algorithmic discrimination, of the intentional or

unintentional use of artificial intelligence systems;

(3) to provide transparency regarding those risks in the

development, deployment, or use of artificial intelligence

systems; and

(4) to provide reasonable notice regarding the use or

considered use of artificial intelligence systems by state

agencies.

(b) this Act does not apply to the developer of an artificial

intelligence system who has released the system under a free and

open-source license, provided that:

(1) the system is not deployed as a high-risk artificial

intelligence system and the developer has taken reasonable steps

to ensure that the system cannot be used as a high-risk artificial

intelligence system without substantial modifications; and

(2) the weights and technical architecture of the system

are made publicly available.

Sec. 551.102. ENFORCEMENT AUTHORITY. The attorney general has

authority to enforce this chapter. Excluding violations of

Subchapter B, researching, training, testing, or the conducting of

other pre-deployment activities by active participants of the

sandbox program, in compliance with Chapter 552, does not subject

a developer or deployer to penalties or actions.

Sec. 551.103. INTERNET WEBSITE AND COMPLAINT MECHANISM. The

attorney general shall post on the attorney general's Internet

website:

(1) information relating to:

(A) the responsibilities of a developer,

distributor, and deployer under Subchapter A; and

(B) an online mechanism through which a consumer

may submit a complaint under this chapter to the attorney general.

Sec. 551.104. INVESTIGATIVE AUTHORITY. (a) If the attorney

general has reasonable cause to believe that a person has engaged

in or is engaging in a violation of this chapter, the attorney

general may issue a civil investigative demand.

(b) The attorney general may request, pursuant to a civil

investigative demand issued under Subsection (a), that a developer

or deployer of a high-risk artificial intelligence system disclose

their risk management policy required under Subchapter A. The

attorney general may evaluate the risk management policy for

compliance with the requirements set forth in Subchapter A.

(c) The attorney general may not institute an action for a

civil penalty against a developer or deployer for artificial

intelligence systems that remain isolated from customer

interaction in a pre-deployment environment.

Sec. 551.105. NOTICE OF VIOLATION OF CHAPTER; OPPORTUNITY TO

CURE. Before bringing an action under Section 551.044, the attorney

general shall notify a developer, distributor, or deployer in

writing, not later than the 30th day before bringing the action,

identifying the specific provisions of this chapter the attorney

general alleges have been or are being violated. The attorney

general may not bring an action against the developer or deployer

if:

(1) within the 30-day period, the developer or deployer

cures the identified violation; and

(2) the developer or deployer provides the attorney

general a written statement that the developer or deployer:

(A) cured the alleged violation;

(B) notified the consumer and the council that the

developer or deployer’s violation was addressed, if the consumer's

contact information has been made available to the developer or

deployer and the attorney general;

(C) provided supportive documentation to show how

the violation was cured; and

(D) made changes to internal policies, if

necessary, to ensure that no such further violations will occur.

Sec. 551.106. CIVIL PENALTY; INJUNCTION. (a) The attorney

general may bring an action in the name of this state to restrain

or enjoin the person from violating this chapter and seek

injunctive relief.

(b) The attorney general may recover reasonable attorney's

fees and other reasonable expenses incurred in investigating and

bringing an action under this section.

(c) The attorney general may assign an administrative fine to

a developer or deployer who fails to timely cure a violation or

who breaches a written statement provided by the attorney general,

other than those for a prohibited use, of not less than $5,000 and

not more than $10,000 per uncured violation.

(d) The attorney general may assign an administrative fine to

a developer or deployer who fails to timely cure a violation of a

prohibited use, or whose violation is determined to be uncurable,

of not less than $40,000 and not more than $100,000 per violation.

(e) A developer or deployer who continues to operate or do

business in Texas without complying with the provisions of this

chapter shall be assessed an administrative fine of not less than

$1,000 and not more than $20,000 per day.

(f) There is a rebuttable presumption that a developer,

distributor, or deployer used reasonable care as required under

this chapter if the developer, distributor, or deployer complied

with their duties under Subchapter A.

Sec. 551.107. CONSUMER RIGHTS & REMEDIES. (a) A consumer may

bring an action against a developer or deployer that violates

Subchapter B with respect to the consumer.

(b) If the consumer proves that the developer or deployer

violated this chapter with respect to the consumer, the consumer

is entitled to recover:

(1) declaratory relief under Chapter 37, Civil Practice

and Remedies Code, including costs and reasonable and necessary

attorney’s fees under Section 37.009; and

(2) injunctive relief.

(c) If a developer or deployer fails to promptly comply with

a court order in an action brought under this section, the court

shall hold the developer or deployer in contempt and shall use all

lawful measures to secure immediate compliance with the order,

including daily penalties sufficient to secure immediate

compliance.

(d) A consumer may bring an action under this section

regardless of whether another court has enjoined the attorney

general from enforcing this chapter or declared any provision of

this chapter unconstitutional unless that court decision is

binding on the court in which the action is brought.

(e) Nonmutual issue preclusion and nonmutual claim preclusion

are not defenses to an action brought under this section.

(f) A consumer may appeal a consequential decision made by a

high-risk artificial intelligence system regardless of whether the

decision was made with human oversight or not. Any affected person

subject to a decision which is taken by the deployer on the basis

of the output from a high-risk artificial intelligence system which

produces legal effects or similarly significantly affects that

person in a way that they consider to have an adverse impact on

their health, safety or fundamental rights shall have the right to

obtain from the deployer clear and meaningful explanations of the

role of the high-risk artificial intelligence system in the

decision-making procedure and the main elements of the decision

taken.

SUBCHAPTER D. CONSTRUCTION OF CHAPTER; LOCAL PREEMPTION

Sec. 551.151. CONSTRUCTION OF CHAPTER. This chapter may not

be construed as imposing a requirement on a developer, a deployer,

or other person that adversely affects the rights or freedoms of

any person, including the right of free speech.

Sec. 551.152. LOCAL PREEMPTION. This chapter supersedes and

preempts any ordinance, resolution, rule, or other regulation

adopted by a political subdivision regarding the use of high-risk

artificial intelligence systems.

CHAPTER 552. ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX

PROGRAM

SUBCHAPTER A. GENERAL PROVISIONS

Sec. 552.001. DEFINITIONS. In this chapter:

(1) “Applicable agency” means a state agency responsible

for regulating a specific sector impacted by an artificial

intelligence system.

(2) “Consumer” means a person who engages in

transactions involving an artificial intelligence system or is

directly affected by the use of such a system.

(3) “Council” means the Artificial Intelligence

Council established by Chapter 553.

(4) “Department” means the Texas Department of

Information Resources.

(5) “Program participant” means a person or business

entity approved to participate in the sandbox program.

(6) “Sandbox program” means the regulatory framework

established under this chapter that allows temporary testing of

artificial intelligence systems in a controlled, limited manner

without full regulatory compliance.

SUBCHAPTER B. SANDBOX PROGRAM FRAMEWORK

Sec. 552.051. ESTABLISHMENT OF SANDBOX PROGRAM. (a) The

department, in coordination with the council, shall administer the

Artificial Intelligence Regulatory Sandbox Program to facilitate

the development, testing, and deployment of innovative artificial

intelligence systems in Texas.

(b) The sandbox program is designed to:

(1) promote the safe and innovative use of artificial

intelligence across various sectors including healthcare, finance,

education, and public services;

(2) encourage the responsible deployment of artificial

intelligence systems while balancing the need for consumer

protection, privacy, and public safety; and

(3) provide clear guidelines for artificial intelligence

developers to test systems while temporarily exempt from certain

regulatory requirements.

Sec. 552.052. APPLICATION PROCESS. (a) A person or business

entity seeking to participate in the sandbox program must submit

an application to the council.

(b) The application must include:

(1) a detailed description of the artificial

intelligence system and its intended use;

(2) a risk assessment that addresses potential impacts

on consumers, privacy, and public safety;

(3) a plan for mitigating any adverse consequences

during the testing phase; and

(4) proof of compliance with federal artificial

intelligence laws and regulations, where applicable.

Sec. 552.053. DURATION AND SCOPE OF PARTICIPATION. A

participant may test an artificial intelligence system under the

sandbox program for a period of up to 36 months, unless extended

by the department for good cause.

SUBCHAPTER C. OVERSIGHT AND COMPLIANCE

Sec. 552.101. AGENCY COORDINATION. (a) The department shall

coordinate with all relevant state regulatory agencies to oversee

the operations of the sandbox participants.

(b) A relevant agency may recommend to the department that a

participant’s sandbox privileges be revoked if the artificial

intelligence system:

(1) poses undue risk to public safety or welfare;

(2) violates any federal or state laws that the sandbox

program cannot override.

Sec. 552.102. REPORTING REQUIREMENTS. (a) Each sandbox

participant must submit quarterly reports to the department, which

shall include:

(1) system performance metrics;

(2) updates on how the system mitigates any risks

associated with its operation; and

(3) feedback from consumers and affected stakeholders

that are using a product that has been deployed from this section.

(b) The department must submit an annual report to the

legislature detailing:

(1) the number of participants in the sandbox program;

(2) the overall performance and impact of artificial

intelligence systems tested within the program; and

(3) recommendations for future legislative or regulatory

reforms.

CHAPTER 553. TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A. CREATION AND ORGANIZATION OF COUNCIL

Sec. 553.001. CREATION OF COUNCIL. (a) The Artificial Intelligence Council is administratively attached to the office of the governor, and the office of the governor shall provide administrative support to the council as provided by this section. The equal employment opportunity officer and the internal auditor of the office of the governor shall serve the same functions for the council as they serve for the office of the governor.

(b) The office of the governor and the council shall enter into a memorandum of understanding detailing:

(1) the administrative support the council requires from the office of the governor to fulfill the purposes of this chapter;

(2) the reimbursement of administrative expenses to the office of the governor; and

(3) any other provisions available by law to ensure the efficient operation of the council as attached to the office of the governor.

(c) The purpose of the council is to:

(1) Issue advisory opinions on the ethical and legal use of AI;

(2) Offer guidance and recommendations to state agencies; and

(3) Ensure that artificial intelligence development in the state is safe, ethical, and in the public interest.

Sec. 553.002. COUNCIL MEMBERSHIP. (a) The council is composed of eight members as follows:

(1) four members appointed by the governor;

(2) two members appointed by the lieutenant governor; and

(3) two members appointed by the speaker of the house of representatives.

(b) Members serve staggered four-year terms, with the terms of four members expiring every two years.

(c) The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d) The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council's functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 553.003. QUALIFICATIONS. (a) Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1) artificial intelligence technologies;

(2) data privacy and security;

(3) ethics in technology or law;

(4) public policy and regulation; or

(5) risk management or safety related to artificial intelligence systems.

(b) Members must not hold an office of profit under the state or federal government at the time of appointment.

Sec. 553.004. STAFF AND ADMINISTRATION. The council may employ an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B. POWERS AND DUTIES OF THE COUNCIL

Sec. 553.101. ISSUANCE OF ADVISORY OPINIONS. (a) A state agency may request a written advisory opinion from the council regarding the use of artificial intelligence systems in the state.

(b) The council may issue advisory opinions on:

(1) the compliance of artificial intelligence systems with Texas law;

(2) the ethical implications of artificial intelligence deployments in the state;

(3) data privacy and security concerns related to artificial intelligence systems; or

(4) potential liability or legal risks associated with the use of AI.

Sec. 553.102. RULEMAKING AUTHORITY. (a) The council may adopt rules necessary to administer its duties under this chapter, including:

(1) procedures for requesting advisory opinions;

(2) standards for ethical artificial intelligence development and deployment; and

(3) guidelines for evaluating the safety, privacy, and fairness of artificial intelligence systems.

(b) The council’s rules shall align with state laws on artificial intelligence, technology, data security, and consumer protection.

Sec. 553.103. TRAINING AND EDUCATIONAL OUTREACH. The council shall conduct training programs for state agencies and local governments on the ethical use of artificial intelligence systems.

SECTION 3. Sections 541.051(b), 541.101(a), 541.102(a), and 541.104(a), Business & Commerce Code, are amended to read as follows:

Sec. 541.051. CONSUMER'S PERSONAL DATA RIGHTS; REQUEST TO EXERCISE RIGHTS. (a) A consumer is entitled to exercise the consumer rights authorized by this section at any time by submitting a request to a controller specifying the consumer rights the consumer wishes to exercise. With respect to the processing of personal data belonging to a known child, a parent or legal guardian of the child may exercise the consumer rights on behalf of the child.

(b) A controller shall comply with an authenticated consumer request to exercise the right to:

(1) confirm whether a controller is processing the consumer's personal data and to access the personal data;

(2) correct inaccuracies in the consumer's personal data, taking into account the nature of the personal data and the purposes of the processing of the consumer's personal data;

(3) delete personal data provided by or obtained about the consumer;

(4) if the data is available in a digital format, obtain a copy of the consumer's personal data that the consumer previously provided to the controller in a portable and, to the extent technically feasible, readily usable format that allows the consumer to transmit the data to another controller without hindrance; [or]

(5) know if the consumer’s personal data is or will be used in any artificial intelligence system and for what purposes; or

([5]6) opt out of the processing of the personal data for purposes of:

(A) targeted advertising;

(B) the sale of personal data; [or]

(C) the sale or sharing of personal data for use in artificial intelligence systems prior to being collected; or

([C]D) profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer.
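🖋️
From a controller's side, the amended Sec. 541.051(b) is effectively a dispatch table: six rights, each triggered only by an authenticated request. A minimal sketch of the new pieces, the (b)(5) AI-use disclosure in particular; the handler logic, right names, and in-memory store are my own illustration, not anything the draft prescribes.

```python
# Illustrative dispatcher for amended Sec. 541.051(b) requests.
# The records store and right names are assumptions, not statutory terms.
RECORDS = {
    "c-123": {
        "data": {"email": "consumer@example.com"},
        "ai_use": {"used_in_ai": True, "purposes": ["credit scoring"]},  # (b)(5)
    }
}

def consumer_request(consumer_id: str, right: str, authenticated: bool = False):
    if not authenticated:
        # (b) attaches the controller's duty only to *authenticated* requests.
        raise PermissionError("request must be authenticated")
    record = RECORDS[consumer_id]
    if right == "access":   # (b)(1) confirm processing and access the data
        return record["data"]
    if right == "delete":   # (b)(3) delete provided or obtained personal data
        RECORDS.pop(consumer_id)
        return "deleted"
    if right == "ai_use":   # (b)(5), new: is/will the data feed an AI system, and why
        return record["ai_use"]
    # (b)(2) correction, (b)(4) portability, and the (b)(6) opt-outs are omitted
    # here for brevity.
    raise ValueError(f"unsupported right: {right}")
```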

Sec. 541.101. CONTROLLER DUTIES; TRANSPARENCY. (a) A controller:

(1) shall limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which that personal data is processed, as disclosed to the consumer; [and]

(2) for purposes of protecting the confidentiality, integrity, and accessibility of personal data, shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices that are appropriate to the volume and nature of the personal data at issue[.]; and

(3) for purposes of protecting against the unauthorized access, disclosure, alteration, or destruction of data collected, stored, and processed by artificial intelligence systems, shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices that are appropriate to the volume and nature of the data collected, stored, and processed by artificial intelligence systems.

Sec. 541.102. PRIVACY NOTICE. (a) A controller shall provide consumers with a reasonably accessible and clear privacy notice that includes:

(1) the categories of personal data processed by the controller, including, if applicable, any sensitive data processed by the controller;

(2) the purpose for processing personal data;

(3) how consumers may exercise their consumer rights under Subchapter B, including the process by which a consumer may appeal a controller’s decision with regard to the consumer’s request;

(4) if applicable, the categories of personal data that the controller shares with third parties;

(5) if applicable, the categories of third parties with whom the controller shares personal data; [and]

(6) if applicable, an acknowledgment of the collection, use, and sharing of personal data for artificial intelligence purposes; and

([6]7) a description of the methods required under Section 541.055 through which consumers can submit requests to exercise their consumer rights under this chapter.
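🖋️
In practice, the amended Sec. 541.102(a) notice is a fixed set of disclosures, with new item (6) bolting on an AI acknowledgment. A sketch of the shape such a notice might take; the keys are mine and the values are placeholders, since the draft specifies content, not format.

```python
# Illustrative Sec. 541.102(a) privacy notice; keys and values are placeholders.
PRIVACY_NOTICE = {
    "categories_processed": ["contact", "financial"],            # (1), incl. sensitive data
    "processing_purposes": ["billing", "fraud prevention"],      # (2)
    "rights_and_appeals": "https://example.com/privacy/rights",  # (3), Subchapter B
    "shared_data_categories": ["financial"],                     # (4), if applicable
    "third_party_categories": ["payment processors"],            # (5), if applicable
    "ai_acknowledgment": {                                       # (6), new in this draft
        "collected_for_ai": True,
        "used_for_ai": True,
        "shared_for_ai": False,
    },
    "request_methods": ["web form", "email"],                    # ([6]7), per Section 541.055
}
```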

Sec. 541.104. DUTIES OF PROCESSOR. (a) A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller's duties or requirements under this chapter, including:

(1) assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2) assisting the controller with regard to complying with the [requirement]requirements relating to the security of processing personal data, and if applicable, the data collected, stored, and processed by artificial intelligence systems and to the notification of a breach of security of the processor's system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3) providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4. Subtitle E, Title 4, Labor Code, is amended by adding Chapter 319 to read as follows:

CHAPTER 319. TEXAS ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT GRANT PROGRAM

SUBCHAPTER A. GENERAL PROVISIONS

Sec. 319.001. DEFINITIONS. In this chapter:

(1) “Artificial intelligence industry” means businesses, research organizations, and educational institutions engaged in the development, deployment, or use of artificial intelligence technologies in Texas.

(2) “Commission” means the Texas Workforce Commission.

(3) “Eligible entity” means Texas-based businesses in the artificial intelligence industry, public school districts, community colleges, public technical institutes, and workforce development organizations.

(4) “Program” means the Texas Artificial Intelligence Workforce Development Grant Program established under this chapter.

SUBCHAPTER B. ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT GRANT PROGRAM

Sec. 319.051. ESTABLISHMENT OF GRANT PROGRAM. (a) The commission shall establish the Texas Artificial Intelligence Workforce Development Grant Program to:

(1) support and assist Texas-based artificial intelligence companies in developing a skilled workforce;

(2) provide grants to local community colleges and public high schools to implement or expand career and technical education programs focused on artificial intelligence readiness and skill development; and

(3) offer opportunities to retrain and reskill workers through partnerships with the artificial intelligence industry and workforce development programs.

(b) The program is intended to:

(1) prepare Texas workers and students for employment in the rapidly growing artificial intelligence industry;

(2) ensure that Texas maintains a competitive edge in artificial intelligence innovation and workforce development; and

(3) address workforce gaps in artificial intelligence-related fields, including data science, machine learning, robotics, and automation.

Sec. 319.052. ELIGIBILITY FOR GRANTS. (a) The following entities are eligible to apply for grants under this program:

(1) Texas-based businesses engaged in the development or deployment of artificial intelligence technologies;

(2) public school districts and charter schools offering or seeking to offer career and technical education programs in artificial intelligence-related fields;

(3) public community colleges and technical institutes that develop artificial intelligence-related curricula or training programs; and

(4) workforce development organizations in partnership with artificial intelligence companies to reskill and retrain workers in artificial intelligence competencies.

(b) To be eligible, the entity must:

(1) submit an application to the commission in the form and manner prescribed by the commission; and

(2) demonstrate the capacity to develop and implement training, educational, or workforce development programs that align with the needs of the artificial intelligence industry in Texas.

Sec. 319.053. USE OF GRANTS. (a) Grants awarded under the program may be used for:

(1) developing or expanding workforce training programs for artificial intelligence-related skills, including but not limited to machine learning, data analysis, software development, and robotics;

(2) creating or enhancing career and technical education programs in artificial intelligence for high school students, with a focus on preparing them for careers in artificial intelligence or related fields;

(3) providing financial support for instructors, equipment, and technology necessary for artificial intelligence-related workforce training;

(4) partnering with local businesses to develop internship programs, on-the-job training opportunities, and apprenticeships in the artificial intelligence industry;

(5) funding scholarships or stipends for students and workers participating in artificial intelligence training programs, particularly for individuals from underserved or underrepresented communities; or

(6) reskilling and retraining workers displaced by technological changes or job automation, with an emphasis on artificial intelligence-related job roles.

(b) The commission shall prioritize funding for:

(1) initiatives that partner with rural and underserved communities to promote artificial intelligence education and career pathways; and

(2) proposals that include partnerships between the artificial intelligence industry, educational institutions, and workforce development organizations.

SECTION 5. Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011. CRITERIA FOR REVIEW. The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1) the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A) an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B) the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A) an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B) the extent to which those activities are needed;

(4) an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5) whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6) the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7) the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency's administrative hearings process;

(8) an assessment of the agency's rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9) the extent to which the agency has complied with:

(A) federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B) state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10) the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11) the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12) the effect of federal intervention or loss of federal funds if the agency is abolished;

(13) the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14) an assessment of the agency's cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15) an assessment, using information available from the Department of Information Resources, the Attorney General, or any other appropriate state agency, of the agency’s use of artificial intelligence systems, including high-risk artificial intelligence systems, in its operations and its oversight of the use of artificial intelligence systems by entities or persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives.

SECTION 6. Section 2054.068(b), Government Code, is amended to read as follows:

(b) The department shall collect from each state agency information on the status and condition of the agency's information technology infrastructure, including information regarding:

(1) the agency's information security program;

(2) an inventory of the agency's servers, mainframes, cloud services, and other information technology equipment;

(3) identification of vendors that operate and manage the agency's information technology infrastructure; [and]

(4) any additional related information requested by the department; and

(5) an evaluation of the use, or considered use, of artificial intelligence systems and high-risk artificial intelligence systems by each state agency.

SECTION 7. Section 2054.0965(b), Government Code, is amended to read as follows:

Sec. 2054.0965. INFORMATION RESOURCES DEPLOYMENT REVIEW. (b) Except as otherwise modified by rules adopted by the department, the review must include:

(1) an inventory of the agency's major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2) an inventory of the agency's major databases, artificial intelligence systems, and applications;

SECTION 8. Not later than September 1, 2025, the attorney general shall post on the attorney general's Internet website the information and online mechanism required by Section 551.041, Business & Commerce Code, as added by this Act.

SECTION 9. This Act takes effect September 1, 2025.