In June 2025, Texas Governor Greg Abbott signed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA, or the Act), into law. The Act will go into effect on January 1, 2026.
With this law, Texas joins California, Colorado and Utah in implementing AI-specific state laws. While TRAIGA as originally drafted provided a comprehensive AI framework, the final version was significantly pared down. With narrow substantive provisions, TRAIGA focuses on harms caused by AI, and the Act regulates – or completely bans – certain uses of these systems.
Businesses that develop or deploy AI systems in Texas should consider measures for compliance with the Act. This may include reviewing system uses and updating the system’s documentation with details the Texas Attorney General (AG) may need to investigate a complaint.
Scope & Applicability
TRAIGA applies broadly to private sector companies if they provide AI-generated content or services to Texas residents, even if they are located outside the state of Texas. Additionally, government agencies interacting with the public fall squarely within the scope of the Act.
Substantive Provisions
- Prohibited Uses for All AI. TRAIGA prohibits certain uses of AI systems in both the public and private sectors, including intentionally inciting self-harm, violence or crime; infringing on an individual’s rights; or unlawfully discriminating. The Act also prohibits deploying AI systems that intentionally generate unlawful content, including child sexual abuse material, or that operate as sexually explicit chat systems impersonating children.
Notably, accidental or disparate impact alone is likely not enough to violate TRAIGA; a purposeful intent to discriminate using the AI system is most likely required.
- Public Sector: While the scope of TRAIGA includes both public and private entities, there are additional requirements for Texas governmental agencies.
- Prohibited Uses: Prohibited uses include, among other things, social scoring and uniquely identifying an individual using biometric data, with limited exceptions.
- Transparency Requirements: Governmental agencies must, among other things, provide conspicuous notice to consumers that they are interacting with an AI system – even when this would be obvious to the user. Agencies are also prohibited from gathering biometric data without the individual’s informed consent if doing so would infringe that individual’s rights.
Enforcement
With no private right of action, TRAIGA can be enforced only by the Texas AG. The Act requires the AG to create an “online mechanism” on the AG’s website where consumers can submit complaints of potential violations.
If the AG investigates a complaint, developers and deployers of the AI systems may be required to provide information including, but not limited to, a description of the:
- purpose, intended use, deployment context, and associated benefits of the AI system;
- categories of data used as inputs and of the outputs produced by the system;
- types of data used to train the system;
- evaluation criteria of the performance of the system;
- known limitations of the system; and
- post-deployment and user safeguards (e.g., oversight, use and learning processes established to address issues).
If the AG determines a violation has occurred, there is a 60-day cure period. If the violation continues after this period, the AG may bring a claim for, among other things:
- an injunction;
- a civil penalty for curable breaches between $10,000 and $12,000;
- a civil penalty for uncurable breaches between $80,000 and $200,000; and
- a civil penalty for each day of continued violation between $2,000 and $40,000.
Safe Harbor
Notably, TRAIGA states that a defendant may not be found liable for violations if the defendant substantially complies with the most recent version of the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (AI RMF Framework) published by the National Institute of Standards and Technology (NIST) or another recognized risk management framework for AI systems.
Sandbox Program
Under TRAIGA, the Texas Department of Information Resources is required to establish a sandbox program. This program enables developers and deployers of AI systems to test those systems with legal protection and limited market access.
The AG may not file or pursue charges against a program participant for violations of TRAIGA that occur during the testing period.
Effective Date
The Act goes into effect on January 1, 2026.