TRAIGA

Texas state law governing AI

TRAIGA, or the Texas Responsible Artificial Intelligence Governance Act, is a state law regulating the development and deployment of artificial intelligence (AI) systems in Texas. Sponsored by Representative Giovanni Capriglione, the Act establishes a framework governing certain uses of AI, outlines prohibited uses, and creates obligations on state government entities, among other provisions.[1][2][3][4][5] TRAIGA was signed into law in 2025 and took effect on January 1, 2026.[6]

The law applies to AI developers and deployers that conduct business in Texas or whose systems are used by Texas residents. It prohibits the intentional development or deployment of AI systems to incite harm, violate constitutional rights, engage in unlawful discrimination, or produce child sexual abuse material or unlawful deepfakes. TRAIGA also establishes the Texas Artificial Intelligence Council and creates a regulatory sandbox program. The Texas Attorney General is charged with enforcement.[7][8][9][10][11][12][13][14][15][16]

It has received attention as one of the first comprehensive AI-related laws enacted by a U.S. state.[17][15][14][12] Legal analysts have compared it to the European Union's Artificial Intelligence Act (EU AI Act) and the Colorado AI Act, noting its intent-based discrimination standard and narrower scope relative to those frameworks.[18][19][17][20]

TRAIGA
  • Texas Responsible Artificial Intelligence Governance Act
Territorial extent: Texas
Introduced by: Giovanni Capriglione
Introduced: March 14, 2025
Enacted: June 22, 2025
Commenced: January 1, 2026
Status: In force

Background

In June 2023, Texas Governor Greg Abbott signed House Bill 2060, which created an Artificial Intelligence Advisory Council within the Texas Department of Information Resources. The Council was tasked with monitoring the use of AI systems across state government. Its membership included representatives from law enforcement, academia, and the legal profession. After submitting a report to state policymakers, the Council was disbanded in December 2024.[21][22]

Separately, the Texas House Select Committee on Artificial Intelligence and Emerging Technologies was created in 2023 to examine the political and social implications of artificial intelligence. Among its recommendations was the creation of a regulatory sandbox to allow for controlled testing of AI systems.[23][24] This recommendation informed the regulatory sandbox provision included in TRAIGA.[21]

History

In December 2024, Representative Capriglione introduced House Bill 1709, the Texas Responsible Artificial Intelligence Governance Act. The bill sought to create a statewide framework for artificial intelligence, including transparency requirements for companies deploying AI systems, restrictions on certain uses of AI, and the creation of a regulatory sandbox.[25][13] Modeled in part on the EU Artificial Intelligence Act and the Colorado AI Act, House Bill 1709 focused on "high-risk" AI systems and included provisions addressing private sector liability.[18][8][13][21]

House Bill 1709 did not advance during the legislative session. Industry stakeholders raised concerns that several provisions were overly burdensome.[21] The bill informed the development of a revised proposal, House Bill 149, also titled the Texas Responsible Artificial Intelligence Governance Act. The revised version removed requirements for private companies to notify consumers when they interact with AI systems and to conduct impact assessments, among other provisions.[13][17][4][20]

In April 2025, an amended version of House Bill 149 passed the Texas House of Representatives and was referred to the Senate Committee on Business and Commerce.[26][4] The bill was later approved by both chambers, with the House concurring in amendments adopted by the Senate.[21]

On May 31, 2025, the state legislature passed House Bill 149, one of several AI-related bills considered during the legislative session. Governor Abbott signed TRAIGA into law on June 22, 2025.[27][28][26][12]

During the legislative process, a proposed federal moratorium on state-level AI regulation initially raised questions about the enforceability of state AI laws, including TRAIGA. At the time of signing, Governor Abbott stated that Texas would ensure compliance with applicable federal requirements.[27] In July 2025, the United States Senate voted to remove the proposed moratorium from federal legislation.[29]

The Act took effect on January 1, 2026.[8]

Provisions

Definitions and scope

TRAIGA applies to AI developers and deployers that advertise or conduct business in Texas, develop products used by Texas residents, or develop or deploy AI systems within the state. The Act also applies to Texas state and local government entities.[8][12][14]

The Act defines a developer as a person who develops an AI system and a deployer as one who deploys an AI system in Texas. Consumers are defined as Texas residents.[8][12][14][16]

The Act defines an artificial intelligence system as a machine-based system that "infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments."[7][12][14][15]

Government use

The Act requires government agencies to provide consumers with plain-language notice that they are interacting with an AI system. It also prohibits government agencies from using artificial intelligence systems to assign social scores to consumers, and it restricts the use of AI systems to identify individuals through biometric data without their consent.[7][8][9][11][12][14][15][10][16]

Prohibitions

The Act prohibits the development or deployment of artificial intelligence systems intended to incite harm, self-harm, or criminal activity. It also prohibits the development or deployment of AI systems designed to violate constitutional rights or unlawfully discriminate based on protected classes.[7][8][9][10][11][12][13][14][15][16]

In addition, the Act prohibits the development or deployment of AI systems that are intended to produce or distribute child sexual abuse material or unlawful deepfakes.[7][8][9][10][11][12][13][14][15][16]

Enforcement

Enforcement authority under the Act rests with the Texas Attorney General. The Act does not create a private right of action.[30][15][11][12][14]

The Act requires the Texas Attorney General to create an online complaint system where consumers may submit allegations of potential violations. The Attorney General can investigate complaints received through this system and may request information relevant to the operation of an AI system, including information about training data.[11][13][15]

Before initiating an enforcement action, the Attorney General must provide a written notice to the alleged violator, who is then provided with a 60-day period to cure the alleged violation.[12][15]

Penalties

If a violation is not cured, the Act authorizes civil penalties. Penalties range from $10,000 to $12,000 per curable violation and from $80,000 to $200,000 per non-curable violation. The Act also authorizes additional penalties of $2,000 to $40,000 for each day the violation continues.[8][9][10][12][13][14][15]

If the Attorney General determines that a person certified or licensed by a state agency has violated the Act and recommends enforcement, the relevant agency may impose additional administrative sanctions, including license suspension or further monetary penalties.[9][13][14]

Safe harbor

The Act provides an affirmative defense for AI developers and deployers who identify potential violations through internal testing or auditing, or who demonstrate compliance with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework or a comparable risk management framework.[30][8][9][12][13][14][15]

The Act also affords protection to developers and deployers when a third party uses their AI systems in a way that violates the Act.[12][13]

Texas Artificial Intelligence Council

The Act creates the Texas Artificial Intelligence Council to assist the state legislature in evaluating artificial intelligence policy and oversight. The Council is charged with developing recommendations for state agencies regarding the use of AI systems and with overseeing the regulatory sandbox.[7][10][11][13]

TRAIGA gives the Council the ability to organize AI-related training for state entities and issue reports concerning artificial intelligence. The Council does not have binding rulemaking authority.[10][11][13]

The Council consists of seven members appointed by the governor, the lieutenant governor, and the speaker of the Texas House of Representatives.[13][31]

Regulatory sandbox

The Act directs the Texas Department of Information Resources to create a regulatory sandbox program that allows participants to test AI systems under state supervision in a modified regulatory setting.[11][13][15]

To join the program, companies must submit applications that describe their AI systems and intended use. Approved participants may operate within the sandbox for up to 36 months. During that period, the Attorney General is restricted from initiating enforcement actions for certain categories of violations.[8][9][10][13]

Reception

Support

During legislative testimony, the Texas Public Policy Foundation stated that TRAIGA would benefit Texas businesses by reducing legal ambiguity and creating clearer compliance standards.[21][4] Representatives of business groups also expressed support, stating that the Act would not impose overly burdensome regulations.[32][3]

The consumer rights organization EFF-Austin praised the Act for addressing concrete harms.[21] AI legal expert Matthew Murrell also said that TRAIGA would strengthen consumer rights and privacy protections.[31]

CBS News reported that additional supporters described the legislation as intended to reduce risks associated with AI and allow private sector innovation.[1]

Criticism

Texas Appleseed, a nonprofit advocacy organization, stated that while it did not oppose the bill, it was concerned that the Texas Artificial Intelligence Council lacks sufficient authority over the regulatory sandbox program.[4]

EFF-Austin noted that the Act does not include a private right of action, which the organization indicated would provide a more meaningful enforcement mechanism for consumers.[21]

The Texas Tribune also reported that some critics maintained that the legislation could create legal uncertainty for businesses.[21]

Legal commentators note that, as one of the first comprehensive state AI laws, TRAIGA may serve as an example for other states.[31][19]

Legal commentators have compared TRAIGA to the EU Artificial Intelligence Act, the Colorado Artificial Intelligence Act, and Executive Order 14179. Observers note that, like the EU Artificial Intelligence Act, TRAIGA prohibits the intentional development or deployment of artificial intelligence systems for certain unlawful purposes, including for discrimination against protected classes and for the creation of unlawful deepfakes.[18][19][17]

In contrast to those frameworks, TRAIGA adopts an intent-based standard for discrimination, requiring proof of purposeful conduct rather than disparate impact.[7] Commentators have described this approach as more closely aligned with the Trump administration's policy priorities. TRAIGA is also characterized as narrower in scope than the EU Artificial Intelligence Act and the Colorado AI Act.[19][20]

Much of TRAIGA applies to state government entities.[18][33][34] Although the Act prohibits private companies from intentionally developing AI systems for specified unlawful purposes, its intent requirement creates a higher burden of proof in enforcement actions.[18][19][17][30][33][21]
