Fortune
David Meyer

Why a Texas AI bill is shaping up as the next battleground over U.S. AI policy

Texas Capitol at sunset on a cloudy day (Credit: Getty Images)

Another furious battle over a state-level AI bill in the U.S. is brewing, after California's contentious effort was vetoed last September.

This time, the bill in question is the Texas Responsible AI Governance Act or TRAIGA, which was formally introduced by Republican Texas House Rep. Giovanni Capriglione just before Christmas. The bill would outlaw some uses of AI and place heavy compliance obligations on both the developers and deployers of any “high-risk” AI system “that is a substantial factor to a consequential decision.”

The bill, also known as HB 1709, is largely concerned with stopping AI-powered discrimination against people—unlike the Californian AI bill, which focused on tackling AI’s more theoretical, catastrophic risks to life and property.

TRAIGA would have massive implications for the deployment of AI systems in Texas, the world's eighth-largest economy, particularly when it comes to use cases such as recruitment. (It would, however, also establish grants for Texas-based AI companies and local community colleges and high schools, to help them train workers in how to use AI.)

Critics warn that HB 1709 would effectively apply to AI developers outside Texas, too, potentially outlawing models like OpenAI’s GPT-4o. Some even claim TRAIGA exemplifies the dangers of state-level AI regulation. (The U.S. has no federal AI law, and it is unclear if such an idea will advance under the second Trump administration.)

“The Texas AI bill is exactly the kind of state-level overreach the United States should avoid. Federal and state antidiscrimination laws already apply to AI, so this measure would effectively add a third layer of regulation,” said Hodan Omaar, a senior policy manager at the Information Technology and Innovation Foundation (ITIF), a D.C. think tank that is funded by the likes of Alphabet and Microsoft.

“These are broad, systemic measures that, if needed, should be handled at the federal level,” added Omaar. “A patchwork of state mandates like [TRAIGA] risks derailing a coherent national approach, threatening to stall the important progress the nation is making toward a unified and effective AI strategy.”

TRAIGA is similar to what is currently the most comprehensive state-level AI bill, Colorado’s AI Act, which was passed last year and will go into effect in February 2026. These bills arguably draw heavily, in terms of their approach, on the EU’s AI Act, which was also passed last year.

The fate of HB 1709 will be determined fairly soon, owing to Texas’s unusual legislative system, in which bills are only considered from January to June in each odd-numbered year. Capriglione (who did not respond to an interview request) has proposed that TRAIGA should go into effect at the start of September.

The Texas attorney general would be the law’s enforcer, with the ability to issue fines of up to $200,000 per violation, plus administrative fines of up to $40,000 per day for those who flout TRAIGA. The levels of these fines have increased drastically since an earlier draft of the bill, released before its formal filing.

Bans and restrictions

The law would ban the use of AI to subliminally manipulate or deceive people, to classify them in “social scoring” systems, or to infer their racial or sexual characteristics from biometric information. It would also ban anyone from using AI to identify people based on images taken from the internet and other public sources.

AI systems that are even capable of producing sexual deepfakes would also be banned. This element worries even some experts who largely support TRAIGA, such as Matt Scherer, a senior policy counsel at the Center for Democracy and Technology (CDT), who says the provision raises free-speech concerns.

Under TRAIGA, the developers, distributors, and deployers of AI models (small businesses are exempt) would have to take “reasonable care” to protect consumers from the risk of intentional or unintentional algorithmic discrimination, and tell deployers about the models’ limitations and risks. Developers would also have to give deployers metrics about the “accuracy, explainability, transparency, reliability, and security” of their models, and details of the measures they have taken to “examine the suitability of data sources and prevent unlawful discriminatory biases” in the datasets used to train those models.

AI developers would have to keep “detailed records” of their training data. This appears to go beyond the EU’s AI Act, currently the world’s most significant comprehensive AI law, which only demands that AI companies provide summaries of that training data.

If a developer realizes that its model doesn't comply with the law in any way, it would have to immediately withdraw or disable the model as needed to bring it into compliance. Similarly, if the deployer realizes that there's a risk of algorithmic discrimination, it would have to stop using the AI system and inform its developers and distributors. Also, if the model poses a risk of algorithmic discrimination, "deceptive manipulation or coercion of human behavior," or the unlawful use or disclosure of personal data, the developer would have to investigate the issue and tell the Texas attorney general about it.

Unlike California’s failed SB 1047 bill, TRAIGA imposes these obligations on developers of any size, not just the ones training the largest and most capable models.

The deployers of high-risk AI systems would have to perform (or contract someone to perform) impact assessments annually and within 90 days of any “intentional and substantial modification” to the system.

“In the case of a frontier language model, such modifications happen almost monthly, so both developers and deployers who use such systems can expect to be writing and updating these compliance documents constantly,” wrote Dean W. Ball, a research fellow at George Mason University’s Mercatus Center, in a blog post last week.

TRAIGA would require anyone deploying a “high-risk” consumer-facing AI system to clearly tell the consumer that they are interacting with AI, and explain how the AI could be a “substantial factor” in making consequential decisions about them. Social media companies would have to stop advertisers from deploying AI systems on their platforms that could expose users to algorithmic discrimination.

Consumers would have the right to appeal consequential, AI-driven decisions that have an adverse effect on their health, safety, and fundamental rights. They would also have the right to know if and how their personal data is being used in any AI system. However, as with the Colorado AI Act, the Texan bill doesn’t give consumers any right to privately sue over violations.

A new regulator

TRAIGA would also see the creation of a Texas AI Council, attached to the governor’s office and mostly comprising members of the public with suitable expertise.

This group would try to find ways in which AI could make state government more effective, and to identify laws and regulations that could be reformed to stimulate AI development. It would be able to issue standards for ethical AI development, and would be able to “investigate and evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users.”

According to Ball, TRAIGA would itself lead to "mass censorship of generative AI" and would also impede AI development, making the Texas AI Council's powers "comical." He dismissed the bill as "a great example of the 'Brussels Effect,' where the European inclination to regulate early and heavily causes other countries to adopt European standards simply by virtue of institutional momentum."

The CDT’s Scherer disputes this, arguing that TRAIGA would be “not nearly as broad or as burdensome” as the EU’s AI Act.

Indeed, Scherer argues that TRAIGA should be tougher than it now is. He notes that the bill’s earlier draft followed the Colorado law in covering AI systems that are a “contributing factor” to consequential decisions, but the version that was formally proposed only talks about a “substantial factor.”

“That definition would allow companies to ignore the law by simply assigning a human to rubber-stamp algorithmic ‘recommendations,’” said Scherer. “That’s exactly what happened with New York City’s AI-in-hiring bill. The substance of the rest of the provisions on AI-driven decisions doesn’t really matter if that loophole stays.

“Hopefully there’s still time to close that and other loopholes before the bill makes it to the floor.”
