- Holtzman Vogel attorney critiques bill’s risk-based framework
- Could pave way for heavy-handed AI regulation nationwide
A sweeping artificial intelligence regulation bill introduced in Texas last month would impose the US’ strictest state-level restrictions on AI if enacted, threatening to stifle innovation and growth in the state.
Texas State Rep. Giovanni Capriglione (R) on Dec. 23 filed HB 1709—the Texas Responsible AI Governance Act, or TRAIGA—which would impose heavy-handed AI regulations across all industries and increase compliance costs for Texas businesses using AI.
Modeled after the European Union’s AI Act, TRAIGA adopts a risk-based framework for AI regulation. The framework classifies AI systems by their perceived risk levels, imposing stricter regulations on systems categorized as higher risk.
But TRAIGA—and risk-based frameworks generally—are fundamentally flawed, regulating speculative uses of AI technology rather than actual societal harms. For example, TRAIGA bans the use of AI to conduct social scoring, yet a company can still do so through non-AI means. So the bill penalizes AI use rather than the underlying harmful activity.
Risk-based frameworks subject entire industries to broad AI regulations, often overlooking sector-specific nuances. Case in point: TRAIGA imposes onerous obligations on developers, deployers, and distributors of “high-risk” AI systems and prohibits the development of certain AI systems altogether.
To date, Congress hasn’t passed comprehensive legislation that regulates or prohibits AI development or use. States have been more active in AI regulation, with at least 31 states adopting resolutions or enacting AI legislation last year. But that legislation has generally been targeted and domain-specific, regulating activities such as deepfakes in elections and AI use in job interviews.
Colorado is the only state that has passed comprehensive AI legislation. With the Colorado AI Act enacted in May 2024, the state introduced a risk-based approach similar to the EU AI Act and TRAIGA.
But TRAIGA goes even further than Colorado’s AI Act. For instance, it defines a high-risk AI system as “any [AI] system that is a substantial factor to a consequential decision.”
A substantial factor is broadly defined as a factor that is “considered when making a consequential decision; likely to alter the outcome of a consequential decision; and weighed more heavily than any other factor contributing to the consequential decision.” There is no further framework or clarity for applying this vague definition.
In turn, a consequential decision is broadly defined as “any decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms or conditions of” a criminal case and its related proceedings, education enrollment or opportunities, employment, insurance, a financial or legal service, elections or voting processes, and more.
These core definitions demonstrate the bill’s ambiguity and subjectivity. Categorization as a high-risk AI system depends on that system being a “substantial factor to a consequential decision,” but the definition of substantial factor is itself tied to the factor’s relationship to a consequential decision. Neither term is defined independently; each relies on the other, leaving the definitions circular.
The ambiguity and circular nature of these definitions would grant excessive discretion to bureaucrats to determine what AI systems are high-risk and increase compliance costs for companies.
TRAIGA imposes obligations on developers, deployers, and distributors of systems deemed high-risk. These responsibilities include mandatory risk assessments, record-keeping, and transparency measures.
The bill requires AI distributors—persons other than developers who make AI systems available on the market—to withdraw, disable, or recall non-compliant high-risk AI systems under certain conditions. TRAIGA also requires AI developers to maintain detailed records of training data, a highly burdensome requirement given the trillions of data points that large language models are trained on.
TRAIGA even bans certain AI systems that purportedly present unacceptable risks. This includes AI systems that manipulate human behavior, conduct social scoring, capture certain biometric identifiers, infer sensitive personal attributes, perform certain emotion recognition, and generate explicit or harmful content.
But blanket prohibitions, such as those proposed under TRAIGA, risk crippling innovation by banning technologies before their full range of benefits and risks can be understood.
Many of these AI capabilities hold immense potential for socially constructive applications, which include improving health-care diagnostics, streamlining legal processes, enhancing cybersecurity, and enabling personalized education tools.
While TRAIGA includes narrow exemptions for small businesses and AI systems in research and testing under its sandbox program, these carveouts offer only temporary relief and fail to justify a burdensome regulatory framework that raises compliance costs across industries.
TRAIGA would force startups and resource-limited entities to navigate complex compliance requirements, assess exemption eligibility, and prepare for significant regulatory burdens once they outgrow small-business status or exit the sandbox program. This would create unnecessary barriers to innovation.
Perhaps the most concerning aspect of this bill is that it would pave the way for lawmakers to adopt similar heavy-handed AI regulation nationwide. That would be severely misguided.
Policymakers should instead focus on narrowly tailored laws that directly address specific, real harms. For example, if certain activities are deemed intrinsically harmful, they should be banned through legislation that includes clear definitions and enforcement mechanisms—and captures both AI and non-AI implementation methods. This approach ensures that genuinely harmful uses are addressed, without subjecting benign AI systems to the same level of scrutiny and compliance costs.
By prioritizing sector-specific rules, policymakers can protect consumers without hampering technological progress or ceding AI leadership to adversarial nations.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Oliver Roberts is co-head of Holtzman Vogel’s AI practice group and CEO and co-founder of Wikard, a legal AI technology firm.