State lawmakers are strategizing on how to win passage of bills regulating AI use in employment decisions despite opposition from the tech industry and the Trump administration.
Legislators from New York to Texas have pushed measures ranging from requiring employers to mitigate potential bias caused by artificial intelligence hiring tools to mandating disclosure and appeals of AI-generated decisions for job applicants.
Despite resistance from a well-resourced tech industry that has sidelined some legislation, and new threats from Congress and the White House to override state AI laws altogether, the movement to regulate remains strong, even if bills are narrowed in scope, lawmakers from several states said.
“I’m not intimidated, but we also have to be practical,” said Virginia state delegate Michelle Maldonado (D), whose AI bill (HB 2094) was vetoed this year by Gov. Glenn Youngkin (R).
President Donald Trump released an AI Action Plan in July that included threats to federal funding and possible preemption by the Federal Communications Commission for state AI laws the administration deems overly restrictive. The Senate also seriously considered, but ultimately rejected, legislative language this summer banning state laws that regulate AI.
“This is going to require us to think about, is there a different approach?” Maldonado said, suggesting targeting legislation narrowly on transparency and disclosures.
Colorado passed the nation’s broadest AI bias law to date, but lawmakers voted in August to delay it and try to revise it before it takes effect.
Connecticut Sen. James Maroney (D) said even blue-state lawmakers who attempted to imitate Colorado’s measure might be ready to narrow their focus in 2026, either through industry-specific restrictions or a transparency-only approach.
The latter would require employers to disclose to job applicants and employees when and how they’re using AI tools, but stop short of mandatory bias audits or detailed risk management plans.
Such proposals emphasize “the right to know if AI is being used to make an important decision about your life,” Maroney said.
Transparency Details
Even if state lawmakers adopt a transparency focus, the details could vary widely.
An Illinois law set to take effect Jan. 1, 2026, requires employers to give workers notice when using AI for employment decisions, but offers no specifics on what the notice should include.
By contrast, Colorado lawmakers in August considered an AI Sunshine Act to replace their broader AI bias law. It would have required businesses to notify individuals of up to 20 factors that AI tools considered before rejecting them, plus an opportunity to correct inaccurate data.
The tech industry balked at the Colorado bill, particularly its language seeking to impose joint liability for discrimination claims on AI technology developers alongside the companies using the tools.
“We’re encouraged to see more lawmakers considering transparency-focused approaches,” said David Edmonson, senior vice president of state policy at industry association TechNet. “While the details matter, transparency can often be a more workable path than some of the more onerous mandates that have been proposed.”
Not everyone is ready to surrender efforts at broader job bias protections.
Colorado Rep. Brianna Titone (D), a cosponsor of the law delayed to June 2026, said she still sees hope for legislation forcing technology developers to share liability for discrimination claims, rather than assigning it all to businesses using the tools to boost hiring efficiency. If policymakers don’t address liability, requiring transparency doesn’t accomplish much, she said.
“I still get denied my job. I still get denied my health care. I still get denied my insurance policy or whatever it is, but I have no recourse,” she said.
But even in California, which is often first in the nation on pro-worker legislation, supporters of AI bills affecting employment had mixed success in 2025. State legislators passed a bill targeting AI-powered workforce management while letting a comprehensive, Colorado-style proposal focused on discrimination die. Regulations pending at the state’s civil rights and privacy agencies will help govern employer use of automated decision tools.
Absent action from Washington, state requirements that differ and continue to evolve can make compliance more difficult.
Some states are pressuring employers to take steps like bias testing, maintaining documentation, and other system controls, according to Lauren Hicks, a shareholder at Ogletree Deakins.
In the other corner, some states “are a little bit more process oriented,” Hicks said, focusing more on transparency, data privacy, or notice obligations for automated decision-making, or offering rights to appeal or opt out.
“Employers now have an obligation to really dig in and develop a deep understanding of the software they are using,” Hicks added. “That’s extremely critical now, so that they can work to meet these compliance obligations that are going to vary state by state.”
Preemption Threat
How Congress might address preemption in future artificial intelligence legislation remains to be seen, creating risk for states that advance AI protections.
Trump’s action plan instructs federal agencies to deny AI-related funding to states whose laws undermine the funds’ purpose, such as promoting AI industry growth.
But the plan doesn’t clarify which kinds of state laws are prone to federal scrutiny, said Mackenzie Arnold, director of US policy at the Institute for Law & AI.
“Conditioning federal grants on a state’s AI ‘regulatory climate’ is inherently malleable and thus hard to predict,” he said. “States need to know what laws are in and out of bounds.”
For some employment laws, like the Fair Labor Standards Act, federal rules act as a floor, and states are able to set higher standards.
But in other areas the US Supreme Court has recognized the primacy of federal law, preempting state statutes regulating conduct covered by the National Labor Relations Act, for instance.
Maldonado, who plans to reintroduce her legislation next session, said she doesn’t see a risk in moving forward while Congress figures out how federal and state AI laws will intersect.
“We should put something in place, and if it gets preempted, then fine,” she said, “but more likely than not, it may not.”