Companies Look to Jumble of AI Rules in Absence of US Direction

Feb. 18, 2025, 10:30 AM UTC

The head of a Silicon Valley startup that makes chips to speed AI applications said it’s a “headache” keeping up with the EU AI Act and state AI laws, but that’s one reality of innovating in the absence of federal rules.

“You have to be alert for what’s coming from different places,” said Krishna Rangasayee, founder and CEO of SiMa.ai, whose chips power artificial intelligence applications in robotics and automotive devices. “So the headache is, this is ever evolving, ever changing.”

SiMa.ai is among the US companies developing or using AI technology that are turning to a patchwork of laws from across the country and overseas for guidance as they wait for firm directives from Washington.

President Donald Trump has said he wants less regulation of artificial intelligence, but the executive order that replaced President Joe Biden's is scant on details. And it could be months before companies know more.

The Trump order doesn’t change what US companies must do to sell or deploy AI systems in the European Union. And many US companies now fall within the scope of the EU AI Act, the world’s most comprehensive AI law, whose first major provisions took effect this month, said Peter Schildkraut, an attorney at Arnold & Porter who counsels on AI regulation and risk management.

Congress hasn’t passed major AI legislation, but at least half of US states have addressed it in some way, such as “through new laws targeting the technology’s use in elections and sexually explicit materials,” according to the National Conference of State Legislatures.

Companies also must be aware of those state laws, and this year Colorado, Nebraska, and Texas are considering what are essentially “mini European Union AI Acts,” said Adam Thierer, a senior fellow on the technology and innovation team at the R Street Institute, which advocates for a free-market economy.

Colorado approved AI legislation in 2024, slated to take effect in February 2026, tasking those who develop artificial intelligence systems and those who deploy them with protecting users from issues such as algorithmic discrimination. Lawmakers in Texas, New York, Virginia, and other states have introduced bills to prevent AI-related discrimination in sectors such as banking, housing, and government services. And Nebraska’s unicameral legislature on Feb. 6 held a public hearing on an anti-AI-bias bill that could pass in the session ending in May.

“The implementation of the EU AI Act is already underway, and many US states are developing and enacting their own AI legislation,” said Courtney Lang, VP of policy for trust, data, and technology at the Information Technology Industry Council, a trade group that counts among its members Anthropic and Google. “This may serve to create a patchwork of compliance requirements, which could be burdensome and ultimately harm US competitiveness and companies’ ability to innovate.”

Lang said that she was hopeful that Trump’s AI plan, which calls for a proposal from the federal government within six months, “will lay out a cohesive vision for US AI leadership.”

The White House didn’t reply to a request for comment.

Evi Fuelle, the global policy director of Credo AI, a firm that helps companies adopt AI, said companies that operate in global markets should implement risk-averse measures while being transparent with business partners and customers.

“Irrespective of what regulations or legislation is passed anywhere in the world, they are operating as global companies that will need to implement robust AI governance,” she said.

AI Regulation ‘By Default’

Last month, Microsoft used a blog post to tell business customers who use its AI products and who operate globally that “regulatory compliance is of paramount importance.”

“This is why, in every customer agreement, Microsoft has committed to comply with all laws and regulations applicable to Microsoft. This includes the EU AI Act,” the post said.

Vice President JD Vance said at an AI summit in Paris last week that strong regulations for tech companies would be a “terrible mistake.”

Criticism that the Biden administration was too regulatory skirts the fact that there was no real AI regulation under Biden, whose executive order was largely voluntary, said Alondra Nelson, a professor of social science at the Institute for Advanced Study who led the Biden White House’s work on its “Blueprint for an AI Bill of Rights.”

Nelson was among speakers at a working dinner hosted by French President Emmanuel Macron at last week’s AI summit in Paris.

US companies have to make hard calls in the current environment, similar to those they made when the EU enacted the General Data Protection Regulation, known as GDPR, in 2018, Nelson said. At the time, some multinational companies decided to abide by the EU rule, she said.

“That means that in some instances, EU regulation is going to, by default, also [be] the US AI regulation,” Nelson told Bloomberg Law.

But even as Europe leads in AI regulation, most of the technological advances are in the US and China, Rangasayee said.

“So they can only control the deployment of it, not the innovation,” Rangasayee said.

— With assistance from Zach Williams.

To contact the reporter on this story: Kaustuv Basu in Washington at kbasu@bloombergindustry.com

To contact the editor responsible for this story: Gregory Henderson at ghenderson@bloombergindustry.com
