- Some lawmakers say US must act fast to set new AI rules
- Others say Congress can learn from and improve on the EU approach
The European Union's landmark deal on artificial intelligence has thrust the congressional timeline for passing AI legislation into the limelight.
After EU officials agreed on Dec. 8 to move forward with sweeping AI legislation, members of Congress from both sides of the aisle and industry officials shared a renewed sense of urgency to respond to the emerging technology, and its potential to disrupt the global economy.
“The EU is setting the global standard, and US legislators are beginning to realize you can’t beat something with nothing,” Sen. Elizabeth Warren (D-Mass.) said, adding congressional action was needed “yesterday” to curb AI’s risks.
“Congress should act, and Congress should act now,” Workday Vice President of Public Policy Chandler Morse said. “And Congress should act in a way that is beneficial to the US tech industry, while having an eye towards how do we interoperate with these other frameworks. Congress should not cede the tech regulatory mantle to the Europeans.”
Other lawmakers were unperturbed by criticisms the US is falling behind in the race to address AI and called for a more cautious approach.
“We’ll take our time,” said Sen. Mike Rounds (R-S.D.), a member of a bipartisan Senate working group on AI. “This is not something that’s going to happen in the next couple of weeks, but in the next several months.”
“This could be the best piece of regulation since sliced bread or it could be a total disaster,” Rep. Ted Lieu (D-Calif.), vice chair of the House AI Caucus, said of the EU’s deal. “In the United States, we’re going to watch and see how this law goes. If it goes well, then we could copy the law or several portions of it. If it doesn’t go well, then we’re going to not do that.”
Congress since the spring has heard from hundreds of tech industry and civil society officials as it considers how to form rules that protect Americans against harm while promoting AI's potential to do good. Lawmakers insist the US — home to the world's top AI companies — must lead.
“The E.U. agreement shows that the U.S. cannot sit on the sidelines in the race for A.I.,” Senate Majority Leader Chuck Schumer (D-N.Y.) said in a post on X. “We must lead the way and prioritize both innovation and safety, and Congress is working quickly to help make sure A.I. is accountable, transparent, and secure.”
The EU’s AI Act, expected to take effect by 2026, is poised to become the world’s most significant and comprehensive AI legislation, which would regulate the technology based on its risk. Lawmakers say some elements of the deal could be considered in the US, though some signaled caution around broad measures that could stifle innovation.
Planting the Flag
The EU is poised to become the de facto global standard setter on AI, several policy experts and industry officials said. Multinational tech companies with European users must gear up to follow European law.
The EU “has yet again planted the flag on policy, like it or not,” Rumman Chowdhury, chief executive officer and co-founder of tech nonprofit Humane Intelligence, said, referencing when the EU made its mark on worldwide tech regulation with its privacy law — General Data Protection Regulation.
Certain elements of the EU’s deal raised alarm bells among some industry officials and lawmakers concerned about impeding AI development. Rounds voiced skepticism about the EU “going above and beyond” with a regulatory regime. An incentives-based regime is preferable, the senator said, adding some tech companies warned Congress the EU’s reporting requirements may not encourage innovation.
The EU framework “has the potential to stifle innovation” and “lacks the concreteness to understand” AI’s impact given the technology is still evolving, said David Haber, founder and CEO of Switzerland-based startup Lakera AI. He also criticized the steep fines imposed for violations, ranging from 7.5 million euros ($8.2 million), or 1.5% of global turnover, to 35 million euros ($38.2 million), or 7% of turnover.
“Most companies don’t have any of the technical infrastructure to support the AI compliance process in place. It’ll be a challenging act to navigate this new regulatory landscape without incurring fines,” Haber said.
In the absence of congressional action, a number of states are considering AI laws of their own. That could lead to a messy patchwork of rules that companies will have to abide by, mirroring the landscape of state-level privacy laws the US now has in lieu of a national privacy standard.
State-by-state AI legislation would be “terrible,” said Susan Ariel Aaronson, a professor at George Washington University and director of the university’s Digital Trade and Data Governance Hub. It “undermines trust in the United States,” she said.
“It will be impossible for any technology company to follow 50 different state standards,” said Lieu, who’s introduced a bipartisan bill on the issue.
‘Risk-Based’ Approach
The EU legislation would regulate AI’s use based on its risks to individuals. Certain applications, such as biometric systems that use sensitive characteristics — like an individual’s race or religion — would be entirely prohibited. AI systems deemed to pose a high risk to rights and safety, such as reviewing job applicants, would be subject to impact assessments.
Many industry players have called on the US to regulate AI systems based on how risky they are, and lawmakers from both parties are considering the idea. A bill from Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.), described by industry officials as the most comprehensive AI proposal yet on Capitol Hill, would require companies to self-certify the safety of their AI tools, which would be categorized based on risk.
In addition to risk assessments, other aspects of the EU agreement — transparency requirements, model evaluations, adversarial testing, incident reporting, and penalties for noncompliance — would make sense in the US as well, according to Navrina Singh, founder and chief executive officer of Credo AI.
The EU deal, centered on promoting values like transparency and safety, will set the bar for “what’s going to be acceptable use, and not” said Ashley Casovan, managing director of the AI Governance Center at the International Association for Privacy Professionals.
Congress Eyes Next Year
The EU moving first gives the US an “observer advantage”: it can learn from the law and improve on it in its own legislation, Haber said.
Several lawmakers agreed.
“There’s nothing detrimental about Europe moving forward ahead of us as long as we join,” said Sen. Richard Blumenthal (D-Conn.), who’s introduced a framework on AI with Sen. Josh Hawley (R-Mo.).
“We’re independent. We’ll do our own standards. We’ll try to improve on them,” Blumenthal said.
Lawmakers have already introduced several bipartisan AI proposals, including one bill that would create a shared national research resource for AI.
Regulating AI is “very, very urgent,” Hawley said. He expressed concerns over AI’s risks to children, such as privacy violations and chatbots manipulating their behavior, but was skeptical about congressional action given past failures to regulate major tech companies. Besides historic shortcomings, passing legislation in a divided Congress is never easy and will be especially tough heading into next year’s elections.
Still, many lawmakers — including Lieu — said they’re hopeful Congress will act next year.
“In the meantime, my recommendation for the American people in terms of AI is simply to not fully trust it,” Lieu said.