Early Trump AI Moves Come in a Complex Regulatory Landscape

Feb. 20, 2025, 9:30 AM UTC

The Bottom Line

  • Early signs are that the Trump administration favors less AI regulation to encourage innovation, addressing problems as they arise.
  • If Congress acts on AI, it’s likely to be on targeted issues, not a comprehensive framework like the EU AI Act.
  • Companies should stay the course on compliance with state laws and Biden-era federal rules until new regulations take shape.

As the Trump administration considers how to regulate artificial intelligence, it isn’t writing on a blank slate. It’s engaging with a fast-developing technological and regulatory landscape here and abroad, and the need to balance the significant risks and opportunities AI presents.

While no comprehensive AI law has yet been enacted in the US, there are a number of directives, pending legislative initiatives, and regulatory and enforcement activities with substantial implications. These are all now subject to consideration, revocation, or modification by the Trump administration.

This article provides an overview of the federal and state regulatory landscape for AI in the US as of the end of the Biden administration, briefly reviews the initial actions of the Trump administration, and looks ahead to what we might see next.

Biden Term

In October 2023, former President Joe Biden signed an executive order aimed at promoting the responsible development and deployment of AI. Various agencies released guidance under the order, including the National Institute of Standards and Technology’s AI Risk Management Framework and the Office of Management and Budget’s guidance on management of AI systems used by federal agencies. The Trump administration rescinded the 2023 AI executive order, but the order still provides important context for the ongoing US AI regulatory approach.

On Jan. 14, Biden signed a second executive order focused on AI. The 2025 AI executive order aims to advance US leadership in AI infrastructure, with implementation deadlines through 2027. The executive order directs the Department of Defense and the Department of Energy to lease federal sites to host large-scale AI data centers and the Department of Interior to support the data centers’ energy needs with clean energy.

On Jan. 16, 2025, Biden signed an executive order aimed at strengthening and promoting cybersecurity innovation, which acknowledges AI’s role in cyber defense. That order includes directives instructing various agencies to research AI software vulnerabilities and implement cyber defense programs using AI. Trump hasn’t yet rescinded these 2025 executive orders.

National Security

In October 2024, the Biden administration issued a memorandum on US AI leadership. The memorandum tasked the AI Safety Institute within NIST with testing frontier AI models and issuing guidance on how to measure AI models’ capabilities relevant to the risk that they might enable the development of biological or chemical weapons or aid offensive cyber operations. The memorandum also tasked the Committee on Foreign Investment in the United States with considering whether certain transactions may involve foreign access to proprietary information on AI training techniques and other proprietary insights regarding the creation and use of AI models.

The memorandum is part of a more expansive suite of regulations and administrative actions the Biden administration pursued in addressing the national security risks posed by AI. These include new export control regulations from the Commerce Department’s Bureau of Industry and Security (BIS) aimed at AI that could support the development, production, or use of a missile or chemical, biological, or nuclear weapon. They also include the addition of entities involved in AI research and development to the “Entity List” maintained by BIS for entity-based export controls, because of those entities’ connections to the Chinese military and defense industry. More recently, BIS issued a proposed rule in September 2024 that would require US persons and entities developing dual-use foundation AI models, or developing, acquiring, or possessing large-scale computing clusters, to submit quarterly notifications to BIS.

These efforts accelerated in the final month of the Biden administration. On Dec. 27, the Department of Justice issued a final rule that prohibits licensing of algorithms or AI models for the purpose of accessing bulk US sensitive personal identifiers or government-related data contained in the algorithms that wouldn’t otherwise be accessible. On Jan. 2, the Treasury Department’s rule imposing restrictions on US outbound investment in Chinese companies active in developing AI and other national security-related technologies went into effect. And on Jan. 13, BIS announced updated controls for advanced computing chips, new export controls on model weights for advanced AI models, and new security conditions to safeguard storage of these advanced AI models.

Other Federal Enforcement

Beyond national security, various federal agencies have also increased enforcement under existing business law regimes against companies offering or purporting to offer AI products. In September 2024, the Federal Trade Commission announced “Operation AI Comply,” a new enforcement sweep that increases the agency’s scrutiny of companies’ false claims around their AI products and the sale of AI technology that can be used in deceptive and unfair ways. Operation AI Comply builds on several earlier FTC cases involving deceptive or unsafe uses of AI, such as the use of AI facial recognition without reasonable safeguards. The Securities and Exchange Commission has also begun enforcement targeting “AI washing.” In May 2024, the SEC announced settled charges against two investment advisers for making false and misleading statements around their use of AI—advisers that claimed to use AI when they in fact did not.

Companies are also facing antitrust scrutiny from the DOJ and the FTC, both of which have reportedly initiated investigations into possible antitrust violations by AI industry leaders. The two agencies are both responsible for enforcement of federal antitrust laws, and have an interagency clearance process to determine which agency takes a specific case. The DOJ and the FTC, along with international counterpart enforcers in the UK and EU, issued a joint statement in July 2024 stating that AI raises competition risks which have the potential to harm consumers.

State Legislation

Apart from developments at the federal level, states have begun implementing their own AI regulatory frameworks. Notably, Colorado’s AI Act will impose obligations on “high-risk” AI systems, such as those made for health care, insurance, or legal services, when it goes into effect on Feb. 1, 2026. California’s AB 2013 will require developers of generative AI systems to publish documentation concerning the data used in training those AI systems by Jan. 1, 2026. In the health-care sector, California’s Health Care Services Bill (AB 3030), which went into effect on Jan. 1, 2025, now requires AI-generated patient communications related to clinical information from health-care providers to include a clear disclaimer informing patients that the content was created by AI.

California lawmakers have also expanded the California Consumer Privacy Act to include AI systems that output personal information or include neural data (i.e., information from an individual’s nervous system collected and interpreted by a device) in the law’s definition of “personal information.” Explainability and transparency also remain a focus for state and local regulations, such as New York City’s Local Law 144, which requires automated employment decision tools to undergo bias audits.

Early Shifts in Trump Term

During the first days of the Trump administration, there have been notable shifts in policy related to AI. Trump revoked the 2023 AI executive order and, on Jan. 23, issued an executive order that generally revokes “certain existing AI policies and directives” presenting barriers to domestic AI innovation and mandates the development, within 180 days, of an action plan to “sustain and enhance America’s global AI dominance.” As directed by that order, the Trump administration has published a request for information on the Development of an Artificial Intelligence Action Plan, inviting comments until March 15, 2025.

Trump has also stated his intention to use emergency declarations to support Stargate, a newly announced AI infrastructure joint venture. These early actions signal a clear pivot toward an approach that waits until issues arise before pushing for regulation. We anticipate that this “let all the flowers bloom” approach will bet that less regulation enables more innovation and that concerning developments can be addressed reactively.

Trump’s appointees and nominees to several regulatory positions that may impact AI further suggest that the Trump administration’s approach will continue to diverge from that of the Biden administration. Notably, Chairman Andrew Ferguson of the FTC has criticized some of the FTC’s cases and reports, touting a lighter-touch approach to AI regulation. David Sacks, the AI and crypto czar, is an experienced AI investor and entrepreneur who is likewise expected to take a pro-industry approach.

Looking Ahead

In the last several months, members of the House and Senate have been active in proposing AI-focused bills, which have addressed a range of topics including AI research funding, biosecurity standards, and increased transparency requirements for AI developers. These scope-limited bills contrast with the comprehensive approach of the EU AI Act, which came into force in August 2024. The EU AI Act imposes a range of compliance requirements based on the risk tier of the AI system and role of the organization, including prohibited practices, transparency requirements, development guardrails, and general provisions. There isn’t any comprehensive EU AI Act-type legislation pending in the US, and we don’t expect the new administration to propose, or the new Congress to enact, such legislation.

It is still uncertain whether the Biden administration’s work on AI will be comprehensively undone, or whether the Trump administration will take a more surgical approach focused on particular issues. While some directives and guidance may be rescinded (as we have already seen), the Trump administration may well leave in place other regulations that support an aggressive stance on national security risks posed by AI and foreign competitors.

No matter how the Trump administration proceeds, until new regulations take shape, companies should not abandon efforts to comply with existing regulations and guidelines adopted under the Biden administration. And in parallel, businesses should continue work aimed at complying with state laws, which will go into force regardless of the change in Washington.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

David J. Kappos is a partner in Cravath’s Corporate Department and is widely recognized as one of the world’s foremost leaders in the field of intellectual property, including intellectual property management and strategy, the development of global intellectual property norms, laws, and practices, as well as commercialization and enforcement of innovation‑based assets.

Evan Norris is a partner in Cravath’s Litigation Department, where he focuses his practice on advising US and multinational companies, boards of directors, and senior executives with respect to government and internal investigations, cybersecurity, regulatory enforcement and compliance, and related civil litigation.

Sasha Rosenthal-Larrea is a partner in Cravath’s Corporate Department, where she focuses her practice on advising clients on the most significant intellectual property issues, including with respect to complex licensing and collaborations, patent and copyright licensing strategy, software, and artificial intelligence.

Cravath of counsel Dean M. Nickles contributed to this article.

To contact the editors responsible for this story: Jessie Kokrda Kamens at jkamens@bloomberglaw.com; Max Thornberry at jthornberry@bloombergindustry.com
