Biden AI Rules Face Heightened Scrutiny in Post-Chevron World

July 22, 2024, 9:00 AM UTC

Federal agencies trying to rein in the risks of artificial intelligence may be vulnerable to court challenges after a Supreme Court decision stripped the executive branch of some of its rulemaking power.

Last month saw the fall of Chevron—the decades-old doctrine under which courts deferred to reasonable agency interpretations of ambiguous laws. The Court's June 28 ruling in Loper Bright held that judges, not agencies, must interpret the laws Congress writes.

The landmark reversal comes as the Biden administration has been relying on broad existing statutes to address novel AI-driven threats involving impersonation, discrimination, and safety. Without Chevron deference, companies may now be emboldened to sue agencies over new regulations they consider a hindrance to the booming AI industry.

“If I were to pick one area where the overturning of Chevron is likely to have the greatest consequences, it would be in the ability of agencies to respond to new technologies like artificial intelligence under older statutes,” said Cary Coglianese, a professor of law and political science at the University of Pennsylvania.

The Federal Trade Commission, for example, could face legal scrutiny over future AI regulations proposed under the FTC Act—a statute passed 110 years ago that has empowered the agency to conduct investigations, promote market competition, and protect consumers against unfair business practices. President Joe Biden's FTC, led by Chair Lina Khan, has in particular endured criticism that the agency has stretched the bounds of its authority on tech issues.

“You’re certainly going to see a challenge to every AI rule,” said former Rep. Brad Carson (D-Okla.), co-founder and president of tech policy advocacy nonprofit Americans for Responsible Innovation.

Fears of legal attacks may lead agencies to become more strategic in how they regulate AI—a burgeoning technology that's poised to transform the global economy but threatens to sow disinformation, violate individual privacy and civil rights, and displace workers, among other potential dangers.

Agencies will become “more cautious and more concerned” when setting AI rules now, said Helen Toner, a researcher at Georgetown’s Center for Security and Emerging Technology and former OpenAI board member.

President Joe Biden discusses his administration's commitment to seizing the opportunities and managing the risks of artificial intelligence on June 20, 2023.
Photographer: Andrew Caballero-Reynolds/AFP via Getty Images.

Agencies Issue AI Guidance, Rules

The sudden explosion of AI development has driven government regulation around the world. While Congress has been mulling AI legislation, Biden last October signed a sweeping executive order that directed agencies to set standards and guidance on AI use. It also requires companies to submit the safety testing results of their most advanced AI systems to the government before releasing them to the public.

Some agencies have also relied on their existing authorities to issue rules related to AI—for example, an FTC rule banned the use of AI to impersonate governments and businesses. The agency is also considering rulemaking on how companies collect and use data, which could affect AI.

A Federal Communications Commission rule clarified that AI-generated voices used in robocalls are subject to existing restrictions, like obtaining prior consent from consumers, under the Telephone Consumer Protection Act of 1991. The agency has also proposed a rule that would require disclosures on AI-generated TV and radio political ads.

The Health and Human Services Department issued a first-of-its-kind rule that establishes transparency requirements for AI used in health IT. The Securities and Exchange Commission has proposed a rule that would require firms to eliminate any conflicts of interest that may arise from AI use.

Chevron’s overturning sparks regulatory authority questions about “whether an older statute that wasn’t written with AI in mind should encompass AI,” said Coglianese, the University of Pennsylvania professor.

For example, “does the National Highway Traffic Safety Administration just treat autonomous vehicle technology as just another piece of equipment that’s on a car?” he asked. “Does the Food and Drug Administration treat AI as just another medical device that it has had authority to deal with? Or is it something different?”

Some tech industry officials and conservatives have accused the Biden administration of heavy-handed regulation. The Republican National Committee, in its 2024 policy platform, pledged to toss out Biden’s AI strategy if former President Donald Trump wins the election.


Normally, federal agencies may interpret issues differently across administrations, and regulate accordingly, said Stacey Gray, senior director for artificial intelligence at the Future of Privacy Forum. With Chevron’s reversal, a court could decide one interpretation is right or wrong—then the answer would be fixed until Congress acts.

Lawmakers are hoping to regulate AI but have yet to pass significant legislation. In the meantime, companies could challenge rules by arguing that AI’s use is protected First Amendment activity, or that agencies are abusing their regulatory discretion, according to Americans for Responsible Innovation’s Carson.

Joseph Hoefer, AI policy lead at lobbying firm Monument Advocacy, wrote in an internal memo about the Loper Bright decision that “well-resourced companies with extensive legal teams may gain an advantage in resisting regulations they view as unfavorable.”

Overblown Fears

Some legal experts say concerns about the administration’s weakened ability to set regulations are overblown, especially given the current landscape on AI. In addition to mitigating risks, the Biden administration has sought to bolster AI development and maintain US technology competitiveness globally.

Although there has been some initial rulemaking, agencies have mostly opted to issue memos and voluntary guidelines on AI, which aren’t legally binding and cannot be challenged.

Since the bulk of federal actions on AI so far have come down through guidance, and not via rule, Chevron’s reversal “won’t impact one way or another,” according to I. Glenn Cohen, a Harvard Law School professor whose expertise is bioethics and health law. The FDA, Cohen said, has historically issued guidance to set industry standards and has similarly done so for AI.

The FTC, for example, has also released guidance and posted blogs on AI establishing best practices for companies to follow to avoid misleading or abusing consumers. The agency in December also relied on its enforcement powers to crack down on AI harms to consumers, alleging that pharmacy chain Rite Aid used facial recognition technology in a discriminatory manner.


“With respect to enforcement under current statutes and regulations, federal agency leaders have repeatedly emphasized that there is no AI exception to existing laws. If someone uses an AI system to do something that would be unlawful using different—or no—technology, it will be unlawful,” said Peter Schildkraut, senior counsel and technology, media and telecommunications industry co-lead at Arnold & Porter.

Agencies are likely to continue relying on guidance and enforcement actions to influence the AI industry in the wake of the Supreme Court's decision. The administration could also find new ways to secure deference: if Congress passes AI legislation, for example, it could expressly delegate interpretive authority to agencies, according to legal experts.

Devika Kornbacher, partner and co-chair of the global tech group at Clifford Chance, said other existing legal precedent still applies. That includes Skidmore deference, under which courts give weight to an agency's technical expertise to the extent its interpretation is persuasive.

But agencies are already facing pressure to act carefully when setting rules. A handful of House Republican committee chairs sent letters last week to federal agency leaders highlighting Chevron’s reversal and reminding them “of the limitations it has set on your authority.”

“Perhaps no administration has gone as far as President Biden’s in issuing sweeping Executive edicts based on questionable assertions of agency authority,” the lawmakers wrote, requesting lists of pending rules that may have depended on Chevron.

To contact the reporters on this story: Oma Seddiq in Washington at oseddiq@bloombergindustry.com; Isabel Gottlieb in New York at igottlieb@bloombergindustry.com

To contact the editors responsible for this story: John Hewitt Jones at jhewittjones@bloombergindustry.com; Gregory Henderson at ghenderson@bloombergindustry.com
