Thune AI Bill Requires Companies to Certify Safety of Their Tech

Nov. 15, 2023, 4:59 PM UTC

New legislation from Sens. John Thune and Amy Klobuchar would require companies to test their artificial intelligence tools that pose high risks to Americans and certify that they’re safe.

Under the proposal set for release Wednesday, technology companies would have to adhere to a self-certification regime and attest to the safety of their AI systems, which would be categorized based on how risky they are, according to a copy of the bill’s text obtained by Bloomberg Government.

“AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” Thune, the No. 2 Senate Republican, said in a statement.

“As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention,” the South Dakota senator continued. “This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.”

It’s the latest bipartisan proposal tackling AI as Congress races to prevent the burgeoning technology from harming Americans through widespread disinformation, job displacement, privacy violations, and discrimination.

Tech executives, the national security community, and civil society leaders have been urgently calling for regulation since generative AI—large language models that can produce text, visuals, and audio almost instantly, such as OpenAI Inc.’s ChatGPT—exploded a year ago. Lawmakers have met with hundreds of officials and held hearings and briefings to help inform legislation, seeking to mitigate AI’s dangers while promoting its benefits.

‘Extremely Balanced’

Several industry officials and policy experts consider Wednesday’s highly anticipated bill to be the most comprehensive AI proposal from Congress to date. The bill is “incredibly thoughtful” and “extremely balanced,” IBM Policy Lab Co-Director Ryan Hagemann said ahead of its release.

“Instead of drawing red lines, Senator Thune’s bipartisan proposal would set up a collaborative process with the government to mitigate risk by certifying high-impact systems and requiring new forms of transparency,” Tony Samp, head of AI policy at DLA Piper and founding director of the Senate’s AI Caucus, said, calling the bill a “notable contrast” with the European Union’s AI regulatory approach.

The legislation groups AI systems into three types: generative, high-impact, and critical-impact. Although companies would self-assess the risks and benefits of their systems and certify them, the Commerce Department would be responsible for enforcement.

For the “generative” category, internet platforms, such as social media companies and search engines, would have to disclose AI-generated content on their websites “in a clear and conspicuous manner” to users, according to a copy of the bill’s text. Nonprofits, or companies that employ fewer than 500 people or collect data from fewer than 1 million users, would be exempt from the rule.

“High-impact” applies to AI systems used to make decisions in non-defense settings that concern access to health care, insurance, housing, employment, credit, and education in a way that poses significant risks to individual rights or safety. Companies would be required to assess the safety of their high-impact systems before deploying them and submit those results to the Commerce Department.

Risk Management

The bill would also establish an advisory board within the Commerce Department to provide recommendations on the certification of “critical-impact” AI systems, which are defined as those used in non-defense contexts that pose a significant risk to individual rights concerning criminal justice, management of critical infrastructure, or the collection of biometric data, such as facial recognition. Companies would have to conduct a risk-management assessment of these systems that extensively details their risk testing and evaluation standards, and submit it to the Commerce Department.

Companies that fail to comply with the rules could face civil action or penalties of up to $300,000 for each violation. The Commerce Department could also outright ban developers in violation of the rules from deploying their critical-impact AI systems.

The bill also calls for further examination of AI systems and the development of standards for AI use. It would require the Commerce Department to create a working group, including officials from industry, academia, and civil society, dedicated to boosting consumer education efforts on AI. The National Institute of Standards and Technology would have to provide recommendations to federal agencies for guardrails on high-impact AI systems, and conduct research related to online content authenticity to help distinguish between real and AI-generated content. The Government Accountability Office would study barriers to AI use in the federal government.

Some experts who have pushed for greater AI oversight and accountability said they are skeptical of self-certification rules, claiming such an approach puts too much faith in the industry’s word to deploy the tech safely.

Concern About Loopholes

“My resistance to any self-regulatory regime is because I’m just so aware that we’re operating in an industry where the companies are very capable of establishing loopholes for themselves if they set their own rules,” Deborah Raji, a University of California, Berkeley AI researcher, said ahead of the bill’s release.

Raji in September attended the Senate’s first AI forum, a closed-door discussion on regulation that featured a slew of tech executives, including Meta Platforms Inc.’s Mark Zuckerberg, Tesla Inc.’s Elon Musk, and Microsoft Corp. co-founder Bill Gates.

Senate Majority Leader Chuck Schumer (D-N.Y.), along with Sens. Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.), and Todd Young (R-Ind.) have been leading the charge in the upper chamber to respond to AI, hosting that AI forum and several more since then. Schumer has repeatedly emphasized that future legislation should establish safeguards against AI’s threats while advancing the tech’s opportunities. The group of four has pursued an all-hands-on-deck approach, signaling that legislation should come from across committees.

Thune and Klobuchar (D-Minn.) are members of the Commerce Committee. Sens. Roger Wicker (R-Miss.), Shelley Moore Capito (R-W.Va.), John Hickenlooper (D-Colo.), and Ben Ray Luján (D-N.M.), who are co-sponsors of Wednesday’s bill, also sit on the panel.

“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” Klobuchar said in a statement. “This bipartisan legislation is one important step of many necessary towards addressing potential harms. It will put in place common sense safeguards for the highest-risk applications of AI—like in our critical infrastructure—and improve transparency for policy makers and consumers.”

The recent flurry of legislative activity on AI has left close observers encouraged that Congress may heed calls to set rules. “In a time of partisan gridlock on so many issues, promoting AI innovation and managing the risks of AI appear to be a rare find of bipartisan interest,” Samp said.

To contact the reporter on this story: Oma Seddiq at oseddiq@bloombergindustry.com

To contact the editors responsible for this story: John Hewitt Jones at jhewittjones@bloombergindustry.com; Robin Meszoly at rmeszoly@bgov.com
