- Government AI use scrutinized before private sector
- Bias threats to citizens top concern of legislators
A growing number of states have enacted laws this year to study artificial intelligence, ahead of possible legislative action to address expected threats to jobs, civil liberties, and property rights posed by the emerging technology.
The actions have varied. Minnesota (SF 2909) is studying how intelligence sharing with AI might enable law enforcement to violate people’s civil rights. Meanwhile, North Dakota (HB 1003) is considering how the technology could affect matters ranging from the job market to the 2024 elections.
Three states—Connecticut (SB 1103), Texas (HB 2060), and Washington (SB 5187)—have empowered panels to probe how automated systems might already be discriminating against their citizens based on criteria like race, gender, or religion.
On Wednesday, Gov. Tony Evers (D) added Wisconsin to the list of states studying AI by issuing an executive order creating an AI task force devoted to labor issues.
The newly launched probes aim to explore the burgeoning role of artificial intelligence in the operation of state government alongside possible policy responses. Lawmakers say they also represent a first step towards regulating tools in the private sector—like algorithmic decision-making and large language models, which crunch data at superhuman speeds for a variety of tasks—in the absence of action by the federal government due to congressional deadlock.
“I personally view legislating like painting houses,” said Connecticut Sen. James Maroney (D), who sponsored legislation mandating such a report. “Task forces are part of the prep work and the research is the prep work, and if you don’t do the prep work, the end product is not as good.”
Policymakers say they are wary of passing legislation on AI without a better understanding of how their own actions might limit the benefits of AI or exacerbate its downsides.
“We don’t know what we don’t know, right?” said Kuldip Mohanty, chief information officer for the state of North Dakota, at an Aug. 24 meeting on his state’s study. “How do we not create a wild, wild west?”
States Need More Information
Elected officials are putting aside their reservations about blue-ribbon panels, which have a reputation for wasting time or giving lawmakers cover to avoid acting on controversial issues.
“I know we all have an aversion generally speaking to study bills or workgroups, but I think given the magnitude of AI, the implications it has for our society, both in the business world as well as in government, I think it’s important that we study this further,” Texas Sen. Tan Parker (R) said at a March hearing ahead of the approval of a seven-member AI commission.
Parker’s bill outlined a range of concerns, including how the technology might affect the “liberty, finances, livelihood, and privacy interests” of Texans.
Lawmakers in other states will similarly rely on working groups to help craft their future AI policies.
North Dakota gave a broad mandate to its 17-member legislative panel, while instructing it to specifically investigate how automation will change “health care, effects on student learning, potential opportunities or threats to the integrity of state services” as well as elections.
Task forces in Connecticut and Washington will examine AI's possible effects on their state government operations more comprehensively, with reports on current systems and policy recommendations due by the end of next year. The Connecticut law also requires agencies there to conduct impact assessments before deploying automated systems in the future.
Minnesota is pursuing a narrower focus than the other four states by studying the state Fusion Center that facilitates data sharing between law enforcement agencies. The mere existence of automation there would be a significant development, according to Ben Feist, chief programs officer at the ACLU of Minnesota.
“They have to at least talk at minimum about AI and these social media tools. Then, at least we’ll know what we’re working with,” he said in an interview.
Government panels on AI have previously gone nowhere, especially given the lack of public interest generally attached to them, but that dynamic might change, according to Chloe Autio of the Cantellus Group, an advisory firm on tech issues. "Any activity on AI is getting people's attention, I think, no matter where it is," she said in an interview.
A lack of federal action only makes state-level commissions more important, she added.
Vermont Leads in AI Action
Vermont has pioneered, with some success, an approach to AI that other states are now pursuing.
The state released in December 2022 what appears to be the first-ever public inventory of a state’s AI assets. That information was the result of legislation approved last year to implement findings from a task force originally established in 2018. The measure also established an Artificial Intelligence Advisory Council to develop new policies with the help of a Division of Artificial Intelligence established within the state Agency of Digital Services.
A top goal of the division is to serve as a “center for enablement” that helps state agencies adopt AI with the public interest in mind, said Josiah Raiche, who is the director of the five-person Division of Artificial Intelligence.
“We have kind of two sides of our mission,” he said in an interview. “One is thoughtful adoption where we look at how we can apply AI safely and to get the benefits of it within state government. And then the other side is more of the regulatory side, so we’re initially focused on AI usage within the state and how we can do that ethically and safely.”
The catalog identified 11 state systems where AI is used for purposes like cybersecurity and data management. In addition, state ethics policies are being developed. Vermonters affected by recent flooding even got some help from automated systems coordinated by the state AI division, said Raiche.
AI might continue to develop in scary ways in Vermont and beyond its borders, but at least some controls are now in place for state agencies, said state Rep. Brian Cina (D), who sponsored the 2022 legislation.
“On a national and international level, we’re going to need to see some guidelines,” he said in an interview. “So it’s like we’re now safe. It’s just that our state government is going to be less culpable.”