- Assesses pros and cons of AI tools before use
- Key for identifying equity concerns with algorithms
Several states are mirroring the federal government in seeking a quantifiable way to check whether artificial intelligence tools will do more good than harm.
State lawmakers are looking to prevent AI abuses in areas such as racial discrimination, privacy violations, and even the proliferation of bioweapons.
Impact assessments are gaining favor as a specific way to regulate AI at both the state and federal levels. They can include dozens of questions about why a government agency or business wants to buy AI products, their intended use, and their potential for harm. The assessments can be used to reject or approve cutting-edge software based on the results.
Government bodies can most easily require impact assessments for their own use of AI, but elected officials across the country are also floating their potential in the private sector. At least a dozen states with existing data privacy laws already require the evaluation for government bodies or businesses using AI that affects personal data. Lawmakers in other states are weaving similar provisions into their own AI proposals.
Connecticut is currently drafting rules for state agencies seeking to use AI to preempt unlawful discrimination in public services ranging from housing to crane operation exams. Those rules follow the enactment of a law (SB 1103) earlier this year requiring agencies to conduct impact assessments before acquiring AI. That law goes into effect Feb. 1, 2024.
Pending legislation in California (AB 331), New York (AB 8129), Massachusetts (HB 1873), and other states would similarly require impact assessments from state agencies or businesses. The bills cite their respective states’ authorities to enact laws regulating civil rights, technology, and the workplace in both the public and private sectors.
“Whether it’s cars, planes, vaccines, they go through a testing process to ensure they’re safe before they’re put out into public use,” said Connecticut state Rep. James Maroney (D), who authored the law in his state. “I think the same should go for algorithms.”
“As AI is becoming ever more pervasive in decision-making that affects so many aspects of our lives, it is urgent that we establish guardrails to ensure these tools offer bias-free objectivity and do not reinforce the same stereotypes we have worked to avoid,” said New York Assembly Housing Committee Chair Linda Rosenthal (D), who is sponsoring legislation (AB 7906) to require landlords to evaluate their own automated screening systems for bias against potential tenants.
Federal Role
More states could follow the lead of the Office of Management and Budget, which released draft rules last week detailing how federal agencies ought to conduct their own impact assessments. The OMB effort is part of President Joe Biden’s executive order outlining his administration’s approach to artificial intelligence.
Impact assessments should include “quantifiable measures” for determining whether certain AI tools will lead to positive outcomes such as reduced wait times, lower costs, or minimized “risk to human life,” according to an OMB memo.
Agencies must test software in a real-world context that includes consulting “underserved communities, in the design, development, and use of the AI,” according to the memo.
If adopted, the proposed OMB rules would also require agencies to notify individuals when AI "meaningfully" influences decisions such as the denial of government benefits.
Federal agencies also would be required to examine the “provenance and quality” of data input into AI systems while conducting “ongoing monitoring,” the proposal said.
Connecticut's law similarly mandates that pre-procurement impact assessments not be the final word on an AI tool.
“We have to make sure that baked into whatever we do with this technology that we assess, prior to, during and after,” said New York Assemblyman Clyde Vanel (D), whose proposed “AI Bill of Rights” (AB 8129) would require the evaluations “be conducted for all automated systems.”
Unintended Consequences
Policymakers also are examining the unintended consequences that can arise from AI due to its dependence on data that could be inherently biased.
In one notable example, facial recognition software infamously identified 28 members of Congress as criminal fugitives, in part because of ongoing problems analyzing darker versus lighter skin colors. Researchers have found datasets for such tools often lack demographic balance.
In addition, bad actors can exploit tools built for beneficial purposes. Researchers demonstrated last year, for example, that an AI drug-development product could also generate 40,000 potential chemical warfare agents.
No Panacea
Impact assessments are no panacea, according to Rory Mir of the Electronic Frontier Foundation. They can become a “check box” for government agencies or businesses to fill before “doing something that’s actually privacy-invasive,” he said in an email.
Datasets behind technologies like predictive policing are often secret, making it more difficult for outsiders to evaluate potential dangers, Mir added.
AI works by predicting patterns beyond the oversight of humans, which means “even the people making the tools don’t have the insight to ensure their assessment is sufficient for all contexts,” said Mir.
A big concern for Ezekiel Dixon-Román, a professor at Teachers College Columbia University who studies how technology can exacerbate existing societal disparities, is that human biases might push impact assessments away from objectivity.
“The concern that I have about some of this is an adequately reliable source can become so subjective and political and ideological,” he said.
Yet even with these potential issues, impact assessments offer one clear benefit: the paperwork puts decision-makers on record if problems arise.
“It absolutely, absolutely, hands down is a mechanism for accountability,” said Dixon-Román.