Several states are following the federal government's lead in seeking a quantifiable way to check whether artificial intelligence tools will do more good than harm.
State lawmakers are looking to prevent AI abuses in areas such as racial discrimination, privacy violations, and even the proliferation of bioweapons.
Impact assessments are gaining favor as a specific way to regulate AI at both the state and federal levels. They can include dozens of questions about why a government agency or business wants to buy an AI product, how it will be used, and what harm it could cause. The assessments can be used to reject or approve ...