Wiley Rein’s Duane Pozza analyzes the most critical areas of President Joe Biden’s executive order on artificial intelligence and how companies can adapt its key principles.
The White House’s executive order on artificial intelligence, released Oct. 30, provides a roadmap of where companies in the private sector should focus as they seek to take advantage of AI’s benefits. Here are five of the most critical areas companies should watch when managing their own development and use of AI.
Safety and Security
The executive order is particularly focused on the safety and security of AI systems. A key component of the order invokes the Defense Production Act to require companies developing “dual-use foundation models” to provide safety-test results and other reports to the Commerce Department.
More broadly, the order encourages AI safety and security assessments based on potential risks, and directs the development of standards to help assess those risks. Notably, it requires the National Institute of Standards and Technology (NIST) to develop standards for “red-team” safety testing, and the Department of Homeland Security to incorporate NIST risk management approaches into guidelines for critical infrastructure operators.
Given this focus in the order, companies developing or using AI tools should, as a next step, look closely at their safety and security controls, identify potential risks, and proactively implement measures to address them. NIST’s existing AI Risk Management Framework provides a solid foundation for starting this work.
Authentication and Transparency
A big concern for legislators and regulators alike is that AI tools can be used to create deepfakes: content that resembles reality but is actually fake. Some federal and state proposals would categorize deepfakes as deceptive if they’re used for certain purposes.
But distinguishing what is real from what is fake is a technical challenge. To help address this, the order directs the Commerce Department to develop guidance for content authentication and watermarking to clearly label AI-generated content. The White House fact sheet suggests this watermarking will serve as an “example” for the private sector, and these kinds of standards can influence what approaches companies adopt.
As the technology develops, companies also should consider implementing controls on any content they develop or disseminate, so they know what content is original and what is generated by AI.
Discrimination and Bias
Many observers have documented that AI tools, without proper oversight, can generate biased and potentially discriminatory outcomes—particularly if they are trained on biased data sets. Many regulators have emphasized that current antidiscrimination laws apply to AI when it’s used in areas like credit, employment, and housing.
The order encourages agencies to develop additional guidance for companies under their authority on how to avoid unlawful bias and discrimination. The result may be more guardrails in this area for private companies.
Companies deploying AI should watch for further federal guidance. And when deciding whether to use certain AI tools, companies should look closely at what data sets were used for training and what testing for potential bias has been done.
Privacy
On privacy, the order focuses on ways the federal government can advance the development of privacy-preserving techniques in the context of AI. Because AI models often rely on large data sets, technical methods that protect privacy can be a boon to AI development.
If privacy regulators want companies to engage in data minimization (limiting the collection or use of data to certain purposes) without undercutting the benefits of AI tools trained on large data sets, the development of such technical measures will be important.
Companies should pay close attention to privacy when using AI tools, including setting up reasonable guardrails against AI tools accessing personal data. They should also monitor further guidance on privacy-preserving technical approaches from federal agencies.
Risk Management
Over the past few years, NIST spearheaded development of a detailed risk management approach: the AI Risk Management Framework. The framework was developed collaboratively with a wide range of stakeholders, including industry, and was designed to be voluntary.
The order highlights the AI RMF and suggests much of its approach should factor into federal agencies’ procurement decisions. This will affect the use of AI tools broadly, since many tools can potentially be sold to the government.
Companies in certain regulated sectors, such as health care, will want to watch for further guidance from agencies that may set minimum risk management expectations. And companies generally should consider implementing the AI RMF as a tool to help manage AI risks.
The order covers much more ground, including competition, national security, worker considerations, and the use of AI in education and health care. These efforts will also have substantial impacts on many businesses.
Overall, companies seeking to integrate AI tools and unlock AI’s potentially enormous benefits should look to the key principles behind the executive order as they design strategies that build compliance and risk management into their AI deployments.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Duane Pozza is a partner at Wiley Rein, focusing on complex matters involving AI, privacy, consumer protection, data governance, and emerging technology.