Brown Rudnick’s Matthew Richardson says companies that are integrating and deploying artificial intelligence in their services or products should prioritize board-level AI oversight.
Artificial intelligence is transforming the way businesses operate—from process automation and personalized customer experiences to predictive analytics and fraud detection. Corporate boards and regulators are scrambling to provide the oversight and regulation needed to manage the attendant risks.
Board Oversight
Every company, whether public or private, is governed by the state laws in which it’s incorporated. These laws cover various aspects of a company’s legal existence, including the fiduciary duties owed to shareholders by its directors. One of those duties is the duty of care, which requires directors to act with the same level of diligence as a reasonable person in similar circumstances.
Public companies are also governed by federal law and the rules of the stock exchange on which they're listed, which build on the state law duties (for example, by requiring board committees and setting composition requirements for those committees). Securities and Exchange Commission rules require certain disclosures about board structures and oversight roles.
Understanding AI is becoming a critical skill for directors, and AI oversight is becoming a critical responsibility for directors. Public companies, absent certain exceptions, are required to have at least three board committees: a nominating committee, an audit committee, and a compensation committee. They may have additional committees to support specific business objectives unique to their company, such as a research committee or risk committee.
Because AI touches so many functions (finance, legal, product development, and supply chain), it's difficult to assign AI oversight responsibility narrowly to a single director or committee. On the other hand, specialization can provide efficiency—requiring the full board to become AI experts may be impractical. Board-level AI oversight and risk management structures should be decided on a company-by-company basis, but the following pros and cons may assist in making such decisions:
Full Board Responsibility
- Pro: Requires all directors to assess AI benefits and risks, which may boost risk management coverage.
- Con: This may not be the most efficient use of directors’ limited time and attention.
Audit Committee Responsibility
- Pro: AI can be integrated into companies’ internal controls and is already being used by independent accounting firms for external audits.
- Con: While important, a focus on financial statement reporting accuracy isn’t the only or final application of AI. Such a focus by a board may not adequately address AI risks present elsewhere in the business.
AI Committee Responsibility
- Pro: A company gains the efficiency that specialized knowledge provides (e.g., the committee members might all be AI experts) while other board committees continue to function as usual, likely with input from the AI committee.
- Con: A company may not be exposed to AI benefits and risks to such a degree that a specialized AI committee is necessary or the best use of the directors’ time and the company’s assets.
Internal Controls
Companies are integrating and deploying AI in their services or products to increase revenue. While it’s easy to focus on the external benefits of AI, a potentially major change AI is making happens inside companies, even those not in the business of technology or external AI deployment.
These changes relate to the internal control tools used by a company to safeguard the company’s assets and ensure operational efficiency. A subset of internal controls is disclosure controls, which are designed to collect, process, and report material information accurately.
Tools are now available to apply data analytics technology (a form of AI) to support the internal control functions of a company. For example, AI can support the execution of internal audits in at least three ways:
- AI can automate routine and repetitive tasks, allowing internal auditors to focus their efforts on more complex and strategic activities.
- AI can identify anomalies and irregularities in financial transactions, helping discover potential fraud.
- AI can generate customized reports, which may increase reporting accuracy.
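The anomaly-detection use above can be illustrated with a minimal statistical outlier check. This is only a sketch: commercial internal-audit tools use far richer models and features, and the transaction data and threshold below are invented for illustration.

```python
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    """Flag transaction amounts whose z-score exceeds the threshold.

    A toy stand-in for the machine-learning anomaly detection that
    audit tools provide; real systems consider many more signals
    (counterparty, timing, approval chains) than the amount alone.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population standard deviation
    if stdev == 0:
        return []  # all amounts identical; nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

# Illustrative ledger: routine payments plus one unusually large one.
transactions = [120.0, 135.5, 110.0, 128.0, 119.5, 9800.0, 131.0, 125.0]
print(flag_anomalies(transactions))  # the outlier payment is flagged
```

In this toy example the $9,800 payment is flagged for human review; the point is that AI surfaces candidates for auditors to examine, not that it replaces their judgment.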
However, AI is no panacea for internal control processes. Large language models, one form of AI, are prone to mistakes and to presenting incorrect information as fact. The point of disclosure controls is to catch mistakes (and fraud) and ensure the accuracy of reported information.
Integrating AI into internal controls is a double-edged sword: it can make processes more efficient, but, given too much latitude, it can do more harm than if it weren't involved at all.
Given the pace of change of AI tools, companies should revisit their internal controls more frequently (e.g., quarterly or semi-annually) rather than on an “as-needed” basis, which is a surprisingly prevalent and worrisome policy.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Matthew Richardson is a partner in Brown Rudnick's cybersecurity and data privacy and digital commerce groups.