- Banks eyeing new AI technology for credit, risk decisions
- US regulators say existing rules cover ChatGPT-like models
Recent leaps in artificial intelligence technology have raised significant questions about fair lending, fraud detection, and cybersecurity at banks, but federal banking regulators already have the tools in place to deal with many of those issues, according to industry watchers.
While AI in the financial services sector isn’t entirely new, banks are eyeing OpenAI’s ChatGPT and similar large language models.
“In financial services, the momentum has been going up over the last few years. But the slope of the trajectory has changed,” said Sameer Gupta, the leader of Ernst & Young’s North America Financial Services Organization Advanced Analytics unit.
Regulators have made it clear they expect banks to comply with fair lending, credit reporting, and other rules already on the books, regardless of the technologies lenders use.
“I don’t think you need laws or regulations or changes to [the Equal Credit Opportunity Act] simply because AI or decision models are making those decisions,” said Michael Brauneis, the global financial services industry leader at consulting firm Protiviti Inc.
Regulatory Concerns
Bank regulators are already looking at uses of new AI technologies in areas that fall under their jurisdictions.
The Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, the Federal Trade Commission, and the Equal Employment Opportunity Commission in April said they intend to enforce all antidiscrimination laws regardless of whether credit decisions are made using AI or by a person.
The Office of the Comptroller of the Currency established an Office of Financial Technology, in part to study uses of AI. The agency’s acting leader, Michael Hsu, has raised concerns about discrimination built into the algorithms that bank AI models use.
“Developing good controls for AI, especially regarding discrimination and bias, should be in the shared long-term interests of banks and consumers,” Hsu said in a March speech.
The CFPB has separately flagged the potential pitfalls presented by the use of AI in chatbots that banks deploy to handle questions from consumers.
The Biden administration has also taken a keen interest in the development of AI, including in areas that could affect banks and their consumers. The White House released its blueprint for an AI Bill of Rights in October, and the National Institute of Standards and Technology is working on its own AI risk management framework.
Those initiatives are likely to deal with big-picture issues such as the potential for AI-related job losses that are out of bank regulators’ purview, Brauneis said.
As the administration raises red flags about various elements of AI, banks are left looking for a path forward with new generative AI tools.
‘Good Use Case’
One area that is ripe for a technological upgrade—and regulatory clarity—is anti-money laundering (AML) compliance, banking experts said.
For the most part, banks already use algorithms and other computer tools to spot potentially fraudulent financial transactions and report them to federal regulators, including the Financial Crimes Enforcement Network (FinCEN).
ChatGPT and similar generative AI tools could speed up the process for both banks and regulators to go through the flagged transactions, said Hilary Allen, a law professor at American University.
“There’s a really great win-win scenario in the AML situation for improving reporting on the industry side and reviewing on the regulatory side. There’s a good use case there,” she said.
FinCEN still needs to provide standards for testing anti-money laundering compliance models, said Nikhil Gore, a Covington & Burling LLP partner.
“If you’re running parallel models over real data, each model might find different things. Does that mean one model is better than another?” Gore said.
Risk Management
Other areas may not be a good fit for generative AI models.
Banks currently use some AI tools to assist with stress testing to determine whether they have enough capital to weather financial downturns. As the technology matures, they’re likely to use those tools in other areas of capital and liquidity planning.
But there’s a risk. Because advanced generative AI tools “learn” from existing information on the internet, banks could end up with similar capital structures even when they face different risks based on their business models, Allen said.
“This is just going to take herding to the next extreme,” she said, adding that regulators should monitor the issue.
Cybersecurity and privacy are also major concerns with the increased use of AI.
Cyberattacks, such as distributed denial of service attacks targeting bank websites, are nothing new to the industry. But AI may make it easier for unsophisticated attackers to launch debilitating strikes against banks, said Matt Mittelsteadt, a research fellow at the Mercatus Center.
“It’s not new. It will be in greater volume, potentially,” he said.
Banks have the tools to defend against such attacks, and regulators already require lenders to notify consumers of a data breach or other hack, Mittelsteadt said.
Banks and regulators alike will simply have to heighten their efforts in the face of AI-generated attacks, he said.