Robot Arbitrators Spark Conflicts Over AI in Dispute Resolution

Feb. 12, 2026, 10:09 AM UTC

AI-assisted arbitration technology is increasingly absorbing tasks human arbitrators traditionally perform, raising novel legal questions about how much automation is permitted in private dispute resolution without violating century-old federal arbitration law requirements.

The American Arbitration Association, a leading provider of alternative dispute-resolution services, is promoting AI-enabled workflows to help speed up and lower the cost of arbitration by analyzing thousands of documents, synthesizing evidence, and drafting proposed settlements, while keeping human arbitrators as final decision-makers.

But as AI advances further in structuring legal reasoning and case outcomes, states are likely to intervene, as California is a step closer to becoming the first to prohibit arbitrators from delegating decision-making to generative AI. This comes as practitioners and legal scholars anticipate new procedural questions regarding AI use that will test the Federal Arbitration Act’s core provisions, including disclosure obligations, arbitrator independence, and the enforceability of awards when parties voluntarily agree to machine-only decisions.

“If you look at the actual terms of the act that was passed in 1925, I guarantee you that they were not thinking about robot arbitrators at the time,” said Amy Schmitz, a professor at Ohio State University’s Moritz College of Law. “It basically says to enforce arbitration agreements and awards according to their terms.”

“If that’s what you’ve agreed to, then does that mean” an AI-assisted arbitration decision “should be enforceable under the Federal Arbitration Act?” she said.

Arbitration isn’t clearly defined in the law, though US Supreme Court precedents describe it as a process involving a written contract, consent, and private dispute resolution before a neutral third party, legal scholars said.

Enforceability under the FAA is questionable “if it’s pure AI and you don’t have a human in the loop who’s ultimately signing off on that arbitration award,” Schmitz added.

The debate over the technology’s role in arbitration emerges as the legal industry increasingly becomes a hotbed for AI misuse. Several attorneys have been disciplined for using tools that generated non-existent case citations.

Judges, too, have faced backlash for AI-related errors in rulings.

‘AI Arbitrator’

The AAA’s recent launch of its ‘AI Arbitrator’ tool to quickly resolve low-value, documents-only construction cases—involving issues like contract review, project design, and worker safety—marks a major shift for the traditionally risk-averse legal industry, even though its use is limited and participation is voluntary.

AI Arbitrator assesses the evidence and merits and “uses legal reasoning” to draft a recommended award; a human arbitrator then reviews the draft for accuracy before issuing a final, binding order, the organization said. The AAA anticipates the tool will eventually expand into higher-value, more complex areas like consumer and employment law.

With AI Arbitrator, case time can be reduced from 60-75 days to 30-45 days, the AAA said.

The AAA’s guidance on AI use by arbitrators and lawyers requires independent human judgment, verification of outputs against trusted sources, and disclosure of tool use to parties.

Meanwhile, its competitor, JAMS: Mediation, Arbitration and ADR Services, offers an AI-powered transcription service that provides real-time drafts, but it has issued guidelines only on handling cases that involve AI systems.

Early Cautionary Tale

LaPaglia v. Valve was the first case to seek judicial clarity on whether AI-assisted drafting amounts to an arbitrator outsourcing his role in a way that affects an award’s validity, legal observers said.

The plaintiff accused an AAA arbitrator of exceeding his authority by consolidating claims without consent and using ChatGPT to decide the dispute in 2024. A California federal court in December dismissed the petition for lack of jurisdiction, as there was no federal question and the amount in controversy didn’t meet the $75,000 threshold for diversity jurisdiction.

It’s unclear how widely arbitrators use AI because of the process’s confidential nature, but Valve indicates potential disputes are on the horizon, said Sarah Reynolds, a partner at Kaplan & Grady LLC.

Eventually, consensus on best practices will develop, “and regulation and governance will catch up,” she said. “Comfort levels will increase as AI use will start to broaden over time.”

But artificial intelligence tools are likely to face resistance in labor arbitration, governed by the National Labor Relations Act, where human judgment is even more crucial for resolving complex issues such as collective bargaining, workplace rights, and due process that affect workers’ rights and livelihoods, Schmitz said.

“There’d be even more pushback, especially when you look at what’s at stake,” she said.

Worker-side lawyers already view FAA employment arbitration as being structurally tilted toward employers, and the use of AI heightens concerns about bias and inequalities, legal scholars said.

Judicial Limits

Risks of due process violations, improper delegation, and algorithmic bias in AI-assisted arbitration also create potential grounds to challenge awards in court.

But the FAA severely constrains courts’ ability to overturn awards, permitting vacatur only if arbitrators exceed their powers, engage in misconduct, or show bias, said David Horton, a professor at the UC Davis School of Law. Courts generally cannot examine arbitrators’ reasoning, he said.

Horton doubts a hallucinated citation alone “would be enough.”

Unlike federal courts, states regulate key parts of arbitration through ethical rules, arbitrator licensing, and bar rules on attorney conduct.

State AI laws mainly cover employment, consumer protection, and privacy issues.

But the California Senate’s Jan. 29 approval of a bill barring arbitrators from delegating decision-making to generative AI indicates a focus on arbitration is on the horizon. The measure is now before the Assembly for hearings and a possible vote before August. Other states are expected to follow suit.

A categorical ban on AI use in arbitration risks federal preemption issues, unless the Supreme Court rules that automation is prohibited, Horton said. That’s because current high court precedent bars state policy that “interferes with fundamental attributes of arbitration and thus creates a scheme inconsistent with the FAA.”

“I could imagine a state saying, ‘Fully automated arbitration is never permissible, or it’s only permissible subject to these fairness or transparency guidelines,’” Horton said.

To contact the reporter on this story: Khorri Atkinson in Washington at katkinson@bloombergindustry.com

To contact the editors responsible for this story: Genevieve Douglas at gdouglas@bloomberglaw.com; Rebekah Mintzer at rmintzer@bloombergindustry.com
