AI-hallucinated case citations have moved from novelty to a core challenge for the courts, prompting complaints from judges that the issue distracts from the merits of the cases in front of them.
The growing burden artificial intelligence places on the courts became clear in 2025, two years after the first prominent instance of fake case citations surfaced in a US court. There have been an estimated 712 legal decisions addressing hallucinated content in court cases around the world, with about 90% of those decisions issued in 2025, according to a database maintained by Paris-based researcher and law lecturer Damien Charlotin.
“It just is metastasizing in size,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. “So, it seems like this is something that is actually becoming a widespread enough nuisance that it will merit treatment as a core problem.”
The additional stress on courts comes amid an ongoing shortage of federal judges that’s led to case backlogs and left litigants in legal limbo. Judges themselves have gotten tripped up by AI hallucinations, and two of them were called out by Senate Judiciary Chairman Chuck Grassley (R-Iowa) for publishing faulty rulings.
Judicial Resources
Judges say AI-hallucinated citations take time away from more pressing matters. Although opposing counsel often point out hallucinated cases in filings, sometimes judges are the ones to discover the faulty cases.
Judge Marina Garcia Marmolejo of the US District Court for the Southern District of Texas sanctioned an attorney whose brief cited only one actual case: the one provided by opposing counsel. The Southern District of Texas has a standing order, issued in May, cautioning attorneys and pro se litigants against using AI without checking its outputs for accuracy.
“Given that the Laredo Division is one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject,” Garcia Marmolejo wrote in July.
In the US District Court for the Eastern District of New York, Magistrate Judge Lee Dunst found that five cases cited by a plaintiff in a motion didn't exist.
The AI-generated fake cases left the court with “no choice but to survey the case law regarding attorney misconduct relating to the use of AI” rather than “resolving a routine matter of civil procedure,” Dunst wrote in an April order issuing sanctions.
Increasing Sanctions
In that New York opinion, Dunst said sanctions tended to range from $1,500 to $5,000. Offenders are now being ordered to pay thousands more.
When hallucinations first started appearing in court filings, lawyers could plausibly argue that they didn’t know AI could make stuff up. As understanding of how AI works becomes more widespread, judges have been more willing to assess higher financial penalties.
The US District Court for the District of Oregon fined an attorney $15,500 in December for citing fake cases and not being “adequately forthcoming, candid, or apologetic” about it.
“That’s hopefully going to make the firm sit up and pay better attention,” Pfefferkorn said of higher fines.
The biggest monetary penalties are coming because opposing counsel have figured out they can ask to be compensated for the time spent contesting an AI-tainted motion, now that there's precedent for reimbursing those fees.
In Illinois, a law firm and one of its partners were ordered to pay a combined $59,500 to the opposing law firm that discovered fake citations in a filing.
It might take a “ruinous” fine to get lawyers to really pay attention, Pfefferkorn said: “What is it going to take before there’s some poster child for this issue?”
Getting Tired
Perhaps another sign that the hallucinated case issue is starting to drag: Charlotin, the researcher counting cases, is getting tired of keeping up the database.
“It’s really starting to feel a bit too much,” he said.
Charlotin, a lawyer who teaches at HEC Paris and Sciences Po, was instructing his students on AI in the law and mentioned that hallucinations are a limitation of the technology. But he didn't have the data to show just how big a problem it had become.
So he decided to compile the data himself. As recently as September, Charlotin said, he was adding two or three cases a day. By December, that had grown to five or six a day. His database has been widely cited by news media and academics.
Charlotin has had to work around hallucinations when using AI in his own work, but he said that inconvenience is still worth the gains in efficiency. Even though many of the hallucinated citations popping up in courts come from pro se plaintiffs, Charlotin said AI has still been a boon for access to justice.
The 712 (and counting) decisions involving hallucinations that Charlotin has tracked have given him plenty of work, but he said it's a reasonable price to pay.
“In the grand scheme of things, I think AI is a positive for the law and for the legal profession, but because it’s not a mature technology, we have these issues going on.”
