Two federal judges blamed faulty rulings on the use of artificial intelligence tools by staff members, raising questions about how closely they scrutinize documents issued under their names.
US district judges Julien Neals in New Jersey and Henry Wingate in Mississippi admitted to the AI errors in letters to the Administrative Office of the US Courts. The two missives, sent Oct. 20 and 21 in response to questions from Senate Judiciary Committee Chairman Chuck Grassley (R-Iowa), were reviewed by Bloomberg Law.
The mistakes raise concerns about the judiciary that can’t be excused by the advent of generative AI, said Bruce Green, a professor at Fordham University School of Law. “The judges’ excuses raise the question of whether judges are regularly publishing draft opinions,” he said.
The mistaken rulings and the judges' responses to them show that courts can expect the same scrutiny of AI use that judges themselves have brought to bear on lawyers practicing before them. Several lawyers have been sanctioned for faulty use of AI in crafting filings.
Judge Neals said in his letter that a law school intern in his office used ChatGPT to perform legal research, resulting in a June 30 order that contained case quotations that didn't exist. The intern didn't have access to confidential or non-public information when using the AI tool, he said.
Judge Wingate said his law clerk used the AI tool Perplexity as a drafting assistant, resulting in a July 20 temporary restraining order that referred to parties, allegations, and quotes unconnected to the case. Wingate said the clerk didn’t input any confidential or non-public information about the case into Perplexity.
“It was a draft that should have never been docketed,” Wingate wrote. “This was a mistake.” He added that there was a “failure to put the draft opinion through the final review process.”
Neither judge responded to requests for comment Thursday.
Judicial Accountability
Whether drafted with AI or traditional research, judges are accountable for making sure the citations in their decisions are real, said Stephen Gillers, a professor at New York University School of Law.
“The judge has to read the case which they cite,” Gillers said. “If the judge is citing a case, whether the case comes from AI or a clerk doing traditional research, the judge should read that case.”
Green, the director of Fordham’s law and ethics center, said the judges’ mistakes raise questions as to how often they are docketing unverified drafts. Was it “just an incredible coincidence that on the rare occasion that two judges inadvertently released draft opinions, the drafts misused generative AI?”
Judge Wingate said in his letter that, moving forward, all draft decisions in his chambers must undergo independent review by a second law clerk before they are submitted to him. He also said that all cases cited in an order must be accompanied by printouts of those cases.
Judge Neals said his intern's use of ChatGPT violated his chambers' policy barring generative AI in legal research and the drafting of orders. He said he has since committed the policy to writing; previously it had been a verbal understanding.
“I have taken preventative steps in my chambers,” he said.
But banning AI, a "useful research tool," is an overreaction, Gillers said, especially considering how common its use has become in the practice of law.
“What the judge should say is learn how to use AI, use it carefully,” he said. “The judge who bars use of AI under any circumstance is misguided.”
Senate Investigation
Sen. Grassley began an investigation after both judges rescinded and replaced the rulings that lawyers in the cases had flagged as problematic. He doesn’t have a specific recommendation beyond asking courts to ensure the rights of litigating parties aren’t being trampled by new technology.
“The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines,” Grassley said in a statement. “We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy.”
The judges sent their letters responding to Grassley's questions to Robert Conrad, director of the Administrative Office of the US Courts and a former judge of the US District Court for the Western District of North Carolina. His office serves as the liaison between the judiciary and members of Congress.
In his own letter, Conrad included recommendations on AI use from a task force he convened earlier this year. The interim guidance "cautions against delegating core judicial functions to AI," such as case adjudication, especially when it comes to novel legal questions.
The guidance also says that users should independently verify all AI-generated output, “and it reminds judges and Judiciary users and those who approve the use of AI that they are accountable for all work performed with the assistance of AI.”
The Administrative Office of the US Courts did not immediately respond to a request for comment.