9th Circ. Judge Doubts AI 'Robot Judges' Can Replace Jurists

(October 18, 2024, 9:48 PM EDT) -- Ninth Circuit Judge William Fletcher expressed skepticism Friday that artificially intelligent "robot judges" should replace jurists, saying during a conference on complex litigation ethics that judges understand how to creatively apply the law to best serve justice, and "I don't trust the AI system to break the law when it should."

The judge's comments came during a panel discussion on the future of AI at the annual conference hosted by the University of California College of the Law, San Francisco, which was attended by dozens of legal practitioners and law school students.

Judge Fletcher, a senior judge who first took the bench in 1998, recalled that the 1960s saw similar fears about the "horrors of what computers would do to us," and he acknowledged that many cases that land before the federal appellate courts could likely be resolved by AI tools, provided clerks have systems in place to check the work.

But those tools wouldn't be useful in resolving many of the court's most significant cases, Fletcher said. Social Security and immigration cases in particular would pose challenges, he said, given the complexities of the U.S. healthcare system and of the nation's asylum process, which he called broken.

"I'm not saying I'm any good at it, but I'm not at all confident that an AI system would be any better," Judge Fletcher said.

Judge Fletcher said U.S. Supreme Court precedent sets policy "in a way that computers can't" and added that there are many subtleties in the judicial decision-making process that jurists rely on to ensure that justice is served under the law.

As an example, Fletcher recalled that a colleague who once served on the Ninth Circuit would schedule oral arguments in all immigration appeals involving U.S.-born children, and that, depending on the presidential administration at the time, the U.S. Department of Justice would often choose to drop the case rather than spend additional resources on oral argument.

"I don't think the AI system would be able to do that. I don't trust the AI system to break the law when it should," Judge Fletcher said, "To take the humanity out of the system is to take away the sympathetic nerves that exist in the system."

Judge Fletcher added that even if so-called robot judges could do a better job of resolving disputes than judges, there's also the question of whether litigants would lose confidence in a system that has a computer deciding the outcome of their legal fights. He noted that litigants usually want their day in court, but it's unclear how they would react if they were given "HAL the computer" — a reference to the homicidal AI computer in the science fiction film "2001: A Space Odyssey" — and told, "Well, there's your day in court!"

Judge Fletcher also noted that the current federal court system isn't perfect and has a "serious flaw" that has been exacerbated in recent years by U.S. presidents appointing circuit judges based on political ideology. As a result, the outcome of a few "hot-button" cases has likely been determined by the political ideology of the panel of judges, he said.

"That is not a good system of justice. That's a bad system of justice," Fletcher acknowledged.

During an earlier discussion on the current state of AI and complex litigation, panelists expressed cautious optimism that AI tools could significantly improve access to justice and help millions of litigants without attorneys navigate the judicial system amid what the panelists called the nation's "pro se crisis."

Stanford Law School professor Nora Freeman Engstrom said the unmet legal need in the U.S. is "staggering and it is scandalous." Pro se litigants are already using AI tools without much success, and it's unclear when they'll truly benefit from AI advances in a meaningful way, she said.

Lieff Cabraser Heimann & Bernstein LLP partner Anne Shaver said AI tools for pro se litigants hold "tremendous promise," but she noted that funding those tools is key, and there's currently a lack of financial incentive for companies to develop useful AI tools for nonlawyers.

Shaver also noted that all AI algorithms carry some level of bias depending on the data used to train them, and that outputs from certain AI large language models, or LLMs, like ChatGPT currently appear to favor the defense bar over the plaintiffs' bar.

"In my view, that's endemic, and that's why it's so important that the plaintiffs' bar is on top of it," Shaver said.

Berger Montague PC shareholder F. Paul Bland, who was also honored Friday for his contribution to legal ethics, echoed Shaver's concerns and recalled asking a chatbot whether a plaintiff should file a consumer complaint in state or federal court. He said the chatbot's response criticized state courts and praised federal courts, even though many plaintiffs' attorneys would argue that federal court is a worse venue for plaintiffs to litigate those disputes.

The panelists also pointed out that AI tools still produce erroneous responses — dubbed AI hallucinations — roughly a third of the time. Kassi Burns, a Texas-based senior attorney at King & Spalding LLP, said that hallucination rate makes it all the more essential for attorneys to be diligent and review AI-generated output for accuracy.

Lieff Cabraser founding partner Elizabeth Cabraser added that such a high error rate is unacceptable.

"[The last time] we tolerated 33% hallucinations was in the '60s," she quipped.

Cabraser acknowledged that as a plaintiffs' attorney, she's always been skeptical of technology, but she suggested there's a "tremendous opportunity" for AI tools to be used in thoughtful ways to make mass tort and multidistrict litigation more efficient.

As an example, Cabraser recalled that plaintiffs' counsel in the Volkswagen diesel multidistrict litigation over the German automaker's emissions-cheating scandal used an AI tool created by the Federal Trade Commission to calculate the value of Volkswagen vehicles and make car owners strong offers to get their noncompliant cars off the road.

She noted that the calculations weren't simple, because many car owners didn't want to give up vehicles that failed emissions standards but were otherwise drivable.

Cabraser suggested that similar AI tools could also be incorporated into the Camp Lejeune litigation pending in North Carolina, which involves more than 500,000 claims over chemicals that tainted water supplies at a U.S. Marine Corps base.

However, she said, the courts' "fear" of AI has discouraged judges from embracing the technology. She added that the bellwether process in MDLs and mass tort litigation is inefficient, costly and slow, and that bellwether trials typically don't produce enough data points to justify the time spent on them. Cabraser suggested the bellwether process should be remade "for the 21st century," perhaps with the help of AI.

"We should be farther along by now," she said. "We were supposed to get more economies of scale than it turns out we [got with bellwether trials]. It's a human factor that, again, I think we have to grapple with."

--Editing by Rich Mills.

For a reprint of this article, please contact reprints@law360.com.
