At a regular legal clinic hosted by Glide Memorial Church in San Francisco, attorneys are experimenting with generative AI tools to help self-represented litigants navigate their civil legal issues. (Jane Tyska/Digital First Media/East Bay Times, via Getty Images)
Inside the LED-lit choir room of Glide Memorial Church in San Francisco's Tenderloin neighborhood, attorneys with a free legal clinic greeted new clients as they walked in from the noisy street outside.
At first glance, the scene looked the same as any other legal walk-in clinic around the country. But the sessions at Glide Unconditional Legal Clinic revealed something cutting-edge: attorneys were using generative AI tools to give pro se litigants quick, creative legal aid that would otherwise be hard for them to get.
One of the clinic's clients was a man facing eviction from his supportive housing unit. It took the AI tools only a few minutes to read and summarize the hundreds of pages of documents he had brought in — a task no human could have carried out in so short a time.
At the end of the session, the client walked away with a list of referrals and a plan for what to do next.
"It's pretty remarkable. Fifteen or 20 minutes to do research and write up referrals and next steps and hit all the clients' questions is not a lot of time. It's rapid fire," Nikki Endsley, an attorney with the nonprofit Lawyers' Committee for Civil Rights for the San Francisco Bay Area who co-runs the clinic, told Law360. "The way that AI can expedite and make us more efficient and therefore better advocates is really amazing."
The Glide clinic is an example of how generative AI technology can be used to expand the reach of legal aid at a moment when a staggering majority of Americans' legal needs are going unmet.
Lawyers, courts and legal scholars are split on whether generative AI can be used in ways that meet the ethical standards of the legal profession. Several cases of "hallucination" by AI tools, such as citations to nonexistent case law, have already sparked concerns and prompted some jurisdictions to regulate the technology's use in courts.
But the case for using generative AI to empower people without legal representation is nonetheless gaining strength — in part because the nation's legal needs are so vast, and in part because, experts acknowledge, consumers are already using AI tools in their day-to-day routines.
"There is a lot of controversy over whether generative AI helps or hurts pro se litigants. Generative AI does hallucinate. And we've already seen situations where even lawyers are submitting filings with fake cases generated by ChatGPT," said Miriam Kim, a partner at Munger Tolles Olson LLP who has co-led a pilot program at the University of California, Berkeley, School of Law testing AI capabilities in legal aid. "Every tool has a good-use case, and every tool has a bad-use case."
Sateesh Nori, who is testing generative AI as a possible component of an eviction defense and tenant protection clinic he runs at the New York University School of Law, said the introduction of such technology has "huge potential" to help pro se litigants, particularly in legal processes that are built on forms and letters.
"This is like the invention of fire," he said. "The types of things that small legal services office, law school clinics can now do … it's almost unlimited."
As part of the NYU eviction clinic, Nori and his team have built an AI model that is helping roughly 15 employees respond to about 50,000 requests for assistance received through the clinic's hotline. When fed information, the AI "co-pilot" produces immediate responses that help the hotline workers point pro se litigants in the right direction.
Generative AI models, Nori said, could help hundreds of thousands of people navigate intricate rule-based administrative systems in areas such as public housing and welfare assistance, giving them tools they can use to file forms that, if neglected, could result in the loss of homes or benefits. And the availability of smartphones at relatively low cost already makes such AI tools accessible to most people.
"People already have the machine in their hands to do this stuff," Nori said. "We just have to build the bridge to platforms, so that they can do it."
Testing AI Capabilities to Help Bridge the Justice Gap
A group of legal aid attorneys recently participated in a monthlong pilot program by the Berkeley Center for Law and Technology at the University of California, Berkeley, School of Law to test generative AI platforms such as OpenAI's GPT-4, Casetext's CoCounsel, Anthropic's Claude 3 and Gavel.
Kim, who co-led the program alongside Colleen V. Chien, co-director of the Berkeley Center, said it was a first-of-its-kind field study of generative AI for use by legal aid lawyers.
In one simulation during the pilot, Kim instructed ChatGPT to draft a letter from a tenant living in California demanding that her landlord return her security deposit, as required by state law.
Kim prompted the program to note that the tenant's hot water hadn't been working during the last couple of weeks of the tenancy, and asked it to cite relevant statutory authority without making up anything it wasn't sure was accurate.
ChatGPT came back with a workable draft letter that hit all the input points and was accompanied by a message: "Consult with a legal professional if you need specific legal advice or assistance."
Kim then asked the platform to regenerate the letter, prompting it to "keep it polite but make it more firm." ChatGPT obeyed the commands, producing a new letter containing sharper language and warning the landlord about possible legal repercussions if she didn't return the deposit.
"I expect prompt action on this matter. Failure to respond appropriately will leave me no choice but to seek legal recourse," the letter said.
Currently, only a handful of academic institutions are experimenting with ways to deploy generative AI to help members of the public address legal problems without the assistance of an attorney. One of them is Suffolk University Law School's Legal Innovation and Technology Lab — an experimental program that is developing technology aimed at assisting legal aid lawyers, courts and pro se litigants alike.
Some of the lab's work includes the Document Assembly Line project, which has produced open-source tools that help create mobile-friendly online court forms and pro se materials. Another tool, an AI issue spotter, allows nonlawyers to submit a summary description of a situation and instantly obtain a list of potential legal issues that might be involved, with links to legal resources.
Meanwhile, researchers at Stanford Law School's Legal Design Lab have spent the past year researching ways to build AI tools that could help legal aid organizations scale up their existing work. Vanderbilt AI Law Lab, a program at Vanderbilt Law School, is exploring similar capabilities.
A New Way for People to Help Themselves
The Glide clinic, which runs on Mondays for a full day, takes in almost anybody who shows up from the street — first come, first served — with any kind of legal issue.
Most of the clients are people who lack stable housing and are looking for help with debt, government benefits, family issues or problematic encounters with the police. Many of them have never talked to an attorney before.
Because the clinic only offers limited-scope legal aid — its attorneys don't appear in court — the main objective is to help pro se litigants help themselves. By the time they come to the clinic, some of the clients have already experimented with generative AI tools such as ChatGPT, which they can access through public library computers, Endsley said.
During consultations, which can last between 15 minutes and an hour, Endsley and fellow clinic attorney Bréyon Austin sit in front of a laptop on one side of a long table. Sitting on the other side, clients explain what brought them in. After that introduction, the clients are asked to wait downstairs in the community center while Endsley and Austin, along with pro bono volunteers participating remotely, confer among themselves, researching and figuring out the next steps for the clients.
"That's really where the AI tools come in," Endsley said.
Kim, who worked with Glide as part of Berkeley's AI pilot project, said the clinic was just one example of how generative AI can be used to benefit pro se litigants in a supervised manner.
The Berkeley study has shown dozens of possible applications for AI tools, some with better outcomes than others. The platforms were used, for instance, to write cease-and-desist letters, draft objections to interrogatories, brainstorm appellate arguments and research country-conditions information for use in asylum applications. In other cases, the AI helped with nonlegal functions such as translation.
"To the extent that these tools are democratizing legal knowledge, attorneys should be celebrating this — getting information and tools into the hands of folks who need it most, and they can use it to empower themselves," Endsley said. "This is precisely the moment to begin using AI. Whether we like it or not, the moment is here."
Helping or Hindering Access to Justice?
Given the concerns around the technology's shortcomings, attorneys will still be needed to ensure that generative AI is used responsibly, for example by fact-checking work produced on behalf of pro se litigants to guard against potential hallucinations.
"We still need oversight," Endsley said. "We have a big role to play."
Instances of attorneys getting in trouble for filing briefs citing fake AI-generated cases have already made headlines and prompted severe rebukes from judges in some cases. The problem has affected pro se litigants too. In February, the Missouri Court of Appeals dismissed an appeal filed by a pro se litigant and fined him $10,000 for submitting briefs that included citations that generative AI had fabricated.
Overall, courts have approached the proliferation of generative AI in different ways.
According to a database tracking federal and state rules on the use of generative AI maintained by LexisNexis, Law360's parent company, some courts have started requiring that attorneys disclose the use of AI in their filings. A few jurisdictions, meanwhile, have barred the use of AI entirely in the drafting of legal documents. Some courts have doubted the need for rules altogether, pointing out that attorneys already have an ethical duty to ensure that all filings are accurate.
But pro se litigants aren't subject to the same ethical and professional rules that attorneys are, experts note. For that reason, at least five federal district courts have said they don't want pro se litigants to use AI tools at all. The Eastern District of Missouri, for instance, has adopted a districtwide rule saying that "for self-represented litigants, no portion of any pleading, written motion, or other paper may be drafted by any form of generative AI."
U.S. District Judge Christopher A. Boyko of the Northern District of Ohio, for example, issued a directive requiring that "no attorney or pro se party" use AI to prepare any filing submitted to the court or else face sanctions.
Other courts have generally warned parties against the use of generative AI tools to draft legal pleadings, although they stopped short of prohibiting it altogether.
"The courts are concerned that through the use of AI, these litigants are going to start citing things that are false, such as facts that did not happen or cases that do not exist," said Stuart Levi, a partner at Skadden Arps Slate Meagher & Flom LLP who co-chairs the New York State Advisory Committee on Artificial Intelligence and the Courts.
"If I [tell the AI tool], 'Write a brief about a breach of contract that I can file with a court,' and the AI tool writes a brief and it looks all good to me, if I'm not a lawyer, what do I know? The cases could be all wrong, the cites could be all wrong. You can imagine that happening, unfortunately, a lot," he said. "So, I understand the concern there."
If the potential for misuse, even if accidental, is so high, what does that mean for the prospects of unrepresented litigants to use AI in their cases?
Levi, who has worked at the intersection of law and technology for years, said there aren't easy answers, and it's too early to tell. For example, AI technology could improve to a point where it doesn't make as many errors as have been seen so far. In that scenario, there could be some true benefit for pro se litigants, who are frequently chastised by judges for submitting briefs and other documents that are not germane to their cases or are otherwise poorly written.
"You can imagine this [technology] increases the quality of what these litigants are able to do. And that's a good thing," Levi said.
Robert Mahari, an attorney and researcher at the Massachusetts Institute of Technology who co-chairs a subcommittee of the New York City Bar Association's Task Force on Digital Technologies, told Law360 that opposing the rise of generative AI in law would squander enormous potential to help people.
"We don't need to say either go crazy with AI or block it completely. There is a middle ground," he said. "It would be unethical not to let us give people the ability to do those tools."
Stephen Gillers, a well-known legal ethics scholar at the New York University School of Law, argued that courts cannot legally forbid nonlawyers from using AI.
"They have the same legal rights to do so as they have to consult a 'how to' library book or search online," he told Law360.
The issue might be more political than it is legal.
Gillers said one way to effectively use AI to help close the justice gap would be for states to license legal technicians — people trained in AI, subject to educational and testing requirements, and bound by ethics rules — to assist underserved communities in the areas of law where the need is greatest, such as family law, consumer law, landlord-tenant law and government benefits.
AI-savvy legal aid workers would be a cheaper investment than attorneys, and some nonprofits might be able to offer assistance either free of charge or at very low cost, Gillers said.
While no specific proposals to certify legal aid workers in the use of AI have been floated around the country, the broader idea of licensing new classes of nonattorney legal professionals has encountered staunch opposition. Only a handful of states, such as Arizona, Utah and Alaska, have experimented with permitting paralegals and other trained nonlawyers to assist clients in certain areas of the law, including government benefits, eviction, domestic violence, medical debt and the expungement of criminal records.
In New York, a pending case involving the legal tech company Upsolve and a South Bronx pastor who coached low-income people on filling out forms used to defend against debt collection lawsuits will help determine whether loosening unauthorized-practice-of-law restrictions could open new paths for providing legal aid to people who need it.
Richard Lewis, president of the New York State Bar Association, told Law360 in an email that allowing nonlawyers to provide limited legal services had been "an abject failure," citing research done mostly in jurisdictions abroad that have allowed some forms of nonlawyer ownership of law firms. Lewis also cited a Yale Law Journal Forum article by Stephen P. Younger, a commercial litigator and professor at Fordham University School of Law who has argued that there is no evidence that empowering nonlawyers has helped solve the problem of legal underrepresentation.
"The New York State Bar Association as well as the American Bar Association has long-standing policy opposed to nonlawyers practicing law. Nonlawyers are not subject to educational and licensing requirements or rules of professional responsibility and ethics," Lewis said. "In addition, they do not have the same concerns over conflicts of interest or duty of effective advocacy as lawyers do."
Gillers called broad national resistance among bar associations to limited-scope licensing "unconscionable," and singled out the New York State Bar Association as among the most ardent opponents of the idea.
Lewis expressed even more concern with the idea that pro se litigants could be using generative AI to manage their cases.
"We don't believe that we can just say, 'OK, pro se litigant, get on the computer, get on AI, get hold of an AI source and implement that, and that's going to be a savior of your case," he said. "It's incredibly important that there be a certain knowledge of law, hopefully a good knowledge of law, as well as a knowledge as to how the AI search can hurt you if it is improperly implemented."
In a report published last month, the state bar association's Task Force on Artificial Intelligence said AI had provided overall benefits to society, including by lowering barriers for underserved populations seeking legal guidance.
At the same time, though, the bar association lamented that AI-powered chatbots "hover on the line of unauthorized practice of law," and said they have so far shown "mixed results" when used for legal work. The bar association also pointed to "substantial economic, ethical and existential risks" related to AI proliferation, including security risks and misinformation created by "deepfakes," and warned that the technology could widen — rather than reduce — the justice gap by fostering a two-tier legal system in which lower-income people have access to inferior AI technology compared with their richer counterparts.
Nori, meanwhile, suggested that opposition from bar associations is really about safeguarding the exclusivity of the legal profession and protecting its business interests, at the expense of generative AI's potential to help millions of people who cannot afford a lawyer.
"Today's anger over the unauthorized practice of law is misplaced. We don't need that anymore," Nori said. "AI isn't a lawyer, but if we train it, it can give probably, like, 99.9% accurate answers."
But many lawyers, including Nori, share the concern about the cost of generative AI platforms becoming a barrier to entry for low-income people.
Lisa Larrimore Ouellette, a professor and innovation law researcher at Stanford Law School, called generative AI tools an "exciting" way to increase access and equity in the justice system, but echoed potential cost concerns.
"But they also won't inherently or necessarily do so," she said. "There are ways they can make it less accessible, including if the best AI technologies are really only affordable for large private firms."
Mahari said there was a clear divide between the needs of low-income consumers and the financial interests of developers, making it unlikely that software companies would build AI technology that is affordable and easy to use.
"You have kind of an intrinsic incentives issue, because who is building pro se litigant software? There's not a huge amount to be gained out of that," he said. "We need more interaction between lawyers and technologists, if we want to build AI tools that are actually going to help with access-to-justice issues."
--Editing by Karin Roberts.
Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.