Responsible Use Of AI Could Bridge The Justice Gap

By Margaret Hagan and Amy Groff | December 6, 2024, 1:56 PM EST

When facing a legal problem like a divorce, eviction or a debt collection lawsuit, most people can't get affordable or accessible help to deal with it.

Low-income Americans receive no or inadequate legal help for 92% of their substantial civil legal needs, according to the Legal Services Corp.'s 2022 "Justice Gap" report.[1] Despite the dedication and hard work of legal aid organizations, court help centers and pro bono attorneys, that figure has grown from 86% five years earlier.

The 2021 "Assessing Justice Needs Across the United States" study, conducted by the Hague Institute for Innovation of Law and the Institute for the Advancement of the American Legal System, likewise found that Americans across income groups face substantial legal problems and struggle to get assistance to resolve them.[2] These needs include critical issues like housing, healthcare, education, income and safety.

And the World Justice Project's most recent Rule of Law Index, published in October, ranked the U.S. last among the 47 wealthiest nations in affordability and accessibility of civil justice.

Our current system simply is not working to close this gap in access to justice. Artificial intelligence presents an opportunity to reverse this trend and address our country's access to justice crisis in a way that has not been possible in the past.

The inaugural AI + Access to Justice Summit at Stanford Law School in October brought together a group of frontline legal services providers, technology companies, academics, regulators, philanthropists, and law firm pro bono and innovation leaders to explore how AI can be used to tackle this issue.[3]

The group also saw demonstrations of transformational AI technologies from major tech companies, as well as smaller niche products; learned about current use cases from early adopters across the country; designed and evaluated their own potential AI pilots; and engaged in a productive discussion about how to collaborate and gain support to make AI innovation happen in this space.

This article explores takeaways and insights, gleaned from the summit and beyond, on how to ensure that AI advancements do not widen the justice gap and leave behind the most vulnerable among us, but are instead used responsibly and effectively to help those in need.

Collaboration

A key theme that emerged from the summit is the need for collaboration so that AI innovation can produce impactful change at the scale required to close the justice gap.

While some legal services organizations have embraced the potential of AI and are leading the way with cutting-edge AI pilots, many are underfunded and understaffed, and do not currently have the bandwidth to investigate, develop and implement generative AI on their own.

This is an area where law firms and corporations can provide support and share experience and resources from their commercial use and development of generative AI. Several tech companies are already offering free or discounted licenses, as well as related product support, for legal aid or pro bono uses.

Such partnerships among legal aid organizations, law firms or corporate legal departments, and technology vendors can benefit all involved.

It is also important for legal services organizations and similar groups — such as courts' self-help programs for pro se litigants — to share their experiences and learn from each other.[4]

This could include information about a new AI tool to help perform a specific legal task; a new or fine-tuned AI model for use in the legal domain; a benchmark or evaluation protocol to measure the performance of AI; a policy or regulation strategy to protect people from AI harms and encourage responsible innovation; or a proposed collaboration or network initiative to build a stronger ecosystem of people working on AI and justice.
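To make one of those items concrete, consider the benchmark idea: a shared evaluation protocol can be as simple as a fixed set of questions, each paired with elements a correct answer should contain, plus a scoring rule. The Python sketch below is a toy illustration only; the questions, required keywords and keyword-based scoring are invented for this article and are not an established legal AI benchmark.

```python
# A toy illustration of a shared evaluation protocol: fixed questions,
# elements a correct answer should contain, and a scoring function.
# All questions and keywords are hypothetical placeholders.

BENCHMARK = [
    {"question": "How much notice must a landlord give before filing for eviction?",
     "required_keywords": ["notice", "days"]},
    {"question": "Can a tenant be locked out without a court order?",
     "required_keywords": ["court", "order"]},
]

def evaluate(answer_fn) -> float:
    """Return the fraction of benchmark questions the tool answers acceptably."""
    passed = 0
    for item in BENCHMARK:
        answer = answer_fn(item["question"]).lower()
        if all(kw in answer for kw in item["required_keywords"]):
            passed += 1
    return passed / len(BENCHMARK)

# Example: score a stub "tool" under test.
stub = lambda q: ("A landlord generally must give written notice, often 30 days, "
                  "and may not lock out a tenant without a court order.")
print(f"Pass rate: {evaluate(stub):.0%}")
```

A shared harness like this, even a crude one, lets different organizations compare tools on the same footing before deploying them.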

This spirit of cooperation, with concrete steps to facilitate collaboration, will advance the common mission to provide greater access to justice.

Regulation

Another issue that will have a significant impact on progress in this space is regulation.

Many legal aid organizations are interested in AI tools that could serve as a digital partner to their lawyers and paralegals, as well as AI that could provide services directly to the public. But they struggle to understand the rules of the road when it comes to AI regulation and ethics.

These questions include how licensed, practicing lawyers can ethically use AI in their legal work, as well as what constitutes the unauthorized practice of law, and impermissible sharing of legal fees with nonlawyers, when companies deploy tools that are said to provide legal advice to the public.

As for lawyers' own use of AI, the current rules of professional conduct already cover it, and many believe those rules are sufficient to address it.

Regulation relating to the unauthorized practice of law and fee sharing presents a more difficult question, and perhaps a greater obstacle to advancements.

One way to approach this challenge is through regulatory sandboxes that relax certain rules, while implementing appropriate guardrails, to allow exploration of different methods to deliver legal services. Regulatory sandboxes shift regulation from rules-based approaches — like the rule against the unauthorized practice of law — to risk-based approaches by measuring what risks a service involves, and what consumer protection issues arise.

For example, Minnesota, building on the example of Utah, is moving forward with plans to develop a sandbox aimed at allowing generative AI to help self-represented litigants. The Minnesota sandbox will consider what specific risks different legal AI projects might entail, and base regulation around mitigating and tracking them.

Short of a formal sandbox, Arizona eliminated its version of Rule 5.4 of the Model Rules of Professional Conduct, which prohibits a lawyer from sharing legal fees with a nonlawyer or forming a partnership with a nonlawyer if any of the partnership's activities consist of the practice of law.

This can open the door to partnerships between lawyers and technologists, fostering responsible innovation.

Innovation

A few legal issue areas that are particularly ripe for AI innovation include housing, immigration and reentry work. We have seen AI tools or pilots in these areas and others.

Legal services providers may start with smaller, lower-risk projects that are scalable. For example, in the housing context, an initial AI project may focus on helping individuals who need a landlord to make repairs — an area of great need — before tackling something like eviction, which is another area of great need, but with higher stakes and more complicated workflows.

Some organizations are using AI to assist legal aid lawyers in their day-to-day work. One example is a tool that retrieves relevant legal authority for lawyers staffing a hotline; the lawyers can review and verify that authority, then use it to answer consumers' questions more quickly.
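As a rough illustration of the retrieval step behind such a hotline tool, here is a minimal Python sketch. The authorities, citations and word-overlap scoring are hypothetical stand-ins; a production system would search a real indexed corpus or use an embedding model, and the lawyer, not the tool, confirms that each result actually applies.

```python
# A minimal sketch of retrieval over a small, hypothetical corpus of
# legal authority. Word overlap stands in for a real search index.

from collections import Counter

AUTHORITIES = [
    {"citation": "State Code § 12-101",
     "text": "A landlord must maintain the premises in habitable condition."},
    {"citation": "State Code § 12-205",
     "text": "A tenant may deposit rent with the court if repairs are not made."},
    {"citation": "Smith v. Jones (2019)",
     "text": "Notice of needed repairs must be given to the landlord in writing."},
]

def overlap(query: str, text: str) -> int:
    """Score a document by how many words it shares with the query."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Return the top-k authorities for the lawyer to review and verify."""
    ranked = sorted(AUTHORITIES, key=lambda a: overlap(query, a["text"]),
                    reverse=True)
    return ranked[:k]

for hit in retrieve("my landlord will not make repairs to the apartment"):
    # The lawyer, not the tool, confirms each authority actually applies.
    print(hit["citation"], "-", hit["text"])
```

The design point is the human checkpoint: the tool surfaces candidates quickly, while review and verification remain with the lawyer.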

Yet another approach is to deploy AI for internal administrative functions, freeing up resources the organization can devote to its mission. An example of this is a chatbot that answers questions about benefits and personnel policies for the organization's employees.

Finally, some tools provide basic legal information or resources to the public in response to questions they pose online.
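Tools like these typically need guardrails that keep them on the legal-information side of the unauthorized-practice line discussed above. The sketch below shows one simple, hypothetical approach: route questions that appear to seek individualized advice to a human referral, and answer only with general information. The trigger phrases, topics and canned responses are invented for illustration.

```python
# A simple, hypothetical guardrail for a public-facing tool: questions
# that appear to seek individualized advice are referred to a human;
# the tool answers only with general legal information.

ADVICE_MARKERS = ("should i", "what should i", "can i sue", "my case", "will i win")

GENERAL_INFO = {
    "eviction": ("General information: an eviction case usually begins with a "
                 "written notice; see your court's self-help center."),
    "repairs": ("General information: many states require landlords to keep "
                "rental units in habitable condition."),
}

def respond(question: str) -> str:
    q = question.lower()
    if any(marker in q for marker in ADVICE_MARKERS):
        # Situation-specific advice is out of scope; hand off to a person.
        return ("This tool provides general information only. For advice about "
                "your situation, please contact your local legal aid office.")
    for topic, info in GENERAL_INFO.items():
        if topic in q:
            return info
    return "No matching topic found; try your court's self-help resources."

print(respond("Should I withhold rent until my landlord makes repairs?"))
```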

Increasingly, grants are available for AI and technology-focused projects like these. Last year, the Legal Services Corp. awarded more than $5 million in technology innovation grants, and almost $6 million this year, bringing the total awarded under this program to more than $91 million since its inception in 2000.

As generative AI technology continues to develop, gaining capabilities we cannot even imagine today, support for projects benefiting those in need will be critical.

Conclusion

The scale of the access to justice crisis requires new thinking and new approaches to delivering legal services. The past 10 years have shown that more pro bono hours alone will not solve this problem.

Responsible use of generative AI has the potential to transform this landscape and truly provide justice for all, especially if legal aid, court, pro bono, technology and law firm leaders can collaborate on this vision.



Margaret Hagan is the executive director at the Stanford Law School Legal Design Lab.

Amy Groff is a partner and vice chair of the firmwide pro bono committee at K&L Gates LLP.

"Perspectives" is a regular feature written by guest authors on access to justice issues. To pitch article ideas, email expertanalysis@law360.com.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] https://justicegap.lsc.gov/.

[2] https://www.hiil.org/research/assessing-justice-needs-accross-the-us/.

[3] Margaret Hagan, Executive Director of the Legal Design Lab at Stanford and co-author of this article, led the event, and Richard Susskind, OBE (who was recently appointed Special Envoy for Justice and Artificial Intelligence, supporting the 56 Commonwealth countries) provided inspirational opening remarks about the future.

[4] Stanford's Legal Design Lab has rolled out a central repository to collect proposals, ideas, and early-stage pilots for how AI could close the justice gap. See https://airtable.com/appsifBhXrXloHGV8/pagtkg0uWYZ83sIJx/form.
