Stanford Prof On Using Legal AI To Help Real People

By Marco Poggio | May 3, 2024, 7:03 PM EDT

Margaret Hagan, a Stanford Law School professor working on tech-driven solutions to problems in the justice system, said she has little doubt that artificial intelligence, and generative AI in particular, can improve outcomes for ordinary people who interact with the courts.

[Headshot of Margaret Hagan]

But to do it right, Hagan said, legal experts and engineers must directly engage with the frontline stakeholders who deal with access to justice issues day in and day out. That is the focus of the Legal Design Lab, a project based at Stanford that she founded 11 years ago and has directed since.

As discussions over the risks and benefits of generative AI continue to play out, Hagan said, the legal community should embrace the technology and help shape it in ways aimed at improving participation in and interaction with the justice system.

"We have to find strategies that are proactive, to try to make AI as good as possible, as likely to help people, and avoid the harms as much as possible," Hagan said. "It is here, and it's here to stay."

In a recent interview with Law360, Hagan talked about the mission of the Legal Design Lab, its approach to AI research, and what she thinks are the best ways to test new technologies with access to justice in mind. Responses have been edited for length and clarity.

What is the Legal Design Lab?

We are a team — we have both staff and many students — who work on legal challenges together. The fundamental question is, "How do we make the justice system more accessible, more empowering for the people using it, and ultimately more helpful for their life outcomes?"

For example, we've been working a lot in the landlord-tenant and eviction space over the past several years. We work in partnership with frontline legal providers. We've partnered with the California courts, the Los Angeles Superior Court, the NAACP in South Carolina, the National Center for State Courts, and many groups that are providing services and administering the justice system.

We do hands-on design and legal research work. We run workshops where we do user research, interviewing litigants, advocates, and other people in the system. It's a diagnosis phase where we find the major problems, frustration points, and opportunity areas. Then we go into our creative work of brainstorming possible solutions.

That could mean redesigning paperwork, like summonses and complaints, to make it more understandable. It could mean changing the rules of the court and how the system is run, the deadlines, or how the courthouse is physically set up. Or it could be technology, for instance text messages, websites, and interactive guides.

Now we're exploring AI. In particular, we're looking at technology that can help a person understand this crazy legal system, be more strategic about what actions to take, and follow through on the many deadlines and tasks they have to complete in order to really participate in their case.

How is the lab experimenting with generative AI?

We've taken the past year to do preliminary research and lay some groundwork before we actually start building new AI tools. Since last June, we have interviewed members of the public about whether and how they would use generative AI for legal problem-solving.

We've given them fictional scenarios, like: "Pretend that you're renting an apartment, and your landlord left an eviction notice on your door. Would you use AI to respond to it? And would you like to try it out? And can we watch over your shoulder?"

What we found from that preliminary research is that there actually is a lot of interest from many of the members of the public we've interviewed. It's not universal, but many people said that they are excited about the potential of AI for dealing with legal problems and that they would likely want to use an AI tool. And we've heard similar cautious enthusiasm from the frontline legal help service providers. A few of those groups are using AI already.

That indicated that artificial intelligence is not just a hype cycle, but rather that there is promise there.

Now our lab is entering into a new phase of work where we are going to partner with different frontline providers to build AI demonstration projects, and then evaluate them rigorously to see if they can live up to the quality standards that the experts say they need to be safe.

Can you elaborate on the "hype" aspect of AI? Have other legal tech innovations fallen short in the past?

We don't want to rush towards a new technology just because there's hype and headlines. In the past, there have been news hype cycles around previous technologies, but there just hasn't been uptake by the public or by frontline service providers who are really crucial.

We really wanted to get more knowledge from possible users, meaning members of the public, about whether they even want to use AI tools. And we did research with frontline legal help providers — legal aid attorneys, court staff who run the help centers, people who run hotlines and chatlines — to understand if the community feels an appetite and an openness to AI.

What are your thoughts on how the use of AI implicates rules on the unauthorized practice of law?

This is a hot topic, both in academic and in practitioner circles: how to regulate AI, ensure safety, and apply traditional rules, like those on the unauthorized practice of law, that were made before this wave of artificial intelligence. The most productive conversations I've been having lately have been about developing a regulatory strategy and about how to improve the quality of AI's performance.

If we acknowledge that AI is not going to go away or be eliminated, then what is the approach to make sure, first of all, that consumers are protected, but also that they can get the benefit that this new wave of technology offers? There is potential for justice that is accessible and quite helpful at scale. We have to think about strategies, not just slow it down or chill it or try to make it go away, because we can't.

That means regularly and systematically evaluating the AI models that members of the public are most likely to be interacting with: ChatGPT, Gemini, and other brand names that we know people are going to be using more and more. We need to audit them systematically and encourage the companies running those models to make them as good as possible for legal help.

We need explicit, well-defined, engineer-friendly benchmarks for quality, because currently we don't often talk about measuring quality in a clear way.

We need to be much clearer about what makes something good or bad quality, helpful or harmful. And if we can define those better, and then rank the most popular AI platforms on new standards, we can hopefully see a cycle where we limit problems like hallucinations and increase good-quality performance.

What kind of metrics could be used to measure whether an AI platform is working properly?

We've been doing interviews with people who are performing the same kinds of tasks that generative AI is going to be doing, asking them to help us define clear quality criteria.

Headlines about AI's quality problems are mainly about the hallucination of case law. But what we're finding is that most normal people using AI tools are not using them for extensive legal research or the kinds of functions a corporate lawyer or a law student would use them for. Most people are actually using AI to ask questions about their legal problems, or about rules, deadlines, and maybe about their rights or what the laws involved in their situation are.

So, one of the "good things" we want to see from AI tools is legal analysis, meaning that we want them to spot issues, identify a person's needs, and spell out the rules, deadlines or laws that apply. We also want AI to give people a clear plan of action: steps to take, a list of services that can help them, and forms and digital tools they can use. That's what people really want.

On the "bad things" side, we want to make sure that there are no misrepresentations — imaginary laws, or cases from the wrong jurisdiction or that are no longer applicable, for instance. We also want to make sure there's no bias, toxicity, or hateful or offensive language.

What do you think are the best environments to test generative AI technology for potential access to justice benefits?

We have to do it in partnership with communities and with the providers. That means, practically, that we need to emphasize safety and consumer protection.

We're going to see a lot of experiments on the behind-the-scenes tasks. For instance, that could be analyzing questions that are being asked by clients on hotlines, and then giving a legal expert or pro bono volunteer an AI tool they can use as their co-pilot to provide answers to the clients. Or it could be finding solutions with legal aid attorneys and court help workers for better screening and triaging cases.

I think there's a lot more hesitancy about putting AI tools directly into the hands of consumers because of the lack of human review.

My prediction is that there will be many legal aid groups looking to engage in exploratory AI research and development and other kinds of projects. When the Legal Services Corporation awards Technology Initiative Grants, I believe we'll see a huge set of new projects being launched, and hopefully more university partners doing research along with it. We're still at that kind of early phase.

--Editing by Peter Rozovsky.

Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.
