The comments came during an afternoon panel titled "AI and the Practice of Law: Ethical and Practical Issues," on the first day of the 24th Annual Berkeley-Stanford Advanced Patent Law Institute conference held at Stanford University in Silicon Valley.
The discussion before dozens of attorneys and law school students was moderated by Miriam Kim of Munger Tolles & Olson LLP, and the panelists included University of California, Berkeley School of Law professor Colleen Chien, Stanford Law School professor Lisa Ouellette and intellectual property attorney Bijal Vakil of Allen & Overy LLP.
Both professors on the panel said they have begun incorporating AI into their legal course curricula, instructing law school students on how to use AI tools like ChatGPT in their research. They said they encourage their students to critique the AI assistant's output, and they teach them how to refine their prompts to get better results.
"Everybody is going to be using these tools," Ouellette said. "The answer is not to ban them. The answer is to understand what their capabilities are ... and to use them better."
Chien added that it's been said that "it's not that AI will get rid of lawyers, but that lawyers who don't use AI won't be around anymore."
Chien began the discussion by warning that it's easy to both overestimate and underestimate the capabilities of AI tools built on large language models, or LLMs. But she said AI has the potential to increase patent quality and make patent prosecution more efficient.
"Even if they're not perfect, they're still useful," she said.
Chien noted that women and smaller companies typically have a more difficult time securing patents or getting their name on patents, and AI tools could help "level the playing field and improve patent quality."
However, Ouellette said she was more skeptical of how quickly AI tools will develop to make quality legal improvements on work products. She observed that large firms are often the ones with the data and financial resources to make the most of AI tools, potentially widening any diversity or equity gap.
Ouellette also noted that AI tools like ChatGPT are currently good at summarizing legal rules in "broad strokes," but not necessarily at applying those legal rules to novel situations or at producing "good lawyer writing," a shortcoming she said could make it difficult for patent examiners to do their jobs.
However, Vakil of Allen & Overy said his firm is already actively using AI in practice, while disclosing the use of the tools to clients.
He gave a short presentation showing the firm's Harvey AI tool, which summarized lengthy depositions within seconds, complete with key takeaways and comments, and wrote a motion to transfer citing legal rulings by U.S. District Judge Alan Albright.
"Really there's no limit to what the product can do," he said. "It does have pretty powerful capabilities."
Vakil noted that Harvey AI is a closed system that protects client data. He also said it's quicker for him to check and correct the tool's output than to have an associate do the work, which itself then requires checking.
"I personally find it much easier to check than having an associate draft it," he said.
Vakil observed that lawyers don't "know the law, they practice the law," and are supposed to evolve and grow. AI tools allow experienced attorneys to practice the law more efficiently and cheaply, he said, which will likely drive law firms to want more experienced lawyers and "not having these huge classes of lawyers without much value."
Vakil also quipped that with AI tools, there's no reason for law firms to pay dozens of associates large salaries to live in New York City.
Kim agreed that many law firms are using AI tools, but she noted that disclosure requirements from different jurisdictions and the California State Bar are still evolving, and that different judges have standing orders requiring attorneys to disclose various uses of AI in litigation.
Kim said many standing orders are vague and imprecise regarding the type of AI used, because even spell-check tools can be considered generative AI.
She also expressed concerns that if AI tools replace early-career attorneys or first-year law students in the legal industry, that may affect diversity and equity efforts and any progress BigLaw has made in diversifying the workforce.
"How does this not impact diversity and equity?" she said.
The discussion wrapped the first day of the two-day conference, which featured a keynote speech by Klaus Grabinski, president of the Court of Appeal for the European Union's new Unified Patent Court. During his speech, Grabinski noted that 135 cases have been filed since the UPC opened in June, allaying concerns that the court would be overwhelmed by a flood of litigation.
Lord Justice Colin Birss, who is the deputy head of civil justice at the High Court of Justice of England and Wales, also gave a keynote speech, mentioning his own use of AI to save him time, but also cautioning that AI should only be used if attorneys and practitioners can recognize the veracity of the AI tool's output.
"Don't use it for information that you can't tell if it's true or not," he said. "It's not meant for that. It's not as radical as it might seem."
The conference will continue Friday morning with a discussion between U.S. Patent and Trademark Office Director Kathi Vidal and Canadian IP Office CEO Konstantinos Georgaras.
--Editing by Daniel King.
Update: This story has been updated to include additional context regarding Chien's statement regarding lawyers who use AI.
For a reprint of this article, please contact reprints@law360.com.