The discussion among lawyers and academics occurred during three morning panels at the daylong symposium, which was co-hosted by the law school's Berkeley Center for Law & Technology and the Civil Justice Research Initiative, and attended by dozens of law school students, professors and lawyers.
Although AI facial recognition technology and AI algorithms used to set bail regularly arise in criminal cases, Nicole A. Ozer, the technology and civil liberties director for the American Civil Liberties Union of Northern California, said the recent regulatory push in various jurisdictions portends more civil litigation in both state and federal courts over the use of AI algorithms in the corporate setting.
Ozer noted that the California State Legislature is currently considering more than 30 AI-related bills seeking to regulate various uses of the technology, and that by September, when Gov. Gavin Newsom signs bills into law, the Golden State may have a new AI regulatory regime.
"Much more is coming in this space of AI and the courts," she said. "Almost every legislator wants to be in AI right now, some to varying degrees of making sense."
At the federal level, the U.S. Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau, the U.S. Department of Justice and the Federal Trade Commission issued a joint letter last year stating that there is no AI exemption in U.S. law, which served as a further warning to corporations that their algorithms could be targeted in litigation, Ozer said.
"If you are all thinking that AI is sort of out of the box [shielded from legal liability], it's not," Ozer said.
Lawmakers in many other states are also pushing AI-related legislation, and last month the European Union passed the Artificial Intelligence Act. The expansive new law, expected to take effect this summer, will be the world's first regulatory scheme specifically targeting AI algorithms, according to the speakers.
Regardless of a litigant's jurisdiction, the speakers agreed that American jurists and juries will likely play the largest role in determining the law of the land on a range of AI-related legal issues, given the significant legal uncertainty surrounding the use of generative AI.
Karen Silverman of The Cantellus Group, a retired Latham & Watkins LLP partner, said she expects to see more AI-related litigation in intellectual property and privacy disputes; disputes over employment decisions; contract disputes between businesses over AI-related products and services; and consumer disputes with the financial services industry.
Silverman also observed that the language used to describe AI algorithms has become "sloppy" and imprecise, and that lawyers arguing before judges will likely start unpacking common terminology in ways that highlight assumptions about how the technology works.
She added that because generative AI is developing so quickly, AI disputes that are filed this year may look "completely juvenile and ridiculous" in 2026.
"The world is going to change very quickly, community standards are going to change very quickly. ... As litigators and jurists, we are going to have to figure out how to accommodate that dynamism," she said.
University of Baltimore School of Law professor Michele E. Gilman, who runs a student clinic serving marginalized communities, said her students serve clients who are already fighting discriminatory decisions driven by AI algorithms, and that there's a general lack of awareness of how those decisions are being made.
"We see these automated systems that are shaping clients' opportunities in ways that aren't good," she said.
Gilman said companies don't necessarily disclose their use of AI algorithms, and that the disputes are wide-ranging — from adverse employment and financial decisions to denials of housing applications or Medicare benefits caused by algorithms that rely on outdated or inaccurate data. Some clients have also sought legal help over decades-old records that have never been expunged from the data feeding those algorithms, she said.
She noted that the bulk of the legal disputes end up in small claims court, where there's limited discovery and no juries, and judges aren't necessarily experienced in determining liability when an AI algorithm may be to blame.
"The judges are not getting their hands dirty and judging these algorithms," she said.
Gilman added that people exhibit a bias toward trusting automated systems over human judgment, even in the face of apparent inaccuracies.
The speakers debated whether courts should be required to dig into AI algorithms and unpack how they work in litigation, or whether judges can decide disputes without understanding the workings of the black-box technology.
Ozer suggested that the courts don't need to go deep into the technology to determine liability. However, UCLA School of Law professor Andrew Selbst argued in a presentation that courts shouldn't shy away from AI technology, and that jurists should prioritize deciding whether AI algorithms even do what they claim to do, which he noted is a common question in product liability litigation.
"A surprising amount of AI on the market is broken," Selbst said. "I don't mean that it's biased, but that it doesn't do what it says it's going to do."
Plaintiffs attorney Deborah Nelson of Nelson Boyd Attorneys said she's concerned about the prospect of a cottage industry of witnesses opining on AI-related evidence. That could create an "onslaught of really obnoxious behavior" in litigation that could go unchecked, she said, and the trend could lead to wrongful convictions or significant financial losses for plaintiffs.
For example, Nelson explained that if one side claims a piece of evidence is a deepfake, the allegation could force each party to hire its own AI experts, driving up costs and potentially sidetracking the litigation.
"I could see the aggressive use of that, and that really, really concerns me," she said.
--Editing by Alanna Weissman.
Update: This story has been updated to include additional information about the event's co-hosts.