Love it or hate it, the growing adoption of generative artificial intelligence tools has given society no choice but to grapple with their novel challenges, and in 2024, trailblazing litigation over AI's impact on intellectual property, civil liberties and privacy will be hashed out in courtrooms across the country.
In 2024, courts will be tasked with weighing in on matters of first impression in litigation over the development and use of generative AI — the umbrella term for models or algorithms that can create content from the data on which they are trained.
Generative AI lawsuits are beginning to pile up, and so too are novel legal questions: Are tools such as ChatGPT trained and deployed lawfully? Does their output constitute intellectual property? Who can be held liable for any harm caused by their output?
The answers to these and other questions about generative AI will come before judges in the year ahead, and those answers are sure to have broad implications across industries and across society.
Here, Law360 spoke with legal experts about the AI litigation likely to create the biggest waves in 2024, and what's at stake.
Algorithmic Bias
While many companies have embraced AI, not everyone has seen the benefit.
In 2023, Derek Mobley, a California resident, brought a putative class action claiming the human resources software provider Workday uses a discriminatory AI-powered hiring tool.
Mobley claims he received rejections from dozens of jobs posted by employers using Workday's AI-powered applicant screening software because he's Black, older than 40 and has anxiety and depression. He says Workday's screening software exhibits algorithmic bias against African Americans who are over 40 and have a disability, in violation of Title VII of the Civil Rights Act, the Age Discrimination in Employment Act and the Americans with Disabilities Act. Workday denies the allegations.
The Workday case presents new legal questions and could push the courts to provide early answers about whether vendors can face liability for AI products that yield biased results.
Peter Schildkraut, co-leader of the technology, media and telecommunications industry team at Arnold & Porter Kaye Scholer LLP, told Law360 that he's watching to see if this case survives a motion to dismiss.
"That would be significant, because employment discrimination statutes typically subject employers, employment agencies and labor unions — but not their technology vendors — to liability for violations," he said.
Mobley isn't the only one who claims harm caused by an AI tool.
The estates of Gene B. Lokken and Dale Henry Tetzloff — both of whom were insured by UnitedHealth prior to their deaths — claim UnitedHealth Group Inc. uses a flawed AI model, called naviHealth predict, to determine coverage criteria for patients.
The putative class action, filed in November 2023 against UnitedHealth in Minnesota federal court, alleges that the insurer knowingly used an AI tool with a high error rate to override physician recommendations and to deny elderly patients care owed to them through Medicare Advantage healthcare plans.
Lokken's and Tetzloff's estates argue that UnitedHealth's AI model determines patients' coverage criteria for post-acute care settings with "rigid and unrealistic predictions for recovery" and overrides what an actual physician recommends, purportedly resulting in the denial of recommended and needed care. The country's largest health insurance company then "bank[s] on the patients' impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions," according to the complaint.
UnitedHealth Group has not yet responded to the complaint, but it told Law360 in mid-December that "the naviHealth predict tool is not used to make coverage determinations."
"The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home. Coverage decisions are made by medical directors and are consistent with [
Centers for Medicare & Medicaid Services] coverage criteria for [Medicare Advantage] plans and the terms of the member's plan," UnitedHealth Group said. "This lawsuit has no merit, and we will defend ourselves vigorously."
Schildkraut said this case "may begin to define the circumstances under which companies using AI systems are liable for the systems' mistaken decisions."
As courts begin to resolve these AI-focused cases, Schildkraut said, there will be greater clarity on how existing laws do and do not address the new fact patterns arising from AI tools.
"And legislators will have greater clarity on the gaps they will need to fill," Schildkraut said.
Another discrimination case to watch is that of Illinois residents Jacqueline Huskey and Riian Wynn, who say State Farm's claims processing algorithms create discriminatory outcomes for Black homeowners in violation of the Fair Housing Act.
In a 2022 putative class action, Huskey and Wynn claim that State Farm's automated claims review process incorporates "historically biased housing and claims data" that led to their property damage claims being delayed, scrutinized more heavily and ultimately covered to a lesser degree than those of their white neighbors.
Huskey and Wynn's complaint says that State Farm "does its best to keep the nature of its claims processing methods confidential," but that they believe its automated claims review process relies on AI, pointing to the insurer's career page, which states that it recruits and hires employees with a background in data analytics to "turn data into actionable insights by leveraging a combination of Natural Language Processing, Machine Learning, Artificial Intelligence, or other data science tools and concepts."
State Farm, which did not respond to a request for comment, moved to dismiss the suit in May, asserting that FHA disparate impact liability has never been applied to insurers, and that even if the claims were cognizable, they'd be barred by the McCarran-Ferguson Act, which gives states the authority to regulate the business of insurance.
In September, an Illinois federal judge trimmed the claims but allowed the discrimination case to continue.
With the new year will come new class actions asserting improper use or development of generative AI solutions, Chuck Hollis, a partner at Norton Rose Fulbright who focuses on technology and AI, told Law360.
"As more and more companies build, train and deploy generative AI solutions in 2024, they will increase their liability and exposure to class action claims related to data privacy, discriminatory actions and decisions, civil rights violations and related claims," Hollis said.
IP Creators Sue Big Tech
John Grisham's latest legal thriller takes on artificial intelligence, but this time, it's not a work of fiction.
In the last half of 2023, Grisham and dozens of other writers filed lawsuits alleging tech companies violated their copyrights by using their written works to train the large language models, or LLMs, that power generative AI tools, raising novel legal issues that will be up to courts to decide in the year ahead.
Grisham and fellow members of the Authors Guild sued OpenAI and Microsoft in New York federal court, claiming that training ChatGPT on the authors' written works is copyright infringement. Author Julian Sancton also sued OpenAI and Microsoft in New York.
Meanwhile, in California federal court, OpenAI has been hit with a lawsuit from Pulitzer Prize winner Michael Chabon. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have sued both OpenAI and Meta, pointing to the companies' AI products, ChatGPT and LLaMA, respectively. Authors Paul Tremblay and Mona Awad have filed a copyright suit against OpenAI in California as well.
The authors say the unlicensed use of books to create large language models that generate texts is a threat to their profession and that generative AI developers must seek permission to use their works. But Meta urged the judge to toss the bulk of the Kadrey suit in September, arguing that the use of texts to train LLaMA "to statistically model language and generate original expression is transformative by nature and quintessential fair use."
In November, U.S. District Judge Vince Chhabria dismissed without prejudice claims of vicarious copyright infringement, unfair competition and unjust enrichment from Kadrey, Golden and Silverman's case against Meta, but kept the writers' core theory of copyright infringement intact. The writers filed an amended complaint in December.
"Unsurprisingly, the courts want to see allegations of substantial similarity to support copyright infringement claims and, absent this, are skeptical of allowing infringement claims to survive the pleading stage," Arnold & Porter partner Ryan Nishimoto told Law360, pointing to Judge Chhabria's initial dismissal order in that case. The court may similarly trim the consolidated case against OpenAI, Nishimoto said.
Dan Jasnow, co-leader of the AI, metaverse and blockchain industry practice group at ArentFox Schiff LLP, said the authors' cases will set up "one high-stakes copyright infringement case against ChatGPT, and another against LLaMA."
Jasnow said the most powerful AI tools have been trained on massive amounts of content without authorization from their creators, with developers assuming that use of these third-party works qualifies as "fair use" under the Copyright Act — an argument that Jasnow says "has never been directly tested in the context of gen AI."
"After much of 2023 was devoted to pretrial motions and resolving secondary claims, courts are likely to start considering the plaintiffs' core infringement claims in 2024," Jasnow told Law360. "The next year may determine whether the current moment in gen AI is more akin to the '
Napster Era' or the '
Spotify Era' of music streaming."
In mid-2023, consumers also filed a putative class action against Google claiming its artificial intelligence chatbot, Bard, was trained on data secretly harvested from hundreds of millions of people.
Visual artists have also sued Stability AI Ltd., Midjourney Inc. and art-sharing website DeviantArt Inc., claiming the companies — which all use Stability AI's partly open-sourced software, Stable Diffusion, in their various generative AI programs — copied and stored billions of copyrighted images without consent. In October, a California federal judge tossed all but one of the copyright claims, allowing the artists to amend their complaint to include greater specificity.
Stability AI has also been sued in both Delaware and the U.K. by Getty Images, which similarly alleges that the AI company is engaging in copyright infringement by scraping data from its websites without consent, and which further asserts that certain images produced by Stable Diffusion contain a modified version of Getty's signature watermark. Stability AI has asked the Delaware court to dismiss the case or transfer it to California.
Agatha Liu, a partner at Duane Morris LLP, told Law360 that since the content AI models put out is often similar to the data they were trained on, "copyright infringement may exist at multiple levels" in the Getty case.
Following a criminal probe, gamers have also brought a novel suit involving AI, alleging that AviaGames Inc. engaged in a racketeering scheme by using AI to manipulate games of skill to dupe unwitting gamers out of nearly $1 billion. In their November 2023 suit, the gamers said they believed they were competing against humans, when they were actually playing against bots that were programmed to win.
AviaGames is facing other legal challenges this year, including a February patent infringement trial brought by rival mobile game maker Skillz.
Elsewhere, in an unusual AI-related case in Georgia state court, Mark Walters, a pro-gun rights radio host, sued OpenAI in December 2023 claiming ChatGPT defamed him when someone used it to produce a fake complaint naming Walters as a defendant. OpenAI, for its part, says another person who used the chatbot to do research on a real legal case was the owner of the text ChatGPT produced, and that OpenAI cannot be sued as the publisher.
Defining an Inventor
While much litigation seeks to hold generative AI tools and the companies behind them liable for IP infringement, there are also a number of cases to watch that seek to establish legal rights in output created by generative AI tools.
For instance, artificial intelligence researcher Stephen Thaler, who created the generative AI machine, DABUS, has been making a splash with lawsuits challenging the idea that only humans can be holders of copyrights and patents.
Thaler's crusade has seen setbacks: in April 2023, the U.S. Supreme Court refused to review a Federal Circuit ruling preventing AI tools from being named as inventors on patents, and in mid-December, the U.K. Supreme Court ruled against Thaler, finding that British law requires a "natural person" to be behind an invention.
Thaler, who argues that allowing only humans to be inventors will hinder innovation, has a parallel legal fight pending over copyrights. In October, Thaler appealed a Washington, D.C., federal judge's ruling that only people can have copyright protection.
Thaler's U.S. appeal could be quite consequential, according to University of Pennsylvania Carey Law School professor Cary Coglianese.
"There's potentially a lot at stake here given the ease with which generative AI can produce content, making anyone with a keyboard and internet connection an artist or an author and thus vastly expanding the potential for intellectual property disputes," Coglianese told Law360. "This case is emblematic of a larger set of legal issues for the courts to resolve about intellectual property and generative AI tools."
Bracing for Action
As the new year begins, businesses and attorneys are expecting the Federal Trade Commission to launch civil investigations into unfair or deceptive use of AI.
Eric Vandevelde, co-chair of the AI practice group at Gibson Dunn & Crutcher LLP and a former deputy chief of the cyber and IP crimes unit of the U.S. Attorney's Office for the Central District of California, told Law360 he thinks regulators "are very keen on investigating the potential misuse of AI technologies." He said he expects state and federal regulators in 2024 to use their existing authority to investigate the development and use of AI technologies and to determine whether they harm consumers, investors, patients or others.
In a November resolution, the FTC created a subpoena-like process to streamline the agency's AI probes. Rebecca Engrav, co-chair of the artificial intelligence and machine learning practice at Perkins Coie LLP, said the move "indicates that the FTC intends to conduct investigations specifically regarding products or services that use or claim to have been developed using AI, or that 'purport to use or detect the use of artificial intelligence.'"
"Companies engaging in these practices will want to be attuned to the possibility of FTC investigation," Engrav said.
Coglianese, of Penn Carey Law, said he's watching what the FTC will do in 2024 with its open investigation of OpenAI. An enforcement action against OpenAI, he said, could turn out to be crucial for other tech firms producing generative AI tools.
"We don't know exactly what the commission will settle upon, if it decides to take an enforcement action," Coglianese said. "But the commission has broad investigatory powers over unfair and deceptive trade practices, and it appears to be looking into data security at OpenAI as well as seeking a detailed inquiry into how the ChatGPT model is managed and trained."
--Editing by Alanna Weissman and Alyssa Miller.