Analysis


How The Plaintiffs Bar Is Getting Creative To Combat AI Bias

By Amanda Ottaway · August 16, 2024

The opacity of workplace artificial intelligence tools poses a daunting challenge for plaintiff-side employment lawyers who believe the technology produces discriminatory results.


Plaintiffs attorneys say they are using publicly available information and applying existing law in new ways to bring AI-related discrimination charges. (iStock.com/2d illustrations and photos)

Though the inner workings of AI systems remain difficult to access, lawyers say they're bringing discrimination charges based on publicly available information. They're suing the vendors of the tools directly, rather than the employers that use them, and they're applying existing law in new ways.

That includes a complaint the ACLU announced May 30 it had filed with the Federal Trade Commission, alleging that consulting firm Aon Consulting Inc. deceptively markets discriminatory AI hiring tools to consumers as fair and bias-free.

"I think hiring tech vendors have been acting as though the laws don't apply to them for a long time," said Olga Akselrod, a senior staff attorney at the ACLU who leads the organization's efforts fighting algorithmic discrimination and who worked on the Aon complaint.

"It's clear that they can be held liable, both under employment discrimination laws and under consumer protection laws," Akselrod said. "There are challenges to developing litigation against automated hiring technologies, but the plaintiffs' bar is catching up. And that's critical."

Here, Law360 explores four strategies the plaintiffs' bar can use to help tackle AI discrimination in the workplace.

Use Existing Law in New Ways

Experts pointed out that existing laws can already be applied effectively, even to cases involving new technologies.

The ACLU is leading that charge with its application of consumer protection law to the hiring sphere. Holding vendors such as Aon accountable for "deceptive marketing" (that is, for claiming that their automated decision tools are always fair and free of bias) is one way to attack the problem, Akselrod said.

"I think that consumer protection law is absolutely an important and viable strategy," she said. "Particularly since courts are only beginning to grapple with when vendors can be held liable under traditional civil rights laws. And it's clear that consumer protection laws apply to vendors both with respect to 'deceptive acts or practices,' and with regard to practices that are unfair to consumers."

Akselrod said the definition of "consumer" is understood broadly under the Federal Trade Commission's authority to include not just whoever purchased the product, but also workers.

"Workers are losing out on critical job opportunities as a result of these practices and don't have reasonable ways to avoid the harms from these practices," she said. "That is precisely the kind of scenario that the FTC has said is unfair under consumer protection law."

In response to a question from Law360 about whether the ACLU is planning to bring more FTC complaints or is encouraging plaintiffs' lawyers to do so, Akselrod said that the use of biased hiring tech tools is "a big priority" for the organization.

"We think it's important to be aggressive in using any legal remedies that are available," she said.

Many employers are using discriminatory or risky algorithms to make employment decisions, she added, "because they haven't been sued much for it."

Use Publicly Available Information

Peter Romer-Friedman, principal at PRF Law, is one of the lawyers behind a high-profile charge filed in 2022 with the U.S. Equal Employment Opportunity Commission, alleging that Meta Platforms' biased algorithms discriminated against women who work in the trucking industry by disproportionately serving Facebook job advertisements to younger men.

That complaint is still pending as the EEOC investigates, Romer-Friedman said. Meta is the parent company of Facebook.

Brown Goldstein & Levy partner Anthony May, who leads his firm's AI practice area, said the Real Women in Trucking case is a "very unique" way to tackle algorithmic bias.

"They were able to bring those claims because a lot of the information that Meta was maintaining was publicly available." he said. "They could see [for] themselves what was going on."

May also noted that the plaintiffs in the case had a lot in common — women who work in trucking — which is helpful, because these cases are likely to benefit from having larger numbers of plaintiffs.

One of the biggest challenges for the plaintiffs' bar in challenging AI bias is what Center for Democracy and Technology expert Matt Scherer calls "information asymmetry" — a lack of transparency requirements, meaning job applicants often don't know when or why they've been rejected by an AI tool.

But Romer-Friedman said the information underpinning the Real Women in Trucking complaint is an example of data "hiding in plain sight."

He and his colleagues used Facebook's own ad library, a searchable database of advertisements running on Facebook's platform, to track the age and gender distribution of who saw particular ads.

At that point, in 2020, employment advertisements had to go to everyone 18 and older regardless of gender, Romer-Friedman said.

"That meant that if we saw a skew in a job ad … it wasn't the advertiser's targeting that was responsible for the discrimination," he explained. "As we alleged, it's Facebook's algorithm that is the culprit."

The algorithm would "replicate the bias that the employers had previously shown" by steering job advertisements for perceived male jobs such as truckers, mechanics and HVAC work to men, and sending food service and administrative assistant job openings to women users, Romer-Friedman said.

It took a lot of effort to mine the data for those insights, Romer-Friedman said. But once they did, "the discrimination was so obvious that it smacked you right in the face."
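That mining step can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of the idea, not the legal team's actual methodology: it assumes ad-delivery records with made-up columns (ad_id, gender, impressions), whereas Facebook's real ad library reports delivery breakdowns in its own format.

```python
# Illustrative sketch only: detecting gender skew in ad delivery.
# Column names and figures are hypothetical; Facebook's ad library
# exposes delivery breakdowns in a different structure.
import pandas as pd

# Hypothetical delivery records: impressions per ad, broken out by gender.
records = pd.DataFrame({
    "ad_id":       ["truck_driver_1", "truck_driver_1",
                    "admin_asst_1",   "admin_asst_1"],
    "gender":      ["male", "female", "male", "female"],
    "impressions": [9_100,  900,      1_200,  8_800],
})

# Share of each ad's impressions that went to male users.
totals = records.groupby("ad_id")["impressions"].transform("sum")
records["share"] = records["impressions"] / totals
male_share = records[records["gender"] == "male"].set_index("ad_id")["share"]

# If employment ads must be delivered to everyone regardless of gender,
# delivery shares far from the eligible-audience split point at the
# delivery algorithm rather than at advertiser targeting.
BASELINE = 0.5    # assumed gender split of eligible users
THRESHOLD = 0.2   # flag deviations larger than 20 percentage points
skewed = male_share[(male_share - BASELINE).abs() > THRESHOLD]
print(skewed)
```

In this toy data, the trucking ad's impressions skew heavily toward men and the administrative ad's toward women, the same pattern the complaint alleged.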

Look for Low-Hanging Fruit

The ACLU's Akselrod also said that workers and advocates might not always have to dig deep for the information they need to bring a bias case.

"One example of where I think the plaintiffs' bar can really go here is that there are many tools in use in hiring, for example, that are discriminatory on their face — such as many personality assessments," she said. "And the plaintiffs' bar could be far more aggressive in challenging those."

In the FTC complaint against Aon, for example, Akselrod and her colleagues wrote that an algorithmic personality test disproportionately impacted autistic applicants and those with depression and anxiety. That's "because it tests for characteristics that are close proxies of their disabilities — characteristics which are likely not necessary for essential job functions for most positions — and their disabilities are likely to significantly impact the scores they receive for those characteristics," the complaint said.

Such an applicant can and should bring a discrimination charge if they are rejected, Akselrod said.

She also urged enforcement agencies, such as the EEOC, to "use the full scope of their investigatory powers" to dig into charges of hiring discrimination through such tools, because those agencies can access information that workers cannot.

In its FTC filing, the ACLU also disclosed that it had filed a classwide charge with the EEOC late last year, alleging that Aon's personality assessment and cognitive tools discriminated against a client who is biracial and autistic.

Go After Tech Vendors

Another creative application of existing law is unfolding in the closely watched hiring discrimination case Mobley v. Workday, filed by a job seeker who sued a purveyor of selection tools directly rather than the hundred or so employers he said rejected him from jobs.

The Mobley case is also an example of a smart use of publicly available information, said the CDT's Scherer, who serves as senior policy counsel for workers' rights and technology. He said Workday had been "aggressive" in marketing its use of AI, and it is generally well known that "it is a company that tons of employers use."

The theory in Mobley's case, which the district court has so far seemed receptive to, is that Workday is an "agent" of an employer, meaning it could be liable under employment laws even though it's not Mobley's employer.

U.S. District Judge Rita F. Lin found in July that there is a basis to believe Workday is operating in that agent role because Mobley had laid out a good case that customers using Workday's software "delegate traditional hiring functions."

May said that while the agency theory wasn't novel, it was a "smart application of existing theories" of employment-agent relationships.

"I think those are some things that we'll be seeing a lot more, in terms of what is the relationship between the employer versus the vendor who was providing these AI services? How is it being used in the specific context?" he said.

Christine Webber, co-chair of Cohen Milstein's civil rights and employment practice, also said she'd prefer to sue vendors directly — she framed it as going straight to the source of the discrimination.

"My goal has always been to go after the vendors first. Here's my theory why. Because any given vendor has dozens, hundreds of customers," she said. "Let me sue the vendor once and deal with all of it instead of having to sue 100 different employers to accomplish the same thing."

Another benefit of suing vendors directly is that it could help boost plaintiffs' access to information that might otherwise be sealed off, Webber pointed out.

In the Workday case, because Workday is a direct defendant and the judge has found it can be held liable under the law, the company may have a harder time shielding the details of its algorithms by claiming they're trade secrets.

"It's going to be a lot harder for them to say, 'Oh, trade secret, we shouldn't have to disclose it,'" so as not to be at a disadvantage to competitors, she said. "Obviously, the plaintiffs in the case are not a competitor. I don't really see what the basis would be for denying that sort of discovery."

--Additional reporting by Anne Cullen. Editing by Neil Cohen and Amy Rowe.

This is the second in a two-part series looking at how worker-side discrimination lawyers are approaching the problem of bias in artificial intelligence. The first installment examined obstacles facing the plaintiffs' bar.

For a reprint of this article, please contact reprints@law360.com.