Two federal agencies' ongoing investigations into claims that artificial intelligence-fueled screening tools sold by management consulting firm Aon discriminate against disabled job applicants show that tech vendors are overlooking disability bias when they roll out new technology, worker advocates say.
The ACLU claims that Aon's AI-powered personality assessment tools unfairly screen out job applicants who are autistic or have mental health disabilities such as depression or anxiety. (iStock.com/Akarapong Chairean)
Both the U.S. Equal Employment Opportunity Commission and the Federal Trade Commission are investigating complaints filed by the American Civil Liberties Union, which alleged that the personality assessment tools Aon sells to employers unfairly screen out candidates who are autistic or who have mental health disabilities like depression or anxiety.
The ACLU also alleged that Aon's AI-based video screening tool and its gamified cognitive assessment programs disadvantage those with disabilities, as well as people of color.
In a detailed 50-page complaint lodged with the FTC on Thursday, the ACLU contended that Aon has deceived its customers by claiming its products are "bias-free" and can help companies increase diversity. Within that filing, the ACLU disclosed that it filed a classwide charge with the EEOC late last year alleging that Aon's personality assessment and cognitive tools discriminated against a client who is biracial and autistic.
Matt Scherer, senior policy counsel for workers' rights and technology at the Center for Democracy and Technology, said civil rights advocates have long been particularly concerned about disability discrimination in AI applicant screening programs, as companies tend to give it less attention than other forms of bias.
"By and large, vendors tend to completely ignore the impacts on disability with their products, more so than I would say race, gender, ethnicity and age," Scherer said.
"Widespread Problem"
According to the ACLU's complaint to the FTC, the "personality constructs" in Aon's tools overlap with the diagnostic criteria for autism and mental health disabilities.
Some of the questions directly align with a commonly used self-assessment for autism called the Autism Spectrum Quotient, including questions about gauging someone's emotions from their facial expressions and about someone's level of comfort in large groups, putting those with autism spectrum disorder at a disadvantage, according to the ACLU.
The ACLU also said Aon's questions aimed at gleaning someone's level of "positivity" and "composure" can result in unfairly low scores for applicants with symptoms of depression and anxiety.
Aon says on its site that its "assessment solutions" are used by a slew of major employers, including Procter & Gamble, Deloitte and Burger King.
Olga Akselrod, a senior staff attorney in the ACLU's racial justice program who filed the complaints, told Law360 that other vendors market similar tools, and these software developers and their customers run the risk of contravening the Americans with Disabilities Act.
"Employers really need to understand that any time they are using personality assessments, they expose themselves to a high risk of liability under the ADA," she said. "This is both because the assessments tend to screen out people with disabilities without being carefully tailored to measure essential job functions and because the tests can cross the line into disability inquiries or medical examinations."
Aon is the second AI vendor to be roped into a high-profile legal action over the tools it sells to employers. Software provider Workday is waiting for a California federal judge to decide whether a job applicant's class action over its products can move ahead.
That suit was lodged by unsuccessful job seeker Derek Mobley, who alleged that Workday's applicant screening software, which he said dictates which resumes are passed on to companies, violates the ADA and other federal civil rights laws by disproportionately turning away candidates who are Black, older and disabled.
"Unfortunately, this is a widespread problem," Akselrod said.
Scherer, of the Center for Democracy and Technology, pointed out that New York City's first-of-its-kind law regulating the use of artificial intelligence in employment decisions expressly sidestepped disability discrimination.
New York City's Local Law 144, which took effect last year, requires employers that use automated employment decision tools to audit them for potential discrimination, publicize the results of those audits and alert workers and job applicants that such tools are being used.
However, the mandate covers only race- and gender-based discrimination, even though an earlier iteration of the measure would've targeted a broader set of misconduct.
"The New York City bill was narrowed in what companies had to check for in discrimination, and I think that's telling," Scherer said. "It opens the door for companies to come along and say, 'Hey, our tool is not biased, look, we test for the things that this New York City ordinance requires us to test for.'"
"But in reality, and as this complaint shows, there are forms of bias that go beyond simply running statistical tests on a couple of protected groups," he said.
Federal AI Focus
Despite the gap, the potential for disability bias in the AI arena has not been lost on the federal government, particularly the EEOC.
"The EEOC has been pretty aggressive in letting the public understand that the Americans with Disabilities Act applies to employers using AI," said
Epstein Becker Green employment partner Adam S. Forman, who frequently advises businesses on AI in the workplace.
One of the commission's first official forays into the AI arena was an ADA-focused technical assistance document released in mid-2022. The guidance offered several tips to help company leaders steer clear of using biased technology or wielding these programs in a way that negatively affects people with disabilities.
The following year, the EEOC and several other agencies, including the FTC, put out a joint statement vowing to "vigorously enforce their collective authorities and to monitor the development and use of automated systems." The agencies put out another statement reaffirming that commitment in April.
Emily Lamm, an associate at Gibson Dunn & Crutcher LLP who focuses on AI and employment, said the federal government has made clear that it will use existing laws, rather than wait for policymakers to craft new ones, to address the problems regulators see with this technology in workplaces.
"The overarching understanding is that these agencies view existing statutes as reaching AI tools, so the expectations that employers and vendors have under these statutes do not evaporate when using these tools," Lamm said.
Employer Steps
Scherer said the legal risks exemplified by the filings against Aon and Workday haven't prompted the vendor community to shape up yet.
"If anything, we've seen vendors in this space start to close ranks against proposals that call for disclosure and regulation when it comes to their tools," he said.
To help curb the chance of bias, management-side employment attorneys emphasized that companies must ensure job candidates have an avenue to request a reasonable accommodation during the hiring process and that this pathway is made known to them at the outset.
"It's important that applicants are given an opportunity to ask for a reasonable accommodation," said Lamm of Gibson Dunn. "Clients are considering including it on the job description or generally just early on in the application process."
These kinds of workplace adjustments could include specialized equipment, alternative tests or exam formats, or the opportunity to work in a quieter setting, according to the EEOC's guidance.
Experts also said companies should ensure that any questions they're asking as part of an interview process are closely tethered to the job requirements and don't stray into medical territory.
Under the ADA, employers can't subject candidates to anything that qualifies as a "medical examination" before extending a job offer. And EEOC guidance has also made clear that personality assessments can cross that line.
Asking questions that try to discern an applicant's level of optimism, for example, could run up against the ADA by screening out workers with depression, the commission has said.
Hilke Schellmann, an assistant professor of journalism at New York University who recently authored a book looking into how artificial intelligence is being used in the employment context, said personality tests generally are not a great way to suss out the best applicant.
"When you ask people about their personality, it may not translate to workplace behavior. It's a pretty weak way to score job applicants," she said. "You want to test on skills, capabilities, things you need to do the work, not who you are.
"That's where the bias creeps in," she said.
--Additional reporting by Benjamin Morse and Vin Gurrieri. Editing by Bruce Goldman and Emma Brauer.