Analysis

Calif. Draft AI Rules Show Struggle To 'Keep Up' With Tech

By Amanda Ottaway · 2025-02-14 20:25:12 -0500 ·

California's civil rights watchdog recently pitched changes to proposed rules aimed at minimizing artificial intelligence bias in the workplace, seemingly watering down enforcement options and demonstrating the difficulty of regulating such rapidly evolving technology, experts said.

The Civil Rights Council, part of the California Civil Rights Department, voted Feb. 7 to approve a new draft of proposed rules governing the use of artificial intelligence tools in employment and released it for public comment. The rules interpret the state's Fair Employment and Housing Act, or FEHA.

Two of the key changes the council made in this iteration seemingly slash potential liability for the tools' developers and remove a controversial definition of "adverse impact." The revisions offer a real-time look at how regulators are trying to get their heads around the sea change AI stands to bring to the workplace and erect guardrails around a technology that is evolving so quickly, experts said.

"I really do think that what we're seeing is kind of an attempt to understand the paradigm in which AI operates. The problem is that this paradigm is just so quickly evolving that nobody can really keep up with it," said Travis Jang-Busby, a partner at management-side Blank Rome LLP.

"I think one of the other issues that you see front and center in California is, How do you regulate AI's use without stifling progress? And I think that's a big one, because California tries to be at the forefront in technology. I mean, we are Silicon Valley," he added.

Here's a look at the suggested changes and what's ahead.

Confusion Around Definitions of Discriminatory Impact

In its latest draft of the proposed rules, the Civil Rights Council struck a definition of "adverse impact" that had raised eyebrows in a previous version. The council wrote "reserved" next to the slashed text, indicating that it may make changes in a later draft.

Representatives of the Civil Rights Department declined Friday to comment on or clarify the most recent version of the rules.

Even worker advocates expressed relief that the council removed its previous definition of adverse impact, which they had characterized as confusing. The previous version said in part, for example, that "'adverse impact' is synonymous with 'disparate impact.'"

Ridhi Shetty, senior policy counsel at the nonprofit Center for Democracy and Technology's privacy and data project, said she wasn't too concerned about the removal of the adverse impact language. Instead, she drew attention to the removal of other key terms that would describe AI tools' potentially discriminatory effects on workers.

"I think what gives me more pause is that throughout other areas of the most recent draft of the proposed rules, language that acknowledged explicitly both disparate impact and discriminatory treatment has been removed," Shetty said.

"So it just refers to discrimination kind of broadly, and that more general language, I think, makes it easier for vendors and even employers themselves to focus more specifically on disparate treatment and maybe less so on disparate impact."

Disparate impact, a theory that allows plaintiffs to show they've faced worse outcomes than other groups from a supposedly neutral employer policy or practice, is currently a key measure of discrimination wrought by AI tools, because much of the bias these tools produce currently plays out in hiring.

Two often-cited examples of early AI bias are Facebook's targeting of job ads only to users of a certain age, and a resume screening tool Amazon used that leaned in favor of men because men's resumes made up most of the data set the program had been trained on.

"So being clear about what disparate impact means is beneficial," Shetty continued. "The language throughout the rest of the proposed rules could do a better job of addressing that."

In other words, Shetty said, what would be helpful is if the rules clarified what disparate impact means in the AI context specifically.

"The challenge becomes how disparate effect is held accountable and how it's actually treated when you come down to the enforcement angle," she said.

Employers, Not Vendors, to Carry Bias Prevention Burden

The latest version of the proposed rules also seemingly removed AI software developers from the definition of an employer's "agent," in effect taking away a specific example from a list of entities that could play a role in employment decisions.

An agent of an employer under the current version of the proposed rules includes in part "any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer … which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave."

Without the language referencing developers, Shetty said, third-party companies that sell AI tools to employers "can claim that they don't act on employers' behalf and therefore are not agents."

Shetty also noted that the council had removed language requiring sellers of AI tools to "maintain relevant records," as well as language providing that they could be liable for "aiding and abetting" discrimination, even though in her view, vendors should share responsibility with employers.

"So as a result, it's pretty much kind of left to employers to ensure that these systems they're using are not violating these rules," Shetty said.

But Jang-Busby said that California A.B. 2013, which takes effect in 2026, can help fill the gap for employers trying to do due diligence on the AI tools they purchase, even though it relates to generative AI. That law requires developers of generative AI systems — which generally differ from the decision-making AI used to make hiring decisions — to be transparent about the data they use to train the tools.

Jang-Busby said the removal of vendors from the council's proposed rules might be a nod to the fact that they'll soon be subject to greater transparency requirements, giving employers more tools to make sure they're buying products that won't lead to discrimination.

"I think that trying to bootstrap vendors into the 'employer' role for purposes of mitigating bias was ambitious," said Jang-Busby.

While the ultimate impact of removing vendors from the definition of employer "agents" is not yet entirely clear, it may not necessarily preclude vendor liability.

"I would say that it would be highly fact-specific, and any third party that meets the definition" might fall under the regs, said Orrick, Herrington & Sutcliffe LLP partner Erin Connell, adding "that would have to be litigated."

Jang-Busby noted that the removal of vendors from the proposed rule is an acknowledgment of the massive liability risk they would incur in a litigious landscape like California's.

"When you look at a vendor who's supplying their software to staffing agencies and direct employers, the idea that they could be liable to any employee or candidate for multiple clients is unfathomable," he said. "The potential exposure … is significant."

California's Regulation Efforts 'Not Happening in a Vacuum'

"These proposed changes to the regs aren't happening in a vacuum," said Connell of the latest draft to the California civil rights watchdog's proposed rules. 

"There are so many AI-related laws that are being produced right now. And I think that is a trend that's going to continue in the states," said Connell, who's not banking on federal legislation in the near future. "So I think states are going to try to 'pick up the slack,' and that includes regulations of employers, but also developers."

Connell said her firm has been tracking the number of AI bills and laws across the U.S.

"By our count," she said, "in 2025 alone, 22 AI bills … have been introduced in states around the country that in their current form would regulate AI developers."

Unlike Shetty, Connell said she thinks developers and employers should be regulated separately. She represents both in her capacity at Orrick.

"My personal view is that given the different roles that employers and developers play, I think it makes sense to treat them differently. And I think that's a trend that we're seeing in the bills that are being introduced," she said.

Experts also pointed to a separate set of California agency regulations in the works — those from the California Privacy Protection Agency, which enforces the California Consumer Privacy Act.

Requiring transparency from vendors should be a key factor for regulators, particularly on the consumer protection side, Shetty said.

"For an employer to determine whether they're making the right decision when purchasing any of these systems, they do need to have that degree of transparency" so they can make an informed decision as consumers, Shetty said.

In September, citing in part a desire not to chill innovation, California Gov. Gavin Newsom vetoed the sweeping S.B. 1047, which would have been the first law in the nation to require developers of the largest and most powerful AI systems to adhere to a series of standards ensuring the public's safety.

But more legislation is coming, and not just in California, experts agreed.

"A lot of times in law, it's funny, we just kind of draw an arbitrary line, right? And that may be the case here," said Jang-Busby.

"I think that — 100% certainty we will see some form of regulation of the way vendors account for bias," he added. "So it's coming. It's just a matter of enforcement mechanisms."

--Editing by Bruce Goldman and Nick Petruncio. 

For a reprint of this article, please contact reprints@law360.com.