President-elect Donald Trump's return to the White House could mark a shift in the federal government's approach to the ever-growing deployment of technology that uses artificial intelligence, and could endanger guidance on its use that employment regulators issued during the Biden administration.
The widespread implementation and use of products that incorporate machine learning in employment and other contexts grew dramatically during President Joe Biden's four years in office, prompting the federal government to take initial steps to ensure such technology was deployed lawfully.
According to recent media reports, Trump is mulling whether to name an "AI czar" to coordinate policy on the issue, though it remains to be seen whether that comes to fruition.
However the incoming administration's staffing shakes out, Trump is reentering the White House having championed a deregulatory ethos that is markedly different from that of his predecessor, which could result in a fresh approach to artificial intelligence-related issues, attorneys say.
"With the caveat that … it's difficult to predict the future, it seems the indications are that the Trump administration will take a lighter hand when it comes to AI policy and regulation," said Guy Brenner, a partner at
Proskauer Rose LLP.
"I think that, right now, it's a waiting game and looking at tea leaves," he added.
Here, management-side attorneys discuss what employers should watch for on the AI front after Inauguration Day.
Executive Orders May Get Shuffled
Among his most high-profile moves pertaining to AI, Biden
issued a lengthy executive order in 2023 titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It laid out a road map for safeguarding workers and consumers from bias and other potential harms the use of AI and other automated tools may pose.
The executive order directed various federal agencies to develop policies and best practices that address a range of concerns raised by AI technologies, including their potential for exacerbating discrimination or stifling competition.
Rachel V. See, a senior counsel at Seyfarth Shaw LLP whose practice focuses on AI risk management and regulatory compliance, said that Trump has spoken publicly "about the importance of AI for global competitiveness" and has indicated that he wants to repeal Biden's executive order.
See also noted that Trump, during his first term, issued an executive order of his own related to artificial intelligence that could receive renewed attention from the incoming administration.
"I think people tend to forget … [that] there was a Trump executive order in [his] first administration … that talked about needing to strike a balance between managing risks and taking advantage of AI," See said. "Even though that's a core theme of President Biden's executive order, we can revert back to what the first Trump executive order said … and get at very similar perspectives."
Signed in February 2019, Trump's executive order was titled "Maintaining American Leadership in Artificial Intelligence." It called on federal agencies to develop rules of the road for integrating machine learning into the private sector that would allow America to lead the pack in AI research, development and deployment. The principles outlined in Trump's executive order included training workers with skills for applying AI-infused technology, and "protect[ing] civil liberties, privacy, and American values" in the application of AI.
Though the 2019 executive order is "showing its age" in some respects given that AI is a fast-evolving field, See said the 2019 order contains concepts that remain sound risk management guidance.
"The overarching message in that executive order was, 'Yes, there are risks. Yes, there are opportunities. We need to do both,'" See said.
While the Trump administration's preferred policy approach to AI remains to be seen, it isn't likely that Congress will enter the fray by enacting legislation, at least in the near future, attorneys say.
"What I would expect at the federal level is for the federal government to pull back on some of its efforts to regulate workplace AI" both legislatively and within the executive branch, said Joseph Schmitt of
Nilan Johnson Lewis PA.
"I don't anticipate that a Republican Congress is going to be very interested in passing new employment regulations," Schmitt added. "They seem quite determined to reduce employment regulations, not increase regulations. That will be true in AI as it is true in other areas of employment law."
Future of DOL Guidance Unclear
Both before and after Biden's 2023 executive order, various executive branch agencies, including the U.S. Department of Labor, made their own initial forays into the AI space.
In April, the DOL's Wage and Hour Division issued guidance in the form of a field assistance bulletin aimed at addressing employers' use of AI. It said that AI-infused technology could be beneficial but warned that it also might miscalculate employees' time worked and generate violations of federal wage and leave laws.
In October, the DOL issued another round of guidance in response to Biden's executive order that laid out general best practices for employers and a series of principles for using AI in the workplace. The eight principles enumerated in that guidance included a call to focus on worker empowerment by seeking worker input, an emphasis on ensuring any tools are developed with workers' rights in mind, and a recognition that such tools can make work processes better.
The Labor Department's Office of Federal Contract Compliance Programs issued its own guidance in April in response to Biden's executive order explaining federal contractors' legal obligations when they deploy technology that incorporates machine learning.
See of Seyfarth Shaw noted that the agency's core message was that existing laws apply, but added that rescinding the DOL's various guidance documents would be a simple process for new political appointees at the agency.
"The Department of Labor issued a bunch of documents pursuant to the Biden EO and I think it's very straightforward for whoever President Trump puts in at the Department of Labor to revoke that subregulatory guidance," See said. "There wasn't formal notice-and-comment rulemaking that the Department of Labor did in response to the Biden EO on AI so it's pretty straightforward to just undo that."
David Walton, chair of Fisher Phillips' artificial intelligence team, noted that Trump's pick to run the Labor Department, U.S. Rep. Lori Chavez-DeRemer of Oregon, has a moderate record on labor issues. If she is confirmed, it could bode well for guidance that is already in place in the AI space, particularly since federal legislation is unlikely.
"I think that if anything [related to AI] comes out of the federal government, it's going to come out of the Department of Labor," Walton said. "I don't think it's going to be anything striking."
EEOC's AI Enforcement Efforts May Be More Muted
Another federal employment regulator that threw its hat into the AI arena during the Biden administration was the U.S. Equal Employment Opportunity Commission.
In late 2021, the commission launched its "Artificial Intelligence and Algorithmic Fairness Initiative," which is aimed at bolstering the agency's expertise in AI-related issues and helping ensure that employers deploy the technology in accordance with existing anti-discrimination laws.
The commission subsequently released two technical assistance documents, one that addressed the ways in which algorithmic and AI tools deployed in the workplace can violate the Americans with Disabilities Act, and another that explained the interplay between those sorts of technologies and Title VII of the Civil Rights Act.
Schmitt noted that the EEOC has pushed for the regulation of AI, particularly in companies' selection processes, both through guidance and by making the investigation of companies that misuse AI a strategic enforcement initiative. But the agency could dial back any such efforts once Trump appointees are in place.
"I would expect that as the Trump administration begins to exert its influence over the EEOC, recogniz[ing] that that's not going to happen immediately, … that we will see a reduction of those enforcement mechanisms or a reprioritization to Trump priorities, which I don't think are going to be as focused on AI," Schmitt of Nilan Johnson Lewis said.
While he doesn't expect the agency to dismantle the AI measures it has taken and the guidance it has issued thus far, Schmitt does anticipate that the commission won't push the envelope when it comes to pursuing AI-related allegations against employers.
"I don't think that the EEOC will be pushing the enforcement of that guidance, and I also don't expect to see really robust legislative efforts to restrict or regulate workplace use of AI at the federal level," Schmitt said.
States Likely to Pick Up Slack
In the absence of federal regulation and with the likelihood of the Trump administration taking a more hands-off approach to AI regulation and enforcement, lawmakers from progressive states may increasingly take the opportunity to fill the void, attorneys say.
"I think states are going to dominate. States are going to take control of AI regulation, just like they did privacy," said Walton of Fisher Phillips. "So I think you're going to see a patchwork of laws, and we're starting to see the beginning of that."
Jill Vorobiev, a partner at Reed Smith LLP, said the states that led the way in enacting paid sick leave laws may be the most likely to take the lead on regulating AI, particularly in the employment context.
"A couple states already have some statutes or regulations that address AI … and I think that we're going to see more states adopting or trying to pass legislation to govern the use of AI by employers in the employment setting, particularly on the application front," Vorobiev said.
Colorado recently blazed a trail for regulating high-risk uses of artificial intelligence when it enacted a novel and comprehensive law that is slated to take effect in February 2026. Illinois recently adopted legislation that increases legal protections for job applicants and employees who are evaluated using AI tools, and New York City has its own law in place governing the use of AI and algorithmic decision-making in employment processes.
Although sweeping legislation in California that would have addressed bias tied to the use of artificial intelligence products stalled, several state agencies are developing regulations that would govern AI usage in various contexts.
As more states enter the AI arena in the absence of overarching federal legislation, Schmitt likened the development to the trend in recent years of states adopting pay transparency laws that require companies to publicly disclose pay ranges for available jobs.
Those laws, he said, have effectively prompted multistate companies to adopt nationwide policies that comply with the most restrictive state requirements as opposed to companies maintaining different policies based on the law in each locale where they operate. A similar pattern could unfold when it comes to AI legislation and compliance, he said.
"I would expect that the blue states will try to drive AI regulation in a way that will allow them to control how AI is used, even on a national level," he said.
--Additional reporting by Allison Grande, Anne Cullen, Daniela Porat, Irene Spezzamonte and Dorothy Atkins. Editing by Amy Rowe and Bruce Goldman.
For a reprint of this article, please contact reprints@law360.com.