But employers and their attorneys face a host of other issues, including potentially significant cybersecurity risks tied to the influx of new data, as AI, the technology that enables machines to learn from experience, gains a foothold in the employee benefits industry.
"AI is evolutionary. It can make it easier to audit — you already automate nondiscrimination testing, you already automate a lot of things — this is another level of automation," said David Levine, co-chair of Groom Law Group's employers and sponsors group. Levine refers to testing on benefits nondiscrimination that is required by the Internal Revenue Service to ensure programs don't favor highly compensated employees.
"But is it the be-all, end-all? No, it's another wonderful tool," Levine said.
Here are three takeaways from attorneys on how AI is affecting employee benefits administration and litigation.
Innovative Research Tool
Employer-side attorneys said AI technology has the potential to unlock better ways to administer employee benefit plans and might even help investment managers spot problems with investment performance by churning through large volumes of historical data.
Retirement asset management giant BlackRock is among the numerous large financial services providers that have publicly touted AI capabilities in the employee benefit plan management context. But attorneys also highlighted the potential risks if managers of benefit plans regulated by the Employee Retirement Income Security Act rely too much on AI technology, given the significant responsibilities imposed on ERISA fiduciaries, including the duties to manage retirement assets prudently and loyally.
Michael Abbott, partner at Foley & Lardner LLP and an employee benefits and tax attorney, said large retirement plan service providers are all marketing "different types of tools" that employ AI, including for customer service as well as for investment managers wanting to more quickly access research reports, for example.
"Sure, you can use it as a tool to get that backward information. That doesn't mean that it should be used primarily in terms of what the recommendation is going to be on a go-forward basis, but at least it gives you that historical perspective," he added.
Abbott said that AI is "not a replacement for what that service provider is supposed to be actually doing."
"Essentially, on the investment advisory side, AI is not intended to act as a substitute for the investment adviser or investment manager's advice related to selection of investment options," Abbott said.
Legal consulting firms, including a New York-based AI startup called Darrow, have also begun to advertise AI as a way to spot potentially lucrative ERISA class action litigation against companies whose investment offerings underperform. The company says on its website it has more than $10 billion in claimed damages in litigation proceedings that were identified using AI technology.
Shelbi Lifshitz, a litigation partnerships executive at Darrow, said in an interview that "our AI uses public data to detect fact patterns that are indicative of legal violations in a ton of different practice areas, from privacy to consumer protection to financial cases like ERISA."
"And we then present our data and investigations to leading law firms, and we let them do what they do best, litigate," Lifshitz said.
Regardless of the individual product involved, benefits attorneys were skeptical of the idea that AI can predictably identify ERISA class action litigation given that publicly available data on ERISA plans often contain hidden complexities.
Levine, at Groom, gave the example of how a data field on the Form 5500 Schedule C, which provides information on service provider compensation, might generate results that indicate high per-participant cost that could be misleading. He noted how Schedule C has a slew of subcodes: "Some of it could be recordkeeping fees, but a portion of it could be other fees. But that all gets lumped together sometimes," Levine said.
"So if someone looks at the 5500 with machine learning ... it might allow you to say, hey, we see overall fees at a certain level, but it doesn't tell you what's wrong or right," Levine said.
Lifshitz, at Darrow, said when asked about the issue that "regarding recordkeeping fees, I do think that there's legitimate criticism of the ability to use AI to identify those types of cases."
"Because when it comes to those types of cases, the data that you need to make a case like that is not public. And we don't work on cases like that. …We're really focusing on the mismanagement cases, not the excessive recordkeeping cases," Lifshitz said.
New Discrimination Allegations
Another AI-related development affecting employee benefits that attorneys highlighted was the new litigation risk facing employee health plan providers from participants alleging discriminatory claims processing.
Numerous large health insurers are facing ERISA class actions whose complaints reference AI, alleging that algorithmic technology contributed to discriminatory health care denials. The U.S. Department of Health and Human Services also recently addressed AI and algorithmic discrimination in health care in an April final rule further implementing the Affordable Care Act's health care nondiscrimination provision, Section 1557.
Tom Hardy, a partner and managed care attorney at Reed Smith LLP, said "there's definitely interest in trying to figure out how AI can fit into an ERISA claims administrator scenario."
He added that "there is concern about making sure that what they're doing complies with ERISA, complies with their fiduciary duties under ERISA, and doesn't impact the quality of decision making."
Hardy said a challenge with ERISA as it relates to AI has to do with a potential lack of understanding about how it works: "Can you really understand what an AI application is really doing?" he asked.
"One of the challenges with an AI product is if it's a black box technology, and you don't have access to the inner workings of it, is that it can create challenges to establishing that what you did was not an abuse of discretion," Hardy said.
Growing Cybersecurity Risks
Another major AI takeaway for benefits attorneys was that all the new data associated with the use of the technology has implications for recordkeeping and cybersecurity, with some attorneys citing negotiations over contract language tied to its use.
Levine cited data security and privacy as major considerations when interacting with AI. He said because large language models can now import documents and other materials, a question that comes up frequently is, "You're going to train on my data. What does that mean? What happens with it?"
"There's a lot of questions about data security, privacy, all these things that remain there," Levine said.
Abbott, at Foley, said he's seen references to AI in some 401(k) service provider contracts. But he emphasized that ERISA fiduciaries need to incorporate AI tools into a management process that's prudent overall and in compliance with ERISA.
"You've got to go back to the origins of ERISA ... and analyze all of these AI aspects in that realm," Abbott said.
--Editing by Bruce Goldman and Nick Petruncio.