Top legal officers appear badly misaligned with other executives on the use of artificial intelligence at their companies, or are misinformed about it, especially in the human resources area, according to a new survey released Tuesday.
More than 336 top executives across the U.S. responded to Littler Mendelson PC's 2024 AI C-Suite Survey, which revealed areas of division, with CEOs and chief human resources officers on one side and chief legal officers and general counsel on the other.
The findings raise a serious question about whether top legal officers really know what's going on with generative AI at their companies.
For example, 52% of top legal officers said no, their organizations were not using AI tools in human resources, such as in recruiting, vetting or hiring decisions. But only 31% of CEOs said no, while a mere 18% of HR officers said they weren't using the tools.
That suggests that 82% of human resources departments are using AI while about half of their legal chiefs don't even know it.
These discrepancies among executives pose challenges for effective AI risk management, according to Littler's Niloy Ray, a shareholder in the Minneapolis office who for the past three-plus years has focused on AI use, regulation and policies.
The numbers are troubling "when you look at those who are in charge of the HR function versus those who are [in charge] at the legal level and are tasked with overall risk management across functions, from a litigation perspective," Ray told Law360 Pulse.
In another misalignment, 42% of CEOs and human resources officers said AI has the potential to enhance HR processes to a large extent, compared with only 18% of the legal officers.
The two sides also disagreed widely on levels of tracking and enforcing AI policies. For instance, only 38% of legal chiefs said their companies were using access controls over AI, versus a whopping 65% of CEOs and human resources officers who said they were.
For audits and review, the numbers were 27% vs. 56%, and for automated monitoring systems, they were 20% vs. 46%.
Ray said the HR perspective is probably more reliable "because they're more directly connected to what's happening in HR." He said the results are consistent with an analysis the law firm did a couple of years ago.
Ray explained that knowledge about the use of AI "is not top of mind for general counsel and chief legal officers, and they may not [be] asking the right questions of their own organization."
That's why, he added, "we recommend the first step in your AI journey is to determine where you're actually using AI."
Ray recommends that general counsel survey their organizations "carefully and closely to unearth the uses of AI that you already have. … Surprise, surprise, you will learn that AI is being used and adopted in places that you didn't know."
The next step, he said, is to develop an AI philosophy by determining the organization's enthusiasm for using AI compared with its risk tolerance. That leads to a point "somewhere on a curve on the spectrum of how much you want to encourage, and control, the use of AI within your organization."
Then the general counsel needs to develop an AI policy, he said. "You don't just adopt AI without putting guardrails and expectations in place," he explained. "And that might involve a difficult or deep conversation in the company among different sparring partners who will have different motivations and appetites for AI."
The company also needs to train the rank and file on the guardrails so they can determine whether a specific AI use is too risky.
The responses show that fewer than half of the executives surveyed said their organizations had policies in place for the use of generative AI. That's still a significant jump from a 2023 Littler survey, in which just 10% said they had policies.
"Lastly," Ray said, "you have to consider what your vendors and contractors are doing when it comes to AI … to determine every interaction and touch point that it has within your organization and whether to address" them.
The survey showed that 85% of responding executives are concerned about AI litigation risk. But Ray said that top legal officers are not as concerned about AI litigation as human resources officers are, and that bothers him.
"It's interesting and a little bit worrying," he said. "Maybe they just don't see it as a legitimate concern. But the other interpretation which I am concerned about is that they're just not aware of the level of AI use in their organizations and are deemphasizing the potential litigation risk."
So Ray's bottom line is this: "If you're at the general counsel level, you need to be talking to all of your stakeholders — not just legal, not just to HR, not just to IT — but to all of them, so that you can get a holistic view of what your AI risk profile really is."
--Editing by Robert Rudinger.