That was the theme across several panels at an American Health Law Association conference on AI featuring presentations from healthcare attorneys and industry leaders.
"In the name of efficiency, we've got to be very careful because there's a lot of liability being alleged right now for abuses of tools," Stephen Bittinger, a healthcare litigator at Polsinelli PC, told attendees at a Feb. 5 panel geared toward managed care organizations.
Healthcare organizations are seeing a rapidly expanding market of AI tools, accelerated in part by the release of ChatGPT in late 2022. These tools can help document patient visits, provide clinical decision support, facilitate drug development, process claims for coverage and reimbursement, and detect fraud, attendees at the conference heard.
But with the increased opportunity AI offers comes the risk of tools not working as intended or being misused.
"Just like if your employee does something wrong or your employee improperly codes claims or performs procedures that are medically unnecessary, the organization is held responsible for [the] actions of its employees, and I see no reason to think that the government won't take the same position around technology," Tony Maida, a healthcare attorney at McDermott Will & Emery LLP, told attendees.
Speaking at an AHLA panel on fraud and abuse risks from AI, Maida pointed to the U.S. Department of Justice's $145 million settlement with health information technology company Practice Fusion. In 2020, the Allscripts-owned company admitted to taking kickbacks from a major opioid company — later revealed to be Purdue Pharma — in exchange for embedding alerts in its electronic medical records software that encouraged physicians to prescribe opioids.
That case "arguably is an example of AI enforcement," Maida said, noting that it demonstrates the type of risks providers need to consider when purchasing AI-enabled electronic medical records software.
Polsinelli's Bittinger likewise cautioned attendees about the legal liabilities that can arise with new tools, pointing to his work representing plaintiffs in a suit against a managed care organization owned by Centene.
According to the suit, which has since headed to arbitration, the organization's statistical sampling methodologies improperly calculated that a diabetes management device supplier owed it $5.7 million in overpaid claims.
A key challenge in the case, Bittinger told conference attendees, was plaintiffs' lack of access to the software tool itself. In the complaint, the plaintiffs claimed the tool was a "predictive algorithm, generative artificial intelligence model, or some unknown software-based prediction tool."
"The [managed care organization] appeared to have used a [generative AI] software tool to read medical records produced in response to a post-payment review, and the tool spit out the denial bases," Bittinger said.
"I have a contracting recommendation … if you're a provider and you're contracting with a vendor, put a term in there that says if you get sued because of what your tool does, or you get a [civil investigative demand] from Main Justice, they're going to cough up the proprietary data and algorithm to show the plaintiff or DOJ how the secret sauce works," Bittinger added.
Bittinger also highlighted several ongoing cases alleging that Humana, UnitedHealth Group and Cigna relied on artificial intelligence and algorithmic tools to wrongfully deny patients care and cut costs.
Across the panels, experts advised healthcare organizations to conduct careful evaluations of AI tools before contracting with vendors and to establish robust governance structures once the tools are integrated into their systems.
Robert Martin, senior legal counsel at Mass General Brigham Inc., highlighted the health system's "success story" of piloting and ultimately rolling out ambient clinical documentation.
Martin, who was speaking at a panel on how to enable healthcare compliance programs to assess and monitor AI, said the documentation tool was first piloted with a small group of clinicians and informatics experts and evaluated by progressively larger groups before being deployed across the health system. Clinicians assessed the performance of the tool as well as its utility within their workflow.
"It's now being rolled out to every employed clinician in the enterprise. It made sense, and it ticked all the boxes," Martin said.
The organization chose not to move forward with other tools, he said, because even though the technology was functional, there wasn't enough of an impact.
It doesn't make sense to be "spending money on something that isn't going to deliver a patient care benefit or a benefit to workflow for clinicians," Martin said.
Post-implementation monitoring is especially important as models may drift or degrade over time, panelists also said.
"It's not just a software system that you plug in and it runs — it continues to change over time, and so you have to adapt your monitoring and auditing aspects of your compliance program to account for these different and new risks that wouldn't necessarily exist before," Maida, of McDermott, told attendees at his panel.
In addition to internal monitoring and assessment, healthcare organizations also must stay abreast of regulation, said Martin, of Mass General Brigham.
While the health system has physical locations in Massachusetts, New Hampshire and Maine, Martin noted that laws throughout all of New England are important to watch because Mass General Brigham gets patient traffic from neighboring states and through telemedicine.
A Vermont prohibition on recording telemedicine consultations, for example, kept the system from using ambient clinical documentation for virtual visits with a patient in the state, Martin said. A bill introduced in the Vermont House in January would alter that prohibition by allowing telehealth visits to be recorded if both the provider and patient consent.
Martin highlighted the importance of following state activity in particular as the federal government takes a more deregulatory approach to AI.
"In the absence of federal legislation or regulation of the space, it doesn't mean there won't be regulation or legislation. It means that states are going to step in," Martin told attendees. "Which, to be honest with you, is more complicated than federal regulatory oversight would be."
--Editing by Haylee Pearl.