Physicians who rely on artificial intelligence for decision-making still bear ultimate responsibility, according to recent guidance from an association for state medical boards.
In what amounts to a road map for similar guidance from state oversight groups, the Federation of State Medical Boards (FSMB) describes AI as a powerful tool for administrative tasks like scheduling and documentation, as well as for diagnostic steps like interpreting MRIs, X-rays and mammograms. Artificial intelligence can even reduce physician burnout and redundancies, the group said.
According to the guidance, which cites news reports and an article in the Harvard Business Review, AI is expected to replace as much as 80% of physicians' current tasks. But the AI revolution in medicine doesn't change the "key professional responsibility" of clinical decision-making.
"As with any other tool or differential used to diagnose or treat a condition, medical professionals are responsible for ensuring accuracy and veracity of evidence-based conclusions," the guidance says.
According to the FSMB, a physician's responsibilities regarding an AI tool do not end if they ultimately choose not to follow a given recommendation.
Regardless of whether a doctor hews to a course of treatment suggested by an AI tool, they must explain their rationale, especially if that decision leads to harmful patient outcomes.
"While the expanded use of AI may benefit a physician, failure to apply human judgment to any output of AI is a violation of a physician's professional duties," the group said.
The "best practices" guidance was passed by a vote of FSMB delegates, comprising members of each of the country's 70 medical boards, at the group's annual meeting this spring.
"These guidelines are some of the first that clearly outline steps physicians can take to meet their ethical and professional duties when using AI to assist in the delivery of care," said Humayun Chaudhry, president and CEO of the FSMB, in a statement.
"We hope that this policy will reduce the risk of harm to patients and guide physicians by providing recommendations for state medical boards on how to promote safe and effective incorporation of AI into medical practice in a manner that prioritizes patient well-being," he added.
The FSMB guidance came in the same month that the U.S. Department of Health and Human Services' Office for Civil Rights finalized a rule under Section 1557 of the Affordable Care Act, which protects against discriminatory practices in healthcare.
So far, the FSMB report has gotten some mixed reviews. Speaking at Stanford Medicine's RAISE Health Symposium in mid-May, American Medical Association President Jesse Ehrenfeld expressed concern that doctors would be held liable under the rule and guidance.
Under federal authority, "if you use an algorithm and a patient is discriminated against because of that algorithm [such] that harm is caused, you, solely the clinician, are liable now," he said.
Pondering who should be responsible, whether the clinician or somebody else, he pointed to whoever is best positioned to mitigate the harm.
"The Federation of State Medical Boards recently released a set of principles that we have a little bit of an issue with, which again creates a new duty for physicians and says that the physician ought to be solely liable if there is harm caused to a patient with the use of these tools," he said.
The guidance diverges from the AMA's position that there should be "shared liability," Ehrenfeld said.
I. Glenn Cohen, a health law professor at Harvard University, told Law360 he was surprised the FSMB's guidelines placed such an emphasis on physicians. He suggested some liability should fall on hospital systems if they purchase and implement AI poorly. He also pointed to developers of AI tools, even while acknowledging "some obstacles" to pursuing legal claims against them.
The medical board guidance adds to a slew of attempts to foresee the impact of artificial intelligence in an array of contexts. Along with HHS, the U.S. Food and Drug Administration and the Federal Trade Commission are also exploring options to regulate the fast-moving tech.
In October, the Biden administration issued a sweeping executive order on AI, while the World Health Organization published a January report on generative AI tools, such as ChatGPT. Both broadly addressed the issue of legal liability but left many questions unanswered.
According to Bradley Merrill Thompson, a member at Epstein Becker Green, the FSMB's guidance aligns with the 21st Century Cures Act passed in 2016, which carved certain clinical decision support software out of the FDA's medical device definition.
The basic idea, he said, is that AI may analyze specific patient information to arrive at a recommendation, but the decision is ultimately left to the doctor.
"The view at the time was that Congress doesn't need to regulate that sort of software because the decision-making is still firmly the physician's decision," Thompson said. "And regulation of that falls into the practice of medicine by the state boards of medicine."
State medical boards are responsible for licensing physicians, investigating complaints, disciplining those who violate their state's medical practice act and referring doctors for evaluation and rehabilitation when necessary. The FSMB provides support to these state medical and osteopathic boards. The group doesn't have any direct authority over doctors, and FSMB guidance does not need to be adopted by state medical boards.
"It is completely voluntary by the states, and they can adopt it in its totality, they can adopt it in part, they can modify it any way they want," said Kristi Kung, chair of the healthcare regulatory practice DLA Piper.
Kung said there is interest in the medical community in consistent standards of care as the use of AI becomes more entrenched, and that consistency is the FSMB's primary objective.
Those standards matter both to state medical boards' oversight and regulation of physicians' licenses and to medical malpractice litigation.
"So it's the key component of both, and there has to be an understanding of the standard of care in a community in order to measure a physician's actions against it," Kung said.
According to Cohen, the key liability issues raised by AI-enabled clinical decisions will be whether a physician did something that varied from the standard of care and caused harm to a patient.
From a legal perspective, how a doctor arrives at a course of action, whether because of automation bias or some other reason, isn't as central.
"When one thinks about the realities of a trial, if a case goes to trial, having a physician on the stand be cross-examined to testify that the physician ignored multiple alerts from a device — a form of automation bias — that would, if heeded, have caused them to save the patient's life or reduce injuries to the patient, is exactly the kind of evidence you would not want to have in front of the jury as the physician's lawyer," Cohen said.
The guidance also directs providers using AI tools to view informed consent as a "meaningful dialogue" rather than "a list of AI-generated risks and benefits."
"Bringing the patient into the discussion is kind of the last step, and the guidelines reiterate the need for transparency and the need for sharing information with a patient," Thompson said.
While the FSMB's guidance says it doesn't regulate tools or technologies, it recommends that state medical boards examine how the "practice of medicine" is legally defined in their jurisdictions for those who provide healthcare, human or otherwise.
Kung said that while state medical boards only have authority over the "practice of medicine," traditionally understood to mean "by doctors and clinicians," the FSMB may now be suggesting the definition needs to be expanded to include AI.
"What does that mean?" Kung asked. "Are we going to license AI now so that medical boards can directly regulate a product and, by extension, the developers of that product? Because that's what it seems to suggest."
The FSMB did not respond to multiple requests for comment.
--Additional reporting by Gianna Ferrarin. Editing by Philip Shea and Dave Trumbore.