The interim guidance highlights issues for courts to consider when dealing with AI, including potential applications, public trust and confidence, and challenges. Among the challenges identified: generative AI tools are known to provide wrong answers, and tools trained on incomplete data can produce biased results.
The guidance also notes that AI can be used to automate court operations like data entry, docketing, scheduling and case processing, and to create legal aid tools for self-represented litigants.
"AI can be a tool to aid courts, lawyers, and litigants in the right circumstances, but it is not a replacement for judges and lawyers, and there must be guardrails in place to ensure that it is ethically used by courts and parties," Shay Cleary, managing director at NCSC and staff lead of the AI rapid response team, said in a statement.
In December, the center announced the creation of the AI rapid response team, a joint project of NCSC, the Conference of Chief Justices and the Conference of State Court Administrators.
The team is co-chaired by District of Columbia Court of Appeals Chief Judge Anna Blackburne-Rigsby and Justin Forkner, the chief administrative officer of the Indiana Supreme Court.
In addition to the interim guidance, NCSC offers resources on its website to help courts understand AI, including webinars, videos, reports and guides.
NCSC and the AI rapid response team have also created a U.S. map showing AI-related activities by state courts and legal regulators. For example, the map shows which state bar associations have created task forces or issued AI guidance.
"Artificial intelligence is already impacting the courts, and we must be prepared and forward-thinking when it comes to addressing how AI can be used effectively, efficiently, and ethically to promote the administration of justice," Blackburne-Rigsby said in a statement.
The legal industry as a whole is grappling with the latest advances in generative AI and its impact on the profession.
In January, the Florida state bar approved generative AI ethics guidelines for attorneys, becoming one of the first state bars to issue guidance on the technology. The guidelines advise lawyers that they can use generative AI as long as they meet their ethical obligations.
Last week, the Illinois Supreme Court confirmed that it has formed an AI task force to investigate generative AI and recommend ways that the Illinois judicial branch should both regulate and leverage the technology.
--Editing by Leah Bennett.