Fed. Ct. judges take to YouTube to address bar’s & litigants’ non-disclosure of gen-AI use in court filings

By Cristin Schmitz

Law360 Canada (November 4, 2024, 4:53 PM EST) -- At least some lawyers and self-represented litigants (the court does not know how many) are failing to disclose their use of ChatGPT or other generative artificial intelligence (AI) tools when they create content for documents filed in Federal Court. In response, the national trial court has taken the novel step of posting YouTube videos, titled “Compliance with the Notice on the Use of Artificial Intelligence,” in which judges explain how to comply with the mandatory AI-use disclosure obligations the court imposed 10 months ago.

Judicial review applications, particularly in immigration cases, have surged in Federal Court since Dec. 20, 2023, when Federal Court Chief Justice Paul Crampton first issued a three-page notice to the parties and profession, titled “AI Use in Court Proceedings.”

The chief justice’s notice expressly requires parties and counsel to alert the court and each other, via a signed declaration, when an AI-generated or -created document is submitted to the court and prepared for the purpose of litigation.

Federal Court Chief Justice Paul Crampton

The chief justice updated his original notice on May 7, 2024, assuring the bar and the public, among other things, that members of the Federal Court would not draw negative inferences about court filings based on signed declarations disclosing AI use.

However, compliance with the mandatory disclosure obligation has been almost non-existent so far, the court says.

“We’ve asked for these declarations and, unfortunately, we haven’t had that many,” Chief Justice Crampton told Law360 Canada recently.

Yet given significant concerns about AI use in court proceedings, such as “deepfakes” and the potential fabrication of legal authorities (“hallucinations”), Chief Justice Crampton said public confidence in the court and the justice system requires that the Federal Court “be able to get these declarations telling us where it’s been used.”

Federal Court Justice Sébastien Grammond

To that end, on Nov. 1, 2024, the Federal Court posted separate YouTube videos featuring Justice Sébastien Grammond, speaking in French, and Justice Alan Diner, chair of the court’s technology committee, in which the judges explain when disclosure is, and is not, required.

“In the years prior to COVID, we were averaging 6,000 applications for judicial review a year,” but that number is set to quadruple in 2024, Justice Diner notes in his 14-and-a-half-minute English-language video, which had more than 400 views at press time.

“We also know that we are not getting a lot of declarations,” he says. “In fact, we’ve only had two” — one in English and one in French.

“Certainly of 24,000 cases that we’re expecting by the end of the year, we know that more than two have used artificial intelligence,” Justice Diner states. So “the reason that we’re providing this video to you is for your understanding of when you need to declare in order to comply with our notice.”

Federal Court Justice Alan Diner

Justice Diner explains that the court’s disclosure requirement relates not to all forms of AI, such as grammar checkers or legal research software, but specifically to generative AI (genAI).

“There’s obvious advantages of using AI, which we recognize, and there’s some disadvantages too,” he says, citing AI-generated false information and hallucinated case law. He points, for example, to Zhang v. Chen, 2024 BCSC 285, a case last February in which a lawyer was hit with personal costs in a parenting dispute after citing two non-existent cases that had been generated by AI.

Justice Diner explains that the Federal Court’s national AI-use disclosure policy aims to preserve fairness and uniformity.

“We get lawyers from different jurisdictions who may have different rules in their law society guidelines, and so we wanted to come up with a standard policy that would cover the self-represented litigants, [and] these lawyers across the country and, essentially, … level the playing field,” he remarks.

“We wanted to ensure that when you’re using generative AI, at least you put the other side, at least you put the court, on notice and that everyone’s on the same page, and we can do the due diligence that we want as judges, and our law clerks can check into this, as can counsel on the other side,” he explains. “Obviously, we have an adversarial system, and so most of the checking, we hope, is going to be done by counsel.”

Justice Diner notes that when counsel is taking over a file from another lawyer, or is representing a formerly self-represented litigant — and is not sure whether genAI was used to prepare court materials — “we’d ask that you consult previous counsel or try to find the answer from” the client. 

“If you can’t find the answer, and you suspect nonetheless that AI has been used, then … we would appreciate you telling the court in a declaration that ‘I suspect that AI was used in the previous submissions made to the court,’” he advises.

Justice Diner reiterates the court’s assurance that “the inclusion of a declaration, in and of itself, will not attract an adverse inference by the court.”

In deciding whether AI use must be disclosed, he advises self-represented litigants and counsel to ask themselves: “‘Is AI doing something that I would have previously done myself?’”

“If you’re a self-represented [litigant] or a sole practitioner, or if you’re working in a law firm, [ask yourself] ‘Is AI doing something that I would have previously given to an associate, to a law student, articling student or someone else in the firm?’ And if you’re doing that, if AI is, in other words, a co-author for you or coming up with content, generating or creating content, then you must make a disclosure.”

Justice Diner says asking genAI to come up with the leading cases in a judicial review application would require such disclosure, even where a human verifies that the cited case law exists and is authentic. “That’s your role as a ‘human in the loop,’ but still, you must declare to the court that you’ve used AI because, again, it has created or generated the content for you,” he explains.

Another “classic example” of when a declaration is required is when “you ask AI to provide certain content, and then you go and you copy-paste that into your submissions.”

Asking genAI to evaluate the strength of an opposing party’s argument and then selectively copying and pasting parts of its answer into your submission to the court would also require a declaration, the judge says. “You have used AI to generate a part of your legal submissions to the court. So you would need to disclose … the parts that you have … been assisted with by AI. It has been your co-author.”

Justice Diner says using AI to translate decisions into French or English must also be disclosed. “Again, it is a co-author for you, unless you have drafted and translated that yourself and fed it into AI, and just use the AI for minor touch-ups.”

The judge notes that no declaration is required for using a tool, such as a grammar checker, to simplify content or to check grammar and spelling.

Photos of Federal Court Chief Justice Paul Crampton and Justices Alan Diner and Sébastien Grammond: Balfour

If you have any information, story ideas or news tips for Law360 Canada, please contact Cristin Schmitz at cristin.schmitz@lexisnexis.ca or call 613-820-2794.