AI response to Manitoba’s artificial intelligence initiative

By Gary Goodwin

Law360 Canada (June 29, 2023, 10:25 AM EDT) --
The Manitoba Court of King’s Bench recently issued a practice direction stating that there are legitimate concerns about the reliability and accuracy of information generated by artificial intelligence (AI). Any submission must indicate how AI was used.

The ability of ChatGPT to hallucinate and produce non-existent case law caught a few lawyers by surprise. So, the Canadian court system wisely tried to get ahead of the situation. A number of innovators also attempted to press pause on the recent onslaught of AI systems. A great deal appears to be happening at once, and likely no one completely understands the mischief AI might be causing.

Law360 Canada interviewed Manitoba Chief Justice Glenn Joyal, who elaborated on what the practice direction might include.

Chief Justice Joyal suggested that AI could be normatively justifiable in certain instances. This would likely include when AI contributes to efficiency, accuracy, fairness and access to justice. Specifically, this covers legal research, e-discovery and document analysis, and predictive analytics and case outcome analysis. These last two items would likely not make it into any case submission, but rather big data would help determine when and if a case should be brought in the first place.

The benefits of AI presently exceed the downsides. Humans can fall prey to the halo effect, a cognitive bias, and simply assume that an AI is an expert in all things. If one draws an analogy between an AI and an expert witness, lawyers need to recognize when an AI exceeds its area of expertise.

A secondary source of clarification turns out to be ChatGPT itself. Of course, I provided the practice direction to the AI and asked for its opinion:

By explicitly stating that the use of AI should be disclosed, the practice direction recognizes the importance of understanding and assessing the reliability and accuracy of information produced through AI systems. It acknowledges the potential concerns associated with AI-generated content and aims to address them through transparency and disclosure requirements.

Overall, this practice direction emphasizes the need for responsible use of AI in court cases and promotes transparency to ensure that all parties are aware of the role and impact of AI in the materials presented to the court.

This may be somewhat akin to asking the fox if the henhouse seems secure and safeguarded against predators and other infiltrators. But even the fox, or the devil, remains entitled to representation.

I believe anyone (everyone?) who uses CanLII remains fine. The default recognition should be that computer-assisted research happens everywhere. All of the legal material simply resides within its portal, to be accessed using various filters and other search parameters. This seems to be more along the lines of your father’s Buick, so to speak.

The practice direction refers more to AI systems using machine learning or deep learning. These systems digest enormous databases and recognize patterns to generate answers, sometimes ones more novel than anticipated.

The problems with this quickly become apparent. The databases can be replete with certain biases. Machine learning relies upon certain algorithms to come up with answers. Deep learning can be even more problematic, as it takes its material directly from the data itself. Over time, it refines its systems to provide answers with the greatest probability of being correct.

We can see that the systems do not truly understand what the words actually mean, much less the sentences. Their outputs are the culmination of probabilities.

Using my skilful cross-examination and leading-the-witness techniques, I asked the AI about the above two paragraphs.

Yes, the statements you provided are generally correct and highlight some of the key aspects and challenges associated with AI systems, particularly those based on machine learning and deep learning.

ChatGPT does not access the Internet and has no knowledge of material after 2021. Or so it keeps telling us. It needs to learn from and refine new databases before it can provide answers drawn from them.

ChatGPT sometimes provides case information and citations that do not actually exist. These fictitious cases arise from the AI’s drive to produce probable-sounding case law; its answers come from patterns and relationships in the training data. The AI refers to these answers as hypotheticals. And humans are prone to confirmation bias: if you look hard enough for something, you will likely find it.

A major problem, and everyone should agree that it is a problem, is that AI does not possess its own value or ethical system. It can apply what others might see as ethical, but it does not possess its own opinion on the subject.

Therefore, opinions and other statements generated by AI remain devoid of humanistic understanding or empathy.

AI exceeds human capability when it comes to data collection, information structuring and overall memory. However, only humans can apply this information to create knowledge. Adding context from social and other cultural environments to this knowledge creates what we like to call wisdom. This wisdom is lacking in any AI-generated response.

And just checking in with the AI, it agreed.

As an AI language model, I don’t possess consciousness, self-awareness, or the ability to create wisdom in the same way humans do. Wisdom is a complex concept that involves deep understanding, insight, experience and the ability to apply knowledge in a meaningful and ethical manner.

But of course, at some point, AI life … uh … finds a way.

Gary Goodwin worked in environmental conservation across Canada for over three decades. He initially obtained a B.Sc. from Victoria, majoring in marine biology. In addition to his law degree and MBA, he recently completed his LL.M. from the University of London, emphasizing natural resources and international economic regulation. He has authored numerous articles on the environment and issues facing in-house counsel. He contributed three chapters to the recent textbook North American Wildlife Policy and Law.
 
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada, or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

Interested in writing for us? To learn more about how you can add your voice to Law360 Canada, contact Analysis Editor Peter Carter at peter.carter@lexisnexis.ca or call 647-776-6740.