Is legal AI your next rogue employee?

By Tara Sarvnaz Raissi

Law360 Canada (September 27, 2024, 2:58 PM EDT) --
Generative artificial intelligence (AI)-powered tools tailor-made for the legal industry automate manual processes and accelerate tasks such as research, document review and drafting to support professionals. Legal AI tools integrate into a firm or legal department to generate output by building on a professional’s own data and external databases. With robust cybersecurity measures and continuous oversight, the potential for greater efficiency, accuracy and cost-effectiveness can benefit both lawyers and clients.

When used responsibly, this cutting-edge technology, embedded in an organization like a trusted employee, is less likely to go rogue. Responsible use minimizes the risk of data breaches, incorrect outputs and consequences such as loss of the client, regulatory complaints, civil actions against the professional or sensitive information being shared to a client’s detriment.

Legal AI is implemented through a multistep process that involves deploying and integrating the solution within the firm or department. Once embedded, the tool has access to sensitive and proprietary information, and data cannot readily be removed or “unlearned” from the model once absorbed. A tool that collects more than it needs, fails to prevent data loss or exposes client information through generated content can compromise client confidentiality.

The development of AI systems built on case law databases lessens the possibility of cases “invented” by AI hallucinations (see Zhang v. Chen, 2024 BCSC 285). It also offers lower costs, as clients are billed for legal work at “AI’s hourly rate.” However, access to a trustworthy legal database addresses only one of the many steps in the legal process. AI can misinterpret the law, overlook client context and goals or suffer from operational failures even with reliable resources at its disposal. While such failures should become less frequent as AI evolves and the quality of its output improves, human oversight and intervention further mitigate these risks.

AI-driven legal research tools may hallucinate

Some legal AI tools use Retrieval-Augmented Generation (RAG) to tackle requests with a focus on greater precision and accuracy (Samat, Arunim. “Legal-RAG vs. RAG: A Technical Exploration of Retrieval Systems.” TrueLaw AI, 23 March 2024). RAG models are designed to improve the response quality of a large language model (LLM) by grounding it in verified knowledge repositories (Khangarot, Samder Singh. “The RAG Effect: How AI Is Becoming More Relevant and Accurate.” Forbes Business Council, 24 April 2024). RAG first retrieves relevant information from a reliable case law database or the firm’s own document repository, then uses an LLM to generate a response from the retrieved content.
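To make the two-step flow concrete, here is a minimal sketch of retrieval followed by grounded generation. The toy case law entries, the keyword-overlap scoring and the prompt wording are illustrative assumptions, not the retrieval method or API of any particular legal AI product, and the final LLM call is left as a stub.

```python
# A minimal sketch of the retrieve-then-generate flow described above.
# Everything here is illustrative; real tools use their own retrieval
# methods, and the actual LLM call is represented only by a print().

def retrieve(query: str, documents: list[dict], top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc["text"].lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, retrieved: list[dict]) -> str:
    """Ground the model's answer in the retrieved sources."""
    context = "\n".join(f"[{d['citation']}] {d['text']}" for d in retrieved)
    return (
        "Answer using only the sources below and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    case_law = [  # stand-in for a verified case law database
        {"citation": "Case A", "text": "limitation period for contract claims"},
        {"citation": "Case B", "text": "duty of care owed to a plaintiff"},
    ]
    hits = retrieve("what limitation period applies to contract claims", case_law)
    prompt = build_prompt("What limitation period applies to contract claims?", hits)
    print(prompt)  # in a real tool, this grounded prompt is sent to the LLM
```

Production systems typically replace the keyword overlap with semantic (embedding-based) search, but the grounding principle is the same: retrieve first, then generate only from what was retrieved.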

The move from non-specialized AI to AI trained on legal materials is a significant development. However, AI can still miss legal nuances in the data it retrieves. A case retrieved by AI, though properly cited, may be irrelevant to or inconsistent with a client’s goals. It is counsel’s responsibility to review, scrutinize and use their judgment to ensure that client needs are met.

Operational failures can happen

AI tools that generate documents can be trained on existing templates within the firm or legal team. For example, a practice may have a preferred format for non-disclosure agreements. Updating the document repository with the latest version of an approved non-disclosure agreement will allow legal AI to retrieve a template that aligns with the department’s current legal and compliance standards.

Even if the document repository is routinely updated, AI can inadvertently retrieve a template that is incorrect for the particular situation. It can also fail to adapt the template to a unique case and deliver language that misses client-specific needs. Lawyers who practise across different jurisdictions must take steps to ensure that the solution selects template language that complies with the legal obligations of each jurisdiction, as the sketch below illustrates.
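One way to reduce retrieval errors of this kind is to store templates with explicit jurisdiction and version metadata, so that only the newest approved version for the matter’s jurisdiction can be selected. The field names, jurisdictions and records below are assumptions for this sketch, not the schema of any actual product.

```python
# Illustrative template repository keyed by document type, jurisdiction
# and version. All records and field names are hypothetical.

TEMPLATES = [
    {"type": "nda", "jurisdiction": "ON", "version": 2, "approved": True,
     "text": "Ontario NDA template, v2 ..."},
    {"type": "nda", "jurisdiction": "ON", "version": 3, "approved": True,
     "text": "Ontario NDA template, v3 ..."},
    {"type": "nda", "jurisdiction": "BC", "version": 1, "approved": True,
     "text": "British Columbia NDA template, v1 ..."},
]

def latest_approved(doc_type: str, jurisdiction: str) -> dict:
    """Return the newest approved template for the matter's jurisdiction."""
    candidates = [
        t for t in TEMPLATES
        if t["type"] == doc_type
        and t["jurisdiction"] == jurisdiction
        and t["approved"]
    ]
    if not candidates:
        raise LookupError(f"no approved {doc_type} template for {jurisdiction}")
    return max(candidates, key=lambda t: t["version"])

print(latest_approved("nda", "ON")["text"])  # -> Ontario NDA template, v3 ...
```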

Aside from maintaining a current document repository, training legal professionals to write effective prompts for legal AI can mitigate some of these risks. Lawyers are skilled at researching and analyzing case law, but communicating effectively with AI is a new skill that continues to evolve.

Training can be supplemented by establishing standardized input or prompt banks within the practice to achieve accurate and consistent results. These repositories can save time and support professionals who are less familiar with interacting with AI models.
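A prompt bank can be as simple as a set of vetted, parameterized prompts that lawyers fill in with matter-specific details. The entry names and wording below are hypothetical examples for this sketch, not prompts endorsed by any product or firm.

```python
# Hypothetical prompt bank: vetted, parameterized prompts with slots
# that lawyers fill in per matter.

PROMPT_BANK = {
    "case_summary": (
        "Summarize {citation} in under 200 words, stating the holding, "
        "the key facts and its relevance to {client_issue}."
    ),
    "nda_review": (
        "Review the attached non-disclosure agreement governed by the law "
        "of {jurisdiction}. Flag any clause that departs from our approved "
        "template and explain each deviation in plain language."
    ),
}

def render(prompt_id: str, **slots: str) -> str:
    """Fill a standardized prompt with matter-specific details."""
    return PROMPT_BANK[prompt_id].format(**slots)

print(render(
    "case_summary",
    citation="Zhang v. Chen, 2024 BCSC 285",
    client_issue="the risks of unverified AI-generated citations",
))
```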

Managing insider threats begins early

Like any employee with access to highly sensitive data, AI requires oversight. Without appropriate safeguards, AI’s rogue actions may be difficult to detect before the damage is done. This oversight should begin even before the initial “shopping” phase for a solution. Decision-makers must systematically identify the ethical risk framework within their organization and practice, which will better position them to choose a solution that aligns with their risk tolerance. Tools developed in keeping with ethical AI principles tend to be more privacy- and security-oriented. Finally, a review of a tool’s default privacy settings and privacy policy will allow counsel to leverage trusted, secure software.

After implementation, regular audits and monitoring of the tool can flag suspicious activity early on. Restricting access to only what is essential and deploying data loss prevention measures will limit the blast radius. Personhood credentials, or PHCs, are digital identifiers that link an online action to a verified human user (Adler, Steven, Zoe Hitzig et al. “Personhood credentials: Artificial intelligence and the need for privacy-preserving ways to distinguish who is real online.” 15 Aug. 2024); they are recommended to ensure that any review or final sign-off is done by a specific, accountable professional. These proactive strategies will preserve the integrity of the legal process while leveraging the advantages of AI.

Conclusion

AI tools tailor-made for the legal industry have transformative potential. Their integration into law firms and organizations can lead to increased efficiency and cost-effectiveness. Implementing the appropriate safeguards at an early stage and continued human involvement can mitigate and manage the risks associated with this technology and help it realize its potential as an asset to the practice of law.  

Tara Sarvnaz Raissi (CIPP/C) is senior legal counsel (Ontario, Western and Atlantic Canada) at Beneva and is based out of Toronto. She has written extensively about AI use in legal practice.

The opinions expressed are those of the author and do not reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
