How Research and Experimentation with Generative Artificial Intelligence Are Advancing Efficiencies in Legal Use Cases
FTI Technology’s data innovation lab leads research, development and testing for disruptive solutions. Working with advanced machine learning and analytic technologies, the team is charting the strategic direction for generative artificial intelligence in legal services. FTI Technology experts involved in this testing and development work recently hosted legal industry experts in Dubai to discuss the benefits of AI experimentation and participate in live, hands-on lab testing. This post shares key insights from the event.
Over the past year, the legal community has expressed a growing openness to experimentation with technology. While lawyers may generally feel less comfortable with trial and error than, say, scientists, and an experiment is never guaranteed to succeed, generative AI has shifted the tone somewhat. There is now more acceptance of testing hypotheses, documenting outcomes and revising experiments to determine best practices. This acceptance includes the understanding that even when these efforts do not produce the desired outcomes, going through the process educates teams and opens conversations with clients and internal teams about the possibilities of generative AI.
As adoption of generative AI increases, so does technical sophistication within corporate legal departments and law firms. For example, an increasing number are already adopting third-party generative AI solutions. A small handful are building in-house capabilities by wrapping existing foundation models and creating a safe, secure environment in which lawyers can conduct their own experiments. One law firm recently identified an AI champion in each of its practice areas and allowed these champions to spend a sizeable proportion of their billable hours on generative AI-related research, development and experimentation. The time invested in these experiments will provide important learnings that may help the firm stand out as an effective early adopter.
What is the strategic direction for the use of generative AI in legal services?
“We’re experimenting with what we can do ourselves and adopt in-house to make us more efficient. Our digital team created a tool, and compliance was one of the initial use cases. We fed in our policies and began asking increasingly complex questions relating to conflicts. We realized the model keeps learning, so we’re working to use it for additional processes that require querying legal policies for guidance — possibilities might include data privacy guidance, data retention procedures or contracting.” – Aneeza Siddiqui, General Counsel and Company Secretary, ADNOC Group
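The policy-querying workflow described in the quote above broadly resembles a retrieval-augmented pattern: index the organisation's policies, pull back the passages most relevant to a question and ask a model to answer from those passages alone. The sketch below is a minimal illustration of that pattern, not the tool referenced in the quote; the openai Python client, the model name and the naive keyword retrieval are all assumptions standing in for whatever platform a team has approved.

```python
# Illustrative sketch of retrieval-augmented policy querying: split policy
# exports into passages, retrieve the most relevant ones for a question and
# ask a model to answer using only those passages. The openai client, model
# name and scoring approach are assumptions, not a specific product.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def load_policy_chunks(folder: str, chunk_size: int = 1500) -> list[str]:
    """Split plain-text policy exports into fixed-size passages (naive chunking)."""
    chunks: list[str] = []
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(errors="ignore")
        chunks += [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks

def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Very simple keyword-overlap retrieval; real systems would use embeddings."""
    terms = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(terms & set(c.lower().split())), reverse=True)
    return scored[:top_k]

def ask_policy_question(question: str, chunks: list[str]) -> str:
    """Answer a conflicts or compliance question from the retrieved passages only."""
    context = "\n\n---\n\n".join(retrieve(question, chunks))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the policy excerpts provided. "
                        "If the excerpts do not cover the question, say so."},
            {"role": "user", "content": f"Policy excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    chunks = load_policy_chunks("policy_exports")
    print(ask_policy_question("Can I accept a board seat at a supplier?", chunks))
```

In practice, teams would replace the keyword scoring with embeddings and add access controls, but even this rough shape makes the "query our own policies" use case concrete.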
FTI Technology’s experts in the Middle East region and globally are increasingly hearing from law firm clients who proactively ask to be involved in collaborative experiments. This is a new development in the e-discovery and investigations arena. In addition, requests for proposals are beginning to include generative AI-related questions for suppliers, indicating a readiness to explore opportunities, address potential risks proactively and assess providers’ generative AI capabilities prior to selection. This line of questioning is expected to increase as legal teams further embrace and understand technological advancements.
In terms of generative AI’s interaction with external counsel services, is that something you’re exploring?
“We're very much testing that right now. We've developed several use cases that we think we can test in terms of where we can start deploying AI or use a generative AI system. Simple things such as generating an NDA based on given parameters and established templates. When it comes to disputes, we're looking at what we can do about claims documentation reviews and summaries and are exploring partnerships with law firms and third-party service providers.” – Aneeza Siddiqui, General Counsel and Company Secretary, ADNOC Group
In exploring potential use cases, there are many viable opportunities for law firms and corporate legal teams. Because generative AI can interpret complex language relationships, it offers significant opportunities for deriving insights and summarising information from large pools of company or client data.
At the close of a dispute, for example, data from the matter and other related matters could be mined to reveal adjacent insights that inform legal teams of unknown risks, patterns of concerning behaviour, the organisation’s compliance posture, opportunities for company enrichment and more.
Alongside the potential benefits, there are also challenges and risks to address. Organisations need a way to uphold data provenance for any data fed into generative AI tools and to understand how their systems behave as they are trained. Support from data experts and governance professionals to establish guardrails around the data and systems is an important step as tools are scaled from experimentation to implementation.
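One practical guardrail is to record basic provenance metadata, such as the source system, custodian, content hash and approval status, for every document before it reaches a generative AI tool, so that inputs used for training or prompting can be audited later. The sketch below shows one hypothetical way to keep such a ledger in plain Python; the field names and the JSON-lines format are illustrative assumptions rather than a standard.

```python
# Hypothetical provenance ledger for data supplied to a generative AI tool.
# Field names and the JSON-lines storage format are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    document_id: str
    source_system: str      # e.g. the DMS or review platform the file came from
    custodian: str          # who owns or supplied the data
    sha256: str             # content hash so later changes can be detected
    approved_for_ai_use: bool
    recorded_at: str

def register_document(doc_id: str, source: str, custodian: str,
                      content: bytes, approved: bool,
                      ledger_path: str = "ai_provenance.jsonl") -> ProvenanceRecord:
    """Hash the content and append an auditable record before the data is used."""
    record = ProvenanceRecord(
        document_id=doc_id,
        source_system=source,
        custodian=custodian,
        sha256=hashlib.sha256(content).hexdigest(),
        approved_for_ai_use=approved,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(ledger_path, "a", encoding="utf-8") as ledger:
        ledger.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: only documents with an approved record should be passed to the model.
rec = register_document("DOC-001", "contract-repository", "legal-ops",
                        b"sample contract text", approved=True)
print(rec.sha256)
```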
What challenges can persist with generative AI over time, particularly as the technology changes so quickly?
“This whole issue of continuous diligence around your generative AI tools is quite intensive. The need for guardrails and protection is significant because of these challenges. We might roll out a platform today and then discover it's infringing on the copyright of a source. Making sure you have a continuous system of monitoring and checking the tools that you are using is critical, but it's not easy.” – Nasser Ali Khasawneh, Global Head of AI & the TMT Sector at Eversheds Sutherland
Additionally, legal teams may find it useful to implement an experiments toolkit. For example, while it is easy to create an account with a large language model provider and tinker, if something goes wrong or unexpected results are produced, the tester needs to be able to document the steps taken so the problem can be reproduced. This is needed to eventually establish a reliable workflow. A robust experiment and evaluation framework is therefore critical to help teams work through, and learn from, the process of trial and error.
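A minimal version of such a toolkit can be as simple as logging every trial, including the prompt, the model and settings used, the output produced and whether it met expectations, so that surprising results can be reproduced and compared later. The sketch below shows one hypothetical shape for that log; the field names and the run_model stub are assumptions rather than a prescribed framework.

```python
# Hypothetical experiment log for generative AI trials. The goal is simply to
# capture enough detail (prompt, model, settings, output, assessment) that a
# surprising result can be reproduced and discussed later.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ExperimentRun:
    experiment_name: str
    model: str
    settings: dict
    prompt: str
    output: str
    met_expectations: bool
    notes: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_run(run: ExperimentRun, path: str = "experiment_log.jsonl") -> None:
    """Append the run to a JSON-lines file so trials accumulate into a record."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(run)) + "\n")

def run_model(prompt: str, model: str, settings: dict) -> str:
    """Stub standing in for whichever approved model or platform is under test."""
    return "<model output goes here>"

# Example trial: summarising an NDA template and recording whether the output held up.
prompt = "Summarise the confidentiality obligations in the attached NDA template."
settings = {"temperature": 0.2}
output = run_model(prompt, model="internal-llm-sandbox", settings=settings)
log_run(ExperimentRun(
    experiment_name="nda-summary-v1",
    model="internal-llm-sandbox",
    settings=settings,
    prompt=prompt,
    output=output,
    met_expectations=False,
    notes="Summary omitted the carve-out for regulator requests; investigate.",
))
```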
From a more global and regulatory view, what do we need to be aware of in terms of the use of generative AI in discovery, regulation and investigations?
“When it comes to discovery, the trend globally is towards general regulation of AI and not the specific regulation by the judiciary of e-discovery and the like. It's worth looking at the overall regulatory picture before coming into e-discovery because the general regulatory approach is affecting discovery. It’s an area that requires international cooperation, as everything about generative AI does not fit the national border model. As we saw with the first AI summit held in the U.K., there were about 30 countries that participated, so it’s instantly global.” – Nasser Ali Khasawneh, Global Head of AI & the TMT Sector at Eversheds Sutherland
Whenever a new tool is introduced, there is a tendency to worry that associated skills will be lost. That has been a major point of discussion surrounding generative AI, and fairly so. However, the calculator did not erase people's math skills (at least not entirely); rather, it allowed humans to tackle more complex problems faster. When implemented correctly, generative AI can do the same. The more experimentation that is done, the more sophisticated teams will become in using it effectively, and the sooner best practices will emerge for legal use cases.
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.