Blog Post
Generative AI: Near-Term Opportunities to Enhance Human Insight and Efficiency in Legal Use Cases
FTI Technology’s data innovation lab leads research, development and testing for disruptive technologies. With a long track record of working with advanced machine learning and analytics technologies, the team is increasingly evaluating generative artificial intelligence and the role of large language models such as ChatGPT in disputes and investigations, compliance and other use cases. Several FTI Technology experts involved in this testing and development work recently hosted legal industry colleagues in London to discuss practical generative AI applications and participate in live, hands-on lab testing. This article captures key takeaways from the event.
The February release of Google’s Gemini 1.5 is one of many examples illustrating just how quickly generative AI is advancing, particularly as it relates to potential scalability in e-discovery and investigations. The release expands the context window from the 32,000 tokens available in GPT-4 to 1 million tokens. The larger a model’s context window, the more data it can process in a single pass, a clear requirement for meeting the volume demands of modern e-discovery matters.
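To make that scale difference concrete, the minimal sketch below estimates how many batches a small document set would require under each context window. The four-characters-per-token heuristic and the document sizes are illustrative assumptions for this post, not figures from any model vendor or matter.

```python
# Minimal sketch: estimating whether a document set fits a model's context
# window, and splitting it into batches when it does not. The token counts
# and the four-characters-per-token heuristic are illustrative assumptions.

GPT4_32K_CONTEXT = 32_000       # tokens, per the figures cited above
GEMINI_1_5_CONTEXT = 1_000_000  # tokens

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)

def batch_documents(documents: list[str], context_tokens: int, reserve: int = 2_000) -> list[list[str]]:
    """Group documents into batches that fit the context window,
    reserving room for instructions and the model's response."""
    budget = context_tokens - reserve
    batches, current, used = [], [], 0
    for doc in documents:
        size = estimate_tokens(doc)
        if current and used + size > budget:
            batches.append(current)
            current, used = [], 0
        current.append(doc)
        used += size
    if current:
        batches.append(current)
    return batches

if __name__ == "__main__":
    docs = ["Lorem ipsum " * 500] * 40  # stand-in for a small review set
    print(len(batch_documents(docs, GPT4_32K_CONTEXT)))    # several batches
    print(len(batch_documents(docs, GEMINI_1_5_CONTEXT)))  # likely a single batch
```

The point of the sketch is simply that a larger window means fewer round trips and less stitching of partial results, which is where much of the e-discovery scalability benefit is expected to come from.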
LexisNexis reported that approximately 25% of legal professionals are already using generative AI regularly, up from 11% in July 2023, and an additional 35% plan to begin using the technology. More than 60% of law firms are responding to generative AI use with changes to daily operations, and the percentage of lawyers confirming they use AI doubled over the last six months. Every indicator points toward rapidly growing capability and adoption.
The sheer pace of change, combined with the sensitive nature of the data and use cases likely to intersect with generative AI, makes the technology both exciting and risky. To support clients in capturing insight and innovation while mitigating digital risk, FTI Technology has established the following parameters to guide the use of generative AI:
- Using it as a tool that complements but does not replace humans. Validation controls must be built into everything, and as AI continues to evolve, we remain thoughtful in its application alongside human knowledge and discernment.
- Remaining platform agnostic to ensure the best possible solution for each unique matter or challenge. There are many models today, and there will be more tomorrow. By maintaining openness to numerous models, clients can continue to evolve as the technology does.
- Leading with practicality, so that innovation is pursued deliberately rather than for its own sake. Ongoing testing and experimentation with tools and large language models supports a balanced approach; to that end, FTI Technology has built a research playground integrated with its e-discovery environment to allow testing across different models, prompts and approaches (a simplified illustration of such a harness follows this list).
- Treating flexibility as a key tenet, so that training sets can be refined, prompts devised, quality reviewed, models adjusted, hallucinations understood and workflows changed in support of continually improving outputs.
- Maintaining caution and healthy skepticism (i.e., trust but verify), with a focus on and commitment to accuracy and governance.
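For readers curious what platform-agnostic testing can look like in practice, the sketch below runs the same prompts against interchangeable model backends and records simple quality signals for human review. The backends, test case and keyword checks are hypothetical placeholders, not a description of FTI Technology’s actual playground.

```python
# Illustrative sketch of a platform-agnostic prompt test harness.
# The model backends here are placeholders; in practice each would wrap a
# real API or a locally hosted model.

from dataclasses import dataclass
from typing import Callable

ModelFn = Callable[[str], str]  # prompt in, completion out

@dataclass
class TestCase:
    name: str
    prompt: str
    expected_keywords: list[str]  # a simple validation signal for a human reviewer

def run_matrix(models: dict[str, ModelFn], cases: list[TestCase]) -> list[dict]:
    """Run every prompt against every model and record simple quality signals.
    Results are meant to be reviewed by a human, not treated as ground truth."""
    results = []
    for model_name, model in models.items():
        for case in cases:
            output = model(case.prompt)
            hits = [kw for kw in case.expected_keywords if kw.lower() in output.lower()]
            results.append({
                "model": model_name,
                "case": case.name,
                "keyword_hits": f"{len(hits)}/{len(case.expected_keywords)}",
                "output": output,
            })
    return results

# Placeholder backends purely for demonstration.
def fake_model_a(prompt: str) -> str:
    return "Summary: the contract discusses indemnification and termination."

def fake_model_b(prompt: str) -> str:
    return "The parties agreed on payment terms."

if __name__ == "__main__":
    cases = [TestCase("summary-1", "Summarise the attached contract.",
                      ["indemnification", "termination"])]
    for row in run_matrix({"model-a": fake_model_a, "model-b": fake_model_b}, cases):
        print(row["model"], row["case"], row["keyword_hits"])
```

Treating each model as an interchangeable callable is what keeps this kind of harness open to new models as they are released, which is the practical meaning of remaining platform agnostic.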
Ongoing testing within FTI Technology’s labs has surfaced several initial use cases where generative AI has the potential to be applied in more sophisticated ways. Participants at the event were able to simulate some of these use cases in our labs and gain firsthand experience with the nuances of prompt engineering and the pitfalls of hallucinations. The use cases covered include the following (a brief illustrative sketch of the first, document summarisation, follows the list):
- Document summarisation
- Privilege review
- Support for fast fact-finding when used to enhance human processes
- Compliance monitoring
- Expert report fact checking
- Optical character recognition and transcription cleanup
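As a simple, hypothetical illustration of the document summarisation use case, the sketch below pairs a summarisation prompt with a crude hallucination screen: any capitalised terms or numbers in the summary that never appear in the source document are flagged for human review. The summarise() function is a stand-in for whichever model a matter team has approved, and the check is a cheap validation signal rather than a guarantee of accuracy.

```python
# Hypothetical document summarisation workflow with a crude hallucination screen.
# summarise() is a placeholder for a real model call built from SUMMARY_PROMPT.

import re

SUMMARY_PROMPT = (
    "Summarise the following document in three sentences. "
    "Use only information stated in the document.\n\n{document}"
)

def summarise(document: str) -> str:
    """Placeholder for a real model call using SUMMARY_PROMPT."""
    return "Acme Corp agreed to deliver the software by 1 March 2024."

def unsupported_terms(summary: str, document: str) -> list[str]:
    """Return capitalised terms and numbers in the summary that are absent
    from the source document - a cheap validation signal, not a guarantee."""
    candidates = re.findall(r"\b(?:[A-Z][\w-]+|\d[\d,./-]*)\b", summary)
    return sorted({c for c in candidates if c not in document})

if __name__ == "__main__":
    source = "Acme Corp agreed to deliver the software by 1 March 2024."
    draft = summarise(source)
    flags = unsupported_terms(draft, source)
    print("Summary:", draft)
    print("Flag for human review:" if flags else "No obvious unsupported terms.", flags)
```

The validation step reflects the first parameter above: the model drafts, but a human reviews anything the screen flags before the summary is relied on.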
Just as the technology will continue to change, so too will the regulatory environment (albeit at a slower pace). The EU AI Act, for example, is considered only the tip of the iceberg in the development of frameworks that will govern safe use of AI. As organisations navigate how to use AI and remain compliant, the need for clear documentation (transparency around how data is collected, processed, used and shared), as well as controls for identifying and labelling AI-generated content and distinguishing it from human-generated content, will be important aspects of governance. Likewise, it will be critical for law firms and legal service providers to explain clearly to clients how, when and why AI is being used to support disputes and investigations workflows.
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.