Blog Post

Discussing IQ.AI for Emerging Data Sources

Tim, while IQ.AI for Emerging Data Sources was just released in March, this isn’t the first iteration of applying AI to emerging data source challenges. Can you speak to some of the earlier uses that are still relevant today?

Exactly. The current AI capabilities are simply the latest evolution. For more than a decade, we’ve been using various forms of machine learning, natural language processing and artificial intelligence in e-discovery and investigations, including to rapidly analyze large volumes of complex data types. 

One example is our proprietary Universal Messaging Platform, a powerful tool that enables teams to explore, enrich and dynamically review diverse data sources in context. Custom-built on a modern technology stack, it supports advanced analytics and data processing tailored for short-form communications like chat. Beyond performing density analysis and segmenting conversations into logical, matter-specific threads involving key custodians or keywords, the platform also delivers summary analytics across channels and group chats, providing insights into participants, message volumes, dates of correspondence and, where available, channel descriptions across multiple custodians’ data.
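
To make the idea of per-channel summary analytics concrete, here is a minimal sketch assuming a flattened chat export; the column names and sample rows are illustrative only, not the Universal Messaging Platform’s actual schema:

```python
import pandas as pd

# Hypothetical flattened chat export; field names are invented for this sketch.
messages = pd.DataFrame([
    {"channel": "deal-team", "sender": "alice", "sent_at": "2023-01-04T09:12:00", "text": "Draft attached"},
    {"channel": "deal-team", "sender": "bob",   "sent_at": "2023-01-04T09:15:00", "text": "Reviewing now"},
    {"channel": "ops",       "sender": "carol", "sent_at": "2023-02-01T14:02:00", "text": "Ping"},
])
messages["sent_at"] = pd.to_datetime(messages["sent_at"])

# Per-channel rollup: participants, message volume and date range.
summary = messages.groupby("channel").agg(
    participants=("sender", lambda s: sorted(s.unique())),
    message_count=("text", "size"),
    first_message=("sent_at", "min"),
    last_message=("sent_at", "max"),
)
print(summary)
```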

In addition to its analytical capabilities, the platform enables action, streamlining message culling and processing by standardizing key metadata such as sender, recipients, timestamps and content to efficiently generate targeted, review-ready data sets containing only the information needed for downstream review and production.
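
As a rough illustration of that standardization and culling step, the sketch below maps one source’s raw chat fields onto a hypothetical common schema and filters messages to key custodians within a review window. The schema, field names and helper functions are assumptions made for this example, not the platform’s actual data model:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Illustrative common schema for normalized messages.
@dataclass
class NormalizedMessage:
    source: str
    sender: str
    recipients: List[str]
    sent_at: datetime
    content: str

def normalize_slack(raw: dict) -> NormalizedMessage:
    # Map one source's raw fields onto the shared schema.
    return NormalizedMessage(
        source="slack",
        sender=raw["user"],
        recipients=raw.get("members", []),
        sent_at=datetime.fromtimestamp(float(raw["ts"])),
        content=raw.get("text", ""),
    )

def cull(messages, custodians, start, end):
    # Keep only messages involving key custodians within the review window.
    return [
        m for m in messages
        if (m.sender in custodians or any(r in custodians for r in m.recipients))
        and start <= m.sent_at <= end
    ]
```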

Even prior to the development of our IQ.AI capabilities, the Universal Messaging Platform included artificial intelligence and novel data models to support compliance monitoring, sentiment analysis and internal investigations workflows across dozens of data sources.

Now, IQ.AI is integrated with this solution and FTI Technology’s Connect solution. What are some of the new capabilities?

When the Universal Messaging Platform and Connect are integrated with our generative AI capabilities, our experts can quickly and defensibly enhance contextualization and summarization of chat threads and conversations. Moreover, these tools can analyze and interpret rich media, including images, GIFs, reactions, emojis, audio clips and other modern communication formats across a wide range of cloud-based applications and collaboration platforms.

Subject-matter experts and digital forensics investigators can leverage generative AI to enrich metadata, extract key facts and uncover relevant patterns. These capabilities support efficient data processing in a closed, secure environment to mitigate risk, reduce data volumes and accelerate access to critical insights.

We’re continuously refining our models to summarize entire channels and conversation threads, highlighting key points and identifying central themes. These models group content thematically, regardless of the originating system, message flow or format, enabling experts to piece together what happened, when and who was involved, even across fragmented data sources.
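
As one conventional way to approximate this kind of thematic grouping, shown purely for illustration and not as the models described above, message text from different systems can be vectorized and clustered so related content lands together regardless of its source:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Messages pooled from different systems; text only, invented for illustration.
texts = [
    "Please review the merger agreement draft",
    "Lunch at noon?",
    "Updated merger agreement attached, see section 4",
    "Anyone free for lunch tomorrow?",
]

# Vectorize and cluster so thematically related messages group together
# irrespective of which platform they came from.
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, texts):
    print(label, text)
```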

Ultimately, our teams continue to build powerful solutions that reveal communication patterns and surface key facts across a diverse range of modern data types and sources. 

What’s an example of an emerging data source challenge where these capabilities are really making an impact?

Linked content is one of the most pressing and debated topics in e-discovery right now. As cloud-based collaboration grows, more documents are shared via hyperlinks rather than as traditional static attachments, raising complex questions about how these dynamic references should be handled in discovery.

Recent surveys show that hyperlinked content is top of mind for e-discovery professionals, and recent case law is beginning to weigh in. In one high-profile matter, a court ruled that while hyperlinked items differ from traditional attachments, they should be considered attachments in the context of e-discovery production requirements. At the same time, the opinion acknowledged the significant technical challenges of reconstructing hyperlinked content, echoing the concerns of many industry experts who believe that the fluid, evolving nature of linked content makes it difficult to collect, process and review using traditional methods.

Our team has extensive experience navigating matters involving large volumes of linked content. We’ve developed analytics to proactively examine hyperlinks, assess link composition and identify source patterns. By integrating generative AI into our existing workflows, we’re further enhancing enrichment and efficiency, helping clients defensibly manage this emerging data challenge at scale.
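
A simplified sketch of the kind of link-composition analysis described above might extract hyperlinks from document text and tally their source domains to see where linked content lives; the documents and URLs below are invented for illustration and do not reflect our production tooling:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Illustrative document bodies containing linked content.
documents = [
    "Latest figures: https://drive.google.com/file/d/abc123 and https://sharepoint.example.com/sites/deal/doc1",
    "See https://drive.google.com/file/d/def456 for the signed copy",
]

# Extract hyperlinks and count their source domains to reveal link composition
# (e.g., which cloud repositories dominate across a document population).
url_pattern = re.compile(r"https?://\S+")
domains = Counter(
    urlparse(url).netloc
    for doc in documents
    for url in url_pattern.findall(doc)
)
print(domains.most_common())
```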

Are there any other aspects of generative AI that overlap with emerging data sources challenges?

As generative AI becomes more embedded across enterprises, the related data artifacts it creates, including prompt-response logs, file access records and AI-generated outputs, will inevitably enter the scope of e-discovery. These artifacts represent the next wave of emerging data sources.

In anticipation, our experts have proactively tested how leading generative AI tools create, store and share data, gaining critical insights into how this information can be preserved, accessed and reviewed. When these data types begin surfacing in litigation, organizations will need defensible strategies for preservation, collection and processing. 

As with earlier waves of emerging data, our teams are already preparing for the technical complexities and novel challenges generative AI will introduce, so that when these issues arrive, we’re ready to guide our clients with confidence. 


The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.