Blog Post
Global Investigations Review Feature: Massimo Chiasera of Bär & Karrer and FTI Technology’s Jerry Lay Discuss Deepfakes in Litigation and Investigations
The global hype around generative artificial intelligence such as ChatGPT is larger, fiercer and more speculative than many previous waves of technological disruption. However, the current fascination with ChatGPT's text-based language capabilities often overlooks the equally impactful and potentially riskier content AI can generate in the form of audio, video and images.
The number of deepfake videos created is estimated to double every six months. According to the Wall Street Journal, fewer than 10,000 deepfakes had been detected online in 2018; only four years later, that figure had reached the millions.
Deepfakes are not only increasing in volume. Advancements in artificial intelligence have given rise to a new and continually rising level of deepfake sophistication, in which digital media can be manipulated so well that the falsification becomes undetectable (or nearly so) to the human eye or ear. The believability that AI can conjure in mere moments is also increasingly being exploited for malicious purposes.
These developments naturally pose a number of legal challenges. In disputes and investigations, deepfakes, and the authenticity of supposed evidence of any wrongdoing or alleged facts, are increasingly coming into question. Digital forensic investigators are being asked to help identify deepfakes and prove the validity of images, audio recordings, videos and other media. Lawyers are being approached with questions on how to treat such artifacts in internal investigations as well as in governmental and/or contentious proceedings. Conversely, organizations that want to develop this technology or use it in their work processes face the challenge of correctly identifying and accounting for the applicable legal standards.
These issues were covered at length in a recent article published in Global Investigations Review. The article, “True lies: deepfake-related challenges in internal investigations, litigation and digital forensics,” discusses the technical evolution that is making it more difficult to determine the authenticity of videos and images, and the emerging legal and regulatory issues that corporations, regulators and courts are grappling with. Key takeaways from the article include:
- Advancements in machine learning are helping detect items created by other machine learning tools. However, these detection tools are not yet foolproof and are limited to providing probability scores that indicate the likelihood that an item is a deepfake (a simplified sketch of how such scores might be triaged follows this list).
- The important role digital forensics specialists play in defensibly collecting, preserving and analyzing the metadata of digital files — and comparing this information across an array of dimensions and sources — to authenticate or refute their legitimacy.
- Existing and emerging frameworks in the European Union and Switzerland that determine how AI may be used and admitted in legal proceedings in those jurisdictions and extraterritorially.
- How these issues overlap with broader AI governance requirements and considerations.
- Deepfake-related risks and the documentation, monitoring, transparency and reporting obligations that may surround AI use and AI-generated content under data protection laws and other regulations.
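To make the probability-score point above more concrete, the following is a minimal, hypothetical sketch (in Python) of how such a score might be triaged in an investigation workflow. The DetectionResult structure, the triage function and the thresholds are assumptions for illustration and are not drawn from any specific detection tool.

```python
# A minimal, hypothetical sketch of how a deepfake detector's probability score
# might be triaged in an investigation workflow. DetectionResult, triage() and
# the thresholds below are illustrative placeholders, not a real product's API.

from dataclasses import dataclass


@dataclass
class DetectionResult:
    file_path: str
    deepfake_probability: float  # 0.0 = likely authentic, 1.0 = likely synthetic


def triage(result: DetectionResult,
           review_threshold: float = 0.5,
           escalation_threshold: float = 0.85) -> str:
    """Translate a probability score into a suggested investigative action.

    The thresholds are assumptions for illustration; in practice they would be
    calibrated against the detector's validation data and the matter's risk
    tolerance, and a score alone would never be treated as proof.
    """
    if result.deepfake_probability >= escalation_threshold:
        return "escalate: high likelihood of manipulation, route to a forensic specialist"
    if result.deepfake_probability >= review_threshold:
        return "review: inconclusive score, corroborate with metadata and source analysis"
    return "retain: low score, document the result alongside chain-of-custody records"


# Example with a fabricated score for a hypothetical exhibit:
print(triage(DetectionResult("exhibit_042.mp4", 0.91)))
```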
Until recently, creating deepfakes required substantial time, resources and technical skills, and deepfakes could often be detected by a discerning eye or a simple examination of the file. That has all changed. A layperson can now provide a generative AI tool with basic instructions, and the technology will almost instantaneously produce a realistic matching image, video or audio recording. Likewise, proving the veracity of those images and videos may require complex forensic inspection of metadata and other related digital artifacts. Organizations need to be prepared for these new digital risks and the new technical expertise that will be needed to verify and defend evidence. Read the full article in Global Investigations Review.
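For a flavor of what the forensic inspection of metadata mentioned above can involve at its most basic level, here is a minimal sketch using only the Python standard library. It records a cryptographic hash and filesystem timestamps, the sort of data points an examiner might compare across copies and sources, and is illustrative only rather than a substitute for validated forensic tooling.

```python
# A minimal sketch, using only the Python standard library, of basic data points
# an examiner might record when assessing a file's provenance: a cryptographic
# hash and filesystem timestamps. Real engagements rely on validated forensic
# tooling and defensible acquisition; this is illustrative only.

import hashlib
import os
from datetime import datetime, timezone


def collect_basic_metadata(path: str) -> dict:
    """Return a SHA-256 hash and filesystem metadata for the given file."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)

    stat = os.stat(path)
    return {
        "path": path,
        "size_bytes": stat.st_size,
        "sha256": digest.hexdigest(),
        # Filesystem timestamps can themselves be altered; they are one data
        # point to compare across sources, not conclusive evidence on their own.
        "modified_utc": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
    }


# Comparing records for the same file from two custodians: a hash mismatch or an
# implausible timestamp is a prompt for deeper analysis, not proof of forgery.
# print(collect_basic_metadata("exhibit_042.mp4"))
```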
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.