
The White House Addresses Responsible AI: Investigating in an AI World

Brittany Roush

Editor's Note: This is the third article in a series we're writing about the recent Executive Order on artificial intelligence. In this installment of our discussion about the EO's anticipated impact on e-discovery, we turn our focus to use cases within investigations.

On October 30, President Joe Biden took a significant step in regulating the use of AI within the United States with the signing of the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."

At the heart of the Executive Order is a resolute commitment to shielding Americans from AI-enabled fraud and deception. The order mandates the establishment of standards and best practices for detecting AI-generated content and authenticating official content, involving the development of guidance for content authentication, or “watermarking,” to distinctively label AI-generated content. Federal agencies are poised to adopt these tools, ensuring that citizens can readily discern the authenticity of communications from their government and setting a global example for both the private sector and international governments.

Due to the impact of AI, the landscape of investigations, particularly in the realm of e-discovery, is undergoing a major shift. The Executive Order, in essence, acknowledges the existential threat AI poses to the concept of defensibility—a cornerstone in the preservation, collection, maintenance, and production of evidence in a manner beyond reproach. From the surge of deepfakes to challenges in evidence and data provenance, the identification of what is deemed "real" and admissible is rapidly evolving into a pressing concern for investigators.

Proposed solutions, such as watermarking, leave much to be desired but may act as the first step in authenticating evidence. If adopted as a standard for legitimization, legal technology providers will be tasked with developing tools to extract, process, and analyze watermark information, ensuring the integrity of investigations in the face of AI-generated data. Even if watermarking isn’t adopted, there will be a need for some type of authentication method, and for the industry to enhance AI detection tools, safeguarding the precision and accuracy of e-discovery practices.

Let’s dig into all of these issues.

What is “Real” Anyway?

In 2017, the world was introduced to the concept of “deepfake” technology when a Redditor created the r/deepfakes subreddit. (Important to note here that, as you might expect if you know anything about the internet and Reddit, the content of these publicly shared deepfakes is often inappropriate and/or disturbing; I would suggest only visiting r/SFWdeepfakes if you’re curious about what’s out there and don’t want to wade into a lot of garbage.) The conversation took off in this and other settings, and brought to light what researchers have known since around 1997: machine learning can be used to alter the content of a video in a convincingly realistic way.

Fast forward to 2023, and deepfakes are ubiquitous in media. Creating a realistic deepfake of a still photo is a trivial effort, especially with the release of technologies like Midjourney and Stable Diffusion. The music industry has used deepfake technology to create moving holograms of dead musicians (such as Hologram Tupac). Film studios are resurrecting deceased actors (e.g., Peter Cushing in Star Wars: Rogue One) or making living ones youthful again in flashbacks (e.g., Indiana Jones and the Dial of Destiny). Deepfakes are even used in the medical field to simulate MRI images for medical research, and audio deepfakes are used to help patients regain their voices after an illness.

If you do a quick Google search on the misuse of deepfakes, you’ll find many articles and thought pieces on the potential for misuse by nation states, threat actors, and other criminals. As of this writing, there have been very few real examples of convincing deepfakes used by threat actors. Nevertheless, there is an expectation that, given how easily accessible deepfake technology is, this will change in the coming years, so it’s no surprise that the Executive Order contained a provision to “protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.”

Riana Pfefferkorn writes in “'Deepfakes' in the Courtroom”: “What is more, even in cases that do not involve fake videos, the very existence of deepfakes will complicate the task of authenticating real evidence. The opponent of an authentic video may allege that it is a deepfake to try to exclude it from evidence or at least sow doubt in the jury’s minds. Eventually, courts may see a ‘reverse CSI effect’ among jurors.”

This has been the case in the Israeli-Palestinian conflict, where claims of deepfakes by both sides far outpace verified deepfakes, leading to confusion and distrust across the globe.

Indeed, since the launch of ChatGPT (and arguably before that), the AI community has been advocating for authentication tools like “watermarking” AI-generated content (though watermarking is far from foolproof) to help users see that AI created it. That sentiment has since been echoed across nearly every industry, and in many ways, it is acutely an e-discovery problem. After all, proving that something is what it claims to be is at the heart of every investigation.

The Hon. Xavier Rodriguez writes in “Artificial Intelligence (AI) and the Practice of Law” that “Although technology is now being created to detect deepfakes (with varying degrees of accuracy), and government regulation and consumer warnings may help, no doubt if evidence is challenged as a deepfake, significant costs will be expended in proving or disproving the authenticity of the exhibit through expert testimony.”

With the release of the Office of Management and Budget’s implementation guidance following the Executive Order, we can expect that the Department of Justice will start to consider how to enforce evidence provenance as soon as the new year (if they’re not already considering it).

There are already several ways to identify AI-generated content, though none are comprehensive and success rates vary. One mechanism is via metadata, but using metadata alone is an incomplete solution. Metadata can be manipulated and wiped with the right tools, which is why e-discovery processes put such a high degree of importance on extracting and preserving metadata in a way that is accurate and enduring. In e-discovery, in particular, metadata is the provenance of a document.
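To make that concrete, here is a minimal sketch of what metadata-based triage might look like, using the Pillow imaging library. The file name is hypothetical, and the idea that a generator leaves its settings in EXIF tags or text chunks is an assumption that varies by tool; as noted above, metadata can be wiped or forged, so an empty result proves nothing.

```python
# A minimal sketch of metadata-based provenance triage, using Pillow.
# Assumption: some generators record settings in EXIF tags or PNG text
# chunks; metadata can be wiped or forged, so absence proves nothing.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image_metadata(path: str) -> dict:
    """Collect EXIF tags and embedded text chunks for human review."""
    img = Image.open(path)
    findings = {}
    # EXIF tags (common in JPEG/TIFF): map numeric IDs to readable names.
    for tag_id, value in img.getexif().items():
        findings[TAGS.get(tag_id, tag_id)] = value
    # Format-specific info (e.g., PNG text chunks, where some image
    # generators record their parameters).
    findings.update(img.info)
    return findings

if __name__ == "__main__":
    # "exhibit_001.png" is a hypothetical file name for illustration.
    for key, value in inspect_image_metadata("exhibit_001.png").items():
        print(f"{key}: {str(value)[:80]}")
```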

I mentioned that metadata can be manipulated with the right tools, but AI itself can be used to manipulate metadata, making any sort of provenance metadata self-defeating. Experts have long hoped that digital watermarking is the more secure and reliable methodology. (Read how it works in the next section.)

The U.S. Government is making a bet that watermarking can deliver. Per the Executive Order, “the Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

Watermarking in a Digital Age

Digital watermarking is the practice of concealing information within a digital asset, such as images, audio, video, or documents. This embedded information typically includes a unique identifier that conveys content provenance, authenticity, and copyright details, securely tied to the asset itself. The process involves both embedding and detecting the hidden information, establishing the asset's origin and integrity even if the file is altered or corrupted.
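To illustrate the embed-and-detect cycle in its simplest possible form, here is a toy sketch using least-significant-bit (LSB) embedding, a classic textbook technique. This is an illustration only: production watermarks such as SynthID use far more robust schemes designed to survive compression, cropping, and re-encoding, which naive LSB embedding does not.

```python
# A toy illustration of the embed/detect cycle behind digital
# watermarking, using least-significant-bit (LSB) embedding. This is
# NOT how production watermarks (e.g., SynthID) work; naive LSB marks
# do not survive compression or re-encoding.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide a bit sequence in the lowest bit of the first pixel values."""
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite the lowest bit
    return marked

def detect_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Read back the lowest bit of the first n_bits pixel values."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    payload = [1, 0, 1, 1, 0, 0, 1, 0]  # a tiny identifier
    marked = embed_watermark(image, payload)
    assert detect_watermark(marked, len(payload)) == payload
```

The structural takeaway: a watermark ties an identifier to the asset itself, but anything embedded this naively can be stripped, which is exactly the weakness researchers describe below.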

Digimarc, a leader in digital watermarking, outlines five essential characteristics for an effective digital watermark: it must be covert and machine-readable, immutable, ubiquitous, redundant, and secure. These characteristics ensure that the watermark is detectable by machines, remains inseparable from the digital asset, covers the entirety of the asset for enhanced protection, can withstand significant damage or manipulation, and securely encrypts information to prevent unauthorized access or removal.

AI providers have already started to explore this requirement.

In May, at its annual Build conference, Microsoft committed to providing provenance mechanisms for OpenAI products.

Adobe, in partnership with several other companies, created a joint development foundation project called the Coalition for Content Provenance and Authenticity (C2PA) to address the problem of misinformation “through the development of technical standards for certifying the source and history (or provenance) of media content.”

Adobe also signed a non-binding agreement with the White House to develop watermarks. Google’s DeepMind has already released a beta version of a watermark, SynthID, and Meta and Amazon have also committed to watermarking content.

However, academic studies are not optimistic about the use of watermarking as a single solution for provenance verification.

“Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored,” says Ben Colman, the CEO of AI-detection startup Reality Defender. Several other research projects, conducted at the University of Maryland and Berkeley, back up Colman’s claims. Even visible watermarks can be manipulated. Not only can they be removed to obfuscate that something was AI-generated, but they can also be added to create false positives, bringing real evidence and its validity into question.

Despite these considerable flaws, watermarking may still be able to act as the first line of defense in attempts to verify evidence provenance. It is easy to foresee a future where forensic specialists first check whether a file has a watermark before moving on to other methods of AI-content detection to identify evidence that data has been fabricated or tampered with. That is largely what forensic specialists do now when the validity of evidence comes into question.
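In code terms, that triage order might look something like the following sketch. Both helper functions are hypothetical stand-ins: real watermark detectors are vendor- and format-specific, and AI-content classifiers produce statistical scores, not definitive answers.

```python
# A conceptual sketch of the triage order described above. Both helper
# functions are hypothetical placeholders, not real forensic tooling.

def has_known_watermark(path: str) -> bool:
    """Placeholder for a vendor watermark detector (format-specific)."""
    return False  # assume no watermark, for illustration

def ai_detection_score(path: str) -> float:
    """Placeholder for a statistical AI-content classifier (0.0-1.0)."""
    return 0.5  # assume an inconclusive score, for illustration

def triage_exhibit(path: str, threshold: float = 0.8) -> str:
    # Step 1: a cheap, mostly deterministic check for a known watermark.
    if has_known_watermark(path):
        return "AI watermark detected: treat as AI-generated"
    # Step 2: no watermark proves nothing; escalate to statistical
    # detection and, ultimately, human review.
    if ai_detection_score(path) >= threshold:
        return "flagged as likely AI-generated: route to specialist review"
    return "inconclusive: route to human review"

if __name__ == "__main__":
    print(triage_exhibit("exhibit_001.png"))  # hypothetical file name
```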

In this complex landscape, the importance of constructing an airtight defense for the methodologies employed in AI-driven investigations cannot be overstated. Defensibility stands as the guardian of the investigative process—ensuring the veracity of findings and bolstering the admissibility of evidence in a court of law. Just as the digital realm expands with possibilities, the need for a robust defense becomes paramount, navigating the intricate intersection between technology, litigation, and the pursuit of justice.

The White House’s EO signals that, at least initially, watermarking will be used for defensibility purposes (or at least as one such mechanism). Given that, it seems likely that in the next 12-18 months the DOJ and other agencies will begin to require watermarking information in productions, though how that will be implemented remains to be seen. Though we’ve discussed how fallible metadata is for AI-generated content, metadata in tandem with watermarking may be sufficient for the early days of evidence verification. The industry, however, should treat that as what we in the biz like to call an MVP, or minimum viable product. Better and stronger methods of evidence validation and verification will need to be a top priority.

In the meantime, human review, particularly in the analysis of images and videos, is still the most accurate way to detect AI-generated content. But soon (maybe even in the next year), the capabilities of AI will outpace even the keenest reviewer’s ability to detect it, a problem compounded by threat actors seeking to obfuscate wrongdoing.

The EO does contemplate the need for additional research, stating that the administration will “Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas.”

What’s Next for Investigation Teams

To borrow from Notorious B.I.G., in e-discovery it’s safe to say, “more AI, more problems.” For all the promise of reduced costs and tedium, investigators are approaching an inflection point in evidence provenance verification. Metadata, log analysis, hashing, and other traditional methods of verifying evidence will soon be insufficient on their own to ensure that evidence has not been AI-generated.
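Hashing is a good example of why the traditional toolkit falls short here. A hash proves a file has not changed since collection; it says nothing about whether the file was fabricated before collection. A minimal sketch (the file name is hypothetical):

```python
# A minimal sketch of a traditional integrity check: SHA-256 hashing,
# as commonly recorded at collection time in e-discovery workflows.
# Note what it does and does not prove: a matching hash shows the file
# is unchanged since collection, not that its content is authentic.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # "exhibit_001.png" is a hypothetical file name for illustration.
    hash_at_collection = sha256_of_file("exhibit_001.png")
    hash_at_production = sha256_of_file("exhibit_001.png")
    assert hash_at_collection == hash_at_production, "integrity check failed"
```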

The Executive Order signals a rapidly growing need to develop evidence verification protocols. Though the EO is focused on authenticating government communications, adoption of such measures is likely to be encouraged and eventually enforced by the DOJ, SEC, FTC, and other investigating authorities, especially as the sophistication of deepfakes and other AI-generated content grows.

To keep pace with the needs of the industry, evidence provenance will need to be the next frontier in processing and investigative technologies for legal tech providers. Forensic specialists and e-discovery practitioners will need to update their evidence verification practices and learn techniques to identify AI-generated content in the absence of a “silver bullet” solution. This provides a great opportunity for investigative skill development and will likely become a differentiator in the market. It’s easy to foresee firms specializing in this type of forensic analysis, much like the code analysis groups of yore.

All things considered though, it’s a very exciting time for investigators. The challenges facing the industry now will define how the legal system tackles AI and defensibility far into the future, and that’s pretty cool.


Brittany Roush is a senior product manager at Relativity, working on features like search and visualizations. She has been at Relativity since 2021. Prior to Relativity, she spent over a decade conducting investigations, managing e-discovery projects, collecting data, and leading a data breach notification practice. She has a passion for building better tools for investigators and PMs to make their lives and cases easier (at least partly because her friends in the industry would hunt her down if she didn’t).
