“No man is an island,” wrote a perilously ill John Donne in 1623. “Every man is a piece of the continent, a part of the main.”
Though he did study law, Donne almost certainly had more philosophical topics than document review in mind when he composed “Meditation XVII.” He wrote Devotions upon Emergent Occasions during a period of reflection fueled by relentless fevers and the thick fog of an English winter.
Still, the point remains as relevant to the legal industry as any other human community: none of us is meant to get through life, love, or labor in isolation.
Four hundred years past Donne and his meditations, we find ourselves at an inflection point in the history of human innovation. Shouted from every rooftop is the promise of artificial intelligence, particularly generative AI, and its ability to relieve us of drudgery through its expansive computational power, unlocking our time and, thereby, new levels of intellectual success.
But AI—even that well-spoken chatbot who tells you playful stories, remembers your favorite songs, and knows just what recipe to recommend for a busy weekday evening—has no soul.
If Donne’s metaphysical wisdom is to be believed, no man can achieve greatness on an island. Even if he shares that island with a host of algorithms, servers, and the bountiful electricity needed to run them.
The key, then, to elevating ourselves through artificial intelligence is to make the most of our shared connections—our human intelligence—to guide how we develop ideas and make them reality.
So while the eager may want to build AI for AI’s sake simply to see what it can do, the wise will come together to build, and use, AI for each other—AI that is what we call “fit for purpose.” They set out to build realistic solutions to real-life problems they mutually understand, augmenting human intelligence with the artificial kind.
This is the story of one such community.
Today’s legal practitioners are drowning in a sea of technological and logistical challenges, whipped by constant pressure to do more, with fewer resources, in less time, and with increasingly superior results. Tossed about by swells of short messages and spreadsheets and emails, case teams are denied the lifelines that bigger budgets or distant deadlines might offer. Instead, they must simply find a way to float, treading water and piecing together solutions from one harried case to the next.
These challenges are not always life-or-death scenarios, but their consequences can reverberate through the livelihoods of the many people impacted by any resulting settlements, judgments, or sanctions.
Is artificial intelligence their life preserver? In various forms over the past 15 years, AI has found its way into legal teams’ toolboxes, helping them stay afloat amid these rising tides. Often, those developments have been accepted and explored only after thorough vetting by skeptical attorneys and judicious courts.
Computer-assisted review, for example—in which a computer is trained to predict the responsiveness of documents based on human reviewers’ coding decisions on a sample set of the data—was available for several years and used sparingly before it earned judicial approval and became more mainstream.
Things feel different now. In the last year, generative AI has received such a whirlwind of enthusiasm and interest and investment that new applications built on the technology have exploded. This is true across industries; an August 2023 survey from McKinsey showed that one-third of all respondents reported that their organizations are already regularly using generative AI in at least one function.
The topic is all over business mags, social media, and the news. Sentiments are mixed, with acute fears of AI taking human jobs balanced by strong opinions that its lack of street smarts will cripple its ability to take over the world anytime soon.
And even in the legal world, almost infamous for its slow adoption of new tech, we’re seeing an unprecedented amount of interest in this very young technology.
“Generative AI has invaded the popular consciousness in a way we just haven’t seen with previous legal technologies,” Jim Calvert, principal at Troutman Pepper eMerge, told me recently.
While most legal practitioners are approaching the use of generative AI with a healthy dose of caution, legal dockets around the United States are already giving voice to the consequences of its misuse.
“Its popularity can be a benefit toward adoption in some ways, but it seems just as likely that popular stories about the misuse or poor outputs of this tech will lead toward a skepticism that the industry needs to combat,” Jim continued. “Doing so will require that the technology be used responsibly and with a major focus on quality, reliability, and data security.”
We’re already seeing how poor, or even just less thoughtful, implementations of generative AI can mean embarrassing and professionally risky mistakes; they can also trigger life-altering consequences and veer into litigious territory.
So, a few questions linger on the minds of legal practitioners eager to simplify their work and protect their reputations.
Is generative AI just another flash in the pan?
Is it worth a try now, or is it still too soon?
How can it be applied to my work specifically?
How can I use it safely and productively—and make smart investments accordingly?
Of course, these same questions are on the minds of developers, too. The real magic happens when both come together to find the answers.
I like to picture Aron Ahmadia, Relativity’s senior director of applied science, as a lighthouse.
There he is, standing tall on a rocky cliffside—a beacon above choppy seas. He illuminates a safe course to harbor, and yet stands as a warning. His presence, though reassuring, prompts everyone to put hand to tiller and take notice.
This first occurred to me as we spoke with Aron for our editorial on sentiment analysis in RelativityOne. As you may recall from that story, Aron and his team spotted potentially dangerous bias in the first version of the product—and brought the project to a halt so it could be re-engineered with greater intentionality and improved fit for purpose.
Similarly, as the generative AI conversation became more animated, Aron and the rest of Relativity’s data scientists tempered their excitement with prudence.
“We’ve been testing prototypes of a generative AI tool in RelativityOne since January of this year, and when GPT-4 came out in March, we were as excited as anyone else. But a model is not a product,” explained Aron at our 2023 keynote for Relativity Fest. “We wanted to understand how we could make this work for our customers and their data—not Enron or any hypothetical scenarios. So we knew it was going to be absolutely necessary to test it on real matters, with input from real users.”
Assembling a team of internal and customer advisors, as well as engineers, was the first step in exploring how generative AI could enhance RelativityOne for the e-discovery teams who spend hours in the platform each day.
“As a former customer myself, I know how important it is to understand how the product will actually be used in the ‘real world,’” observed Cristin Traylor, director of law firm strategy at Relativity (previously discovery counsel at McGuireWoods LLP) regarding her experience building that team of advisors. “Especially when it comes to the emerging field of generative AI, it is important to get customer buy-in on what we are doing and why. We don’t just want to build a black box that no one understands or trusts.”
Building understanding cuts both ways. Aron, Cristin, and the rest of the Relativity team needed to deeply understand what legal teams wanted from generative AI … and help those teams understand what generative AI is and isn’t capable of doing, and how to establish confidence in its outputs.
Ultimately, professionals from corporations and firms including BakerHostetler, Bayer, Complete Discovery Source, Foley & Lardner, Quinn Emanuel, Sidley Austin, and Troutman Pepper eMerge volunteered to help advise on, and conduct, the experiments that Aron and his team wanted to set up with generative AI in RelativityOne.
Relativity engineers planned to build the product on GPT-4—a large language model that scored in the 90th percentile on the bar exam—accessed through the Microsoft Azure OpenAI Service.
The combination of e-discovery and GPT-4 promised an exciting marriage of use case and usefulness: the former a clash between limited human time and small bits of extremely important information hidden in massive piles of data; the latter a new technology that rapidly reads and understands massive piles of data and distills them into the important bits for human readers.
Looking at that marriage, you might say the potential applications are endless. From first-pass review and case strategy to entity understanding and privilege log creation, generative AI has the potential to optimize, accelerate, and potentially—eventually—eliminate much of the most time-consuming and exhausting manual work that e-discovery teams do every day.
But where to begin? The Relativity team realized their starting point had the potential not simply to create greater efficiencies for e-discovery practitioners, but to set the foundation for how they view generative AI from this point forward. To start them off on the wrong foot could hamper their willingness—or indeed, their permission (from the judiciary as well as their clients and organizational decision-makers)—to engage with these tools well into the future.
“There is one thing that successful companies have in common: they all collaborate with attorneys. If you want a product that lawyers are going to use, you have to include them in the process and give them a seat at the table. Because when you’re bringing a tool to market, you have to make sure that it’s market-ready,” Melissa Dalziel, of counsel at Quinn Emanuel, told me when I interviewed her for this story.
“You don’t get a second chance to make a first impression, and it only takes one bad experience for you to write off a company for many years. They could be constantly innovating and developing and you wouldn't know it because they’re branded as being not worth your time.”
So again, Relativity asked: where to begin?
Let’s begin, then, at the beginning.
Why is this a formative moment in AI? What is generative AI? How is it different from the AI we’ve been using for decades?
Generative AI is artificial intelligence capable of generating original text, images, or other content based on a given text prompt. It does this by learning patterns, rules, and structures from massive amounts of training data and human feedback, then leveraging those inputs to generate new outputs based on a user’s request or query. The applications built on these models include image creation tools like DALL-E, as well as chatbots like ChatGPT.
Over the last few years, advancements in the development of neural processing units, distributed cloud architectures, and model architectures have meant that data scientists have been able to unlock unprecedented computational power and capability—resulting in increasingly sophisticated AI models that can be created, trained, tested, and improved more quickly than ever before.
This is why all the talk you’ve heard about AI isn’t just the chatter of another passing fad. It’s fueled by a truly unique moment in the history of computer science, one so staggering in its potential that McKinsey this year predicted that half of today’s work tasks could be automated within the next three decades. This evolution will almost certainly transform careers and necessary skill sets in ways we can’t yet anticipate.
The coalescence of all these factors has sparked a sort of 21st-century gold rush. Thousands of AI startups and new apps are flooding a market that may grow as much as 42 percent in the next 10 years. They are all eager to scoop up a user base that appears ready to fall head over heels for the chatbot or AI artist that manages to rock their world.
But while many of us do already enjoy having philosophical, artistic, and work-accelerating conversations with AI, we also, on average, have a healthy skepticism of AI and what relying on it too heavily might mean for us (professionally and existentially).
And in the legal realm in particular, where resistance to change and risk aversion are notoriously high, building and offering AI for AI’s sake is not a compelling way to introduce profoundly new and different tools.
“Tools like ChatGPT, which provide very human responses that look plausible straightaway, are sort of like the early days of the internet,” Nick Cole, director of litigation support at Foley & Lardner, noted for me. “You were naturally inclined to believe it rather than going through the discipline of validating the information and ensuring you actually agree with what it says.”
Fortunately, legal professionals are, by and large, aware that using ChatGPT as a more conversational alternative to Google simply isn’t the same as using GPT-4 to influence critical decisions in a matter that could affect the lives and livelihoods of real human beings.
So, with this landscape in mind—respecting the risks presented by poor implementations of AI, and the high standards that underpin legal teams’ willingness to experiment with such technology—Relativity developed six key principles to guide its AI development:
1. Building AI with purpose that delivers value for customers.
2. Empowering customers with clarity and control.
3. Ensuring fairness is front and center in AI development.
4. Championing privacy throughout the AI product development lifecycle.
5. Placing the security of customers’ data at the heart of everything.
6. Acting with a high standard of accountability.
The focus on fit for purpose, fairness, and accountability first shined through during the team’s development of a sentiment analysis feature that detects emotional tone in data: after discovering the first iteration of the tool was rife with bias against protected classes, the team scrapped it and instead built a from-scratch AI model engineered specifically for its use in litigation or investigative document review, and fine-tuned it to minimize bias.
Next, the team was ready to tackle generative AI as a new solution for customers’ growing data challenges.
And customers were ready to help.
The legal industry is traditionally a place where everyone wants to know what everyone else is doing—but no one is willing to show their cards. It makes sense: having a secret sauce that elevates one firm’s services in a way others can’t match is just good business. And yet, no one wants to be the first to experiment with an unproven strategy or piece of technology if it means putting their cases and clients on the line. Still, the end goal is the same: uphold the rule of law and provide the best possible advice for one’s clients.
It's a razor-edge balance of innovation and caution. Of competition and shared purpose.
But, as it turns out, once you bring a few bright legal minds into a room to discuss new tech—what excites them, what gives them pause, what they could do with it, what they wouldn’t touch with a 10-foot pole—the communality takes over in the best of ways.
As they prepared to weave GPT-4 into RelativityOne, Aron, Cristin, and the rest of their task force began with a few ideas, and then invited customer volunteers to a user group meeting to discuss them.
“Working on the pilot was really amazing because it brought so many different people and roles together: Relativians from engineering and product management and all different areas of the company, along with all of these customers,” Cristin recounted for me. “It was kind of a free-for-all of coming up with ideas, thinking about different ways to use this technology, and ways it would work or not work in the real world. That collaboration with everyone just made it such a fun experience.”
Perhaps not all of us would equate “fun” with “detailed discussions on emerging legal technologies and their practical applications,” but it emerges as a theme from my conversations with this group.
“I haven’t had this much fun in my career in probably a decade,” said Melissa, from Quinn Emanuel, wearing a genuine smile.
A series of pitches, conversations, course corrections, and code iterations followed this first meeting—and many more conversations thereafter. Relativity engineers fed customers’ input into their dev cycles, and the back-and-forth continued as the group innovated their way to something really exciting.
Initial tests involved six customers getting hands-on with experiments across ten total matters, and while the work remains ongoing, the progress they’ve seen has already been promising. Aron shared that the team has compared the system’s output to first-pass human reviewers’ decisions more than 50,000 times, digging deeper into 5,000 of those decisions with second-pass reviews where any disagreements were found.
And after all that, how is the product performing?
“In a real matter with no human feedback, the system achieved 85 percent precision and 98 percent recall,” Aron proudly announced at Relativity Fest 2023.
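Precision and recall, the two figures Aron cited, come from a straightforward comparison of the system’s responsiveness calls against human reviewers’ decisions: precision asks how many of the documents the system flagged were truly responsive, while recall asks how many of the truly responsive documents the system found. A minimal sketch of that comparison in Python, using purely illustrative numbers rather than any real review data:

```python
# Precision and recall from a comparison of per-document responsiveness
# calls (True = responsive). The counts below are illustrative only,
# not Relativity's actual figures.

def precision_recall(predictions, human_calls):
    """Compare the model's calls against human first-pass decisions."""
    tp = sum(p and h for p, h in zip(predictions, human_calls))      # both say responsive
    fp = sum(p and not h for p, h in zip(predictions, human_calls))  # model over-called
    fn = sum(h and not p for p, h in zip(predictions, human_calls))  # model missed it
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flagged docs that were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of responsive docs that were found
    return precision, recall

# Five hypothetical documents:
model = [True, True, False, True, False]
human = [True, True, True, True, False]
p, r = precision_recall(model, human)
# precision = 3/3 = 1.0; recall = 3/4 = 0.75
```

At the scale Aron described, the same arithmetic simply runs over tens of thousands of per-document comparisons.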
The fruit of all this collaborative experimentation is Relativity aiR for Review—the first application of generative AI in RelativityOne—which uses natural language and AI-powered predictions to augment the thorniest part of the e-discovery process.
“Doc review is the Brussels sprouts of legal work,” Nick Cole, of Foley, joked during our interview. “Nobody really likes it, but you’ve got to do it.”
aiR for Review targets three specific use cases to make this chore a bit more palatable: responsiveness review (predicting which documents will be relevant to a matter), issues review (locating documents related to the key issues at the heart of a case), and identification of key documents (pinpointing those potential smoking guns that attorneys need to get their hands on as quickly as possible).
From a workflow perspective, a user who wants to get these insights from RelativityOne prompts aiR with background information about their case—guided by prescribed questions—and then directs the system to analyze the large swaths of data in their case database. Once that analysis is complete, the user sees aiR’s recommendations presented in a document list. aiR provides highlights of key language, citations for what drove its conclusions, and detailed reasoning, in plain language, that adds context to its results. Users can then leverage integrated coding and commenting capabilities to make adjustments, add notes, and collaborate with other case team members as they integrate aiR’s insights into their overall review and case strategy.
The experience also offers space for user input that can provide further clarity and control around a matter and aiR’s results, giving case teams a chance to influence feedback loops that will help aiR become more accurate for that review. And, of course, all data remains private and secure within a user’s RelativityOne instance.
This is just the first iteration of what Relativity aiR will offer case teams. Aron says that his team of data scientists, as well as customer volunteers, continue to collaborate on other use cases, including case strategy and privilege review. They’re also exploring ways to enable users to create their own AI models for specific matters within their RelativityOne workspaces.
The technology itself is exciting, of course. But what felt really thrilling for many of the customers we spoke to was the opportunity to board the maiden voyage of something that may someday change the game in e-discovery and, ultimately, support the evolution of the practice of law.
“The business interest is to rush in and slap AI on everything, right? Sell it, market it, and monetize it. But I think that this technology is so transformative that it’s so important to get this right—and taking a pause, listening to the opinions of others, cultivating thought leadership about this technology, is a critical step,” emphasized Matt Jackson, counsel in data analytics and e-discovery at Sidley Austin.
“This technology, I believe, will change our industry more than anything that we’ve seen before. Investing the time and resources into getting it right is critical. To truly think critically about the technology, ways we might use it, and ways we shouldn’t try to use it, is such a rare opportunity.”
Beneath the themes of fun, fit for purpose, and collaboration in this story is another common refrain: generative AI is not magic. It’s not one-size-fits-all, and it’s not an “easy button.” And the legal professionals willing to ride the cutting edge of emerging technology recognize that an easy button shouldn’t be the goal.
“You still have to have lawyers who are signing off; they have a legal obligation to adequately perform discovery. You can’t just rely on one technology and not go in and do the work to verify it,” James Bekier, director of litigation services at BakerHostetler, said between takes of Relativity’s AI Principles video shoot (he’s the guy in the orange blazer).
“That’s why it’s important for us to have open communications around our technology and with our lawyers—to make sure everyone understands the pluses, the minuses, the pitfalls, and where things are going to benefit and help a case, or where they can’t be relied upon necessarily.”
Also present for that conversation was Danny Diette, director of advanced analytics and data privacy at Complete Discovery Source (who enjoyed a cup of tea during the video). Danny added, “There’s no one solution or software that solves all problems. The ability to control the way we use it and the results it produces is essential to us providing the outcomes that our clients need.”
For Danny and James, tools like generative AI are just another way to get unreasonable data volumes into reasonable workflows that reasonable humans can manage within reasonable amounts of time.
And if it’s really good at all that? It’ll free up more time for human experts—because we’ll always need those, in every place from the legal realm to the art world and engineering fields to classrooms—to focus on the strategic work that matters most.
Legal professionals regularly sail tumultuous seas. Unpredictable data, impossible deadlines, little visibility into what truly matters or what lies ahead—all of it is standard operating procedure for these folks.
Generative AI can provide a more even keel. But it will take a crew of like-minded peers and colleagues—fellow humans building bridges between their personal islands—to navigate these waters safely and efficiently.
It’s that communal effort and social touchpoint—that mutual eagerness—that keeps e-discovery teams grounded in what matters, armed with the tools that will bring them to the conclusion of each project safely, engaged in the creative thinking that will make the next adventure that much more exciting.
And that’s why they’re in this in the first place, right? To tackle new challenges. To have an impact on the lives and livelihoods of their clients. To look out at uncharted waters and say, “Yeah, I can give that a try.”
In between all those volumes of data, all those spinning wheels of data ingestion and imaging and production, the chance to tackle new problems with emerging technologies is just pretty darn cool.
Some of them even call it fun.