
Learning by Doing: Tracking the Legal Industry's Generative AI Learning Curve

Kristy Esparza

I want you to think back … way back. Back to what feels like another lifetime.

January 2025.

Okay, it’s not that long ago. But a lot has happened since then, especially in the world of AI and how we, as an industry, are approaching it.

Back in January, we hosted a webinar called “Ahead of the Curve: Early Adopters of Generative AI-Powered e-Discovery Drive Transformative Change.” In the 60-minute discussion, some of our first Relativity aiR customers shared how they were testing and getting started with generative AI. Panelists weighed costs, speed, and accuracy—and whether AI could be trusted in high-stakes legal work.

By September, that tone had shifted. In a new webinar that month, “From Curiosity to Commitment: Building a Business Case for Generative AI in Legal,” the conversation was no longer a high-level overview of how to dip your toe into generative AI—it was an in-depth discussion of how legal teams can jump in, safely and effectively.

Across both conversations, a few themes held steady: start small, communicate transparently, and build the right partnerships. Here’s how these lessons unfolded in real legal departments—and what they might mean for yours.

Start Small to Build Buy-In

Legal teams aren’t known for rushing into new tech—and for good reason. Starting small lets you test the waters, measure outcomes, and build trust without blowing up budgets.

In January’s conversation, Ben Sexton, SVP of innovation & strategy at JND eDiscovery, described his approach to easing clients into generative AI, typically by starting with low-stakes projects to verify the AI is working and to help them understand the impact.

“We don’t want anyone to take our word for it. We expect every relationship that each of our clients has with AI to start small [… so] they build their own trust,” Ben shared. “This isn't validation. This isn't data science. We're establishing a baseline comfort around the technology.”

Ben’s co-panelist (and customer!) Kurt Vollert—senior VP and managing counsel at Sedgwick—shared the importance of these early, small projects for his team.

“We don't generate revenue, so we need to show how we have saved the company money,” he said.

In September’s conversation, Susan Stone—director of discovery services at AT&T—echoed Kurt’s sentiment. She shared how her team first built internal buy-in through low-stakes test projects that painted a clear picture of aiR’s cost-saving potential for senior leadership.

“At the end of the day, money talks,” Susan said. “Senior leadership always cares about end results and money saved. Quantifying that impact is how we moved forward in utilizing AI within the litigation and regulatory side of the business.”

From a technology perspective, Sarah Green—product marketing manager at Relativity—agreed with the crawl-walk-run approach.

“It’s important to acknowledge the hesitancy around these new tools. No professional would be doing their job if they weren't appropriately evaluating the tools and making sure they were defensible and doing what they were supposed to do.”

The Takeaway: Start with low-risk matters that demonstrate impact and can paint a clear before-and-after picture for your stakeholders and decision makers.

Engage with Your Internal Partners Early and Often

Even the best pilots can stall if you’re not aligned with your internal gatekeepers. Susan, unfortunately, learned this the hard way.

“I got an email from our CTO basically telling me to put the brakes on everything generative AI. I then spent the next six to nine months working with our CTO, our CSO, our privacy team, our compliance team, our contracting team […] to make sure that the security and privacy policies that Relativity has […] are in line with AT&T’s policies,” she explained.

Fortunately, that pause led to a process—one that is now becoming table stakes within corporations. Today, Susan says they have established an AI review board.

“There’s a front door now if we want to bring in a new tool as a proof of concept, and there are established procedures that we have to go through,” she says. She also offered a word of warning to the audience:

“Be sure to have conversations with your security and privacy and compliance teams early, before adoptions, so you're not in the same boat as me, where you get to play with the toy, and then somebody comes and takes the toy away from you.”

Although Susan acknowledged that her team has come a long way, she says that approvals are still slow—taking an average of three to six months in her organization.

For Sarah, the lesson seemed clear: “Start now.”

The Takeaway: Don’t wait for a red light. Engage privacy, compliance, and security partners from the start.

Define Your Guardrails: Be Clear on What AI Can and Can’t Do

To Susan’s point about building a “front door,” Cimplifi CTO Ari Perlstein shared how adoption of AI technologies, in general, has to meet a defined set of standards. In the September webinar, he shared the specific criteria his team uses, including but not limited to:

  • Your prompts and output (and any embeddings) should not be made available to other customers of the software, used to train models, or used to improve third-party products or services
  • If you are fine-tuning a model, it should be exclusively for your use

“There are obviously other things that any corporation is going to want to consider, but that's the baseline of [what] you want to look at,” he said.

The Takeaway: Understand your own thresholds. Document what “safe and compliant” means for your organization, gain an understanding of what it means to your service provider partner, and use that clarity to evaluate tools.

Bring Outside Counsel and Service Providers Along for the Ride

To accelerate adoption, both Susan and Kurt agreed: outside counsel needs to be on board.

“First and foremost, I’m going to Sedgwick’s outside counsel. And if they’ve got reservations about [AI], we talk through that,” Kurt said.

For Susan’s part, she and her team expect outside counsel to be excited to explore AI—they’re not looking to convince them. In fact, her team sends surveys to their preferred law firms to gauge their generative AI readiness.

“From those responses, we identified firms that were more cutting edge, who were more willing to utilize generative AI. It identified for us who we can view as partners on our journey,” she said.

From the provider side, Ari captured a recent shift in sentiment: “Two years ago, we were seeing a lot more of a struggle to get outside counsel, or law firms in general, to look at these technologies. But that's changed significantly, especially recently.”

In fact, Ari shared that law firms unwilling to move with the times are increasingly getting left behind.

“There are corporations that are really, really pushing for this, and if law firms are not getting on board, they’re going to find someone who is.”

And outside counsel aren’t the only partners to consider. For Kurt and Susan, having a service provider partner to help with the tech side of things is huge.

“To effectively do my job, I need to have relationships with people I can trust,” Kurt shared. “I need to have someone I can go to and quickly say: here's my issue. Here's what I need. Ben [of JND] and I are exchanging emails weekly or even daily.”

The Takeaway: Build a bench you can trust. Look for outside counsel who are willing and able to innovate and explore, and invest in service provider relationships that offer responsiveness and expertise.

Generative AI: No Longer a Novelty

If the time between early 2025 and late 2025 has taught us anything, it’s that generative AI is now the norm; it’s the expectation. The teams making the most progress are like the panelists in these webinars—they’re asking the right questions, building the right guardrails, and bringing the right people along for the ride.

Want to hear insights from other in-house teams? Check out our Generative AI for In-House Legal Teams webinar series for practical tips and real-world success stories from Microsoft, Amgen, and more.

Graphics for this article were created by Caroline Patterson.


Kristy Esparza is a member of the marketing team at Relativity, specializing in content creation and copywriting.
