
The Transformative Potential of Legal AI: Insights from Bennett Borden

Sam Bock

They say you must be a good reader before you can be a great writer.

Likewise, the best teachers start out as learners, and the most adept technical specialists begin as everyday users.

The sheer volume of AI lessons and conversations, in both quantity and noise, makes it extraordinarily intimidating to sit down today and begin learning about the nature of artificial intelligence and its impact on the legal world. If you haven’t been reading up and following Google alerts for years already, the idea of wading into the AI quagmire right now seems … overwhelming, at best.

But it shouldn’t be. One of the greatest secrets about this moment in the evolution of AI is that we’re all learning as we go. It’s all changing so fast that today is as good a time to jump in as last week, or last month, or this day in 2022.

Bennett Borden, in part one of our recent interview, told us the story of the career path that took him from Big Law partner to data scientist and AI founder.

Today, we’ll share his insights on where to begin if you’re ready to jump on the AI train right now. There’s still plenty of room on board—and we could use as many readers, learners, and users as we can get as we move toward a future in need of all kinds of experts.

Sam Bock: Generative AI’s viral popularity clearly isn’t just a fad; it’s already having a significant impact on how legal professionals do everything from document review and drafting to client communications and legal research. Where do you see the most potential?

Bennett Borden: Gone are the days when you can throw bodies at a problem and recreate the same work over and over again because you bill by the hour. A bunch of studies and my own research have backed it up—about 80 percent of what a lawyer does is better done by AI. That’s a problem if you’re trying to do things the same old way. Law firms are going to have to innovate or die. There’s literally no other option. It’s not just using AI to do what you do today; anybody can do that. If you’re just replacing what you do today, making it a little faster, you’re already falling behind.

I get asked all the time if lawyers are going to be replaced by AI. No, because the 20 percent of our work that is better done by humans is really, really important stuff. The critical stuff. But lawyers who can use AI to create new kinds of products and services that don’t yet exist are going to replace everybody else. Look at the top 100 companies in the US before the second industrial revolution and then the top 100 after; there are only three or four that overlap. Most companies didn’t take advantage of evolving technology and didn’t survive. The key is not just figuring out how to take advantage of tools off the shelf, either. Generative AI models that anybody can log into are fun and great and powerful. Even technology vendors building AI into their offerings are going to have good marginal benefits. But the real secret to earning those top spots? You’ve got to know how to build your own AI, to build your own solutions, for specific tasks.

Law firms are particularly poised to take advantage; they do the same things over and over across practice groups. They have decades of work experience and legal documents to build a rich database out of. Link that to constitutional AI and all of a sudden you’ve got all this deep knowledge, which your firm has spent decades building, at your fingertips. It’s not dependent upon a particular person and whether they’re tired or sick that day. Take a minute to imagine how much you can do with that, internally and in the way you serve your clients.

What new AI innovations are you most excited about and why?

Definitely agentic AI. Let’s define what that means. The real power of generative AI is not that it can spit out answers to your questions with text or videos, but that it understands your questions in the first place and responds appropriately. Right now, it’s popularly done with documents, pictures, videos; but technologically speaking, that response can be anything. Taking an action, pushing a (figurative) button. The concept of agentic AI is huge. Fundamentally, though, it’s very simple.

With generative AI, everybody worries about accuracy and hallucinations. But these models have gotten so good that you don’t hear a lot of stories about how “it’s just wrong” anymore. With 100 billion prompts going in and out every day, they’re learning really fast and getting better all the time.

The trick with really taking control of generative AI, and getting confident with it, is understanding a few basic concepts. First, retrieval-augmented generation (RAG). With RAG, instead of relying on a model’s general knowledge—everything stuffed into it during training and from the internet—you can take a well-curated set of data (like the resources for parents of autistic children), put it in a folder, and point your model at it. You’re telling the bot to get its answers from there, and not from its more general knowledge. You can design it a bunch of ways, with conditions, but basically you can address the risk of hallucinations in a massive way by having a well-curated set of data the AI can use as a nonnegotiable basis for all of its answers.
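The RAG pattern described above, retrieving from a curated folder of documents and constraining the model to answer only from what was retrieved, can be sketched in a few lines. This is a minimal illustration, not any product's actual implementation: the keyword-overlap retrieval stands in for a real vector search, and the prompt wording is hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All names are illustrative; a real system would use a vector store
# and an actual LLM call rather than these stubs.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank curated documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Constrain the model to answer ONLY from the retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in them, say 'I don't know.'\n"
        f"Sources:\n{joined}\n\nQuestion: {question}"
    )

curated = [
    "Early intervention services are available through local school districts.",
    "Sensory-friendly events are listed on the community resource calendar.",
]
context = retrieve("Where can I find early intervention services?", curated, top_k=1)
prompt = build_prompt("Where can I find early intervention services?", context)
```

The key design point is the grounding instruction: the model is told where its answers must come from, which is how a well-curated dataset becomes a nonnegotiable basis for every response.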

The other concept is constitutional AI. What we’ve learned is that the best way to control AI is with AI, so, in a research paper, Anthropic came up with this concept of giving AI a constitution to live by. You start with a persona statement, with which you tell the model who it is. For example, for the bot Clarion built to help serve up resources to parents of autistic children, it’s something like this: “You are a chatbot interacting with parents and other supporters of autistic children. It’s imperative that you are accurate, friendly, approachable, gender-neutral, and helpful.” You can think of this like a prime directive.

Next, you write the AI’s constitutional principles—again, in a plain language prompt, but structured so the bot knows how important each principle is. Again, for our resources chatbot, one of those might look like: “The most important thing for you to do is to be accurate; therefore, have at least two sources to cite for everything you say. If the information you need is not in the RAG database, then say ‘I don’t know.’” (Depending on your use case, you might also tell your AI to look within its general knowledge for these types of answers but with a caveat that it had to go beyond its key database to inform its response.) Others included: “be compassionate,” “do not give medical advice based on the legal definition of medical advice,” and “always cite your sources.”
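A persona statement plus ranked principles amounts to a structured system prompt. Here is a hypothetical sketch of how such a constitution might be assembled; the persona and principle wording paraphrase the examples above, and the numeric priority scheme is an assumption, not Clarion's actual design.

```python
# Sketch of assembling a persona statement and ranked constitutional
# principles into one system prompt. Wording paraphrases the interview's
# examples; the priority numbering is an illustrative design choice.

persona = (
    "You are a chatbot interacting with parents and other supporters "
    "of autistic children. Be accurate, friendly, approachable, "
    "gender-neutral, and helpful."
)

# (priority, principle) pairs; priority 1 is most important.
principles = [
    (1, "Be accurate: have at least two sources to cite for everything "
        "you say. If the information is not in the RAG database, say "
        "'I don't know.'"),
    (2, "Do not give medical advice."),
    (3, "Be compassionate."),
    (4, "Always cite your sources."),
]

def build_system_prompt(persona: str, principles: list[tuple[int, str]]) -> str:
    """Render the constitution, most important principle first."""
    lines = [persona, "", "Your constitutional principles, in order of importance:"]
    for priority, text in sorted(principles):
        lines.append(f"{priority}. {text}")
    return "\n".join(lines)

system_prompt = build_system_prompt(persona, principles)
```

Keeping each principle short and explicitly ordered reflects the lesson later in the interview: long paragraphs of instructions tend to confuse the model, while simple ranked statements do not.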

Sometimes those constitutional principles are going to be based on ethics, such as with being friendly. Sometimes they’ll be based on regulation, as with not giving medical advice. Figuring out the constitutional principles—these rules of robotics—takes strategizing, but anybody can write these. This is a cutting-edge principle, but what makes it super powerful is that it’s also where you build in the metrics. Add a mechanism so that the AI can score how it’s performing on each of its principles; that score is what you’re trying to optimize, and what you can use as a proof point of intentional, effective training.

Back to the example with this parents' database. We came up with our constitutional principles, and some of them had obvious scores—“don’t give medical advice” is pretty objective. But how do you score compassion or cultural sensitivity or friendliness?

What we’ve learned is that, in the early days, agents were being built by coders trying to use paragraphs of instructions to yield very specific results, and it wouldn’t work. All those instructions freaked out the AI’s “brain.”

When we experimented with ours, we tried to keep it simple but intentional. “Be compassionate.” That’s it. Then we said: “score yourself on these principles.” We ran 100 questions through it and watched the scores.

Then, we asked it: “explain to me what you understand by the instruction ‘be compassionate.’” And just through a conversation with it, you can fine-tune its understanding of compassion to help improve its performance. So we pointed it to three different responses where it scored itself differently and we said, “here are these responses and these scores; why did you score them like this?” And it explained. And we could say, “Cool, well put a little more weight on this and less on that. We think this wasn’t quite right; here’s what we were thinking.” And it corrects itself.

Think of it like this: you’re forming the psychology of this bot much more than programming an agent.

Get out of the mindset of programming and into the mindset of designing a psychology for these agents.

In your AI Visionary profile from 2024, you shared that “the secret behind generative AI is that you don’t need a technical background to use it.” How do you encourage hesitant clients and peers to jump in and embrace AI, even if they aren’t super tech savvy?

I always tell clients and peers just to use it. Use every model you can get your hands on in every way you can think of. It’s the world’s largest compendium of knowledge, ever. It can answer questions about physics, technology, business plans. It can create recipes. Use the apps and get comfortable with what they can do.

The other thing is to learn how to verify the outputs. This is where the constitutional principles and scoring mechanisms come in. Most companies feel this intense pressure to do something. So then they come up with an idea, and their lawyers say no to it.

Why? Because there aren’t clear-cut rules. None are coming from the federal government here in the US anytime soon; we know with AI they’re focused on deregulation and industry self-regulation. Which means coming up with frameworks and standards for regulators and peers to adopt and give their imprimatur to is going to be important.

Until then, understand that, with a lack of clear guidance, the answer is not “no.” It’s “be reasonable.” Everything falls back on reasonableness.

What does that mean for a company? You must identify reasonably foreseeable risks, and take steps to reasonably mitigate them. We do this all the time. Why AI is somehow perceived as different, I don’t know. But this is why putting in the metrics and controls that prove compliance via scores—which is pretty technically simple—is an excellent approach.

My advice is simply that you can do this—especially business leaders, who already work within a regulatory rubric. You already know what the regulatory requirements are, and you already have controls in place to comply with them. Don’t reinvent the wheel; just decide, “if I insert AI into a step of a workflow, how does it change my risk profile?” That’s ascertainable.

I tell clients and peers all the time to think Iron Man, not the Terminator. AI is not gonna replace most jobs. It’s gonna replace tasks within jobs. Every company does stuff in a series of steps. Some steps are better done by AI; some by humans. Figure out which is which, optimize the right steps with agents, put QC in place, and suddenly you’ve got a new, better, more consistent solution for whatever you do today.

What’s kept you engaged with this industry over the years? What about Legal Data Intelligence challenges, including e-discovery, has driven you to develop your career path in these unique ways?

Because I came to see data as a reflection of human thought, action, and decision, for me, data has always been about people. It reflects human conduct, beliefs, and behavior, so I’ve always seen it as precious and a little sacred. I care about truth, justice, and mercy—and I know that truth lies in data.

You can’t walk across a room without leaving a digital trail. We have the greatest historical record of what it means to be a human being in the history of the planet. That comes with a lot of responsibility. There is huge potential to understand better medical treatment, better everything. It’s unimaginable. So that’s what drives me: the good that can be done by our more granular understanding of how humans go about being humans. I think that’s what drives my peers, too.

Insights are powerful. And insights come out of data. When you have more control and governance over your data, it can empower almost unimaginable things for any company or entity to accomplish in the pursuit of their unique mission.

Anything else you’d like to share with our readers?

The most important thing to realize is that we are at a turning point in the history of humankind. This is more impactful technology than has ever been developed. It makes our role as officers of the court, agents of justice, and zealous representatives of our clients more important than ever.

My advice to companies is, don’t defend the ways you’ve been successful in the past. So many companies that I see are talking about how they’re a $100 billion company and everything looks great! But look at Kodak or Xerox, right? They didn’t lean into it, they tried to fend off change, and it did not pay off. Xerox is a great example of a company that has reinvented itself, but they had to go through a lot to do so.

Think of AI as a supersuit for you and your company—a tool to augment and extend what you do, not replace it. Lean in, learn, adapt, and be optimistic. You’ll become even more capable of fulfilling our really sacred responsibilities as zealous advocates for truth and justice.


Sam Bock is a member of the marketing team at Relativity, and serves as editor of The Relativity Blog.
