
How to Manage Your Total Cost of Review: Actively Understand Your Document Set

Continuing on the subject of reining in your total cost of review (TCR), it bears repeating that budget concerns were more prominent than ever among legal teams in 2020, and they remain front of mind halfway into this year.

When it comes to e-discovery, review is always the greatest cost driver, so any opportunity we have to reduce its scope—and, in turn, its costs—should be seized. But teams must first take stock of where these opportunities exist throughout the e-discovery process in order to have the most significant impact on that bottom line.

So, returning to this always timely subject, let’s revisit the all-important working question: How do our choices prior to review affect our total cost?

Thus far, we’ve analyzed the usefulness of targeted collections and ECA in reducing TCR. Today, let’s home in on artificial intelligence.

Establishing the Numbers

Let’s recap the assumptions we’ll continue to work from. In this scenario, we have six devices at $500 per device, and each device holds 50 gigabytes of data. Assume each gigabyte contains 1,000 documents, and our reviewers work at a rate of 50 documents per hour at $50 an hour.

Lastly, our processing rate will be $35 per gigabyte, and hosting will be $25 per gigabyte, per month. We will also be using a four-month lifecycle for this case.
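To make these numbers easier to play with, here is a minimal Python sketch of the cost model above. The function and parameter names are our own illustration (they don’t come from any review platform), and review_gb is an assumed knob for modeling how much data survives culling into hosting and review.

```python
# A minimal sketch of this scenario's cost model. All names are
# illustrative; they don't come from any particular review platform.

def tcr(devices=6, device_cost=500, gb_per_device=50,
        docs_per_gb=1_000, docs_per_hour=50, reviewer_rate=50,
        processing_per_gb=35, hosting_per_gb_month=25, months=4,
        review_gb=None):
    """Estimate total cost of review under this article's assumptions.

    review_gb models culling: only that many gigabytes get hosted and
    reviewed. By default, everything collected goes to review.
    """
    collected_gb = devices * gb_per_device
    if review_gb is None:
        review_gb = collected_gb

    collection = devices * device_cost                      # 6 x $500
    processing = collected_gb * processing_per_gb           # $35/GB
    hosting = review_gb * hosting_per_gb_month * months     # $25/GB/mo
    review_hours = review_gb * docs_per_gb / docs_per_hour  # 50 docs/hr
    review = review_hours * reviewer_rate                   # $50/hr
    return collection + processing + hosting + review

# With no culling at all, review dwarfs every other line item:
print(tcr())  # 3,000 + 10,500 + 30,000 + 300,000 = 343,500
```

Run with no culling, the sketch makes the central point for us: review is $300,000 of the $343,500 total, which is why every gigabyte kept out of review matters.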

In my previous article, "How to Manage Your Total Cost of Review: Every Percent Matters," we took this same scenario and presented the choice of paying upfront ECA costs to cull an additional 5 percent of the data set before review. We found that doing so produced significant cost savings: upwards of $11,000.
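As a rough illustration of why each culled percent matters, here is the sketch above with assumed gigabyte figures (these are not the exact inputs behind that article’s $11,000 result):

```python
# Illustrative only: on a 300 GB collection, a 5-point swing in cull
# rate keeps 15 GB out of hosting and review.
print(tcr(review_gb=75) - tcr(review_gb=60))  # 16,500.0
```

Every gigabyte that never reaches review avoids roughly $100 in hosting (four months at $25) and $1,000 in review time (1,000 documents at 50 per hour and $50 per hour), so small improvements compound quickly.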

In today’s exercise, let’s move on from the ECA stage. The last question we’ll visit here is: With your current data set, would you invest $3,000 upfront in analytics expertise to use active learning technology? This would mean leveraging artificial intelligence to build a model of responsive and non-responsive documents that assists your review.

$3,000 for Active Learning?

With this option, there are no additional hosting fees and no other technology fees—you basically have to pay for an expert to help you think through how you're going to set up your workflow and then execute on it with the software you’re already using.

This is very much like the previous situation. The first- and second-month fees are essentially the same, because we’re talking about a workflow option that happens a little later in the process. In this scenario, you pay for that expertise in Month 3, whereas the standard workflow adds no such expense. So you won’t see any cost savings until Month 3 or later.
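A quick back-of-the-envelope check, using the review rates from our assumptions, shows how little the $3,000 has to accomplish to pay for itself:

```python
# Break-even on the $3,000 expert fee, at 50 docs/hour and $50/hour.
expert_fee = 3_000
cost_per_doc = 50 / 50            # $1 per document reviewed
print(expert_fee / cost_per_doc)  # skip 3,000 documents to break even
```

In other words, if the model lets your team skip even 3,000 documents (60 review hours, or three gigabytes at our assumed density), the investment is covered.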

If your standard workflow ends up costing you about $77,000, what does the active learning workflow cost you?

It’s Not Black and White

This is actually a tough question to answer, because active learning is used in a few different ways.

One way is to take all the documents you’re going to look at and split them into the documents we care about (the ones with the proverbial “thumbs up” for relevance) and the ones we don’t care about (the “thumbs down”). A reviewer will still need to look at every document. This may be the safest way to utilize active learning, but it’s also the least economically efficient. The benefit of this approach is that you’ll still find the documents you care about faster, because we’re using the model to push the most relevant documents to the front of the line.

The second way we can use active learning is less manual. We take your pile of documents, knowing it contains both material we care about and material we don’t, and we put enough eyes on the set to evaluate what’s relevant. With this approach, we opt not to fully train the model by manually coding every document; instead, we review until we’re reasonably confident the model is producing accurate predictions. Whole portions of the document set never need human review (mass emails, fantasy football threads, and other junk) because the model helps sort them out for us.

And lastly, we can look at just enough documents to train the active learning model, and go with its predictions from there. But this approach requires full trust in the model and minimizes quality control checks, so it certainly isn’t right for every case.

And so, depending on which of these workflows you use, the impact on TCR will vary. At Acorn, we almost always recommend the second approach, as more often than not it’s the most applicable and beneficial to clients.
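To see how differently these workflows can play out, here is a rough sketch. The document count, predicted-responsive share, and sample size are assumptions for illustration, not figures from this scenario:

```python
# Hypothetical review volumes under the three active learning
# workflows described above. All inputs are assumed for illustration.
total_docs = 60_000
predicted_responsive = int(total_docs * 0.40)  # assumed model output
training_and_qc = 5_000                        # assumed sample size
cost_per_doc = 1.0                             # $50/hr at 50 docs/hr

workflows = {
    "1: prioritize, but review everything": total_docs,
    "2: review predicted-responsive + samples": predicted_responsive + training_and_qc,
    "3: training set only, trust the model": training_and_qc,
}
for name, docs in workflows.items():
    print(f"{name}: {docs:,} docs, ${docs * cost_per_doc:,.0f}")
```

The first workflow only changes the order of review; the savings come from the second and third, which shrink how many documents need human eyes at all.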

What Do the Numbers Say?

With all of this in mind, the chart below illustrates how these approaches play out. As we can see, an active learning project can end up costing the same as, or significantly less than, the standard model overall.

If we plan for active learning from the beginning, we would more likely see a reduction of about $20,000, or 30-35 percent. This is largely thanks to active learning’s ability to quickly and automatically differentiate the relevant documents from the noise. You will still need eyeballs on the relevant documents, but active learning can be an incredibly useful tool for avoiding having to put eyeballs on every document, especially those that are not relevant to your matter.

What’s the Right Answer?

So again, generally, active learning is a good investment, even without reducing the document set before applying it. And that's because it's pretty inexpensive to implement. Even in cases where you don't save money, you gain time—which is, of course, its own form of value.

Learn the Basics with This Active Learning 101 Blog Post


Zef Deda is a business development manager at Acorn Legal Solutions. An e-discovery thought leader working with Am Law 100 firms and corporate legal departments, Zef plays a key role in leveraging his knowledge of advanced technology and phased project plans to help clients solve complex issues. He positions himself as a collaborative thought partner, working closely with clients to understand their problems and end goals and to identify the best possible outcomes.
