What is the Turing Test for Computer-Assisted Review?

Jay Leib

The late Dr. Alan Turing would have been 100 years old last month, and there was no shortage of memorials to celebrate his life. Dr. Turing’s work made enormous strides in computer science: he is widely credited with formalizing the concept of the algorithm, pioneering artificial intelligence, and helping crack the code of the German Enigma machine during WWII.

In 1950, Dr. Turing authored “Computing Machinery and Intelligence,” a seminal paper on artificial intelligence in which he pondered the question, “Can machines think?” To explore it, he describes a thought experiment: a game involving a human interviewer and a hidden interviewee. After a series of questions, and without the interviewer’s knowledge, a machine begins to answer the questions in place of the human interviewee. The trick is to see whether the computer can answer accurately enough to make the interviewer believe they’re still interviewing another human. In short, when a computer is indistinguishable from a human, we might conclude that a computer can “think” and “learn.”

Turing believed that, in time, computers would pass this Turing Test. He predicted a partial pass by the year 2000 (Back to the Future, anyone?). Modern futurologists now put the date of a computer passing the Turing Test at 2029.

As we read the recent tributes to Dr. Turing, we kept thinking about the Turing Test and the state of computer-assisted review. What would Dr. Turing think of computer-assisted review? At a very high level, its categorization of documents may be indistinguishable from that of a reviewer. But would the computer-assisted review process pass the Turing Test?

We concluded that the answer is no, computer-assisted review would not pass the Turing Test—and we wouldn’t want it to.

Sure, the categorization of documents may be indistinguishable from that of a human reviewer. In fact, the quality may surpass that of a traditional linear review. But it’s important to understand that the computer-assisted review engine itself is not thinking about those decisions or making its own judgment calls. We believe strong human reviewers are critical to computer-assisted review: their deep investigative or legal knowledge of a case still drives the review process. Computer-assisted review builds on that expertise, using a math-based engine to amplify human reviewers’ coding decisions across the document universe and provide transparent rankings based on those decisions. In terms of defensibility, each document categorized by the engine can be traced back to a decision made by a human reviewer, so there is no black box making fickle choices.
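To make that amplification idea concrete, here is a minimal sketch of the general technique in Python. It assumes a generic TF-IDF vectorizer and logistic regression classifier from scikit-learn as stand-ins; this is not Relativity’s actual engine, just an illustration of how human coding decisions on a seed set can be propagated as ranked scores across unreviewed documents.

```python
# Illustrative sketch only (not Relativity's engine). Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents the human reviewers have already coded (the seed set).
seed_docs = [
    "Quarterly revenue forecast and pricing discussion",  # coded responsive
    "Team lunch schedule for next week",                  # coded not responsive
    "Contract amendment draft for the acquisition",       # coded responsive
    "Office printer is out of toner",                     # coded not responsive
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# The rest of the document universe, not yet reviewed.
unreviewed_docs = [
    "Updated pricing model attached for the merger talks",
    "Reminder: fire drill on Friday",
]

# Train a simple classifier on the reviewers' coding decisions.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression()
model.fit(X_seed, seed_labels)

# Amplify those decisions across the unreviewed documents as transparent scores.
X_unreviewed = vectorizer.transform(unreviewed_docs)
scores = model.predict_proba(X_unreviewed)[:, 1]  # probability of "responsive"

# Rank unreviewed documents so reviewers can validate the highest-scoring ones first.
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

Even in this toy version, the traceability point holds: every ranked score is a function of the handful of human coding decisions in the seed set, so a reviewer’s call can be audited and, if changed, the downstream rankings change with it.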

Clients often ask if Relativity Assisted Review is appropriate for cases with a lot at stake (e.g., “it’s really important”), cases that require immediate review (“need it done yesterday”), or cases with strict budget restrictions (“cost conscious”). Our answer is often yes: Assisted Review can be useful in all of these types of situations. In fact, it was designed as a solution to real-world problems like these.

To challenge our perspective on computer-assisted review, we’ve developed our own e-discovery Turing Test for this blog post. Below are some fictional scenarios of varying risk and complexity. For each example, we had to decide whether a computer-assisted review or a traditional, linear review workflow would be the superior strategy. Our answers surprised us. We encourage you to think about what yours would look like:

Scenario A - You’ve been arrested, and your only chance of exoneration is to find the evidence that will clear your name, buried somewhere in a huge universe of documents. Do you use a small team of reviewers combined with computer-assisted review, or do you use a large team of human reviewers for a linear review?

Scenario B - Someone’s left a message on your company’s whistleblower hotline: they’re aware of a systemic issue in the company and, unless you investigate and remediate the situation, they will go public with it in five days. Do you mobilize a large team of reviewers, or a small team combined with computer-assisted review?

Scenario C - During interviews with the key actors provided by your client, you have compiled an adequate list of keywords to guide your review. The penalty for missing a key document is very high. Do you choose a traditional keyword search workflow or a computer-assisted review workflow?

Scenario D - You’re working on an intellectual property case. Your document universe contains a significant number of engineering drawings and renderings. Would you use computer-assisted review?

As we remember the contributions of Dr. Turing, we may want to ask how we can use powerful analytical capabilities like computer-assisted review to supplement and assist human reviewers. We’re always around to discuss these scenarios with Relativity users—and help them figure out which cases could benefit from the use of Assisted Review—so please feel free to reach out at any time.

