Cimplifi Uses Generative AI in Relativity aiR for Review to Cut Review Time in Half

Customer Since: 2013
How did they do it?
- Used aiR for Review’s generative AI technology to review 14,000 documents for their law firm client with as little “eyes-on” review as possible.
- Worked with a subject matter expert attorney to develop optimal prompt criteria, guide aiR for Review’s decisions, and save 250+ hours of linear review time.
- Validated results with an elusion test, finding that aiR for Review missed zero responsive documents in a sample reviewed by the expert attorney.
Embracing Early Generative AI Adoption
An avid participant in Relativity aiR for Review’s limited availability program, Cimplifi, a leading legal service provider and Relativity Gold Partner, has championed the use of generative AI to drive efficiencies and savings for their clients. When their law firm client needed to conduct review on a set of documents using “as little eyes-on review as possible,” Cimplifi decided to put aiR for Review’s transformative technology into action.
The law firm’s case involved a claim arising from insurance litigation. The firm had already conducted a significant amount of review and produced documents to opposing counsel, but a recent expansion in scope introduced an additional 14,000 documents that needed to be assessed for relevance. The firm turned to Cimplifi for guidance on how to avoid another round of arduous linear review and move the case forward as quickly as possible.
The Perfect Prompt Criteria: Increasing aiR for Review’s Decisiveness
Cimplifi collaborated with one of their client’s attorneys to develop effective inputs for aiR for Review’s analysis. This process involved creating prompt criteria based on what was important in the case, testing the criteria on samples, and adjusting as needed to ensure aiR for Review would produce desired results when applied to the entire document set.
To begin, the attorney reviewed 100 randomly selected documents and created a first draft of aiR for Review’s prompt criteria based on what he knew about the case. The team used this first draft to run aiR for Review on 1,375 documents. While results were strong, approximately 10% came back as “borderline relevant.” Cimplifi worked with the attorney to review these borderline documents and create a second prompt criteria draft to better guide aiR for Review’s decisions.
When the borderline documents were assessed again with the second draft, aiR for Review made better decisions on what was relevant, and the borderline population was reduced by 60%. Cimplifi developed a final, third version of the prompt criteria, reducing that borderline population by an additional 10%.
Throughout this process, aiR for Review’s decisions remained consistent apart from the changes deliberately introduced between drafts, giving Cimplifi the confidence to run the third version of the prompt criteria across the entire document population.
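For readers who want a more concrete picture of this iterate-and-measure pattern, the Python sketch below shows one generic way to track how much of a test sample each prompt-criteria draft leaves as borderline and stop once the rate is acceptable. The classify_with_criteria stub, the 5% target, and the function names are illustrative assumptions; they are not aiR for Review’s actual interface or Cimplifi’s tooling.

```python
from typing import List

def classify_with_criteria(document: str, prompt_criteria: str) -> str:
    """Hypothetical stand-in for the generative AI relevance call; assumed to
    return 'relevant', 'not relevant', or 'borderline' for a single document."""
    raise NotImplementedError("Wire this to the review tool's classification call.")

def borderline_rate(sample: List[str], prompt_criteria: str) -> float:
    """Fraction of a test sample that a given criteria draft leaves borderline."""
    decisions = [classify_with_criteria(doc, prompt_criteria) for doc in sample]
    return decisions.count("borderline") / len(decisions)

def refine_until_decisive(sample: List[str], drafts: List[str], target: float = 0.05) -> str:
    """Walk successive prompt-criteria drafts, stopping at the first one whose
    borderline rate on the test sample falls at or below the target."""
    for draft in drafts:
        rate = borderline_rate(sample, draft)
        print(f"borderline rate for this draft: {rate:.1%}")
        if rate <= target:
            return draft
    return drafts[-1]  # fall back to the most refined draft
```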
Delivering Results, From Start to Finish
The upfront work conducted by Cimplifi paid off, and aiR for Review delivered. When the team ran aiR for Review on all 14,000 documents, less than 4% came back as borderline responsive. The team then ran an elusion test, working from an estimate of ~40% richness (the estimated percentage of responsive documents across the full data set). The test drew a random sample of 357 documents from the predicted non-responsive population, and the subject matter expert attorney found no responsive material in it.
In other words, aiR for Review did not miss a single responsive document in the reviewed sample, supporting with 95% confidence that elusion across the entire data set was under 5%.
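As a rough illustration of where a statement like that can come from (not a description of Cimplifi’s actual validation protocol), a zero-miss elusion sample supports an exact one-sided binomial upper bound on the elusion rate. With zero responsive documents found in a random sample of 357, the 95% upper bound works out to roughly 0.8%, comfortably below the 5% threshold cited above.

```python
def elusion_upper_bound(sample_size: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper confidence bound on the elusion rate when a random
    sample of the predicted non-responsive documents contains zero misses.
    With zero misses, the Clopper-Pearson bound simplifies to 1 - alpha**(1/n)."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / sample_size)

bound = elusion_upper_bound(357)
print(f"95% upper bound on elusion: {bound:.2%}")  # ~0.84%

# Quick cross-check with the 'rule of three' approximation: 3 / n
print(f"rule-of-three estimate: {3 / 357:.2%}")    # ~0.84%
```

Either way, a zero-miss sample of this size is consistent with the case team’s stated result of elusion under 5% at 95% confidence.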
"Picture a future where all you have to do is budget a couple days of prompt iteration and some quality control, significantly reducing the time and costs of even large-scale review projects."
ARI PERLSTEIN, Chief Technology Officer, Cimplifi
Remarkable Outcomes Lead to Future Opportunities
By using aiR for Review instead of more traditional processes, Cimplifi eliminated 250 review hours, cutting the time to complete this project in half and delivering a remarkable outcome for their law firm client.
On top of these efficiencies, aiR for Review provided consistent coding decisions that in some cases surpassed the expertise of human reviewers. During QC of the case attorney’s first 100 coded documents, Cimplifi found three examples where aiR for Review made a correct prediction that differed from the subject matter expert’s decision in the initial review. Furthermore, aiR for Review’s detailed natural language rationales and citations allowed the team to understand why it made each call, and why the initial human decision was incorrect. With these results, Cimplifi was able to advocate for further use of aiR for Review’s generative AI technology and anticipates that the law firm will leverage the solution on larger cases in the future.
"These results have opened the door to other, even bigger opportunities. The client was thrilled with how aiR for Review performed and is eager to use it on additional projects."
ANDREW RUTTER, Senior Vice President of Cimplifi Analytics, Cimplifi