Relativity Assisted Review in Relativity 8 includes enhancements that provide an easier, more flexible workflow and additional statistical insight into your projects. Among these improvements is the introduction of two new round types: pre-coded seeds and a control set.
These round types were added to improve the efficiency of document review for an Assisted Review project. They also provide a reliable way to calculate the statistical measurements that are emerging as common metrics for judging the accuracy of a computer-assisted review process.
Pre-coded Seeds
When Assisted Review is used after a linear review has begun, a pre-coded seed round allows you to include documents that have already been identified as responsive. This workflow ensures the time and effort your team has already spent on manual review isn't wasted.
Pre-coded seed rounds can also be helpful if your workflow includes judgmental sampling, which allows you to identify responsive example documents without relying solely on random sampling. For example, you could use keyword searching, concept searching, or clustering to locate responsive documents before an Assisted Review round. Using the pre-coded seed round type, you can submit those documents to Assisted Review as examples. This is particularly helpful for data sets with low responsive rates, where random sampling surfaces so few responsive documents that Assisted Review would otherwise need extra rounds to identify and understand responsiveness.
Control Sets
The control set—which is also known as a truth set or a golden set—is a group of documents that are manually coded but not included as examples. Instead, they are used to determine the stability of your results by allowing you to visualize how closely Assisted Review’s decisions match the manual coding decisions. This provides you with another way to measure the accuracy of your project’s results, and can be used in addition to your overturn reports.
After each round, the precision, recall, and F1 of the control set quantify your results. Precision tells you the percentage of documents categorized as responsive that are truly responsive. Recall tells you the percentage of all truly responsive documents in the data set that have been categorized as responsive. F1 is a measure of overall accuracy that combines precision and recall into a single value. During each round, as Assisted Review gains a better understanding of your data set, this statistical snapshot should trend upward until all three values reach a satisfactory level.
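To make these definitions concrete, here is a minimal sketch of how precision, recall, and F1 are computed from a control set's manual coding decisions and the system's categorization decisions. This is the standard calculation, not Relativity's internal implementation; the function name and the sample data are illustrative.

```python
def control_set_metrics(manual, predicted):
    """Compute precision, recall, and F1 for a control set.

    manual and predicted are parallel lists of booleans:
    True = responsive, False = not responsive.
    """
    pairs = list(zip(manual, predicted))
    tp = sum(m and p for m, p in pairs)          # responsive, categorized responsive
    fp = sum((not m) and p for m, p in pairs)    # not responsive, categorized responsive
    fn = sum(m and (not p) for m, p in pairs)    # responsive, missed by the system
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical control set of six documents
manual    = [True, True, True, False, False, False]   # reviewers' decisions
predicted = [True, True, False, True, False, False]   # system's decisions
p, r, f1 = control_set_metrics(manual, predicted)
# Here p, r, and f1 each work out to 2/3 (about 0.67)
```

Because the control set is never used as training examples, these numbers stay meaningful from round to round and can be compared directly as the project progresses.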
In addition to the new round types, the round creation process in Assisted Review for Relativity 8 has become much easier. The new version includes a sample size calculator that can help you understand how your statistical settings will affect the number of documents that will require manual review. Additionally, saved searches that are frequently used in round creation are now automatically generated, and review batches can also be automatically created.
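For a sense of how statistical settings translate into review effort, the sketch below applies the standard sample size formula for estimating a proportion, with a finite population correction. This is a generic statistical calculation under assumed defaults (95% confidence, 2.5% margin of error), not Relativity's actual calculator, and the function name is illustrative.

```python
import math

def sample_size(population, confidence=0.95, margin=0.025, p=0.5):
    """Estimate the random sample size needed to measure a proportion.

    Uses the standard normal-approximation formula with a finite
    population correction. p = 0.5 is the most conservative assumption
    about the underlying responsive rate.
    """
    # z-scores for common confidence levels
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin ** 2    # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite population correction
    return math.ceil(n)

# A 100,000-document data set at 95% confidence and a 2.5% margin of
# error needs roughly 1,500 randomly sampled documents.
n = sample_size(100_000)
```

The takeaway is that the required sample grows slowly with population size: tightening the margin of error drives review effort far more than adding documents to the data set does.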
Click here to learn more about these and other enhancements to Assisted Review in Relativity 8. As always, feel free to contact us if you have any questions about our newest release.