
Your single source for new lessons on legal technology, e-discovery, compliance, and the people innovating behind the scenes.

How to Win the Race for Defensible Productions

Steve Sawyer - FRONTEO

This article was originally published on the FRONTEO blog. It's a good look at the many workflow components that can maximize your team's efforts during review, so we wanted to share it here as well.

The review phase of an e-discovery project can look and feel like a lap of a relay race, where the baton has been handed off and the rest of the team cheers from the sidelines. Yet, a truly successful review requires planning and continued buy-in among all stakeholders. This starts with the development of a workflow, or defined project timeline. By preparing ahead and sticking with this workflow, the team can work together as partners towards efficient, defensible productions.

Here are a few best practices that should be essential in any winning review workflow.

Early Case Assessment: The Warm-up Lap

After data has been collected and processed, early case assessment, or ECA, allows the team to develop a plan for any upcoming hurdles. In ECA, the collection can be visualized, date filters and domain parsing can be applied to exclude records that fall categorically outside the scope of production, and structured and concept analytics can be leveraged to organize and map relationships between records and custodians.

It can be tempting to bypass ECA and move immediately to review, but consider the opportunity cost in time and case intelligence. On a recent e-discovery project, the FRONTEO team collected data that, after global deduplication, contained more than 125,000 records. ECA made it apparent that much of the email in the set consisted of passive distribution list traffic. These emails were isolated and further analyzed, date cuts were applied, and email threading was deployed. The content requiring human review was narrowed to fewer than 3,000 records, which were subsequently promoted for review.
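A culling pass like the one described above can be sketched roughly as follows. This is an illustrative outline only, not FRONTEO's actual implementation; the field names (`md5_hash`, `sender_domain`, etc.) are hypothetical stand-ins for whatever metadata your processing tool emits.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Record:
    doc_id: str
    md5_hash: str       # hash computed at processing time
    sent: date
    sender_domain: str  # parsed from the sender's email address

def eca_cull(records, date_from, date_to, excluded_domains):
    """Globally deduplicate, then apply date cuts and domain exclusions,
    returning only the records to promote for review."""
    seen_hashes = set()
    promoted = []
    for rec in records:
        if rec.md5_hash in seen_hashes:             # global dedup
            continue
        seen_hashes.add(rec.md5_hash)
        if not (date_from <= rec.sent <= date_to):  # date cut
            continue
        if rec.sender_domain in excluded_domains:   # e.g. distribution lists
            continue
        promoted.append(rec)
    return promoted
```

In practice, each exclusion decision would also be logged so the culling remains defensible, but the basic shape is the same.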

But ECA is not simply about data reduction, and it should not be understood as a performance-enhancing supplement for finding the needle in a haystack, or a magical data reducer for every set. Analysis of metadata at this stage also provides insight into common file paths, visualizes relationships between individuals, and identifies key terms that can be used for QC searches. In sum, ECA is an essential staging point for any e-discovery project, providing visibility to the team as it gears up for review.

Structured Analytics: Finding the Best Line

A review will be driven both by project requirements and by the data in the set. Structured analytics can help find the most expedient path through the data.

As global organizations become more reliant on social media and user-generated content, review sets are more likely to contain content in multiple languages. It's often possible for bilingual teams to work side by side on data that's been segmented by language, working from the same review protocol.

Further, many review platforms have robust support for email threading. Two common review strategies are reviewing only the "last in time" email in each thread, and eliminating duplicate spare emails, which contain textually identical content. As noted above, thread suppression, where appropriate, can produce significant wins in data reduction.
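The "last in time" strategy can be sketched in a few lines: group messages by thread and keep only the most recent one, on the assumption that earlier messages in the thread are quoted within it. This is a simplified illustration; real threading engines (e.g. in Relativity) also detect branches and attachments, and the `thread_id` and `sent` fields here are hypothetical.

```python
from collections import defaultdict

def last_in_time(messages):
    """Keep only the most recent email in each thread; earlier messages,
    assumed to be quoted in the final message, are suppressed."""
    threads = defaultdict(list)
    for msg in messages:
        threads[msg["thread_id"]].append(msg)
    # ISO-format date strings sort chronologically, so max() works directly
    return [max(msgs, key=lambda m: m["sent"]) for msgs in threads.values()]
```

A real workflow would promote the suppressed messages to a separate saved search rather than discard them, so they remain available if a thread branches.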

As with ECA, these technologies have their limitations; analytics engines may struggle with thread suppression in multilingual sets. Further, out-of-the-box analytics support for IM, text messaging, and social media is only slowly becoming robust enough for review, though advances in computer forensics are steadily being integrated into review platforms. A full-service vendor with experience working with multilingual data sets and user-generated content can serve as a clutch player on the team.

Advanced Review Teams: Getting a Jump Off the Starting Block

Once records have been promoted, review management can further cull the data set and provide additional insight. Regardless of how robust the keywords are, how the data is deduplicated, and whether NIST lists have been run during processing, many data sets will contain at least some records that are highly unlikely to be relevant and can be flagged for the review team. The FRONTEO Advanced Review Team approach has been successful in proactively addressing these records.

Prior to the review kick-off, for example, our review managers search the promoted data to develop QC benchmarks, and also analyze the set based upon file type and location. Further, we work with exceptions reports to identify password protected files and discuss the approach to such records prior to the review. This content can be segregated and tracked, to ensure consistent application of the review protocol and minimal time spent on irrelevant content.
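Segregating exception files from reviewable content, as described above, amounts to a simple partition of the promoted set. The sketch below is illustrative only; the `processing_error` field and its values are hypothetical placeholders for whatever your processing tool's exceptions report provides.

```python
def segregate_exceptions(records, error_types=("password_protected", "corrupt")):
    """Split promoted records into reviewable content and exception files
    that need a separate workflow (e.g. password cracking, repair)."""
    reviewable, exceptions = [], []
    for rec in records:
        if rec.get("processing_error") in error_types:
            exceptions.append(rec)
        else:
            reviewable.append(rec)
    return reviewable, exceptions
```

Tracking the two populations separately keeps the review protocol consistent and avoids reviewer time being spent on files that cannot yet be opened.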

Customized Reporting and Customized Objects: Make it Your Review

No two reviews are exactly the same, so the tools you use to complete a review should reflect this. Customized extensions to the review platform can help the team chart a path to success and give all partners insight into the deliverables.

Review teams can leverage a variety of tools to enhance the review experience:

  • A data room and decision log within the review tool ensure that case teams and review teams across various sites have access to the most current process notes and quick access to examples from the document set.
  • Automated privilege log creation allows review teams and case teams to generate normalized, defensible logs that can be customized to stipulated orders, or to templates designed to meet the requirements of regulating agencies or local rules.
  • Field trackers within the review platform can be leveraged to generate robust QC metrics and customized reporting. Successful review requires thorough, reproducible reporting that not only details review decisions but also explains progress and spend across the e-discovery process. This reporting should be granular and comprehensible.

In addition to customizing the review platform, ensure that your reporting metrics track to the project goals. Reporting often provides insight beyond simply the productivity of a team. The review team and case team should work from the outset to define the scope of deliverables and develop reproducible and comprehensive metrics that all members of the team can follow.
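As a rough sketch of the kind of reproducible metrics described above, the function below rolls first-pass and QC decisions up into a small progress report. The decision categories and the overturn-rate metric are illustrative assumptions, not a specific FRONTEO report format.

```python
from collections import Counter

def review_metrics(decisions):
    """Summarize review progress. `decisions` is a list of
    (reviewer, first_pass_call, qc_call) tuples, where qc_call is
    None for documents not yet sampled by QC."""
    calls = Counter(first for _, first, _ in decisions)
    qc_checked = sum(1 for _, _, qc in decisions if qc is not None)
    overturned = sum(1 for _, first, qc in decisions
                     if qc is not None and qc != first)
    return {
        "reviewed": len(decisions),
        "calls": dict(calls),                # volume per first-pass decision
        "qc_overturn_rate": overturned / qc_checked if qc_checked else 0.0,
    }
```

Running a report like this at a regular cadence gives the case team the same granular, reproducible view of progress that the review team uses internally.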


Whether at the starting line of an e-discovery project or nearing completion, a well-defined and well-executed process will ensure that everyone on the team performs at top levels. Resist the urge to sprint off the blocks, and instead work with your e-discovery vendor to develop a comprehensive strategy to complete review on your data set.

Steve Sawyer is Manager of Review Services for FRONTEO, and is a member of the Data Science & Strategy team. Mr. Sawyer is a Relativity Certified Review Specialist, and works with clients to implement review workflows leveraging analytics and customized reporting to deliver efficient, defensible productions.