Is TAR 1.0 dead—or is it long live TAR 1.0?

Understanding how to match the TAR approach to your review objective

Technology-assisted review (TAR) keeps getting better as artificial intelligence (AI) and machine learning technologies improve. But as the technology improves, some eDiscovery professionals have moved away from earlier approaches, and that's not always a step in the right direction. The right way to approach TAR is to use the strategy that best fits the goals of your review.

TAR 1.0 and 2.0 workflows

The hallmark of TAR 1.0 is one-time training, using a simple learning approach in which a subject matter expert trains the TAR algorithm until it reaches stable and acceptable results. Once that point is reached, the algorithm stops learning and starts working, ranking the remaining document set.

The newer approach, TAR 2.0, forgoes simple learning in favor of continuous active learning, in which the algorithm keeps learning throughout the review. The algorithm and the human review team work side by side, with the algorithm steadily improving until the reviewers decide that the mission of the review has been satisfied. Unlike TAR 1.0, TAR 2.0 also works well with low-richness and small collections.
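The continuous-active-learning pattern behind TAR 2.0 can be sketched in a few lines: retrain the classifier after each batch of human coding decisions, then route the highest-ranked unreviewed documents to the reviewers next. This is a minimal illustration on toy data, not any vendor's implementation; all names, sizes, and thresholds are invented.

```python
# Minimal sketch of a continuous-active-learning (CAL) review loop.
# Toy data only; every number here is illustrative, not a recommendation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake "document" features and hidden relevance labels (richness ~5%).
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.8).astype(int)

# Seed set: a couple of known relevant docs plus some non-relevant ones,
# so the first model fit sees both classes.
seed = list(np.flatnonzero(y == 1)[:2]) + list(np.flatnonzero(y == 0)[:18])
reviewed = list(seed)
unreviewed = [i for i in range(len(X)) if i not in set(reviewed)]

model = LogisticRegression(max_iter=1000)
for _ in range(30):                        # review in batches of 20
    model.fit(X[reviewed], y[reviewed])    # retrain on every coded doc so far
    scores = model.predict_proba(X[unreviewed])[:, 1]
    top = np.argsort(scores)[::-1][:20]    # highest-ranked docs go to reviewers
    picked = [unreviewed[i] for i in top]
    reviewed.extend(picked)                # humans "code" them (labels revealed)
    unreviewed = [i for i in unreviewed if i not in set(picked)]

found = int(y[reviewed].sum())
print(f"Relevant found: {found} of {int(y.sum())} "
      f"after reviewing {len(reviewed)} of {len(X)} docs")
```

A TAR 1.0 workflow, by contrast, would run the `fit` step only once on the training set and then rank the entire remaining collection in a single pass.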

Many eDiscovery professionals have moved on from TAR 1.0, dismissing it as an early-stage technology that has been replaced by the “better” TAR 2.0 approach. But it’s a mistake to think that you should completely abandon TAR 1.0.

Matching TAR strategies to the review objectives

TAR 2.0 is generally the preferred option in large-scale eDiscovery productions. For example, when the parties have agreed on a rolling production, TAR 2.0 allows the producing party to begin training the algorithm immediately and to accommodate rolling data loads, which are immediately ranked and made eligible for review. And in many cases, the scope of discovery is not clearly defined at the outset; it evolves over time. TAR 1.0 cannot adjust to midstream changes: it must be fully trained before it analyzes the universe of documents, allowing only “one bite at the apple.”

Some knowledge-generation tasks are also better suited to TAR 2.0 than TAR 1.0. Most investigations, whether internal, regulatory, or in anticipation of litigation, are true knowledge-generation tasks. The primary objective is to find the critical documents on all the principal issues as quickly as possible. You need not find every document, just enough of the key documents to understand the underlying issues. So recall (finding all of the relevant documents) is not crucial; precision and coverage are. And unlike litigation, there are no fact-laden complaints or prescriptive requests for production to focus the search for pertinent documents. An investigation often begins with, at best, a handful of informative documents; more often, with nothing but vague assertions of some actionable conduct.
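The recall/precision distinction above comes down to simple arithmetic. The counts below are invented for illustration:

```python
# Illustrative recall vs. precision calculation; all counts are made up.
relevant_in_collection = 1000   # true relevant docs in the whole corpus
docs_reviewed = 500             # docs the team actually looked at
relevant_found = 400            # relevant docs among those reviewed

recall = relevant_found / relevant_in_collection   # share of all relevant docs found
precision = relevant_found / docs_reviewed         # share of reviewed docs that were relevant

print(f"recall={recall:.0%}, precision={precision:.0%}")  # recall=40%, precision=80%
```

In this hypothetical, a litigation production aiming for high recall would call 40% unacceptable, while an investigator who just found 400 highly informative documents in only 500 reviewed might consider the job done.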

Opposing-party productions are another example: the objective is to weed through a collection to find the particularly relevant documents. Recall is not as critical as precision (seeing more relevant documents than irrelevant ones) and surfacing more hot documents in the process. TAR 2.0 is particularly suited to this task because it is efficient at surfacing hot documents in sparse collections. Deposition preparation and issue analysis are further examples; both tasks often suffer from low richness within the larger responsive collection. Privilege review and privilege QC are also knowledge-generation tasks: TAR 2.0 can locate privileged documents among a group of unreviewed documents, or serve as a quality control measure to confirm that documents coded as not privileged really are not. In both cases, TAR 2.0 can be effective in preventing inappropriate production and disclosure.

But there are times and places when TAR 1.0 better matches the review objective. In some regulatory investigations, for example, a respondent may need to quickly and efficiently make a reasonable effort to identify and produce requested documents. In these cases, neither recall nor precision need be perfect; the standard is only reasonable compliance. The same applies to second requests and third-party subpoenas, where TAR 1.0 is a reasonable, and typically more affordable, approach to document review.

There’s another reason to maintain a functional TAR 1.0 approach: you may face cases in which you don’t have a choice of methodology. The Department of Justice, for example, all but mandates TAR 1.0 in its Predictive Coding Model Agreement, as do some courts.

Which TAR approach is best for your use case?

While there may be instances where the choice is clear, most eDiscovery and compliance productions will walk a finer line where the pros and cons of either approach are more closely balanced.

That’s why we put together an on-demand webinar, Is TAR 1.0 Dead?, to explain the differences between TAR 1.0 and 2.0 and set out the advantages and disadvantages of each. The webinar also offers some specific factors that might influence your decision on which approach you should use in a given review scenario.

The bottom line: no two reviews are the same, and one size doesn’t fit all for TAR. The best approach is to have options, and whatever level of TAR you decide to use, OpenText™ Discovery offers the approach and expertise you need. To learn more, visit the OpenText Discovery website or the Catalyst website (now part of OpenText Discovery), or download the authoritative guide to TAR.

Rachel Teisch

Rachel Teisch is Senior Director of Product Marketing at OpenText Discovery. She brings nearly two decades of experience in eDiscovery, and is responsible for product marketing for the OpenText Discovery suite of products. She most recently served as Vice President, Marketing, at Catalyst Repository Systems, which was acquired by OpenText in January 2019 and is now part of the OpenText Discovery portfolio.
