Prioritized Review with AI Endorsed by Federal Court

Parties can leverage Predictive Coding for eDiscovery without extensive transparency

Prioritized Review for eDiscovery: Something we can all agree on

It’s rare these days to find common ground that everyone can agree on – rarer still in the adversarial world of legal services – but the benefits of using machine learning to prioritize the identification of important documents seem to be a no-brainer. In the world of eDiscovery, lawyers may need to review and analyze millions of documents, though they usually have the time, budget and energy to review only a few thousand. Fortunately, a recent federal court order has underscored that parties are free to use AI to organize, prioritize and conquer the document review challenge.

Prioritized review rarely gets judicial treatment because it is such an obvious win for everyone involved. And this recent eDiscovery order shows how a prioritized review workflow sidesteps one of the larger legal debates around the requisite level of process transparency for Predictive Coding (for more on that issue, check out this article by my colleague Hal Marcus).

Understanding eDiscovery AI and the Prioritized Review Workflow

By using continuous machine learning (Predictive Coding in OpenText Axcelerate for example), lawyers can train an AI algorithm to find documents that share similar traits and patterns to ones already identified as important to the case. The AI then searches for similar documents, prioritizing them for review over other documents. As humans review those documents and give the machine feedback, it can learn even more, refine the document model, and suggest even more accurate results.

As a result, humans are generally focusing on relevant content throughout the process. The rest of the documents – generally irrelevant content that won’t be produced – can be reviewed in an expedited workflow, often by lower-cost junior attorneys, by contractors/service providers, or phased for a later stage in the discovery process.
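The loop described above – review, learn, re-rank, repeat – can be sketched in a few lines. The snippet below is a minimal, illustrative toy, not OpenText Axcelerate’s actual algorithm: real Predictive Coding uses far richer statistical models, and the documents, seed judgments, and crude word-overlap `score()` here are invented for the example.

```python
# Minimal sketch of prioritized review: score each unreviewed document by
# word overlap with documents already tagged relevant, then queue the
# highest-scoring ones first. Illustrative only -- real tools use trained
# statistical models, but the loop is the same: review, learn, re-rank.
from collections import Counter

def score(doc, relevant_docs):
    """Crude relevance score: count of shared words with the relevant set."""
    words = set(doc.lower().split())
    relevant_words = Counter(w for d in relevant_docs for w in d.lower().split())
    return sum(relevant_words[w] for w in words)

reviewed_relevant = [
    "merger agreement draft attached",
    "pricing terms for the merger",
]
unreviewed = [
    "updated merger agreement for signature",
    "parking garage closed next week",
    "draft pricing schedule",
]

# Rank unreviewed docs so reviewers see likely-relevant material first.
queue = sorted(unreviewed, key=lambda d: score(d, reviewed_relevant), reverse=True)
for doc in queue:
    print(score(doc, reviewed_relevant), doc)  # merger-related docs rank first
```

As reviewers tag more documents, the relevant set grows and the ranking is recomputed – that feedback cycle is what makes the learning “continuous.”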

Axcelerate Predictive Coding Workflow

The prioritized review workflow really is just that straightforward. Wouldn’t you want to know the most important facts of your case up front, instead of being surprised at the end of your review project?

Recent Judicial Guidance on Prioritized Review Transparency

Case in point: a new federal opinion issued in Ollila v. Babcock & Wilcox, 3:17-cv-109 (7-11-2018), from the Western District of North Carolina. The protocol includes two sentences that warrant closer analysis:

The parties agree to confer in good faith regarding the possibility of utilizing common document review methodologies, including: (i) forms of targeted assisted review (“TAR”) such as simple active- or passive-learning, continuous active- or multi-model-learning, or some combination thereof; (ii) date filtering; and (iii) keyword search queries (e.g., single term or Boolean strings). No party shall use predictive coding/technology-assisted-review for the purpose of culling the documents to be reviewed or produced without notifying the opposing party prior to use and with ample time to meet and confer in good faith regarding the use of such technologies.

This section is important to call out for three reasons.

First, I’ve never seen TAR defined as Targeted Assisted Review; it has always stood for Technology Assisted Review, the term that appears in the second sentence in conjunction with Predictive Coding. In fact, this case is literally the only hit in Google Scholar for “targeted assisted review.” Nonetheless, I’ll add it to my growing lexicon of AI pseudonyms just in case.

Second, there is a relatively recent debate among eDiscovery experts regarding the use of analytics in concert. Specifically, can search terms be applied to broadly cull a data set before machine learning algorithms cull it further? There are legal opinions and dicta that go both ways on this. This opinion does nothing to end the debate, but its use of “and” in the list of methodologies is another piece of authority intimating that TAR is indeed compatible with keyword and date filters.
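To make that composition concrete, here is a hedged sketch of the protocol’s three methodologies working in one pipeline: date filtering and keyword search cull the population first, then a TAR-style score prioritizes what survives. The document fields and the placeholder `score()` are invented for illustration; a real system would substitute a trained model’s relevance probability.

```python
# Illustrative pipeline combining the Ollila protocol's methodologies:
# (ii) date filtering, (iii) keyword search, then (i) TAR-style ranking.
from datetime import date

documents = [
    {"id": 1, "sent": date(2016, 3, 1), "text": "merger pricing discussion"},
    {"id": 2, "sent": date(2016, 7, 9), "text": "team offsite logistics"},
    {"id": 3, "sent": date(2019, 1, 5), "text": "merger closing checklist"},
    {"id": 4, "sent": date(2016, 5, 20),
     "text": "re: merger agreement and pricing terms for diligence"},
]

# (ii) date filtering, then (iii) a keyword search query
in_range = [d for d in documents
            if date(2015, 1, 1) <= d["sent"] <= date(2017, 12, 31)]
hits = [d for d in in_range if "merger" in d["text"]]

# (i) TAR-style prioritization within the culled set. A real tool would use
# a trained model's relevance score; this stub just prefers term-rich docs.
def score(doc):
    return len(doc["text"])

queue = sorted(hits, key=score, reverse=True)
print([d["id"] for d in queue])  # culled to docs 1 and 4, with 4 ranked first
```

Whether the culling filters may legally precede the machine learning step is exactly the open question the debate is about; the code only shows that the mechanics compose naturally.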

Third, and most important, is the final sentence regarding the transparency and disclosure standard for using Predictive Coding/Technology-Assisted Review. The disclosure clause only applies to culling workflows, not prioritization workflows. Notice how that limiting language specifically condones the use of Predictive Coding for the purposes of prioritized review. Using AI to prioritize the most important documents earlier in the process? No problem, the court says. And, to be clear, this order does nothing to foreclose the use of TAR for culling; it simply requires the parties to meet and confer before adopting such a workflow.

The eDiscovery Disclosure and Transparency Debate Lives On

Of course, one published protocol does not a doctrine make. And this is one of the livelier areas of eDiscovery jurisprudence: How much of the eDiscovery process, and in what detail, must be disclosed to the other side?

Key word being must.

Many of the opinions cited for the proposition of aggressive transparency are couched in voluntariness. The court suggested transparency. The parties agreed to transparency. In the Biomet case, the judge was very straightforward that he could find no legal authority to compel transparency. Furthermore, in the “old days” of paper discovery, it would have been exceedingly unusual for parties to disclose more than even cursory details regarding their review process.

Learn more about eDiscovery AI

If you’ve read this far, you obviously want to learn more about eDiscovery AI. Explore the different machine learning approaches in Forrester’s TAR Report here, or read up on other Predictive Coding case law here. Want to see Predictive Coding in action? Catch me at ILTACON next week for a demo (Booth #820-822) or watch our 2-minute video here.

Adam Kuhn

Adam is an eDiscovery attorney and Product Marketing Manager at OpenText Discovery. He holds an advanced certification for the Axcelerate eDiscovery platform and is responsible for research, education and outreach programs. Adam also serves as a Senior Research Fellow at the McCarthy Institute for IP & Technology Law at the University of San Francisco School of Law.

2 Comments

  1. Re: TAR as an abbreviation for ” *Targeted* Assisted Review” — do you think that was simply a momentary error on the judge’s part? If so, will there be some sort of follow-up that this was a one-off mistake in nomenclature, where the correct term was intended? (I don’t know how often that happens in U.S. legal practice.)
    Or is this likely to inspire future references to “Targeted” Assisted Review? I know correct and unambiguous naming is important in the legal field (and other industries too).