Many of our listeners will know the term Technology-Assisted Review (TAR) by its more common nickname, “predictive coding.” Lawyers and judges alike need to pay attention to TAR due to potential changes to the Federal Rules of Civil Procedure (FRCP) coming up in December 2015. And since almost all courts now permit lawyers to use TAR for document review, it is important to keep up.
In this episode of Digital Detectives, Sharon Nelson and John Simek interview Judge Andrew Peck, an expert in issues relating to electronic discovery. Together they discuss the current state of technology-assisted review, how the FRCP amendments will affect the way lawyers do discovery, and best practices when using TAR. Judge Peck explains the origin of “technology-assisted review” as the preferred terminology over “predictive coding” or “computer-assisted review.” He explains that training the TAR program effectively is important, but the technology has progressed to the point where TAR will be successful as long as the training is sufficient and the review team understands the scope of the case. Finally, since predictive coding programs can be expensive, he explains when a case is big enough to warrant their use. Stick around to the end for a tip on using Federal Rule of Evidence 502 in court.
Judge Andrew Peck was appointed as U.S. Magistrate Judge for the Southern District of New York in 1995 and served as Chief Magistrate Judge in 2004-2006. Judge Peck is a frequent lecturer on issues relating to electronic discovery and is a member of the Sedona Conference and the Sedona Conference Judicial Advisory Board. He was awarded the Champion of Technology Award for 2011 by Law Technology News.
Digital Detectives: Judge Andrew Peck on TAR (Technology-Assisted Review) – 5/11/2015
Advertiser: Welcome to Digital Detectives, reports from the battlefront. We’ll discuss computer forensics, electronic discovery and information security issues and what’s really happening in the trenches. Not theory, but practical information that you could use in your law practice. Right here on the Legal Talk Network.
Sharon D. Nelson: Welcome to the 56th edition of Digital Detectives, we’re glad to have you with us. I’m Sharon Nelson, president of Sensei Enterprises.
John W. Simek: And I’m John Simek, vice president of Sensei Enterprises. Today on Digital Detectives, our topic is Judge Peck on TAR, or Technology-Assisted Review. We’re delighted to welcome Magistrate Judge Andrew Peck as our guest. We recently had the pleasure of serving on the faculty with him at a regional Sedona Conference event. He was appointed as U.S. Magistrate Judge for the Southern District of New York in 1995 and served as Chief Magistrate Judge in 2004-2006. Judge Peck is a frequent lecturer on issues relating to electronic discovery and is a member of the Sedona Conference and the Sedona Conference Judicial Advisory Board. He was awarded the Champion of Technology Award for 2011 by Law Technology News. Thank you for joining us today, Judge Peck, and please feel free to augment your bio with anything you may think the listeners would find interesting.
Judge Andrew Peck: Well, it’s my pleasure to be here with you and I do find it interesting and perhaps not coincidental that the program is called Digital Detectives, and I am a member of the Baker Street Irregulars and other Sherlock Holmes organizations. And also a member and former board member of the Mystery Writers of America, the group that issues the Edgar Awards each year. In fact, they happened last week for the best mysteries in various categories of the year.
Sharon D. Nelson: That’s wonderful, so let’s start out because the game’s afoot.
Judge Andrew Peck: And unfortunately, too many lawyers are not finding e-discovery to be elementary.
Sharon D. Nelson: Boy, isn’t that the truth. But as we start out, there really is some good news about the proposed federal rules amendments. Would you share that with our listeners?
Judge Andrew Peck: Absolutely. The process has been a long road, but on April 29th the Supreme Court transmitted the proposed rules amendments to Congress. If Congress does not act, the amendments will become effective December 1 of this year, and historically Congress has not acted. And I suppose it’s safe to say, from either political party’s point of view, that not much is getting done in Congress nowadays. So I would say it’s almost 100% certain we will have these rules on December 1 of this year.
John W. Simek: Well that’s great. There was a time when we had predictive coding, computer assisted review, technology assisted review, and a bunch of other terms. How did we come to settle on TAR and can you explain briefly what technology-assisted review is since we suspect that a lot of our listeners really don’t even know?
Judge Andrew Peck: Sure. TAR became the prevalent term in part because a vendor, in the very beginning days of predictive coding, claimed to have every sort of intellectual property right that it could in the technology and the term. They were not only saying that their process was patented, but that somehow they had invented the term “predictive coding” and only they could use it for their product. And because of that, those of us talking about it had to come up with other terms. TAR seems to be the most accepted term at the moment, but all three of them are used somewhat interchangeably. Technology-assisted review is a combination of humans and technology, so we cannot forget the human aspect. Essentially, the human reviewers train the technology, and the technology then uses what it has learned to go through the potentially millions of documents of ESI – electronically stored information, emails or otherwise. Probably the best analogy is for those who listen to music or those who shop on Amazon: you make a purchase or two and the next thing you know, those services are recommending additional purchases. You bought this Sherlock Holmes book, you might enjoy these Agatha Christie books, and then maybe you make a further purchase and they realize you’re not as interested in the softer crime mysteries but prefer the private detective novel, and so they’ll make recommendations in that area. Similarly with the way Pandora recommends music: you start listening to Billy Joel and they may recommend Elton John, et cetera. The more the computer is able to see what you like, the more it is able to target that to you, and that is essentially how technology-assisted review works. There is a so-called seed set coded by the reviewers that is fed back into the computer. The computer, in the various different models, either will give back the most likely relevant documents and say, did I get it right?
There will be further coding of that and further training in the system. Or perhaps in some of the other models, the computer will spit back out the gray-area documents and say we really can’t tell from what we’ve learned so far how to code those documents, please code some more of these so we can better train the system. And then that process keeps going. And then the result is that ultimately the reviewers may have to review only 10 to 20% of the documents and know that the rest is likely to be non-responsive. And so that’s where the savings comes in at the end.
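Judge Peck’s description of the training loop – reviewers code a seed set, the system ranks the rest, and the grey-area documents come back for more coding – can be sketched in a few lines of illustrative Python. Everything here is a toy assumption: the word-count scoring, the batch size, and the helper names (`train`, `score`, `tar_round`) are invented for illustration, and real TAR tools use far more sophisticated classifiers.

```python
from collections import Counter

def train(labeled):
    """Build a crude model: term frequencies for relevant vs. non-relevant docs."""
    relevant_terms, other_terms = Counter(), Counter()
    for text, is_relevant in labeled:
        (relevant_terms if is_relevant else other_terms).update(text.lower().split())
    return relevant_terms, other_terms

def score(model, text):
    """Crude relevance score: hits on relevant terms minus hits on non-relevant terms."""
    relevant_terms, other_terms = model
    words = text.lower().split()
    return sum(relevant_terms[w] for w in words) - sum(other_terms[w] for w in words)

def tar_round(labeled, unreviewed, batch=2):
    """One training round: rank unreviewed docs by how ambiguous their score is,
    and return the grey-area documents the reviewers should code next."""
    model = train(labeled)
    by_ambiguity = sorted(unreviewed, key=lambda doc: abs(score(model, doc)))
    return by_ambiguity[:batch]
```

Each round’s newly coded documents would be appended to `labeled` and the process repeated until the rankings stabilize, at which point only the top-ranked fraction of the collection needs human review.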
Sharon D. Nelson: Well, that’s a very good use of analogies and certainly a very fundamental and elementary description for our listeners. I know that, to their chagrin, you can’t talk about Da Silva Moore or Rio Tinto because both are still pending cases. However, there’s a quote from Rio Tinto, the March 2015 opinion, that says, “It is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it.” Is it your view that this is really true for all federal courts, and what do you think about the state courts?
Judge Andrew Peck: It is actually true internationally at this point. The most recent decision in this area is the Irish Bank case from the Irish High Court on March 3 of this year. That court, following my Da Silva Moore opinion and other information, approved the use of TAR in Ireland, and approved it even though their system requires the production of all responsive information. And the judge there admitted that TAR is unlikely to find everything, but neither is any other method, and that TAR was better than anything else. In every case that I have seen, federal and state, where the producing, responding party wishes to use predictive coding – TAR, whatever term we’re using – the courts have allowed it. Those are not only federal court cases. In fact, the second earliest decision – the one following my Da Silva Moore – was the Global Aerospace decision out of the circuit court for Loudoun County, Virginia, which approved the use of TAR. Another interesting case from a state court was the Delaware Chancery decision in the EORHB case, known as the Hooters case, where Vice Chancellor Laster listened to some 60-odd pages of oral argument on cross motions to dismiss. At the end of that he ruled on those motions and, sua sponte, told the parties – he suggested, indeed was mandating – that they show cause to him if they were not going to use TAR. It turned out that one of the parties responded several months later that it would indeed use TAR, while the other party said it did not think it had enough documents to make TAR cost-effective. I guess the tax court is a federal court, but in one of the more interesting recent decisions, in September of 2014 in the Dynamo Holdings case against the Commissioner of Internal Revenue, the tax judge said it’s sort of strange that you’re asking me to approve the use of TAR, because in the old paper days nobody would have asked me what reviewers should be used and how they should be trained and all of that.
But he then went on to say that since this was the first time the issue had come up in the tax court, he would indeed rule on it, and he approved the use of TAR in that case. The only cases where TAR was not allowed were where a party stipulated to keywords and then manual review, and then midstream tried to change to TAR; that was disallowed in one case, although in a second case with exactly the same procedural posture, the court allowed that “switching horses midstream,” as that judge called it. Other than that, every time the responding party asked the court, or the other side asked the court to stop the responding party from using TAR, the court allowed it to be used. In the few cases where the requesting party tried to force the responding party to use TAR, the courts have said no, they’re not going to do that. The standard is reasonableness, not perfection. But as a sort of footnote to that, in the three cases I’ve put into that bucket, the producing party had already spent over a million dollars using either keywords, or a hybrid of keywords first to whittle the production down and then TAR. The courts said enough money has been spent, we’re not going to redo it. What would happen down the road if a requesting party tries, at day one, before the responding party has spent any money, to force the use of TAR remains to be seen. Nowadays, probably not. Five years from now, that may well be something that the court either does – forcing its use as the most economical means of production – or at least says: do what you want, but don’t come running to me for the costs of manual review, or keywords and manual review, if you’re not going to use the more efficient method.
John W. Simek: You spoke earlier about the significance of training a TAR system, but one thing that we hear quite a bit is that there really isn’t a good way to ensure that the training of the TAR tool was done appropriately. Some people complain about the effectiveness and say that you should be using subject matter experts, and others say that review teams can be used just as effectively. Could you talk a little bit about some of those complaints and how you think they can be addressed?
Judge Andrew Peck: Sure. I think that in the infancy of TAR, there was much more of a need for a partner-level or senior-associate-level, so-called subject matter expert, and either one or no more than two reviewers, because if the TAR tool is getting inconsistent training, it makes it harder for it to stabilize. I think where we are now, the technology has improved, and it can be a subject matter expert or it can be review teams, as long as there is a clear knowledge of what the scope of the case is, and as long as the training is sufficient – meaning there are enough rounds to stabilize the system, or, in continuous active learning, where the training is really part of that continuous process, it is less important. Certainly, I still believe that the Sedona Cooperation Proclamation applies, and where possible the parties should be cooperating with each other in this area. But if cooperation does not or cannot occur, one thing is certain: we should not be holding TAR to a different and higher standard than we hold keywords or manual review. To do that discourages parties from using the most effective, most cost-efficient method of analysis. One can look at the end result and determine whether the training was effective that way – the typical gap analysis. If certain key documents appear to be missing, or the primary person involved in a matter is suddenly radio silent for a month, you know there’s probably a problem there. You can look at statistics at the end for recall and precision and see how good a job was done. So cooperation is good, but if the parties believe the training is work product, or are otherwise unable to cooperate with each other, there are other methods both to train the system and to determine at the end that the results were more than sufficient.
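The recall and precision statistics Judge Peck mentions are simple to compute once a sample of the production has been checked against ground truth. A minimal sketch follows; the function name and the set-based inputs are illustrative assumptions, not any particular vendor’s metric.

```python
def recall_precision(retrieved, relevant):
    """Recall: fraction of truly relevant documents the review actually found.
    Precision: fraction of retrieved documents that were actually relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision
```

For example, a review that produced documents 1–4 when the truly relevant set was {1, 2, 5} would show recall of about 0.67 and precision of 0.5 – numbers a requesting party could use to judge whether the training was effective without ever seeing the seed set itself.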
John W. Simek: Well, before we move onto the next segment, let’s take a quick commercial break.
John W. Simek: This is normally the spot in our show where we hear words from our sponsors. This potentially represents a unique opportunity for you. Digital Detectives is seeking sponsors. You can hear your advertisement right here. If you’re interested, contact the team at Legal Talk Network at [email protected].
Sharon D. Nelson: Welcome back to Digital Detectives on the Legal Talk Network. Today our topic is Judge Peck on TAR, technology-assisted review. Our guest is Judge Andrew Peck, who was appointed as U.S. Magistrate Judge for the Southern District of New York in 1995 and served as Chief Magistrate Judge in 2004-2006. Judge Peck is a frequent lecturer on issues relating to electronic discovery and is a member of the Sedona Conference and the Sedona Conference Judicial Advisory Board. Judge Peck, when you or one of the parties in your cases is using technology-assisted review, what do you think is the best way to come up with the seed set used for training? Should the set be randomly selected? We were talking about it, and that didn’t seem like the best method to us. Don’t we need some human expertise in there? And wouldn’t it be helpful to have conversations with the primary known custodians?
Judge Andrew Peck: I think you’re absolutely correct. Randomness may be necessary to make sure you’re not missing something you don’t know about – not so much for the seed set, but for the control set, so one can see the likely richness of the collection. But since the richness of collections for document production is often in the single digits, it’s not a very useful method to find only one out of every hundred documents relevant and responsive when training the system. So a judgmental approach to the seed set is probably the best approach, and indeed it’s one approach where you can get that sort of transparency or cooperation with no downside. Ask the other side what keywords they would like you to run to find responsive documents. That doesn’t mean you eliminate documents using those keywords, and it doesn’t mean you find every responsive document with those keywords. But it means you use them to pick some of the documents for the seed set. In addition, probably the only way you’re likely to know the acronyms or abbreviations used by the document custodians is to talk to them. If you don’t know that a product that’s now commercially available on the market was developed as Project Red or Code Red or anything like that, you’re not going to find those documents. So good old-fashioned lawyering is very useful in coming up with the seed set.
John W. Simek: Judge Peck, I’d like to talk a little bit about those solo, small, and mid-sized law firms out there that seem pretty intimidated by TAR. They feel it’s still pretty expensive, they don’t have the expertise to do the training and would have to go out and hire experts, et cetera; the costs are certainly a concern there. And since most lawsuits are small cases, what are your thoughts about when a case is big enough to warrant TAR?
Judge Andrew Peck: There are some statistics that show that if you have 50,000 or more emails, it’s an appropriate case for TAR. You have to balance the cost of the vendor for TAR against how much it’s going to cost if you use keywords and manual review – or worse, manual review with no screening – and are paying, whether it’s contract attorneys or associates at the firm, X hundred dollars per hour to review documents. At almost any level of 50,000 documents and above, you’re going to be spending more on that sort of review than you would spending the money to use TAR, and it’s not that complicated.
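The back-of-the-envelope balance Judge Peck describes is easy to work through. All the figures below are illustrative assumptions – the review speed, hourly rate, vendor fee, and the review fraction (drawn from the 10–20% range mentioned earlier in the episode) are not actual market numbers.

```python
def manual_review_cost(num_docs, docs_per_hour=50, rate_per_hour=200):
    """Cost of eyes-on review of every document (assumed speed and hourly rate)."""
    return num_docs / docs_per_hour * rate_per_hour

def tar_review_cost(num_docs, vendor_fee=25_000, review_fraction=0.15,
                    docs_per_hour=50, rate_per_hour=200):
    """Assumed vendor fee plus human review of only the TAR-ranked fraction."""
    docs_reviewed = num_docs * review_fraction
    return vendor_fee + docs_reviewed / docs_per_hour * rate_per_hour
```

Under these assumptions, a 50,000-document matter costs $200,000 to review manually but $55,000 with TAR, which is why the 50,000-document threshold is where the vendor fee starts to pay for itself.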
Sharon D. Nelson: Well, finally, we’re going to go a little bit off topic here, because I know that Rule 502(d) and lawyers’ failure to use it is a pet peeve of yours. So can you explain to us the rule, why you think lawyers fail to use it, and why it’s such a pet peeve of yours?
Judge Andrew Peck: Absolutely. It’s Rule 502 of the Federal Rules of Evidence, and it may be that because so few cases go to trial, lawyers don’t bother reading the rules of evidence anymore. But 502 – which, because it dealt with privilege, had to actually be enacted by Congress – was enacted in September 2008. And in what one of my fellow judges has facetiously come to call the “Peck Poll,” at almost every conference I speak at I ask lawyers how many of them – or, if they’re in-house, how many of them instruct their outside counsel to – get a Rule 502(d) order. Now 502(d), which I call the “get out of jail free card,” says that a federal court may order that the privilege or protection is not waived by disclosure connected with the litigation pending before the court, in which event the disclosure is also not a waiver in any other federal or state proceeding. And that applies even when there are different parties involved. You’re sued by party A in federal court, then there’s another lawsuit in state court where party B sues you; a 502(d) order is a non-waiver for all of that. Now, when the advisory committee proposed 502 to Congress, the thought was that it would be a major cost-saver because lawyers wouldn’t have to do as careful a privilege review. That really hasn’t come to pass, because lawyers still have an ethical obligation to protect the client’s privilege, and once you give the material over to the other side, we do not have the Men in Black neuralyzer to wipe it out of their memory. But 502(d) ensures that, without having to litigate whether you were careful enough, whether you took enough steps to protect the privilege, or whether you asked quickly enough for the return of the privileged material that slipped through, 502(d) is just: I don’t want it to be a waiver, give it back – and the other side has to do it.
To me, there is absolutely no downside to getting a 502(d) order, because in any case – whether in the paper days that I lived through as a lawyer, or with the much greater volume of ESI that we have today – I would guarantee that one or more privileged documents or emails are going to slip through. The only concern lawyers have explained to me, other than “gee, I didn’t know there was such a rule,” is that if we ask for this order and we’re in front of the neanderthal judge, he or she may say, great, I’ve given you this, now produce all your documents next week, without your having time to do a careful privilege review. I think that is clearly not allowed or called for under Rule 502; it would be error for a judge to do that. But I’ve also come up with a suggestion. I’ve got a two-paragraph 502(d) order that I use, and it is posted on the Southern District of New York website under my rules. Paragraph 1 is the simple paragraph giving the maximum protection allowed by 502(d). And paragraph 2 is to alleviate lawyers’ fears about that neanderthal judge. It says, “Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information, including metadata, for relevance, responsiveness, and/or segregation of privileged and/or protected information before production.” So if you don’t know your judge and you’re afraid the judge may shorten your review period, either don’t ask for the 502(d) order until after the scheduling order, and/or use the two paragraphs that I have suggested. I have said several times before that it is malpractice not to consider getting a 502(d) order, and frankly, it may well be malpractice, after considering it, not to get a 502(d) order. No downside, get out of jail free card – use 502(d).
Sharon D. Nelson: I think everybody who listens to this podcast knows that if they’re going to litigate in your court, they had better get that 502(d) order.
Judge Andrew Peck: Even for those who didn’t or don’t listen to this podcast, I generally offer the 502(d) order to parties at the initial scheduling conference. I don’t force it down their throats, I don’t issue it sua sponte, but I make sure that they really know that it is available and that I recommend it. Indeed, one of the changes in the new proposed rules – the December 1, 2015 rules – is a change to Rule 26(f) and Rule 16(b), the scheduling conference rules, to explicitly call the parties’ attention to Federal Rule of Evidence 502 and instruct them to discuss whether they want a 502 order. So hopefully parties will at least read that civil procedure rule, even if they’re not reading the evidence rules.
Sharon D. Nelson: Well, we certainly want to thank you for joining us today, Judge Peck. We also enjoyed serving with you on that faculty – we seemed to share a lot of quality elevator time together – and it really is nice to have a guest who’s such an expert on TAR. And of course, you are our very first Baker Street Irregular guest, so thank you very much for sharing all of your knowledge with us.
Judge Andrew Peck: My pleasure, thank you.
John W. Simek: Well, that does it for this edition of Digital Detectives. Remember, you can subscribe to all of the editions of this podcast at LegalTalkNetwork.com or on iTunes. If you enjoyed this podcast, please review us on iTunes.
Sharon D. Nelson: And you can find out more about Sensei’s digital forensics, technology and security services at www.senseient.com. We’ll see you next time on Digital Detectives.
Advertiser: Thanks for listening to Digital Detectives on the Legal Talk Network. Check out some of our other podcasts on LegalTalkNetwork.com and in iTunes.