Maura R. Grossman, J.D., Ph.D., is a research professor in the School of Computer Science at the University of Waterloo.
Daniel W. Linna Jr. has a joint appointment at Northwestern Pritzker School of Law and McCormick School of Engineering.
Published: September 30, 2020
Podcast: Law Technology Now
Category: e-Discovery, Legal Technology
Research Prof. Maura Grossman has long studied the effectiveness of machine learning and its implications for the delivery of legal services. She learned early on that machines were better than lawyers at many tasks, especially in the e-discovery process. But she recognized right away that to adopt and implement new technologies, lawyers needed to see proof that the new tools worked.
In this episode, host Dan Linna talks to Grossman about what questions lawyers should be asking when evaluating tools purporting to use artificial intelligence to solve problems with legal services delivery.
Grossman tells Linna that until there’s some sort of consumer consortium for AI, something like a Good Housekeeping Seal of Approval or Consumer Reports for algorithms, lawyers will need to be smart about evaluating the tools entering the market.
She and Linna discussed eight questions that should guide a lawyer’s evaluation of AI tools.
Grossman says lawyers should be asking pointed questions about:
- what problem they are actually trying to solve and whether the tool is fit for that job;
- what the vendor really means when it says its software uses artificial intelligence or machine learning;
- how much their data will have to be cleaned before the software can use it;
- how much data and training are needed to use the software effectively;
- what algorithms and assumptions the software relies on;
- what resources are needed to implement the software successfully;
- what tools are needed to interpret and visualize the machine learning model, and whether they are included; and
- how the tool has been validated for its intended purpose and how reliable it is.
On that last point, Grossman says the buyer doesn’t want to be a beta tester for the AI tool.
Professor Maura R. Grossman is a research professor in the School of Computer Science at the University of Waterloo, an adjunct professor at Osgoode Hall Law School of York University, and principal at Maura Grossman Law.
Law Technology Now
Good Housekeeping Seal of Approval for AI
10/10/2020
[Music]
Dan Linna: Hello, this is Dan Linna. Welcome to Law Technology Now on the Legal Talk Network. My guest today is Professor Maura Grossman, Research Professor in the School of Computer Science at the University of Waterloo. Maura is also an adjunct professor at Osgoode Hall Law School and an affiliate faculty member at the Vector Institute, both in Toronto, as well as an e-discovery attorney and consultant in Buffalo, New York.
Previously, Maura was of counsel at Wachtell, Lipton, Rosen & Katz for 17 years. Maura is well-known and influential as an e-discovery expert, and has deep expertise more generally in the use of artificial intelligence for legal services. Maura, welcome to the show.
Maura Grossman: Thanks, Dan. I’m delighted to be here with you and your listeners.
Dan Linna: Well, thank you. Before we get started, I want to thank our sponsors. Thanks to Acumass. Patent and trademark renewal payments made easy. Find out how acumass.com can take the stress out of annuities and save you money on your patent validations today.
Thanks as well to our sponsor, Logikcull. Instant discovery software for modern legal teams. Logikcull offers perfectly predictable pricing at just $250 per matter per month. Create your free account anytime at logikcull.com/ltn. That’s logikcull.com/ltn.
So, Maura, I’d really like to focus our conversation today on evaluating Artificial Intelligence and other technologies for legal services specifically, and you’re well-known for your work evaluating technology-assisted review for e-discovery. Can you just tell us a little bit about your background and how you got started doing that work?
Maura Grossman: Sure. I have, from an earlier life, a PhD in clinical psychology, so I was already familiar with research methods and statistics, a little bit of statistics; I wouldn’t call myself an expert. And I was working at a law firm in New York and we were drowning in data. We were working on very large matters like the World Trade Center litigation, matters where we just had too much data for our associates to be able to review. At Wachtell, it was not a pyramid structure. We had one associate per partner. So, when we got heavy document discovery matters, the firm was very hesitant to outsource to legal service providers and contract attorneys. So, we tried to do all of that ourselves. And I was, at the time, overseeing all of the electronic discovery for the firm.
My mentor at the firm said, “This is just an intractable problem and we need you to help solve it.” So, I went out into the world and started to meet with computer scientists and other people in that area. I started going to conferences with what you would call “the brainiacs.” And I got involved in the National Institute of Standards and Technology’s TREC program, the Text Retrieval Conference. There, they do experiments with academics, government people, and industry folks, and they all come together and try to come up with the best methods for different kinds of search. And my good friend, Jason Baron, had started, the year before I think it was, a legal track trying to figure out how to do search better. And so, I got involved and I started to see that machine learning was better than attorneys at doing review.
And so, I went back to my firm and said, “You know, we really ought to start using machine learning.” I had met someone at TREC named Gordon Cormack, a professor at Waterloo, who had taken one of the best spam filters in the world and tweaked it so that instead of spam/not spam, it could do relevant/not relevant. And then you’re facing this issue of people thinking you’re on opioids or something like that, because you’re saying that a computer can do this better. And I got into evaluation by saying, “The only way I’m going to convince people is to show them with hard data that these tools are actually better than the standard process.” So, I did a paper in 2011 where we did exactly that, and that started me on my path of seeing the importance of evaluation: not just developing the technology, but being able to show it’s actually better than standard practice.
Dan Linna: Well, that’s a really wonderful introduction to our discussion today. I think we’re in the same place in a lot of ways in thinking about applying AI to other areas of legal practice, but we haven’t had that work done in a rigorous fashion the way that you’ve done it in the e-discovery area.
(00:05:00)
But before we jump into our conversation, why don’t we just take a little bit of a step back and think about how do we define AI because there’s a lot of hype around AI and even some experts maybe disagree sometimes about how we might define or how we might not define it. How do you think we should think about defining AI for our purposes here about thinking about evaluating AI for legal services delivery?
Maura Grossman: So, the term Artificial Intelligence or AI is actually not a new term. It was first used as an umbrella term at a conference at Dartmouth in 1956. And basically what it meant was, computers doing intelligent things performing cognitive tasks that were once thought to be the sole province of humans.
So, it’s not a single technology or function. It’s whatever computers can’t do until they can, and then we just call it software after that. I think back to when I was at Wachtell: when you first wanted to compare two documents, you’d send them down to the word processing department, and somebody would take a red pen, and if something was removed from the document, they’d cross it out, and if it was new, they’d underline it, and send it back up to you. And then, all of a sudden, there was this program called “Compare” that could do the same thing. And, of course, we all thought it was magic and would never work. And then, we got used to it. And then, it’s just software.
So, generally, AI involves either machine learning of some type or natural language processing, or some combination of both.
Dan Linna: So, that’s a great introduction, and you already talked a little bit about how AI is used for e-discovery. One of the things I would ask about that is that there are different methods used, and they’re part of this debate about, “Well, what is AI?” You mentioned machine learning and natural language processing. There are some who would say, “Well, that’s what AI is.” And others would say, “Well, none of that even is AI.” But then, there’s this debate about, “Well, if it’s rules-driven, that doesn’t count as AI because of the way it works; we don’t care about how it creates the output.” Some would say, “Well, it’s just about the outputs; it’s not necessarily how you get there.”
In that context, I guess, how might you describe — you talked a little bit about using spam-filter sort of technology for e-discovery. What sorts of AI tools are we seeing applied to e-discovery?
Maura Grossman: So, originally if I wanted to program a computer to be able to distinguish between two things, I would have to actually program in every single step. I remember when I took my first Python course a couple of years ago, when it was me and a bunch of six-year-olds basically, and they were far better than I was. We had to develop a point-of-sale program. In other words, like a basket where you buy all your things: you put them in the basket, you take your coupon, you figure out the taxes and the shipping. If you’re going to program that, you have to sit down and figure out exactly what the steps are and write every single step down in the right order. So, you have to figure out, for example, whether the coupon comes off before the tax or after the tax, and so forth. Well, we don’t have to do that anymore today.
When we want to train a machine learning algorithm, we give it data, labeled data. So, if I wanted to teach a computer to distinguish between puppies and kittens, I would take hundreds of pictures of puppies that were labeled puppies and hundreds of pictures of kitties that were labeled kitties, and the algorithm would learn for itself what features distinguish a puppy from a kitty; I wouldn’t have to program those in.
So, basically what supervised machine learning systems do is infer the mathematical function, the rules, from old data, and that helps you make predictions or educated guesses about new data the system hasn’t seen before. So, I would train it for e-discovery on many examples of relevant and not-relevant data, and then you can give it unlabeled data and it’s able to figure out which category each item is most likely to fall into.
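To make that concrete, here is a minimal sketch of the kind of supervised text classification being described, assuming scikit-learn is available. The documents and labels are invented for illustration; this is not the tooling used at TREC or in any particular e-discovery product.

```python
# A minimal sketch of supervised learning for document review.
# The training documents and labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled examples: 1 = relevant, 0 = not relevant.
train_docs = [
    "email discussing the merger agreement and purchase price",
    "memo summarizing due diligence findings for the acquisition",
    "lunch menu for the office cafeteria",
    "reminder about the company picnic",
]
train_labels = [1, 1, 0, 0]

# The model infers a function from the labeled data rather than
# following hand-written rules.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Unlabeled documents: the model estimates which category each one
# most likely falls into.
new_docs = ["draft of the merger term sheet", "parking garage closure notice"]
print(model.predict(new_docs))        # e.g., [1 0]
print(model.predict_proba(new_docs))  # per-class probability estimates
```

The same pattern, with far more data, richer features, and iterative training, underlies the technology-assisted review tools discussed in this conversation.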
Dan Linna: Well, I want to talk a little bit about evaluating some of those methods for e-discovery, and you’ve talked a little bit about the work you did with NIST and the TREC program to actually evaluate and compare human review versus machine review and different methods. But before we get into that, there’s a lot of discussion today about artificial intelligence, about the ethics of AI. And I know I’m actually working on an IEEE work group that you’re a part of too, talking about how, inside of organizations, you govern the use of AI. So, a lot of discussion about high-level topics like fairness, accountability, transparency.
We see the importance of that in the legal space when it’s been talked about using algorithms to predict recidivism, for example. There’s a little less discussion about that when thinking about delivering legal services, although I bet we’ll find over time that there could be biases in certain areas, maybe even in e-discovery applications, where we might find things we’re surprised to see.
(00:10:09)
But what I really wanted to ask is: when we look at these ethical issues from a high-level perspective, what kinds of things do you think we, as lawyers and people in the legal profession, need to be thinking about as far as the ethics of AI being used in law?
Maura Grossman: So, I want to start with the example of bias because I think it’s one of the biggest issues that we see both in the legal area and elsewhere, and I want to take us back to May of 2016 and an article that came out in ProPublica. Imagine the following. Brisha Borden is on her way home from school when she realizes she’s forgotten to pick up her godchild. She’s with her friend, and they see a scooter and a little bicycle in a yard and figure those will get them where they need to go quickly. So, they pick up the bike and the scooter and go down the street a little bit, then realize they’re meant for a six-year-old and let them go. But it’s a little too late, because the woman whose child the stuff belongs to saw them, a neighbor called the police, and the police come by. And I’ll mention that Brisha and her friend are Black. So, she’s brought down to the station and she’s administered an AI recidivism tool called COMPAS, and she scores an eight out of ten in terms of risk of recidivism on that tool.
Vernon Prater is a White male in his 50s and he is in something like a Home Depot and he shoplifts some tools worth about the same $85 as what Brisha took. He’s picked up going through the detector, also arrested, brought down to the same police station, administered the same exam, and he scores a three.
Now, Vernon had a long rap sheet including armed robbery. He had spent some time in jail. Brisha actually had a little truancy when she was a kid, but that was it. And we ask, “Well, how does something like that happen? How does an AI get it wrong?” If we look at some of the questions that COMPAS asks, it asks for home address. Well, home address and ZIP code are highly correlated with race. It asks, “Was one of your parents ever sent to jail?” We know that minorities are sent to jail with a much higher frequency than Whites. “How many of your friends use drugs?” Well, we know, from research, that Blacks are five times as likely to be arrested for drug use as Whites, and so on.
And what was shocking was that, as of 2016, COMPAS was being used for sentencing decisions, not just bail but sentencing, and was being used in many jurisdictions across the U.S., but it really had never been validated by the sentencing commission, the U.S. Sentencing Commission, and most of the testing was undertaken under unknown circumstances, on unknown data, by the same company that sold the instrument.
And what that testing showed, ultimately, was that only 20% of the people who were predicted to commit violent crimes went on to do so. When they looked at the full range of crimes, the tool had about 61% overall accuracy in predicting recidivism, but lo and behold, when you looked at Blacks and Whites, they scored very differently on this test.
So, there are two kinds of errors, as you know: false positives and false negatives. A false positive is when the test says you’re pregnant and you’re not. A false negative is when the test says you’re not pregnant and you are. And it turns out on this particular test, when it made a mistake and said somebody was not going to reoffend when they did, they were invariably White. And when it said that somebody was going to reoffend when they didn’t, they were overwhelmingly Black. So, this overall accuracy number masked the differential distribution of false positives and false negatives for Blacks and Whites. And that’s a real problem, because the Wisconsin Supreme Court later went on to say that it wasn’t a violation of due process to sentence somebody based on this kind of test, even though ProPublica had shown that it incorporated bias.
So, that’s some of the danger of using these kinds of tools in contexts where they haven’t been appropriately, or at least fully, validated in that sense.
(00:15:03)
So, that’s a great concern to me and one of the things that got me involved in Artificial Intelligence and ethical issues.
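To make the point about an overall accuracy figure masking group-level differences concrete, here is a minimal sketch in Python. The records are invented for illustration; they are not the COMPAS or ProPublica data.

```python
# Per-group false positive and false negative rates from invented data.
from collections import defaultdict

# Each record: (group, actually_reoffended, predicted_to_reoffend)
records = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", False, True),
    ("B", False, False), ("B", False, False), ("B", True, False), ("B", True, False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
for group, actual, predicted in records:
    c = counts[group]
    if actual:
        c["pos"] += 1
        if not predicted:
            c["fn"] += 1  # said they would not reoffend, but they did
    else:
        c["neg"] += 1
        if predicted:
            c["fp"] += 1  # said they would reoffend, but they did not

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"group {group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

In this toy data both groups come out with the same overall accuracy, yet group A’s errors are all false positives and group B’s are all false negatives, which is exactly the kind of disparity a single accuracy number can hide.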
Dan Linna: And this is a fairly well-known case and problem. I think in a couple of weeks we could do a whole show just around that one thing. And of course there has been some work done subsequently as well. Sharad Goel at Stanford is someone I think of who has written about this nicely and talked about it.
The underlying statistics are maybe less flawed than people might think. To me, one of the things it draws into question is just the need for greater transparency, and I wonder — I mean, I tend to think that it’s easy to blame the Wisconsin Supreme Court here, but shouldn’t we also not be asking the courts to bail us out? Shouldn’t we be saying, “Well, what can we do as lawyers, and why aren’t we advocating for changes and requiring more transparency, rather than hoping the supreme courts of the states are going to bail us out on these topics?”
Maura Grossman: I think that’s true. As I’ve said to you in the past, I have mixed feelings on this transparency requirement, because you and I both get on airplanes all the time and I don’t know about you, but I haven’t the foggiest clue how lift and all of these things work. But somebody, the FAA, has tested them and certified that they work for their intended purpose and they work reliably, and that makes us comfortable. Same thing with the FDA. We don’t always know the mechanisms, the biological mechanisms, by which many drugs work, but somebody has done testing on them.
So, I do think, in some ways, transparency is an important value, but we do have AI tools, some of which are called deep learning or neural networks, that are just so black-box and so complicated that even the people who developed them can’t explain how they have reached their conclusions. Well, I think for some AI, transparency and explainability are important. For me, the validation and testing piece is the most important.
And you asked about using AI to deliver legal services. Well, if we think back to ABA Model Rule 1.1, Comment 8, which says, and the majority of states have adopted this, that there has to be technological competence on the part of lawyers. They need to know the questions to ask. They need to have some basic understanding of how the technology they’re using works and where the minefields are.
Dan Linna: Yeah. Well, I want to transition to talking about evaluating AI and the questions we need to ask. You wrote a great article with Rees Morrison and I want to walk through some of those things, but just to tie together the last point: one of my thoughts about some of the questions about fairness, accountability, these high-level ethical principles, is that if we spent more time just thinking about effectiveness and really had sound methods, especially for the things we’re doing as lawyers in delivering legal services, we could eliminate or prevent maybe some of the problems we have in this other space. What do you think of that?
Maura Grossman: I tend to agree, and I’m often very surprised that you’ll see lawyers, even in the e-discovery sphere, and I assume this is the case in other areas, who get so blown away by the Star Wars aspects, all the buttons and the knobs and the pretty graphs and tables, that they never ask the questions, “Does it work? How well does it work? What’s it been tested on? What proof do you have?” For me, it’s such a critical element, and as you know as well as I do, so many of the tools out there just don’t have this kind of evaluation.
Dan Linna: Well, that’s a great segue, and that’s where we’re going to focus the rest of this conversation: how do you ask the right questions and how might we evaluate the effectiveness of these tools? But before we continue our interview with Professor Maura Grossman, we’re going to take a quick break to hear a message from our sponsor.
[Music]
Advertiser: Trying to cut costs? You’re not alone. In today’s climate, a five-figure e-discovery bill per month is steep. Don’t pay that. Use Logikcull to reduce expense and control your discovery process. Get started today for only $250 per matter and they’ll waive migration costs from competing platforms. For more information, visit logikcull.com/ltn. That’s logikcull.com/ltn.
Increase productivity and profitability through acumass.com. Acumass provides cost-effective and reliable annuities management while keeping customer satisfaction at the helm of the action. With 40 years of excellence in the field of IP renewals, Acumass understands how quickly annuities can become burdensome for clients who would prefer their focus elsewhere.
(00:20:02)
Contact [email protected] or visit acumass.com to discover how you can benefit from a management solution tailored to your needs.
Dan Linna: And we’re back. Thank you for joining us. We’re with Professor Maura Grossman and we’re talking about evaluating Artificial Intelligence for legal services delivery.
Maura, we spent a fair amount of time talking about your research on evaluating the effectiveness of technology-assisted review for e-discovery. Before we transition to talking about AI tools for legal services delivery more broadly, tell us where we’re at as far as evaluating the effectiveness of AI tools, because I know you’ve done some writing saying that there’s still more work we need to do in evaluating those tools.
Maura Grossman: I would love to see us create some kind of consumer consortium that would award a Good Housekeeping Seal of Approval, or act as a Consumer Reports for algorithms, to evaluate AI tools and put them through their paces.
I’ve been frankly surprised that clients haven’t demanded that; they’re often more focused on cost or relationships rather than that sort of thing. But I think that if we had that, then you could use an approved tool, demonstrate you used it in a proper way, and then perform a light evaluation, as opposed to these very heavy-duty evaluations we’re seeing in e-discovery, which are actually serving as a disincentive to use these tools, because people just don’t want to have to do basically a whole NIST TREC experiment every time they use something. So, I think it slows adoption. I’d much rather see us do that testing upfront.
Dan Linna: Yeah, I think it’s interesting to compare the e-discovery space. I wonder if maybe some of it is that the attorneys using these tools kind of know what the tools can do. It’s a closed system and maybe there’s not enough incentive to force people to do that evaluation. Whereas if we’re talking about tools for reviewing contracts and diligence, things like that, maybe there are more incentives in play there for consumers to say, “Wait a second. I want to know how well that tool works. If I can find a tool that works better, that can help me avoid liability or help me learn more about the company and do better in this deal.” I mean, might we tap into some of those incentives to push the marketplace to do more of that evaluation in those spaces?
Maura Grossman: I definitely think you’re right about disincentives in litigation. It’s an adversarial system. And often, for the person who’s producing the documents, finding more isn’t to your benefit. It usually is harmful. But if you’re doing contract review and you’re trying to get a deal done, I do think the incentives are very different and maybe there’s more opportunity there.
It also depends on the state of the development of the tools. What we found with TREC was that many vendors were unwilling to put their tools to the test. My guess is because they weren’t sure that they would pass and I think people have to see a win in it for themselves. Otherwise, all it is is a risk of being shown to be ineffective.
Dan Linna: Yeah, yeah. Well, let’s talk about this article you wrote with Rees Morrison. I think it’s a great article, ‘Seven Questions Lawyers Should Ask Vendors About Their AI Products’. I’d just like to walk through each of these; tell us why you think it’s important. So, the first one: what do you mean when you say your software uses artificial intelligence or machine learning? Why do you think that’s important?
Maura Grossman: I’m going to step back for a second and pose a question zero, which never appeared in my article.
Dan Linna: Okay, yeah.
Maura Grossman: And the first question I think you should be asking yourself is what problem or pain point am I trying to solve and what would success look like, and is this tool fit for the job in the first place? But often these days, somebody will come into your office and say they have Artificial Intelligence or they’re using machine learning. And what they basically mean is, “I have magic fairy dust.”
So, I think it’s very important for you to understand what their technology is and that you should be able to explain, at least at some level, how it works, because you may be called as a subject matter expert later on and have to explain it. And again, going back to the model rule and the ethical responsibilities of lawyers, you have to really have some comprehension and some basis to assume it is going to do what you want it to do.
Dan Linna: Yeah. This is so great. I think sometimes as lawyers, we’re used to thinking we’re the experts in the room and we’re afraid of asking these questions we don’t know the answers to, but we should ask that. Well, what do you mean use AI? And they should be able to explain it to you in a way that you can actually understand what’s being done. Is it using training data? Is it using rules? Tell me more about this.
(00:25:00)
Maura Grossman: And it doesn’t always actually “use AI,” if it’s pressed hard enough.
Dan Linna: Yes. It’s Mechanical Turk, some person in the background — yeah, sure. Okay, second question. How much will we have to clean our data for it to be used by your software?
Maura Grossman: So, your data is not always in a form that can be easily ingested by the tool, or you may have to do a lot of work to get it ingested. So, for example, in e-discovery, data has to be processed, and that adds cost to the process. So, I think it’s important to know how much work it is going to take to get your data from the format it’s in in your system into the format it needs to be in to be loaded. And you want to run tests ahead of time to check on that migration process, both importing the data in and exporting the data out, because often issues emerge that really slow you down or add extra cost in reformatting.
Dan Linna: And the third question you and Rees posed that should be asked to an AI vendor for legal services. What amount of data and training do we need to use your software effectively?
Maura Grossman: So, lots of these technologies don’t work straight out of the box. Remember, I talked before about the kitties and puppies. Well, if we’re now trying to distinguish jaguars from lions, the software may not work particularly well if it was trained on kitties and puppies. So, you need to understand how much additional training you’re going to have to do to get that system to work. And I think of some of the contract tools that you referenced: well, if they’re trained only on Delaware contracts and you’re in New York or Wisconsin and the law is different than it is in Delaware, you may need to do additional training in order to get that tool to work effectively.
Dan Linna: Sure. It’s been trained exclusively on technology transactions and this is an oil and gas deal. Or yeah, sure, asking those questions and figuring out whether — and that’s actually, when we are talking about bias, that’s a big problem with bias. I remember seeing something about a recidivism tool, I think it was in St. Louis, where the training data actually came from somewhere in Canada. I don’t have to tell you, you’re calling in from Canada: Canada is different than the U.S., and even data from Chicago used to create a tool in St. Louis ought to raise red flags. So, knowing where that training data came from, is it right for the context of the task you want to solve? Okay. So, question four. What algorithms and assumptions does your software rely on?
Maura Grossman: So, often, whether the systems are rule-based or machine learning-based, there are decisions that are made along the way about not only the data, but the features and weightings and things like that, decisions made by developers that can incorporate bias of all different types. So, I think you want to know that in addition to what data has been used. For example, there are algorithms that predict who should be interviewed for a job, but if all of the applicants, or the vast majority, were White, then it’s going to predict White people for the job. So, you need to know what that developer has considered. What are the elements, for example, they are looking for in making that determination of who should get the interview? Did they weight things in a particular way, and so forth? Because, in part, you’re looking for bias.
Dan Linna: The fifth question. What resources will we need to implement your software successfully?
Maura Grossman: So, often, in order to use the software, you may need new stuff. It may not just plug into your system easily. So, there may need to be integration work. How much training of new users is going to be required? How complicated is this tool? Are you going to require a very extensive onboarding process?
I remember there was one tool we started to use at my firm that we had bought and it took — I don’t know if it was nine months or a year to just integrate that tool into the firm’s platforms and I don’t think people took that into account beforehand how long that was going to take.
Dan Linna: Question six that you and Rees proposed. What tools do we need to interpret the machine learning model and to visualize it, and are they included with your software?
Maura Grossman: So, you want to make sure you’re getting everything you need to operate the software and that there aren’t all these add-ons you’re going to have to buy after the fact. There may be additional applications that are needed to generate reports, for example, or graphics and you want to know if these are included or they are external.
(00:30:00)
If you’re going to need custom reports, you need to know that also in advance because that’s additional time and additional cost to get it ready. So, you shouldn’t just assume that just magically what you need is going to pop out of the system.
Dan Linna: All right, and the seventh question. How has your tool been validated for its intended purpose and how reliable is it?
Maura Grossman: So, you don’t want to be the guinea pig or the beta tester for a tool and you want to know how long it’s been around, how often it’s updated, and then the most important is validity and reliability. And people talk a lot about accuracy, but really what’s important is validity and reliability. Validity being does it work for its intended purpose and reliability having more to do with consistency. In the same circumstances, does it give you the same output? And in different circumstances, does it give you different output?
And it’s really important to ask those hard questions about whether the tool has been tested and if it hasn’t, then you need to — actually, I always recommend people test it on their own data anyway even if it has been tested because your data may be very different than the training and testing data that the person selling the tool used.
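Since she recommends testing any tool on your own data, here is a minimal sketch of what that spot-check could look like, assuming you have pulled a sample of your own documents and labeled them yourself. The function name run_vendor_tool and the sample documents are hypothetical stand-ins, not any real product’s API.

```python
# Minimal sketch: checking a tool's output against your own labeled sample.

def run_vendor_tool(doc: str) -> bool:
    """Hypothetical stand-in for the tool's relevant/not-relevant call."""
    return "contract" in doc.lower()

# Your own labeled sample: (document text, true label).
sample = [
    ("Signed contract amendment, exhibit B", True),
    ("Quarterly newsletter for staff", False),
    ("Draft services contract with the vendor", True),
    ("Holiday schedule announcement", False),
]

tp = fp = fn = tn = 0
for doc, is_relevant in sample:
    predicted = run_vendor_tool(doc)
    if predicted and is_relevant:
        tp += 1
    elif predicted and not is_relevant:
        fp += 1
    elif not predicted and is_relevant:
        fn += 1
    else:
        tn += 1

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
print(f"precision {precision:.0%}, recall {recall:.0%} on your own sample")
```

A real validation would use a much larger, randomly drawn sample, and would look at reliability as well, for example by re-running the tool on the same sample and checking that its outputs stay consistent.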
Dan Linna: Well, again, this is a really helpful resource, this article that you wrote with Rees, these seven questions. And then, your question zero, too, about asking the purpose, like what problem are we really trying to solve here, because I think all of us can fall into that trap of, “Wow, that’s a really cool application.” Like, “Okay, wait. What are we really trying to accomplish here?”
This brings us, I think, to a question, though, that there’s been some discussion about. You mentioned the model rules in the U.S., Rule 1.1 Comment 8, competence. There’s a lot of discussion: well, what does that mean? What does that require? And what about these tools as they’re developing?
Generally, we’re seeing a lot of calls for regulation of a lot of these tools. Facial recognition is an obvious one, but just use of AI generally especially as we think about AI being used for consumer facing tools, but even just use by law firms and other customers and other people in the legal space. Do we need some sort of governance, some sort of regulation? What should we be thinking about there?
Maura Grossman: I think there’s a tension between the people who believe in innovation and that any regulation will stifle innovation, and the other side of the coin, where we say that some tools have a significant impact on legal rights or similar rights, and some tools run the risk of treating different groups in society, so Blacks and persons of color and other marginalized people, differentially. Yes, I think that governance becomes more important in that context.
I particularly like Andrew Tutt’s idea of an FDA for algorithms. He wrote a paper in 2017 in the Administrative Law Review where he makes the proposal that those kinds of algorithms, not the ones recommending which movie you want to see, but the ones that truly have impact, should be governed. The other area where I think it may be important is in tools that relate to access to justice, where instead of having a lawyer, you’re going to use a tool that is going to prepare your pleading for you. And I think there are fewer consumer protections in those areas, and that may be another area where we do want some kind of governance. It doesn’t have to be heavyweight or heavy-duty governance, but some kind of accountability.
Dan Linna: Okay. Well, at the same time, we’ve seen the pandemic accelerate some of the use of technologies, including use of technologies by courts. My last guest was Michigan Supreme Court Chief Justice Bridget McCormack; she was talking about using online platforms and she is really excited about the ways in which that can expand due process rights. I mean, we have so many default judgments; people don’t even show up at court. So, we should be thinking about the ways we can use these tools to improve things. We’re seeing jurisdictions, Utah, Arizona, relax some of the restrictions on who can deliver legal services. I think in the direction we’re heading, we’d see more opportunities for startups and other organizations to serve individuals. I mean, what do you think? Do you think that these tools can have a real, serious impact on improving access to justice?
Maura Grossman: Absolutely. I think British Columbia has also done a lot of experimentation in the area of online court systems. And if you look at some of the work they have done, it’s mind-blowing. I think it’s a great improvement. As we know from the statistics, 80% of the people in this country can’t afford legal services. But the risk that comes with that is not the risk that’s most often talked about, that it’s going to put lawyers out of work. The real risk is if these tools aren’t properly vetted and somebody does a will that turns out later to be ineffective; that’s a terrible tragedy.
(00:35:08)
So, we need some kind of accountability if we’re going to use these tools. But I think they show tremendous promise.
Dan Linna: Yeah. I think the interesting thing is that, in a lot of ways, we are not assessing the right baseline, and this goes back to your e-discovery studies when the TAR tools first came out: “Well, they’ll never be as good as what the humans do.” But we found out the humans aren’t actually very good at it. And of course lawyers are making mistakes in some of these other areas as well, so we have to have the right baseline. We can’t expect that these tools are going to be perfect, but absolutely we should validate them. We should have effectiveness testing. We should understand when they work and when they might fail, and we have to understand what the right baseline is: for most people, it’s getting nothing versus a tool that could potentially help them in some way.
So, the last question I wanted to ask you about, Maura, and I kind of teased this a little bit: we look back and our expectations were that TAR tools could not do what humans could, reviewing documents. There’s a lot of debate today about the capabilities of AI for legal services, and there are skeptics who say, “Well, the tech can’t do what the lawyers can do.”
How do we use some of those techniques? Maybe we’re talking about effectiveness testing, but also understanding the problems we’re solving. I do tend to think that sometimes we spend too much time just thinking about doing what we’ve always done a little bit faster and cheaper, and we’re not thinking big enough, but I don’t know. I mean, how can we really make the most of these tools, and what do you think the potential really is for using technology to automate some of the things lawyers do today, but also augment lawyers and serve more people, and create opportunities for lawyers at the same time? I mean, where do you think this is going, and where should we be making it go, proactively pushing the profession and the legal industry?
Maura Grossman: Berkeley Dietvorst and some of his colleagues at UPenn did some really interesting research on what they call “algorithm aversion.” They do studies where they have people come into the lab and they give them the file of a student who’s applying for an MBA and ask them to predict how well the student is going to do. They give them the GRE scores and the grades and the parents’ highest education, and there is an algorithm that does the same thing. The person first makes their prediction and then they see the prediction of the algorithm. And invariably, the algorithm beats the pants off the human; it is much better at crunching this data and making predictions.
And then, the experiment goes as follows. The researchers come back and say, “Okay, on the next 10, we’re going to pay you a reward if you get the prediction right, and you can bet on yourself, or you can bet on the algorithm, or you can bet on a third person who’s next door.” And invariably, after seeing themselves beaten by this algorithm, people put the money on themselves. It’s just a remarkable phenomenon that Dietvorst and his colleagues called “algorithm aversion”: people just have this fear of giving up control. It’s not completely logical. They can see something be more effective. You see the same thing with autonomous vehicles, which have far fewer accidents, but people remember the Google accident and are hesitant. And what Dietvorst found in a later article, which I found pretty amusing, is that if you put a knob on the AI, the knob doesn’t even have to do anything, or it can even make the performance worse, people will be more likely to use that tool, because the human feels like they have more control. So, I think part of what we have to do is figure out how we can put knobs on things.
And the other thing, I think, going back to what you said about incentives: we have to make sure people see it as a win and that it’s not a threat to their identity or their employment, but that they see it as augmenting and helping them. People will not adopt technology that they see as a personal threat. So, I think some of the psychological and social incentive structures need to be looked at in addition to just the technology.
Dan Linna: Those are great insights. Yeah, sometimes we can fall into this trap of browbeating people, like, “Oh, they’re greedy. They don’t want to change.” We’re going to have to get people to change and embrace this brighter future we could have for the profession, for lawyers, for society generally, for serving more people who have needs. So, I really like those ideas.
Well, thank you so much, Maura, for joining us. It’s been a great conversation. You have so much knowledge in this space. We could’ve made this a five-part episode and had so much more to talk about.
(00:40:01)
But before we go, can you please tell our listeners how to follow your work and get in contact with you, whether it’s Twitter or LinkedIn? Where should they go and look?
Maura Grossman: Sure. So, probably the easiest way to find my work is to google Grossman and Cormack and technology-assisted review, or TAR, and that will lead you to a lot of our papers and work. Or, you can email me at [email protected], and I’m more than happy to get inquiries and send out papers and things like that. And, Dan, thanks so much for having me.
Dan Linna: Well, thank you very much, Maura. It’s been a pleasure.
This has been another edition of Law Technology Now on the Legal Talk Network. Please take a minute to subscribe and rate us in Apple Podcasts and Google Podcasts. You can find me on Twitter at Dan Linna. Please follow me, retweet links to this episode, and join the legal innovation and technology discussion online, and join us next time for another edition of Law Technology Now. I’m Dan Linna signing off.
[Music]
Outro: If you’d like more information about what you’ve heard today, please visit legaltalknetwork.com. Subscribe via iTunes and RSS. Find us on Twitter and Facebook or download our free Legal Talk Network app in Google Play and iTunes. The views expressed by the participants of this program are their own, and do not represent the views of, nor are they endorsed by Legal Talk Network, its officers, directors, employees, agents, representatives, shareholders, and subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer.
00:41:42
Law Technology Now
Law Technology Now features key players in the legal technology community discussing the top trends and developments in the legal technology world.