Ed Walters is the Chief Strategy Officer of vLex and the founder of Fastcase, a legal publishing...
Zack Glaser is the Lawyerist Legal Tech Advisor. He’s an attorney, technologist, and blogger.
Jennifer Whigham is the Community Director at Lawyerist.
Published: June 29, 2023
Podcast: Lawyerist Podcast
Category: Legal Technology
Zack sits down with Ed Walters, Chief Strategy Officer of vLex and founder of Fastcase, to discuss AI’s role in legal research. Delve into why it’s important to demystify AI and debunk misconceptions about “robot lawyers.” Learn about using AI in discovery, the importance of choosing the right tools, and the limitations of language models like ChatGPT. And explore how the industry is working to create dependable tools that can be verified by legal professionals, while maintaining human judgment in the legal process.
Links from the episode:
Special thanks to our sponsor Lawyerist.
Announcer:
Welcome to The Lawyerist Podcast, a series of discussions with entrepreneurs and innovators about building a successful law practice in today’s challenging and constantly changing legal market. Lawyerist supports attorneys building client-centered and future-oriented small law firms through community, content, and coaching, both online and through the Lawyerist Lab. And now, from the team that brought you The Small Firm Roadmap, here are your podcast hosts.
Zack Glaser (00:35):
Hey y’all. I’m Zack.
Jennifer Whigham (00:36):
And I’m Jennifer Whigham. And this is episode 452 of the Lawyerist Podcast, part of the Legal Talk Network. Today, Zack Glaser interviews Ed Walters about data and privacy in the AI world.
Zack Glaser (00:49):
Today’s podcast is brought to you by Posh Virtual Receptionists, Clio, & LawPay. We wouldn’t be able to do this show without their support, so stay tuned and we’ll tell you more about them later on.
Jennifer Whigham (00:58):
Hey, Zack.
Zack Glaser (00:59):
Hey, Jennifer, if that is you,
Jennifer Whigham (01:01):
You never know. I never show my face. Nobody’s ever seen my face in the history of Lawyerist, so it’s kind of a thing.
Zack Glaser (01:08):
You could be an AI bot at this point.
Jennifer Whigham (01:11):
How can you prove it? I mean, there’s literally a way to prove it called a Turing test,
Zack Glaser (01:16):
Oddly, yes. Oh, we are going to get into that in this interview with Ed Walters. Apparently AI is passing the heck out of Turing tests now.
Jennifer Whigham (01:26):
Oh no.
Zack Glaser (01:27):
And we won’t get into too much because he explains it well, but
Jennifer Whigham (01:31):
We’ve seen Terminator and Terminator 2 and Terminator Revisited.
Zack Glaser (01:36):
But I mean, really, how bad of an idea could it be if you don’t come back personally to stop yourself from doing it?
Jennifer Whigham (01:44):
That’s true. Yeah. I mean, there definitely was a hero arc in that,
Zack Glaser (01:50):
Right? Yeah, that’s kind of what I go with. If somebody’s not coming back from the future to stop me from doing something,
Jennifer Whigham (01:56):
Then really, I mean, it’s a privilege to have your future self come back. Not everybody gets that, right? Check your time-traveler privilege. Okay. But anyways, a question I wanted to ask you: AI is everywhere, we’re all hearing about it. There was an article in the latest Bench & Bar about lawyers being scared of AI. There’s a lot going on, but one thing I think is that there are these other ways to use AI in law firms or in the office that we’re just not thinking about. And I have a feeling you know some of these ways, so I was going to ask you: what are they?
Zack Glaser (02:33):
I know some ways, and I think the big thing here is understanding what types of tools are out there and then being creative with them. Ed and I get into the research aspects: how do you use artificial intelligence or large language models or something like that to help you practice law, to actually do the practice of law? What’s also interesting, as if that’s not interesting, is how people are using AI to help them run their law offices, to do the things in their office. And so, understanding what types of tools are out there, people can get really creative with it. One of the things a lot of people are doing is using a generative AI, something like ChatGPT, to create marketing efforts.
(03:32):
And it can be done in multiple different ways, because generative AI is figuring out what is a likely answer to something, what sounds like an answer to something. And quite frankly, marketing is not a bad place to wield that power. It’s flexible, especially if you’re talking about small things. What would be a good caption for this article? What would be a good abstract for this article? What would be a good article, a blog post, for me to write? Set a calendar, a cadence, for me to write on. Because it’s going out into the wild world of the worldwide web, gathering a ton of information and bringing back what is a likely answer. It’s not even what’s the most likely answer, but what is a likely answer.
(04:26):
And frankly, yeah, there are some times where that’s okay. We can also get into, you know, creating your logo using some of these artificial intelligence tools, saying, I need a logo based on this. You have to have some parameters. I like to use it either for a first draft of things or to create outlines for things. I do a lot of writing here at Lawyerist, as do many of us, and one of the most difficult things to do when you’re writing, and I think a lot of attorneys have this feeling as well, is to just create the structure, create the outline of what you’re talking about. And quite frankly, ChatGPT, specifically GPT-4 and GPT-3, and even Bard, the Google product, do a good job of creating that basic skeleton of something. And then if you look at it, you can go, I don’t really like exactly how that’s going, but it gets you close enough.
(05:28):
And then writing an article around that is relatively easy. So, having it create outlines for blog posts. You can also have it create questions that you ask new hires, things that are creative, that don’t require massive exactness. I think that’s part of the problem with creating a brief using something like ChatGPT, and there are many problems with that, none of which are that ChatGPT worked incorrectly, and we get into that with Ed: you’re not using the right tool, because a brief isn’t really about generating something whole cloth. It’s about doing research. And research is kind of difficult to do that way. But there are a lot of things that we do in our offices that allow us to use an algorithm.
(06:26):
Is it something that we would use an algorithm for? An algorithm is really just a process for doing something. So, is it prime for that? You know, you can translate things with AI. Now, my issue with something like translation is, do you have the ability to check its work? If I take something and translate it into Spanish, I have a reasonable understanding of Spanish, so I can check its work. And I think that’s the big thing here. So yeah, there are a lot of things that you can do with AI that are safe, and I think the biggest thing is making sure that we’re taking responsibility for what is created by these tools. Because they are just tools, right? We’re not even giving them the ability to have copyright, right?
Jennifer Whigham (07:20):
There’s still a human element that needs to be there.
Zack Glaser (07:23):
There is still a human element because we aren’t at a place where these things are going to take over the world just yet.
Jennifer Whigham (07:29):
No Terminator?
Zack Glaser (07:31):
Not yet. Not yet. But who’s to say if it’s coming or not?
Jennifer Whigham (07:36):
Well, let’s hear more with your conversation with Ed.
Ed Walters (07:43):
Hi, I’m Ed Walters. I am the Chief Strategy Officer of vLex and the founder of Fastcase, the legal research service that is available to most members of bar associations for free. I was a lawyer at Covington & Burling before that, and I’m very excited to be here today.
Zack Glaser (08:01):
Ed, thanks for being with me. Excited to have you here. Every time somebody says Fastcase is likely available to you through your bar association, I’m like, yeah, probably 12 of them. And no, it’s a lot. Most of them. It probably is available to you through your bar association.
Ed Walters (08:17):
47 out of the 50.
Zack Glaser (08:19):
Yeah,
Ed Walters (08:20):
Yeah. The best bar associations in the country.
Zack Glaser (08:23):
So I wanted to make sure that people know that’s not just a throwaway sentence: it likely is available to you through your bar association.
Ed Walters (08:30):
No, there are more than 1.2 million lawyers who have subscription access to Fastcase, and more than 3 million lawyers around the world who have access to vLex and Fastcase together.
Zack Glaser (08:43):
Well, fantastic. That’s why we have you on today: you are very knowledgeable in the legal research, legal data, and, specifically today, artificial intelligence space. A lot of this is just manipulation of data, and I say manipulation not in a bad way, just manipulating. So I wanted to have you on to talk about some of the issues that we’re having with artificial intelligence. There are some big headlines being made related to people doing research with AI tools, specifically ChatGPT. So I wanted to bring you on for that. But before we do that, I want to introduce vLex. What is vLex, for people?
Ed Walters (09:25):
So Fastcase merged with vLex at the end of March, beginning of April 2023. vLex is like a peer company; it’s a merger of equals. We were both working to democratize the law and make legal research smarter: Fastcase in the US, vLex starting in Spain but then around the world. vLex had amazing breadth. They had more than a hundred countries’ law online and a huge subscriber base. Fastcase was very deep in the US, where vLex didn’t really have much of a presence. When you combine the two, like I said before, it’s more than 3 million lawyers subscribed around the world, more than a billion, 1 billion with a B, documents in the library, and the law of more than a hundred countries. So I think, I hope, it’s not an exaggeration to say it is, in breadth and scope, the largest online law library and legal research platform ever built, with by far the most subscribers of any legal research platform ever created. It’s a really cool opportunity to globalize the mission that we started Fastcase with 23 years ago: to democratize the law. That mission is not just about the US; it’s about the whole world. The same problems that exist here exist everywhere.
Zack Glaser (10:43):
Right. Well, I think that’s important to note before we get started, because a lot of this discussion around artificial intelligence and what it’s doing, especially in relation to research, is premised on having good data to connect with. And so I want to make sure that we’re talking about a resource that has very good and very broad data. But as we dive in, I’d also like to make some definitions here. AI is a big, big word that means, quite honestly, very little, but also a ton at the same time. When we’re talking about AI in the legal space, what is that to you? What are you thinking about there, if we can focus in on some stuff here?
Ed Walters (11:29):
That’s a very sophisticated question. The term artificial intelligence goes back to a 1956 conference at Dartmouth, and the idea was that you could create with software the same outputs that human thought or human intellectual labor could, but by a different means. Maybe the same inputs, different process, but then similar or better outputs. And to demystify this a little bit, we use it all the time. Of course we use artificial intelligence when we use spellcheck. So lawyers use artificial intelligence every day. When you type a text to somebody and your phone changes the word to ducking, you’re using artificial intelligence. In some ways calculators are artificial intelligence. So yeah, I think when we’re talking about this in law, I want to demystify it a little bit. I am in favor of using artificial intelligence, but I’m against magical thinking. So I’m not into the concept of robot lawyers or something like that.
(12:32):
If someone tells you they built a robot lawyer, you should run. It doesn’t exist. It shouldn’t exist. No more than a calculator is a robot mathematician. There are still mathematicians in the age of calculators, and there are still lawyers in the age of artificial intelligence tools for law. The military uses artificial intelligence and robots for things that are dull, dirty, or dangerous. And I hope that’s the aspiration for how we use artificial intelligence in law. It’s the reason, I think, that the first major application for AI was discovery. The worst part of practicing law immediately went to artificial intelligence. And there are more lawyers in the AI discovery world than there were before, right? It’s not like lawyers lost their jobs, or lawyers who primarily did document review no longer exist. It’s just that the worst part of our job, we have automated.
Zack Glaser (13:30):
That’s a funny thing to tease out just slightly, because not only did we automate the worst part of our job, we’ve been able to do a better job, and that area of the law has kind of grown. We do more e-discovery now. We are willing to either send over more documents or accept more documents now and look at things. So it hasn’t just said, hey, we’re going to destroy this area. We’ve made it better, and frankly, in some ways even broader. So yeah, like you said, we’ve got more attorneys working in that space now than we really ever have. So artificial intelligence, obviously we have been using it in a lot of different places. I think Apple has decided not to change ducking anymore, so they’re even adjusting artificial intelligence as we speak. But a place that I think a lot of attorneys think is ripe for artificial intelligence is this research area.
(14:35):
And we don’t really stop at research. We start to think, if something could write my briefs for me, that would be fantastic. And that’s like three steps, really, that we’re asking artificial intelligence to do. Obviously we have this very recent issue that we won’t go into too much, because it’s just not worth it: the attorney who used ChatGPT to create a brief and then got in trouble for many different reasons. But I think that’s a good entree into this. Why did that person get in trouble? There are many mistakes there, but what was the mistake, really, if you had to say what it was?
Ed Walters (15:16):
Well, I think the sexy answer, the one everyone wants to talk about, is ChatGPT. But it’s worth pointing out here, as Bob Ambrogi and others have, this is a lawyer who lied on multiple occasions, who had some kind of routine non-AI issues with the court, who submitted cases that he hadn’t read, whose law partner said he was on vacation when he wasn’t. There are real issues with candor to the tribunal that have nothing to do with artificial intelligence. But putting that to one side, I think the original sin of this whole story was using the wrong tool for the job. This lawyer tried to create a shortcut by using ChatGPT to ask for legal authorities, in the hope that ChatGPT would create a list of cases that stand for this proposition.
(16:20):
So there are two problems here. I mean, the first problem, I think, is he’s using it for a task it is not created to do. Large language models statistically create plausible-sounding sentences, where it’s statistically likely that one word follows the next, but they’re not made to answer questions. So you may remember IBM Watson created this artificial intelligence that would play Jeopardy!. In that case it would sort of create a hypothesis about what the answer was, because it was Jeopardy!, and use that to create a question that matched. That is really a question-answering tool, right? ChatGPT is not that. If you ask ChatGPT a question, what you’ll get is a statistically likely answer. So here’s a good example of that. I recently took a list of books that I was trying to alphabetize. It was a long list, and I took the unsorted list and plugged it into ChatGPT and said, please sort this alphabetically.
(17:24):
And it did. But the last item in the list started with the letter M, as in Mary, after Z. And my question was, why is this the last item in the list? And the answer was, because M is the last letter of the alphabet. So I said, are you sure about that? And it said, ah, no, of course you’re right. Z is the last letter of the alphabet. All right, do you want to try that sorting again? So the issue here is that I asked a question that has an actual answer, and the tool isn’t made to answer the question. And in this case, the lawyer says, can you give me precedents that stand for this point of law, to a large language model trained on the worldwide web, of all things, a source full of misinformation and racism and bias,
Zack Glaser (18:17):
Massive bias.
Ed Walters (18:20):
But on top of that, asking it to create something without answering the question is not using the right tool for the job. The last thing I’ll say about this is, if you ask a Magic 8 Ball whether you should have a tumor removed, the Magic 8 Ball will answer you very confidently: signs point to yes. But we shouldn’t think for a minute that a Magic 8 Ball is giving out medical advice. It’s not scanning you for what kind of tumor you have. And large language models, at least as currently created, aren’t made to answer questions. It’s not a tool for answering questions; it’s a tool for statistically creating plausible answers. Now, the last thing I’ll say about this, the true last thing I’ll say about it, is that there’s also a problem baked into at least the ChatGPT family of tools, which is that they are made to sound super authoritative, right?
(19:19):
When you say, I have a question, can you answer it? The answer is not just, here’s a statistical blob of words. It’s, absolutely, I’ll be happy to answer that question. Here are four precedents that stand for that point of law, here are the citations, and here’s a quote from that case. Which I think you could forgive people for saying, wow, that sounds like what I’d get back if I asked an associate that question, right? The problem is the case doesn’t exist, the citation is to something else, the quote is fabricated, and the point of law may not even exist. And so we need to have something in between the belief that artificial intelligence is all garbage and the belief that artificial intelligence is omniscient and is about to become self-aware and travel back in time to kill our parents. That begins, I think, with understanding what the tools do, what they’re good for, and how to use them in ways that are good.
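[Editor’s note: A quick illustration of Ed’s “statistically likely” point. The toy Python sketch below picks each next word based only on what followed that word in a tiny training text, so it produces plausible-sounding sequences with no notion of whether they are true. Real large language models use neural networks over tokens, not word-count tables; treat this as intuition only.]

import random
from collections import defaultdict

# Tiny "training corpus." A real model trains on billions of documents.
corpus = ("the court held that the contract was void "
          "the court held that the statute was valid "
          "the court found that the contract was valid").split()

# Count which words follow which word in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break
        word = random.choice(choices)  # plausible, not necessarily true
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the court held that the contract was valid"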
Zack Glaser (20:27):
Well, Ed, let’s take a quick break to hear a word from our sponsors. And when we come back, we’ll talk about what the tools are used for, how to use them, and how to use them properly. I think the next question we have is how this jibes with our professional responsibility obligations and things like that. So when we come back, we’ll talk about those things as well.
Zack Glaser:
The Lawyerist Podcast is brought to you by Posh Virtual Receptionists. As an attorney, do you ever wish you could be in two places at once? You could take a call while you’re in court, capture a lead during a meeting, or schedule an appointment with a client while you’re elbow deep in an important case. Well, that’s where Posh comes in. They’re a team of professional, US-based, live virtual receptionists available 24/7/365. They answer and transfer your calls, so you never miss an opportunity and you can devote more time to building your law firm. And with the Posh app you’re in total control of when your receptionist steps in. You can save as much as 40% off your current service provider’s rates. Even better, Posh is extending a special offer to Lawyerist listeners: visit posh.com/lawyerist to learn more and start your free trial of Posh Live Virtual Receptionist services.
And by Clio. What do solo and small firm lawyers with great client relationships all have in common? They use cloud-based legal practice management software to run their law firms. This is just one finding from Clio’s latest Legal Trends Report. There’s no getting around it… the fact is… when it comes to client expectations—standards are higher than ever for lawyers. Proof is in the numbers: 88% of lawyers using cloud-based software report good relationships with clients. For firms not in the cloud, barely half can say the same. That gap is significant. For more information on how cloud software creates better client relationships, download Clio’s Legal Trends Report for free at clio.com/trends. That’s Clio spelled C-L-I-O dot com/trends.
And by LawPay. Did you know 80% of lawyers struggle to make their firms profitable? If you want to build a thriving practice, you need the right set of tools. LawPay, the #1 legal payments processor and MyCase, the leader in legal practice management software, have joined forces to offer law firms a complete software solution. Access everything your firm needs to succeed, all in one place. Track time, send invoices, get paid, handle accounting and three-way trust reconciliation, manage client intake, and more—without switching between programs. Plus, access dozens of integrations that seamlessly sync with your current software. Over 65,000 lawyers trust LawPay and MyCase to streamline their firm’s operations. In fact, users get paid 39% faster and gain three billable hours per day on average. So why wait? Learn more and schedule a demo now at LawPay.com/lawyerist. That’s lawpay.com/lawyerist.
(23:46):
And we’re back with Ed Walters, talking about artificial intelligence in the law. Before the break, we were talking about the right tool for the right job, and this idea that artificial intelligence is this solve for everything. And so we just go and say, well, what can it be used for? And it’s so confident, especially in the LLM space, specifically ChatGPT or Bard. It’s so confident that we say, well, that has to be right. It just mansplains the heck out of things to us. So how do we use this intelligence? How do we use these tools properly? Because LLMs aren’t the only things that we can be using or taking advantage of in our offices.
Ed Walters (24:30):
Yeah, well, I’ll just say I’m blown away by this generation of tools. The tools from Anthropic, Bard from Alphabet, GPT-4 from OpenAI and Microsoft: very impressive, extremely good. I teach the law of robots at Georgetown Law and at Cornell Tech, and I used to teach the Turing test, the idea that you could ask a question, and there’s an answer that comes out, and you don’t know whether the answer came from a machine or a human. It was a test of the sophistication of AI.
Zack Glaser (25:03):
Right?
Ed Walters (25:03):
I don’t teach that anymore. That’s history. It doesn’t exist. By far, GPT-3, certainly GPT-3.5, the DaVinci version of it, and by far GPT-4, more than pass the Turing test. It doesn’t exist anymore; it’s a benchmark that’s long passed.
Zack Glaser (25:23):
I just want to pause on that for a second, because I don’t know that I really recognized that. So the Turing test, obviously named for Alan Turing, is: can we get a response back from a computer where we don’t know if it’s a human or a computer responding? We actually can’t say it’s not a human, is essentially what it is. And we’re past that?
Ed Walters (25:48):
And for years and years, computer scientists were touting the fact that it would take now four whole questions, five questions, before you could find out that it was a machine and not a human. I think we’re at 29 questions, infinite questions in some ways. When AI starts to try to convince a journalist that his wife doesn’t love him, that’s pretty far past the Turing test.
Zack Glaser (26:15):
Gone into some personal things there at that point. Yeah.
Ed Walters (26:21):
So maybe just on the topic of what AI is good for and what it’s not good for.
Zack Glaser (26:25):
Yes.
Ed Walters (26:26):
So you wouldn’t ask the calculator on your computer to create a budget for Lawyerist next year, right? It’s very good with numbers, but it doesn’t have a notional idea of how much Lawyerist should grow next year. And in a similar way, tools that create statistically likely ends of words aren’t designed to answer questions. Some things they’re good for, though: they’re very good at summarization. In Docket Alarm, in the past year, we created a tool where, when you open a docket sheet and it has hundreds of entries, you can just sort of mouse over one of the numbers in the docket and it will create a GPT-3.5 summary of the whole document. So you don’t have to open hundreds of documents one by one to find out what they say. You can get a concise summary that’s reliable and accurate. It’s not statistically trying to generate things; it’s taking the content of a document and condensing it into a summary. And those tools are amazing for that. Very, very good.
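[Editor’s note: For readers curious what summarization like this looks like in code, here is a minimal sketch using the OpenAI Python client as it existed in mid-2023. Docket Alarm’s actual pipeline is not public, so the model, prompt, and settings below are illustrative assumptions, not vLex’s implementation. The key idea is that the model condenses text it is given rather than generating from scratch.]

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize_filing(document_text: str) -> str:
    # Ask the model to condense only the supplied text.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the following court filing in three "
                        "sentences. Use only the text provided."},
            {"role": "user", "content": document_text},
        ],
        temperature=0,  # favor faithful condensation over creativity
    )
    return response.choices[0].message.content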
Zack Glaser (27:36):
And that’s a fundamentally different thing to do. One is trying to generate something out of whole cloth, and the other is saying, here is this silo of information; what does it say? You’re not going to get hallucinations out of that sort of thing, you’re not going to get just made-up stuff, because it has a specific directive, a specific job to do.
Ed Walters (28:00):
Yeah, that’s right. I’ll break a little news here. I think you heard it here first. It’s not a big surprise to people internally at vLex: we have these AI labs who are working on new tools. And one of the things we’re working on is an adversarial process that, when considering what the statistically most likely next words or citations are, compares them against the actual law, against actual citations, actual quotes, actual cases, actual statutes from around the world, to create things like research memos, but research memos that don’t have the risk of hallucination. I just saw the first outputs from the labs. It takes a lot to impress me; I don’t impress very easily. I was really impressed, blown away by the first versions of this. So I do think there’s great potential in the future for large language models that aren’t trained on the Huffington Post or BuzzFeed or Page Six or the New York Post or whatever, or the comments section of Reddit, but that are trained on judicial opinions, statutes, constitutions, regulations, briefs, pleadings, motions, to really create first drafts, and first drafts that are verifiable, that cite to real sources so you can click links and go directly to those sources.
(29:27):
So I think there’s potential for that, but that kind of last step, almost like an LLM citator, if you will, I don’t think yet really exists. It needs to. I think it’ll be relatively common five years from now; you’ll see it in vLex and Thomson Reuters and Lexis products. But it’s also vital to avoid the idea of fabricated citations or phony law.
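[Editor’s note: vLex has not published how its verification step works, but the “compare it against the actual law” idea can be sketched: pull the citations out of a model’s draft and look each one up in a trusted database, flagging anything that does not resolve. The citation pattern and the tiny lookup table below are illustrative stand-ins for a real citator.]

import re

KNOWN_CASES = {  # stand-in for a real database of reported cases
    "410 U.S. 113": "Roe v. Wade",
    "384 U.S. 436": "Miranda v. Arizona",
}

# Matches simple U.S. Reports citations like "384 U.S. 436".
CITATION_RE = re.compile(r"\b\d{1,4}\s+U\.S\.\s+\d{1,4}\b")

def verify_citations(draft: str) -> list[tuple[str, bool]]:
    """Return each citation found in the draft and whether it resolves."""
    return [(c, c in KNOWN_CASES) for c in CITATION_RE.findall(draft)]

draft = "As held in 384 U.S. 436 and 999 U.S. 999, the motion fails."
for cite, real in verify_citations(draft):
    print(cite, "OK" if real else "NOT FOUND: possible hallucination")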
Zack Glaser (29:53):
And again, that’s the right tool for the right job. With the proper training and research, we can make generative AI do what we want it to do. It’s just that the tools that are out there right now, for the most part, certainly the ones we’re talking about, are not built for that. ChatGPT is not built for that. And I really like your 8 ball example, because that 8 ball might be right, but it’s sheer chance; it’s random as to whether it’s going to be right. So yeah, could ChatGPT possibly be right? Sure.
Ed Walters (30:31):
So could flipping a coin,
Zack Glaser (30:33):
Right? And I think that gets into, when I was practicing, I always told people that a lot of my job was being the adult in the room, which is not my job now, but a lot of my job was being the adult and the most responsible person in the room. And I think when we’re dealing with ChatGPT, AI, large language models, and using artificial intelligence to create or even summarize anything, the question comes up of who’s the adult in the room? Who’s the one responsible for this? And then, why am I responsible for this? So touching on that, what is our responsibility toward these artificial intelligence tools?
Ed Walters (31:18):
Yeah, that’s a great question. In 2019, I wrote a law review article basically talking about the intersection of AI and the Model Rules of Professional Responsibility, adopted by most states as their kind of local rules. And I touch on this a little bit. We can maybe link to it in the show notes, but
Zack Glaser (31:36):
Oh yeah, definitely.
Ed Walters (31:37):
Rule 5.1 of the ABA’s Model Rules, the duty of supervision. And the idea is that a partner in a law firm, or a lawyer who works together with other lawyers and other legal professionals, is responsible for them. You have direct supervisory authority, but the buck stops with the licensed professional in that situation. And I think with AI, the idea is very similar. If you’re going to use AI to practice law, you have to make sure that you are ultimately responsible. In the Steven Schwartz case that we talked about at the top, there’s no problem with using even the wrong tool for the job if you then read the cases to make sure they exist, to make sure that they say what the chatbot says they say, and then you discover that they don’t, right? The buck stops with you. And so you could find out that the Magic 8 Ball is wrong; you could find out that the chatbot doesn’t really know what it’s talking about, because you take the ultimate responsibility, the duty of supervision, just like you would with a first-year associate, just like you would if you were supervising a law student or a paralegal or another allied legal professional in the firm. The buck stops with you.
(32:54):
And that’s, I think, the rule we should take away from working with artificial intelligence. You have that same Rule 5.1 duty of professional responsibility.
Zack Glaser (33:05):
I like that, and it really doesn’t matter whether I like it or not; it just is. But I like it because it makes this a little bit less scary and more approachable. I’ve always had to do that. If I have an associate write something, if I have my friend write something, whatever it is, I still have to look into it and make sure it’s correct. And I think one of the bigger mistakes right now is us saying, okay, well, I trust that. Well, why do you trust a tool that you haven’t proven is correct?
Ed Walters (33:37):
Yeah, that’s right. That’s right. And especially if the tool is trained on the worldwide web. So as you said early on, data is really at the heart of this. I’ll point to, and this is self-interested, but I’ll point to a book that I authored and edited in 2018 called Data-Driven Law that really talks about the importance of legal data in artificial intelligence. There’s even a chapter by Stephen Wolfram talking about this exact subject. If you’re going to have any of these tools, they’re only going to be as good as the training data. And I think in six months there are going to be hundreds of different large language models. Transformers are not new; Google invented them six or seven years ago. So I think in some ways the large language models are going to become commoditized. But what’s not going to be commoditized is authenticated, trusted, pure-provenance legal data.
(34:41):
And that’s sort of where we’re pointing at vLex and Fastcase right now. We’ve pulled together the most authoritative, broadest-scope online law library and large language model training dataset in the world. And I think it’s going to produce tools that are authoritative and reliable. That won’t change people’s responsibility under 5.1 to supervise, but I think we’ll see, because we’re going to test it with real law, that the results will be much more reliable. It will be more akin to asking a doctor for medical advice, a first opinion that you then go get a second opinion on, asking somebody to take a look at something even if you’re going to go see your primary physician later.
Zack Glaser (35:32):
I like the idea that we can check on these things, or even have tools that can help us check on these things. It makes me think of this idea in writing programs, writing code, where many times you have to write code that can hunt bugs for you, code to find out if something is wrong. And so it does scare me a little bit, thinking about the progression of artificial intelligence: are we going to, and I doubt that we will, but are we going to get to a place where we don’t have the capacity to check our tools, where our tools are just outside the leading edge of our checking tools? Because right now, I can’t envision a scenario where I can’t physically go in and check the law that is created. But maybe there’s a future where I have to use a tool to do this.
Ed Walters (36:26):
There’s a great Isaac Asimov story about this in the I, Robot series. It’s a book of short stories, not the Will Smith movie; the Asimov I, Robot is short stories. And the last of those short stories is called The Evitable Conflict. In the story, basically, we have created a series of algorithms that perfectly allocates work and money and food and responsibility and liability, and the whole world is basically run by these algorithms. And they’ve become so complicated that no person understands how they work anymore, and they start malfunctioning in this story. That might seem a little farfetched, but that’s exactly the way the Dow Jones Industrial Average, the NASDAQ, the stock exchange work today. The vast majority of transactions are algorithmic. It’s high-frequency traders that are setting the price for stocks, and a large amount of the world’s wealth, people’s retirements, relies on that system working. But if you ask someone to explain how those interactions all work together, almost no one can. And so it’s not super farfetched to think that we might over-rely on these things, that you might have, in a not-too-distant future, briefs that are prepared completely by software, that are opposed completely by software, and then reviewed by judges who are relying completely on software. And so you could have real algorithmic, transactional justice that happens much faster, and then has several layers of algorithmic review before a human judge ever considers whether the outcome is correct or not.
Zack Glaser (38:16):
Right. That’s right. I can kind of extrapolate to there and it worries me in a sense, but at the same time it doesn’t because we do have these rules of professional responsibility unlike some other areas maybe. But we do have these rules of professional responsibility. And so it brings me to the question of what sort of responsibility, and I’m very careful to not use ethical obligation because ethics is a totally different thing than our professional responsibility. But what sort of responsibility do we have to understand these tools that we’re using, not just to check but to know how they work?
Ed Walters (38:56):
Yeah, that’s right. And I’ll just say, this was a big concern at the dawn of computer-assisted legal research. Online legal research was thought to be removing a critical component of the lawyer’s understanding of the law; lawyers had to understand the law in order to cite it. And if computers, using these newfangled technologies of Boolean queries, were figuring out what cases should be cited, that would dumb down lawyers to a point where they wouldn’t understand the law. I think you could debate whether that actually happened in practice or not, but I don’t think lawyers understand the law less because we’ve automated some of that filtering process. We’ll take 50 million legal documents in the law library, filter them down to maybe 411 search results, and then rank them by how likely they are to be useful for your work. It doesn’t necessarily make us any dumber, any more than the original West Key Number System did, the digest system in books, where it would filter down this universe of legal cases into topics and subtopics by jurisdiction.
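[Editor’s note: The filter-then-rank step Ed describes can be sketched in a few lines: keep only the documents that match the query, then order them by a crude relevance score. Real research platforms use far richer signals than the raw term counting below; this is intuition only.]

def search(library: dict[str, str], query: str) -> list[str]:
    """Filter a document collection by query terms, then rank by score."""
    terms = query.lower().split()
    scored = []
    for title, text in library.items():
        words = text.lower().split()
        score = sum(words.count(t) for t in terms)  # crude term frequency
        if score:                                   # the filtering step
            scored.append((score, title))
    return [t for _, t in sorted(scored, reverse=True)]  # the ranking step

library = {
    "Case A": "breach of contract damages contract",
    "Case B": "negligence tort duty of care",
    "Case C": "contract formation offer acceptance",
}
print(search(library, "contract damages"))  # -> ['Case A', 'Case C']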
Zack Glaser (40:12):
I’ve never actually thought about that before, about how the key numbers are based on another human’s perspective of what a case was about. And there could be a very specific point in a case that they didn’t get and didn’t put into the right bucket, and you never find it.
Ed Walters (40:31):
Yeah, that’s right. John B. West is a huge inspiration to me, by the way. I think he’s an amazing entrepreneur. I’ve got nothing bad to say about the key number system. It is by far the most efficient way of doing that task.
Zack Glaser (40:43):
Absolutely.
Ed Walters (40:44):
And it missed cases all the time. It would miss issues, or miscategorize things, or not include something in a key where it’s a close call. It could be either; it could be both. And you list it in one, not the other, and you miss something.
Zack Glaser (40:59):
Well, one thing I want to get to before we close out is the duty of confidentiality as it relates to artificial intelligence, because I think that’s a big question for attorneys right now: am I putting this information just out there? Am I putting client information onto the web if I put client information into ChatGPT or something like that? So I guess, what is the relationship of the information that we put into some of these large language models to the public, or, I guess more importantly, to our own private space?
Ed Walters (41:36):
I think it’s a key issue. So the early versions of ChatGPT did use the queries and the responses to inform the model, which meant that things you typed in there weren’t confidential. And I think there are still settings for those OpenAI tools that can make them public. If you use Bard, I think the default setting is that whatever you type in as a query or as a prompt is used to retrain the model. So Rule 1.6 of the Model Rules, the duty of confidentiality: a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent or the disclosure is impliedly authorized in order to carry out the representation. I would not be surprised to see lawyers or law firms begin to ask clients in their engagement letters for permission to disclose that information to large language models, with the idea that no individual person is going to read it, so it doesn’t really breach confidentiality any more than sending an unencrypted email violates that rule of confidentiality. I would expect clients in that same situation to say no: you may not use queries that are specific to my confidential information in large language models that don’t secure and lock down that data, any more than I would say you could store our confidential information in the cloud.
Zack Glaser (43:14):
Right. Now, I would expect clients to say that. Oh well, I’m getting to the definition of the word expect. I would hope clients would say that. Do I expect them all to say that? Probably not. And I think that gets into an informed consent issue. But let’s move down that line: alright, I probably need to, and I think it’s more than probably, I need to ask my clients if I can put their information in there. If they say no, then how can I still use these sorts of technologies to help my clients?
Ed Walters (43:46):
Well, I think the new generation of transformers, of large language models, is keeping the prompts confidential. They’re not using the prompts to train the model going forward. And so I think there is going to be some diligence required of lawyers and law firms to make sure that the tools they’re using don’t breach the duty of confidentiality, or, if there is some good reason they need to use a tool that does, to get consent from the client.
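[Editor’s note: One practical mitigation the conversation points toward is scrubbing obvious client identifiers before a prompt ever leaves the firm. The patterns below are illustrative assumptions; real redaction takes far more care than a few regular expressions, and it does not remove the need for client consent.]

import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\bAcme Corp\b"), "[CLIENT]"),               # hypothetical client name
]

def redact(prompt: str) -> str:
    """Replace known-sensitive patterns before sending a prompt out."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Draft a demand letter for Acme Corp; contact jane@acme.com"))
# -> "Draft a demand letter for [CLIENT]; contact [EMAIL]"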
Zack Glaser (44:16):
So it really does hit a lot of places in our rules of professional conduct. And honestly, I would really encourage people to go read the law review article that we’re going to link to in the show notes. Ed, I could sit here and chat with you for days about this, but I know you’ve got other things you need to do. So before we leave, what would be your advice, kind of simple advice, to an attorney who is just trying to make sure they’re taking advantage of AI, but also doing it thoughtfully, just kind of jumping into it a little bit? What would be your advice to them to do that initially?
Ed Walters (44:57):
Well, I think what I would say is, we’re in a world of great change, a time of really great change. And I think we all need to be open to what those changes might create, what kinds of opportunities they might create. And we should do it carefully. There’s a reason it’s called the bleeding edge: there’s a lot of bleeding that happens there. And if you don’t believe me, ask IBM, right? I mean, Watson was thought to be the next big thing. It’s kind of hard to remember this, but I think the Jeopardy! match was in like 2011, 2013. It was insanely long ago.
Zack Glaser (45:40):
Right?
Ed Walters (45:41):
Oh yeah. They were just a little too early, and they’re, for the most part, not even in the conversation now.
Zack Glaser (45:48):
Well, that’s what I said about Bing just a couple of months ago. Yeah, no, that’s right.
Ed Walters (45:54):
That’s right. Maybe, to take a macro metaphor, you sort of want to be Microsoft and not IBM in this case. Microsoft remained open to the possibilities, but they proceeded a little more slowly; they were a little more methodical. But when they had the opportunity to buy the controlling stake in OpenAI, they were very fast-moving. And so I think we all need to be very nimble in our thinking right now, and we need to make sure that we are not engaged in magical thinking about AI, nor closed-minded thinking about AI. Lots of lawyers I talk to about this say, ah, well, I’m just glad that I’m old enough to be retiring before any of that affects my practice. And the truth is, unless you retired last year, it’s going to affect your practice. And so I think we need to be very flexible-minded about those opportunities, and opportunistic when we can be, but we need to maintain that same model of critical, ethical, responsible thinking that makes us good lawyers in the first place.
Zack Glaser (47:03):
So “it depends” is really our answer? No, not “it depends.”
Ed Walters (47:07):
I would say “it depends,” I think, means there’s no answer. Here, the answer is certainly going to be: we are going to use artificial intelligence more in our practice. We already do. We should be critical-minded about what we use. We should be able to see transparently what sources the training data came from, make sure that we trust the provenance of that data, make sure it’s not trained on the open internet. So we need to be critical thinkers; it’s the reason we are good lawyers in the first place. And we need to be flexible-minded about the opportunities that are coming.
Zack Glaser (47:43):
Fantastic. Ed, I think that’s great advice, and I appreciate your time today. I have learned a lot. I know you’re a wealth of information on this, and I would encourage people to go, obviously, to the law review article, but also to go pick up the book Data-Driven Law. Ed, you wrote some of it and edited it, and there are a lot of great minds in there; some of them have been on this podcast before as well, like Nika Kabiri. We’ll drop a link to that as well in the show notes. Ed, thank you for being with me. I really appreciate it.
Ed Walters (48:14):
Thanks for having me, Zack.
Lawyerist Podcast
The Lawyerist Podcast is a weekly show about lawyering and law practice hosted by Stephanie Everett.