Lately, AI developments have been top of mind for just about everybody—lawyers included. And with AI’s potential to solve all kinds of workflow problems and majorly improve access to justice, the excitement around it does have real merit. However, concerns about accuracy, security, and the current lack of regulation are just as pressing as the excitement surrounding the AI bandwagon. To address all these issues and more, Sharon Nelson and Jim Calloway welcome John Simek to unpack the latest AI trends and their implications for lawyers.
John W. Simek is vice president of the digital forensics, managed information technology and cybersecurity firm Sensei Enterprises.
Intro: Welcome to The Digital Edge with Sharon Nelson and Jim Calloway. Your hosts, both legal technologists, authors, and lecturers, invite industry professionals to discuss a new topic related to lawyers and technology. You are listening to Legal Talk Network.
Sharon D. Nelson: Welcome to the 183rd edition of The Digital Edge: Lawyers and Technology. We’re glad to have you with us. I’m Sharon Nelson, President of Sensei Enterprises, an information technology, cybersecurity, and digital forensics firm in Fairfax, Virginia.
Jim Calloway: And I’m Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program. Our topic today is Lawyers Hopping Aboard the AI Train and It’s Gathering Speed. Our guest today is John W. Simek, the Vice President of Sensei Enterprises, Inc. He is a Certified Information Systems Security Professional, a Certified Ethical Hacker, and a nationally known expert in the area of digital forensics. Sensei provides legal technology, cybersecurity, and digital forensic services. John has co-authored 18 books published by the American Bar Association. Thanks for joining us today, John.
John W. Simek: It’s great to be here, Jim.
Sharon D. Nelson: Let’s start off with this rousing topic of ours. Why has there been such a wild and crazy increased interest in artificial intelligence as it pertains to lawyers, John?
John W. Simek: Jeez, and here I thought you were going to say it in the voice of Steve Martin.
Sharon D. Nelson: I could have if I’d used the AI.
John W. Simek: Yeah, I guess there has been a huge and sudden increase, at least in the last several months. But you know, AI has actually been around for a long, long time, with really heavy usage in the discovery world for years. And it’s actually baked into a lot of products, as opposed to being standalone. So it’s a lot like the cloud, right? Folks didn’t know they were using the cloud when they really were. A lot of folks, I think, are actually taking advantage of artificial intelligence in what they’re working with, but they just don’t know it. They just don’t see that it’s there. That radically changed with the introduction of ChatGPT in November of last year, because it’s a standalone product, and it’s free.
So it’s pretty much open to everyone: very effective, very easy to use. I know you play around with ChatGPT multiple times a day, Sharon. But others have also jumped on that with standalone or at least augmented products, like Microsoft and Google, and many, many others have introduced their own versions. But to answer your question about why the sudden increase, I think it’s a couple of things. It’s the FOMO, right? The fear of missing out, because everyone’s talking about it. So it’s kind of like streaming video: “Oh, jeez, I need to be on this bandwagon.” And so everyone is really, really interested in it. But I think one of the biggest reasons is that it’s freely available to the masses. It’s so accessible, and that, I think, has really increased the interest and the popularity.
Jim Calloway: Well, let’s kind of begin at the beginning of this new tool. Can you explain Generative AI to our audience?
John W. Simek: I’m not an expert in that, Jim, and I’ll try, but I’m not going to get into a lot of the propellerhead stuff. Essentially, Generative AI is a type of artificial intelligence that takes text, images, audio, et cetera, and generates those in response to prompts. It recognizes patterns, and it makes predictions based upon those patterns. They’ve given it a ton of data in order to train it. The key thing, I think, for folks to understand is that it actually creates new content as the output. It’s not merely a regurgitation of information like a search engine, which, when you do a search, returns back, if you will, segments of what it discovers. It’s actually new content. And it continues to learn as it creates more and more of this content.
ChatGPT is one of the more famous ones, obviously; it’s by OpenAI. It’s a large language model, and it generates human-like text. DALL-E is also from OpenAI; it generates artwork and images from text prompts. And there are many others out there, like Midjourney and DeepMind. As for the underlying technology, and I don’t want to spend a lot of time going through this because that’s what the engineers deal with, there are transformer-based models like ChatGPT. That’s actually what the GPT stands for, Generative Pre-trained Transformer, and it runs on a neural network for natural language processing. There are also generative adversarial networks, the GANs. Those are the same things that are used in deepfakes, where you have two neural networks, the generator and the discriminator, that compete with each other to generate the output.
And there are other technologies out there that are being used as part of all this generative AI. But I think the key thing for most folks to remember, to think about, is the new content portion, and that there are massive, massive amounts of data accessible to the AI, and it’s extremely, extremely fast given the amount of data. So that’s the takeaway.
Sharon D. Nelson: It’s so fast. I know when we lecture on this, and we seem to lecture on it all the time, I gave it a hyperlink, because it didn’t have information that was current and now it can go out to some hyperlinks. It went out, and I had the full body text of an answer in three seconds. I mean, three seconds. It digested everything and had an answer for me. That is really blinding speed. Another thing that we talk about, John, as you know, is our friend Andy Perlman, the Dean of Suffolk University Law School, who is quite fixated on what AI means for access to justice. So what does it mean, do you think?
John W. Simek: Well, I think there’s some controversy around that. I really think it does increase access to justice, and I’ll cover a couple of the actual responses from ChatGPT. If you ask ChatGPT what artificial intelligence means for access to justice, it’ll come back with several bullet points: legal research and answers for those of limited means is one; document review by AI can reduce the cost of legal services; chatbots and AI assistants can provide basic legal information to those who don’t have access to legal services; and predictive analysis can help those without lawyers make informed decisions about what their legal rights are.
And then overall, I think, and ChatGPT agrees with this, it will make legal services more accessible, efficient, and affordable, really by lowering costs and making things more efficient. But the key thing for a lot of folks to realize, at least in my opinion, and I know we’ve heard Andy say this as well, is that people who normally wouldn’t contact a lawyer are more likely to use something like ChatGPT to gain access to legal services. I know, Jim, you’re big on the people law thing, kind of like the DIY stuff, right? And it’s also going to help lawyers, because they can be more efficient and more accessible to clients.
So overall, I really do think it’s going to increase access to justice. Some folks, though, would argue that it actually reduces it, and I have read some of those things, because it doesn’t reduce expense. Their argument is, “Well, jeez, now we’ve got to pay for technology.” The lawyers do, that is, and therefore they’re not going to reduce costs; they’re actually going to increase costs. I don’t buy that argument, but it is out there.
Jim Calloway: Well, and I think where it’s really going to work, John, is if we ever had a court that would index some of this directly to the court’s own records, so the answers would be coming from a more authoritative source, if you will.
John W. Simek: Okay.
Jim Calloway: Well no, we can dream.
John W. Simek: Not in its present form. I’m not so sure, Jim.
Jim Calloway: I understand. I meant more like making the court’s data about dockets and hearings and all the rest available, so that it can be incorporated into the answers.
John W. Simek: That’s a good segue, because some of that is the predictive analysis thing that AI is very good at. Yeah.
Jim Calloway: Back to your other comment, how can AI be used in the delivery of legal services?
John W. Simek: Well, you know, as I said early on, artificial intelligence is really baked into a lot of products now, through APIs, et cetera. It’s part of them. And because it’s the hot new shiny penny, everyone wants it, and they’re incorporating it into their products. But it can be used in legal research, document assembly, and review. I’ll tell a quick story about that, which I thought was really pretty cool. It was a colleague of ours; Sharon and I were at an event the other day. And he said he’s got this research assistant paralegal who does a really, really good job getting the core legal concepts down, but isn’t really good at assembling, if you will, documents, whether it’s a brief or some sort of argument.
And so what he does is he anonymizes whatever she typed up from her research, replaces names with placeholders and that sort of thing so you don’t have client confidential data, pumps it into ChatGPT, and basically says, “Make it better.” And it does a really, really good job at that. So things like that, I think, are really, really good for AI to do. Contract reviews, right? It can be used for due diligence, online dispute resolution. One of the things that’s near and dear to my heart, since I’m a testifying expert, is AI being used for deposition questions. And I said, “Wow,” I hadn’t thought about that, and, “Oh, you know, if an attorney is not really familiar with a particular subject area, this AI could really help educate them and come up with some really good deposition questions.”
And then there’s that predictive analytics stuff, Jim, that we started with. We’re talking about all that historical data that’s out there about courts and judges and their rulings, and predictions can be made now. Sharon knows I call it the legal oddsmaker. It’s like, what are your chances of winning this case, with this particular judge, with this particular matter, in this court? And one of these days, I bet it’s going to say, “You know what, Tuesdays are better than Thursdays.” Seriously, given historical data, it’s going to give you some confidence level about your chances. But what I think a lot of lawyers don’t realize when they talk about using predictive analytics for court data is that the consumer isn’t dumb. They’re going to begin taking a look at historical data for lawyers: what are their success rates? And they’ll be building those same kinds of models and saying, “Jeez, which lawyer would potentially be better for my case?”
Jim Calloway: Well, that’s interesting, John. Before we move on to our next segment, let’s take a quick commercial break.
Male: Built for lawyers, Nota’s cloud-based business banking is perfect for your solo or small law firm. You want to spend your day helping clients not struggling to reconcile bank statements, Nota’s customer service specialists are here to help you. They only support attorneys so they understand the tools you use and the requirements you’re up against and take your business as seriously as you do. Don’t miss out on exciting new member benefits including our partnership with Law Line to earn ethics credits for your CLEs. Online at trustnota.com/legal. Nota, banking built for law firms, like yours. Terms and conditions may apply.
Christopher T. Anderson: If you’re a lawyer running a solo or small firm, and you’re looking for other lawyers to talk through issues you’re currently facing in your practice, join the Un-Billable Hours Community Roundtable, a free virtual event on the third Thursday of every month. Lawyers from all over the country come together and meet with me, Lawyer and Law Firm Management Consultant Christopher T. Anderson to discuss best practices on topics such as marketing, client acquisition, hiring and firing, and time management. The conversation is free to join, but requires a simple reservation. The link to RSVP can be found on the Un-Billable Hour page at legaltalknetwork.com. We’ll see you there.
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today our subject is Lawyers Hopping Aboard the AI Train and It’s Gathering Speed. Our guest today is John Simek, the Vice President of Sensei Enterprises. He is a Certified Information Systems Security Professional, a Certified Ethical Hacker, and a nationally known expert in the area of digital forensics. And I think we’ll just leave it there. He’s also my husband. So John, again, when we lecture, I talk specifically about the hallucinations that I have seen with respect to AI. But of course, a lot of people still are not familiar with hallucinations. So if you could explain that to them, and go ahead and feel free to requote me on the things that have happened to me that have been very disquieting.
John W. Simek: Yeah, and I’ve seen it as well, though I don’t use ChatGPT nearly as frequently as you do. But hallucinations, that’s an industry label, and it’s put upon things when the artificial intelligence makes stuff up. I don’t know why they call it hallucinations. Why isn’t it called something like an algorithmic screw-up? Or why don’t you just call it a plain error, right? But no, it’s labeled hallucinations. And it’s really things like links that don’t work. It could be link rot, where they’re aged out or whatever. Articles that were never written; it makes references to those. The answers can be plain wrong, as ChatGPT in particular is apparently not really good at doing math, where others are. I think when you look at AI in general, the whole purpose of it, and directionally where it’s going, is to try to make it more human-like. And humans do the same thing, right? They make things up to sound more convincing.
Sharon D. Nelson: They say it confidently as though it were true, right? And that’s what AI does.
John W. Simek: So I think even though they call it a hallucination, you know what, it’s really more human like.
Sharon D. Nelson: And to add to the list that John gave, I’ve seen it make up books that don’t exist, make up articles that don’t exist, and make up cases that don’t exist. Lots of other lawyers have experienced the same thing. So we’re all looking at it saying, “Verify, verify, verify,” because you can’t just believe it, no matter how confident the AI sounds. You’ve got to verify what it’s telling you.
John W. Simek: Yeah, and I want to tell a quick story, and you’ve heard this, Sharon, about being a victim of what these hallucinations can do. There was a criminal defense attorney named Jonathan Turley. A UCLA professor asked ChatGPT to cite five examples of sexual harassment by professors at American law schools, along with quotes from relevant newspaper articles. That was his prompt to ChatGPT. One of the entries in the response that came back concerned Jonathan Turley. So this UCLA professor emailed Jonathan and said, “Did you know that ChatGPT says you were part of an alleged 2018 incident as a Georgetown University Law Center professor accused of sexual harassment by a former female student?” It actually quoted a Washington Post article and said the article stated the complaint alleged Turley made sexually suggestive comments and attempted to touch her in a sexual manner during a law school sponsored trip to Alaska. Well, none of that was true. And Jonathan, when he heard this, went, “What?” He’s not a teacher at Georgetown; he’s a teacher at George Washington University.
And I’ll quote what Jonathan said: “ChatGPT invented an allegation where I was on the faculty at a school where I never taught, went on a trip I never took, and reported an allegation that was never made.”
Jim Calloway: Wow.
John W. Simek: So that is a hallucination on steroids, let me tell you.
Jim Calloway: John, I believe they call them hallucinations because ChatGPT believes them. A binary psychedelic, is that what you’d say?
John W. Simek: That’s right.
Jim Calloway: John, I’ve got one quick story, since you shared one. Our practice management advisor, Julie Bayes, and I were teaching at a daylong seminar in a courtroom, and we taught a session on ChatGPT. She was showing some other tools, including how Descript will take an audio recording and make a transcript for you, and then you can remove all your errs and ums and all that kind of stuff, because it identifies those for you. And if you give it a sample of your voice, you can actually put your voice in there. When Julie mentioned that, I asked whether, if I hadn’t bothered to record something, it would edit that in too, and she said it would. And the two judges in the audience both threw up their hands and said, “How do you catch that?” Well, probably with an expert. But this is just the early free version. We’ll see what we have in a year, right?
John W. Simek: Oh, yeah, the fabrication of evidence. Yeah. Yeah. That’s a big problem. Yeah.
Jim Calloway: Are there specific ethical AI products intended for the legal profession?
John W. Simek: Yeah, there are. As I said early on, Jim, everyone’s jumping on the AI train; there’s this huge, massive movement to make sure it’s incorporated. So the short answer is yes. We don’t have time, and frankly I don’t know them all, to mention all of them. But there are systems like Harvey, which I know Sharon is very familiar with, based on OpenAI’s GPT-4 technology. It’s not actually ChatGPT, but it’s legal-specific, and some major law firms are using it. Lexis just announced, I think a week or a week and a half ago, their product Lexis+ AI, where they’ve got, the way I understand it, kind of a sandbox, if you will, of a ChatGPT environment. So it’s more controllable, to try to address some of the privacy issues and those kinds of things.
But there are other products out there, like Casetext and Fastcase. Fastcase has used AI for a long time in their research tools. Kira for due diligence as well. A lot of the practice management products, as you’re aware, are now integrating AI. And there are many, many more. There are a lot of legal products now that are using artificial intelligence, and if they don’t today, stand by, because they will pretty soon.
Sharon D. Nelson: Are there specific security concerns, John, that lawyers should have when they’re working with products such as ChatGPT?
John W. Simek: Security and privacy? Yeah, definitely. There’s the accuracy piece of it. Confidentiality: what happens to the data that you give these AI systems? It might be discoverable, as an example. Who has access to it? Now, recently, OpenAI implemented the ability to not save your chat history, so you can turn that off and not have your prompts used for training purposes. They do hold it for 30 days so they can review it for abuse, and then they delete it. But who might have access to that data during that 30-day period? And what if you put client data in there? Have you asked your clients for permission to use that information and pump it in there? There are also concerns about intellectual property and copyright, potential copyright infringement. The argument, obviously, is that, well, ChatGPT and systems like it are not human beings; they’re machines.
But you know, that’s certainly a legal battle, and an argument that’s going to be hitting the courts. Security-wise, yes, because the cybercriminals are going after ChatGPT and artificial intelligence systems like it; they know how powerful they are. So it’s kind of a cat-and-mouse thing. You need to be concerned, certainly, with the security and the access. And primarily, today, it’s the data: what data do you give it, and what happens to that data?
Jim Calloway: Any other concerns you can think of, besides security, that lawyers should have when working with AI?
John W. Simek: You know, I mentioned the confidentiality thing; that’s big. I’d be very careful about what you give it, even if you’ve turned your ChatGPT history off. That’s one of the reasons, Jim, why products like the Lexis product and those other legal-specific ones have the walls and the rules, if you will, set up to isolate things, so that you’re not going to contaminate other people’s work areas and those kinds of things. So I would be more comfortable with lawyers using those products as opposed to an open system like ChatGPT, or Microsoft’s or Google’s systems. Certainly, you want to make sure the data is anonymized, or that you’re not giving client data, if you will, to those open types of environments.
Jim Calloway: Before we move on to our next segment, let’s take a quick commercial break.
Craig Williams: Today’s legal news is rarely as straightforward as the headlines that accompany them. On Lawyer 2 Lawyer, we provide legal perspective you need to better understand the current events that shape our society. Join me, Craig Williams, and a wide variety of industry experts as we break down the top stories. Follow Lawyer 2 Lawyer on the Legal Talk Network or wherever you subscribe to podcasts.
Jared Correia: They say the best things in life are free, which either means The Legal Toolkit podcast is pretty awesome, or we’re totally committed to the wrong business model. You’ll just have to tune in to find out which it is. I’m Jared Correia, and each episode I run the risk of making a total ass of myself so that you can have a laugh, learn something new, and why not, maybe even improve your law practice. Stop Believing podcasts can’t be both fun and helpful. Subscribe now to The Legal Toolkit. Go ahead. I’ll wait.
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today, our subject is Lawyers Hopping Aboard the AI Train and It’s Gathering Speed. Our guest today is John Simek, the Vice President of Sensei Enterprises. He holds a lot of certifications, including Certified Ethical Hacker, and is a nationally known expert in the area of digital forensics. And he is my husband. I hope by the end of the night that’s still true. John, we do give each other a lot of grief, but that’s part of the fun. Okay, what is happening, or not happening, with the regulation of AI, both here and in Europe, which has become a very interesting phenomenon?
John W. Simek: It probably wouldn’t surprise a lot of folks, but you know, the EU is way out in front, and they typically have been. They’ve been out in front with the whole privacy thing and GDPR, and it seems to go EU, then California, then other states, and then nothing happens federally. Isn’t that the chart?
Sharon D. Nelson: That’s close.
John W. Simek: And the same is true with AI. The EU was way out in front. Several years ago, they put together their guidelines and their rules, if you will, about artificial intelligence and transparency and all those kinds of things. They had that out there, and then just this past December they amended those regulations and rules and passed them. So the way I describe it is that the EU is on version 2.0 already. They’ve already had the first generation and amended it. And here in the US, federally, I think we’re at like 0.01 or something, I don’t know.
Sharon D. Nelson: You know, John, we thought they were indeed going to pass those, but they did not. They ended up in some amount of disagreement, and they still have not signed the new proposed law. The last word I had, which was this morning, was that they were shooting to sign it in June. But they’re still way ahead of us; they’ve got a law they want to pass.
John W. Simek: The EU, yeah, yeah. The EU is what you’re talking about.
Sharon D. Nelson: Yeah, yeah. But even they got kind of derailed by disagreements about various things. They’re still way ahead of us, though. I just wish we’d appoint a czar of this or something, because we don’t have anybody in charge.
John W. Simek: But I don’t know that that would solve anything, frankly. I mean, as you know, the White House issued a Blueprint for an AI Bill of Rights back in October, and that hasn’t gained any traction. But that blueprint addresses five principles, which are similar, almost identical, to what the EU has: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation, which is to get rid of some of those black box kinds of things; and human alternatives, consideration, and fallback. In other words, if it messes up, what are you going to do about it? Those are the principles, but I really think we’re in a similar position, Sharon, to where we are with data breach notifications, with privacy, et cetera, in that the states are taking the lead. They’re tired that our own government hasn’t done a heck of a lot, so they’re taking action and passing their own laws. I don’t hold out a lot of hope, if you will, for something coming out federally when it comes to regulating AI.
Jim Calloway: As you guys have discussed, I’ve been getting a lot of calls from lawyers about AI and ChatGPT as well. So let’s close with what many lawyers have identified as the most important question about AI. Will AI replace lawyers?
John W. Simek: It probably doesn’t surprise you, Jim; everyone has an opinion. I say no, I don’t think so. I believe AI is going to help lawyers be more efficient and produce a better, more consistent product. It’s highly unlikely, I think, at least in my lifetime, our lifetime, that AI is going to replace the human factor. There are certain things that a human being is capable of doing, even in the practice of law, those human touchpoints, that AI can’t replace. It’s more likely, I think, that the lawyers who are utilizing artificial intelligence and taking advantage of it are going to be the ones who replace the lawyers who don’t. Really, I think that’s true. AI, I think, is here to stay in some form or fashion, with regulation or without. The train has left the station, and we need to get our arms around it; we need some guidelines, et cetera. But it’s not going away, bottom line. I just want to close with one thing to remember, and Sharon knows this well. It’s a quote from our friend Ed Walters, the CEO of Fastcase: “When you’re dealing with AI, what could possibly go wrong?”
Sharon D. Nelson: I think that question is about to be answered in spades. Well, John, we certainly thank you for joining us today; it’s always a pleasure to have you on The Digital Edge podcast, and it’s fun. And of course, AI has been an obsession for us, since everybody seems to want some sort of webinar or session on AI and what they can and can’t do with it. I think we’ve done about a dozen and have about two dozen scheduled in the future. The interest is that intense. I’ve never seen a subject lawyers have become so enraptured with and really want to know so much about. So I really thank you for all the information you gave the audience today, because a lot of this is still new. As we say all the time, and I know you’re always saying it, this is what we know today, because tomorrow something will have changed. Let me think a minute: it was about April 24 when we did the last PowerPoint. And we looked at it yesterday, after we gave the presentation, and said to ourselves, “You know, we’ve got to redo the PowerPoint already.” In two weeks. I mean, the PowerPoint has to be substantially redone. That’s how fast this is moving. So thank you for taking us on the train, because it is speeding down the track, and that’s a fact. We really appreciate your time and effort, John.
John W. Simek: Well thanks so much. It’s always fun to be with you guys.
Sharon D. Nelson: That does it for this edition of The Digital Edge: Lawyers and Technology. And remember, you can subscribe to all of the editions of this podcast at legaltalknetwork.com or on Apple Podcasts. And if you enjoyed our podcast, please rate us on Apple Podcasts.
Jim Calloway: Thanks for joining us. Goodbye, Ms. Sharon.
Sharon D. Nelson: Happy trails, cowboy.
Outro: Thanks for listening to The Digital Edge, produced by the broadcast professionals at Legal Talk Network. Join Sharon Nelson and Jim Calloway for their next podcast covering the latest topic related to lawyers and technology. Subscribe to the RSS feed on legaltalknetwork.com or in iTunes.