We all know generative AI is making significant waves all over the world, but lawyers need to take a hard look at its implications in the law and their own legal practice. John Simek and Sharon Nelson welcome Doug Austin to discuss AI technologies and their impacts on attorneys. They cover notable legal cases, new ABA resolutions, the latest AI trends in legal practice, and much more.
Doug Austin is an established eDiscovery thought leader with over 30 years of experience providing eDiscovery best practices, legal technology consulting, and technical project management services to numerous commercial and government clients.
Special thanks to our sponsors.
Intro: Welcome to Digital Detectives. Reports from the battlefront. We’ll discuss computer forensics, electronic discovery and information security issues, and what’s really happening in the trenches. Not theory, but practical information that you can use in your law practice, right here on the Legal Talk Network.
Sharon Nelson: Welcome to the 151st edition of Digital Detectives. We’re glad to have you with us. I’m Sharon Nelson, president of Sensei Enterprises, a digital forensics, managed cybersecurity and managed information technology firm in Fairfax, Virginia.
John Simek: And I’m John Simek, vice president of Sensei Enterprises. Today on Digital Detectives, our topic is Generative AI and ChatGPT: Why the Legal Industry Needs to “Brace for Impact”. Our guest today is Doug Austin, an established eDiscovery thought leader with over 30 years of experience providing eDiscovery best practices, legal technology consulting and technical project management services to numerous commercial and government clients. Doug has published a daily blog since 2010 and has written numerous articles and white papers, receiving the JD Supra Readers Choice Award as a top eDiscovery author and top cybersecurity author. Doug has presented at numerous webcasts, events and conferences, including Legal Tech New York, ILTACON, Relativity Fest, the University of Florida eDiscovery Conference, the Master’s Conference, and many local and regional conferences. It’s always great to have you back with us, Doug.
Doug Austin: Great to be here, John and Sharon, as always, got a fun topic to get into today, so I’m looking forward to it.
John Simek: Everyone’s talking about ChatGPT, and you’ve covered several stories about it. So tell our listeners a little bit about what your impressions have been so far?
Doug Austin: Well, gosh, John, where do I begin? Impressions is a great word, because ChatGPT can do some really impressive things. Certainly, at its best, ChatGPT can generate some amazing content and provide insightful analysis. And it’s being used for everything from writing emails, articles and blog posts, which is a little scary to me, being a blog writer, to even doing things like passing the bar exam or evaluating a set of documents and identifying the important facts from them, which makes its potential for eDiscovery really exciting. I even covered a story where it was used to officiate a wedding in Colorado a couple of weeks ago, believe it or not. So I think we’re just still beginning to realize all the things that ChatGPT can do, and that Generative AI models like it are doing and are going to do to revolutionize how we manage and use data in business and society.
Of course, it can also do some impressively awful things as well. At its worst, it can hallucinate, it can allegedly violate data privacy and copyright laws, and it can be used by hackers to streamline cyberattacks and create deep fakes, which can be used to extort people. So there’s a lot of downside too.
From a personal standpoint, I’ve both read everything I can get my hands on about it and also spent considerable time actually using it and testing it out. I do have a ChatGPT account and I’ve tested it out quite a bit. And one thing I should make clear is I’m not a data scientist. I’m an eDiscovery professional, who’s been evaluating it from a use case perspective for eDiscovery, Info Gov, cybersecurity and other disciplines.
I should also note that I’m using the term AI generally, even though I know there are more precise terms that are appropriate in certain cases, like machine learning or augmented intelligence, which proves I’m not a data scientist, because they tend to be sticklers for that sort of thing. So if you are listening to this and you are a data scientist, please be kind.
John Simek: Yeah. I think the other point, Doug, is that you have a sense of humor, too.
Doug Austin: That’s true. That’s what I’m counting on to keep ChatGPT from replacing me. As far as I know, it doesn’t have one.
Sharon Nelson: That’s really funny. Why do you think, Doug, that ChatGPT became so popular so very quickly? And how will that popularity translate to other generative AI, do you think?
Doug Austin: Well, to answer that question, I’ll start with a quick training lesson on how to use ChatGPT. Step one, sign up for an account. Step two, type what you want to know in the box and hit enter. There, you’ve now been trained on how to use ChatGPT. It’s that easy. And of course, just because you can ask it a question doesn’t mean it’s an appropriate or precise question. But that hasn’t stopped literally hundreds of millions of people from using it. It’s super easy to use, and if you don’t get the answer you want because the first question wasn’t precise enough, you can simply try asking it a different question till you get there.
It literally has no learning curve to use, which made it immediately popular with the masses. I think the other thing about ChatGPT is it’s fast. You ask your question and it’s almost instantaneous in its response to your question.
Of course, it’s also very verbose in its response to almost any question, usually with a bullet-pointed list. It’s sort of like being stuck sitting next to Cliff Clavin at a party. It’s not just going to answer the question, it’s going to deliver a monologue in answering the question. But of course, you can always pick and choose what you want out of that. Still, the ease of use and the speed with which it responds, I think, has set a standard that gets people thinking about how they can apply it and other Generative AI to all sorts of use cases, which is why literally everyone’s talking about it.
John Simek: I know you want to talk a little bit about the caveats and issues that are associated with ChatGPT and other Generative AI that you’ve seen, but expand on that a little bit for our listeners, will you?
Doug Austin: Sure. Well, so I mentioned hallucinations before. So early on, when I first got an account, I tested it by asking a few questions about eDiscovery. I asked it, what is electronic discovery? And it gave a really good answer. I asked it who Craig Ball is, and again, it was spot on in identifying who he is. Then I asked it who I am and it said, I’m the founder and CEO of Eagle Discovery, which is a company that doesn’t appear to exist, that I wrote several books on eDiscovery that also don’t appear to exist, so of course I didn’t write them. And my favorite one is that I’ve been teaching eDiscovery courses at the University of California, Berkeley since 2000, which I haven’t been doing either. I don’t know who that Doug Austin is, but he’s way more impressive than this Doug Austin.
And of course, that’s what everyone’s calling ChatGPT hallucinations, which seems to be a fancy way of saying it makes stuff up. I usually substitute a different word for stuff, but you know how we do.
John Simek: We do.
Doug Austin: Yeah. So that’s one of the most common issues. And obviously, we saw that recently with the Avianca Airlines case, where the attorneys relied on ChatGPT for case law citations. And at least six of them were bogus, which got them a $5,000 sanction and a major hit to their reputations as lawyers. And of course, my favorite part was the lawyer asking ChatGPT if they were real cases, and it replied, “Yes, they’re real cases”. Well, that solves it right there.
Sharon Nelson: I love when you ask a liar whether they’re telling the truth.
Doug Austin: Right. Yeah, I mean, if I were George Santos, I would say that ChatGPT wrote my resume and that’s what happened there. But anyway, I will say, I’ve seen improvements since I started using GPT-4, which came out in February. I really haven’t seen any hallucinations to speak of. And I asked GPT-4 who Doug Austin is. Now, it simply says, it doesn’t have enough information to answer the question, which is better than giving false information, so that’s an improvement.
But certainly, that’s not the only caveat. I think one big one is that the information it’s providing isn’t sourced, which means it could be a hallucination, or it could conceivably be someone’s thought leadership work product or other copyrighted information, which is what we’ve been seeing recently with a few lawsuits filed against OpenAI by book authors like comedian Sarah Silverman. It creates really great-sounding content, which tends to get people to trust it a bit too much, but you don’t know where the information came from, and that can be problematic.
And then, of course, like any powerful tool, it also has the power to wreak havoc. There are data privacy concerns about the data used to train the model, and people have been sharing personal data with it, which certainly doesn’t help anything. And then probably the biggest caveat may be the ability for hackers to manipulate it to enhance their ability to initiate cyberattacks. It’s being used by hackers to do things like obfuscate malware signatures, create mutating malware that can avoid detection by EDR applications, and crank out phishing, social engineering and ransomware attacks.
It’s trained not to do these things, but hackers are finding that if they ask the right questions in the right way, it can be tricked into helping them, which is scary. It’s certainly notable, and we’re recording this on July 13, that it was just in the news that the FTC has opened an investigation into OpenAI. We don’t have a lot of information at this point, but certainly some of these caveats are at play there, so it will be interesting to see how that unfolds.
Sharon Nelson: Well, that was fascinating, so I’m going to go back to something a little less fascinating. As you know, there have been a few ABA Resolutions related to the use of AI over the past few years. What do you think the impact of those resolutions has been on the legal community?
Doug Austin: You’re right when you say ABA Resolutions are not fascinating, but I still want to talk about them, because I think they’re notable. And there are three resolutions that I know of. Resolution 112, which passed in 2019, urges lawyers and courts, and I’m quoting here, to address ethical and legal issues arising from the use of AI in the practice of law. And the report that accompanies Resolution 112 is excellent in laying out ethical duties for lawyers regarding AI, risks such as bias and transparency that need to be considered, and questions to ask an AI vendor.
The other two resolutions are more recent. Resolution 700, which was passed last year, called on governmental entities to refrain from using pre-trial risk assessment tools unless the data supporting the risk assessment is transparent, publicly disclosed, and validated to demonstrate the absence of conscious or unconscious racial, ethnic, or other demographic, geographic or socioeconomic bias. Yes, I’m quoting again, as you can tell. From a criminal justice standpoint, this has been important, because we’ve seen major bias concerns in AI tools such as facial recognition.
And then finally, ABA Resolution 604, which was passed earlier this year, is targeted at developers of AI, to ensure their AI is subject to human authority, oversight and control, that they should be accountable for consequences related to their use of AI, and that they should ensure the transparency and traceability of their AI. So, unfortunately, I haven’t found that any of these resolutions have had much impact so far. In fact, many lawyers I’ve talked to aren’t even aware of Resolution 112, and it’s been around for four years now. So it would be great if we can at least spread awareness within the legal community, because the first step toward having an impact is simply knowing that these resolutions exist.
John Simek: Well, before we move on to our next segment, let’s take a quick commercial break.
Adriana Linares: Are you looking for a podcast that was created for new solos? Then join me, Adriana Linares each month on the new solo podcast. We talk to lawyers who have built their own successful practices and share their insights to help you grow yours. You can find new solo on the Legal Talk Network or anywhere you get your podcast.
Sharon Nelson: The Digital Edge Podcast, where the law and technology intersect. I’m Sharon Nelson, and together with Jim Calloway, we invite professionals from all fields to discuss the latest trends, tips and tools within the legal industry. Stay up to date on the rapidly changing legal tech landscape with The Digital Edge on the Legal Talk Network.
Welcome back to Digital Detectives on the Legal Talk network. Today, our topic is Generative AI and ChatGPT: Why the Legal Industry Needs to “Brace for Impact”. Our guest today is Doug Austin, an established eDiscovery thought leader with over 30 years of experience providing eDiscovery best practices, legal technology consulting and technical project management services to numerous commercial and government clients.
John Simek: Doug, let’s get into this paper that you covered, “The GPTJudge: Justice in a Generative AI World”. Why did you decide to cover that particular paper, and why should legal professionals read it? I’m interested in your opinion, because I know it certainly caught my attention.
Doug Austin: You know, to paraphrase a 1990s movie, they had me at Dr. Maura R. Grossman and Judge Paul Grimm. And I should mention the other two authors, Daniel Brown and Molly Xu, as well. And honestly, I think the paper does a great job of discussing many of the issues and considerations we see in the legal world today related to Generative AI. One of the first things the paper does is describe four different hypothetical lawsuit scenarios involving Gen AI.
The first one involves a pre-law student who sues their university because it determined that the use of ChatGPT to write a paper is cheating, even though the rules only prohibit help from another person, and another student uses spell check and Grammarly. So that’s one case.
Another is a potential copyright infringement of artwork from an app that integrates DALL·E 2, which is OpenAI’s Gen AI model for images. And of course, we’re already seeing copyright infringement cases being filed against OpenAI over ChatGPT. It also includes a scenario involving an elderly couple who are scammed out of $12,000 through the use of Murf.AI, which is an AI voice cloning tool that was used to convince them that their grandson is in trouble. And we’ve seen instances where that’s played out already as well. So it’s really interesting how they set the stage there.
Most of the paper, though, really discusses considerations, including whether we’ll need new rules of evidence to address Gen AI, or whether every case will now require a Gen AI expert and will juries still be able to do their jobs, and even whether judges should be allowed to use ChatGPT or other Gen AI to help with research and draft opinions.
The paper is really quite comprehensive and raises a lot of issues lawyers could be considering or should be considering. So I certainly would recommend it as one that the audience should read.
Sharon Nelson: One of the issues that the paper discusses that really caught my attention is the consideration of deep fakes and how they can have an impact on authentication of evidence. What do you think the impact of deep fakes will be on authentication of evidence? And do you think we do need new rules of evidence to address issues associated with Gen AI?
Doug Austin: Well, you know, it’s interesting. Deep fakes are certainly already beginning to have a major impact on society, and I can only imagine how many deep fakes we’ll see as we approach an election year next year, because I expect we’ll see quite a few. So of course, they’ll have an impact on courts as well.
From a rule standpoint, Federal Rules of Evidence 901 and 902 address authenticity. And Rule 901 states that the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is. And I originally thought, well, that’s not really strong enough today in this age of deep fakes and other fabricated evidence. But I don’t think it’ll necessarily be the rule that needs to change, but maybe how the rule is applied. And I think that means forensic examination and metadata will play an even bigger role in discovery than it already does.
Metadata will be vitally important in establishing how the evidence was created, where it was stored, and so forth. So I don’t have any deep fake court examples to share yet, but here’s a non-deep fake example that I think maybe helps illustrate the point. I’m sure you both remember the Johnny Depp and Amber Heard trial last year.
John Simek: Right down the block from our office.
Doug Austin: Right? Exactly. Yes, indeed. How could you not know about that? And you may remember the famous Amber Heard bruise photograph. I’m walking by the TV one day and I see a metadata expert testifying on the stand, and it’s like a dog seeing a squirrel in the yard. I’m like, a metadata expert? I’ve got to check it out. Depp’s team put a forensic examiner on the stand who pointed out that the photo had been saved in a photo editing program, Photos 3.0.
Now of course, just because it was saved there doesn’t guarantee it was edited, but it’s not a good look. If the photo wasn’t edited, Heard’s team should have produced the original from her iPhone. The metadata cast doubt on the evidence. And I think metadata will become a vital component in determining that the item is what the proponent claims it is. And I think that’s going to be very important in identifying deep fakes. It’s also going to be important in countering the “deep fake defense” that the paper talks about, where legitimate evidence is challenged and questioned as being a deep fake. And I think metadata will be the key to that as well.
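[Editor’s note: Doug’s point about metadata and authenticity can be illustrated with a small, hypothetical sketch that is not from the episode. Before examiners even look at embedded fields like the editing software a file was saved in, a forensic workflow typically records a cryptographic hash of each exhibit, because any edit, however small, changes the hash. The file contents and field names below are made up for illustration.]

```python
import hashlib
import os
import tempfile

def fingerprint(path: str) -> dict:
    """Collect a basic integrity record for a piece of electronic evidence:
    a SHA-256 hash (any later edit changes it) plus filesystem facts."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {
        "sha256": digest,
        "size_bytes": st.st_size,
        "modified_epoch": st.st_mtime,
    }

# Demo: fingerprint a stand-in "photo", then simulate a tiny edit and re-check.
with tempfile.TemporaryDirectory() as d:
    photo = os.path.join(d, "exhibit.jpg")
    with open(photo, "wb") as f:
        f.write(b"original image bytes")
    before = fingerprint(photo)

    with open(photo, "ab") as f:  # any tampering, however small...
        f.write(b"!")
    after = fingerprint(photo)

    # ...produces a completely different hash, so the change is detectable.
    print(before["sha256"] == after["sha256"])  # False: the edit is detectable
```

In practice, examiners pair hashes like this with embedded metadata (EXIF fields, software tags such as the Photos 3.0 detail discussed above), which is why native file production matters: a converted or re-saved copy strips exactly the information needed to make these comparisons.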
John Simek: You hit the nail on the head, Doug. I mean, that’s certainly one of my big concerns as a testifying expert, is the cost of litigation. And is every case now going to require a gen AI expert? And are we going to get to that point where you’re not going to be successful if only one side has one and the other side doesn’t?
Doug Austin: Well, yeah. I mean, I think we will get to the point that every case will have a greater need for technical and forensic experts who understand Generative AI and deep fakes. As you pointed out, John, unfortunately, not everyone will be able to afford these experts, especially a lot of criminal defendants. So that’ll either mean that cases get more expensive than ever or parties will do without the experts they need to effectively evaluate the evidence, and both are a problem.
Hopefully, we’ll see some standards from a discovery standpoint, and people like Craig Ball, who have been shouting from the rooftops about the importance of native file production to preserve and produce the metadata, will finally be heard. Because without that, it doesn’t matter what it costs. The metadata won’t be there to make these evaluations.
Sharon Nelson: Another question from the paper is, will juries still be able to do their jobs? And I wonder about that, too. Do you think that juries will be able to understand the complicated issues associated with AI Generated evidence, especially when audiovisual evidence, real or fake, can be so very compelling?
Doug Austin: Well, you know, Sharon, I think at first it will be challenging because it’s so new. I think the analogy I look to here is DNA. I mean, if you remember back in 1995, during the famous O.J. Simpson trial, a lot of people didn’t understand what DNA was. Certainly, I think several of the jury members didn’t fully understand it, and it may have impacted the verdict in that case.
Now, nearly three decades of true crime shows later, everybody knows what DNA is. In fact, juries almost expect DNA to be present in order to convict. It’s almost a detriment if you don’t have it.
So I think the same will be true of the metadata needed to discern between deep fakes and legitimate evidence. I think presenting it so that juries can understand it will be the key for now, at least until eventually it becomes part of societal general knowledge, which I think it will.
John Simek: Well, before we move on to our final segment, let’s take a quick commercial break.
Jared Correia: They say the best things in life are free, which either means the Legal Toolkit Podcast is pretty awesome, or we’re totally committed to the wrong business model. You’ll just have to tune in to find out which it is. I’m Jared Correia, and each episode I run the risk of making a total ass of myself so that you can have a laugh, learn something new, and why not, maybe even improve your law practice. Stop believing podcasts can’t be both fun and helpful. Subscribe now to the Legal Toolkit. Go ahead, I’ll wait.
Male: If you’re a lawyer running a solo or small firm and you’re looking for other lawyers to talk through issues you’re currently facing in your practice, join the Unbillable Hours Community Roundtable, a free virtual event on the third Thursday of every month. Lawyers from all over the country come together and meet with me, lawyer and law firm management consultant Christopher T. Anderson, to discuss best practices on topics such as marketing, client acquisition, hiring and firing, and time management. The conversation is free to join, but requires a simple reservation. The link to RSVP can be found on the Unbillable Hour page at legaltalknetwork.com. We’ll see you there.
Sharon Nelson: Welcome back to Digital Detectives on the Legal Talk network. Today, our topic is Generative AI and ChatGPT: Why the Legal Industry Needs to “Brace for Impact”. Our guest today is Doug Austin, an established eDiscovery thought leader with over 30 years of experience providing eDiscovery best practices, legal technology consulting and technical project management services to numerous commercial and government clients.
John Simek: Doug, before we went off on the break, we were talking about the paper, the GPT judge. I guess my question to you now is the idea of judges using Generative AI to help research or draft opinions. Do you think that we’re going to get to that point very soon?
Doug Austin: Well, honestly, John, I think we already are. I think I’ve already seen two or three examples where judges have admitted they’ve used Generative AI to do both of those things, or at least tried to use it. I know in one recent ruling I saw, the judge indicated in a footnote to his ruling that he tried using ChatGPT, but reviewed the results and determined the case citations not to be real. So he discarded the output. He did what the attorneys in the Avianca case failed to do: verify the results. And that’s important, right? It’s not just run it and go with it, but it is very good at putting out content that you can then make some decisions with. I expect to see more of that.
But to me, I think the key is verifying the results. AI solutions are always going to be a tool for us to use. They’re not going to be a replacement for us. I can certainly understand concerns out there about job security when it comes to AI, because it replaces a lot of manual tasks that people have historically been paid to perform. But historically, technology developments like AI have also created opportunities and new jobs. So how it will shake out, and whether that will be enough to replace all the jobs that are lost? I don’t know. But I do think when it comes to AI, and how lawyers and judges need to treat it, it will be like the famous saying Ronald Reagan applied to Russian relations back in the 1980s: “Trust, but verify”.
It’s going to provide some really good content for us to use, but lawyers and judges will still need to apply the expertise to know when and how to use it.
Sharon Nelson: What advice do you have for our audience regarding how they should address emerging AI technologies, including Generative AI? Because I think that’s something a lot of people are looking for advice on. And actually, to go along with what you said earlier about job loss, I asked the AI, it was GPT: how can paralegals who are not ready for this new world, with much of their talents able to be replaced, survive? And what a list the AI gave me. It was brilliant. Absolutely brilliant.
And of course, the first one was, understand AI, be the AI person. But many other examples as well of things that they could do. And I was just really impressed by that. But what would you have to say about that, Doug?
Doug Austin: There’s a saying, Sharon and John, and I’m sure you know this saying.
“Sometimes you’re the windshield, sometimes you’re the bug.” And we all know the feeling of being both of those. So AI is the windshield. It’s coming down the highway whether you want it to or not. Whether you’re going to be the bug depends on how you address it. If you try to ignore it and just fight for the traditional ways of doing things that it’s poised to replace, it’s not going to go well. But if you really try to embrace it and, like you said, Sharon, really try to learn about it and be as much of that AI-knowledgeable person as you can, I think you’ll avoid being the bug that’s hit by the AI windshield.
And I always give advice when it comes to learning something. I give the same advice for eDiscovery, and I would give that here. It’s got to be a daily process, and it doesn’t require a lot of time. Set aside 5 to 15 minutes daily to read about it. Set aside one hour a month to attend a free webinar about it, because there’s tons of them out there. Everybody wants to do an AI webinar these days. I’ve done three in the past like six weeks, I think.
The other thing is, to the extent you can, play with the technology. ChatGPT is out there for free, and it’s only $20 a month if you really want to play with ChatGPT Plus and GPT-4. Really, the key is just continuing to educate yourself. It’s a daily process of reading, a monthly process of attending events, and just spending some time to really get to know the technology. That’s the advice I would have for our audience regarding how to become more knowledgeable about AI technologies.
I mean, heck, it’s what I’m doing and I’m sitting here on a podcast talking about it. So there you go.
Sharon Nelson: I think we’ve had much the same path that we’ve walked down. And as always, it’s such a pleasure to have you with us. Always comedic. The bug and the windshield analogy was certainly very appropriate, but it’s kind of nice to get all the knowledge with wit and wisdom wrapped in. So thanks so much for being our friend and our guest.
Doug Austin: Well, thanks, John and Sharon, for having me as always. I always have a great time on your show and just getting a chance to catch up with you.
John Simek: Well, that does it for this edition of Digital Detectives.
And remember, you can subscribe to all the editions of this podcast at legaltalknetwork.com or on Apple Podcasts. And if you enjoyed our podcast, please rate us on Apple Podcasts.
Sharon Nelson: And you can find out more about Sensei’s digital forensics, managed technology and managed cybersecurity services at senseient.com. We’ll see you next time on Digital Detectives.
Outro: Thanks for listening to Digital Detectives on the Legal Talk Network. Check out some of our other podcasts on legaltalknetwork.com and in iTunes.