| Published: | April 8, 2026 |
| Podcast: | Heels in the Courtroom |
| Category: | Legal Technology, Litigation, Women in Law |
In this episode of Heels in the Courtroom, Liz Lenivy, Mary Simon, Elizabeth McNulty and Katie St. John explore recent cases raising concerns about attorney-client privilege, confidentiality, and discoverability when AI tools are used. The hosts highlight the risks of exposing sensitive information as more clients use AI to draft summaries of their cases when looking to hire an attorney. The ladies also examine how AI is emerging in discovery and expert research, and what lawyers must do to stay ahead as these tools continue to reshape litigation.
Special thanks to our sponsor Simon Law Firm.
Announcer:
Welcome to Heels in the Courtroom where the trial lawyers of the Simon Law Firm break down what it takes to win in the courtroom and in life.
Liz Lenivy:
Hello and welcome back to Heels in the Courtroom. I’m Liz Lenivy, and today I’m joined by Katie St. John, Mary Simon, and Elizabeth McNulty. Hello, ladies. Hey.
Katie St. John:
Hi. Hello.
Liz Lenivy:
So we are again talking about AI within the context of law. It’s back in the news, but this is a slightly different story with a slightly different perspective. I feel like oftentimes when we are talking about lawyers and AI, it is attorneys who are getting busted for submitting filings to the court that have hallucinated case law. But this time it’s actually a discussion about AI and client usage. There was a recent decision out of the Southern District of New York that I want to focus on today, and it raised some interesting questions and concerns about the use of AI, especially in the context of our communications with our clients and how they communicate back with us. But I think it can also raise questions, within the context of what we do, about how we request things when we are thinking about discovery.
So first I’m going to put this into context. Before I discuss that Southern District of New York decision, I’ll bring up another case that actually came down in November of last year. This was a case called Fortis Advisors LLC versus Krafton Inc., a business dispute that’s pending in Delaware state court. This particular dispute involved two corporations, I guess, having a disagreement over a video game. The plaintiff is a video game studio, or designer, and the defendant is a video game publisher. The allegations were that the defendant, the publisher, breached a purchase agreement and specifically terminated the plaintiff to avoid paying a $250 million bonus owed if the video game hit a certain amount of sales. So the plaintiff filed a pretrial brief in November of 2025 claiming that, through discovery, they had learned that the defendant’s CEO had consulted ChatGPT specifically to discuss how to dodge the payout.
And I’ll read directly from the filing: Desperate to avoid such a professional embarrassment, CEO obsessed over finding a way to avoid defendant’s obligation to pay the earnout. Another member of the team told CEO, “It seems to be highly likely that the earnout will still be paid if the sales goal is achieved regardless of the dismissal with the cause.” CEO turned to artificial intelligence to help him brainstorm ways to avoid paying the earnout. ChatGPT likewise advised that it would “be difficult to cancel the earnout,” but CEO was undeterred. So this is a situation where the CEO has decided to let ChatGPT be his lawyer. And it’s not exactly clear how the plaintiff found out that the CEO had used ChatGPT, but my guess is that there were some Slack discussions that went back and forth, and that’s how they learned about the CEO’s use of ChatGPT.
Although, specifically in this case, the defendant alleged that the ChatGPT conversation no longer existed. They didn’t deny that it had happened, but they said that it no longer existed, so they could not produce the conversations. Now, the second case, which I think is making some bigger waves among attorneys, is US versus Hepner. This is actually a criminal case that’s pending in the Southern District of New York. And I want to give credit to the source that I’m getting a lot of this information from: jdsupra.com. In this particular case, the defendant, Bradley Hepner, was arrested in November of 2025 on securities and wire fraud charges regarding the collapse of GWG Holdings. So it’s a criminal case. After his arrest and after retaining defense counsel, so now he’s got an attorney on board and he’s been charged, Hepner allegedly used a consumer AI chatbot, Claude, to prepare reports outlining his defense strategy and potential legal arguments.
He then shared the AI-generated reports with his lawyers. The government then sought production of those 31 documents, but defendant Hepner asserted both attorney-client privilege and the work product doctrine. The judge in this case rejected both, ruling that the reports are protected by neither attorney-client privilege nor the work product doctrine. And this case is, I think, kind of a big deal, because it’s the first federal court decision to directly address whether privilege attaches to materials generated through a consumer AI platform. And the court here reasoned that there was no privilege because the AI, Claude, did not promise confidentiality. So from JD Supra: Judge Rakoff found that sharing information with a consumer AI tool is inconsistent with the confidentiality requirement of privilege. As the court stated, the defendant disclosed it to a third party, in effect, AI, which had an express provision that what was submitted was not confidential.
The government pointed to the AI provider’s privacy policy, which permits use of inputs for model training and allows disclosure to governmental authorities and third parties. Under well-established law, voluntary disclosure of privileged information to a third party that does not maintain confidentiality waives the privilege. So because they shared it, and because, I guess, Claude specifically said, “Nothing you tell us is confidential; we’re going to use it to train our AI,” the court ruled that it is a third-party communication and the information is open for the government to obtain. The court was also not convinced by the defendant’s argument that he could retroactively protect himself with privilege by sending the reports to his attorneys. Preexisting non-privileged materials do not become privileged merely because a client eventually shares them with an attorney. Specifically, the judge reasoned that the work product doctrine failed because Hepner didn’t do the research at the instruction or request of his attorneys.
“The doctrine protects materials prepared by or for a party’s attorney in anticipation of litigation. A layperson’s independent AI-assisted research falls outside that scope, and the work was not performed at counsel’s behest.” So obviously this is a very narrow ruling that’s very dependent upon the facts. Specifically in this case: what did the AI model tell the defendant, and why was the defendant doing this kind of research? Was it at the request of the attorney? But I think it raises some interesting questions, and I kind of want to turn this over to you all. Have you had the experience now where you’re seeing more AI-generated material, whether it’s from opposing counsel, from, I don’t know, law clerks, from clients, or from experts, or you’re hearing about it? Has anyone seen that come up in your practice?
Mary Simon:
Well, this is an interesting one. Looking at the case, I’m taking it to mean that the judge is essentially saying that if a party, talking specifically about a party in this case, is going to use a third-party platform that is AI based, or whatever third-party tool they’re going to use, it’s the equivalent of standing in a coffee shop talking out loud about your case, because it’s not protected. So even though you could still be in your living room, sitting there by yourself as you’re typing things in, the court, at least in this case, is interpreting that as not protected. The only place that I’ve encountered it so far, I’d say maybe in the last six months or so, is that sometimes I see it on initial intakes with clients. Things will be written in a way that, for lack of a better word, sounds a little bit robotic, kind of like a robot-generated email. I’d imagine it would be something where someone typed in, “How can I ask a lawyer about looking at this case?” And then they copy and paste that email and send it to you. I’ve seen it there. And then also, I think it was early 2025, I had a client who would send follow-up emails after I’d talked with him, and they would have a series of questions that went beyond what he and I had talked about. And it was all stuff that I’m happy to pick up the phone and have a conversation about, but it’s either that he was talking to another attorney to get that information, or a computer. I mean, just based on the context, that’s what I could gather. And that matter has since ended, but it never got to a point where I needed to have a conversation like, “What’s prompting these questions? Where are you getting these questions?” It didn’t come up, because it was nothing I couldn’t have talked to him about.
But it is interesting now that a decision is coming out. What I’m taking from this too is that it depends on who’s using the tool and what the tool is for that person. I mean, we all know that when we have an expert, if the expert is doing independent research, they’re going to be asked about that in their deposition by both sides. “What did you look at when you formed the opinions in this case?” “Oh, I used the book that all general practitioners use.” And then we get the book, we look at the book, and we look at the chapter that they read. So it seems now that we are on that trajectory of, well, what other tools did you use as an expert? That all, to me, fits within the bounds of stuff we already know in terms of what’s going to be discoverable or what avenues are going to be explored.
It’s something that’s new that I definitely need to think about.
Liz Lenivy:
Yeah. And I had this happen recently. I didn’t take this case, but I got an unprompted summary from a person seeing if I would take their case. They were providing details, but the way that it was written, and I’ve used ChatGPT before, I know its style of summary, with the headings and the bullet points and everything, and it was very obvious to me because it had those headings, and it even had the little emojis next to them. And I was like, “Oh, so this is ChatGPT.” But it was all of this person’s information, health information. They put the information regarding their situation into ChatGPT and, I guess, asked it a prompt like, “Can you take all of this and prepare a nice summary to send to a lawyer to review my case?” And the thing that we’re always told is that with AI, the more information you feed it and the more instructions you give it, the better the output is going to be, right?
So I’m sure that that’s what this individual was doing. They were saying, “Well, I’m going to just word-dump everything that happened to me and all of the information I know into it, and then it will organize it for me.” And then ChatGPT, I don’t know if they prompted this or not, but ChatGPT went a step even further in giving a medical and legal analysis, saying, “Well, this is potential negligence here,” and then X, Y, Z. And I didn’t take this case, but as we now know from Hepner, that’s not necessarily a private communication just because you’ve generated it with the intent of giving it to the attorney. If the attorney didn’t ask for it, then it’s potentially not covered by attorney-client privilege, and it’s not protected by the work product doctrine. So do we need to start telling clients ahead of time, “Please don’t send me anything AI generated, because I don’t know if that’s necessarily going to be protected or not”?
And it’s been a couple of years, but I worked on a products matter several years ago, and something that always came up was warnings. We always had a failure-to-warn claim attached to it. And so the plaintiffs were often asked in their depositions, “Well, what kind of research did you do before the surgery?”
Elizabeth McNulty:
Sure.
Liz Lenivy:
“Before you got this product implanted in you, what kind of research did you do? Did you go onto Google? Can you recreate any of those Google searches? What were the specific inputs that you were putting in?” And I think when we do a general Google search, we just put in a couple of words here and there and it generates some options. But now, with ChatGPT or these other AI tools, people are putting in specific details about themselves. And I think that opens a whole new door to potential problems, things we need to advise clients about. This is something that I’m now going to have a conversation about with clients in those initial communications. It’s sort of like the social media conversation we have: be careful what you put on social media. I think now we have to say, be careful what you might put into an AI.
Even if you think that you’re typing this up to send it to me, that doesn’t necessarily mean it’s protected. It could still be something that is discoverable by the other side. So that’s something I think we need to talk about with clients. But on the reverse side of that, maybe you represent a plaintiff who has been injured by a defendant trucking company, and the CEO of that trucking company is going in and saying, “How do I get out of this?” or “What do I do if my driver was drunk?” You have no idea. We need to be asking about that in discovery: have you used an AI model, or have you discussed this specific incident, or this specific driver, or whatever, with an AI model? Do you have those chats? You need to produce those chats to us. So that’s something that I think also is a consideration that we have to have going forward.
Mary Simon:
Yeah. And it’s the same stuff that, as lawyers continue to learn a lot more about AI and using it in practice, and the different resources that are available specifically to attorneys, with subscriptions and whatnot, it’s the same stuff that we know ourselves. Like, don’t be using a chatbot to write briefs. You know what I mean? We know we’re not putting in sensitive information. To a client, I would say, “It’s not a lawyer. If you’re someone anywhere who thinks that you might have a legal problem, go talk to a lawyer about that.” Don’t just put it on the internet, in the same way that you wouldn’t go into the coffee shop and crowdsource the people there: “Hey, I have this issue. What do you guys think about it?”
Don’t do that with ChatGPT. Just contact an attorney and ask them those questions, and then at that point, like what you’re saying, Liz, the attorney should be telling you the stuff that we normally tell clients. Everything at this point is where it’s at. No deleting anything. It is what it is with your social media and stuff, but don’t go posting about it, because whatever conversations we have, they’re now between us and the client. But it does add an extra layer of communication, and it’s so vast. I mean, that’s the thing. In my head, you just think about how many silly things you might even use it for. I can’t tell you the number of times I’ve used ChatGPT for things with the kids, like what’s a typical sleep schedule for a however-many-year-old. And then I’m like, why am I doing this? I should be telling myself, “Call the doctor instead of typing this in.”
It’s important to bring up because you don’t think about it. It’s just a tool, and it makes sense why someone would do that on either side. But moving forward, it is an important conversation to have. And also, it’s a significant case to read, to familiarize yourself with.
Elizabeth McNulty:
I am curious, though. In both of these cases that we’ve been talking about, they’re seemingly, I don’t want to say the bad actors in the scenarios, but they both seem to be sophisticated users. One is the CEO of a big company, or a company, and the other one is a criminal defendant in a white-collar case. And my guess would be there would be a distinction between those people and the kind of folks that we represent. Or I’m curious if a court would decide it differently with that in mind, because both of these people are trying to get out of bad situations they’d gotten themselves into. And I feel like our clients are using these tools more in situations where they want to learn more about what happened to them. I have clients where you can tell that they’ve talked to ChatGPT about just litigation in general, I feel like.
And obviously that’s different. I don’t think the other side would care to learn anything about that, but I would assume that there’s a line at some point. Even if some of this is discoverable, they aren’t entitled to every single search or prompt that a client has put in about litigation or about their case. I think there’s got to be a line drawn. I could also see courts going the other way in some instances. I mean, the litigation is so new. I would be curious if any of this goes up on appeal.
Liz Lenivy:
I’m not sure that either court made the distinction specifically that the background or the education of the defendant was the big issue. I think it was more of: what was the input? What was the prompt? And I think if a client goes on and says, “Can you generally explain how a medical malpractice case, or a personal injury case, or wrongful death, or whatever, how this case works?” that’s no different than a Google search, right? But I think where people are starting to get in trouble is when they put in specific information. So again, I’m imagining this scenario where someone says, “Okay, so I went to the doctor and the doctor told me this, and this is my back, this is my history, and then the orthopedic surgeon told me I needed this. Are these two surgeries related? But then I also had this preexisting condition.” Where I start to get concerned is where they are putting personal information in there that could be relevant in a deposition, for example.
Now, there’s also the question of asking for it. People have to ask for it, and people, I think, have to ask with the right parameters in those requests, right? Because if someone just says, “Well, I want every single communication you’ve ever had,” you have to give me more information than that. And I think it has to be time-limited. And again, it’s the thing that we all do where we want narrow requests. But I still think, regardless of the level of sophistication of the client, it’s a conversation we have to have: look, you can’t be putting these things in, because if a court does decide that they are entitled to that information, we might have to give them whatever it is that you fed into it. And that’s a conversation you’ve had about your case, about your injuries, that I don’t have control over. I can’t polish that for you.
It’s just whatever you spat into it.
Mary Simon:
I had a client once who, to Elizabeth’s point, and this happens, probably everyone at this table has had this happen, my client was talking to her doctor and brought up the case and her genuine concerns about certain damages that she had. And it was the most candid doctor-patient conversation, and the doctor put it in the records: she’s stressed about X, Y, and Z injuries, and she recently talked to an attorney about not being able to do whatever in the future. And the other side jumped on that in the records, because we produced the records, like, “Oh my gosh, this is admissible,” and all this. And the judge was just like, “No, it’s not. This is clearly an individual who’s trying to advocate for herself and being completely forthcoming with her physician and trying to figure this out.”
What the judge was looking at there is wildly different from the case that we just looked at today. One circumstance is someone trying to self-advocate, being like, “What’s a good way I can articulate this to my attorney?” That’s not the same thing. There is a distinction there in intent, in the purpose of it, in the use of the information, in the admissibility of it, versus, “Hey, I have a deposition coming up and I own a company, and I need everyone on the interwebs who’s ever entered any information about how to testify in a deposition to tell me how to do it. Give me an outline so I can study it before my deposition.” Those two things are wildly different, but we can see that those things are now coming into courtrooms, which is something to be mindful of.
Katie St. John:
I also think it’s interesting, because the more that this happens, there are going to be some lines drawn. But I think it’s one of those things where, like with the social media conversation, Mary, you mentioned if you already posted it, it’s out there, don’t delete it, right? But with AI and the different platforms, sometimes, depending on whether you log in or have a subscription, maybe your conversations are saved, maybe they aren’t. I mean, it’s a whole other new world, where as attorneys you almost have to familiarize yourself with the different platforms and how they save things. Because if your client does come to you and says, “I did do all of this,” you’ve got to know how to help navigate that situation and be prepared in the event that it does come up and you are ordered to turn it over. You don’t want to be the one at the 11th hour trying to figure out, “Well, I don’t even know if we still have that,” when you knew about it when you took the case. We don’t have a ton of guidance right now, but as it continues to happen, I think it’ll be interesting to see where it all shakes out, where you’ve got to draw the line.
Liz Lenivy:
And Katie, speaking of new platforms, the other thing that I want to mention is that this is not only a conversation we need to be having with our clients, and figuring out how to get discovery on opposing parties, but also with experts. And this actually just happened to me recently in a case. I was deposing a physician in a personal injury matter, and I asked my standard questions: “Did you do any type of research?” And I said, “Did you go on PubMed or UpToDate?” And I always feel like, “Oh, see, I’m in the know, Doc. I know about UpToDate.” And the doctor responded, “No, I didn’t do any of that, but I did do a little bit of research on a different platform.” And I go, “Well, can you explain to me what action you specifically took?” And he goes, “Oh, I looked at OpenEvidence.” And I went, “Well, that’s a new one.” I literally said, “That’s a new one to me. What’s that?”
And he responded, “Oh, it’s fantastic. It’s a new AI tool for physicians only. You have to be granted special access to get it.” He called it high-functioning, said it’s used more and more in medicine. And I was sitting there like, “Well, what the hell is this?” And I go, “Is that like ChatGPT?” And he goes, “It’s an online AI, it’s medical, and it’s restricted to physicians.” And so I asked him, “Well, what were the prompts you fed it? How did you utilize it?” And he told me, “This is the information I entered,” and it was specific to the case that we were working on. And I go, “Well, what response did you get?” And he goes, “Oh, the things I already knew.” And I was like, “Okay.” Now look, that case is resolved.
I don’t need to do any follow-up. But now I’m thinking, well, in my Exhibit A attached to my notice of deposition for this expert, I asked for all research that you did. You have clearly done some research, even if it’s using this special AI just to confirm what you already knew, Doc. So is that something where I now need to go and tweak my notice of deposition, or my Exhibit A, and say, “Hey, when I say research, I mean all research, not just your literature or your books or whatever”? Which…
Mary Simon:
Of course there is none.
Liz Lenivy:
There’s never any. But if you use a search engine, I get to know what you put into the search engine, and I get to know what answers it spat out. Because here’s the thing: if they say that they used it, what if they changed some facts, or what if they left out some important facts in order to, I guess, reaffirm what they think they already knew? That’s important. I need to know: what are you feeding the AI? So that’s something that, again, was new to me, that just came up pretty recently, and I know I’m going to be changing my Exhibit As going forward to make sure that I ask for that specifically, and to also ask the question in depositions: are you using any AI models to support your opinions in this case?
Katie St. John:
I think that’s a great point. And that’s exactly what I mean when I say at some point there will be records. For a doctor, he’s saying it’s physician-only, you’re granted some type of special access. To me, that would mean your chats are saved, right? There’s going to be a history of what you do on that. I know you said that case settled, but I would love to know how that would have shaken out.
Liz Lenivy:
Yeah. Yeah. But it’s good to know. It’s good to know for the future. So I’ve got a feeling we’re going to keep talking about AI in other contexts. I know this is the third or fourth episode we’ve done on AI, but it is such a growing part of our practice, in all different aspects of our practice. So I think it is our responsibility to stay up to date on it. And with that, thank you all for joining us for this episode of Heels in the Courtroom. Remember, new episodes drop every other Wednesday, and if you’d like to join the discussion, you can reach out to us at heelsinthecourtroom.law. Bye guys.
Announcer:
Thanks for listening to Heels in the Courtroom. At the Simon Law Firm, we know that trial success isn’t just about experience, it’s about strategy, resources, and the power of collaboration. That’s why attorneys across the country partner with us to strengthen their cases and deliver justice for their clients. If you’re interested in working with our team of seasoned trial lawyers, call 314-241-2929. And if you enjoyed the podcast, be sure to subscribe and send us your thoughts at heelsinthecourtroom.law.
Heels in the Courtroom is a fresh and insightful podcast offering the female lawyer's perspective of trial work with Liz Lenivy, Mary Simon and Elizabeth McNulty.