Cat Casey is Chief Growth Officer at Reveal Brainspace, helping lead innovation and direction for their legal...
Stephanie Everett leads the Lawyerist community and Lawyerist Lab. She is the co-author of Lawyerist’s new book...
Zack Glaser is the Lawyerist Legal Tech Advisor. He’s an attorney, technologist, and blogger.
| Published: | April 9, 2026 |
| Podcast: | Lawyerist Podcast |
| Category: | Legal Technology , Practice Management , Solo & Small Practices |
AI is not just another tool. It is changing how clients think, act, and make decisions. In episode 612 of the Lawyerist Podcast, Stephanie Everett talks with Cat Casey about what lawyers need to understand before AI creates problems they cannot fix.
As more clients turn to tools like ChatGPT, lawyers are facing new challenges around trust, accuracy, and confidentiality. Cat explains why even lawyers who choose not to use AI still need to understand it, and how failing to do so can create real risks for both lawyers and their clients.
They share practical ways to approach AI with the right mindset, including how to think about its limitations, how to use it as a thinking partner instead of a replacement, and how to guide clients who may not understand the consequences of using it.
If you want to stay ahead of where the legal profession is heading, this episode offers a clear and actionable place to start.
Listen to our previous episodes on Using AI in Law Practice.
Links from the episode:
https://thetechnocat.com/about
Check out the episode on YouTube:
https://youtu.be/PQwAxiNjbhY
Have thoughts about today’s episode? Join the conversation on LinkedIn, Facebook, Instagram, and X!
If today’s podcast resonates with you and you haven’t read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you.
Access more resources from Lawyerist at lawyerist.com.
Chapters / Timestamps:
00:00 – Introduction
02:10 – A New Way of Working with AI
05:00 – From Task Execution to Orchestration
07:45 – Meet Cat Casey
09:30 – The Shift Happening Right Now in Law
11:15 – The Rise of AI in Client Behavior
13:45 – Understanding AI Without Getting Technical
16:35 – Why AI Is Not a Research Tool Replacement
18:50 – The Role of Judgment in an AI World
20:10 – AI as a Collaborative Tool
22:10 – Why Iteration Is Part of the Process
23:25 – When AI Creates Legal Risk
25:55 – Confidentiality and Privilege in Practice
27:10 – The Reality of Discoverability
28:20 – Lessons from Past Tech Shifts
29:30 – Where Lawyers Should Start
Special thanks to our sponsor Lawyerist.
Zack Glaser:
Hi, I’m Zack.
Stephanie Everett:
And I’m Stephanie, and this is episode 612 of The Lawyerist Podcast, part of the Legal Talk Network. Today, I’m talking with Cat Casey about what every lawyer needs to understand about AI, because even if you’re not using it, your client might be.
Zack Glaser:
Ooh, I like that. I like that. You just have to at this point. You have to understand it. Don’t use it necessarily. I mean, I would argue that you should, but …
Stephanie Everett:
Yeah, me too. But I mean, we’re going to get into it in the episode, but some new things came out about how you’re advising your clients to use it. Assume, just like we say you should assume your team is using it, and that’s why you should have some policies and procedures in place.
Zack Glaser:
You should
Stephanie Everett:
Also assume your client might use it. And what happens if they put that memo you just wrote to them or the advice you just gave them into the tool?
Zack Glaser:
Yeah. You should
Stephanie Everett:
Probably understand that.
Zack Glaser:
That’s a good point. Well, we are obviously using AI technology here at Lawyerist: ChatGPT, Copilot, but especially right now, at the very least, Claude. And one thing I’ve noticed, and I know you have too, is that the use of Claude and this agentic stuff, the skills and things like that, has really affected how I approach work. And I know that you and I were talking about that. What’s it done for the way you do work during the day?
Stephanie Everett:
Yeah. I mean, at a basic level, it’s about the schedule. So the best example I have is last Friday. I knew I wanted to use Claude to work on a presentation I was preparing, and it takes a minute to do slides. Now, I work from home, but I’m very intentional about it: okay, I go in my office, it’s work time. So normally I go get my coffee and do my morning routine, and then I come into my office, sit down at my desk, and I’m like, “Okay, now I’m ready to work.” Let’s
Zack Glaser:
Get started.
Stephanie Everett:
Yeah, I noticed Friday I did something different, which was, “Okay, I know it’s going to take Claude a little bit to work on this presentation. So I’m going to go into my office first and I’m going to get Claude started. I’m going to give it the instructions that I know it needs and let it get started on its work. And I’ll go get my coffee while it’s working.” And I don’t know, that was just really different and new for me to be like, “Oh, this is what they mean by in the future we’ll be about managing our agents and getting these things working for us and I’ll step away.”
Zack Glaser:
I like what you’re saying there. When you talked about this to me earlier, I was like, “Oh, I feel that completely.” And I like what you’re saying there about managing the agents. I think of myself a lot of times during the day as orchestrating my different agents that are doing different things. And for people that aren’t quite kind of up to knowing, they can’t quite figure out what we’re talking about with agents because it’s not a nailed down term. What I’m talking about are AI tools that I’m able to give deep instructions and context to, and they can take some sort of action, either on my behalf or something like that. They can make something usually. And I think yours, it was making a presentation off of information that you fed in and context that you had fed in and all that. And I manage these different little agents that have different little tasks and different specialties and get them going.
And I find myself kind of anxious when I don’t have one running. I don’t have one doing something. So I’m like, I’m losing that time.
Stephanie Everett:
Yeah. There’s so much about the way we work down to just the schedule we keep and how we are doing it that shifts. And in my house, I mean, you guys know what I do, but sometimes I talk about my husband and he is new to all of this. He never used any AI tools until about 10 days ago when I shared an article with him and I was like, “He’s starting a new business.” And I said, “Listen, you need to read this and I want to have a discussion about it. ” And he did. And then since then, he’s purchased a subscription to Claude and he started using it. And every day he’s just like, “Stephanie, this is blowing my mind because of what it can do. ” And it was almost like I forgot a beginner’s mind. I had to go back and kind of try to help him get it set up because I showed him some of the things I was creating and he was like, “That’s amazing.” And so it’s interesting because now I realize this morning he sort of leapfrogged and is doing some of the work that I’m doing and he didn’t realize some of the basics of how we all used to work with Claude a few weeks ago.
And I had to go back and teach him like, “Oh no, this is just how you have a normal chat experience.” He skipped that and went right into the projects and building stuff.
Zack Glaser:
Oh, that’s funny. And I think that’s the thing: kind of imagining what you’re able to do with this. I spend more time structuring the skills, structuring the things that Claude is able to do, as opposed to actually doing the things. And it really lets my ADHD or ADD or whatever brain go, because I can switch from task to task to task and just keep it moving, keep it moving, keep it moving.
Stephanie Everett:
I have a follow-up, but I’ll take that offline with you because I mean, I think, well, or no, you know what? I’ll just discuss it here
Zack Glaser:
Because
Stephanie Everett:
I think a lot of lawyers, I appreciate you bringing up your brain and I think a lot of lawyers have this. And I think that the threat there, the risk is that you have all these things moving, but then you got to come back and close the loop and how do you get them finished? So maybe even in a prior world, we used to talk about just procrastinating and it was hard to get started on a task because that blank page problem is so big. Now these tools kind of eliminate that blank page problem. And in fact, Cat’s going to talk about that here in a second, like how lawyers can start to think about that and leverage and where they can really use their skills differently. But then we run the risk of kind of starting so many things. How do we get them over the finish line?
Zack Glaser:
I’m actually really glad that you asked that on air, because one of the things that I’ve really noticed is that I’m only able to do that if I’m sticking to my project management system. I have to be Disciplined Zack. I have to have my discipline hat on and remember I’m not the one doing these things. Again, I’m orchestrating this, and sticking to the project management system means I have to have scoped the project. I have multiple templates now for what’s in scope, what’s out of scope, and what does done look like. So all these things exist and they’re being done. But yes, if I just pick up a little project and I’m like, “Ooh, I want to play with this,” you’ll see me three weeks later and it won’t be done. And I think that’s a really good point.
I’m glad that you brought that up because that is the other side of this, is that personally with my little ADHD brain, I have to have that structure and I have to stick with it.
Stephanie Everett:
Yeah. So lots of good opportunities out there, but it’s again about understanding these tools and how you leverage them and your work and your workday and how you structure your workday. And it’s almost like we’re learning to work new again, right? We’re putting new guardrails around your work. I’m setting up my day differently. It’s kind of fun.
Zack Glaser:
Interesting. It is. It is. It is fun and interesting. Well, speaking of fun and interesting, now let’s get into your conversation with Cat.
Cat Casey:
Thank you so much for having me. My name is Cat Casey. A lot of people know me as the TechnoCat, and I am, well, I’m basically like Sisyphus in sequins. I’ve been shouting into the void about AI and legal for 20 years, and now the world’s caught up. I am the chief legal AI futurist for the first AI-native, for-lawyers-by-lawyers conference series, called the Masters AI. I’m the author of, I don’t know if you can see it behind me, but AI and Legal Tech. And I’ve been a leader at big firms like Gibson, Dunn & Crutcher for technology, built foundational tech for KPMG and PwC, and been in the C-suite at a lot of AI companies. So been there, done that, the world caught up, and I want to help the rest of you catch up with us, because it’s a really crazy, interesting, scary, awesome time.
Stephanie Everett:
Yeah, no, I love that. And I love that framing you just gave, of maybe cutting through the noise a little bit and really catching up and figuring out what it is we need to know today, because it does feel like it just shifts. I used to say by the month; now it feels like it’s by the hour. Millisecond, even. So maybe with that, and this is a very broad question: what is happening today that lawyers need to be paying attention to, that maybe has shifted even in the last 30, 45 days?
Cat Casey:
Yeah, I mean, I think if you’re a solo practitioner or a lit boutique, maybe you thought, “Hey, this AI thing, I’m going to let Big Law handle it. It’s more of an enterprise play.” And what’s really shifted is a couple things. One, once Cartman from South Park was talking about ChatGPT, your clients heard about it, your colleagues, your peers; suddenly you can’t avoid it. And that was pretty early on, though; that’s two, three years ago. But then we had Anthropic on February 3rd come in and make a play in the space. We’ve got billion-dollar investments happening all over, and we have everything from the Georgia Supreme Court getting hallucinated cases to false information. There’s a lot of stuff happening. And so I think the shift in the last 45 days or so is it went from “Hey, this is a tech-lawyer play, this is an enterprise, Big Law play; I don’t really need to pay attention yet” to “Oh my goodness, if I’m not advising my clients, they might be feeding our legal notes into ChatGPT and vitiating privilege,” which happened two or three weeks ago.
So in order to do the job of lawyering, you need to be at least AI literate. It doesn’t mean I’m using Claude to build the AI-empowered law firm of the future, but you need to know it to be able to issue spot, identify risk, and to, I don’t know, guide your clients. The world, I think, is so AI enmeshed now that to effectively advise, you need to at least be able to issue spot or use that linguistic power that lawyers have to parse problems to identify them and to say, “Hey, I might need help.” And so that’s the big pivot for me. It’s not that, oh, we all have to use AI, some will, some won’t, but we all got to be able to talk about it because the way the world works has shifted so much.
Stephanie Everett:
Yeah. I think it’s a good point that even if you’ve been nervous about using it for your law firm because of security concerns or people don’t trust it yet, I get the reasons why lawyers are pushing back. And I’ll gently remind them, we have a lot of episodes on why they need to maybe rethink that. But I think the shift too is like now you better believe your clients are probably using it. And so what does that mean that we need to change in our conversations? But even if we’re not using it, if our clients are, we need to be aware and we probably need to be advising them on the front end about what that implication looks like.
Cat Casey:
A hundred percent. I talk to people who are never going to be all in on AI. And you know what I tell them? Well, you need to be able to still talk about it, to explain why you’re not all in on AI. So whether you consider yourself a never or an all-in or anywhere on the spectrum, the language of lawyering now has to include the ability to translate those tech issues into legal risk and opportunity for your clients. And so that’s a big pivot. I think people have been looking at, say, the internet, or even the printing press, and how long it took to get mass adoption. Well, it took 20 years to get a hundred million users of the internet, and there were still articles coming out saying the internet is dead; I think Newsweek did it in ’94, ’95. That adoption curve’s broken.
62 days for ChatGPT to get a hundred million users. Cartman was talking about it in month three, and then my mom’s making God-awful knitting abominations, and my nephew’s using it to gamify and win Minecraft, whatever that means. And my clients are using it and my colleagues are using it. So the adoption curve, it’s really, it’s double exponential. It’s moving so much faster. So even if you’re towards the tail end of your career and you’re like, “Hey, it’s not going to trickle down to the smaller firms,” that’s what’s shifted, because it’s not trickling down, it’s trickling out across the horizontal of how we live, work, and play. And because of that, even if you’re not in an industry that’s tech, even if you’re not advising a client that you think will ever use AI, they might still have questions where you’ve got to be able to talk effectively about it.
And so whether you’re saying don’t use it or do, that need for a common language, I think is the biggest shift. The urgency is, I always knew it was urgent, but I’ve been shouting that for 20 years. I felt a little bit like instead of Chicken Little, maybe it was Kitty Little, right? But the reality has caught up with that urgency.
Stephanie Everett:
And so at a basic level, for lawyers who are listening to this and saying, “Okay, great. I believe you, Cat. Now I’ve got to figure this out,” what is it that they need to figure out? What do they need to be able to talk about?
Cat Casey:
So the way I would look at it is there’s a couple tranches to it. You need to know the key terms of art. What do they mean? So an LLM versus AI, AI versus generative AI. There’s these big buckets and there’s different risks that are posed by them like generative AI makes new stuff. Well, that’s a different risk from an AI that just finds patterns. And so you need to be able to know that difference. The other thing though is you need to know what tools can and can’t do because some of the risk we’re hearing about, hallucinations and bias and a whole host of other things in the generative AI space are because of features of generative AI. And so if you’re trying to find that one determinative answer, maybe using a gen AI tool doesn’t work. So you need to kind of know the key terms of art, what the types of tools can and can’t do.
And then frankly, hey, when do I raise my hand and ask for help? It’s like going back to law school, IRAC, right? Be able to issue spot and be able to identify what that risk or rule is. And then it doesn’t mean you have to learn to code. It doesn’t mean you have to be vibing out and creating an app. I mean, I don’t even do that, but you need to know when to say, “Hey, I need help.” Or, “Hey, my client, I know you think this is awesome and we’ll solve all your problems. There’s some risks. Let’s talk about it before you do anything.” So it’s that basic fluency. I mean, that’s why I wrote the book, which was it’s basically just a primer for the rest of us who maybe went and studied existential philosophy instead of learning to code. And I should be a barista, not an AI evangelist.
So for those of us who maybe didn’t lean into math and science, who aren’t comfortable there: maybe you decided at 12 on a swing set that if you made a mistake as a doctor, someone dies, but if you made a mistake as a lawyer, you can appeal. Maybe I’m projecting, but that was my process at 12. I pivoted away from the hard math and science to words and phrases and the power of language. And I think for a lot of lawyers it feels like, well, I missed that boat; I made that decision in high school, in middle school, whatever. The nice thing, along with the AI literacy, is that lawyers and legal people have a skill that makes you a superhero in the age of AI, because you have the power and precision of language. So if you have the right words and the right way to communicate in a natural-language way, these new tools really level up with you.
So if you can combine basic literacy with your legal skills, that syntax, that semantics, that issue parsing, you’re not just going to not die in the age of AI. You actually can leapfrog people. So it’s not just, “Hey, existential dread, must fix this.” It’s, “Hey, if I combine basic literacy with these skills I’ve honed for 20, 30 years, I might be ahead of my colleagues, ahead of peers who don’t have that language prowess.” So it’s an urgency, an opportunity, and a really unique moment when the wordsmiths might rule the world in a way we didn’t expect.
Stephanie Everett:
I like that. And I like that understanding of the tools. I feel like we’ve also been preaching that around here. And when you were talking, it occurred to me that, in a way, lawyers have been using Westlaw and Lexis online for years to return results of real cases. And with these tools, if you don’t understand what a generative AI tool is actually doing, you may feel like, “Well, it’s just like Westlaw. I ask it a question and it gives me an answer.” And I think that false premise is probably what’s getting a lot of these lawyers in trouble, because then they think, “Oh, it just gave me a case, so that must be a real case, because that’s what it does.”
Cat Casey:
I’d even push back a little on that. A lot of the people making the headlines … wait, let me step back. Yes, and, to go back to my improv days: you’re absolutely right. But also, I think a lot of the issues we’re seeing come from bad lawyering. Would you trust a first-year who gave you an awesome citation you’ve never seen in your 20 years, one that’s just so perfect? No. You would go double-check, make sure they didn’t type the Boolean search into Westlaw wrong. And so what we’re seeing, even in the big Avianca case, the first one: the guy watched a YouTube video because his kid said gen AI is cool. And then when he got caught with his pants down, instead of saying, “Oh, let me look at the citations and actually do the work of lawyering,” he asked the generative AI, “Did I get it wrong?” That was bad lawyering.
So a lot of times what you’re seeing is people not doing the basic due diligence, the ethical obligation to supervise. A little bit of it is tech competence and knowing where the failings are with tech. But the other part is: just because the robot said it doesn’t mean you don’t have to do the lawyer stuff of trust but verify, authenticate, look for issues. I think some people are getting confused, and it’s very human to get confused. We’re trained to trust tech. Google gives us an answer; Google’s right. Twenty-five years of being trained by Google. It’s a little bit different now, and you can’t give away your ethical duty of judgment. That thing you’ve been honing for 30 years is more important, not less. And for anyone afraid of job displacement, I would point that out with a big gold star: if you become AI literate and you can use these tools, you are more important to the process, not less.
If you don’t use the tools, then you might fall behind and there will be displacement. But if you want to safeguard your career, the best thing you can do is get this basic fluency and know the risks and opportunities.
Stephanie Everett:
Yeah, 100%. I agree on the bad lawyering. I always say, “I never even relied on a Westlaw headnote. I would still read the case and make sure the opinion said what it said.” So please don’t forget how to be a lawyer, a good lawyer.
Cat Casey:
Exactly, exactly. And I think that the language precision, and our brains trained to identify risk and not trust output, if we keep our lawyer brains on, put us in a better position to thrive in an age of AI, because that’s how you need to work with these tools. It’s not a “give me answer, robot overlord.” I wish. It’s, “Hey, help me think through this process. Help me find my blind spots. What haven’t I thought of?” It’s having a really good sparring partner, or a very eager-to-please, overcaffeinated junior that you’re going to have a dialogue with, a discourse, not someone that’s going to say, “Hey, here’s the answer. I don’t need your lawyer brain. Good luck.”
Stephanie Everett:
Yes, yes. And I just read this morning that the people who are power users of these tools get that. You go in and you fight with it, you leverage it, you use it. It’s an ongoing conversation, not a once-and-done. And I think the people who are starting to get that are using it much more effectively.
Cat Casey:
Well, I would say the other thing I think lawyers and legal pros struggle with is you don’t get the Esq. or the types of roles we’re in by liking failure very much. We tend to be academic people who have thrived and done well and succeeded. And who likes to do stuff that doesn’t feel like thriving and doing well and succeeding? Unfortunately, or fortunately, iteration and that back-and-forth banter, not just having the AI give you an answer, is a feature, not a bug, with these new types of tools. But for legal professionals it can feel like the AI is not working. And so you do have to recalibrate your brain on what success looks like and how you think and how you work with the tool. Or it can feel like, oh my gosh, I asked the AI to do five things.
It took me longer. It’s awful. The AI doesn’t work. You need to kind of realize it’s about getting you to think differently and about the AI training you to ask it questions in a way that get answers more quickly. It’s not a one and done. I was not great three, four years ago. I’ve gotten much better just through obsessively using it, not even on high risk stuff. I’ve got a good buddy that kept burning his brisket. So I took a picture of the brisket and asked ChatGPT, how do I quit burning the brisket? Or I’m writing a snarky email that needs to be like 17% snark, not 87.9%. So help me dial it back. There’s a lot of ways you can kind of gain that comfort level. But the first step is realizing it’s iterative. You’re not failing if it doesn’t work the first time, even if it feels different from what you’re used to.
You can’t draft the perfect Boolean search to have the AI make you a masterpiece of a brief. Don’t expect that.
Stephanie Everett:
Yeah. I think that is really great advice that can’t be overstated. I just had someone on a call last week and he’s like, “Stephanie, I asked it to write a brief and it just did a terrible job.” And I was like, “Back up, did you just say ‘write me a brief’?” Because it can’t do that. That’s not what it is. Maybe you could say, “Help me write the statement of facts,” or “Help me write this argument.” You’ve got to break it down, just like you do with staff.
Cat Casey:
“Help me think about this. What issues did I miss?” I love it for the blank page. So if I’m just starting something: all right, how would you start? What’s your thought on this? So I’m not staring for 30 minutes trying to get the ADD hamster wheels to align appropriately. Maybe that’s just me projecting again. But also: hey, what are my blind spots? How would you think about this? How can I change the tone? And what I love doing is, all right, now read this as a judge in the Southern District of New York who likes Sherlock Holmes a lot. Maybe I’m just thinking of Andy Peck, but you know what I mean. There are ways you can use it for helping you think, helping you pressure-test, as opposed to “make the thing for me,” because it’s designed off of billions, trillions of data points and aimed at hundreds of millions, if not billions, of users at this point.
So it’s going to aim for the midline. It’s not going to hyper customize for you if you don’t kind of go through that whole process. So what you’ll get is, eh, not something super useful and not better than what you would’ve made.
Stephanie Everett:
Yeah, great advice. You hinted earlier that some stuff just came out with clients using these tools and especially clients feeding lawyers advice into the tools. And I feel like this is still pretty new and a lot of people aren’t aware. So I’d love for you to talk to us a little bit about that and what we need to be understanding and thinking about differently now.
Cat Casey:
Yeah. Yeah. And forgive me, I’m sure we can add the case citation after, but basically there was a case where attorney-client privileged information, feedback a client got about a certain matter, was fed into an LLM, and it was discoverable and it vitiated privilege. Now, part of the reason had to do with it being an open model. There’s the type that’s free, which means what? You’re the product, right? It’s training on your data, which means there’s no expectation of privacy, no expectation of confidentiality. They used that, which means you might as well have just posted it on my blog: “Great attorney advice. What do you think, universe?” It’s got about that much protection. So it vitiated privilege. There are some judges who are even thinking that if you’re using a paid model, because it still potentially could be used for some level of training, it may waive confidentiality or vitiate privilege.
And this kind of goes back, I mean, this is specific to legal output, but there were similar issues with people doing code maybe two, three years ago. I think Samsung had that, where patent-protected, or trade-secret, I think it was patent-protected code was added in and they were building out more code on it. And then the next person who asked a similar question got that protected code, because it had trained the model. So that’s why literacy is important, because how your clients use this can have a material impact: open versus closed, enterprise-grade, whether it’s safe or not safe, or what information you can even put in there. And there’s a big gap. Companies aren’t training their people quickly enough, so it really falls on the lawyers to offer that advice. And I think it’ll be refined some more, but we’re going to see more and more clients who are like, “Well, I’m using this AI for everything I do.
I take a picture of my fridge and ask what I should bake with the ingredients in it, or I’m going on a road trip and I ask where the best places to stop are. Why wouldn’t I use it if I’ve got a big case?” But there was also, and I don’t remember who it was, a $250 million case where the client was given advice by their attorneys that it wasn’t going to win on the merits. They pursued ChatGPT’s advice instead, lost, and got flamed in the media. So even if you, the practitioner, big or small, doesn’t matter, aren’t using these tools, your clients have heard about it. Their kids are using it, they’re using it, it’s all over the media. If you’re not telling them, “Hey, this is a risk if you do something with what I’m telling you,” they may inadvertently expose themselves in a way they didn’t even anticipate.
And it’s very, very hard, if not impossible, to claw back, especially if they did it in the free version.
Stephanie Everett:
Yeah. I mean, it’s probably been a while, but there was a time when we would tell clients, “Hey, if I give you advice, don’t share this. You wouldn’t go out and tell your neighbor.” And maybe now we need to remember that advice and remember to educate our clients: this is what I need you to be thinking about when I give you information, because there’s a risk, to your point, if you feed it into these tools, especially the free tools.
Cat Casey:
Well, and it’s all discoverable. Before I was TechnoCat, I was eDiscoveryCat. I did that for 15 years, so most of my early career was all around data. And I’ve got much younger siblings; they’re seven, five, and 14 years younger. And I would tell them, “Don’t tweet it, type it, Slack it, post it, or Snap it unless you want mom and dad, an employer, your future wife or husband, doesn’t matter, to potentially know about it.” It’s sort of the same thing. I think we sometimes assume that because these tools feel like a private workspace, they aren’t discoverable. They are. Will everyone use this data in every case? Maybe not, but much like a Google search, it can be dispositive. And so you need to think about that from a discoverability standpoint, from a privilege standpoint, from a confidentiality standpoint. And this is all bread-and-butter stuff that lawyers know.
You only need to know that the risk could be triggered, and it builds off of your decades of experience advising on risk: “Hey, don’t do this.” The way it’s being exposed might be new, but what the exposure is and what risk it creates isn’t. You’re just adding another layer of “please don’t do this.”
Stephanie Everett:
Yeah, I think that’s great advice. And same goes with our rules of professional responsibility. Like you said, the rule of being a competent lawyer still exists. That rule didn’t change.
Cat Casey:
And supervising: just because it’s ones and zeros instead of highly caffeinated and people-pleasing doesn’t mean you don’t have to supervise it, especially with some of the agentic co-workers coming out, where these agents might be operating somewhat autonomously. You need to put yourself back in the loop. And this all feels very scary and different, and in some ways it is, because we haven’t had to think about this before, and a lot of us could avoid the tech question if we didn’t feel like talking about it. I think the shift is we need to apply our legal brains to the tech question. And that, again, goes back to why I wrote the book and why I relaunched the Masters AI: lawyers need to have this fluency. And I think for a lot of us, we didn’t put ourselves in that room. Maybe not me.
I’ve been screaming about it for 20 years, but there are only about 10,000 of us who would consider ourselves legal tech. There are 1.3 million legal humans out there. So for the other 1.29 million, I think now you’ve got to put yourself in that room and start getting that basic level of fluency. Otherwise, you’re kind of like that partner from my early career. I walked into his office, and the big CRT monitor and the CPU were bookends; he wasn’t using them. He told me confidently, “I have nothing discoverable. I don’t send email.” And as I walked out, his secretary said, “Everything he dictates to me, I send as an email. We don’t interoffice it anymore.” So even if you think you’re not doing it, you are still possibly looped in. I think it’s a similar inflection point.
And the good thing is, 20 years ago there was no e-discovery. As an industry, we pivoted to all these new data sources and learned to translate a little bit between tech and not-tech. So it’s not our first rodeo. We can do this. We’re maybe a little ahead of the curve from some other people. We just need to start doing it.
Stephanie Everett:
Yeah, I love that. And I love that you wrote a book to try to help folks figure this out and make it easy, because, I’ll be honest, our team has talked about whether we should be writing an AI book, and it just seems so darn intimidating because everything changes so quickly. So I appreciate that you did it.
Cat Casey:
It was more daunting than I thought. I write a lot, like 70 or 80 articles a year, and I figured, a book? It’ll take me four months. A year and change later… But what I tried to do, with pop culture references and in a very human, non-technical way, is explain the 70 years of AI history that got us to here, where and how you can use it, and what it correlates to, like comparing it to a baby associate trying to people-please you, plus a prompt primer and a glossary. My goal was to create a foundation, because it feels like every conference talks about AI for three minutes on every single panel. You get enough to know you should know more about it, and then it stops. And every book is either about should-we-or-shouldn’t-we, or so technical that even I, who’ve been talking about AI for 20 years, am uncomfortable.
So I kind of wanted to bridge that gap. It’s sort of an AI book for the rest of us.
Stephanie Everett:
I love it. And where can people find it if they’re interested?
Cat Casey:
On Amazon. Just type in AI and legal tech and Cat Casey. I’m sure we can add a link in the comments. And I travel around and speak at corporations. I’m talking with Dolby later this week to a hundred-plus of their legal ops people. So I’ll go and talk with people about it, too. I don’t want legal to go it alone. It’s a scary transition, especially if you built your career on words and phrases, not numbers and statistics. So if I can help, I want to. That’s sort of my why.
Stephanie Everett:
I love it. Well, we’ll make sure to put a link to the book in the comments and the show notes. And Cat, thank you so much for being with us today and making tech sound fun, because I love that you said, “Let’s do pop culture references and make it easy and approachable.”
Cat Casey:
Well, we’re all in this together. It doesn’t have to be like a root canal.
Stephanie Everett:
Nice.
The Lawyerist Podcast is a weekly show about lawyering and law practice hosted by Stephanie Everett and Zack Glaser.