Merisa K. Bowers, Esq. is a lawyer, public servant, and advocate for ethical, people-centered leadership. As Loss...
Zack Glaser is the Lawyerist Legal Tech Advisor. He’s an attorney, technologist, and blogger.
As a Lab Coach, Chad guides law firm owners in transforming their practices into thriving businesses, enabling...
| Published: | October 9, 2025 |
| Podcast: | Lawyerist Podcast |
| Category: | Legal Technology , News & Current Events , Practice Management , Solo & Small Practices |
In episode 582 of Lawyerist Podcast, Zack Glaser talks with Merisa Bowers, Loss Prevention and Outreach Counsel at the Ohio Bar Liability Insurance Company, about how artificial intelligence is reshaping lawyers’ ethical duties.
Merisa explains how deepfakes and realistic scams are creating new challenges for diligence and verification, why unregulated chatbots can accidentally create attorney-client relationships, and what disclosures lawyers should make when using AI tools. She also shares practical steps to maintain confidentiality, protect client data, and apply long-standing ethics rules to fast-changing technologies.
Links from the episode:
ABA Formal Opinion 512 – Generative AI
ABA Formal Opinion 510 – Prospective Clients & Rule 1.18
Have thoughts about today’s episode? Join the conversation on LinkedIn, Facebook, Instagram, and X!
If today’s podcast resonates with you and you haven’t read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you.
Access more resources from Lawyerist at lawyerist.com.
Chapters / Timestamps:
0:00 – ClioCon
4:45 – Meet Merisa Bowers
6:50 – Tech Shifts & New Ethics Risks
9:10 – Deepfakes & Diligence
13:40 – AI Scams & Fake Clients
18:30 – Chatbots Creating Clients
23:40 – Ethical Chatbot Models
26:45 – Should Lawyers Disclose AI?
29:40 – Don’t Let AI Think for You
34:20 – Protecting Client Data
36:10 – Staying Ethical with AI
37:40 – Wrap-Up & Final Thoughts
Special thanks to our sponsor Lawyerist.
Chad Fox:
Hi, I’m Chad.
Zack Glaser:
And I’m Zack. And this is episode 582 of the Lawyerist Podcast, part of the Legal Talk Network. Today I talk with Merisa Bowers about artificial intelligence, your professional responsibility, and chatbots on your website. So Chad, coming up in about a week, we’ve got one of the bigger conferences that we go to, one that I’m usually very excited about. Again, it’s ClioCon, and it’s going to be in Boston. Have you been before?
Chad Fox:
I’ve never been to ClioCon or Boston.
Zack Glaser:
Or Boston. Oh man, this is my second time in Boston and it’s going to be the second time for a conference, so I don’t know that I’m going to be able to say that I’ve been to Boston even now. Yeah,
Chad Fox:
Well, I still won’t be able to say I’ve been to Boston or ClioCon, unfortunately.
Zack Glaser:
Yeah,
Chad Fox:
So I will not be joining you, which makes me sad.
Zack Glaser:
Well, yeah, that makes me sad too. I would like to see you there, but you’re around at a lot of different places; you get pulled to a ton of different places. But alright, Chad, just for you, I’m doing a morning show each morning so you can feel like you’re there. I’m doing this for you, so you can have the pleasure of watching me every morning talk to some of the people from Clio and talk about what’s going on at ClioCon, so you can feel like you’re there.
Chad Fox:
Yeah, that’s great. Live streaming is awesome. It’s unpredictable and uncertain, but you get a lot of really good raw moments in live streaming.
Zack Glaser:
And quite honestly, Chad, it scares the shit out of me. I’ve done live webinars and streamed webinars, and I can say this to you: the people out there in podcast land don’t necessarily know just yet, but you do live streaming for some DJing that you do. So you’re well versed in the uncontrollable nature of it. And so I’m kind of just baring how nervous I am about that to you.
Chad Fox:
The interesting thing about nerves is that as soon as you get to the thing, they go away. It’s all the buildup leading up to the thing that makes us nervous, but as soon as we get to the thing, it almost immediately leaves us.
Zack Glaser:
Hopefully. And you don’t want to be without nerves. If you don’t have nerves going into something, it’s one of two things. Either it’s not worth a shit, it’s not worth doing, you’re bored of it, and it doesn’t matter. Or two, you’re kind of a psychopath, you have no emotion, you’ve figured out a way to get the emotion out. When you go into trial or depositions or things like that, you should have nerves. I don’t know who says it, but I use it with my cross country team all the time: do it scared, do it anyway. But yeah, I imagine every time you do your DJing sets, which you do live and streaming as well. I mean, I guess live is just as nerve-wracking, right?
Chad Fox:
Live is probably a little more nerve-wracking because you’re in front of people, you’re in front of humans. Humans are right in front of you, you’re up on a stage, and everybody’s looking at you. Whereas with live streaming online, you’re basically playing to a monitor.
Zack Glaser:
So I’ve done live CLEs where I’m in front of people, and then I’ve done CLEs that are completely virtual, where I can’t see anybody. And there is something a little more difficult sometimes about not being able to see, not being able to have an interaction, not being able to see whether people are engaging.
Chad Fox:
It’s the energy. The energy is different. That was a transition, and I know we’re probably going to talk about this more on a future intro, but that was a transition I had to make going from DJing in front of live crowds to everything in 2020 that pushed everything to live streaming. I always used to feed off of the energy of the crowd, and now the energy becomes the chat. So if the chat has energy, I’ve learned to feed off of that energy. But if it’s lacking, then it’s kind of like being in front of a live crowd that’s not bringing that energy. It’s just
Zack Glaser:
Crickets. Yeah. Well, that’s a live CLE crowd anyway. You see DJ sets in movies and television shows; you don’t see CLEs in movies and television shows, because they’re not entertaining. But you see people bouncing up and down and really getting into the music. Surprisingly enough, that has never happened at a CLE that I was doing.
Chad Fox:
Got to bring a DJ.
Zack Glaser:
Alright, Chad, a CLE with a DJ. We’re doing it. We’re doing a DJ’d CLE. You’ll bring down your DJ set. No way, buddy. Alright, on that note, let’s get into my conversation here with Merisa.
Merisa Bowers:
Hi, I’m Merisa Bowers, and I’m Loss Prevention and Outreach Counsel here at the Ohio Bar Liability Insurance Company. I’ve been in this role for about three years now. Prior to that I was in private practice, mostly as a solo and small firm practitioner, like some of your prior guests. I’m also an elder millennial and graduated in the recession, so I had to really figure some things out on my own. I started in practice in early 2010, after passing the bar in 2009 at the height of the recession, and, as your prior guest said, that was so much fun. There was new technology emerging at that time. We had the emergence of cloud-based case management systems like Clio and Smokeball, and that, I think, allowed attorneys to scale their practices, to be more flexible, and to be agile in a pretty difficult economy.
Zack Glaser:
Yeah. Well, Merisa, thank you for being with us. You have a lot of experience staying on the forefront of technology. And to add to your introduction there, what all of this technology has done is, yes, it’s allowed us to operate on more of a shoestring, it’s allowed us to operate more broadly, it’s allowed us to operate remotely. But at the same time, it has fundamentally changed some of the basics of how law firms have been operating for a long, long time. It’s adjusted some of the safety protocols that we’ve had in place, and we have to adjust our brains a little bit. My father-in-law’s practice had physical files for years and years and years, and the questions were basically: are the files locked? Did you lock your building? We had kind of one vector from which we could have an issue from a professional responsibility standpoint, and it had been like that for a long time. But things are changing tremendously. What I want to talk about, though: you went into cloud-based technology, which had, and has, its own kind of professional responsibility issues, and I think people are still struggling with that. But now we’re putting another layer on it with artificial intelligence.
Merisa Bowers:
Right, and that’s a great point. The issue with cloud-based technology is that we like to think sometimes that we’re in control of that data, but there’s a trust agreement that has to happen. And you’re right, I think we’re still struggling with that and still recognizing some of the steps we need to take to ensure that that technology is securing our files safely. Some of the things still recommended by NIST, or by cyber insurance carriers, are to turn on multifactor authentication, to create stronger passwords, and to create a function where you have to tap into another device in order to unlock that access. Little things like that. But you’re right, we’re just going to dial this up with AI.
Zack Glaser:
And I think we’re just getting into a place where people are saying, okay, let’s get our multifactor authentication; that’s going to help us be safe. With cloud-based technology, one of our big questions is just security: can somebody access this? Is there a third party connecting to it? And there is an element of that in our move to artificial intelligence as well. But as we rethink how we’re practicing law with AI, and how people are living, frankly, we’re actually getting into even different sorts of professional responsibility realms. One of the things that you and I were talking about prior to recording is deepfakes. That doesn’t have anything to do with our security, but it does have a lot to do with our professional responsibility to the court, to our client, and so on. Talk to me a little bit about what the rise of deepfakes is bringing to the forefront.
Merisa Bowers:
So it is diligence and competence that this brings to the forefront. One of the things that I believe is that the structure of rules we have in place, and Ohio largely follows the ABA’s Model Rules, is a great system that can apply to these new technologies, and as long as we start to view our rules through these lenses, that’s going to give us good direction. With deepfake technology, what we’re seeing is how easy it is to access. It doesn’t require some type of fancy editing studio. My dad was in audiovisual technology through the eighties and nineties, and I just remember the huge array of devices and panels and dials and sliders that he used to have in the studio. That’s not what is necessary anymore. It’s a couple of clicks on your phone or your mobile device, and you’ve created your own recording of anybody who’s ever had any kind of voice sampling done, ever. And video technology is soon going to catch up to the place where it’s going to be extraordinarily hard to distinguish between what is AI and what is genuine video. So it puts a big onus on attorneys to verify this information. I heard recently about an audio recording, purportedly of a phone call, and the attorney needed to verify whether the records even supported that a call was made between these two parties at a specific time. Thinking about ways that we can verify this information, instead of just taking it at face value, is going to be a critical component of diligence going forward.
Zack Glaser:
I mean, that’s bizarre for us as attorneys, because video evidence, if it doesn’t look blatantly tampered with, has always been treated as pretty decent. Obviously we have to have somebody who says, yes, this is what happened here. But in my practice I was looking for whether something looked photoshopped, or like it had been run through the copy machine, or whether there was white-out on it. This is something beyond that. I mean, it’s going to pass the sniff test. Looking at some of these deepfake videos that are out there, they’re
Merisa Bowers:
Convincing. We’d better get those canine noses out. It’s not a standard sniff test anymore; it’s really at a different level.
Zack Glaser:
Okay. So that’s kind of our professional responsibility as it relates to the court, and diligence with the court. But these deepfakes are also coming at our offices. They’re scamming us, and they’re getting better at it. Talk to me a little bit about how deepfakes are being used to trick lawyers in their offices.
Merisa Bowers:
So one of the most common scams on attorneys is what I call the “deal is already done” scam, where a fake client purports to already have a deal done, but they need you to be the middleman in finishing the transaction. You’ve probably heard about these; they’ve mostly been over email for the last several years, but they’ve gotten more sophisticated. The language is harder to discern because it’s smoother and more conversational, in large part due to LLM technology. Going forward, I think even phone call verifications are going to be problematic. So we need to be incredibly aware of loss prevention tactics. Number one: if you get a six-figure check that is supposed to go into your trust account, wait until it fully clears both institutions, not just until it posts. Posting doesn’t mean anything. That’s one risk management technique attorneys can use in the interim if they’re concerned or have any hesitation about a transaction.
Zack Glaser:
When I was practicing, I got these emails as well. They were like, hey, I just need somebody to finalize this thing, I need somebody in this area. And you could tell, for the most part; it was weird language structure. But what you’re saying is that these are not only getting better in the language structure, because the emails are getting better, but you can couple that with deepfake technology that could be audio or video, interactive deepfakes. That’s what scares the heck out of me. I think I’ve said this on the podcast before: we’ve had multiple bots actually apply for jobs at our company, and they did a pretty good job of seeming like a human. So if I’m calling a number and verifying that something is right, that’s problematic too, right?
Merisa Bowers:
Yeah. What I recently learned is that that scam in particular, where there’s an applicant for a job, is perpetrated by an organized cybercrime group. It can gain access to systems too, that’s one of the goals, but a bigger part of the goal is siphoning off money from payroll. So I think that’s something to be very aware of and astute about as attorneys think about offshore hires and non-local hires.
Zack Glaser:
I love that. Thank you. That is really amazing information, because Stephanie and I were sitting there talking about the applicants and thinking, what’s the end game? Is it for somebody to get multiple jobs? But if this bot essentially gets access to your system, well, that’s a fascinating way to hack in. So that’s an even bigger, or a new, attack vector coming at our offices.
Merisa Bowers:
And what happens a lot is they want access to your emails so that they can send scam emails to other people from a real domain name. That’s where they would send a “you owe us money” email, with a pay-this-invoice link to their own portal, but it’s coming from a lawyerist.com email address.
Zack Glaser:
Right? If it’s coming from, say, zackglaserlegalfirm.com. And I practiced debt collection when I was practicing, so that would be a fantastic place to have an invoice or a demand letter come from. So that’s deepfakes coming from that direction. And we could talk a lot more about this; there’s a lot more depth to it.
Merisa Bowers:
That’s right.
Zack Glaser:
But I think those are some really great introductory, well, not even introductory, those are some really great concepts for the deepfakes coming at us. But let’s talk about our use as attorneys of that technology: client-facing or public-facing bots, like chatbots on our websites. What are some of the issues we need to be thinking about with chatbots on our websites specifically?
Merisa Bowers:
So I think what we’re seeing is that a lot of consumer-facing firms, especially in competitive practice areas, feel a drive to be the first to respond to an inquiry, to humanize the interaction, and to bring that prospective client in as quickly as possible. Like I said, I think this is driven in part by competition in certain practice areas. So it’s easy to see why these bots are so popular: they give you a 24/7 virtual intake team that can be responsive. But one of the concerns that I’ve raised with a number of our policyholders is the potential risk that comes with this technology. In my view, there’s a problem with potentially inadvertently creating an attorney-client relationship. In what I do with loss prevention and risk management for lawyers, the first element of a legal malpractice claim is whether a duty existed, and duty, of course, is created by the presence of an attorney-client relationship. So if a chatbot is not programmed to very restrictive parameters and limited in what it is able to say and how it engages, I do think there is a potential risk of these tools making a prospective client think that a lawyer is going to show up at a hearing, or that they’re represented. There’s also the risk of soliciting “significantly harmful information,” as defined in Rule 1.18, which could easily come through a prospective client interacting with a chatbot.
Zack Glaser:
Easily. And I can envision a scenario pretty quickly where a potential client interacting with the chatbot thinks they’ve created an attorney-client relationship, and they’ve got a statute that’s about to run. That’s exactly right: a statute of limitations that’s about to run. And now they’ve sued you for not filing in time, because they thought they had taken care of themselves. Especially when it comes to personal injury; in Tennessee, that was a one-year statute of limitations. I could absolutely see that if you don’t put the proper parameters in place. Is there any quick advice on potential parameters that we would want to put on these? I mean, a lot of times the advice is, you’re an attorney, figure out what to do, but do you have any thoughts on that?
Merisa Bowers:
Yeah. A lot of times what I see is that there are a couple of attorneys who are tech-forward who lead these initiatives in a firm. That’s great and that’s important, but I think we need to make sure there’s a multidisciplinary approach to integrating this technology. So general counsel, with their ethics hat on, should also be reviewing the technology before it’s deployed. That’s one piece that could be really critical. If you are a solo shop, and I get it, I was that for many years, take a look at Rentervention. Have you heard of this, Zack?
Zack Glaser:
No.
Merisa Bowers:
So Rentervention is a tool that was developed by a nonprofit legal services organization in Cook County, Illinois. They have developed what I think is a model chatbot to support low-income tenants who are having landlord-tenant or real estate issues. It is very tightly controlled; there are very tight parameters on how the chatbot interacts. But the benefit is that it’s able to provide information, almost like a law librarian, with natural language prompting. So it allows someone who’s not trained in legal search terms or statutory language to express their concern, and then the chatbot can help identify information that would be useful and potentially direct that person to consult with the attorneys at that organization.
Zack Glaser:
And in that organization, there would definitely be a strong desire not to create that attorney-client relationship right at the beginning. So you’re saying that one does a pretty good job of that. The thing is, we don’t want to have a chilling effect and not be able to provide help to our potential clients, or not be able to show potential clients that we can help them. But we do want to walk that line of maintaining them as potential clients, as opposed to clients, right there.
Merisa Bowers:
And the key to this is what underlies everything we do, which is communication: making sure that we’re setting up the tools to disclose that they are AI technology, that they cannot provide legal advice, and that they are not creating an attorney-client relationship. I think that with proper disclaimers, just like we were told, I mean, you remember when we were building out our websites for the first time, if you had a contact-me form, you’d better have the disclosure: contacting the firm does not create an attorney-client relationship with the firm.
Zack Glaser:
Right? Yeah.
Merisa Bowers:
We can apply a lot of the same technology principles that we’ve developed over the last 20 years to the moment that we’re in right now.
Zack Glaser:
But it’s so damn exciting to think that I can have a bot intaking my potential clients all day, all night, all the time.
Merisa Bowers:
If it’s collecting your basic conflict information. I encourage people to take a look at ABA advisory opinions 510 and 492. They’re kind of partner advisory opinions that really break down Rule 1.18 on prospective clients. I think that would be a great place for attorneys to start if they’re thinking about implementing this chatbot technology to onboard or intake prospective clients.
Zack Glaser:
I like that. We’ll definitely link to those in the show notes as well. This makes me think: I was talking to one of our Labsters a couple of days ago who was thinking about implementing, it’s not even a chatbot, but an AI-driven answering service. Again, we’re talking about artificial intelligence that is client-facing, consumer-facing. And they were asking, do I have to disclose that this is artificial intelligence? You can correct me, but I don’t know that you have to. But I like the idea of it, in the sense of, well, A, why not? And B, if I disclose that, it would potentially help me in saying, you’re not even talking to a human, of course you haven’t created an attorney-client relationship, but you are getting more information.
Merisa Bowers:
Zack, I think I would take a different position, but my role is also in best practices. I do think it is important to disclose our use of technology, from a couple of perspectives. One, I think our rules indicate that the more transparency and communication we have with clients about the methods we’re using to achieve their interests, the better relationship we’re going to have with them and the more we’re going to adhere to our rules. The other side of this is that pretty much all the states that have released advisory opinions at least encourage disclosure, if not obtaining consent, consent to the use of the tools. I also really liked, though I think it might have been backpedaled a little bit, some of the rules and guidance promulgated by the European Union around adoption of AI technology, which really pushed on transparency and agency: allowing people to have agency in whether they’re opting into using AI technology. At some point it will be here and we will all be using it, but right now we need to make sure that people have opportunities to ask questions and engage with us about how their data is going to be used.
Zack Glaser:
Yeah, I mean, I think definitely from a consumer-facing perspective, you should.
I think you’re absolutely right on that. What I like about your earlier suggestion, though, is that sometimes I think about these from a cynical standpoint: what’s in it for the attorney? And what’s in it for the attorney is that you cover your ass a little better if somebody knows they’re talking with a chatbot, if they know they’re literally speaking on the phone with artificial intelligence, a bot of some sort. Not only is there a better argument that they haven’t created an attorney-client relationship, but if you’ve taken the moment to say, this is an AI chatbot, you also have the moment to say, this does not create an attorney-client relationship.
Merisa Bowers:
Yeah. Yeah, I think that’s exactly right.
Zack Glaser:
So the last place I want to go with this, and again, we’re not getting into the confidentiality issues, the typical AI and technology questions, is using artificial intelligence to help us with our judgment and strategy. What should we think about when we’re bringing artificial intelligence, these LLMs, into our own practice and thought process?
Merisa Bowers:
Oh my gosh. Did you see the recent South Park episode where they sent Towelie to D.C.? I did not. I swear it’s been probably 10 years since I watched a South Park episode, and I happened to put it on the other night. The joke was that Randy Marsh was using ChatGPT for everything, and every response back from ChatGPT was affirming and telling him what a great idea he had.
Zack Glaser:
Oh man. Yeah.
Merisa Bowers:
Yeah.
Zack Glaser:
And for people who don’t know, Randy Marsh is the father of one of the kids in South Park, and he’s not particularly bright. So this is very much down his alley.
Merisa Bowers:
Yeah, maybe well-intentioned I think.
Zack Glaser:
Well, both of the
Merisa Bowers:
Time, right? But what I have seen in my own use, and I love ChatGPT, I think it is a phenomenal tool, it helps polish my written content for me, is that it sometimes overstates the quality of my work unless I tell it not to. So I loved one of your prior guests who talked about using the CRIT model. I think that’s a great way to critique ourselves, check ourselves, and make sure that we’re not just getting what we want to be told out of these tools.
Zack Glaser:
That happens to me all the time. “Zack, that’s a great idea.” And I’m like, ooh, let’s keep going down this path, when it’s a dumb path and it is not going to work. And if you were to ask the bot, is this going to work? It’d be like, oh no, this is not going to work. No, absolutely not. But great question, you are so smart for asking that. So I think you’re right about being thoughtful with it. But there’s an extra layer to that, being an attorney, right?
Merisa Bowers:
Absolutely. And thanks for bringing it back to what we’re doing here. We cannot delegate strategy decisions or legal advice without our own critical analysis. We have years of training, years of experience, and access to legitimate case law databases and other legal research tools, so we need to make sure we’re applying those, in an analog fashion as much as possible, right now. It’s not a bad place to start. I’m sure you’ve had all kinds of folks on to talk about how to use the tools and what to do with them, but we’re not at a place where we should be forgoing our own judgment. That would be incredibly contrary to our rules of professional conduct.
Zack Glaser:
And I was having a conversation with somebody who’s not an attorney the other day, and they were asking about the hallucinations and citing cases that aren’t really there. I think this brings it back to what you were saying at the beginning of this program: we already have the rules in place to deal with these things. Somebody who puts a hallucinated citation into a brief to a court has delegated their strategy; they didn’t actually put their own critical thinking into it. That’s a basic problem for professional responsibility, and I think that’s an important point: that’s when people get in trouble, when they’re delegating. But, taking us back into this episode, that can sometimes be done unintentionally inside of a chatbot.
Merisa Bowers:
Sure.
Zack Glaser:
We can be accidentally giving license for the bot to tell somebody more information than we want it to. So I think it’s, yeah.
Merisa Bowers:
So, bringing it back to that confidentiality discussion too: names are not the only thing that betrays confidential information, right?
I’ve heard attorneys say too often, well, you can use ChatGPT if you’re redacting the names. That’s not what our guidance tells us. Our guidance tells us that if the fact pattern is identifiable, if someone else who encountered this information could link it to a specific matter, that is a breach of confidentiality. So attorneys have to be cautious with the tools. And this comes back to that cloud conversation we were having earlier about data storage and data use: we need to be aware of, and intentional about, the level of service or subscription we’re obtaining from these providers, so that we can monitor how our inputs and our prompts are going to be used to train the tools, or how that information is otherwise stored.
Zack Glaser:
That kind of goes back to the idea of if you’re not paying for the product, you likely are the product.
Merisa Bowers:
Yes.
Zack Glaser:
And in our case, our client information is likely the product, and that’s really problematic.
Merisa Bowers:
Correct.
Zack Glaser:
Correct. Well, Merisa, thank you for all of this information. But before we go, what would be a first step, or a basic step, that you would suggest for attorneys who are kind of at sea right now? Where would you suggest they start in making sure that they’re on the right side of professional responsibility here?
Merisa Bowers:
So first and foremost, take advantage of the CLEs offered by your local bar or your state bar. Across the country, attorneys are leaning into this and making sure they understand how this technology is being used in practice, but also how to use it responsibly and within our codes of professional conduct. The other piece of advice, before you even get there: I would recommend that attorneys take a look at ABA Formal Opinion 512, which was published last July, and maybe we can also put that in the show notes. The ABA formal opinion was one of the later entries in a long series of advisory opinions that came out around AI, so it summarizes a lot of what we were seeing from the other states and jurisdictions, like Florida’s advisory opinion and D.C.’s, et cetera. That’s a great place to start.
Zack Glaser:
Okay, great. Well, we will definitely link that in the show notes because I think that sounds like a good hub for figuring out what your state might specifically say. But obviously again, we’re lawyers, you should be able to go look into these things.
Merisa Bowers:
Yeah. Yeah.
Zack Glaser:
Well, Merisa, I really appreciate you being here and talking to us about some of the issues that have come up with artificial intelligence and our professional responsibilities. Thank you for your vast knowledge in this.
Merisa Bowers:
Oh gosh. Well, we’re all trying to keep up with a fast moving topic, and so I’m just happy to be part of the discussion. So thanks for having me.
Zack Glaser:
Absolutely. I appreciate it.
The Lawyerist Podcast is a weekly show about lawyering and law practice hosted by Stephanie Everett and Zack Glaser.